Re: [gentoo-user] Getting maximum space out of a hard drive
Sorry for the duplicate post. I had an email client error that accidentally caused me to hit send on the window I was composing in.

On 8/20/22 1:15 PM, Dale wrote:
> Howdy,

Hi,

> Related question. Does encryption slow the read/write speeds of a drive down a fair amount?

My experience has been the opposite. I know that it's unintuitive that encryption would make things faster. But my understanding is that it alters how data is read from / written to the disk such that it's done in more optimized batches and / or with more optimized caching. This was so surprising that I decrypted / re-encrypted a drive multiple times to compare things before coming to the conclusion that encryption was noticeably better.

Plus, encryption has the advantage that destroying the key renders the drive safe to re-use independent of the data that was on it. N.B. the actual encryption key is encrypted with the passphrase. The passphrase isn't the encryption key itself.

> This new 10TB drive is maxing out at about 49.51MB/s or so.

I wonder if you are possibly running into performance issues related to shingled drives. Their raw capacity comes at a performance penalty.

> I actually copied that from the progress of rsync and a nice sized file. It's been running over 24 hours now so I'd think buffer and cache would be well done with. LOL

Ya, you have /probably/ exceeded the write-back cache in the system's memory.

> It did pass both a short and long self test. I used cryptsetup -s 512 to encrypt with, nice password too. My rig has a FX-8350 8 core running at 4GHz CPU and 32GBs of memory. The CPU is fairly busy. A little more than normal anyway. Keep in mind, I have two encrypted drives connected right now.

The last time I looked at cryptsetup / LUKS, I found that there was a [kernel] process per encrypted block device. A hack that I did while testing things was to slice up a drive into multiple partitions, encrypt each one, and then re-aggregate the LUKS devices as PVs in LVM. This, surprisingly, was a worthwhile performance boost.

> Just curious if that speed is normal or not.

I suspect that your drive is FAR more the bottleneck than the encryption itself is. There is a chance that the encryption's access pattern is exacerbating a drive performance issue.

> Thoughts?

Conceptually: working in 512 B blocks on a drive that natively uses 4 kB sectors causes the drive to do lots of extra work to account for the other seven 512 B blocks in each 4 kB sector.

> P. S. The pulled drive I bought had like 60 hours on it. Dang near new.

:-)

-- Grant. . . . unix || die
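The sector-size mismatch described in the message above can be checked, and with LUKS2 avoided, from the shell. A dry-run sketch (`RUN=echo` prints the commands instead of executing them; /dev/sdX is a placeholder, and luksFormat destroys existing data):

```shell
# Dry-run sketch: inspect the drive's sector sizes, then re-create the
# LUKS container with 4 KiB encryption sectors. Set RUN="" to really
# execute (WARNING: luksFormat destroys the data on the device).
RUN="echo"

# Logical vs. physical sector size; a 512e drive reports 512 and 4096.
out_probe=$($RUN blockdev --getss --getpbsz /dev/sdX)

# LUKS2 supports --sector-size, so encrypted I/O can match the physical
# 4 KiB sectors and avoid read-modify-write cycles on the drive.
out_format=$($RUN cryptsetup luksFormat --type luks2 --sector-size 4096 \
    -s 512 /dev/sdX)

echo "$out_probe"
echo "$out_format"
```

Whether this helps depends on the drive actually being 512e/4Kn; a drive with native 512 B sectors gains nothing from it.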
Re: [gentoo-user] VirtualBox question on Thinkpad laptop
On 8/20/22 12:30 AM, Walter Dnes wrote:
> Long-story-short; I run ArcaOS (backwards compatible OS/2 successor) as a guest on QEMU on my desktop.

Aside: Is ArcaOS really a different version of OS/2? Or is it still 4.x with patches and updated drivers? I saw extremely little difference, other than eye candy / included open source packages, between IBM OS/2 Warp 4.5x, eComm Server, and ArcaOS.

Further aside: I run anything in the above to be able to drive my P/390-E PCI card.

> The Lenovo Thinkpad has the "vmx" cpu flag, so QEMU is theoretically doable. But the mouse is extremely flakey, to the point of unusability, under QEMU on the Thinkpad. I've tried various tweaks, but no luck. I "asked Mr. Google", but only found other people with the same problem... and no solution.

This sounds extremely reminiscent of guest OS driver / utility integration, or rather the lack thereof, when running OS/2 et al. in a VM.

> Are there any booby-traps to watch out for? What I'm most concerned about is the default "qt5" USE flag. Is VirtualBox usable without the qt5 GUI?

I've not found much effective difference between the various hypervisors, save for driver / guest OS additions / integration maturity level. Sure, different hypervisors have varying maturity levels in their management utilities. But I've gotten all of them to do what I want. I prefer VirtualBox on a stand-alone workstation for lab / play things, and VMware's (free) ESXi on my server for things I want running months at a time (read: to continue running when I reboot my workstation to change kernels).

I assume that since you're running ArcaOS, you have support from Arca Noae. As such, I'd open a support ticket with them and ask about guest add-ons for various hypervisors. I don't know the current state of 3rd party guest add-ons for OS/2 / eCS / ArcaOS under VirtualBox. Hopefully they've improved since the last time I looked.

Surprisingly enough, I think the best integration that I ever saw was under an *OLD* version of Microsoft's Virtual PC / Virtual Server / Hyper-V, back when they still supported OS/2 as a guest OS in an official capacity. Perhaps you can run an old version thereof, or extract the guest add-ons therefrom and use them elsewhere.

-- Grant. . . . unix || die
[gentoo-user] Re: chrome vs. wayland weirdness
On 2022-08-04, Neil Bothwick wrote:
> On Thu, 4 Aug 2022 21:49:59 - (UTC), Grant Edwards wrote:
>
>> emerge --depclean --ask
>>
>> That removed a couple wayland packages (yay! I didn't really want
>> wayland). Then it warned me
>>
>> !!! existing preserved libs:
>> >>> package: dev-libs/wayland-1.21.0
>>  * - /usr/lib64/libwayland-client.so.0
>>  * - /usr/lib64/libwayland-client.so.0.21.0
>>  * used by /opt/google/chrome/libGLESv2.so (www-client/google-chrome-104.0.5112.79)
>> Use emerge @preserved-rebuild to rebuild packages using these libraries
>>
>> I do as instructed and run 'emerge @preserved-rebuild' and it
>> reinstalls chrome.
>>
>> But a subsequent emerge --depclean --ask again produces the same
>> warnings about wayland libraries that have been preserved.
>>
>> Are the dependencies for chrome broken?
>
> chrome is a binary package, unlike chromium, so rebuilding will not change
> the libraries it depends on.

Right. I didn't expect that it would.

> It sounds like those wayland packages should not have been
> depcleaned and are a requirement for chrome.

I'm pretty sure that wayland was originally installed a few weeks ago to satisfy a dependency of chrome that had been newly added (and now apparently removed).

This bug might be related: https://bugs.gentoo.org/858191

It seems to mostly be a debate over whether wayland is really required for the chrome binary package. Removing wayland reportedly breaks WebGL, sometimes, for some people, depending on how chrome is invoked. If I read that issue's history correctly...
[gentoo-user] chrome vs. wayland weirdness
I just did my usual

  emerge --sync
  emerge -auvND world

Everything seemed to be fine (IIRC, it upgraded chrome).

  emerge --depclean --ask

That removed a couple wayland packages (yay! I didn't really want wayland). Then it warned me

!!! existing preserved libs:
>>> package: dev-libs/wayland-1.21.0
 * - /usr/lib64/libwayland-client.so.0
 * - /usr/lib64/libwayland-client.so.0.21.0
 * used by /opt/google/chrome/libGLESv2.so (www-client/google-chrome-104.0.5112.79)
Use emerge @preserved-rebuild to rebuild packages using these libraries

I do as instructed and run 'emerge @preserved-rebuild' and it reinstalls chrome. But a subsequent emerge --depclean --ask again produces the same warnings about wayland libraries that have been preserved.

Are the dependencies for chrome broken?
[gentoo-user] Re: --sync
On 2022-07-31, n952162 wrote:
> I've been running gentoo for years now, and every time I go to --sync,
> it's really a painful process.
>
> The process can take *very* long before you find out if it succeeded or not.

In my experience, long --sync times have always been due to a slow rsync server. Switching to a different server fixed it for me (though sometimes it took a few tries to find a server that responded in a timely manner). That said, git is an order of magnitude faster than even the fastest rsync server.

> It can take several hours before it finally works

That seems like something other than just a slow server. When dealing with slow servers, it never took more than 15-30 minutes.

-- Grant
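Switching Portage's sync method from rsync to git, as suggested above, is a small configuration change. A sketch of /etc/portage/repos.conf/gentoo.conf, assuming the official sync-friendly git mirror (empty the location directory before the first git sync):

```
[DEFAULT]
main-repo = gentoo

[gentoo]
location = /var/db/repos/gentoo
sync-type = git
sync-uri = https://anongit.gentoo.org/git/repo/sync/gentoo.git
auto-sync = yes
```

After the initial clone, each `emerge --sync` is an incremental `git pull`, which is why it beats even a fast rsync server.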
Re: [gentoo-user] Any way to automate login to host and su to root?
On 7/18/22 3:28 AM, J. Roeleveld wrote:
> Either on the client where the agent is running, but also on the system I connected to.

I have always considered that there is enough sensitive data on the client, and that there are already enough things running there, that I end up considering the client a sensitive / secure system as a unit. This seems to be especially true with servers hosting automation. But to each their own.

As for the security of the forwarded agent, I've generally been okay with root on the target system having access to the agent. Especially when I have used different key pairs for different destination hosts and / or specified the from stanza in the authorized_keys file.

If you want to, you can specify how long, in seconds, a key can be used in an agent. So if you have a running agent, you can load a key and specify that it can be used for up to two seconds. So even if someone does compromise the target host and does talk to the agent, the agent won't allow the key to be used and will behave as if the key wasn't loaded.

You can also lock / unlock the agent on the source side as you see fit. Unlock it for authentication, and then immediately re-lock it after authenticating. Local commands and / or a local process using ssh remote commands make this more reasonable.

Aside: Backgrounded / multiplexed connections make running multiple remote commands on a host a lot more expedient.

1) Log in to the remote host with a background connection.
2) Run multiple remote commands via "ssh user@host command".
3) Log out of the remote host, closing the background connection.

The business logic of the script lives on the client, and all the intermediate commands (#2) avoid the overhead of establishing a connection and authenticating again.

> But, I just noticed the following, which is hopeful, but need to read up on this: https://www.openssh.com/agent-restrict.html

Interesting. More reading.

Agreed, which is why I always stop and think when I see that. ;-)

> Usually the answer is: "Oh, yes, I didn't access this host from my laptop yet". But that is usually after the 2nd or 3rd connection attempt with retyping the hostname and verifying the IP-address that is resolved for it first.

I think I mistook a previous statement to mean that you did something to distribute the contents of the known_hosts file so that re-loads would already be known. I guess I misunderstood.

-- Grant. . . . unix || die
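The backgrounded / multiplexed connection pattern described in the message above is driven by three ssh_config keywords. A sketch, with "server1" as a placeholder host:

```
# ~/.ssh/config sketch: multiplex all connections to this host through
# one background master connection.
Host server1
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m
```

The first `ssh server1` starts the background master; each subsequent `ssh server1 command` reuses it without re-authenticating, and `ssh -O exit server1` closes it. The two-second agent key lifetime mentioned above is `ssh-add -t 2 keyfile`.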
Re: [gentoo-user] Any way to automate login to host and su to root?
On 7/18/22 12:23 AM, J. Roeleveld wrote:
> I've been using ansible for some of my automation scripts and am happy with the way that works. The existing implementations for "adding users" and such are tested plenty by others and do actually check if the user exists before trying to add one.

ACK

> I only use expect to automate the login-process as mentioned in the original email.

I've been a fan of the sshpass command explicitly for sshing into systems. Though I've gotten it to work for a few other very similar things.

> The line it's expecting is more than just "*?assword" like in all the examples. Currently, SSH puts the password-prompt as:
>
> (user@host) Password:
>
> As I know both, the expected string is this full line. If SSH changes its behaviour, the script will simply fail.

Nice!

-- Grant. . . . unix || die
Re: [gentoo-user] Any way to automate login to host and su to root?
On 7/17/22 11:48 PM, J. Roeleveld wrote:
> It could, but that would open up an unsecured key to interception if an intermediate host is compromised.

What are you thinking? -- I've got a few ideas, but rather than speculating, I'll just ask.

> See previous answer, the agent, as far as I know, will have the keys in memory and I haven't seen evidence that it won't provide the keys without authenticating the requestor.

Are you concerned about a rogue requestor on the host where the agent is running, or elsewhere?

> Yes, copy/paste has no issues with multi-page texts. But manually reading a long password and copying it over by typing on a keyboard, when the font can make the difference between "1" (one), "l" (small letter L), and "|" (pipe character) and similar characters, is annoying to say the least.

Agreed.

> Currently, when that comment pops up, the first thing I do is wait and wonder why it's asking for it. As all the systems are already added to the list.

Such a pop-up would be a very likely indication of a problem.

-- Grant. . . . unix || die
Re: [gentoo-user] Any way to automate login to host and su to root?
On 7/17/22 11:24 PM, J. Roeleveld wrote:
> If I have 1 desktop and 1 laptop, that means 2 client machines. Add 5 servers/vms.

/Clients/ need (non-host) key pairs. Servers shouldn't need non-host key pairs. Servers should only need the clients' public keys on them.

> That means 10 ssh-keys per person to manage and keep track of.

If you're using per-host-per-client key pairs, sure. If you're only using per-client key pairs and copying the public key to the servers, no.

> When a laptop gets replaced, I need to ensure the keys get removed from the authorized_keys section.

If the new key pair would use the same algorithm and bit length and there is no reason to suspect compromise, then I see no reason to replace the key pair. I'd just copy the key pair from the old client to the new client and destroy it on the old client. This is especially true if the authorized_keys file has a from stanza on the public key.

> Same goes for when the ssh-keys need refreshing. Which, due to the amount, I never got round to.

I've not run into any situation where policy mandates that a key pair be replaced when there isn't any reason to suspect its compromise.

> I actually have more than the amount mentioned above; the amount of ssh-keys gets too much to manage without an automated tool to keep track of them and automate the changing of the keys. I never got the time to create that tool and never found anything that would make it easier.

As I think about it, I'd probably leverage the comment stanza of the public key so that I could do an in-place delete with sed and then append the new public key. E.g. have a comment that consists of the client's host name, some delimiter, and the date. That way it would be easy to remove any and all keys for the client in the future.

> When hosts can get added and removed regularly for testing purposes, this requires a management tool.

It depends on how you configure things. It seems as if it's possible to use the "%h" parameter when specifying the IdentityFile. So you could have a wild-card stanza that would look for a file based on the host name.

> You could put "root" without a valid password, making it impossible to "su -" into, and add a 2nd uid/gid 0 account with a valid password. I know of 1 organisation where they had a 2nd root account added which could be used by the org's sys-admins for emergency access. (These were student owned servers directly connected to the internet.)

I absolutely hate the idea of having multiple accounts using the same UID. I'd be far more likely to have a per-host account with UID=0 / GID=0 and have the root account have a different UID / GID. I'll need to try this at some point in the future.

> I expect the "wheel" group to only be for changing into "root", that's what it's advertised as.

I've seen some binaries in the wheel group with 0550 permissions.

> Still needs the clients to be actually running when the server runs the script. Or it needs to be added to a schedule and gets triggered when the client becomes available. This would make the scheduler too complex.

Why can't the script that's running ssh simply start an agent, run ssh, then stop the agent? There's no coordination necessary.

> I agree, but root-access is only needed for specific tasks, like updates. Most access is done using service-specific accounts. I only have 2 where users have shell-accounts.

Many people forget about problems on boot that require root's password.

> I'd love to implement Kerberos, mostly for the SSO abilities, but haven't found a simple to follow howto yet which can be easily adjusted so it can be added to an existing environment.

ACK

-- Grant. . . . unix || die
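The comment-stanza rotation idea from the message above can be sketched against a scratch file. The "hostname_date" comment convention and the key material here are hypothetical:

```shell
# Sketch: rotate a client's key in authorized_keys keyed on the comment
# field. Assumes (hypothetically) that each key's comment is
# "<clienthostname>_<date>"; the key blobs below are fake.
AK=$(mktemp)
cat > "$AK" <<'EOF'
ssh-ed25519 AAAAC3xxxx laptop_2021-01-01
ssh-ed25519 AAAAC3yyyy desktop_2022-03-15
EOF

# In-place delete every key ever installed for "laptop"...
sed -i '/ laptop_/d' "$AK"
# ...then append the replacement public key.
echo 'ssh-ed25519 AAAAC3zzzz laptop_2022-07-17' >> "$AK"

cat "$AK"
OUT=$(cat "$AK")
rm -f "$AK"
```

On the real file this is the whole client-replacement operation: one delete by comment prefix, one append, with other clients' keys untouched.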
Re: [gentoo-user] Any way to automate login to host and su to root?
On 7/15/22 11:46 PM, J. Roeleveld wrote:
> Hmm... interesting. I will look into this. :-) But, it needs the agent to be running, which will make it tricky for automation.

Why can't automation start an agent? Why can't there be an agent running that automation has access to?

> (I have some scripts that need to do things on different systems in a sequence for which this could help) :-) I know, which is why I was investigating automating it. The passwords are too long to comfortably copy by hand.

I assume that you mean "type" when you say "copy".

> I will definitely investigate this. They sound interesting. I'd set the validity to a lot less if this can be automated easily.

Yes, it can be fairly easily automated.

One of the other advantages of SSH /certificates/ is when you flip things around and use a /host/ certificate. Clients can recognize that the target host's certificate is signed by the trusted SSH CA and not prompt for the typical Trust On First Use (TOFU) scenario. Thus you can actually leverage the target host's SSH fingerprint and not need to ignore that security aspect like so many people do.

> Added to my research-list. :-)

-- Grant. . . . unix || die
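The short-validity certificates discussed above are minted with ssh-keygen. A sketch, assuming OpenSSH is installed; all names and paths are examples:

```shell
# Sketch: mint a short-lived SSH user certificate.
set -e
dir=$(mktemp -d)

ssh-keygen -q -t ed25519 -N '' -f "$dir/ca"    # the trusted SSH CA key
ssh-keygen -q -t ed25519 -N '' -f "$dir/user"  # the client's key pair

# Sign the user key: key ID "gtaylor-laptop", principal "gtaylor",
# valid for the next 8 hours only (-V +8h).
ssh-keygen -q -s "$dir/ca" -I gtaylor-laptop -n gtaylor -V +8h "$dir/user.pub"

# Inspect the resulting certificate (type, principals, validity window).
ssh-keygen -L -f "$dir/user-cert.pub"
```

Servers trust the CA via the TrustedUserCAKeys sshd_config option; for /host/ certificates, clients instead add an `@cert-authority` line to known_hosts, which is what eliminates the TOFU prompt.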
Re: [gentoo-user] Any way to automate login to host and su to root?
On 7/15/22 11:42 PM, J. Roeleveld wrote:
> True, properly done automation is necessary to make our lives easier.

#truth

> I tried this approach in the past and some levels of automation still use this, but for being able to login myself, I found having different keys became cumbersome and I ended up never actually replacing them.

I'm curious what you found to be cumbersome. I make extensive use of the client SSH configuration file (~/.ssh/config) such that I don't need to worry about which key is used for which host. This means that anything that uses ssh / sftp / scp /just/ /works/ (tm) using the contents of the configuration file.

> The goal is, whichever authentication system is used, for the passwords/keys to be replaced often with hard-to-brute-force passwords/keys. I can currently replace all passwords on a daily basis and not have a problem with accessing any system.

I agree in concept. Though I question the veracity of that statement when things aren't working normally. E.g. a system is offline for X hours due to hardware failure, or an old version restored from backup is now out of sync with the central system.

> For normal use, most systems don't need to be logged into a shell. For the few where this is needed, individual accounts exist. But, no individual account is a member of "wheel". For admin access, there are admin accounts on the machines. (They are all named individually and you won't find the same admin-account-username on more than 1 system.)

I've wondered about having the account for UID / GID 0 be named something other than root. But the testing that I did showed that there were too many things that assumed "root". :-/ Though I did find that I was able to successfully convert a test VM to use something other than root, and the proof of concept was a success. It's just that the PoC was too much effort / too fragile to be used in production.

I find that the wheel group is mostly for su and a few other commands. But the concept of "you must be a member of a group or have special permissions applied directly to your account" is conceptually quite similar to being a member of the wheel group. As such I don't think the abstraction makes much difference other than obfuscation.

> True, but this needs to run from the client. Not the server. Which means it will need to be triggered manually and not scheduled.

The algorithm could be refactored such that it is run from the server. E.g. if you can ensure that the old key is replaced with the new key, it can safely be done server side. I did this for a few colleagues that had forgotten the passphrase for their old private key and needed their new public key to be put into place.

> I don't even have sudo installed on most systems, only where it's needed for certain scripts to work, and there it's only used to avoid "setuid", which is an even bigger issue.

I tend to prefer sudo's security posture where people need to know /their/ password. Meaning that there is no need for multiple people to know the shared target user's password like su requires. If I was in a different environment, I'd consider Kerberized versions of su as an alternative.

-- Grant. . . . unix || die
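The ~/.ssh/config usage described above might look like the following; host names and key file names are examples:

```
# ~/.ssh/config sketch: one key per destination host, selected
# automatically so ssh / sftp / scp all "just work".
Host server1.example.net
    User gtaylor
    IdentityFile ~/.ssh/id_ed25519_server1
    IdentitiesOnly yes

# Or use the %h token so the key file name tracks the host name:
Host *.example.net
    IdentityFile ~/.ssh/id_ed25519_%h
    IdentitiesOnly yes
```

`IdentitiesOnly yes` keeps ssh from offering every agent key to every host, which also avoids tripping servers' failed-authentication limits.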
Re: [gentoo-user] Any way to automate login to host and su to root?
On 7/15/22 4:11 PM, Neil Bothwick wrote:
> I've never used it before, mainly because I wasn't aware of its existence until I re-read the ssh-keygen man page, but it seems to be simple timestamps passed to valid-before/valid-after.

I'm not sure that's applicable to /keys/ versus /certificates/.

Excerpt from the ssh-keygen man page:

    -V validity_interval
        Specify a validity interval when signing a /certificate/. A
        validity interval may consist of a single time, indicating that
        the /certificate/ is valid beginning now and expiring at that
        time, or may consist of two times separated by a colon to
        indicate an explicit time interval.

Maybe there's something else, but it seems like the validity period is for SSH /certificates/ and not SSH /keys/.

-- Grant. . . . unix || die
[gentoo-user] Re: Google Chrome now requires wayland and jack audio?
On 2022-07-15, Mark Knecht wrote:
> On Fri, Jul 15, 2022 at 12:28 PM Grant Edwards wrote:
>>
>> It looks like www-client/google-chrome just added wayland and jack
>> audio to the dependencies. So now I have to have Pulse _and_ Jack?
>
> Is that truly a Chrome requirement, like the company Google wrote
> the ebuild, or is this something a Gentoo dev did for some reason?

Google doesn't provide an ebuild. The ebuild is written and maintained by the kind volunteers of the Chromium in Gentoo project. For the binary distribution from Google, those devs have no control over what libraries the Chrome executables are built to use. All they can do is try to figure out which libraries Chrome needs, and reflect that in the ebuild so that after the binary from Google gets installed, it works.

That said, there was no jack audio requirement for Chrome. I misread the emerge output. The two new requirements that google-chrome was pulling in were

  dev-libs/wayland
  dev-util/wayland-scanner

You don't have to be running Wayland, but you now need the above wayland pieces.

There isn't actually a pulse audio requirement in the google-chrome ebuild either, but if I don't have pulse installed, some audio stuff in Chrome doesn't work. In web apps like Google Voice:

 * I can select my headset mic as audio in, but it won't work.
 * I can't select the headset as audio out.

Installing pulse audio fixed those problems.

> I'm curious as the USB disconnect problem seems somehow to be
> related to using Chrome on the host machine for sites that do a lot
> of audio, like YouTube. A clean boot of the host machine, followed
> by a clean boot of the VM and I've run for at least an hour with no
> disconnection problems. I can use Chrome for email, messaging and
> reading newspapers with no problem, but I run YouTube and twice I've
> had USB problems in the VM.

Yep, it sounds like doing audio via Chrome is disrupting the USB audio device that's in use by the VM. Are there Linux audio drivers for that hardware that you could uninstall to keep Chrome from seeing it?

-- Grant
[gentoo-user] Re: Google Chrome now requires wayland and jack audio?
On 2022-07-15, Julien Roy wrote:
> One of the side effects of using proprietary software: you can't
> control with which flags it gets built.

Yep. I didn't used to have the chrome binary package installed, but there are a couple things that I've never gotten to work in Chromium (e.g. Webex).

> With chromium-bin, there is a wayland USE flag, but nothing for
> jack.

I looked into that more, and I had misread the emerge output. It wasn't google-chrome that depended on jack, and now I can't figure out why it was installed. I did

  # emerge -C virtual/jack media-sound/jack-audio-connection-kit
  # emerge -auvND world

It didn't get reinstalled. And then a subsequent

  # emerge --depclean --ask

removed another half-dozen audio-related packages (zita-* and realtime-*, whatever they are). I'm sure the next time I try to use audio on that machine it won't work. I used to think that someday Linux sound support would get straightened out, but it just keeps getting worse...

-- Grant
Re: [gentoo-user] Any way to automate login to host and su to root?
On 7/15/22 1:12 PM, Neil Bothwick wrote:
> I'll check that out, but it is also possible to set time limits on SSH keys, and limit them to specific commands.

Please elaborate on the time limit capability of SSH /keys/. I wasn't aware of that. Is it hours of the day / days of the week they can be used? Or is it the number of days / date range that they can be used?

-- Grant. . . . unix || die
[gentoo-user] Google Chrome now requires wayland and jack audio?
It looks like www-client/google-chrome just added wayland and jack audio to the dependencies. So now I have to have Pulse _and_ Jack? -- Grant
Re: [gentoo-user] Any way to automate login to host and su to root?
On 7/14/22 3:22 PM, Steve Wilson wrote:
> Have you looked at dev-tcltk/expect?

Expect has its place. Just be EXTREMELY careful when using it for anything security related.

Always check for what is expected before sending data. Don't assume that something comes next and blindly send it (possibly after a pause). Things break in really weird and unexpected ways. (No pun intended.)

Also, do as much logic outside of expect as possible. E.g. don't try to add a user and then respond to a failure. Instead, check to see if the user exists /before/ trying to add it. Plan on things failing and try to control the likely ways that they can fail.

Paying yourself forward with time and effort developing (expect) scripts will mean that you reap the rewards for years to come.

-- Grant. . . . unix || die
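The "check before acting" advice above, done outside of expect, might look like this in shell (the example checks the root account so it is self-contained; a real script would take the user name as input):

```shell
# Sketch of "do the logic outside expect": check whether the account
# exists BEFORE deciding to add it, rather than reacting to a failure
# from a blind useradd.
user="root"   # placeholder; in real use this comes from the script's input

if getent passwd "$user" >/dev/null; then
    result="exists"
else
    result="absent"   # only here would the script go on to run useradd
fi
echo "$user: $result"
```

The expect part then only has to handle the login, not the error recovery.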
Re: [gentoo-user] Any way to automate login to host and su to root?
On 7/15/22 6:44 AM, Neil Bothwick wrote:
> I don't share keys, each desktop/laptop has its own keys.
>
> Not if they use their own keys. It should be simple to script generating a new key, then SSHing to a list of machines and replacing the old key with the new one in authorized_keys.

+1

> Indeed it is, and now you've found a way to do what you want with passwords, all is well. However, I will look at scripting regular replacements for SSH keys, for my own peace of mind.

/me loudly says "SSH /certificates/" from atop a pile of old servers in the server room.

-- Grant. . . . unix || die
Re: [gentoo-user] Any way to automate login to host and su to root?
On 7/15/22 1:53 AM, J. Roeleveld wrote:
> I agree, but that is a tedious process.

Yes, it can be. That's where some automation comes into play.

> I have multiple machines I use as desktop depending on where I am. And either I need to securely share the private keys between them or set up different keys per desktop.

I /currently/ use unique keys /per/ /client/ /system/. I am /planning/ on starting to use unique keys /per/ /client/ /per/ /server/. Meaning that each client will use a different key for each remote server. I think that this, combined with location restrictions in the authorized_keys file, will mean that SSH keys (or certificates) can't be used from anywhere other than their approved location or for anything other than their intended purpose.

> I assume the same is true for most people.

Yes. It depends what security posture you / your organization want.

> Never mind that access to the servers needs to be possible for others as well.

I assume that other users will use their own individual accounts to log into the target systems with a similar configuration. E.g. I log into remote systems as "gtaylor", you log into remote systems as "joost", and Neil logs into remote systems as "neil". We would all then escalate to root via "su -" with the automation providing the password to su.

> Either way, to do this automatically, all the desktop machines need to be powered and running while changing the keys.

No, they don't. You just need to account for current and prior keys. I've done exactly this on a fleet of about 800 Unix systems that I helped administer at my last job. You do something like the following:

1) Log into the remote system explicitly using the prior key.
2) Append the current key to the ~/.ssh/authorized_keys file.
3) Log out of the remote system.
4) Log into the remote system explicitly using the current key.
5) Remove the prior key from the ~/.ssh/authorized_keys file.
6) Log out of the remote system.

This can be fairly easily automated. You can then loop across systems using this automation to update the key on systems that are online. You can relatively easily deal with systems that are currently offline later, when they are back online. -- There are ways to differentiate between offline systems and bad credentials during day-to-day operations. So when you hit the bad credentials, you leverage the automation that tries old credentials to update them. You end up bifurcating the pool of systems into different groups that need to be dealt with differently: online and doing what you want; online but not doing what you want; and offline.

> Changing passwords for servers and storing them in a password vault is easier to automate.

I disagree. Using passwords tends to negate things like authenticating to sudo with SSH keys / certificates, thus prompting the use of NOPASSWD:.

-- Grant. . . . unix || die
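The six-step rotation above can be sketched as a dry run; `RUN=echo` prints the remote commands instead of executing them, and the host name and key file names are placeholders:

```shell
# Dry-run sketch of the prior-key -> current-key rotation. Set RUN="" to
# execute for real; PRIOR_PUBKEY_LINE / CURRENT_PUBKEY_LINE stand in for
# the actual public key lines.
RUN="echo"
host="gtaylor@server1.example.net"

# Steps 1-3: authenticate with the PRIOR key, append the current pubkey.
step_add=$($RUN ssh -i "$HOME/.ssh/id_prior" "$host" \
    "echo 'CURRENT_PUBKEY_LINE' >> ~/.ssh/authorized_keys")

# Steps 4-6: authenticate with the CURRENT key, drop the prior key's line.
step_del=$($RUN ssh -i "$HOME/.ssh/id_current" "$host" \
    "sed -i '/PRIOR_PUBKEY_LINE/d' ~/.ssh/authorized_keys")

echo "$step_add"
echo "$step_del"
```

Step 4 succeeding is the proof that the new key works, which is why the prior key is only deleted afterwards; a loop over a host list around this handles the whole fleet.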
Re: [gentoo-user] Any way to automate login to host and su to root?
On 7/15/22 1:15 AM, J. Roeleveld wrote: Yes. Okay. That simply means that SSH keys won't be used to authenticate to the remote system. How would it not prompt for a password. There is a PAM module; pam_ssh_agent_auth, which can be used to enable users to authenticate to sudo using SSH keys. This means that the user /does/ authenticate to sudo as necessary. It's just that the authentication happens behind the scenes and they don't need to enter their password. Thus you can avoid the NOPASSWD: option which means a better security posture. I need something that will take the password from the vault (I can do this in Python and shell-scripting. Probably also in other scripts). Authenticating to the vault can be done on a session basis and shared. So locally, I'd only login once. Sure. Currently, yes. I never physically see the password as it currently goes into the clipboard and gets wiped from there after a short time period. Enough time to paste it into the password-prompt. It's the copy/pasting that I am looking to automate into a single "login-to-remote-host" script. I would not consider the copy and paste method to be secure. There are plenty of utilities to monitor the clipboard et al. and copy the new contents in extremely short order. As such, users could arrange to acquire copies of the password passing through the clipboard. I would strongly suggest exploring options that don't use the clipboard and instead retrieve the password from the vault and inject it into the remote system without using the clipboard. Or, authenticate to sudo a different way that doesn't involve a password. This will work for 90+ percent of the use cases. Meaning that the sensitive password is needed for 10 percent or less of the time. Thereby reducing the possible sensitive password exposure. }:-) I prefer not to use SSH keys for this as they tend to exist for years in my experience. And one unnoticed leak can open up a lot of systems. That is a valid concern. 
I'd strongly suggest that you research SSH /certificates/. SSH /certificates/ support a finite lifetime /and/ can specify what command(s) / action(s) they can be used for. My $EMPLOYER uses SSH /certificates/ that last about 8 hours. I've heard of others that use SSH /certificates/ that last for a single digit number of minutes or even seconds. The idea being that the SSH /certificate/ only lasts just long enough for it to be used for its intended purpose and no longer. The ability to specify the command that is allowed to be executed, e.g. "su -", means that people can't use them to start any other command. }:-) This is why I use passwords. (passwords are long random strings that are changed regularly) Fair enough. I only counter with: take a few minutes to research SSH /certificates/ and see if they are of any interest to you. -- Grant. . . . unix || die
Re: [gentoo-user] Any way to automate login to host and su to root?
On 7/15/22 1:07 AM, J. Roeleveld wrote: What I am looking for is: 1) Lookup credentials from password vault (I can do this in script-form, already doing this in limited form for ansible-scripts, but this doesn't give me an interactive shell) ACK You indicated you already had a solution for this. So I'm leaving it in your capable hands. 2) Use admin-account credentials to login via SSH into host When you say "admin-account", do you mean the given System Administrator's personal account or a common / shared administrative account? E.g. would I log in as myself; "gtaylor", or something shared "helpdeskadmin"? I'm assuming the former unless corrected. Do you want the user to be prompted for the Unix account password (on the remote system) or can they use SSH keys to login without a password prompt? 3) On remote host, initiate "su -" to switch to root and provide root-password over SSH link at the right time I would suggest having the SSH command invoke the "su -" command automatically. Note: You will probably want to run a command something like this to make sure that a TTY is allocated for proper interaction with su. ssh -t @ "/path/to/su -" 4) Give me an interactive root-shell on remote-host Okay. Not what I would have expected, but it's your system and you do you. :-) When I close the shell, I expect to be fully logged out (eg, I go straight back to the local host, not to the admin-account) The nice thing about having SSH invoke the "su -" command directly is that once you exit su, you also end up exiting the SSH session. I see plenty of google-results and also as answers for ssh directly to "root" using ssh-keys. I do not consider this a safe method, I use it for un-privileged accounts (not member of "wheel"). I don't use it for admin-accounts. Thank you for the elaboration. I tend to agree with your stance. I have exceedingly few things that can SSH into systems as the root user, and they all have forced commands. 
They all have to do with the backup system which can't use sudo /or/ I want the ability to get in and restore a sudoers file if it gets messed up, thus avoiding the chicken / egg problem. Following the same security mentality, I prefer to specify the full path to executables, when possible, in order to make sure that someone doesn't put a Trojanized version earlier in the path. }:-) -- Grant. . . . unix || die
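The "ssh -t ... su -" pattern discussed above can be sketched as a tiny wrapper. This is a minimal sketch, not anyone's actual setup: the user "gtaylor" and host "host.example.com" are placeholders, and the function prints the command rather than running it so the quoting can be inspected first.

```shell
# Sketch of the "ssh -t user@host su -" pattern from this thread.
# "gtaylor" and "host.example.com" are placeholders, not real values.
# -t forces TTY allocation so su can prompt for the root password;
# "exec su -" replaces the remote login shell, so exiting su also
# ends the SSH session (no lingering admin-account shell).
su_over_ssh() {
    printf 'ssh -t %s@%s "exec su -"\n' "$1" "$2"
}

# Preview the command that would be run (dry run):
su_over_ssh gtaylor host.example.com
```

Dropping the printf wrapper and invoking the command directly gives the interactive root shell; looping the call over a host list gives the multi-system variant mentioned later in the thread.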
Re: [gentoo-user] Any way to automate login to host and su to root?
On 7/14/22 1:08 PM, Neil Bothwick wrote: I was accepting your point, one I hadn't considered. Ah. Okay. :-/ Here I was hoping to learn something new from you. ;-) Still a good discussion nonetheless. :-) -- Grant. . . . unix || die
Re: [gentoo-user] Any way to automate login to host and su to root?
On 7/14/22 9:56 AM, Neil Bothwick wrote: That is true, but it is also true about the current setup as that also gives root access. I get the impression that Joost is looking for a more convenient approach that does not reduce security, which is true here... I'm all for being /more/ secure, especially when doing so can be made to appear to be /simpler/ for the end user. I think the quintessential example of this is authenticating to sudo with SSH keys via SSH agent forwarding. It eliminates the password prompt or the NOPASSWD: option. Either way, you have better security posture (always authenticated) and / or users have a better experience (no password prompt). Well, almost true. Please elaborate. I consider it fairly difficult for non-root users to get a copy of the /etc/shadow file on most systems. Conversely, SSH private key files tend to ... leak / be forgotten. -- Grant. . . . unix || die
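For anyone curious what "authenticating to sudo with SSH keys via SSH agent forwarding" looks like in practice, here is a rough sketch using pam_ssh_agent_auth. The file paths and the system-wide authorized_keys location are assumptions; they vary by distribution, and the PAM line must come before the other auth entries.

```
# /etc/pam.d/sudo (sketch; ordering matters -- this line goes first)
auth  sufficient  pam_ssh_agent_auth.so  file=/etc/security/authorized_keys

# /etc/sudoers (sudo must be allowed to see the forwarded agent socket)
Defaults  env_keep += "SSH_AUTH_SOCK"
```

With this in place, a user whose forwarded agent holds a key listed in the authorized_keys file is authenticated to sudo transparently; everyone else still gets the normal password prompt.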
Re: [gentoo-user] Any way to automate login to host and su to root?
On 7/14/22 8:48 AM, Neil Bothwick wrote: Is this user only used as a gateway to root access, or can you set up such a user? If so you could use key-based authentication for that user, with a passphrase, and add command="/bin/su --login" to the authorized_keys line. That way you still need three pieces of information, Be mindful that despite the fact that this protects things on the surface, it is / can be a way to bootstrap changing this. After all, nothing about this forced command prevents the user from using the acquired root access to modify the ~/.ssh/authorized_keys file that enforces the command. This is one of the pitfalls that I alluded to in my earlier reply about security vs automation. Quite simply, this is NOT security as it's trivial to use the access (su -) to gain more access (edit the ~/.ssh/authorized_keys file). replacing the user's password with the user's key passphrase. This is another slippery slope. SSH key passphrases can be brute forced in an offline fashion. Conversely, system passwords are more of an online attack. Assuming that standard system protections are in place for /etc/shadow*. -- It's easier to get a copy of someone's private SSH key file, especially if they are somewhat lax about its security believing that the passphrase will protect it. -- Grant. . . . unix || die
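For reference, a forced-command entry of the kind Neil describes might look like the following. The key material and comment are obviously placeholders; the extra restriction options are a common hardening choice, not something the thread mandates.

```
# ~/.ssh/authorized_keys (sketch; the options apply only to this one key)
command="/bin/su --login",no-port-forwarding,no-X11-forwarding,no-agent-forwarding ssh-ed25519 AAAAC3...placeholder... admin@workstation
```

Note that, per the caveat above, this only constrains the session until the resulting root shell is used to edit this very file.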
Re: [gentoo-user] Any way to automate login to host and su to root?
On 7/14/22 3:54 AM, J. Roeleveld wrote: For security reasons, I do not want direct login to root under any circumstances. This is disabled on all systems and will stay this way. +10 for security Currently, to login as root, you need to know: - admin user account name - admin user account password - root user account password Please describe what an ideal scenario would be from a flow perspective, independent of the underlying technology. I do not want to reduce this to a single ssh-key-passphrase. Please elaborate as I suspect that the reasoning behind that statement is quite germane to this larger discussion. -- Grant. . . . unix || die
Re: [gentoo-user] Any way to automate login to host and su to root?
On 7/14/22 12:35 AM, J. Roeleveld wrote: Hi All, Hi, I am looking for a way to login to a host and automatically change to root using a password provided by an external program. Please clarify if you want to /require/ a password? I can think of some options that would authenticate, thus avoiding sudo's NOPASSWD:, but not prompt for a password. I want to know if those types of options are on the table or if they should be discarded. The root passwords are stored in a vault and I can get passwords out using a script after authenticating. Okay. Currently, I need to do a lot of the steps manually: ssh @ su - You could alter that slightly to be: ssh @ su - That would combine the steps into one. (copy/paste password from vault) Are you actually copying & pasting the password? Or will you be using something to retrieve the password from the vault and automatically provide it to su? I think that removing the human's need ~> ability to copy & paste would close some security exposures. Aside: Removing the human's ability to copy ~> know the password as a security measure can be a slippery slope, and I consider it questionable at best. -- Conversely, doing it on behalf of the human with a password that they know simply as automation is fine. I would like to change this to: I think that's doable. I've done a lot of that. I'll take it one step further and put " " in a for loop to do my bidding on a number of systems. I think the "ssh @ su -" method might be a bit cleaner from a STDIN / TTY / FD perspective. Does anyone have any hints on how to achieve this without adding a "NOPASSWD" entry into /etc/sudoers ? Flag on the play: You've now mixed privilege elevation mechanisms. You originally talked about "su" and now you're talking about "sudo". They are distinctly different things. Though admittedly they can be used in concert with each other. 
If you are using SSH keys /and/ sudo, then I'd recommend that you investigate authenticating to sudo via (forwarded) SSH keys. This means that your interactions with sudo are /always/ authenticated *and* done so without requiring an interactive prompt. Thanks in advance, There's more than a little bit here. There are a number of ways that this could go. -- Grant. . . . unix || die
[gentoo-user] Re: python mess - random winge!
On 2022-07-05, Jack wrote: > On 2022.07.05 12:24, Grant Edwards wrote: >> On 2022-07-05, William Kenworthy wrote: >> It would be nice if the news item explained how to let the upgrade >> proceed while holding back a few packages. >> >> Can you set 3_9 and 3_10 globally, and then disable 3_10 for a few >> individual packages that can't be built with 3_10? > As far as I can tell, you just need to add python_targets_python3_9 for > the package in the appropriate package.use file. I've tried that, and it takes forever. Every time you do it, the next emerge attempt will fail because one of that package's dependencies doesn't have 3_9 set. So you set that one, do an emerge, and it fails because there's another dependency that doesn't have 3_9 set. Repeat this for an hour or two... If it would tell you about all of them at once, it wouldn't be so bad. But, if you're trying to hold back a large application with dozens and dozens of dependencies it makes you want to scream. Or doesn't it torture you like that any longer? -- Grant
[gentoo-user] Re: python mess - random winge!
On 2022-07-05, William Kenworthy wrote: > I synced portage a couple of days ago and now my systems are rebuilding > python modules for 3.10 without any input from me [...] Every time there's a Python upgrade like this, it turns into a bit of an ordeal because I always have a small handful of packages that don't support the newer version. The news item offers no advice on what to do in this situation other than completely postponing the upgrade of everything (which doesn't seem like the best choice.) It would be nice if the news item explained how to let the upgrade proceed while holding back a few packages. Can you set 3_9 and 3_10 globally, and then disable 3_10 for a few individual packages that can't be built with 3_10? -- Grant
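The per-package approach being debated in this thread would look roughly like this. The package name is a hypothetical example, not one of the packages anyone in the thread actually held back:

```
# /etc/portage/package.use/python39 (sketch)
# Keep the 3.9 target enabled for a package that does not yet build
# against 3.10:
dev-python/somepkg   python_targets_python3_9
# ...and, as described above, expect to repeat this for each dependency
# that the next emerge attempt complains about, one at a time.
```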
[gentoo-user] Re: Google pop3 authentication failure
On 2022-06-30, Walter Dnes wrote: > On Wed, Jun 29, 2022 at 10:26:55PM -0000, Grant Edwards wrote >> >> AFAIK, you've got two choices. >> >> 1. Use an "app password" >> >> https://support.google.com/accounts/answer/185833 >> >> https://www.lifewire.com/get-a-password-to-access-gmail-by-pop-imap-2-1171882 >> >> 2. Use OAUTH 2.0 >> >> https://developers.google.com/gmail/imap/xoauth2-protocol >> https://oauth.net/2/ > > I looked at those instructions and also at setting up mutt, which I > currently use. Clear as mud. OAUTH is pretty complicated. However, setting up an app password is very simple. It only takes a few clicks. Quoting from the google support page (first link above): 1. Log in to your Google Account. 2. Click "Security". 3. Click "App Passwords". That brings up the App Passwords page. 4. Select app (pick app from dropdown, you want "Mail") 5. Select device (pick device from dropdown, pick whatever you want, I recommend custom) 6. Click "Generate" That will pop up a dialog containing a 16-character password to be used by fetchmail. Copy that password. 7. Click "Done"
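Once generated, the app password drops into an ordinary fetchmail configuration. A minimal sketch, with a placeholder address and password (the real one is the 16 characters from step 6):

```
# ~/.fetchmailrc (sketch; address and password are placeholders)
poll pop.gmail.com
    protocol pop3
    user "someone@gmail.com"
    password "abcdefghijklmnop"
    ssl
```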
[gentoo-user] Re: Google pop3 authentication failure
On 2022-06-29, Walter Dnes wrote: > On Mon, Jun 13, 2022 at 02:47:16PM +, spareproject776 wrote >> >> They flushed all the app password creds and forced 2fa. >> Need to go through the accounts.google.com login to recover. > > Sorry for the delay responding. I can login fine with my password on > accounts.google.com but it does not work on pop.gmail.com. Google disabled the use of normal passwords for IMAP and POP authentication a year or so back. [These days when I say "a year or so back" it could be anything from 5 months to about 5 years. I think "a couple years ago" now averages at about 7. "Five or six years ago" reaches back to the end of the Clinton administration.] > What exactly do I have to do to get it working again? AFAIK, you've got two choices. 1. Use an "app password" https://support.google.com/accounts/answer/185833 https://www.lifewire.com/get-a-password-to-access-gmail-by-pop-imap-2-1171882 2. Use OAUTH 2.0 https://developers.google.com/gmail/imap/xoauth2-protocol https://oauth.net/2/ AIUI, the latter is considered more secure. But, a lot of applications don't support OAUTH 2.0 (or if they do, it's via a complex plugin/helper scheme). For mutt I looked into OAUTH, and it can be done with some external helper applications. Creating an app password for mutt to use with IMAP was much easier. -- Grant
[gentoo-user] Re: dbus now requires CONFIG_EPOLL in kernel?
On 2022-06-20, Grant Edwards wrote: > At the end of an update today, I got an error message from > > sys-apps/dbus-1.12.22-r2: > > * CONFIG_EPOLL: is not set when it should be. >Please check to make sure these options are set correctly. Failure >to do so may cause unexpected problems. Oops. Apparently at some point I somehow overwrote the file at /usr/src/linux/.config with something completely unrelated (looks like the contents of /proc/cpuinfo). The EPOLL option is set in the actual kernel: $ zcat /proc/config.gz | grep EPOLL CONFIG_EPOLL=y and in all my previous version .config files. -- Grant
[gentoo-user] dbus now requires CONFIG_EPOLL in kernel?
At the end of an update today, I got an error message from sys-apps/dbus-1.12.22-r2: * CONFIG_EPOLL: is not set when it should be. Please check to make sure these options are set correctly. Failure to do so may cause unexpected problems. In the kernel config menu, that option is under an "experts only" parent menu that warns This is for specialized environments which can tolerate a "non-standard" kernel. Only use this if you really know what you are doing. Is this new requirement for CONFIG_EPOLL legit? -- Grant
[gentoo-user] Re: Reinstall
On 2022-06-19, Francisco Ares wrote: > Just for the sake of preventing a future failure, besides personal > files (minimum and obvious) the "world" file and the binary packages, > built along with the package installation, what else should I backup > so that I would be able to quickly restore the same full working > Gentoo in a new hardware without having to work from stage3 up? The > portage tree is one of those items, for sure. But what else? Make a backup copy of everything under /etc. I used to try to backup individual /etc/... files that I would need, but I always forgot something. -- Grant
[gentoo-user] Re: Replacement for RabbitVCS SVN/Git browser?
On 2022-06-06, Grant Edwards wrote: > On 2022-06-06, Grant Edwards wrote: >> Can anybody recommend a good replacement for RabbitVCS? I've been >> using it for ages to browse repos (mainly SVN), but it seems to have >> died off. It's no longer in the package database nor in PyPi. The >> last update in the developer blog is 2-1/2 years old. >> >> What are the alternatives? > > I found tkcvs, but it doesn't seem to do what I want. I mainly want to > look at file change logs, click on commits and see what changed in > that commit. > > I found an ebuild for rapidsvn, but it doesn't seem to build because > it's using deprecated library functions and various other C++ compiler > complaints. I cloned rabbitvcs from the "official" github repo and it seemed to install just fine using the usual "python setup.py install". I had to fix a couple minor syntax errors to get it to run, but it seems to be working OK.
[gentoo-user] Re: Replacement for RabbitVCS SVN/Git browser?
On 2022-06-06, Grant Edwards wrote: > Can anybody recommend a good replacement for RabbitVCS? I've been > using it for ages to browse repos (mainly SVN), but it seems to have > died off. It's no longer in the package database nor in PyPi. The > last update in the developer blog is 2-1/2 years old. > > What are the alternatives? I found tkcvs, but it doesn't seem to do what I want. I mainly want to look at file change logs, click on commits and see what changed in that commit. I found an ebuild for rapidsvn, but it doesn't seem to build because it's using deprecated library functions and various other C++ compiler complaints. -- Grant
[gentoo-user] Replacement for RabbitVCS SVN/Git browser?
Can anybody recommend a good replacement for RabbitVCS? I've been using it for ages to browse repos (mainly SVN), but it seems to have died off. It's no longer in the package database nor in PyPi. The last update in the developer blog is 2-1/2 years old. What are the alternatives? -- Grant
Re: [gentoo-user] Change in sudoers format?
On 5/29/22 9:48 AM, w...@op.pl wrote: User xyz can execute command D on host A as user B in group C ... is just a matter of consistency ;) The group that a command is run as starts to become much more germane when you are using sudo to run commands as a different non-root user. E.g. if you want to run commands as the Oracle user to manage things about a database. In some ways this is somewhat akin to setting the GID bit on a directory so that newly created files inherit the group of the directory. At least insofar as the type of situation that would necessitate the use of this feature. -- Grant. . . . unix || die
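The "user xyz, command D, host A, user B, group C" phrasing maps directly onto sudoers syntax. A sketch with hypothetical names (the Oracle example from above):

```
# /etc/sudoers (sketch): user xyz may run sqlplus on host dbhost
# as user oracle with group dba.
xyz  dbhost = (oracle : dba)  /usr/bin/sqlplus
```

The user would then invoke it as `sudo -u oracle -g dba /usr/bin/sqlplus`; omitting the `: dba` part in sudoers would leave them with oracle's default group instead.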
Re: [gentoo-user] problem with saslauthd
On 5/12/22 8:42 AM, John Covici wrote: So, I went on to the sasl mailing list and someone found a patch -- seems to be available for the freebsd port, and the patch was specific to sendmail and dev-libs/cyrus-sasl 2.1.28. I modified it for gentoo and it fixed everything up! I wonder if I should file this somewhere -- funny no one else noticed this before -- I saw nothing on bgo. Hi John, I'm glad that you found a solution. I'm sorry that I've not responded to your detailed message yet. Life / $WORK has been really busy this week. I was planning on giving your message the attention it deserved this weekend. Yes, I suspect that a patch or at least a bug report to Gentoo would be good. I'd suggest starting communications with the Gentoo package maintainer if there is no better place. I expect that they will receive the patch and / or redirect you somewhere better. -- Grant. . . . unix || die
[gentoo-user] Re: Remove rust completely
On 2022-05-12, Mansour Al Akeel wrote: > Thank you for your response. The idea of "getting harder and harder" > is hard to accept. Gentoo has always been about having choices. It is. You can choose to avoid Rust if you want. > Firefox requires rust, but is there a way to disable this? No. > There must be another way to let the user decide if they need it or not! If you need Firefox, you need rust. > And yes, the compile time is one of the factors in not wanting it on > my system. rust-bin solves that problem. > The second factor is a natural reaction toward feeling that I am > forced to have it. Another reason is the growing collection of > compilers and development tools and their build time (gcc, > bin-utils, llvm, clang ... etc.) and now rust. > > Firefox itself takes a lot of time to build, and if rust is a must > have, then maybe it is time for me to look into something else. I know > there's firefox-bin, and if it doesn't need rust, then maybe it is an > option. Firefox-bin does not require rust.
Re: [gentoo-user] problem with saslauthd
On 5/6/22 4:09 AM, John Covici wrote: So, I restored all the files I could, like sendmail.mc and the Sendmail.conf, but no joy, still no authentication mechanisms. I restored them to about first of April. Well darn. :-/ This still leads me to saslauthd. I didn't mean to imply that it /wasn't/ SASL, just that the two are separate. Have you been maintaining your sendmail.cf via the sendmail.mc file? Or are there unaccounted for hand edits? -- I'll often test new things in sendmail.cf directly and then promote them to sendmail.mc once I have identified what I want. Likewise with submit.cf / submit.mc. Would you be willing to share your sendmail.mc and submit.mc files? Feel free to "REDACT" things as necessary. (Please make sure it's easy to tell what is redacted.) -- Grant. . . . unix || die
[gentoo-user] Re: Bluetooth speakers
On 2022-05-06, Peter Humphrey wrote: > On Thursday, 5 May 2022 21:37:12 BST Michael wrote: > >> I've never had speakers blowing the audio chips driving them. I >> would have thought they would be protected electrically from such >> events occurring. I doubt there is much protection on line-out connections. > The sound chips have failed on both my workstations' motherboards > over the last five years or so. They only seem to last a couple of > years. Each time I've plugged in a USB dongle instead, and both of > those have now failed. That's very odd. > Or perhaps it's the speakers and their amplifiers. IMO, that's the logical conclusion. I've never had the audio chip on any computer fail -- ever. Nor have I ever had a USB audio adapter fail (though I've only used a couple of them over the years). -- Grant
Re: [gentoo-user] problem with saslauthd
On 5/5/22 1:24 PM, John Covici wrote: I do have a submit.mc file, but I have not changed this at all. What is strange to me is that if I do saslauthd -v should not I get everything that my Sendmail.conf has? I would not assume so. I say that based on my understanding of how SASL and Sendmail interact. In many ways, Sendmail and SASL are two entirely separate sub-systems. Sendmail (as I usually see it configured) wholesale outsources testing authentication credentials. It does so by asking the completely independent SASL authentication daemon to test the credentials (nominally a username and password pair) to see if they are valid. SASL returns a yes / no to Sendmail. Sendmail alters what it does based on that answer. Since Sendmail and SASL are independent entities there is no reason for SASL to know anything about how Sendmail is configured. I can check an old backup and see if I have one for my sendmail.mc and get back. ACK -- Grant. . . . unix || die
Re: [gentoo-user] problem with saslauthd
On 5/5/22 10:39 AM, John Covici wrote: saslauthd is running, but it seems to ignore the Sendmail.conf . I think it's the other way around. Sendmail is told to support authentication via one or more methods, one of which can be SASL and co. The actual SASL auth daemon just listens on a unix socket and / or TCP port for clients to test authentication pairs, returning a pass / fail type message. I used openssl s_client to connect to my sendmail, it was happy with the certs, but in response to the ehlo gives me no auth line at all. :-/ Very strange. Very annoying, definitely. I don't know if it's strange yet or not. I think the strangeness will be confirmed or refuted after finding out why Sendmail isn't offering AUTH options. My favorite approach when things that used to work suddenly don't is to restore a backup of the configuration file and compare them. Can you do that with your sendmail.cf or sendmail.mc file? There's also a chance that it's your submit.cf or submit.mc file since we're talking about the MSA on port 587. (Unless you aren't using the separate MSA which has been standard for 15+ years.) -- Grant. . . . unix || die
Re: [gentoo-user] problem with saslauthd
On 5/4/22 7:31 AM, John Covici wrote: Hi. I have been using various clients to connect to my sendmail server using port 587 and using starttls to encrypt the connections and then using the plain mechanism to send the user name and password to authenticate. Last day or so this has stopped working -- I don't know that I changed anything (famous last words), Assume that your configuration is at least acceptable until you have a reason to think otherwise. So, after all that, anyone have an idea as to how to fix? Start with the simpler thing first. Is the SASL authentication daemon running? Did your (START)TLS certificate expire? Contemporary clients may silently refuse to use expired certs. Thanks. You're welcome. Feel free to poke things and respond with more questions / details / errors / etc. -- Grant. . . . unix || die
[gentoo-user] Re: ALSA forgot default device
On 2022-05-01, John Covici wrote: > These configurations are in /etc/modprobe.d/alsa.conf as to which is > the default sound card and its parameters. I believe that file is only used if alsa is a module. I've never configured alsa as a module. > The name might not be alsa.conf, but you would have to unload the > module and reload, that is why you had to do a reboot. I have always had alsa built in to the kernel, and the default configuration is in /usr/share/alsa/alsa.conf. That file is the one installed by the emerge, and generally should not be edited. System-wide settings should go in /etc/asound.conf and per-user settings should go in ~/.asoundrc. Those files are both included by /usr/share/alsa/alsa.conf.
[gentoo-user] Re: ALSA forgot default device
On 2022-05-01, Grant Edwards wrote: > >> The usual fallback is wiki.archlinux.org, but its instructions to >> place the following in /etc/asound.conf or ~/.asoundrc doesn't work: >> >> defaults.pcm.card 1 >> defaults.ctl.card 1 > > wiki.gentoo.org is back, and it says to use something like this in > /etc/asound.conf > or ~/asoundrc: > > defaults.pcm.!card 1 > defaults.pcm.!device 0 > defaults.ctl.!card 1 > > That also does nothing. Using card names as shown at > https://wiki.gentoo.org/wiki/ALSA#Files also does nothing. > > Yes, I'm restarting alsasound after changing /etc/asound.conf Apparently, restarting alsasound after changing /etc/asound.conf isn't enough. A reboot was required. I never saw that mentioned in any of the half-dozen sources I read about alsa configuration, so I presume it's a personal problem? -- Grant
[gentoo-user] Re: ALSA forgot default device
On 2022-05-01, Grant Edwards wrote: > On Gentoo, with OpenRC, how do you configure the default board/device > for ALSA? > > I've asked Google, and all the links it comes up with are for sites > that are broken because of PHP or database failures (e.g. wiki.gentoo.org > and forums.gentoo.org). > > The usual fallback is wiki.archlinux.org, but its instructions to > place the following in /etc/asound.conf or ~/.asoundrc doesn't work: > > defaults.pcm.card 1 > defaults.ctl.card 1 wiki.gentoo.org is back, and it says to use something like this in /etc/asound.conf or ~/asoundrc: defaults.pcm.!card 1 defaults.pcm.!device 0 defaults.ctl.!card 1 That also does nothing. Using card names as shown at https://wiki.gentoo.org/wiki/ALSA#Files also does nothing. Yes, I'm restarting alsasound after changing /etc/asound.conf The only way I can get sound is to specify the card using -D with aplay, or using -ao alsa:device= with mplayer, or by manually choosing the correct output device in VLC.
[gentoo-user] ALSA forgot default device
After a recent update ALSA stopped working. Apparently, ALSA now defaults to a device that doesn't work (fails to open, or just hangs). On Gentoo, with OpenRC, how do you configure the default board/device for ALSA? I've asked Google, and all the links it comes up with are for sites that are broken because of PHP or database failures (e.g. wiki.gentoo.org and forums.gentoo.org). The usual fallback is wiki.archlinux.org, but its instructions to place the following in /etc/asound.conf or ~/.asoundrc don't work: defaults.pcm.card 1 defaults.ctl.card 1 Likewise the wiki.archlinux.org advice to set ALSA_CARD to the card name shown by aplay -l also doesn't work. How do you set the default card/device for ALSA on Gentoo? -- Grant
[gentoo-user] Re: sync-type: rsync vs git
On 2022-04-27, Rich Freeman wrote: > On Wed, Apr 27, 2022 at 10:22 AM Grant Edwards > wrote: >> >> Is there any advantage (either to me or the Gentoo community) to >> continue to use rsync and the rsync pool instead of switching the >> rest of my machines to git? >> >> I've been very impressed with the reliability and speed of sync >> operations using git; they never take more than a few seconds. > With git you might need to occasionally wipe your repository to > delete history if you don't want it to accumulate (I don't think > there is a way to do that automatically but if you can tell git to > drop history let me know). I don't think I have any history. I use sync-depth=1 and clone-depth=1. Both git log and git whatchanged only show one commit. > Of course that history can come in handy if you need to revert > something/etc. Perhaps I should keep a few levels of history... > If you sync infrequently - say once a month or less frequently, then > I'd expect rsync to be faster. I generally sync several times a week, and git is often very much faster than rsync. Git is always done in a few seconds. The time required for rsync varies widely from a handful of seconds to tens of minutes. > This is because git has to fetch every single set of changes since > the last sync, while rsync just compares everything at a file level. > [...] > That can add up if it has been a long time. AFAICT, the emerge repo git "depth" settings of 1 prevent that: the intermediate versions are discarded on the server side as is previous local history. The end result is similar to rsync: you fetch only the current version of what's changed since the last "sync", and there's no local history. > Bottom line is that I think git just makes more sense these days for > the typical gentoo user, who is far more likely to be interested in > things like changelogs and commit histories than users of other > distros. 
I'm not saying it is always the best choice for everybody, > but you should consider it and improve your git-fu if you need to. > Oh, and if you want the equivalent of an old changelog, just go into a > directory and run "git whatchanged ." Right now with a depth of 1, git log/whatchanged don't provide any information (they think all files were new as of the last "sync"). What I should figure out is what settings will preserve a few levels of changes that have been made to my local repo, without preserving intermediate changes to the master repo that never got used locally. IOW, I want all the changes made during a single "sync" to go into my local repo as a single commit regardless of how many commits have been made to the master repo since my previous "sync". I think git can do that -- whether the emerge sync settings in /etc/portage/repos.conf/gentoo.conf allow me to tell emerge to tell git to do that is the question. -- Grant
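The settings discussed in this thread live in the repo configuration file mentioned above. A sketch assembled from the values named in the thread (the sync-uri and depth settings are quoted from it; the location path is a common default, not something the thread states):

```
# /etc/portage/repos.conf/gentoo.conf (sketch)
[gentoo]
location = /var/db/repos/gentoo
sync-type = git
sync-uri = https://anongit.gentoo.org/git/repo/sync/gentoo.git
sync-depth = 1
clone-depth = 1
```

Raising the depth values (or removing them) is the knob to turn for keeping a few levels of history instead of a single commit.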
[gentoo-user] sync-type: rsync vs git
A while back I switched one of my machines' sync-type for the gentoo repo from rsync to git using https://anongit.gentoo.org/git/repo/sync/gentoo.git because that machine is behind a firewall that stopped allowing rsync connections. Is there any advantage (either to me or the Gentoo community) to continue to use rsync and the rsync pool instead of switching the rest of my machines to git? I've been very impressed with the reliability and speed of sync operations using git; they never take more than a few seconds. When using rsync, it seems like I regularly used to have to spend time trying different mirrors and hard-wiring one in my config file because the one I (or the pool) had chosen had fallen back to using a Bell-212 modem for its internet connection. Sync operations often used to take many minutes and would sometimes just hang. -- Grant
[gentoo-user] Re: Would a Thinkpad X200 be too much trouble too run gentoo on?
On 2022-04-21, Michael Orlitzky wrote: > On Thu, 2022-04-21 at 15:49 +0300, Dex Conner wrote: > >> So I've found a Thinkpad X200 online and I'm thinking of buying it for >> libreboot purposes. Do you think the P8600 cpu can handle all the >> compiling on gentoo? For the record, I don't have any of the "big stuff" >> like KDE, GNOME, Firefox (all I have is Tor Browser [which I don't >> compile], dwl and some terminal programs like neomutt and profanity). >> Surely, I wouldn't be spending 5 hours to do small upgrades, >> right?..right? > > It's getting harder and harder. There's always GCC, which is going to > take you most of the day to build and will probably require -j1 to keep > you from running out of memory. But aside from that, the big ones are > > * dev-lang/rust: pulled in by anything that needs SVG support unless > you unmask an old insecure version of librsvg or can tolerate half- > broken SVG support. This takes over 24h, requires -j1, and gets > worse every day because it bundles all of its (growing list of) > dependencies. Have you tried using dev-lang/rust-bin? I switched all my machines to rust-bin a while back, and never noticed any problem. > > * LLVM: needed by rust, some video cards, and certain picky packages. > This one is at least _legitimately_ large but has annoying point > releases every once in a while that trigger a rebuild for little > benefit. Again, expect ~24h. Yea, building LLVM is brutal, and pretty much unavoidable these days. -- Grant
Re: [gentoo-user] Fully-Defined-Domain-Name for nullmailer
On 4/13/22 6:31 AM, n952162 wrote: Unfortunately, I get a 550 from my network provider for all of these: 1. me 2. localdomain 3. net 4. web.de So, how does thunderbird do it? I don't know what name Thunderbird uses in its HELO / EHLO command(s). Though it shouldn't matter much which name is used. The important thing should be that the SMTP client, be it Thunderbird or nullmailer or something else, should authenticate to the outbound relay / MSA. The MSA should then use that authentication as a control for what is and is not allowed to be relayed. Nominally, the name used has little effect on the SMTP session. However there is more and more sanity checking being applied for server to server SMTP connections. Mostly the sanity checking is around making sure that a sender isn't obviously lying or trying to get around security checks. These attempts usually take the form of pretending to be the destination or another known / easily identifiable lie. Mail servers that send server to server traffic actually SHOULD use proper names that validate. Clients shouldn't need to adhere to as high a standard. I consider nullmailer to be a client in this case. -- Grant. . . . unix || die
Re: [gentoo-user] Two wifi client interfaces and routing
On 3/31/22 10:17 AM, Grant Taylor wrote:
I do know that the DHCP protocol supports adding additional options / definitions / parameters (?term?) to specify ... static routes.

In case others are interested in this, a few pointers about using it.

ISC's DHCP server has two options for advertising routes that clients should install:

subnet ... netmask ... {
    ...
    option cidr-static-route ...;
    option ms-static-route ...;
    ...
}

Both *-static-route options use the same format, and the format took a little bit to wrap my head around. It consists of sets of the <prefix length>, followed by the significant octets of the <destination>, followed by the <router>. E.g.

option cidr-static-route 10, 100, 64, 192, 0, 2, 123, 0, 192, 0, 2, 1;

That says:
- 100.64.0.0/10 is reachable via 192.0.2.123
- 0/0 is reachable via 192.0.2.1

ProTip: Go ahead and add the default gateway 0/0 route to the *-static-route entries as some clients ignore the option routers entry when the *-static-route option is present.

I have multiple macOS, iOS, Windows 10, Linux, and other esoteric things correctly using a route to a lab / sandbox subnet via a system that isn't the LAN's default gateway.

Finally: This seems to be a well defined DHCP standard, but a seemingly not well known option among the various people that I've discussed this with.

--
Grant. . . . unix || die
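The octet encoding above can be mechanized. A small bash sketch (the function name is made up for illustration) that emits the value for one route in the same "prefix length, significant destination octets, router octets" format:

```shell
#!/bin/bash
# Encode one classless static route in the *-static-route octet format:
# <prefix length>, significant octets of <destination>, <router> octets.
encode_route() {
    local dest=$1 plen=$2 router=$3
    local n=$(( (plen + 7) / 8 ))      # significant destination octets
    local -a d r
    IFS=. read -r -a d <<< "$dest"
    IFS=. read -r -a r <<< "$router"
    local out=$plen i o
    for ((i = 0; i < n; i++)); do out+=", ${d[i]}"; done
    for o in "${r[@]}"; do out+=", $o"; done
    echo "$out"
}

encode_route 100.64.0.0 10 192.0.2.123   # -> 10, 100, 64, 192, 0, 2, 123
encode_route 0.0.0.0 0 192.0.2.1         # -> 0, 192, 0, 2, 1
```

Concatenating the two outputs with a comma reproduces the example option value from the message above.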
Re: [gentoo-user] Two wifi client interfaces and routing
On 3/31/22 7:21 AM, William Kenworthy wrote: Hi, Hi, I am trying to use a raspberry pi ... to create a routed link between two access points ... so I can access the monitoring port ... from homeassistant. I'm distilling this down to a Gentoo system participating in two LANs, both of which are connected as DHCP clients. -- Correct me if I've distilled too much. -- And you want other systems on either LAN to use this system as a communications path to systems on the opposing LAN. Both AP's connect ok from the rpi but the routing is wrong - I can ping in both directions from the rpi, but only sometimes from devices further hops away - can openrc even do this? This seems like a classic routing issue. To me, it's not even an OpenRC issue in any way other than how to add static routes /after/ the network is brought up via DHCP. My experimenting so far is hit and miss. Trying to static route or override the default routes doesn't survive a network glitch, and half the time doesn't seem to "take" at all. Ya. At a higher level, it can be non-obvious how to do this, as it's a niche routing configuration. A working example I could adapt would be great! I don't have an example off hand. -- Seeing as I use static IPs on almost all of my machines, I don't even know if OpenRC supports adding a static route /after/ bringing an interface up with DHCP. I do know that the DHCP protocol supports adding additional options / definitions / parameters (?term?) to specify -- what I've been describing as -- static routes. That way DHCP clients will learn about these additional routes and install them in their local routing table. Though I don't know if you will have the necessary control over /both/ DHCP servers that's needed to do this. Presuming that you don't have control over /both/ DHCP servers (as control over /both/ will be needed), I'm going to fall back and suggest what I call the "Customer Interface Router".
Specifically, set up port forwarding on the Pi such that when clients on LAN1 connect to $PORT on the Pi, the traffic is DNATed to the HomeAssistant on LAN2 /and/ the traffic is SNATed to the LAN2 interface on the Pi. Thus every system on each LAN thinks that it's talking to a directly attached system in the same LAN. There is no need for routing in this case. I typically only use the C.I.R. when there are reasons that more proper routing can't be configured. The C.I.R. is an abstraction layer that allows either side to operate almost completely independently of each other, save for IP conflicts between each directly attached LAN. -- Grant. . . . unix || die
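The DNAT + SNAT port forward described above could be sketched with iptables along these lines. All addresses and the port are hypothetical: assume the Pi is 192.168.1.2 on LAN1 and 192.168.2.2 on LAN2, and HomeAssistant is 192.168.2.100:8123 on LAN2:

```shell
# The Pi must forward packets between its interfaces.
sysctl -w net.ipv4.ip_forward=1

# LAN1 clients connect to the Pi itself; rewrite the destination to HA:
iptables -t nat -A PREROUTING -d 192.168.1.2 -p tcp --dport 8123 \
    -j DNAT --to-destination 192.168.2.100:8123

# Rewrite the source to the Pi's LAN2 address so HA replies to the Pi,
# which then un-NATs the reply back to the LAN1 client:
iptables -t nat -A POSTROUTING -d 192.168.2.100 -p tcp --dport 8123 \
    -j SNAT --to-source 192.168.2.2
```

Each side only ever sees a directly attached peer, which is exactly the abstraction the C.I.R. provides.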
[gentoo-user] Routine update wants to install 41 new packages
One of the nagging annoyances with Gentoo is the constant, steady increase in the number of packages installed (even though I'm not changing anything or adding anything new). It's usually just 1 or 2 new packages every now and then, which is tolerable. Then there are routine updates like this morning's which wanted to install 41 new packages (all java libraries). And it had only been a couple days since the previous update. I think the offender was dev-java/avalon-framework suddenly deciding it needed a shit-ton of new libraries for some reason. The only thing that used avalon-framework was dev-java/fop, and I don't actually use FOP anymore. So I removed fop, did a depclean (which removed a few dozen other java packages), and the subsequent update no longer required any new packages. FOP has always seemed a bit bloated, but this was a bit beyond the pale. So I escaped this time. Next time I probably won't be so lucky. -- Grant
[gentoo-user] Re: How to run X11 apps remotely?
On 2022-03-22, Grant Taylor wrote:
> On 3/22/22 10:41 AM, Grant Edwards wrote:
>> How does one run "modern" X11 apps remotely?
>
> Xvnc
>
> As in run an Xvnc server as an X11 server / display. Point your
> programs at that display / server. Then have a VNC client connect to
> said VNC server.

I've used VNC in the past, and always ended up with a virtual desktop/screen rather than having a remote application show up in a window.

>> I do not want a "remote desktop". I just want to run a single
>> application on a remote machine and have its window show up locally.
>
> You can adjust the size of the Xvnc's display so that it's the size of
> just the application in question. You also don't need the full desktop
> to display on that screen.

OK, I've done that, but it's a little awkward to have to constantly adjust the Xvnc display to match the application window size. It appears that Xpra can handle that automatically.

>> X11 transparent network support was its killer feature,
>
> I completely agree. Especially when you start running different
> programs on different systems / users / contexts.
>
>> but for all practical purposes, that feature seems to have been
>> killed.
>
> I don't think that's true.

Of course it depends on which X11 apps you need to run remotely. For everything I've needed to run remotely in the past decade or so, it was unusable. The path to my remote host is also rather ugly. It jumps most of the way across the county and back through at least two NAT firewalls. Though the ping time is actually pretty decent (15-20ms) for the path it has to take.

--
Grant
Re: [gentoo-user] How to run X11 apps remotely?
Some clarifications. On 3/22/22 1:28 PM, Grant Taylor wrote: Xvnc I have looked at NoMachine (a.k.a. NX) in the past. But I've not tried it myself because my work client machine has a VNC client built in and doesn't have an NX client. As in run an Xvnc server as an X11 server / display. Point your programs at that display / server. Then have a VNC client connect to said VNC server. There's another option in the VNC / NX arena, but the name escapes me at the moment. There is also the possibility of RDP and / or ICA (whatever name old Citrix technology is going by these days). If you're into retro computing, PC Anywhere / Timbuktu are options. I run programs like this on the daily. E.g. Lotus Notes 9.x running on an old CentOS 6.x VM (last supported version) displaying on contemporary Gentoo on my workstation. The latency is noticeable if you know what to look for. But the latency is also quite tolerable. To be crystal clear, my Gentoo physical machine SSHs to my CentOS virtual machine with X11 forwarding such that the Notes client shows up on my Gentoo system. It's about as stock X11 as you can get. -- I have contemplated messing with xhost / xauth (cookies) to avoid the encryption / decryption overhead. But I found that I still needed remote command execution to set the DISPLAY and launch the Notes client. SSH makes this latter part trivial while also providing the former part. This is across a switched 1 Gbps LAN in the same subnet. This works well enough that I'm considering evaluating running more programs on discrete systems / VMs / containers with X11 networking. -- Grant. . . . unix || die
Re: [gentoo-user] How to run X11 apps remotely?
On 3/22/22 10:41 AM, Grant Edwards wrote: How does one run "modern" X11 apps remotely? Xvnc As in run an Xvnc server as an X11 server / display. Point your programs at that display / server. Then have a VNC client connect to said VNC server. Using ssh -X or ssh -Y works fine for older applications, but not for things that use "modern" toolkits. Modern toolkit designers appear to have adopted a life mission to maximize the number of client-server round-trips required for even a trivial event like a keystroke in a text box. Yes. The back and forth between the X11 client (program) and server (display) is quite chatty and latency sensitive. The thing about running the Xvnc server on the same system as the X11 clients is that the latency the X11 protocol sees between the two is effectively as small as possible. Then VNC's Remote Frame Buffer (RFB) protocol is more forgiving with latency between the VNC server and the VNC client. As a result, even with a 5-10Mbps remote connection, it takes several minutes to enter a string of even a few characters. A mouseclick on a button can take a minute or two to get processed. Resizing a window pretty much means it's time for a cuppa. Been there. Done that. Opening chrome and loading a web page can take 10-15 minutes. No activity at all on the screen, but the network connection to the remote machine is saturated at 5Mbps for minutes at a time. WTF? You also want to minimize spurious / superfluous updates that aren't actually /needed/. E.g. things fading in / out / animations. I do not want a "remote desktop". I just want to run a single application on a remote machine and have its window show up locally. You can adjust the size of the Xvnc's display so that it's the size of just the application in question. You also don't need the full desktop to display on that screen. Back in the day, I used to run X11 apps remotely through dial-up connections, and most of them were a little sluggish but still actually usable...
The X11 protocol has changed a lot over the years. Older versions of X11 are less chatty than newer versions of X11. Reducing color depth also helps reduce the amount of data that needs to be exchanged. X11 transparent network support was its killer feature, I completely agree. Especially when you start running different programs on different systems / users / contexts. but for all practical purposes, that feature seems to have been killed. I don't think that's true. I run programs like this on the daily. E.g. Lotus Notes 9.x running on an old CentOS 6.x VM (last supported version) displaying on contemporary Gentoo on my workstation. The latency is noticeable if you know what to look for. But the latency is also quite tolerable. I find web browsing to be considerably slower than my Notes client which I use interactively on the daily, if not hourly. -- Grant. . . . unix || die
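The Xvnc arrangement described above can be sketched as a sequence of commands. All display numbers, flags, and host names here are examples, and the exact Xvnc options vary between VNC implementations (the flags below are in TigerVNC's style):

```shell
# On the remote machine: a headless X server for the X11 client to use.
Xvnc :2 -geometry 1200x800 -SecurityTypes None &

# The X11 client talks to Xvnc over the loopback, so the chatty X11
# round-trips see near-zero latency.
DISPLAY=:2 xterm &

# On the machine you're sitting at: only the latency-tolerant RFB
# protocol crosses the slow link.
vncviewer remote-host:2
```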
[gentoo-user] Re: How to run X11 apps remotely?
On 2022-03-22, Grant Edwards wrote:
> How does one run "modern" X11 apps remotely?
> [...]
> I do not want a "remote desktop". I just want to run a single
> application on a remote machine and have its window show up locally.

It looks like xpra will do what I want:

https://packages.gentoo.org/packages/x11-wm/xpra

It's interesting that it's classified as a window manager.

From https://xpra.org/: It gives you remote access to individual applications or full desktops. Xpra is usable over reasonably slow links and does its best to adapt to changing network bandwidth constraints.

haven't tried it yet...
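For what it's worth, a usage sketch based on xpra's documented CLI; the user, host, and application are placeholders:

```shell
# Start an xpra session on the remote host running one application,
# and attach to it so its window appears locally:
xpra start ssh://user@remote-host/ --start=xterm

# Detach (close the viewer), then later re-attach to the same session:
xpra attach ssh://user@remote-host/
```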
[gentoo-user] Re: How to run X11 apps remotely?
On 2022-03-22, Laurence Perkins wrote:
>>Even something "lightweight" like atril is so slow it's barely usable.
>>
>>I do not want a "remote desktop". I just want to run a single
>>application on a remote machine and have its window show up locally.
>>
>>Back in the day, I used to run X11 apps remotely through dial-up
>>connections, and most of them were a little sluggish but still
>>actually usable...
>>
>>X11 transparent network support was its killer feature, but for all
>>practical purposes, that feature seems to have been killed.
>
> As you mentioned, it's a lot of extra round-trips. Which means that
> it's not primarily your bandwidth that's the limiting factor, it's
> the latency.
>
> Unfortunately, the speed of light being what it is, there are
> practical limits to what you can do about latency depending on how
> far apart the systems in question are.

Where "far" is measured more in hops than miles. :) Even with cut-through routing, each hop can be expensive. Add a couple firewalls with stateful packet inspection, and latency from my house to the house next door isn't great.

> But, check for and mitigate any bufferbloat issues you may have,
> that will spike your latency quite a bit.
>
> The key back in the day was that people used X11 primitives
> directly. But the X11 primitives are ugly, and there weren't any
> tools for making them pretty.

Yea, I remember. I wrote a couple xlib apps way back when and it was painful. Even the old Xt toolkit wasn't fun. I do appreciate how easy it is to slap together something in Python and Gtk, I just wish it worked remotely after it was done. :)

> So rather than add those mechanisms all the toolkit authors just did
> their own thing and now everything is just bitmaps and practically
> no processing can be done locally.
>
> Some programs like gVim will detect that they're running over SSH
> and fall back to basic X11 for the speed factor. Not sure what
> browsers might do that.
Things like Xemacs are still usable, but if I'm doing emacs, I usually just run it directly in an ssh "terminal".
[gentoo-user] How to run X11 apps remotely?
How does one run "modern" X11 apps remotely? Using ssh -X or ssh -Y works fine for older applications, but not for things that use "modern" toolkits. Modern toolkit designers appear to have adopted a life mission to maximize the number of client-server round-trips required for even a trivial event like a keystroke in a text box. As a result, even with a 5-10Mbps remote connection, it takes several minutes to enter a string of even a few characters. A mouseclick on a button can take a minute or two to get processed. Resizing a window pretty much means it's time for a cuppa. Opening chrome and loading a web page can take 10-15 minutes. No activity at all on the screen, but the network connection to the remote machine is saturated at 5Mbps for minutes at a time. WTF? Something like LibreOffice is completely unusable. Even something "lightweight" like atril is so slow it's barely usable. I do not want a "remote desktop". I just want to run a single application on a remote machine and have its window show up locally. Back in the day, I used to run X11 apps remotely through dial-up connections, and most of them were a little sluggish but still actually usable... X11 transparent network support was its killer feature, but for all practical purposes, that feature seems to have been killed. -- Grant
Re: [gentoo-user] gentoo for a virtual server in the cloud?
On 3/18/22 1:03 PM, n952162 wrote: I rent a low-cost virtual server in the cloud. The platform offers me some choices in linux distributions, but I'm wondering if I can compile gentoo to run on it. Anybody have experience doing this? I've got a Gentoo image running in Linode without any problem. I'm fairly certain that they offer Gentoo as an option when creating the VPS. It's been too long and I've messed with too many things since then. -- Grant. . . . unix || die
[gentoo-user] Re: depclean wants to remove xf86-video-intel
On 2022-03-15, Grant Edwards wrote:
>> I bit the bullet, let it depclean and rebooted.
>
> I'll give that a go the next time I'm in the office (which is where
> the machine in question lives).

It _almost_ "just worked". The names of the displays changed, so I had to modify my xinit/openbox startup file accordingly to get things arranged correctly.
[gentoo-user] Re: depclean wants to remove xf86-video-intel
On 2022-03-15, Neil Bothwick wrote: > If X doesn't come up, simply re-emerge xf86-video-intel. That won't take > long because you will obviously have quickpkg'd it before depcleaning... You would think so. And you would think that would fix it. -- Grant
[gentoo-user] Re: depclean wants to remove xf86-video-intel
On 2022-03-14, Neil Bothwick wrote:
> On Mon, 14 Mar 2022 17:07:54 - (UTC), Grant Edwards wrote:
>
>> I was a bit startled this morning when emerge --depclean wanted to
>> remove xf86-video-intel. I presume this is a result of the switch to
>> the "built in" modesetting driver? And there are no corresponding Xorg
>> config changes that need to be made?
>
> I bit the bullet, let it depclean and rebooted.

I'll give that a go the next time I'm in the office (which is where the machine in question lives). I've got to remember to drag a laptop along with me so that if X doesn't come up I can still Google for help. From what I've seen on both the Gentoo and Arch wikis I believe it should "just work". But I know that having no other web access besides my phone will make it fail...

--
Grant
[gentoo-user] depclean wants to remove xf86-video-intel
I was a bit startled this morning when emerge --depclean wanted to remove xf86-video-intel. I presume this is a result of the switch to the "built in" modesetting driver? And there are no corresponding Xorg config changes that need to be made? My video chipset is 00:02.0 VGA compatible controller: Intel Corporation IvyBridge GT2 [HD Graphics 4000] (rev 09) And the only card-selection configuration I've done was to set VIDEO_CARDS="intel" in make.conf. -- Grant
[gentoo-user] Re: sys-devel/llvm and LLVM_TARGETS
On 2022-03-12, Nikos Chantziaras wrote: > On 12/03/2022 18:03, Grant Edwards wrote: >> On 2022-03-12, Nikos Chantziaras wrote: >>> On 12/03/2022 10:43, Dale wrote: >>>> https://bugs.gentoo.org/767700 >>>> >>>> Is that the one? It mentions the target but I don't quite understand >>>> the why. The biggest thing, will this break something if I let it do >>>> it? >>> >>> No. Unlike GCC, LLVM/Clang is always a cross-compiler. >> >> You can't use LLVM/Clang to compile for the host on which it's >> running? > > Why not? Because "LLVM/Clang is always a cross compiler". A cross compiler is a compiler that compiles for a target architecture/OS different than that of the host on which it is running. Therefore, LLVM/Clang always compiles for a target architecture/OS different than that of the host on which it is running. -- Grant
[gentoo-user] Re: sys-devel/llvm and LLVM_TARGETS
On 2022-03-12, Nikos Chantziaras wrote: > On 12/03/2022 10:43, Dale wrote: >> https://bugs.gentoo.org/767700 >> >> Is that the one? It mentions the target but I don't quite understand >> the why. The biggest thing, will this break something if I let it do >> it? > > No. Unlike GCC, LLVM/Clang is always a cross-compiler. You can't use LLVM/Clang to compile for the host on which it's running? -- Grant
Re: [gentoo-user] Re: Root can't write to files owned by others?
On 3/9/22 11:50 PM, Nikos Chantziaras wrote: This is normal, at least when using systemd. How is this a /systemd/ thing? Is it because systemd is enabling a /kernel/ thing that probably is otherwise un(der)used? I ask as someone who disliked systemd as many others do. But I fail to see how this is systemd's fault. To disable this behavior, you have to set: sysctl fs.protected_regular=0 But you should know what this means when it comes to security. See: https://www.spinics.net/lists/fedora-devel/msg252452.html I read that message, but no messages linked therefrom, and don't see any security gotchas about disabling (setting to 0) fs.protected_* I see some value in a tunable to protect against writing to files of different type in the guise of protecting against writing somewhere that you probably want to not write. Sort of like shell redirection ">" protection for clobbering existing files where you likely meant to append ">>" to them. But I am ignorant as to how this is a /systemd/ thing. -- Grant. . . . unix || die
Re: [gentoo-user] strange errors in http log, what can/should I do about it.
On 2/28/22 5:04 AM, Adam Carter wrote: If you put that url in a browser does it show your passwd file? I assume because the logs say 200 it will. If so shut down the httpd and reset all the passwords Note the question mark after the leading slash. As such, the path traversal component is for a query parameter, named f / file / filename / id. There is a reasonable chance that the web server returned the index / default page for the document root and that the query parameter didn't actually change anything. With this in mind, it would be normal to return a 200 status code for the index / default page for the document root. Check your httpd config… seems odd that an old attack like this would still work. If this did return the actual contents of /etc/passwd then there is quite likely a different problem in that the index / default page is accepting query parameters as paths, independent of the HTTP daemon. Aside: +1 to everything that Stefan S. said. -- Grant. . . . unix || die
[gentoo-user] Re: How to copy gzip data from bytestream?
On 2022-02-22, Felix Kuperjans wrote:
> you could use gzip to tell you the compressed size of the file and then
> use another method to copy just those bytes (dd for example):
>
> gzip -clt
>
> Should print the compressed size in bytes, although by reading through
> the entire stream once.

That doesn't work. It shows the size of the drive as the "uncompressed" size and 0 as compressed:

# gzip -clt foo
$ ls -l foo
-rw-r--r-- 1 grante users 12923 Feb 22 07:51 foo
$ gzip foo
$ ls -l foo.gz
-rw-r--r-- 1 grante users 6083 Feb 22 07:51 foo.gz
$ gzip -clt > foo.gz
$ gzip -clt
[gentoo-user] Re: How to copy gzip data from bytestream?
On 2022-02-22, Rich Freeman wrote: > On Mon, Feb 21, 2022 at 8:29 PM Grant Edwards > wrote: >> >> But I was trying to figure out a way to do it without uncompressing >> and recompressing the data. I had hoped that the gzip header would >> contain a "length" field (so I would know how many bytes to copy using >> dd), but it does not. Apparently, the only way to find the end of the >> compressed data is to parse it using the proper algorithm (deflate, in >> this case). > > I'm guessing that the reason it lacks such a header, is precisely so > that you can use it in a stream in just this manner. In order to > have a length in the header it would need to be able to seek back to > the start of the file to modify the header, which isn't always > possible. Indeed. It's clearly designed to be used on non-seekable media/devices like pipes and tapes. I should have realized that would be the case and would preclude a length field in the header. > I wouldn't be surprised if it stores some kind of metadata at the end > of the file, but of course you can only find that if the end of the > file is marked in some way. The gzip file format has a length and CRC field in a trailer at the end (after the compressed data). But, the only way to locate the end is to parse the data using the appropriate decompression algorithm. The header allows for multiple algorithms, but only one (deflate) is actually defined. > If you google the details of the gzip file format I did -- link is below. > you might be able to figure out how to identify the end of the file, > scan the image to find this marker, I'm pretty sure the only way to find the end of the file is to parse the compressed data payload itself. There isn't a marker. > and then use dd to extract just the desired range. Unless the file > is VERY large I suspect that is going to take you longer than just > recompressing it all. Definitely. It's purely an academic question at this point. 
> I can't imagine that there is any way around sequentially reading > the entire file to find the end, I believe you're right. > unless you have some mechanism that can read a random block and > determine if it is valid gzip data and if so you can do a binary > search assuming the data on the drive past the end of the file isn't > valid gzip. I don't think that determining if something is valid deflate data is easy (and may be impossible in the general case). I implemented the deflate algorithm from scratch once a few years ago, and vaguely recall that you can usually deflate almost anything. It turns out that the flash drive I used was pretty new, and almost all 0x00 bytes. Once I knew where to look it was pretty obvious where the gzip data ended. I've copied it the easy way (zcat | gzip -c), and verified that the copy matches byte-for-byte except for the MTIME field in the gzip header. It appears that gzipping stdin produces an empty MTIME field. No surprise there. gzip file format: https://datatracker.ietf.org/doc/html/rfc1952
[gentoo-user] How to copy gzip data from bytestream?
I've got a "raw" USB flash drive containing a large chunk of gzipped data. By "raw" I mean no partition table, no filesystem. Think of it as a tape (if you're old enough). gzip -tv is quite happy to validate the data and says it's OK, though it says it ignored extra bytes after the end of the "file". The flash drive size is 128GB, but the gzipped data is only maybe 20-30GB. Question: is there a simple way to copy just the 'gzip' data from the drive without copying the extra bytes after the end of the 'gzip' data? The only thing I can think of is: $ zcat /dev/sdX | gzip -c > data.gz But I was trying to figure out a way to do it without uncompressing and recompressing the data. I had hoped that the gzip header would contain a "length" field (so I would know how many bytes to copy using dd), but it does not. Apparently, the only way to find the end of the compressed data is to parse it using the proper algorithm (deflate, in this case). -- Grant
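The zcat | gzip approach from the question can be sketched end-to-end against a fake "raw device" built from an ordinary file (all filenames here are examples):

```shell
# Build a fake "raw device": a gzip stream followed by padding bytes.
printf 'hello gzip stream\n' | gzip > payload.gz
{ cat payload.gz; head -c 4096 /dev/zero; } > rawdev.img

# Recover the payload by decompressing and recompressing; gzip stops at
# the end of the gzip member and only warns (nonzero exit) about the
# trailing garbage, which is why the zcat exit status is ignored here.
( zcat rawdev.img 2>/dev/null || true ) | gzip > recovered.gz

zcat recovered.gz   # -> hello gzip stream
```

As noted later in the thread, the recompressed copy can differ from the original in header fields like MTIME, but the decompressed contents round-trip exactly.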
[gentoo-user] Re: [OT] mounting screws
On 2022-02-21, Peter Humphrey wrote: > Countersunk - that's the operative word here, so I ended up googling for "M3 > x > 5 countersunk", taking a guess at the M3, and found a specialist supplier. Next time, you might want to search for "flathead" instead of "countersunk". I think the former is the more common name for what you're looking for. There are several different types of countersunk heads, and flathead is the one you want. -- Grant
Re: [gentoo-user] [OT] mounting screws
On 2/20/22 10:24 AM, Peter Humphrey wrote: Hello list, Hi, I have a couple of vertically mounted easy-swap disk caddies in the back of my workstation, and I'm having trouble finding screws to mount the disk in the caddy. Clearance is nil, so the screws must be countersunk so they aren't proud of the surface. They seem to be m3 perhaps 5mm long. I just can't find any via Google. Can anyone in UK please help out? I consider screw / bolt / etc. acquisition to start with three primary things: 1) Thread identification; diameter and pitch of threads 2) Length identification 3) Head identification; what sort of screw head do you need? Once you have this information, chances are quite good that you will be able to find a screw / bolt (that you can modify) to fit your needs. I would expect the thread to be well known / documented for the type of hard drive you're using. Maybe even some of the length information related to how much goes into the drive body. You might be able to find out some head information from documents from the manufacturer of the drive sled, maybe some length information too. I have a micrometer that I use for measuring some things like this. It's an inexpensive plastic one from a local hardware store, but it gets the job done. (I'm only going to one decimal place on mm measurements.) -- Grant. . . . unix || die
[gentoo-user] Re: How to invoke non-selected versions of 'java'?
On 2022-02-04, Arve Barsnes wrote: > On Fri, 4 Feb 2022 at 22:49, Grant Edwards wrote: >> >> I've got two "slots" of java currently installed (8 and 11). >> [...] >> How does one manually invoke non-selected version(s) of java? >> [...] > > I don't think there is any convenient out of the box link like for > python or gcc, That was what I concluded, but I was a bit surprised. > but you could make equivalent links if you want. Otherwise you > should use the paths in your commands. On this box I have: > > /usr/lib64/openjdk-8/bin/java > /usr/lib64/openjdk-11/bin/java Yep. I've currently got '-bin' versions installed so here it's: $ find /opt/{icedtea*,openjdk*} -type f -executable -name 'java' /opt/icedtea-bin-3.16.0/jre/bin/java /opt/icedtea-bin-3.16.0/bin/java /opt/openjdk-bin-11.0.14_p9/bin/java
[gentoo-user] How to invoke non-selected versions of 'java'?
I've got two "slots" of java currently installed (8 and 11). I see how one uses "eselect java" to control which one is invoked by /usr/bin/java. How does one manually invoke non-selected version(s) of java? For other slotted things like gcc and python, you can use pythonX.Y or gcc-X.Y.Z to invoke the non-selected version. What's the equivalent for java? -- Grant
[gentoo-user] Re: Why is "mtp-probe" running when I plug in a USB device?
On 2022-01-21, Grant Edwards wrote: > [...] > > This appears to be triggered by a rule in > >/lib/udev/rules.d/69-libmtp.rules > > which is owned by media-libs/libmtp > > Why does that library think it should be probing every USB device I > [...] Oh, and tell those damn kids to GET OFF MY LAWN! -- Grant
[gentoo-user] Why is "mtp-probe" running when I plug in a USB device?
I've noticed that whenever I plug in any sort of USB device, "mtp-probe" runs and logs the fact that the newly attached thing "was not an MTP device". This appears to be triggered by a rule in /lib/udev/rules.d/69-libmtp.rules which is owned by media-libs/libmtp Why does that library think it should be probing every USB device I plug in? Is that automatic probing required for libmtp and mtpfs to work? I do _not_ want anything to happen "automagically" when I plug in a USB mtp device. I know if a device is an MTP device, and if I want it mounted, I'll mount it manually. -- Grant
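If you want to stop the probing entirely, udev lets you mask a packaged rule file by shadowing it with an empty file of the same name in /etc/udev/rules.d, which takes precedence over /lib/udev/rules.d. A sketch; RULES_DIR is parameterized here so it can be dry-run, but on a real system it would be /etc/udev/rules.d and the commands would need root:

```shell
# Shadow the packaged libmtp rule with an empty override of the same name.
# Entries in /etc/udev/rules.d take precedence over /lib/udev/rules.d, and
# an empty file disables the packaged rule outright.
RULES_DIR="${RULES_DIR:-$(mktemp -d)}"   # real target: /etc/udev/rules.d
: > "$RULES_DIR/69-libmtp.rules"         # create the empty mask file
# udevadm control --reload               # then reload the rules (needs root)
```

This survives package updates, since the package only ever writes to /lib/udev/rules.d.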
Re: [gentoo-user] Reverse Proxy with Apache2
On 1/18/22 1:26 PM, Raphael Mejias Dias wrote: Hello, Hi, I've modified my config file a little: Okay. ProxyPass "zmz" "http://raphaxx.intranet:8280/zm/" ProxyPassReverse "zmz" "http://raphaxx.intranet:8280/zm/" I would expect the first parameter to be anchored / fully qualified from within the site's URL. E.g. ProxyPass "/zmz" "http://raphaxx.intranet:8280/zm/" ProxyPassReverse "/zmz" "http://raphaxx.intranet:8280/zm/" My expectation would be for this to proxy any requests to the "/zmz" path (sub-directory?) to the "/zm/" path on an HTTP server on port 8280 of raphaxx.intranet. Aside: Make sure that "raphaxx.intranet" resolves where you want it to. Be mindful of IPv4 vs IPv6. My ssl is ok, the ssl redirect is on default.conf Okay. But this ProxyReverse, I've been trying in many ways, another file, and so on, but nothing works. I have the following in a config file for a service that I disabled a few months ago. ProxyPass "/" "http://127.0.0.1:8080/" ProxyPassReverse "/" "http://127.0.0.1:8080/" This was in use in a Named Virtual Host that reverse proxied everything to port 8080 listening on localhost (127.0.0.1). Aside: Port 8080 on localhost (127.0.0.1) was actually an SSH remote port forward to a web server running on the remote client machine. You will want to adjust the source path ("/") and the destination ("http://127.0.0.1:8080/") as you need. But this is copied verbatim from a site that I disabled recently. (Disabling is typical Ubuntu / Debian style: remove the sym-link so that the config is not in the sites-enabled directory. No changes to the actual config file.) About the VirtualHost for the 8280, I'm guessing it was not necessary, because the 8280 is the VM and the VM has its own apache2. ACK I have a nat rule to redirect 192.168.0.15:8280 to my VM server 192.168.2.100:80 on my root server 192.168.0.15. Okay. That could be a complicating factor. You say "NAT rule". I'm taking that to mean a Destination NAT (DNAT) rule for port forwarding. 
The important bit is that it doesn't alter the source IP (SNAT). So you could potentially be running into a TCP triangle scenario. Unless you have a specific reason to use the NAT rule, I would strongly suggest altering the ProxyPass(Reverse) rules to use the proper target. ProxyPass "/zmz" "http://192.168.2.100:80/zm/" ProxyPassReverse "/zmz" "http://192.168.2.100:80/zm/" Just avoid the potential for a TCP triangle altogether. Considering the potential complexity, please share what sort of errors / failures you are seeing. Given the remote nature of the real server (from the point of view of the Apache HTTPD instance), please provide the output of a TCP dump for tests. Let's make sure that all the bases are covered. About Caddy, I do not want to install another server and deal with another config. I can fully understand and appreciate that. Thanks! You're welcome. -- Grant. . . . unix || die
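The sort of capture that would help with the "TCP dump for tests" suggestion above might look like this. This is a hedged sketch: the host/port filter matches the VM target from the thread, and the interface choice is an assumption to adjust; it needs root on the Apache host:

```shell
# Watch the proxied traffic between the Apache host and the VM backend.
# -n: no name resolution, -A: print packet payload (handy for HTTP headers)
tcpdump -i any -n -A 'host 192.168.2.100 and port 80'
```

Running that while reproducing a failed request should show whether the backend ever answers, and with what.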
Re: [gentoo-user] Reverse Proxy with Apache2
On 1/18/22 1:30 PM, Anatoly Laskaris wrote: Age might mean a lot when we are talking about software. Modern software usually is easier to configure, has sane defaults, is more secure, and has integration with other modern software. I'll concede that those points are /possibilities/. But they are not guaranteed. And is much more popular in the community meaning better support. I do not agree that something being more common means, much less implies, better support. There are an awful lot of bad recommendations all over the Internet. I was not talking about adding software, I was talking about replacing software. But you are. Replacing something inherently implies adding and / or configuring something new in place of something old. Time saved in managing complex software that does a simple task can be applied elsewhere. Sometimes yes, sometimes no. In regards to "already having a software" most modern applications don't require "having" them. It works out of the box, usually with one command, and you can switch parts of your infrastructure without pain thanks to containers (or statically linked binaries in golang and rust) without downtime (if done right). "if done right" is so much the /operative/ /phrase/ of that statement that it's not even remotely funny. Dynamic ports with service discovery == no port conflicts. There's no dynamic ports / service discovery in what the OP asked about. The OP asked how to configure a feature (reverse proxy) of the software that they are already using (Apache HTTPD) for a part of a URL (https://192.168.0.15:443/zv) for a service that's currently listening on a given IP and port pair (https://192.168.0.15:443/). So please elaborate on the right way to replace (as in add new and remove old) the existing software /or/ split the IP & port (192.168.0.15 TCP port 443) across multiple daemons. I would very much be interested in learning how to do this the right way. 
I can think of many ways to do this, but all of them require something intercepting the port & IP pair at some point upstream. Not that old as apache. I take your statement to be that the Apache HTTPD developers and administrators have more experience than Nginx / caddy / traefik developers and administrators by the simple fact that it has existed longer. What /new/ thing are you using to communicate with caddy / traefik if you don't use the old crufty IPv4 / IPv6? Nginx is still widely used (contrast to apache), The first four reports I found when searching for web server popularity show that Apache and Nginx are the two most popular servers. Which one is number one depends on the report. Link - Global Web Server Market Share January 2022 - https://hostadvice.com/marketshare/server/ Link - Web and Application Servers Software Market Share - https://www.datanyze.com/market-share/web-and-application-servers--425 Link - Usage statistics of web servers - https://w3techs.com/technologies/overview/web_server Link - January 2022 Web Server Survey - https://news.netcraft.com/archives/category/web-server-survey/ My opinion is that being the first, or the close second, is a good indication that Apache is still widely used. but is being replaced by caddy/traefik. Apache is ancient and I've never seen it running in production. If you've never seen the first or second most popular web server running in production, I can only question where you are looking. I know multiple people that have run the Apache HTTP Server (both by Apache and rebranded by IBM / Oracle) web server in production on multiple platforms for each and every year for the last two decades. I've personally run Apache in production for that entire time. -- Grant. . . . unix || die
Re: [gentoo-user] TLD for home LAN?
On 1/18/22 1:50 PM, Rich Freeman wrote: No, I'm talking about the opposite situation. I'm talking about you have foo.local resolvable via mDNS, but not DNS - then there is a chance you won't be able to access the host. It's the same problem just opposite directions. The solution is to use something to unify the .local name in the mDNS and uDNS name spaces. This can be done via a gateway that speaks both protocols. E.g. listens for mDNS queries as well as being an authoritative uDNS server for the .local domain / TLD. It's not /simple/ but nor is it /impossible/. -- Grant. . . . unix || die
Re: [gentoo-user] Reverse Proxy with Apache2
On 1/18/22 11:24 AM, Anatoly Laskaris wrote: I'm sorry for not answering to the question directly, but why use apache2? - Because Apache is already installed and listening on the port in question. - Because that's what the OP asked about. - Because it might be IBM / Oracle HTTP Server which are re-rolls of Apache HTTP Server. - $REASONS There are modern alternatives ... Age of something doesn't mean a lot. - TCP/IP is from the 80s and yet we are still using it. - OSI is newer than IPv4. - IPv6 is newer than IPv4 and OSI. Yet we are still talking about the venerable IPv4. And something completely different like Traefik (https://doc.traefik.io/traefik/getting-started/quick-start/) which is geared towards modern cloud native infrastructure with containers and workload orchestrators like Nomad or Kubernetes. Usually you don't configure Traefik with static config file, but with metadata and annotations in K8S and Consul so it is dynamic and reactive. I view adding /additional/ software / daemons as poor form, especially when the /existing/ software can do the task at hand. Don't overlook the port conflict. Or you can use nginx (which is already considered pretty old and clunky, but it is much easier than apache still). Why start the email asking why something old is used and then finish the email suggesting the possibility of using something else old? -- Grant. . . . unix || die
Re: [gentoo-user] Reverse Proxy with Apache2
On 1/18/22 9:57 AM, Raphael Mejias Dias wrote: Hello, Hi, I'm trying to set up a reverse proxy on my apache2 server to serve another apache2 server running on a vm, basically my root apache2 is at 192.168.0.15 and my second apache2 is at 192.168.0.15:8280. My idea is to have 192.168.0.15/zm as 192.168.0.15:8280. If I understand you correctly, you want to take a sub-directory / path from a site on one port (80) and reverse proxy it to the root of another site on a different port (8280) on the same host. Am I understanding you correctly? The question is, how to do it? I need to finish my $CAFFEINE before I formulate a complete answer. But I'm sharing an incomplete answer to hopefully get you down the road sooner. I've looked up some guides, but it is difficult to setup. Like most things Apache, it's mostly difficult the first (few) time(s) you do it. Once you've done it, it's not as bad. My config: I'm redacting the things that I think aren't germane to the question at hand. ServerName 192.168.0.15 DocumentRoot /var/www/html ServerName 192.168.0.15/zm ProxyPass /zm http://192.168.0.15:8280/zm ProxyPassReverse /zm http://192.168.0.15:8280/zm Does it look any good? I question the use of "_default_" and "*", both of which on port 443. My fear is that there is a large potential for confusion ~> conflict between these two named virtual hosts. I'm also not seeing the config for the instance listening on port 8280. If the second named virtual host was put in place specifically in support of the reverse proxy, then I think you want to refactor it as a ... under the original named virtual host. The other thing that I'm not seeing is the ... configuration that I would expect to see. E.g. Order deny,allow Deny from all Allow from 192.0.2.0/24 Allow from 198.51.100.0/24 Allow from 203.0.113.0/24 Beyond that, I need to finish my $CAFFEINE, have some clarification from you, and look at specific failures. 
N.B.: The access and error log files are going to be your friend when configuring this (or really anything Apache httpd related) as they will let you know when your configuration is correct but things like permission (Allow from) are the problem. Also apache(2)ctl configtest is your friend. Thanks. You're welcome. -- Grant. . . . unix || die
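A minimal sketch of the single-virtual-host refactor described above, using the address, paths, and port from the thread. The <Location> wrapper and the example Allow subnet are assumptions to adjust, and the access-control directives follow the Apache 2.2 Order/Deny/Allow style used in this message:

```apache
<VirtualHost *:80>
    ServerName   192.168.0.15
    DocumentRoot /var/www/html

    <Location "/zm">
        # reverse proxy this path to the VM's Apache on port 8280
        ProxyPass        "http://192.168.0.15:8280/zm"
        ProxyPassReverse "http://192.168.0.15:8280/zm"
        # restrict who may reach the proxied application (2.2-style syntax)
        Order deny,allow
        Deny from all
        Allow from 192.168.0.0/24
    </Location>
</VirtualHost>
```

Running `apache2ctl configtest` before restarting will catch syntax slips in a sketch like this.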
Re: [gentoo-user] Kernel config thingy, "make menuconfig"
On 1/15/22 7:47 AM, tastytea wrote: Did you know you can search with / and then jump to the results with the number keys? I've been using the search for decades*. But I didn't know about the number keys to jump until reading this message and trying it. #TIL *Yes, I've been using Linux for more than two decades. It's been my primary desktop for almost all of that time too. -- Grant. . . . unix || die
Re: [gentoo-user] TLD for home LAN?
On 1/15/22 3:33 AM, Peter Humphrey wrote: Hello list, Hi. Rich F said recently, "I'd avoid using the .local TLD due to RFC 6762." Ya I've read RFC 6762 in the past and I just skimmed part of it again. I didn't find anything that prohibited the use of the local top level domain for things other than mDNS et al. The only hard requirement that I did see is that if mDNS is used, that queries for .local /MUST/ be sent to mDNS. N.B. that does not preclude /also/ sending queries for .local to other name resolution systems like traditional unicast DNS. Ergo, RFC 6762 does not preclude the use of the local top level domain in traditional unicast DNS. That brings me back to a thorny problem: what should I call my local network? Maybe it's just me, I'm weird like that, but I vehemently believe that *I* am the authority for the names of *MY* network(s). As such, whatever name /I/ choose is the name that /my/ network(s) will use. I don't care that a cable internet provider wants my router to be called .. What's more is that I don't fathom, much less allow, the cable company's -- let's go with -- questionable naming to have any influence on what my internal network is called. It used to be .prhnet, but then a program I tried a few years ago insisted on a two-component name, so I changed it to .prhnet.local. There are /some/ complications that may have some influence on what names are chosen. But I point out that your network quite likely did exactly what you wanted up until that point. Q: Did you continue to use the software that you tried? Or did you end up renaming your network for something that you are no longer using? }:-) Now I've read that RFC - well, Appendix G to it - and I'm scratching my head. I note the distinct absence of the quintessential SHOULD or MUST that RFCs are notorious for in RFC 6762 Appendix G. So ... I don't give the recommendation therein much credence. 
What's more is that RFC 6762 Appendix G fails to take into account gateways that bridge mDNS into Unicast DNS. E.g. they receive an mDNS query and gateway it to the configured uDNS. Thereby (mostly seamlessly) tying the mDNS and uDNS name space together. I really feel like RFC 6762 is a "you might want to consider not using the .local top level domain on the off chance that you ever have something that can't / won't work with it." I suppose it's possible that someone may want to connect an Apple device to my network, so perhaps I should clear the way for that eventuality. Is that possibility significant enough to influence how /you/ run /your/ network? /me puts a hand up to block the glare, looking out over the horizon for the SHOULD and MUST statements again, still not finding them. I can tell you that I have first hand experience with using Apple devices on a network that used the local top level domain without problems. So, what TLD should I use? Should I use .home, or just go back to .prhnet? It isn't going to be visible to the Big Bad World, so does it even matter? Use whatever TLD you want to use. Be aware of any potential gotchas and decide if they are worth avoiding or not. The old fable of "The Miller, his son, and the donkey" comes to mind. -- Make yourself happy. -- Grant. . . . unix || die
Re: [gentoo-user] BIND Configuration for DNS
On 1/14/22 8:45 AM, Raphael Mejias Dias wrote: Hello, Hi, I'm trying to configure BIND for a local DNS server, but I'm not sure that it's ok. Based on your other comments, it seems as if there is more of a question about overall DNS configuration and operation than about the BIND DNS server (named) itself. Basically, I'm wanting to create an internal address like intranet.local, Okay. this way, I can change the internal IP address, without the obligation to reconfigure the client machines to lookup the new IP, only changing the DNS lookup table. It sounds like you might be referring to updating DNS vs updating the hosts file. First, I had followed the Gentoo Wiki and after I tried BIND official documentation. ACK I've realized the network PC's did not find the DNS address, only the localhost can find it, I'm assuming that means the server running BIND (named). when I force the DNS, the client PC cannot access the internet anymore. I'm assuming that means that BIND (named) is working and doing what you want with regard to the local / internal domain name. With these assumptions, it seems to me like BIND (named) is working and that it is likely not configured to allow clients to perform recursive queries. Assuming this is the case, you need to change the allow-recursion parameter to allow the LAN clients to perform recursive queries. This is predicated on the system BIND (named) is running on being able to access the internet to query external resources on behalf of the LAN clients. If someone knows a guide to help, I'll be glad to know. Please reply if any of my assumptions are wrong or if you have other questions. Thanks. You're welcome. -- Grant. . . . unix || die
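A hedged sketch of what the relevant named.conf change might look like. The ACL name and subnets are placeholders for whatever the LAN actually uses, and this is only the recursion-related fragment, not a complete config:

```
// Hypothetical named.conf fragment: let LAN clients use this server as a
// recursive resolver while it also serves the internal zone.
acl "lan" { 127.0.0.0/8; 192.168.0.0/24; };   // adjust to your subnet(s)

options {
    recursion yes;
    allow-recursion { lan; };   // who may ask us to recurse
    allow-query     { lan; };   // who may query us at all
    listen-on       { any; };
};
```

After changing it, `rndc reload` (or restarting named) applies the new config, and a client-side `dig @server example.com` is a quick recursion test.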
[gentoo-user] Re: urxvt asking for confirmation when pasting
On 2022-01-14, Michael wrote: > On Friday, 14 January 2022 16:53:06 GMT Grant Edwards wrote: >> On 2022-01-14, Grant Edwards wrote: >> > urxvt has suddenly started prompting for confirmation when pasting text >> > by clicking the middle mouse button. This is excruciatingly >> > annoying. I don't see any relevant X resources when I do 'urxvt >> > -help'. Does anybody know how to disable this horrible new "feature"? >> >> This appears to be caused by a perl extension which is now enabled by default? >> >> Disabling the perl useflag for rxvt-unicode fixed the problem. > > Aren't you also disabling the tabbed function? I don't know. I've never used the tabbed function. -- Grant
[gentoo-user] Re: urxvt asking for confirmation when pasting
On 2022-01-14, Grant Edwards wrote: > urxvt has suddenly started prompting for confirmation when pasting text > by clicking the middle mouse button. This is excruciatingly > annoying. I don't see any relevant X resources when I do 'urxvt > -help'. Does anybody know how to disable this horrible new "feature"? This appears to be caused by a perl extension which is now enabled by default? Disabling the perl useflag for rxvt-unicode fixed the problem.
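In Portage terms the USE change looks like this. The file name under /etc/portage/package.use is arbitrary, and this assumes the standard split-file layout rather than a single package.use file:

```
# /etc/portage/package.use/rxvt-unicode
# drop the perl USE flag (and with it the paste-confirmation extension)
x11-terms/rxvt-unicode -perl
```

Followed by rebuilding the package: `emerge --ask --oneshot x11-terms/rxvt-unicode`.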
[gentoo-user] urxvt asking for confirmation when pasting
urxvt has suddenly started prompting for confirmation when pasting text by clicking the middle mouse button. This is excruciatingly annoying. I don't see any relevant X resources when I do 'urxvt -help'. Does anybody know how to disable this horrible new "feature"? -- Grant
[gentoo-user] Re: How to diagnose version conflicts?
On 2022-01-12, Neil Bothwick wrote: > On Wed, 12 Jan 2022 16:25:29 - (UTC), Grant Edwards wrote: > >> > If it was installed through portage, there would have been an ebuild >> > for it, in /var/db/pkg. >> >> Yes, correct past tense. There was at some point in the past when >> ipkg-utils was installed. >> >> > That's what portage was referencing when it made the dependency >> > calculations. >> >> There was no ipkg ebuild. [...] > > The ebuild would still have been on /var/db/pkg as long as it was > installed. Doh! Of course you're right. I was getting /var/db/repos and /var/db/pkg mixed up. I had been searching /var/db/repos, not /var/db/pkg. -- Grant
[gentoo-user] Re: How to diagnose version conflicts?
On 2022-01-12, Neil Bothwick wrote: > On Wed, 12 Jan 2022 14:53:06 - (UTC), Grant Edwards wrote: > >> Then it must have been ipkg-utils itself that required the older >> python_exec, but there was no ebuild present for it. > > If it was installed through portage, there would have been an ebuild > for it, in /var/db/pkg. Yes, correct past tense. There was at some point in the past when ipkg-utils was installed. > That's what portage was referencing when it made the dependency > calculations. There was no ipkg ebuild. There had been in the past, but it was removed during an emerge --sync a while back. Last rites on 1 Aug 2020, removal 30 days later: https://www.mail-archive.com/gentoo-dev@lists.gentoo.org/msg90135.html My conclusion was that dependency info for currently installed packages is also stored somewhere else, since emerge still knew that python-2.7 was required for ipkg-utils. -- Grant
[gentoo-user] Re: How to diagnose version conflicts?
On 2022-01-12, Arve Barsnes wrote: > On Wed, 12 Jan 2022 at 01:44, Grant Edwards wrote: >> Still not sure what command one uses to determine what package is >> preventing some other package from being upgraded... > > It should all be in the emerge output, although it's quite hard to read. > > If you want help interpreting it you could post the complete conflict > output, but what you've posted in your initial message is just the bit > that says that python-exec-2.4.8 requires python-exec-conf-2.4.6. > That's not a conflict, that's just one of the packages having one > dependency. To have a conflict, a different package would need to > require a different version. Right. And how to determine which package requires the older version is the question. Since I can't reinstall ipkg-utils, I don't have any way to recreate the conflict. > Most of the times this particular kind of conflict is with an older > package that requires older PYTHON_TARGETS than can be provided, and I > expect something that got depcleaned with ipkg-utils, or ipkg-utils > directly, required python-exec or python-exec-conf with > PYTHON_TARGETS="python3_7". Note that dev-lang/python itself is not > the source of any of these problems, I still have python 2.7 and 3.10 > installed (along with 3.9 which is the default version on this machine > now). Then it must have been ipkg-utils itself that required the older python_exec, but there was no ebuild present for it. I know that ipkg-utils was not mentioned at all in the emerge output. After unmerging ipkg-utils and python2.7 the conflict was gone. Next time I'll keep a copy of the entire emerge output. -- Grant
[gentoo-user] Re: How to diagnose version conflicts?
On 2022-01-12, Jack wrote: >> python-exec-2.4.8 requires python-exec-conf which requires >> python-exec 2.4.6? > > I was going to wonder if you are caught in the middle of an upgrade > that's only partly reached the mirrors. Given that (as I see it, > having last done a sync a few hours ago) that there is ONLY one version > each in the tree for python-exec-2.4.8 and python-exec-conf-2.4.6. Yep, same here. > However, looking at the ebuilds: > python-exec requires python-exec-conf (no version specified) > python-exec-conf-2.4.6 requires "! should allow 2.4.8. > > I have both python 3.9.9-r1 and 3.10.0_p1-r1 installed (plus > 2.7.18_p13) so there doesn't seem to be any conflict there. What does > "equery d python-exec" tell you? After poking around a bit, I realized that the only machine that was having this problem was also the only one that had python2.7 installed. Python 2.7 was required by ipkg-utils (for which the ebuild seems to have long since vanished). The only thing I ever use from ipkg-utils is the ipkg-build bash script. I copied that script to ~/bin/ and unmerged ipkg-utils. emerge --depclean then removed python2.7, and then emerge -auvND happily upgraded python-exec to 2.4.8. Still not sure what command one uses to determine what package is preventing some other package from being upgraded... -- Grant
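In case it helps the next person: the two tools suggested in this thread for answering "what is holding this package back" are equery (Jack's "equery d" above) and emerge's dependency-tree output. A sketch, not verified against this exact conflict; equery ships with app-portage/gentoolkit:

```shell
# List the installed packages that depend on the package that won't move.
equery depends dev-lang/python-exec

# Re-run the failing upgrade with a dependency tree in the output
# (the -t flag mentioned above is --tree; it only helps on the full run).
emerge --pretend --verbose --tree --update --deep --newuse @world
```

Reading the --tree output bottom-up from the conflicting atom usually points at the package pulling in the old version.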
[gentoo-user] How to diagnose version conflicts?
It seems that every time a new Python version is unmasked, it breaks something on one or another of my machines. This time it's a python-exec version conflict that prevents emerge -u. AFAICT, Python 3.10 requires python-exec 2.4.8, and some other package requires 2.4.6. I've fixed things temporarily with: package.use: */* PYTHON_TARGETS: -python3_10 */* PYTHON_SINGLE_TARGET: -python3_10 package.mask: >=dev-lang/python-3.10 Now, at least I can continue to update the machine. When I get the spare time to try to get Python 3.10 working, what is the easiest way to figure out which package is causing the problem by requiring the older version of python-exec? I've tried adding a 't' to the emerge flags, but that doesn't seem to show anything useful. Is there any documentation on how to determine the cause of a package version conflict? Here's what emerge says: (dev-lang/python-exec-conf-2.4.6:2/2::gentoo, ebuild scheduled for merge) pulled in by dev-lang/python-exec-conf required by (dev-lang/python-exec-2.4.8:2/2::gentoo, ebuild scheduled for merge) USE="(native-symlinks) -test" ABI_X86="(64)" PYTHON_TARGETS="(pypy3) (python3_10) (python3_8) (python3_9)" python-exec-2.4.8 requires python-exec-conf which requires python-exec 2.4.6?
Re: [gentoo-user] installing virtual machine under gentoo
On 1/2/22 12:14 AM, John Covici wrote: OK, I fixed it, the group name was wrong when I tried the last time, I had libvirtd and it's only libvirt and that seems to have fixed things. Thank you for the clarifying follow up. Here's hoping you save someone else some time in the future. :-) On 1/2/22 9:58 AM, John Covici wrote: OK, more progress and a few more questions. Yay progress! In the virt-manager, I could not figure out how to add disk storage to the vm. I have a partition I can use for the disk storage -- is this different from the virtual machine image? It depends.™ KVM / libvirt / Qemu can use raw partitions, files on a mounted file system, logical volumes, ZFS vDevs, iSCSI, and other things for storage. Each one is configured slightly differently. So, which method do you want to use? I'd suggest that you /start/ with files on a mounted file system and then adjust as you need / want to. At least as long as you're getting your feet wet. From memory, you need to define a directory as a storage location to KVM / libvirt. -- I'm not currently using KVM so I'm working from a mixture of memory and what I can poke without spinning things up. 1) Open VMM (virt-manager). 2) Select the KVM host in the window. 3) Edit -> Connection Details 4) Go to the Storage tab. 5) Click the plus below the left hand pane. 6) Choose and enter a name for the storage pool. 7) Choose "dir: Filesystem Directory" as the type. 8) Choose a target path by typing or browsing to it. 9) Click Finish. Now the storage pool you created should appear as an option when creating a VM. Of even more importance, how do I bridge the vm onto my existing network? This is also done through host properties on the Virtual Networks tab. I don't remember the specifics (and can't walk through it the same way for reasons). 
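Circling back to storage for a moment: the nine GUI steps above have a virsh command-line equivalent. A sketch under the assumption of a directory-backed pool; the pool name and target path are placeholders, and the commands need access to the libvirt daemon:

```shell
# Define a directory-backed storage pool, start it, and mark it autostart.
virsh pool-define-as default dir --target /var/lib/libvirt/images
virsh pool-start default
virsh pool-autostart default
virsh pool-list --all     # confirm the pool shows up as active
```

Once defined this way, the pool appears in VMM (virt-manager) just as if it had been created through the Storage tab.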
I usually did most of the management via the /etc/conf.d/net file as I do a lot of things with networking that few things can properly administer (802.3ad LACP, 802.1q VLAN, bridging, l2 filtering, l3 filtering, etc). What I remember doing was re-configuring the (primary) network interface so that it came up without an IP address and was added as a member to a newly created bridge. As part of that I moved the system's IP address(es) from the underlying Ethernet interface to the newly created Bridge interface. With the bridge created and managed outside of VMM (virt-manager) I was able to add new VMs / containers to the existing Bridge interface. Thus establishing a layer 2 connection from the VM(s) / LXC(s) to the main network. Note: This is somewhat of a simplification as there are VLANs and multiple physical interfaces with many logical interfaces on the machine that I'm replying to you from. However, I believe, the concepts hold as I've written them. I have a nic for internal items named eno1 and another nic which connects to the outside world, I would like to bridge to the internal network, that would give the vm a dhcp address, etc. If you have a separate physical NIC, as I had suggested starting with, then you can avoid much of the bridge & IP re-configuration in the /etc/conf.d/net file and /mostly/ manage an independent bridge on the additional NIC from within VMM (virt-manager). The 2nd NIC means that you don't end up with a chicken & egg problem trying to administer a network interface across the network, which is how I do much of my work. Re-configuring things through the console also simplifies things in this regard. -- Grant. . . . unix || die
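On Gentoo with netifrc, the /etc/conf.d/net re-configuration described above (IP moved off the Ethernet interface and onto a new bridge) might look roughly like this. The interface name eno1 is from the thread; the bridge name and DHCP choice are assumptions:

```
# /etc/conf.d/net (netifrc) -- sketch: enslave eno1 to a new bridge br0
# and put the host's address on the bridge instead of the NIC.
bridge_br0="eno1"
config_eno1="null"
config_br0="dhcp"
rc_net_br0_need="net.eno1"
```

Plus `ln -s net.lo /etc/init.d/net.br0` and adding net.br0 to the default runlevel. VM interfaces are then attached to br0 from within VMM (virt-manager), giving guests layer 2 access to the LAN (and its DHCP server).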
Re: [gentoo-user] installing virtual machine under gentoo
On 1/1/22 11:05 PM, John Covici wrote: Well, I found out something. If I go to the file menu, I can add the connection manually and it works, That sounds familiar. but I wonder why I have to do that? Because the KVM Virtual Manager is designed such that it can administer KVM / libvirt / qemu on multiple systems. It's really client-server infrastructure. You're just needing to point the client at your local server one time. Also, before I do anything, it asks me for the root password and says system policy prevents local management of virtual machines. Do you know why this is so? This also seems familiar. Try re-starting the libvirt / kvm daemons. They may not be aware that your user is now a member of the proper group. -- Aside: This is why a reboot is ... convenient, but not required. This /should/ be taken care of by proper group administration for your normal user. I ran into this a long time ago when I set up KVM on my last Gentoo system. I don't remember exactly what I had to do to resolve it. I do know that it was less than five minutes of searching the web to find the answer, cussing at what needed to be done, and doing it. That system has been running perfectly fine for many years. -- Grant. . . . unix || die
Re: [gentoo-user] installing virtual machine under gentoo
On 1/1/22 10:07 PM, John Covici wrote: Maybe I have to log out of everything with my user name even though most of the logins are to virtual consoles? You typically need to log out of X11 sessions and log back in for them to see the new groups. But you say "virtual consoles", which tells me (Control)-(Alt)-(F#) which means that any given virtual console should be able to see the new groups if it logs out and logs back in, even if others stay logged in. -- Grant. . . . unix || die
Re: [gentoo-user] installing virtual machine under gentoo
On 1/1/22 1:19 PM, Mark Knecht wrote: In my experience it often takes either a logout/in or a reboot Ya Depending on what you actually /need/ to use the new group for, you can probably ssh to localhost or possibly use the `newgrp` command to switch your primary group to the group that you've been added to which hasn't been loaded (?) instantiated (?) ... in the current session. -- Grant. . . . unix || die