Re: [gentoo-user] How to synchronise between 2 locations
On 3/27/24 13:58, J. Roeleveld wrote: Hi all, Hi, I am looking for a way to synchronise a filesystem between 2 servers. Changes can occur on both sides which means I need to have it synchronise in both directions. What sort of turnaround time are you looking for? seconds, minutes, hours, longer? Does anyone have any thoughts on this? I would wonder about using rsync. host1 -> host2 at the top of the hour host2 -> host1 at the bottom of the hour Or if you wanted to get fancy host1 pushes to host2 at the top of the hour host2 pushes to host1 at a quarter past host2 pulls from host1 at the bottom of the hour host1 pulls from host2 at a quarter till I'm thinking like if one of them was a road warrior and only one side could initiate because of a stateful firewall. Also, both servers are connected using a slow VPN link, which is why I can't simply access files on the remote server. ACK -- Grant. . . .
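The alternating push/pull schedule described above could be sketched as cron entries like the following. The hostnames and paths are hypothetical placeholders, and note that rsync has no real conflict resolution: --update merely skips files that are newer on the receiver, a crude guard against clobbering.

```
# Hypothetical crontab on host1; host2 and /srv/shared are placeholders.
# Top of the hour: push local changes to host2.
0 * * * *  rsync -az --update /srv/shared/ host2:/srv/shared/
# Bottom of the hour: pull host2's changes.
30 * * * * rsync -az --update host2:/srv/shared/ /srv/shared/
# host2 would run the mirror-image pair at :15 and :45.
```

Over a slow VPN link the -z (compression) flag and rsync's delta transfer are what make this tolerable; only changed blocks cross the wire.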
Re: [gentoo-user] What do you think about pam-gnupg?
On 3/2/23 9:53 PM, efeizbudak wrote: Doesn't this sort of defeat the purpose of using pass? I mean if it's always decryptable then is it really useful to have it encrypted in the first place (assuming you have full disk encryption set up)? I may be missing something crucial here so please let me know. There is value in not having a password in clear text on a file system. It really depends on what you're trying to protect from / against. Grant: This seems like the lesser of all evils to me. As I understand, you're suggesting that I lend the email password to the daemon at start and only have that password stored in memory instead of my actual gpg password, is that correct? I think we're talking about the same thing. Again, I may be missing something here, but does having your GPG credentials unprotected offer any real protection? See my response to your comment / question to Matt. I guess this is where I'll eventually be heading towards. I'm personally looking forward to being able to use TPMv2 to protect keys for services running on the system. It requires said services to support the TPM. By the way, thanks to both of you for your thoughts! :-) -- Grant. . . . unix || die
Re: [gentoo-user] What do you think about pam-gnupg?
On 3/2/23 6:48 AM, Matt Connell wrote: You just described gpg-agent, the core of what Efe (OP) is meddling with :) No, I didn't. I was referring to having the OP's utility read the password and interact with GPG /once/ at startup and then the utility run for a much longer time retaining the decrypted password in its memory. The difference may seem subtle, but it is very important to understand. -- Grant. . . . unix || die
Re: [gentoo-user] What do you think about pam-gnupg?
On 3/1/23 7:10 AM, efeizbudak wrote: Hi all, Hi, I let mutt-wizard set a cron job which takes my password out of pass, logs into the email server and fetches my mail every 5 minutes. Can you re-architect this as a (pseudo) daemon so that you unlock it once (or at least a LOT less often) and it stores the necessary information in memory for subsequent re-use? With this I have to unlock my key as frequently as the amount in gpg-agent.conf's default-cache-ttl setting. :-/ pam-gnupg has been suggested as a remedy to this problem but the disclaimer on its page about dangerous bugs makes me hesitant to use it. What do you think about the security of it? It's only 500 SLOC but I don't trust myself with reviewing the security of it. I don't relish the idea of giving something the keys to the kingdom. Could you re-configure things so that (a copy of) the requisite password is accessible via a different set of GPG credentials specific to the process that you're running? Then you could probably have just that set of GPG credentials unprotected so that the script could use them as it is today. If neither of these options were possible I'd look into something like a TPM and / or Yubikey wherein I could offload some of the GPG operations to it so that the decryption key is physically tied to the source computer /and/ *where* *it* *can't* *be* *copied*. I might also look into other authentication methods, e.g. TLS client certificate, so that the script can do what it needs to without needing to bother with GPG. -- Grant. . . . unix || die
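For reference, the cache lifetime mentioned above lives in ~/.gnupg/gpg-agent.conf; a sketch with illustrative values (not recommendations):

```
# ~/.gnupg/gpg-agent.conf -- values are illustrative only.
# Seconds a cached passphrase survives after its last use:
default-cache-ttl 600
# Hard upper bound in seconds, regardless of use:
max-cache-ttl 7200
```

Raising default-cache-ttl to just over the cron interval (e.g. above 300 for a 5-minute fetch) keeps the key effectively always unlocked, which is exactly the trade-off being debated in this thread.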
Re: [gentoo-user] Re: Bouncing messages
On 1/20/23 9:09 AM, Peter Humphrey wrote: I'm still getting bounce messages the same as all year. Different meaning of "all the time". - Not all sending domains use advanced security. - Not all receiving domains use advanced security. - Not all mailing lists account for advanced security. It's the overlap of those three things that suggests whether a message will be bounced or accepted. -- Grant. . . . unix || die
Re: [gentoo-user] Re: Bouncing messages
On 1/20/23 2:07 AM, Dale wrote: It could be the OP is running into the same problem I have had in the past, whatever that problem is. My experience is that this is a combination of advanced email protection on the sender /and/ the receiver. E.g. the sending domain's email configuration specifies very specific locations combined with a receiving domain's email configuration honoring what the sending domain publishes. Thus when a message passes through a 3rd party, say a mailing list, the recipient refuses to accept the message because it's not from where the sender says the message is authorized to come from. There's a lot of minutiae to this and lots of ways that this can fail. Yes, there are some things that the Gentoo Users mailing list can change, but due to various reasons, this isn't done all the time. I might add, I don't recall seeing anything that leads me to believe I actually missed any messages. I tend to follow most threads and I don't recall ever seeing a quoted message that I don't have the original of. My experience is similar. It's odd in my opinion. Maybe someone will figure it out. I think it's been figured out. This is where "this isn't done all the time" comes into play. -- Grant. . . . unix || die
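A concrete sketch of the mechanism described above: the sending domain publishes a DMARC policy in DNS, and a strict policy is what makes relayed list mail bounce. The record below is a hypothetical example, not any real domain's policy.

```shell
# Hypothetical DMARC record for a sending domain; look up a real one
# with: dig +short TXT _dmarc.example.org
DMARC='v=DMARC1; p=reject; adkim=s; aspf=s'
# Extract the policy tag. p=reject tells honoring receivers to refuse
# mail that fails DKIM/SPF alignment -- which a mailing-list hop,
# resending the message from its own servers, typically does.
POLICY=$(printf '%s' "$DMARC" | tr -d ' ' | tr ';' '\n' | grep '^p=' | cut -d= -f2)
echo "policy: $POLICY"
```

A list can work around this by rewriting the From: header for mail from strict-policy domains, which is one of the "things the list can change" alluded to above.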
Re: [gentoo-user] Re: Bouncing messages
On 1/18/23 4:19 PM, Dale wrote: I might add, in the past I followed the instructions to get bounced messages, and I've never once had it work. I don't get an error or anything either, like I do if I do something wrong doing something else. I tried it a few times. I'd see mail log entries where the re-sent messages would fail the same way that the original sent message failed. :-/ -- Grant. . . . unix || die
Re: [gentoo-user] Re: Bouncing messages
On 1/18/23 8:07 AM, Neil Bothwick wrote: You can also request redelivery of messages based on the internal numbers if you follow the help advice in all list message headers. The problem is that if the message is rejected because of filtering the first time around, there's a very good chance that it will also be filtered on subsequent re-delivery requests. -- Grant. . . . unix || die
Re: [gentoo-user] Update to /etc/sudoers disables wheel users!!!
On 10/26/22 7:27 PM, Ramon Fischer wrote: Sure, you cannot cover everything, but mitigating at least a little bit would be OK or not? :) I don't know. :-/ It's the proverbial problem of spam / virus filtering where a spam / virus gets through the filters and someone says "But it's your fault because you are supposed to protect me!!!". Sometimes there are advantages to saying "here's a gun, it's loaded, and the safety is off. We suggest not pointing it at your foot. If you do point it at your foot, don't pull the trigger." type thing. -- Grant. . . . unix || die
Re: [gentoo-user] Update to /etc/sudoers disables wheel users!!!
On 10/26/22 3:48 PM, Ramon Fischer wrote: I have created an issue at their Git repository. Maybe there will be a solution for this: https://github.com/sudo-project/sudo/issues/190 I ... don't know where to begin. There are so many ways that you can hurt yourself with syntactically valid sudoers that it's not even funny. You could allow-list almost all commands, without using the special ALL placeholder, and then remark (comment out) critical commands and end up in a very similar situation. At some point we have to trust that Systems Administrators / sudoers editors know what they are doing and let them do so. -- Grant. . . . unix || die
Re: [gentoo-user] Update to /etc/sudoers disables wheel users!!!
On 10/26/22 3:27 PM, Ramon Fischer wrote: Why was I thinking of a chroot? Maybe because of reading "grup/grub" a few e-mails before and thinking of "grub-mkconfig"... Or maybe because entering a chroot is such a prominent thing to do when booting off of Gentoo media to do an installation that it's largely habitual for some of us. ;-) -- Grant. . . . unix || die
Re: [gentoo-user] Update to /etc/sudoers disables wheel users!!!
On 10/26/22 3:13 PM, Neil Bothwick wrote: They and you are different people. You are looking at it from the perspective of a user accidentally locking themself out of the system, so su is the best way to be able to fix it. I agree with you there. I was looking at it from the perspective of a third party changing sudo rights without your consent. We were at cross purposes. ACK Thank you for clarifying. -- Grant. . . . unix || die
Re: [gentoo-user] Update to /etc/sudoers disables wheel users!!!
On 10/26/22 2:08 PM, Neil Bothwick wrote: So they have root access, nothing has changed. How they get root access is irrelevant, just that they have it. No, how they get root access is not irrelevant. If your only access to root is via sudo and you break sudo you no longer have root access. If you don't have root access through something other than sudo, you can't fix your sudo (from your existing system). -- Grant. . . . unix || die
Re: [gentoo-user] Update to /etc/sudoers disables wheel users!!!
On 10/26/22 12:35 PM, Jack wrote: Could you not interrupt grub and append "single" or "init=/bin/bash" to the kernel command line? Maybe. It will depend on how complex your configuration is. I don't remember if Gentoo requires root's password when entering single user mode or not. (I've not tested it in a long time.) Invoking Bash (or any shell) as init may not work as desired if your system configuration is complex and needs fancier things (modules / network resources / etc) during normal init. My 20 years' worth of experience is to have a root password set so that you can fix this more directly and more reliably. Ideally, as soon as you learn that sudo is not working as desired, use su -- using root's password -- and revert the recent sudo change. -- Grant. . . . unix || die
Re: [gentoo-user] Update to /etc/sudoers disables wheel users!!!
On 10/26/22 12:22 PM, Neil Bothwick wrote: You need to be root to write to /etc/sudoers.d. If someone has that access, you are already doomed! And what happens if someone uses the existing root-via-sudo access to break sudo? You lose root-via-sudo access. Someone could become root, via sudo, edit the sudoers file without using visudo, introduce a syntax problem, thereby breaking sudo (fail secure). You could easily do this to yourself if you don't follow best practices. -- Grant. . . . unix || die
Re: [gentoo-user] Update to /etc/sudoers disables wheel users!!!
On 10/26/22 12:04 PM, Ramon Fischer wrote: Also a very interesting question! }:-) I just tested this with "visudo" and it does not intercept this. Nor should it. It's perfectly legitimate sudoers syntax. The location, /etc/sudoers.d/zz vs the end of /etc/sudoers (proper), doesn't matter. If "su" is disabled, you are locked out and you are forced to enter your system via a live USB stick and a "chroot" in order to edit "/etc/shadow" to set a root password via "mkpasswd" and enable "su". Which is one of the reasons that it's important to have (set) a known root password. -- Grant. . . . unix || die
Re: [gentoo-user] Update to /etc/sudoers disables wheel users!!!
On 10/26/22 1:42 AM, Ramon Fischer wrote: and your user is able to synchronise your clock again. I'm not sure that will work as hoped. See my other reply about PTY and testing the commands at the command line for more explanation of what I suspect is happening. I do not know what the developers were thinking to encourage the user to edit a default file, which gets potentially overwritten after each package update... To the sudo developers, the /etc/sudoers file is *SUPPOSED* *TO* /be/ /edited/. The sudo developers provide the sudo (et al.) program(s) for your use and /you/ provide the configuration file(s) that it (they) use. It is natural for the /etc/sudoers file to be edited. To me the disconnect is when people other than the sudo developers distribute the /etc/sudoers file and expect that it will not be edited. What are end users / systems administrators to do if the default file has something like the following enabled in the default /etc/sudoers file and the EUs / SAs want it to not be there? %wheel ALL=(ALL:ALL) ALL They have no choice but to change (edit / replace) the /etc/sudoers file. Especially if other parts of the system rely on the wheel group and not putting users in it is not an option. -- The above line *MUST* be taken out, thus the /etc/sudoers file *MUST* be edited. Unix has 50 years of editing files to make the system behave as desired. Modularization and including other files is nice /when/ /it/ /works/. But there are times that modularization doesn't work and files *MUST* be edited. "etc-update" helps keep an eye on this, but muscle memory and fast fingers are sometimes faster. How many levels of safety do you suggest that we put in place? What if someone were to put the following into /etc/sudoers.d/zz ALL ALL=(ALL) !ALL }:-) This is the best way. Try to be as precise as possible, but be aware of wildcards![1] The /etc/sudoers syntax can be tricky to master. But it can also be very powerful when done correctly. -- Grant. . . . unix || die
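One guard rail that does exist: visudo can syntax-check a candidate file before it is installed, which catches typos, though not semantically dangerous rules like the one above. A minimal sketch (the scratch path is hypothetical):

```shell
# Write a candidate sudoers fragment to a scratch file.
printf '%s\n' '%wheel ALL=(ALL:ALL) ALL' > /tmp/sudoers.candidate
# Validate it without installing it; a clean parse exits 0,
# a syntax error exits non-zero.
if command -v visudo > /dev/null; then
    visudo -c -f /tmp/sudoers.candidate && echo "syntax OK"
fi
# Note: "ALL ALL=(ALL) !ALL" would also pass the check -- it is valid
# syntax, just a self-inflicted denial of service.
```

This is why a syntax checker alone cannot save you here: the lockout rule discussed in this thread is perfectly well-formed.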
Re: [gentoo-user] Update to /etc/sudoers disables wheel users!!!
On 10/26/22 12:31 AM, Walter Dnes wrote: My regular user has script "settime" in ${HOME}/bin #!/bin/bash date /usr/bin/sudo /usr/bin/rdate -nsv ca.pool.ntp.org /usr/bin/sudo /sbin/hwclock --systohc date /etc/sudoers.d/001 has, amongst other things, two lines... waltdnes x8940 = (root) NOPASSWD: /sbin/hwclock --systohc waltdnes x8940 = (root) NOPASSWD: /usr/bin/rdate -nsv ca.pool.ntp.org User "waltdnes" is a member of "wheel". If the "wheel" line is uncommented in /etc/sudoers, sudo works for me. If the "wheel" line is commented, then sudo breaks for my regular user. Please try running the two sudo lines from the script as is on the command line as the waltdnes user. I'm wondering if the problem is potentially related to something else, namely sudo wanting to read from a terminal (PTY) in some configurations. I believe there is a non-zero chance that the commands allowed via the /etc/sudoers.d/001 file will work as entered. But that running sudo from within a script, as opposed to on the command line, /may/ be the source of problems. -- Divide and conquer the problem. There seem to be two different approaches here. The loose approach is to allow a user to run "sudo <any command>". This seems to be -- what I refer to as -- the distribution default. E.g. get people to run things through sudo vs running things through su or running directly as root. A more locked down approach allows regular users to run "sudo <very specific command>". This is -- what I refer to as -- the (more) enterprise approach. It also seems to be the next evolution of the distribution default wherein people want to start restricting what can and can't be run via sudo. The enterprise approach also tends to come more into play as you use sudo to run things as users other than root; e.g. run RDBMS commands as the Oracle user or backup commands as the Tivoli user. This guards against "fat-finger-syndrome". I think it's more than protection against fat-finger-syndrome.
After all, unless the sudoers file(s) is (are) *EXTREMELY* specific down to and including command parameters / options, you can still fat-finger command parameters / options. When you start separating duties and who is allowed to do what is when you start to see the more locked down enterprise methodology. I go with the more locked down approach. I use the distribution default on my personal systems where I'm 95% of the use case. I use the enterprise method on work systems where we have multiple people with different skill levels doing different tasks. Aside: One advantage of the enterprise method is that you can allow a command as one target user (Oracle) but not the (default) root user. Thus helping protect against people omitting a critical option. -- Many things, e.g. Oracle RDBMS, get rather upset when commands (accidentally) change the ownership of files when run as the wrong user. -- Grant. . . . unix || die
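The runas aside above can be made concrete with a sudoers fragment; every username, hostname, and path below is a hypothetical illustration, not a drop-in policy:

```
# Hypothetical /etc/sudoers.d/dbteam -- enterprise-style rules.
# dbadmin may run RMAN only as the oracle user, never as root:
dbadmin  db01 = (oracle) /u01/app/oracle/bin/rman
# backupop may run the backup client only as the tivoli user:
backupop db01 = (tivoli) NOPASSWD: /opt/tivoli/tsm/client/ba/bin/dsmc
```

Because the Runas spec is (oracle) rather than the default (root), even a fat-fingered invocation runs as oracle and cannot accidentally chown files to root, which is the protection described above.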
Re: [gentoo-user] Update to /etc/sudoers disables wheel users!!!
On 10/25/22 9:44 PM, Matt Connell wrote: Calm down. I am calm. The suggestion to not edit the (/etc/sudoers) configuration file is one of those types of things that if nobody objects to then eventually not doing so will become de facto policy. So I objected, calmly, but with emphasis. Nobody said you can't. Yet. (See above.) I do. I do too. Just know what you're doing and pay attention to what portage does with package-managed configuration files. Yep. This is a common pitfall across multiple distributions / operating systems / platforms. dispatch-conf even gives you the opportunity to edit it before applying. Yep. I almost always reject the changes suggested on config files that I've modified and accept them on files that I've not modified. I really do wish that there was a better way to manage this, likely involving diffs / deltas. E.g. what changed between the N distribution file and the N+1 distribution file. Can that same change be safely applied to the N' distribution file to create the N'+1 file? -- Grant. . . . unix || die
Re: [gentoo-user] Update to /etc/sudoers disables wheel users!!!
On 10/25/22 9:04 PM, Ramon Fischer wrote: I do not think, that this is a bug, since it is the default file, which should not be edited by the user. I *STRONGLY* /OBJECT/ to the notion that users should not edit configuration files. By design, that's the very purpose of the configuration file, for users to edit them to be what they want them to be. The concept of "don't edit configuration files" seems diametrically opposed to the idea of Gentoo as I understand it. Namely, /you/ build /your/ system to behave the way that /you/ want it to. All changes should be done in "/etc/sudoers.d/" to avoid such cases. Then why in the world does the /default/ file, as installed by Gentoo, include directions to edit the file?!?!?! Aside: Someone recently posted a comment to the sudo users mailing list (exact name escapes me) wherein their security policy prohibited @includedir explicitly because of the capability that adding a file to such included directories inherently enabled sudo access -or- caused sudo to fail secure and perform a Denial of Service. They were required to use individual @include directives. IMHO telling a Gentoo user not to modify a file in /etc takes chutzpah. -- Grant. . . . unix || die
Re: [gentoo-user] Change History of linux commands
On 10/7/22 11:10 AM, Matt Connell wrote: Was more just laughing at myself for having used equery so frequently for ~10 years and not knowing about the option. Fair enough. And if I was hiding it, I wouldn't have publicly replied that I learned it :) TIL You accidentally struck a button for me. As the ... more experienced SA on teams for a while, I tend to not tolerate people hoarding / not sharing information and / or making fun of others for not knowing something. So I counter this by actively promoting people learning things as a good thing. -- Grant. . . . unix || die
Re: [gentoo-user] Change History of linux commands
On 10/7/22 10:23 AM, Philip Webb wrote: There's the Wayback Machine, which tries to archive all I/net pages ever. Sadly, there are a lot of pages that the Wayback Machine a.k.a. The Internet Archive doesn't have archived. TIA / WM is a best effort system and is a lot better than not having anything at all. I've never used it, but it should have copies of man pages going back, which would allow you to reconstruct the history of the commands. I don't think that searching the internet for old copies of man pages is going to be as productive as one might hope. First there's the SysV vs BSD lineage to account for. Second there's all the other things that don't fall in the SysV / BSD camps, mostly older. I'd suggest inquiring on the TUHS or COFF mailing lists for pointers to history of various commands. You may very well be pointed to archived man pages. But you'll also have comments from people who maintained commands and possibly added the option that you're most interested in. -- Grant. . . . unix || die
Re: [gentoo-user] Change History of linux commands
On 10/7/22 10:31 AM, Matt Connell wrote: Ashamed to admit I learned of equery meta today. I'd previously been relying on eix to find, say, the website associated with a package. NEVER be ashamed to admit that you learned something. Learning is a good thing. It doesn't matter when you learn it as long as you do learn. I think that being ashamed about not knowing something tends to promote what I consider to be a negative stigma that people should know everything and that they should hide what they don't know. I've been administering Linux professionally for more than two decades and I still learn new things weekly if not daily. Help pull others up, don't hold them down by climbing on top of them. -- Grant. . . . unix || die
Re: [gentoo-user] Change History of linux commands
On 10/7/22 8:25 AM, n952162 wrote: Can anybody tell me how I can look at the official change history of linux commands? Some man pages have history of commands in them. Admittedly, it seems as if man pages on Solaris and *BSD (I have access to FreeBSD) tend to be better than Linux man pages in this respect. -- Grant. . . . unix || die
Re: [gentoo-user] openvpn experience, anyone?
On 9/18/22 1:26 AM, n952162 wrote: I want to ssh over my openvpn connection, and I can't do it, the connection times out. IMHO the first, second, and third thing to try when OpenSSH clients fail for some reason is `-v`, `-v -v`, and `-v -v -v` in your ssh command(s). That will almost always give you some sort of indication of the next place to start looking. That being said -- assuming routing is good -- I would also suspect an MTU issue. The symptom of this is that OpenSSH establishes the TCP connection that carries the data and starts negotiating the SSH protocol, but fails part way through and starts timing out when big packets are sent but never make it to the other end. As Michael alluded to, trying to SSH from the local gateway to the remote gateway can be a little tricky to configure as there can be a couple of source IPs (local inside & local outside) as well as a couple of destination IPs (remote outside & remote inside). Tunnels usually cover local inside communicating with remote inside but fail to account for any outside addresses. -- N.B. this can usually be addressed with a judicious route statement that specifies which source address to use. -- Grant. . . . unix || die
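A sketch of the MTU check described above. The peer hostname is a placeholder, and the network-touching commands are shown commented out since they need a live tunnel:

```shell
# IP header (20 B) + ICMP header (8 B) = 28 B of overhead, so the
# largest unfragmented ICMP payload on a 1500-byte-MTU link is 1472.
MTU=1500
PAYLOAD=$((MTU - 28))
echo "probe payload: $PAYLOAD bytes"
# Probe with the don't-fragment bit set; shrink PAYLOAD until it
# succeeds to find the real path MTU through the VPN:
#   ping -M do -s "$PAYLOAD" -c 3 vpn-peer.example.net
# And watch where the SSH handshake stalls:
#   ssh -vvv user@vpn-peer.example.net
```

If the probe only succeeds well below the nominal MTU, lowering the tunnel interface's MTU (or enabling MSS clamping on the gateway) typically fixes the hanging SSH handshake.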
Re: [gentoo-user] Getting maximum space out of a hard drive
On 8/20/22 10:22 PM, William Kenworthy wrote: What are you measuring the speed with - hdparm or rsync or ? hdparm is best for profiling just the hard disk (talks to the interface and can bypass the cache depending on settings); rsync/cp/?? usually have the whole OS storage chain including encryption affecting throughput. How you measure performance is a complicated thing. There is the raw device speed versus the speed of the system under normal load while interacting with the drive. At $WORK, we are more concerned about throughput of the drive in our day to day use case than the drive's raw capacity. Encryption itself can be highly variable depending on what you use and usually though not always includes compression before encryption. Compression can be a very tricky thing. There's the time to decompress and compress the data as it's read and written (respectively). Then there's the throughput of data to the drive and through the drive to the media. If you're dealing with text that can get a high compression ratio with little CPU overhead, then there's a good chance that you will get more data into / out of the drive faster if it's compressed than at the same bit speed decompressed. To wit, I enabled compression on my ZFS pools a long time ago and never looked back. There are tools you can use to isolate where the slowdown occurs. atop is another one that may help. Yep. [test using a USB3 shingled drive on a 32-bit ARM system] Is that an Odroid XU4 system? If so, why 32-bit vs 64-bit? -- Or am I mistaken in thinking the Odroid XU4 is 64-bit? xu4 ~ # hdparm -Tt /dev/sda /dev/sda: Timing cached reads: 1596 MB in 2.00 seconds = 798.93 MB/sec Timing buffered disk reads: 526 MB in 3.01 seconds = 174.99 MB/sec xu4 ~ # If that is an Odroid XU4, then I strongly suspect that /dev/sda is passing through a USB interface. So ... I'd take those numbers with a grain of salt. -- If the system is working for you, then by all means more power to you.
I found that my Odroid XU4 was /almost/ fast enough to be my daily driver. But the fan would kick in for some things and I didn't care for the noise of the stock fan. I've not yet compared contemporary Raspberry Pi 4 or other comparable systems. -- Grant. . . . unix || die
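A quick local illustration of the compressible-text point above, with gzip standing in for whatever compression the storage stack uses; highly repetitive text shrinks enough that far fewer bytes have to cross the slow path to the media:

```shell
# Generate 1 MiB of highly repetitive text and compress a copy of it.
yes "the quick brown fox jumps over the lazy dog" | head -c 1048576 > /tmp/sample.txt
gzip -c /tmp/sample.txt > /tmp/sample.txt.gz
ORIG=$(wc -c < /tmp/sample.txt)
COMP=$(wc -c < /tmp/sample.txt.gz)
# The ratio is enormous for text like this; real data varies widely,
# and already-compressed data (video, archives) gains nothing.
echo "original: $ORIG bytes, compressed: $COMP bytes"
```

This is the intuition behind "compression on, never looked back" for ZFS pools: for compressible workloads the CPU cost is usually cheaper than the extra I/O it avoids.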
Re: [gentoo-user] Getting maximum space out of a hard drive
On 8/20/22 4:45 PM, Dale wrote: I figured it was something like that. ;-) :-) This drive is not supposed to be SMR. It's a 10TB and according to a site I looked on, none of them are SMR, yet. I found another site that said it was CMR. So, pretty sure it isn't SMR. Nothing is 100% tho. I might add, it's been at about that speed since I started the backup. If you have a better source of info, it's a WD model WD101EDBZ-11B1DA0 drive. I am so far from an authority and wouldn't know anything better than a web search for manufacturer's documents. I noticed there is a kcrypt something thread running, a few actually but it's hard to keep up since I see it on gkrellm's top process list. The CPU is running at about 40% or so average but I do have mplayer, a couple Firefox profiles, Seamonkey and other stuff running as well. I still got plenty of CPU pedal left if needed. Having Ktorrent and qbittorrent running together isn't helping. Thinking of switching torrent software. Qbit does seem to use more memory tho. Ya, the number of things hitting the drive will impact performance. The type of requests will also impact things. In my limited experience, lots of little requests seem to be harder for a drive than fewer but bigger requests. I think the 512 has something to do with key size or something. Am I wrong on that? If I need to use 256 or something, I can. My understanding was that 512 was stronger than 256 as far as the encryption goes. Agreed. At least that's what a quick look at the cryptsetup man page online showed me. But I suspect the underlying concept may still stand, even if the particular parameter in your previous message is not related. I'm going to try some tests Rich mentioned after it is done doing its backup. I don't want to stop it if I can avoid it. It's about halfway through, give or take a little. :-) -- Grant. . . . unix || die
Re: [gentoo-user] Getting maximum space out of a hard drive
Sorry for the duplicate post. I had an email client error that accidentally caused me to hit send on the window I was composing in. On 8/20/22 1:15 PM, Dale wrote: Howdy, Hi, Related question. Does encryption slow the read/write speeds of a drive down a fair amount? My experience has been the opposite. I know that it's unintuitive that encryption would make things faster. But my understanding is that it alters how data is read from / written to the disk such that it's done in more optimized batches and / or optimized caching. This was so surprising that I decrypted / re-encrypted a drive multiple times to compare things to come to the conclusion that encryption was noticeably better. Plus, encryption has the advantage that destroying the key renders the drive safe to re-use or dispose of, independent of the data that was on it. N.B. The actual encryption key is encrypted with the passphrase. The passphrase isn't the encryption key itself. This new 10TB drive is maxing out at about 49.51MB/s or so. I wonder if you are possibly running into performance issues related to shingled drives. Their raw capacity comes at a performance penalty. I actually copied that from the progress of rsync and a nice sized file. It's been running over 24 hours now so I'd think buffer and cache would be well done with. LOL Ya, you have /probably/ exceeded the write back cache in the system's memory. It did pass both a short and long self test. I used cryptsetup -s 512 to encrypt with, nice password too. My rig has a FX-8350 8 core running at 4GHz CPU and 32GBs of memory. The CPU is fairly busy. A little more than normal anyway. Keep in mind, I have two encrypted drives connected right now. The last time I looked at cryptsetup / LUKS, I found that there was a [kernel] process per encrypted block device. A hack that I did while testing things was to slice up a drive into multiple partitions, encrypt each one, and then re-aggregate the LUKS devices as PVs in LVM.
This surprisingly was a worthwhile performance boost. Just curious if that speed is normal or not. I suspect that your drive is FAR more the bottleneck than the encryption itself is. There is a chance that the encryption's access pattern is exacerbating a drive performance issue. Thoughts? Conceptually, working in 512 B blocks on a drive that natively uses 4 kB sectors causes the drive to do lots of extra work to account for the other seven 512 B blocks in each 4 kB sector. P. S. The pulled drive I bought had like 60 hours on it. Dang near new. :-) -- Grant. . . . unix || die
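The partition-slicing hack above can be sketched as follows. Everything is printed rather than executed because the commands are destructive; /dev/sdX is a placeholder. The point of the hack is that (on older kernels, with one kcryptd worker per dm-crypt device) four LUKS devices get four workers instead of one:

```shell
# DESTRUCTIVE if run for real -- every command is only echoed here.
DEV=/dev/sdX
for N in 1 2 3 4; do
    # Note: -s 512 with aes-xts-plain64 means two 256-bit keys,
    # i.e. AES-256 in XTS mode (the "512 is the key size" point above).
    echo cryptsetup luksFormat --cipher aes-xts-plain64 --key-size 512 "${DEV}${N}"
    echo cryptsetup open "${DEV}${N}" "crypt${N}"
done
# Re-aggregate the four dm-crypt devices as LVM physical volumes:
echo pvcreate /dev/mapper/crypt1 /dev/mapper/crypt2 /dev/mapper/crypt3 /dev/mapper/crypt4
echo vgcreate vg_data /dev/mapper/crypt1 /dev/mapper/crypt2 /dev/mapper/crypt3 /dev/mapper/crypt4
```

On current kernels dm-crypt parallelizes across CPUs on its own, so this hack matters much less than it once did.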
Re: [gentoo-user] VirtualBox question on Thinkpad laptop
On 8/20/22 12:30 AM, Walter Dnes wrote: Long-story-short; I run ArcaOS (backwards compatible OS/2 successor) as a guest on QEMU on my desktop. Aside: Is ArcaOS really a different version of OS/2? Or is it still 4.x with patches and updated drivers? I saw extremely little difference, other than eye candy / included open source packages, between IBM OS/2 Warp 4.5x, eComStation, and ArcaOS. Further Aside: I run anything in the above to be able to drive my P/390-E PCI card. The Lenovo Thinkpad has the "vmx" cpu flag, so QEMU is theoretically doable. But the mouse is extremely flaky, to the point of unusability, under QEMU on the Thinkpad. I've tried various tweaks, but no luck. I "asked Mr. Google", but only found other people with the same problem... and no solution. This sounds extremely reminiscent of guest OS driver / utility integration, or rather the lack thereof, when running OS/2 et al. in a VM. Are there any booby-traps to watch out for? What I'm most concerned about is the default "qt5" USE flag. Is VirtualBox usable without the qt5 GUI? I've not found much effective difference in the various hypervisors, save for driver / guest OS additions / integration maturity level. Sure, different hypervisors have varying maturity levels of the management utilities. But I've gotten all of them to do what I want. I prefer VirtualBox on a standalone workstation as a lab / play thing and VMware's (free) ESXi on my server for things I want running months at a time (read: to continue running when I reboot my workstation to change kernels). I assume that since you're running ArcaOS, that you have support from Arca Noae. As such, I'd open a support ticket with them and ask about guest add-ons for various hypervisors. I don't know the current state of 3rd party guest add-ons for OS/2 / eCS / ArcaOS under VirtualBox. Hopefully they've improved since the last time I looked.
Surprisingly enough, I think the best integration that I ever saw was under an *OLD* version of Microsoft's Virtual PC / Virtual Server / Hyper-V. Back when they still supported OS/2 as a guest OS in an official capacity. Perhaps you can run an old version thereof or extract the guest add-ons therefrom and use them elsewhere. -- Grant. . . . unix || die
Re: [gentoo-user] Getting maximum space out of a hard drive
On 8/20/22 1:15 PM, Dale wrote: Howdy, Hi, Related question. Does encryption slow the read/write speeds of a drive down a fair amount? This new 10TB drive is maxing out at about 49.51MB/s or so. I actually copied that from the progress of rsync and a nice sized file. It's been running over 24 hours now so I'd think buffer and cache would be well done with. LOL It did pass both a short and long self test. I used cryptsetup -s 512 to encrypt with, nice password too. My rig has a FX-8350 8 core running at 4GHz CPU and 32GBs of memory. The CPU is fairly busy. A little more than normal anyway. Keep in mind, I have two encrypted drives connected right now. Just curious if that speed is normal or not. Thoughts? Dale :-) :-) P. S. The pulled drive I bought had like 60 hours on it. Dang near new. -- Grant. . . . unix || die
Re: [gentoo-user] Any way to automate login to host and su to root?
On 7/18/22 3:28 AM, J. Roeleveld wrote: Either on the client where the agent is running, but also on the system I connected to. I have always considered that there is enough sensitive data on the client, and that there are already enough things running there, that I end up considering the client a sensitive / secure system as a unit. This seems to be especially true with servers hosting automation. But to each their own. As for the security of the forwarded agent, I've generally been okay with root on the target system having access to the agent. Especially when I have used different key pairs for different destination hosts and / or specify the from stanza in the authorized_keys file. If you want to, you can specify how long, in seconds, a key can be used in an agent. So if you have a running agent, you can load a key and specify that it can be used for up to two seconds. So even if someone does compromise the target host and does talk to the agent, the agent won't allow the key to be used and will behave as if the key wasn't loaded. You can also lock / unlock the agent on the source side as you see fit. Unlock it for authentication, and then immediately re-lock it after authenticating. Local commands and / or a local process using ssh remote commands makes this more reasonable. Aside: Backgrounded / multiplexed connections make running multiple remote commands on a host a lot more expedient.
1) Log in to the remote host with a background connection.
2) Run multiple remote commands via "ssh @ "
3) Log out of the remote host, closing the background connection.
The business logic of the script lives on the client and all the intermediate commands (#2) avoid the overhead of establishing a connection and authenticating again. But, I just noticed the following, which is hopeful, but I need to read up on it: https://www.openssh.com/agent-restrict.html Interesting. More reading. Agreed, which is why I always stop and think when I see that. 
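The per-key agent lifetime mentioned above can be exercised entirely locally; a minimal sketch, assuming OpenSSH is installed (the two-second lifetime and the temp-file key are purely for demonstration):

```shell
# Start a throwaway agent just for this demonstration.
eval "$(ssh-agent -s)" > /dev/null

tmpkey=$(mktemp -u)
ssh-keygen -q -t ed25519 -N '' -f "$tmpkey"   # passphrase-less demo key

ssh-add -t 2 "$tmpkey" 2> /dev/null           # usable for 2 seconds only
loaded=$(ssh-add -l)                          # the key is listed right now

sleep 3
expired=$(ssh-add -l 2>&1 || true)            # agent reports no identities

ssh-agent -k > /dev/null 2>&1                 # tear the agent down
rm -f "$tmpkey" "$tmpkey.pub"
```

After the lifetime expires the agent behaves exactly as if the key had never been loaded, which is the property described above.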
;-) Usually the answer is: "Oh, yes, I didn't access this host from my laptop yet". But that is usually after the 2nd or 3rd connection attempt with retyping the hostname and verifying the IP-address that is resolved for it first. I think I mistook a previous statement to mean that you did something to distribute the contents of the known_hosts file so that reloads would already be known. I guess I misunderstood. -- Grant. . . . unix || die
Re: [gentoo-user] Any way to automate login to host and su to root?
On 7/18/22 12:23 AM, J. Roeleveld wrote: I've been using ansible for some of my automation scripts and am happy with the way that works. The existing implementations for "adding users" and such are tested plenty by others and do actually check if the user exists before trying to add one. ACK I only use expect to automate the login process as mentioned in the original email. I've been a fan of the sshpass command explicitly for sshing into systems. Though I've gotten it to work for a few other very similar things. The line it's expecting is more than just "*?assword" like in all the examples. Currently, SSH puts the password-prompt as: (@) Password: As I know both, the expected string is this full line. If SSH changes its behaviour, the script will simply fail. Nice! -- Grant. . . . unix || die
Re: [gentoo-user] Any way to automate login to host and su to root?
On 7/17/22 11:48 PM, J. Roeleveld wrote: It could, but that would open up an unsecured key to interception if an intermediate host is compromised. What are you thinking? -- I've got a few ideas, but rather than speculating, I'll just ask. See previous answer, the agent, as far as I know, will have the keys in memory and I haven't seen evidence that it won't provide the keys without authenticating the requestor. Are you concerned about a rogue requestor on the host where the agent is running or elsewhere? Yes, copy/paste has no issues with multi-page texts. But manually reading a long password and copying it over by typing on a keyboard, when the font can make the difference between "1" (one), "l" (small letter L), and "|" (pipe character) hard to see, is annoying to say the least. Agreed. Currently, when that comment pops up, the first thing I do is wait and wonder why it's asking for it. As all the systems are already added to the list. Such a pop-up would be a very likely indication of a problem. -- Grant. . . . unix || die
Re: [gentoo-user] Any way to automate login to host and su to root?
On 7/17/22 11:24 PM, J. Roeleveld wrote: If I have 1 desktop and 1 laptop, that means 2 client machines. Add 5 servers/vms. /Clients/ need (non-host) key pairs. Servers shouldn't need non-host key pairs. Servers should only need the clients' public keys on them. That means 10 ssh-keys per person to manage and keep track of. If you're using per-host-per-client key pairs, sure. If you're only using per-client key pairs and copying the public key to the server, no. When a laptop gets replaced, I need to ensure the keys get removed from the authorized_keys section. If the new key pair would be using the same algorithm and bit length and there is no reason to suspect compromise, then I see no reason to replace the key pair. I'd just copy the key pair from the old client to the new client and destroy it on the old client. This is especially true if the authorized_keys file has a from stanza on the public key. Same goes for when the ssh-keys need refreshing. Which, due to the amount, I never got round to. I've not run into any situation where policy mandates that a key pair be replaced when there isn't any reason to suspect its compromise. I actually have more than the amount mentioned above; the number of ssh-keys gets too much to manage without an automated tool to keep track of them and automate the changing of the keys. I never got the time to create that tool and never found anything that would make it easier. As I think about it, I'd probably leverage the comment stanza of the public key so that I could do an in-place delete with sed and then append the new public key. E.g. have a comment that consists of the client's host name, some delimiter, and the date. That way it would be easy to remove any and all keys for the client in the future. When hosts can get added and removed regularly for testing purposes, this requires a management tool. It depends on how you configure things. 
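A sketch of that comment-keyed cleanup, run here against a temporary file rather than a real ~/.ssh/authorized_keys; the key data, host names, and dates are placeholders:

```shell
# Build a demo authorized_keys using "hostname:date" comments.
auth=$(mktemp)
cat > "$auth" <<'EOF'
ssh-ed25519 AAAAC3demokeyAAA laptop1:2021-01-15
ssh-ed25519 AAAAC3demokeyBBB laptop2:2022-07-01
ssh-ed25519 AAAAC3demokeyCCC laptop1:2022-07-01
EOF

# Remove every key belonging to laptop1, old and new alike...
sed -i '/ laptop1:/d' "$auth"

# ...then append the replacement key for that client.
echo 'ssh-ed25519 AAAAC3demokeyDDD laptop1:2022-07-16' >> "$auth"
```

Because the comment carries the client name, one sed expression drops all of that client's keys regardless of age, which is exactly what makes future removals easy.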
It seems as if it's possible to use the "%h" parameter when specifying the IdentityFile. So you could have a wild card stanza that would look for a file based on the host name. You could put "root" without a valid password, making it impossible to "su -" into, and add a 2nd uid/gid 0 account with a valid password. I know of 1 organisation where they had a 2nd root account added which could be used by the org's sysadmins for emergency access. (These were student-owned servers directly connected to the internet.) I absolutely hate the idea of having multiple accounts using the same UID. I'd be far more likely to have a per-host account with UID=0 / GID=0 and have the root account have a different UID / GID. I'll need to try this at some point in the future. I expect the "wheel" group to only be for changing into "root", that's what it's advertised as. I've seen some binaries in the wheel group with 0550 permissions. Still needs the clients to be actually running when the server runs the script. Or it needs to be added to a schedule and gets triggered when the client becomes available. This would make the scheduler too complex. Why can't the script that's running ssh simply start an agent, run ssh, then stop the agent? There's no coordination necessary. I agree, but root-access is only needed for specific tasks, like updates. Most access is done using service-specific accounts. I only have 2 where users have shell-accounts. Many people forget about problems on boot that require root's password. I'd love to implement Kerberos, mostly for the SSO abilities, but haven't found a simple-to-follow howto yet which can be easily adjusted so it can be added to an existing environment. ACK -- Grant. . . . unix || die
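The "%h" IdentityFile idea mentioned above might look like this in ~/.ssh/config; the keys/ directory layout is an assumption, but %h and IdentitiesOnly are standard ssh_config tokens:

```
# Look for a per-host private key named after the host being contacted,
# and offer only that key to the server.
Host *
    IdentityFile ~/.ssh/keys/%h
    IdentitiesOnly yes
```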
Re: [gentoo-user] Any way to automate login to host and su to root?
On 7/15/22 11:46 PM, J. Roeleveld wrote: Hmm... interesting. I will look into this. :-) But, it needs the agent to be running, which will make it tricky for automation. Why can't automation start an agent? Why can't there be an agent running that automation has access to? (I have some scripts that need to do things on different systems in a sequence for which this could help) :-) I know, which is why I was investigating automating it. The passwords are too long to comfortably copy by hand. I assume that you mean "type" when you say "copy". I will definitely investigate this. They sound interesting. I'd set the validity to a lot less if this can be automated easily. Yes, it can be fairly easily automated. One of the other advantages of SSH /certificates/ is when you flip things around and use a /host/ certificate. Clients can recognize that the target host's certificate is signed by the trusted SSH CA and not prompt for the typical Trust On First Use (TOFU) scenario. Thus you can actually leverage the target host SSH fingerprint and not need to ignore that security aspect like so many people do. Added to my research-list. :-) -- Grant. . . . unix || die
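The host-certificate side mentioned above boils down to a single known_hosts line on each client; a sketch with a placeholder domain and key material:

```
# ~/.ssh/known_hosts: trust any host certificate signed by this CA,
# so there is no per-host TOFU prompt to blindly accept.
@cert-authority *.example.com ssh-ed25519 AAAAC3placeholderCAkey
```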
Re: [gentoo-user] Any way to automate login to host and su to root?
On 7/15/22 11:42 PM, J. Roeleveld wrote: True, properly done automation is necessary to make our lives easier. #truth I tried this approach in the past and some levels of automation still use this, but for being able to log in myself, I found having different keys became cumbersome and I ended up never actually replacing them. I'm curious what you found to be cumbersome. I make extensive use of the client SSH configuration file (~/.ssh/config) such that I don't need to worry about which key is used for which host. This means that anything that uses ssh / sftp / scp /just/ /works/ (tm) using the contents of the configuration file. The goal is, whichever authentication system is used, for the passwords/keys to be replaced often with hard-to-brute-force passwords/keys. I can currently replace all passwords on a daily basis and not have a problem with accessing any system. I agree in concept. Though I question the veracity of that statement when things aren't working normally. E.g. a system is offline for X hours due to hardware failure, or an old version restored from backup is now out of sync with the central system. For normal use, most systems don't need to be logged into a shell. For the few where this is needed, individual accounts exist. But no individual account is a member of "wheel". For admin access, there are admin accounts on the machines. (They are all named individually and you won't find the same admin-account-username on more than 1 system.) I've wondered about having the account for UID / GID 0 be named something other than root. But the testing that I did showed that there were too many things that assumed "root". :-/ Though I did find that I was able to successfully convert a test VM to use something other than root and the proof of concept was a success. It's just that the PoC was too much effort / fragile to be used in production. I find that the wheel group is mostly for su and a few other commands. 
But the concept of you must be a member of a group or have special permissions applied directly to your account is conceptually quite similar to being a member of the wheel group. As such I don't think the abstraction makes much difference other than obfuscation. True, but this needs to run from the client. Not the server. Which means it will need to be triggered manually and not scheduled. The algorithm could be refactored such that it is run from the server. E.g. if you can ensure that the old key is replaced with the new key, it can safely be done server side. I did this for a few colleagues that had forgotten the passphrase for their old private key and needed their new public key to be put into place. I don't even have sudo installed on most systems, only where it's needed for certain scripts to work and there it's only used to avoid "setuid" which is an even bigger issue. I tend to prefer sudo's security posture where people need to know /their/ password. Meaning that there was no need for multiple people to know the shared target user's password like su does. If I was in a different environment, I'd consider Kerberized versions of su as an alternative. -- Grant. . . . unix || die
Re: [gentoo-user] Any way to automate login to host and su to root?
On 7/15/22 4:11 PM, Neil Bothwick wrote: I've never used it before, mainly because I wasn't aware of its existence until I re-read the ssh-keygen man page, but it seems to be simple timestamps passed to valid-before/valid-after. I'm not sure that's applicable to /keys/ verses /certificates/. Excerpt from the ssh-keygen man page: -V validity_interval Specify a validity interval when signing a /certificate/. A validity interval may consist of a single time, indicating that the /certificate/ is valid beginning now and expiring at that time, or may consist of two times separated by a colon to indicate an explicit time interval. Maybe there's something else, but it seems like the validity period is for SSH /certificates/ and not SSH /keys/. -- Grant. . . . unix || die
Re: [gentoo-user] Any way to automate login to host and su to root?
On 7/15/22 1:12 PM, Neil Bothwick wrote: I'll check that out, but it is also possible to set time limits on SSH keys, and limit them to specific commands. Please elaborate on the time limit capability of SSH /keys/. I wasn't aware of that. Is it hours of the day / days of the week they can be used? Or is it the number of days / date range that they can be used? -- Grant. . . . unix || die
Re: [gentoo-user] Any way to automate login to host and su to root?
On 7/14/22 3:22 PM, Steve Wilson wrote: Have you looked at dev-tcltk/expect? Expect has its place. Just be EXTREMELY careful when using it for anything security related. Always check for what is expected before sending data. Don't assume that something comes next and blindly send it (possibly after a pause). Things break in really weird and unexpected ways. (No pun intended.) Also, do as much logic outside of expect as possible. E.g. don't try to add a user and then respond to a failure. Instead check to see if the user exists /before/ trying to add it. Plan on things failing and try to control the likely ways that it can fail. Paying yourself forward with time and effort developing (expect) scripts will mean that you reap the rewards for years to come. -- Grant. . . . unix || die
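The "check if the user exists /before/ trying to add it" advice can live outside of expect entirely; a minimal sketch (the helper name and the demo account lookups are made up for illustration):

```shell
# Return success iff the named account exists on this system.
user_exists() {
    id -u "$1" > /dev/null 2>&1
}

# Branch on existence instead of reacting to a failed useradd afterwards.
if user_exists root; then
    root_present=yes
else
    root_present=no
fi

if user_exists no_such_user_zz9; then
    ghost_present=yes
else
    ghost_present=no
fi
```

The expect script then only has to drive the interactive session; the "does this account already exist" decision was made beforehand where failure is cheap and unambiguous.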
Re: [gentoo-user] Any way to automate login to host and su to root?
On 7/15/22 6:44 AM, Neil Bothwick wrote: I don't share keys, each desktop/laptop has its own keys. Not if they use their own keys. It should be simple to script generating a new key, then SSHing to a list of machines and replacing the old key with the new one in authorized_keys. +1 Indeed it is, and now you've found a way to do what you want with passwords, all is well. However, I will look at scripting regular replacements for SSH keys, for my own peace of mind. /me loudly says "SSH /certificates/" from atop a pile of old servers in the server room. -- Grant. . . . unix || die
Re: [gentoo-user] Any way to automate login to host and su to root?
On 7/15/22 1:53 AM, J. Roeleveld wrote: I agree, but that is a tedious process. Yes, it can be. That's where some automation comes into play. I have multiple machines I use as desktop depending on where I am. And either I need to securely share the private keys between them or set up different keys per desktop. I /currently/ use unique keys /per/ /client/ /system/. I am /planning/ on starting to use unique keys /per/ /client/ /per/ /server/. Meaning that each client will use a different key for each remote server. I think that this, combined with location restrictions in the authorized_keys file, will mean that SSH keys (or certificates) can't be used from anywhere other than their approved location or for anything other than their intended purpose. I assume the same is true for most people. Yes. It depends what security posture you / your organization want. Never mind that access to the servers needs to be possible for others as well. I assume that other users will use their own individual accounts to log into the target systems with a similar configuration. E.g. I log into remote systems as "gtaylor", you log into remote systems as "joost", and Neil logs into remote systems as "neil". We would all then escalate to root via "su -" with the automation providing the password to su. Either way, to do this automatically, all the desktop machines need to be powered and running while changing the keys. No, they don't. You just need to account for current and prior keys. I've done exactly this on a fleet of about 800 Unix systems that I helped administer at my last job. You do something like the following:
1) Log into the remote system explicitly using the prior key.
2) Append the current key to the ~/.ssh/authorized_keys file.
3) Log out of the remote system.
4) Log into the remote system explicitly using the current key.
5) Remove the prior key from the ~/.ssh/authorized_keys file.
6) Log out of the remote system.
This can be fairly easily automated. 
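A sketch of steps 2 and 5 above, with the remote side abstracted as a command prefix so the same logic can be exercised locally here (empty prefix) or via "ssh user@host" in real use; every name and key string below is illustrative:

```shell
# Append-then-remove rotation of one key in an authorized_keys file.
rotate_key() {
    remote=$1 authfile=$2 oldkey=$3 newkey=$4
    # Step 2: append the new key while still authenticated via the old one.
    $remote sh -c "echo '$newkey' >> '$authfile'"
    # Step 5: after re-authenticating with the new key, drop the old one.
    $remote sh -c "grep -vF '$oldkey' '$authfile' > '$authfile.t' && mv '$authfile.t' '$authfile'"
}

# Exercised locally with an empty prefix and a temporary file.
auth=$(mktemp)
echo 'ssh-ed25519 OLDKEYDATA laptop1' > "$auth"
rotate_key "" "$auth" OLDKEYDATA 'ssh-ed25519 NEWKEYDATA laptop1'
```

Because the new key is appended before the old one is removed, there is never a moment when the file holds no working key, which is what makes the rotation safe to automate.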
You can then loop across systems using this automation to update the key on systems that are online. You can relatively easily deal with systems that are offline later, when they are back online. -- There are ways to differentiate between offline and bad credentials during day-to-day operations. So when you hit the bad credentials, you leverage the automation that tries old credentials to update them. You end up partitioning the pool of systems into groups that need to be dealt with differently: online and doing what you want; online but not doing what you want; and offline. Changing passwords for servers and storing them in a password vault is easier to automate. I disagree. Using passwords tends to negate things like authenticating to sudo with SSH keys / certificates, thus prompting the use of NOPASSWD:. -- Grant. . . . unix || die
Re: [gentoo-user] Any way to automate login to host and su to root?
On 7/15/22 1:15 AM, J. Roeleveld wrote: Yes. Okay. That simply means that SSH keys won't be used to authenticate to the remote system. How would it not prompt for a password? There is a PAM module, pam_ssh_agent_auth, which can be used to enable users to authenticate to sudo using SSH keys. This means that the user /does/ authenticate to sudo as necessary. It's just that the authentication happens behind the scenes and they don't need to enter their password. Thus you can avoid the NOPASSWD: option, which means a better security posture. I need something that will take the password from the vault (I can do this in Python and shell-scripting. Probably also in other scripts). Authenticating to the vault can be done on a session basis and shared. So locally, I'd only login once. Sure. Currently, yes. I never physically see the password as it currently goes into the clipboard and gets wiped from there after a short time period. Enough time to paste it into the password-prompt. It's the copy/pasting that I am looking to automate into a single "login-to-remote-host" script. I would not consider the copy and paste method to be secure. There are plenty of utilities to monitor the clipboard et al. and copy the new contents in extremely short order. As such, users could arrange to acquire copies of the password passing through the clipboard. I would strongly suggest exploring options that don't use the clipboard and instead retrieve the password from the vault and inject it into the remote system without using the clipboard. Or, authenticate to sudo a different way that doesn't involve a password. This will work for 90+ percent of the use cases. Meaning that the sensitive password is needed for 10 percent or less of the time. Thereby reducing the possible sensitive password exposure. }:-) I prefer not to use SSH keys for this as they tend to exist for years in my experience. And one unnoticed leak can open up a lot of systems. That is a valid concern. 
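For reference, wiring pam_ssh_agent_auth into sudo usually amounts to two small fragments like the following. Module path, keyfile location, and the included stack name vary by distribution, so treat this as a sketch rather than drop-in configuration:

```
# /etc/pam.d/sudo
# Try the forwarded SSH agent first; fall back to the normal stack.
auth  sufficient  pam_ssh_agent_auth.so file=/etc/security/authorized_keys
auth  include     system-auth

# /etc/sudoers
# Keep the forwarded agent socket so the module can reach it.
Defaults env_keep += "SSH_AUTH_SOCK"
```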
I'd strongly suggest that you research SSH /certificates/. SSH /certificates/ support a finite lifetime /and/ can specify what command(s) / action(s) they can be used for. My $EMPLOYER uses SSH /certificates/ that last about 8 hours. I've heard of others that use SSH /certificates/ that last for a single-digit number of minutes or even seconds. The idea being that the SSH /certificate/ only lasts just long enough for it to be used for its intended purpose and no longer. The ability to specify the command, e.g. "su -", that is allowed to be executed means that people can't use them to start any other command. }:-) This is why I use passwords. (Passwords are long random strings that are changed regularly.) Fair enough. I only counter with: take a few minutes to research SSH /certificates/ and see if they are of any interest to you. -- Grant. . . . unix || die
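Minting such a certificate is a local ssh-keygen operation; a sketch with throwaway keys (the file names, principal, and 8-hour lifetime are illustrative):

```shell
# Work in a scratch directory with demo keys.
workdir=$(mktemp -d)
cd "$workdir"

ssh-keygen -q -t ed25519 -N '' -f ca     # the trusted signing CA
ssh-keygen -q -t ed25519 -N '' -f user   # the user's own key pair

# Sign the user key: valid for 8 hours, restricted to running "su -".
ssh-keygen -q -s ca -I demo-cert -n gtaylor \
    -V +8h -O force-command="su -" user.pub

# Inspect what was minted: lifetime, principal, and critical options.
details=$(ssh-keygen -L -f user-cert.pub)
```

The server only needs to trust the CA public key (TrustedUserCAKeys in sshd_config); it never has to know about individual user keys, which is what makes short lifetimes practical.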
Re: [gentoo-user] Any way to automate login to host and su to root?
On 7/15/22 1:07 AM, J. Roeleveld wrote: What I am looking for is: 1) Lookup credentials from password vault (I can do this in script form, already doing this in limited form for ansible scripts, but this doesn't give me an interactive shell) ACK You indicated you already had a solution for this. So I'm leaving it in your capable hands. 2) Use admin-account credentials to login via SSH into host When you say "admin-account", do you mean the given System Administrator's personal account or a common / shared administrative account? E.g. would I log in as myself, "gtaylor", or something shared, "helpdeskadmin"? I'm assuming the former unless corrected. Do you want the user to be prompted for the Unix account password (on the remote system) or can they use SSH keys to login without a password prompt? 3) On remote host, initiate "su -" to switch to root and provide root-password over SSH link at the right time I would suggest having the SSH command invoke the "su -" command automatically. Note: You will probably want to run a command something like this to make sure that a TTY is allocated for proper interaction with su. ssh -t @ "/path/to/su -" 4) Give me an interactive root-shell on remote-host Okay. Not what I would have expected, but it's your system and you do you. :-) When I close the shell, I expect to be fully logged out (eg, I go straight back to the local host, not to the admin-account) The nice thing about having SSH invoke the "su -" command directly is that once you exit su, you also end up exiting the SSH session. I see plenty of google results, and also as answers, for sshing directly to "root" using ssh-keys. I do not consider this a safe method. I use it for unprivileged accounts (not members of "wheel"). I don't use it for admin accounts. Thank you for the elaboration. I tend to agree with your stance. I have exceedingly few things that can SSH into systems as the root user, and they all have forced commands. 
They all have to do with the backup system which can't use sudo /or/ I want the ability to get in and restore a sudoers file if it gets messed up, thus avoiding the chicken / egg problem. Following the same security mentality, I prefer to specify the full path to executables, when possible, in order to make sure that someone doesn't put a Trojanized version earlier in the path. }:-) -- Grant. . . . unix || die
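Such a locked-down root entry in ~root/.ssh/authorized_keys tends to look roughly like the line below; the command, source address, and key material are placeholders:

```
# Only the backup host, only this one helper, and nothing else:
# no TTY, no forwarding of any kind.
from="192.0.2.10",command="/usr/local/sbin/restore-helper",no-pty,no-agent-forwarding,no-port-forwarding,no-X11-forwarding ssh-ed25519 AAAAC3placeholderkey backup@backuphost
```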
Re: [gentoo-user] Any way to automate login to host and su to root?
On 7/14/22 1:08 PM, Neil Bothwick wrote: I was accepting your point, one I hadn't considered. Ah. Okay. :-/ Here I was hoping to learn something new from you. ;-) Still a good discussion none the less. :-) -- Grant. . . . unix || die
Re: [gentoo-user] Any way to automate login to host and su to root?
On 7/14/22 9:56 AM, Neil Bothwick wrote: That is true, but it is also true about the current setup as that also gives root access. I get the impression that Joost is looking for a more convenient approach that does not reduce security, which is true here... I'm all for being /more/ secure, especially when doing so can be made to appear to be /simpler/ for the end user. I think the quintessential example of this is authenticating to sudo with SSH keys via SSH agent forwarding. It eliminates the password prompt or the NOPASSWD: option. Either way, you have better security posture (always authenticated) and / or users have a better experience (no password prompt). Well, almost true. Please elaborate. I consider it fairly difficult for non-root users to get a copy of the /etc/shadow file on most systems. Conversely, SSH private key files tend to ... leak / be forgotten. -- Grant. . . . unix || die
Re: [gentoo-user] Any way to automate login to host and su to root?
On 7/14/22 8:48 AM, Neil Bothwick wrote: Is this user only used as a gateway to root access, or can you set up such a user? If so you could use key-based authentication for that user, with a passphrase, and add command="/bin/su --login" to the authorized_keys line. That way you still need three pieces of information, Be mindful that despite the fact that this protects things on the surface, it is / can be a way to bootstrap changing this. After all, nothing about this forced command prevents the user from using the acquired root access to modify the ~/.ssh/authorized_keys file that enforces the command. This is one of the pitfalls that I alluded to in my earlier reply about security vs automation. Quite simply, this is NOT security, as it's trivial to use the access (su -) to gain more access (edit the ~/.ssh/authorized_keys file). replacing the user's password with the user's key passphrase. This is another slippery slope. SSH key passphrases can be brute forced in an offline fashion. Conversely, system passwords are more of an online attack. Assuming that standard system protections are in place for /etc/shadow*. -- It's easier to get a copy of someone's private SSH key file, especially if they are somewhat lax about its security, believing that the passphrase will protect it. -- Grant. . . . unix || die
Re: [gentoo-user] Any way to automate login to host and su to root?
On 7/14/22 3:54 AM, J. Roeleveld wrote: For security reasons, I do not want direct login to root under any circumstances. This is disabled on all systems and will stay this way. +10 for security Currently, to login as root, you need to know: - admin user account name - admin user account password - root user account password Please describe what an ideal scenario would be from a flow perspective, independent of the underlying technology. I do not want to reduce this to a single ssh-key-passphrase. Please elaborate as I suspect that the reasoning behind that statement is quite germane to this larger discussion. -- Grant. . . . unix || die
Re: [gentoo-user] Any way to automate login to host and su to root?
On 7/14/22 12:35 AM, J. Roeleveld wrote: Hi All, Hi, I am looking for a way to login to a host and automatically change to root using a password provided by an external program. Please clarify if you want to /require/ a password. I can think of some options that would authenticate, thus avoiding sudo's NOPASSWD:, but not prompt for a password. I want to know if those types of options are on the table or if they should be discarded. The root passwords are stored in a vault and I can get passwords out using a script after authenticating. Okay. Currently, I need to do a lot of the steps manually: ssh @ su - You could alter that slightly to be: ssh @ su - That would combine the steps into one. (copy/paste password from vault) Are you actually copying & pasting the password? Or will you be using something to retrieve the password from the vault and automatically provide it to su? I think that removing the human's need ~> ability to copy & paste would close some security exposures. Aside: Removing the human's ability to copy ~> know the password as a security measure can be a slippery slope, and I consider it to be questionable at best. -- Conversely, doing it on behalf of the human with a password that they know, simply as automation, is fine. I would like to change this to: I think that's doable. I've done a lot of that. I'll take it one step further and put " " in a for loop to do my bidding on a number of systems. I think the "ssh @ su -" method might be a bit cleaner from a STDIN / TTY / FD perspective. Does anyone have any hints on how to achieve this without adding a "NOPASSWD" entry into /etc/sudoers ? Flag on the play: You've now mixed privilege elevation mechanisms. You originally talked about "su" and now you're talking about "sudo". They are distinctly different things. Though admittedly they can be used in concert with each other. 
If you are using SSH keys /and/ sudo, then I'd recommend that you investigate authenticating to sudo via (forwarded) SSH keys. This means that your interactions with sudo are /always/ authenticated *and* done so without requiring an interactive prompt. Thanks in advance, There's more than a little bit here. There are a number of ways that this could go. -- Grant. . . . unix || die
Re: [gentoo-user] Change in sudoers format?
On 5/29/22 9:48 AM, w...@op.pl wrote: User xyz can execute command D on host A as user B in group C ... is just a matter of consistency ;) The group that a command is run as starts to become much more germane when you are using sudo to run commands as a different non-root user. E.g. if you want to run commands as the Oracle user to manage things about a database. In some ways this is somewhat akin to setting the GID bit on a directory so that newly created files inherit the group of the directory. At least insofar as the type of situation that would necessitate the use of this feature. -- Grant. . . . unix || die
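In sudoers syntax, "user xyz can execute command D on host A as user B in group C" comes out as a single line using the Runas specification; all of the names below are hypothetical:

```
# xyz may run sqlplus on dbhost as the oracle user with the dba group.
xyz  dbhost = (oracle : dba)  /usr/bin/sqlplus
```

Invoked as "sudo -u oracle -g dba /usr/bin/sqlplus", the command then runs with the dba group rather than oracle's primary group.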
Re: [gentoo-user] problem with saslauthd
On 5/12/22 8:42 AM, John Covici wrote: So, I went on to the sasl mailing list and someone found a patch -- seems to be available for the freebsd port, and the patch was specific to sendmail and dev-libs/cyrus-sasl 2.1.28. I modified it for gentoo and it fixed everything up! I wonder if I should file this somewhere -- funny no one else noticed this before -- I saw nothing on bgo. Hi John, I'm glad that you found a solution. I'm sorry that I've not responded to your detailed message yet. Life / $WORK has been really busy this week. I was planning on giving your message the attention it deserved this weekend. Yes, I suspect that a patch or at least a bug report to Gentoo would be good. I'd suggest starting communications with the Gentoo package maintainer if there is no better place. I expect that they will receive the patch and / or redirect you somewhere better. -- Grant. . . . unix || die
Re: [gentoo-user] problem with saslauthd
On 5/6/22 4:09 AM, John Covici wrote: So, I restored all the files I could, like sendmail.mc and the Sendmail.conf, but no joy, still no authentication mechanisms. I restored them to about the first of April. Well darn. :-/ This still leads me to saslauthd. I didn't mean to imply that it /wasn't/ SASL, just that the two are separate. Have you been maintaining your sendmail.cf via the sendmail.mc file? Or are there unaccounted for hand edits? -- I'll often test new things in sendmail.cf directly and then promote them to sendmail.mc once I have identified what I want. Likewise with submit.cf / submit.mc. Would you be willing to share your sendmail.mc and submit.mc files? Feel free to "REDACT" things as necessary. (Please make sure it's easy to tell what is redacted.) -- Grant. . . . unix || die
Re: [gentoo-user] problem with saslauthd
On 5/5/22 1:24 PM, John Covici wrote: I do have a submit.mc file, but I have not changed this at all. What is strange to me is that if I do saslauthd -v should not I get everything that my Sendmail.conf has? I would not assume so. I say that based on my understanding of how SASL and Sendmail interact. In many ways, Sendmail and SASL are two entirely separate sub-systems. Sendmail (as I usually see it configured) wholesale outsources testing authentication credentials. It does so by asking the completely independent SASL authentication daemon to test the credentials (nominally a username and password pair) to see if they are valid. SASL returns a yes / no to Sendmail. Sendmail alters what it does based on that answer. Since Sendmail and SASL are independent entities there is no reason for SASL to know anything about how Sendmail is configured. I can check an old backup and see if I have one for my sendmail.mc and get back. ACK -- Grant. . . . unix || die
Re: [gentoo-user] problem with saslauthd
On 5/5/22 10:39 AM, John Covici wrote: saslauthd is running, but it seems to ignore the Sendmail.conf . I think it's the other way around. Sendmail is told to support authentication via one or more methods, one of which can be SASL and co. The actual SASL auth daemon just listens on a unix socket and / or TCP port for clients to test authentication pairs, returning a pass / fail type message. I used openssl s_client to connect to my sendmail, it was happy with the certs, but in response to the ehlo gives me no auth line at all. :-/ Very strange. Very annoying, definitely. I don't know if it's strange yet or not. I think the strangeness will be confirmed or refuted after finding out why Sendmail isn't offering AUTH options. My favorite thing to turn to when things that used to work suddenly don't is to restore a backup of the configuration file and compare them. Can you do that with your sendmail.cf or sendmail.mc file? There's also a chance that it's your submit.cf or submit.mc file since we're talking about the MSA on port 587. (Unless you aren't using the separate MSA which has been standard for 15+ years.) -- Grant. . . . unix || die
Re: [gentoo-user] problem with saslauthd
On 5/4/22 7:31 AM, John Covici wrote: Hi. I have been using various clients to connect to my sendmail server using port 587 and using starttls to encrypt the connections and then using the plain mechanism to send the user name and password to authenticate. Last day or so this has stopped working -- I don't know that I changed anything (famous last words), Assume that your configuration is at least acceptable until you have a reason to think otherwise. So, after all that, anyone have an idea as to how to fix? Start with the simpler thing first. Is the SASL authentication daemon running? Did your (START)TLS certificate expire? Contemporary clients may silently refuse to use expired certs. Thanks. You're welcome. Feel free to poke things and respond with more questions / details / errors / etc. -- Grant. . . . unix || die
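Checking for an expired (START)TLS certificate is quick with openssl. The s_client line below is commented out because "mail.example.com" is a placeholder for your MSA; the -checkend test itself is demonstrated on a throwaway self-signed certificate so it can run anywhere:

```shell
# inspect the live STARTTLS certificate (replace mail.example.com with your MSA):
#   openssl s_client -starttls smtp -connect mail.example.com:587 </dev/null 2>/dev/null \
#     | openssl x509 -noout -dates

# the same expiry check, run locally against a throwaway self-signed certificate:
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" -days 1 \
    -keyout /tmp/demo.key -out /tmp/demo.crt 2>/dev/null
# -checkend 0 exits 0 only if the cert is not expired right now
openssl x509 -in /tmp/demo.crt -noout -checkend 0 && echo "certificate is still valid"
```

The -enddate / -checkend pair is handy in a cron job, too, so the next expiry gets noticed before clients start refusing to talk.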
Re: [gentoo-user] Fully-Defined-Domain-Name for nullmailer
On 4/13/22 6:31 AM, n952162 wrote: Unfortunately, I get a 550 from my network provider for all of these: 1. me 2. localdomain 3. net 4. web.de So, how does thunderbird do it? I don't know what name Thunderbird uses in its HELO / EHLO command(s). Though it shouldn't matter much which name is used. The important thing should be that the SMTP client, be it Thunderbird or nullmailer or something else, should authenticate to the outbound relay / MSA. The MSA should then use that authentication as a control for what is and is not allowed to be relayed. Nominally, the name used has little effect on the SMTP session. However there is more and more sanity checking being applied for server to server SMTP connections. Mostly the sanity checking is around that a sender isn't obviously lying or trying to get around security checks. These attempts usually take the form of pretending to be the destination or another known / easily identifiable lie. Mail servers that send server to server traffic actually SHOULD use proper names that validate. Clients shouldn't need to adhere to as high a standard. I consider nullmailer to be a client in this case. -- Grant. . . . unix || die
Re: [gentoo-user] Two wifi client interfaces and routing
On 3/31/22 10:17 AM, Grant Taylor wrote: I do know that the DHCP protocol supports adding additional options / definitions / parameters (?term?) to specify ... static routes. In case others are interested in this, a few pointers about using it. ISC's DHCP server has two options for advertising routes that clients should install; subnet ... netmask ... { ... option cidr-static-route ...; ... ms-static-route ...; ... } Both *-static-route options use the same format and the format took a little bit to wrap my head around. It consists of sets of the <prefix length>, followed by the significant octets of the <destination>, followed by the <router>. E.g. option cidr-static-route 10, 100, 64, 192, 0, 2, 123, 0, 192, 0, 2, 1; That says: - 100.64.0.0/10 is reachable via 192.0.2.123 - 0/0 is reachable via 192.0.2.1 ProTip: Go ahead and add the default gateway 0/0 route to the *-static-route entries as some clients ignore the option routers entry when the *-static-route option is present. I have multiple macOS, iOS, Windows 10, Linux, and other esoteric things correctly using a route to a lab / sandbox subnet via a system that isn't the LAN's default gateway. Finally: This seems to be a well defined DHCP standard, but a seemingly not well known option among the various people that I've discussed this with. -- Grant. . . . unix || die
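The on-the-wire layout both *-static-route options carry (RFC 3442 classless static routes) can be sketched in a few lines. This is an illustrative encoder that reproduces the example above, not anything dhcpd itself uses:

```python
def encode_static_routes(routes):
    """Encode (CIDR, router) pairs as RFC 3442 classless static route octets.

    Each route becomes: the prefix length, the significant octets of the
    destination (only ceil(length / 8) of them are sent), then the router.
    """
    out = []
    for cidr, router in routes:
        dest, length = cidr.split("/")
        length = int(length)
        dest_octets = [int(o) for o in dest.split(".")]
        significant = (length + 7) // 8      # /10 -> 2 octets, /0 -> 0 octets
        out += [length] + dest_octets[:significant]
        out += [int(o) for o in router.split(".")]
    return out

# reproduces the option cidr-static-route example above
print(encode_static_routes([("100.64.0.0/10", "192.0.2.123"),
                            ("0.0.0.0/0", "192.0.2.1")]))
# -> [10, 100, 64, 192, 0, 2, 123, 0, 192, 0, 2, 1]
```

The truncation of the destination to its significant octets is the part that "took a little bit to wrap my head around", and it's why the 0/0 default route contributes only five octets.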
Re: [gentoo-user] Two wifi client interfaces and routing
On 3/31/22 7:21 AM, William Kenworthy wrote: Hi, Hi, I am trying to use a raspberry pi ... to create a routed link between two access points ... so I can access the monitoring port ... from homeassistant. I'm distilling this down to a Gentoo system participating in two LANs, both of which are connected as DHCP clients. -- Correct me if I've distilled too much. -- And you want other systems on either LAN to use this system as a communications path to systems on the opposing LAN. Both AP's connect ok from the rpi but the routing is wrong - I can ping in both directions from the rpi, but only sometimes from devices further hops away - can openrc even do this? This seems like a classic routing issue. To me, it's not even an OpenRC issue in any way other than how to add static routes /after/ the network is brought up via DHCP. My experimenting so far is hit and miss. Trying to static route or override the default routes doesn't survive a network glitch, and half the time doesn't seem to "take" at all. Ya. At a higher level, it can be non-obvious how to do this, as it's a niche routing configuration. A working example I could adapt would be great! I don't have an example off hand. -- Seeing as I use static IPs on almost all of my machines, I don't even know if OpenRC supports adding a static route /after/ bringing an interface up with DHCP. I do know that the DHCP protocol supports adding additional options / definitions / parameters (?term?) to specify -- what I've been describing as -- static routes. That way DHCP clients will learn about these additional routes and install them in their local routing table. Though I don't know if you will have the necessary control over /both/ DHCP servers that's needed to do this. Presuming that you don't have control over /both/ DHCP servers (as control over /both/ will be needed), I'm going to fall back and suggest what I call the "Customer Interface Router". 
Specifically, set up port forwarding on the Pi such that when clients on LAN1 connect to $PORT on the Pi, the traffic is DNATed to the HomeAssistant on LAN2 /and/ the traffic is SNATed to the LAN2 interface on the Pi. Thus every system on each LAN thinks that it's talking to a directly attached system in the same LAN. There is no need for routing in this case. I typically only use the C.I.R. when there are reasons that more proper routing can't be configured. The C.I.R. is an abstraction layer that allows either side to operate almost completely independently of each other, save for IP conflicts between each directly attached LAN. -- Grant. . . . unix || die
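The Customer Interface Router described above boils down to two netfilter rules plus forwarding. A sketch with placeholder values (wlan0 facing LAN1, wlan1 facing LAN2, HomeAssistant at 192.168.2.50:8123, the Pi's LAN2 address 192.168.2.2 -- all assumptions to adapt):

```shell
# DNAT: traffic arriving on the Pi's LAN1 interface for the chosen port
# is redirected to HomeAssistant on LAN2
iptables -t nat -A PREROUTING -i wlan0 -p tcp --dport 8123 \
    -j DNAT --to-destination 192.168.2.50:8123

# SNAT: rewrite the source to the Pi's LAN2 address so replies
# come back through the Pi instead of trying to route directly
iptables -t nat -A POSTROUTING -o wlan1 -p tcp -d 192.168.2.50 --dport 8123 \
    -j SNAT --to-source 192.168.2.2

# the Pi must forward between the two interfaces
sysctl -w net.ipv4.ip_forward=1
```

With both translations in place, each end sees a directly attached peer on its own LAN, which is exactly why no routing changes are needed on either DHCP server.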
Re: [gentoo-user] How to run X11 apps remotely?
Some clarifications. On 3/22/22 1:28 PM, Grant Taylor wrote: Xvnc I have looked at NoMachine (a.k.a. NX) in the past. But I've not tried it myself because my work client machine has a VNC client built in and doesn't have an NX client. As in run an Xvnc server as an X11 server / display. Point your programs at that display / server. Then have a VNC client connect to said VNC server. There's another option in the VNC / NX arena, but the name escapes me at the moment. There is also the possibility of RDP and / or ICA (whatever name old Citrix technology is going by these days). If you're into retro computing, PC Anywhere / Timbuktu are options. I run programs like this on the daily. E.g. Lotus Notes 9.x running on an old CentOS 6.x VM (last supported version) displaying on contemporary Gentoo on my workstation. The latency is noticeable if you know what to look for. But the latency is also quite tolerable. To be crystal clear, my Gentoo physical machine SSHs to my CentOS virtual machine with X11 forwarding such that the Notes client shows up on my Gentoo system. It's about as stock X11 as you can get. -- I have contemplated messing with xhost / xauth (cookies) to avoid the encryption / decryption overhead. But I found that I still needed remote command execution to set the DISPLAY and launch the Notes client. SSH makes this latter part trivial while also providing the former part. This is across a switched 1 Gbps LAN in the same subnet. This works well enough that I'm considering evaluating running more programs on discrete systems / VMs / containers with X11 networking. -- Grant. . . . unix || die
Re: [gentoo-user] How to run X11 apps remotely?
On 3/22/22 10:41 AM, Grant Edwards wrote: How does one run "modern" X11 apps remotely? Xvnc As in run an Xvnc server as an X11 server / display. Point your programs at that display / server. Then have a VNC client connect to said VNC server. Using ssh -X or ssh -Y works fine for older applications, but not for things that use "modern" toolkits. Modern toolkit designers appear to have adopted a life mission to maximize the number of client-server round-trips required for even a trivial event like a keystroke in a text box. Yes. The back and forth between the X11 client (program) and server (display) is quite chatty and latency sensitive. The thing about running the Xvnc server on the same system as the X11 clients is that the latency between the two that the X11 protocol sees is effectively as small as possible. Then VNC's Remote Frame Buffer (RFB) protocol is more forgiving with latency between the VNC server and the VNC client. As a result, even with a 5-10Mbps remote connection, it takes several minutes to enter a string of even a few characters. A mouseclick on a button can take a minute or two to get processed. Resizing a window pretty much means it's time for a cuppa. Been there. Done that. Opening chrome and loading a web page can take 10-15 minutes. No activity at all on the screen, but the network connection to the remote machine is saturated at 5Mbps for minutes at a time. WTF? You also want to minimize spurious / superfluous updates that aren't actually /needed/. E.g. things fading in / out / animations. I do not want a "remote desktop". I just want to run a single application on a remote machine and have its window show up locally. You can adjust the size of the Xvnc's display so that it's the size of just the application in question. You also don't need the full desktop to display on that screen. Back in the day, I used to run X11 apps remotely through dial-up connections, and most of them were a little sluggish but still actually usable... 
The X11 protocol has changed a lot over the years. Older versions of X11 are less chatty than newer versions of X11. Reducing color depth also helps reduce the amount of data that needs to be exchanged. X11 transparent network support was its killer feature, I completely agree. Especially when you start running different programs on different systems / users / contexts. but for all practical purposes, that feature seems to have been killed. I don't think that's true. I run programs like this on the daily. E.g. Lotus Notes 9.x running on an old CentOS 6.x VM (last supported version) displaying on contemporary Gentoo on my workstation. The latency is noticeable if you know what to look for. But the latency is also quite tolerable. I find web browsing to be considerably slower than my Notes client which I use interactively on the daily, if not hourly. -- Grant. . . . unix || die
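The Xvnc arrangement described above -- X11 chatter kept local, RFB carrying the pixels over the slow link -- looks roughly like this. A sketch assuming TigerVNC's Xvnc; "your-app" and "remotehost" are placeholders:

```shell
# on the remote host: a VNC-backed X display :1, sized for one application,
# bound to localhost so only tunneled connections can reach it
Xvnc :1 -geometry 1280x800 -depth 24 -localhost &
DISPLAY=:1 your-app &

# on the local machine: tunnel the RFB port (5900 + display number) and connect
ssh -f -N -L 5901:localhost:5901 remotehost
vncviewer localhost:1
```

Because the application and Xvnc share a host, every chatty X11 round-trip completes in microseconds; only framebuffer updates cross the 5-10Mbps link.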
Re: [gentoo-user] gentoo for a virtual server in the cloud?
On 3/18/22 1:03 PM, n952162 wrote: I rent a low-cost virtual server in the cloud. The platform offers me some choices in linux distributions, but I'm wondering if I can compile gentoo to run on it. Anybody have experience doing this? I've got a Gentoo image running in Linode without any problem. I'm fairly certain that they offer Gentoo as an option when creating the VPS. It's been too long and I've messed with too many things since then. -- Grant. . . . unix || die
Re: [gentoo-user] Re: Root can't write to files owned by others?
On 3/9/22 11:50 PM, Nikos Chantziaras wrote: This is normal, at least when using systemd. How is this a /systemd/ thing? Is it because systemd is enabling a /kernel/ thing that probably is otherwise un(der)used? I ask as someone who disliked systemd as many others do. But I fail to see how this is systemd's fault. To disable this behavior, you have to set: sysctl fs.protected_regular=0 But you should know what this means when it comes to security. See: https://www.spinics.net/lists/fedora-devel/msg252452.html I read that message, but no messages linked therefrom, and don't see any security gotchas about disabling (setting to 0) fs.protected_*. I see some value in a tunable to protect against writing to files of different type in the guise of protecting against writing somewhere that you probably want to not write. Sort of like shell redirection ">" protection for clobbering existing files where you likely meant to append ">>" to them. But I am ignorant as to how this is a /systemd/ thing. -- Grant. . . . unix || die
Re: [gentoo-user] strange errors in http log, what can/should I do about it.
On 2/28/22 5:04 AM, Adam Carter wrote: If you put that url in a browser does it show your passwd file? I assume because the logs say 200 it will. If so shut down the httpd and reset all the passwords Note the question mark after the leading slash. As such, the path traversal component is for a query parameter, named f / file / filename / id. There is a reasonable chance that the web server returned the index / default page for the document root and that the query parameter didn't actually change anything. With this in mind, it would be normal to return a 200 status code for the index / default page for the document root. Check your httpd config… seems odd that an old attack like this would still work. If this did return the actual contents of /etc/password then there is quite likely a different problem in that the index / default page is accepting query parameters as paths, independent of the HTTP daemon. Aside: +1 to everything that Stefan S. said. -- Grant. . . . unix || die
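The path-vs-query-parameter distinction drawn above is easy to see with a URL parser. A quick illustration (the traversal string here is the generic pattern from such logs, not the exact logged request):

```python
from urllib.parse import urlsplit, parse_qs

# note the question mark: the traversal string is a query value, not the path
url = "/?f=../../../etc/passwd"
parts = urlsplit(url)

print(parts.path)             # -> /
print(parse_qs(parts.query))  # -> {'f': ['../../../etc/passwd']}
```

The server resolves only parts.path ("/", the document root's index page), so a 200 is expected; the traversal string never touches the filesystem unless some script deliberately treats the "f" parameter as a path.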
Re: [gentoo-user] [OT] mounting screws
On 2/20/22 10:24 AM, Peter Humphrey wrote: Hello list, Hi, I have a couple of vertically mounted easy-swap disk caddies in the back of my workstation, and I'm having trouble finding screws to mount the disk in the caddy. Clearance is nil, so the screws must be countersunk so they aren't proud of the surface. They seem to be m3 perhaps 5mm long. I just can't find any via Google. Can anyone in UK please help out? I consider screw / bolt / etc. acquisition to start with three primary things: 1) Thread identification; diameter and pitch of threads 2) Length identification 3) Head identification; what sort of screw head do you need? Once you have this information, chances are quite good that you will be able to find a screw / bolt (that you can modify) to fit your needs. I would expect the thread to be well known / documented for the type of hard drive you're using. Maybe even some of the length information related to how much goes into the drive body. You might be able to find out some head information from documents from the manufacturer of the drive sled, maybe some length information too. I have a micrometer that I use for measuring some things like this. It's an inexpensive plastic one from a local hardware store, but it gets the job done. (I'm only going to one decimal place on mm measurements.) -- Grant. . . . unix || die
Re: [gentoo-user] Reverse Proxy with Apache2
On 1/18/22 1:26 PM, Raphael Mejias Dias wrote: Hello, Hi, I've modified a little my config file: Okay. ProxyPass "zmz" "http://raphaxx.intranet:8280/zm/" ProxyPassReverse "zmz" "http://raphaxx.intranet:8280/zm/" I would expect the first parameter to be anchored / fully qualified from within the site's URL. E.g. ProxyPass "/zmz" "http://raphaxx.intranet:8280/zm/" ProxyPassReverse "/zmz" "http://raphaxx.intranet:8280/zm/" My expectation would be for this to proxy any requests to the "/zmz" path (sub-directory?) to the "/zm/" path on an HTTP server on port 8280 of raphaxx.intranet. Aside: Make sure that "raphaxx.intranet" resolves where you want it to. Be mindful of IPv4 vs IPv6. My ssl is ok, the ssl redirect is on default.conf Okay. But this ProxyReverse, I've been trying in many ways, another file, and so on, but nothing works. I have the following in a config file for a service that I disabled a few months ago. ProxyPass "/" "http://127.0.0.1:8080/" ProxyPassReverse "/" "http://127.0.0.1:8080/" This was in use in a Named Virtual Host that reverse proxied everything to port 8080 listening on localhost (127.0.0.1). Aside: Port 8080 on localhost (127.0.0.1) was actually an SSH remote port forward to a web server running on the remote client machine. You will want to adjust the source path ("/") and the destination ("http://127.0.0.1:8080/") as you need. But this is copied verbatim from a site that I disabled recently. (Disabling is typical Ubuntu / Debian remove a sym-link so that the config is not in the sites-enabled directory. No changes to the actual config file.) About the VirtualHost for the 8280, I'm guessing it was not necessary, because the 8280 is the VM and the VM has its own apache2. ACK I have a nat rule to redirect 192.168.0.15:8280 to my VM server 192.168.2.100:80 on my root server 192.168.0.15. Okay. That could be a complicating factor. You say "NAT rule". I'm taking that to mean a Destination NAT (DNAT) rule for port forwarding. 
The important bit is that it doesn't alter the source IP (SNAT). So you could potentially be running into a TCP triangle scenario. Unless you have a specific reason to use the NAT rule, I would strongly suggest altering the ProxyPass(Reverse) rules to use the proper target. ProxyPass "/zmz" "http://192.168.2.100:80/zm/" ProxyPassReverse "/zmz" "http://192.168.2.100:80/zm/" Just avoid the potential for a TCP triangle all together. Considering the potential complexity, please share what sort of errors / failures you are seeing. Given the remote nature of the real server (from the point of view of the Apache HTTPD instance), please provide output of a TCP dump for tests. Let's make sure that all the bases are covered. About Caddy, I do not want to install another server and deal with another config. I can fully understand and appreciate that. Thanks! You're welcome. -- Grant. . . . unix || die
Re: [gentoo-user] Reverse Proxy with Apache2
On 1/18/22 1:30 PM, Anatoly Laskaris wrote: Age might mean a lot when we are talking about software. Modern software usually is easier to configure, has sane defaults, is more secure and has integration with other modern software. I'll concede that those points are /possibilities/. But they are not guaranteed. And is much more popular in the community meaning better support. I do not agree that something being more common means, much less implies, better support. There are an awful lot of bad recommendations all over the Internet. I was not talking about adding software, I was talking about replacing software. But you are. Replacing something inherently implies adding and / or configuring something old with something new. Time saved in managing complex software that does a simple task can be applied elsewhere. Sometimes yes, sometimes no. In regards to "already having a software" most modern applications don't require "having" them. It works out of the box, usually with one command and you can switch parts of your infrastructure without pain thanks to containers (or statically linked binaries in golang and rust) without downtime (if done right). "if done right" is so over the top the /operative/ /phrase/ of that statement that it's not even remotely funny. Dynamic ports with service discovery == no port conflicts. There's no dynamic ports / service discovery in what the OP asked about. The OP asked how to configure a feature (reverse proxy) of the software that they are already (Apache HTTPD) using for a part of a URL (https://192.168.0.15:443/zv) for a service that's currently listening on a given IP and port pair (https://192.168.0.15:443/). So please elaborate on what the right way is to replace (as in add new and remove old) the existing software /or/ split the IP & port (192.168.0.15 TCP port 443) across multiple daemons is. I would very much be interested in learning how to do this the right way. 
I can think of many ways to do this, but all of which require something intercepting the port & IP pair at some point up stream. Not that old as apache. I take your statement to be that the Apache HTTPD developers and administrators have more experience than Nginx / caddy / traefik developers and administrators by the simple fact that it has existed longer. What /new/ thing are you using to communicate with caddy / traefik if you don't use the old crufty IPv4 / IPv6? Nginx is still widely used (contrast to apache), The first four reports I found when searching for web server popularity show that Apache and Nginx are the top two popular servers. Which one is number one depends on the report. Link - Global Web Server Market Share January 2022 - https://hostadvice.com/marketshare/server/ Link - Web and Application Servers Software Market Share - https://www.datanyze.com/market-share/web-and-application-servers--425 Link - Usage statistics of web servers - https://w3techs.com/technologies/overview/web_server Link - January 2022 Web Server Survey - https://news.netcraft.com/archives/category/web-server-survey/ My opinion is that being the first, or the close second is a good indication that Apache is still widely used. but is being replaced by caddy/traefik. Apache is ancient and I've never seen it running in production. If you've never seen the first or second most popular web server running in production, I can only question where you are looking. I know multiple people that have run Apache HTTP Server (both by Apache and rebranded by IBM / Oracle) web server in production on multiple platforms for each and every year for the last two decades. I've personally run Apache in production for that entire time. -- Grant. . . . unix || die
Re: [gentoo-user] TLD for home LAN?
On 1/18/22 1:50 PM, Rich Freeman wrote: No, I'm talking about the opposite situation. I'm talking about you have foo.local resolvable via mDNS, but not DNS - then there is a chance you won't be able to access the host. It's the same problem just opposite directions. The solution is to use something to unify the .local name in the mDNS and uDNS name spaces. This can be done via a gateway that speaks both protocols. E.g. listens for mDNS queries as well as being an authoritative uDNS server for the .local domain / TLD. It's not /simple/ but nor is it /impossible/. -- Grant. . . . unix || die
Re: [gentoo-user] Reverse Proxy with Apache2
On 1/18/22 11:24 AM, Anatoly Laskaris wrote: I'm sorry for not answering to the question directly, but why use apache2? - Because Apache is already installed and listening on the port in question. - Because that's what the OP asked about. - Because it might be IBM / Oracle HTTP Server which are re-rolls of Apache HTTP Server. - $REASONS There are modern alternatives ... Age of something doesn't mean a lot. - TCP/IP is from the 80s and yet we are still using it. - OSI is newer than IPv4. - IPv6 is newer than IPv4 and OSI. Yet we are still talking about the venerable IPv4. And something completely different like Traefik (https://doc.traefik.io/traefik/getting-started/quick-start/) which is geared towards modern cloud native infrastructure with containers and workload orchestrators like Nomad or Kubernetes. Usually you don't configure Traefik with static config file, but with metadata and annotations in K8S and Consul so it is dynamic and reactive. I view adding /additional/ software / daemons as poor form, especially when the /existing/ software can do the task at hand. Don't overlook the port conflict. Or you can use nginx (which is already considered pretty old and clunky, but it is much easier than apache still). Why start the email asking why something old is used and then finish the email suggesting the possibility of using something else old? -- Grant. . . . unix || die
Re: [gentoo-user] Reverse Proxy with Apache2
On 1/18/22 9:57 AM, Raphael Mejias Dias wrote: Hello, Hi, I'm trying to setup a reverse proxy on my apache2 server to serve another apache2 server running on a vm, basically my root apache2 is at 192.168.0.15 and my second apache2 is at 192.168.0.15:8280. My idea is to have 192.168.0.15/zm as 192.168.0.15:8280. If I understand you correctly, you want to take a sub-directory / path from a site on one port (80) and reverse proxy it to the root of another site on a different port (8280) on the same host. Am I understanding you correctly? The question is, how to do it? I need to finish my $CAFFEINE before I formulate a complete answer. But I'm sharing an incomplete answer to hopefully get you down the road sooner. I've looked up some guides, but it is difficult to setup. Like most things Apache, it's mostly difficult the first (few) time(s) you do it. Once you've done it, it's not as bad. My config: I'm redacting the things that I think aren't germane to the question at hand. ServerName 192.168.0.15 DocumentRoot /var/www/html ServerName 192.168.0.15/zm ProxyPass /zm http://192.168.0.15:8280/zm ProxyPassReverse /zm http://192.168.0.15:8280/zm Does it look any good? I question the use of "_default_" and "*", both of which on port 443. My fear is that there is a large potential for confusion ~> conflict between these two named virtual hosts. I'm also not seeing the config for the instance listening on port 8280. If the second named virtual host was put in place specifically in support of the reverse proxy, then I think you want to refactor it as a ... under the original named virtual host. The other thing that I'm not seeing is the ... configuration that I would expect to see. E.g. Order deny,allow Deny from all Allow from 192.0.2.0/24 Allow from 198.51.100.0/24 Allow from 203.0.113.0/24 Beyond that, I need to finish my $CAFFEINE, have some clarification from you, and look at specific failures. 
N.B.: The access and error log files are going to be your friend when configuring this (or really anything Apache httpd related) as they will let you know when your configuration is correct but things like permission (Allow from) are the problem. Also apache(2)ctl configtest is your friend. Thanks. You're welcome. -- Grant. . . . unix || die
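Pulling the suggestions above together, a hedged sketch of what the refactored single virtual host could look like. Addresses and paths are taken from the thread; access control is shown in Apache 2.4 "Require" style rather than the 2.2-era Order/Deny/Allow quoted above, and the example network is an assumption to adjust:

```apache
<VirtualHost _default_:443>
    ServerName 192.168.0.15
    DocumentRoot /var/www/html

    # sub-path /zm/ proxied to the second apache2 on port 8280
    ProxyPass        "/zm/" "http://192.168.0.15:8280/zm/"
    ProxyPassReverse "/zm/" "http://192.168.0.15:8280/zm/"

    <Location "/zm/">
        # restrict who may reach the proxied application
        Require ip 192.168.0.0/24
    </Location>
</VirtualHost>
```

Run "apachectl configtest" after editing, and watch the error log on the first few requests.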
Re: [gentoo-user] Kernel config thingy, "make menuconfig"
On 1/15/22 7:47 AM, tastytea wrote: Did you know you can search with / and then jump to the results with the number keys? I've been using the search for decades*. But I didn't know about the number keys to jump until reading this message and trying it. #TIL *Yes, I've been using Linux for more than two decades. It's been my primary desktop for almost all of that time too. -- Grant. . . . unix || die
Re: [gentoo-user] TLD for home LAN?
On 1/15/22 3:33 AM, Peter Humphrey wrote: Hello list, Hi. Rich F said recently, "I'd avoid using the .local TLD due to RFC 6762." Ya I've read RFC 6762 in the past and I just skimmed part of it again. I didn't find anything that prohibited the use of the local top level domain for things other than mDNS et al. The only hard requirement that I did see is that if mDNS is used, that queries for .local /MUST/ be sent to mDNS. N.B. that does not preclude /also/ sending queries for .local to other name resolution systems like traditional unicast DNS. Ergo, RFC 6762 does not preclude the use of the local top level domain in traditional unicast DNS. That brings me back to a thorny problem: what should I call my local network? Maybe it's just me, I'm weird like that, but I vehemently believe that *I* am the authority for the names of *MY* network(s). As such, whatever name /I/ choose is the name that /my/ network(s) will use. I don't care that a cable internet provider wants my router to be called .. What's more is that I don't fathom, much less allow, the cable company's -- let's go with -- questionable naming have any influence on what my internal network is called. It used to be .prhnet, but then a program I tried a few years ago insisted on a two-component name, so I changed it to .prhnet.local. There are /some/ complications that may have some influence on what names are chosen. But I point out that your network quite likely did exactly what you wanted to do up until that point. Q: Did you continue to use the software that you tried? Or did you end up renaming your network for something that you are no longer using? }:-) Now I've read that RFC - well, Appendix G to it - and I'm scratching my head. I note the distinct absence of the quintessential SHOULD or MUST that RFCs are notorious for in RFC 6762 Appendix G. So ... I don't give the recommendation therein much credence. 
What's more is that RFC 6762 Appendix G fails to take into account gateways that bridge mDNS into Unicast DNS. E.g. they receive an mDNS query and gateway it to the configured uDNS. Thereby (mostly seamlessly) tying the mDNS and uDNS name space together. I really feel like RFC 6762 is a "you might want to consider not using the .local top level domain on the off hand chance that you ever have something that can't / won't work with it." I suppose it's possible that someone may want to connect an Apple device to my network, so perhaps I should clear the way for that eventuality. Is that possibility significant enough to influence how /you/ run /your/ network? /me puts his hand up to block glare looking out over the horizon looking for the SHOULD and MUST statements again, still not finding them. I can tell you that I have first hand experience with using Apple devices on a network that used the local top level domain without problems. So, what TLD should I use? Should I use .home, or just go back to .prhnet? It isn't going to be visible to the Big Bad World, so does it even matter? Use whatever TLD you want to use. Be aware of any potential gotchas and decide if they are worth avoiding or not. The old fable of "The Miller, his son, and the donkey" comes to mind. -- Make yourself happy. -- Grant. . . . unix || die
Re: [gentoo-user] BIND Configuration for DNS
On 1/14/22 8:45 AM, Raphael Mejias Dias wrote: Hello, Hi, I'm trying to configure BIND for a local DNS server, but I'm not sure that it's ok. Based on your other comments, it seems as if there is more of a question about overall DNS configuration and operation than about the BIND DNS server (named) itself. Basically, I'm wanting to create an internal address like intranet.local, Okay. this way, I can change the internal IP address, without the obligation to reconfigure the client machines to lookup the new IP, only changing the DNS lookup table. It sounds like you might be referring to updating DNS vs updating the hosts file. First, I had followed the Gentoo Wiki and after I tried BIND official documentation. ACK I've realized the network PCs did not find the DNS address, only the localhost can find it, I'm assuming that means the server running BIND (named). when I force the DNS, the client PC cannot access the internet anymore. I'm assuming that means that BIND (named) is working and doing what you want with regard to the local / internal domain name. With these assumptions, it seems to me like BIND (named) is working and that it is likely not configured to allow clients to perform recursive queries. Assuming this is the case, you need to change the allow-recursion parameter to allow the LAN clients to perform recursive queries. This is predicated on the system BIND (named) is running on being able to access the internet to query external resources on behalf of the LAN clients. If someone knows a guide to help, I'll be glad to know. Please reply if any of my assumptions are wrong or if you have other questions. Thanks. You're welcome. -- Grant. . . . unix || die
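[Editorial aside: the allow-recursion knob mentioned above looks something like the following in named.conf. This is a sketch; the ACL name and the LAN network / listen addresses are placeholders, not values from the thread:]

```
// Hypothetical named.conf fragment allowing LAN clients to recurse.
acl "lan" { 127.0.0.0/8; 192.168.1.0/24; };

options {
        // Answer recursive queries, but only from localhost and the LAN.
        recursion yes;
        allow-recursion { lan; };
        // Make sure named also listens on the LAN-facing address.
        listen-on { 127.0.0.1; 192.168.1.1; };
};
```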
Re: [gentoo-user] installing virtual machine under gentoo
On 1/2/22 12:14 AM, John Covici wrote: OK, I fixed it, the group name was wrong when I tried the last time, I had libvirtd and its only libvirt and that seems to have fixed things. Thank you for the clarifying follow up. Here's hoping you save someone else some time in the future. :-) On 1/2/22 9:58 AM, John Covici wrote: OK, more progress and a few more questions. Yay progress! In the virt-manager, I could not figure out how to add disk storage to the vm. I have a partition I can use for the disk storage -- is this different from the virtual machine image? It depends.™ KVM / libvirt / Qemu can use raw partitions, files on a mounted file system, logical volumes, ZFS vDevs, iSCSI, and other things for storage. Each one is configured slightly differently. So, which method do you want to use? I'd suggest that you /start/ with files on a mounted file system and then adjust as you need / want to. At least as long as you're getting your feet wet. From memory, you need to define a directory as a storage location to KVM / libvirt. -- I'm not currently using KVM so I'm working from a mixture of memory and what I can poke without spinning things up. 1) Open VMM (virt-manager). 2) Select the KVM host in the window. 3) Edit -> Connection Details 4) Go to the Storage tab. 5) Click the plus below the left hand pane. 6) Choose and enter a name for the storage pool. 7) Choose "dir: Filesystem Directory" as the type. 8) Choose a target path by typing or browsing to it. 9) Click Finish. Now the storage pool you created should appear as an option when creating a VM. Of even more importance, how do I bridge the vm onto my existing network? This is also done through host properties on the Virtual Networks tab. I don't remember the specifics (and can't walk through it the same way for reasons). 
I usually did most of the management via the /etc/conf.d/net file as I do a lot of things with networking that few things can properly administer (802.3ad LACP, 802.1q VLAN, bridging, l2 filtering, l3 filtering, etc). What I remember doing was re-configuring the (primary) network interface so that it came up without an IP address and was added as a member to a newly created bridge. As part of that I moved the system's IP address(es) from the underlying Ethernet interface to the newly created Bridge interface. With the bridge created and managed outside of VMM (virt-manager) I was able to add new VMs / containers to the existing Bridge interface. Thus establishing a layer 2 connection from the VM(s) / LXC(s) to the main network. Note: This is somewhat of a simplification as there are VLANs and multiple physical interfaces with many logical interfaces on the machine that I'm replying to you from. However, I believe, the concepts hold as I've written them. I have a nic for internal items named eno1 and another nic which connects to the outside world, I would like to bridge to the internal network, that would give the vm a dhcp address, etc. If you have a separate physical NIC, as I had suggested starting with, then you can avoid much of the bridge & IP re-configuration in the /etc/conf.d/net file and /mostly/ manage an independent bridge on the additional NIC from within VMM (virt-manager). The 2nd NIC means that you don't end up with a chicken & egg problem trying to administer a network interface across the network, which is how I do much of my work. Re-configuring things through the console also simplifies things in this regard. -- Grant. . . . unix || die
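[Editorial aside: the /etc/conf.d/net re-configuration described above looks roughly like this in netifrc syntax. A sketch; the interface name, addresses, and gateway are examples, not values from the thread:]

```
# Hypothetical /etc/conf.d/net fragment: move the host's IP from the
# Ethernet interface to a bridge, so VMs can join the same layer 2.
bridge_br0="eno1"
config_eno1="null"             # underlying NIC comes up with no address
config_br0="192.168.1.10/24"   # host's IP now lives on the bridge
routes_br0="default via 192.168.1.1"
rc_net_br0_need="net.eno1"     # bring the NIC up before the bridge
```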
Re: [gentoo-user] installing virtual machine under gentoo
On 1/1/22 11:05 PM, John Covici wrote: Well, I found out something. If I go to the file menu, I can add the connection manually and it works, That sounds familiar. but I wonder why I have to do that? Because the KVM Virtual Manager is designed such that it can administer KVM / libvirt / qemu on multiple systems. It's really client-server infrastructure. You're just needing to point the client at your local server one time. Also, before I do anything, it asks me for the root password and says system policy prevents local management of virtual machines. Do you know why this is so? This also seems familiar. Try re-starting the libvirt / kvm daemons. They may not be aware that your user is now a member of the proper group. -- Aside: This is why a reboot is ... convenient, but not required. This /should/ be taken care of by proper group administration for your normal user. I ran into this a long time ago when I set up KVM on my last Gentoo system. I don't remember exactly what I had to do to resolve it. I do know that it was less than five minutes of searching the web to find the answer, cussing at what needed to be done, and doing it. That system has been running perfectly fine for many years. -- Grant. . . . unix || die
Re: [gentoo-user] installing virtual machine under gentoo
On 1/1/22 10:07 PM, John Covici wrote: Maybe I have to log out of everything with my user name even though most of the logins are to virtual consoles? You typically need to log out of X11 sessions and log back in for them to see the new groups. But you say "virtual consoles", which tells me (Control)-(Alt)-(F#) which means that any given virtual console should be able to see the new groups if it logs out and logs back in, even if others stay logged in. -- Grant. . . . unix || die
Re: [gentoo-user] installing virtual machine under gentoo
On 1/1/22 1:19 PM, Mark Knecht wrote: In my experience it often takes either a logout/in or a reboot Ya Depending on what you actually /need/ to use the new group for you can probably ssh to localhost or possibly use the `newgrp` command to switch your primary group to the group that you've been added to which hasn't been loaded (?) instantiated (?) ... in the current session. -- Grant. . . . unix || die
Re: [gentoo-user] installing virtual machine under gentoo
On 1/1/22 6:04 PM, John Covici wrote: It more seems to have to do something with the uri -- libvertd is certainly running, and I added myself to the kvm group, but still get qem/kvm not connected. Run `id` as your current user and make sure that it's showing the kvm & libvirt groups. -- Grant. . . . unix || die
Re: [gentoo-user] installing virtual machine under gentoo
On 1/1/22 12:08 PM, John Covici wrote: OK, I made some progress -- I emerged qemu/kvm packages including libvirtd and virt-manager came along. Now, when I start virt-manager, it complains the qemu/kvm not connected. I am running virt-manager as my regular user. Make sure that libvirtd is running: # rc-service libvirtd status Also: # rc-update add libvirtd default You may need to add your user account to -- what I think is -- the "kvm" group. (Don't forget the usual dance when adding yourself to a new group.) -- Grant. . . . unix || die
Re: [gentoo-user] Re: configure "net-mail/mailutils" - non-answer / drive by comment
On 12/31/21 4:50 PM, the...@sys-concept.com wrote: Thanks for the hint. Yes, it works. I think it is the best solution for now. You're welcome. A simple .forward works in most cases. Though it may run into typical forwarding problems (SPF, DKIM, etc.). But you're probably fine with what you're doing. I had some problems deleting mail with "net-mail/mailutils" program. *nod* -- Grant. . . . unix || die
Re: [gentoo-user] Re: configure "net-mail/mailutils" - non-answer / drive by comment
On 12/31/21 3:58 PM, the...@sys-concept.com wrote: How do you configure "~/.forward"? echo "u...@example.net" > ~/.forward That will cause most MTAs to forward message for your local user to the u...@example.net email address. -- Grant. . . . unix || die
[gentoo-user] Re: configure "net-mail/mailutils" - non-answer / drive by comment
I don't have an answer for you, but I do have a drive by comment. On 12/31/21 3:09 PM, the...@sys-concept.com wrote: I'm trying to find a solution to read and delete local mail in: /var/mail/[user] as Thunderbird discontinued support for reading local mail directory (movemail). This type of need is why I have ~/.forward files on most systems so that email to my local account is forwarded to (one of) my primary email accounts that Thunderbird does check. -- Grant. . . . unix || die
Re: [gentoo-user] installing virtual machine under gentoo
On 12/31/21 8:12 AM, Rich Freeman wrote: ++ +++ to KVM / libvirt / VirtManager (GUI) This is just a front-end to libvirt and kvm, so you're building entirely on solid technologies, and anything you set up with the GUI can be edited or run or otherwise managed from the command line, and vice-versa. Close, but not quite. Yes, anything that can be done in the GUI can be done at the CLI / config files. Though I have had some more esoteric things that had to be done at the CLI / config files that couldn't be done in the GUI. This usually has to do with more advanced things like iSCSI, Fibre Channel, ZFS pools / dataset per guest, etc. The vast majority of the things that someone starting with KVM will want to do can be done with the Virtual Machine Manager GUI. It ends up resembling something like VirtualBox or the old VMWare Workstation edition, but it is all FOSS and in-kernel so it just is more reliable/etc. Yep. There are only so many ways that you can present a concept; inventory of VMs, VM console, VM management. They start to look similar after a while. That said, I only use VMs situationally and at this point just about everything I'm doing is in containers if it can be linux-based. Way lighter all-around, even if I'm running a full OS in the container. I personally prefer to run my containers with nspawn and virtual ethernet, so each container gets its own IP via DHCP. The Virtual Machine Manager GUI can also administer / manage some aspects of containers. I would highly suggest giving the Virtual Machine Manager GUI for KVM+libvirt+qemu a try. It is probably the quintessential Linux virtualization method. Oh, and for kvm if you want to run your guests on your main LAN you'll probably need to set up a bridge interface. Yes, bridging is very nice and is my preferred way for most VM use cases. Though it might be a bit more than someone wants to tackle while getting their feet wet with virtualization. 
Especially if you're trying to share a single NIC for other aspects of the hosting system. It can all be done, but there is a lot of minutiae (methods and configurations therein) that are easy to get lost in. I'd probably recommend a second NIC, even if it's an inexpensive USB NIC just for the virtualization. Doing that will avoid complexities that don't need to be dealt with /now/. -- Reduce the number of variables that you're working with at one time. -- Grant. . . . unix || die
Re: [gentoo-user] ssh problem
On 12/26/21 9:42 AM, Philip Webb wrote: I want to login to a remote site using 'ssh'. The response I get is "Unable to negotiate with port : no matching host key type found. Their offer: ssh-rsa,ssh-dss". Yesterday, I updated 'openssh' : Michael's pointing in the proper direction. Check out the OpenSSH Legacy Options page for more details. I've successfully used this information to log into Red Hat 5.x from the '90s. (Not contemporary RHEL.) Link - OpenSSH: Legacy Options - https://www.openssh.com/legacy.html Note: This works exceedingly well in the ssh client config file (~/.ssh/config or /etc/ssh/ssh_config). Using the config file means that anything that uses OpenSSH commands benefits from and inherits the configuration parameters; rsync, git, what have you. -- Grant. . . . unix || die
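[Editorial aside: the legacy options referenced above look something like the following in ~/.ssh/config. A sketch; the host name is a placeholder, and exact option names depend on your OpenSSH version (older releases spell the second option PubkeyAcceptedKeyTypes):]

```
# Hypothetical ~/.ssh/config stanza re-enabling legacy algorithms for
# one old host only, rather than weakening the defaults globally.
Host oldbox.example.com
    HostKeyAlgorithms +ssh-rsa,ssh-dss
    PubkeyAcceptedAlgorithms +ssh-rsa
    KexAlgorithms +diffie-hellman-group1-sha1
```

Scoping the options to a single Host stanza is why the config-file approach beats command-line -o flags: rsync, git, and anything else that shells out to ssh inherits it automatically, and only for that host.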
Re: [gentoo-user] Local mail delivery agent (MDA) wanted
On 12/20/21 3:37 PM, Wol wrote: You mean the body sans envelope? Kinda, sorta, yes, no, maybe. I'd have to compare the two formats to be able to say more definitively, or with any certainty. But, ya, that's the /type/ of difference that I'm thinking of. Aside: What /actually/ is the body vs envelope? In SMTP it's simple: everything between the DATA and the trailing . is the body; the MAIL FROM: and RCPT TO(s) are envelope. But RFC 822 messages on disk, that's a bit of a horse of a different color. It's easy to differentiate body from headers, but envelope??? -- Grant. . . . unix || die
Re: [gentoo-user] Local mail delivery agent (MDA) wanted
On 12/20/21 3:09 PM, Frank Steinmetzger wrote: Delivery works on both systems :-) (with a little caveat, see second-last paragraph). ;-) At first I believed that both systems used mail from GNU mailutils. But I erred: Ya. Determining /which/ implementation of a command is being used can be ... difficult at times. My Gentoo NAS only has mailutils installed. But while I have that also installed on Arch, I was in fact using s-nail’s mail program there. Mailutils installs its mail as /usr/bin/gnu-mail instead, which allows both packages to co-exist (which Gentoo does not). Yep. That could be an issue. It's not bad if you are aware of it and can account for it. So I tried gnu-mail on Arch, but this does not move read mail away upon exit like its Gentoo cousin. That might be a default configuration / rc file somewhere in /etc. I did more trials, wrote a lengthy description of it into this message and threw them away again, so I wouldn’t bore you. /me chuckles mildly. I doubt that you would bore me. I might also learn something about Arch and possibly even Gentoo. But those bits have already gone through the circuit. (Portmanteau of "water under the bridge"?) In the end I gave up, removed Gentoo’s mailutils and went with s-nail. And now it works. ¯\_(ツ)_/¯ It sounds like you've achieved your goal /and/ that we (at least you and I) learned some things along the way. :-D Maybe building dma from source broke some stuff, because it installed into /usr/local. `echo foo | mail root` (mail from mailutils) produces mail that remains in dma’s queue, whereas `echo bar | sendmail root` (/usr/local/sbin/sendmail from dma) gets the mail delivered to the spool file. I would expect the `mail` and `sendmail` commands (binaries / scripts) to do slightly different things. Both should accept messages on STDIN. But what they do with them might be different. 
Leaving the message in the spool makes me think that there is expectation that something else, maybe even another part of DMA, will do something with the spooled message(s). Maybe DMA is expecting some sort of cron job to work the mail queue. But the latter mails were missing vital headers and thus mail had a problem displaying them properly. That sounds like raw, unprocessed email to me. It’s all a bit voodoo-esque to my simple-minded user’s point of view; confusion over many implementations of the same standard; The wonderful thing about standards is that we have so many to choose from. }:-) they should interoperate, Ideally. but maybe don’t, or maybe I did not configure them properly. See above comment about cron et al. plus the overly complex configs and info documentation on GNU’s side which keeps me away. It must have been great days back in the 80s. I wish I had experienced those times and machines. Email is ... non-trivial, to put it mildly. There are many different things that interact and each behaves slightly differently while doing a different part of the job. -- Grant. . . . unix || die
Re: [gentoo-user] Local mail delivery agent (MDA) wanted
On 12/20/21 12:08 PM, Frank Steinmetzger wrote: There is one last niggle: after I read a message with the mail tool, it saves those messages in /root/mbox. It does not do this on Arch, but keeps them in /var/spool/mail/root instead. This sounds like the doing of your mail user agent. The MTA+LDA receive and deliver the mail (respectively) to the user's mailbox. The MUA is what reads / modifies the mailbox. So ... compare the email client that you're using between the two systems. -- Grant. . . . unix || die
Re: [gentoo-user] Local mail delivery agent (MDA) wanted
On 12/18/21 4:00 PM, Frank Steinmetzger wrote: Just for the record and completeness’ sake: ... I found out that the program was actually called dma -- the DragonFly BSD mail transport agent, not mda. Thank you for sharing your find Frank. The DragonFly BSD MTA looks interesting. I'll have to check it out. Especially if it's small and intended for local delivery and / or getting messages off of the box all the while without exposing an SMTP port. -- Grant. . . . unix || die
Re: [gentoo-user] Local mail delivery agent (MDA) wanted
On 12/15/21 1:21 PM, Laurence Perkins wrote: So one thing that's annoyed me for a while is that there are several things which will pull in nullmailer to accept local mails, but don't pull in anything to do local delivery (And I'm not sure if nullmailer can even pass things to local delivery) so your local delivery mails by default just stack up in the nullmailer outbound queue unless you configure it to pass them off to an external mail system. Since the most commonly used of these programs are things like cron where local delivery is probably the only thing most users would care about it might be nice if the default configuration were one that does that, and then those who want local mail relayed elsewhere still don't have any significant extra setup work to do. The idea of having a default mail configuration that would deliver locally originated messages (e.g. from cron) to local user's mailboxes (mbox) in /var(/spool)/mail makes sense to me. I don't think I'd /personally/ use it b/c I run full MTAs on all my systems. But that's /me/. I realize that I'm atypical. But I would +1 a simple config that does local delivery from " | mail ${USER}" to end up in "/var/spool/mail/${USER}". -- Grant. . . . unix || die
Re: [gentoo-user] Local mail delivery agent (MDA) wanted
On 12/13/21 3:12 PM, Frank Steinmetzger wrote: Using strace, I found out that mail from mailx puts those mail into /var/spool/clientmqueue/, one file per mail, but not in a maildir structure. Yes, the /var/spool/clientmqueue is the mail queue for outgoing messages from clients. Hence the name "client m(ail) queue". OK, I found out that this is the usual outgoing queue which needs to be processed by sendmail, probably through another cronjob or a process that itself checks that directory periodically. Sendmail is quintessentially a daemon that's running all the time. As such it usually does its own scheduling and does not depend on external scheduling. In many places I read that system mail—by default—goes into /var/spool/mail/, but until now I’ve yet to observe this behavior. /var/spool/mail/ and /var/mail/ are the quintessential locations for mbox based inbound email storage. Note: There are a number of other fancy client mail storage routines that don't use files in this path. It’s really not easy to find a description of the default setup of olden days (or I’m simply using the wrong search terms). Because when you search for something like unix local mail setup, most results are about setting up an SMTP server. In hindsight—perhaps that is simply the way to go. :-/ You will quite likely need a Mail Transfer Agent to receive the email, either via command (mail(x) / sendmail / etc) or read from a queue location like /var/spool/clientmqueue and then deliver the messages to where they belong. There /may/ be an alternate "mail" command that does all of this in one function. But I'd be surprised to learn about such. Most of the surprise is because it would be combining three distinct parts of the email flow: the Mail User Agent (a.k.a. MUA) generating the original outgoing message, the Message Transfer Agent (a.k.a. MTA) to receive the original message and do something with it, and the Local Delivery Agent (a.k.a. LDA) to put the message in the proper location. 
The originating MUA can frequently be substituted at will with "mail", "mailx", and "nail" being three CLI-based ones that come to mind immediately. The MTA can frequently be one of many with Sendmail, Postfix, Courier, Exim coming to mind. The LDA can easily be one of the following; procmail, maildrop, Courier, and something super simple I don't remember the name of because I've not used it in so long. -- Grant. . . . unix || die
Re: [gentoo-user] Local mail delivery agent (MDA) wanted
On 12/13/21 12:40 PM, tastytea wrote: mail-client/mailx provides /usr/bin/mail which can be used for looking at mail in/var/spool/mail/ and for sending it to local users. No configuration necessary. cron and other software will automatically use it. For some reason I thought that mailx (and / or nail) relied on a local MTA+LDA. Even if it's only listening on 127.1. -- I guess I'm wrong. -- Grant. . . . unix || die
Re: [gentoo-user] Bash prompt colours
Some drive-by after-the-fact comments: On 12/6/21 4:03 PM, Frank Steinmetzger wrote: [ "$MC_SID" ] && PS1_JOBS_COUNT="${PS1_JOBS_COUNT}MC " [ "$RANGER_LEVEL" ] && PS1_JOBS_COUNT="${PS1_JOBS_COUNT}R " I've taken to using things like the following: PS1_JOBS_COUNT="${PS1_JOBS_COUNT}${MC_SID:+MC }${RANGER_LEVEL:+R }" Leverage Bash's (and Zsh's) expansion conditional. If the variable is set, then expand it to a different value. ${VARIABLE:+alternate text to show if VARIABLE is set} if [[ -z "$PROMPT_COMMAND" ]]; then PROMPT_COMMAND=__jobsprompt else PROMPT_COMMAND="$PROMPT_COMMAND ; __jobsprompt" fi Is there a reason to not simply do the following, eliminating the if conditional: PROMPT_COMMAND="${PROMPT_COMMAND:+${PROMPT_COMMAND} ; }__jobsprompt" -- Grant. . . . unix || die
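[Editorial aside: a short demonstration of the ${VAR:+...} expansion discussed above. MC_SID, RANGER_LEVEL, and __jobsprompt are just stand-ins for whatever variables and functions you actually use. Note the closing brace goes before the appended name, so the empty case still gets a value:]

```shell
# ${VAR:+text}: expands to "text" only when VAR is set and non-empty.
MC_SID=12345        # pretend we're inside Midnight Commander...
unset RANGER_LEVEL  # ...but not inside ranger
PS1_JOBS_COUNT=""
PS1_JOBS_COUNT="${PS1_JOBS_COUNT}${MC_SID:+MC }${RANGER_LEVEL:+R }"
echo "[${PS1_JOBS_COUNT}]"   # -> [MC ]

# Appending to PROMPT_COMMAND without an if: the " ; " separator is
# only emitted when PROMPT_COMMAND already has a value.
PROMPT_COMMAND=""
PROMPT_COMMAND="${PROMPT_COMMAND:+${PROMPT_COMMAND} ; }__jobsprompt"
echo "$PROMPT_COMMAND"       # -> __jobsprompt
PROMPT_COMMAND="${PROMPT_COMMAND:+${PROMPT_COMMAND} ; }__other"
echo "$PROMPT_COMMAND"       # -> __jobsprompt ; __other
```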
Re: [gentoo-user] Re: Switching from eudev to udev, disaster.
On 12/1/21 10:02 AM, Grant Edwards wrote: IIRC, there are situations where using udev rules to rename them "ethN" based on MAC addresses will fail because that can conflict with the low-level kernel names. Or something like that. I don't think I ever ran into a problem re-using the original kernel eth# names /as/ /long/ /as/ the target name wasn't currently in use. Sometimes I needed to vacate the target name before I could re-use it. -- Grant. . . . unix || die
Re: [gentoo-user] Switching from eudev to udev, disaster.
On 11/30/21 1:56 PM, Laurence Perkins wrote: So the old inconsistency was a super-bad kind of inconsistency. The interfaces got named based on the order in which the devices were discovered. Which, on a lot of systems, meant that every boot was essentially rolling the dice on a race condition. -guppy mouth- If you only have one device, you're fine. If your devices consistently come up in the same order, you're fine. If there's jitter though then things can easily get messy, and do so unexpectedly. I guess I never really gave the renaming much thought because I almost always compiled drivers into the kernel, which meant that they had a consistent ~> predictable enumeration and naming order. The new naming scheme names devices based on where they show up on the bus. This has its own issues. It means that USB adapters get different names when plugged into different slots. It means that adding or removing other PCI bus devices can change the bus address and therefore the name of your network interfaces. Thank you for that explanation. It describes what I witnessed perfectly. I've seen motherboard firmware updates do the same. Oy vey! But, at least in theory, this inconsistency should be triggered by something you *know* about unless hardware is getting added and removed by someone else without your knowledge. One would hope. If you only have one interface though and tweak your hardware regularly then you'll probably be happier to put it back to the old naming scheme because with only one device it should always be eth0. Indeed. -- Grant. . . . unix || die
Re: [gentoo-user] Switching from eudev to udev, disaster.
On 11/30/21 12:58 PM, Dale wrote: What I noticed in dmesg is that it takes the old name, eth0 for example, and then renames it to the new name. I don't know if it's the /kernel/ that does the renaming, or not based on the kernel parameter, or if it's something else very early in the boot that does the renaming. Well, if one moves things around and eth0 becomes eth3 then doesn't that mess up the new name as well? My understanding is that the new name is -- supposed to be -- based off of some property of the device. I assume that said property is from something akin to where lspci gets its data. Probably something exposed in /proc and / or /sys via the actual driver that ultimately gets fed into the renaming routine. That could be why you see the results you have. It's hard to base a name on something that is changing itself. My understanding is that the new name is supposed to be completely independent from and not derived using the old name. So the old naming should have no influence on the new name. It would seem to me that if they were going to change things for real, they would change what the kernel names it in the beginning and it uses the name it was first given based on slot or something else unique. Agreed. As in have the driver instantiate the device with the new name from the outset. In other words, have the kernel assign it enp2s3 or whatever when booting and that is the only name it gets. Yep. I don't know /why/ or /where/ the failure is with the new names. I just know that I have seen instability in them. Seeing as how stability ~> predictability is the motivation for the rename, well, that's a failure in my opinion. Besides, it's a LOT easier to /just/ `tcpdump -nni eth0` when logging into a machine than it is to have to figure out the interface name first. That being said, I was okay with what CentOS 6.x did, where the new name was matched against the MAC address. 
I had eth0 based on MAC for outside and eth1 based on MAC for inside on a number of systems. -- Grant. . . . unix || die
Re: [gentoo-user] Switching from eudev to udev, disaster.
On 11/28/21 9:50 AM, Jack wrote: The network name switch ... is not directly due to eudev vs. udev, but to the "new" ... switch to consistent naming ... so your network is probably something like enp20s2, reflecting which slot your network card is physically in. Except I've had multiple instances where the supposed to be consistent naming is anything but consistent. I don't know if it was a udev issue or something else. But I've seen the actual address of cards in the system change based on what other cards are added to / removed from the system. It seems as if the motherboard re-configured addressing with the hardware change. E.g. NIC1 in PCIe slot A and NIC2 in PCIe slot C. NIC2 changed from (hypothetical) enp20s2 to enp16s2 when NIC1 was removed from PCIe slot A. So ... if the new naming scheme isn't consistent, then I'm not going to give it the time of day. I'd rather have the older and simpler inconsistent naming scheme (eth#) vs the newer and more annoying scheme en{po}\d\d{,s}{,1,2,3}. The epiphany when I saw that the supposedly consistent names weren't was a real son of a REDACTED moment. I'm pretty sure there is a kernel boot parameter which forces the old way, but can't find it now, as I switched to the new naming with eudev, so switching to udev didn't break anything for me. As Neil B. pointed out, "net.ifnames=0" is now on all my kernel boot lines (for the above reason). -- Grant. . . . unix || die
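[Editorial aside: for the archives, net.ifnames=0 goes on the kernel command line. With GRUB2 that's roughly the following; a sketch, and any existing flags in your GRUB_CMDLINE_LINUX would stay alongside it:]

```
# Hypothetical /etc/default/grub fragment. After editing, regenerate
# the config, e.g.: grub-mkconfig -o /boot/grub/grub.cfg
GRUB_CMDLINE_LINUX="net.ifnames=0"
```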
Re: [gentoo-user] Do I need NUMA set up in my kernel?
On 9/23/21 4:39 AM, Miles Malone wrote: You'd need NUMA if you had a NUMA machine. In current context, that would be either a) a dual socket system, b) an amd threadripper, or c) some of the really high core xeons. If your motherboard doesn't have certain memory banks allocated to certain processors or cores, you're probably not running a NUMA machine. Will a kernel without NUMA support boot and run on a system that has a NUMA architecture? If it will boot and run, does it simply do so in a sub-optimal way? Flipping the coin on the other side, is there any negative effect (other than kernel size / lines of code / attack surface) for having NUMA support enabled on a non-NUMA system? -- Grant. . . . unix || die
Re: [gentoo-user] Multi-user login manager
On 7/12/21 2:21 PM, antlists wrote: Two problems - I would like to run without X, but it seems that the greeters need X to run ... I'm not familiar with the term "greeter", but I assume that you're referring to the display manager that functions as the GUI login screen. Also I want to run a multi-user system. I know you can put multiple monitors on one graphics card, and that gives you a multi-head system, but I've got TWO graphics cards. I want to plug in two keyboards, two mice, and have two users sitting there. I would naively assume ~> expect that this is possible. I would expect that you would probably need two different graphics cards. I'm guessing no more than one set of PS/2 keyboard & mouse and that the other (if not both) will be USB. Configuration may be ornery, but I would assume ~> expect that this is possible to do. After all, you're really talking about having the system function as two independent X11 servers, one for each set of keyboard, mouse, monitor. This is eminently doable with external X11 servers. I see no reason other than ornery configuration that you shouldn't be able to do this. I'm thinking old school X11. My expectations may not translate to contemporary systems. But I would be shocked if not flabbergasted if it was not possible to do what you want. Word to the wise: USB devices, especially multiple of the same type, can be annoying to deal with. You may want to look at a udev (et al.) rule to create custom device names (likely based on device serial number) and use said custom device names in your configuration files. From what I can make out, this isn't possible with sddm. Lightdm looks like it might be possible, but there isn't a man page, and I haven't installed it so I can't find out what's what. I have no idea about configuring Display Managers (XDM, sddm, Lightdm, etc.) to do this. Or can I fire up two instances of greetd? One on eg vt7 and the other on vt8? 
If so, how do I configure vt7 and vt8 to be my two different screen/keyboard/mouse combos? My expectation is that the various display managers run in relation / within the context of a given X server. Lightdm also says it will do vnc, but again, the lack of documentation... Which VNC? x11vnc or Xvnc? Unless you're wanting x11vnc to share a physical console, I doubt that's what you want. As Xvnc will be a purely virtual X11 server. I know I'm asking a lot, but I tend to find documentation makes sense only after you already know what it's saying ... :-) I'm hoping for a "cookbook" style approach, but I don't expect much of that because I know what I'm doing isn't very common ... Ya. I find that man pages and O'Reilly books are good /reference/ material but not good /introduction/ material. -- Grant. . . . unix || die
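[Editorial aside: the udev rule suggestion above looks something like this. A sketch; the rules file name, serial numbers, and symlink names are all made up, and the right attributes to match on for your hardware come from `udevadm info`:]

```
# Hypothetical /etc/udev/rules.d/99-seats.rules: stable symlinks for
# two keyboards, matched by their (made-up) USB serial numbers.
SUBSYSTEM=="input", ATTRS{serial}=="KBD0001", SYMLINK+="input/kbd-seat0"
SUBSYSTEM=="input", ATTRS{serial}=="KBD0002", SYMLINK+="input/kbd-seat1"
```

The per-seat X server (or display manager) config can then point at /dev/input/kbd-seat0 and /dev/input/kbd-seat1, which stay stable regardless of USB enumeration order.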
Re: [gentoo-user] app-misc/ca-certificates
On 6/2/21 1:48 AM, Fannys wrote:

> Tech should be based on tech. Not faith and trust on the other party.

That's where detection of breach of trust comes into play. Thus DNSSEC and things related.

-- Grant. . . . unix || die
Re: [gentoo-user] app-misc/ca-certificates
On 6/2/21 1:21 AM, J. Roeleveld wrote:

> Do you know which extensions add this?

I don't remember exactly (they weren't compatible with Firefox 78), but from memory they were from the CZ NIC operator. They have many things related to this.

-- Grant. . . . unix || die
Re: [gentoo-user] app-misc/ca-certificates
On 6/1/21 3:38 PM, Michael Orlitzky wrote:

> *Any* CA can just generate a new key and sign the corresponding certificate.

This is where what can /technically/ be done diverges from what is /allowed/ to be done. CAs adhering to the CA/B Forum's requirements on CAA records means that they aren't allowed to issue a certificate for a domain that doesn't list them in the CAA record. If a CA violates the CAA record requirement, then the CA has bigger issues and will be subject to being distrusted en masse. Certificate Transparency logs make it a lot easier to identify if such shenanigans are done. -- I think that the CA/B Forum is also requiring C.T. logs.

Also, CAs /should/ *NOT* be generating keys. The keys should be generated by the malicious party trying to pull the shenanigans that you're talking about.

> All browsers will treat their fake certificate corresponding to the fake key on their fake web server as completely legitimate. The "real" original key that you generated has no special technical properties that distinguish it.

Not /all/ browsers. I know people who have run browser extensions to validate the TLS certificate that they receive against records published via DANE in DNS, which is protected by DNSSEC. So it's effectively impossible for a rogue CA and malicious actor to violate that chain of trust in a way that can't be detected and acted on.

-- Grant. . . . unix || die
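For the curious, the DANE data that such extensions check is just a TLSA record published in DNSSEC-signed DNS. A hypothetical zone fragment (the domain and hash are placeholders) pinning the end-entity certificate itself:

```
; Hypothetical zone fragment -- the hash below is a placeholder.
; 3 = DANE-EE (match the end-entity cert), 0 = full certificate,
; 1 = SHA-256 matching type.
_443._tcp.www.example.com. IN TLSA 3 0 1 ( 0123456789abcdef0123456789abcdef
                                           0123456789abcdef0123456789abcdef )
```

The matching hash for such a record can be computed from the served certificate with something like `openssl x509 -outform DER | openssl dgst -sha256`, so a client can compare what the server presents against what the domain owner published.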
Re: [gentoo-user] app-misc/ca-certificates
On 5/31/21 11:15 PM, William Kenworthy wrote:

> And another "wondering" - all the warnings about trusting self signed certs seem a bit self serving.

No, it's not self serving. Considerably more people than public certificate authorities bemoan self signed certificates. Consider this:

1) Your web site uses a self signed certificate and you have trained users to blindly accept and trust the certificate presented to them.
2) Someone decides to intercept the traffic and presents a different self signed certificate to the end users while proxying the traffic on to you.
3) Your end users have no viable way to differentiate between your self signed certificate and the intercepting self signed certificate.

Without someone -- whom you trust -- vouching for the identity of the party that you're connecting to, you have no way to know that you are actually connecting to the party that you intend to connect to.

> Yes, they are trying to certify who you are, but at the expense of probably allowing access to your communications by "authorised parties"

Nope. Not at all. (Presuming that it's done properly. More below.) The /only/ thing that the certificate does / provides is someone -- whom end users supposedly trust -- vouching that you are who you say you are. The CA has nothing in the actual communications path. Thus they can't see the traffic even if they want to.

The proper way to configure certificates is:

1) Create a key on the local server.
2) Create a Certificate Signing Request (a.k.a. CSR), which contains the public key but never the private key.
3) Ask a CA to sign the CSR.
4) Use the certificate from the CA.

The important thing is that the key, which is integral to the encryption, *NEVER* *LEAVES* *YOUR* *CONTROL*! Thus there is no way that a CA is even capable of getting in the middle of the end-to-end communications between you and your client.

There have been some CAs in the past that would try to do everything on their server. But in doing so, they violate the security model.
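The four steps above can be sketched with plain openssl; the file names and subject are arbitrary placeholders:

```shell
# Sketch of the CSR workflow, assuming a reasonably modern openssl.
# 1) Generate the private key locally -- it never leaves this machine.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out server.key

# 2) Create the CSR. It embeds the *public* key and is signed with the
#    private key, but never contains the private key itself.
openssl req -new -key server.key -subj "/CN=www.example.com" -out server.csr

# 3) server.csr is what you hand to the CA to sign; sanity-check it:
openssl req -in server.csr -noout -verify

# 4) The CA returns a certificate, which you install alongside server.key.
```

A CA that wants to generate the key for you and have you download it is exactly the broken model described above.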
Don't use those CAs. *YOU* /must/ generate the key /locally/. Anything else is broken security.

> (such as commercial entities purchasing access for MITM access - e.g. certain router/firewall companies doing deep inspection of SSL via resigning or owning both end points).

This is actually exceedingly difficult to do, at least insofar as decrypting and re-encrypting the traffic.

Certificate Transparency logs help ensure that a CA doesn't ... inadvertently ... issue a certificate that they should not. Or at least they make it orders of magnitude easier to identify and detect when such ... mistakes ... happen.

There is also the Certificate Authority Authorization record that you can put in DNS that authorizes which CA(s) can issue certificates for a domain. A few years ago we passed the deadline where all CAs had to adhere to the CAA record. As in the Certificate Authority / Browser forum / consortium has not renewed anybody who wasn't adhering to CAA. This is water so far under the bridge that it's over the waterfall, out to the ocean, evaporated, and is raining down again.

Also, DNSSEC protects DNS in that it makes it possible to authenticate the information you receive. Thus you can detect when things aren't authenticated and you know they should be.

> If its only your own communications and not with a third, commercial party self signed seems a lot more secure.

Nope. 3rd parties don't have access to the encrypted communications. The only thing they have access to is saying if you are you or not. Yes, that's Bob over there in the corner. But I have no idea what he's talking about b/c MATH.

Note the words "signed" and "signing". A Certificate Authority signs a certificate signing request, thus vouching for the identity of the entity submitting the CSR. You obviously can sign your own CSR. That's where a self-signed certificate comes from. But you have nobody vouching for who the far entity is, much less who vouched for them.
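The CAA record mentioned above is ordinary DNS data. A hypothetical zone fragment (domain and CA are placeholders) restricting issuance to a single CA:

```
; Hypothetical zone fragment -- only the listed CA may issue for this
; domain; iodef says where to report violation attempts.
example.com.  IN  CAA  0 issue "letsencrypt.org"
example.com.  IN  CAA  0 iodef "mailto:security@example.com"
```

`dig +short CAA example.com` will show what a given domain actually publishes.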
Speaking of who vouched for them, and how do we trust them? That's where the hashes in /etc/ssl (or wherever it is) come into play. Your system has the certificates / public keys of /trusted/ root CAs. When your system sees a certificate signed by a CA, it computes a hash of the issuer's name and looks for a file (usually a symlink) with that hash as its name in the local trust store. If the file exists and all the math passes, then the root certificate is trusted. If the root certificate is trusted, then your system will trust the certificate that the CA is vouching for.

This is all ... something ... having to do with who is vouching for whom and whether you trust the vouching party or not. But at no time does a CA have access to the encrypted communications. As long as things were done properly, in that the keys were generated locally.

-- Grant. . . . unix || die
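As a concrete illustration of those hash files (the file names below are throwaway placeholders; exact paths vary by distro):

```shell
# Make a throwaway self-signed "root" to play with.
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key \
    -subj "/CN=Example Root" -days 1 -out ca.crt

# The lookup hash is derived from the certificate's subject name; the
# trust store holds symlinks named <hash>.0 pointing at the real file,
# which is how the library finds a candidate issuer quickly.
openssl x509 -in ca.crt -noout -subject_hash
```

On most systems `c_rehash` (or `openssl rehash`) is the tool that maintains those hash-named symlinks in /etc/ssl/certs.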