[gentoo-user] Re: NAS box and switching from Phenom II X6 1090T to FX-6300
On 2024-04-17, Frank Steinmetzger wrote:

>> If you don't play games, then definitely get integrated graphics.
>> Even if the CPU costs a tiny bit more, it will give you a free empty
>> 16x PCIe slot at whatever speed the CPU supports (v5 in this case -
>> which is as good as you can get right now).
>
> Not to mention a cut in power draw. And one fewer fan to listen to.

[I was also pretty annoyed with NVidia when they stopped offering
fanless Quadro boards. I'm a big fan of fanless.]
[gentoo-user] Re: Slightly corrupted file systems when resuming from hibernation
On 2024-04-17, Dale wrote:

> I still use Nvidia and use nvidia drivers. I too run into problems
> on occasion with drivers and kernels. When you switched from
> Nvidia, what did you switch to? Do you still use drivers you
> install or kernel drivers?

All in-tree kernel drivers for integrated GPUs:

 * Intel UHD Graphics 620
 * Intel HD Graphics 4000
 * Intel Xeon E3-1200
 * AMD Picasso Radeon Vega

After I had to recycle my second perfectly functional NVidia card
simply because NVidia stopped driver support, I got fed up. I tried
the open-source nvidia drivers for those cards, but could never get
multiple screens to work.

> How well does the video system work? In other words, plenty fast
> enough for what you do.

They're all fast enough for what I do (no heavy gaming, but I do play
with an RC flight simulator). All will drive at least two digital
monitors. The last machine that had an NVidia card removed is also the
oldest of the machines (Gigabyte Z77X-UD5H, Intel i5-3570K w/ HD 4000
graphics), and it's happily driving three monitors (1 HDMI, 1 DVI,
1 DP).

When running the flight-sim, the newest of them (the AMD/Radeon) is
noticeably smoother and runs at higher frame rates than the older
Intel GPUs. I didn't really have any complaints about the older ones,
but I don't expect a real gamer would have been satisfied with the
Intel ones.

> I don't do any sort of heavy gaming. Since I have a nice game on my
> cell phone now, I play it almost all the time. I can't recall
> playing a game of solitaire on my computer in a long while. My
> biggest thing, two video ports, one for monitor and one for TV.
> Most TV videos aren't very high def but some are 1080P. That's all
> my TV can handle.

They all seem to handle HD video playback just fine. How many and what
type of monitors can be driven is very much dependent on the
motherboard.

--
Grant
[gentoo-user] Re: Slightly corrupted file systems when resuming from hibernation
On 2024-04-17, Dr Rainer Woitok wrote:

> Grant,
>
> On Wednesday, 2024-04-17 14:11:21 -, you wrote:
>
>> ...
>> If what you want is access to all upstream longterm kernel versions,
>> then you should be using sys-kernel/vanilla-sources.
>
> I was not aware of this package. Exactly what could come in handy, if
> everything else fails. Thank you for that pointer :-)

Just be aware that gentoo-sources contains an "extra" set of
gentoo-specific patches. So version x.y.z of gentoo-sources isn't
identical to version x.y.z of vanilla-sources.

https://dev.gentoo.org/~mpagano/genpatches/

--
Grant
[gentoo-user] Re: Slightly corrupted file systems when resuming from hibernation
On 2024-04-17, Dr Rainer Woitok wrote:

> Grant,
>
> On Tuesday, 2024-04-16 19:26:25 -, you wrote:
>
>> ...
>> That means that all gentoo-sources stable kernels are "longterm"
>> kernel versions on kernel.org. It does not mean that all "longterm"
>> kernel versions from kernel.org are available as "stable" in
>> gentoo-sources.
>>
>> It is a statement that "gentoo-sources stable" is a subset of
>> "kernel.org longterm".
>
> This sort of deteriorates into a debate about words rather than meanings

I'm sorry to have caused "deterioration" by trying to explain the
statement you said a) you didn't understand and b) was contradicted by
the contents of the gentoo portage tree. The statement was not
contradicted by what you pointed out.

> without explaining HOW LONG such a series of related kernels are
> maintained and provided.

That was not the subject of the statement you claimed was wrong which
I then tried to explain. The gentoo-sources versions of upstream
"longterm" kernels are maintained and provided for as long as the
volunteers who do the work maintain and provide them.

> After all, "longterm" or "LTS" suggest that these lines of
> development are less short-lived than others.

They are. Upstream longterm kernel versions get updates for several
years longer than versions that are not longterm.

> To give an example: the oldest "longterm" kernels listed on
> "kernel.org" are 4.19.*, 5.4.* and 5.10.*. Of these only 5.10.* is
> still available from Gentoo.

You should certainly demand that all of the money you paid for
gentoo-sources be refunded. It takes work to maintain gentoo-sources
ebuilds. I'm sure if you volunteered to maintain ebuilds for the older
longterm kernels, the help would be happily accepted.

Free clue: It's _hard_work_ to support old versions of things
(especially kernels). They usually won't build with recent tools and
won't run on recent hardware or with recent versions of other
software. You often have to keep around entire virtual machines that
have old tools and utilities.

If what you want is access to all upstream longterm kernel versions,
then you should be using sys-kernel/vanilla-sources.

> So what time span are we talking about when we say "LTS Gentoo
> kernel"?

We don't say that. "LTS" and "longterm" are not Gentoo designations,
they are kernel.org designations. Gentoo has "stable" and "testing".
Only upstream "longterm" kernel versions get marked as "stable" in
gentoo-sources. They are then supported for as long as somebody
supports them.

> Roughly four, three or two years? And why is the support provided
> by Gentoo significantly shorter than that by "kernel.org"?

Because you're not helping with the support?

--
Grant
[gentoo-user] Re: Slightly corrupted file systems when resuming from hibernation
On 2024-04-17, Michael wrote:

>> > But, to get back to the beginning of this discussion: if there is a
>> > risk that my aging hardware possibly can less and less cope with
>> > newer and newer kernels, should I put something like
>> >
>> >   >=sys-kernel/gentoo-sources-6.7.0
>> >
>> > into file "package.mask" to stay with "longterm" 6.6.* kernels?
>>
>> Yes: if you want to avoid getting upgraded to 6.8 when it gets
>> kernel.org "longterm" status and gentoo-sources "stable" status, then
>> a statement like that in package.mask will keep you on
>> gentoo-sources 6.6 kernels (which are "longterm" on kernel.org).
>
> I am not sure the assumption "... aging hardware possibly can less and
> less cope with newer and newer kernels" is correct.

Good point. My "yes" was in response to a question of the form "if X
is true, should I do Y". I did not attempt to address the likelihood
that X was actually true, only whether Y was appropriate if X was
true.

> As already mentioned newer kernels have both security and bug fixes.
> As long as you stick with stable gentoo-sources you'll have these in
> your system. Later kernels also come with additional kernel drivers
> for new(er) hardware. You may not need/want these drivers if you do
> not run the latest hardware. Using 'make oldconfig' allows you to
> exclude such new drivers, but include new security options and/or
> functionality as desired.
>
> It can happen for new code to introduce some software regression.

The usual worries with running newer kernels on older hardware are:

1) Performance degradation when upgrading kernels on older hardware.
   On one embedded project I work with we're still running a 2.6
   kernel because network throughput drops by 25-30% when we upgrade
   to 3.x kernels. There's nothing "wrong" with those 3.x kernels,
   they're just bigger and significantly slower. [Even when built with
   a config that's as identical to the 2.6 kernels as possible.]

2) Lack of support for old hardware when running a newer kernel. I
   used to run into this when running nvidia-drivers. Gentoo-sources
   would mark a new kernel stable, but my video board would not be
   supported by the nvidia-drivers versions that were supported for
   that new stable kernel. I would mask newer kernels and run older
   "longterm" kernels as long as I could. I would eventually be forced
   to buy a new video card.

After going through that cycle a couple times, I swore off NVidia
video cards and life's been much easier since.
[gentoo-user] Re: Slightly corrupted file systems when resuming from hibernation
On 2024-04-16, Dr Rainer Woitok wrote:

> Arve,
>
> On Tuesday, 2024-04-16 15:53:48 +0200, you wrote:
>
>> ...
>> Only LTS kernels get stabilised, so this information is readily available.
>
> I'm sure I don't understand this: According to "https://www.kernel.org/",
> kernel 6.6.27 is "longterm", but according to "eix" the most recent
> 6.6.* kernels are 6.6.22 and 6.6.23 which both are non-stable (well, I
> ran my last "sync" immediately before the profile upgrade, so this
> might not be current). I'm still using stable kernel 6.6.13 as my
> backup kernel, but this kernel is no longer provided by Gentoo. So,
> what precisely does LTS or "longterm" mean?

That means that all gentoo-sources stable kernels are "longterm"
kernel versions on kernel.org. It does not mean that all "longterm"
kernel versions from kernel.org are available as "stable" in
gentoo-sources.

It is a statement that "gentoo-sources stable" is a subset of
"kernel.org longterm". It is not a statement that the two sets are
identical.

In other words: "ONLY LTS kernels get stabilized." is a different
statement from "ALL LTS kernels get stabilized." The former is true.
The latter is not.

> But, to get back to the beginning of this discussion: if there is a
> risk that my aging hardware possibly can less and less cope with
> newer and newer kernels, should I put something like
>
>    >=sys-kernel/gentoo-sources-6.7.0
>
> into file "package.mask" to stay with "longterm" 6.6.* kernels?

Yes: if you want to avoid getting upgraded to 6.8 when it gets
kernel.org "longterm" status and gentoo-sources "stable" status, then
a statement like that in package.mask will keep you on gentoo-sources
6.6 kernels (which are "longterm" on kernel.org).

Again: not all longterm 6.6.x kernel versions get marked as "stable"
for gentoo-sources. If you have not enabled the testing keyword for
gentoo-sources, then you'll only get the 6.6.x kernel versions that
the gentoo-sources maintainers have declared as "stable".

--
Grant
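A concrete sketch of that mask, using the same atom quoted above. The
function wrapper and the file name "kernel" are just illustration (so
the config root can be pointed at a scratch directory instead of
/etc/portage); the atom itself is the one from the thread:

```shell
# Sketch: pin gentoo-sources to the longterm 6.6.* series by dropping a
# file into package.mask/ (the directory form of package.mask).
# The argument exists only so this can be exercised outside /etc/portage.
pin_longterm_66() {
    root=${1:-/etc/portage}
    mkdir -p "$root/package.mask"
    cat > "$root/package.mask/kernel" <<'EOF'
# stay on "longterm" 6.6.* kernels; remove this once 6.8 (or later) is wanted
>=sys-kernel/gentoo-sources-6.7.0
EOF
}
```

Delete the file (and update world again) when you're ready to move to
a newer series.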
[gentoo-user] Re: Slightly corrupted file systems when resuming from hibernation
On 2024-04-16, Dale wrote:

> I've never understood what is supported long term either. I use
> gentoo-sources. I've never figured out just how to pick a kernel that
> is supposed to be stable for the larger version. In other words, only
> security and bug fixes, no new hardware. Right now, 6.8.5 is the
> highest version in the tree here but there are more versions of it to
> come. So, I tend to go back to 6.7.X and pick the highest version of
> that. The first two digits used to mean something but I think that
> changed a long time ago.

Any gentoo-sources ebuild that's "stable" is an upstream LTS kernel.
The 6.8 versions of gentoo-sources are all "testing". They're "stable"
on kernel.org, but they're _not_ LTS. I think I read that 6.8 is
expected to become the next LTS, but I don't really pay attention.

> I try to avoid the absolute latest because my video drivers tend to lag
> behind a little. They won't emerge for anything very new sometimes.
> That's why I go back a little as described above. Thing is, I have no
> idea if that is the right way or if it really even matters if I pick
> 6.8.1 over 6.7.12 or vice versa.

Neither is stable in Gentoo. Neither is longterm on kernel.org. 6.8 is
stable on kernel.org. 6.7 is EOL on kernel.org. I would only choose
6.7 as a last resort. I would only choose 6.8 if the latest longterm
(6.6) won't work.

> I wish they were clearly marked somehow myself. Something in the name
> that shows it is stable. Given I rarely have problems with kernels,
> maybe none of this matters. Thing is, I plan to build a new rig soon.
> Might help then. Maybe.

Look at https://packages.gentoo.org/packages/sys-kernel/gentoo-sources

The ones in green are the kernel.org "longterm" supported kernel
versions that are stable in Gentoo.

Here you can see which ones are longterm, stable, mainline, and EOL
upstream: https://kernel.org/

I would never run something that's not longterm unless there's a
specific reason you have to choose something else. If you have to pick
something that's not longterm, go with "stable" and not EOL if you
can.
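If you'd rather not eyeball the web page, kernel.org publishes the
same release table in machine-readable form at
https://www.kernel.org/releases.json. A sketch of pulling out the
current longterm series (assumes curl and jq are installed; the
"moniker" field also takes values like "mainline" and "stable"):

```shell
# Sketch: list the kernel series that kernel.org currently marks "longterm".
curl -s https://www.kernel.org/releases.json |
    jq -r '.releases[] | select(.moniker == "longterm") | .version'
```

Handy in a script that decides whether an installed kernel is still on
a supported series.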
[gentoo-user] Re: Slightly corrupted file systems when resuming from hibernation
On 2024-04-16, Arve Barsnes wrote:

> On Tue, 16 Apr 2024 at 15:29, Dr Rainer Woitok wrote:
>> > My understanding is the gentoo-sources kernels are aligned with the LTS
>> > upstream releases.
>>
>> Right, they use the same version numbers. But you can't see from just
>> looking at the available "gentoo-sources" which one is LTS and which
>> one is not. You have to consult "https://www.kernel.org/" to get this
>> information.
>
> Only LTS kernels get stabilised, so this information is readily available.

"Stabilised" as in the corresponding gentoo-sources ebuild is marked
as stable. [Not to be confused with Linux "stable" kernels -- not all
of which end up with LTS status.]

Gentoo-sources also includes "stable" but not "LTS" Linux kernels, but
the gentoo-sources ebuild for those is always "testing".

IOW, if you install gentoo-sources, and don't keyword it to allow
"testing" ebuilds, then you won't get anything other than LTS kernel
sources.
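For reference, the keywording mentioned above looks like this. As in
the other sketches, the function wrapper and file name are only there
so the config root can point somewhere other than /etc/portage; adjust
~amd64 to your arch:

```shell
# Sketch: accept "testing" (~arch) gentoo-sources ebuilds, which is what
# you'd need to get the non-LTS, upstream-"stable" kernels.
accept_testing_kernels() {
    root=${1:-/etc/portage}
    mkdir -p "$root/package.accept_keywords"
    echo 'sys-kernel/gentoo-sources ~amd64' \
        > "$root/package.accept_keywords/kernel"
}
```

Without an entry like this, emerge will only ever offer you the LTS
(Gentoo-"stable") kernel sources.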
Re: [gentoo-user] How to synchronise between 2 locations
On 3/27/24 13:58, J. Roeleveld wrote:
> Hi all,

Hi,

> I am looking for a way to synchronise a filesystem between 2 servers.
> Changes can occur on both sides which means I need to have it
> synchronise in both directions.

What sort of turn-around time are you looking for? Seconds, minutes,
hours, longer?

> Does anyone have any thoughts on this?

I would wonder about using rsync.

 host1 -> host2 at the top of the hour
 host2 -> host1 at the bottom of the hour

Or if you wanted to get fancy:

 host1 pushes to host2 at the top of the hour
 host2 pushes to host1 at a quarter past
 host2 pulls from host1 at the bottom of the hour
 host1 pulls from host2 at a quarter till

I'm thinking like if one of them was a road warrior and only one side
could initiate because of a stateful firewall.

> Also, both servers are connected using a slow VPN link, which is why I
> can't simply access files on the remote server.

ACK

--
Grant. . . .
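The mechanics of one of those legs boil down to something like the
sketch below (local paths stand in for host:path; in real use one
argument would be e.g. host2:/srv/share over ssh). rsync's -u is what
keeps an older copy from clobbering a newer one when both sides
change:

```shell
# Sketch of one leg of the alternating-hour scheme: sync two trees in
# both directions, never overwriting a file that's newer on the
# receiving side (-u), preserving metadata (-a).
sync_pair() {
    a=$1 b=$2
    rsync -au "$a"/ "$b"/   # push a -> b
    rsync -au "$b"/ "$a"/   # pull b -> a
}
```

Run it from cron at the offsets described above. Note it does nothing
clever when the *same* file changed on both sides -- newest mtime
simply wins.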
[gentoo-user] Re: How to synchronise between 2 locations
On 2024-03-27, Mark Knecht wrote:

> On Wed, Mar 27, 2024 at 11:59 AM J. Roeleveld wrote:
>> I am looking for a way to synchronise a filesystem between 2
>> servers. Changes can occur on both sides which means I need to
>> have it synchronise in both directions.
>
> How synchronized? For instance, does it need to handle identicals where
> a file is on both sides but has been moved?

Does it need to handle the case where the same file is modified
independently on both sides?
[gentoo-user] Re: New profiles 23.0
On 2024-03-26, Walter Dnes wrote:

> On Tue, Mar 26, 2024 at 04:21:23PM +, Michael wrote
>> On Tuesday, 26 March 2024 15:21:32 GMT Walter Dnes wrote:
>> > I assume my system is already "merged-usr". Current profile...
>> >
>> > [12] default/linux/amd64/17.1/no-multilib (exp) *
>> [...]
>
> Thanks for the clarification. So my system is considered
> "split-usr", regardless of everything being on one partition.

Yes. "Split-usr" means that /bin and /usr/bin are two different
directories. Likewise for /lib and /usr/lib, and so on... It doesn't
matter that the /foo and /usr/foo directories are in the same
filesystem.

> I got the news item when I ran "emerge --sync". My understanding is
> that step 1 in the news item says "Please also update your system
> fully and depclean before proceeding" so I should update world
> first.

Yes. And depclean.

> Since I'm currently on profile
> "default/linux/amd64/17.1/no-multilib" then I should migrate to
> profile "default/linux/amd64/23.0/split-usr/no-multilib" as per the
> news item.

Yes -- if you're using OpenRC. I assume you are not using systemd,
since your old profile wasn't a systemd profile. [In theory you could
be running systemd on a non-systemd profile, but it's not common.] If
you are running systemd, you should first update to the "merged-usr"
flavor of your current profile.

There's a detailed table at

https://wiki.gentoo.org/wiki/Project:Toolchain/23.0_update_table

It shows exactly what new profile corresponds to what old profile.

> Migrating from there to "default/linux/amd64/23.0/no-multilib" is a
> separate process as per the Gentoo wiki. Is my understanding
> correct?

Yes: https://wiki.gentoo.org/wiki/Merge-usr
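A quick way to see which camp a given install is in -- this is just a
symlink check, not an official tool:

```shell
# Sketch: on a merged-usr system, /bin /sbin /lib /lib64 are symlinks
# into /usr; on a split-usr system they're real directories (even when
# everything lives in one big filesystem).
usr_state() {
    for d in "$@"; do
        if [ -L "$d" ]; then
            echo "$d -> $(readlink "$d") (merged)"
        else
            echo "$d: real directory (split-usr)"
        fi
    done
}

usr_state /bin /sbin /lib /lib64
```

If any of the four is a symlink while others are real directories,
something is half-migrated and worth investigating before a profile
switch.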
[gentoo-user] Re: New profiles 23.0
On 2024-03-26, Walter Dnes wrote:

> I'm AMD64 stable OpenRC. I got tired of dicking around resizing
> partitions years ago, so I have all data and binaries in one honking
> big partition. Also separate partitions for UEFI and swap. I assume
> my system is already "merged-usr". Current profile...
>
> [12] default/linux/amd64/17.1/no-multilib (exp) *

No, it is not merged-usr. If it were, the profile shown above would
say "/merged-usr" at the end.

> I just ran "emerge --sync" and got the profile news item. So do I
> update world and then update profile? emerge -pv has 3 interesting
> lines...

Follow the instructions in the news item.
[gentoo-user] Re: Stage-3 and profile 23.x
On 2024-03-25, Michael wrote:

> On Monday, 25 March 2024 21:48:24 GMT Peter Humphrey wrote:
>> On Monday, 25 March 2024 16:52:19 GMT Michael wrote:
>>
>>> The default OpenRC installation now assumes a merged-usr fs
>>> structure - therefore make sure you select the appropriate profile
>>> in a new installation.
>>
>> I was wondering about that. Now that we have 23.0 in place, are we
>> meant to change to merged-usr? Should I run the eponymous script?
>
> You can, if you want to. I've installed sys-apps/merge-usr and ran it
> on my OpenRC system, after I completed the migration to profile 23.0.
> It didn't take any time at all:
>
> https://wiki.gentoo.org/wiki/Merge-usr

I'm in the process of switching two machines from 17.1 to
23.0/split-usr. [emerge @world will probably take overnight.]

I've read the merge-usr page, and it looks pretty simple. But, I'm
going to run 23.0 split-usr for a few weeks first -- just to make sure
that the new profile hasn't broken anything.

If you run OpenRC, it doesn't sound like there's any real reason you
need to do the merge any time soon. If you run systemd, there's some
version number cutoff where it will refuse to work on a split-usr
install (IIRC). After all, the systemd motto is "all your computer are
belong to us!"

--
Grant
[gentoo-user] Re: Terminal emulator to replace Konsole
On 2024-03-23, Mickaël Bucas wrote:

> I think it's not a terminal emulator feature, but rather a shell
> feature.
>
> Some terminal programs are designed to interact with the mouse, but
> bash command line, based on readline, doesn't react to mouse clicks.

Agreed.

> I've tried Midnight Commander in Konsole and xterm, and it actually
> moves the cursor to the click position in its own command line!
>
> Maybe there's an extension or set of parameters for bash or other
> shells to handle mouse clicks, but I'm not aware of it.

I vaguely recall that there was some sort of zsh hack/addon/module
that added extra mouse functionality, but I don't recall the details
(I was never a zsh user).

--
Grant
[gentoo-user] Re: gentoo-sources 5.15.151 breaks amdgpu support?
On 2024-03-11, Grant Edwards wrote:

> I upgraded gentoo-sources from 5.15.147 to 5.15.151 this morning and
> amdgpu support is now borked on my system with an AMD Ryzen 5 3400G
> with Radeon Vega Graphics.
>
> Everything worked fine with 5.15.147, but when 5.15.151 (built with
> same .config via "make oldconfig") boots there's always a kernel oops,
> and video output goes blank.
>
> I suppose maybe it's time to see if 6.1 works...

Gentoo-sources 6.1 seems to work fine. I had masked it when it first
went stable because it wouldn't boot back then. Whatever was wrong
with it then seems to have been fixed. Hopefully it won't catch
whatever malady is affecting 5.15.151.

--
Grant
[gentoo-user] gentoo-sources 5.15.151 breaks amdgpu support?
I upgraded gentoo-sources from 5.15.147 to 5.15.151 this morning and
amdgpu support is now borked on my system with an AMD Ryzen 5 3400G
with Radeon Vega Graphics.

Everything worked fine with 5.15.147, but when 5.15.151 (built with
same .config via "make oldconfig") boots there's always a kernel oops,
and video output goes blank.

I suppose maybe it's time to see if 6.1 works...

[2.888353] [drm] amdgpu kernel modesetting enabled.
[2.896597] amdgpu: Topology: Add APU node [0x0:0x0]
[2.896736] [drm] initializing kernel modesetting (RAVEN 0x1002:0x15D8 0x1462:0x7C02 0xC8).
[2.896742] amdgpu 0000:2a:00.0: amdgpu: Trusted Memory Zone (TMZ) feature enabled
[2.896751] [drm] register mmio base: 0xFCA0
[2.896752] [drm] register mmio size: 524288
[2.896762] [drm] add ip block number 0
[2.896764] [drm] add ip block number 1
[2.896766] [drm] add ip block number 2
[2.896767] [drm] add ip block number 3
[2.896769] [drm] add ip block number 4
[2.896770] [drm] add ip block number 5
[2.896772] [drm] add ip block number 6
[2.896774] [drm] add ip block number 7
[2.896775] [drm] add ip block number 8
[2.896779] Loading firmware: amdgpu/picasso_gpu_info.bin
[2.919851] [drm] BIOS signature incorrect 5b 7
[2.919872] amdgpu 0000:2a:00.0: amdgpu: Fetched VBIOS from ROM BAR
[2.919874] amdgpu: ATOM BIOS: 113-PICASSO-115
[2.919883] Loading firmware: amdgpu/picasso_sdma.bin
[2.920090] [drm] VCN decode is enabled in VM mode
[2.920091] [drm] VCN encode is enabled in VM mode
[2.920092] [drm] JPEG decode is enabled in VM mode
[2.920111] amdgpu 0000:2a:00.0: vgaarb: deactivate vga console
[2.920841] Console: switching to colour dummy device 80x25
[2.920885] [drm] vm size is 262144 GB, 4 levels, block size is 9-bit, fragment size is 9-bit
[2.920892] amdgpu 0000:2a:00.0: amdgpu: VRAM: 2048M 0x00F4 - 0x00F47FFF (2048M used)
[2.920895] amdgpu 0000:2a:00.0: amdgpu: GART: 1024M 0x - 0x3FFF
[2.920898] amdgpu 0000:2a:00.0: amdgpu: AGP: 267419648M 0x00F8 - 0x
[2.920904] [drm] Detected VRAM RAM=2048M, BAR=2048M
[2.920906] [drm] RAM width 128bits DDR4
[2.921005] [drm] amdgpu: 2048M of VRAM memory ready
[2.921008] [drm] amdgpu: 3072M of GTT memory ready.
[2.921012] [drm] GART: num cpu pages 262144, num gpu pages 262144
[2.921136] [drm] PCIE GART of 1024M enabled.
[2.921137] [drm] PTB located at 0x00F40090
[2.921208] Loading firmware: amdgpu/picasso_asd.bin
[2.921688] Loading firmware: amdgpu/picasso_ta.bin
[2.921897] amdgpu 0000:2a:00.0: amdgpu: PSP runtime database doesn't exist
[2.921939] Loading firmware: amdgpu/picasso_pfp.bin
[2.922059] Loading firmware: amdgpu/picasso_me.bin
[2.922246] Loading firmware: amdgpu/picasso_ce.bin
[2.922344] Loading firmware: amdgpu/picasso_rlc_am4.bin
[2.922846] Loading firmware: amdgpu/picasso_mec.bin
[2.923079] Loading firmware: amdgpu/picasso_mec2.bin
[2.924074] amdgpu: hwmgr_sw_init smu backed is smu10_smu
[2.924081] Loading firmware: amdgpu/raven_dmcu.bin
[2.924221] Loading firmware: amdgpu/picasso_vcn.bin
[2.924504] [drm] Found VCN firmware Version ENC: 1.15 DEC: 3 VEP: 0 Revision: 0
[2.924510] amdgpu 0000:2a:00.0: amdgpu: Will use PSP to load VCN firmware
[2.945168] [drm] reserve 0x40 from 0xf47fc0 for PSP TMR
[3.004533] amdgpu 0000:2a:00.0: amdgpu: RAS: optional ras ta ucode is not available
[3.009017] amdgpu 0000:2a:00.0: amdgpu: RAP: optional rap ta ucode is not available
[3.011810] [drm] psp gfx command LOAD_TA(0x1) failed and response status is (0x7)
[3.011919] [drm] psp gfx command INVOKE_CMD(0x3) failed and response status is (0x4)
[3.011922] amdgpu 0000:2a:00.0: amdgpu: Secure display: Generic Failure.
[3.011924] amdgpu 0000:2a:00.0: amdgpu: SECUREDISPLAY: query securedisplay TA failed. ret 0x0
[3.013330] [drm] kiq ring mec 2 pipe 1 q 0
[3.014477] [drm] DM_PPLIB: values for F clock
[3.014478] [drm] DM_PPLIB: 160 in kHz, 4399 in mV
[3.014481] [drm] DM_PPLIB: values for DCF clock
[3.014482] [drm] DM_PPLIB: 30 in kHz, 3099 in mV
[3.014484] [drm] DM_PPLIB: 60 in kHz, 3574 in mV
[3.014485] [drm] DM_PPLIB: 626000 in kHz, 4250 in mV
[3.014487] [drm] DM_PPLIB: 654000 in kHz, 4399 in mV
[3.014716] [drm] Display Core initialized with v3.2.149!
[3.100173] [drm] VCN decode and encode initialized successfully(under SPG Mode).
[3.101219] kfd kfd: amdgpu: Allocated 3969056 bytes on gart
[3.101437] amdgpu: Topology: Add APU node [0x15d8:0x1002]
[3.101440] kfd kfd: amdgpu: added device 1002:15d8
[3.101447] kfd kfd: amdgpu: Failed to resume IOMMU for device 1002:15d8
[3.101449] amdgpu 0000:2a:00.0: amdgpu: amdgpu_device_ip_init failed
[3.101452] amdgpu 0000:2a:00.0: amdgpu: Fatal error during
[gentoo-user] Re: is a global use flag necessary for python?
On 2024-03-09, Walter Dnes wrote:

> On Sat, Mar 09, 2024 at 07:55:13PM +0100, n952162 wrote
>> I just synced my system after a long delay,
>
> That's your problem right there.

Yep, to quote Olivia Rodrigo... Bad idea, right?

>> Is there a way to do it globally?
>
> First of all python targets should not need to be mentioned in
> make.conf or package.use.

Like the girl said.

> Gentoo manages versions automatically... if you update often enough.

This has been pointed out here (I think even to the OP) more than
once. Gentoo is not appropriate for machines that don't get updated
regularly. I recommend at least once a week, but once a month will
probably be OK. Much longer than a few months and you're asking for
trouble. Hard-wiring PYTHON versions is also asking for trouble.
Combining the two is demanding trouble.

Just back up your user data and re-install. There are ways to get out
of your mess, but if you have to ask...

As they used to say at the billiards tournaments: "If you could make a
shot like that, you wouldn't need to make a shot like that."

--
Grant
[gentoo-user] Re: How to set up drive with many Linux distros?
On 2024-03-10, Michael wrote:

> Perhaps I'm picking up on semantics, but shouldn't this sentence:
>
> "... The gap between the DOS disklabel and the first partition"
>
> read:
>
> "The gap between the MBR and the first partition"?

Yes, thanks -- MBR is more accurate. I've changed that sentence.

> Your next paragraph pointed out something which I hadn't considered at
> any length. Namely, the installation of GRUB's boot.img in a MBR or
> VBR also hardcodes in a block list format the location of the first
> sector where the core.img is stored and more importantly, the physical
> position of this sector can be altered both by COW fs (and by the wear
> levelling firmware of flash storage devices).
>
> I had assumed both the COW fs and/or the flash controller will in
> both cases translate any physical data position to the logical layer
> and presented this to inquiring software. Have you actually tried
> using btrfs as a distro's root fs to see if the VBR installed GRUB
> boot.img will ever lose access to the core.img?

No, I haven't. I agree that the flash controller can't change the
logical address of a filesystem data block without the knowledge of
the filesystem, so I don't think controller-layer wear-leveling would
be a problem.

But, the filesystem layer is allowed to move data blocks around, so
flash-aware filesystems that attempt to do wear-leveling or
defragmentation could move data blocks. Some of the descriptions I've
read of "fancier" filesystem internals have also implied that does
happen under certain conditions, but I may have misunderstood or the
descriptions may have been wrong.

My use of these multi-boot installs has no need for anything beyond
extN, so I've never tried using block lists with anything other than
extN filesystems. Since I am confident extN filesystems won't cause
problems, I've always stuck with that.

--
Grant
[gentoo-user] Re: How to set up drive with many Linux distros?
On 2024-02-22, Grant Edwards wrote:

> For many years, I've used a hard drive on which I have 8-10 Linux
> distros installed -- each in a separate (single) partition.
>
> [...]
>
> Is there an easier way to do this?

After some additional studying of UEFI and boot managers like rEFInd,
I decided that my current approach was still the easiest method. I did
switch from a DOS to a GPT disklabel (I bricked my old drive trying to
update the firmware, so I had to start over anyway).

In case anybody is interested in the gory details, I documented my
scheme and the helper shell scripts at

https://github.com/GrantEdwards/Linux-Multiboot

--
Grant
[gentoo-user] Re: Problem with "GRUB upgrades" news item
On 2024-03-06, Walter Dnes wrote:

> I've got a UEFI system. According to the news item...
>
>> Re-running grub-install both with and without the --removable option
>> should ensure a working GRUB installation.
>
> I tried that...
>
> [i3][root][~] grub-install

I believe you have to run grub-install using all the same options you
did originally. AFAICT, grub doesn't remember what you did the last
time.
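One way to make "the same options as last time" a non-issue is to keep
the invocation in a tiny script. The options below are only an example
for a typical amd64 UEFI box with the ESP mounted at /efi -- substitute
whatever you actually used originally:

```shell
# Sketch: record the grub-install options in one place and re-run both
# variants, per the news item. Setting ECHO=1 prints the commands
# instead of running them (${ECHO:+echo} expands to "echo" only when
# ECHO is set), which is handy for previewing as non-root.
grub_reinstall() {
    esp=${1:-/efi}
    ${ECHO:+echo} grub-install --target=x86_64-efi --efi-directory="$esp"
    ${ECHO:+echo} grub-install --target=x86_64-efi --efi-directory="$esp" --removable
}
```

Run `grub_reinstall` as root after a GRUB upgrade; `ECHO=1
grub_reinstall` just shows what would be executed.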
[gentoo-user] Re: How to set up drive with many Linux distros?
On 2024-02-26, Wol wrote:

> On 26/02/2024 20:51, Grant Edwards wrote:
>
>> The simple answer is to quit wasting time trying to multi-boot like
>> that and just buy a dozen USB flash drives.
>
> And then, if USB isn't the default boot media, he might as well sort
> out UEFI boot, and multi-boot that way.

Except that every time I've found a write-up about multibooting a lot
of Linux distros with UEFI, it turns out that it doesn't actually work
very well and is more work to maintain than what I'm doing now.

--
Grant
[gentoo-user] Re: How to set up drive with many Linux distros?
On 2024-02-26, eric wrote:

> I agree, using the custom.cfg file would not work if needing to boot
> different kernels of the same OS and those kernels were being updated.

The simple answer is to quit wasting time trying to multi-boot like
that and just buy a dozen USB flash drives.

--
Grant
[gentoo-user] Re: How to set up drive with many Linux distros?
On 2024-02-26, eric wrote: > On 2/26/24 04:57, gentoo-u...@krasauskas.dev wrote: >> You could also write a script that keeps all the distros up to date >> from within whichever one you're currently booted by mounting >> subvolumes to /mnt or wherever, chrooting in and running the update. > > To avoid grub not being able to point to a newly updated kernel on one > of the OS's installed, I use a "custom.cfg" file in all my /boot/grub/ > directories for each OS where the "linix" and "initrd" point to the > symbolic links of the kernel and init files which point to the newly > updated files on most major distributions like ubuntu, arch, suse, and > debian. The name of the symbolic links stay the same over upgrades. It > works great when using UUID to identify the partition that has root and > I can always boot into any of the OS's installed no matter which one > hijacked the MBR. Except I generally have multiple kernels installed for each of the distros, and need to be able to choose which kernel to boot. There are also various other boot options (e.g. "safe mode") offered by some distros that I occasionally need to use. > https://forums.linuxmint.com/viewtopic.php?t=315584 Interesting article, thanks. After reading up more on UEFI, it looks like that would be even more work and more mess. So, there seem to be two options: 1) Stick to the dual-stage chainloading scheme I'm using now (though I'll probably switch from DOS to GTP disklabel). That way after selecting which parition (distro) to boot, I get all the boot options normally offered by that distro's install. Installing a distro involves letting it install to MBR and BIOS-boot, installing grub manually to the root partition, then restoring MBR and BIOS-boot. 2) Use a single master grub to boot any distro. I think I'd need to write my own OS-prober. All of the distros I care about seem to now be using grub2 now. 
Instead of looking for kernels and initrd images and adding them to the master grub.cfg, I would probe for grub.cfg files, and for each one found incorporate the entire set of choices in that .cfg file as a submenu in the main grub.cfg menu. I think that in order to generate the distro's grub.cfg files, I still have to allow the distros to install grub to the MBR and BIOS-boot (or to a second disk that I don't care about), then restore MBR/BIOS-boot.
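For concreteness, the two options correspond to master-grub.cfg fragments roughly like these (the partition numbers and labels are made up for illustration):

```
# Option 1: the chainloading scheme I use now -- each entry simply
# hands off to whatever bootloader is installed in that partition's PBR:
menuentry "Distro on sda5" {
    set root=(hd0,msdos5)
    chainloader +1
}

# Option 2: a single master grub pulls in each distro's own menu by
# sourcing its grub.cfg directly with 'configfile':
submenu "Distro on sda6" {
    set root=(hd0,msdos6)
    configfile /boot/grub/grub.cfg
}
```

With 'configfile', the distro's entire menu (all kernels, "safe mode" entries, etc.) appears as a submenu without me having to duplicate any of it.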
[gentoo-user] Re: How to set up drive with many Linux distros?
On 2024-02-23, Mark Knecht wrote: > The only other idea I had was to install to a different > disk and then use something like Clonezilla to move it to the partition > you want it in on your system. > > While I suspect you were being sarcastic I do not think any solution > that involves a 'pocketful of USB 3 thumb drives' will be satisfying. Actually (assuming the thumb drives are reliable) it probably would work fine. That's more-or-less the typical solution that my colleagues use -- though it's a drawer full of hard drives in caddies instead of USB thumb drives. Back in the day when we supported a couple versions of SCO, various BSDs, Novell Netware and Solaris, it required a fair bit of drawer space. For now, I guess I'll stick with the scheme I'm using but switch from DOS disklabel and gap to GPT disklabel and BIOS boot partition. It seems ugly, but it's manageable with the help of a few shell scripts that are stored in the partition that belongs to the master copy of grub.
[gentoo-user] Re: How to set up drive with many Linux distros?
On 2024-02-23, Mark Knecht wrote: > On Fri, Feb 23, 2024 at 11:59 AM Grant Edwards > wrote: >> >> The simple solution is to give up on multi-booting a dozen different >> distros on a single disk and buy a pocketful of USB 3 thumb drives. >> > > Given performance does drop a bit and there can be issues with allocating > hardware, why not use Virtualbox which allows you to run both your base > distro and then as many of the others as your hardware can handle as VMs? The machine is used for testing PCI and PCI-express boards and their drivers. I don't really trust PCI passthrough (especially when it comes to interrupt handling) enough to do such testing in a VM. AFAICT, it's very rare for customers to use VMs. If customers do start to use VMs, then we'll have to test with VMs also. -- Grant
[gentoo-user] Re: How to set up drive with many Linux distros?
On 2024-02-23, Wols Lists wrote: > On 23/02/2024 00:28, Grant Edwards wrote: >> In my experience, 's bootloader does not boot other >> installations by calling other bootloaders. It does so by rummaging >> through all of the other partitions looking for kernel images, >> initrd files, grub.cfg files, etc. It then adds menu entries to >> the config file for 's bootloader which, when selected, >> directly load the kernel image and initrd from those other >> partitions. Sometimes, it works -- at least until those other >> installations get updated without the knowledge of the distro that >> currently "owns" the MBR's bootloader config. Then it stops working >> until you tell that bootloader to re-do its rummaging about >> routine. > > IME distros that try that (SUSE, anyone!) generally get confused as > to which kernel belongs to which root partition. > > Hence needing to boot with a live distro to edit the resulting mess > and get the system to actually come up without crashing ... IIRC, all of the big distros used to do that. It didn't work very well, but at least it took a really long time. However, I read recently that Ubuntu had disabled the os-prober by default in 22.04. Disabling it was always one of the first things I did after installing a new distro. The simple solution is to give up on multi-booting a dozen different distros on a single disk and buy a pocketful of USB 3 thumb drives. -- Grant
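For anyone following along: disabling os-prober is a one-line setting in /etc/default/grub (this is also what Ubuntu ships as the default since 22.04), followed by regenerating the menu:

```
# /etc/default/grub
GRUB_DISABLE_OS_PROBER=true
```

Then run "grub-mkconfig -o /boot/grub/grub.cfg" to drop the previously probed entries from the menu.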
[gentoo-user] Re: How to set up drive with many Linux distros?
On 2024-02-23, Michael wrote: > The problem starts if/when kernel images are overwritten by > successive Linux OS distros. This is likely when derivatives of the > same main distros e.g. Ubuntu all create a directory called > /EFI/ubuntu/ in the ESP and drop their kernels & initrd images in > there potentially overwriting other distro's files. Yes, that's the problem that I've read about when trying to multi-boot with UEFI. I usually have 3-4 different Ubuntu installations and 3-4 different RedHat installations. Ubuntu in particular causes a lot of complaints about overwriting ESP files belonging to other Ubuntus. > When using a distro's installer menu on a legacy BIOS MoBo you can > select a partition (PBR) to install GRUB, You used to be able to. I can no longer find the option to do that in Ubuntu or RedHat. I've been told that Suse still allows it. > but GRUB will complain and suggest you could use blocklists but it > is unreliable. Last time I received an error like this, I installed > grub in a PBR manually with the '--force' option, without using the > installer GUI. After that, whenever I updated GRUB it complained > again about blocklists, but it worked fine. Using --force will work fine as long as the grub 1.5 files don't get moved afterwards. That's why I lock them in place. Locking them will cause future upgrades to Grub for that distro to fail, but that doesn't happen very often. When it does, you unlock them, update grub, re-install using --force, and lock them again. >> I'd welcome pointers to where those advanced options are in the RH >> and Ubuntu installers -- I've searched everywhere I can think >> of. Various things Google has found lead me to believe that they no >> longer support installing grub in a partition. > > Try using '--force' to make GRUB install its image in some distro's > boot/root partition PBR instead of the disk MBR, but you'll probably > have to perform this outside the installer script. I've done this > with VMs. 
Yes, in my OP describing what I'm doing now it explains that's what I do. Then I lock the Grub files that are located using the blocklists created by the --force option. >> I guess I'll stick with my current setup. >> >> Or perhaps I'll switch from a DOS disklabel to a GPT disklabel. >> Instead of backing up and restoring the MBR and the gap, I would >> backup and restore the MBR and the BIOS boot partition. And I could >> use UUIDs and partition labels. > > These days I use disks with GPT even on MoBos with legacy BIOS. Same here -- except for this one machine. I think I'll switch it over soon. > Instead of backing up and restoring the MBR/BIOS Boot Partition you > could just reinstall grub and run grub-mkconfig, as long as the > latter involves fewer key-presses. ;-) I don't use grub-mkconfig for the "main" grub. It has a fixed grub.cfg file that does nothing but chainload the user-selected partition. Currently, backing up MBR+gap only happens once when I install/setup the main grub. Restoring MBR+gap is one command (which is actually in a shell script) that's run after any new distro is installed. MBR+Bios-boot-partition would work pretty much the same way. -- Grant
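Spelled out, the install-and-lock step reads roughly like this. The partition number is an example, and the core.img path is an assumption (grub2 puts the blocklisted image under /boot/grub/i386-pc/ on BIOS installs; check your own layout):

```shell
# Install grub's core image into the partition boot record. grub
# refuses a blocklist install without --force, because the filesystem
# is free to relocate the file later:
grub-install --force --boot-directory=/boot /dev/sda7

# Lock the blocklisted image so the filesystem can't move it. This
# must be undone (chattr -i) before the next grub upgrade for that
# distro, then redone after re-running grub-install --force:
chattr +i /boot/grub/i386-pc/core.img
```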
[gentoo-user] Re: How to set up drive with many Linux distros?
On 2024-02-23, Wojciech Kuzyszyn wrote: > I guess most (all) of the distro's you are talking about use GRUB (or > at least they allow to do it). Yes, I believe that they are all now using Grub2. > If that's true, I'm pretty sure you can happily let them overwrite > the GRUB in MBR as many times as they want, since it's the same (or > just probably minor version differences) bootloader. Just make a > copy of /boot/grub/grub.cfg and make sure it's the same on every > partition. That means I have to update all of those grub.cfg files manually every time I install or update a distro. That's a lot of work. > Or, even better, if that's possible right now, make a common /boot > partition and after installing the new distro just merge the > (probably new) /boot/grub/grub.cfg with your old one. "just merge the new grub.cfg file with the old one" is the problem. I tried that for a while: every time a distro got installed, I would add it to the "main" grub config file. Every time a distro got a kernel update, I'd modify the main grub config file. It was a lot more work than my current scheme (and a lot more error prone). > I really think that *should* work! It would work, but maintaining the grub.cfg files is a lot of work. The scheme I'm using now doesn't require me to mess with any of the grub.cfg files when distros get updated or installed. -- Grant
[gentoo-user] Re: How to set up drive with many Linux distros?
On 2024-02-22, Wol wrote: > On 22/02/2024 21:45, Grant Edwards wrote: >> I've been reading up on UEFI, and it doesn't seem to be any >> better. People complain about distros stomping on each other's files >> in the ESP partition and multiple distros using the same name in the >> boot slots stored in NVM. And then the boot choice order changes >> (though it may not be apparent to the naked eye) when one of the >> distros decides to update/reinstall its boot stuff. > > At least if you use UEFI *as* your bootloader, then that won't > happen. That assumes you're using UEFI, though! According to what I've read UEFI isn't a bootloader. It's a boot manager which can load and run EFI bootloaders (of which there can be multiple installed). > In which case, 's bootloader doesn't get a look-in. Yes, AFAICT, it does (sometimes?). When you install under UEFI it installs EFI bootloader files (either kernels wrapped in EFI bootloader executables or the grub EFI bootloader) in the EFI System Partition (ESP), and then adds one or more entries in the EFI NVM that point to those files (or something like that). The Linux UEFI systems I have all still use grub2 (which gets written to the ESP). It's entirely possible for one distro to overwrite files in the ESP that belong to other distros. I've read multiple complaints about exactly that when trying to do multi-boot with UEFI. In practice it's just like the fight over who owns the MBR and the DOS disklabel gap. One recipe I read about for doing what I wanted to do with UEFI involved installing a Linux distro (didn't really matter which), then installing rEFInd. After that, some manual renaming and deleting of the files in the ESP was required. Then he started installing various distros. After each distro installation, the author had to re-install rEFInd, and after many of them he had to manually remove or rename files in the ESP (or adjust the rEFInd config file). 
And in the end, he ended up with multiple menu entries (for different installations) that had identical names. It was more complicated and difficult than my current scheme. > As for "'s obviously superior bootloader", well > is using the exact same boot-loader, and when IT installs, how is it > going to be able to boot if it can't call 's boot > loader because it's just trashed it by overwriting it? In my experience, 's bootloader does not boot other installations by calling other bootloaders. It does so by rummaging through all of the other partitions looking for kernel images, initrd files, grub.cfg files, etc. It then adds menu entries to the config file for 's bootloader which, when selected, directly load the kernel image and initrd from those other partitions. Sometimes, it works -- at least until those other installations get updated without the knowledge of the distro that currently "owns" the MBR's bootloader config. Then it stops working until you tell that bootloader to re-do its rummaging about routine. > To me, you seem to be describing the *default* installer setup, that's > been there for ever. Last I installed SUSE, iirc I had to specify > "advanced bootloader installation", most of whose options I didn't even > understand!, but it did do what I told it to (apart from not recognising > my weird disk stack!). So SuSe still allows you to install grub to a partition instead of MBR? That's encouraging. RH and Ubuntu used to allow that, but AFAIK, now they do not. > If you can find, and understand!, that advanced options, I think > you'll find you can do what you want. I'd welcome pointers to where those advanced options are in the RH and Ubuntu installers -- I've searched everywhere I can think of. Various things Google has found lead me to believe that they no longer support installing grub in a partition. I guess I'll stick with my current setup. Or perhaps I'll switch from a DOS disklabel to a GPT disklabel. 
Instead of backing up and restoring the MBR and the gap, I would backup and restore the MBR and the BIOS boot partition. And I could use UUIDs and partition labels.
[gentoo-user] Re: How to set up drive with many Linux distros?
On 2024-02-22, Wol wrote: > On 22/02/2024 19:17, Grant Edwards wrote: > >> However, the choice to install bootloaders in partitions instead of >> the MBR has been removed from most (all?) of the common installers. >> This forces me to jump through hoops when installing a new Linux >> distro: > > File a bug! LOL, good one! As if a normal person filing a bug with RedHat or Ubuntu actually accomplishes anything. I'll tell them to make systemd optional while I'm at it. > If that's true, it basically borks any sort of dual boot, unusual disk > layout, whatever. Yep, it does. The answer from is: You should really just (shut up and) install 's (obviously superior) bootloader in the MBR. It will auto-detect (some of) the other already installed (obviously inferior) OSes, and will add (some subset of) them to the boot menu that will (sometimes) allow you to boot them (maybe -- if you kneel, bow your head and ask nicely). > Last time I installed SUSE, it trashed my boot totally because it > didn't recognise my disk stack, failed to load necessary drivers, > and worse trashed my gentoo boot too... > > Cue one big rescue job to get the system up and working again. At > least it was only the boot that was trashed. I've been reading up on UEFI, and it doesn't seem to be any better. People complain about distros stomping on each other's files in the ESP partition and multiple distros using the same name in the boot slots stored in NVM. And then the boot choice order changes (though it may not be apparent to the naked eye) when one of the distros decides to update/reinstall its boot stuff. -- Grant
[gentoo-user] How to set up drive with many Linux distros?
For many years, I've used a hard drive on which I have 8-10 Linux distros installed -- each in a separate (single) partition. There is also a single swap partition (used by all of the different Linux installations). There is also a small partition devoted only to the "master" instance of Grub that lives in the MBR and the space between the MBR and the first partition (the drive uses a DOS disklabel). That master instance of Grub has a menu which contains entries which "chainload" each of the other partitions. For many years, this worked great. All of the various distro installers offered the option of installing the bootloader in the MBR (e.g. /dev/sda) or in a partition (e.g. /dev/sdaN). I would tell the installer to install the bootloader in the root partition, and everything "just worked". However, the choice to install bootloaders in partitions instead of the MBR has been removed from most (all?) of the common installers. This forces me to jump through hoops when installing a new Linux distro: 1. Back up the MBR and gap between the MBR and the first partition. 2. Let the installer install its bootloader (seems it's always grub these days) in the MBR. 3. Boot into the newly installed Linux. 4. Manually install grub in the root partition (e.g. /dev/sdaN) using the --force option to tell grub to use blocklists to find its files. 5. Find those grub files and lock them so they can't be moved. 6. Restore the MBR/gap backup from step 1. It seems like there should be a better way to do this. One might hope that UEFI offers a solution to this problem. Google has found me others asking the same question but no real answers. Is there an easier way to do this? -- Grant
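Steps 1 and 6 can be sketched as a pair of dd wrappers. GAP_SECTORS=2048 is an assumption (first partition starting at sector 2048, the common modern default); verify it against your actual layout with fdisk before trusting it:

```shell
# Save/restore the MBR plus the post-MBR gap that grub embeds its
# core image into. Sector count is an assumption -- check your layout.
GAP_SECTORS=2048

save_mbr_gap() {    # usage: save_mbr_gap /dev/sda /root/mbr-gap.img
    dd if="$1" of="$2" bs=512 count="$GAP_SECTORS" conv=notrunc 2>/dev/null
}

restore_mbr_gap() { # usage: restore_mbr_gap /root/mbr-gap.img /dev/sda
    dd if="$1" of="$2" bs=512 count="$GAP_SECTORS" conv=notrunc 2>/dev/null
}
```

Note that restoring the full first sector also restores the partition table; that's fine here because the partition layout doesn't change between the backup and the restore.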
[gentoo-user] Re: Re-run grub-install to update installed boot code!
On 2024-02-17, Dale wrote: > Grant Edwards wrote: >> Today's routine update says: >> >> Re-run grub-install to update installed boot code! >> >> Is "sudo grub-install" really all I have to do? [...] >> >> Or do I have to run grub-install with all the same options that >> were originally used to install grub? > > I been wondering the same since I saw this posted on -dev. The news > item seems to mention the EFI booting but I'm sure us legacy booting > users need to do the same. At this point, I may skip updating grub > this week until I know exactly what I'm supposed to do as well. I'd > think we need to reinstall like when we first did our install but > not sure. :/ That was my guess. I should have recorded the options originally passed to grub-install. Now that I have BIOS boot partitions (instead of using embedded blocklists) on all my machines, reinstalling grub should be trivial. I think all I have to do is tell grub-install the boot device. > It would suck to have an unbootable system. More than once I've had to boot from either systemrescuecd or minimal gentoo install ISO so I could re-install (or re-configure) grub after something gets messed up. It's not difficult, but it is annoying. -- Grant
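Assuming a GPT disk with a BIOS boot partition, no blocklists are involved, so the reinstall should indeed reduce to naming the target disk (device name is an example):

```
grub-install --target=i386-pc /dev/sda
```

grub-install finds the BIOS boot partition by its GPT type code and embeds the core image there automatically.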
[gentoo-user] Re-run grub-install to update installed boot code!
Today's routine update says: Re-run grub-install to update installed boot code! Is "sudo grub-install" really all I have to do? Grub knows where/how everything was originally installed and will do the right thing without any options? Or do I have to run grub-install with all the same options that were originally used to install grub? [I use a manually generated grub.cfg file, so I'm ignoring the message that tells me to run "grub-mkconfig".] -- Grant
[gentoo-user] 147 .pem files in /etc/ need updating
After a routine update this morning (last one was probably 3 days ago), I see that 147 files in /etc need updating. When I run etc-update, they're all ".pem" CA files (or links?). It looks like it was all of the .pem files under /etc/ssl/certs. I did a -5, and all seems well. It's a bit alarming when portage tells you that all of your root CA files have been modified since they were installed. The package app-misc/ca-certificates was updated last week (and it's quite possible the 147 /etc files needed updating after that), but I've never had to deal with certificate file updates like that before (nor have I ever seen anything similar on other Gentoo machines). The only thing I can think of that seems relevant is that last week I did add one _local_ certificate using the method prescribed by the Wiki: # mkdir -p /usr/local/share/ca-certificates # cp .crt /usr/local/share/ca-certificates # update-ca-certificates Would that fool portage into thinking that all 147 CA files belonging to app-misc/ca-certificates had been modified and needed to be "merged" when app-misc/ca-certificates got upgraded? -- Grant
[gentoo-user] Re: Suggestions for backup scheme?
On 2024-02-06, J. Roeleveld wrote: > If you want to use snapshots, the filesystem will need to support it. (either > LVM or ZFS). If you only want to create snapshots on the backupserver, I > actually don't see much benefit over using rsync. Upthread I've been told that ZFS snapshots 1. Require far less disk space than rsync's snapshots. 2. Are far faster. 3. Are atomic. >> If (like rsnapshot/rsync's hard-link scheme) ZFS snapshots are normal >> directory trees that can be "browsed" with normal filesystem tools, >> that would be ideal. [I'll do some googling...] > > ZFS snapshots can be accessed using normal tools and can even be exposed over > NFS mounts making it super easy to find the files again. > > They are normally not visible though, you need to access them specifically > using "/filesystem/path/.zfs/snapshot" Great, that's exactly what I would hope for. I'm reading up on ZFS, and from what I've gleaned so far, it seems like ZFS on both source and backup would certainly be ideal. It's almost like the ZFS filesystem designers had thought about "how to backup" from the start. Something that all of the old-school filesystem designers clearly hadn't. :)
[gentoo-user] Re: Suggestions for backup scheme?
On 2024-02-06, J. Roeleveld wrote: > On Tuesday, February 6, 2024 4:38:11 PM CET Grant Edwards wrote: >> On 2024-02-05, J. Roeleveld wrote: >> > On Wednesday, January 31, 2024 6:56:47 PM CET Rich Freeman wrote: >> >> On Wed, Jan 31, 2024 at 12:40 PM Thelma wrote: >> >> > If zfs file system is superior to ext4 and it seems to it is. >> >> > Why hasn't it been adopted more widely in Linux? >> >> >> >> The main barrier is that its license isn't GPL-compatible. It is >> >> FOSS, but the license was basically designed to keep it from being >> >> incorporated into the mainline kernel. >> > >> > Which isn't as much of an issue as it sounds. You can still add it >> > into the initramfs and can easily load the module. >> >> What if you don't use an initrd? >> >> I presume that boot/root on ext4 and home on ZFS would not require an >> initrd? > > Yes, that wouldn't require an initrd. But why would you limit this? Because I really, really dislike having to use an initrd. That's probably just an irrational 30 year old prejudice, but over the decades I've found life to be far simpler and more pleasant without initrds. Maybe things have improved over the years, but way back when I did use distros that required initrds, they seemed to be a constant, nagging source of headaches. > ZFS works best when given the FULL drive. Where do you put swap? > For my server, I use "bliss-initramfs" to generate the initramfs and > have not had any issues with this since I started using ZFS. > > Especially the ease of generating snapshots also make it really easy > to roll back an update if anything went wrong. If your > root-partition isn't on ZFS, you can't easily roll back. True. However, I've never adopted the practice of backing up my root fs (except for a few specific directories like /etc), and haven't ever really run into situations where I wished I had. It's all stuff that can easily be reinstalled. -- Grant
[gentoo-user] Re: Suggestions for backup scheme?
On 2024-02-05, J. Roeleveld wrote: > On Wednesday, January 31, 2024 6:56:47 PM CET Rich Freeman wrote: >> On Wed, Jan 31, 2024 at 12:40 PM Thelma wrote: >> > If zfs file system is superior to ext4 and it seems to it is. >> > Why hasn't it been adopted more widely in Linux? >> >> The main barrier is that its license isn't GPL-compatible. It is >> FOSS, but the license was basically designed to keep it from being >> incorporated into the mainline kernel. > > Which isn't as much of an issue as it sounds. You can still add it > into the initramfs and can easily load the module. What if you don't use an initrd? I presume that boot/root on ext4 and home on ZFS would not require an initrd?
[gentoo-user] Re: Suggestions for backup scheme?
On 2024-02-05, Wols Lists wrote: > On 04/02/2024 15:48, Grant Edwards wrote: >> OK I see. That's a bit different than what I'm doing. I'm backing up >> a specific set of directory trees from a couple different >> filesystems. There are large portions of the "source" filesystems that >> I have no need to back up. And within those directory trees that do >> get backed up there are also some excluded subtrees. > > But my scheme still works here. The filesystem I'm snapshotting is the > backup. As such, it only contains the stuff I want backed up, copied > across using rsync. > > There's nothing stopping me running several rsyncs from the live system, > from several different partitions, to the backup partition. Ah! Got it. That's one of the things I've been trying to figure out this entire thread, do I need to switch home and root to ZFS to take advantage of its snapshot support for backups? In the case you're describing the "source" filesystem(s) can be anything. It's only the _backup_ filesystem that needs to be ZFS (or similar). If (like rsnapshot/rsync's hard-link scheme) ZFS snapshots are normal directory trees that can be "browsed" with normal filesystem tools, that would be ideal. [I'll do some googling...] -- Grant
[gentoo-user] Re: Suggestions for backup scheme?
On 2024-02-04, Wols Lists wrote: > On 04/02/2024 06:24, Grant Edwards wrote: > >> I don't understand, are you saying that somehow your backup doesn't >> contain a copy of every file? >> > YES! Let's make it clear though, we're talking about EVERY VERSION of > every backed up file. > And you need to get your head round the fact I'm not - actually - > backing up my filesystem. I'm actually snapshotting my disk volume, my > disk partition if you like. OK I see. That's a bit different than what I'm doing. I'm backing up a specific set of directory trees from a couple different filesystems. There are large portions of the "source" filesystems that I have no need to back up. And within those directory trees that do get backed up there are also some excluded subtrees. > Your strategy contains a copy of every file in your original backup, a > full copy of the file structure for every snapshot, and a full copy of > every version of every file that's been changed. Right. > My version contains a complete copy of the current backup and > (thanks to the magic of lvm) a block level diff of every snapshot, > which appears to the system as a complete backup, despite taking up > much less space than your typical incremental backup. If I were backing up entire filesystems, I can see how that would definitely be true. > To change analogies completely - think git. My lvm snapshot is like > a git commit. Git only stores the current HEAD, and retrieves > previous commits by applying diffs. If I "check out a backup" (ie > mount a backup volume), lvm applies a diff to the live filesystem. Got it, thanks.
[gentoo-user] Re: Suggestions for backup scheme?
On 2024-02-03, Wol wrote: > On 03/02/2024 16:02, Grant Edwards wrote: >> rsnapshot is an application that uses rsync to do >> hourly/daily/weekly/monthly (user-configurable) backups of selected >> directory trees. It's done using rsync to create snapshots. They are >> in-effect "incremental" backups, because the snapshots themselves are >> effectively "copy-on-write" via clever use of hard-links by rsync. A >> year's worth of backups for me is 7 daily + 4 weekly + 12 monthly >> snapshots for a total of 23 snapshots. If nothing has changed during >> the year, those 23 snapshots take up the same amount of space as a >> single snapshot. > > So as I understand it, it looks like you first do a "cp with hardlinks" > creating a complete new directory structure, but all the files are > hardlinks so you're not using THAT MUCH space for your new image? No, the first snapshot is a complete copy of all files. The snapshots are on a different disk, in a different filesystem, and they're just plain directory trees that you can browse with normal filesystem tools. It's not possible to hard-link between the "live" filesystem and the backup snapshots. The hard-links are to inodes "shared" between different snapshot directory trees. The first snapshot copies everything to the backup drive (using rsync). The next snapshot creates a second directory tree with all unchanged files hard-linked to the files that were copied as part of the first snapshot. Any changed files are just-plain-copied into the second snapshot directory tree. The third snapshot does the same thing (starting with the second snapshot directory tree). Rinse and repeat. Old snapshots trees are simply removed a-la 'rm -rf" when they're no longer wanted. > So each snapshot is using the space required by the directory > structure, plus the space required by any changed files. Sort of. The backup filesystem has to contain one copy of every file so that there's something to hard-link to. 
The backup is completely stand-alone, so it doesn't make sense to talk about all of the snapshots containing only deltas. When you get to the "oldest" snapshot, there's nothing to delta "from". > [...] > > And that is why I like "ext over lvm copying with rsync" as my > strategy (not that I actually do it). You have lvm on your backup > disk. When you do a backup you do "rsync with overwrite in place", > which means rsync only writes blocks which have changed. You then > take an lvm snapshot which uses almost no space whatsoever. > > So to compare "lvm plus overwrite in place" to "rsnapshot", my > strategy uses the space for an lvm header and a copy of all blocks > that have changed. > > Your strategy takes a copy of the entire directory structure, plus a > complete copy of every file that has changed. That's a LOT more. I don't understand, are you saying that somehow your backup doesn't contain a copy of every file? -- Grant
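The hard-link mechanics described above can be demonstrated with plain coreutils. This is only an illustrative sketch of what rsnapshot does (rsnapshot actually drives rsync; sync_into here is a naive stand-in, and it keeps just two generations), but it shows the essential trick: unchanged files stay hard-linked across snapshots, while a changed file is unlinked before being replaced, so the older snapshot keeps the old version:

```shell
# Naive stand-in for "rsync into the newest snapshot": copy only files
# whose content changed, breaking the hard link first so older
# snapshots keep the old version. (Deleted source files aren't pruned
# in this sketch.)
sync_into() {
    src=$1; dst=$2
    (cd "$src" && find . -type f) | while read -r f; do
        if ! cmp -s "$src/$f" "$dst/$f" 2>/dev/null; then
            mkdir -p "$dst/$(dirname "$f")"
            rm -f "$dst/$f"           # break the hard link first
            cp "$src/$f" "$dst/$f"
        fi
    done
}

# Rotate like rsnapshot: the previous snapshot becomes daily.1 via
# hard links (cp -al copies no file data), then daily.0 is refreshed.
take_snapshot() {
    rm -rf "$SNAPDIR/daily.1"
    [ -d "$SNAPDIR/daily.0" ] && cp -al "$SNAPDIR/daily.0" "$SNAPDIR/daily.1"
    mkdir -p "$SNAPDIR/daily.0"
    sync_into "$SRC" "$SNAPDIR/daily.0"
}
```

Set SRC and SNAPDIR, call take_snapshot from cron, and both daily.0 and daily.1 look like complete stand-alone directory trees even though unchanged files are stored only once.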
[gentoo-user] Re: Suggestions for backup scheme?
On 2024-02-03, Michael wrote: >> If you'll forgive the analogy, we'll say that the functionality of >> rsync (as used by rsnapshot) is built-in to ZFS. > > Broadly and rather loosely yes, by virtue of the COW and snapshot fs > architecture and the btrfs/zfs send-receive commands. > >> Is there an application that does with ZFS snapshots what the >> rsnapshot application itself does with rsync? > > COW filesystems do not need a 3rd party application. Really? I can edit a configuration file and then ZFS will provide me with daily/weekly/monthly/yearly snapshots of (say) /home, /etc, and /usr/local on an external hard drive? > They come with their own commands which can be called manually, or > scripted for convenience and automation. Yes, I know that. AFAICT, they provide commands that do pretty much what rsync does in my current backup scheme. It's the automation provided by rsnapshot that I'm asking about. > Various people have created their own scripts and applications, e.g. > > https://unix.stackexchange.com/questions/696513/best-strategy-to-backup-btrfs-root-filesystem > >> I googled for ZFS backup applications, but didn't find anything that >> seemed to be widespread and "supported" the way that rsnapshot is. > > There must be quite a few scripts out there, but can't say what support they > may receive. Random search revealed: > > https://www.zfsnap.org/ > > https://github.com/shirkdog/zfsbackup > > https://gbyte.dev/blog/simple-zfs-snapshotting-replicating-backup-rotating-convenience-bash-script Yes, there seem to be a lot of bare-bones homebrewed scripts like those. That is sort of what I was looking for, but they all seem a bit incomplete and unsupported compared to rsnapshot. I can install rsnapshot with a simple "emerge rsnapshot", edit the config file, set up the crontab entries, and Bob's your uncle: rsnapshot bugfixes and updates get installed by the usual Gentoo update process, and backups "just happen". -- Grant
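For what it's worth, the core of what those scripts automate is only a few zfs commands (the dataset and snapshot names here are hypothetical):

```
# Take an atomic snapshot of the source dataset:
zfs snapshot tank/home@2024-02-03

# First run: full replication to the backup pool on the external disk:
zfs send tank/home@2024-02-03 | zfs receive backup/home

# Later runs: incremental send of only the blocks changed since the
# previous snapshot:
zfs send -i @2024-02-02 tank/home@2024-02-03 | zfs receive backup/home
```

The scripts mostly add the rsnapshot-style parts around this: snapshot naming, rotation/expiry, and cron integration.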
[gentoo-user] Re: Suggestions for backup scheme?
On 2024-02-02, Mark Knecht wrote: > On Fri, Feb 2, 2024 at 4:39 PM Grant Edwards > wrote: > >> >> I googled for ZFS backup applications, but didn't find anything that >> seemed to be widespread and "supported" the way that rsnapshot is. > > I'm not exactly sure I'm following your thoughts above but rsnapshot is an application that uses rsync to do hourly/daily/weekly/monthly (user-configurable) backups of selected directory trees. It's done using rsync to create snapshots. They are in-effect "incremental" backups, because the snapshots themselves are effectively "copy-on-write" via clever use of hard-links by rsync. A year's worth of backups for me is 7 daily + 4 weekly + 12 monthly snapshots for a total of 23 snapshots. If nothing has changed during the year, those 23 snapshots take up the same amount of space as a single snapshot. My understanding of ZFS is that it has built-in snapshot functionality that provides something similar to what is done by rsync by its use of hard-links. In my current setup, there's an application called rsnapshot that manages/controls the snapshots by invoking rsync in various ways. My question was about the existence of a similar application that can be used with ZFS's built-in snapshot support to provide a similar backup scheme. > have you investigated True-NAS? It is Open ZFS based and > does support snapshots. I'm aware of True-NAS, but I'm not looking for NAS. I was asking about methods to back up one local disk to another local disk. -- Grant
[gentoo-user] Re: Suggestions for backup scheme?
On 2024-01-31, Rich Freeman wrote: > Honestly, at this point I would not run any storage I cared about on > anything but zfs. There are just so many benefits. > > [...] > > In any case, these COW filesystems, much like git, store data in a > way that makes it very efficient to diff two snapshots and back up > only the data that has changed. [...] In order to take advantage of this, I assume that the backup destination and source both have to be ZFS? Do backup source and destination need to be in the same filesystem? Or volume? Or pool? (I'm not clear on how those differ exactly.) Or can the backup destination be "unrelated" to the backup source? The primary source of failure in my world is definitely hardware failure of the disk drive, so my backup destination is always a separate physical (usually external) disk drive. If you'll forgive the analogy, we'll say that the functionality of rsync (as used by rsnapshot) is built-in to ZFS. Is there an application that does with ZFS snapshots what the rsnapshot application itself does with rsync? I googled for ZFS backup applications, but didn't find anything that seemed to be widespread and "supported" the way that rsnapshot is. -- Grant
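For reference, the raw send/receive flow to a pool on a separate physical disk looks roughly like this. The pool and dataset names and dates are invented, and the commands are captured in a here-document and printed rather than executed, so no real pools are needed:

```shell
# Rough sketch of backing up one local ZFS pool to another on a separate
# disk via send/receive.  Names and dates are made up; the commands are
# only printed here, not executed.
plan=$(cat <<'EOF'
zpool create backup /dev/sdb1                   # pool on the external drive
zfs snapshot tank/home@2024-02-01
zfs send tank/home@2024-02-01 | zfs receive backup/home         # full copy
zfs snapshot tank/home@2024-02-02
zfs send -i @2024-02-01 tank/home@2024-02-02 | zfs receive backup/home  # incremental
EOF
)
echo "$plan"
```

The -i form sends only the blocks that changed between the two snapshots, which is the git-like diff efficiency being described.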
[gentoo-user] Re: Suggestions for backup scheme?
On 2024-01-31, Thelma wrote: > On 1/31/24 08:50, Grant Edwards wrote: >> On 2024-01-31, Rich Freeman wrote: >> >>> Honestly, at this point I would not run any storage I cared about on >>> anything but zfs. There are just so many benefits. >> >> I'll definitely put zfs on my list of things to play with. I've been >> a little reluctant in the past because it wasn't natively supported. I >> don't use an initrd (or modules in general). So, using a filesystem >> that isn't supported in-tree sounded like too much work. > > If zfs file system is superior to ext4 and it seems to it is. > Why hasn't it been adopted more widely in Linux? My understanding is that the license is incompatible with the Linux kernel's GPL license, so it can't be "built-in" the way that ext4 is. -- Grant
[gentoo-user] Re: Suggestions for backup scheme?
On 2024-01-31, Rich Freeman wrote: > On Wed, Jan 31, 2024 at 6:45 AM John Covici wrote: >> >> I know you said you wanted to stay with ext4, but going to zfs reduced >> my backup time on my entire system from several hours to just a few >> minutes because taking a snapshot is so quick and copying to another >> pool is also very quick. >> > > Honestly, at this point I would not run any storage I cared about on > anything but zfs. There are just so many benefits. I'll definitely put zfs on my list of things to play with. I've been a little reluctant in the past because it wasn't natively supported. I don't use an initrd (or modules in general). So, using a filesystem that isn't supported in-tree sounded like too much work. -- Grant
[gentoo-user] Re: Suggestions for backup scheme?
On 2024-01-31, gentoo-u...@krasauskas.dev wrote: > On Tue, 2024-01-30 at 20:38 +0000, Grant Edwards wrote: >> >> It took me an embarrassing number of tries to get the intervals and >> crontab entries to mesh so it worked the way I wanted. It's not >> really >> that difficult (and it's pretty well documented), but I managed to >> combine a misreading of how often and in what order the rsync wrapper >> was supposed to run with my chronic inability to grok crontab >> specifications. Hilarity ensued. >> > > I just wanted to share my 2¢. https://crontab.guru has made my life a > lot easier when it comes to setting up crontab. Yep, I just found that site recently myself, and it's a big help. -- Grant
[gentoo-user] Re: Suggestions for backup scheme?
On 2024-01-30, Rich Freeman wrote: > On Tue, Jan 30, 2024 at 3:08 PM Wol wrote: >> >> On 30/01/2024 19:19, Rich Freeman wrote: >> > I'd echo the other advice. It really depends on your goals. >> >> If you just want a simple backup, I'd use something like rsync onto lvm >> or btrfs or something. I've got a little script that sticks today's date >> onto the snapshot name > > So, you've basically described what rsnapshot does, minus half the > features. You should consider looking at it. If you do, read carefully the documentation on intervals and automation. It took me an embarrassing number of tries to get the intervals and crontab entries to mesh so it worked the way I wanted. It's not really that difficult (and it's pretty well documented), but I managed to combine a misreading of how often and in what order the rsync wrapper was supposed to run with my chronic inability to grok crontab specifications. Hilarity ensued. > It is basically an rsync wrapper and will automatically rotate > multiple snapshots, and when it makes them they're all hard-linked > such that they're as close to copy-on-write copies as possible. The > result is that all those snapshots don't take up much space, unless > your files are constantly changing.
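For anyone else who trips over the same thing, here is the shape of the mesh being described, with made-up times and counts. As I finally understood the documentation, only the smallest interval actually runs rsync; the larger ones just rotate existing snapshots, which is why they get scheduled slightly before it:

```
# /etc/rsnapshot.conf -- fields must be TAB-separated; counts are examples
retain	daily	7
retain	weekly	4
retain	monthly	12

# crontab -- larger intervals rotate first, then daily does the real rsync
30 3 1 * *  /usr/bin/rsnapshot monthly
45 3 * * 1  /usr/bin/rsnapshot weekly
0  4 * * *  /usr/bin/rsnapshot daily
```

Get the ordering backwards and a rotation can shuffle away a snapshot before the smaller interval has refreshed it, which is exactly the sort of hilarity that ensued.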
[gentoo-user] Re: Suggestions for backup scheme?
On 2024-01-30, Rich Freeman wrote: > On Tue, Jan 30, 2024 at 1:15 PM Grant Edwards > wrote: >> >> Are there other backup solutions that people would like to suggest I >> look at to replace rsnapshot? I was happy enough with rsnapshot (when >> it was running), but perhaps there's something else I should consider? > > I'd echo the other advice. It really depends on your goals. FWIW, I'm backing up only home directories and config stuff (e.g. /etc). I don't backup the OS itself or anything installed in /opt by installers or ebuilds. > I think the key selling point for rsnapshot is that it can generate a > set of clones of the filesystem contents that are directly readable. Yes, that's the main advantage of rsnapshot. You can browse through backups without any special tools. > That isn't as efficient as it can be, but it is very simple to work > with, and it is done about as well as can be done with this sort of > approach. Restoration basically requires no special tooling this > way, so that is great if you want to restore from a generic rescue > disk and not have to try to remember what commands to use. Yep, rsnapshot can take several hours to run every night. But, being able to look through backups with nothing more than "cd", "ls", and "cat" sure is nice. > send-based tools for filesystems like btrfs/zfs are SUPER-efficient > in execution time/resources as they are filesystem-aware and don't > need to stat everything on a filesystem to identify exactly what > changed in an incremental backup. However, you're usually limited to > restoring to another filesystem of the same type and have to use those > tools. For now, I need something for ext4, but backup-ability is definitely a reason to consider switching filesystem types. > Restic seems to be the most popular tool to backup to a small set of > files on disk/cloud. I use duplicity for historical reasons, and > restic does the same and probably supports more back-ends. > These tools are very useful for cloud backups as they're very efficient > about separating data/indexes and keeping local copies of the latter > so you aren't paying to read back your archive data every time you do > a new incremental backup, and they're very IO-efficient. I generally backup to a USB 3 external hard drive, so IO efficiency isn't as much a concern as when backing up to cloud. There are things I periodically back up to cloud, but that's generally a manual process involving little more than "scp". > Bacula is probably the best solution for tape backups of large numbers > of systems, but it is really crufty and unwieldy. I would NOT use > this to backup one host, and especially not to back up the host > running bacula. Bootstrapping it is a pain. It is very much designed > around a tape paradigm. Thanks, I'll cross that one off the list. :) > If you have windows hosts you want to backup Thankfully, I don't.
[gentoo-user] Re: Suggestions for backup scheme?
On 2024-01-30, Michael wrote: > On Tuesday, 30 January 2024 18:15:09 GMT Grant Edwards wrote: >> I need to set up some sort of automated backup on a couple Gentoo >> machines (typical desktop software development and home use). One of >> them used rsnapshot in the past but the crontab entries that drove >> that have vanished :/ The crontabs had not disappeared, I was looking on the wrong computer. Too many terminal windows open... > You have probably seen the backup packages suggested in this wiki page? Yep. Rsnapshot is one of them, and it's what I chose quite a few years ago. > https://wiki.gentoo.org/wiki/Backup > > and what's available in the tree: > > https://packages.gentoo.org/categories/app-backup I'll look through the package database and visit some of the home pages. > You may also want to consider integral filesystem solutions like btrfs and > zfs, depending on your needs and how often your data change, as well as > related scripts; e.g.: I don't think I'm ready to switch from ext4.
[gentoo-user] Re: Suggestions for backup scheme?
On 2024-01-30, Thelma wrote: > I backup, periodically: > - corontab (user, root) > - etc > - hylafax > > daily: > - data > > It all depend what you want you backup, how large is your data. > For backup standard "rsync" over the network does the job OK rsnapshot is a perl app that automates/organizes rsync backups, so it's doing pretty much the same thing as the script below (though it's a little more sophisticated). > I customized this rsync-bacup script: > https://serverfault.com/questions/271527/explain-this-rsync-script-for-me-linux-backups > [...]
[gentoo-user] Suggestions for backup scheme?
I need to set up some sort of automated backup on a couple Gentoo machines (typical desktop software development and home use). One of them used rsnapshot in the past but the crontab entries that drove that have vanished :/ (presumably during a reinstall or upgrade -- IIRC, it took a fair bit of trial and error to get the crontab entries figured out). I believe rsnapshot ran nightly and kept daily snapshots for a week, weekly snapshots for a month, and monthly snapshots for a couple years. Are there other backup solutions that people would like to suggest I look at to replace rsnapshot? I was happy enough with rsnapshot (when it was running), but perhaps there's something else I should consider? -- Grant
[gentoo-user] Re: The hopeless futility of printing.
On 2024-01-29, Michael wrote: > On Monday, 29 January 2024 18:19:19 GMT Alan Grimes wrote: > >> It's a LaserJet Pro M453-4. > > You shouldn't need hplip drivers and what not, IPP Everywhere ought to allow > driverless CUPS to allow you to print: > > https://www.pwg.org/printers/ Does anybody have any experience with using IPP Everywhere for driverless printing with a USB attached printer? (e.g. LaserJet 1320)? Yea, I know, it works as is with the PCL driver, so don't futz with it... -- Grant
[gentoo-user] Re: sending message from Linux to window user
On 2024-01-26, Thelma wrote: > Is there a way to send a pop-up message to Windows user from Linux? > > The below command works but from Windows to Windows: > msg fd /server:fd-server "Your message here" > > but I need it to work from Linux. > > I tried: > smbclient -M fd\%5d2f0of -I 10.0.0.137 > Connection to fd%522002fd failed. Error NT_STATUS_RESOURCE_NAME_NOT_FOUND > > $ net send fd "Your message here" > Invalid command: net send https://superuser.com/questions/1625305/sending-messages-from-linux-machine-to-windows-one-fails
[gentoo-user] Re: downloading from cell phone to Gentoo
On 2024-01-18, Philip Webb wrote: > 240117 Philip Webb wrote: >> I want to be able to download photos from my new cellphone to Gentoo. >> The phone is a Samsung A14 5G ; its pet name is Athene. >> I use KDE to manage my desktops on my desktop machine ANB6. > > Thanks for the many replies, which offer as many different methods. > I'll try them out & report back. > > One further important question : do I need to enable Fuse in the kernel ? All of the MTP implementations I've tried were user-space filesystems, so for those you will need Fuse enabled in the kernel. There may be MTP clients that act more like remote filesystem browsers and don't use an underlying Fuse filesystem. It would be cool if something like Filezilla spoke MTP.
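One quick way to check the running kernel for FUSE support (the /proc/config.gz route assumes CONFIG_IKCONFIG_PROC is enabled, which it may not be, hence the fallback):

```shell
# Check whether the running kernel was built with FUSE support.
# /proc/config.gz only exists when CONFIG_IKCONFIG_PROC=y, so fall back
# to a placeholder message rather than failing hard.
fuse=$(zgrep '^CONFIG_FUSE_FS=' /proc/config.gz 2>/dev/null \
        || echo 'CONFIG_FUSE_FS=unknown')
echo "$fuse"
```

With a configured source tree, `grep CONFIG_FUSE_FS= /usr/src/linux/.config` answers the same question.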
[gentoo-user] Re: downloading from cell phone to Gentoo
On 2024-01-18, Philip Webb wrote: > I want to be able to download photos from my new cellphone to Gentoo. > The phone is a Samsung A14 5G ; its pet name is Athene. > I use KDE to manage my desktops on my desktop machine ANB6. MTP can be a bit temperamental if that's what's being used. It's been around for enough years that it should "just work", but that's still not my experience, even with recent phones. I usually fire up an sftp server on the phone and do file transfers via Wifi. I find it to be less hassle and faster.
[gentoo-user] Re: VboxClient: the virtualbox kernel service is not running. Exiting
On 2023-12-12, the...@sys-concept.com wrote: > It was a virtualbox upgrade (not kernel), the notification is on > Gentoo host system running VM. Were you trying to run guest additions on the host? > I might be related to "app-emulation/virtualbox-guest-additions" > Unmerging this package solved the problem, no more pop-up at login. https://packages.gentoo.org/packages/app-emulation/virtualbox-guest-additions "VirtualBox kernel modules and user-space tools for Gentoo guests" https://wiki.gentoo.org/wiki/VirtualBox Guest Additions To install the Guest Additions, invoke the following command on the Gentoo guest system: root #emerge --ask app-emulation/virtualbox-guest-additions You do not install Gentoo virtualbox-guest-additions on the Gentoo host.
[gentoo-user] Re: ffmpeg: WARNING: One or more updates/rebuilds have been skipped due to a dependency conflict
On 2023-12-04, Michael wrote: >> However, the "h264enc" package has a hard dependency on mplayer. > > Which I believe is not needed for mpv. You can set: The problem is not that h264enc is required by mplayer, it's that the h264enc package requires mplayer: From the h264enc ebuild [https://gitweb.gentoo.org/repo/gentoo.git/tree/media-video/h264enc/h264enc-10.4.7-r1.ebuild]: RDEPEND="media-video/mplayer[encode,x264] sys-apps/coreutils [...] sys-process/time" > vo=gpu > hwdec=auto > > or, > > hwdec=auto-safe > > in .config/mpv/mpv.conf and all should be good. Check the mpv man page for > "Actively supported hwdecs" to see what applies to your hardware. I don't understand how that's relevant to h264enc's dependency on mplayer. The solution is probably to replace the h264enc script (which requires mplayer -- more specifically, it appears to require mencoder), with an equivalent script which uses mpv or ffmpeg instead of mencoder for transcoding. FWIW, mpv doesn't have a separate "encoder" utility like mplayer does. Instead, you just add some -o "output" options to tell it where to write the stream and what container/codec to use for output. -- Grant
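Hypothetical replacements for a mencoder transcode might look like the following; the file names and codec choices are made up, and the mpv variant assumes a build with encoding enabled (a compile-time option). The commands are printed rather than run:

```shell
# Hypothetical transcode commands to replace mencoder.  File names and
# codec choices are made-up examples; the mpv form assumes an mpv built
# with encoding support.  Printed here, not executed.
cmd_mpv="mpv input.avi --o=output.mp4 --ovc=libx264 --oac=aac"
cmd_ffmpeg="ffmpeg -i input.avi -c:v libx264 -c:a aac output.mp4"
printf '%s\n' "$cmd_mpv" "$cmd_ffmpeg"
```

Either would let an h264enc-style wrapper script drop its mplayer dependency entirely.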
[gentoo-user] Re: ffmpeg: WARNING: One or more updates/rebuilds have been skipped due to a dependency conflict
On 2023-12-04, Dale wrote: > Grant Edwards wrote: > >> Do you really need both mpv and mplayer? > > > Given the new one fails to build, that is a good question. Personally, > I just want to play videos. lol This is what equery shows as needing > mplayer. > > > > root@fireball / # equery d media-video/mplayer > * These packages depend on media-video/mplayer: > media-video/devedeng-4.17.0-r2 (media-video/mplayer) > media-video/h264enc-10.4.7-r1 (media-video/mplayer[encode,x264]) > media-video/smplayer-23.6.0 (media-video/mplayer[bidi?,libass,png,X]) > root@fireball / # > > I use smplayer, a LOT. It's what I use to watch videos on my TV with. > Can smplayer use mpv instead? According to Wikipedia: SMPlayer is a cross-platform graphical front-end for MPlayer and mpv. In the smplayer ebuild file it says this: RDEPEND="${DEPEND} || ( media-video/mpv[libass(+),X] media-video/mplayer[bidi?,libass,png,X] ) So yes, smplayer will use either. > Would disabling those USE flags above make it not need mplayer? In the devedeng ebuild file it too appears to be happy with either mplayer, vlc, or mpv: RDEPEND=" [...] || ( media-video/vlc media-video/mpv media-video/mplayer ) [...] " However, the "h264enc" package has a hard dependency on mplayer.
[gentoo-user] Re: ffmpeg: WARNING: One or more updates/rebuilds have been skipped due to a dependency conflict
On 2023-12-04, Dale wrote: > I either started a thread on this a while back or it was mentioned > inside another thread. This has been popping up for months now. Either > I have something set wrong or there is a problem in a ebuild or > something. I just don't know what. This is what I get. I'm having to > use this command because it does update everything else. This however > triggers the same as I get during a regular world update. > > root@fireball / # emerge -auDN ffmpeg mpv mplayer A month or two ago, I had to give up on mplayer and replace it with mpv [https://en.wikipedia.org/wiki/Mpv_(media_player)]. mplayer required old versions of various libraries, and that was preventing other things from getting updated because they depended on more modern versions of those same libraries. Do you really need both mpv and mplayer?
[gentoo-user] Re: Abnormal processor temperature.
On 2023-11-21, Laurence Perkins wrote: > I have a system here running an Intel N97 processor, which is idling > at 70-80C on Gentoo with all cores 99% idle. This is 40 degrees > hotter than it runs on Ubuntu or Windows 10. > > Powertop confirms that the CPU is spending nearly all of its time in > idle mode. Are clock speeds being scaled down when idle? Or does the N97's "idle mode" preclude the need to scale down clock speed when not busy to avoid high temps?
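One quick way to answer the clock-scaling question on any Linux box (the sysfs governor file only exists once a cpufreq driver is loaded, so it may be absent):

```shell
# Sample the current clock of every core from /proc/cpuinfo; at idle
# these should sit well below the rated maximum if scaling is working.
mhz=$(awk '/^cpu MHz/ { printf "core %d: %s MHz\n", n++, $4 }' /proc/cpuinfo)
echo "$mhz"
# Which governor is in charge, if a cpufreq driver is loaded at all:
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor 2>/dev/null || true
```

If the cores stay pinned near maximum while 99% idle, the missing cpufreq/idle driver, not the workload, is the suspect.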
[gentoo-user] Re: OFF TOPIC Need Ubuntu network help: boot loader info
On 2023-10-19, Dale wrote: > That config kinda reminds me of the old grub. A title line, location of > kernel and then options. Sounds easy enough. The new grub config is > almost impossible to config by hand. They had to make a tool to do it. > That says a lot there. ;-) Manually configuring Grub2 for a single OS is pretty trivial. Here's a typical grub.cfg file:

--grub.cfg--
timeout=10
default=0
root (hd0,0)

menuentry vmlinuz-5.15.135-gentoo {
  linux /boot/vmlinuz-5.15.135-gentoo root=/dev/sda1
}

menuentry vmlinuz-5.10.76-gentoo-r1 {
  linux /boot/vmlinuz-5.10.76-gentoo-r1 root=/dev/sda1
}

If you want to get fancy and use labels and UUIDs, it looks like this:

--grub.cfg--
search --no-floppy --label ROOT --set root
timeout=10
default=0

menuentry vmlinuz-5.15.135-gentoo {
  linux /boot/vmlinuz-5.15.135-gentoo root=PARTUUID=fd96ac2d-5521-c043-9fdb-5067b48fb063
}

menuentry vmlinuz-5.15.127-gentoo {
  linux /boot/vmlinuz-5.15.127-gentoo root=PARTUUID=fd96ac2d-5521-c043-9fdb-5067b48fb063
}

Most distros add 2 or 3 layers of obfuscation on top of grub.cfg with scripts upon scripts upon scripts that read a dozen or so config files and automagically detect kernels and initrds and other OSes and then generate a grub.cfg file containing many hundreds of lines of stuff. If you just boot one OS with a "main" kernel and a "backup" kernel, then all you need is what you see above.
[gentoo-user] Re: OFF TOPIC Need Ubuntu network help
On 2023-10-18, Michael wrote: > >> The protective MBR and the BIOS boot partition are two different, >> unrelated things. The BIOS boot partition is a real partition (usually >> 1-2MB in size) that's present in the GPT parition table. It's used by >> Grub as a place to store its files. > > Yes, this is needed on GPT disks when installed on BIOS MoBos. There is a way to install Grub on GPT disks without it, but it takes extra work and isn't worth it. You have to lock certain files in place under /boot/grub so that block-lists can be embedded in sector 0. All of the disk label utilities I've seen recently will, by default, leave a sizable empty space between the primary GPT table and the start of the first partition (which typically starts at a 1MB offset from the start of the disk). I've never understood why Grub won't use that space the way it will use the empty space between an MBR and the first partition. >> It must be the first partition, and it doesn't have a real >> filesystem (grub uses some sort of private filesystem): > > I'm not sure it uses any filesystem. I understood it uses a raw sector jump > from the MBR to the GPT partition type 0xEE. I've read a couple vague but differing descriptions of it. One description specifically referred to "files" (plural) and some sort of grub-private-internal filesystem. However, it could be that it's nothing but a single "file" starting at block 0 in that partition. Whatever it is, it seems to be "opaque" in that Grub puts stuff in that partition, Grub later uses that stuff, and nobody else needs to know or care what it is or how it's organized. I haven't looked through the Grub source code to try to see inside the black box... -- Grant
[gentoo-user] Re: OFF TOPIC Need Ubuntu network help
On 2023-10-18, Rich Freeman wrote: > Oh well, I rarely reboot so it just hasn't been on the top of my > list of things to fix. I don't really care much on the Ubuntu servers I maintain because they are rarely rebooted, and their network interfaces are always up. A couple weeks ago I was testing/troubleshooting some PCI-express board prototypes which meant rebooting dozens of times a day. I threw Ubuntu server on a spare machine for that, but the 2-minute delay drove me nuts. After futzing around for a while, I did get Ubuntu to boot in a timely fashion [but it meant I had to manually configure one of the network interfaces with 'ip' when I wanted to use it]. However, I never could get the serial console to work acceptably on Ubuntu. It worked fine during the kernel boot, but once systemd started up, the serial console got shut down. I wasted hours trying to figure out how to fix that before I gave up on Ubuntu. I finally ended up installing Gentoo/openrc, and then it only took a few minutes to figure out how to keep the serial console working. -- Grant
[gentoo-user] Re: OFF TOPIC Need Ubuntu network help
On 2023-10-18, Michael wrote: >> Oh, and if you use GPT, you no longer need the MBR compatibility >> partition, or whatever its called. I no longer need it so I can't >> remember the exact name. > > Man pages of partitioning tools refer to it as "Protective MBR", although > I've > seen it mentioned in the interwebs as "protective GPT", which I think is more > accurate. It uses the first sector (LBA 0) to store an MBR table showing the > whole disk, or 2TB if smaller, as an MBR partition. This is the first > partition on the disk, typically 1 MiB in size. It is meant to stop 20 year > old partitioning tools from messing up a GPT partitioning scheme because they > can't see it. Arguably nobody uses Windows 98 these days, so it should be > safe to not have a protective MBR on your GPT disks. The protective MBR and the BIOS boot partition are two different, unrelated things. The BIOS boot partition is a real partition (usually 1-2MB in size) that's present in the GPT partition table. It's used by Grub as a place to store its files. It must be the first partition, and it doesn't have a real filesystem (grub uses some sort of private filesystem):

$ sudo fdisk -l /dev/nvme0n1
Disk /dev/nvme0n1: 465.76 GiB, 500107862016 bytes, 976773168 sectors
Disk model: Samsung SSD 980 PRO 500GB
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: E81DD16A-A5AE-3C4A-AD3C-26DF2985827A

Device             Start       End   Sectors   Size Type
/dev/nvme0n1p1      2048      6143      4096     2M BIOS boot
/dev/nvme0n1p2      6144 134219775 134213632    64G Linux filesystem
/dev/nvme0n1p3 134219776 976773134 842553359 401.8G Linux filesystem
[gentoo-user] Re: OFF TOPIC Need Ubuntu network help
On 2023-10-17, Mark Knecht wrote: > I have 4 Ubuntu-based machines here and over the last 6 years I've > never seen a 1 minute delay to login, much less 5 minutes. I see it all the time. Two minutes is the most common delay that I run into, but I've seen longer. The two-minute delay I frequently run into has usually got something to do with networking. For example, if one of the links is down, Ubuntu is really fond of waiting a couple minutes for it to come up before it finishes booting. [If it doesn't wait for all the network interfaces, how is it going to do all that cloudy crap nobody really wants?] People have been complaining about that one for years and years and years. There have been countless web pages written about it with almost as many different answers/suggestions. Here's a recent one: https://devicetests.com/fix-waiting-for-network-configuration-error-ubuntu-startup The really fun part is that since the methods used for configuring the network on Ubuntu change with the seasons, 95% of the suggested fixes you find are irrelevant even if they were on-target at one point. I've run into various other (less common) causes of Ubuntu boot delays, but it's usually waiting for "network configuration". And then there are the delays during shutdown... And how about the stupid #@$% "modem manager" that mucks with serial ports looking for dial-up modems. Yea, that still needs to be installed and enabled by default on every Ubuntu install on the planet... -- Grant
[gentoo-user] Re: OFF TOPIC Need Ubuntu network help
On 2023-10-17, Dale wrote: > I to find Gentoo to be much better documented. There were places where > the old BIOS and efi info got a little confusing but eventually I > figured it out. I been trying to think of a way to color code the docs > but I can't figure out a sensible way. You got BIOS and efi, openrc and > systemd and several other smaller things that one has to decide on and > take different steps. One would run out of colors or the colors > themselves would get confusing. I can't think of a better way. In theory, a wee bit of CSS and Javascript along with some radio buttons would allow the reader to make a few choices and then see an installation manual that only shows the relevant sections. I still miss being able to view the installation manual as a single HTML page. I find the "chopped up" format difficult to use: I can't easily search for things, and the bit I'm looking for never seems to be in the section where I think it's going to be. -- Grant
[gentoo-user] Re: Video card. Will this work for me?
On 2023-10-13, Dale wrote: > As most likely know, I'm in the process of building a new rig and > putting a couple older systems to use. Most of my mobos support > PCIe-x16 2.0 for video cards. I found a Nvidia NVS 510 that has four > mini HDMI outputs. Research claims those are for multiple monitors, in > other words not cloned. I can use mini HDMI to regular HDMI adapters > easy enough. I found a website that lists Nvidia cards and I think this > will suite my needs rather well. I currently have a GeForce GTX 650 and > it works just fine but it is older. While this 510 isn't a spring > chicken by any means, it seems to be a little younger. On my newer rig, > it will likely be PCI 3.0. Will this card work in it as well? In other > words, are they backward compatible? I think I've read that mobos are > but want to be sure. Yes, PCIe cards are supposed to be backwards compatible with older PCIe implementations on motherboards and vice-versa. Be warned: I went through a couple generations of NVS-3xx cards. I had to recycle both of them because NVidia driver support stopped long before I had any real reason to replace either of the boards. [I could never get the open-source driver to work.] After being forced to replace a second perfectly serviceable NVidia card, I swore never to buy NVidia again. -- Grant
[gentoo-user] Re: What to do about openssl
On 2023-10-04, John Covici wrote: > Hi. I just did a world update and found that my openssl-1.1.1v is > masked. What can I do, Use one of the stable versions. > I don't have any version that is not masked Huh? What architecture are you on? There are three versions of openssl that are stable and not masked for amd64, x86, and most others: 3.0.9-r1 3.0.9-r2 3.0.10 see https://packages.gentoo.org/packages/dev-libs/openssl > and according to the message this version is EOL. Indeed. OpenSSL 1.1.1 is dead. Support ended a few weeks ago.
[gentoo-user] Re: How to move ext4 partition
On 2023-09-21, Jack wrote: > >> [...] Of course I've discovered for the Nth time in the past 10-15 >> years, that for the root= command line argument, the kernel doesn't >> grok LABEL or UUID values -- it only understands device names and >> PARTUUID. > > while my Gentoo grub.cfg has root=PARTUUID=, my Artix Linux install > (using openrc) has root=UUID=. I wasn't aware they had mucked with > grub (2.12-rc1) nor do I know if it's a recent change in grub. AFAIK, it's not grub (grub does know how to handle LABEL and UUID when setting grub's root). For the kernel, it's something in the initrd's 'init' executable that parses the root=UUID= or root=LABEL=, searches the available filesystems to find a match, then mounts the matching filesystem and does a chroot to it (or something like that). If you don't have an initrd, then the kernel itself has to handle the root= and that code only knows about device names and partition UUIDs. It doesn't know anything about filesystems (which doesn't make much sense, since the next step is to mount the specified partition, and it obviously knows about filesystems at that point). At least that's what I've read...
[gentoo-user] Re: How to move ext4 partition
On 2023-09-21, Victor Ivanov wrote: > On Wed, 20 Sept 2023 at 23:58, Grant Edwards > wrote: > >>> Just make sure you update /etc/fstab and bootloader config file >>> with the new filesystem UUID or partition indices. >> >> I always forget one or the other until after I try to boot the >> first time. That's why I keep systemrescuecd and Gentoo minimal >> install USB drives on hand. > > Me too, even just recently when I migrated my OS to another build I > decided to do a few partition touch ups and fell once more into this > trap. I updated fstab but not the bootloader. Luckily, Gentoo > minimal install image is so tiny a bootable medium can literally be > created in minutes. The tar backup restore worked just fine (and didn't take long, even though both drives were connected via USB). I've since fixed a second machine by adding a bios-boot partition. I should have started using them when I switched from MBR to GPT, but I think I got bios-boot partitions confused with UEFI boot partitions. :/ I'm also working on switching to using either labels or uuids in fstab and grub configs so that changes in partition numbers don't cause problems. Of course I've discovered for the Nth time in the past 10-15 years, that for the root= command line argument, the kernel doesn't grok LABEL or UUID values -- it only understands device names and PARTUUID. -- Grant
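The end state being aimed for looks something like this (the LABEL/UUID/PARTUUID values are placeholders):

```
# /etc/fstab -- mount(8) understands LABEL= and UUID=
LABEL=ROOT   /      ext4  noatime  0 1
UUID=xxxx-... /home  ext4  noatime  0 2

# grub.cfg kernel line -- with no initrd, root= must be a device name or
# PARTUUID=; LABEL=/UUID= are not parsed by the kernel itself
linux /boot/vmlinuz-5.15.135-gentoo root=PARTUUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
```

Running "blkid -o export" against each partition prints all four identifiers, which makes filling in the real values a copy-and-paste job.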
[gentoo-user] Re: How to move ext4 partition
On 2023-09-20, Frank Steinmetzger wrote: > Am Wed, Sep 20, 2023 at 10:57:00PM +0100 schrieb Victor Ivanov: > >> On Wed, 20 Sept 2023 at 22:29, Grant Edwards >> wrote: >> > >> > That depends on how long it takes me to decide on tar vs. rsync and >> > what the appropriate options are. >> >> I've done this a number of times for various reasons over the last 1-2 >> years, most recently a few months ago due to hard drive swap, and I >> find tar works just fine: >> >> $ tar -cpf /path/to/backup.tar --xattrs --xattrs-include='*.*' -C / . > > Does that stop at file system boundaries (because you tar up '/')? I think > it must be, otherwise you wouldn’t use it that way. > But when copying a root file system, out of habit I first bind-mount it in a > subdirectory and tar/rsync from there instead. This will also make files > visible which might be hidden under an active mount. The partition/fs being backed up isn't live (it's mounted, but it's not the root partition of the host doing the backup), so nothing is mounted within it and there aren't any /proc or /sys entries in it. So, in this case there's no need to worry about crossing filesystem boundaries. -- Grant
[gentoo-user] Re: How to move ext4 partition
On 2023-09-20, Victor Ivanov wrote: > On Wed, 20 Sept 2023 at 22:29, Grant Edwards > wrote: >> >> That depends on how long it takes me to decide on tar vs. rsync and >> what the appropriate options are. > > I've done this a number of times for various reasons over the last 1-2 > years, most recently a few months ago due to hard drive swap, and I > find tar works just fine: > > $ tar -cpf /path/to/backup.tar --xattrs --xattrs-include='*.*' -C / . > > Likewise to extract, but make sure "--xattrs" is present Yep, that's pretty much what I decided on based on the tar command shown at https://wiki.gentoo.org/wiki/Handbook:AMD64/Installation/Stage Interestingly, the Arch Linux Wiki recommends using bsdtar because "GNU tar with --xattrs will not preserve extended attributes". > Provided backup space isn't an issue, I wouldn't bother with > compression. It could be a lot quicker too depending on the size of > your root partition. Both the drive being "fixed" and the backup drive are in a USB3 attached dual slot drive dock, so I'm thinking compression might be worthwhile. > Just make sure you update /etc/fstab and bootloader config file with > the new filesystem UUID or partition indices. I always forget one or the other until after I try to boot the first time. That's why I keep systemrescuecd and Gentoo minimal install USB drives on hand. -- Grant
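A quick way to convince yourself the flags round-trip correctly is to run them against a scratch directory first (paths here are just for illustration):

```shell
# Create a small tree, archive it with xattrs preserved, then restore it
mkdir -p /tmp/tar-demo/src /tmp/tar-demo/dst
echo "hello" > /tmp/tar-demo/src/file.txt
tar -cpf /tmp/tar-demo/backup.tar --xattrs --xattrs-include='*.*' -C /tmp/tar-demo/src .
tar -xpf /tmp/tar-demo/backup.tar --xattrs -C /tmp/tar-demo/dst
cat /tmp/tar-demo/dst/file.txt   # prints: hello
```

The same -C trick works for the real restore: point it at the mounted target filesystem instead of /tmp/tar-demo/dst.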
[gentoo-user] Re: How to move ext4 partition
On 2023-09-20, Wol wrote: > Or, assuming the people who wrote gparted have two brain cells to rub > together, I'm pretty sure they use the same technique as memmove. > > "If the regions overlap, make sure you start from whichever end won't > overwrite the source, otherwise start at whichever end you like". > > Barring screw-ups (a very unsafe assumption :-), I'm pretty certain you > don't even need a backup! > > I suspect the man-page even confirms this behaviour. Not that I could see. The only mention of "move" on the man page is in this list of features. With gparted you can accomplish the following tasks: - Create a partition table on a disk device. - Enable and disable partition flags such as boot and hidden. - Perform actions with partitions such as create, delete, resize, move, check, label, copy, and paste. Assuming GParted is smart enough to do overlapping moves, is it smart enough to only copy filesystem data and not copy "empty" sectors? According to various forum posts, it is not: moving a partition copies every sector. [That's certainly the obvious, safe thing to do.] The partition in question is 200GB, but only 7GB is used, so I think backup/restore is the way to go... -- Grant
[gentoo-user] Re: How to move ext4 partition
On 2023-09-20, Neil Bothwick wrote: > On Wed, 20 Sep 2023 20:24:17 - (UTC), Grant Edwards wrote: > >> For example, I have a 500GB partition containing an ext4 filesystem >> starting at sector 2048 (1MiB). I want to move that filesystem so that >> it starts at sector 3*2048 (3MiB). >> >> Can that be done in-place? >> >> Or should I just back up the filesystem to a second drive and start >> from scratch? > > Given that you'd want to backup before such an operation anyway, It's a machine with very limited uses, so I'd probably only back up /etc and /root. Reinstalling probably wouldn't take too much longer than backing up and restoring /. > you may as well then restore from that backup. I'm sure it will be a > lot quicker than GParted's moving all the data around. That depends on how long it takes me to decide on tar vs. rsync and what the appropriate options are. After 40 years using Unix, you'd think I'd know that (or have it written down somewhere). :) That said, I think I will go with the backup, repartition, restore method. It's been many, many years since I used GParted, and I can probably have the whole job done from the command-line before I can figure out how the GParted GUI works.
[gentoo-user] How to move ext4 partition
I've got a Gentoo install using a GPT partition table and Legacy boot using Grub2. There is a single root partition and a single swap partition on the drive. I did not create a bios-boot partition at the start of the disk, so I had to force grub2 to install using block-lists. I'd like to fix that now. This requires that I move the ext4 root partition towards the end of the drive to create 2MB of free space at the start of the drive for a new bios-boot partition. I see that Gnu parted no longer has a move command. However, GParted apparently does. Can GParted move an ext4 filesystem to a destination location that overlaps its starting location? For example, I have a 500GB partition containing an ext4 filesystem starting at sector 2048 (1MiB). I want to move that filesystem so that it starts at sector 3*2048 (3MiB). Can that be done in-place? Or should I just back up the filesystem to a second drive and start from scratch?
[gentoo-user] Re: Password questions, looking for opinions. cryptsetup question too.
On 2023-09-20, Dale wrote: > For websites, I really like Bitwarden. I remember one password and it > can generate passwords for all the websites I use. The passwords it > generates are pretty random. For sites that don't allow symbols, I can > turn that off. The big point, I only remember one password. Thing is, > on one hand I need help remembering all these passwords. On the other > hand, that is a risk itself. I second the recommendation of Bitwarden. I used to use Lastpass but they discontinued their free version, and the entry-level price was just too high. I was so impressed with Bitwarden's support that I did end up subscribing to their lowest-level paid service even though I don't really need any of the extras that gets me. It's also nice to know that I can set up my own Bitwarden server if I want to. If you're using Bitwarden's cloudy storage, don't forget to back up your password database locally too. I always back it up in human readable format and then encrypt it using openssl command-line methods. You don't want to have to depend on either Bitwarden's servers or the Bitwarden app to retrieve your passwords. -- Grant
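The openssl part can be as simple as a passphrase-encrypted copy of the export. A sketch (the filename, contents, and passphrase here are stand-ins for a real vault export):

```shell
# Stand-in for a real Bitwarden export
echo '{"items":[]}' > /tmp/vault.json
# Encrypt with AES-256-CBC, deriving the key with PBKDF2
openssl enc -aes-256-cbc -pbkdf2 -salt -pass pass:demo \
    -in /tmp/vault.json -out /tmp/vault.json.enc
# Decrypt again when needed
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:demo -in /tmp/vault.json.enc
```

In practice you'd type the passphrase interactively rather than passing it with -pass, and delete the plaintext copy afterwards.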
[gentoo-user] Re: Computer case for new build
On 2023-09-20, Dale wrote: > Grant Edwards wrote: On 2023-09-18, Dale wrote: > >> The built-in Intel video on an oldish Intel i5 at the office is >> currently driving 3 displays. The built-in video on the AMD at home is >> driving 2 and, IIRC, could handle 2 more. > > Then maybe I can use the onboard one. At least I know it is an option. > Most of the mobos I've seen, which are older by the way, have only one > port, usually a DB15. I think I got one around here somewhere that has > a HDMI, I think. The old i5 used to have an NVidia 3xx Quadro board installed which had a dual DisplayPort pigtail cable with DisplayPort to DVI adapters to drive two 1600x1200 monitors. I wasn't using the built-in graphics at all because we've all known for decades that built-in graphics were useless, right? Then the pandemic happened, and I brought the NVidia card and one of those monitors home for the duration, leaving the other monitor plugged in to the i5 motherboard's DVI output. Not too long after that, NVidia stopped supporting the Quadro card. I got to a point where I needed to update the kernel for [some reason]. But, the NVidia driver wasn't available for a kernel that recent. The i5 motherboard I had at home at the time had DVI, HDMI, and DB15 connectors on the back. I sort of assumed that the built-in graphics could only mirror the same image onto multiple displays, but once I got the right cables, it drove a 1600x1200 and a 1920x1200 at full resolution with no problems. The one thing the built-in graphics couldn't do is provide two separate X11 displays (instead of one virtual display that's spread out over two monitors). For various reasons I had always run multiple separate X11 desktops on NVidia cards rather than one desktop spread over multiple monitors. But I got used to the single large virtual desktop setup. I've since replaced the home i5 machine with a Ryzen 5 3400G, and it was definitely a step up in video performance.
Then I acquired a couple more monitors so that I had three at the office. That i5 motherboard has DVI, HDMI, mini-DisplayPort and DB15 connectors. With the right adapter cables, I was able to connect two 1600x1200 monitors to DVI and HDMI, plus a 1600x900 monitor to the mini-DP port. It drives all of them at their native resolutions. I don't do any heavy duty gaming or 3D stuff, so I can't vouch for performance in that area. But both the i5 and Ryzen 5 have HW direct-rendering 3D support, and the RC heli/plane flight simulator I do play with seems happy enough (the two year old Ryzen 3400G does maintain noticeably higher frame-rates than the ten year old i5-3570K). Neither one of these processors was top of the class for integrated graphics when they were introduced. I tend to go for lower TDP to keep fan noise down, and that limits GPU performance. -- Grant
[gentoo-user] Re: Computer case for new build
On 2023-09-18, Dale wrote: > Well, for one, I usually upgrade the video card several times before I > upgrade the mobo. When it is built in, not a option. I think I'm on my > third in this rig. I also need multiple outputs, two at least. One for > monitor and one for TV. The built-in Intel video on an oldish Intel i5 at the office is currently driving 3 displays. The built-in video on the AMD at home is driving 2 and, IIRC, could handle 2 more. -- Grant
[gentoo-user] Re: long compiles
On 2023-09-13, Kristian Poul Herkild wrote: > Nothing compares to Chromium (browser) in terms of compilation times. On > my system with 12 core threads it takes about 8 hours to compile - which > is 4 times longer than 10 years ago with 2 core threads ;) About a year ago I finally gave up building Chromium and switched to www-client/google-chrome. It got to the point where it sometimes took longer to build Chromium than it did for the next version to come out. -- Grant
[gentoo-user] Re: Anyone used openmediavault with LVM?
On 2023-09-12, Todd Goodman wrote: > >> I've generally used "sudo bash" for such stuff. > > Or sudo -i Doh! How did I not know that? I've been doing "sudo bash -" for years. All those wasted bits...
[gentoo-user] Re: Anyone used openmediavault with LVM?
On 2023-09-12, Dale wrote: > I currently have Ubuntu installed. [...] So far, my biggest gripe is > sudo this, sudo that. Dang, give me root and be done with it. :/ > I did try, no freaking password for the thing. I gotta google that > tho. There has to be a way. $ sudo bash - It's been a while since I tried it, but you used to be able to set a password for root. IIRC, it was as simple as $ sudo passwd root
[gentoo-user] Re: Serial console stops working as soon as openrc starts
On 2023-09-09, Dale wrote: >> Changing the level in /etc/conf.d/dmesg from 1 to 8 allowed the serial >> console to continue working as I wanted it to. > > Does it say what else changing the log level does? It's a single, global value in the kernel so it has the same effect on all Linux kernel consoles. > If so, can you link to the docs you found? I'm curious. https://github.com/OpenRC/openrc/blob/master/conf.d/dmesg https://man7.org/linux/man-pages/man1/dmesg.1.html https://www.kernel.org/doc/html/next/core-api/printk-basics.html https://linuxconfig.org/introduction-to-the-linux-kernel-log-levels https://www.oreilly.com/library/view/linux-kernel-in/0596100795/re06.html
[gentoo-user] Re: Serial console stops working as soon as openrc starts
On 2023-09-09, Grant Edwards wrote: > I've set up a serial console by adding the following to my kernel > command line: > > console=ttyS0,115200 console=tty1 > > It works fine for the first few seconds as the kernel starts up. All > of the expected messages are sent out ttyS0. > > But, soon after init starts, the serial console stops working. That's because one of the first things openrc runs in /etc/init.d/dmesg, and it changes the kernel logging level to the value defined in /etc/conf.d/dmesg (which defaults to 1). Changing the level in /etc/conf.d/dmesg from 1 to 8 allowed the serial console to continue working as I wanted it to. [I spent an entire day trying to get serial logging to work on Ubuntu with systemd, and got exactly nowhere. After replacing Ubuntu/systemd with Gentoo/openrc it didn't take long to track down the answer in the openrc docs.]
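For anyone searching later, the fix amounts to one line (variable name as in OpenRC's shipped conf.d/dmesg; 8 enables everything up to debug level):

```shell
# /etc/conf.d/dmesg -- kernel console log level
# (1 = only alerts reach the console, 8 = everything, including debug)
dmesg_level="8"
# The runtime equivalent is: dmesg -n 8
```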
[gentoo-user] Serial console stops working as soon as openrc starts
I've set up a serial console by adding the following to my kernel command line: console=ttyS0,115200 console=tty1 It works fine for the first few seconds as the kernel starts up. All of the expected messages are sent out ttyS0. But, soon after init starts, the serial console stops working. The end of the serial console log always looks like this [3.502684] Freeing unused kernel image (initmem) memory: 1516K [3.509155] Write protecting the kernel read-only data: 24576k [3.515861] Freeing unused kernel image (text/rodata gap) memory: 2036K [3.523160] Freeing unused kernel image (rodata/data gap) memory: 1156K [3.568699] x86/mm: Checked W+X mappings: passed, no W+X pages found. [3.575655] x86/mm: Checking user space page tables [3.615655] x86/mm: Checked W+X mappings: passed, no W+X pages found. [3.622604] Run /sbin/init as init process [3.660655] kbd_mode (115) used greatest stack depth: 13096 bytes left [3.667760] loadkeys (116) used greatest stack depth: 13048 bytes left On the tty1 console, the next thing after the "loadkeys" line above is the OpenRC banner, so apparently openrc is messing with my console settings. It's been a few years since I setup a serial console, but after adding the "console=" argument to the kernel args it used to "just work". How do I get openrc to leave the serial console alone? -- Grant
[gentoo-user] Re: sqlite downgraded by update breaks things
On 2023-09-06, Alan McKinnon wrote: > Not really. ebuilds tend to be named the same as the project, so > apache is called apache (project name), not httpd (binary name) > > The user package is named after what the system user will be, and > SVN has run as "svn" since forever. Makes total sense, as long as > you know exactly how the software works and how it's deployed. Exactly. :)
[gentoo-user] Re: sqlite downgraded by update breaks things
On 2023-09-06, Michael wrote: > The message indicates subversion needs reinstalling with the downgraded > sqlite > - potentially @preserved-rebuild ought to catch this, or revdep-rebuild. I used to run revdep-rebuild after every update, but a few years ago I thought I read that was no longer a useful thing to do. I did not try @preserved-rebuild since there was no message from portage indicating it was needed. Isn't there usually a message from portage if that set is non-empty? I don't think it would have done anything, since the library file's version didn't change and subversion was indeed using the newer library. @preserved-rebuild only kicks in if the library file version changes and portage keeps the old version of the file around to keep some apps running until they are re-built to use the newer version of the library file. > You could have a go rebuilding sqlite with +static-libs, but I'm clutching at > straws here. :-/ Emerging 'subversion' did it. When I typed 'emerge svn' and something got merged without any errors I didn't even look to see exactly what -- though after I emerged subversion I did remember that emerging svn didn't take nearly as long as it should have. IMO it's a mistake to have one package called "svn" and another one called "subversion". -- Grant
[gentoo-user] Re: sqlite downgraded by update breaks things
On 2023-09-06, Grant Edwards wrote: > sudo emerge --sync > sudo emerge -auvND world > [...] > > $ svn status > svn: E200029: Couldn't perform atomic initialization > svn: E200030: SQLite compiled for 3.43.0, but running with 3.42.0 > > [...] > Manually re-merging svn didn't fix it. Doh! Emerging "svn" is basically a nop: all it deals with is account stuff. Emerging "subversion" fixed it. Is there a portage mechanism that should have done that? Why is the account stuff "svn" and the package itself "subversion"? -- Grant
[gentoo-user] sqlite downgraded by update breaks things
I just did my usual update sudo emerge --sync sudo emerge -auvND world I noticed that it was downgrading sqlite from 3.43 to 3.42. OK, we'll assume that portage and the devs know what they're doing... Now this happens: $ svn status svn: E200029: Couldn't perform atomic initialization svn: E200030: SQLite compiled for 3.43.0, but running with 3.42.0 Have I done something wrong? Is an ebuild broken? Manually re-merging svn didn't fix it. Now what do I do?
[gentoo-user] Re: Email clients
On 2023-07-31, Kusoneko wrote: > > Jul 31, 2023 13:52:25 Grant Edwards : > >> On 2023-07-31, Kusoneko wrote: >>> >>>> Don't get me wrong, I'm "team plaintext" all day every day but I'm not >>>> going to make my life more difficult on principles. There are hills >>>> worth dying on but this isn't mine. >>> >>> Iirc, you can setup mutt to open html emails either in a web browser >>> or with something like w3m. >> >> Wait -- those are web engines. I thought the argument was that mutt >> didn't need a web engine. If that was the case, then you would have no >> need to set up mutt to use them to display HTML email. > > Why would you want a mail client to also be a web browser when you > already have a web browser to do that job? I don't want a mail client that's also a web browser. I want a mail client that renders HTML. That's only a small part of what a web browser does. Most of what a web browser does these days is provide an environment in which to run JavaScript. > I will never understand the mindset of trying to include web > browsers into everything. Web browsers are massive pieces of > software, including one in everything massively increases the > compile time and resource usage of the software it's added into. That's because they do a lot more than just render HTML. >>> There's no need for a web engine in a mail client when you have a >>> perfectly workable web engine in the browser. >> >> Composing HTML e-mails also requires a web engine. Sure, you can do >> that using emacs, markdown mode, a web browser for previewing, and so >> on. It's a lot of work. > > I don't get the point of composing HTML emails. Let's be honest > here, unless you're writing emails as part of a company with > complicated messes of html signatures or marketing emails, the only > difference between composing a plain text email and a html email for > most people is unnoticeable. I found that not to be the case for the Outlook users to whom I sent e-mails.
I was unable to figure out how to get mutt to generate plaintext e-mails that were rendered properly in Outlook (e.g. using a fixed font, honoring newlines and multiple spaces, etc.). It's also difficult to get plaintext e-mails to display in a reasonable way on both a large screen and a small screen (i.e. phone). I was not happy seeing what my plaintext, 72 column e-mails looked like on a small phone screen. -- Grant
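For reference, the mutt-plus-w3m arrangement mentioned upthread is conventionally wired up through mailcap plus auto_view, along these lines (a sketch of the usual config):

```shell
# ~/.mailcap: dump HTML parts to text with w3m
text/html; w3m -I %{charset} -T text/html; copiousoutput

# ~/.muttrc: view HTML parts inline via the mailcap entry,
# but prefer a text/plain alternative when one exists
auto_view text/html
alternative_order text/plain text/html
```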
[gentoo-user] Re: Email clients
On 2023-07-31, Alexe Stefan wrote: >> >> Normally I would be in the chorus of "why do I need a whole entire web >> engine for an email client" but I'm also in the group of people who >> knows full well what the answer is. >> > > What is the answer? Most of us don't like reading HTML. > Mutt doesn't need a web engine. You must get e-mail from a different sort of sender than I do. -- Grant
[gentoo-user] Re: Kudos on prompt release of gentoo-sources w/ Zenbleed mitigation
On 2023-07-25, Grant Edwards wrote: > Thanks and well done to the Gentoo Kernel Project for promptly pushing > out 5.15.122, 6.1.41, et alia. Those latest kernels add mitigation for > the "Zenbleed" vulnerability found in AMD Ryzen and Epyc processors. FWIW, Zenbleed affects only "Zen2" family parts: https://gadgetversus.com/processor/amd-zen-2-processors-list/ https://en.wikipedia.org/wiki/Zen_2 -- Grant
[gentoo-user] Kudos on prompt release of gentoo-sources w/ Zenbleed mitigation
Thanks and well done to the Gentoo Kernel Project for promptly pushing out 5.15.122, 6.1.41, et alia. Those latest kernels add mitigation for the "Zenbleed" vulnerability found in AMD Ryzen and Epyc processors. https://www.theregister.com/2023/07/24/amd_zenbleed_bug/ https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-20593 https://www.amd.com/en/resources/product-security/bulletin/amd-sb-7008.html I noticed that my Ubuntu server machines got a kernel update this morning also, and I assumed that update also contained the Zenbleed mitigation -- but it did not. Ubuntu apparently has not pushed out kernel updates for that yet. [My Ubuntu machines are unaffected, but I'd be a little worried if they were.]
[gentoo-user] Re: Plasma session saving
On 2023-07-05, Peter Humphrey wrote: > This version of memtest86 ran to completion after going through the whole > 64GB, and stopped with a success message. That's a pretty good sign, but I have seen memory that made it through one complete test pass and failed on subsequent ones. > Over the last...oh, many months, I've noticed an occasional package in a > large > batch failing for no obvious reason, only to succeed on its own. What sort of failure? I've found that inconsistent/random gcc internal errors or gcc segfaults have usually been due to failing RAM. [Though in one case I remember, it was due to a failing SCSI disc controller card -- back when that was a thing.] It might also be due to a failing disk, but there are usually good indications of that in dmesg output and in SMART logs before it starts to affect other things. -- Grant
[gentoo-user] Re: QMPlay2 single instance, want multiple.
On 2023-06-18, Matt Connell wrote: > On Sat, 2023-06-17 at 00:02 -0500, Dale wrote: >> Thanks Matt for pointing me in this direction. As it is, this might >> be a better player for me than QMPlay2 is. This works as good as >> QMPlay2 and it closes at the end. I miss gnome-player tho. Silly >> old thing >> gave me a lot of years of good use. :/ > > Glad I could help, at least as far as naming the package and telling > you to Read The Friendly Manual. > > If it gives you any salve, mpv is based on mplayer/mplayer2 just > like... well almost everything else, including gnome-player / Totem I'm still using plain old mplayer and see no reason to look for anything else.
[gentoo-user] Re: Can't upgrade portage or update/install ebuilds
On 2023-06-12, Wol wrote: > On 09/06/2023 21:16, Grant Edwards wrote: >> On 2023-06-09, Daniel Pielmeier wrote: >> >>> If it is only about gemato then temporary disable the rsync-verify flag >>> which pulls it in. >>> >>> # USE="-rsync-verify" emerge sys-apps/portage >> >> The problem I ran into is that you never know how many issues there >> are standing in the way of upgrading. The one time I decided to muscle >> my way through updating an "obsolete" Gentoo install, [...] >> >> You do learn alot about how portage/emerge works... >> > Learning that is a good idea maybe :-) > > But last time I had a well-out-of-date system, it was a long and > messy process ... > > What I did was, every time portage said "giving up" or "conflict found" > or whatever, I just took a note of as many of the packages I could > remember that portage said it could emerge, and then manually updated > them "emerge --update --one-shot". > > And any conflicts, if I dared, I simply deleted then "emerge -C --one-shot". IIRC, at one point Python was one of those problems, and I stupidly removed Python before realizing what that meant... Hilarity ensued. Removing/skipping as many of the non-essential "big" packages and their dependencies and getting the base system updated is indeed the best way to go.
[gentoo-user] Re: google-chrome can render pages after update
On 2023-06-12, Michael wrote: >> It seems to be a variation on this bug which affects only AMD GPUs: >> >> https://bugs.gentoo.org/907431 >> >> Clearing the GPU driver cache or using the >> --disable-gpu-driver-bug-workarounds option avoids the problem. >> >> In my case, it wasn't a mesa update that triggered the problem. I >> think it was the llvm update (I haven't confirmed that). > > Did you (re)compile anything graphics related using llvm, which > might be used by the Chrome binary? No -- but as I understand it, mesa uses llvm (at runtime) to generate GPU object code. Based on the work-around, it looks like compiled GPU object code is cached by Chrome/Chromium, and updates to mesa and/or llvm can result in attempts to use old, incompatible GPU object code. As pages are rendered, there was a constant stream of "link failure" messages on the console window where Chrome is running. > I don't have chrome/chromium installed here to directly compare > notes and so far qtwebengine appears to work fine after updating > llvm this morning, as does www-client/microsoft-edge. Firefox-bin still worked fine also. It only seemed to affect Chrome/Chromium or its derivatives. -- Grant
[gentoo-user] Re: google-chrome can render pages after update
On 2023-06-12, Grant Edwards wrote: > I did an update this morning which installed the following: > > aleph ~ # fgrep '>>> emerge ' emerge.log > > 1686579407: >>> emerge (1 of 11) dev-util/strace-6.3 to / > 1686579455: >>> emerge (2 of 11) dev-libs/nspr-4.35-r2 to / > 1686579470: >>> emerge (3 of 11) dev-python/fonttools-4.39.4 to / > 1686579500: >>> emerge (4 of 11) dev-python/weasyprint-59.0 to / > 1686579507: >>> emerge (5 of 11) net-print/cups-2.4.4 to / > 1686579541: >>> emerge (6 of 11) sys-devel/llvm-15.0.7-r3 to / > 1686582174: >>> emerge (7 of 11) app-portage/gemato-20.4 to / > 1686582180: >>> emerge (8 of 11) media-libs/gstreamer-1.20.5 to / > 1686582206: >>> emerge (9 of 11) dev-db/unixODBC-2.3.11 to / > 1686582239: >>> emerge (10 of 11) media-libs/gst-plugins-base-1.20.5 to / > 1686582282: >>> emerge (11 of 11) www-client/firefox-bin-114.0.1 to / > > > Now google-chrome-stable Version 114.0.5735.106 (Official Build) > (64-bit) can no longer display pages properly. It looks like chunks > of the application window are randomly scrambled or shown in the wrong > places. AFAICT, the pages are being parsed/processed properly but the > actual rendering of the X11 window is broken. It seems to be a variation on this bug which affects only AMD GPUs: https://bugs.gentoo.org/907431 Clearing the GPU driver cache or using the --disable-gpu-driver-bug-workarounds option avoids the problem. In my case, it wasn't a mesa update that triggered the problem. I think it was the llvm update (I haven't confirmed that). -- Grant
[gentoo-user] google-chrome can render pages after update
I did an update this morning which installed the following: aleph ~ # fgrep '>>> emerge ' emerge.log 1686579407: >>> emerge (1 of 11) dev-util/strace-6.3 to / 1686579455: >>> emerge (2 of 11) dev-libs/nspr-4.35-r2 to / 1686579470: >>> emerge (3 of 11) dev-python/fonttools-4.39.4 to / 1686579500: >>> emerge (4 of 11) dev-python/weasyprint-59.0 to / 1686579507: >>> emerge (5 of 11) net-print/cups-2.4.4 to / 1686579541: >>> emerge (6 of 11) sys-devel/llvm-15.0.7-r3 to / 1686582174: >>> emerge (7 of 11) app-portage/gemato-20.4 to / 1686582180: >>> emerge (8 of 11) media-libs/gstreamer-1.20.5 to / 1686582206: >>> emerge (9 of 11) dev-db/unixODBC-2.3.11 to / 1686582239: >>> emerge (10 of 11) media-libs/gst-plugins-base-1.20.5 to / 1686582282: >>> emerge (11 of 11) www-client/firefox-bin-114.0.1 to / Now google-chrome-stable Version 114.0.5735.106 (Official Build) (64-bit) can no longer display pages properly. It looks like chunks of the application window are randomly scrambled or shown in the wrong places. AFAICT, the pages are being parsed/processed properly but the actual rendering of the X11 window is broken. You can see examples here: https://www.panix.com/~grante/chrome/foo.png https://www.panix.com/~grante/chrome/bar.png None of the other "big" X11 apps seem to be affected (firefox, thunderbird, libre-office, etc. all work fine). The console window where I launch chrome now spews almost continuous errors like those shown below. Has anybody else run into this? I'm going to start backing out the updates above, but thought I'd check to see if this was a known problem. I haven't found anything in bugzilla yet...
Errors: link failed but did not provide an info log [7110:7110:0612/103618.780116:ERROR:shared_context_state.cc(81)] Skia shader compilation error // Vertex SKSL #extension GL_NV_shader_noperspective_interpolation: require uniform float4 sk_RTAdjust;uniform float2 uAtlasSizeInv_S0;in float2 inPosition;in half4 inColor;in ushort2 inTextureCoords;noperspective out float2 vTextureCoords_S0;flat out float vTexIndex_S0;noperspective out half4 vinColor_S0;void main() {// Primitive Processor BitmapText int texIdx = 0;float2 unormTexCoords = float2(inTextureCoords.x, inTextureCoords.y);vTextureCoords_S0 = unormTexCoords * uAtlasSizeInv_S0;vTexIndex_S0 = float(texIdx);vinColor_S0 = inColor;float2 _tmp_1_inPosition = inPosition;sk_Position = inPosition.xy01;} // Fragment SKSL #extension GL_NV_shader_noperspective_interpolation: require const int kFillBW_S1_c0 = 0; const int kInverseFillBW_S1_c0 = 2; const int kInverseFillAA_S1_c0 = 3; uniform float4 urectUniform_S1_c0;uniform sampler2D uTextureSampler_0_S0; noperspective in float2 vTextureCoords_S0;flat in float vTexIndex_S0;noperspective in half4 vinColor_S0;half4 Rect_S1_c0(half4 _input) { half4 _tmp_0_inColor = _input; half coverage; if (int(2) == kFillBW_S1_c0 || int(2) == kInverseFillBW_S1_c0) { coverage = half(all(greaterThan(float4(sk_FragCoord.xy, urectUniform_S1_c0.zw), float4(urectUniform_S1_c0.xy, sk_FragCoord.xy; } else { half4 dists4 = saturate(half4(1.0, 1.0, -1.0, -1.0) * half4(sk_FragCoord.xyxy - urectUniform_S1_c0)); half2 dists2 = (dists4.xy + dists4.zw) - 1.0; coverage = dists2.x * dists2.y; } if (int(2) == kInverseFillBW_S1_c0 || int(2) == kInverseFillAA_S1_c0) { coverage = 1.0 - coverage; } return half4(half4(coverage)); } half4 Blend_S1(half4 _src, half4 _dst) { return blend_modulate(Rect_S1_c0(_src), _src);} void main() {// Stage 0, BitmapText half4 outputColor_S0;outputColor_S0 = vinColor_S0;half4 texColor;{ texColor = sample(uTextureSampler_0_S0, vTextureCoords_S0).; }half4 outputCoverage_S0 = 
texColor;half4 output_S1;output_S1 = Blend_S1(outputCoverage_S0, half4(1));{ // Xfer Processor: Porter Duff sk_FragColor = outputColor_S0 * output_S1;}} // Vertex GLSL #version 300 es #extension GL_NV_shader_noperspective_interpolation : require precision mediump float; precision mediump sampler2D; uniform highp vec4 sk_RTAdjust; uniform highp vec2 uAtlasSizeInv_S0; in highp vec2 inPosition; in mediump vec4 inColor; in mediump uvec2 inTextureCoords; noperspective out highp vec2 vTextureCoords_S0; flat out highp float vTexIndex_S0; noperspective out mediump vec4 vinColor_S0; void main() { highp int texIdx = 0; highp vec2 unormTexCoords = vec2(float(inTextureCoords.x), float(inTextureCoords.y)); vTextureCoords_S0 = unormTexCoords * uAtlasSizeInv_S0; vTexIndex_S0 = float(texIdx); vinColor_S0 = inColor; gl_Position = vec4(inPosition, 0.0, 1.0); gl_Position = vec4(gl_Position.xy * sk_RTAdjust.xz + gl_Position.ww * sk_RTAdjust.yw, 0.0, gl_Position.w); } //
[gentoo-user] Re: Can't upgrade portage or update/install ebuilds
On 2023-06-09, Daniel Pielmeier wrote: > If it is only about gemato then temporary disable the rsync-verify flag > which pulls it in. > > # USE="-rsync-verify" emerge sys-apps/portage The problem I ran into is that you never know how many issues there are standing in the way of upgrading. The one time I decided to muscle my way through updating an "obsolete" Gentoo install, I spent a very long day fixing "one more problem" and trying again. It took many more hours than a scratch install would have taken, but at some point I decided to keep going just to see if I could make it all the way through the process. I did. Then I promised myself never to try that again. You do learn a lot about how portage/emerge works... -- Grant
[gentoo-user] Re: Can't upgrade portage or update/install ebuilds
On 2023-06-09, Nikolay Pulev wrote: > This is my first reach out to you. I have not updated my machine for > a long time How long? > and have now reached a point where I can't install or upgrade > packages. My experience is that if you haven't updated for more than 6-9 months, the easiest/fastest thing to do (usually) is back up /etc, /home, /root and /usr/src/linux/.config and re-install from scratch. If you've got /home in a separate partition, that makes the reinstall particularly easy. -- Grant