Re: [gentoo-user] app-crypt/pinentry - major rework: "/usr/bin/pinentry-gtk-2" missing

2021-01-31 Thread Wynn Wolf Arbor
On 2021-01-31 13:03, Ramon Fischer wrote:
> The USE flag "gtk" was not removed:
> 
>-IUSE="caps emacs gnome-keyring fltk gtk ncurses qt5"
>+IUSE="caps emacs gnome-keyring gtk ncurses qt5"
> 
> Since when is this obsolete and is there any alternative?

I cannot comment directly on the obsolescence of pinentry-gtk2 (most
certainly a decision upstream), but the alternative is pinentry-gnome3.

The gtk flag was not removed because it now configures pinentry-gnome3
instead of -gtk2. You should be able to select that using 'eselect
pinentry', but because of [1] you'll have to enable the gtk flag for
app-crypt/gcr first and then rebuild app-crypt/pinentry.
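In practice the steps might look something like this (a sketch; the
package.use file name is arbitrary, and you may prefer your own layout):

```
# echo "app-crypt/gcr gtk" >> /etc/portage/package.use/pinentry
# emerge --oneshot app-crypt/gcr app-crypt/pinentry
# eselect pinentry set pinentry-gnome3
```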

[1] https://bugs.gentoo.org/show_bug.cgi?id=767424

-- 
Wolf



Re: [gentoo-user] Nvidia GeForce GTX 960

2020-09-13 Thread Wynn Wolf Arbor
Hi Silvio,

I think the problem is that you have told portage to use both nouveau
and nvidia as a driver for your card. If you have both drivers installed
and do not blacklist one of them, the other will not work.
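If you do want to keep both drivers installed for some reason, the usual
way to stop nouveau from grabbing the card is a modprobe blacklist entry
(a sketch; the file name is arbitrary):

```
# /etc/modprobe.d/blacklist-nouveau.conf
blacklist nouveau
options nouveau modeset=0
```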

> # modprobe nvidia
> modprobe: ERROR: could not insert 'nvidia': No such device

If nouveau takes control of the card first, for example, you will not be
able to load the nvidia module anymore, just as it happens to you here.

> # lspci | grep VGA
> 01:00.0 VGA compatible controller: NVIDIA Corporation GM206 [GeForce GTX 960] 
> (rev a1)

You can run 'lspci -k' to see which driver is in use for a PCI device.
This will probably return 'nouveau' for you. You can also check lsmod
for the 'nouveau' module.
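With nouveau bound to the card, the output would look something like
this (trimmed to the relevant lines; the driver line is what matters):

```
# lspci -k -s 01:00.0
01:00.0 VGA compatible controller: NVIDIA Corporation GM206 [GeForce GTX 960]
        Kernel driver in use: nouveau
        Kernel modules: nouveau, nvidia
```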

> # cat /etc/portage/make.conf | grep VIDEO
> VIDEO_CARDS="nouveau nvidia"

I'm assuming you want the proprietary driver, so change this to read the
following and run 'emerge -c' to get rid of the superfluous driver:

VIDEO_CARDS="nvidia"

-- 
Wolf



Re: [gentoo-user] Upgrade to rsync-3.2.0-r1 results in "didn't get server startup line"

2020-07-01 Thread Wynn Wolf Arbor
Hi Steve,

On 2020-06-30 20:35, Steve Freeman wrote:
> I have a local gentoo repo mirror that has been running well for
> years.  It is essentially the same setup as described at
> https://wiki.gentoo.org/wiki/Local_Mirror except that it runs on a
> non-default port.

I sadly cannot reproduce this on my systems with a similar setup. Could
you attach the full rsyncd.conf file? Perhaps there are also custom
settings in /etc/conf.d/rsyncd.

> rsync: didn't get server startup line
> [Receiver] _exit_cleanup(code=5, file=main.c, line=1777): entered
> rsync error: error starting client-server protocol (code 5) at main.c(1777) 
> [Receiver=3.2.0]
> [Receiver] _exit_cleanup(code=5, file=main.c, line=1777): about to call 
> exit(5)

I've had a look in the code, and that particular message is only
triggered in one place:

if (!read_line_old(f_in, line, sizeof line, 0)) {
        rprintf(FERROR, "rsync: didn't get server startup line\n");
        return -1;
}

read_line_old is described thusly:

/* Read a line of up to bufsiz-1 characters into buf.  Strips
 * the (required) trailing newline and all carriage returns.
 * Returns 1 for success; 0 for I/O error or truncation. */
int read_line_old(int fd, char *buf, size_t bufsiz, int eof_ok)

> Running rsync as a non-daemon appears to work fine regardless of
> server/client versions; it's only rsyncd that fails.
>
> With no useful logs or output, I'm finding this impossible to
> diagnose.  Does anyone have any ideas?

I have no concrete ideas, but given that read_line_old seems to fail,
maybe it's helpful checking out actual network traffic with wireshark or
tcpdump. You could compare traffic between the working and the broken
version. A simple capture filter of 'port 5873' should be enough.

Since there really doesn't seem to be any better debug functionality
(there's --debug, but it didn't really add anything for me), perhaps you
could also build rsync with debug symbols and throw gdb at it.

Finally, have you tried accessing the rsync endpoint manually without
invoking 'emerge --sync'? Does the following also raise an error?

rsync --port 5873 10.10.10.10::

If not, perhaps try syncing a single file manually.
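Something like this, with a hypothetical module and file name modelled
on the Local_Mirror wiki setup:

```
rsync -v --port 5873 10.10.10.10::gentoo-portage/metadata/timestamp.chk .
```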

I also see that version 3.2.1 is in the tree now, could be worth a shot
too.

-- 
Wolf



Re: [gentoo-user] Bitwarden, anyone?

2020-06-16 Thread Wynn Wolf Arbor
On 2020-06-16 12:05, Peter Humphrey wrote:
> So I created a ~/.cache/bwtmp directory and passed TMPDIR= to
> bitwarden, but then it threw another error. I'd better take this up
> with BitWarden.

I just tried getting it to work again. If this is anything like on my
system, once the noexec problem is fixed, the app fails here because it
doesn't find libsecret. I'm not sure why this is not bundled in the
AppImage, but emerging app-crypt/libsecret fixes this for me, and I can
run the app without any further issues. I might have other libs it
depends on installed already, so I can't say for sure without looking at
the error message.

Hope that helps.

-- 
Wolf



Re: [gentoo-user] Bitwarden, anyone?

2020-06-15 Thread Wynn Wolf Arbor
On 2020-06-14 23:20, Peter Humphrey wrote:
> On Sunday, 14 June 2020 19:06:36 BST Wynn Wolf Arbor wrote:
> 
> That was a good idea - but it didn't help, so that's not the answer.

If you're still interested in debugging this, did the error message stay
the same? At least the path "/tmp/.org.chromium.Chromium.QkN0cP" should
have changed to indicate the new TMPDIR. It should also have created
files there.

> Yes, I understand that.

Oh, sorry if I misunderstood something then. I had assumed that you
thought Java was needed for the Bitwarden app. There's no indication on
the site that it is (though then again it also doesn't say that it is an
Electron app), so I thought the confusion came from "JavaScript" in the
error message.

Good to hear that the bitwarden-cli app works for you.

-- 
Wolf



Re: [gentoo-user] Bitwarden, anyone?

2020-06-14 Thread Wynn Wolf Arbor
On 2020-06-14 18:45, Peter Humphrey wrote:
> Yes; this is what I get:
> 
> $ ./Bitwarden*/opt/Bitwarden/bitwarden
> A JavaScript error occurred in the main process
> Uncaught Exception:
> Error: /tmp/.org.chromium.Chromium.QkN0cP: failed to map segment from shared 
> object
> --->8

From what I remember this is caused by having /tmp mounted with noexec.
Sadly the app tries to execute a process directly from within the
temporary directory and fails. Try something like this to confirm:

mkdir $HOME/.cache/bitwarden-tmp
TMPDIR=$HOME/.cache/bitwarden-tmp ./Bitwarden*/opt/Bitwarden/bitwarden

To see whether you've mounted /tmp with noexec: mount | grep /tmp

Should give something like this:

tmpfs on /tmp type tmpfs (rw,nosuid,nodev,noexec,relatime)
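That check can also be scripted; here's a small sketch parsing
/proc/mounts directly (the helper name is made up; findmnt from
util-linux can do the same job):

```shell
# Report whether a mount point carries the noexec option.
# $1 = mount point, $2 = mounts file (defaults to /proc/mounts).
has_noexec() {
    awk -v mp="$1" '$2 == mp { print $4 }' "${2:-/proc/mounts}" |
        tr ',' '\n' | grep -qx 'noexec'
}

if has_noexec /tmp; then
    echo "/tmp is mounted noexec"
else
    echo "/tmp allows exec (or is not a separate mount)"
fi
```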

Note that having /tmp mounted with noexec is usually a good idea. I used
to create a wrapper script that launched these kinds of apps with a
special TMPDIR.

> I haven't played with java before, so I'm trying to follow the gentoo
> wiki. My first question is whether I need a jdk as well as a jre. The
> wiki talks blithely about virtual machines, and I'm left to guess
> whether the jre is the jvm, as it seems. I'm currently installing
> openjre and openjdk; icedtea-bin is also installed.

The JRE is the Runtime Environment. It includes all things necessary to
run a compiled Java program (so it does indeed include the JVM). The JDK
is the Development Kit - it includes the JRE, but also ships with the
javac compiler and a few other tools. So, if you intend to just run an
already compiled Java program (usually in the form of a .jar), you just
need the JRE. If you need to compile Java programs, you should instead
install the JDK (and can remove the JRE fully).

Regardless of that, JavaScript is not Java [1]. There's no need for the
JRE if you want to run JavaScript code. The Bitwarden desktop app does
not need a JRE or JDK.

[1] https://en.wikipedia.org/wiki/JavaScript#Java

-- 
Wolf



Re: [gentoo-user] Bitwarden, anyone?

2020-06-14 Thread Wynn Wolf Arbor
Hi Peter,

On 2020-06-14 12:43, Peter Humphrey wrote:
> Afternoon all,
> 
> Has anyone some experience of bitwarden on Gentoo? It doesn't run for
> me, and I suspect a java problem. I have icedtea here.
> 
> I'm talking about the installed version, not the firefox extension,
> which seems to work.

Which installed version are you talking about? I don't see the Bitwarden
desktop or command line app in the tree - I guess you downloaded one
from their site?

I used to run the desktop app myself, and am pretty sure it does not
require a Java runtime. If you use the AppImage version, there might be
certain libs missing (tested it out just now, and it fails to start
because it cannot find libsecret).

Have you tried running it from the command line to see what it says?

-- 
Wolf



Re: [gentoo-user] Nvidia driver plus kernel info questions

2020-05-02 Thread Wynn Wolf Arbor

Hi,

> I'm currently using the 390 slot since when I installed that card,
> that is what it showed.  I'm almost 100% certain I checked this when
> installing this card. My question is, is it normal for nvidia to
> change the series of drivers for cards like this?


A driver series is not necessarily bound to a specific card, so it is 
normal to see newer driver series supporting older devices. I'd assume 
that 390 was the current stable series back when you checked it. The 
current series is 440.


For the current series, you also have drivers separated into a 
"long-lived" and a "short-lived" branch - as far as I know that only 
marks how long a driver receives official support, but don't quote me on 
that.


The most recent driver is 440.82 in the long-lived branch. You can find 
a list of all Unix drivers at [1] without having to fill out any forms.


Then there's also the legacy driver series specifically for devices that 
the current driver no longer supports. A GTX 480 card, for example, 
would need the 390 series. There's a list at [2] and more info about 
support timeframes at [3].


> According to that the 440 series supports the 5.6 series of kernel. It
> doesn't indicate a specific version tho. Does that mean I can go to
> the very latest version or do I need to look elsewhere to see what is
> supported?


In this case you can go with the very latest release, yes. Any future 
440.* driver will work for you.


Once a new series is released (I don't know how frequent that is) you 
might want to check whether your card still supports that, however.


Hope this helped clear up the confusion.

[1] https://www.nvidia.com/en-us/drivers/unix/
[2] https://www.nvidia.com/en-us/drivers/unix/legacy-gpu/
[3] https://nvidia.custhelp.com/app/answers/detail/a_id/3142

--
Wolf



Re: [gentoo-user] Trouble with backup harddisks

2020-05-01 Thread Wynn Wolf Arbor

On 2020-05-01 09:18, tu...@posteo.de wrote:

> A very *#BIG THANK YOU#* for all the great help, the research and
> the solution. I myself am back in "normal mode" :)


Glad it helped!


> One thing remains...
> I want to prevent this kind of hassle in the future... ;)


The most important thing to keep in mind here is to _always_ partition 
and format a drive yourself. That way, you know what is on there and how 
it was partitioned.



> Perhaps it is a good idea to re-partitions the disk to get rid of
> any bogus bit, format the partition and copy back the data then.


Whilst you can definitely just change the partition type from EE to 83, 
now that you can move the data around safely, I'd probably repartition 
the drives fully, yeah.



> What is the most reasonable setup here:
> GPT without any hybrid magic and ext4 because it is so common?
> Any other, possible more robust configuration, which is also
> common with rescue tools and -distributions?


For external backup disks like these I'd go with GPT and ext4. Seems to 
be the most robust configuration for that use case.


If by "hybrid magic" you mean the Protective MBR, that is not something 
you can simply disable. The standard includes the Protective MBR, and 
tools adhere to that.


Bottom line: Repartition the disks with GPT, then create the necessary
file systems. Don't turn on any special knobs; the tool's defaults are
all decent.


--
Wolf



Re: [gentoo-user] Trouble with backup harddisks

2020-04-30 Thread Wynn Wolf Arbor

On 2020-04-30 22:21, Andrea Conti wrote:
> It won't, as long as it recognizes it as a protective MBR. Which is the
> right thing to do, as a disk with a protective MBR and no valid GPT is
> inherently broken.


True. It was more my intention to depict what the system "should" do in 
order to access the file system.


> ... or just bypass the partition table altogether. The filesystem
> starts at sector 1, i.e. 1*512B, so:
>
> mount -o ro,offset=512 /dev/sdb /mnt/xxx


Interesting, thanks. I was initially considering something like this 
myself, but after a cursory check of the manual, I was under the 
assumption that 'offset' was only valid with loop devices and dismissed 
that solution.


Turns out if you mount a drive like this, the kernel uses a loop device 
in the background and you can use the 'offset' option with block devices 
as well. I feel the documentation could be improved here.


--
Wolf



Re: [gentoo-user] Trouble with backup harddisks

2020-04-30 Thread Wynn Wolf Arbor

Hi Meino,

On 2020-04-30 21:46, tu...@posteo.de wrote:

> I had booted into my old system, attached the disks and both show the
> same behaviour: Only the device itself (/dev/sdb) was recognized.


Now that is very curious. Just to make sure, the old system definitely 
does not understand GPT? CONFIG_EFI_PARTITION should be unset.



> 'file' shows the following output:
>
> file sdb-data
> sdb-data: Linux rev 1.0 ext4 filesystem data,
> UUID=2f063705-0d3a-4790-9203-1b4edab7788c (extents) (64bit) (large files)
> (huge files)
>
> Looks better than I have thought...or?


This does indeed look promising (and is what I expected in the best 
case). Now, of course, the problem is figuring out how to get to that 
data.



> I will take a deeper look tommorrow...I am too tired to
> "fix partition tables manually" this evening!
>
> Read you tommorrow! :)


Hope the rest of your evening will be more relaxing!

--
Wolf



Re: [gentoo-user] Trouble with backup harddisks

2020-04-30 Thread Wynn Wolf Arbor
All of the following assumes that the disk was originally partitioned as
GPT, but after that exclusively accessed as an MBR disk.



> GPT fdisk (gdisk) version 1.0.5
>
> Caution: invalid main GPT header, but valid backup; regenerating main header
> from backup!


This makes sense since the GPT backup at the very end of the disk would 
most likely still be intact. gdisk identifies it correctly, but assumes 
(wrongly) that the data on the disk is governed by the GPT layout.


Since the disk was only ever accessed through an operating system that 
knew solely about MBR, the GPT data meant nothing to it. It happily 
wrote data past the MBR headers. Because the protective MBR is 
positioned before GPT information, the primary GPT header was destroyed 
and most likely overwritten with the file system. See also [1], the 
actual file system data probably begins somewhere past LBA 0.



> Caution! After loading partitions, the CRC doesn't check out!
> Warning: Invalid CRC on main header data; loaded backup partition table.
> Warning! Main and backup partition tables differ! Use the 'c' and 'e' options
> on the recovery & transformation menu to examine the two tables.


This is because the backup GPT written when first partitioned no longer
matches the data present at the very beginning of the disk.


If the initial assumption is correct, GPT *must not* be restored. Your 
modern PC sees the GPT partition type and assumes the existence of a 
GPT. It should, however, access the MBR layout and interpret the 
partition marked with the GPT ID as a regular partition.


Now, how to fix this?

Like Andrea already said earlier: 

> Since the disk is only 1TB, there is no reason to use GPT at all, so
> your best bet is to use fdisk to make that a standard MBR by changing
> the partition type from 'ee' to '83'.


This would *not* repartition or reformat any data, it would simply tell 
your modern operating system to access the protective partition as a 
regular one.


It would, however, require writing the new type to disk. What you could 
do to be more safe here is to take a backup of the first 512 bytes with 
`dd', then change the partition ID with `fdisk', and try mounting it.


If it works, great. If not, you can restore the first 512 bytes of the 
disk with the backup.
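Sketched below on a scratch image file rather than the real drive (on
the actual disk you would substitute /dev/sdb for the image, as root):

```shell
# Demonstrated on a scratch image; on the real drive, disk=/dev/sdb.
disk=sdb-demo.img

# Stand-in for the drive: 4 sectors of random data.
dd if=/dev/urandom of="$disk" bs=512 count=4 2>/dev/null

# 1) Save sector 0 (the MBR) before changing anything.
dd if="$disk" of=mbr-backup.bin bs=512 count=1 2>/dev/null

# 2) ... run fdisk here, change the partition type ee -> 83, try mounting ...

# 3) If that went wrong, put the original sector back.
#    conv=notrunc keeps the rest of the disk untouched.
dd if=mbr-backup.bin of="$disk" bs=512 count=1 conv=notrunc 2>/dev/null

cmp -n 512 mbr-backup.bin "$disk" && echo "sector 0 intact"
```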



> "fix manually" scares me...especially because I have no place for
> 1TB of an image file to with which I can experiment ...
>
> Any ideas which could ease my burden and to un-scare my
> "need to fix it manually" ??? ;) ;)


There are a few alternatives:

1) Boot an older system that only understands MBR, and mount the disk 
there. This was suggested earlier but was dismissed because we assumed 
the sector size had something to do with it. I do not think this is the 
case anymore - the old system should be able to read it.


2) Boot a VM with a kernel that only understands MBR, pass USB through 
to the virtual machine, mount the disk there.


3) Try confirming that there exists file system data past the MBR header.

Maybe something like this:

# dd if=/dev/sdb of=sdb-data bs=512 skip=1 count=16384
$ file sdb-data

As established, the block size is 512 bytes. We skip the first 512 bytes 
since that is the protective MBR. sdb-data should then contain the first 
8MiB worth of actual file system data. The `file' utility can tell you 
what kind of data it is.


[1] 
https://en.wikipedia.org/wiki/GUID_Partition_Table#/media/File:GUID_Partition_Table_Scheme.svg

--
Wolf



Re: [gentoo-user] Trouble with backup harddisks

2020-04-30 Thread Wynn Wolf Arbor

Hi Meino,

Thanks very much for the info. At this point I'm convinced you're 
running into the problem Andrea described in another reply in this 
thread - best to follow up there :)


--
Wolf



Re: [gentoo-user] Trouble with backup harddisks

2020-04-30 Thread Wynn Wolf Arbor

Hi,

On 2020-04-30 13:17, Wols Lists wrote:

> All I can suggest is to check the kernel and see if it's an option that
> has been disabled (512-byte sectors, that is).


As far as I know the kernel still uses 512 bytes internally [1], and I 
do not recall having seen an option that enables or disables support for 
512/4K sectors.


That said, the problem may well be stemming from a sector size 
discrepancy, but as I understand it, it would have to do with how and 
when the partition table was created. That is, like described in [2], 
some USB enclosures seem to be a bit overzealous with obscure features, 
and might take eight disk sectors and bundle them together into one 4K 
sector.


If the disk was partitioned in the exact same enclosure, and is read 
from the exact same enclosure right now, there shouldn't be any 
problems. Is this the case, Meino?


Also, when did you last access the drives successfully, and with which 
system?


On 2020-04-30 11:32, Meino wrote:

> Disk /dev/sdb: 931.49 GiB, 1000170586112 bytes, 1953458176 sectors
> Disk model: Elements 25A2
> Units: sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disklabel type: dos
> Disk identifier: 0x16f2a91f
>
> Device Boot Start        End    Sectors   Size Id Type
> /dev/sdb1   1 1953458175 1953458175 931.5G ee GPT


Interestingly, this reads *exactly* like the Protective MBR [3] that GPT 
has. That is, the disklabel type is DOS whilst the partition ID is EE.  
There's a single partition that spans the entire drive (and it's also 
seemingly not aligned properly - usually you see Start at 2048).


As a comparison, here's the output from fdisk for the Protective MBR 
from one of my GPT drives:



Disklabel type: dos
Disk identifier: 0x

Device Boot Start        End    Sectors Size Id Type
/dev/sdc1   1 4294967295 4294967295   2T ee GPT


I'd assume that the missing disk identifier here is coming from 
different tools writing the protective MBR differently.


With that said, are you absolutely certain that you did not partition 
this drive with GPT instead of MBR? Did you do the partitioning in 
something like fdisk (which asks you specifically what you want), or 
some other application? Did you maybe format this drive on a Windows 
system?


I'm not one to discount entirely strange things happening, but I have 
never before seen a proper MBR laid out like a protective MBR. Indeed it 
would be quite impossible to have systems access data through such a 
table, since the partitions are hidden within that one huge contiguous 
block.


Ordinarily I'd point to fdisk not reading the partition table properly, 
but it seems that although your kernel has support for GPT, it doesn't 
seem to see the partitions either (assuming a proper GPT exists at all).

Do you have some other GPT drives you can access successfully?

I'd say that this requires some more forensic work. Perhaps extracting 
the first few megabytes of the disk and seeing whether there's a proper 
GPT or not. This would of course require manual work.


A few more things to try:

To see what the kernel uses for a particular disk, you can run the 
following: cat /sys/block/sdb/queue/{physical,logical}_block_size


fdisk takes a sector size with -b, --sector-size (should be 
non-destructive as long as you don't write anything, but I am not sure). 
Also, fdisk has a compatibility mode for dos with -c=dos. Might be worth
a shot.


fdisk should support GPT starting with util-linux 2.23, but you can also 
try gptfdisk (it's in the tree).


Hope this helps.

[1] https://github.com/torvalds/linux/blob/master/include/linux/types.h#L120
[2] 
https://superuser.com/questions/679725/how-to-correct-512-byte-sector-mbr-on-a-4096-byte-sector-disk/679800#679800
[3] https://en.wikipedia.org/wiki/GUID_Partition_Table#PROTECTIVE-MBR

--
Wolf