Re: Current best practices for system configuration management?

2024-04-24 Thread Linux-Fan

Mike Castle writes:


Hah!

https://lists.debian.org/debian-user/2013/08/msg00042.html


Yes, that was me > 10 years ago. Transitioning from these scripts to ant
came with a few improvements:


* I switched all package building to `debuild`, replacing the more
  low-level tools I had previously used for “raw” packages.

* I was able to establish some sort of “build upon new commit” poor man's
  local CI -- I experimented with some fully-fledged systems like Concourse CI
  and GitLab, too, but they were all too much for my local-first
  activities.

* When resources were required from external locations (e.g. source code
  downloads), that set of scripts did not provide any automation in this
  regard. One of my ongoing goals is to separate “my settings” from
  re-downloadable files. This was only partially achieved back then, and
  the ant-approach takes it much further :)

Still, in many ways the ant-based approach does not do much more than the
scripts linked in the message above. If you are about to create your own
scripts (which is probably a better idea than trying to use my scripts
anyways :) ) the shellscripts may be easier to understand.


HTH
Linux-Fan

öö




Re: Current best practices for system configuration management?

2024-04-19 Thread Linux-Fan

Mike Castle writes:


For a while now, I've been using `equivs-build` for maintaining a
hierarchy of metapackages to control what is installed on my various
machines.  Generally, I can do `apt install mrc-$(hostname -s)` and
I'm golden.

Now, I would like to expand that into also setting up various config
files that I currently do manually, for example, the `/etc/apt/*`
configs I need to make the above work.  For a single set of files,


[...]


My first thought was to simply add a `Files:` section to *.control
files I use for my metapackages.  After all, for configs going into
*.d directories, they are usually easy to just drop in and remove, no
editing in place required.  But, that is when I discovered that all
files under `/etc` are treated specially.


[...]

Hello,

I can confirm from experience that Ansible can indeed scale down to as
little as the single local machine it runs on. It has a learning curve, and
at least to me it always felt a little clumsy to learn a YAML-based
scripting language for this purpose, but it's a solid choice.


Continuing the package-based approach is what I do because once some wrapper  
around the `debuild` commands was established, it became acceptably easy to  
use. I even maintain my “dotfiles” (not under $HOME but under /etc, but to  
a similar effect) this way: https://masysma.net/32/conf-cli.xhtml.


With `config-package-dev` there are some tricks to even allow changing  
(config) files supplied by other packages.


The disadvantage of the package-based approach is that it is heavily
distribution-specific, and if you mess anything up, a core component of the
OS (package management) can break. I luckily never broke it to the extent
that recovery was impossible, but in the beginning I ran a dedicated test
VM to validate all package changes prior to installing them on my main
system.


I have also heard good things about Nix and if I had to start again from  
scratch today, I'd probably invest time into learning that technology. Right  
now I am sufficiently satisfied with the package-based approach to not look  
into it yet.


HTH
Linux-Fan

öö




Re: Fast Random Data Generation (Was: Re: Unidentified subject!)

2024-02-13 Thread Linux-Fan

David Christensen writes:


On 2/12/24 08:30, Linux-Fan wrote:

David Christensen writes:


On 2/11/24 02:26, Linux-Fan wrote:

I wrote a program to automatically generate random bytes in multiple threads:
https://masysma.net/32/big4.xhtml


What algorithm did you implement?


I copied the algorithm from here:
https://www.javamex.com/tutorials/random_numbers/numerical_recipes.shtml


That Java code uses locks, which implies it uses global state and cannot be  
run multi-threaded (?).  (E.g. one process with one JVM.)


Indeed, the example code uses locks, which is bad from a performance point
of view. That is why _my_ implementation works without this fine-grained
locking and instead ensures that each thread uses its own instance so as to
avoid the lock. IOW: I copied the algorithm but of course adjusted the code
to my use case.


My version basically runs as follows:

* Create one queue of ByteBuffers
* Create multiple threads
  * Each thread runs its own RNG instance
  * Upon finishing the creation of a buffer, it enqueues
    the resulting ByteBuffer into the queue (this is the only
    part where multiple threads access concurrently)
* The main thread dequeues from the queue and writes the
  buffers to the output file

Is it possible to obtain parallel operation on an SMP machine with multiple  
virtual processors?  (Other than multiple OS processes with one PRNG on one  
JVM each?)


Even the locked random could be instantiated multiple times (each instance
gets its own lock), and this could still be faster than running just one of
it. However, since the computation itself is fast, I suppose the
performance hit from managing the locks could be significant. Multiple OS
processes would also work, but that is pretty uncommon in Java land AFAIR.


I found it during the development of another application where I needed a  
lot of random data for simulation purposes :)


My implementation code is here:
https://github.com/m7a/bo-big/blob/master/latest/Big4.java


See the end of that file to compare with the “Numerical Recipes” RNG linked  
further above to observe the difference wrt. locking :)



If I were to do it again today, I'd probably switch to one of these PRNGs:

* https://burtleburtle.net/bob/rand/smallprng.html
* https://www.pcg-random.org/


Hard core.  I'll let the experts figure it out; and then I will use their  
libraries and programs.


IIRC one of the findings of PCG was that the default RNGs of many
programming languages and environments are surprisingly bad. I only arrived
at using a non-default implementation after facing some issues with the
Java-integrated ThreadLocalRandom “back then” :)


It may indeed be worth pointing out (as Jeffrey Walton already mentioned in
another subthread) that these RNGs discussed here are _not_ cryptographic
RNGs. I think for disk testing purposes it is OK to use fast
non-cryptographic RNGs, but other applications may have higher demands on
their RNGs.


HTH
Linux-Fan

öö

[...]




Re: Fast Random Data Generation (Was: Re: Unidentified subject!)

2024-02-12 Thread Linux-Fan

David Christensen writes:


On 2/11/24 02:26, Linux-Fan wrote:

I wrote a program to automatically generate random bytes in multiple threads:
https://masysma.net/32/big4.xhtml

Before knowing about `fio` this was my way to benchmark SSDs :)

Example:

| $ big4 -b /dev/null 100 GiB
| Ma_Sys.ma Big 4.0.2, Copyright (c) 2014, 2019, 2020 Ma_Sys.ma.
| For further info send an e-mail to ma_sys...@web.de.


[...]


| 99.97% +8426 MiB 7813 MiB/s 102368/102400 MiB
| Wrote 102400 MiB in 13 s @ 7812.023 MiB/s



What algorithm did you implement?


I copied the algorithm from here:
https://www.javamex.com/tutorials/random_numbers/numerical_recipes.shtml

I found it during the development of another application where I needed a  
lot of random data for simulation purposes :)


My implementation code is here:
https://github.com/m7a/bo-big/blob/master/latest/Big4.java

If I were to do it again today, I'd probably switch to one of these PRNGs:

* https://burtleburtle.net/bob/rand/smallprng.html
* https://www.pcg-random.org/


Secure Random can be obtained from OpenSSL:

| $ time for i in `seq 1 100`; do openssl rand -out /dev/null \
|     $((1024 * 1024 * 1024)); done
|
| real    0m49.288s
| user    0m44.710s
| sys     0m4.579s

Effectively 2078 MiB/s (quite OK for single-threaded operation). It is not  
designed to generate large amounts of random data as the size is limited by  
integer range...



Thank you for posting the openssl(1) incantation.


You're welcome.

[...]

HTH
Linux-Fan

öö




Fast Random Data Generation (Was: Re: Unidentified subject!)

2024-02-11 Thread Linux-Fan

David Christensen writes:


On 2/11/24 00:11, Thomas Schmitt wrote:


[...]


Increase block size:

2024-02-11 01:18:51 dpchrist@laalaa ~
$ dd if=/dev/urandom of=/dev/null bs=1M count=1K
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.62874 s, 296 MB/s


Here (Intel Xeon W-2295)

| $ dd if=/dev/urandom of=/dev/null bs=1M count=1K
| 1024+0 records in
| 1024+0 records out
| 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 2.15018 s, 499 MB/s


Concurrency:

threads throughput
1   296 MB/s
2   285+286=571 MB/s
3   271+264+266=801 MB/s
4   249+250+241+262=1,002 MB/s
5   225+214+210+224+225=1,098 MB/s
6   223+199+199+204+213+205=1,243 MB/s
7   191+209+210+204+213+201+197=1,425 MB/s
8   205+198+180+195+205+184+184+189=1,540 MB/s
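Such a measurement can be scripted, e.g. (a minimal sketch assuming GNU dd;
each instance prints its own throughput to stderr, and the figures are then
summed by hand):

~~~
#!/bin/sh
# Start N parallel readers of /dev/urandom and wait for all of them;
# each dd reports its own MB/s figure on completion.
N=${1:-4}
i=0
while [ "$i" -lt "$N" ]; do
	dd if=/dev/urandom of=/dev/null bs=1M count=1K &
	i=$((i + 1))
done
wait
~~~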


I wrote a program to automatically generate random bytes in multiple threads:
https://masysma.net/32/big4.xhtml

Before knowing about `fio` this was my way to benchmark SSDs :)

Example:

| $ big4 -b /dev/null 100 GiB
| Ma_Sys.ma Big 4.0.2, Copyright (c) 2014, 2019, 2020 Ma_Sys.ma.
| For further info send an e-mail to ma_sys...@web.de.
| 
| 0.00% +0 MiB 0 MiB/s 0/102400 MiB

| 3.48% +3562 MiB 3255 MiB/s 3562/102400 MiB
| 11.06% +7764 MiB 5407 MiB/s 11329/102400 MiB
| 19.31% +8436 MiB 6387 MiB/s 19768/102400 MiB
| 27.71% +8605 MiB 6928 MiB/s 28378/102400 MiB
| 35.16% +7616 MiB 7062 MiB/s 35999/102400 MiB
| 42.58% +7595 MiB 7150 MiB/s 43598/102400 MiB
| 50.12% +7720 MiB 7230 MiB/s 51321/102400 MiB
| 58.57% +8648 MiB 7405 MiB/s 59975/102400 MiB
| 66.96% +8588 MiB 7535 MiB/s 68569/102400 MiB
| 75.11% +8343 MiB 7615 MiB/s 76916/102400 MiB
| 83.38% +8463 MiB 7691 MiB/s 85383/102400 MiB
| 91.74% +8551 MiB 7762 MiB/s 93937/102400 MiB
| 99.97% +8426 MiB 7813 MiB/s 102368/102400 MiB
| 
| Wrote 102400 MiB in 13 s @ 7812.023 MiB/s


[...]

Secure Random can be obtained from OpenSSL:

| $ time for i in `seq 1 100`; do openssl rand -out /dev/null \
|     $((1024 * 1024 * 1024)); done
|
| real  0m49.288s
| user  0m44.710s
| sys   0m4.579s

Effectively 2078 MiB/s (quite OK for single-threaded operation). It is not  
designed to generate large amounts of random data as the size is limited by  
integer range...


HTH
Linux-Fan

öö




Re: testing new sdm drive

2024-02-08 Thread Linux-Fan

Alexander V. Makartsev writes:


On 08.02.2024 12:14, gene heskett wrote:

gene@coyote:/etc$ sudo smartctl --all -dscsi /dev/sdm
smartctl 7.3 2022-02-28 r5338 [x86_64-linux-6.1.0-17-rt-amd64] (local build)
Copyright (C) 2002-22, Bruce Allen, Christian Franke, www.smartmontools.org


=== START OF INFORMATION SECTION ===
Vendor:
Product:  SSD 3.0


[...]

Looks like a scam. Probably a reprogrammed controller to falsely report 2TB  
of space to the system.


I support this view :)


This is how I would test it.
First create a new GPT partition table and a new 2TB partition:
    $ sudo gdisk /dev/sdX

/!\ Make double sure you've selected the right device by using "lsblk" and
"blkid" utilities. /!\
/!\ It could change from 'sdm' to another name after reboot. /!\


At gdisk prompt press "o" to create a new GPT table, next press "n" to create  
a new partition, accept default values by pressing "enter".
To verify setup press "p", to accept configuration and write it to device  
press "w".


Next format partition to ext4 filesystem:
    $ sudo mkfs.ext4 -m 0 -e remount-ro /dev/sdX1

Next mount the filesystem:
    $ sudo mkdir /mnt/disktest
    $ sudo mount /dev/sdX1 /mnt/disktest

Next create reference 1GB file filled with dummy data:
    $ cd /mnt/disktest



From here on I'd suggest trying the tools from package `f3`.


After installing it, find the documentation under
/usr/share/doc/f3/README.rst.gz. Basic usage requires only two commands:

f3write .

Fills the drive until it is full (No Space Left on Device). Unmount and
re-mount it to ensure that data is actually written to the disk. Then
switch back to /mnt/disktest and read it back using


f3read .

It should output a tabular summary about what could be read successfully and  
what couldn't.
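Put together, the whole test might look like this (a sketch following the
quoted instructions above; /dev/sdX1 and the mount point are placeholders):

~~~
cd /mnt/disktest
f3write .                      # fill the drive with test files
cd / && umount /mnt/disktest   # force data out of the page cache
mount /dev/sdX1 /mnt/disktest
cd /mnt/disktest
f3read .                       # verify; corrupted sectors hint at fake capacity
~~~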


As to whether this affects the stability of the running system: If the
drive is fake (which I think is a real possibility) then it may as well
cause hiccups in the system. If the work you are doing on the machine is
mission-critical, don't run tests with suspect hardware on it...


HTH
Linux-Fan

öö

[...]




Re: update-ca-certificates

2023-12-14 Thread Linux-Fan

Pocket writes:

On Dec 14, 2023, at 2:23 PM, Linux-Fan  wrote:
> Pocket writes:


[...]

> > Should the suffix of the file be .pem as the certs that are referenced by  
> > the conf file seem to be in PEM format?

>
> Stick to what the program expects and use .crt

Ok what format DER, PEM or some form of PKC?


Use PEM-format with file extension .crt.


DER and PEM both use crt.


Yes, although PEM seems to be more common per my anecdotal understanding,
because for the DER format, `.cer` seems to be more prevalent.
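If in doubt about a given file's encoding, OpenSSL can tell; a quick check
(with `cert.crt` as a placeholder name):

~~~
# Succeeds if the file is PEM-encoded:
openssl x509 -in cert.crt -noout -subject
# Succeeds if the file is DER-encoded:
openssl x509 -inform der -in cert.crt -noout -subject
~~~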


One cert for file or multiple?

Notice the docs do not specify.


Indeed they don't specify this directly. If you take the examples into  
consideration, they may shed some light on this, though:


$ xxd < /usr/share/doc/ca-certificates/examples/ca-certificates-local/local/Local_Root_CA.crt
00000000: 2d2d 2d2d 2d42 4547 494e 2043 4552 5449  -----BEGIN CERTI
00000010: 4649 4341 5445 2d2d 2d2d 2d0a 4475 6d6d  FICATE-----.Dumm
00000020: 7920 526f 6f74 2043 4120 6669 6c65 3b20  y Root CA file; 
00000030: 7265 706c 6163 6520 6974 2077 6974 6820  replace it with 
00000040: 7468 6520 5045 4d2d 656e 636f 6465 6420  the PEM-encoded 
00000050: 726f 6f74 2063 6572 7469 6669 6361 7465  root certificate
00000060: 0a2d 2d2d 2d2d 454e 4420 4345 5254 4946  .-----END CERTIF
00000070: 4943 4154 452d 2d2d 2d2d 0a

I used xxd just because I was unsure of the format, and within the first
lines one can recognize the familiar -----BEGIN CERTIFICATE----- line that
is typical for PEM certificates. Additionally, there is some text that
explicitly explains that this should resemble a PEM file (I find this
example odd, because it is obviously not a valid PEM, since that would be
base64-encoded?)


Additional info can be gained from the README.Debian:

~~~
$ head -n 5 /usr/share/doc/ca-certificates/README.Debian
The Debian Package ca-certificates
----------------------------------

This package includes PEM files of CA certificates to allow SSL-based
applications to check for the authenticity of SSL connections.
~~~

Concluding from both of these documentation pieces it looks like the PEM  
format is indeed hinted at although maybe not as obviously as it could be.


It does not answer the question about multiple certificates in one file,  
though.


[...]

HTH
Linux-Fan

öö




Re: update-ca-certificates

2023-12-14 Thread Linux-Fan

Pocket writes:


On 12/14/23 08:11, Henning Follmann wrote:

On Wed, Dec 13, 2023 at 09:47:41PM -0500, Jeffrey Walton wrote:

On Wed, Dec 13, 2023 at 7:55 PM Pocket  wrote:

What formats does certs need to be to work with update-ca-certificates?

PEM or DER?

PEM

Well lets look at man update-ca-certificates, shall we?

"Certificates must have a .crt extension..."


Lets have a look at some of the standards shall we?

https://www.ssl.com/guide/pem-der-crt-and-cer-x-509-encodings-and-conversions/

A cert that have a suffix of .crt are in DER format by this convention.  
maybe the script should actually look for PEM files?


The above-linked page is not a standard. Additionally, it does not seem to
support your claim, and e.g. says the following:


* “The DER certificate format stands for ‘distinguished encoding rules’. It
  is a binary form of PEM-formatted certificates containing all types of
  certificates and private keys. However, they usually use .cer and .der
  extensions.”

* “A PEM file contains ASCII encoding data, and the certificate files come
  in .pem, .crt, .cer, or .key formats.”

IOW per this source, `.crt` is a perfectly valid file extension for  
certificates in PEM format.


I'd be curious about some “standard” definition of these file extensions,
because from what I have seen, the file extensions for certificates, keys,
and certificate signing requests are sometimes used quite chaotically,
encoding either the intention (.pub, .priv, .cer, .csr) or the data format
(.pem, .der), and sometimes there seems to be an intention to encode both
in some way: e.g. I have observed .pem for PEM certificates and .cer for
DER-formatted certificates, which would be in line with the ssl.com link
btw.


Should the suffix of the file be .pem as the certs that are referenced by  
the conf file seem to be in PEM format?


Stick to what the program expects and use .crt


Well yes that would eliminate the confusion and we can not have that can we.


If there were some agreed-on standard to do this stuff, I would love to
know about it. The closest things that I found by a cursory internet search
were RFC2585 and RFC5280:


* https://datatracker.ietf.org/doc/html/rfc2585
* https://datatracker.ietf.org/doc/html/rfc5280

AFAIU they specify

* `.cer` for DER-encoded certificates
* `.crl` for DER-encoded certificate revocation lists
* `.p7c` for PKCS#7 encoded certificates

[...]

YMMV
Linux-Fan

öö




Re: I uninstalled OpenMediaVault (because totally overkill for me) and replaced it with borgbackup and rsyncq

2023-09-09 Thread Linux-Fan

Default User writes:


On Fri, 2023-09-01 at 23:15 +0200, Linux-Fan wrote:
> Default User writes:


[...]


> > I HAVE used a number of other backup methodologies, including
> > Borgbackup, for which I had high hopes, but was highly
> > disappointed.
>
> Would you care to share in what regards BorgBackup failed you?


[...]


2) I used borg for a while. Block-level de-duplication saved a ton of
space.  But . . .

Per the borg documentation, I kept two separate (theoretically)
identical backup sets, as repository 1 and repository 2, on backup
drive A. Both repositories were daily backed up to backup drive B.

Somehow, repository 1 on drive A got "messed up".  I don't remember the
details, and never determined why it happened.

I had a copy of repository 1 on backup drive B, and two copies of
repository 2 on backup drive B, so, no problem. I will just copy
repository 1 on backup drive B to backup drive A.  Right?

Wrong. I could not figure out how to make that work, perhaps in part
because of the way borg manages repositories by ID numbers, not by
repository "names".


[...]


And, what is F/LOSS today can become closed and proprietary tomorrow,
and thus unintentionally, or even deliberately, you and your data are
trapped . . .  (Audacity? CentOS?).

So even though borg seems to be the flavor of the month, I decided no
thanks. I think I'll just "Keep It Simple".

Now if borg (or whatever) works for you, fine. Use it. This is just my
explanation of why I looked elsewhere. YMMV.


Thank you very much for taking the time to explain it in detail.

I can understand that corruption / messed-up repositories are one of the
red flags for backup tools and hence a good reason to avoid such tools, so
I fully understand your decision. That there was no way to recover despite
following the tool's best practices (docs) does not improve things...


Just a question for my understanding: You mentioned having multiple
repositories. If I were in the situation with two different repositories
and one corrupted, my first idea (if the backup program does not offer any
internal functions for these purposes, which you confirmed using the
mailing list?) would be to copy the “good” repository at the file level
(i.e. with rsync / tar / whatever) and then afterwards update the copy to
fix up any metadata that may be wrong. Did you try this naive approach
during your recovery attempt?
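For clarity, that naive approach would amount to something like this (paths
are placeholders; untested against Borg's ID-based repository metadata):

~~~
# Copy the intact repository from drive B over the corrupted one on drive A:
rsync -a --delete /mnt/driveB/repo1/ /mnt/driveA/repo1/
~~~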


I think that currently I am not affected by such issues because I only keep
the most recent state of the backup and do not have any history in my
backups (beyond the “archive” which I keep separate, using my own program).
Hence for me, indeed, re-creating the repository in the event of corruption
is a viable solution.


But as backup programs advertise the possibility of keeping multiple states
of the backup in one repository, it is indeed essential that one can “move
around” such a repository on the file system while being able to continue
adding to it even after switching to a different/new location. I have never
thought about testing such a use case for any of the tools that I tried,
but I can see that it is actually quite an essential feature, which makes
it even more strange that it would not be available with Borg?


TIA
Linux-Fan

öö




Re: I uninstalled OpenMediaVault (because totally overkill for me) and replaced it with borgbackup and rsyncq

2023-09-02 Thread Linux-Fan

Michael Kjörling writes:

[...]


The biggest issue for me is ensuring that I am not dependent on
_anything_ on the backed-up system itself to start restoring that
system from a backup. In other words, enabling bare-metal restoration.
I figure that I can always download a Debian live ISO, put that on a
USB stick, set up an environment to access the (encrypted) backup
drive, set up partitions on new disks, and start copying; if I were
using backup software that uses some kind of custom format, that would
include keeping a copy of an installation package of that and whatever
else it needs for installing and running within a particular
distribution version, and making sure to specifically test that,
ideally without Internet access, so that I can get to the point of
starting to copy things back. (I figure that the boot loader is the
easy part to all this.)


[...]

My personal way to approach this is as follows:

* I identify the material needed to restore.
  It consists of

   - the backup itself
   - suitable Linux OS to run a restore process on
   - the backup software
   - the backup key
   - a password to decrypt the backup key

* I create a live DVD (using `live-build`; a rough command sketch follows
  after this list) containing the Linux OS (including GUI, gparted and
  debian-installer!), the backup software (readily installed inside the
  live system), and the backup key (as an encrypted file), but not the
  password nor the backup itself.

  Instead I decided to add:

  - a copy of an SSH identity I can use to access a
read-only copy of the backup through my server and
  - a copy of the encrypted password manager database
in case I forgot the backup password but not the
password manager password and also in case I would
be stuck with the Live DVD but not a copy of the
password such that I could use one of the password
manager passwords to access an online copy of the
backup.

* When I still used physical media in my backup strategy
  these were external SSDs (not ideal in terms of data
  retention, I know). I partitioned them and made them
  able to boot the customized live system (through syslinux).

  If you took such a drive and a PC of matching architecture
  (say: amd64) then everything was in place to restore from
  that drive (except for the password...). The resulting Debian
  would probably be one release behind (because I rarely updated
  the live image on the drive) but the data would be as up to
  date as the contained backup. The assumption here was that one
  would be permitted to boot a custom OS off the drive or have
  access to a Linux that could read it because I formatted the
  “data” part with ext4 which is not natively readable on
  Windows.
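Roughly, the live image step mentioned above boils down to the following (a
minimal sketch; the distribution, paths and options are illustrative, and
my real configuration contains more):

~~~
mkdir live-restore && cd live-restore
lb config --distribution bookworm --binary-images iso-hybrid
# Files below config/includes.chroot/ end up inside the live system:
mkdir -p config/includes.chroot/root
cp /path/to/backup-key.gpg.enc config/includes.chroot/root/
sudo lb build   # produces live-image-amd64.hybrid.iso
~~~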

In addition to that, each copy of my backups includes a copy of the backup  
program executable (a JAR file and a statically compiled Rust program in my  
case) and some Windows exe files that could be used to restore the backup on  
Windows machines in event of being stuck with a copy of the backup “only”.


While this scheme is pretty strong in theory, I update and test it far too
rarely, since it is not really easy to script the process. At least I
tested the correct working of the backup restore after creating the live
image, by starting the restore from inside a VM.


HTH
Linux-Fan

öö





Re: I uninstalled OpenMediaVault (because totally overkill for me) and replaced it with borgbackup and rsyncq

2023-09-01 Thread Linux-Fan

Michel Verdier writes:


On 2023-09-01, Default User wrote:

> Yes, it does require considerable space (no data de-duplication), and
> the rsync of the backup drives does take considerable time.  But to me,
> it is worth it, to avoid the methodological equivalent of "vendor lock-
> in".

You must have a bad configuration : rsnaphot de-duplicate using hard
links so you never have duplicated files. Keeping 52 weekly and 7 daily
and 24 hourly I need only 130% of original space. And it takes minimal
time as it transfers only changes and can use ssh compression.


It highly depends on the type of data that is being backed up.

For my regular user files, I think file-based deduplication works OK. But
for my VM images, hardlinks would only save space for those VMs which did
not run between the current and the preceding backup.


Btw.: I am personally not using any hard-link based approach, mostly due to  
the missing encryption and integrity protection of data and metadata.


HTH
Linux-Fan

öö




Re: I uninstalled OpenMediaVault (because totally overkill for me) and replaced it with borgbackup and rsyncq

2023-09-01 Thread Linux-Fan

Default User writes:


On Fri, 2023-09-01 at 07:25 -0500, John Hasler wrote:
> Jason writes:
> > Or how does your backup look like?


See https://lists.debian.org/debian-user/2019/11/msg00073.html
and https://lists.debian.org/debian-user/2019/11/msg00420.html


> Just rsync.


Sorry, I just couldn't resist chiming in here.

I have never used OpenMediaVault.

I HAVE used a number of other backup methodologies, including
Borgbackup, for which I had high hopes, but was highly disappointed.


Would you care to share in what regards BorgBackup failed you?

I am currently using `bupstash` (not in Debian, unfortunately) and `jmbb`
(which I wrote for myself in 2013) in parallel and am considering switching
to `bupstash`, which provides just about all the features that I need.


Here are my notes on these programs:
* https://masysma.net/37/backup_tests_borg_bupstash_kopia.xhtml
* https://masysma.net/32/jmbb.xhtml

And also the Bupstash home page:
* https://bupstash.io/

IMHO borg is about the best backup program that you can get from the Debian  
repositories (if you need any of the modern features that is). The only  
issue I really had with it is that it was too slow for my use cases.



In the end, I currently have settled upon using rsnapshot to back up my
single-machine, single-user setup to external external usb hard drive
A, which is then copied to external usb hard drive B, using rsync.  If
you can do rsync, you can do rsnapshot.  

It's easy, especially when it comes to restoring, verifying, and
impromptu access to data, to use random stuff, or even to just "check
on" your data occasionally, to reassure yourself that it is still
there.

Yes, it does require considerable space (no data de-duplication), and
the rsync of the backup drives does take considerable time.  But to me,
it is worth it, to avoid the methodological equivalent of "vendor lock-
in".


Yes, the “vendor lock-in” is really a thing, especially when it comes to
restoring a backup and the fancy backup software just does not compile for
the platform, is not available for other reasons, or you are stuck on a
Windows laptop without admin permissions (worst case scenario?).


I mitigated this with `jmbb` by providing a way to restore individual files
using third-party utilities, and I intend to mitigate this for `bupstash`
by writing my own restore program
(work in progress: https://masysma.net/32/maxbupst.xhtml)


INB4:  No, I don't do online backups. If people or organizations with
nose problems want my data they are going to have to make at least a
little effort to get it. And yes, I do know the 1-2-3 backup
philosophy, which does seem like a good idea for many (most?) users.


The problem I have with offline backups is that it is inconvenient to carry
around copies, which means they are always more out of date than I want
them to be. Hence I rely on encryption to store backups on untrusted
storage.


[...]

Short but comprehensive resource on the subject (includes some advertising /  
I am not affiliated / maybe this has outlived the product it advertises for?):

http://www.taobackup.com/index.html

YMMV
Linux-Fan

öö




Re: Atualizar Debian 8.11

2023-06-28 Thread Linux - Junior Polegato

Hello!

In my opinion, big differences between versions cause a lot of trouble,
especially with the configuration files. So if you have /home separate on
its own partition, make a backup of /etc inside it, and check whether you
have anything particular in /var or /usr/local; throw that into /home as
well. It is worth grabbing the list of installed packages ("dpkg -l >
/home/pacotes"). If you can back everything up to an external HD or the
cloud, even better.


Start a from-scratch installation of the most recent version, go with
expert mode and format the other partitions except /home, and follow the
installation through to the end. Once everything is installed according to
the packages you need, consult your package list, run a comparison (diff)
between the current /etc and the /etc you copied, adjust the configuration
of the new services as you go, and learn the new parameters along the way:
both those that changed and everything new that the newer versions bring.


Personally, that is the path I would choose.


[]'s

Junior Polegato





On 6/28/23 12:38, Marcelo wrote:

Good afternoon,

Is there currently any way to upgrade a Debian 8.11? To 9 and then 10
accordingly.

Any specific source.lists?

I appreciate any help...

Regards,
Marcelo




Re: certificado para assinatura digital

2023-06-14 Thread Linux - Polegato

On 6/14/23 11:14, Ênio Júnior wrote:
Anyway, I got curious about the subject of the email: "Re: certificado
para assinatura digital" [certificate for digital signature]. I believe
that for developers and well-informed people this topic would be more
interesting than discussing Portuguese grammar rules, prejudices, or what
people do with their own lives in private.
By the way: Did anyone pick up the thread? Was the question ever answered?
Would any of the "Portuguese teachers" know the answer (and the question
too, since it got lost in this sea of uselessness)?


If you pay attention to the "boiolagem" message, the date of the original
message is from more than a year ago:

"Je Wed, Apr 06, 2022 at 06:43:52PM -0300, Leonardo S. S. da Rocha skribis:"

See:
https://lists.debian.org/debian-user-portuguese/2022/04/threads.html




Re: certificado para assinatura digital

2023-06-14 Thread Linux - Polegato

On 6/14/23 00:24, Lucas Castro wrote:

    i.e. "Todos e Todas aqui", "Homens e Mulheres aqui presente!"
    [i.e. "everyone (m.) and everyone (f.) here", "men and women present here!"]

"Todos" already covers everybody in the Portuguese language, so "Todos e
TODAS aqui" would amount to "human beings and WOMEN here" (which is wrong
to begin with). And don't come at me with "seres human[ao@ie]s"; this is a
list of users of the Portuguese language, and it falls to us to follow
correct Portuguese, especially in translations.




Re: certificado para assinatura digital

2023-06-13 Thread Linux - Polegato

On 6/13/23 22:10, Lucas Castro wrote:

On 13/06/2023 22:03, Gilberto F da Silva wrote:
Je Wed, Apr 06, 2022 at 06:43:52PM -0300, Leonardo S. S. da Rocha
skribis:

Good evening, everyone!
I hope you are all [tod@s] doing well and staying safe [segur@s].
 What "boiolagem" [nonsense]! How is one supposed to pronounce those
at-signs in the middle of words?

Gilberto,
Man, regardless of your opinion, it is good to keep things respectful on
the list.



"Bigodagem!" [macho talk]
"Todos" is already gender-indefinite; "todas" exalts the consideration,
importance and respect that women, as mothers and caretakers of the family,
hold in society. Any other form, distortion and/or variant is pure
whining...




Re: "dpkg-reconfigure" dash no longer works

2023-06-13 Thread Linux-Fan

Darac Marjal writes:



On 10/06/2023 16:08, S M wrote:

On Sat, Jun 10, 2023 at 02:12:14PM +0100, Darac Marjal wrote:


Is command-line editing part of POSIX, then? Are you suggesting that dash is
missing some bit of POSIX compliance? That's possible.
Command-line editing in vi-mode is defined by POSIX, but it's not mandatory
as far as I know.

OK, this looks like Bug #561663. If I read that bug correctly, the
intention IS that dash should support command-line editing (in your case,
you'd invoke it with -V for vi-style editing). The maintainer claimed the
bug was closed, but then they re-opened it two days later.


Interesting. I am also one of the niche users interested in running dash as  
a primary shell with vi-style line editing.


Last time I tried it (must be several years ago already), the vi-style  
editing did indeed work when enabled with `set -o vi`. On my current Debian  
oldstable (bullseye) workstation it does not work anymore.


Back when the vi-style editing worked, it was _almost_ ready for
“productive” use. Unfortunately, POSIX shells do not support bash's `\[`
and `\]` in prompts, which can be used to hide color code sequences from
being counted towards the prompt length. This caused a discrepancy between
the observed and computed lengths, leading to erratic line editing whenever
the line exceeded the width of the window (which happens often for me).
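For comparison, this is the bash feature in question (`\[` and `\]` mark
non-printing sequences so that bash excludes them from the computed prompt
length):

~~~
# bash: the color codes between \[ and \] are not counted:
PS1='\[\e[0;32m\]\u@\h\[\e[0m\]:\w\$ '
# POSIX sh / dash has no equivalent escape, so a colored prompt throws
# off the line editor's length computation.
~~~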


Hence I concluded that while it sounds nice to switch to `sh` as the primary  
shell in theory, this does not quite work in practice (for me anyways).


YMMV
Linux-Fan

öö




Re: Searching for HA shared storage free for docker

2023-04-28 Thread Linux-Fan

Mimiko writes:


Hello.

I would want to use a shared storage with docker to store volume so swarm  
containers can use the volumes from different docker nodes.

As for now I tried glusterFS and linstor+drbd.
GlusterFS has a bug with slow replication and slow volume status  
presentation or timing out. Also directory listing or data access is not  
reliable, despite the nodes are connected with each other. So the GlusterFS  
didn't worked for me.
With linstor+drbd and linstor plugin for docker, the created volumes are  
inconsistend on creation. Did not find a solution.


DRBD might still be a solid base to run upon. Consider using an alternative
file system on top of it, e.g. GFS2 or OCFS2. The management tools to
assemble these kinds of cluster file systems can be found in the packages
gfs2-utils and ocfs2-utils respectively. Back when I tried it, I found
OCFS2 slightly easier to set up.


I once tested a Kubernetes + OCFS2 setup (backed by a real iSCSI target  
rather than DRBD though) and it worked OK.



I want to try Ceth.

What other storage distributions did you tried and worked? It should have  
mirroring data on more than 2 nodes, representation as a bloch device to  
mount in system, or as a network file system to be mounted using fstab, or  
using docker volume. Sould be HA and consistent with at least 3 nodes and  
can automatically get back even if two of three nodes restarts, and be prune  
to sporadic network shortage.


I am not aware of any system that fulfils all of these requirements.  
Specifically, systems are typically either low-latency OR robust in event of  
“sporadic network shortage” issues.


GlusterFS could be preferrable on this with there were not the bug with  
timeout.


I have never used GlusterFS, but from what I have heard about it, it very
much seems as if this is not really a “bug” but rather by design. I think
the observed behaviour might also be similar if you were to use Ceph
instead of GlusterFS, but if you find out that it works much better, I'd be
interested in learning about it :)


HTH and YMMV
Linux-Fan

öö




Re: Microcode bug problem?

2023-03-19 Thread Linux-Fan

Jesper Dybdal writes:

I am planning to upgrade from Buster to Bullseye, and trying to prepare for  
any problems.


The release notes say
The intel-microcode package currently in bullseye and buster-security (see  
DSA-4934-1 (https:
//www.debian.org/security/2021/dsa-4934)) is known to contain two  
significant bugs. For
some CoffeeLake CPUs this update may break network interfaces  
(https://github.com/intel/
Intel-Linux-Processor-Microcode-Data-Files/issues/56) that use firmware- 
iwlwifi,
and for some Skylake R0/D0 CPUs on systems using a very outdated  
firmware/BIOS, the system may
hang on boot (https://github.com/intel/Intel-Linux-Processor-Microcode-Data- 
Files/

issues/31).


I have no idea whether my old processor is a "CoffeeLake" or a "Skylake" or  
something else.  It is a pc that I bought in 2008, I think (and still  
working just fine).


/proc/cpuinfo says:

vendor_id   : GenuineIntel
cpu family  : 6
model   : 23
model name  : Intel(R) Core(TM)2 Duo CPU E8400  @ 3.00GHz


The way to find out is to query the database at ark.intel.com. For your  
processor it leads me to


https://ark.intel.com/content/www/us/en/ark/products/33910/intel-core2-duo-processor-e8400-6m-cache-3-00-ghz-1333-mhz-fsb.html

The "CoffeeLake" and "Skylake" are processor code names. For the Core 2 Duo  
E8400 this is "Wolfdale" according to ark.intel.com



Do I need to worry about those microcode bugs?


Your CPU is far older than the ones affected by the microcode bugs. It does
not have a code name matching one of the listed ones. I'd conclude that
your CPU is not affected.


HTH
Linux-Fan

öö




Re: PDF on debian

2023-03-09 Thread Linux-Fan

Corey Hickman writes:


Hello,

What's the suggested PDF generator in Debian (without desktop)?
And is there a VIM plugin for that?


For cases where I care little about font or formatting, I use VIM's  
integrated hardcopy:


:ha > /tmp/print.ps
:!ps2pdf /tmp/print.ps /tmp/print.pdf

Find the printable PDF in /tmp/print.pdf afterwards.

When I want to use a custom font and zoom settings, I follow a  
transformation of Text -> HTML -> wkhtmltopdf using a script (attached as  
`ascii2pdf.sh`). Requires `wkhtmltopdf` and `texlive-fonts-extra` (you can  
avoid that by changing the font file in the script).


To test it, you can run it on itself to produce a file `print.pdf`

./ascii2pdf.sh ascii2pdf.sh print.pdf

It even supports colors, but the syntax is weird enough that it is not
useful without additional documentation (I did not document it yet...).


HTH
Linux-Fan

öö


ascii2pdf.sh
Description: Bourne shell script




Re: Subject: OT: LUKS encryption -- block by block, file by file, or "one big lump"

2023-03-08 Thread Linux-Fan

to...@tuxteam.de writes:


On Wed, Mar 08, 2023 at 10:20:09AM -0500, rhkra...@gmail.com wrote:
> I am curious about the integrity of LUKS (that is, the ability to preserve
> data in the event of corruption on the disk or such).


[...]


>* can files in the LUKS partition other than the one with the one block
> corrupted be read correctly?

Most probably yes (see below)


There is an interesting FAQ touching this subject at  
https://gitlab.com/cryptsetup/cryptsetup/-/wikis/FrequentlyAskedQuestions#5-security-aspects


Specifically, section 6.5 “Do I need a backup of the full partition? Would
the header and key-slots not be enough?” seems to go in that direction, but
is rather vague regarding the quantity of data affected by a single error.


To me it is quite plausible that small errors can corrupt larger amounts of
data than “usual”, but most likely not _all_ data, if one excludes the
precious metadata from the consideration.


[...]


> Something I don't know is whether LUKS does encryption separately for each
> block (or maybe for each file) or whether somehow the result of encryption  
> is one big "lump" of data [...]


This hints at the difference of CBC and counter modes [1]. If you
encrypt each block separately with the same key (this mode is called
ECB "electronic code book"), you end up with a schema in which equal
blocks encrypt to equal encrypted values, which gives away quite a
bit of your content (nice pic here [2]).

The traditional way to counter this is to mix part (well, kind of)
of the block you just encrypted into the next block, this is called
CBC for "cipher block chaining". This is fine for streams of blocks
(think TLS), but not so nice for a hard disk, where you'd like to
have random block access.

The solution is to take some hashy function of the block's number
itself into the blocks encription. This is called CTR (aka counter
mode).

I haven't looked deeply into Luks (which would be Luks2 these days).
As far as I know, Luks only manages all that stuff (key management,
algorithms, modes, etc.). But I'm pretty sure that whichever algo
is beneath that is using CTR mode.


[...]

It seems that earlier versions defaulted to CBC-ESSIV mode whereas new  
instances rely on XTS. Regarding their details, see


https://en.wikipedia.org/wiki/Disk_encryption_theory

I think one of the problems with a “plain” CTR instantiation is that it  
fails when a file changes:


Say the counter value is based on the location on disk*** and the first  
entry with CTR=1 is considered. Now the file is encrypted as follows:


Ek(CTR=1) xor Plaintext1

Now the data at the location changes from Plaintext1 to Plaintext2; then,
naively, one could go on storing the new ciphertext as follows:


Ek(CTR=1) xor Plaintext2

But: The xor operation is only secure under the assumption of a “one time  
pad” i.e. no reuse of the "key" which in this case would be a constant value  
of Ek(CTR=1) for all plaintexts that are to be stored at that location.
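To see the failure concretely: an attacker who has observed both versions
of the block can compute

  (Ek(CTR=1) xor Plaintext1) xor (Ek(CTR=1) xor Plaintext2)
      = Plaintext1 xor Plaintext2

i.e. the keystream cancels out, and information about both plaintexts leaks
without any knowledge of the key.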


***) This may be a simplification, but consider the alternatives...

HTH
Linux-Fan

öö




Re: what method do you prefer for data transfer between nodes?

2023-03-05 Thread Linux-Fan

Ken Young writes:


Hello,


The methods I know,

1. scp
pros: the native tool in the OS
cons: you will either input password or put key pairs into servers for  
authentication.


Works for simple cases.


2. rsync
pros: it can transfer data by increasement 
cons: you need to setup rsyncd server and make the correct authorization.


Works for simple and complex cases.
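Note that an rsyncd server is not strictly required; rsync can also run
over plain SSH (paths and host are placeholders):

~~~
rsync -av /local/dir/ user@host:/remote/dir/
~~~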


3. ftp/ftps
pros: easy to use
cons: need to setup ftpd server, and the way is not that secure?


Whenever possible, I'd prefer 1 or 2 over this.


4. rclone
pros:easy to use
cons: hard to setup (you may need a cloud storage for middleware).


I only use rclone when I want to target a cloud storage.
A “cloud storage for middleware” does not seem sensible to me when I can
copy using methods 1 and 2 without such a middleware.



For me I most often use scp + rsync. and what's your choice?


These are my standard choices, too. In automated scenarios I often prefer  
rsync over scp due to more flexibility in configuration.


My additional tools for special purposes:

5. lsyncd
If you need to keep directories in sync continuously, there is a tool called  
`lsyncd` that automates repeated invocation of `rsync` in a smart way.


6. tar + netcat (or tar + ssh in very rare cases)
Using tar sacrifices all the flexibility of rsync but may attain a  
significantly higher performance and does not need a lot of flags to do the  
right thing by default (i.e. preserve everything when acting as root). I  
prefer this variant when migrating to a new disk or PC because it seems to  
be the most efficient variant in a "local trusted network and no speedup  
from incremental copying" scenario.


I documented my approach to this here:
https://masysma.net/37/data_transfer_netcat_tar.xhtml
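In essence, the transfer looks like this (a sketch for a trusted LAN; host
and port are placeholders, and the listen-flag syntax differs between
netcat variants):

~~~
# On the receiving machine (as root, to preserve ownership):
cd /mnt/target && nc -l -p 9999 | tar -xpf -
# On the sending machine:
cd /mnt/source && tar -cpf - . | nc receiver.example 9999
~~~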

HTH and YMMV
Linux-Fan

öö




Re: something that can display a Thumbs.db?

2023-02-14 Thread Linux-Fan

to...@tuxteam.de writes:


On Sat, Feb 11, 2023 at 07:20:17PM +0100, Linux-Fan wrote:

[...]

> ~~~
> Traceback (most recent call last):
>   File "/usr/bin/vinetto", line 418, in <module>
>     print(" " + TNid + " " + TNtimestamp + " " + TNname)
> TypeError: can only concatenate str (not "bytes") to str
> ~~~

HAH. Python3 and its OCD type system. Perhaps you might want to
try running it with Python2 (their character type systems are
broken in different ways).


At least a cursory attempt at doing this failed:

~~~
$ python2 /usr/bin/vinetto -o /tmp .../Thumbs.db
Traceback (most recent call last):
  File "/usr/bin/vinetto", line 40, in <module>
    import vinetto.vinreport
ImportError: No module named vinetto.vinreport
~~~

On GitHub, there seems to be a newer version available where porting to
Python 3 has progressed further. If users run into the error above, I'd
suggest that they try the version from GitHub.


HTH
Linux-Fan

öö




Re: something that can display a Thumbs.db?

2023-02-11 Thread Linux-Fan

gene heskett writes:


Greetings All;

I'm in the middle pages of trying to build a voron 2.4 kit, a 3d printer.  
Definitely not recommended for beginners. Very poor assembly instructions.  
I've printed two versions of them, over 500 pages in dead tree format but am  
getting poor and often out of order or missing finishing touches, obviously  
the work of a designer who assumes this is the 50th such project the user  
has built and Just Knows what to do next..


I have unpacked the .stl files to feed cura and they generally work, but  
each major section of the printed stuff also contains a Thumbs.db file.
I'm not familiar with databases enough to know what to do, but being able to  
see whats in this file in thumbnail form would sure help me to build the  
next part I need. So what do we have that can display whats in these files,  
from a Chinese originator?


File says this about one of them:
Thumbs.db: Composite Document File V2 Document, Cannot read section info


Thumbs.db is a file generated by Windows OSes to speed up image previews.
Users are not usually expected to need to read the file. On Windows systems
such files are usually “hidden” (Windows does not follow the convention of
a leading dot meaning hidden), such that they end up in all kinds of places
where they are not expected/useful.


Unless you are looking to extract miniature previews of some other files in
that same directory, Thumbs.db may well be of little use to you.


Or in other words: Whatever information you are missing and want to extract
from there is unlikely to be contained in the Thumbs.db.



Is there something that can display the contents of that?


Debian has a package called `vinetto`. When I tried it on some random  
`Thumbs.db` from Windows-originating files, it bailed out with


~~~
Traceback (most recent call last):
  File "/usr/bin/vinetto", line 418, in <module>
    print(" " + TNid + " " + TNtimestamp + " " + TNname)
TypeError: can only concatenate str (not "bytes") to str
~~~

That looks like a python2 -> python3 migration issue to me.

But: It extracted one thumbnail from the Thumbs.db that looks plausible.

If you must know about the Thumbs.db's contents, it might be interesting to  
try this tool.


HTH
Linux-Fan

öö

[...]




Re: Periodic refresh (or rwrite?) of data on an SSD (was: Re: Recommended SSDs and 4-bay internal dock)

2023-01-13 Thread Linux-Fan

rhkra...@gmail.com writes:


On Wednesday, January 11, 2023 12:20:05 PM Linux-Fan wrote:
> > Or does one need to read every byte, allocated or not?
>
> AFAIK one needs to _power_ the device every once in a while and keep power
> connected for some time. Then, the controller can do all the necessary
> actions in the background.


[...]


> This entry seems to be rather pessimistic:
> https://www.ibm.com/support/pages/potential-ssd-data-loss-after-extended-shutdown

I've read some of that article, or, I guess, really the abstract and the
section labeled "Content" on that page:

https://www.ibm.com/support/pages/potential-ssd-data-loss-after-extended-shutdown

I see the statement: "A system (and its enclosed drives) should be powered
up at least 2 weeks after 2 months of system power off. If a drive has an
error indicating it is at end of life we recommend not powering off the
system for extended periods of time.", and the first quoted paragraph in
this email reiterates the need to power up occasionally and leave connected
for some time (so that the controller can do all the necessary actions in
the background).

I assume that they are talking about the hardwired controller built into
the drive, thus there is no particular need to power it up from an OS that
recognizes it, but simply something that powers it somehow?


Actually, since the IBM article is about some sort of storage system, it
becomes hard to tell what IBM means by the “controller”: the two weeks are
quite a long period, i.e. they expect some technician to swap SSDs in the
storage array once in a while, and in there, they are powered 24/7.

For consumer SSDs, I understand the controller to be indeed one of the
chips on the SSD, i.e. outside the realm of the OS's control. Also, no
consumer SSD manufacturer advises their customers to power their SSDs for
two weeks straight :)



Can anyone shed more light on what happens during that two weeks -- is the
data somehow "refreshed" (in place), or rewritten somewhere else on the  
drive, or ???


(Perhaps that is discussed in the complete article of which this appears to  
be just the abstract??)


For the IBM case I expect that information to be proprietary :)

In fact, there seems to be curiously little information available on the  
topic at all. Most people (including credible sources like e.g. the Kingston  
Support) suggest that all data should be read from the flash for the  
controller to refresh it:


https://reboot.pro/index.php?s=234d281f8a9f18ba7b36f5e98890bd2f=13791#entry121995

It is probably not the entire truth, though, because if I understand some
JEDEC slides correctly, they only talk about power-off, where I assume that
everything non-power-off must not count towards the data retention time,
including not reading the data?


(cf. slides 26f of 
https://www.jedec.org/sites/default/files/Alvin_Cox%20[Compatibility%20Mode]_0.pdf)

Official manufacturer documents are pretty opaque regarding the issue:

https://www.kingston.com/en/blog/pc-performance/ssd-garbage-collection-trim-explained

https://semiconductor.samsung.com/resources/white-paper/Samsung_SSD_White_Paper.pdf
(relevant chapter is CH04 with pp. 17ff)

[...]

Maybe someone with advanced Internet searching capabilities can bring up
more relevant documents about the subject :)


HTH
Linux-Fan

öö




Re: Recommended SSDs and 4-bay internal dock

2023-01-11 Thread Linux-Fan

Jeremy Nicoll writes:


On Wed, 11 Jan 2023, at 14:58, Tom Browder wrote:
> I plan to install a 4-bay, hot swappable SSD dock to replace the existing
> DVD in my only 5.5" externally accesible bay.  To fill it, I will get up to
> four 2.5 inch SSDs of 1 Tb: MX500 by Crucial. My plan is to use the SSDs
> for backup, but not in a RAID configuration.

I can't advise on choice of dock etc, but I'm interested in the side issue
of how long an unpowered SSD can be assumed still to be holding
the data written to it.

Does one need (just) to mount the drive once in a while?  (How often?)

Or does one need (say) to read every file on the drive once, so that the
SSD controller can assess whether any data needs to be moved?

Or does one need to read every byte, allocated or not?


AFAIK one needs to _power_ the device every once in a while and keep power
connected for some time. Then, the controller can do all the necessary
actions in the background.


A long time ago, companies claimed data retention of 10 years (that was:
for SLC drives!). The latest figure that I am aware of was 1 year (maybe
for TLC?). I think the trend is that many manufacturers do not publish any
data retention times for consumer drives (these days QLC) anymore. One can
only guess or measure.


For backup purposes, I believe the advantage of SSDs over HDDs is mostly
that they are shock-resistant. If this is of no concern, I'd prefer to back
up to HDDs instead of SSDs because of the data retention issue and, in
general, a higher chance of rescuing data from the drive in the event of
failure.


Here is an article from 2021 that shows some typical numbers as I
remembered them. I do not know anything about this specific source's
credibility, though:

https://www.virtium.com/knowledge-base/ssd-data-retention/

This entry seems to be rather pessimistic:
https://www.ibm.com/support/pages/potential-ssd-data-loss-after-extended-shutdown

IBM says that for enterprise drives (typically higher quality than
consumer-grade drives) only three months of data retention are guaranteed
at 40°C. And: This article is from 2021, too...


YMMV
Linux-Fan

öö




Re: Getting PC with Ubuntu; change to Debian?

2022-12-05 Thread Linux-Fan

Patrick Wiseman writes:


Hello, fellow Debian users:

I've had Debian on my computers for a very long time (can't remember exactly  
when but early 2000's for sure); and I've had Lenovo laptops for ages too. I  
finally need to replace my main laptop (an at least 10-year old ThinkPad), so  
I've bought an X1 from Lenovo, with Ubuntu pre-installed (to be delivered in  
January).


So, a question. Should I side-grade to Debian (and, if so, how easy would it  
be to do that?)? Or will I be happy with Ubuntu (which is, after all, a  
Debian derivative)?


What is a side-grade in this context? I'd strongly advise against replacing
the APT sources with Debian's and then trying to switch with an
`apt-get dist-upgrade`. Instead, I'd suggest the following course of action:

* Create an image backup of the installed Ubuntu (e.g. as sketched after
  this list) such that it is restorable in case of warranty or hardware
  compatibility issues that may surface later. If you do this from a Debian
  Live system, you can also get a first impression of how well Debian runs
  on the new hardware.

* Install Debian stable replacing the existing Ubuntu.
  If it works fine, stop there. If not, try backports or even Testing to
  see if it is an issue regarding hardware being too new :)

* If all fails, the image allows reverting to Ubuntu easily enough.
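For the image backup step, something along these lines from the live
system would do (a sketch; the device name and target path are
placeholders):

~~~
# Image the internal disk to an external drive, compressed:
dd if=/dev/nvme0n1 bs=1M status=progress | gzip > /mnt/usb/ubuntu-factory.img.gz
~~~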

I'm very familiar and comfortable with Debian (was happy with 'testing' for a  
long time, but have lately reverted to 'stable'). And, although I'm a rare  
participant on this list, I enjoy the lurk and would presumably need to go  
elsewhere if I had questions about my Ubuntu experience.


If you have been using Debian for a long time, by all means, stay with it.

Ubuntu is a derivative and many things feel similar, but for an experienced
Debian user there are tons of minor obstacles when doing certain things on
Ubuntu. Differences in network interface configuration come to mind. Also,
Ubuntu likes to distribute certain applications as “snaps”, which replace
traditional Ubuntu packages for e.g. Firefox. More differences exist, of
course, but these are my top two :)


HTH
Linux-Fan

öö

[...]




Re: Debian failed

2022-12-05 Thread Linux-Fan

hw writes:


On Sun, 2022-12-04 at 09:06 -0700, Charles Curley wrote:
> On Sun, 04 Dec 2022 15:52:31 +0100
> hw  wrote:
>
> > so I wanted to have Debian on a Precision R7910 with AMD graphics card
> > and it failed because it refuses to use the amdgpu module.  I tried
> > forcing to load it when booting and it still didn't work.
> >
> > So I'm stuck with Fedora.  What's wrong with Debian that we can't even
> > get AMD cards to work.
>
> Before I get started with some questions to help you debug the problem,
> did you want some serious debugging assistance, or did you simply want
> to whine?

I'm not whining, I was merely telling.  There must be quite a few people
using AMD cards who must have been running into the same problem.


Debian stable does not usually add support for new hardware during its
lifecycle. Hence the standard way to deal with such things is to either buy
hardware that is already old enough to be supported by Debian stable or to
install a newer release/kernel. A new kernel has been suggested in the
thread already and is a “standard thing” to try out in case of
newer-than-distribution hardware. As are backported firmware blobs if you
use them... :)


For instance, I am using Debian stable right now on a (smaller) Dell
workstation with an “AMD Radeon Pro W5500” which is old enough to be
supported out of the box already. When I bought it, I installed what was
then Debian Testing (Bullseye) specifically because I knew that the stable
distribution might have been too old (and it was shortly before the release
of Bullseye anyways).


Since I needed Vulkan support and enhanced GPU performance for certain  
(less-important) applications, I installed some parts of the proprietary  
driver package. This is not officially supported and I expect it to be prone  
to breakage upon upgrades etc.



I can't very well debug this any further since I need the machine
working, so I installed Fedora instead, and I also have currently an
NVIDIA card installed.

After experiencing Debian, I'm no longer really inclined to use it on
workstations.  I'm re-considering Gentoo, but that has had overwhelming
problems in the past with updating which I don't want to have again.


If Fedora works for you now, you could also just stay with it until a
reason to switch to Debian pops up or a good chance to switch manifests
itself. IOW: No need to rush things while the computer is working as
expected :)


Unless you keep upgrading to the newest hardware, it may also be a good
point in time to switch once a new Debian stable is released that is newer
than the hardware. Using a new release rather than only a newer kernel has
the advantage of coming with a newer graphics stack (X11/Mesa/etc.) that
may also be relevant for the GPU to function properly.


In my experience, Debian works well across various workstation brands  
(HP, Dell, Fujitsu) but here, the hardware was often older than the  
software.


HTH
Linux-Fan

öö

[...]




Re: Debian mirror size

2022-11-28 Thread Linux-Fan

Georgi Naplatanov writes:


On 11/28/22 21:36, krys...@ibse.cz wrote:

Hello everyone,
I have setup debian mirror using official archvsync script suite. Mirrored  
architectures are: all source i386 amd64. The rest of config for ftp-sync is  
kept on default values. Everything seems to work fine - only problem is that  
the mirror is too small, at least according to this website:  
'www.debian.org/mirror/size'. Sum of sizes for these architectures is  
1488GiB (give or take, data are changing every day), but I have only 930GiB


I have a local mirror here, too (using ftpsync scripts) and it takes 1.1T  
(1088 GiB) according to `du -sh` for architectures: all, amd64, armhf, i386,  
source. I thus conclude that maybe it is OK for the mirror to be smaller  
than the listed mirror sizes? This local mirror has been serving me well for  
a few years already :)


Just a data point if it matters.

HTH
Linux-Fan

öö




Re: Dell Precision 3570 - Debian instead of Ubuntu

2022-11-28 Thread Linux-Fan

B.M. writes:


I'm going to buy a Dell Precision 3570 laptop in the next couple of weeks.
Since it's a Build Your Own device, I can order it with Ubuntu 20.04 LTS pre-


[...]


the machine should work well - but who knows? How would you proceed?


[...]


b) replace Ubuntu by Debian, fiddling around issues if there are any later


[...]


d) analyze the installed system (how?) to find out any special configs etc.
before replacing Ubuntu by Debian


[...]

My approach would be to combine d + b of sorts:

1. First start up from a Debian Live system. Check that it boots OK.

2. Then create a copy of the installed Ubuntu system (either `dd` or
   `tar` depending on your preferences) and store the copy at a network
   location or attach an external USB drive for the purpose (see the
   sketch after this list).

3. Then install Debian replacing the existent Ubuntu installation.

4. If something specific does not work and cannot be made working by
   installing non-free / proprietary graphics drivers etc. then the
   previously created copy can be analyzed to find out if Dell's Ubuntu
   does something special/differently.

5. In case of hardware warranty issues etc. you can bring back the original
   Ubuntu just in case Dell insists on having that present for error
   analysis. Once the device goes out of warranty, you could delete
   the old Ubuntu copy from whatever storage you saved it to.

   Alternatively, with most Dell Precision machines, it should be possible
   to order a “keep your hard drive” service that allows you to send-in a
   device for warranty without having to hand-back the system drive. Not
   sure if it is viable in this case, though.
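
As an illustration of step 2, a minimal sketch done from a Debian Live
system. The device name /dev/nvme0n1 and the mount point /mnt/usb are
assumptions -- double-check with `lsblk` before copying anything:

$ lsblk                       # identify the internal disk and the USB target
# dd if=/dev/nvme0n1 bs=4M status=progress | gzip > /mnt/usb/ubuntu.img.gz
# ...and to restore the image later:
# gunzip -c /mnt/usb/ubuntu.img.gz | dd of=/dev/nvme0n1 bs=4M status=progress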

HTH
Linux-Fan

öö




Re: ZFS performance (was: Re: deduplicating file systems: VDO withDebian?)

2022-11-14 Thread Linux-Fan

hw writes:


On Fri, 2022-11-11 at 21:26 +0100, Linux-Fan wrote:
> hw writes:
>
> > On Thu, 2022-11-10 at 23:05 -0500, Michael Stone wrote:
> > > On Thu, Nov 10, 2022 at 06:55:27PM +0100, hw wrote:
> > > > On Thu, 2022-11-10 at 11:57 -0500, Michael Stone wrote:
> > > > > On Thu, Nov 10, 2022 at 05:34:32PM +0100, hw wrote:
> > > > > > And mind you, SSDs are *designed to fail* the sooner the more
> > > > > > data you write to them.  They have their uses, maybe even for
> > > > > > storage if you're so desperate, but not for backup storage.
>
> [...]
>
> > Why would anyone use SSDs for backups?  They're way too expensive for  
> > that.

>
> I actually do this for offsite/portable backups because SSDs are shock
> resistant (don't lose data when being dropped etc.).

I'd make offsite backups over internet.  If you can afford SSDs for backups,
well, why not.


Yes, I do offsite backup over Internet too. Still, an attacker could  
possibly delete those from my running system. Not so much for the detached  
separate portable SSD.


> The most critical thing to acknowledge about using SSDs for backups is  
> that the data retention time of SSDs (when not powered) is decreasing with each 

> generation.

Do you mean each generation of SSDs or of backups?  What do manufacturers say
how long you can store an SSD on a shelf before the data on it has degraded?


Generations of SSDs.

In the early days, a shelf life on the order of years was claimed. Later
on, most datasheets I have seen have been lacking in this regard.


[...]


>  The small (drive size about 240GB) ones I use for backup are much less 
> durable.

You must have quite a lot of them.  That gets really expensive.


Two: I write the new backup to the one I have here, then carry it to the
offsite location and take back the old copy from there. I do not recall
these SSDs' prices, but they weren't expensive units at the time.



>  For one of them, the manufacturer claims 306TBW, the other has 
> 360 TBW specified. I do not currently know how much data I have written to 
> them already. As you can see from the sizes, I backup only a tiny subset  
> of the data to SSDs i.e. the parts of my data that I consider most critical  
> (VM images not being among them...).


Is that because you have them around anyway because they were replaced with
larger ones, or did you actually buy them to put backups on them?


I still had them around: I started my backups on 4 GiB CF cards, then
quickly upgraded to 16 GiB cards, all bought specifically for backup
purposes. Then I upgraded to 32 GB cards. Somewhere around that time I
equipped my main system with dual 240 GB SSDs. Later, I upgraded to 2 TB
SSDs with the intention to not only run the OS off the SSDs, but also the
VMs. This freed up the small SSDs, ready to serve the increased need for
backup storage once 32 GB CF cards were no longer feasible...


Nowadays, my “important data backup” is still close to the historical
32 GiB limit although slightly above it (it ranges between 36 and 46 GiB or
such). The remaining free space on the backup SSDs is largely filled by
(1) a live system to facilitate easy restores and (2) additional, less
important data (50 GiB or so).


[...]

> > There was no misdiagnosis.  Have you ever had a failed SSD?  They
> > usually just disappear.  I've had one exception in which the SSD at


[...]


> Just for the record I recall having observed this once in a very similar
> fashion. It was back when a typical SSD size was 60 GiB. By now we should
> mostly be past these “SSD fails early with controller fault” issues. It can
> still happen and I still expect SSDs to fail with less notice compared to
> HDDs.

Why did they put bad controllers into the SSDs?


Maybe because the good ones were too expensive at the time? Maybe because
the manufacturers had yet to acquire good experience on how to produce them
reliably. I can only speculate.


[...]

YMMV
Linux-Fan

öö




Re: else or Debian (Re: ZFS performance (was: Re: deduplicating file systems: VDO with Debian?))

2022-11-14 Thread Linux-Fan

hw writes:


On Fri, 2022-11-11 at 22:11 +0100, Linux-Fan wrote:
> hw writes:
> > On Thu, 2022-11-10 at 22:37 +0100, Linux-Fan wrote:
>
> [...]
>
> > >  If you do not value the uptime making actual (even
> > >  scheduled) copies of the data may be recommendable over
> > >  using a RAID because such schemes may (among other advantages)
> > >  protect you from accidental file deletions, too.
> >
> > Huh?
>
> RAID is limited in its capabilities because it acts at the file system, 
> block (or in case of hardware RAID even disk) level. Copying files can 
> operate on any subset of the data and is very flexible when it comes to 
> changing what is going to be copied, how, when and where to.

How do you intend to copy files at any other level than at file level?  At
that level, the only thing you know about is files.


You can copy only a subset of files but you cannot mirror only a subset of a  
volume in a RAID unless you specifically designed that in at the time of  
partitioning. With RAID redundancy you have to decide upfront what you  
want to have mirrored. With files, you can change it any time.


[...]


> Multiple, well established tools exist for file tree copying. In RAID 
> scenarios the mode of operation is integral to the solution.

What has file tree copying to do with RAID scenarios?


Above, I wrote that making copies of the data may be recommendable over  
using a RAID. You answered “Huh?” which I understood as a question to expand  
on the advantages of copying files rather than using RAID.


[...]


> File trees can be copied to slow target storages without slowing down the 
> source file system significantly. On the other hand, in RAID scenarios, 


[...]


Copying the VM images to the slow HDD would slow the target down just as it
might slow down a RAID array.


This is true and does not contradict what I wrote.


> ### when
>
> For file copies, the target storage need not always be online. You can 
> connect it only for the time of synchronization. This reduces the chance 
> that line overvoltages and other hardware faults destroy both copies at  
> the same time. For a RAID, all drives must be online at all times (lest the 

> array becomes degraded).

No, you can always turn off the array just as you can turn off single disks.
When I'm done making backups, I shut down the server and not much can happen
to the backups.


If you try this in practice, it is quite limited compared to file copies.

> Additionally, when using files, only the _used_ space matters. Beyond  
> that, the size of the source and target file systems are decoupled. On the other 

> hand, RAID mandates that the sizes of disks adhere to certain properties 
> (like all being equal or wasting some of the storage).

And?


If these limitations are insignificant to you then lifting them provides no  
advantage to you. You can then safely ignore this point :)


[...]


> > Hm, I haven't really used Debian in a long time.  There's probably no
> > reason 
> > to change that.  If you want something else, you can always go for it.
>
> Why are you asking on a Debian list when you neither use it nor intend to
> use it?


I didn't say that I don't use Debian, nor that I don't intend to use it.


This must be a language barrier issue. I do not understand how your  
statements above do not contradict each other.


[...]


> Now check with <https://popcon.debian.org/by_vote>
>
> I get the following (smaller number => more popular):
>
> 87   e2fsprogs
> 1657 btrfs-progs
> 2314 xfsprogs
> 2903 zfs-dkms
>
> Surely this does not really measure whether people actually use these
> file systems. Feel free to provide a more accurate means of measurement.
> For me this strongly suggests that the most popular FS on Debian is ext4.


ext4 doesn't show up in this list.  And it doesn't matter if ext4 is most


e2fsprogs contains the related tools like `mkfs.ext4`.


widespread on Debian when more widespread distributions use different file
systems.  I don't have a way to get the numbers for that.

Today I installed Debian on my backup server and didn't use ext4.  Perhaps  
the "most widely-deployed" file system is FAT.


Probably yes. With the advent of ESPs it may have even increased in  
popularity again :)


[...]

> I like to be able to store my backups on any file system. This will not  
> work for snapshots unless I “materialize” them by copying out all files of a 

> snapshot.
>
> I know that some backup strategies suggest always creating backups based  
> on snapshots rather than the live file system as to avoid issues with  
> changing files during the creation of backups.

>
> I can see the merit in implementing it this way but have not yet found a 
[...]

Re: else or Debian (Re: ZFS performance (was: Re: deduplicating file systems: VDO with Debian?))

2022-11-11 Thread Linux-Fan

hw writes:


On Thu, 2022-11-10 at 22:37 +0100, Linux-Fan wrote:


[...]


>  If you do not value the uptime making actual (even
>  scheduled) copies of the data may be recommendable over
>  using a RAID because such schemes may (among other advantages)
>  protect you from accidental file deletions, too.

Huh?


RAID is limited in its capabilities because it acts at the file system,  
block (or in case of hardware RAID even disk) level. Copying files can  
operate on any subset of the data and is very flexible when it comes to  
changing what is going to be copied, how, when and where to.


### what

When copying files, it's a standard feature to exclude certain patterns of
file names. This allows fine-tuning the setup to avoid unnecessary storage
costs by not duplicating files for which duplicates are not needed (.iso or
/tmp files could be examples of files that some users may not consider
worth duplicating).
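
For example, a minimal sketch with rsync -- the target path /mnt/backup is
an assumption, not a description of my actual setup:

$ rsync -a --delete --exclude='*.iso' --exclude='.cache/' \
    /home/ /mnt/backup/home/
# -a preserves permissions, ownership and timestamps; --delete mirrors
# deletions; each --exclude skips files not considered worth duplicating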


### how

Multiple, well established tools exist for file tree copying. In RAID  
scenarios the mode of operation is integral to the solution.


### where to

File trees are much easier copied to network locations compared to adding a  
“network mirror” to any RAID (although that _is_ indeed an option, DRBD was  
mentioned in another post...).


File trees can be copied to slow target storages without slowing down the  
source file system significantly. On the other hand, in RAID scenarios,  
slow members are expected to slow down the performance of the entire array.  
This alone may allow saving a lot of money. E.g. one could consider copying  
the entire tree of VM images that is residing on a fast (and expensive) SSD  
to a slow SMR HDD that only costs a fraction of the SSD. The same thing is  
not possible with a RAID mirror except by slowing down the write operations  
on the mirror to the speed of the HDD or by having two (or more) of the  
expensive SSDs. SMR drives are advised against in RAID scenarios btw.


### when

For file copies, the target storage need not always be online. You can  
connect it only for the time of synchronization. This reduces the chance  
that line overvoltages and other hardware faults destroy both copies at the  
same time. For a RAID, all drives must be online at all times (lest the  
array becomes degraded).


Additionally, when using files, only the _used_ space matters. Beyond that,  
the size of the source and target file systems are decoupled. On the other  
hand, RAID mandates that the sizes of disks adhere to certain properties  
(like all being equal or wasting some of the storage).


> > Is anyone still using ext4?  I'm not saying it's bad or anything, it  
> > only seems that it has gone out of fashion.

>
> IIRC it's still Debian's default.

Hm, I haven't really used Debian in a long time.  There's probably no reason  
to change that.  If you want something else, you can always go for it.


Why are you asking on a Debian list when you neither use it nor intend to
use it?



>  It's my file system of choice unless I have
> very specific reasons against it. I have never seen it fail outside of
> hardware issues. Performance of ext4 is quite acceptable out of the box.
> E.g. it seems to be slightly faster than ZFS for my use cases.
> Almost every Linux live system can read it. There are no problematic
> licensing or stability issues whatsoever. By its popularity it's probably
> one of the most widely-deployed Linux file systems which may enhance the
> chance that whatever problem you incur with ext4 someone else has had before...


I'm not sure it's most widespread.  Centos (and Fedora) defaulted to xfs
quite some time ago, and Fedora more recently defaulted to btrfs (a while
after Redhat announced they would remove btrfs from RHEL altogether).
Centos went down the drain when it mutated into an outdated version of
Fedora, and RHEL probably isn't any better.


~$ dpkg -S zpool | cut -d: -f 1 | sort -u
[...]
zfs-dkms
zfsutils-linux
~$ dpkg -S mkfs.ext4
e2fsprogs: /usr/share/man/man8/mkfs.ext4.8.gz
e2fsprogs: /sbin/mkfs.ext4
~$ dpkg -S mkfs.xfs
xfsprogs: /sbin/mkfs.xfs
xfsprogs: /usr/share/man/man8/mkfs.xfs.8.gz
~$ dpkg -S mkfs.btrfs
btrfs-progs: /usr/share/man/man8/mkfs.btrfs.8.gz
btrfs-progs: /sbin/mkfs.btrfs

Now check with <https://popcon.debian.org/by_vote>

I get the following (smaller number => more popular):

87   e2fsprogs
1657 btrfs-progs
2314 xfsprogs
2903 zfs-dkms

Surely this does not really measure whether people actually use these
file systems. Feel free to provide a more accurate means of measurement. For
me this strongly suggests that the most popular FS on Debian is ext4.



So assuming that RHEL and Centos may be more widespread than Debian because
there's lots of hardware support

Re: ZFS performance (was: Re: deduplicating file systems: VDO withDebian?)

2022-11-11 Thread Linux-Fan

hw writes:


On Thu, 2022-11-10 at 23:05 -0500, Michael Stone wrote:
> On Thu, Nov 10, 2022 at 06:55:27PM +0100, hw wrote:
> > On Thu, 2022-11-10 at 11:57 -0500, Michael Stone wrote:
> > > On Thu, Nov 10, 2022 at 05:34:32PM +0100, hw wrote:
> > > > > And mind you, SSDs are *designed to fail* the sooner the more data
> > > > > you write to them.  They have their uses, maybe even for storage if
> > > > > you're so desperate, but not for backup storage.


[...]


Why would anyone use SSDs for backups?  They're way too expensive for that.


I actually do this for offsite/portable backups because SSDs are shock
resistant (don't lose data when being dropped etc.).


The most critical thing to acknowledge about using SSDs for backups is that  
the data retention time of SSDs (when not powered) is decreasing with each  
generation.


Write endurance has not become critical in any of my SSD uses so far.  
Increasing workloads have also resulted in me upgrading the SSDs. So far I  
always upgraded faster than running into the write endurance limits. I do  
not use the SSDs as caches but as full-blown file system drives, though.


On the current system, the SSDs report having written about 14 TB and are  
specified by the manufacturer for an endurance of 6300 TBW (drive size is 4  
TB). The small (drive size about 240GB) ones I use for backup are much less  
durable. For one of them, the manufacturer claims 306TBW, the other has  
360 TBW specified. I do not currently know how much data I have written to  
them already. As you can see from the sizes, I backup only a tiny subset of  
the data to SSDs i.e. the parts of my data that I consider most critical (VM  
images not being among them...).


[...]

There was no misdiagnosis.  Have you ever had a failed SSD?  They usually
just disappear.  I've had one exception in which the SSD at first only
sometimes disappeared and came back, until it disappeared and didn't come
back.


[...]

Just for the record I recall having observed this once in a very similar
fashion. It was back when a typical SSD size was 60 GiB. By now we should
mostly be past these “SSD fails early with controller fault” issues. It can
still happen and I still expect SSDs to fail with less notice compared to
HDDs.


When I had my first (and so far only) disk failure (on said 60G SSD) I  
decided to:


* Retain important data on HDDs (spinning rust) for the time being

* and also implement RAID1 for all important drives

Although in theory running two disks instead of one should increase the  
overall chance of having one fail, no disks failed after this change so  
far.


YMMV
Linux-Fan

öö 




Re: else or Debian (Re: ZFS performance (was: Re: deduplicating file systems: VDO with Debian?))

2022-11-10 Thread Linux-Fan

hw writes:


On Wed, 2022-11-09 at 19:17 +0100, Linux-Fan wrote:
> hw writes:
> > On Wed, 2022-11-09 at 14:29 +0100, didier gaumet wrote:
> > > Le 09/11/2022 à 12:41, hw a écrit :


[...]


> > I'd
> > have to use mdadm to create a RAID5 (or use the hardware RAID but that  
> > isn't

>
> AFAIK BTRFS also includes some integrated RAID support such that you do  
> not necessarily need to pair it with mdadm.


Yes, but RAID56 is broken in btrfs.

> It is advised against using it for RAID
> 5 or 6 even on the most recent Linux kernels, though:
>
> 
https://btrfs.readthedocs.io/en/latest/btrfs-man5.html#raid56-status-and-recommended-practices

Yes, that's why I would have to use btrfs on mdadm when I want to make a  
RAID5.

That kinda sucks.

> RAID 5 and 6 have their own issues you should be aware of even when  
> running 

> them with the time-proven and reliable mdadm stack. You can find a lot of 
> interesting results by searching for “RAID5 considered harmful” online.  
> This 
> one is the classic that does not seem to make it to the top results,  
> though:


Hm, really?  The only time that RAID5 gave me trouble was when the hardware  


[...]

I have never used RAID5 so how would I know :)

I think the arguments of the RAID5/6 critics summarized were as follows:

* Running a RAID at level 5 or 6 degrades performance significantly while
  a disk is offline. RAID 10 keeps most of its speed and
  RAID 1 only degrades slightly for most use cases.

* During restore, RAID5 and 6 are known to degrade performance more compared
  to restoring one of the other RAID levels.

* Disk space has become so cheap that the savings of RAID5 may
  no longer justify the performance and reliability degradation
  compared to RAID1 or 10.

All of these arguments come from a “server” point of view where it is  
assumed that


(1) You win something by running the server so you can actually
tell that there is an economic value in it. This allows for
arguments like “storage is cheap” which may not be the case at
all if you are using up some tightly limited private budget.

(2) Uptime and delivering the service is paramount. Hence there
are some considerations regarding the online performance of
the server while the RAID is degraded and while it is restoring.
If you are fine to take your machine offline or accept degraded
performance for prolonged times then this does not apply of
course. If you do not value the uptime making actual (even
scheduled) copies of the data may be recommendable over
using a RAID because such schemes may (among other advantages)
protect you from accidental file deletions, too.

Also note that in today's computing landscape, not all unwanted file  
deletions are accidental. With the advent of “crypto trojans” adversaries  
exist that actually try to encrypt or delete your data to extort a ransom.


More than one disk can fail?  Sure can, and it's one of the reasons why I
make backups.

You also have to consider costs.  How much do you want to spend on storage
and on backups?  And do you want to make yourself crazy worrying about your
data?


I am pretty sure that if I split my PC's cost into GPU, CPU, RAM and
storage, I have spent the most on storage. Well established schemes of
redundancy and backups make me worry less about my data.


I still worry enough about backups to have written my own software:
https://masysma.net/32/jmbb.xhtml
I am also evaluating new developments in that area, probably to replace
my self-written program by a more reliable (because used by more people!)
alternative:

https://masysma.net/37/backup_tests_borg_bupstash_kopia.xhtml


> https://www.baarf.dk/BAARF/RAID5_versus_RAID10.txt
>
> If you want to go with mdadm (irrespective of RAID level), you might also 
> consider running ext4 and trade the complexity and features of the  
> advanced file systems for a good combination of stability and support.


Is anyone still using ext4?  I'm not saying it's bad or anything, it only  
seems that it has gone out of fashion.


IIRC it's still Debian's default. It's my file system of choice unless I
have very specific reasons against it. I have never seen it fail outside of
hardware issues. Performance of ext4 is quite acceptable out of the box.
E.g. it seems to be slightly faster than ZFS for my use cases.
Almost every Linux live system can read it. There are no problematic
licensing or stability issues whatsoever. By its popularity it's probably
one of the most widely-deployed Linux file systems which may enhance the
chance that whatever problem you incur with ext4 someone else has had before...



I'm considering using snapshots.  Ext4 didn't have those last time I checked.


Ext4 still does not offer snapshots. The traditional way to do snapshots  
outside of fancy BTRFS and ZFS file systems is to add LVM
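
A minimal sketch of an LVM snapshot, assuming a volume group vg0 with a
logical volume root and enough free extents left in the VG (hypothetical
names):

# lvcreate --snapshot --size 5G --name root-snap /dev/vg0/root
# mount -o ro /dev/vg0/root-snap /mnt   # e.g. to back up a consistent state
# umount /mnt && lvremove /dev/vg0/root-snap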

Re: else or Debian (Re: ZFS performance (was: Re: deduplicating file systems: VDO with Debian?))

2022-11-09 Thread Linux-Fan

hw writes:


On Wed, 2022-11-09 at 14:29 +0100, didier gaumet wrote:
> Le 09/11/2022 à 12:41, hw a écrit :


[...]


> I am really not so well aware of ZFS state but my impression was that:
> - FUSE implementation of ZoL (ZFS on Linux) is deprecated and that,
> Ubuntu excepted (classic module?), ZFS is now integrated by a DKMS module

Hm that could be.  Debian doesn't seem to have it as a module.


As already mentioned by others, zfs-dkms is readily available in the contrib  
section along with zfsutils-linux. Here is what I noted down back when I  
installed it:


https://masysma.net/37/zfs_commands_shortref.xhtml

I have been using ZFS on Linux on Debian since end of 2020 without any  
issues. In fact, the dkms-based approach has run much more reliably than  
my previous experiences with out-of-tree modules would have suggested...


My setup works with a mirrored zpool and no deduplication; I did not need
nor test anything else yet.
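
For reference, a minimal sketch of how such a setup can be created. It
assumes the contrib section is enabled in the APT sources and two spare
disks /dev/sdb and /dev/sdc (hypothetical device names):

# apt install linux-headers-amd64 zfs-dkms zfsutils-linux
# zpool create -o ashift=12 tank mirror /dev/sdb /dev/sdc
# zfs create tank/data          # file systems can then be carved out freely
$ zpool status tank             # verify that the mirror is ONLINE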



> - *BSDs integrate directly ZFS because there are no licences conflicts
> - *BSDs nowadays have departed from old ZFS code and use the same source
> code stack as Linux (OpenZFS)
> - Linux distros don't directly integrate ZFS because they generally
> consider there are licences conflicts. The notable exception being
> Ubuntu that considers that after legal review the situation is clear and
> there is no licence conflicts.


[...]


broke something.  Arch is apparently for masochists, and I don't want
derivatives, especially not Ubuntu, and that leaves only Debian.  I don't
want Debian either because when they introduced their brokenarch, they
managed to make it so that NVIDIA drivers didn't work anymore with no fix
in sight and broke other stuff as well, and you can't let your users down
like that.  But what's the alternative?


Nvidia drivers have been working for me in all releases from Debian 6
through 10 inclusive. I did not have any need for them on Debian 11 yet,
since I have switched to an AMD card for my most recent system.



However, Debian has apparently bad ZFS support (apparently still only Gentoo
actually supports it), so I'd go with btrfs.  Now that's gonna suck because


You can use ZFS on Debian (see link above). Of course it remains your choice  
whether you want to trust your data to the older, but less-well-integrated  
technology (ZFS) or to the newer, but more easily integrated technology  
(BTRFS).



I'd
have to use mdadm to create a RAID5 (or use the hardware RAID but that isn't


AFAIK BTRFS also includes some integrated RAID support such that you do not
necessarily need to pair it with mdadm. It is advised against using it for
RAID 5 or 6 even on the most recent Linux kernels, though:


https://btrfs.readthedocs.io/en/latest/btrfs-man5.html#raid56-status-and-recommended-practices

RAID 5 and 6 have their own issues you should be aware of even when running  
them with the time-proven and reliable mdadm stack. You can find a lot of  
interesting results by searching for “RAID5 considered harmful” online. This  
one is the classic that does not seem to make it to the top results, though:


https://www.baarf.dk/BAARF/RAID5_versus_RAID10.txt

If you want to go with mdadm (irrespective of RAID level), you might also  
consider running ext4 and trade the complexity and features of the advanced  
file systems for a good combination of stability and support.



fun
after I've seen the hardware RAID refusing to rebuild a volume after a failed
disk was replaced) and put btrfs on that because btrfs doesn't even support
RAID5.


YMMV
Linux-Fan

öö




imapsync notification

2022-08-29 Thread OB-Linux GNU
Hello

I use imapsync occasionally. In the new version, it sends emails to
users at every run. I want to prevent this.  (h...@imapsync.tk)

Is there any way to prevent this?



Re: VFAT vs. umask.

2022-07-31 Thread Linux-Fan

pe...@easthope.ca writes:


From: Linux-Fan 
Date: Sat, 30 Jul 2022 21:37:37 +0200
> Formatting it to ext2 should work and not cause any issues ...

Other authorities claim "factory format" is optimal and wear of flash
storage is a concern. A revised "format" can impose worse conditions
for wear?  Does any manufacturer publish about this?  What is hope?
What is truth?


Reformatting (in the sense of mkfs.ext2) does not by itself cause excessive
wear on the flash drive. Now if the cards were optimized to handle FAT and
somehow interpret its structures cleverly, then this could in theory make
the drives last longer with FAT compared to ext2. So far we have only heard
rumors about that, but nothing definitive. I sincerely doubt that there are
any advantages of FAT on SD beyond the compatibility. Because: Nowadays it
is quite common to use these cards as general purpose storage. Encrypted
storage of Android files comes to mind. Here, the card cannot optimize
beyond the block level, which is what I guess it does in any case. Here are
some resources that you could use for further research:


- https://www.reddit.com/r/raspberry_pi/comments/ex7dvo/quick_reminder_that_sd_cards_with_wearleveling/
- https://electronics.stackexchange.com/questions/27619/is-it-true-that-a-sd-mmc-card-does-wear-levelling-with-its-own-controller

I do not know of any case where an optimization for FAT became visible.  
Especially, having used SD and µSD cards for Linux root file systems for  
longer than a year without observing any issues, I would conclude that it is  
much more related to the quality of the flash than the file system in use.



> Modern backup tools use their own archive/data formats ...

All that's needed here is a reliable copy on the HDD, of the files on
the SD.  If the SD fails I mount the part of the HDD, restore to
another SD and continue working.  Rsync is the most effective tool
I've found.  More advanced archiving is not needed.


rsync on FAT will not preserve Unix permissions and most other metadata. I  
(personally) would not consider it “reliable” since I care about these  
metadata.


There are three ways around this:

* Don't care - If your workflow does not need any Unix permissions
  whatsoever then you might as well stick with FAT if none of its other
  limits are of concern (maximum file size for FAT32 comes to mind).

* Format using a Unix-aware file system like e.g. ext2 and then
  just copy the files (rsync works; see the sketch after this list).

* Use a tool that keeps the metadata for you (like the suggested
  advanced backup tools).
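
A minimal sketch of the second option -- the device name /dev/sdb1 is an
assumption (double-check with `lsblk`; formatting erases the card):

# mkfs.ext2 -L backup /dev/sdb1
# mount /dev/sdb1 /mnt
# rsync -a /home/user/ /mnt/user/   # -a keeps permissions/ownership/times
# umount /mnt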

If rsync works for you, then by all means stick with it :)

HTH
Linux-Fan

öö

[...]




Re: VFAT vs. umask.

2022-07-30 Thread Linux-Fan

pe...@easthope.ca writes:


David,
thanks for the reply.

From: David Wright 
Date: Fri, 29 Jul 2022 00:00:29 -0500
> When you copy files that have varied permissions onto the FAT, you may
> get warnings about permissions that can't be honoured. (IIRC, copying
> ug=r,o= would not complain, whereas u=r,go= would.)

Primary store is an SD card.  Rsync is used for backup.  Therefore
this dilemma.

* In Linux, an ext file system avoids those complications.  To my
knowledge, all SD cards are preformatted with a FAT.  Therefore ext
requires reformatting.


FAT or exFAT depending on size, yes.


* Most advice about flash storage is to avoid reformatting.
Unfortunately most or all of this advice is written by software
people; none, that I recall, from a flash storage manufacturer.


The only reason for not choosing to format the SD card with a Linux file
system is the lack of compatibility: Other systems (think Windows, Mac,
etc.) will not be able to use the SD card anymore because it will appear
“wrongly formatted” to them.



My own experience, is one SD card about a decade old, reformatted to
ext2 when new and still working. A second SD purchased recently with
factory format unchanged seems very slow in mounting.  As if running
fsck before every mount.  -8~/

Certainly tempted to reformat the new card to ext2.  Information
always welcome.  Knowledge even better.


Formatting it to ext2 should work and not cause any issues as long as you
remain in a “Linux world”. If the data should still be accessible from
other systems, consider packing it into archives and storing the archives on
the FAT file system. That is something that I used to do and that has worked
reliably for me in the past. It does not allow for incremental updates (like
rsync would) by default, though.
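
A minimal sketch of that archive-based approach, assuming the FAT volume is
mounted at /media/sd (a hypothetical mount point):

$ tar -czf /media/sd/home-backup.tar.gz -C /home user
# ...restore with permissions/ownership intact (run as root):
# tar -xzpf /media/sd/home-backup.tar.gz -C /home
# Mind FAT32's 4 GiB file size limit when the archives grow large.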


Modern backup tools use their own archive/data formats and can thus store  
Linux metadata (permissions, ownerships etc.) on file systems that are  
incapable of doing this (like FAT, cloud storage etc.). I have written my  
notes about a few such tools here:

https://masysma.lima-city.de/37/backup_tests_borg_bupstash_kopia.xhtml

HTH
Linux-Fan

öö




Re: SSD Optimization and tweaks - Looking for tips/recomendations

2022-06-28 Thread Linux-Fan

Marcelo Laia writes:


Hi,

I bought a SSD solid disk and will perform a fresh install on it. Debian  
testing. I've never used such a disc.


I bought a Crucial CT1000MX500SSD1 (1TB 3D NAND Crucial SATA MX500 Internal  
SSD (with 9.5mm adapter) — 6.35cm (2.5in) and 7mm).


I read the recommendations on the https://wiki.debian.org/SSDOptimization  
page.


However, I still have some doubts:



1. Use ext4 or LVM partitioning?


You could do both at once, too. See the other users' answers.

2. I read in the Warning section that some discs contain bugs, including
Crucial. But I don't know if I need to use or not use "discard" on this disk
(CT1000MX500SSD1). If I need to proceed with "discard", would you please
share any tips on how to do it? I didn't understand how to do this.


IIRC the best practice was to not use the "discard" mount option and rather  
run "fstrim" at regular intervals. You could use the `fstrim.timer` systemd  
unit from package util-linux for that purpose.
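
A minimal sketch of enabling it (the timer ships with the util-linux
package on current Debian releases):

# systemctl enable --now fstrim.timer
$ systemctl list-timers fstrim.timer   # check when the next run is due
# fstrim -av                           # optional one-off manual trim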


3. Should I reserve a swap partition or not? I always had one on hdd disks.  
I was in doubt, too.


If you want to have a swap partition, it is perfectly OK to create one on
an SSD. In fact, I have sometimes used SSD swap to my advantage. Today it's
mostly a matter of personal preference.


4. Any other recommendations to improve the performance and lifespan of this  
disk?


The wiki page is already pretty comprehensive. On my systems I mostly do the  
“Reduction of SSD write frequency via RAMDISK” thing.


As with all disks, it can help to set up S.M.A.R.T. monitoring. For SSDs, a
metric like “lifetime GiB written” or something similar is often included.
This can be used to reveal whether your system is doing a lot of writes by
checking the changes of the value over time (e.g. with help from `smartd`
from package smartmontools).
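
A minimal sketch for a one-off check -- /dev/sda is an assumed device name
and the attribute names vary by vendor:

# apt install smartmontools
# smartctl -a /dev/sda | grep -i -e 'written' -e 'wear'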


HTH and YMMV
Linux-Fan

öö




Recommendations for data layout in case of multiple machines (Was: Re: trying to install bullseye for about 25th time)

2022-06-10 Thread Linux-Fan

Andy Smith writes:


Hello,

On Thu, Jun 09, 2022 at 11:30:26AM -0400, rhkra...@gmail.com wrote:
> For Gene, he could conceivably just rename the RAID setup that he has  
> mounted
> under /home to some new top level mountpoint.  (Although he probably has  
> some
> scripts or similar stuff that looks for stuff in /home/ that  
> would

> need to be modifed.

For anyone with a number of machines, like Gene, there is a lot to
be said for having one be in the role of file server.

- One place to have a ton of storage, which isn't a usual role for a
  simple desktop daily driver machine.


I have always wondered about this. Back when I bought a new PC I got that
suggestion, too (cf. thread at
https://lists.debian.org/debian-user/2020/10/msg00037.html). All things
considered, I ended up with _one_ workstation to address desktop and storage
uses together, for the main reason that whenever I work on my "daily driver"
I also need the files. Hence the choice was: power on and maintain two
machines all of the time vs. having just one machine for both purposes.



- One place to back up.

- Access from anywhere you want.

- Easier to keep your other machines up to date without worrying
  about all the precious data on your fileserver. Blow them away
  completely if you like. Be more free to experiment.


These points make a lot of sense and that is basically how I am operating,  
too. One central keeper of data and multiple satellite machines :)


I factor out backups to some extent in order to explicitly store copies
across multiple machines, some of them "offline" so as to resist modern
threats of "data encryption trojans".


Also, I have been following rhkramer's suggestion of storing important data  
outside of /home and I can tell it has served me well. My main consideration  
for doing this is to separate automatically generated program data (such  
as .cache, .wine and other typical $HOME inhabitants) from _my_ actual data.  
I still backup selected parts of $HOME e.g. the ~/.mozilla directory for  
Firefox settings etc.



It's a way of working that's served me well for about 25 years. It's
hard to imagine going back to having data spread all over the place.


What do you use as your file server? Is it running 24/7 or started as needed?

Thanks in advance
Linux-Fan

öö

[...]




Re: setting path for root after "sudo su" and "sudo" for Debian Bullseye (11)

2022-05-21 Thread Linux-Fan

Greg Wooledge writes:


On Sat, May 21, 2022 at 10:04:01AM -0500, Tom Browder wrote:
> I am getting nowhere fast.

OK, let's start at the beginning.

You have raku installed in some directory that is not in a regular PATH.

You won't tell us what this directory is, so let's pretend it's
/opt/raku/bin/raku.


[...]


The voices in your head will tell you that you absolutely must use
sudo to perform your privilege elevation.

Therefore the third solution for you: configure sudo so that it does
what you want.

Next, of course, the voices in your head will tell you that configuring
sudo is not permissible.  You have to do this in... gods, I don't know...


[...]

There is also `sudo -E` to preserve the environment and
`sudo --preserve-env=PATH`, which could probably be used to achieve the
behaviour of interest. If this is not permitted by security policy but
arbitrary commands to run with sudo are, consider creating a startup script
(or passing it directly at the `sudo` commandline) that sets the necessary
environment for the raku program.
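
A minimal sketch of both variants, reusing the hypothetical /opt/raku/bin
path from above:

$ sudo --preserve-env=PATH raku script.raku            # keep caller's PATH
$ sudo env PATH=/opt/raku/bin:$PATH raku script.raku   # or set it explicitly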


That being said: On my systems I mostly synchronize the bashrc of root and
my regular users such that all share the same PATH. I tend to interactively
become root and hence ensure that these scripts are run every time. This has
mostly survived the mentioned change in `su` behaviour, and observable
variations between `sudo -s` and `su` behaviours are kept to a minimum.


HTH and YMMV
Linux-Fan

öö




Re: Firefox context menu and tooltip on wrong display?

2022-05-18 Thread Linux-Fan

Roberto C. Sánchez writes:


I have recently decommissioned my main desktop workstation and switched
to using my laptop for daily work (rather than only when travelling).  I
acquired a USB-C "docking station" and have connected two external
monitors (which were formerly attached to my desktop machine).

For some strange reason, when there is a Firefox window in the second or
third monitor context menus (from right-click) and tooltips do not
appear on the same screen as the Firefox window, but rather on the
primary monitor (the laptop integrated monitor).  I tried searching, but
did not find anything recent or useful.  Has anyone experienced this
same issue?  I did not experience this at all on my desktop machine with
dual monitors (both the laptop where I am experiencing this and the
desktop where I did not experience this are running bullseye), so I am
curious if anyone has encountered this issue and if so how to resolve
it.


This sounds familiar to me. I think I have observed this behaviour in the
past, albeit never really reproducibly. I thought it might be related to
starting the browser on another screen than the one it is later actually
used on, but I could not trigger it with some simple tests here.


Back when it appeared I mostly did not bother trying to fix it because for
me it was only ever temporary in some situations and I could still control
the rogue menus with the keyboard and avoid moving the mouse cursor all over
the screens.


Just a datapoint, no solution though :(

HTH
Linux-Fan

öö

[...]




Re: Can't setup a VM using SLIC data to activate Win10 in guest

2022-04-24 Thread Linux-Fan

Joao Roscoe writes:

[...]


I'm trying to migrate my VMs from VirtualBox to KVM/QEMU/VirtManager.


[...]

I then proceeded to extract and provide the SLIC/MSDM data, as described
here: <https://gist.github.com/Informatic/49bd034d43e054bd1d8d4fec38c305ec>.
However, the VM now fails to start/install (tried to re-create the VM using
the same image file, and setting up the SLIC/MSDM info from the beginning),
stating that the SLIC file can't be read.


[...]


What am I missing here? Any clues?


Have you checked the hint at the very beginning of the gist:
"apparmor/security configuration changes may be needed"? I dimly remember
that I had to do this when migrating from Debian 10 to Debian 11.

There is even a comment in the gist about it linking to
<https://egirland.blogspot.com/2018/12/get-rid-of-that-fng-permission-denied_7.html>

Also, did you try the `sudo strings /sys/firmware/acpi/tables/MSDM`  
suggested in the comments already?


NB: IIRC, back when I migrated all my Windows VMs from VirtualBox to KVM, I  
had to re-activate them all...


HTH and YMMV
Linux-Fan

öö 





Re: system freeze

2022-04-20 Thread Linux-Fan

mick crane writes:


hello,
I frequently have the system freeze on me and I have to unplug it.
It seems to only happen in a browser and *appears* to be triggered by using  
the mouse.
If watching streamed youtube movie or reading blogs sometimes the screen  
goes black and everything is unresponsive and sometimes the screen and  
everything freezes but the audio keeps playing.

I'd like it to stop doing that.
It didn't seem to be an issue a while ago but now is happening at least once
per day with bullseye and now with bookworm.

I cannot find anything in the logs that I have looked at except


[...]


What steps can I take to isolate the problem?

mick@pumpkin:~$ inxi -SGayz
System:
  Kernel: 5.16.0-6-amd64 arch: x86_64 bits: 64 compiler: gcc v: 11.2.0
parameters: BOOT_IMAGE=/boot/vmlinuz-5.16.0-6-amd64
root=UUID=1b68069c-ec94-4f42-a35e-6a845008eac7 ro quiet
  Desktop: Xfce v: 4.16.0 tk: Gtk v: 3.24.24 info: xfce4-panel wm: xfwm
v: 4.16.1 vt: 7 dm: LightDM v: 1.26.0 Distro: Debian GNU/Linux  
bookworm/sid

Graphics:
  Device-1: AMD Pitcairn LE GL [FirePro W5000] vendor: Dell driver: radeon
v: kernel alternate: amdgpu pcie: gen: 3 speed: 8 GT/s lanes: 16 ports:
active: DP-1 empty: DP-2,DVI-I-1 bus-ID: 03:00.0 chip-ID: 1002:6809
class-ID: 0300
  Display: x11 server: X.Org v: 1.21.1.3 compositor: xfwm v: 4.16.1 driver:
X: loaded: radeon unloaded: fbdev,modesetting,vesa gpu: radeon
display-ID: :0.0 screens: 1
  Screen-1: 0 s-res: 3840x2160 s-dpi: 96 s-size: 1016x571mm (40.00x22.48")
s-diag: 1165mm (45.88")
  Monitor-1: DP-1 mapped: DisplayPort-0 model: LG (GoldStar) HDR 4K
serial:  built: 2021 res: 3840x2160 hz: 60 dpi: 163 gamma: 1.2
size: 600x340mm (23.62x13.39") diag: 690mm (27.2") ratio: 16:9 modes:
max: 3840x2160 min: 640x480
  OpenGL: renderer: AMD PITCAIRN (DRM 2.50.0 5.16.0-6-amd64 LLVM 13.0.1)
v: 4.5 Mesa 21.3.8 direct render: Yes


[...]

Hello,

I think I had a very similar issue some months ago (Debian Bullseye). Back  
then I tried to switch to the proprietary AMD driver (?) and it seems to  
have helped although on my machine, the problem appeared at most once or  
twice a day back then.


These were the symptoms I had observed:

* Random conditions (but always GUI application usage)
* Clock in i3bar hangs
* X11 mouse cursor can still move
* Shortly after the hang, screen turns black
* At least one program continues to run despite the
  graphics output being "off"
* SSH connection was not possible during this screen off
  state.

In later instances, I also observed that the screen turned black temporarily  
and turned on after a shorter freeze again with the system becoming usable  
again.


Here is my output for your inxi command:

$ inxi -SGayz | cat
System:
 Kernel: 5.10.0-13-amd64 x86_64 bits: 64 compiler: gcc v: 10.2.1
 parameters: BOOT_IMAGE=/boot/vmlinuz-5.10.0-13-amd64
 root=UUID=5d6c37b4-341f-4aca-a9f7-2c8a0f39336a ro quiet
 Desktop: i3 4.19.1-non-git info: i3bar, docker dm: startx
 Distro: Debian GNU/Linux 11 (bullseye)
Graphics:
 Device-1: AMD Navi 14 [Radeon Pro W5500] vendor: Dell driver: amdgpu
 v: 5.11.5.21.20 bus ID: :67:00.0 chip ID: 1002:7341 class ID: 0300
 Display: server: X.Org 1.20.11 driver: loaded: amdgpu,ati
 unloaded: fbdev,modesetting,radeon,vesa display ID: :0 screens: 1
 Screen-1: 0 s-res: 7680x1440 s-dpi: 96 s-size: 2032x381mm (80.0x15.0")
 s-diag: 2067mm (81.4")
 Monitor-1: DisplayPort-0 res: 1920x1080 hz: 60 dpi: 93
 size: 527x296mm (20.7x11.7") diag: 604mm (23.8")
 Monitor-2: DisplayPort-1 res: 2560x1440 hz: 60 dpi: 109
 size: 597x336mm (23.5x13.2") diag: 685mm (27")
 Monitor-3: DisplayPort-2 res: 1280x1024 hz: 60 dpi: 96
 size: 338x270mm (13.3x10.6") diag: 433mm (17")
 Monitor-4: DisplayPort-3 res: 1920x1080 hz: 60 dpi: 85
 size: 575x323mm (22.6x12.7") diag: 660mm (26")
 OpenGL: renderer: AMD Radeon Pro W5500
 v: 4.6.14739 Core Profile Context FireGL 21.20 compat-v: 4.6.14739
 direct render: Yes

Back when the problem was still appearing, I could observe the following  
messages in syslog after reboot (sorry long lines...):


Sep 18 13:11:36 masysma-18 kernel: [ 2045.986736] [drm:amdgpu_job_timedout 
[amdgpu]] *ERROR* ring sdma1 timeout, signaled seq=3179, emitted seq=3181
Sep 18 13:11:36 masysma-18 kernel: [ 2045.986935] [drm:amdgpu_job_timedout 
[amdgpu]] *ERROR* Process information: process  pid 0 thread  pid 0
Sep 18 13:11:36 masysma-18 kernel: [ 2045.986944] amdgpu :67:00.0: amdgpu: 
GPU reset begin!
Sep 18 13:11:38 masysma-18 kernel: [ 2047.719111] amdgpu :67:00.0: amdgpu: 
failed send message: DisallowGfxOff (42) param: 0x response 
0xffc2
Sep 18 13:11:38 masysma-18 kernel: [ 2047.719114] amdgpu :67:00.0: amdgpu: 
Failed to disable gfxoff!
Sep 18 13:11:40 masysma-18 kernel: [ 2049.778328] amdgpu :67:00.0: amdgpu: 
Msg issuing pre-check failed and SMU may be not in the right state!
Sep

Re: updatedb.mlocate

2022-04-10 Thread Linux-Fan

Greg Wooledge writes:


On Sat, Apr 09, 2022 at 09:26:58PM -0600, Charles Curley wrote:
> Two of my machines have their database files dated at midnight or one
> minute after.
>
> Possibly because updatedb is run by a systemd timer, not cron.


[...]


# skip in favour of systemd timer
if [ -d /run/systemd/system ]; then
exit 0
fi
[...]

Wow.  That's incredibly annoying!


It provides a mechanism that adjusts to the init system at runtime. Maybe  
there are better ways to do it, but it seems to work OK?



unicorn:~$ less /lib/systemd/system/mlocate.timer
[Unit]
Description=Updates mlocate database every day

[Timer]
OnCalendar=daily
AccuracySec=24h
Persistent=true

[Install]
WantedBy=timers.target

... it doesn't even say when it runs?  What silliness is this?


Try

# systemctl list-timers
NEXT                          LEFT      LAST                          PASSED   UNIT           ACTIVATES
Mon 2022-04-11 00:00:00 CEST  12h left  Sun 2022-04-10 00:00:01 CEST  11h ago  mlocate.timer  mlocate.service

It shows that at least on my system, it is going to run at 00:00 local time.

I can imagine that the idea behind not specifying the actual time in the
individual unit is that it allows you to configure the actual time of
invocation somewhere else. This way, if you have a lot of machines all
online, you can avoid bursts of activity in the entire network/datacenter
just as the clock turns to 00:00.
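
One can also ask systemd directly when a given OnCalendar expression will
elapse (a minimal sketch; requires a sufficiently new systemd):

$ systemd-analyze calendar daily   # prints the normalized form and the
                                   # next elapse time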



Oh well.  It clearly isn't bothering me (I'm usually in bed before
midnight, though not always), so I never had to look into it.  I'm sure
someone felt it was a "good idea" to move things from perfectly normal
and well-understood crontab files into this new systemd timer crap that
nobody understands, and that I should respect their wisdom, but I don't
see the advantages at this time.


I think systemd tries to provide an alternative for all the `...tab` files
that used to be the standard (/etc/inittab, /etc/crontab, /etc/fstab come to
mind). IMHO the notable advantage over the traditional method is that on
systemd systems one can query all the status information with a single
command: `systemctl status <service>`. Similarly, the lifecycle of
start/stop/enable/disable can all be handled by the single command
`systemctl start/stop/enable/disable <service>`. All stdout/stderr
outputs are available via `journalctl -u <service>`. In theory this could
eliminate the need to know about or remember the individual services' log
file names.


Specifically in the case of `cron`, I think it is an improvement that user- 
specific timers are now kept in the user's home directory  
(~/.config/systemd/user directory) rather than a system directory  
(/var/spool/cron/crontabs).
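
As an illustration, a minimal sketch of such a user-level timer with
hypothetical unit names; both files go to ~/.config/systemd/user/:

# example-backup.service
[Unit]
Description=Example user backup job

[Service]
Type=oneshot
ExecStart=/usr/bin/rsync -a %h/important/ %h/backup/

# example-backup.timer
[Unit]
Description=Run the example backup daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

It can then be activated with:

$ systemctl --user enable --now example-backup.timer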


IMHO systemd's interface is not the best design-wise and in terms of its  
strange defaults, formats and names (paged output by default is one of the  
things I disable all the time, INI-derivatives are really not my favorite  
configuration syntax, binary logfiles, semantics of disable/mask can be  
confusing...). IMHO it does provide a lot of useful features that were  
missing previously, though.


YMMV
Linux-Fan

öö




Re: certificate for digital signature

2022-04-07 Thread Linux - Junior Polegato

Hello!

        I got it working with the Bird ID desktop client; I installed and
configured it in Firefox and in the Assinador Serpro with the path of the
".so" at [1].

        I am using Debian Testing (/etc/debian_version: bookworm/sid) with
everything up to date.

        The sites that accept Bird ID directly work very well, too.

[1] /opt/Assistente Desktop
birdID/resources/extraResources/linux/x64/vault-pkcs11.so

--

[]'s

Junior Polegato




On 07/04/2022 14:43, Daniel Lenharo wrote:

Hello

On 07/04/2022 14:35, Leonardo S. S. da Rocha wrote:

Thanks Paulo, I already use GnuPG.

Daniel, tell me one thing: with that Serpro signer you use, can I use any
certificate?

I believe so. As long as it is a certificate following the ICP-Brasil
standard, I do not think there will be problems. Whether A1 or A3.

[]'s





Re: certificate for digital signature

2022-04-07 Thread Linux - Junior Polegato

Hello!

        I will be doing a test today with Bird ID (A3 in the cloud) from
the company Soluti.

        I already use the A1 (file) certificate from that company; I
download it with OpenWebStart, and then no token is needed -- just add the
certificate in Firefox/Chromium or open the ".pfx" file in LibreOffice to
sign/verify documents/PDFs. It also works with NF-e and with SpedFiscal
ICMS/IPI/Contribuições and Conectividade Social (Windows VM).

        A bit later I will report whether Bird ID works out.

--

[]'s

Junior Polegato




On 06/04/2022 18:43, Leonardo S. S. da Rocha wrote:

Good evening, everyone!

I hope you are all well and safe.

I would like a recommendation for a digital signature certificate, an e-CPF
A1 or even A3, whose signing manager can be installed on Debian (GNU/Linux).

Thanks,

Leonardo Rocha.






Re: Problem downloading "Installation Guide for 64-bit PC (amd64)"

2022-04-07 Thread Linux-Fan

Richard Owlett writes:


On 04/07/2022 10:22 AM, Cindy Sue Causey wrote:

On 4/7/22, Richard Owlett  wrote:

I need a *HTML* copy of "Installation Guide for 64-bit PC (amd64)" for
*OFFLINE* use.

The HTML links on [https://www.debian.org/releases/stable/installmanual]
lead *ONLY* to Page 1.

Is the complete document downloadable as a single HTML file?



Have you seen the "installation-guide-amd64" package in Debian's
repositories?


Thank you.
No, I hadn't. The machine I'm currently on is running Debian 9.13 [I'm  
prepping to do an install of 11.3 to another machine].
I found it listed in Synaptic and installed it. It does not appear in any of  
MATE's menus and I can't reboot until later today.


Do you know in which sub-directory I might find it?


Try /usr/share/doc/installation-guide-amd64/en/index.html as the entry point.

An easy way to find out about a package's files after installation is
`dpkg -L `, e.g. in this case:

$ dpkg -L installation-guide-amd64

Btw. the guide in the package is then not a single HTML file but multiple  
files (in case it matters...)


HTH
Linux-Fan

öö

[...]




Re: libvirt tools and keyfiles

2022-04-02 Thread Linux-Fan

Celejar writes:


Hi,

I'm trying to use virt-manager / virt-viewer to access the console of
some qemu / kvm virtual machines on a remote system over ssh. I have
public key access to root@remote_system. When I do:

virt-manager -c 'qemu+ssh://root@remote_system/system?
keyfile=path_to_private_key'

the connection to libvirt on the remote system comes up fine, and I can
see the various VMs running there, but when I try to access a VM
console (via the "Open" button or "Edit / Virtual Machine Details"), I
get prompted for the password for "root@remote_system" (which doesn't
even work, since password access is disabled in the ssh server
configuration).


What do you insert for `remote_system`? A hostname or an IP?

IIRC I once tried to use an IP address directly
(qemu+ssh://u...@192.168.yyy.yyy), and while it would perform the initial
connection successfully, subsequent actions would query me for the password
of (user@masysma-...), i.e. change from IP-address-based (which was
configured to use a key in .ssh/config) to hostname-based (for which the
key was not specified in the config). I solved this by adding the hostname
to /etc/hosts and configuring SSH and my virt-manager connection to use the
hostnames rather than IP addresses.


I also remember that I had to add the connection to my GUI user's .ssh/config  
AND my root user's .ssh/config. In my case, I am not specifying the keyfile  
as part of the connection, though.
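
For reference, a minimal sketch of such entries (host alias, user and key
path are placeholders, not my actual setup):

  # ~/.ssh/config -- needed for the GUI user AND for root
  Host virthost
      User root
      IdentityFile ~/.ssh/id_virthost

  $ virt-manager -c 'qemu+ssh://root@virthost/system'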


HTH
Linux-Fan

öö

[...]


pgpagjTs3CcR0.pgp
Description: PGP signature


Re: which references (books, web pages, faqs, videos, ...) would you recommend to someone learning about the Linux boot process as thoroughly as possible?

2022-04-02 Thread Linux-Fan

Albretch Mueller writes:


imagine you had to code a new bootloader now (as an exercise) in
hindsight which books would you have picked?


I do not know of any books about bootloaders, but having a look at multiple 
different bootloaders (documentation and possibly source code) should be a 
good way to start? I think GRUB, SYSLINUX and u-boot will cover a wide 
variety of boot scenarios?



I am OK with Math and technology of any kind and I am more of a Debian
kind of guy. In fact, I am amazed at how Debian Live would pretty much
boot any piece of sh!t you would feed to it, but, just to mention one
case, knoppix would not. But then knoppix, has such super nice boot-up
options as: toram (right as a parameter as you boot, no tinkering with
any other thing!), fromhd and bootfrom (you can use to put an iso in a


I think that at least in the past it was possible to boot Debian Live 
systems with the `toram` option, too. You should probably just try it out? 
In case the menu does not offer it, consider tinkering with the respective 
syslinux/isolinux configuration and adding a menu entry with `toram` set, 
as sketched below?
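
A sketch of such an entry (kernel and initrd paths follow the usual
Debian Live layout and may need adjusting to the actual image):

  LABEL live-toram
    MENU LABEL Debian Live (copy to RAM)
    KERNEL /live/vmlinuz
    APPEND initrd=/live/initrd.img boot=live components toram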



partition of a pen drive, or even stash it in your own work computer,
in order to liberate your DVD player after booting), ..., which DL
doesn’t have.


I think GRUB2 supports this feature (booting an ISO stashed on a 
partition), but I am not sure it will work correctly in all cases.
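
A sketch of the GRUB2 loopback approach (the ISO path and the in-ISO
kernel paths are assumptions about the image at hand):

  menuentry "Debian Live from ISO" {
      set isofile="/isos/debian-live.iso"
      loopback loop $isofile
      linux (loop)/live/vmlinuz boot=live components findiso=$isofile toram
      initrd (loop)/live/initrd.img
  }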


For my own tinkering I mostly prefer SYSLINUX. It can boot just about any  
live linux and you can also add `memdisk` images to add DOS and other small  
systems.


[...]


I have been always intrigued about such matters and such differences,
between what I see as supposedly being standardized, like a boot
process.


Compare the boot process between amd64 and armhf to find out that there are  
quite the differences :)


HTH
Linux-Fan

öö


pgpVSDkpKfsYh.pgp
Description: PGP signature


Re: how many W a PSU for non-gaming Debian?

2022-03-08 Thread Linux-Fan

Emanuel Berg writes:


Linux-Fan wrote:

>>> CPU power doubled to account for short-time bursts.
>>
>> Double it, that something one should do?
>
> In Intel world, yes :) In AMD world it seems to be slightly
> better, cf.:
> https://images.anandtech.com/graphs/graph16220/119126.png
>
> The TDP is given in the labels whereas the actual max power
> consumption observed is in the diagram. It seems that for
> AMD systems, the most extreme factor observed there is
> 143.22/105 = 1.364 [...]

OK, included ...

> SSD highly depends on the model. No need to argue for one
> general figure over the other. I think my SSD is specified
> 14W, but it is large and not the "newest" :)

OK, it says the SSD and RAM are

  Corsair Vengeance LPX · DDR4 · 2*8GB=16GB · 3600Mhz

  250GB Kingston KC 2000 (SSD/NVMe/M.2)

if one can find exact digits, that's optimal, but what do you
search for to find out? I mean in general? The model name
and ... ?


... datasheet
... power consumption

This does not yield anything interesting for the RAM here, but for the SSD 
we get a useful datasheet this way:


[v] https://www.kingston.com/datasheets/SKC2000_us.pdf

Which indicates on page 2 that the max. power consumption is 7W under heavy  
write loads.


I derive most of my estimates from the figures of a PC magazine I 
regularly read :)



Anyway, the computation now lands at 307 W.

  device  model/category                     max W   note               ref
  --------------------------------------------------------------------------
  CPU     AMD Ryzen 3, 4 cores               89      exact plus extra   [i]
  fans    80 mm (3K RPM)                     9       3*3W =  9W         [ii]
          120 mm (2K RPM)                    12      2*6W = 12W         [ii]
  GPU     geforce-gt-710                     19      exact              [iii]
  mb      Asus ROG Strix B450-F Gaming AM4   101     exact excl CPU     [iv]
  RAM     DDR3 (1.5V)                        3       actually a DDR4    [ii]
  SSD                                        2.8                        [ii]
  --------------------------------------------------------------------------


The 3W are for one module per [ii], but further above you write 2x8GB, so 
why not compute it with at least 6W? Also, the SSD could go up to 7W per 
its datasheet [v]. For actual PSU sizes this will end up at 350W minimum, 
which is OK I guess.



total, with +30% wiggle room:
(ceiling (* 1.3 (+ (* (/ 143.22 105) 65)
   (* 3 3) (* 2 6)
   19
   (- 166.2 65)
   3
   2.8) )) ; 307 W

  [i] https://www.amd.com/en/products/apu/amd-ryzen-3-3200g
  https://images.anandtech.com/graphs/graph16220/119126.png
 [ii] https://www.buildcomputers.net/power-consumption-of-pc-components.html
[iii] https://www.techpowerup.com/gpu-specs/geforce-gt-710.c1990
 [iv] https://www.techporn.ph/review-asus-rog-strix-b450-f-gaming-am4-motherboard/
      https://www.techporn.ph/wp-content/uploads/ASUS-ROG-Strix-B450-F-Gaming-Benchmark-1.jpg


[...]

HTH and YMMV
Linux-Fan

öö


pgp9ZR9Hn5YpT.pgp
Description: PGP signature


Re: how many W a PSU for non-gaming Debian?

2022-03-07 Thread Linux-Fan

Emanuel Berg writes:


Linux-Fan wrote:

> It does, see
> https://www.techporn.ph/wp-content/uploads/ASUS-ROG-Strix-B450-F-Gaming-Benchmark-1.jpg
> which is from your reference iv and explicitly shows an AMD R5
> 2600X processor being used.

I'll subtract 65W from it then ...

> * CPU power doubled to account for short-time bursts.

Double it, that something one should do?


In Intel world, yes :) In AMD world it seems to be slightly better, cf.:
https://images.anandtech.com/graphs/graph16220/119126.png

The TDP is given in the labels whereas the actual max power consumption
observed is in the diagram. It seems that for AMD systems, the most extreme
factor observed there is 143.22/105 = 1.364, so you might take that or round
up to 1.5 rather than factor 2 for AMD systems.


  Default TDP
  65W
  AMD Configurable TDP (cTDP)
  45-65W
  <https://www.amd.com/en/products/apu/amd-ryzen-3-3200g>

> * RAM upped to 10W and SSD upped to 5W (depending on the
>   actual components, you might want to revert that but
>   computing an SSD with 3W makes your entire calculation
>   dependent on that specific model and if you upgrade that
>   later you'd have to take it into account).

I got these digits from
https://www.buildcomputers.net/power-consumption-of-pc-components.html
which is one of the first Google hits so I trust them for
now ...


The figures on that page for CPUs are misleading (they specify TDP range
which is not much related to actual power draw anymore, see linked figure
above).

The remainder of the figures seems sensible. Some GPUs are also known to
draw extreme peak loads (though usually that's only the "large" ones).

SSD highly depends on the model. No need to argue for one general figure
over the other. I think my SSD is specified 14W, but it is large and not the
"newest" :)

For RAM it seems that my figure is just a little too high and that your 3W
are more correct in modern times. Nice to know :)


As for upgrading that will be easy in this regard since I'll
read how many Watts on the box of whatever I get :)

  device  model/category                     max W   note               ref
  --------------------------------------------------------------------------
  CPU     AMD middle end, 4 cores            65      exact              [i]
  fans    80 mm (3K RPM)                     9       3*3W =  9W         [ii]
          120 mm (2K RPM)                    12      2*6W = 12W         [ii]
  GPU     geforce-gt-710                     19      exact              [iii]
  mb      Asus ROG Strix B450-F Gaming AM4   166.2   exact, incl CPU    [iv]
  RAM     DDR3 (1.5V)                        3       actually, a DDR4   [ii]
  SSD                                        2.8                        [ii]
  --------------------------------------------------------------------------

total:
  (ceiling (+ 65 (* 3 3) (* 2 6) 19 (- 166.2 65) 3 2.8))  ; 212 W

with +30% wiggle room:
  (ceiling (* 1.3 (+ 65 (* 3 3) (* 2 6) 19 (- 166.2 65) 3 2.8)))  ; 276 W


IMHO this is too low a figure for the system being planned. I am pretty sure
it _will_ run on a 300W PSU, BUT probably not stable for a long time and
under high loads.

HTH
Linux-Fan


  [i] https://www.amd.com/en/products/apu/amd-ryzen-3-3200g
 [ii] https://www.buildcomputers.net/power-consumption-of-pc-components.html
[iii] https://www.techpowerup.com/gpu-specs/geforce-gt-710.c1990
 [iv] 
https://www.techporn.ph/review-asus-rog-strix-b450-f-gaming-am4-motherboard/
  
https://www.techporn.ph/wp-content/uploads/ASUS-ROG-Strix-B450-F-Gaming-Benchmark-1.jpg


[...]


pgpJwCQrGCL_L.pgp
Description: PGP signature


Re: how many W a PSU for non-gaming Debian?

2022-03-06 Thread Linux-Fan

Emanuel Berg writes:


Unfortunately the motherboard was worst-case 166.2 W, not the
previous estimate/approximation at 80.

The 166.2 (motherboard W really doesn't include the CPU?)


It does, see
https://www.techporn.ph/wp-content/uploads/ASUS-ROG-Strix-B450-F-Gaming-Benchmark-1.jpg
which is from your reference iv and explicitly shows an AMD R5 2600X  
processor being used.



So now it says it is worst-case 277 W, and with +30% wiggle
room it is 361 W :(

back to passive 400W I guess ...


  device  model/category                     max W   note               ref
  --------------------------------------------------------------------------
  CPU     AMD middle end, 4 cores            65      exact              [i]
  fans    80 mm (3K RPM)                     9       3*3W =  9W
          120 mm (2K RPM)                    12      2*6W = 12W         [ii]
  GPU     geforce-gt-710                     19      exact              [iii]
  mb      Asus ROG Strix B450-F Gaming AM4   166.2   exact              [iv]
  RAM     ~DDR3 (1.5V)                       3       actually, a DDR4
  SSD                                        2.8
  --------------------------------------------------------------------------

total:
  (ceiling (+ 19 65 (* 3 3) (* 2 6) 166.2 3 2.8))  ; 277 W

with +30% wiggle room:
  (ceiling (* 1.30 (+ 19 65 (* 3 3) (* 2 6) 166.2 3 2.8))) ; 361 W


[...]

 [iv] https://www.techporn.ph/review-asus-rog-strix-b450-f-gaming-am4-motherboard/


[...]

May I suggest computing it differently, as follows?

   device  model/category                     max W   note                ref
   ---------------------------------------------------------------------------
   CPU     AMD middle end, 4 cores            130     2*65W               [i]
   fans    80 mm (3K RPM)                     9       3*3W =  9W
           120 mm (2K RPM)                    12      2*6W = 12W          [ii]
   GPU     geforce-gt-710                     19      exact               [iii]
   mb      Asus ROG Strix B450-F Gaming AM4   80      previous estimate
   RAM     ~DDR3 (1.5V)                       10      actually, a DDR4
   SSD                                        5
   ---------------------------------------------------------------------------

What did I change:

* CPU power doubled to account for short-time bursts.
* Motherboard back to 80W which should still be a safe estimate.
* RAM upped to 10W and SSD upped to 5W (depending on the actual components,
  you might want to revert that but computing an SSD with 3W makes your
  entire calculation dependent on that specific model and if you upgrade
  that later you'd have to take it into account).

Sum = 130+9+12+19+80+10+5 = 265W.
With +30%: 265*1.3 = 344.5W

Hence it would be suggested to take at least a 350W PSU. Use a larger one if  
you ever plan to extend the system.


HTH and YMMV
Linux-Fan

öö


pgpl5untslDYm.pgp
Description: PGP signature


Re: how many W a PSU for non-gaming Debian?

2022-03-05 Thread Linux-Fan

Emanuel Berg writes:


Linux-Fan wrote:

>> Oh, the OP has the AMD4 x86_64 CPU that comes with/in the
>> Asus ROG Strix B450-F Gaming motherboard!
>
> By default, a motherboard is just that, a motherboard.
> Unless you have some specific "bundle" package, there is no
> CPU included with it. According to
> https://rog.asus.com/motherboards/rog-strix/rog-strix-b450-f-gaming-model/
> the board has "AM4 socket: Ready for AMD Ryzen(TM)
> processors". And according to
> https://en.wikipedia.org/wiki/Socket_AM4 this socket can
> accomodate for a wide variety of different processors.

$ lscpu | grep "name"
Model name: AMD Ryzen 3 3200G with Radeon Vega Graphics


Now this is _not_ a 125W CPU but a 65W TDP one [1]. When you calculate that 
CPU with 125W you are already safe. No need to multiply that by 2, which is 
what I would have suggested were it a 125W TDP CPU. In fact, by calculating 
with 125W you almost took twice the TDP already, which would have been 
2*65W = 130W.


[1] https://www.amd.com/en/products/apu/amd-ryzen-3-3200g


> (a) PSU with 450W will have specified 87% at 90W i.e.
> draw 90W/0.87 = 103W
>
> (b) PSU with 600W will have specified 90% at 120W i.e.
> draw 120W/0.9 = 133W

Okay, so with everything and +25% wiggle room, i.e.

  (ceiling (* 1.25 (+ 19 125 (* 3 3) (* 2 6) 80 3 2.8))) ; 314 W


Use that figure, see above :)


it is 314 W, and then the worst efficiency is 87%, the digit
lands at

  (ceiling (/ 314 0.87 1.0)) ; 361

361 W.

?


You do not need to take efficiency into account this way. Reason is: The  
efficiency is the factor between the wall power draw and the PSU's output  
power. To size a PSU for a new computer you _only_ take into account the  
output power and this is also the figure that the manufacturer advertises  
(i.e. 300W PSU means 300W output power, not input power draw).


The efficiency only comes into play when you want to consider the wall power 
draw, e.g. to find out how much it will cost to run the system 24/7 in terms 
of electricity. If you take an "overly large" PSU, efficiency will possibly 
degrade compared to one that is "just right" in size.


I hope that clarifies it a little.

HTH
Linux-Fan

öö

[...]


pgpJCzDitOFKs.pgp
Description: PGP signature


Re: how many W a PSU for non-gaming Debian?

2022-03-04 Thread Linux-Fan

Emanuel Berg writes:


Linux-Fan wrote:

> Please keep the following points in mind when doing PSU
> wattage sizing for modern PCs:
>
> - Judging a CPU by its thermal design power is no longer
>   feasible due to some CPUs permanently overclocking while
>   the actually available cooling power permits it. On some
>   Intel CPUs this can mean about twice the power than you
>   would have expected. If we were to apply this logic
>   directly to the unspecified (?) AMD CPU from the OP's
>   config

Oh, the OP has the AMD4 x86_64 CPU that comes with/in the Asus
ROG Strix B450-F Gaming motherboard!


By default, a motherboard is just that, a motherboard. Unless you have some  
specific "bundle" package, there is no CPU included with it. According to

https://rog.asus.com/motherboards/rog-strix/rog-strix-b450-f-gaming-model/
the board has "AM4 socket: Ready for AMD Ryzen(TM) processors". And 
according to https://en.wikipedia.org/wiki/Socket_AM4 this socket can 
accommodate a wide variety of different processors.



> - 80+ certified PSUs are rated in terms of their performance
>   at certain load percentages. If you choose a high-power
>   PSU (e.g. 600W) then even if it has a high efficiency
>   according to 80+ it will not necessarily be more efficient
>   than a less highly rated 300W model.

Not following?


You've snipped the part by Andy Cater:

| A larger PSU in wattage terms may have better capacitors, more capacity to
| withstand dips and spikes in mains voltage and may have a better power factor
| so be more effective overall.
 ^
|
| the cost differential between 300 and 600W should be relatively small.
|
| Easier to overspecify: the other thing is that larger PSU wattages may have
| quieter / better quality fans. I love almost silent PCs.

I just wanted to point out that larger PSU can be more efficient, but  
smaller PSU can also be more efficient. Even when energy efficiency labels  
are compared (cf. https://en.wikipedia.org/wiki/80_Plus), a better rating  
(e.g. gold over silver or such) may not always indicate better efficiency.


Say, for example, you have two PSUs under consideration:

(a) PSU with 450W and 80+ silver rating
(b) PSU with 600W and 80+ gold rating.

Then at 20% load, 80+ specifies (a) to have efficiency 87% and (b) to have  
efficiency 90%. In absolute numbers:


(a) PSU with 450W will have specified 87% at 90W i.e. draw 90W/0.87 = 103W
(b) PSU with 600W will have specified 90% at 120W i.e. draw 120W/0.9 = 133W

As 20% is the lowest load specified for the rating (silver, gold etc.), we 
cannot tell how the respective PSUs operate if less than 20% load is 
requested. From the rating we only know that (a) will take at most 103W 
under idle loads and (b) at most 133W; hence the power consumption of (b) 
could potentially be higher in very-low-load idle scenarios, which are not 
uncommonly the dominating factor for typical PC workloads.


HTH
Linux-Fan

öö


pgpTY2xmqwiwR.pgp
Description: PGP signature


Re: how many W a PSU for non-gaming Debian?

2022-03-04 Thread Linux-Fan

Henning Follmann writes:


On Fri, Mar 04, 2022 at 06:36:35PM +, Andrew M.A. Cater wrote:
> On Fri, Mar 04, 2022 at 06:47:14PM +0100, Emanuel Berg wrote:
> > Alexis Grigoriou wrote:
> >
> > >> I've heard that for gaming you would want a 600~800W PSU


[...]


> > motherboard, RAM and SSD are at most 232W.
> >
> > CPU          AMD mid end (4 cores)   125
> > fans         80 mm (3K RPM)            9   (3*3W =  9W)
> >              120 mm (2K RPM)          12   (2*6W = 12W)
> > motherboard  high end                 80
> > RAM          ~DDR3 (1.5V)              3   (actually it is a DDR4)
> > SSD                                  2.8
> >
> > (+ 125 (* 3 3) (* 2 6) 80 3 2.8) ; 231.8W
> >
> > The only thing left is the GPU, I take it even in that PSU


[...]

> If your draw is a max of 230W and you use a 300W power supply, you've  
> still got to account for inrush current to capacitors as the machine is  
> switched on.

>
> A larger PSU in wattage terms may have better capacitors, more capacity to
> withstand dips and spikes in mains voltage and may have a better power  
> factor so be more effective overall.

>
> the cost differential between 300 and 600W should be relatively small.
>
> Easier to overspecify: the other thing is that larger PSU wattages may have
> quieter / better quality fans. I love almost silent PCs.


[...]


And to add to that,
most recent PSUs are very good in terms of efficiency. They are switched
and draw much less power when the computer doesn't demand it.
I would also go with a 600 W PSU.


[...]

Please keep the following points in mind when doing PSU wattage sizing for  
modern PCs:


- Judging a CPU by its thermal design power is no longer feasible due
  to some CPUs permanently overclocking while the actually available
  cooling power permits it. On some Intel CPUs this can mean about twice
  the power than you would have expected. If we were to apply this logic
  directly to the unspecified (?) AMD CPU from the OP's config, it would
  mean adding 250W for the CPU rather than the 125W from its TDP.

- 80+ certified PSUs are rated in terms of their performance at certain
  load percentages. If you choose a high-power PSU (e.g. 600W) then even
  if it has a high efficiency according to 80+ it will not necessarily be
  more efficient than a less highly rated 300W model.

To summarize: For the use case, one might want to add the CPU's TDP 
"another time", i.e. 231.8W + 125W ≈ 357W. Then choose either the next 
fitting PSU size (400W) or go slightly larger for extra safety, e.g. 450W, 
500W or even 550W would all be sensible choices.


HTH and YMMV
Linux-Fan

öö


pgp0I0qlAv8Hu.pgp
Description: PGP signature


Re: Bulseye - TacacsPlus - Configure ?

2022-02-13 Thread Linux-Fan

Maurizio Caloro writes:

Found and installed this package, TacacsPlus, on Bullseye. May I ask for 
short advice on configuring it?


root@HPT610:# apt search tacacs
libauthen-tacacsplus-perl/stable,now 0.28-1+b1 amd64 [installed]
  Perl module for authentication using TACACS+ server


See https://metacpan.org/pod/Authen::TacacsPlus.

The package is a Perl module. I.e. it is useful inside Perl scripts. If you  
do not want to create or use a perl script with that module, it seems  
unlikely that you would benefit from the package at all?


HTH
Linux-Fan

öö


pgpfodrMpgRO5.pgp
Description: PGP signature


Re: Captive Portal Alternatives (Was: Re: miracle of Firefox in the hotel)

2022-02-13 Thread Linux-Fan

Cindy Sue Causey writes:


On 2/13/22, Brian  wrote:
> On Sun 13 Feb 2022 at 16:02:53 +0100, to...@tuxteam.de wrote:
>> > > On Sat 12 Feb 2022 at 21:07:10 +0100, to...@tuxteam.de wrote:


[...]


>> > > > [1] Had I a say in it, I'd reserve a very special place in Hell
>> > > >for those.


[...]


> Interesting.
>
> Captive portals provide free connectivity. What's the problem?

I almost responded to this thread yesterday to say, "Shudder!"

My thought process was that it seems like it might be pretty easy for
perps hovering out in a parking lot or maybe a nearby building to
create a fake captive portal that resembles what users would be
expecting to see from the, yes, FREE Internet provider.

That would only be possible if this is working like I'm imagining is
being described here. That imagination involves a webpage such as what
I once encountered popping up unexpectedly while trying to access WIFI
through a local grocery store a few years ago.


[...]

Yes, it works pretty much as you describe with exactly the problematic  
aspects (see my other post and the RFC linked before).


It is not _that_ bad for security because of two key points:

- Captive portals cannot bypass protection by TLS certificates.
  Users will instead be unable to access the respective pages and either
  get a certificate error or no useful error message at all.

- In case of unencrypted/unprotected traffic, adversaries can
  manipulate that even _without_ captive portals if they setup their
  own (malicious) “free” WiFi service.

HTH
Linux-Fan

öö


pgpm03BfoCPio.pgp
Description: PGP signature


Re: Captive Portal Alternatives (Was: Re: miracle of Firefox in the hotel)

2022-02-13 Thread Linux-Fan

Brian writes:


On Sun 13 Feb 2022 at 16:02:53 +0100, to...@tuxteam.de wrote:
> On Sun, Feb 13, 2022 at 02:41:31PM +0100, Linux-Fan wrote:
> > Brian writes:


[...]

> > > Could the process to replace them on, say, public transport be  
> > > outlined?


[...]


> > * RFC8910 - Captive-Portal Identification in DHCP and Router
> >   Advertisements (RAs). I never never heard of it before searching
> >   for “Alternatives to captive portals wifi” online :)
>
> * Joining a local initiative providing free connectivity (and, of
>   course, lobbying your local policy makers that this be legal;
>   the very idea of providing free stuff tends to be suspect).
>
> Freifunk [...] is one successful example.

Interesting.

Captive portals provide free connectivity. What's the problem?


[...]

I do not use Wifi with captive portals very often so I have only experienced  
a limited subset of problems, but I can think of at least the following  
issues:


- Security: Intercepting requests to arbitrary pages and replying with
  some other content is quite similar to a MITM adversary. Hence,
  users following the recommended “prefer HTTPS” usage will get
  certificate errors instead.
  The RFC explains this much better than I could; see section
  “5. Security Considerations”.

  Also, I think the OP's problem is caused exactly by this.

  For captive portals to work in an HTTPS-preferring browser, quirks
  like those implemented by Firefox are needed, i.e. trying to detect
  Internet connectivity by connecting to the vendor's URL...
  not good for privacy and only a heuristic.

- Browser requirements: Captive portals often require a JS-capable
  browser to accept their terms etc. This is probably acceptable for
  Notebooks and “Smartphones”, but any other type of device will often
  be unable to access a captive-portal-protected Wifi. I have not tested
  it but I would imagine that it be tough to join such a network for the
  purpose of playing with a handheld console (e.g. Nintendo 2DS or such)
  on a train given that the device's webbrowser is very limited.

- Actually, not all captive portals provide “free” connectivity. At least
  not in the freedom sense. IIRC in Italy, they request your tax number
  before allowing you to use the Wifi on the trains? You pay with your
  data... According to [1] I seem to misremember this: They want your
  phone number or credit card number instead. It seems that on some lines
  they have eliminated this need for registration (not sure if that means
  there is no longer any captive portal at all).

It might only be anecdotal, but here is another counter-intuitive problem 
caused by captive portals [2].


[1] 
https://www.trenitalia.com/it/offerte_e_servizi/portale-frecce/come-accedere-al-portale-frecce.html
[2] 
https://ttboj.wordpress.com/2014/11/27/captive-web-portals-are-considered-harmful/

HTH and YMMV
Linux-Fan

öö


pgpOD2dzeuOMS.pgp
Description: PGP signature


Captive Portal Alternatives (Was: Re: miracle of Firefox in the hotel)

2022-02-13 Thread Linux-Fan

Brian writes:


On Sat 12 Feb 2022 at 21:07:10 +0100, to...@tuxteam.de wrote:


[...]


> This is Firefox's captive portal [1] detection [2].
>
> Cheers
>
> [1] Had I a say in it, I'd reserve a very special place in Hell
>for those.

Could the process to replace them on, say, public transport be outlined?


[...]

It highly depends on your jurisdiction and other regulatory requirements  
thus I gather there is no comprehensive answer to this question.


Alternatives could be any of the following:

* Not using a captive portal at all i.e. having just a free WiFi
  for everyone near enough to receive the radio signal.

* Using WPA Enterprise (RADIUS) to have users login without any
  website but directly as part of joining the network. This works
  for very large networks, too. E.g. the `eduroam` common in some
  universities can be accessed from any of the participating
  universities' accounts by just entering their campus e-mail address
  for login.

* RFC8910 - Captive-Portal Identification in DHCP and Router
  Advertisements (RAs). I had never heard of it before searching
  for “Alternatives to captive portals wifi” online :) (a minimal
  configuration sketch follows below)
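
A minimal sketch of the RFC8910 route, assuming dnsmasq is the network's
DHCP server (the URL is a placeholder for the operator's captive portal
API endpoint):

  # dnsmasq.conf: announce the captive portal URI via DHCPv4 option 114
  dhcp-option-force=114,https://portal.example.org/captive-api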

See also:

* https://radavis.github.io/captive-portal-is-dead/
* 
https://old.reddit.com/r/HomeNetworking/comments/lrebw5/alternatives_to_a_captive_portal_for_open_networks/
* https://www.rfc-editor.org/rfc/rfc8910.txt

HTH
Linux-Fan

öö


pgpevR0soRT0q.pgp
Description: PGP signature


Re: Memory leak

2022-02-11 Thread Linux-Fan

Stefan Monnier writes:


> I used to have 8 GB on the system, and it would start to thrash at
> about 7+ GB usage. I recently ugrade to 16 GB; memory usage is
> currently over 8 GB, and it seems to be slowly but steadily increasing.

Presumably you bought 16GB to make use of it, right?
So it's only natural for your OS to try and put that memory to use.
Any "free memory" is memory that could potentially be used for something
more useful (IOW "free" = "wasted" in some sense).

It's normal for memory use to increase over time, as your OS finds more
things to put into it.


That was my first intuition, too. There is even a classic website about this  
very topic: https://www.linuxatemyram.com/


HOWEVER, given that the OP mentions looking at the RSS sizes, I think the 
classic "all memory used" issue is already ruled out. The issue seems to be 
modern web browsers, which could be considered OSes of their own already; 
hence they, too, claim more resources whenever it is useful for them.


Firefox takes just above 1600 MiB here with only six tabs open for four 
hours. Yet I am pretty sure it would take less were this a "lower-end" 
system, e.g. fewer CPU cores would cause fewer processes to be spawned, and 
hence the memory efficiency might be better in such cases.
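
To see quickly where the resident memory actually goes, sorting all
processes by RSS is a start (a sketch; the RSS column is in KiB):

  $ ps axo rss,comm --sort=-rss | head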


HTH and YMMV
Linux-Fan

öö

[...]


pgpNyJMgXvFcU.pgp
Description: PGP signature


Re: Query

2022-02-08 Thread Linux-Fan

Chuck Zmudzinski writes:


On 2/7/2022 4:36 PM, Greg Wooledge wrote:

On Mon, Feb 07, 2022 at 04:31:51PM -0500, Chuck Zmudzinski wrote:

On 2/7/2022 10:50 AM, William Lee Valentine wrote:

I am wondering whether a current Debian distribution can be installed
and run on an older Pentium III computer. (I have Debian 11.2 on a DVD.)

The computer is

    Dell Dimension XPS T500: Intel Pentium III processor (Katnai)
    memory: 756 megabytes, running at 500 megahertz
    IDE disc drive: 60 gigabytes
    Debian partition: currently 42 gigabytes
    Debian 6.0: Squeeze

Based on what others are saying, it looks like a typical modern Debian
desktop environment such as Gnome or Plasma KDE will not work well with such
an old system. I suggest you look for a Distro that is tailored for old
hardware.

Bah, silly.  Just use a traditional window manager instead of a bloated
Desktop Environment.  Problem solved.


Which window manager for an extremely resource-limited system? Debian's


One could use one of the following, for example:

- IceWM
- i3
- Fluxbox

All of them are packaged for Debian and work on low-resource computers. I  
have successfully deployed i3 on a system with similar specs to the OP's.  
Mine is still on Debian 10 and not upgraded to Debian 11 yet, though.
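
For a rough idea, such a setup can be installed as small as this sketch
(IceWM as the example WM; --no-install-recommends keeps the footprint
small):

  # apt-get install --no-install-recommends xserver-xorg icewm xterm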


wiki page on window managers lists more than 30 possibilities. It's not silly 
to take a look at a distro based on Debian that is tailored for low 
resources as a starting point to try and build a Debian 11.2 system that 
will work OK on a Pentium III with less than 1 GB of memory. Debian provides


Of course, it's a valid approach :)

so many packages, and such distros like antiX can give one an idea about  
which packages to use when trying to build a Debian 11.2 system that will  
work well on an older system with such a small amount of memory and such an  
old CPU.


The other option is to ask here for recommendations. Debian is one of the 
last large/mainstream distributions to still support the i386 architecture, 
hence it is not unlikely that some people here will be running old hardware 
(I do, for instance :) ).



But the *real* problem will come when they try to run a web browser.  That's
where the truly massive memory demand is.

756 MB is plenty of RAM for daily use of everything except a web browser.


Yes, it will be important to try to find a web browser that is the least  
bloated as possible. Again, looking at the browser choices of distros  
tailored for old hardware can help build a Debian 11.2 system that will work  
well on old hardware.


Independent of the other distros one will need to do a compromise here  
because:


* Any browser supporting all the modern features (mostly JS and CSS3) will
  be too slow for such an old machine.

* Any other browser will be too limited in features to satisfy a modern
  user's needs. E.g. try to access Gmail or Youtube over any lightweight
  browser and see how it goes (I suspect it will not work _at all_!)

In any case, it will need to be a carefully crafted selection of Debian 11.2  
packages to have a decent experience, and most definitely start with a small  
netinst installation with only the text console to start, and then build the  
GUI environment carefully from the ground up.


On such an old system one should only install what is needed, because any 
additional background service will reduce the already very limited 
computational capacity. Rather than crafting a set of applications, it 
might be easier to start with the question of what the machine is going to 
be used for, then figure out whether this is even possible for the 
hardware, and only afterwards check which applications fit the purpose 
_and_ the resource constraints.


E.g. I regularly run `maxima` as a "calculator" app. On an old machine it  
takes many seconds to run and to compute even simple expressions. Hence I  
switched to `sc-im` (a lightweight spreadsheet program) on old machines for  
such tasks.


HTH and YMMV
Linux-Fan

öö

[...]


pgppno1xg3h5S.pgp
Description: PGP signature


Re: Mini server hardware for home use NAS purposes

2022-02-02 Thread Linux-Fan

Jonathan Dowland writes:


On Wed, Feb 02, 2022 at 03:11:57PM +0100, Christian Britz wrote:

Do you have any recommendations for me?


I have much the same requirements and my current solution is documented here:
<https://jmtd.net/hardware/phobos/>


I am using an Intel NUC with Celeron J3455 with 8 GiB of RAM.

It has a fan but runs very quietly.

Here, it is only a "backup-server" hence the low-speed CPU is not an issue.

It is currently still on oldstable (see sheet below), but that's only  
because I have not found any time to upgrade it yet. Given that it is just a  
regular amd64 machine, I do not expect any problems with upgrading.


┌─── System Sheet Script 1.2.7, Copyright (c) 2012-2021 Ma_Sys.ma ────────────┐
│ linux-fan (id 1000) on rxvt-unicode-256color   Debian GNU/Linux 10 (buster) │
│ Linux 4.19.0-18-amd64 x86_64                                                │
│ 02.02.2022 22:21:37                                              masysma-16 │
│ up 65 days, 41 min,  1 user,  load avg: 0.02, 0.03, 0.00       781/7856 MiB │
│ 4 Intel(R) Celeron(R) CPU J3455 @ 1.50GHz                                   │
├── Network ──────────────────────────────────────────────────────────────────┤
│ Interface   Sent/MiB   Received/MiB   Address                               │
│ enp2s0      17925      18624          192.168.1.22/24                       │
├─── File systems ────────────────────────────────────────────────────────────┤
│ Mountpoint   Used/GiB   Of/GiB   Percentage                                 │
│ /            246        1817     14%                                        │
├─── Users ───────────────────────────────────────────────────────────────────┤
│ Username     MEM/MiB   Top/MEM           CPU    Top/CPU          Time/min   │
│ root         271       dockerd           0.6%   dockerd          972        │
│ backupuser   194       megasync          0.2%   megasync         382        │
│ linux-fan    32        systemd           0%     syssheet         0          │
│ monitorix    21        monitorix-httpd   0%     monitorix-httpd  2          │
└─────────────────────────────────────────────────────────────────────────────┘

The system has been running since its installation in 09/2020 in mostly 24/7  
operation (a few weeks of vacation per year) with little to no issues -- I  
only remember overloading it once with a time series database and having to  
reboot to restore some order :)


HTH
Linux-Fan

[...]


pgpobDn0eFivj.pgp
Description: PGP signature


Re: smartd

2022-01-23 Thread Linux-Fan

pe...@easthope.ca writes:


From: Andy Smith 
Date: Sat, 22 Jan 2022 19:07:23 +
> ... you use RAID.

I knew nothing of RAID.  Therefore read here.
https://en.wikipedia.org/wiki/RAID

Reliability is more valuable to me than speed.  RAID 0 won't help.
For reliability I need a mirrored 2nd drive in the host; RAID 1 or
higher.

Google of "site:wiki.debian.org raid" returned ten pages, each quite
specialized and jargonified.  A few tips to establish mirroring can
help.


Here, it returns a few results, too. I think the most straightforward one 
is this:


https://wiki.debian.org/SoftwareRAID

For most purposes, I recommend RAID1. If you have four HDDs of identical  
size, RAID10 might be tempting, too, but I'd still consider and possibly  
prefer just creating two independent RAID1 arrays.
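
For the command line route, a minimal sketch (assuming two empty disks
/dev/sdb and /dev/sdc that may be overwritten):

  # mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
  # mkfs.ext4 /dev/md0
  # mdadm --detail --scan >> /etc/mdadm/mdadm.conf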


If you want to configure it from the installer, these step-by-step  
instructions show all the relevant installer screens:


https://sleeplessbeastie.eu/2013/10/04/how-to-configure-software-raid1-during-installation-process/

Also, keep in mind that establishing the mirroring is not all you need to  
do. To really profit from the enhanced reliability, you need to play through  
the recovery scenario, too. I recommend doing this in a VM unless you have  
some dedicated machine with at least two HDDs to play with.
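
Such a recovery drill could look like this sketch (again assuming the
/dev/sdb + /dev/sdc mirror from above):

  # mdadm /dev/md0 --fail /dev/sdb --remove /dev/sdb
  # cat /proc/mdstat    (the array keeps running in degraded mode)
  # mdadm /dev/md0 --add /dev/sdb
  # cat /proc/mdstat    (watch the rebuild progress)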


[...]

HTH and YMMV
Linux-Fan

öö


pgp1HGhRHorQN.pgp
Description: PGP signature


Re: downsides to replacing xfce4-terminal?

2022-01-08 Thread Linux-Fan

Greg Wooledge writes:


On Sat, Jan 08, 2022 at 12:16:44AM +0100, Michael Lange wrote:
> In case you have a mouse with a wheel, what's wrong with middle-button
> pasting?


It is worth mentioning that `putty.exe`, the common Windows program to 
access Linux machines over SSH, has the right-click-to-paste behaviour. I 
gather it might be hard to adjust muscle memory to the middle mouse click 
if one switches from Putty to something else.



It's virtually impossible to press the wheel without accidentally turning
it, either forward or backward.  Depending on where you're clicking,
this can have undesired side effects.


This problem is highly hardware-dependent. I know there are some mice where 
the wheel requires much force to press whereas scrolling happens immediately 
as soon as you touch it. Given that I need both the middle mouse button and 
the wheel function, I specifically avoid such mice when buying and try to 
find one with the opposite behaviour: an easy-to-click wheel that is hard 
to trigger into unexpected scrolling.


For my uses, I have found the "MadCatz R.A.T.3" to work well (seven years 
of light mouse usage have passed). It seems to be superseded by the 
"R.A.T.4+" which has a bunch more features that I probably don't need :)



Personally, I'm still using a three-button mouse with no wheel.  The
middle button pastes, just as the gods of Unix (or Xerox) intended.


That's also a fine choice iff one can do without the wheel :)

YMMV
Linux-Fan

öö


pgp_2lQcnZGC5.pgp
Description: PGP signature


Re: Looking for reccomendations

2021-12-29 Thread Linux-Fan

Juan R.D. Silva writes:

The headphone jack failed on my Dell M4800 laptop. I need to find reliable  
with decent stereo audio output External USB Sound Card/Audio Adapter with  
3.5mm Stereo Headphone (3 pole plug) and Mono Microphone (nice to have)  
Jacks. It should be available in North America.


Back when I wanted a USB sound card, I bought the
"Creative Sound Blaster PLAY! 3" [1]. It works out of the box with Debian 
stable (back then it was Debian 10) and PulseAudio+ALSA. I did not check 
whether any of the advertised advanced audio functionality (higher 
sampling rate etc.) works under Linux. Note also that I used this with a 
desktop-style system and thus do not know about its power consumption.


I gather it is one of the more expensive ones. From my (limited) experience  
the audio quality is decent and it has a nice extra feature: the headphone 
jack supports 3-pole as well as 4-pole plugs. I.e.: if you want to use 
your smartphone's 4-pole headset, it will work, too. It also has a dedicated 
microphone jack to use with "regular" PC headsets.


AFAICT, it should be available in America. I bought my one from a retail  
store in Germany, though.


[1] https://www.newegg.com/creative-sound-blaster-play-3/p/N82E16829102100

HTH and YMMV
Linux-Fan

öö

[...]


pgpD1mIVhKG9H.pgp
Description: PGP signature


Re: Emoji fonts in Debian [WAS:] Re: How to NOT automatically mount a specific partition of an external device?

2021-11-27 Thread Linux-Fan

Nate Bargmann writes:


* On 2021 26 Nov 11:36 -0600, Celejar wrote:
> On Thu, 25 Nov 2021 10:43:16 +
> Jonathan Dowland  wrote:
>
> ...
>
> > Jonathan Dowland
> > ✎  j...@debian.org
> >  https://jmtd.net
>
> I finally got tired of seeing tofu for some of the glyphs in your sig,
> so I looked up their Unicode codepoints:

Interestingly, I see the glyphs in Mutt running in Gnome Terminal and in
Vim as I edit this in the same Gnome Terminal.  My font is one
installed locally, Droid Sans Mono Slashed which provides the zero
character with a slash.

I know that there is keyboard sequence in Gnome Terminal (Ctl-Shift-E
then Space) to bring up a menu to select Unicode glyphs.



- Nate


I use the cone e-mail client in rxvt-unicode with the Terminus bitmap font  
and I see only the icon next to `j...@debian.org`. Apart from that, the  
first line of the signature has two squares, the third line one and the post  
by Nate has a single square, too.


I can view the glyphs correctly by saving the mail as text file and opening  
it with mousepad. `aptitude search ~inoto` returns the following here:


| idA fonts-noto-color-emoji- color emoji font from Google
| i A fonts-noto-core   - "No Tofu" font families with large
| i A fonts-noto-extra  - "No Tofu" font families with large
| i A fonts-noto-mono   - "No Tofu" monospaced font family wi
| i A fonts-noto-ui-core

I am pretty fine with _not_ seeing the correct glyphs by default given that  
I do not want fancy colorful icons in my terminals anyway :)
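
In case one wants to know which installed font (if any) would supply a
given glyph, fontconfig can answer that; a sketch (U+270E is the pencil
from the signature):

  $ fc-list ':charset=270e' family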


YMMV
Linux-Fan

öö

[...]


pgpGNsO7W6ND3.pgp
Description: PGP signature


Re: ffmpeg does not create the video with millisecond precision

2021-11-16 Thread Debia Linux
On Tue, Nov 16, 2021 at 12:47 AM Camaleón  wrote:
>
> On 2021-11-15 at 21:24 -0600, Debia Linux wrote:
>
> > On Mon, Nov 15, 2021 at 9:04 PM Debia Linux  wrote:
> > >
> > > List:
> > >
> > > I hope you are well.
> > >
> > > I want to make several videos of less than one second each, and
> > > then join them into a single one.
> > >
> > > Example: I want the video to last 0.286 seconds, but when I verify
> > > it, ffmpeg has created it at 0.294 seconds.
> > >
> > > When joining all the videos, the result is 3 seconds too long. That
> > > is, where it should last 3 minutes 46 seconds, it now lasts 3
> > > minutes 49 seconds.
> > >
> > > That would be no problem if I did not have to add an audio track
> > > that lasts 3 minutes 46 seconds. Hence, it keeps drifting out of
> > > sync.
> > >
> > > I have already searched the net for several whole days and cannot
> > > find an answer.
> > >
> > > Can any of you guide me on how to make ffmpeg create the video
> > > EXACT to the millisecond?
>
> > The commands I am using are the following:
> > ffmpeg -r 3/1001 -loop 1 -i 1-02.png -aspect 16:9 -t 00:00:0.286
> > tmp-1-02.mp4
>
> How to use FFmpeg to convert images to video
> https://shotstack.io/learn/use-ffmpeg-to-convert-images-to-video/
>
> From what that tutorial says, the input and output frame rate values
> (-framerate and -r) can influence the total duration of the video,
> independently of the concrete interval you define.

OK, so there is an input and an output frame rate (fps)! I did not know
that.

>
> Try playing with those values to match the exact duration you are
> looking for, which is very short.

Thank you, always so timely, and years helping the community... Might
you not be the Oracle from The Matrix?

> Regards,
>
> --
> Camaleón
>
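
One commonly suggested way around this: each separately encoded clip gets
rounded to whole frames, and those errors add up. Doing a single encode
and feeding every still with its exact duration to ffmpeg's concat
demuxer avoids the accumulation; a sketch (1-02.png and the 0.286s are
from the thread, the other file name and duration are placeholders):

  $ cat > images.txt <<'EOF'
  file '1-02.png'
  duration 0.286
  file '1-03.png'
  duration 0.300
  file '1-03.png'
  EOF
  $ ffmpeg -f concat -i images.txt -vsync vfr -pix_fmt yuv420p out.mp4

The last file is listed twice because the concat demuxer would otherwise
ignore the final duration directive.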



Re: ffmpeg does not create the video with millisecond precision

2021-11-15 Thread Debia Linux
The commands I am using are the following:
ffmpeg -r 3/1001 -loop 1 -i 1-02.png -aspect 16:9 -t 00:00:0.286
tmp-1-02.mp4

On Mon, Nov 15, 2021 at 9:04 PM Debia Linux  wrote:
>
> List:
>
> I hope you are well.
>
> I want to make several videos of less than one second each, and then
> join them into a single one.
>
> Example: I want the video to last 0.286 seconds, but when I verify
> it, ffmpeg has created it at 0.294 seconds.
>
> When joining all the videos, the result is 3 seconds too long. That
> is, where it should last 3 minutes 46 seconds, it now lasts 3 minutes
> 49 seconds.
>
> That would be no problem if I did not have to add an audio track that
> lasts 3 minutes 46 seconds. Hence, it keeps drifting out of sync.
>
> I have already searched the net for several whole days and cannot
> find an answer.
>
> Can any of you guide me on how to make ffmpeg create the video EXACT
> to the millisecond?
>
> Thanks, and I appreciate your time.
>
> Debianero



ffmpeg does not create the video with millisecond precision

2021-11-15 Thread Debia Linux
List:

I hope you are well.

I want to make several videos of less than one second each, and then
join them into a single one.

Example: I want the video to last 0.286 seconds, but when I verify it,
ffmpeg has created it at 0.294 seconds.

When joining all the videos, the result is 3 seconds too long. That is,
where it should last 3 minutes 46 seconds, it now lasts 3 minutes 49
seconds.

That would be no problem if I did not have to add an audio track that
lasts 3 minutes 46 seconds. Hence, it keeps drifting out of sync.

I have already searched the net for several whole days and cannot find
an answer.

Can any of you guide me on how to make ffmpeg create the video EXACT to
the millisecond?

Thanks, and I appreciate your time.

Debianero



Re: Debian version

2021-11-09 Thread Linux-Fan

Koler, Nethanel writes:


Hi
I am Nati. I am trying to find a variable configured in the linux-headers 
that can tell me which Debian I am on.


Any reason for not using /etc/os-release instead?
IIRC this one is available on RHEL _and_ Debian systems.
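
Since /etc/os-release consists of shell-compatible variable assignments,
it can even be sourced directly; a sketch (output as it would appear on a
Bullseye system):

  $ . /etc/os-release && echo "$ID $VERSION_ID"
  debian 11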


For example in RedHat
After downloading the linux-headers
I can go to cd /usr/src/kernels//include/generated/uapi/linux
There there is a file called version.h
Where they define this variables

#define KERNEL_VERSION(a,b,c) (((a) << 16) + ((b) << 8) + (c))
#define RHEL_MAJOR 8
#define RHEL_MINOR 4

When I tried the same with Debian I got to a dead end
Can you please help me find something similar in the linux-headers for  
Debian?


I tried

$ grep -RF Debian /usr/src

and got a few hits, among those are

| .../include/generated/autoconf.h:#define CONFIG_CC_VERSION_TEXT "gcc-10 (Debian 
10.2.1-6) 10.2.1 20210110"
| .../include/generated/compile.h:#define LINUX_COMPILER "gcc-10 (Debian 10.2.1-6) 
10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2"
| .../include/generated/compile.h:#define UTS_VERSION "#1 SMP Debian 5.10.46-5 
(2021-09-23)"
| .../include/generated/package.h:#define LINUX_PACKAGE_ID " Debian 5.10.46-5"

If your goal is to evaluate them programmatically at compile time of a C 
project, this might not be ideal though, because all of the values I found 
seem to be strings.


HTH
Linux-Fan

öö


pgpliwZzHZIW3.pgp
Description: PGP signature


Re: Is "Debian desktop environment" identical to "GNOME" upon installation?

2021-11-05 Thread Linux-Fan

Brian writes:


On Fri 05 Nov 2021 at 17:02:01 +, Andrew M.A. Cater wrote:

> On Fri, Nov 05, 2021 at 04:04:17PM +, Brian wrote:
> > On Fri 05 Nov 2021 at 13:43:29 +, Tixy wrote:
> >
> > > On Fri, 2021-11-05 at 07:47 -0500, Nicholas Geovanis wrote:
> > > > On Fri, Nov 5, 2021, 7:21 AM Greg Wooledge  wrote:


[...]


> > > > > With the "Live" installers, the default is different.
> > > >
> > > > And if I may ask: Why is it different? If there is a reason or two.
> > >
> > > Guessing here... because a live version already has a desktop
> > > environment on the disk, so it make sense to default to installing that
> > > one. E.g. if you choose, say, the XFCE live iso, it would default to
> > > XFCE not Gnome. Would be a bit perverse otherwise.


AFAICT it does not only "default" to the DE contained within the live system 
but rather does not even show the choice screen, because it installs by 
copying/extracting the live system's data and hence the DE (and other 
software choices) are already set. See below.



> > I rather thought the Live images contained a copy of d-i but am not
> > going to download an ISO to refresh my menory. I will offer
> >
> >   https://live-team.pages.debian.net/live-manual/html/live-manual/customizing-installer.en.html

> >
> > I'd see it as a bit unusual for this copy to differ from the regular d-i.



> A few things:


[...]


> 3. The live CDs are designed so that you download the one with the desktop
> you want. The "standard" one installs a minimum Debian with standard  
> packages and no gui.


OK, but the relevance to the OP's issue is obscure. Does it need to be
taken into account for the issue raised?


TL;DR: Live Installers do not present the DE selection screen hence it  
should not relate to the OP.



> 4. Live CD install is not guaranteed to be the same as the traditional
> Debian installer. Calamares is very significantly different. Live CD/DVD is
> maintained by a different live CD team and not by the Debian media team.

Ah! Calamares. It alters the way tasksel behaves in d-i? Heaven help us!
Is that what is meant when it is claimed by Greg Wooledge:

 With the "Live" installers, the default is different"?

Calamares introduces a new ball game?


[...]

Let me try to clarify this a little bit from my experience as an "advanced"  
user :)


Calamares is an entirely separate installer that can be invoked from within  
a running (live) system. It is _one_ way to install Debian from a live  
system but it is not the only one. It is worth stressing that there is _no_  
interaction between Calamares and d-i and that they present different 
screens. Behind the scenes, Calamares invokes an `rsync` to copy the data  
from within the live system to the target.


For a typical session in Calamares, see [1] for an example from Debian Buster.

Now d-i is separate in that it does not run from within the live system but  
has to be invoked _instead_ of the respective live system from the boot  
menu. It is, however, contained on the same ISO image/DVD together with the  
live system's data. The d-i variant used on live systems does not ask for  
the choice of DE because its software selection cannot be customized like in  
the regular d-i. Instead, it simply copies the data from the live file  
system to the target drive (? I am not exactly sure on this one ?). See [2] 
for an example from Debian Buster and note the absence of the tasksel  
screen.


Now the regular d-i shows the tasksel screen and asks for which DE to  
install. See [3] for an example from the Debian Bullseye Alpha 3 installer.


Here are some "real screenshots" :)

[1] 
https://masysma.lima-city.de/37/debian_i386_installation_reports_att/m2-dl1080-i386-lxde-calamares.xhtml

[2] 
https://masysma.lima-city.de/37/debian_i386_installation_reports_att/m2-dl1080-i386-lxde-di.xhtml

[3] 
https://masysma.lima-city.de/37/debian_i386_installation_reports_att/m6-d11a3-i386-netinst.xhtml

HTH
Linux-Fan

öö


pgpK1tyhC5AEL.pgp
Description: PGP signature


Re: LXQT desktop environment hangs?

2021-10-31 Thread Linux-Fan

kaye n writes:


SORRY I SHOULD HAVE SENT IT TO DEBIAN LIST


Never mind, I am taking this as an OK to post my answer to the list :)


On Sun, Oct 31, 2021 at 9:16 PM kaye n  wrote:


   On Fri, Oct 29, 2021 at 10:35 PM Linux-Fan  wrote:


[...]


 I'd suggest to try debugging _why_ Firefox has become so much slower as
 per the upgrade. It is unlikely that it is the new Firefox version being
 slower than the old ones (although it may be possible). What I'd rather
 guess is that something different changed in the system which now causes
 it to run slower.

 What GPU are you using? How is the RAM load during the slow phases? Can
 you close individual tabs to stop the slowness i.e. can you make out that
 the high load is caused by a specific website or a specific type of
 website. It


[...]


    What GPU are you using?
     Device-1: Intel 82945G/GZ Integrated Graphics driver: i915 v: kernel
     Display: x11 server: X.Org 1.20.11 driver: loaded: intel
     unloaded: fbdev,modesetting,vesa resolution: 1366x768~60Hz
     OpenGL: renderer: Mesa DRI Intel 945G v: 1.4 Mesa 20.3.5

   Can you close individual tabs to stop the slowness?
   Almost always no.


Actually, this looks pretty OK to me. I conclude that it is most likely not 
a GPU-related issue, although I do not know what else might be causing the 
slowness. Others suggested checking the CPU load with `htop`. This might 
indeed be a good next thing to try?


HTH
Linux-Fan

öö


pgpZXeQQQzcJd.pgp
Description: PGP signature


Re: LXQT desktop environment hangs?

2021-10-29 Thread Linux-Fan

kaye n writes:


Hi Friends!

LXQT says,

It will not get in your way. It will not hang or slow down your system.

However, I've experienced the opposite, especially when I'm using Firefox  
browser and attempt to open several pages in different tabs.


I don't think the problem lies with Firefox though because I didn't have that  
problem on Debian 10 XFCE.  I am now running Debian 11 LXQT.


Note that from Debian 10 XFCE to Debian 11 LXQT a lot of things changed. I  
am pretty sure that it is not LXQT's fault here especially if the problem  
only occurs while Firefox is running.


I'd suggest trying to debug _why_ Firefox has become so much slower with 
the upgrade. It is unlikely that the new Firefox version itself is slower 
than the old one (although it may be possible). What I'd rather guess is 
that something else changed in the system which now causes it to run 
slower.


What GPU are you using? How is the RAM load during the slow phases? Can you 
close individual tabs to stop the slowness, i.e. can you make out that the 
high load is caused by a specific website or a specific type of website? It 
could be possible that during the upgrade a combination of driver+GUI was 
installed that is no longer compatible with your GPU. This would typically 
manifest in high CPU load for things like video playback or animations of 
all kinds, because the system falls back to "software rendering" in case 
the GPU is not supported properly.



I don't want to use XFCE again just because I want to try another.

Could LXDE be better?  Is there a way I can install LXDE to my existing  
Debian 11 running LXQT?  Or should I just fresh install Debian 11 with LXDE  
as the default desktop environment?


Adding LXDE to your existing installation should be as simple as

# apt-get install task-lxde-desktop

Watch out if APT displays any conflicts with your existing packages. During 
login, you should then be able to select different types of "sessions", 
including LXDE and LXQT ones.
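
To check beforehand which session types are installed, the session files
can simply be listed (a sketch):

  $ ls /usr/share/xsessions/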


From experiments on slow (old, i386) machines, my experience was that LXDE 
was indeed a little bit faster than LXQT, but the difference is probably 
not noticeable on anything but >13 year old hardware :)



Thank you for your time.


HTH and YMMV
Linux-Fan

öö

OT: If you want something that _really_ does not get in the way or slow 
things down, consider switching to any lightweight window manager like 
Fluxbox, IceWM, i3 etc. These are known to be _very_ responsive even on old 
machines. Of course, this will not solve slowness caused by the software 
you use. Web browsers will remain bulky and large, as many sites do not 
work with the lightweight browsers anymore...


pgpMLA9XQ8nad.pgp
Description: PGP signature


Re: Leibniz' "best of all possible worlds" ...

2021-10-24 Thread Linux-Fan

Ricardo C. Lopez writes:


 Use case: At work (a school) they use Windows 10 and IT is kind of
fundamentalist about it. So I am thinking of "just" using the RAM and
the processor in their machine. I am thinking of:



 * running Debian Live from an external USB attached DVD player


May work, but it is going to run slowly. While live images of today are 
still ISO files, running them from actual DVDs is slow. It will need 
dedicated preparation time before each use to start up such live images.



 * via Qemu, which, of course, I will have to install and for which I
may need admin rights, and


I am not sure whether QEMU on Windows supports virtualization acceleration 
yet. If yes, this route might be quite feasible. If no, you could consider 
using one of the other solutions like Microsoft Hyper-V, VirtualBox or 
VMWare Player. All of them need admin rights. Also: where would you store 
the VM's HDD image?
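
A sketch of the QEMU route, assuming QEMU for Windows is installed and
the "Windows Hypervisor Platform" feature is enabled for acceleration:

  qemu-system-x86_64 -m 2048 -accel whpx -cdrom debian-live.iso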



 * attached an external pen drive and/or microdrive with whatever code I
need for my business.


This is what I would probably try first. Especially consider these two  
variants:


- Live system from external pen drive (16 GB stick or similar)
  You might consider adding a "persistence" partition such that
  work results can be saved (see the sketch after this list).

- Installed system from external hard drive or large pen drive
  (> 32 GB). This allows for the most "natural" feel of Linux because
  all settings will be saved persistently by default.
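
A sketch for the persistence variant, assuming a spare partition
/dev/sdX3 on the live medium (the file system label "persistence" is
mandatory):

  # mkfs.ext4 -L persistence /dev/sdX3
  # mount /dev/sdX3 /mnt
  # echo "/ union" > /mnt/persistence.conf
  # umount /mnt

The live system then has to be booted with the additional kernel
parameter `persistence`.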


 Is such an environment possible? What kinds of technical problems do
you foresee with such setup?, probably with the BIOS? Any tips you
would share or any other way of doing such thing (I don't like to use
Windows, but at work you must use it)?


Problem I foresee: Most likely, your admins have deactivated booting from  
external devices.


You could also tackle this with a networked approach: Given that your 
Windows PCs are most likely already properly connected to a common local 
network, you could make use of that by providing a central "Linux 
server" - if your admins insist, it could be a Windows host with Linux in 
VMs. Then you could access it from any Windows host, either with a 
virtualization client software (VMWare vSphere?) or through the SSH and 
VNC protocols.


HTH and YMMV
Linux-Fan

öö


pgpDksUqm6OCc.pgp
Description: PGP signature


Re: How to install official AMDGPU linux driver on Debian 11?

2021-10-21 Thread Linux-Fan

Markos writes:


Em 17-10-2021 19:47, piorunz escreveu:

On 17/10/2021 22:27, Markos wrote:

Hi,

Please, could someone suggest a tutorial (for a basic user) on how to
install the driver for the graphics card for a laptop Lenovo IdeaPad
S145 with AMD Ryzen™ 5 3500U and AMD Radeon RX Vega 8 running Debian 11
(Bullseye).

I found a more complete tutorial just for Stretch and Buster:

https://wiki.debian.org/AMDGPUDriverOnStretchAndBuster2

What are the possible risks of problems using these AMD drivers?


[...]


No reply so far.

So, it seems that no one is interested in this question. :-(

Or none managed to do this installation, yet.


[...]

Before your initial post, there was already some discussion about a very  
similar case in the following thread:

https://lists.debian.org/debian-user/2021/10/msg00700.html

Summary: Just following AMD's instructions may lead to compile errors
(see https://lists.debian.org/debian-user/2021/10/msg00738.html)
whereas it worked for my GPU and downloaded driver
(see https://lists.debian.org/debian-user/2021/10/msg00738.html).

I am interested in these questions of yours, but unfortunately cannot 
provide much assistance beyond what I already wrote in the other thread.


HTH
Linux-Fan

öö





pgpbWc1g5EV6Y.pgp
Description: PGP signature


Re: AMD OpenCL support

2021-10-18 Thread Linux-Fan

piorunz writes:


On 17/10/2021 23:31, Linux-Fan wrote:

Back then, I got it from
https://www.amd.com/en/support/professional-graphics/radeon-pro/radeon-pro-w5000-series/radeon-pro-w5500

under
"Radeon(TM) Pro Software for Enterprise on Ubuntu 20.04.2" and the
download still seems to point to a file with the same SHA-256 sum.

It could be worth trying the exact same version that I used?


Thanks for your reply.

I have a Radeon 6900XT, which is a different type of card. Not sure if
https://www.amd.com/en/support/professional-graphics/radeon-pro/radeon-pro-w5000-series/radeon-pro-w5500

will work for me?


I use a Radeon Pro W5500.

AMD's website leads me to
https://www.amd.com/de/support/graphics/amd-radeon-6000-series/amd-radeon-6900-series/amd-radeon-rx-6900-xt
for your GPU and proposes to download "Radeon(TM) Software for Linux Driver  
for Ubuntu 20.04.3", which seems to be a different TGZ than the one I am using  
(it has 1290604 vs. 1292797).


I cannot find the list of compatible GPUs for the particular package I have  
downloaded; the documentation only tells me about "Stack Variants". Quoting  
from it (amdgpu graphics and compute stack 21.20 from the TGZ with 1292797):


| There are two major stack variants available for installation:
|
|  * Pro: recommended for use with Radeon Pro graphics products.
|  * All-Open: recommended for use with consumer products.

Hence AMD clearly recommends using the "amdgpu-pro" variant only with the  
"Radeon Pro" graphics cards. Whether that also means that the driver is  
incompatible with "consumer products", I do not know.


Searching online yields these links:

* https://wiki.debian.org/AMDGPUDriverOnStretchAndBuster2
* https://www.amd.com/en/support/kb/release-notes/rn-amdgpu-unified-linux-21-20

The second page indicates that "AMD Radeon™ RX 6900/6800/6700 Series  
Graphics" are compatible with the "Radeon(TM) Software for Linux(R) 21.20".  
Now whether that document is the correct one to correspond with my  
downloaded TGZ I cannot really tell. But if they match, it may as well  
indicate that it is possible to use that driver with your GPU, too.


HTH
Linux-Fan

öö

[...]


pgpQdJA4EUpZ6.pgp
Description: PGP signature


Re: AMD OpenCL support

2021-10-17 Thread Linux-Fan

piorunz writes:


On 17/10/2021 21:50, Linux-Fan wrote:


There, the suggested fix is to switch to amdgpu-pro (which seems to
remedy the issue but not entirely...) which led me to try the `.deb`  
files from AMD. I downloaded
`amdgpu-pro-21.20-1292797-ubuntu-20.04.tar.xz` and it seems to have
installed just fine.


Can't install that on my Debian Bullseye. I have a clean system with
nothing modified or added from outside of Debian.

Result:
sudo ./amdgpu-install --opencl=legacy,rocr
(...)
Loading new amdgpu-5.11.19.98-1290604 DKMS files...

   ^^^

Our driver versions seem to differ. I have 5.11.5.30-1292797 rather than  
5.11.19.98-1290604. It has the following SHA-256 sum:


ef242adeaa84619cea4a51a2791553a7a7904448dde81159ee2128221efe8e50
amdgpu-pro-21.20-1292797-ubuntu-20.04.tar.xz

Back then, I got it from
https://www.amd.com/en/support/professional-graphics/radeon-pro/radeon-pro-w5000-series/radeon-pro-w5500
under "Radeon(TM) Pro Software for Enterprise on Ubuntu 20.04.2" and the download  
still seems to point to a file with the same SHA-256 sum.
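
To check a fresh download against this sum, something along these lines  
should do (a sketch, assuming the archive sits in the current directory):

# verify the downloaded archive against the known-good SHA-256 sum
echo "ef242adeaa84619cea4a51a2791553a7a7904448dde81159ee2128221efe8e50  amdgpu-pro-21.20-1292797-ubuntu-20.04.tar.xz" | sha256sum -c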


It could be worth trying the exact same version that I used?

[...]


I made various attempts:
sudo ./amdgpu-install --opencl=rocr --headless
sudo ./amdgpu-install
sudo ./amdgpu-install --opencl=rocr

Same result each time: something failing while compiling amdgpu-dkms.


I used ./amdgpu-pro-install without additional arguments. It is a symlink  
to the same script, but the script executes different code when invoked with  
`-pro` in its name. Not sure if it will make a difference, though.


After successful installation with `./amdgpu-pro-install`, I installed  
additional packages from the repository added by `amdgpu-pro-install` in  
order to enable the OpenCL features.



As a result, I should be running the proprietary driver now and thus
have OpenCL running -- I only ever tested it with a demo application,
though...


I'd love that, but it fails on my system. What system do you have? How
did you do it?


[...]

Debian 11 Bullseye. Before writing this post, I was still on kernel
5.10.0-8-amd64, but I just upgraded and the DKMS compiled successfully for  
the new 5.10.0-9-amd64.


Differences between our systems seem to be as follows:

- Minor version difference in proprietary drivers
- I installed by using the symlink with `-pro` in its name

I might add that I have installed a bunch of firmware from non-free and am  
running ZFS on Linux as provided by non-free package `zfs-dkms`.


HTH
Linux-Fan

öö


pgpKCdX0RatVL.pgp
Description: PGP signature


Re: AMD OpenCL support

2021-10-17 Thread Linux-Fan

piorunz writes:


On 17/10/2021 09:00, didier gaumet wrote:


[...]


Yes I have that mesa version of OpenCL installed. Unfortunately, this
version is too old and not recognized. I need OpenCL 1.2 at least I
think. clinfo says, among many other things:
  Device Version  OpenCL 1.1 Mesa 20.3.5
  Driver Version  20.3.5
  Device OpenCL C Version OpenCL C 1.1


Perhaps your claim of not having OpenCL support is erroneous, and what
actually happens is that you have incomplete/insufficient support for your
use case: a typical example is Darktable not having OpenCL image
support, this requiring a more recent OpenCL implementation than the Mesa
one.

Then you would probably have to either:
- revert to use the proprietary amdgpu-pro driver (including an AMD
ICD) instead of the free amdgpu one



https://www.amd.com/en/support/kb/faq/amdgpu-installation


This procedure requires downloading .deb drivers from
https://support.amd.com/en-us/download. The only supported distros are
Ubuntu 18.04.5 HWE and Ubuntu 20.04.3. They will most likely fail on Debian.


[...]

Hello,

I happened to have some issues wrt. a bug similar to this:
https://bugs.freedesktop.org/show_bug.cgi?id=111481

There, the suggested fix is to switch to amdgpu-pro (which seems to remedy  
the issue but not entirely...) which led me to try the `.deb` files from  
AMD. I downloaded `amdgpu-pro-21.20-1292797-ubuntu-20.04.tar.xz` and it  
seems to have installed just fine.


As a result, I should be running the proprietary driver now and thus have  
OpenCL running -- I only ever tested it with a demo application, though...


Excerpt from clinfo:

~~~
 Platform Name:  AMD Accelerated Parallel Processing
Number of devices:   1
 Device Type:CL_DEVICE_TYPE_GPU
 Device OpenCL C version:OpenCL C 2.0
 Driver version: 3261.0 (HSA1.1,LC)
 Profile:FULL_PROFILE
 Version:OpenCL 2.0
~~~

Btw., I do not seem to have a `Device Version` string in there?

~~~
# dpkg -l | grep opencl | cut -c -90
ii  amdgpu-pro-rocr-opencl21.20-1292797   
ii  ocl-icd-libopencl1:amd64  2.2.14-2
ii  ocl-icd-libopencl1:i386   2.2.14-2
ii  ocl-icd-libopencl1-amdgpu-pro:amd64   21.20-1292797   
ii  ocl-icd-libopencl1-amdgpu-pro-dev:amd64   21.20-1292797   
ii  ocl-icd-opencl-dev:amd64  2.2.14-2
ii  opencl-base   1.2-4.4.0.117   
ii  opencl-c-headers  3.0~2020.12.18-1
ii  opencl-clhpp-headers  3.0~2.0.13-1
ii  opencl-headers3.0~2020.12.18-1
ii  opencl-intel-cpu  1.2-4.4.0.117   
ii  opencl-orca-amdgpu-pro-icd:amd64  21.20-1292797   
ii  opencl-rocr-amdgpu-pro:amd64  21.20-1292797   
ii  opencl-rocr-amdgpu-pro-dev:amd64  21.20-1292797

~~~

To summarize: It might be worth trying the Ubuntu .debs out on Debian.
Although it's not a "clean" solution by any means, it might "just work"?

HTH
Linux-Fan

öö


pgpyhukeY7BBf.pgp
Description: PGP signature


Re: Disk partitioning phase of installation

2021-10-16 Thread Linux-Fan

Richard Owlett writes:


I routinely place /home on its own partition.
Its structure resembles:
/home/richard
├── Desktop
├── Documents
├── Downloads
├── Notebooks
└── Pictures

My questions:
1. Can I have /home/richard/Downloads be on its own partition?


Yes. The only thing to consider is that they are mounted in the correct order,  
i.e. first /home/richard, then /home/richard/Downloads.
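
A sketch of the corresponding /etc/fstab lines (the UUIDs are placeholders);  
listing the outer filesystem first keeps the order correct:

# outer filesystem first, nested mountpoint second
UUID=1111aaaa-2222-3333-4444-555566667777  /home/richard            ext4  defaults  0  2
UUID=8888bbbb-9999-cccc-dddd-eeeeffff0000  /home/richard/Downloads  ext4  defaults  0  2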


Alternatively, you could mount them at independent times by using a  
mountpoint outside of /home/richard (e.g. /media/richards_downloads) and  
having `Downloads` as a symbolic link pointing to the mountpoint of choice  
(`ln -s /media/richards_downloads Downloads`).



2. How could I have found the answer?


By trying it out :) If you do it wrongly, it yields "mountpoint does not  
exist" or similar. If you do it correctly, it "just works".


HTH
Linux-Fan

[...]

öö


pgpAlDLwrDZxM.pgp
Description: PGP signature


Re: unknown perspective

2021-10-07 Thread Linux - Junior Polegato

Hello!

        It is something to always keep thinking about: technology transforms 
relationships, and technology companies are now offering infrastructure as a 
service, given the need for everyone to be online and mobile, with access in 
the palm of the hand. Everything I express here is purely my own view and opinion.


        So today a business owner, whatever the size of the business, has 
access to this and no longer wants server/data infrastructure inside the 
company, an air-conditioned room and people to take care of it - only 
people/third parties taking care of the communication link. But we are still 
in the transition phase for large companies, which will keep their own 
infrastructure/staff for a while yet because of the investment already made.


        Those in the field will then hit the street, but hit the street to 
work (sometimes from home): interconnecting people and companies (sometimes 
ASes), configuring and deploying communication and monitoring systems, IoT, 
a bit of telephony, which still persists - but the old infrastructure model 
is being migrated to virtual machines allocated by these giants.


        These giants already offer more: what they call a kiosk, where a user 
has their package of applications and storage and does nothing there beyond 
those services, everything running on a remote machine/session, really 
needing only a connection and a screen (+touch/mouse/keyboard), often 
regardless of the OS. See the Raspberry Pis entering the market embedded in 
a keyboard that plugs into a monitor/TV and accesses the kiosk; there are 
companies renting out Raspberry Pis with a preconfigured VPN, similar to the 
TVs mentioned below.


        I even see companies today offering a virtual gateway/firewall, that 
is, you "establish" a VPN from your companies/branches to their firewall, 
plus VPNs to the data center of the ERP, of the kiosk, of other 
servers/services; individual employees do the same from wherever they are. 
That company then takes care of all the security, monitoring, reports, 
control of users and working hours, what they may access, which ports stay 
open to the public, which IPs, regions/cities/countries, blocking rules, 
among other services.


        In conclusion: in the near future I see infrastructure specialists 
working only inside these infrastructure-technology rental companies and no 
longer locally inside each company.


--

[]'s

Junior Polegato



On 07/10/2021 12:35, luigui wrote:

Good morning to the esteemed members of this community.
I have been a Linux user for some 5, maybe 6 years.
The question I bring you falls outside the scope of this list.
Still, I bring it for those interested in sharing this inquiry, in the 
hope of shedding some light on my doubt.
As we know, Amazon, through EC2, is transforming the infrastructure 
world by offering it as a service.
So, how will a large part of the knowledge acquired along this journey, 
such as server configuration, installation methods and a whole range of 
system administration tools, fit into the new landscape that is taking 
shape today?
I would point out that the same high-tech company already has a project 
underway to manufacture its own televisions, which I even think may, in 
due time, replace a large share of notebooks, desktops etc. as far as end 
users are concerned, since we will be able to use such TVs to access the 
Internet and perhaps have only a terminal (keyboard) as the single tool 
needed for new administrative tasks.
Finally, if I ventured to bring this question to you and you find it 
irrelevant, I hope not to become a laughing-stock; please just disregard it.

Best regards to all,
Luiz Carlos




Re: GNOME - Graphical adjustments

2021-10-06 Thread Linux - Junior Polegato

Hello!

        You will find the font/DPI (scaling factor) adjustment under "Tweaks" 
in GNOME, package gnome-tweaks. As for the VPN window not fitting: you have 
to adjust the resolution, since that option in Tweaks only changes the font 
rendering size. You can use [Alt]+[F7]+[move mouse] to move the window if 
the resolution is not sufficient.


--

[]'s

Junior Polegato



On 06/10/2021 06:31, Leandro Silva wrote:

Greetings to the community!
Recently I went back to using Debian for work, but I have been puzzled 
by the lack of a manager to refine/improve the X display - for example 
DPI, fonts... etc.

I only found the resolution setting.
Before going off and installing whatever people recommend on the 
Internet, I would like to know whether GNOME on Debian 11 ships bare 
regarding these fine adjustments, or whether I am treading on eggshells, 
and what the best path is.
I am using it on a Dell Inspiron notebook, normally connected to a 
second monitor.
The other day I had to work without that monitor and, to my surprise, 
could not see the whole window of the VPN manager.

Thanks in advance for any help!
Good luck!
[]'s




Re: Running a program in graphical mode at Debian startup

2021-10-06 Thread Linux - Junior Polegato
Remembering that in this case the .desktop file has to be in 
/usr/share/applications/ and not in the ".local" of one specific user. 
Now, I cannot say whether it is valid to place it in 
/usr/local/share/applications/ to indicate that it is a "local" file 
from outside the standard system - give it a try.

[]'s, Junior

On 06/10/2021 04:41, Artur Bernardo Mallmann wrote:
Just to complement: if our friend Amarildo wants to automate 
autostarting under any graphical interface, he just has to put his 
program's .desktop files inside the /etc/xdg/autostart/ directory to 
have it globally for all users, or in the ${HOME}/.config/autostart 
folder for a single user only.

Regards, Artur

On Tue, 5 Oct 2021 at 17:03, Daniel Venturini 
<danielventurini...@gmail.com> wrote:

Nice one, Junior. I believe that creating a .desktop file is the 
best option. However, I would save the .desktop file in the folder 
/home//.local/share/applications/
(~/.local/share/applications/).
The icons created by the system I leave in 
/usr/share/applications/ (the folder you mentioned), but the ones 
I create myself I like to save in the folder I mentioned above.

Regards.

On Tue, 5 Oct 2021 at 09:13, Linux - Junior Polegato 
<li...@juniorpolegato.com.br> wrote:

Hello!

        Speaking of GNOME, the ideal would be to create a 
"«program_of_the_century».desktop" file and then drop it 
into "/usr/share/applications/". The tip would be to copy 
one that is already there, for example 
"/usr/share/applications/org.gnome.Terminal.desktop", and 
adapt it for your program, which should usually live 
in "/usr/bin/"; choose an icon, a name, translations - in 
short, whatever you find relevant.

        Then, when GNOME restarts, your program will 
appear in the menus and in the application searches; it 
can be placed in the favorites, and also in the startup 
after login under "Applications -> System Tools -> 
Tweaks", or simply by searching for the "Tweaks" 
application. Inside that application, go to the "Startup 
Applications" tab, click "+" and locate your application, 
which now has an icon and a name. Just restart and you 
will see it open magically.

-- 


[]'s

Junior Polegato



On 04/10/2021 13:43, Amarildo Machoski wrote:


Good afternoon,

First of all I would like to say thank you for letting me 
participate; I hope I can also contribute.

As this is the first time I am sending a question, I do 
not know whether I am doing it correctly by sending this e-mail.

I am coming back to using Linux and chose Debian to use 
here at the company. I have some knowledge of Linux, but 
from a long time ago.

1) I installed version 11 on a PC and run only Linux 
on this PC.

2) I am a Pascal programmer; I developed a program and 
would like it to open on the screen right after Debian 
starts.

I found something on the Internet talking about a

"startup session", but I could not get to this so-called 
session. If this is the way, I would need a step-by-step 
guide.

Especially because I simply installed Debian, and I do not 
know whether something still needs to be installed to have 
this session available.

I apologize if I could not convey my question clearly.

If anyone can help me, I will be very grateful.





Re: Running a program in graphical mode at Debian startup

2021-10-05 Thread Linux - Junior Polegato

Hello!

        Speaking of GNOME, the ideal would be to create a 
"«program_of_the_century».desktop" file and then drop it into 
"/usr/share/applications/". The tip would be to copy one that is already 
there, for example "/usr/share/applications/org.gnome.Terminal.desktop", 
and adapt it for your program, which should usually live in "/usr/bin/"; 
choose an icon, a name, translations - in short, whatever you find relevant.


        Then, when GNOME restarts, your program will appear in the menus 
and in the application searches; it can be placed in the favorites, and 
also in the startup after login under "Applications -> System Tools -> 
Tweaks", or simply by searching for the "Tweaks" application. Inside that 
application, go to the "Startup Applications" tab, click "+" and locate 
your application, which now has an icon and a name. Just restart and you 
will see it open magically.
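
        A minimal sketch of such a .desktop file (the name, Exec path and 
icon below are made up for illustration):

[Desktop Entry]
Type=Application
Name=Program of the Century
Comment=Hypothetical self-written program
Exec=/usr/bin/program-of-the-century
Icon=utilities-terminal
Terminal=false
Categories=Utility;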


--

[]'s

Junior Polegato



On 04/10/2021 13:43, Amarildo Machoski wrote:


Good afternoon,

First of all I would like to say thank you for letting me participate; 
I hope I can also contribute.


As this is the first time I am sending a question, I do not know whether 
I am doing it correctly by sending this e-mail.


I am coming back to using Linux and chose Debian to use here at the 
company. I have some knowledge of Linux, but from a long time ago.


1) I installed version 11 on a PC and run only Linux on this PC.

2) I am a Pascal programmer; I developed a program and would like it to 
open on the screen right after Debian starts.


I found something on the Internet talking about a

"startup session", but I could not get to this so-called session. If this 
is the way, I would need a step-by-step guide.


Especially because I simply installed Debian, and I do not know whether 
something still needs to be installed to have this session available.


I apologize if I could not convey my question clearly.

If anyone can help me, I will be very grateful.





Re: New mdadm RAID1 gets renamed from md3 to md127 after each reboot

2021-10-01 Thread Linux-Fan

Reiner Buehl writes:

I created a new mdadm RAID 1 as /dev/md3. But after each reboot, it gets  
activated as md127. How can I fix this - preferably without haveing to delete  
the whole array again...

The array is defined like this in /etc/mdadm:

ARRAY /dev/md3 metadata=1.2 level=raid1 num-devices=1 UUID=41e0a87f:22a2205f:0187c73d:d8ffefea


[...]

I have observed this in the past, too, and do not know how to "fix" it.

Why is it necessary for the volume to appear under /dev/md3? Might it be  
possible to use its UUID instead, i.e. check the output of


ls -l /dev/disk/by-uuid

to find out if your md3/md127 can be accessed by a unique ID. You could  
then point the entries in /etc/fstab to the UUID rather than the "unstable"  
device name.
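
For illustration, such an entry could look like this (UUID and mountpoint  
are made up; use the values from /dev/disk/by-uuid and your actual mountpoint):

# /etc/fstab: mount by filesystem UUID instead of the md3/md127 device name
UUID=0f3cae6c-1234-5678-9abc-def012345678  /srv/raid1  ext4  defaults  0  2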


HTH and YMMV
Linux-Fan

öö


pgpXLRom6yQiu.pgp
Description: PGP signature


Re: usb audio interface recommendation

2021-09-29 Thread Linux-Fan

Russell L. Harris writes:


Needed:  a USB audio interface which "just works" with Debian 9, 10,
11 on i386 and amd64 desktop machines.   The newest of my machines is
several years years old and has both black and blue USB ports.


I am using an SSL 2 here:
https://www.solidstatelogic.com/products/ssl2

Tested successfully with Debian 10 amd64 and Debian 11 amd64 each with ALSA  
+ PulseAudio non-professional audio. In case you consider buying it, I might  
be able to do a basic test with a Debian 11 i386, too.


Caveat: I have found the interface to only be recognized properly if I  
attach it _after_ PulseAudio has already started up. Hence, I have it  
disconnected by default and upon needing it, first start `pavucontrol` and  
only afterwards attach the interface.


Btw.: I saw you asked about the Motu M2 earlier
(https://lists.debian.org/debian-user/2021/09/msg00958.html). Was there any  
progress in getting it to run properly? A cursory internet search suggests  
that there were problems wrt. old kernels and PulseAudio. Additionally, some  
tuning to reduce kernel latency might be needed? See  
https://panther.kapsi.fi/posts/2020-02-02_motu_m4 for a summary.


Back when I searched for audio interfaces, I had also considered the Zoom  
UAC-2 (https://www.zoom.co.jp/sites/default/files/products/downloads/pdfs/E_UAC-2.pdf).  
Reviews seemed to indicate acceptable Linux compatibility, but I do not have  
any first-hand experience with it.


HTH
Linux-Fan
*who uses the SSL 2 for video conferencing*

öö

[...]


pgpKahUviJy2M.pgp
Description: PGP signature


Re: Privacy and defamation of character on Debian public forums

2021-09-26 Thread Linux-Fan

rhkra...@gmail.com writes:


On Sunday, September 26, 2021 08:45:13 AM Greg Wooledge wrote:
> On Sun, Sep 26, 2021 at 07:00:06AM -0400, rhkra...@gmail.com wrote:
> > Well, to be fair to Google, the first two or three hits did show FAOD,
> > but without explaining what it meant -- those sites that you have to
> > actually go to to find the meaning.  I skipped over those to find a hit
> > that actually included the meaning, and the first (and next) one(s) I
> > found were for FOAD, and I didn't notice the difference.
>
> Fascinating.  My first page results from Google were all of this form:
>
>   What are long-chain fatty acid oxidation disorders (LC-FAOD)?
>   https://www.faodinfocus.com › learn-about-lc-faod
>   LC-FAOD are rare, genetic metabolic disorders that prevent the body from
> breaking down long-chain fatty acids into energy during metabolism.
>
> This is obviously wrong in this context.
>
> The entire first page consisted solely of results like this, so I didn't
> even bother going to page 2.

Hmm, I don't remember the exact google query I tried, I might have done
something like [define: FAOD slang] or something similar (maybe "acronym",
but maybe more likely "slang").


Trying to find out what the fuss was about, my first query was

FAOD urban dictionary

which yielded

1. https://www.urbandictionary.com/define.php?term=Faod (not helpful)
2. https://www.urbandictionary.com/define.php?term=foad

Like rhkramer, I did not notice that the letters were intermingled.

When entering

FAOD

I get results similar to Greg's. When entering

FAOD meaning

(as suggested by Google for related searches), I get this among the top  
results:


https://www.acronymfinder.com/Slang/FAOD.html

That, finally, explains it.

Most of the time, I try to stick to acronyms that are found in "The Jargon  
File" (package `jargon`) because these seem to have a pretty agreed-upon  
meaning :)


HTH and YMMV
Linux-Fan

öö


pgpjxsIFEVmwU.pgp
Description: PGP signature


Re: write only storage.

2021-09-21 Thread Linux-Fan

Marco Möller writes:


On 21.09.21 17:53, Tim Woodall wrote:

I would like to have some WORM memory for my backups. At the moment
they're copied to an archive machine using a chrooted unprivileged user
and then moved via a cron job so that that user cannot delete them
(other than during a short window).

My thought was to use a raspberry-pi4 to provide a USB mass storage
device that is modified to not permit deleting. If the pi4 is not
accessible via the network then other than bugs in the mass storage API
it should be impossible to delete things without physical access to the
pi.


What about the overall storage size: Assume an adversary might corrupt your  
local data and then invoke the backup procedure in an endless loop in an  
attempt to reach the limit of the "isolated" pi's underlying storage. You  
might need a way to ensure that the influx of data is somehow rate-limited.



Before I start reinventing the wheel, does anyone know of anything
similar to this already in existence?


I know of three schemes trying to deal with the situation:

(a) Have a pull-based or append-only scheme implemented in software.
Borg's append-only mode and your current method fall into that category.
I am using a variant of that approach, too: Have a backup server pull
the data off my local machine at irregular intervals.

(b) Use physically write-once media like CD-R/DVD-R/BD-R. I *very rarely*
backup the most important data to DVDs (no BD writer here and a single
one would not provide enought redundancy to rely on it in case of
need...).

(c) Use a media-rotation scheme with enough media to cover the interval you
need to notice the adversary's doings. E.g. you could use seven hard
drives all with redundant copies of your data and each day chose
the next drive to update with the "current data" by a clear schedule,
i.e. "Monday" drive on Mondays, "Tuesday" drive on Tuesdays etc.
If an adversary tampers with your data you would need to notice within
one week as to be able from the last drive to still contain unmodified
data.


Things like chattr don't achieve what I want as root can still override
that. I'm looking for something that requires physical access to delete.


My solution is to use a separate, dedicated, not-always-on machine that  
pulls backups when it is turned on and then shuts itself off, so as to reduce  
the time frame in which an adversary might try to break into it via SSH. In  
theory, one could leave out the SSH server on the backup server altogether,  
but this would complicate the rare occasions where maintenance is needed.


The backup tool borg, or borgbackup (the latter is also the package name in  
the Debian repository), has an option to create backup archives to which  
data can only be added but not deleted. If you can arrange that only  
borgbackup has access to the backup system through the network and no other  
user can access the backup system from the network, then this might be what  
you want.
Borgbackup appears to be quite professionally designed. I have never had a bad  
experience in my usage scenario of backing up several home and data  
directories with it and restoring data from the archives - luckily restoring  
data just for testing the archives, never because I actually needed data  
from a backup. My impression is that this tool is also in use by the big  
professionals, those who have to keep a really big business up and running.  
Well, maybe one of those borgbackup users with the big-business pressure  
and experience should comment on this and not me. At least for me and my  
laboratory measurement data, distributed over still fewer than 10 computers and  
all together comprising still less than 10 TB of data, it is the perfect  
tool. Your question sounds like it could also fit your needs.


It's one tool that could be used for the purpose, yes.

Borg runs quite slowly if you have a lot of data (say > 1 TiB). If you can  
accept that/deal with it, it is a tool worth considering. Some modern/faster  
alternatives exist (e.g. Bupstash) but they are too new to be widely  
deployed yet.
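
For reference, the append-only mode mentioned above is best enforced on the  
backup host itself; a minimal sketch (repository path and key are made up)  
is an entry in the backup host's ~/.ssh/authorized_keys like:

# force this client key into an append-only borg serve, locked to one repository
command="borg serve --append-only --restrict-to-repository /srv/backup/client1",restrict ssh-ed25519 AAAAC3Nza...example client1

This way, a compromised client should not be able to remove existing  
archives; pruning would have to happen in a separate, trusted session.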


AFAIK in "business" contexts, tape libraries and rsync-style mirrors are  
quite widespread.


HTH
Linux-Fan

öö


pgpxke0vDpypy.pgp
Description: PGP signature


Re: Bullseye (mostly) not booting on Proliant DL380 G7

2021-09-16 Thread Linux-Fan

Claudio Kuenzler writes:


On Wed, Jun 30, 2021 at 9:51 AM Paul Wise  wrote:

   Claudio Kuenzler wrote:

   > I currently suspect a Kernel bug in 5.10.

Thanks to everyone for hints and suggestions!

At the end it turned out to be an issue with the hpwdt module. After  
blacklisting this module, no boot or stability issues with Bullseye were  
detected anymore.

Findings documented in my blog:
https://www.claudiokuenzler.com/blog/1125/debian-11-bullseye-boot-freeze-kernel-panic-hp-proliant-dl380


Thanks for sharing and digging up all that information. I found the article  
worth reading despite not having had this issue - a nicely structured  
approach to tackle such problems!


Linux-Fan
öö


pgpBYX_NJWs13.pgp
Description: PGP signature


Re: Dual GPU dual display support in Debian?

2021-09-15 Thread Linux-Fan

Anssi Saari writes:


Linux-Fan  writes:

> Anssi Saari writes:
>
>> I was wondering, since I didn't really find anything definite via Google
>> but is dual GPU dual display actually a supported configuration in
>> Debian 11?
>
> I would be interested in knowing about that, too.

I wonder if this is a kernel space problem or user space or both? So
does it work in Arch Linux because the Nvidia drivers and Linux kernel
are much newer? Or is there some user space component too that matters?


AFAIK there are at least these components involved:

* Kernel
* Proprietary graphics driver (kernel- and userspace parts, NVidia)
* Mesa (userspace)

I am not sure how much "cooperation" from the X server itself is required  
i.e. if it might be enough to build a newer Mesa but keep running an old X 
server.



>> So basically for a while I ran this kind of setup:

> I believe it might be easier to fix the problem by attaching all
> displays to the NVidia GPU. Here are some hints about what you might
> check to make the NVidia GPU work under Debian 11:

Thanks, but a hint to you and Felix: "for a while I ran..." indicates
something happening in the past which has now ended. I have no current
issues for basic use. I haven't actually tried to do accelerated video
encode with the new video card which I count as advanced use but I need
it so rarely it hasn't come up.


OK

[...]

Linux-Fan
öö


pgpM22WSVfXz6.pgp
Description: PGP signature


Re: Dual GPU dual display support in Debian?

2021-09-15 Thread Linux-Fan

Anssi Saari writes:


I was wondering, since I didn't really find anything definite via Google
but is dual GPU dual display actually a supported configuration in
Debian 11?


I would be interested in knowing about that, too.


So basically for a while I ran this kind of setup:

Display 1 connected to CPU's integrated GPU (Core i7-4790K, Intel HD
Graphics 4600).

Display 2 connected to Nvidia RTX3070Ti.

The two displays setup as a single wide display, i.e. windows
movable/draggable from one display to the other.

I have a triple boot setup, Windows 10 worked fine, Arch Linux with KDE
was hit or miss, Debian didn't work, no image on one display and xrandr
saw only one display. Which one seemed to depend on which GPU was set as
primary in the UEFI setup. No xorg.conf but I fiddled with that too.


[...]


The reason for this setup was that Debian 10 has no drivers for the
RTX3070Ti so I just used one display there and since it worked in Arch
(at least sometimes) I figured it should just start working in Debian 11
after the upgrade but it didn't.


I believe it might be easier to fix the problem by attaching all displays to  
the NVidia GPU. Here are some hints about what you might check to make the  
NVidia GPU work under Debian 11:


https://forums.developer.nvidia.com/t/linux-460-driver-kubuntu-20-04-rtx-3070-not-booting/171085

About doing a "dual GPU dual display" my experience is as follows (from  
Debian 10 oldstable/buster with purely X11 and no Wayland):


First, I never got it to work properly.

Closest I could get was to run two different window managers on the  
respective displays all under the same X server. This allowed the mouse and  
clipboard to move across the screens but the windows needed to remain on the  
GPU they were started on. Additionally, one cannot combine arbitrary window  
managers this way - I used i3 and IceWM. The trick is to not have them  
compete for "focus". The `.xsession` looked as follows:


# run IceWM on the second X screen (:0.1) in the background
DISPLAY=:0.1 icewm &
# i3 takes over the first screen as the session's main window manager
exec i3

IIRC there were also some approaches like starting an X server on top of the  
two X11 displays (:0.1 etc.) but I cannot seem to find them right now. Back  
when I tried that setup, it seemed these would be unable to provide graphics  
acceleration, hence I opted for the more convoluted variant with two window  
managers and full performance.


HTH
Linux-Fan

öö


pgpTXpJwVgbqn.pgp
Description: PGP signature


Re: HTML mail [was: How to improve my question in stackoverflow?]

2021-09-10 Thread Linux-Fan

to...@tuxteam.de writes:


On Thu, Sep 09, 2021 at 07:45:43PM -0400, Jim Popovitch wrote:

[...]

> First, most folks on tech mailinglists despise HTML email.

The original mail was a passable MIME multipart/alternative
with a plain text part. I /think/ that is OK, what do others
think?


Postel's principle applies, hence: OK :)


Perhaps you can teach your mailer to pick the text part for
you :-)

(I'm just asking, because I've seen this complaint a couple
of times for a well-formed multipart message: personally, I'd
be OK with it, but I'd like to know how the consensus is).


My mail client of choice (cone, not in Debian anymore) seems to prefer  
displaying the HTML part by default. It does this by converting it to an  
almost-text-only presentation, retaining some things like bold and underline.  
It gets a little annoying with URLs because it displays both link target and  
label, which often leads to the same URL being displayed twice in the  
terminal.


AFAIK I cannot configure it to prefer the text version, but I have not  
checked that part of the source code to see how difficult it might be to  
implement such a choice. I can view both variants of the mail content on a  
case-by-case basis by opening them explicitly. This even works from inside  
the MUA, nice feature :) .


For my usage, it is easiest to have text-only e-mails because those always  
display nicely by default.


Additionally, in this C++ question thread, the source code was given in HTML  
and text parts of the e-mail and while the `#include` statements were all on  
separate lines in the HTML, they appear as one long line in the text part.  
IMHO it causes unnecessary confusion to have two slightly differently  
displayed parts of the same mail especially for questions with source code :)


Btw.: For C++ STL components such as `set` or `string` it is perfectly fine  
to not append a `.h`. In fact, it would seem not to be standards compliant  
to append the `.h`, cf. https://stackoverflow.com/questions/15680190


YMMV
Linux-Fan

öö


pgp8MUSLxC_NJ.pgp
Description: PGP signature


[OT] C++ Book Question (Was: Re: How to improve my question in stackoverflow?)

2021-09-09 Thread Linux-Fan

William Torrez Corea writes:

Book.cpp:1:10: fatal error: Set: No existe el fichero o el directorio  
[closed]


I trying compile an example of a book.

The program use three classes: Book, Customer and Library.


The question is off-topic for debian-user; how did it get here?
Also, how are subject and content related?

When giving the source code, give it in its entirety, i.e. including the missing  
"customer.h", "library.h" and others. Also, for such long source code,  
it seems preferable to provide it as an attachment rather than inline.



Book.cpp
#include <set>


[...]


When i compile the example with the following command:
g++ -g -Wall  Book.cpp book.h -o book


The result expected is bad.
Book.cpp:1:10: fatal error: Set: No existe el fichero o el directorio
 #include <Set>
          ^
compilation terminated.


[...]


The book is C++17 By Example, published by Packt.


[...]

The immediate problem is with the case "Set" vs. "set". In your source code  
above you correctly have `#include <set>`, but the error message suggests  
that the `Book.cpp` you are trying to compile still has `#include <Set>`,  
as displayed by the compiler.


In fact, this seems to be a known erratum for the book sample codes:
https://github.com/PacktPublishing/CPP17-By-Example/issues/1

Clone the repository to get the entire sample source code. I could get the  
`Chapter04` example to compile by making the changes attached in  
`patchbook.patch`. Apply it with


patch --strip 1 < patchbook.patch

from inside the repository.
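
After applying the patch, the chapter example should then build with  
something like this (a sketch based on the file names visible in the patch):

g++ -g -Wall Book.cpp Customer.cpp Library.cpp Main.cpp -o library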

HTH
Linux-Fan

öö
diff --git a/Chapter04/LibraryPointer/Book.cpp b/Chapter04/LibraryPointer/Book.cpp
index ae86fd2..62f60f5 100644
--- a/Chapter04/LibraryPointer/Book.cpp
+++ b/Chapter04/LibraryPointer/Book.cpp
@@ -1,9 +1,9 @@
-#include 
-#include 
-#include 
-#include 
-#include 
-#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
 using namespace std;

 #include "Book.h"
@@ -84,4 +84,4 @@ ostream& operator<<(ostream& outStream, const Book& book) {
   }

   return outStream;
-}
\ No newline at end of file
+}
diff --git a/Chapter04/LibraryPointer/Customer.cpp b/Chapter04/LibraryPointer/Customer.cpp
index ac17964..31ffe1d 100644
--- a/Chapter04/LibraryPointer/Customer.cpp
+++ b/Chapter04/LibraryPointer/Customer.cpp
@@ -1,8 +1,8 @@
-#include 
-#include 
-#include 
-#include 
-#include 
+#include 
+#include 
+#include 
+#include 
+#include 
 using namespace std;

 #include "Book.h"
@@ -87,4 +87,4 @@ ostream& operator<<(ostream& outStream, const Customer& customer){
   }

   return outStream;
-}
\ No newline at end of file
+}
diff --git a/Chapter04/LibraryPointer/Library.cpp b/Chapter04/LibraryPointer/Library.cpp
index 10b4ac4..2c6f1a7 100644
--- a/Chapter04/LibraryPointer/Library.cpp
+++ b/Chapter04/LibraryPointer/Library.cpp
@@ -1,10 +1,10 @@
-#include 
-#include 
-#include 
-#include 
-#include 
-#include 
-#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
 using namespace std;

 #include "Book.h"
@@ -611,4 +611,4 @@ Library::~Library() {
   for (const Customer* customerPtr : m_customerPtrList) {
 delete customerPtr;
   }
-}
\ No newline at end of file
+}
diff --git a/Chapter04/LibraryPointer/Main.cpp b/Chapter04/LibraryPointer/Main.cpp
index 18d1637..586b47e 100644
--- a/Chapter04/LibraryPointer/Main.cpp
+++ b/Chapter04/LibraryPointer/Main.cpp
@@ -1,15 +1,16 @@
-#include 
-#include 
-#include 
-#include 
-#include 
-#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
 using namespace std;

 #include "Book.h"
 #include "Customer.h"
 #include "Library.h"

-void main() {
+int main() {
   Library();
-}
\ No newline at end of file
+  return 0;
+}


pgpUlyJNnqpAw.pgp
Description: PGP signature


Re: plugged and unplugged monitor

2021-09-06 Thread Linux-Fan

michaelmorgan...@gmail.com writes:

I have a Linux machine for scientific calculations with a GPU installed. It  
also has a GUI installed, but normally the default start mode is just the  
terminal (multi-user.target).


If I start up the machine with a monitor plugged into the GPU (HDMI or  
Displayport), the monitor works well (showing the terminal). However, if I  
unplug the HDMI cable and connect it back again, there will be no signal to

the monitor any more. The same thing happens if I start up the machine
without a monitor. I can ssh to the machine, but there is no signal if I
plug in the monitor.

What could be the problem?


Does the problem occur if you immediately re-connect the monitor after  
disconnecting or only after a certain amount of time? It might be that after  
a timeout, the console "blanks" and does not display anymore. You could try  
to attach a keyboard and press "any key" ("Alt" is one of my favorites for  
the purpose) to see if that is the problem.


Another route to debug might be to try the connect/disconnect procedure  
while the GUI is running. If you run X11, you might try to capture the  
output of `xrandr` for the connected, disconnected and re-connected cases to  
see if the monitor connection is being recognized by X11. AFAIK it should be  
possible to run `xrandr` over SSH while nothing is displayed as long as the  
GUI is running and you prefix it by the proper `DISPLAY=:0` variable.
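
For example, assuming the GUI session runs on display :0 and you are logged  
in via SSH as the same user, something like this could work:

# query the running X server's outputs from an SSH session
DISPLAY=:0 xrandr --query
# if this fails with an authorization error, additionally point
# XAUTHORITY at the session owner's authority file:
# DISPLAY=:0 XAUTHORITY=$HOME/.Xauthority xrandr --query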


HTH
Linux-Fan

öö

[...]


pgp4B7Br5bkKd.pgp
Description: PGP signature


Re: Installing old/deprecated packages

2021-09-06 Thread Linux-Fan

riveravaldez writes:


On 9/5/21, Linux-Fan  wrote:
> riveravaldez writes:
>
>> I have this `phwmon.py`[1] which I use with fluxbox to have a couple
>> of system monitors at hand. It depends on some python2 packages, so
>> stopped working some time ago.
>
> Any specific reason for preferring `phwmon.py` over a tool like `conky`?

Hi, Linux-Fan, thanks a lot for your answers.

`conky` is great, but you have to see the desktop to see `conky`, and I
tend to work with maximized windows.
Monitors like `phwmon.py` or the ones that come by default with IceWM
for instance are permanently visible in the sys-tray/taskbar (no matter
you're using fluxbox, openbox+tint2, etc.). That's the only reason:
minimal and visible.


That makes sense. In case you still want to try conky, there might be means  
to make it appear as a dedicated panel that is not overlapped by maximized  
windows, although I did not test that back when I was using Fluxbox (now on  
i3). See e.g.


https://superuser.com/questions/565784/can-conky-remain-always-visible-alongside-other-windows
https://forum.salixos.org/viewtopic.php?t=1166

[...]


> There are differences: Whenever you install packages, you may not notice
> that they are only available in old releases because the output of
> `apt-cache search` and similar tools will include old packages. Also,
> running a release with stable+oldstable in sources.list is less common than
> the other case: stable in sources.list and some oldstable packages leftover
> from upgrades. In case bugs are fixed in the oldstable package, you will  
> get them automatically if you have them in sources.list.

>
> My personal choice would be to install the packages without adding the
> oldstable repositories as to be reminded that they are obsolete and are
> likely to stop working in the future.

Thanks again. Very informative and educational.
When you say 'as to be reminded that they are obsolete', how/when/where
the system will remind me this?, will it be?


There is no automatism for this that I am aware of. There are tools like  
`deborphan` and `aptitude search ~o` that may tell you about them. The  
release notes recommend proactively removing obsolete packages:


https://www.debian.org/releases/bullseye/amd64/release-notes/ch-upgrading.en.html#for-next
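
For instance, to spot such leftover packages (assuming `aptitude` and  
`deborphan` are installed):

# installed packages that no configured repository provides anymore
aptitude search '~o'
# orphaned libraries that no installed package depends on
deborphan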


> Be aware that libraries like `python-psutil` may not work with newer
> kernels. Here (on oldstable with a backported kernel 5.10) the script would
>
> not run due to excess fields reported by the kernel for disk statistics:


[...]


Yes, indeed. I didn't mention it, but I had to "fix" that as seen in:
https://gitlab.com/o9000/phwmon/-/issues/3#note_374558691

Essentially, convert:

`elif flen == 14 or flen == 18:`

to

`elif flen == 14 or flen == 18 or flen == 20:`

In /usr/lib/python2.7/dist-packages/psutil/_pslinux.py

Supposedly shouldn't be problematic, but I'm not sure. Any
comment on this?


It's pretty much how I would go about it for a short-term solution. Any  
upgrade/reinstallation of the respective python package may revert your  
change. I would not mind that too much, given that there will not be any  
"unexpected" upgrades to the package while its repositories are not enabled  
in sources.list :)


[...]


> PS: If you are interested in my thoughts on status bars, see here:
> https://masysma.lima-city.de/32/i3bar.xhtml

Thanks a lot, LF!
I'm checking it right now. Very interesting.


You're welcome. If anything is unclear/wrong there, feel free to tell me  
directly via e-mail :)


HTH
Linux-Fan

öö


pgp_58R53yXQu.pgp
Description: PGP signature


Re: Installing old/deprecated packages

2021-09-05 Thread Linux-Fan

riveravaldez writes:


I have this `phwmon.py`[1] which I use with fluxbox to have a couple
of system monitors at hand. It depends on some python2 packages, so
stopped working some time ago.


Any specific reason for preferring `phwmon.py` over a tool like `conky`?


I've just made it work, installing manually (# apt-get install packages.deb)
this packages that I've downloaded from Debian OldStable official archives:

python-psutil
python-is-python2 (this is in fact in Testing)
python-numpy
python-pkg-resources
python-cairo
libffi6
python-gobject-2
python-gtk2

Therefore, my questions:

How safe is this?


IMHO it's pretty OK because that is quite similar to having upgraded from an  
old system with the legacy packages installed to a new release of which they  
are no longer a part.



Is it better to install them as I did, or adding the corresponding line in
sources.list and pull them from there? Is there any difference?


There are differences: Whenever you install packages, you may not notice  
that they are only available in old releases because the output of
`apt-cache search` and similar tools will include old packages. Also,  
running a release with stable+oldstable in sources.list is less common than  
the other case: stable in sources.list and some oldstable packages leftover  
from upgrades. In case bugs are fixed in the oldstable package, you will get  
them automatically if you have them in sources.list.


My personal choice would be to install the packages without adding the  
oldstable repositories as to be reminded that they are obsolete and are  
likely to stop working in the future.


Be aware that libraries like `python-psutil` may not work with newer  
kernels. Here (on oldstable with a backported kernel 5.10) the script would  
not run due to excess fields reported by the kernel for disk statistics:


| $ ./phwmon.py 
| Traceback (most recent call last):
|   File "./phwmon.py", line 341, in <module>
|     HardwareMonitor()
|   File "./phwmon.py", line 128, in __init__
|     self.initDiskIo()
|   File "./phwmon.py", line 274, in initDiskIo
|     v = psutil.disk_io_counters(perdisk=False)
|   File "/usr/lib/python2.7/dist-packages/psutil/__init__.py", line 2131, in disk_io_counters
|     rawdict = _psplatform.disk_io_counters(**kwargs)
|   File "/usr/lib/python2.7/dist-packages/psutil/_pslinux.py", line 1121, in disk_io_counters
|     for entry in gen:
|   File "/usr/lib/python2.7/dist-packages/psutil/_pslinux.py", line 1094, in read_procfs
|     raise ValueError("not sure how to interpret line %r" % line)
| ValueError: not sure how to interpret line ' 259   0 nvme0n1 42428 17299 3905792 8439 49354 7425 3352623 15456 0 48512 26929 43429 11 476835656 3033 0 0\n'

See also: https://forums.bunsenlabs.org/viewtopic.php?id=967

[...]


[1] https://gitlab.com/o9000/phwmon


Btw. it looks as if `python-is-python2` is not needed for this to run?  
`phwmon.py` states `python2` explicitly.


HTH
Linux-Fan

öö

PS: If you are interested in my thoughts on status bars, see here:
   https://masysma.lima-city.de/32/i3bar.xhtml


pgpGp_ovIoHsX.pgp
Description: PGP signature


Re: how to change debian installation mirror list?

2021-08-23 Thread Linux-Fan

Fred 1 writes:

I would like to know where the installation pulls the list of Debian  
mirrors from.


I hope I can change it and rebuild the netinstall ISO, maybe with jigdo?

It would be nice if I could manually add a custom one during install, but I  
didn't see such an option; you can strictly just pick from the list.


I can only answer the last of your questions: If you want to add a custom  
mirror during the install, go to the top of the menu and choose
"enter information manually". The installer will then ask you for the host  
name, path and proxy information of your custom mirror.


Scroll near the end of
https://masysma.lima-city.de/37/debian_i386_installation_reports_att/m6-d11a3-i386-netinst.xhtml
for screenshots of the respective dialogs. It is from an alpha release of  
the installer, but the dialogs should not differ too much from the final one.


HTH
Linux-Fan

öö

[...]


pgpuHOOoPciDc.pgp
Description: PGP signature


Re: nvme SSD and poor performance

2021-08-20 Thread Linux-Fan

Pierre Willaime writes:


Thanks all.

I activated `# systemctl enable fstrim.timer` (thanks Linux-Fan).


You're welcome :)

But I do not think my issue is trim-related after all. I always have a lot  
of I/O activity from jbd2, even just after booting and even when the  
computer has been doing nothing for hours.


Here is an extended log of iotop where you can see jbd2's abnormal activity:  
https://pastebin.com/eyGcGdUz


According to that, a lot of firefox-esr and dpkg and some thunderbird  
processes are active. Is there a high intensity of I/O operations when all  
Firefox, Thunderbird instances and system upgrades are closed?


When testing with iotop here, options `-d 10 -P` seemed to help getting a  
steadier and less cluttered view. Still, filtering your iotop output for  
Firefox, Thunderbird and DPKG respectively seems to be quite revealing:


| $ grep firefox-esr eyGcGdUz | grep -E '[0-9]{4,}.[0-9]{2} K/s'
| 10:38:51  3363 be/4 pierre      0.00 K/s   1811.89 K/s  0.00 % 17.64 % firefox-esr [mozStorage #3]
| 10:39:58  5117 be/4 pierre      0.00 K/s   1112.59 K/s  0.00 %  0.37 % firefox-esr [IndexedDB #14]
| 10:41:55  3363 be/4 pierre      0.00 K/s   6823.06 K/s  0.00 %  0.00 % firefox-esr [mozStorage #3]
| 10:41:55  3305 be/4 pierre   1469.88 K/s      0.00 K/s  0.00 % 60.57 % firefox-esr [QuotaManager IO]
| 10:41:55  3363 be/4 pierre   6869.74 K/s   6684.07 K/s  0.00 % 31.96 % firefox-esr [mozStorage #3]
| 10:41:56  6752 be/4 pierre   2517.19 K/s      0.00 K/s  0.00 % 99.99 % firefox-esr [Indexed~Mnt #13]
| 10:41:56  6755 be/4 pierre  31114.18 K/s      0.00 K/s  0.00 % 99.58 % firefox-esr [Indexed~Mnt #16]
| 10:41:56  3363 be/4 pierre   9153.40 K/s      0.00 K/s  0.00 % 87.06 % firefox-esr [mozStorage #3]
| 10:41:57  6755 be/4 pierre 249206.18 K/s      0.00 K/s  0.00 % 59.01 % firefox-esr [Indexed~Mnt #16]
| 10:41:57  6755 be/4 pierre 251353.11 K/s      0.00 K/s  0.00 % 66.02 % firefox-esr [Indexed~Mnt #16]
| 10:41:58  6755 be/4 pierre 273621.58 K/s      0.00 K/s  0.00 % 59.51 % firefox-esr [Indexed~Mnt #16]
| 10:41:58  6755 be/4 pierre  51639.70 K/s      0.00 K/s  0.00 % 94.90 % firefox-esr [Indexed~Mnt #16]
| 10:41:59  6755 be/4 pierre 113869.64 K/s      0.00 K/s  0.00 % 79.03 % firefox-esr [Indexed~Mnt #16]
| 10:41:59  6755 be/4 pierre 259549.09 K/s      0.00 K/s  0.00 % 56.99 % firefox-esr [Indexed~Mnt #16]
| 10:44:41  3265 be/4 pierre   1196.21 K/s      0.00 K/s  0.00 % 20.89 % firefox-esr
| 10:44:41  3289 be/4 pierre   3813.36 K/s    935.22 K/s  0.00 %  4.59 % firefox-esr [Cache2 I/O]
| 10:44:53  3363 be/4 pierre      0.00 K/s   1176.90 K/s  0.00 %  0.00 % firefox-esr [mozStorage #3]
| 10:49:28  3363 be/4 pierre      0.00 K/s   1403.16 K/s  0.00 %  0.43 % firefox-esr [mozStorage #3]

So there are incredible amounts of data being read by Firefox (gigabytes in  
a few minutes)? Is this load reflected in atop's or iotop's summary lines  
at the beginning of the respective screens?


| $ grep thunderbird eyGcGdUz | grep -E '[0-9]{4,}.[0-9]{2} K/s'
| 10:38:43  2846 be/4 pierre  0.00 K/s  1360.19 K/s  0.00 % 15.51 % thunderbird [mozStorage #1]
| 10:39:49  2873 be/4 pierre  0.00 K/s  4753.74 K/s  0.00 %  0.00 % thunderbird [mozStorage #6]
| 10:39:49  2875 be/4 pierre  0.00 K/s 19217.56 K/s  0.00 %  0.00 % thunderbird [mozStorage #7]
| 10:39:50  2883 be/4 pierre  0.00 K/s 18014.56 K/s  0.00 % 29.39 % thunderbird [mozStorage #8]
| 10:39:50  2883 be/4 pierre  0.00 K/s  3305.94 K/s  0.00 % 27.28 % thunderbird [mozStorage #8]
| 10:39:51  2883 be/4 pierre  0.00 K/s 61950.19 K/s  0.00 % 63.11 % thunderbird [mozStorage #8]
| 10:39:51  2883 be/4 pierre  0.00 K/s 41572.77 K/s  0.00 % 27.19 % thunderbird [mozStorage #8]
| 10:39:52  2883 be/4 pierre  0.00 K/s 20961.20 K/s  0.00 % 65.02 % thunderbird [mozStorage #8]
| 10:39:52  2883 be/4 pierre  0.00 K/s 43345.16 K/s  0.00 %  0.19 % thunderbird [mozStorage #8]
| 10:42:27  2846 be/4 pierre  0.00 K/s  1189.63 K/s  0.00 %  0.45 % thunderbird [mozStorage #1]
| 10:42:33  2846 be/4 pierre  0.00 K/s  1058.52 K/s  0.00 %  0.31 % thunderbird [mozStorage #1]
| 10:47:27  2846 be/4 pierre  0.00 K/s  2113.53 K/s  0.00 %  0.66 % thunderbird [mozStorage #1]

Thunderbird seems to write a lot here. This would average at ~18 MiB/s of writing  
and hence explain why the SSD is loaded continuously. Again: Does it match the  
data reported by atop? [I am not experienced in reading iotop output, hence  
I might interpret the data wrongly].


By comparison, dpkg looks rather harmless:

| $ grep dpkg eyGcGdUz | grep -E '[0-9]{4,}.[0-9]{2} K/s'
| 10:38:25  4506 be/4 root  0.00 K/s  4553.67 K/s  0.00 %  0.26 % dpkg --status-fd 23 --no-triggers --unpack --auto-deconfigure --force-remove-protected --recursive /tmp/apt-dpkg-install-E69bfZ
| 10:38:33  4506 be/4 root  7.73 K/s  4173.77 K/s  0.00 %  1.52 % dpkg --status-fd 23 --no-triggers --unpack --auto-deconfigure

Re: nvme SSD and poor performance

2021-08-17 Thread Linux-Fan

Christian Britz writes:


On 17.08.21 at 15:30 Linux-Fan wrote:

Pierre Willaime writes:


P-S: If trimming is needed for SSDs, why does Debian not trim by default?


Detecting reliably if the current system has SSDs that would benefit from  
trimming AND that the user has not taken their own measures is difficult. I  
guess this might be the reason for there not being an automatism, but you  
can enable the systemd timer suggested above with a single command.


I am pretty sure that I have never played with fstrim.timer and this is the  
output of "systemctl status fstrim.timer" on my bullseye system:


● fstrim.timer - Discard unused blocks once a week
 Loaded: loaded (/lib/systemd/system/fstrim.timer; enabled; vendor  
preset: enabled)

 Active: active (waiting) since Tue 2021-08-17 14:10:17 CEST; 3h 5min ago
    Trigger: Mon 2021-08-23 01:01:39 CEST; 5 days left
   Triggers: ● fstrim.service
   Docs: man:fstrim

So it seems this weekly schedule is enabled by default on bullseye.


Nice, thanks for sharing :)

BTW, if my system is not online at that time, will it be triggered on next  
boot?


Yes, I would think so.

A systemd timer can be configured to run if its schedule is missed by  
`[Timer] Persistent=true` which on my machine is configured for  
`fstrim.timer` (check `systemctl cat fstrim.timer`).
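
For reference, the relevant lines of the shipped unit look roughly like this  
here (a sketch from memory; the `systemctl cat fstrim.timer` output is  
authoritative):

[Timer]
OnCalendar=weekly
AccuracySec=1h
Persistent=true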


See also: https://jeetblogs.org/post/scheduling-jobs-cron-anacron-systemd/

Of course, it is also possible to find out experimentally. Check
`systemctl list-timers` to find out about all registered timers and when  
they ran last.


HTH
Linux-Fan

öö


pgpb4GJLB0KsQ.pgp
Description: PGP signature


  1   2   3   4   5   6   7   8   9   10   >