Re: removing file??

2022-11-10 Thread Tim Woodall

On Thu, 10 Nov 2022, Amn wrote:


I did that, but I got this :

jamiil@ArbolOne:~$ sudo apt clean
jamiil@ArbolOne:~$ sudo apt install codeblocks-dev
Reading package lists... Done

snip
 trying to overwrite '/usr/include/codeblocks/Alignment.h', which is also in
package codeblocks-headers 20.03
dpkg-deb: error: paste subprocess was killed by signal (Broken pipe)
Errors were encountered while processing:
 /var/cache/apt/archives/codeblocks-dev_20.03-3.1+b1_amd64.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)



apt-get remove --purge codeblocks-headers

Make sure that doesn't remove stuff you want.
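
For instance, a dry run first shows what would go along with it without
actually removing anything (apt-get shown here; plain apt accepts the same
flag):

apt-get -s remove --purge codeblocks-headers   # -s/--simulate: only print what would happen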




Re: Increased read IO wait times after Bullseye upgrade

2022-11-10 Thread Vukovics Mihály

Hi Gareth,

I have already tried to change the queue depth for the physical disks,
but that has almost no effect.
There is almost no load on the filesystem; here is a 10s sample from atop:
1-2 write requests but 30-50 ms of average IO.


DSK | sdc | busy 27% | read 0 | write 2 | KiB/r 0 | KiB/w 0 | MBr/s 0.0 | MBw/s 0.0 | avq 1.83 | avio 38.0 ms |
DSK | sdb | busy 18% | read 0 | write 1 | KiB/r 0 | KiB/w 1 | MBr/s 0.0 | MBw/s 0.0 | avq 1.63 | avio 52.0 ms |
DSK | sde | busy 18% | read 0 | write 1 | KiB/r 0 | KiB/w 1 | MBr/s 0.0 | MBw/s 0.0 | avq 1.63 | avio 52.0 ms |
DSK | sda | busy 17% | read 0 | write 1 | KiB/r 0 | KiB/w 1 | MBr/s 0.0 | MBw/s 0.0 | avq 1.60 | avio 48.0 ms |


On 2022. 11. 10. 14:32, Gareth Evans wrote:

On Thu 10 Nov 2022, at 11:36, Gareth Evans  wrote:
[...]

This assumes the identification of the driver in [3] (below) is
anything to go by.

I meant [1] not [3].

Also potentially of interest:

"Queue depth

The queue depth is a number between 1 and ~128 that shows how many I/O requests 
are queued (in-flight) on average. Having a queue is beneficial as the requests 
in the queue can be submitted to the storage subsystem in an optimised manner 
and often in parallel. A queue improves performance at the cost of latency.

If you have some kind of storage performance monitoring solution in place, a high 
queue depth could be an indication that the storage subsystem cannot handle the 
workload. You may also observe higher than normal latency figures. As long as 
latency figures are still within tolerable limits, there may be no problem."

https://louwrentius.com/understanding-storage-performance-iops-and-latency.html

See

$ cat /sys/block/sdX/device/queue_depth
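
A quick way to read it for all of the disks mentioned above, and to try a
different value on one of them if the driver allows writing it (sd[a-e] and
the value 16 are just examples):

for d in /sys/block/sd[a-e]/device/queue_depth; do
    printf '%s: %s\n' "$d" "$(cat "$d")"
done
echo 16 | sudo tee /sys/block/sdc/device/queue_depth   # may be rejected by some drivers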


--
--
With thanks,
Vukovics Mihály



Re: else or Debian (Re: ZFS performance (was: Re: deduplicating file systems: VDO with Debian?))

2022-11-10 Thread hw
On Thu, 2022-11-10 at 20:32 -0500, Dan Ritter wrote:
> Linux-Fan wrote: 
> 
> 
> [...]
> * RAID 5 and 6 restoration incurs additional stress on the other
>   disks in the RAID which makes it more likely that one of them
>   will fail. The advantage of RAID 6 is that it can then recover
>   from that...

Disks are always being stressed when used, and they're being stressed as well
when RAID arrays of types other than 5 or 6 are being rebuilt.  And is there
evidence that disks fail *because* RAID arrays are being rebuilt, or would they
have failed anyway when stressed?

> * RAID 10 gets you better read performance in terms of both
>   throughput and IOPS relative to the same number of disks in
>   RAID 5 or 6. Most disk activity is reading.
> 

And it requires more disks for the same capacity.

For disks used for backups, most activity is writing.  That goes for some other
purposes as well.

> [...]
> 
>  The power of open source software is that we can make
> opportunities open to people with small budgets that are
> otherwise reserved for people with big budgets.

That's only one advantage.

> Most of the computers in my house have one disk. If I value any
> data on that disk,

Then you don't use only one disk; you use redundancy.  There's also your time
and nerves you might value.

>  I back it up to the server, which has 4 4TB
> disks in ZFS RAID10. If a disk fails in that, I know I can
> survive that and replace it within 24 hours for a reasonable
> amount of money -- rather more reasonable in the last few
> months.

How do you get a new suitable disk within 24 hours?  For reasonable amounts of
money?  Disk prices keep changing all the time.

Backups are no substitute for redundancy.



Re: Increased read IO wait times after Bullseye upgrade

2022-11-10 Thread Vukovics Mihály

Hello Gareth,

The average IO wait state has been 3% over the last 1d14h. I have checked the
IO usage with several tools and have not found any processes/threads
generating too many read/write requests. As you can see in my first
graph, only the read wait time increased significantly; the write wait time
did not.
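
For reference, the extended iostat view splits the average wait into separate
read and write latencies, which should line up with what the graph shows
(assuming the sysstat package is installed):

iostat -x 5 3    # r_await / w_await columns: average read/write latency in ms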


Br,
Mihaly

On 2022. 11. 10. 16:13, Gareth Evans wrote:

On Thu 10 Nov 2022, at 11:36, Gareth Evans  wrote:
[...]

I might be barking up the wrong tree ...

But simpler inquiries first.

I was wondering if MD might be too high-level to cause what does seem more like a 
"scheduley" issue -

https://www.thomas-krenn.com/de/wikiDE/images/d/d0/Linux-storage-stack-diagram_v4.10.pdf

but, apparently, even relatively "normal" processes can cause high iowait too:

NFS...
https://www.howtouselinux.com/post/troubleshoot-high-iowait-issue-on-linux-system
https://www.howtouselinux.com/post/use-linux-nfsiostat-to-troubleshoot-nfs-performance-issue

SSH...
Use iotop to find io-hogs...
https://www.howtouselinux.com/post/quick-guide-to-fix-linux-iowait-issue
https://www.howtouselinux.com/post/check-disk-io-usage-per-process-with-iotop-on-linux

What's a typical wa value (%) from top?

Thanks,
G





--
--
With thanks,
Vukovics Mihály



Re: removing file??

2022-11-10 Thread Peter von Kaehne
You got a conflict. 

The problem does not lie in /var/cache/apt but in
'/usr/include/codeblocks/Alignment.h',
which was put there by something other than what you are now installing.

To resolve the conflict you need to figure out which (other) package owns
'/usr/include/codeblocks/Alignment.h'. You can then remove that package (if you
want to) so that the new file can be installed.
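
For example (a small sketch; the error message already points at
codeblocks-headers as the likely owner):

dpkg -S /usr/include/codeblocks/Alignment.h   # which installed package owns the file, if any
dpkg -l codeblocks-headers                    # is codeblocks-headers installed, and in which version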

Sometimes there are stale and unowned files left behind by previous incomplete
package removals. These can usually simply be deleted.

Peter

Sent from my phone. Please forgive misspellings and weird “corrections”

> On 11 Nov 2022, at 04:21, Amn  wrote:
> 
> I did that, but I got this :
> 
> jamiil@ArbolOne:~$ sudo apt clean
> jamiil@ArbolOne:~$ sudo apt install codeblocks-dev
> Reading package lists... Done
> Building dependency tree... Done
> Reading state information... Done
> The following NEW packages will be installed:
>   codeblocks-dev
> 0 upgraded, 1 newly installed, 0 to remove and 322 not upgraded.
> Need to get 522 kB of archives.
> After this operation, 1,905 kB of additional disk space will be used.
> Get:1 http://ftp.de.debian.org/debian sid/main amd64 codeblocks-dev amd64 
> 20.03-3.1+b1 [522 kB]
> Fetched 522 kB in 2s (313 kB/s)
> (Reading database ... 258020 files and directories currently installed.)
> Preparing to unpack .../codeblocks-dev_20.03-3.1+b1_amd64.deb ...
> Unpacking codeblocks-dev:amd64 (20.03-3.1+b1) ...
> dpkg: error processing archive
> /var/cache/apt/archives/codeblocks-dev_20.03-3.1+b1_amd64.deb (--unpack):
>  trying to overwrite '/usr/include/codeblocks/Alignment.h', which is also in
> package codeblocks-headers 20.03
> dpkg-deb: error: paste subprocess was killed by signal (Broken pipe)
> Errors were encountered while processing:
>  /var/cache/apt/archives/codeblocks-dev_20.03-3.1+b1_amd64.deb
> E: Sub-process /usr/bin/dpkg returned an error code (1)
> 
>> On 2022-11-10 11:17 p.m., Peter von Kaehne wrote:
>> Why do you want to remove it?
>> 
>> You would remove it with sudo apt clean
>> 
>> At which point that directory should be empty, it is just a cache for 
>> install files
>> 
>> Sent from my phone. Please forgive misspellings and weird “corrections”
>> 
 On 11 Nov 2022, at 03:33, Amn  wrote:
>>> 
>>> I am trying to remove this file : 
>>> '/var/cache/apt/archives/codeblocks-dev_20.03-3_amd64.deb', but after 
>>> trying, even as a 'su', but I am unable to. Any suggestion how to do this?
>>> 



Re: else or Debian (Re: ZFS performance (was: Re: deduplicating file systems: VDO with Debian?))

2022-11-10 Thread hw
On Thu, 2022-11-10 at 22:37 +0100, Linux-Fan wrote:
> hw writes:
> 
> > On Wed, 2022-11-09 at 19:17 +0100, Linux-Fan wrote:
> > > hw writes:
> > > > On Wed, 2022-11-09 at 14:29 +0100, didier gaumet wrote:
> > > > > Le 09/11/2022 à 12:41, hw a écrit :
> 
> [...]
> 
> > > > I'd
> > > > have to use mdadm to create a RAID5 (or use the hardware RAID but that  
> > > > isn't
> > > 
> > > AFAIK BTRFS also includes some integrated RAID support such that you do  
> > > not necessarily need to pair it with mdadm.
> > 
> > Yes, but RAID56 is broken in btrfs.
> > 
> > > It is advised against using for RAID 
> > > 5 or 6 even in most recent Linux kernels, though:
> > > 
> > > https://btrfs.readthedocs.io/en/latest/btrfs-man5.html#raid56-status-and-recommended-practices
> > 
> > Yes, that's why I would have to use btrfs on mdadm when I want to make a  
> > RAID5.
> > That kinda sucks.
> > 
> > > RAID 5 and 6 have their own issues you should be aware of even when  
> > > running 
> > > them with the time-proven and reliable mdadm stack. You can find a lot of 
> > > interesting results by searching for “RAID5 considered harmful” online.  
> > > This 
> > > one is the classic that does not seem to make it to the top results,  
> > > though:
> > 
> > Hm, really?  The only time that RAID5 gave me trouble was when the hardware 
> 
> [...]
> 
> I have never used RAID5 so how would I know :)
> 
> I think the arguments of the RAID5/6 critics summarized were as follows:
> 
>  * Running in a RAID level that is 5 or 6 degrades performance while
>    a disk is offline significantly. RAID 10 keeps most of its speed and
>    RAID 1 only degrades slightly for most use cases.

It's fine when they pay for the additional disks required by RAID1 or 10.

>  * During restore, RAID5 and 6 are known to degrade performance more compared
>    to restoring one of the other RAID levels.

When that matters, don't use this RAID level.  It's not an issue about keeping
data safe.

>  * Disk space has become so cheap that the savings of RAID5 may
>    no longer rectify the performance and reliability degradation
>    compared to RAID1 or 10.

When did that happen?  Disk space is anything but cheap and the electricity
needed to run these disks is anything but cheap.  Needing more disks for the
same capacity is anything but cheap because a server can fit only so many disks,
and the more disks you put into a server, the more expensive that server gets.
You might even need more servers, further increasing costs for hardware and
electricity.

> All of these arguments come from a “server” point of view where it is  
> assumed that
> 
>  (1) You win something by running the server so you can actually
>  tell that there is an economic value in it. This allows for
>  arguments like “storage is cheap” which may not be the case at
>  all if you are using up some tightly limited private budget.

You're not going to win anything when your storage gets too expensive.

>  (2) Uptime and delivering the service is paramount.

That can get expensive very quickly.

>  Hence there
>  are some considerations regarding the online performance of
>  the server while the RAID is degraded and while it is restoring.
>  If you are fine to take your machine offline or accept degraded
>  performance for prolonged times then this does not apply of
>  course.

Sure, when you have issues like that, find a different solution.

>  If you do not value the uptime making actual (even
>  scheduled) copies of the data may be recommendable over
>  using a RAID because such schemes may (among other advantages)
>  protect you from accidental file deletions, too.

Huh?

> Also note that in today's computing landscape, not all unwanted file  
> deletions are accidental. With the advent of “crypto trojans” adversaries  
> exist that actually try to encrypt or delete your data to extort a ransom.
> 

When have unwanted file deletions been exclusively accidental?

> > More than one disk can fail?  Sure can, and it's one of the reasons why I  
> > make
> > backups.
> > 
> > You also have to consider costs.  How much do you want to spend on storage
> > and on backups?  And do you want to make yourself crazy worrying about your
> > data?
> 
> I am pretty sure that if I separate my PC into GPU, CPU, RAM and Storage, I  
> spent most on storage actually. Well established schemes of redundancy and  
> backups make me worry less about my data.

Well, what did they say: Disk space has become cheap.  Yeah, for sure ... :)

> I still worry enough about backups to have written my own software:
> https://masysma.net/32/jmbb.xhtml
> and that I am also evaluating new developments in that area to probably  
> replace my self-written program by a more reliable (because used by more  
> people!) alternative:
> https://masysma.net/37/backup_tests_borg_bupstash_kopia.xhtml
> > 

cool :)

> [...]
> > 
> > Is anyone still using ext4?  I'm not saying it's bad or 

Re: ZFS performance (was: Re: deduplicating file systems: VDO withDebian?)

2022-11-10 Thread tomas
On Fri, Nov 11, 2022 at 07:15:07AM +0100, hw wrote:
> On Thu, 2022-11-10 at 23:05 -0500, Michael Stone wrote:

[...]

> Why would anyone use SSDs for backups?  They're way too expensive for that.

Possibly.

> So far, the failure rate with SSDs has not been any better than the failure
> rate of hard disks.  Considering that SSDs are supposed to fail less, the
> experience with them is pretty bad.

You keep pulling things out of whatever (thin air, it seems). Here's a report
by folks who do lots of HDDs and SDDs:

  https://www.backblaze.com/blog/backblaze-hard-drive-stats-q1-2021/

The gist, for disks playing similar roles (they don't yet use SSDs for bulk
storage, because of the costs): 2/1518 failures for SSDs, 44/1669 for HDDs.

I'll leave the maths as an exercise to the reader.
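
(Taking the quoted counts at face value: 2/1518 is roughly 0.13 % of the SSDs
versus 44/1669, roughly 2.6 %, of the HDDs, i.e. about a twenty-fold difference
over the period the report covers.)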

Cheers
-- 
t




Re: ZFS pool size (Re: else or Debian)

2022-11-10 Thread hw
On Thu, 2022-11-10 at 15:19 +0100, DdB wrote:
> Am 10.11.2022 um 14:28 schrieb DdB:
> > Take some time to
> > play with an installation (in a vm or just with a file based pool should
> > be considered).
> 
> an example to show that it is possible to allocate huge files (bigger
> than a single disk size) from a pool:
> 
> > datakanja@PBuster-NFox:~$ mkdir disks
> > datakanja@PBuster-NFox:~$ cd disks/
> > datakanja@PBuster-NFox:~/disks$ seq  -w 0 15 | xargs -i truncate -s 4T
> > disk{}.bin # this creates sparse files to act as virtual disks
> > datakanja@PBuster-NFox:~/disks$ zpool create TEST raidz3 ~/disks/d*
> > datakanja@PBuster-NFox:~/disks$ zpool list
> > NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH 
> > ALTROOT
> > TEST  64.0T   314K  64.0T    - - 0% 0%  1.00x    ONLINE 
> > -
> 16*4 TB = 64 TB size
> > datakanja@PBuster-NFox:~/disks$ zfs list TEST
> > NAME   USED  AVAIL REFER  MOUNTPOINT
> > TEST   254K  50.1T 64.7K  /TEST
> # due to redundancy in the pool, the maximum size of a file is slightly
> over 50TB
> 
> #do not forget to clean up (destroying pool and files)
> 
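
A sketch of that cleanup step, reusing the pool and file names from the
example above:

zpool destroy TEST      # tears down the test pool
rm ~/disks/disk*.bin    # removes the sparse backing files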

Ok, but once the space gets actually allocated, things blow up.  Or what happens
when you use partitions and allocate more space than the partitions have?



Re: else or Debian (Re: ZFS performance (was: Re: deduplicating file systems: VDO with Debian?))

2022-11-10 Thread hw
On Thu, 2022-11-10 at 14:28 +0100, DdB wrote:
> Am 10.11.2022 um 13:03 schrieb Greg Wooledge:
> > If it turns out that '?' really is the filename, then it becomes a ZFS
> > issue with which I can't help.
> 
> just tested: i could create, rename, delete a file with that name on a
> zfs filesystem just as with any other fileystem.
> 
> But: i recall having seen an issue with corrupted filenames in a
> snapshot once (several years ago though). At the time, i did resort to
> send/recv to get the issue straightened out.

Well, the ZFS version in use is ancient ...  But that I could rename it is a
good sign.

> But it is very much more likely, that the filename '?' is entirely
> unrelated to zfs. Although zfs is perceived as being easy to handle
> (only 2 commands need to be learned: zpool and zfs),

Ha, it's far from easy.  These commands have many options ...

>  it takes a while to
> get acquainted with all the concepts and behaviors. Take some time to
> play with an installation (in a vm or just with a file based pool should
> be considered).

Ah, yes, that's a good idea :)



Re: else or Debian (Re: ZFS performance (was: Re: deduplicating file systems: VDO with Debian?))

2022-11-10 Thread hw
On Thu, 2022-11-10 at 08:48 -0500, Dan Ritter wrote:
> hw wrote: 
> > And I've been reading that when using ZFS, you shouldn't make volumes with
> > more
> > than 8 disks.  That's very inconvenient.
> 
> 
> Where do you read these things?

I read things like this:

"Sun™ recommends that the number of devices used in a RAID-Z configuration be
between three and nine. For environments requiring a single pool consisting of
10 disks or more, consider breaking it up into smaller RAID-Z groups. If two
disks are available, ZFS mirroring provides redundancy if required. Refer to
zpool(8) for more details."

That's on https://docs.freebsd.org/en/books/handbook/zfs/

I don't remember where I read about 8, could have been some documentation about
FreeNAS.  I've also been reading different amounts of RAM required for
deduplication, so who knows what's true.
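
For what it's worth, "breaking it up into smaller RAID-Z groups" just means
giving one pool several vdevs, roughly along these lines (a sketch only; the
device names are made up):

zpool create tank \
    raidz2 sda sdb sdc sdd sde sdf \
    raidz2 sdg sdh sdi sdj sdk sdl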

> The number of disks in a zvol can be optimized, depending on
> your desired redundancy method, total number of drives, and
> tolerance for reduced performance during resilvering. 
> 
> Multiple zvols together form a zpool. Filesystems are allocated from
> a zpool.
> 
> 8 is not a magic number.
> 

You mean like here:
https://pthree.org/2012/12/21/zfs-administration-part-xiv-zvols/

That seems rather complicated.  I guess it's just a bad guide.  I'll find out if
I use ZFS.



Re: ZFS performance (was: Re: deduplicating file systems: VDO withDebian?)

2022-11-10 Thread hw
On Thu, 2022-11-10 at 23:05 -0500, Michael Stone wrote:
> On Thu, Nov 10, 2022 at 06:55:27PM +0100, hw wrote:
> > On Thu, 2022-11-10 at 11:57 -0500, Michael Stone wrote:
> > > On Thu, Nov 10, 2022 at 05:34:32PM +0100, hw wrote:
> > > > And mind you, SSDs are *designed to fail* the sooner the more data you
> > > > write
> > > > to
> > > > them.  They have their uses, maybe even for storage if you're so
> > > > desperate,
> > > > but
> > > > not for backup storage.
> > > 
> > > It's unlikely you'll "wear out" your SSDs faster than you wear out your
> > > HDs.
> > > 
> > 
> > I have already done that.
> 
> Then you're either well into "not normal" territory and need to buy an 
> SSD with better write longevity (which I seriously doubt for a backup 
> drive) or you just got unlucky and got a bad copy (happens with 
> anything) or you've misdiagnosed some other issue.
> 

Why would anyone use SSDs for backups?  They're way too expensive for that.

So far, the failure rate with SSDs has not been any better than the failure rate
of hard disks.  Considering that SSDs are supposed to fail less, the experience
with them is pretty bad.

There was no misdiagnosis.  Have you ever had a failed SSD?  They usually just
disappear.  I've had one exception in which the SSD at first only sometimes
disappeared and came back, until it disappeared and didn't come back.

There was no "not normal" territory, either, unless maybe you consider ZFS cache
as "not normal".  In that case, I would argue that SSDs are well suited for such
applications because they allow for lots of IOPS and high data transfer rates,
and a hard disk probably wouldn't have failed in place of the SSD because they
don't wear out so quickly.  Since SSDs are so well suited for such purposes,
that can't be "not normal" territory for them.  Perhaps they just need to be
more resilient than they are.

You could argue that the SSDs didn't fail because they were worn out but for
other reasons.  I'd answer that it's irrelevant for the user why exactly a disk
failed, especially when it just disappears, and that hard disks don't fail
because the storage media wears out like SSDs do but for other reasons.

Perhaps you could buy an SSD that better withstands being written to.  The
question then is whether that's economical.  It would also rest on the assumption
that SSDs fail mostly because the storage media wears out rather than for other
reasons.  Since all the failed SSDs simply disappeared, I have to assume that they
didn't fail because the storage media was worn out but for other reasons.

Considering that, SSDs generally must be of really bad quality for that to
happen, don't you think?



Re: Problem with card reader on Debian 11

2022-11-10 Thread tomas
On Thu, Nov 10, 2022 at 11:21:21PM +0100, Claudia Neumann wrote:
> Hi all,
> 
> I programmed a library to read german electronic health cards from special 
> devices
> certified in Germany.
> 
> After an update from Debian 10 to Debian 11 one of these card readers reads 
> only 64
> bytes using /dev/ttyACM0. It should read 256 Bytes which it did from 2010 on.
> 
> Something must have changed from Debian 10 to Debian 11. Is there a
> configuration where I can change this behaviour? I don't know which package
> to blame for the change and what kind of information I should give you.

Hm. Where to start?

Do you have access to one Debian 10 and one Debian 11 installation to compare
things?

The "ttyACM" is a hint that the device ends up as a "modem" (this is not to
be taken too seriously). Does that happen in both installations?

One main suspect is, of course, the kernel (mainly the USB modules). Can you
compare the output of "lsusb" in both installations, perhaps before and after
inserting the device?

Another hint would be the output of `lsusb -vvv'. Can you identify the device
in question? Any differences between Debian 10 and 11?
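
Something along these lines might help with the comparison (the vendor:product
ID is a placeholder; take the real one from plain lsusb output first):

dmesg | grep -iE 'cdc_acm|ttyACM'   # how the kernel attached the device
lsusb -v -d 1234:5678 | less        # descriptor details for that one device
stty -F /dev/ttyACM0 -a             # current line settings (speed, flow control, ...)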

Cheers and good luck

(NOTE: I kept you in CC because I don't know whether you are subscribed,
if you prefer, I can drop that)

-- 
tomás




Re: weird directory entry on ZFS volume (Re: else or Debian (Re: ZFS performance (was: Re: deduplicating file systems: VDO with Debian?)))

2022-11-10 Thread David Christensen

On Thu, Nov 10, 2022 at 05:54:00AM +0100, hw wrote:

ls -la
insgesamt 5
drwxr-xr-x  3 namefoo namefoo    3 16. Aug 22:36 .
drwxr-xr-x 24 root    root    4096  1. Nov 2017  ..
drwxr-xr-x  2 namefoo namefoo    2 21. Jan 2020  ?
namefoo@host /srv/datadir $ ls -la '?'
ls: Zugriff auf ? nicht möglich: Datei oder Verzeichnis nicht gefunden
namefoo@host /srv/datadir $


This directory named ? appeared on a ZFS volume for no reason and I can't
access it and can't delete it.  A scrub doesn't repair it.  It doesn't seem
to do any harm yet, but it's annoying.

Any idea how to fix that?



2022-11-10 21:24:23 dpchrist@f3 ~/foo
$ freebsd-version ; uname -a
12.3-RELEASE-p7
FreeBSD f3.tracy.holgerdanske.com 12.3-RELEASE-p6 FreeBSD 
12.3-RELEASE-p6 GENERIC  amd64


2022-11-10 21:24:45 dpchrist@f3 ~/foo
$ bash --version
GNU bash, version 5.2.0(3)-release (amd64-portbld-freebsd12.3)
Copyright (C) 2022 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 



This is free software; you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

2022-11-10 21:24:52 dpchrist@f3 ~/foo
$ ll
total 13
drwxr-xr-x   2 dpchrist  dpchrist   2 2022/11/10 21:24:21 .
drwxr-xr-x  14 dpchrist  dpchrist  30 2022/11/10 21:24:04 ..

2022-11-10 21:25:03 dpchrist@f3 ~/foo
$ touch '?'

2022-11-10 21:25:08 dpchrist@f3 ~/foo
$ ll
total 14
drwxr-xr-x   2 dpchrist  dpchrist   3 2022/11/10 21:25:08 .
drwxr-xr-x  14 dpchrist  dpchrist  30 2022/11/10 21:24:04 ..
-rw-r--r--   1 dpchrist  dpchrist   0 2022/11/10 21:25:08 ?

2022-11-10 21:25:11 dpchrist@f3 ~/foo
$ rm '?'
remove ?? y

2022-11-10 21:25:19 dpchrist@f3 ~/foo
$ ll
total 13
drwxr-xr-x   2 dpchrist  dpchrist   2 2022/11/10 21:25:19 .
drwxr-xr-x  14 dpchrist  dpchrist  30 2022/11/10 21:24:04 ..


David



Re: ZFS performance (was: Re: deduplicating file systems: VDO with Debian?)

2022-11-10 Thread David Christensen

On 11/10/22 07:44, hw wrote:

On Wed, 2022-11-09 at 21:36 -0800, David Christensen wrote:

On 11/9/22 00:24, hw wrote:
  > On Tue, 2022-11-08 at 17:30 -0800, David Christensen wrote:



Be careful that you do not confuse a ~33 GiB full backup set, and 78
snapshots over six months of that same full backup set, with a full
backup of 3.5 TiB of data.



The full backup isn't deduplicated?



"Full", "incremental", etc., occur at the backup utility level -- e.g. 
on top of the ZFS filesystem.  (All of my backups are full backups using 
rsync.)  ZFS deduplication occurs at the block level -- e.g. the bottom 
of the ZFS filesystem.  If your backup tool is writing to, or reading 
from, a ZFS filesystem, the backup tool is oblivious to the internal 
operations of ZFS (compression or none, deduplicaton or none, etc.) so 
long as the filesystem "just works".




Writing to a ZFS filesystem with deduplication is much slower than
simply writing to, say, an ext4 filesystem -- because ZFS has to hash
every incoming block and see if it matches the hash of any existing
block in the destination pool.  Storing the existing block hashes in a
dedicated dedup virtual device will expedite this process.
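
As a hedged sketch of that last point, with made-up pool and device names and
assuming a reasonably recent OpenZFS (0.8 or later, which introduced allocation
classes):

zfs set dedup=on tank/backups                    # dedup is a per-dataset property
zpool add tank dedup mirror nvme0n1 nvme1n1      # dedicated vdev for the dedup tables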


But when it needs to write almost nothing because almost everything gets
deduplicated, can't it be faster than having to write everything?



There are many factors that affect how fast ZFS can write files to disk. 
 You will get the best answers if you run benchmarks using your 
hardware and data.




  >> I run my backup script each night.  It uses rsync to copy files and
  >
  > Aww, I can't really do that because my server eats like 200-300W because
  > it has so many disks in it.  Electricity is outrageously expensive here.


Perhaps platinum rated power supplies?  Energy efficient HDD's/ SSD's?


If you pay for it ... :)

Running it once in a while for some hours to make backups is still possible.
Replacing the hardware is way more expensive.



My SOHO server has ~1 TiB of data.  A ZFS snapshot takes a few seconds. 
ZFS incremental replication to the backup server proceeds at anywhere 
from 0 to 50 MB/s, depending upon how much content is new or has changed.
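
For readers who have not used it, that replication step is roughly this shape
(dataset, snapshot and host names are made up):

zfs snapshot tank/data@2022-11-10
zfs send -i tank/data@2022-11-09 tank/data@2022-11-10 | \
    ssh backuphost zfs receive backup/data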





  > Sounds like a nice setup.  Does that mean you use snapshots to keep
  > multiple generations of backups and make backups by overwriting everything
  > after you made a snapshot?

Yes.


I start thinking more and more that I should make use of snapshots.



Taking snapshots is fast and easy.  The challenge is deciding when to 
destroy them.



zfs-auto-snapshot can do both automatically:

https://packages.debian.org/bullseye/zfs-auto-snapshot

https://manpages.debian.org/bullseye/zfs-auto-snapshot/zfs-auto-snapshot.8.en.html
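
zfs-auto-snapshot handles the rotation for you; done by hand it is roughly the
following, with made-up dataset and snapshot names:

zfs list -t snapshot -o name,used,creation tank/data   # see what each snapshot costs
zfs destroy tank/data@2022-08-01                       # drop one that is no longer needed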



Without deduplication or compression, my backup set and 78 snapshots
would require 3.5 TiB of storage.  With deduplication and compression,
they require 86 GiB of storage.


Wow that's quite a difference!  What makes this difference, the compression or
the deduplication? 



Deduplication.



When you have snapshots, you would store only the differences from one
snapshot to the next, and that would mean that there aren't so many
duplicates that could be deduplicated.



I do not know -- I have not crawled the ZFS code; I just use it.



Users can recover their own files without needing help from a system
administrator.


You have users who know how to get files out of snapshots?



Not really; but the feature is there.



  >>>> For compressed and/or encrypted archives, images, etc., I do not use
  >>>> compression or de-duplication
  >>>
  >>> Yeah, they wouldn't compress.  Why no deduplication?
  >>
  >> Because I very much doubt that there will be duplicate blocks in
  >> such files.
  >
  > Hm, would it hurt?

Yes.  ZFS deduplication is resource intensive.


But you're using it already.



I have learned the hard way to only use deduplication when it makes sense.



What were the makes and models of the 6 disks?  Of the SSD's?  If you
have a 'zpool status' console session from then, please post it.


They were (and still are) 6x4TB WD Red (though one or two have failed over time)
and two Samsung 850 PRO, IIRC.  I don't have an old session anymore.

These WD Red are slow to begin with.  IIRC, both SSDs failed and I removed them.

The other instance didn't use SSDs but 6x2TB HGST Ultrastar.  Those aren't
exactly slow but ZFS is slow.



Those HDD's should be fine with ZFS; but those SSD's are desktop drives, 
not cache devices.  That said, I am making the same mistake with Intel 
SSD 520 Series.  I have considered switching to one Intel Optane Memory 
Series and a PCIe 4x adapter card in each server.




MySQL appears to have the ability to use raw disks.  Tuned correctly,
this should give the best results:

https://dev.mysql.com/doc/refman/8.0/en/innodb-system-tablespace.html#innodb-raw-devices
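
From that page, the configuration is roughly of this shape (the device and size
are placeholders, and "newraw" is changed to "raw" after the first
initialization):

[mysqld]
innodb_data_home_dir=
innodb_data_file_path=/dev/sdb1:3Gnewraw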


Could mysql 5.6 already do that?  I'll have to see if mariadb can do that now
...



I do not know -- I do not run MySQL or Maria.




Re: removing file??

2022-11-10 Thread Amn

I did that, but I got this :

jamiil@ArbolOne:~$ sudo apt clean
jamiil@ArbolOne:~$ sudo apt install codeblocks-dev
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following NEW packages will be installed:
  codeblocks-dev
0 upgraded, 1 newly installed, 0 to remove and 322 not upgraded.
Need to get 522 kB of archives.
After this operation, 1,905 kB of additional disk space will be used.
Get:1 http://ftp.de.debian.org/debian sid/main amd64 codeblocks-dev 
amd64 20.03-3.1+b1 [522 kB]

Fetched 522 kB in 2s (313 kB/s)
(Reading database ... 258020 files and directories currently installed.)
Preparing to unpack .../codeblocks-dev_20.03-3.1+b1_amd64.deb ...
Unpacking codeblocks-dev:amd64 (20.03-3.1+b1) ...
dpkg: error processing archive
/var/cache/apt/archives/codeblocks-dev_20.03-3.1+b1_amd64.deb (--unpack):
 trying to overwrite '/usr/include/codeblocks/Alignment.h', which is also in
package codeblocks-headers 20.03
dpkg-deb: error: paste subprocess was killed by signal (Broken pipe)
Errors were encountered while processing:
 /var/cache/apt/archives/codeblocks-dev_20.03-3.1+b1_amd64.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)

On 2022-11-10 11:17 p.m., Peter von Kaehne wrote:

Why do you want to remove it?

You would remove it with sudo apt clean

At which point that directory should be empty, it is just a cache for install 
files

Sent from my phone. Please forgive misspellings and weird “corrections”


On 11 Nov 2022, at 03:33, Amn  wrote:

I am trying to remove this file : 
'/var/cache/apt/archives/codeblocks-dev_20.03-3_amd64.deb', but after trying, 
even as a 'su', but I am unable to. Any suggestion how to do this?





Re: else or Debian (Re: ZFS performance (was: Re: deduplicating file systems: VDO with Debian?))

2022-11-10 Thread Michael Stone

On Thu, Nov 10, 2022 at 08:32:36PM -0500, Dan Ritter wrote:

* RAID 5 and 6 restoration incurs additional stress on the other
 disks in the RAID which makes it more likely that one of them
 will fail.


I believe that's mostly apocryphal; I haven't seen science backing that 
up, and it hasn't been my experience either.



 The advantage of RAID 6 is that it can then recover
 from that...


The advantage to RAID 6 is that it can tolerate a double disk failure. 
With RAID 1 you need 3x your effective capacity to achieve that and even 
though storage has gotten cheaper, it hasn't gotten that cheap. (e.g., 
an 8 disk RAID 6 has the same fault tolerance as an 18 disk RAID 1 of 
equivalent capacity, ignoring pointless quibbling over probabilities.)
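
(The arithmetic behind that: an 8-disk RAID 6 leaves 6 disks of usable capacity
and survives any 2 failures; to survive any 2 failures with plain mirroring you
need 3 copies of everything, i.e. 6 x 3 = 18 disks for the same capacity.)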




Re: ZFS performance (was: Re: deduplicating file systems: VDO withDebian?)

2022-11-10 Thread Michael Stone

On Thu, Nov 10, 2022 at 06:55:27PM +0100, hw wrote:

On Thu, 2022-11-10 at 11:57 -0500, Michael Stone wrote:

On Thu, Nov 10, 2022 at 05:34:32PM +0100, hw wrote:
> And mind you, SSDs are *designed to fail* the sooner the more data you write
> to
> them.  They have their uses, maybe even for storage if you're so desperate,
> but
> not for backup storage.

It's unlikely you'll "wear out" your SSDs faster than you wear out your
HDs.



I have already done that.


Then you're either well into "not normal" territory and need to buy an 
SSD with better write longevity (which I seriously doubt for a backup 
drive) or you just got unlucky and got a bad copy (happens with 
anything) or you've misdiagnosed some other issue.




removing file??

2022-11-10 Thread Amn
I am trying to remove this file:
'/var/cache/apt/archives/codeblocks-dev_20.03-3_amd64.deb', but even after
trying as 'su' I am unable to. Any suggestion on how to do this?




Re: definiing deduplication

2022-11-10 Thread Stefan Monnier
>> Or are you referring to the data being altered while a backup is in
>> progress?
> Yes.  Data of different files or at different places in the same file
> may have relations which may become inconsistent during change operations
> until the overall change is complete.

Arguably this can be considered as a bug in the application (because
a failure in the middle could thus result in an inconsistent state).

> If you are unlucky you can even catch a plain text file that is only half
> stored.

Indeed, many such files are written in a non-atomic way.
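
The usual atomic pattern, for reference (a sketch with made-up paths: write to
a temporary file on the same filesystem, then rename over the target):

tmp=$(mktemp /srv/data/.novel.XXXXXX)   # temp file on the same filesystem as the target
cp novel-draft.txt "$tmp"
sync "$tmp"                             # flush the data before the rename (recent coreutils)
mv "$tmp" /srv/data/novel.txt           # rename() is atomic: readers see the old or the new file, never half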

> The risk for this is not 0 with filesystem snapshots, but it grows further
> if there is a time interval during which changes may or may not be copied
> into the backup, depending on filesystem internals and bad luck.

With snapshots, such problems can be considered application bugs, but if
you don't use snapshots, then your backup will not see "the state at time
T" but instead will see the state of different files at different times,
and in that case you can very easily see an inconsistent state even
without any bug in an application: the bug is in the backup
process itself.

If some part of your filesystem is frequently/constantly being modified,
then such inconsistent backups can be very common.


Stefan



Re: A question about encryption with Thunderbird on Debian bullseye

2022-11-10 Thread k6dedijon
Hello,
You speak of a popup; might that not be POP or POP3?
In that case it concerns incoming mail.
If so, try replacing POP with IMAP.

Hoping this has helped,
Cassis


- Original message -
From: Olivier backup my spare 
To: debian-user-french@lists.debian.org
Sent: Tue, 08 Nov 2022 07:06:02 +0100 (CET)
Subject: A question about encryption with Thunderbird on Debian bullseye

Hello

The users and I are running into an odd problem with S/MIME signing and
encryption of mail.
The employer provides each of us with a certificate for S/MIME signing.
Thunderbird offers OpenPGP by default even though the option
"Select automatically based on the available keys or certificates"
is checked.
Whatever we do, Thunderbird sets OpenPGP and the option to prefer
S/MIME is greyed out; it cannot be enabled.
As a consequence, when users compose a mail they have to choose S/MIME
and then the digital signature. If they forget, the mail stays in
"sending" with the popup.
The problem only occurs on bullseye. I have had no feedback for
Debian 10.
Do you know of a way to force the switch to S/MIME?

Regards



-- 
AI Infrastructure Manager / IT Asset Manager.
Centre d'économie S**
“It is possible to commit no errors and still lose. That is not a 
weakness. That is life.”
– Captain Jean-Luc Picard to Data



Re: else or Debian (Re: ZFS performance (was: Re: deduplicating file systems: VDO with Debian?))

2022-11-10 Thread Dan Ritter
Linux-Fan wrote: 
> I think the arguments of the RAID5/6 critics summarized were as follows:
> 
> * Running in a RAID level that is 5 or 6 degrades performance while
>   a disk is offline significantly. RAID 10 keeps most of its speed and
>   RAID 1 only degrades slightly for most use cases.
> 
> * During restore, RAID5 and 6 are known to degrade performance more compared
>   to restoring one of the other RAID levels.

* RAID 5 and 6 restoration incurs additional stress on the other
  disks in the RAID which makes it more likely that one of them
  will fail. The advantage of RAID 6 is that it can then recover
  from that...

* RAID 10 gets you better read performance in terms of both
  throughput and IOPS relative to the same number of disks in
  RAID 5 or 6. Most disk activity is reading.

> * Disk space has become so cheap that the savings of RAID5 may
>   no longer rectify the performance and reliability degradation
>   compared to RAID1 or 10.

I think that's decided on a case-by-case basis. Every situation is
different, and should be assessed for cost, reliability and
performance concerns.

> All of these arguments come from a “server” point of view where it is
> assumed that
> 
> (1) You win something by running the server so you can actually
> tell that there is an economic value in it. This allows for
> arguments like “storage is cheap” which may not be the case at
> all if you are using up some tightly limited private budget.
> 
> (2) Uptime and delivering the service is paramount. Hence there
> are some considerations regarding the online performance of
> the server while the RAID is degraded and while it is restoring.
> If you are fine to take your machine offline or accept degraded
> performance for prolonged times then this does not apply of
> course. If you do not value the uptime making actual (even
> scheduled) copies of the data may be recommendable over
> using a RAID because such schemes may (among other advantages)
> protect you from accidental file deletions, too.

Even in household situations, knowing that you could have traded $100
last year for a working computer right now is an incentive to set up
disk mirroring. If you're storing lots of data that other
people in the household depend on, that might factor in to your
decisions, too.

Everybody has a budget. Some have big budgets, and some have
small. The power of open source software is that we can make
opportunities open to people with small budgets that are
otherwise reserved for people with big budgets.

Most of the computers in my house have one disk. If I value any
data on that disk, I back it up to the server, which has 4 4TB
disks in ZFS RAID10. If a disk fails in that, I know I can
survive that and replace it within 24 hours for a reasonable
amount of money -- rather more reasonable in the last few
months.
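
ZFS spells that layout as a stripe of mirror vdevs, roughly (device names made
up):

zpool create tank mirror sda sdb mirror sdc sdd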

> > Is anyone still using ext4?  I'm not saying it's bad or anything, it
> > only seems that it has gone out of fashion.
> 
> IIRC it's still Debian's default. It's my file system of choice unless I have
> very specific reasons against it. I have never seen it fail outside of
> hardware issues. Performance of ext4 is quite acceptable out of the box.
> E.g. it seems to be slightly faster than ZFS for my use cases. Almost every
> Linux live system can read it. There are no problematic licensing or
> stability issues whatsoever. By its popularity it's probably one of the most
> widely-deployed Linux file systems which may enhance the chance that
> whatever problem you incur with ext4 someone else has had before...

All excellent reasons to use ext4.

-dsr-



Re: FTP question

2022-11-10 Thread Roberto José Blandino Cisneros
If that's the case, you can adjust the startup script so that it passes
those options as parameters.
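
If the stock Debian packaging (pure-ftpd-wrapper) is in use, each flag maps to
a small file under /etc/pure-ftpd/conf/ whose name is the option and whose
content is the value; a hedged sketch, with the exact names to be checked
against the pure-ftpd-wrapper man page:

echo yes | sudo tee /etc/pure-ftpd/conf/ChrootEveryone     # the -A flag
echo 50  | sudo tee /etc/pure-ftpd/conf/MaxClientsNumber   # the -c 50 flag
sudo systemctl restart pure-ftpd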

On Thu, 10 Nov 2022 at 12:43 p.m., Fernando Romero <
ffrcaraba...@gmail.com> wrote:

> Hi Roberto, thanks for your reply. I solved it by starting the service a
> different way.
> If I start it via systemctl, init.d or pure-ftpd itself it throws that
> error, but if I start it with the following line it works:
>
> pure-ftpd -A -c 50 -B -j -C 2 -E -f ftp -H -I 15 -l
> puredb:/etc/pureftpd.pdb  -L 2000:8 -m4 -s  -U 133:022 -u 33 -x -X -i -R -k
> 98 -Z
>
> Regards
>
>
>
> On Thu, 10 Nov 2022 at 13:58, Roberto José Blandino Cisneros (<
> rojobland...@gmail.com>) wrote:
>
>> Let me set up a VM and try it.  I have never used pure-ftpd; the one I have
>> used is vsftpd.
>>
>> Let me set up the environment and I'll get back to you.
>>
>> On Thu, 10 Nov 2022 at 6:36 a.m., Fernando Romero <
>> ffrcaraba...@gmail.com> wrote:
>>
>>> Hi all, I am running Debian 10 buster and pure-ftpd.
>>> I set up a user and password but I always get an authentication error;
>>> I try to connect from the console or with FileZilla and always get
>>> 530 Autentificación fallida, lo siento (authentication failed, sorry)
>>> Login failed.
>>> Remote system type is UNIX.
>>> Using binary mode to transfer files.
>>>
>>> I have changed the user several times and recreated the puredb database
>>> but I still cannot log in.
>>> Has anyone had this problem?
>>>
>>> If I run  pure-pw show usuario  it shows the user's data correctly, and I
>>> am not mistyping the password; I have changed it several times
>>>
>>> Login  : usuario
>>> Password   :
>>> $6$DZ/S57P55hv9EoZ0$F..2vyNe5pV8rJzF79MmFoDTHCrNouK/O81qBCX1IoZoDUOh8p8QhecYl9JoTAJjNEObiN4J6VgCrZwP4IWY./
>>> UID: 33 (www-data)
>>> GID: 33 (www-data)
>>> Directory  : /var/www/html/./
>>> Full name  :
>>> Download bandwidth : 0 Kb (unlimited)
>>> Upload   bandwidth : 0 Kb (unlimited)
>>> Max files  : 0 (unlimited)
>>> Max size   : 0 Mb (unlimited)
>>> Ratio  : 0:0 (unlimited:unlimited)
>>> Allowed local  IPs :
>>> Denied  local  IPs :
>>> Allowed client IPs :
>>> Denied  client IPs :
>>> Time restrictions  : - (unlimited)
>>> Max sim sessions   : 0 (unlimited)
>>>
>>> Regards
>>>
>>


Re: Surprising message

2022-11-10 Thread didier gaumet



Indeed, the Actalis CA certificate is installed by ca-certificates
(itself a dependency of apt, so probably present on virtually all
standard Debian installations), but it is not activated until
update-ca-certificates is run by root (I checked on an almost pristine
Debian 11 virtual machine I had lying around, in case I had played with
the certificates on my usual Debian installation without remembering it).


Thanks for the explanations :-)




Re: else or Debian (Re: ZFS performance (was: Re: deduplicating file systems: VDO with Debian?))

2022-11-10 Thread DdB

Am 10.11.2022 um 22:37 schrieb Linux-Fan:
> Ext4 still does not offer snapshots. The traditional way to do
> snapshots outside of fancy BTRFS and ZFS file systems is to add LVM
> to the equation although I do not have any useful experience with
> that. Specifically, I am not using snapshots at all so far, besides
> them being readily available on ZFS

Yes, although I am heavily dependent on zfs, my OS resides on an
nvme-ssd with ext4.  I rsync as need be to an image file that I keep on
compressed zfs, and which I snapshot, back up and so on, rather than
the ext4 partition itself. - Seems like a good compromise to me.
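
Roughly what that flow looks like, as a sketch with made-up dataset and path
names:

mount -o loop /tank/images/rootfs.img /mnt/rootfs    # image file kept on a compressed ZFS dataset
rsync -aHAX -x --delete / /mnt/rootfs/               # copy the ext4 root; -x stays on one filesystem
umount /mnt/rootfs
zfs snapshot tank/images@$(date +%F)                 # snapshot (and later replicate) the dataset holding the image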



Problem with card reader on Debian 11

2022-11-10 Thread Claudia Neumann
Hi all,

I programmed a library to read german electronic health cards from special 
devices
certified in Germany.

After an update from Debian 10 to Debian 11 one of these card readers reads 
only 64
bytes using /dev/ttyACM0. It should read 256 Bytes which it did from 2010 on.

Something must have changed from Debian 10 to Debian 11. Is there a
configuration where I can change this behaviour? I don't know which package to
blame for the change and what kind of information I should give you.

Please help

Kind regards

Claudia Neumann


Re: else or Debian (Re: ZFS performance (was: Re: deduplicating file systems: VDO with Debian?))

2022-11-10 Thread Linux-Fan

hw writes:


On Wed, 2022-11-09 at 19:17 +0100, Linux-Fan wrote:
> hw writes:
> > On Wed, 2022-11-09 at 14:29 +0100, didier gaumet wrote:
> > > Le 09/11/2022 à 12:41, hw a écrit :


[...]


> > I'd
> > have to use mdadm to create a RAID5 (or use the hardware RAID but that  
> > isn't

>
> AFAIK BTRFS also includes some integrated RAID support such that you do  
> not necessarily need to pair it with mdadm.


Yes, but RAID56 is broken in btrfs.

> It is advised against using for RAID 
> 5 or 6 even in most recent Linux kernels, though:
>
> 
https://btrfs.readthedocs.io/en/latest/btrfs-man5.html#raid56-status-and-recommended-practices

Yes, that's why I would have to use btrfs on mdadm when I want to make a  
RAID5.

That kinda sucks.

> RAID 5 and 6 have their own issues you should be aware of even when  
> running 

> them with the time-proven and reliable mdadm stack. You can find a lot of 
> interesting results by searching for “RAID5 considered harmful” online.  
> This 
> one is the classic that does not seem to make it to the top results,  
> though:


Hm, really?  The only time that RAID5 gave me trouble was when the hardware  


[...]

I have never used RAID5 so how would I know :)

I think the arguments of the RAID5/6 critics summarized were as follows:

* Running in a RAID level that is 5 or 6 degrades performance while
  a disk is offline significantly. RAID 10 keeps most of its speed and
  RAID 1 only degrades slightly for most use cases.

* During restore, RAID5 and 6 are known to degrade performance more compared
  to restoring one of the other RAID levels.

* Disk space has become so cheap that the savings of RAID5 may
  no longer rectify the performance and reliability degradation
  compared to RAID1 or 10.

All of these arguments come from a “server” point of view where it is  
assumed that


(1) You win something by running the server so you can actually
tell that there is an economic value in it. This allows for
arguments like “storage is cheap” which may not be the case at
all if you are using up some tightly limited private budget.

(2) Uptime and delivering the service is paramount. Hence there
are some considerations regarding the online performance of
the server while the RAID is degraded and while it is restoring.
If you are fine to take your machine offline or accept degraded
performance for prolonged times then this does not apply of
course. If you do not value the uptime making actual (even
scheduled) copies of the data may be recommendable over
using a RAID because such schemes may (among other advantages)
protect you from accidental file deletions, too.

Also note that in today's computing landscape, not all unwanted file  
deletions are accidental. With the advent of “crypto trojans” adversaries  
exist that actually try to encrypt or delete your data to extort a ransom.


More than one disk can fail?  Sure can, and it's one of the reasons why I
make backups.

You also have to consider costs.  How much do you want to spend on storage
and on backups?  And do you want to make yourself crazy worrying about your
data?


I am pretty sure that if I separate my PC into GPU, CPU, RAM and Storage, I  
spent most on storage actually. Well established schemes of redundancy and  
backups make me worry less about my data.


I still worry enough about backups to have written my own software:
https://masysma.net/32/jmbb.xhtml
and that I am also evaluating new developments in that area to probably  
replace my self-written program by a more reliable (because used by more  
people!) alternative:

https://masysma.net/37/backup_tests_borg_bupstash_kopia.xhtml


> https://www.baarf.dk/BAARF/RAID5_versus_RAID10.txt
>
> If you want to go with mdadm (irrespective of RAID level), you might also 
> consider running ext4 and trade the complexity and features of the  
> advanced file systems for a good combination of stability and support.


Is anyone still using ext4?  I'm not saying it's bad or anything, it only  
seems that it has gone out of fashion.


IIRC it's still Debian's default. It's my file system of choice unless I have
very specific reasons against it. I have never seen it fail outside of  
hardware issues. Performance of ext4 is quite acceptable out of the box.  
E.g. it seems to be slightly faster than ZFS for my use cases.  
Almost every Linux live system can read it. There are no problematic  
licensing or stability issues whatsoever. By its popularity it's probably one
of the most widely-deployed Linux file systems which may enhance the chance  
that whatever problem you incur with ext4 someone else has had before...



I'm considering using snapshots.  Ext4 didn't have those last time I checked.


Ext4 still does not offer snapshots. The traditional way to do snapshots  
outside of fancy BTRFS and ZFS file systems is to add LVM to the equation  
although I do not have any useful experience with that. Specifically, I am  
not 

Re: Surprising message

2022-11-10 Thread Olivier backup my spare

Hello

The CA certificates shipped as standard with web browsers and email
clients are there to verify that the certificate behind an S/MIME
signature or encryption is actually legitimate.


Not all CA certificates are installed by default. In fact the
ca-certificates package says "common certificates" in its description,
not "all certificates".

For example, my employer's is not installed by default.
The one I use with my supervising authority, Swiss-sign, was not
recognized by every OS.

However, the CA certificates are updated regularly.
The packages similar to ca-certificates are:
ca-cacert
python-certifi
python3-certifi
sscg
golang-github-google-certificate-transparency-dev
certmonger
pki-ocsp
python-srp
gnomint
python3-srp
python3-certbot-dns-rfc2136.

If a correspondent uses an uncommon certificate, you can add it
yourself, after first verifying the CA on https://ssl-tools.net/
This can be long and tedious. Swiss-sign has at least 20 of them. I gave up.

The method is simple:
Linux (Ubuntu, Debian)
To add one:
Copy the certificate into /usr/local/share/ca-certificates/
Use the command: sudo cp foo.crt
/usr/local/share/ca-certificates/foo.crt

To update the store: sudo update-ca-certificates
To remove one:
Delete the CA
Update the CA store: sudo update-ca-certificates --fresh

Regards

On 10/11/2022 at 18:04, didier gaumet wrote:

On 10/11/2022 at 15:27, Olivier Back my spare wrote:
[...]

You can consult the requested root certificate here:
https://ssl-tools.net/subjects/b0e31e6fe1b4e58b38cd4664dd9184b2eead11f6

[...]

A layman's question (since I know nothing about this): will that solve the problem?
My impression was that the certificate of the certification authority
(CA) was very probably already present on the system (it seems to be
included in the ca-certificates package, which is a dependency of a good
share of web browsers and e-mail clients)


I was wondering whether it wasn't rather a security prompt from a mail
client or browser asking the user (Philippe) to mark this already-known
CA certificate as trusted or not (so as to validate certificates issued
by this CA)?


But perhaps that hypothesis is far-fetched :-)



--
AI Infrastructure Manager / IT Asset Manager.
Centre d'économie S**
“It is possible to commit no errors and still lose. That is not a 
weakness. That is life.”

– Captain Jean-Luc Picard to Data




exim4 vs. frontier.com

2022-11-10 Thread mike . junk . 46
I've struggled off and on for months to get outbound mail via exim4 through 
frontier.com with no joy.
I'm on a single user system using mutt and exim4 plus fetchmail. Inbound is no 
problem.
Outbound I see this in /var/log/exim4/mainlog:
554 5.7.1 <>: Sender address rejected: Access denied
/etc/email-addresses has the proper frontier email address in it.
From a web search I created /etc/exim4/conf.d/rewrite/10_from_rewrite
containing this line:
*  "$header_from:" F
This supposedly tells exim4 to set the Envelope header the same as the From 
header.
I think Sent = Envelope headers, admittedly not sure about that.
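
One way to check what exim actually does with the sender (the address below is
a placeholder), and to pick up conf.d edits afterwards, is roughly:

sudo exim4 -brw someuser@frontier.com                     # show how this address would be rewritten
sudo update-exim4.conf && sudo systemctl restart exim4    # regenerate the config after editing conf.d

It may also be worth making sure /etc/exim4/passwd.client has an entry for the
Frontier smarthost, since "Access denied" often points at missing SMTP
authentication rather than at the headers.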

If there is anyone on the list who has exim4 talking to Frontier.com please 
help.

Thanks,
Mike



Re: FTP question

2022-11-10 Thread Fernando Romero
Hi Roberto, thanks for your reply. I solved it by starting the service a
different way.
If I start it via systemctl, init.d or pure-ftpd itself it throws that
error, but if I start it with the following line it works:

pure-ftpd -A -c 50 -B -j -C 2 -E -f ftp -H -I 15 -l
puredb:/etc/pureftpd.pdb  -L 2000:8 -m4 -s  -U 133:022 -u 33 -x -X -i -R -k
98 -Z

Regards



On Thu, 10 Nov 2022 at 13:58, Roberto José Blandino Cisneros (<
rojobland...@gmail.com>) wrote:

> Let me set up a VM and try it.  I have never used pure-ftpd; the one I have
> used is vsftpd.
>
> Let me set up the environment and I'll get back to you.
>
> On Thu, 10 Nov 2022 at 6:36 a.m., Fernando Romero <
> ffrcaraba...@gmail.com> wrote:
>
>> Hi all, I am running Debian 10 buster and pure-ftpd.
>> I set up a user and password but I always get an authentication error;
>> I try to connect from the console or with FileZilla and always get
>> 530 Autentificación fallida, lo siento (authentication failed, sorry)
>> Login failed.
>> Remote system type is UNIX.
>> Using binary mode to transfer files.
>>
>> I have changed the user several times and recreated the puredb database
>> but I still cannot log in.
>> Has anyone had this problem?
>>
>> If I run  pure-pw show usuario  it shows the user's data correctly, and I
>> am not mistyping the password; I have changed it several times
>>
>> Login  : usuario
>> Password   :
>> $6$DZ/S57P55hv9EoZ0$F..2vyNe5pV8rJzF79MmFoDTHCrNouK/O81qBCX1IoZoDUOh8p8QhecYl9JoTAJjNEObiN4J6VgCrZwP4IWY./
>> UID: 33 (www-data)
>> GID: 33 (www-data)
>> Directory  : /var/www/html/./
>> Full name  :
>> Download bandwidth : 0 Kb (unlimited)
>> Upload   bandwidth : 0 Kb (unlimited)
>> Max files  : 0 (unlimited)
>> Max size   : 0 Mb (unlimited)
>> Ratio  : 0:0 (unlimited:unlimited)
>> Allowed local  IPs :
>> Denied  local  IPs :
>> Allowed client IPs :
>> Denied  client IPs :
>> Time restrictions  : - (unlimited)
>> Max sim sessions   : 0 (unlimited)
>>
>> Regards
>>
>


Re: weird directory entry on ZFS volume (Re: else or Debian (Re: ZFS performance (was: Re: deduplicating file systems: VDO with Debian?)))

2022-11-10 Thread Greg Wooledge
On Thu, Nov 10, 2022 at 06:54:31PM +0100, hw wrote:
> Ah, yes.  I tricked myself because I don't have hd installed,

It's just a symlink to hexdump.

lrwxrwxrwx 1 root root 7 Jan 20  2022 /usr/bin/hd -> hexdump

unicorn:~$ dpkg -S usr/bin/hd
bsdextrautils: /usr/bin/hd
unicorn:~$ dpkg -S usr/bin/hexdump
bsdextrautils: /usr/bin/hexdump

> It's an ancient Gentoo

A.  Anyway, from the Debian man page:

   -C, --canonical
  Canonical  hex+ASCII display.  Display the input offset in hexa‐
  decimal, followed by sixteen space-separated, two-column,  hexa‐
  decimal  bytes, followed by the same sixteen bytes in %_p format
  enclosed in '|' characters.  Invoking the program as hd  implies
  this option.

Why on earth the default format of "hexdump" uses that weird 16-bit
little endian nonsense is beyond me.
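
For comparison, the same four bytes through both formats look something like
this (exact spacing may differ):

$ printf 'waht' | hexdump       # default: 16-bit words, shown little-endian
0000000 6177 7468
0000004
$ printf 'waht' | hexdump -C    # canonical: bytes in input order, plus ASCII
00000000  77 61 68 74                                       |waht|
00000004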



Re: ZFS performance (was: Re: deduplicating file systems: VDO withDebian?)

2022-11-10 Thread hw
On Thu, 2022-11-10 at 11:57 -0500, Michael Stone wrote:
> On Thu, Nov 10, 2022 at 05:34:32PM +0100, hw wrote:
> > And mind you, SSDs are *designed to fail* the sooner the more data you write
> > to
> > them.  They have their uses, maybe even for storage if you're so desperate,
> > but
> > not for backup storage.
> 
> It's unlikely you'll "wear out" your SSDs faster than you wear out your 
> HDs.
> 

I have already done that.



Re: weird directory entry on ZFS volume (Re: else or Debian (Re: ZFS performance (was: Re: deduplicating file systems: VDO with Debian?)))

2022-11-10 Thread hw
On Thu, 2022-11-10 at 09:30 -0500, Greg Wooledge wrote:
> On Thu, Nov 10, 2022 at 02:48:28PM +0100, hw wrote:
> > On Thu, 2022-11-10 at 07:03 -0500, Greg Wooledge wrote:
> 
> [...]
> > printf '%s\0' * | hexdump
> > 000 00c2 6177 7468     
> > 007
> 
> I dislike this output format, but it looks like there are two files
> here.  The first is 0xc2, and the second is 0x77 0x61 0x68 0x74 if
> I'm reversing and splitting the silly output correctly.  (This spells
> "waht", if I got it right.)
> > 

Ah, yes.  I tricked myself because I don't have hd installed, so I redirected
the output of printf into a file --- which I wanted to name 'what' but I
mistyped as 'waht' --- so I could load it into emacs and use hexl-mode.  But the
display kinda sucked and I found I have hexdump installed and used that. 
Meanwhile I totally forgot about the file I had created.

> [...]
> > 
> The file in question appears to have a name which is the single byte 0xc2.
> Since that's not a valid UTF-8 character, ls chooses something to display
> instead.  In your case, it chose a '?' character.

I'm the only one who can create files there, and I didn't create that.  Using
0xc2 as a file name speaks loudly against that I'd create that file
accidentially.

>   I'm guessing this is on
> an older release of Debian.

It's an ancient Gentoo which couldn't be updated in years because they broke the
update process.  Back then, Gentoo was the only Linux distribution that didn't
need fuse for ZFS that I could find.

> In my case, it does this:
> 
> unicorn:~$ mkdir /tmp/x && cd "$_"
> unicorn:/tmp/x$ touch $'\xc2'
> unicorn:/tmp/x$ ls -la
> total 80
> -rw-r--r--  1 greg greg 0 Nov 10 09:21 ''$'\302'
> drwxr-xr-x  2 greg greg  4096 Nov 10 09:21  ./
> drwxrwxrwt 20 root root 73728 Nov 10 09:21  ../
> 
> In my version of ls, there's a --quoting-style= option that can help
> control what you see.  But that's a tangent you can explore later.
> 
> Since we know the actual name of the file (subdirectory) now, let's just
> rename it to something sane.
> 
> mv $'\xc2' subdir
> 
> Then you can investigate it, remove it, or do whatever else you want.

Cool, I've renamed it, thank you very much :)  I'm afraid that the file system
will crash when I remove it ...  It's an empty directory.  Ever since I noticed
it, I couldn't do anything with it and I thought it's some bug in the file
system.



Re: deduplicating file systems: VDO with Debian?

2022-11-10 Thread hw
On Wed, 2022-11-09 at 14:22 +0100, Nicolas George wrote:
> hw (12022-11-08):
> > When I want to have 2 (or more) generations of backups, do I actually want
> > deduplication?  It leaves me with only one actual copy of the data which
> > seems
> > to defeat the idea of having multiple generations of backups at least to
> > some
> > extent.
> 
> The idea of having multiple generations of backups is not to have the
> data physically present in multiple places, this is the role of RAID.
> 
> The idea if having multiple generations of backups is that if you
> accidentally overwrite half your almost-completed novel with lines of
> ALL WORK AND NO PLAY MAKES JACK A DULL BOY and the backup tool runs
> before you notice it, you still have the precious data in the previous
> generation.

Nicely put :)

Let me rephrase a little:

How likely is it that a storage volume (not the underlying media, like discs in
a RAID array) would become unreadable in only some places so that it could be an
advantage to have multiple copies of the same data on the volume?

It's like I can't help unconsciously thinking that it's an advantage to have
multiple copies on a volume for any other reason than not to overwrite
the almost complete novel.  At the same time, I find it difficult to imagine how
a volume could get damaged only in some places, and I don't see other reasons
than that.

Ok, another reason to keep multiple full copies on a volume is making things
simple, easy and thus perhaps more reliable than more complicated solutions.  At
least that's an intention.  But it costs a lot of disk space.



Re: Message surprenant

2022-11-10 Thread didier gaumet

On 10/11/2022 at 15:27, Olivier Back my spare wrote:
[...]

You can consult the requested root certificate
https://ssl-tools.net/subjects/b0e31e6fe1b4e58b38cd4664dd9184b2eead11f6

[...]

A layman's question (since I know nothing about this): will that solve the
problem?  My impression was that the certificate of the certification
authority (CA) was very probably already present on the system (it seems to
be included in the ca-certificates package, which is a dependency of a good
share of web browsers and e-mail clients).


I was wondering whether this might instead be a security prompt from a mail
client or a browser asking the user (Philippe) to mark this already-known CA
certificate as trusted or not (in order to certify the certificates claiming
to descend from this CA)?


But maybe that's a far-fetched hypothesis :-)



Re: ZFS performance (was: Re: deduplicating file systems: VDO with Debian?)

2022-11-10 Thread Michael Stone

On Thu, Nov 10, 2022 at 05:34:32PM +0100, hw wrote:

And mind you, SSDs are *designed to fail* the sooner the more data you write to
them.  They have their uses, maybe even for storage if you're so desperate, but
not for backup storage.


It's unlikely you'll "wear out" your SSDs faster than you wear out your 
HDs.




Re: Consulta ftp

2022-11-10 Thread Roberto José Blandino Cisneros
Let me set up a VM and try it.  I have never used pure-ftpd; the one I have
used is vsftpd.

Let me set up the environment and I'll get back to you.

On Thu, 10 Nov 2022, 6:36 a.m., Fernando Romero <
ffrcaraba...@gmail.com> wrote:

> Hi all, I'm running Debian 10 buster and pure-ftpd.
> I configured a user and password but I always get an authentication error;
> I try to connect from the console or with FileZilla and always get the error
> 530 Autentificación fallida, lo siento (authentication failed, sorry)
> Login failed.
> Remote system type is UNIX.
> Using binary mode to transfer files.
>
> I changed the user several times and recreated the puredb database but I
> can't manage to log in.
> Has anyone had this problem?
>
> If I run pure-pw show usuario it shows the user's data correctly, and I'm
> not typing the password wrong; I changed it several times.
>
> Login  : usuario
> Password   :
> $6$DZ/S57P55hv9EoZ0$F..2vyNe5pV8rJzF79MmFoDTHCrNouK/O81qBCX1IoZoDUOh8p8QhecYl9JoTAJjNEObiN4J6VgCrZwP4IWY./
> UID: 33 (www-data)
> GID: 33 (www-data)
> Directory  : /var/www/html/./
> Full name  :
> Download bandwidth : 0 Kb (unlimited)
> Upload   bandwidth : 0 Kb (unlimited)
> Max files  : 0 (unlimited)
> Max size   : 0 Mb (unlimited)
> Ratio  : 0:0 (unlimited:unlimited)
> Allowed local  IPs :
> Denied  local  IPs :
> Allowed client IPs :
> Denied  client IPs :
> Time restrictions  : - (unlimited)
> Max sim sessions   : 0 (unlimited)
>
> Regards
>


Re: ZFS performance (was: Re: deduplicating file systems: VDO with Debian?)

2022-11-10 Thread hw
On Thu, 2022-11-10 at 10:47 +0100, DdB wrote:
> Am 10.11.2022 um 06:38 schrieb David Christensen:
> > What is your technique for defragmenting ZFS?
> well, that was meant more or less a joke: there is none apart from
> offloading all the data, destroying and rebuilding the pool, and filling
> it again from the backup. But i do it from time to time if fragmentation
> got high, the speed improvements are obvious. OTOH the process takes
> days on my SOHO servers
> 

Does it save you days so that you save more time than you spend on defragmenting
it because access is faster?

Perhaps after so many days of not defragging, but how many days?

Maybe use an archive pool that doesn't get deleted from?



Re: ZFS performance (was: Re: deduplicating file systems: VDO with Debian?)

2022-11-10 Thread hw
On Thu, 2022-11-10 at 02:19 -0500, gene heskett wrote:
> On 11/10/22 00:37, David Christensen wrote:
> > On 11/9/22 00:24, hw wrote:
> >  > On Tue, 2022-11-08 at 17:30 -0800, David Christensen wrote:
> 
> [...]
> Which brings up another suggestion in two parts:
> 
> 1: use amanda, with tar and compression to reduce the size of the 
> backups.  And use a backup cycle of a week or 2 because amanda will if 
> advancing a level, only backup that which has been changed since the 
> last backup. On a quiet system, a level 3 backup for a 50gb network of 
> several machines can be under 100 megs. More on a busy system of course.
> Amanda keeps track of all that automatically.

Amanda is nice, yet quite unwieldy (try to get a file out of the backups ...). 
I used it long time ago (with tapes) and I'd have to remember or re-learn how to
use amanda to back up particular directories and such ...
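
(For reference, per-directory selection in Amanda lives in the disklist file
and restores go through amrecover; the config name, hostname and dumptype
below are invented examples, not a tested setup:

# /etc/amanda/daily/disklist  -- hypothetical config called "daily"
backuphost.example.org  /home      comp-user-tar
backuphost.example.org  /srv/data  comp-user-tar

amdump daily        # run one backup cycle for that config
amrecover daily     # interactive browse/add/extract of backed-up files
)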

I think I might be better off learning more about snapshots.

> 2: As disks fail, replace them with SSD's which use much less power than 
> spinning rust. And they are typically 5x faster than commodity spinning 
> rust.

Is this a joke?

https://www.dell.com/en-us/shop/visiontek-16tb-class-qlc-7mm-25-ssd/apd/ab329068/storage-drives-media

Cool, 30% discount on black friday saves you $2280 for every pair of disks, and
it even starts right now.  (Do they really mean that? What if I had a datacenter
and ordered 512 or so of them?  I'd save almost $1.2 million, what a great
deal!)

And mind you, SSDs are *designed to fail* the sooner the more data you write to
them.  They have their uses, maybe even for storage if you're so desperate, but
not for backup storage.

> Here, and historically with spinning rust, backing up 5 machines, at 3am 
> every morning is around 10gb total and under 45 minutes. This includes 
> the level 0's it does by self adjusting the schedule to spread the level 
> 0's, AKA the fulls, out over the backup cycle so the amount of storage 
> used for any one backup run is fairly consistent.

That's almost half a month for 4TB.  Why does it take so long?



Re: ZFS performance (was: Re: deduplicating file systems: VDO with Debian?)

2022-11-10 Thread hw
On Wed, 2022-11-09 at 21:36 -0800, David Christensen wrote:
> On 11/9/22 00:24, hw wrote:
>  > On Tue, 2022-11-08 at 17:30 -0800, David Christensen wrote:
> 
>  > Hmm, when you can backup like 3.5TB with that, maybe I should put 
> FreeBSD on my
>  > server and give ZFS a try.  Worst thing that can happen is that it 
> crashes and
>  > I'd have made an experiment that wasn't successful.  Best thing, I 
> guess, could
>  > be that it works and backups are way faster because the server 
> doesn't have to
>  > actually write so much data because it gets deduplicated and reading 
> from the
>  > clients is faster than writing to the server.
> 
> 
> Be careful that you do not confuse a ~33 GiB full backup set, and 78 
> snapshots over six months of that same full backup set, with a full 
> backup of 3.5 TiB of data.  I would suggest a 10 GiB pool to backup the 
> latter.

The full backup isn't deduplicated?

> Writing to a ZFS filesystem with deduplication is much slower than 
> simply writing to, say, an ext4 filesystem -- because ZFS has to hash 
> every incoming block and see if it matches the hash of any existing 
> block in the destination pool.  Storing the existing block hashes in a 
> dedicated dedup virtual device will expedite this process.

But when it needs to write almost nothing because almost everything gets
deduplicated, can't it be faster than having to write everything?

>  >> I run my backup script each night.  It uses rsync to copy files and
>  >
>  > Aww, I can't really do that because my servers eats like 200-300W 
> because it has
>  > so many disks in it.  Electricity is outrageously expensive here.
> 
> 
> Perhaps platinum rated power supplies?  Energy efficient HDD's/ SSD's?

If you pay for it ... :)

Running it once in a while for some hours to make backups is still possible. 
Replacing the hardware is way more expensive.

> [...]
>  > Sounds like a nice setup.  Does that mean you use snapshots to keep 
> multiple
>  > generations of backups and make backups by overwriting everything 
> after you made
>  > a snapshot?
> 
> Yes.

I start thinking more and more that I should make use of snapshots.
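
For reference, the basic workflow is only a handful of commands; the dataset
and snapshot names below are made up:

zfs snapshot tank/data@2022-11-10          # point-in-time snapshot, nearly free
zfs list -t snapshot -r tank/data          # list existing snapshots
zfs diff tank/data@2022-11-10              # what changed since the snapshot
zfs rollback tank/data@2022-11-10          # revert the dataset to the snapshot
zfs destroy tank/data@2022-11-10           # drop it when no longer needed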

>  > In that case, is deduplication that important/worthwhile?  You're not
>  > duplicating it all by writing another generation of the backup but 
> store only
>  > what's different through making use of the snapshots.
> 
> Without deduplication or compression, my backup set and 78 snapshots 
> would require 3.5 TiB of storage.  With deduplication and compression, 
> they require 86 GiB of storage.

Wow that's quite a difference!  What makes this difference, the compression or
the deduplication?  When you have snapshots, you would store only the
differences from one snapshot to the next, and that would mean that there aren't
so many duplicates that could be deduplicated.
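
ZFS reports the two effects separately, so the split can be checked directly;
the dataset and pool names here are placeholders:

zfs get compressratio,used,logicalused tank/backup   # compression effect per dataset
zpool get dedupratio tank                            # pool-wide deduplication factor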

>  > ... I only never got around to figure [ZFS snapshots] out because I 
> didn't have the need.
> 
> 
> I accidentally trash files on occasion.  Being able to restore them 
> quickly and easily with a cp(1), scp(1), etc., is a killer feature.

indeed

> Users can recover their own files without needing help from a system 
> administrator.

You have users who know how to get files out of snapshots?
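
(For reference, ZFS makes this fairly approachable: every mounted dataset
exposes a hidden .zfs/snapshot directory, so a restore is an ordinary copy.
The paths below are invented:

ls /tank/home/alice/.zfs/snapshot/                              # one subdir per snapshot
cp /tank/home/alice/.zfs/snapshot/daily-2022-11-09/thesis.tex ~/thesis.tex
zfs set snapdir=visible tank/home/alice                         # optionally show .zfs in listings
)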

>  > But it could also be useful for "little" things like taking a 
> snapshot of the
>  > root volume before updating or changing some configuration and being 
> able to
>  > easily to undo that.
> 
> 
> FreeBSD with ZFS-on-root has a killer feature called "Boot Environments" 
> that has taken that idea to the next level:
> 
> https://klarasystems.com/articles/managing-boot-environments/

That's really cool.  Linux is missing out on a lot by treating ZFS as an alien.
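
For reference, a boot-environment round trip on FreeBSD is roughly the
following sketch, using the base-system bectl tool (the environment name is
invented):

bectl list                   # show existing boot environments
bectl create pre-upgrade     # clone the current root as a fallback
# ... run the risky upgrade or config change ...
bectl activate pre-upgrade   # if it went wrong, boot the old environment
shutdown -r now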

I guess btrfs could, in theory, make something like boot environments possible,
but you can't even really boot from btrfs because it'll fail to boot as soon as
the boot volume is degraded, like when a disc has failed, and then you're
screwed because you can't log in through ssh to fix anything but have to
actually go to the machine to get it back up.  That's a non-option and you have
to use something else than btrfs to boot from.

>  >> I have 3.5 TiB of backups.
> 
> 
> It is useful to group files with similar characteristics (size, 
> workload, compressibility, duplicates, backup strategy, etc.) into 
> specific ZFS filesystems (or filesystem trees).  You can then adjust ZFS 
> properties and backup strategies to match.

That's a good idea.
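
As a sketch of that grouping (dataset names and property choices are only
illustrative):

zfs create -o compression=lz4 -o dedup=on  tank/backup/homes   # lots of duplicate text files
zfs create -o compression=off -o dedup=off tank/backup/media   # already-compressed archives
zfs set recordsize=1M tank/backup/media                        # large sequential files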

>   For compressed and/or encrypted archives, image, etc., I do not use
>   compression or de-duplication
>  >>>
>  >>> Yeah, they wouldn't compress.  Why no deduplication?
>  >>
>  >>
>  >> Because I very much doubt that there will be duplicate blocks in 
> such files.
>  >
>  > Hm, would it hurt?
> 
> 
> Yes.  ZFS deduplication is resource intensive.

But you're using it already.

>  > Oh it's not about performance when degraded, but about performance. 
> IIRC when
>  > you have a ZFS pool that uses the equivalent 

Re: deduplicating file systems: VDO with Debian?

2022-11-10 Thread d-u
On Wed, 09 Nov 2022 13:28:46 +0100
hw  wrote:

> On Tue, 2022-11-08 at 09:52 +0100, DdB wrote:
> > Am 08.11.2022 um 05:31 schrieb hw:  
> > > > That's only one point.  
> > > What are the others?
> > >   
> > > >  And it's not really some valid one, I think, as 
> > > > you do typically not run into space problems with one single
> > > > action (YMMV). Running multiple sessions and out-of-band
> > > > deduplication between them works for me.  
> > > That still requires you to have enough disk space for at least
> > > two full backups.
> > > I can see it working for three backups because you can
> > > deduplicate the first two, but not for two.  And why would I
> > > deduplicate when I have sufficient disk
> > > space.
> > >   
> > Your wording likely confuses 2 different concepts:  
> 
> No, I'm not confusing that :)  Everyone says so and I don't know
> why ...
> 
> > Deduplication avoids storing identical data more than once.
> > whereas
> > Redundancy stores information on more than one place on purpose to
> > avoid loss of data in case of havoc.
> > ZFS can do both, as it combines the features of a volume manager
> > with those of a filesystem and a software RAID.( I am using
> > zfsonlinux since its early days, for over 10 years now, but without
> > dedup. )
> > 
> > In the past, i used shifting/rotating external backup media for that
> > purpose, because, as the saying goes: RAID is NOT a backup! Today, i
> > have a second server only for the backups, using zfs as well, which
> > allows for easy incremental backups, minimizing traffic and disk
> > usage.
> > 
> > but you should be clear as to what you want: redundancy or
> > deduplication?  
> 
> The question is rather if it makes sense to have two full backups on
> the same machine for redundancy and to be able to go back in time, or
> if it's better to give up on redundancy and to have only one copy and
> use snapshots or whatever to be able to go back in time.

And the answer is no. The redundancy you gain from this is almost,
though not quite, meaningless, because of the large set of common
data-loss scenarios against which it offers no protection. You've made
it clear that the cost of storage media is a problem in your situation.
Doubling your backup server's requirement for scarce and expensive disk
space in order to gain a tiny fraction of the resiliency that's
normally implied by "redundancy" doesn't make sense. And being able to
go "back in time" can be achieved much more efficiently by using a
solution (be it off-the-shelf or roll-your-own) that starts with a full
backup and then just stores deltas of changes over time (aka incremental
backups). None of this, for the record, is "deduplication", and I
haven't seen any indication in this thread so far that actual
deduplication is relevant to your use case.
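
A minimal sketch of that "full plus deltas" approach with plain rsync and
hard links, where unchanged files cost no extra space (paths and dates are
placeholders):

# yesterday's tree serves as the reference; unchanged files become hard links
rsync -a --delete \
      --link-dest=/backup/2022-11-09 \
      /data/  /backup/2022-11-10/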

> Of course it would better to have more than one machine, but I don't
> have that.

Fine, just be realistic about the fact that this means you cannot in
any meaningful sense have "two full backups" or "redundancy". If and
when you can some day devote an RPi tethered to some disks to the job,
then you can set it up to hold a second, completely independent,
store of "full backup plus deltas". And *then* you would have
meaningful redundancy that offers some real resilience. Even better if
the second one is physically offsite. 

In the meantime, storing multiple full copies of your data on one
backup server is just a way to rapidly run out of disk space on your
backup server for essentially no reason.


Cheers!
 -Chris



Re: else or Debian (Re: ZFS performance (was: Re: deduplicating file systems: VDO with Debian?))

2022-11-10 Thread Dan Ritter
Brad Rogers wrote: 
> On Thu, 10 Nov 2022 08:48:43 -0500
> Dan Ritter  wrote:
> 
> Hello Dan,
> 
> >8 is not a magic number.
> 
> Clearly, you don't read Terry Pratchett.   :-)

In the context of ZFS, 8 is not a magic number.

May you be ridiculed by Pictsies.

-dsr-



Re: deduplicating file systems: VDO with Debian?

2022-11-10 Thread Curt
On 2022-11-10, Nicolas George  wrote:
> Curt (12022-11-10):
>> Why restate it then needlessly? 
>
> To NOT state that you were wrong when you were not.
>
> This branch of the discussion bores me. Goodbye.
>

This isn't solid enough for a branch. It couldn't support a hummingbird.
And me too! That old ennui! Adieu!






Re: Increased read IO wait times after Bullseye upgrade

2022-11-10 Thread Gareth Evans
On Thu 10 Nov 2022, at 11:36, Gareth Evans  wrote:
[...]
> I might be barking up the wrong tree ...

But simpler inquiries first.

I was wondering if MD might be too high-level to cause what does seem more like 
a "scheduley" issue -

https://www.thomas-krenn.com/de/wikiDE/images/d/d0/Linux-storage-stack-diagram_v4.10.pdf

but, apparently, even relatively "normal" processes can cause high iowait too:

NFS...
https://www.howtouselinux.com/post/troubleshoot-high-iowait-issue-on-linux-system
https://www.howtouselinux.com/post/use-linux-nfsiostat-to-troubleshoot-nfs-performance-issue

SSH...
Use iotop to find io-hogs...
https://www.howtouselinux.com/post/quick-guide-to-fix-linux-iowait-issue
https://www.howtouselinux.com/post/check-disk-io-usage-per-process-with-iotop-on-linux
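
For example (iotop, sysstat and procps provide these tools):

iotop -oPa       # only processes actually doing I/O, accumulated totals
iostat -dx 5     # per-device utilisation, await and queue size every 5s
pidstat -d 5     # per-process read/write rates
vmstat 5         # the 'wa' column is the same iowait figure top shows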

What's a typical wa value (%) from top?

Thanks,
G






Re: deduplicating file systems: VDO with Debian?

2022-11-10 Thread Nicolas George
Curt (12022-11-10):
> Why restate it then needlessly? 

To NOT state that you were wrong when you were not.

This branch of the discussion bores me. Goodbye.

-- 
  Nicolas George



Re: deduplicating file systems: VDO with Debian?

2022-11-10 Thread Curt
On 2022-11-10, Nicolas George  wrote:
> Curt (12022-11-10):
>> > one drive fails → you can replace it immediately, no downtime
>> That's precisely what I said,
>
> I was not stating that THIS PART of what you said was wrong.

Why restate it then needlessly? 

>>  so I'm baffled by the redundancy of your
>> words.
>
> Hint: my mail did not stop at the line you quoted. Reading mails to the
> end is usually a good practice to avoid missing information.
>

It's also an insect repellent.




Re: Gnus/procmail doesn't read new mails

2022-11-10 Thread Urs Thuermann
Eric S Fraga  writes:

> Just in case, what happens if you expand "~" in the path to PROCMAIL?

That was one of the first things I've tested but that didn't change
anything.

urs



Re: Est il possible de forcer Linux a écrire plus souvent dans le swap

2022-11-10 Thread Olivier Back my spare

Hello

I'm not a professor. I work in the IT department of an economics research
lab.


I set up a Debian 11-something server, something like 256 GB of RAM with a
256 GB swap, with network interface bonding and a connection to the Active
Directory LDAP.

The applications are: Stata, Matlab, Mathematica, QGis, etc.
I installed X2Go Server with a lightweight LXDE desktop so that students and
PhD candidates can work. All users have a quota. The students stand out by
their computing vocabulary even though they don't know what they are talking
about. At first, when they asked me for git, I thought they knew how to use
it. When they asked me how to do the "first login", I was disillusioned :)


There is another compute server dedicated to researchers and research
assistants. Same installation principle. That is mainly the one where I have
to limit usage, because the users are of the "first come, first served"
kind, they use up all the resources, and their computations can run for
weeks.


So there you go; once again, no, I'm not a professor :)

--
Infrastructure manager / IT asset manager
"It is possible to commit no errors and still lose. That is not a 
weakness. That is life."

– Captain Jean-Luc Picard to Data


On 10/11/2022 12:06, David Martin wrote:

Hi,

What kind of computation, via one application in particular?
Are you using NoMachine so that each user gets a single desktop?
Have you configured an X server with a remote X environment per user?

which is not a good idea IMHO

You're a professor ;-

Regards

On Mon, 7 Nov 2022 at 15:01, Olivier backup my spare <
backup.my.sp...@gmail.com> wrote:


Hello

I have a compute server and users who open remote sessions.
Many of them start a computation and wait for the result, but they use
their session like a desktop and leave applications open there even though
they are not being used.
Is there a way to force Linux to put these applications on standby in the
swap?
It's a Dell server running Debian Bullseye.

--
AI Infrastructure manager / IT asset manager.
S** economics centre
“It is possible to commit no errors and still lose. That is not a
weakness. That is life.”
– Captain Jean-Luc Picard to Data






--
Infrastructure manager / IT asset manager
"It is possible to commit no errors and still lose. That is not a 
weakness. That is life."

– Captain Jean-Luc Picard to Data


smime.p7s
Description: S/MIME Cryptographic Signature


Re: Debian 11 - How to install Gtkmm

2022-11-10 Thread Alex Mestiashvili

On 11/10/22 01:09, Amn wrote:

Trying to install Gtkmm 4 in a Debian 11 box I do this :
sudo apt install libgtkmm-4.0-dev

But then I get this error :

Unable to locate package libgtkmm-4.0-dev

What am I doing wrong?



rmadison libgtkmm-4.0-dev
libgtkmm-4.0-dev | 4.8.0-2   | unstable   | amd64, arm64, armel, 
armhf, i386, mips64el, mipsel, ppc64el, s390x


It is available only in unstable for now.



Re: deduplicating file systems: VDO with Debian?

2022-11-10 Thread Nicolas George
Curt (12022-11-10):
> > one drive fails → you can replace it immediately, no downtime
> That's precisely what I said,

I was not stating that THIS PART of what you said was wrong.

>   so I'm baffled by the redundancy of your
> words.

Hint: my mail did not stop at the line you quoted. Reading mails to the
end is usually a good practice to avoid missing information.

-- 
  Nicolas George



Re: deduplicating file systems: VDO with Debian?

2022-11-10 Thread Curt
On 2022-11-10, Nicolas George  wrote:
> Curt (12022-11-10):
>> Maybe it's a question of intent more than anything else. I thought RAID
>> was intended for a server scenario where if a disk fails, you're down
>> time is virtually null, whereas as a backup is intended to prevent data
>> loss.
>
> Maybe just use common sense. RAID means your data is present on several
> drives. You can just deduce what it can help for:
>
> one drive fails → you can replace it immediately, no downtime

That's precisely what I said, so I'm baffled by the redundancy of your
words. Or are you a human RAID? 




definitions of "backup" (was Re: deduplicating file systems: VDO with Debian?)

2022-11-10 Thread The Wanderer
On 2022-11-10 at 09:06, Dan Ritter wrote:

> Now, RAID is not a backup because it is a single store of data: if
> you delete something from it, it is deleted. If you suffer a
> lightning strike to the server, there's no recovery from molten
> metal.

Here's where I find disagreement.

Say you didn't use RAID, and you had two disks in the same machine.

In order to avoid data loss in the event that one of the disks failed,
you engaged in a practice of copying all files from one disk onto the other.

That process could, and would, easily be referred to as backing up the
files. It's not a very distant backup, and it wouldn't protect against
that lightning strike, but it's still a separate backed-up copy.

But copying those files manually is a pain, so you might well set up a
process to automate it. That then becomes a scheduled backup, from one
disk onto another.

That scheduled process means that you have periods where the most
recently updated copy of the live data hasn't made it into the backup,
so there's still a time window where you're at risk of data loss if the
first disk fails. So you might set things up for the automated process
to in effect run continuously, writing the data to both disks in
parallel as it comes in.

And at that point you've basically reinvented mirroring RAID.

You've also lost the protection against "if you delete something from
it"; unlike deeper, more robust forms of backup, RAID does not protect
against accidental deletion. But you still have the protection against
"if one disk fails" - and that one single layer of protection against
one single cause of data loss is, I contend, still valid to refer to as
a "backup" just as much as the original manually-made copies were.

> Some filesystems have snapshotting. Snapshotting can protect you
> from the accidental deletion scenario, by allowing you to recover
> quickly, but does not protect you from lightning.
> 
> The lightning scenario requires a copy of the data in some other 
> location. That's a backup.

There are many possible causes of data loss. My contention is that
anything that protects against *any* of them qualifies as some level of
backup, and that there are consequently multiple levels / tiers /
degrees / etc. of backup.

RAID is not an advanced form of protection against data loss; it only
protects against one type of cause. But it still does protect against
that one type, and thus it is not valid to kick it out of that circle
entirely.

-- 
   The Wanderer

The reasonable man adapts himself to the world; the unreasonable one
persists in trying to adapt the world to himself. Therefore all
progress depends on the unreasonable man. -- George Bernard Shaw



signature.asc
Description: OpenPGP digital signature


Re: else or Debian (Re: ZFS performance (was: Re: deduplicating file systems: VDO with Debian?))

2022-11-10 Thread Brad Rogers
On Thu, 10 Nov 2022 08:48:43 -0500
Dan Ritter  wrote:

Hello Dan,

>8 is not a magic number.

Clearly, you don't read Terry Pratchett.   :-)

-- 
 Regards  _   "Valid sig separator is {dash}{dash}{space}"
 / )  "The blindingly obvious is never immediately apparent"
/ _)rad   "Is it only me that has a working delete key?"
It's only the children of the f** wealthy tend to be good looking
Ugly - The Stranglers


pgp6ozKr0osey.pgp
Description: OpenPGP digital signature


Re: defining deduplication (was: Re: deduplicating file systems: VDO with Debian?)

2022-11-10 Thread Thomas Schmitt
Hi,

i wrote:
> > the time window in which the backuped data
> > can become inconsistent on the application level.

hw wrote:
> Or are you referring to the data being altered while a backup is in
> progress?

Yes. Data of different files or at different places in the same file
may have relations which may become inconsistent during change operations
until the overall change is complete.
If you are unlucky you can even catch a plain text file that is only half
stored.

The risk for this is not 0 with filesystem snapshots, but it grows further
if there is a time interval during which changes may or may not be copied
into the backup, depending on filesystem internals and bad luck.


> Would you even make so many backups on the same machine?

It depends on the alternatives.
If you have other storage systems which can host backups, then it is of
course good to use them for backup storage. But if you have less separate
storage than independent backups, then it is still worthwhile to put more
than one backup on the same storage.


> Isn't 5 times a day a bit much?

It depends on how much you are willing to lose in case of a mishap.
My $HOME backup runs last about 90 seconds each. So it is not overly
cumbersome.


>  And it's an odd number.

That's because the early afternoon backup is done twice. (A tradition
which started when one of my BD burners began to become unreliable.)


> Yes, I'm re-using the many small hard discs that have accumulated over the
> years.

If it's only their size which disqualifies them for production purposes,
then it's ok. But if they are nearing the end of their lifetime, then
i would consider decommissioning them.


> I wish we could still (relatively) easily make backups on tapes.

My personal endeavor with backups on optical media began when a customer
had a major data mishap and all backup tapes turned out to be unusable.
Frequent backups had been made and allegedly been checkread. But in the
end it was big drama.
I then proposed to use a storage where the boss of the department can
make random tests with the applications which made and read the files.
So i came to writing backup scripts which used mkisofs and cdrecord
for CD-RW media.


> Just change
> the tape every day and you can have a reasonable number of full backups.

If you have thousandfold the size of Blu-rays worth of backup, then
probably a tape library would be needed. (I find LTO tapes with up to
12 TB in the web, which is equivalent to 480 BD-R.)


> A full new backup takes ages

It would help if you could divide your backups into small agile parts and
larger parts which don't change often.
The agile ones need frequent backup, whereas the lazy ones would not suffer
so much damage if the newest available backup is a few days old.


> I need to stop modifying stuff and not start all over again

The backup part of a computer system should be its most solid and artless
part. No shortcuts, no fancy novelties, no cumbersome user procedures.


Have a nice day :)

Thomas



Re: weird directory entry on ZFS volume (Re: else or Debian (Re: ZFS performance (was: Re: deduplicating file systems: VDO with Debian?)))

2022-11-10 Thread Greg Wooledge
On Thu, Nov 10, 2022 at 02:48:28PM +0100, hw wrote:
> On Thu, 2022-11-10 at 07:03 -0500, Greg Wooledge wrote:
> good idea:
> 
> printf %s * | hexdump
> 000 77c2 6861 0074 
> 005

Looks like there might be more than one file here.

> > If you misrepresented the situation, and there's actually more than one
> > file in this directory, then use something like this instead:
> > 
> > shopt -s failglob
> > printf '%s\0' ? | hd
> 
> shopt -s failglob
> printf '%s\0' ? | hexdump
> 000 00c2   
> 002

OK, that's a good result.

> > Note that the ? is *not* quoted here, because we want it to match any
> > one-character filename, no matter what that character actually is.  If
> > this doesn't work, try ?? or * as the glob, until you manage to find it.
> 
> printf '%s\0' ?? | hexdump
> -bash: Keine Entsprechung: ??
> 
> (meaning something like "no equivalent")

The English version is "No match".

> printf '%s\0' * | hexdump
> 000 00c2 6177 7468 
> 007

I dislike this output format, but it looks like there are two files
here.  The first is 0xc2, and the second is 0x77 0x61 0x68 0x74 if
I'm reversing and splitting the silly output correctly.  (This spells
"waht", if I got it right.)

> > If it turns out that '?' really is the filename, then it becomes a ZFS
> > issue with which I can't help.
> 
> I would think it is.  Is it?

The file in question appears to have a name which is the single byte 0xc2.
Since that's not a valid UTF-8 character, ls chooses something to display
instead.  In your case, it chose a '?' character.  I'm guessing this is on
an older release of Debian.

In my case, it does this:

unicorn:~$ mkdir /tmp/x && cd "$_"
unicorn:/tmp/x$ touch $'\xc2'
unicorn:/tmp/x$ ls -la
total 80
-rw-r--r--  1 greg greg 0 Nov 10 09:21 ''$'\302'
drwxr-xr-x  2 greg greg  4096 Nov 10 09:21  ./
drwxrwxrwt 20 root root 73728 Nov 10 09:21  ../

In my version of ls, there's a --quoting-style= option that can help
control what you see.  But that's a tangent you can explore later.
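
For instance (GNU ls):

ls --quoting-style=escape         # non-printable bytes as backslash escapes, e.g. \302
ls --quoting-style=shell-escape   # names quoted so they can be pasted back into the shell
ls -b                             # short form of the escape style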

Since we know the actual name of the file (subdirectory) now, let's just
rename it to something sane.

mv $'\xc2' subdir

Then you can investigate it, remove it, or do whatever else you want.



Re: Message surprenant

2022-11-10 Thread Olivier Back my spare

Hello

Please accept my apologies. It's my fault.
I have an S/MIME certificate issued by Actalis.
It seems you should update your root certificates and the problem will be
solved.


You can consult the requested root certificate
https://ssl-tools.net/subjects/b0e31e6fe1b4e58b38cd4664dd9184b2eead11f6

My apologies, I am really sorry



On 10/11/2022 15:13, Dethegeek wrote:

Hello

It looks like something is generating certificates using a chain of trust
going back to the root certificate mentioned in the message.

Perhaps, at the moment the message appears, the process list will show
openssl or gnutls, which are the command-line tools capable of performing
these certifications.

On Thu, 10 Nov 2022 at 14:43, MERLIN Philippe
wrote:


Hello,

For a few days now, a window has been appearing on my screen from time to
time

Start of the message

Message

Do you grant ultimate trust to

<< CN=Actalis Authentification Root CA

O=Actalis S.p.A.\x2f...

L=Milan

C=IT>>

to correctly certify the user's certificates?

  Cancel  Yes

_

It disappears by itself after a while but reappears from time to time;
while it is on screen I cannot do anything;

if I click on cancel it goes away.

I am not a security expert but I think I am the victim of an intrusion
attempt.

What do you think?  Have you had the same problem?  What should I do to get
rid of this window?  Which program generates this window?

Thanks for your suggestions.

Philippe Merlin





--
Infrastructure manager / IT asset manager
"It is possible to commit no errors and still lose. That is not a 
weakness. That is life."

– Captain Jean-Luc Picard to Data


smime.p7s
Description: S/MIME Cryptographic Signature


Re: deduplicating file systems: VDO with Debian?

2022-11-10 Thread Dan Ritter
Curt wrote: 
> On 2022-11-08, The Wanderer  wrote:
> >
> > That more general sense of "backup" as in "something that you can fall
> > back on" is no less legitimate than the technical sense given above, and
> > it always rubs me the wrong way to see the unconditional "RAID is not a
> > backup" trotted out blindly as if that technical sense were the only one
> > that could possibly be considered applicable, and without any
> > acknowledgment of the limited sense of "backup" which is being used in
> > that statement.
> >
> 
> Maybe it's a question of intent more than anything else. I thought RAID
> was intended for a server scenario where if a disk fails, your downtime
> is virtually null, whereas a backup is intended to prevent data
> loss. RAID isn't ideal for the latter because it doesn't ship the saved
> data off-site from the original data (or maybe a RAID array is
> conceivable over a network and a distance?).

RAID means "redundant array of inexpensive disks". The idea, in the
name, is to bring together a bunch of cheap disks to mimic a single more
expensive disk, in a way which hopefully is more resilient to failure.

If you need a filesystem that is larger than a single disk (that you can
afford, or that exists), RAID is the name for the general approach to
solving that.

The three basic technologies of RAID are:

striping: increase capacity by writing parts of a data stream to N
disks. Can increase performance in some situations.

mirroring: increase resiliency by redundantly writing the same data to
multiple disks. Can increase performance of reads.

checksums/erasure coding: increase resiliency by writing data calculated
from the real data (but not a full copy) that allows reconstruction of
the real data from a subset of disks. RAID5 allows one failure, RAID6
allows recovery from two simultaneous failures, fancier schemes may
allow even more.

You can work these together, or separately.
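
As an illustration with Linux md (device names are placeholders):

mdadm --create /dev/md0 --level=0  --raid-devices=2 /dev/sdb /dev/sdc   # striping only
mdadm --create /dev/md1 --level=1  --raid-devices=2 /dev/sdd /dev/sde   # mirroring
mdadm --create /dev/md2 --level=6  --raid-devices=6 /dev/sd[f-k]        # double parity
mdadm --create /dev/md3 --level=10 --raid-devices=4 /dev/sd[l-o]        # stripe of mirrors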

Now, RAID is not a backup because it is a single store of data: if you
delete something from it, it is deleted. If you suffer a lightning
strike to the server, there's no recovery from molten metal.

Some filesystems have snapshotting. Snapshotting can protect you from
the accidental deletion scenario, by allowing you to recover quickly,
but does not protect you from lightning.

The lightning scenario requires a copy of the data in some other
location. That's a backup.

You can store the backup on a RAID. You might need to store the backup
on a RAID, or perhaps by breaking it up into pieces to store on tapes or
optical disks or individual hard disks. The kind of RAID you choose for
the backup is not related to the kind of RAID you use on your primary
storage.

> Of course, I wouldn't know one way or another, but the complexity (and
> substantial verbosity) of this thread seems to indicate that all
> these concepts cannot be expressed clearly and succinctly, from which I
> draw my own conclusions.

The fact that many people talk about things that they don't understand
does not restrict the existence of people who do understand it. Only
people who understand what they are talking about can do so clearly and
succinctly.

-dsr-



ZFS pool size (Re: else or Debian)

2022-11-10 Thread DdB
Am 10.11.2022 um 14:28 schrieb DdB:
> Take some time to
> play with an installation (in a vm or just with a file based pool should
> be considered).

an example to show that it is possible to allocate huge files (bigger
than the size of a single disk) from a pool:

> datakanja@PBuster-NFox:~$ mkdir disks
> datakanja@PBuster-NFox:~$ cd disks/
> datakanja@PBuster-NFox:~/disks$ seq  -w 0 15 | xargs -i truncate -s 4T 
> disk{}.bin # this creates sparse files to act as virtual disks
> datakanja@PBuster-NFox:~/disks$ zpool create TEST raidz3 ~/disks/d*
> datakanja@PBuster-NFox:~/disks$ zpool list
> NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAGCAP  DEDUPHEALTH  
> ALTROOT
> TEST  64.0T   314K  64.0T- - 0% 0%  1.00xONLINE  -
16*4 TB = 64 TB size
> datakanja@PBuster-NFox:~/disks$ zfs list TEST
> NAME   USED  AVAIL REFER  MOUNTPOINT
> TEST   254K  50.1T 64.7K  /TEST
# due to redundancy in the pool, the maximum size of a file is slightly
over 50TB

#do not forget to clean up (destroying pool and files)



Re: Message surprenant

2022-11-10 Thread Dethegeek
Hello

It looks like something is generating certificates using a chain of trust
going back to the root certificate mentioned in the message.

Perhaps, at the moment the message appears, the process list will show
openssl or gnutls, which are the command-line tools capable of performing
these certifications.

On Thu, 10 Nov 2022 at 14:43, MERLIN Philippe
wrote:

> Hello,
>
> For a few days now, a window has been appearing on my screen from time to
> time
>
> Start of the message
>
> Message
>
> Do you grant ultimate trust to
>
> << CN=Actalis Authentification Root CA
>
> O=Actalis S.p.A.\x2f...
>
> L=Milan
>
> C=IT>>
>
> to correctly certify the user's certificates?
>
>  Cancel  Yes
>
> _
>
> It disappears by itself after a while but reappears from time to time;
> while it is on screen I cannot do anything;
>
> if I click on cancel it goes away.
>
> I am not a security expert but I think I am the victim of an intrusion
> attempt.
>
> What do you think?  Have you had the same problem?  What should I do to
> get rid of this window?  Which program generates this window?
>
> Thanks for your suggestions.
>
> Philippe Merlin
>


Re: else or Debian (Re: ZFS performance (was: Re: deduplicating file systems: VDO with Debian?))

2022-11-10 Thread Dan Ritter
hw wrote: 
> And I've been reading that when using ZFS, you shouldn't make volumes with 
> more
> than 8 disks.  That's very inconvenient.


Where do you read these things?

The number of disks in a vdev can be optimized, depending on
your desired redundancy method, total number of drives, and
tolerance for reduced performance during resilvering. 

Multiple vdevs together form a zpool. Filesystems are allocated from
a zpool.
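
A sketch of a single pool built from two six-disk raidz2 vdevs (device names
invented):

zpool create tank \
      raidz2 sda sdb sdc sdd sde sdf \
      raidz2 sdg sdh sdi sdj sdk sdl
zpool status tank      # shows both vdevs; data is striped across them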

8 is not a magic number.

-dsr-



Re: Gnus/procmail doesn't read new mails

2022-11-10 Thread Eric S Fraga
Just in case, what happens if you expand "~" in the path to PROCMAIL?
-- 
Eric S Fraga via gnus (Emacs 29.0.50 2022-11-10) on Debian 11.4



Re: deduplicating file systems: VDO with Debian?

2022-11-10 Thread Nicolas George
Curt (12022-11-10):
> Maybe it's a question of intent more than anything else. I thought RAID
> was intended for a server scenario where if a disk fails, you're down
> time is virtually null, whereas as a backup is intended to prevent data
> loss.

Maybe just use common sense. RAID means your data is present on several
drives. You can just deduce what it can help for:

one drive fails → you can replace it immediately, no downtime

one drive fails → the data is present elsewhere, no data loss

several¹ drive fail → downtime and data loss²

1: depending on RAID level
2: or not if you have backups too

>   RAID isn't ideal for the latter because it doesn't ship the saved
> data off-site from the original data (or maybe a RAID array is
> conceivable over a network and a distance?).

It is always a matter of compromise. You cannot duplicate your data
off-site at the same rate as you duplicate it on a second local drive.

That means your off-site data will survive an EMP, but you will lose
minutes / hours / days of data prior to the EMP. OTOH, RAID will not
survive an EMP, but it will prevent all data loss caused by isolated
hardware failure.

-- 
  Nicolas George



definitions of "backup" (was Re: deduplicating file systems: VDO with Debian?)

2022-11-10 Thread The Wanderer
On 2022-11-10 at 08:40, Curt wrote:

> On 2022-11-08, The Wanderer  wrote:
> 
>> That more general sense of "backup" as in "something that you can
>> fall back on" is no less legitimate than the technical sense given
>> above, and it always rubs me the wrong way to see the unconditional
>> "RAID is not a backup" trotted out blindly as if that technical
>> sense were the only one that could possibly be considered
>> applicable, and without any acknowledgment of the limited sense of
>> "backup" which is being used in that statement.
> 
> Maybe it's a question of intent more than anything else. I thought
> RAID was intended for a server scenario where if a disk fails, your
> downtime is virtually null, whereas a backup is intended to
> prevent data loss.

If the disk fails, the data stored on the disk is lost (short of
forensic-style data recovery, anyway), so anything that ensures that
that data is still available serves to prevent data loss.

RAID ensures that the data is still available even if the single disk
fails, so it qualifies under that criterion.

> RAID isn't ideal for the latter because it doesn't ship the saved 
> data off-site from the original data (or maybe a RAID array is 
> conceivable over a network and a distance?).

Shipping the data off-site is helpful to protect against most possible
causes for data loss, such as damage to or theft of the on-site
equipment. (Or, for that matter, accidental deletion of the live data.)

It's not necessary to protect against some causes, however, such as
failure of a local disk. For that cause, RAID fulfills the purpose just
fine.

RAID does not protect against most of those other scenarios, however, so
there's certainly still a role for - and a reason to recommend! -
off-site backup. It's just that the existence of those options does not
mean RAID does not have a role to play in avoiding data loss, and
thereby a valid sense in which it can be considered to provide something
to fall back on, which is the approximate root meaning of the
nontechnical sense of "backup".

-- 
   The Wanderer

The reasonable man adapts himself to the world; the unreasonable one
persists in trying to adapt the world to himself. Therefore all
progress depends on the unreasonable man. -- George Bernard Shaw



signature.asc
Description: OpenPGP digital signature


weird directory entry on ZFS volume (Re: else or Debian (Re: ZFS performance (was: Re: deduplicating file systems: VDO with Debian?)))

2022-11-10 Thread hw
On Thu, 2022-11-10 at 07:03 -0500, Greg Wooledge wrote:
> On Thu, Nov 10, 2022 at 05:54:00AM +0100, hw wrote:
> > ls -la
> > insgesamt 5
> > drwxr-xr-x  3 namefoo namefoo    3 16. Aug 22:36 .
> > drwxr-xr-x 24 root    root    4096  1. Nov 2017  ..
> > drwxr-xr-x  2 namefoo namefoo    2 21. Jan 2020  ?
> > namefoo@host /srv/datadir $ ls -la '?'
> > ls: Zugriff auf ? nicht möglich: Datei oder Verzeichnis nicht gefunden
> > namefoo@host /srv/datadir $ 
> > 
> > 
> > This directory named ? appeared on a ZFS volume for no reason and I can't
> > access
> > it and can't delete it.  A scrub doesn't repair it.  It doesn't seem to do
> > any
> > harm yet, but it's annoying.
> > 
> > Any idea how to fix that?
> 
> ls -la might not be showing you the true name.  Try this:
> 
> printf %s * | hd
> 
> That should give you a hex dump of the bytes in the actual filename.

good idea:

printf %s * | hexdump
000 77c2 6861 0074 
005

> If you misrepresented the situation, and there's actually more than one
> file in this directory, then use something like this instead:
> 
> shopt -s failglob
> printf '%s\0' ? | hd

shopt -s failglob
printf '%s\0' ? | hexdump
000 00c2   
002

> Note that the ? is *not* quoted here, because we want it to match any
> one-character filename, no matter what that character actually is.  If
> this doesn't work, try ?? or * as the glob, until you manage to find it.

printf '%s\0' ?? | hexdump
-bash: Keine Entsprechung: ??

(meaning something like "no equivalent")


printf '%s\0' * | hexdump
000 00c2 6177 7468 
007


> If it turns out that '?' really is the filename, then it becomes a ZFS
> issue with which I can't help.

I would think it is.  Is it?

perl -e 'print chr(0xc2) . "\n"'

... prints a blank line.  What's 0xc2?  I guess that should be UTF8 ...
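
0xc2 on its own is only the lead byte of a two-byte UTF-8 sequence, so it is
not a complete character; a quick way to confirm (any UTF-8 decoder will do):

python3 -c 'bytes([0xc2]).decode("utf-8")'
# raises UnicodeDecodeError: ... byte 0xc2 ... unexpected end of data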


printf %s *
aht

What would you expect it to print after shopt?



Message surprenant

2022-11-10 Thread MERLIN Philippe
Hello,
For a few days now, a window has been appearing on my screen from time to time
Start of the message
Message
Do you grant ultimate trust to
<< CN=Actalis Authentification Root CA
O=Actalis S.p.A.\x2f...
L=Milan
C=IT>>
to correctly certify the user's certificates?

 Cancel  Yes
_
 It disappears by itself after a while
but reappears from time to time; while it is on screen I cannot do anything;
if I click on cancel it goes away.
I am not a security expert but I think I am the victim of an
intrusion attempt.
What do you think?  Have you had the same problem?  What should I do to get
rid of this window?  Which program generates this window?
Thanks for your suggestions.
Philippe Merlin


Re: deduplicating file systems: VDO with Debian?

2022-11-10 Thread Curt
On 2022-11-08, The Wanderer  wrote:
>
> That more general sense of "backup" as in "something that you can fall
> back on" is no less legitimate than the technical sense given above, and
> it always rubs me the wrong way to see the unconditional "RAID is not a
> backup" trotted out blindly as if that technical sense were the only one
> that could possibly be considered applicable, and without any
> acknowledgment of the limited sense of "backup" which is being used in
> that statement.
>

Maybe it's a question of intent more than anything else. I thought RAID
was intended for a server scenario where if a disk fails, your downtime
is virtually null, whereas a backup is intended to prevent data
loss. RAID isn't ideal for the latter because it doesn't ship the saved
data off-site from the original data (or maybe a RAID array is
conceivable over a network and a distance?).

Of course, I wouldn't know one way or another, but the complexity (and
substantial verbosity) of this thread seems to indicate that all
these concepts cannot be expressed clearly and succinctly, from which I
draw my own conclusions.



Re: Increased read IO wait times after Bullseye upgrade

2022-11-10 Thread Gareth Evans
On Thu 10 Nov 2022, at 11:36, Gareth Evans  wrote:
[...]
> This assumes the identification of the driver in [3] (below) is 
> anything to go by.  

I meant [1] not [3].

Also potentially of interest:

"Queue depth

The queue depth is a number between 1 and ~128 that shows how many I/O requests 
are queued (in-flight) on average. Having a queue is beneficial as the requests 
in the queue can be submitted to the storage subsystem in an optimised manner 
and often in parallel. A queue improves performance at the cost of latency.

If you have some kind of storage performance monitoring solution in place, a 
high queue depth could be an indication that the storage subsystem cannot 
handle the workload. You may also observe higher than normal latency figures. 
As long as latency figures are still within tolerable limits, there may be no 
problem."

https://louwrentius.com/understanding-storage-performance-iops-and-latency.html

See 

$ cat /sys/block/sdX/device/queue_depth
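
Both the queue depth and the I/O scheduler can be adjusted at runtime for
testing, e.g. (whether queue_depth is writable depends on the driver):

$ echo 16 | sudo tee /sys/block/sdX/device/queue_depth   # lower it temporarily
$ cat /sys/block/sdX/queue/scheduler                     # e.g. [mq-deadline] none
$ echo none | sudo tee /sys/block/sdX/queue/scheduler    # try a different elevator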



Re: else or Debian (Re: ZFS performance (was: Re: deduplicating file systems: VDO with Debian?))

2022-11-10 Thread DdB
Am 10.11.2022 um 13:03 schrieb Greg Wooledge:
> If it turns out that '?' really is the filename, then it becomes a ZFS
> issue with which I can't help.

just tested: i could create, rename, delete a file with that name on a
zfs filesystem just as with any other filesystem.

But: i recall having seen an issue with corrupted filenames in a
snapshot once (several years ago though). At the time, i did resort to
send/recv to get the issue straightened out.

But it is very much more likely, that the filename '?' is entirely
unrelated to zfs. Although zfs is perceived as being easy to handle
(only 2 commands need to be learned: zpool and zfs), it takes a while to
get acquainted with all the concepts and behaviors. Take some time to
play with an installation (in a vm or just with a file based pool should
be considered).



block devices vs. partitions (Re: else or Debian (Re: ZFS performance (was: Re: deduplicating file systems: VDO with Debian?)))

2022-11-10 Thread hw
On Thu, 2022-11-10 at 10:59 +0100, DdB wrote:
> Am 10.11.2022 um 04:46 schrieb hw:
> > On Wed, 2022-11-09 at 18:26 +0100, Christoph Brinkhaus wrote:
> > > Am Wed, Nov 09, 2022 at 06:11:34PM +0100 schrieb hw:
> > > [...]
> [...]
> > > 
> > Why would partitions be better than the block device itself?  They're like
> > an
> > additional layer and what could be faster and easier than directly using the
> > block devices?
> > 
> > 
hurts my eyes to see such disinformation circulating.

What's wrong about it?



Re: else or Debian (Re: ZFS performance (was: Re: deduplicating file systems: VDO with Debian?))

2022-11-10 Thread hw
On Thu, 2022-11-10 at 10:34 +0100, Christoph Brinkhaus wrote:
> Am Thu, Nov 10, 2022 at 04:46:12AM +0100 schrieb hw:
> > On Wed, 2022-11-09 at 18:26 +0100, Christoph Brinkhaus wrote:
> > > Am Wed, Nov 09, 2022 at 06:11:34PM +0100 schrieb hw:
> > > [...]
> [...]
> > > 
> > 
> > Why would partitions be better than the block device itself?  They're like
> > an
> > additional layer and what could be faster and easier than directly using the
> > block devices?
>  
>  Using the block device is no issue until you have a mirror or so.
>  In case of a mirror ZFS will use the capacity of the smallest drive.

But you can't make partitions larger than the drive.

>  I have read that, for example, a 100GB disk might be slightly larger
>  than 100GB. When you want to replace a 100GB disk with a spare one
>  which is not quite as large as the original one, the pool will not fit on
>  the disk and the replacement fails.

Ah yes, right!  I kinda did that a while ago for spinning disks that might be
replaced by SSDs eventually and wanted to make sure that the SSDs wouldn't be
too small.  I forgot about that, my memory really isn't what it used to be ...

>  With partitions you can specify the space. It does not hurt if there
>  are a few MB unallocated. But then the partitions of the disks have
>  exactly the same size.

yeah



Gnus/procmail doesn't read new mails

2022-11-10 Thread Urs Thuermann
I want to move my mail server and Gnus MUA from a very old machine
(emacs 20.7.1 and gnus 5.8.8) to a Debian bullseye machine with emacs
and Gnus 5.13.

The mail is filtered by procmail into several files in the ~/PROCMAIL
directory.  From there it should be read by Gnus and stored in mail
groups in ~/Mail/.

Filtering to ~/PROCMAIL works as in my old machine:

$ ls -l ~/PROCMAIL
total 12
-rw--- 1 urs urs 709 Nov 10 13:54 bar.gnus
-rw--- 1 urs urs 709 Nov 10 09:13 foo.gnus
-rw--- 1 urs urs 705 Nov 10 09:13 mail.gnus

However, Gnus doesn't read the mails from there.  My .gnus config file
is

$ cat ~/.gnus
(setq gnus-secondary-select-methods '((nnml "")))

(setq mail-sources '((directory :path "~/PROCMAIL/"
:suffix ".gnus")))

When starting Gnus with 'M-x gnus' the .gnus file is read but the
configured mail-sources aren't read.  Running strace on emacs shows
that the PROCMAIL directory isn't touched at all.  However, the (setq
mail-sources...) isn't ignored completely, since mail also is not read
from /var/mail/urs which would be the default.

If I disable procmail and leave incoming mail in /var/mail/urs and if
I remove the (setq mail-sources ...) form .gnus but keep the
gnus-secondary-select-methods, mails are read from /var/mail/urs and
written to ~/Mail/mail/misc as documented in the Gnus manual.

What am I doing wrong that the 'directory' mail source doesn't work?

urs



Re: defining deduplication (was: Re: deduplicating file systems: VDO with Debian?)

2022-11-10 Thread hw
On Wed, 2022-11-09 at 12:08 +0100, Thomas Schmitt wrote:
> Hi,
> 
> i wrote:
> > >   https://github.com/dm-vdo/kvdo/issues/18
> 
> hw wrote:
> > So the VDO ppl say 4kB is a good block size
> 
> They actually say that it's the only size which they support.
> 
> 
> > Deduplication doesn't work when files aren't sufficiently identical,
> 
> The definition of sufficiently identical probably differs much between
> VDO and ZFS.
> ZFS has more knowledge about the files than VDO has. So it might be worth
> for it to hold more info in memory.

Dunno, apparently they keep checksums of blocks in memory.  More checksums, more
memory ...

> > It seems to make sense that the larger
> > the blocks are, the lower chances are that two blocks are identical.
> 
> Especially if the filesystem's block size is smaller than the VDO
> block size, or if the filesystem does not align file content intervals
> to block size, like ReiserFS does.

That would depend on the files.

> > So how come that deduplication with ZFS works at all?
> 
> Inner magic and knowledge about how blocks of data form a file object.
> A filesystem does not have to hope that identical file content is
> aligned to a fixed block size.

No, but when it uses large blocks it can store more files in a block and won't
be able to deduplicate identical files within a block, because blocks are the
atomic unit of deduplication.  The larger the blocks are, the less likely it
seems that multiple blocks are identical.

> didier gaumet wrote:
> > > > The goal being primarily to optimize storage space
> > > > for a provider of networked virtual machines to entities or customers
> 
> I wrote:
> > > Deduplicating over several nearly identical filesystem images might indeed
> > > bring good size reduction.
> 
> hw wrote:
> > Well, it's independent of the file system.
> 
> Not entirely. As stated above, i would expect VDO to work not well for
> ReiserFS with its habit to squeeze data into unused parts of storage blocks.
> (This made it great for storing many small files, but also led to some
> performance loss by more fragmentation.)

VDO is independent of the file system, and 4k blocks are kinda small.  It
doesn't matter how files are aligned to blocks of a file system because VDO
always uses chunks of 4k each and compares them and always works the same.  You
can always create a file system with an unlucky block size for the files on it
or even one that makes sure that all the 4k blocks are not identical.  We could
call it spitefs maybe :)

> > Do I want/need controlled redundancy with
> > backups on the same machine, or is it better to use snapshots and/or
> > deduplication to reduce the controlled redundancy?
> 
> I would want several independent backups on the first hand.

Independent?  Like two full copies like I'm making?

> The highest risk for backup is when a backup storage gets overwritten or
> updated. So i want several backups still untouched and valid, when the
> storage hardware or the backup software begin to spoil things.

That's what I thought, but I'm about to run out of disk space for multiple full
copies.

> Deduplication increases the risk that a partial failure of the backup
> storage damages more than one backup. On the other hand it decreases the
> work load on the storage

It may make all backups unusable because the single copy that deduplication has
left has been damaged.  However, how likely is a partial failure of a storage
volume, and how relevant is it?  How often does a storage volume --- the
underlying media doesn't necessarily matter; for example, when a disk goes bad
in a RAID, you replace it and keep going --- go bad in only one place?  When
the volume has gone away, so have all the copies.

>  and the time window in which the backuped data
> can become inconsistent on the application level.

Huh?

> Snapshot before backup reduces that window size to 0. But this still
> does not prevent application level inconsistencies if the application is
> caught in the act of reworking its files.

You make the snapshot of the backup before starting to make a backup, not while
making one.

Or are you referring to the data being altered while a backup is in progress?

> So i would use at least four independent storage facilities interchangeably.
> I would make snapshots, if the filesystem supports them, and backup those
> instead of the changeable filesystem.
> I would try to reduce the activity of applications on the filesystem when
> the snapshot is made.

right

> I would allow each independent backup storage to do its own deduplication,
> not sharing it with the other backup storages.

If you have them on different machines or volumes, it would be difficult to do
it otherwise.

> > > In case of VDO i expect that you need to use different deduplicating
> > > devices to get controlled redundancy.
> 
> > How would the devices matter?  It's the volume residing on devices that gets
> > deduplicated, not the devices.
> 
> I understand that one VDO device 

Consulta ftp

2022-11-10 Thread Fernando Romero
Hi all, I'm running Debian 10 buster with pure-ftpd.
I configured a user and password, but I always get an authentication error;
whether I try to connect from the console or with FileZilla, I always get
530 Autentificación fallida, lo siento  (authentication failed, sorry)
Login failed.
Remote system type is UNIX.
Using binary mode to transfer files.

I have changed the user several times and recreated the puredb database, but I
still can't log in.
Has anyone run into this problem?

If I run "pure-pw show usuario" it shows the user's data correctly, and I'm not
mistyping the password; I have changed it several times.

Login  : usuario
Password   :
$6$DZ/S57P55hv9EoZ0$F..2vyNe5pV8rJzF79MmFoDTHCrNouK/O81qBCX1IoZoDUOh8p8QhecYl9JoTAJjNEObiN4J6VgCrZwP4IWY./
UID: 33 (www-data)
GID: 33 (www-data)
Directory  : /var/www/html/./
Full name  :
Download bandwidth : 0 Kb (unlimited)
Upload   bandwidth : 0 Kb (unlimited)
Max files  : 0 (unlimited)
Max size   : 0 Mb (unlimited)
Ratio  : 0:0 (unlimited:unlimited)
Allowed local  IPs :
Denied  local  IPs :
Allowed client IPs :
Denied  client IPs :
Time restrictions  : - (unlimited)
Max sim sessions   : 0 (unlimited)

Regards
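
Not part of the original question, but as a hedged troubleshooting sketch: on
Debian the pure-ftpd-wrapper reads its settings from /etc/pure-ftpd/conf/ and
enables authentication methods via symlinks in /etc/pure-ftpd/auth/; treat the
exact paths and names below as assumptions and verify them locally.

# re-commit the virtual user database after any pure-pw change
pure-pw mkdb                        # default file locations may differ on Debian

# check that PureDB authentication is enabled at all (assumed Debian layout)
cat /etc/pure-ftpd/conf/PureDB      # should point at the .pdb file
ls -l /etc/pure-ftpd/auth/          # expect a symlink such as 50puredb -> ../conf/PureDB

systemctl restart pure-ftpd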


Re: else or Debian (Re: ZFS performance (was: Re: deduplicating file systems: VDO with Debian?))

2022-11-10 Thread Greg Wooledge
On Thu, Nov 10, 2022 at 05:54:00AM +0100, hw wrote:
> ls -la
> total 5
> drwxr-xr-x  3 namefoo namefoo    3 16. Aug 22:36 .
> drwxr-xr-x 24 root    root    4096  1. Nov 2017  ..
> drwxr-xr-x  2 namefoo namefoo    2 21. Jan 2020  ?
> namefoo@host /srv/datadir $ ls -la '?'
> ls: cannot access '?': No such file or directory
> namefoo@host /srv/datadir $ 
> 
> 
> This directory named ? appeared on a ZFS volume for no reason and I can't 
> access
> it and can't delete it.  A scrub doesn't repair it.  It doesn't seem to do any
> harm yet, but it's annoying.
> 
> Any idea how to fix that?

ls -la might not be showing you the true name.  Try this:

printf %s * | hd

That should give you a hex dump of the bytes in the actual filename.

If you misrepresented the situation, and there's actually more than one
file in this directory, then use something like this instead:

shopt -s failglob
printf '%s\0' ? | hd

Note that the ? is *not* quoted here, because we want it to match any
one-character filename, no matter what that character actually is.  If
this doesn't work, try ?? or * as the glob, until you manage to find it.

If it turns out that '?' really is the filename, then it becomes a ZFS
issue with which I can't help.
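
If the name does turn out to be literal (or contains unprintable bytes), one
other thing worth trying before blaming ZFS is addressing the entry by inode
instead of by name; a minimal sketch, assuming GNU findutils (the inode number
123456 is a placeholder for whatever ls -i reports):

ls -lai                                         # note the inode number of the odd entry
find . -xdev -maxdepth 1 -inum 123456 -ls       # confirm it is the right entry
find . -xdev -maxdepth 1 -inum 123456 -delete   # works for files and empty directories
# for a non-empty directory: find . -xdev -maxdepth 1 -inum 123456 -exec rm -ri {} +

If that still fails with I/O or "no such file" errors, it really is a
ZFS-level problem (zpool status -v may have more to say).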



Re: Increased read IO wait times after Bullseye upgrade

2022-11-10 Thread Gareth Evans
On Thu 10 Nov 2022, at 07:04, Vukovics Mihaly  wrote:
> Hi Gareth,
>
> - Smartmon/smartctl does not report any hw issues on the HDDs.
> - Fragmentation score is 1 (not fragmented at all)
> - 18% used only
> - RAID status is green (force-resynced)
> - rebooted several times
> - the IO utilization is almost zero(!) - chart attached
> - tried to change the io scheduler of the disks from mq-deadline to noop: 
> does not bring change.
> - the read latency increased on ALL 4 discs by the bullseye upgrade!
> - no errors/warnings in kern.log/syslog/messages
>
> Br,
> Mihaly

Hi Mihaly,

People here often recommend SATA cable/connection checks - might this be 
worthwhile?

"[using] 

iostat -mxy 10 

[...]

If %util is consistently under 30% most of the time, most likely you don’t have 
a problem with disk i/o. If you’re on the fence, you can also look at r_await 
and w_await columns — the average amount of time in milliseconds a read or 
write disk request is waiting before being serviced — to see if the drive is 
able to handle requests in a timely manner. A value less than 10ms for SSD or 
100ms for hard drives is usually not cause for concern,"

(-- from the article linked in my first response)

Are the {r,w}_await values what is shown in your first chart, inter alia?  I 
imagine so.

Does performance actually seem to be suffering?

Unfortunately I can't try to replicate this as I don't have a HDD system, let 
alone MDRAID5, hence the rather basic questions, which you may well already 
have considered.  

Write wait time has also increased somewhat, according to your first graph.

Is anything hogging CPU?  

Free memory/swap usage seems unlikely to be the issue after several reboots.

I might be barking up the wrong tree here but do you know which kernel version 
you were using on Buster?  

There were minor version changes to mdadm in Bullseye 
https://packages.debian.org/buster/mdadm
https://packages.debian.org/bullseye/mdadm

which made me wonder if in-kernel parts of MD changed too.

It might be interesting to diff the relevant kernel sources between Buster and 
Bullseye, perhaps starting with drivers/md/raid5.c extracted from 
/usr/src/linux-source-{KERNEL-VERSION}.tar.xz

https://packages.debian.org/search?keywords=linux-source&searchon=names&suite=oldstable&section=all

https://packages.debian.org/bullseye/linux-source-5.10

This assumes the identification of the driver in [3] (below) is anything to go 
by.  

$ apt-file search md/raid456

[on Bullseye] seems to agree.

[though are sources up to date?...
$ uname -a 
... 5.10.149-2 ...

vs

"Package: ... (5.10.140-1)"
https://packages.debian.org/bullseye/linux-source-5.10]
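
For what it's worth, a rough sketch of one way to pull out and compare just
that driver (package and file names assumed from the URLs above, so adjust to
the actual versions):

# grab the linux-source .debs for buster (4.19) and bullseye (5.10), e.g. from
# packages.debian.org, then unpack them somewhere disposable
dpkg -x linux-source-4.19_*.deb old/
dpkg -x linux-source-5.10_*.deb new/

# extract only the raid5 driver from each source tarball
tar -C old -xf old/usr/src/linux-source-4.19.tar.xz linux-source-4.19/drivers/md/raid5.c
tar -C new -xf new/usr/src/linux-source-5.10.tar.xz linux-source-5.10/drivers/md/raid5.c

diff -u old/linux-source-4.19/drivers/md/raid5.c \
        new/linux-source-5.10/drivers/md/raid5.c | less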

I'm somewhat out of my range of experience here and this would be very much "an 
exercise" for me.  

I'm sorry I can't try to replicate the issue.  Can you, on another system, if 
you have time?

Best wishes,
G

[1]  "Initial driver: 
/lib/modules/4.18.0-305.el8.x86_64/kernel/drivers/md/raid456.ko.xz
I think this has been changed?"
https://grantcurell.github.io/Notes%20on%20mdraid%20Performance%20Testing/




Re: Est il possible de forcer Linux a écrire plus souvent dans le swap

2022-11-10 Thread David Martin
Hi,

What kind of computation, via one application in particular?
Are you running NoMachine so that each user gets a single desktop?
Have you configured an X server with a remote X environment per user?

which is not a good idea IMHO

Are you a teacher? ;-)

Regards

On Mon, 7 Nov 2022 at 15:01, Olivier backup my spare <
backup.my.sp...@gmail.com> wrote:

> Hello
>
> I have a compute server, and users who open remote sessions on it.
> Many of them start a computation and wait for the result, but they use
> their session like a desktop and leave applications open but unused.
> Is there a way to force Linux to push these idle applications out into
> swap?
> It's a Dell server running Debian Bullseye.
>
> --
> AI Infrastructure manager / IT asset manager.
> Centre d'économie S**
> “It is possible to commit no errors and still lose. That is not a
> weakness. That is life.”
> – Captain Jean-Luc Picard to Data
>


-- 
david martin


Re: deduplicating file systems: VDO with Debian?

2022-11-10 Thread hede

On Wed, 09 Nov 2022 13:52:26 +0100 hw  wrote:

> Does that work?  Does bees run as long as there's something to deduplicate and
> only stops when there isn't?


Bees is a service (daemon) which runs 24/7 watching btrfs transaction 
state (the checkpoints). If there are new transactions then it kicks in. 
But it's a niced service (man nice, man ionice). If your backup process 
has higher priority than "idle" (which is typically the case) and 
produces high load it will potentially block out bees until the backup 
is finished (maybe!).
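
To check what priorities bees actually runs with on a given system, something
like this should do (assuming the worker process is simply named "bees"):

ps -o pid,ni,cls,cmd -C bees        # nice level and CPU scheduling class
ionice -p "$(pgrep -o bees)"        # I/O class; "idle" if the service is set up that way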



> I thought you start it when the data is in place and
> not before that.


That's the case with fdupes, duperemove, etc.

> You can easily make changes to two full copies --- "make changes" meaning that
> you only change what has been changed since last time you made the backup.


Do you mean to modify (make changes to) one of the backups? I never
considered making changes to my backups. I do make changes to the live
data, and next time the incremental backup process runs these changes get
into backup storage. Making changes to some backups ... I wouldn't call
those backups anymore.


Or do you mean you have two copies and alternately "update" these
copies to reflect the live state? I do not see a benefit in this, at
least if both reside on the same storage system. There's a waste of
storage space (doubled files). One copy with many incremental backups
would be better. And if you plan to deduplicate both copies, simply use a
backup solution with incremental backups.


Syncing two adjacent copies means submitting all changes a second time,
even though they were already transferred for the first copy. And the
second copy is still in some older state up to the moment you update it.


Yet again I prefer a single process for having one consistent
backup storage with a working history.


Two copies in two different locations is another story; that can indeed
have benefits.



> For me only the first backup is a full backup, every other backup is
> incremental.

> When you make a second full backup, that second copy is not incremental.  It's a
> full backup.


Correct. That's the reason I make incremental backups. And by
incremental backups I mean that I can restore "full" backups for
several points in time: every day of the last week, one day for every month
of the year, even several days of past years and so on. But the total size of
all those "full" backups is not even two full backups. It's less
in size but offers more.


For me a single full backup takes several days (terabytes via DSL upload
to the backup location) while incremental backups are MUCH faster
(typically a few minutes if not much has changed). So I use
the latter.
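
hede doesn't say which tool is in use; purely as an illustration of the idea
(daily restorable trees that only cost the changed data), here is a minimal
sketch with rsync --link-dest, with made-up paths and no retention logic:

#!/bin/sh
SRC=/data/                      # hypothetical source
DST=/backup/host                # hypothetical backup area
TODAY=$(date +%F)
LAST=$(ls -1d "$DST"/20* 2>/dev/null | tail -n 1)   # newest previous backup, if any

# Unchanged files become hard links into the previous day's tree, so every
# dated directory can be restored like a full backup while only changed
# files take up new space.
rsync -a --delete ${LAST:+--link-dest="$LAST"} "$SRC" "$DST/$TODAY"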


> What difference does it make whether the deduplication is block based or somehow
> file based (whatever that means).


File-based deduplication means files are compared as a whole. Result:
two big, nearly identical files both need to be stored in full, because they
do differ.
Take for example a backup of a virtual machine image where the VM got started
between two backup runs. More than 99% of the image is the same as
before, but because some log was written inside the VM image, the two copies
do differ. Those files are nearly identical, with the identical data even at
the same positions.


Block-based deduplication can mark parts of a file as exclusive
(changed blocks) and other parts as shared (blocks with the same
content):


#
# btrfs fi du file1 file2
     Total   Exclusive  Set shared  Filename
   2.30GiB    23.00MiB     2.28GiB  file1
   2.30GiB   149.62MiB     2.16GiB  file2
#
Here both files share data but also have their own exclusive data.
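
If the shared extents are not there yet (for example after a plain copy), an
out-of-band deduplicator can create them after the fact; a minimal sketch,
assuming duperemove is installed (options from memory, so check the man page):

duperemove -dh file1 file2        # hash in blocks, submit identical extents for dedup
btrfs fi du file1 file2           # "Set shared" should grow accordingly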


> I'm flexible, but I distrust "backup solutions".


I would say it depends. I also distrust everything, but a sane existing
solution I maybe distrust a little less than my "self-built" one. ;-)


Don't trust your own solution more than others "on principle", without 
some real reasons for distrust.


> Sounds good.  Before I try it, I need to make a backup in case something goes
> wrong.


;-)

regards
hede



Re: else or Debian (Re: ZFS performance (was: Re: deduplicating file systems: VDO with Debian?))

2022-11-10 Thread DdB
Am 10.11.2022 um 04:46 schrieb hw:
> On Wed, 2022-11-09 at 18:26 +0100, Christoph Brinkhaus wrote:
>> Am Wed, Nov 09, 2022 at 06:11:34PM +0100 schrieb hw:
>> [...]
>>> FreeBSD has ZFS but can't even configure the disk controllers, so that won't
>>> work.  
>>
>> If I understand you right you mean RAID controllers?
> 
> yes
> 
>> According to my knowledge ZFS should be used without any RAID
>> controllers. Disks or better partitions are fine.
> 
> I know, but it's what I have.  JBOD controllers are difficult to find.  And it
> doesn't really matter because I can configure each disk as a single disk ---
> still RAID though.  It may even be an advantage because the controllers have 
> 1GB
> cache each and the computers CPU doesn't need to do command queuing.
> 
> And I've been reading that when using ZFS, you shouldn't make volumes with 
> more
> than 8 disks.  That's very inconvenient.
> 
> Why would partitions be better than the block device itself?  They're like an
> additional layer and what could be faster and easier than directly using the
> block devices?
> 
> 
It hurts my eyes to see such disinformation circulating. But I myself am
only one happy ZFS user of a decade by now. I suggest getting in contact
with the ZFS gurus on ZoL (or reading the archive at
https://zfsonlinux.topicbox.com/groups/zfs-discuss)



Re: ZFS performance (was: Re: deduplicating file systems: VDO with Debian?)

2022-11-10 Thread DdB
Am 10.11.2022 um 06:38 schrieb David Christensen:
> What is your technique for defragmenting ZFS?
Well, that was meant more or less as a joke: there is none, apart from
offloading all the data, destroying and rebuilding the pool, and filling
it again from the backup. But I do it from time to time when fragmentation
gets high; the speed improvements are obvious. OTOH the process takes
days on my SOHO servers.
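
For the record, that offload/rebuild cycle roughly looks like this (pool and
dataset names invented; check the send/receive flags against the installed
OpenZFS version):

zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs receive -uF backup/tank     # offload to another pool
zpool destroy tank
zpool create tank raidz2 sda sdb sdc sdd sde sdf           # rebuild the pool
zfs send -R backup/tank@migrate | zfs receive -uF tank     # refill from the copy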



Re: else or Debian (Re: ZFS performance (was: Re: deduplicating file systems: VDO with Debian?))

2022-11-10 Thread Christoph Brinkhaus
Am Thu, Nov 10, 2022 at 04:46:12AM +0100 schrieb hw:
> On Wed, 2022-11-09 at 18:26 +0100, Christoph Brinkhaus wrote:
> > Am Wed, Nov 09, 2022 at 06:11:34PM +0100 schrieb hw:
> > [...]
> > > FreeBSD has ZFS but can't even configure the disk controllers, so that 
> > > won't
> > > work.  
> > 
> > If I understand you right you mean RAID controllers?
> 
> yes
> 
> > According to my knowledge ZFS should be used without any RAID
> > controllers. Disks or better partitions are fine.
> 
> I know, but it's what I have.  JBOD controllers are difficult to find.  And it
> doesn't really matter because I can configure each disk as a single disk ---
> still RAID though.  It may even be an advantage because the controllers have 
> 1GB
> cache each and the computers CPU doesn't need to do command queuing.
> 
> And I've been reading that when using ZFS, you shouldn't make volumes with 
> more
> than 8 disks.  That's very inconvenient.
> 
> Why would partitions be better than the block device itself?  They're like an
> additional layer and what could be faster and easier than directly using the
> block devices?
 
 Using the block device is no issue until you have a mirror or so.
 In case of a mirror ZFS will use the capacity of the smallest drive.

 I have read that, for example, a 100GB disk might be slightly larger
 than 100GB. When you want to replace a 100GB disk with a spare one
 which is slightly smaller than the original, the pool will not fit on
 the new disk and the replacement fails.

 With partitions you can specify the space. It does not hurt if there
 are a few MB unallocated. But then the partitions of the disks have
 exactly the same size.
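
 A minimal sketch of that approach (device names and sizes are only examples):

 # give every disk an explicitly sized partition, slightly smaller than the disk
 parted -s /dev/sdb -- mklabel gpt mkpart zfs 1MiB 930GiB
 parted -s /dev/sdc -- mklabel gpt mkpart zfs 1MiB 930GiB

 zpool create tank mirror /dev/sdb1 /dev/sdc1   # mirror of equally sized partitions

 # a replacement disk only has to accommodate the 930GiB partition
 parted -s /dev/sdd -- mklabel gpt mkpart zfs 1MiB 930GiB
 zpool replace tank /dev/sdb1 /dev/sdd1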

 Kind regards,
 Christoph