Re: Why is disk full?

2022-03-29 Thread Aner Perez

Delete them from the destination and resync with -S.
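
(For reference, a sketch of that procedure; the scan path below is
hypothetical. Since rsync skips files whose size and mtime already match,
deleting the destination copies forces a fresh, sparse-aware transfer:)

    rm /mnt/wd2l/scans/sample.pdf               # drop the non-sparse copy
    rsync -avS --delete /mnt/wd1l/ /mnt/wd2l/   # recreate it with holes
    du -k /mnt/wd2l/scans/sample.pdf            # should now show far fewer blocks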

    - Aner

On 3/29/22 12:14, F Bax wrote:

Looks like sparse files are no longer sparse on /mnt/wd2l/ !! Thanks Otto &
Aner.
du reported different sizes for several dozen folders that contain files
created by scan to PDF. Not all of the scanned files were affected, but
some might contain mostly blank pages.
For one sample file, ls -l reports
-rw-rw  1 fbax fbax  6683710 Oct 21  2019
du reports
13056   /mnt/wd1/ ...
13184   /mnt/wd2l/ ...

rsync -anvS does NOT report these files! Is there an easy way to make these
files sparse on wd2l?


Re: Why is disk full?

2022-03-29 Thread F Bax
Looks like sparse files are no longer sparse on /mnt/wd2l/ !! Thanks Otto &
Aner.
du reported different sizes for several dozen folders that contain files
created by scan to PDF. Not all of the scanned files were affected, but
some might contain mostly blank pages.
For one sample file, ls -l reports
-rw-rw  1 fbax fbax  6683710 Oct 21  2019
du reports
13056   /mnt/wd1/ ...
13184   /mnt/wd2l/ ...

rsync -anvS does NOT report these files! Is there an easy way to make these
files sparse on wd2l?

On Tue, Mar 29, 2022 at 11:32 AM Aner Perez  wrote:

> You may have large files with "holes" in them (i.e. sparse files).  Rsync
> has a --sparse
> (-S) flag that tries to create holes in the replicated files when it finds
> sequences of
> nulls in the source file.
>
> The -a flag does not turn on this sparse file handling.
>
> You can run "du" on different directories to narrow down where the file
> size difference is
> coming from.
>
>  - Aner


Re: Why is disk full?

2022-03-29 Thread Otto Moerbeek
On Tue, Mar 29, 2022 at 11:12:23AM -0400, F Bax wrote:

> # dumpfs /dev/rwd1l | head -1
> magic   11954 (FFS1)    time    Wed Jan  8 19:45:37 2020
> # dumpfs /dev/rwd2l | head -1
> magic   11954 (FFS1)    time    Sun Mar 27 13:01:58 2022

OK, third option: you had sparse files on the source disk. Sparse
files contain blocks of all zeroes that are not stored as data blocks.
I think by default rsync does not (re)create those as sparse (see
rsync option -S).
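
(For reference: a sparse file betrays itself because ls -l reports the
logical byte size while du reports the blocks actually allocated, so du's
figure comes out much smaller. A quick check on any suspect file:)

    ls -l /mnt/wd1l/some/file   # logical size in bytes
    du -k /mnt/wd1l/some/file   # 1K-blocks actually allocated on disk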

-Otto


Re: Why is disk full?

2022-03-29 Thread Aner Perez
You may have large files with "holes" in them (i.e. sparse files).  Rsync has a --sparse 
(-S) flag that tries to create holes in the replicated files when it finds sequences of 
nulls in the source file.


The -a flag does not turn on this sparse file handling.

You can run "du" on different directories to narrow down where the file size difference is 
coming from.
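
(A sketch of that narrowing-down, comparing per-directory usage of the two
trees from this thread:)

    (cd /mnt/wd1l && du -sk -- */ | sort -k2) > /tmp/wd1l.du
    (cd /mnt/wd2l && du -sk -- */ | sort -k2) > /tmp/wd2l.du
    diff /tmp/wd1l.du /tmp/wd2l.du   # lists directories whose usage differs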


    - Aner

On 3/29/22 10:58, F Bax wrote:

I used rsync to copy files.
sudo rsync -anv --delete /mnt/wd1l/ /mnt/wd2l/
reports no changes required (runtime under 3 minutes).
sudo diff -r /mnt/wd1l/ /mnt/wd2l/
reports no difference (runtime 10 hours)

$ sudo df -i /mnt/wd1l/ /mnt/wd2l/
Filesystem  512-blocks       Used     Avail Capacity iused    ifree %iused  Mounted on
/dev/wd1l   2138940784 1997329632  34664128      98% 483707 33313411    1%  /mnt/wd1l
/dev/wd2l   2138951776 2033043696  -1039504     100% 483707 33313411    1%  /mnt/wd2l


Re: Why is disk full?

2022-03-29 Thread F Bax
# dumpfs /dev/rwd1l | head -1
magic   11954 (FFS1)    time    Wed Jan  8 19:45:37 2020
# dumpfs /dev/rwd2l | head -1
magic   11954 (FFS1)    time    Sun Mar 27 13:01:58 2022

On Tue, Mar 29, 2022 at 11:07 AM Otto Moerbeek  wrote:

> Ok, then it could be an FFS1 vs FFS2 thing. FFS2 has a larger
> meta-data overhead due to its larger inodes.
>
> Check
>
> # dumpfs /dev/rwd1l | head -1
> # dumpfs /dev/rwd2l | head -1
>
> -Otto
>
> >
> > On Tue, Mar 29, 2022 at 10:49 AM F Bax  wrote:
> >
> > > I used rsync to copy files. df -i reports 483707 inodes used for both
> > > partitions.
> > > sudo rsync -anv --delete /mnt/wd1l/ /mnt/wd2l/
> > > reports no changes required (runtime under 3 minutes).
> > > sudo diff -r /mnt/wd1l/ /mnt/wd2l/
> > > reports no difference (runtime 10 hours)
> > >
> > > On Tue, Mar 29, 2022 at 10:39 AM Otto Moerbeek  wrote:
> > >
> > >> On Tue, Mar 29, 2022 at 10:25:34AM -0400, F Bax wrote:
> > >>
> > >> > I copied all files from /mnt/wd1l to /mnt/wd2l
> > >> >
> > >> > wd2l is slightly larger than wd1l; yet wd2l is full!
> > >> >
> > >> > $ df -h /mnt/wd1l /mnt/wd2l
> > >> > Filesystem Size Used Avail Capacity Mounted on
> > >> > /dev/wd1l 1020G 952G 16.5G 98% /mnt/wd1l
> > >> > /dev/wd2l 1020G 969G -508M 100% /mnt/wd2l
> > >>
> > >> How did you copy? Some forms of copy will cause hardlinked files to be
> > >> separate files on the destination. df -i will tell how many inodes you
> > >> have used. If wd2l has more inodes in use, I bet it's that.
> > >>
> > >> -Otto
> > >>
> > >> >
> > >> > Output from disklabel is almost identical:
> > >> >
> > >> > type: SCSI
> > >> > disk: SCSI disk
> > >> > label: WDC WD2000FYYZ-0
> > >> > flags:
> > >> > bytes/sector: 512
> > >> > sectors/track: 63
> > >> > tracks/cylinder: 255
> > >> > sectors/cylinder: 16065
> > >> > cylinders: 243201
> > >> > total sectors: 3907029168
> > >> > rpm: 0
> > >> > interleave: 1
> > >> > trackskew: 0
> > >> > cylinderskew: 0
> > >> > headswitch: 0 # microseconds
> > >> > track-to-track seek: 3907029168 # microseconds
> > >> > drivedata: 0
> > >> >
> > >> > Difference between wd1 and wd2:
> > >> > wd1: interleave: 0
> > >> > wd2: interleave: 1
> > >> >
> > >> > Partition details (A added 'wd1/wd2' to beginning of line:
> > >> > # size offset fstype [fsize bsize cpg]
> > >> > wd1l: 2147472640 525486208 4.2BSD 8192 65536 1
> > >> > wd2l: 2147483647 63 4.2BSD 8192 65536 1
> > >> >
> > >> >  Why is wd2l full?
> > >>
> > >
>


Re: Why is disk full?

2022-03-29 Thread Otto Moerbeek
On Tue, Mar 29, 2022 at 10:58:49AM -0400, F Bax wrote:

> I used rsync to copy files.
> sudo rsync -anv --delete /mnt/wd1l/ /mnt/wd2l/
> reports no changes required (runtime under 3 minutes).
> sudo diff -r /mnt/wd1l/ /mnt/wd2l/
> reports no difference (runtime 10 hours)
> 
> $ sudo df -i /mnt/wd1l/ /mnt/wd2l/
> Filesystem  512-blocks       Used     Avail Capacity iused    ifree %iused  Mounted on
> /dev/wd1l   2138940784 1997329632  34664128      98% 483707 33313411    1%  /mnt/wd1l
> /dev/wd2l   2138951776 2033043696  -1039504     100% 483707 33313411    1%  /mnt/wd2l

Ok, then it could be an FFS1 vs FFS2 thing. FFS2 has a larger
meta-data overhead due to its larger inodes.
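
(Back-of-envelope, for scale: FFS1 inodes are 128 bytes and FFS2 inodes
256 bytes, so with the ~33.8M inodes these filesystems carry, FFS2's inode
tables alone would cost roughly 33.8M * 128 extra bytes, about 4 GB, or
around 8.4M of df's 512-byte blocks.)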

Check 

# dumpfs /dev/rwd1l | head -1
# dumpfs /dev/rwd2l | head -1

-Otto


Re: Why is disk full?

2022-03-29 Thread F Bax
I used rsync to copy files.
sudo rsync -anv --delete /mnt/wd1l/ /mnt/wd2l/
reports no changes required (runtime under 3 minutes).
sudo diff -r /mnt/wd1l/ /mnt/wd2l/
reports no difference (runtime 10 hours)

$ sudo df -i /mnt/wd1l/ /mnt/wd2l/
Filesystem  512-blocks       Used     Avail Capacity iused    ifree %iused  Mounted on
/dev/wd1l   2138940784 1997329632  34664128      98% 483707 33313411    1%  /mnt/wd1l
/dev/wd2l   2138951776 2033043696  -1039504     100% 483707 33313411    1%  /mnt/wd2l


Re: Why is disk full?

2022-03-29 Thread Otto Moerbeek
On Tue, Mar 29, 2022 at 10:25:34AM -0400, F Bax wrote:

> I copied all files from /mnt/wd1l to /mnt/wd2l
> 
> wd2l is slightly larger than wd1l; yet wd2l is full!
> 
> $ df -h /mnt/wd1l /mnt/wd2l
> Filesystem     Size    Used   Avail Capacity  Mounted on
> /dev/wd1l     1020G    952G   16.5G     98%   /mnt/wd1l
> /dev/wd2l     1020G    969G   -508M    100%   /mnt/wd2l

How did you copy? Some forms of copy will cause hardlinked files to be
separate files on the destination. df -i will tell how many inodes you
have used. If wd2l has more inodes in use, I bet it's that.
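
(For reference, a sketch of both checks: counting hardlinked files on the
source, and the rsync flag that preserves them, which -a alone does not:)

    find /mnt/wd1l -type f -links +1 | wc -l   # files with more than one link
    rsync -aH /mnt/wd1l/ /mnt/wd2l/            # -H recreates hard links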

-Otto


Why is disk full?

2022-03-29 Thread F Bax
I copied all files from /mnt/wd1l to /mnt/wd2l

wd2l is slightly larger than wd1l; yet wd2l is full!

$ df -h /mnt/wd1l /mnt/wd2l
Filesystem     Size    Used   Avail Capacity  Mounted on
/dev/wd1l     1020G    952G   16.5G     98%   /mnt/wd1l
/dev/wd2l     1020G    969G   -508M    100%   /mnt/wd2l

Output from disklabel is almost identical:

type: SCSI
disk: SCSI disk
label: WDC WD2000FYYZ-0
flags:
bytes/sector: 512
sectors/track: 63
tracks/cylinder: 255
sectors/cylinder: 16065
cylinders: 243201
total sectors: 3907029168
rpm: 0
interleave: 1
trackskew: 0
cylinderskew: 0
headswitch: 0 # microseconds
track-to-track seek: 3907029168 # microseconds
drivedata: 0

Difference between wd1 and wd2:
wd1: interleave: 0
wd2: interleave: 1

Partition details (I added 'wd1l/wd2l' to the beginning of each line):
# size offset fstype [fsize bsize cpg]
wd1l: 2147472640 525486208 4.2BSD 8192 65536 1
wd2l: 2147483647 63 4.2BSD 8192 65536 1

 Why is wd2l full?


Re: OpenBGPd: fatal in RDE: aspath_get: Cannot allocate memory

2022-03-29 Thread Laurent CARON

On 29/03/2022 at 12:10, Claudio Jeker wrote:

I doubt it is the filters. You run into some sort of memory leak. Please
monitor 'bgpctl show rib mem' output. Also check ps aux | grep bgpd output
to see why and when the memory starts to go up.
With that information it may be possible to figure out where this leak
sits and how to fix it.

Cheers


Thanks Claudio, will do and report.



Re: issue with move to php8 as default

2022-03-29 Thread Stuart Henderson
On 2022-03-28, ITwrx  wrote:
> I'm running php7.4 and php8 at the same time on an OpenBSD 7.0 machine
> I'm testing as a web server. I'm pretty sure they were both starting up
> fine until yesterday (it's been a while) after I updated with pkg_add -u
> and syspatch. Now, php8 fails to start with:
>
> ERROR: Another FPM instance seems to already listen on 
> /var/www/run/php-fpm.sock
> ERROR: FPM initialization failed
>
> This seems to be due to the fact that php8.0 became the new default,
> but it looks like php74 is still trying to use php-fpm.sock instead of
> php-fpm74.sock, or whatever it's supposed to be called once it's not
> the default anymore.

The php-fpm ports default to using /etc/php-fpm.conf.

If you are running both php74_fpm and php80_fpm together then you must
change this default for at least one of them and point it at its own
configuration file e.g.

php74_fpm_flags=-y /etc/php-fpm-7.4.conf

Nothing has changed in this respect.
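
(A sketch of what that can look like in /etc/rc.conf.local; the socket
path shown for the 7.4 pool is an assumption; each conf file must listen
on its own distinct socket:)

    pkg_scripts="php74_fpm php80_fpm"
    php74_fpm_flags="-y /etc/php-fpm-7.4.conf"
    # and inside /etc/php-fpm-7.4.conf, something like:
    #   listen = /var/www/run/php-fpm74.sock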


-- 
Please keep replies on the mailing list.



Re: OpenBGPd: fatal in RDE: aspath_get: Cannot allocate memory

2022-03-29 Thread Stuart Henderson
On 2022-03-29, Claudio Jeker  wrote:
> On Tue, Mar 29, 2022 at 09:53:56AM +0200, Laurent CARON wrote:
>> Hi,
>> 
>> I'm happily running several OpenBGPd routers (Openbsd 7.0).
>> 
>> After having applied the folloxing filters (to blackhole traffic from
>> certain countries):
>> 
>> include "/etc/bgpd/deny-asn.ru.bgpd"
>> include "/etc/bgpd/deny-asn.by.bgpd"
>> include "/etc/bgpd/deny-asn.ua.bgpd"
>> 
>> 
>> # head /etc/bgpd/deny-asn.ru.bgpd
>> match from any AS 2148 set { localpref 250 nexthop blackhole }
>> match from any AS 2585 set { localpref 250 nexthop blackhole }
>> match from any AS 2587 set { localpref 250 nexthop blackhole }
>> match from any AS 2599 set { localpref 250 nexthop blackhole }
>> match from any AS 2766 set { localpref 250 nexthop blackhole }
>> match from any AS 2848 set { localpref 250 nexthop blackhole }
>> match from any AS 2854 set { localpref 250 nexthop blackhole }
>> match from any AS 2875 set { localpref 250 nexthop blackhole }
>> match from any AS 2878 set { localpref 250 nexthop blackhole }
>> match from any AS 2895 set { localpref 250 nexthop blackhole }
>> 
>> The bgpd daemon crashes every few days with the following:
>> 
>> Mar 21 11:36:54 bgpgw-004 bgpd[76476]: 338 roa-set entries expired
>> Mar 21 12:06:54 bgpgw-004 bgpd[76476]: 36 roa-set entries expired
>> Mar 21 12:11:54 bgpgw-004 bgpd[76476]: 82 roa-set entries expired
>> Mar 21 12:22:36 bgpgw-004 bgpd[99215]: fatal in RDE: prefix_alloc: Cannot
>> allocate memory
>> Mar 21 12:22:36 bgpgw-004 bgpd[65049]: peer closed imsg connection
>> Mar 21 12:22:36 bgpgw-004 bgpd[65049]: main: Lost connection to RDE
>> Mar 21 12:22:36 bgpgw-004 bgpd[76476]: peer closed imsg connection
>> Mar 21 12:22:36 bgpgw-004 bgpd[58155]: peer closed imsg connection
>> Mar 21 12:22:36 bgpgw-004 bgpd[76476]: RTR: Lost connection to RDE
>> Mar 21 12:22:36 bgpgw-004 bgpd[58155]: SE: Lost connection to RDE
>> Mar 21 12:22:36 bgpgw-004 bgpd[58155]: peer closed imsg connection
>> Mar 21 12:22:36 bgpgw-004 bgpd[76476]: peer closed imsg connection
>> Mar 21 12:22:36 bgpgw-004 bgpd[58155]: SE: Lost connection to RDE control
>> Mar 21 12:22:36 bgpgw-004 bgpd[76476]: fatal in RTR: Lost connection to
>> parent
>> Mar 21 12:22:36 bgpgw-004 bgpd[58155]: Can't send message 61 to RDE, pipe
>> closed
>> Mar 21 12:22:36 bgpgw-004 bgpd[58155]: peer closed imsg connection
>> Mar 21 12:22:36 bgpgw-004 bgpd[58155]: SE: Lost connection to parent
>> ...
>> 
>> Mar 24 06:34:17 bgpgw-004 bgpd[83062]: 17 roa-set entries expired
>> Mar 24 06:54:47 bgpgw-004 bgpd[82782]: fatal in RDE: communities_copy:
>> Cannot allocate memory
>> Mar 24 06:54:47 bgpgw-004 bgpd[99753]: peer closed imsg connection
>> Mar 24 06:54:47 bgpgw-004 bgpd[83062]: peer closed imsg connection
>> Mar 24 06:54:47 bgpgw-004 bgpd[99753]: main: Lost connection to RDE
>> Mar 24 06:54:47 bgpgw-004 bgpd[83062]: RTR: Lost connection to RDE
>> Mar 24 06:54:47 bgpgw-004 bgpd[83062]: peer closed imsg connection
>> Mar 24 06:54:47 bgpgw-004 bgpd[83062]: fatal in RTR: Lost connection to
>> parent
>> Mar 24 06:54:47 bgpgw-004 bgpd[40748]: peer closed imsg connection
>> Mar 24 06:54:47 bgpgw-004 bgpd[40748]: SE: Lost connection to RDE
>> Mar 24 06:54:47 bgpgw-004 bgpd[40748]: peer closed imsg connection
>> Mar 24 06:54:47 bgpgw-004 bgpd[40748]: SE: Lost connection to RDE control
>> Mar 24 06:54:47 bgpgw-004 bgpd[40748]: Can't send message 61 to RDE, pipe
>> closed
>> Mar 24 06:54:47 bgpgw-004 bgpd[40748]: peer closed imsg connection
>> Mar 24 06:54:47 bgpgw-004 bgpd[40748]: SE: Lost connection to parent
>> ...
>> 
>> Mar 27 13:07:56 bgpgw-004 bgpd[95001]: fatal in RDE: aspath_get: Cannot
>> allocate memory
>> Mar 27 13:07:56 bgpgw-004 bgpd[84816]: peer closed imsg connection
>> Mar 27 13:07:56 bgpgw-004 bgpd[84816]: main: Lost connection to RDE
>> Mar 27 13:07:56 bgpgw-004 bgpd[3118]: peer closed imsg connection
>> Mar 27 13:07:56 bgpgw-004 bgpd[3118]: RTR: Lost connection to RDE
>> Mar 27 13:07:56 bgpgw-004 bgpd[3118]: peer closed imsg connection
>> Mar 27 13:07:56 bgpgw-004 bgpd[3118]: fatal in RTR: Lost connection to
>> parent
>> Mar 27 13:07:56 bgpgw-004 bgpd[60695]: peer closed imsg connection
>> Mar 27 13:07:56 bgpgw-004 bgpd[60695]: SE: Lost connection to RDE
>> Mar 27 13:07:56 bgpgw-004 bgpd[60695]: peer closed imsg connection
>> Mar 27 13:07:56 bgpgw-004 bgpd[60695]: SE: Lost connection to RDE control
>> Mar 27 13:07:56 bgpgw-004 bgpd[60695]: peer closed imsg connection
>> Mar 27 13:07:56 bgpgw-004 bgpd[60695]: SE: Lost connection to parent
>> 
>> Is my filter too aggressive for bgpd? Is there a more efficient way to
>> write it?
>  
> I doubt it is the filters. You run into some sort of memory leak. Please
> monitor 'bgpctl show rib mem' output. Also check ps aux | grep bgpd output 
> to see why and when the memory starts to go up.
> With that information it may be possible to figure out where this leak
> sits and how to fix it.
>
> Cheers

Also: check the values for bgpd's login class (as root, "su 
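
(The suggestion above is truncated in the archive. It presumably points at
the login.conf(5) resource limits bgpd runs under; a hypothetical way to
inspect them, assuming the stock "daemon" class, not necessarily the
original command:)

    # as root: show the hard limits a daemon-class process inherits
    su -c daemon root -c 'ulimit -aH'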

Re: OpenBGPd: fatal in RDE: aspath_get: Cannot allocate memory

2022-03-29 Thread Claudio Jeker
On Tue, Mar 29, 2022 at 09:53:56AM +0200, Laurent CARON wrote:
> Hi,
> 
> I'm happily running several OpenBGPd routers (OpenBSD 7.0).
> 
> After having applied the following filters (to blackhole traffic from
> certain countries):
> 
> include "/etc/bgpd/deny-asn.ru.bgpd"
> include "/etc/bgpd/deny-asn.by.bgpd"
> include "/etc/bgpd/deny-asn.ua.bgpd"
> 
> 
> # head /etc/bgpd/deny-asn.ru.bgpd
> match from any AS 2148 set { localpref 250 nexthop blackhole }
> match from any AS 2585 set { localpref 250 nexthop blackhole }
> match from any AS 2587 set { localpref 250 nexthop blackhole }
> match from any AS 2599 set { localpref 250 nexthop blackhole }
> match from any AS 2766 set { localpref 250 nexthop blackhole }
> match from any AS 2848 set { localpref 250 nexthop blackhole }
> match from any AS 2854 set { localpref 250 nexthop blackhole }
> match from any AS 2875 set { localpref 250 nexthop blackhole }
> match from any AS 2878 set { localpref 250 nexthop blackhole }
> match from any AS 2895 set { localpref 250 nexthop blackhole }
> 
> The bgpd daemon crashes every few days with the following:
> 
> Mar 21 11:36:54 bgpgw-004 bgpd[76476]: 338 roa-set entries expired
> Mar 21 12:06:54 bgpgw-004 bgpd[76476]: 36 roa-set entries expired
> Mar 21 12:11:54 bgpgw-004 bgpd[76476]: 82 roa-set entries expired
> Mar 21 12:22:36 bgpgw-004 bgpd[99215]: fatal in RDE: prefix_alloc: Cannot
> allocate memory
> Mar 21 12:22:36 bgpgw-004 bgpd[65049]: peer closed imsg connection
> Mar 21 12:22:36 bgpgw-004 bgpd[65049]: main: Lost connection to RDE
> Mar 21 12:22:36 bgpgw-004 bgpd[76476]: peer closed imsg connection
> Mar 21 12:22:36 bgpgw-004 bgpd[58155]: peer closed imsg connection
> Mar 21 12:22:36 bgpgw-004 bgpd[76476]: RTR: Lost connection to RDE
> Mar 21 12:22:36 bgpgw-004 bgpd[58155]: SE: Lost connection to RDE
> Mar 21 12:22:36 bgpgw-004 bgpd[58155]: peer closed imsg connection
> Mar 21 12:22:36 bgpgw-004 bgpd[76476]: peer closed imsg connection
> Mar 21 12:22:36 bgpgw-004 bgpd[58155]: SE: Lost connection to RDE control
> Mar 21 12:22:36 bgpgw-004 bgpd[76476]: fatal in RTR: Lost connection to
> parent
> Mar 21 12:22:36 bgpgw-004 bgpd[58155]: Can't send message 61 to RDE, pipe
> closed
> Mar 21 12:22:36 bgpgw-004 bgpd[58155]: peer closed imsg connection
> Mar 21 12:22:36 bgpgw-004 bgpd[58155]: SE: Lost connection to parent
> ...
> 
> Mar 24 06:34:17 bgpgw-004 bgpd[83062]: 17 roa-set entries expired
> Mar 24 06:54:47 bgpgw-004 bgpd[82782]: fatal in RDE: communities_copy:
> Cannot allocate memory
> Mar 24 06:54:47 bgpgw-004 bgpd[99753]: peer closed imsg connection
> Mar 24 06:54:47 bgpgw-004 bgpd[83062]: peer closed imsg connection
> Mar 24 06:54:47 bgpgw-004 bgpd[99753]: main: Lost connection to RDE
> Mar 24 06:54:47 bgpgw-004 bgpd[83062]: RTR: Lost connection to RDE
> Mar 24 06:54:47 bgpgw-004 bgpd[83062]: peer closed imsg connection
> Mar 24 06:54:47 bgpgw-004 bgpd[83062]: fatal in RTR: Lost connection to
> parent
> Mar 24 06:54:47 bgpgw-004 bgpd[40748]: peer closed imsg connection
> Mar 24 06:54:47 bgpgw-004 bgpd[40748]: SE: Lost connection to RDE
> Mar 24 06:54:47 bgpgw-004 bgpd[40748]: peer closed imsg connection
> Mar 24 06:54:47 bgpgw-004 bgpd[40748]: SE: Lost connection to RDE control
> Mar 24 06:54:47 bgpgw-004 bgpd[40748]: Can't send message 61 to RDE, pipe
> closed
> Mar 24 06:54:47 bgpgw-004 bgpd[40748]: peer closed imsg connection
> Mar 24 06:54:47 bgpgw-004 bgpd[40748]: SE: Lost connection to parent
> ...
> 
> Mar 27 13:07:56 bgpgw-004 bgpd[95001]: fatal in RDE: aspath_get: Cannot
> allocate memory
> Mar 27 13:07:56 bgpgw-004 bgpd[84816]: peer closed imsg connection
> Mar 27 13:07:56 bgpgw-004 bgpd[84816]: main: Lost connection to RDE
> Mar 27 13:07:56 bgpgw-004 bgpd[3118]: peer closed imsg connection
> Mar 27 13:07:56 bgpgw-004 bgpd[3118]: RTR: Lost connection to RDE
> Mar 27 13:07:56 bgpgw-004 bgpd[3118]: peer closed imsg connection
> Mar 27 13:07:56 bgpgw-004 bgpd[3118]: fatal in RTR: Lost connection to
> parent
> Mar 27 13:07:56 bgpgw-004 bgpd[60695]: peer closed imsg connection
> Mar 27 13:07:56 bgpgw-004 bgpd[60695]: SE: Lost connection to RDE
> Mar 27 13:07:56 bgpgw-004 bgpd[60695]: peer closed imsg connection
> Mar 27 13:07:56 bgpgw-004 bgpd[60695]: SE: Lost connection to RDE control
> Mar 27 13:07:56 bgpgw-004 bgpd[60695]: peer closed imsg connection
> Mar 27 13:07:56 bgpgw-004 bgpd[60695]: SE: Lost connection to parent
> 
> Is my filter too aggressive for bgpd? Is there a more efficient way to
> write it?
 
I doubt it is the filters. You run into some sort of memory leak. Please
monitor 'bgpctl show rib mem' output. Also check ps aux | grep bgpd output 
to see why and when the memory starts to go up.
With that information it may be possible to figure out where this leak
sits and how to fix it.
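
(A sketch of such monitoring, so the growth can be lined up with events in
the bgpd log; the log path is an assumption:)

    # snapshot bgpd memory use every 10 minutes
    while :; do
            date
            bgpctl show rib mem
            ps aux | grep '[b]gpd'
            sleep 600
    done >> /var/log/bgpd-mem.log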

Cheers
-- 
:wq Claudio



OpenBGPd: fatal in RDE: aspath_get: Cannot allocate memory

2022-03-29 Thread Laurent CARON

Hi,

I'm happily running several OpenBGPd routers (OpenBSD 7.0).

After having applied the following filters (to blackhole traffic from 
certain countries):


include "/etc/bgpd/deny-asn.ru.bgpd"
include "/etc/bgpd/deny-asn.by.bgpd"
include "/etc/bgpd/deny-asn.ua.bgpd"


# head /etc/bgpd/deny-asn.ru.bgpd
match from any AS 2148 set { localpref 250 nexthop blackhole }
match from any AS 2585 set { localpref 250 nexthop blackhole }
match from any AS 2587 set { localpref 250 nexthop blackhole }
match from any AS 2599 set { localpref 250 nexthop blackhole }
match from any AS 2766 set { localpref 250 nexthop blackhole }
match from any AS 2848 set { localpref 250 nexthop blackhole }
match from any AS 2854 set { localpref 250 nexthop blackhole }
match from any AS 2875 set { localpref 250 nexthop blackhole }
match from any AS 2878 set { localpref 250 nexthop blackhole }
match from any AS 2895 set { localpref 250 nexthop blackhole }

The bgpd daemon crashes every few days with the following:

Mar 21 11:36:54 bgpgw-004 bgpd[76476]: 338 roa-set entries expired
Mar 21 12:06:54 bgpgw-004 bgpd[76476]: 36 roa-set entries expired
Mar 21 12:11:54 bgpgw-004 bgpd[76476]: 82 roa-set entries expired
Mar 21 12:22:36 bgpgw-004 bgpd[99215]: fatal in RDE: prefix_alloc: 
Cannot allocate memory

Mar 21 12:22:36 bgpgw-004 bgpd[65049]: peer closed imsg connection
Mar 21 12:22:36 bgpgw-004 bgpd[65049]: main: Lost connection to RDE
Mar 21 12:22:36 bgpgw-004 bgpd[76476]: peer closed imsg connection
Mar 21 12:22:36 bgpgw-004 bgpd[58155]: peer closed imsg connection
Mar 21 12:22:36 bgpgw-004 bgpd[76476]: RTR: Lost connection to RDE
Mar 21 12:22:36 bgpgw-004 bgpd[58155]: SE: Lost connection to RDE
Mar 21 12:22:36 bgpgw-004 bgpd[58155]: peer closed imsg connection
Mar 21 12:22:36 bgpgw-004 bgpd[76476]: peer closed imsg connection
Mar 21 12:22:36 bgpgw-004 bgpd[58155]: SE: Lost connection to RDE control
Mar 21 12:22:36 bgpgw-004 bgpd[76476]: fatal in RTR: Lost connection to 
parent
Mar 21 12:22:36 bgpgw-004 bgpd[58155]: Can't send message 61 to RDE, 
pipe closed

Mar 21 12:22:36 bgpgw-004 bgpd[58155]: peer closed imsg connection
Mar 21 12:22:36 bgpgw-004 bgpd[58155]: SE: Lost connection to parent
...

Mar 24 06:34:17 bgpgw-004 bgpd[83062]: 17 roa-set entries expired
Mar 24 06:54:47 bgpgw-004 bgpd[82782]: fatal in RDE: communities_copy: 
Cannot allocate memory

Mar 24 06:54:47 bgpgw-004 bgpd[99753]: peer closed imsg connection
Mar 24 06:54:47 bgpgw-004 bgpd[83062]: peer closed imsg connection
Mar 24 06:54:47 bgpgw-004 bgpd[99753]: main: Lost connection to RDE
Mar 24 06:54:47 bgpgw-004 bgpd[83062]: RTR: Lost connection to RDE
Mar 24 06:54:47 bgpgw-004 bgpd[83062]: peer closed imsg connection
Mar 24 06:54:47 bgpgw-004 bgpd[83062]: fatal in RTR: Lost connection to 
parent

Mar 24 06:54:47 bgpgw-004 bgpd[40748]: peer closed imsg connection
Mar 24 06:54:47 bgpgw-004 bgpd[40748]: SE: Lost connection to RDE
Mar 24 06:54:47 bgpgw-004 bgpd[40748]: peer closed imsg connection
Mar 24 06:54:47 bgpgw-004 bgpd[40748]: SE: Lost connection to RDE control
Mar 24 06:54:47 bgpgw-004 bgpd[40748]: Can't send message 61 to RDE, 
pipe closed

Mar 24 06:54:47 bgpgw-004 bgpd[40748]: peer closed imsg connection
Mar 24 06:54:47 bgpgw-004 bgpd[40748]: SE: Lost connection to parent
...

Mar 27 13:07:56 bgpgw-004 bgpd[95001]: fatal in RDE: aspath_get: Cannot 
allocate memory

Mar 27 13:07:56 bgpgw-004 bgpd[84816]: peer closed imsg connection
Mar 27 13:07:56 bgpgw-004 bgpd[84816]: main: Lost connection to RDE
Mar 27 13:07:56 bgpgw-004 bgpd[3118]: peer closed imsg connection
Mar 27 13:07:56 bgpgw-004 bgpd[3118]: RTR: Lost connection to RDE
Mar 27 13:07:56 bgpgw-004 bgpd[3118]: peer closed imsg connection
Mar 27 13:07:56 bgpgw-004 bgpd[3118]: fatal in RTR: Lost connection to 
parent

Mar 27 13:07:56 bgpgw-004 bgpd[60695]: peer closed imsg connection
Mar 27 13:07:56 bgpgw-004 bgpd[60695]: SE: Lost connection to RDE
Mar 27 13:07:56 bgpgw-004 bgpd[60695]: peer closed imsg connection
Mar 27 13:07:56 bgpgw-004 bgpd[60695]: SE: Lost connection to RDE control
Mar 27 13:07:56 bgpgw-004 bgpd[60695]: peer closed imsg connection
Mar 27 13:07:56 bgpgw-004 bgpd[60695]: SE: Lost connection to parent

Is my filter too aggressive for bgpd? Is there a more efficient way to 
write it?


Thanks

Laurent



Re: Tunnel traffic does not match SA on initial connection to remote httpd

2022-03-29 Thread Tobias Heider
On Fri, Mar 25, 2022 at 12:23:45PM -0500, rea...@catastrophe.net wrote:
> The setup is two gateways with IPsec channels setup in tunnel mode
> to bridge networks 10.255.255.0/24 and 10.254.255.0/24. Traffic from 
> server-east:enc0 does not match a SA in place when trying to connect to
> httpd on server-west.
> 
> Setup in ASCII art:
> 
> em0:203.0.113.50 -~-~- ipsec tunnel -~-~-~- vio0:100.64.1.92
>  | SERVER-WEST |                           | SERVER-EAST |
> enc0:10.255.255.1/24                       enc0:10.254.255.1/24
> 
> When traffic sources from 10.254.255.1 to server-west's httpd, the initial
> SYN goes out 100.64.1.92 and does not match the ipsec SA in place:
> 
> flow esp out from 10.254.255.0/24 to 10.255.255.0/24 peer 203.0.113.50 srcid
> FQDN/server-east.example.com dstid FQDN/server-west.example.com type require
> 
> However, return traffic on server-west matches an SA already in place and is
> sent back over the tunnel to server-east. Here is a pcap from server-west
> showing the initial connection; the second packet is the response from
> server-west to server-east over the tunnel, etc.
> 
> 11:15:07.595477 100.64.1.92.53545 > 203.0.113.50.80: SWE 
> 466527235:466527235(0) win 16384  6,nop,nop,timestamp 3005156378 0> (DF)
> 11:15:07.641673 203.0.113.50 > 100.64.1.92: esp spi 0x5787a1ca seq 1 len 80 
> (DF)
> 11:15:07.641901 100.64.1.92 > 203.0.113.50: esp spi 0x9a987eb3 seq 1 len 76
> 11:15:11.959583 100.64.1.92.63317 > 203.0.113.50.80: SWE 
> 321626718:321626718(0) win 16384  6,nop,nop,timestamp 891794631 0> (DF)
> 11:15:12.005730 203.0.113.50 > 100.64.1.92: esp spi 0x5787a1ca seq 2 len 80 
> (DF)
> 
> The SA being match on server-west is:
> 
> esp tunnel from 203.0.113.50 to 100.64.1.92 spi 0x5787a1ca enc aes-256-gcm
> 
> Is something missing in my configs or does anything look obviously broken?

It looks like the synproxy state in your pf.conf might be the problem.
You could try adding a "from 10.254.255.1/24 to 203.0.113.50" flow to your
iked config and see if that catches the initial SYN, or remove the synproxy
option in pf to test how that works.
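
(A sketch of the suggested extra flow in server-east's iked.conf, written
as the /24 network; the policy name, mode and id lines are assumptions,
since the iked configuration itself was not posted:)

    ikev2 "server-west" active esp \
            from 10.254.255.0/24 to 203.0.113.50 \
            peer 203.0.113.50 \
            srcid server-east.example.com dstid server-west.example.com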

> 
> Many thanks in advance for any help.
> 
> 
> PF RULES
> 
> 
> # server-west pf
> match in all scrub (no-df random-id max-mss 1440)
> match out on em0 inet from 10.255.255.0/24 to any nat-to (em0) round-robin
> block drop in log on ! em0 inet from 203.0.113.48/30 to any
> block drop log all
> pass out proto tcp all modulate state
> pass out proto udp from any to any port = 500
> pass out proto udp from any to any port = 4500
> pass out proto esp all
> pass out proto ah all
> pass out all modulate state
> block drop in log from urpf-failed to any label "uRPF"
> block drop in log from no-route to any
> pass in proto udp from any to 203.0.113.50 port = 500 keep state
> pass in proto udp from any to 203.0.113.50 port = 4500 keep state
> pass in proto esp from any to 203.0.113.50 
> pass in proto ah from any to 203.0.113.50
> pass in inet proto tcp from any to 203.0.113.50 port = 80 flags S/SA synproxy 
> state (source-track rule, max-src-conn 256, max-src-conn-rate 40/2, overload 
>  flush, src.track 2)
> pass in inet proto tcp from 100.64.1.92 to 203.0.113.50 port = 5201 flags S/SA
> 
> # server-east pf
> match in all scrub (no-df random-id max-mss 1440)
> match out on vio0 inet from 10.254.255.0/24 to any nat-to (vio0) round-robin
> block drop in log on ! vio0 inet from 100.64.0.0/23 to any
> block drop log all
> pass out proto tcp all modulate state
> pass out proto udp from any to any port = 500
> pass out proto udp from any to any port = 4500
> pass out proto esp all
> pass out proto ah all
> pass out all modulate state
> block drop in log from urpf-failed to any label "uRPF"
> block drop in log from no-route to any
> pass in inet proto udp from any to 100.64.1.92 port = 500 keep state
> pass in inet proto udp from any to 100.64.1.92 port = 4500 keep state
> pass in inet proto esp from any to 100.64.1.92
> pass in inet proto ah from any to 100.64.1.92
> pass on enc0 all flags S/SA modulate state (if-bound) tagged VPN.SERVER-WEST
> pass on enc0 all flags S/SA modulate state (if-bound)
> pass in inet proto tcp from any to 100.64.1.92 port = 80 flags S/SA synproxy 
> state (source-track rule, max-src-conn 256, max-src-conn-rate 40/2, overload 
>  flush, src.track 2)
> pass in inet proto tcp from 203.0.113.50 to 100.64.1.92 port = 5201 flags S/SA
> 
> IPSEC FLOWS
> ===
> 
> # server-west flows
> FLOWS:
> flow esp in from 10.254.255.0/24 to 10.255.255.0/24 peer 100.64.1.92 srcid 
> FQDN/server-west.example.com dstid FQDN/server-east.example.com type require
> flow esp in from 100.64.1.92 to 203.0.113.50 peer 100.64.1.92 srcid 
> FQDN/server-west.example.com dstid FQDN/server-east.example.com type require
> flow esp out from 10.255.255.0/24 to 10.254.255.0/24 peer 100.64.1.92 srcid 
> FQDN/server-west.example.com dstid FQDN/server-east.example.com type require
> flow esp out from 203.0.113.50 to 100.64.1.92 peer 100.64.1.92 srcid 
> 

Re: How to determine if WiFi AP is compatible?

2022-03-29 Thread Łukasz Moskała
On Tue, Mar 29, 2022 at 05:31:56AM +0300, Mihai Popescu wrote:
> > Pure access points are just network bridges ...
> 
> Most AP I encountered were linux based with web servers for
> configuration access.
> Do you know if there is an AP model with minimal firmware to do that bridging?
> If so, can you post some models, please?
> 
> Thank you.
>

I'd say anything that is supported by OpenWRT. Still Linux-based, but you can
compile it yourself, removing everything you don't want/need, including the
web UI.
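
(For reference, a sketch of the usual OpenWRT "dumb AP" conversion: switch
off the routing-oriented services so the box only bridges; exact interface
names and steps vary by device and release:)

    # on the AP: stop serving DHCP/DNS and firewalling
    /etc/init.d/dnsmasq disable
    /etc/init.d/firewall disable
    # let the existing LAN hand the bridge its address
    uci set network.lan.proto='dhcp'
    uci commit network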