[Touch-packages] [Bug 1940691] Re: sfdisk fails to overwrite disks that contain a pre-existing linux partition

2021-08-20 Thread Bill Yikes
** Description changed:

  This command should non-interactively obliterate whatever partition
  table is on /dev/sde, and create a new table with a linux partition that
  spans the whole disk:
  
    $ sfdisk --label gpt /dev/sde <<< 'start=2048, type='"$(sfdisk --label
  gpt -T | awk '{IGNORECASE = 1;} /linux filesystem/{print $1}')"
  
  Sometimes it works correctly; sometimes not.  It seems to work correctly
  as long as the pre-existing partition table does not already contain a
  linux partition.  E.g. if the existing table just contains an exFAT
  partition, there's no issue.  But if there is a linux partition, it
  gives this output:
  
  -
  Old situation:
  
  Device StartEndSectors   Size Type
  /dev/sde1   2048 1953525134 1953523087 931.5G Linux root (x86-64)
  
  >>> Created a new GPT disklabel (GUID: ).
  /dev/sde1: Sector 2048 already used.
  Failed to add #1 partition: Numerical result out of range
  Leaving.
  -
  
  sfdisk should *always* overwrite the target disk unconditionally.  This
  is what the dump of the target drive looks like in the failure case:
  
  $ sfdisk -d /dev/sdd
  label: gpt
  label-id: 
  device: /dev/sdd
  unit: sectors
  first-lba: 34
  last-lba: 1953525134
  grain: 33553920
  sector-size: 512
  
  /dev/sdd1 : start=2048, size=  1953523087,
  type=4F68BCE3-E8CD-4DB1-96E7-FBCAF984B709, uuid=, name="Linux
  x86-64 root (/)"
  
  $ sfdisk -v
  sfdisk from util-linux 2.36.1
  
  Even the workaround is broken.  That is, running the following:
  
  $ wipefs -a /dev/sde
  
  should put the disk in a state that can be overwritten.  But whatever
  residual metadata it leaves behind still triggers the sfdisk bug:
  
  $ sfdisk --label gpt /dev/sde <<< 'start=2048, type='"$(sfdisk --label gpt -T 
| awk '{IGNORECASE = 1;} /linux filesystem/{print $1}')"
  Checking that no-one is using this disk right now ... OK
  
  Disk /dev/sde: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
  Disk model: Disk
  Units: sectors of 1 * 512 = 512 bytes
  Sector size (logical/physical): 512 bytes / 512 bytes
  I/O size (minimum/optimal): 4096 bytes / 33553920 bytes
  
  >>> Created a new GPT disklabel (GUID: ).
  /dev/sde1: Sector 2048 already used.
  Failed to add #1 partition: Numerical result out of range
  Leaving.
+ 
+ Workaround 2 (also fails):
+ 
+ $ dd if=/dev/zero of=/dev/sde bs=1M count=512
+ 
+ $ sfdisk -d /dev/sde
+ sfdisk: /dev/sde: does not contain a recognized partition table
+ 
+ ^ the nuclear option did the right thing, but sfdisk still fails to
+ partition the drive (same error).
+ 
+ The *only* workaround that works is an interactive partitioning with
+ gdisk.
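
A sketch of a fuller wipe (relying on sgdisk's documented --zap-all
behavior, which destroys the backup GPT structures at the end of the
disk as well as the primary copy at the start):

$ sgdisk --zap-all /dev/sde

Whether sfdisk still chokes after that is untested here.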

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to util-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1940691

Title:
  sfdisk fails to overwrite disks that contain a pre-existing linux
  partition

Status in util-linux package in Ubuntu:
  New

Bug description:
  This command should non-interactively obliterate whatever partition
  table is on /dev/sde, and create a new table with a linux partition
  that spans the whole disk:

    $ sfdisk --label gpt /dev/sde <<< 'start=2048, type='"$(sfdisk
  --label gpt -T | awk '{IGNORECASE = 1;} /linux filesystem/{print
  $1}')"

  Sometimes it works correctly; sometimes not.  It seems to work
  correctly as long as the pre-existing partition table does not already
  contain a linux partition.  E.g. if the existing table just contains
  an exFAT partition, there's no issue.  But if there is a linux
  partition, it gives this output:

  -
  Old situation:

  Device StartEndSectors   Size Type
  /dev/sde1   2048 1953525134 1953523087 931.5G Linux root (x86-64)

  >>> Created a new GPT disklabel (GUID: ).
  /dev/sde1: Sector 2048 already used.
  Failed to add #1 partition: Numerical result out of range
  Leaving.
  -

  sfdisk should *always* overwrite the target disk unconditionally.
  This is what the dump of the target drive looks like in the failure
  case:

  $ sfdisk -d /dev/sdd
  label: gpt
  label-id: 
  device: /dev/sdd
  unit: sectors
  first-lba: 34
  last-lba: 1953525134
  grain: 33553920
  sector-size: 512

  /dev/sdd1 : start=2048, size=  1953523087,
  type=4F68BCE3-E8CD-4DB1-96E7-FBCAF984B709, uuid=,
  name="Linux x86-64 root (/)"

  $ sfdisk -v
  sfdisk from util-linux 2.36.1

  Even the workaround is broken.  That is, running the following:

  $ wipefs -a /dev/sde

  should put the disk in a state that can be overwritten.  But whatever
  residual metadata it leaves behind still triggers the sfdisk bug:

  $ sfdisk --label gpt /dev/sde <<< 'start=2048, type='"$(sfdisk --label gpt -T 
| awk '{IGNORECASE = 1;} /linux filesystem/{print $1}')"
  Checking that no-one is using this disk right now ... OK

  Disk /dev/sde: 931.51 GiB

[Touch-packages] [Bug 1940691] [NEW] sfdisk fails to overwrite disks that contain a pre-existing linux partition

2021-08-20 Thread Bill Yikes
Public bug reported:

This command should non-interactively obliterate whatever partition
table is on /dev/sde, and create a new table with a linux partition that
spans the whole disk:

  $ sfdisk --label gpt /dev/sde <<< 'start=2048, type='"$(sfdisk --label
gpt -T | awk '{IGNORECASE = 1;} /linux filesystem/{print $1}')"

Sometimes it works correctly; sometimes not.  It seems to work correctly
as long as the pre-existing partition table does not already contain a
linux partition.  E.g. if the existing table just contains an exFAT
partition, there's no issue.  But if there is a linux partition, it
gives this output:

-
Old situation:

Device StartEndSectors   Size Type
/dev/sde1   2048 1953525134 1953523087 931.5G Linux root (x86-64)

>>> Created a new GPT disklabel (GUID: ).
/dev/sde1: Sector 2048 already used.
Failed to add #1 partition: Numerical result out of range
Leaving.
-

sfdisk should *always* overwrite the target disk unconditionally.  This
is what the dump of the target drive looks like in the failure case:

$ sfdisk -d /dev/sdd
label: gpt
label-id: 
device: /dev/sdd
unit: sectors
first-lba: 34
last-lba: 1953525134
grain: 33553920
sector-size: 512

/dev/sdd1 : start=2048, size=  1953523087,
type=4F68BCE3-E8CD-4DB1-96E7-FBCAF984B709, uuid=, name="Linux
x86-64 root (/)"

$ sfdisk -v
sfdisk from util-linux 2.36.1

Even the workaround is broken.  That is, running the following:

$ wipefs -a /dev/sde

should put the disk in a state that can be overwritten.  But whatever
residual metadata it leaves behind still triggers the sfdisk bug:

$ sfdisk --label gpt /dev/sde <<< 'start=2048, type='"$(sfdisk --label gpt -T | 
awk '{IGNORECASE = 1;} /linux filesystem/{print $1}')"
Checking that no-one is using this disk right now ... OK

Disk /dev/sde: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: Disk
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 4096 bytes / 33553920 bytes

>>> Created a new GPT disklabel (GUID: ).
/dev/sde1: Sector 2048 already used.
Failed to add #1 partition: Numerical result out of range
Leaving.

Workaround 2 (also fails):

$ dd if=/dev/zero of=/dev/sde bs=1M count=512

$ sfdisk -d /dev/sde
sfdisk: /dev/sde: does not contain a recognized partition table

^ the nuclear option did the right thing, but sfdisk still fails to
partition the drive (same error).

The *only* workaround that works is an interactive partitioning with
gdisk.
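
One unverified hypothesis: GPT keeps a backup header and table in the
last sectors of the disk, and a 512 MiB zero pass from the front never
reaches them.  A sketch for zeroing that region too, assuming 512-byte
logical sectors:

# blockdev --getsz reports the device size in 512-byte sectors;
# zero the final 1 MiB (2048 sectors), where the backup GPT lives
$ dd if=/dev/zero of=/dev/sde bs=512 count=2048 \
     seek=$(( $(blockdev --getsz /dev/sde) - 2048 ))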

** Affects: util-linux (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to util-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1940691

Title:
  sfdisk fails to overwrite disks that contain a pre-existing linux
  partition

Status in util-linux package in Ubuntu:
  New

Bug description:
  This command should non-interactively obliterate whatever partition
  table is on /dev/sde, and create a new table with a linux partition
  that spans the whole disk:

    $ sfdisk --label gpt /dev/sde <<< 'start=2048, type='"$(sfdisk
  --label gpt -T | awk '{IGNORECASE = 1;} /linux filesystem/{print
  $1}')"

  Sometimes it works correctly; sometimes not.  It seems to work
  correctly as long as the pre-existing partition table does not already
  contain a linux partition.  E.g. if the existing table just contains
  an exFAT partition, there's no issue.  But if there is a linux
  partition, it gives this output:

  -
  Old situation:

  Device StartEndSectors   Size Type
  /dev/sde1   2048 1953525134 1953523087 931.5G Linux root (x86-64)

  >>> Created a new GPT disklabel (GUID: ).
  /dev/sde1: Sector 2048 already used.
  Failed to add #1 partition: Numerical result out of range
  Leaving.
  -

  sfdisk should *always* overwrite the target disk unconditionally.
  This is what the dump of the target drive looks like in the failure
  case:

  $ sfdisk -d /dev/sdd
  label: gpt
  label-id: 
  device: /dev/sdd
  unit: sectors
  first-lba: 34
  last-lba: 1953525134
  grain: 33553920
  sector-size: 512

  /dev/sdd1 : start=2048, size=  1953523087,
  type=4F68BCE3-E8CD-4DB1-96E7-FBCAF984B709, uuid=,
  name="Linux x86-64 root (/)"

  $ sfdisk -v
  sfdisk from util-linux 2.36.1

  Even the workaround is broken.  That is, running the following:

  $ wipefs -a /dev/sde

  should put the disk in a state that can be overwritten.  But whatever
  residual metadata it leaves behind still triggers the sfdisk bug:

  $ sfdisk --label gpt /dev/sde <<< 'start=2048, type='"$(sfdisk --label gpt -T 
| awk '{IGNORECASE = 1;} /linux filesystem/{print $1}')"
  Checking that no-one is using this disk right now ... OK

  Disk /dev/sde: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
  Disk model: Disk
  Units: sectors of 1 * 512 = 512 bytes
  Sector size (logical/physical): 512 bytes / 512 bytes
  I/O size (minimum/optimal

[Touch-packages] [Bug 1937874] [NEW] one --accept-regex expression negates another

2021-07-23 Thread Bill Yikes
Public bug reported:

This command should theoretically fetch all PDFs on a page:

$ wget -v -d -r --level 1 --adjust-extension --no-clobber --no-directories\
   --accept-regex 'administrative-orders/.*/administrative-order-matter-'\
   --accept-regex 'administrative-orders.*.pdf'\
   --accept-regex 'administrative-orders.page[^&]*$'\
   --directory-prefix=/tmp\
   
'https://www.ncua.gov/regulation-supervision/enforcement-actions/administrative-orders?page=56'

But it fails to grab any of them, giving the output:

---
Deciding whether to enqueue 
"https://www.ncua.gov/files/administrative-orders/AO14-0241-R4.pdf";.
https://www.ncua.gov/files/administrative-orders/AO14-0241-R4.pdf is 
excluded/not-included through regex.
Decided NOT to load it.
---

That's bogus.  The workaround is to remove this option:

--accept-regex 'administrative-orders.page[^&]*$'

But that should not be necessary.  Adding an --accept-* clause should
never cause another --accept-* clause to become invalidated and it
should not shrink the set of fetched files.
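
If (as the singular "urlregex" in the man page suggests) wget keeps
only one accept regex and a repeated --accept-regex simply replaces
the previous value, the observed behavior would follow: only the last
pattern is consulted.  An untested workaround sketch that merges the
three patterns into a single alternation:

$ wget -v -d -r --level 1 --adjust-extension --no-clobber --no-directories\
   --accept-regex 'administrative-orders(/.*/administrative-order-matter-|.*\.pdf|.page[^&]*$)'\
   --directory-prefix=/tmp\
   'https://www.ncua.gov/regulation-supervision/enforcement-actions/administrative-orders?page=56'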

** Affects: wget (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to wget in Ubuntu.
https://bugs.launchpad.net/bugs/1937874

Title:
  one --accept-regex expression negates another

Status in wget package in Ubuntu:
  New

Bug description:
  This command should theoretically fetch all PDFs on a page:

  $ wget -v -d -r --level 1 --adjust-extension --no-clobber --no-directories\
 --accept-regex 'administrative-orders/.*/administrative-order-matter-'\
 --accept-regex 'administrative-orders.*.pdf'\
 --accept-regex 'administrative-orders.page[^&]*$'\
 --directory-prefix=/tmp\
 
'https://www.ncua.gov/regulation-supervision/enforcement-actions/administrative-orders?page=56'

  But it fails to grab any of them, giving the output:

  ---
  Deciding whether to enqueue 
"https://www.ncua.gov/files/administrative-orders/AO14-0241-R4.pdf";.
  https://www.ncua.gov/files/administrative-orders/AO14-0241-R4.pdf is 
excluded/not-included through regex.
  Decided NOT to load it.
  ---

  That's bogus.  The workaround is to remove this option:

  --accept-regex 'administrative-orders.page[^&]*$'

  But that should not be necessary.  Adding an --accept-* clause should
  never cause another --accept-* clause to become invalidated and it
  should not shrink the set of fetched files.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/wget/+bug/1937874/+subscriptions


-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1937850] [NEW] the -L / --relative option breaks --accept-regex

2021-07-23 Thread Bill Yikes
Public bug reported:

This code should in principle (per the docs) fetch a few *.pdf files:

$ wget -r --level 1 --adjust-extension --relative --no-clobber --no-directories\
   --domains=ncua.gov --accept-regex 'administrative-orders/.*.pdf'\
   
'https://www.ncua.gov/regulation-supervision/enforcement-actions/administrative-orders?page=22&sort=year&dir=desc&sq='

But it misses all *.pdf files.  When the --relative option is removed,
the PDF files are downloaded.  However, when you examine the tree-top
HTML file, the links pointing to PDF files actually are relative.
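
A quick diagnostic sketch for confirming what the page serves (the
pattern is approximate; it just lists hrefs ending in .pdf so you can
see whether they are relative or absolute):

$ wget -qO- 'https://www.ncua.gov/regulation-supervision/enforcement-actions/administrative-orders?page=22&sort=year&dir=desc&sq=' \
    | grep -oE 'href="[^"]*\.pdf"'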

** Affects: wget (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to wget in Ubuntu.
https://bugs.launchpad.net/bugs/1937850

Title:
  the -L / --relative option breaks --accept-regex

Status in wget package in Ubuntu:
  New

Bug description:
  This code should in principle (per the docs) fetch a few *.pdf files:

  $ wget -r --level 1 --adjust-extension --relative --no-clobber 
--no-directories\
 --domains=ncua.gov --accept-regex 'administrative-orders/.*.pdf'\
 
'https://www.ncua.gov/regulation-supervision/enforcement-actions/administrative-orders?page=22&sort=year&dir=desc&sq='

  But it misses all *.pdf files.  When the --relative option is removed,
  the PDF files are downloaded.  However, when you examine the tree-top
  HTML file, the links pointing to PDF files actually are relative.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/wget/+bug/1937850/+subscriptions


-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1937082] [NEW] gunzip fails to extract data from stdin when the zip holds multiple files

2021-07-21 Thread Bill Yikes
Public bug reported:

There is no good reason for this command to fail:

$ wget --quiet -O -
https://web.archive.org/web/20210721004028/freefontsdownload.net/download/76451/lucida_fax.zip
| gunzip -

The output is:

[InternetShortcut]
URL=HOMESITEfree-lucida_fax-font-76451.htmgzip: stdin has more than one 
entry--rest ignored

What's happening is that the gzip logic has gotten tangled up with the
gunzip logic. If gzip receives an input stream, it's sensible that the
resulting archive contain just one file. But there's no reason gunzip
should not be able to produce multiple files from a zip stream.  Note
that -c was not given to gunzip, so it should not have the constraints
that use of stdout would impose.

The man page is also a problem. The gunzip portion of the aggregated man
page makes no statement about how stdin is expected to operate.  At a
minimum, it should say that a minus ("-") directs gunzip to read from
stdin.
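
A workaround sketch that treats the stream as the multi-member zip it
really is (this assumes bsdtar from libarchive is available; it can
read a zip archive from a pipe):

$ wget --quiet -O - \
  'https://web.archive.org/web/20210721004028/freefontsdownload.net/download/76451/lucida_fax.zip' \
  | bsdtar -xf -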

** Affects: gzip (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to gzip in Ubuntu.
https://bugs.launchpad.net/bugs/1937082

Title:
  gunzip fails to extract data from stdin when the zip holds multiple
  files

Status in gzip package in Ubuntu:
  New

Bug description:
  There is no good reason for this command to fail:

  $ wget --quiet -O -
  
https://web.archive.org/web/20210721004028/freefontsdownload.net/download/76451/lucida_fax.zip
  | gunzip -

  The output is:

  [InternetShortcut]
  URL=HOMESITEfree-lucida_fax-font-76451.htmgzip: stdin has more than one 
entry--rest ignored

  What's happening is that the gzip logic has gotten tangled up with
  the gunzip logic. If gzip receives an input stream, it's sensible that the
  resulting archive contain just one file. But there's no reason gunzip
  should not be able to produce multiple files from a zip stream.  Note
  that -c was not given to gunzip, so it should not have the constraints
  that use of stdout would impose.

  The man page is also a problem. The gunzip portion of the aggregated
  man page makes no statement about how stdin is expected to operate.
  At a minimum, it should say that a minus ("-") directs gunzip to read
  from stdin.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/gzip/+bug/1937082/+subscriptions


-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1934040] Re: openssl s_client's '-ssl2' & '-ssl3' options gone, prematurely!

2021-06-30 Thread Bill Yikes
** Description changed:

  SSL2 and SSL3 have been hastily removed, apparently by developers who
  are unaware that these protocols serve purposes other than encryption.
  SSL2/3 is *still used* on onion sites.  Why?  For verification.  An
  onion site has inherent encryption, so it matters not how weak the SSL
  crypto is when the purpose is purely to verify that the server is owned
  by who they say it's owned by.
  
  Proof that disclosure attacks on ssl2/3 are irrelevant to onion sites:
  
  https://blog.torproject.org/tls-certificate-for-onion-site
+ https://community.torproject.org/onion-services/overview/
  
  So here is a real world impact case.  Suppose you get your email from
  one of these onion mail servers:
  
  http://onionmail.info/directory.html
  
  Some (if not all) use ssl2/3 on top of Tor's inherent onion tunnel.
  They force users to use ssl2/3, so even if a user configures the client
  not to impose TLS, the server imposes it.  And it's reasonable because
  the ssl2/3 vulns are orthogonal to the use case.
  
  Some users will get lucky and use a mail client that still supports
  ssl2/3.  But there's still a problem: users can no longer use openssl to
  obtain the fingerprint to pin.  e.g.
  
  $ openssl s_client -proxy 127.0.0.1:8118 -connect xhfheq5i37waj6qb.onion:110 
-showcerts
  CONNECTED(0003)
  140124399195456:error:1408F10B:SSL routines:ssl3_get_record:wrong version 
number:../ssl/record/ssl3_record.c:331:
  ---
  no peer certificate available
  ---
  No client certificate CA names sent
  ---
  SSL handshake has read 44 bytes and written 330 bytes
  Verification: OK
  ---
  New, (NONE), Cipher is (NONE)
  Secure Renegotiation IS NOT supported
  Compression: NONE
  Expansion: NONE
  No ALPN negotiated
  Early data was not sent
  Verify return code: 0 (ok)
  ---
  
  That's openssl version 1.1.1k
  
  Being denied the ability to pin the SSL cert is actually a *degradation*
  of security.  Cert Pinning is particularly useful with self-signed
  certs, as is often the scenario with onion sites.
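
  For reference, the pinning workflow being denied here, sketched
  against a server where the handshake still completes (host and port
  are placeholders):

  # grab the leaf cert and compute a SHA-256 fingerprint to pin
  $ openssl s_client -connect example.onion:110 </dev/null 2>/dev/null \
      | openssl x509 -noout -fingerprint -sha256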

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to openssl in Ubuntu.
https://bugs.launchpad.net/bugs/1934040

Title:
  openssl s_client's '-ssl2' & '-ssl3' options gone, prematurely!

Status in openssl package in Ubuntu:
  New

Bug description:
  SSL2 and SSL3 have been hastily removed, apparently by developers who
  are unaware that these protocols serve purposes other than encryption.
  SSL2/3 is *still used* on onion sites.  Why?  For verification.  An
  onion site has inherent encryption, so it matters not how weak the SSL
  crypto is when the purpose is purely to verify that the server is
  owned by who they say it's owned by.

  Proof that disclosure attacks on ssl2/3 are irrelevant to onion sites:

  https://blog.torproject.org/tls-certificate-for-onion-site
  https://community.torproject.org/onion-services/overview/

  So here is a real world impact case.  Suppose you get your email from
  one of these onion mail servers:

  http://onionmail.info/directory.html

  Some (if not all) use ssl2/3 on top of Tor's inherent onion tunnel.
  They force users to use ssl2/3, so even if a user configures the
  client not to impose TLS, the server imposes it.  And it's reasonable
  because the ssl2/3 vulns are orthogonal to the use case.

  Some users will get lucky and use a mail client that still supports
  ssl2/3.  But there's still a problem: users can no longer use openssl
  to obtain the fingerprint to pin.  e.g.

  $ openssl s_client -proxy 127.0.0.1:8118 -connect xhfheq5i37waj6qb.onion:110 
-showcerts
  CONNECTED(0003)
  140124399195456:error:1408F10B:SSL routines:ssl3_get_record:wrong version 
number:../ssl/record/ssl3_record.c:331:
  ---
  no peer certificate available
  ---
  No client certificate CA names sent
  ---
  SSL handshake has read 44 bytes and written 330 bytes
  Verification: OK
  ---
  New, (NONE), Cipher is (NONE)
  Secure Renegotiation IS NOT supported
  Compression: NONE
  Expansion: NONE
  No ALPN negotiated
  Early data was not sent
  Verify return code: 0 (ok)
  ---

  That's openssl version 1.1.1k

  Being denied the ability to pin the SSL cert is actually a
  *degradation* of security.  Cert Pinning is particularly useful with
  self-signed certs, as is often the scenario with onion sites.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/openssl/+bug/1934040/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1934044] [NEW] openssl removed ssl2/3 and broke cURL because curl uses openssl instead of libssl

2021-06-29 Thread Bill Yikes
Public bug reported:

cURL supports a -ssl3 option (and rightly so), but openssl removed it
prematurely (see
https://bugs.launchpad.net/ubuntu/+source/openssl/+bug/1934040).  The
fallout:

torsocks curl --insecure --ssl-allow-beast -ssl3 -vvI 
https://xhfheq5i37waj6qb.onion:110 2>&1 
*   Trying 127.42.42.0:110...
* Connected to xhfheq5i37waj6qb.onion (127.42.42.0) port 110 (#0)
* OpenSSL was built without SSLv3 support
* Closing connection 0

Is it possible that curl's check for ssl3 is flawed?  I say that because
both curl and fetchmail are dependent on the same libssl pkg, and yet
fetchmail can still do ssl3 but curl can't.  Neither curl nor fetchmail
names "openssl" as a dependency.  So curl perhaps should not look to the
openssl package to detect ssl3 capability.

SSL3 is still useful for onion sites, so curl should do what's
necessary to retain that capability.
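
A quick way to check which TLS backend a given curl build actually
links (the first line of the version banner names the SSL library,
e.g. an "OpenSSL/1.1.1k" token):

$ curl -V | head -1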

** Affects: curl (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to curl in Ubuntu.
https://bugs.launchpad.net/bugs/1934044

Title:
  openssl removed ssl2/3 and broke cURL because curl uses openssl
  instead of libssl

Status in curl package in Ubuntu:
  New

Bug description:
  cURL supports a -ssl3 option (and rightly so), but openssl removed it
  prematurely (see
  https://bugs.launchpad.net/ubuntu/+source/openssl/+bug/1934040).  The
  fallout:

  torsocks curl --insecure --ssl-allow-beast -ssl3 -vvI 
https://xhfheq5i37waj6qb.onion:110 2>&1 
  *   Trying 127.42.42.0:110...
  * Connected to xhfheq5i37waj6qb.onion (127.42.42.0) port 110 (#0)
  * OpenSSL was built without SSLv3 support
  * Closing connection 0

  Is it possible that curl's check for ssl3 is flawed?  I say that
  because both curl and fetchmail are dependent on the same libssl pkg,
  and yet fetchmail can still do ssl3 but curl can't.  Neither curl nor
  fetchmail names "openssl" as a dependency.  So curl perhaps should not
  look to the openssl package to detect ssl3 capability.

  SSL3 is still useful for onion sites, so curl should do what's
  necessary to retain that capability.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/curl/+bug/1934044/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1934040] [NEW] openssl s_client's '-ssl2' & '-ssl3' options gone, prematurely!

2021-06-29 Thread Bill Yikes
Public bug reported:

SSL2 and SSL3 have been hastily removed, apparently by developers who
are unaware that these protocols serve purposes other than encryption.
SSL2/3 is *still used* on onion sites.  Why?  For verification.  An
onion site has inherent encryption, so it matters not how weak the SSL
crypto is when the purpose is purely to verify that the server is owned
by who they say it's owned by.

Proof that disclosure attacks on ssl2/3 are irrelevant to onion sites:

https://blog.torproject.org/tls-certificate-for-onion-site

So here is a real world impact case.  Suppose you get your email from
one of these onion mail servers:

http://onionmail.info/directory.html

Some (if not all) use ssl2/3 on top of Tor's inherent onion tunnel.
They force users to use ssl2/3, so even if a user configures the client
not to impose TLS, the server imposes it.  And it's reasonable because
the ssl2/3 vulns are orthogonal to the use case.

Some users will get lucky and use a mail client that still supports
ssl2/3.  But there's still a problem: users can no longer use openssl to
obtain the fingerprint to pin.  e.g.

$ openssl s_client -proxy 127.0.0.1:8118 -connect xhfheq5i37waj6qb.onion:110 
-showcerts
CONNECTED(0003)
140124399195456:error:1408F10B:SSL routines:ssl3_get_record:wrong version 
number:../ssl/record/ssl3_record.c:331:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 44 bytes and written 330 bytes
Verification: OK
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 0 (ok)
---

That's openssl version 1.1.1k

Being denied the ability to pin the SSL cert is actually a *degradation*
of security.  Cert Pinning is particularly useful with self-signed
certs, as is often the scenario with onion sites.

** Affects: openssl (Ubuntu)
 Importance: Undecided
 Status: New

** Description changed:

  SSL2 and SSL3 have been hastily removed, apparently by developers who
  are unaware that these protocols serve purposes other than encryption.
  SSL2/3 is *still used* on onion sites.  Why?  For verification.  An
  onion site has inherent encryption, so it matters not how weak the SSL
  crypto is when the purpose is purely to verify that the server is owned
  by who they say it's owned by.
  
  Proof that disclosure attacks on ssl2/3 are irrelevant to onion sites:
  
  https://blog.torproject.org/tls-certificate-for-onion-site
  
  So here is a real world impact case.  Suppose you get your email from
  one of these onion mail servers:
  
  http://onionmail.info/directory.html
  
  Some (if not all) use ssl2/3 on top of Tor's inherent onion tunnel.
  They force users to use ssl2/3, so even if a user configures the client
  not to impose TLS, the server imposes it.  And it's reasonable because
  the ssl2/3 vulns are orthogonal to the use case.
  
  Some users will get lucky and use a mail client that still supports
  ssl2/3.  But there's still a problem: users can no longer use openssl to
  obtain the fingerprint to pin.  e.g.
  
  $ openssl s_client -proxy 127.0.0.1:8118 -connect xhfheq5i37waj6qb.onion:110 
-showcerts
  CONNECTED(0003)
  140124399195456:error:1408F10B:SSL routines:ssl3_get_record:wrong version 
number:../ssl/record/ssl3_record.c:331:
  ---
  no peer certificate available
  ---
  No client certificate CA names sent
  ---
  SSL handshake has read 44 bytes and written 330 bytes
  Verification: OK
  ---
  New, (NONE), Cipher is (NONE)
  Secure Renegotiation IS NOT supported
  Compression: NONE
  Expansion: NONE
  No ALPN negotiated
  Early data was not sent
  Verify return code: 0 (ok)
  ---
  
+ That's openssl version 1.1.1k
  
- That's openssl version 1.1.1k
+ Being denied the ability to pin the SSL cert is actually a *degradation*
+ of security.  Cert Pinning is particularly useful with self-signed
+ certs, as is often the scenario with onion sites.

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to openssl in Ubuntu.
https://bugs.launchpad.net/bugs/1934040

Title:
  openssl s_client's '-ssl2' & '-ssl3' options gone, prematurely!

Status in openssl package in Ubuntu:
  New

Bug description:
  SSL2 and SSL3 have been hastily removed, apparently by developers who
  are unaware that these protocols serve purposes other than encryption.
  SSL2/3 is *still used* on onion sites.  Why?  For verification.  An
  onion site has inherent encryption, so it matters not how weak the SSL
  crypto is when the purpose is purely to verify that the server is
  owned by who they say it's owned by.

  Proof that disclosure attacks on ssl2/3 are irrelevant to onion sites:

  https://blog.torproject.org/tls-certificate-for-onion-site

  So here is a real world impact case.  Suppose you get your email from
  one of these onion mail servers:

  http://onionmail.info/directory.html

[Touch-packages] [Bug 1933913] [NEW] "Xorg -configure" results in "modprobe: FATAL: Module fbcon not found in directory /lib/modules/5.10.0-7-amd64"

2021-06-28 Thread Bill Yikes
Public bug reported:

With x11 not running, this was executed: "Xorg -configure".  It should
simply build a configuration file.  The output below appears in the
terminal with an error.  It manages to create a config file anyway, but
what it creates causes "startx" to fall over.  So I am forced to run x11
without a config file (which is a problem for me because I have two
displays and need to change the RightOf/LeftOf setting).

OUTPUT:

X.Org X Server 1.20.11
X Protocol Version 11, Revision 0
Build Operating System: linux Debian
Current Operating System: Linux billyikes 5.10.0-7-amd64 #1 SMP Debian 
5.10.40-1 (2021-05-28) x86_64
Kernel command line: BOOT_IMAGE=/boot/vmlinuz-5.10.0-7-amd64 
root=/dev/mapper/grp-root ro 
rd.luks.name=UUID=279a24dc-1014-6495-38cc-75ce88144f44=cryptdisk quiet
Build Date: 13 April 2021  04:07:31PM
xorg-server 2:1.20.11-1 (https://www.debian.org/support)
Current version of pixman: 0.40.0
Before reporting problems, check http://wiki.x.org
to make sure that you have the latest version.
Markers: (--) probed, (**) from config file, (==) default setting,
(++) from command line, (!!) notice, (II) informational,
(WW) warning, (EE) error, (NI) not implemented, (??) unknown.
(==) Log file: "/var/log/Xorg.0.log", Time: Tue Jun 29 02:19:12 2021
List of video drivers:
amdgpu
ati
intel
nouveau
qxl
radeon
vmware
modesetting
fbdev
vesa
(++) Using config file: "/root/xorg.conf.new"
(==) Using config directory: "/etc/X11/xorg.conf.d"
(==) Using system config directory "/usr/share/X11/xorg.conf.d"
modprobe: FATAL: Module fbcon not found in directory /lib/modules/5.10.0-7-amd64
intel: waited 2020 ms for i915.ko driver to load
Number of created screens does not match number of detected devices.
  Configuration failed.
(EE) Server terminated with error (2). Closing log file.
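
As a stopgap for the two-display placement problem above, the layout
can also be set at runtime without any xorg.conf (output names below
are hypothetical; take the real ones from xrandr's own listing):

$ xrandr                                  # list connected outputs
$ xrandr --output HDMI-1 --right-of DP-1  # place HDMI-1 to the right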

** Affects: xorg (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to xorg in Ubuntu.
https://bugs.launchpad.net/bugs/1933913

Title:
  "Xorg -configure" results in "modprobe: FATAL: Module fbcon not found
  in directory /lib/modules/5.10.0-7-amd64"

Status in xorg package in Ubuntu:
  New

Bug description:
  With x11 not running, this was executed: "Xorg -configure".  It should
  simply build a configuration file.  The output below appears in the
  terminal with an error.  It manages to create a config file anyway,
  but what it creates causes "startx" to fall over.  So I am forced to
  run x11 without a config file (which is a problem for me because I
  have two displays and need to change the RightOf/LeftOf setting).

  OUTPUT:

  X.Org X Server 1.20.11
  X Protocol Version 11, Revision 0
  Build Operating System: linux Debian
  Current Operating System: Linux billyikes 5.10.0-7-amd64 #1 SMP Debian 
5.10.40-1 (2021-05-28) x86_64
  Kernel command line: BOOT_IMAGE=/boot/vmlinuz-5.10.0-7-amd64 
root=/dev/mapper/grp-root ro 
rd.luks.name=UUID=279a24dc-1014-6495-38cc-75ce88144f44=cryptdisk quiet
  Build Date: 13 April 2021  04:07:31PM
  xorg-server 2:1.20.11-1 (https://www.debian.org/support)
  Current version of pixman: 0.40.0
  Before reporting problems, check http://wiki.x.org
  to make sure that you have the latest version.
  Markers: (--) probed, (**) from config file, (==) default setting,
  (++) from command line, (!!) notice, (II) informational,
  (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
  (==) Log file: "/var/log/Xorg.0.log", Time: Tue Jun 29 02:19:12 2021
  List of video drivers:
  amdgpu
  ati
  intel
  nouveau
  qxl
  radeon
  vmware
  modesetting
  fbdev
  vesa
  (++) Using config file: "/root/xorg.conf.new"
  (==) Using config directory: "/etc/X11/xorg.conf.d"
  (==) Using system config directory "/usr/share/X11/xorg.conf.d"
  modprobe: FATAL: Module fbcon not found in directory 
/lib/modules/5.10.0-7-amd64
  intel: waited 2020 ms for i915.ko driver to load
  Number of created screens does not match number of detected devices.
Configuration failed.
  (EE) Server terminated with error (2). Closing log file.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/xorg/+bug/1933913/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1931815] Re: the %{remote_ip} output format is broken on proxied connections

2021-06-17 Thread Bill Yikes
The socks.c code shows that cURL does not even attempt DNS resolution on
SOCKS4a.  Strictly speaking, the SOCKS4a spec expects apps to /attempt/
DNS resolution before contacting the socks server.  I won't complain on
this point though because the status quo is favorable to Tor users (as
it protects them from DNS leaks).

The fallout is that the SOCKS server does not give feedback to the app
on the IP it settles on in the socks4a scenario.  This means cURL has no
possible way of knowing which IP to express in the %{remote_ip} output.

After seeing the code I'm calling out these bugs:

bug 1) SOCKS4a: Considering that Curl_resolve() is unconditionally
bypassed, when a user supplies both --resolve and also demands socks4a,
cURL will neglect to honor the --resolve option even though the two
options are theoretically compatible.  This is a minor bug because
socks4 can be used instead as a workaround.  But certainly the man page
should at a minimum disclose the artificial incompatibility between
socks4a and --resolve.

bug 2) SOCKS4a docs: cURL has some discretion whether to attempt DNS
resolution or not.  Yet the docs do not clarify.  Users should get
reassurance in the man page that using socks4a unconditionally refrains
from internal DNS resolution.

bug 3) SOCKS4: Since cURL *must* do DNS resolution, cURL must also know
what the target IP is.  Thus cURL should properly return the
%{remote_ip} value.

bug 4) The docs for %{remote_ip} should tell users what to expect for
that value.  The man page is vague enough to be useless.

Workaround: if proxy users need to know which IP cURL connected to, they
must do their own DNS resolution manually outside of cURL (e.g. using
dig), supply the IP & hostname via --resolve, and use SOCKS4 or SOCKS5
(not SOCKS4a).
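
A sketch of that manual flow, with example.com standing in for the
real target:

# resolve by hand, pin the answer with --resolve, then use plain SOCKS4
$ ip=$(dig +short example.com | head -n1)
$ curl --socks4 127.0.0.1:9050 --resolve "example.com:443:$ip" \
       -o /dev/null -w '%{remote_ip}\n' "https://example.com/"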

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to curl in Ubuntu.
https://bugs.launchpad.net/bugs/1931815

Title:
  the %{remote_ip} output format is broken on proxied connections

Status in curl package in Ubuntu:
  New

Bug description:
  This is how a Tor user would use cURL to grab a header, and also
  expect to be told which IP address was contacted:

  curl --ssl --socks4a 127.0.0.1:9050 -L --head -w '(effective URL =>
  "%{url_effective} @ %{remote_ip}")' "$target_url"

  It's broken because the "remote_ip" is actually just printed as the
  127.0.0.1 (likely that of the proxy server not the remote target
  host).

  tested on curl ver 7.52.1

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/curl/+bug/1931815/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1931815] Re: the %{remote_ip} output format is broken on proxied connections

2021-06-17 Thread Bill Yikes
According to the SOCKS4a spec:

  https://www.openssh.com/txt/socks4.protocol
  https://www.openssh.com/txt/socks4a.protocol

With SOCKS4 cURL *must* do DNS resolution and pass the selected IP to
the SOCKS server.  OTOH, SOCKS4a gives cURL the option to resolve.  If
cURL fails at DNS resolution, it's expected to send the hostname and
0.0.0.x.  So generally cURL should succeed at DNS resolution and thus
have the IP of the target server.  Yet it's not sharing that info with
the user.
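
For comparison, curl's SOCKS5 options make the resolution split
explicit: --socks5 resolves the hostname locally, while
--socks5-hostname hands the name to the proxy.  A sketch using both to
probe the two paths:

$ curl --socks5 127.0.0.1:9050 -o /dev/null -w '%{remote_ip}\n' "$target_url"
$ curl --socks5-hostname 127.0.0.1:9050 -o /dev/null -w '%{remote_ip}\n' "$target_url"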

SOCKS4a was a bad test case because we can't know whether or not cURL
did the DNS resolution.  A better test case is with SOCKS4 as follows:

curl --ssl --socks4 127.0.0.1:9050 -L --head -w '(effective URL =>
"%{url_effective} @ %{remote_ip}")' "$target_url"

We are assured per the SOCKS4 protocol that cURL *must* do DNS
resolution, so cURL must know the remote IP address.  Yet it still
neglects to correctly set the %{remote_ip} value.  This is certainly a
bug.

Secondary bug--

the manpage states: "remote_ip The remote IP address of the most
recently done connection - can be either IPv4 or IPv6 (Added in 7.29.0)"

The man page is ambiguous. The /rule of least astonishment/ would have
the user naturally expecting "remote IP" to be the target IP whenever
cURL knows it.  Since the behavior is non-intuitive, the man page should
state in detail what the user should expect to receive for the
%{remote_ip} value.

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to curl in Ubuntu.
https://bugs.launchpad.net/bugs/1931815

Title:
  the %{remote_ip} output format is broken on proxied connections

Status in curl package in Ubuntu:
  New

Bug description:
  This is how a Tor user would use cURL to grab a header, and also
  expect to be told which IP address was contacted:

  curl --ssl --socks4a 127.0.0.1:9050 -L --head -w '(effective URL =>
  "%{url_effective} @ %{remote_ip}")' "$target_url"

  It's broken because the "remote_ip" is actually just printed as the
  127.0.0.1 (likely that of the proxy server not the remote target
  host).

  tested on curl ver 7.52.1

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/curl/+bug/1931815/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1931815] [NEW] the %{remote_ip} output format is broken on proxied connections

2021-06-13 Thread Bill Yikes
Public bug reported:

This is how a Tor user would use cURL to grab a header, and also expect
to be told which IP address was contacted:

curl --ssl --socks4a 127.0.0.1:9050 -L --head -w '(effective URL =>
"%{url_effective} @ %{remote_ip}")' "$target_url"

It's broken because the "remote_ip" is actually just printed as the
127.0.0.1 (likely that of the proxy server not the remote target host).

tested on curl ver 7.52.1

** Affects: curl (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to curl in Ubuntu.
https://bugs.launchpad.net/bugs/1931815

Title:
  the %{remote_ip} output format is broken on proxied connections

Status in curl package in Ubuntu:
  New

Bug description:
  This is how a Tor user would use cURL to grab a header, and also
  expect to be told which IP address was contacted:

  curl --ssl --socks4a 127.0.0.1:9050 -L --head -w '(effective URL =>
  "%{url_effective} @ %{remote_ip}")' "$target_url"

  It's broken because the "remote_ip" is actually just printed as the
  127.0.0.1 (likely that of the proxy server not the remote target
  host).

  tested on curl ver 7.52.1

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/curl/+bug/1931815/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 978587] Re: apt should ensure .deb are not corrupted before handing them to dpkg

2021-05-29 Thread Bill Yikes
This is actually a security issue and it's surprising it's gone unfixed
for 9 years.  It's inconsistent for apt to check the hash on deb files
that it downloads, but then neglect to do so on user-supplied deb files.
The status quo is a recipe for disaster.  To exacerbate the problem, the
man page does not document the inconsistency, nor the fact that locally
supplied .deb files are never verified.  There are a variety of ways to
fix this (a manual version of options 3 and 4 is sketched after the
list):

1) apt could refuse to accept local .deb files
2) apt could require local .deb files to be supplied with a hash string (which 
would need a new CLI arg)
3) apt could print the hash to the screen and instruct the user to confirm 
whether the hash matches
4) apt could check the repos it's aware of to see if the hash matches anything 
served by a trusted repo.  If not, follow option 1 or 3 above.
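
A sketch of the manual check a user can do today, using the package
from the example below (apt-cache show prints the SHA256 that the
repository metadata declares for its candidate version):

$ sha256sum ./libreoffice-core_1%3a3.5.2-2ubuntu1_amd64.deb
$ apt-cache show libreoffice-core | grep -i '^SHA256'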

It's also important to note that users don't generally know how deb
files are structured.  Should they be responsible for knowing whether a
hash is embedded within the deb file or not?  Particularly when the man
page makes no mention of it?  Generally, the user might know that hashes
are checked by the apt-* tools one way or another.  The apt suite of
tools (and the docs for it) keep the user in the dark, and yet the user
is responsible for knowing how it works.  The user is not served well in
this case.

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to apt in Ubuntu.
https://bugs.launchpad.net/bugs/978587

Title:
  apt should ensure .deb are not corrupted before handing them to dpkg

Status in apt package in Ubuntu:
  Confirmed

Bug description:
  Upon upgrading to libreoffice-core 3.5.2 version, I stumbled upon what
  seems to be a bad download issue:

  Preparing to replace libreoffice-core 1:3.5.1-1ubuntu5 (using 
.../libreoffice-core_1%3a3.5.2-2ubuntu1_amd64.deb) ...
  rmdir: failed to remove `/var/lib/libreoffice/basis3.4/program/': No such 
file or directory
  rmdir: failed to remove `/var/lib/libreoffice/basis3.4': No such file or 
directory
  Unpacking replacement libreoffice-core ...
  dpkg-deb (subprocess): data: internal bzip2 read error: 'DATA_ERROR'
  dpkg-deb: error: subprocess  returned error exit status 2
  dpkg: error processing 
/var/cache/apt/archives/libreoffice-core_1%3a3.5.2-2ubuntu1_amd64.deb 
(--unpack):
   subprocess dpkg-deb --fsys-tarfile returned error exit status 2

  I was asked to file a bug about it, as it might be possible for dpkg
  to recover from that more gracefully.

  Further information upon requests.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/apt/+bug/978587/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1930139] [NEW] the --no-directories option incorrectly documented in the man page

2021-05-29 Thread Bill Yikes
Public bug reported:

man page shows:

   -nd
   --no-directories
   Do not create a hierarchy of directories when retrieving 
recursively.  With this option turned on, all files will get saved
   to the current directory, without clobbering (if a name shows up 
more than once, the filenames will get extensions .n).


The way that's written implies that the -nd option would conflict with the -P 
option.  But when -nd is combined with --directory-prefix (-P), wget honors the 
prefix and also downloads to a flat non-hierarchical "structure".  The behavior 
is sensible but the docs are wrong (-nd does not necessarily download to the 
current dir).
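
A minimal illustration of the actual behavior (URL is a placeholder):

# everything lands flat in /tmp/dl, not in the current directory
$ wget -nd -P /tmp/dl -r --level 1 'https://example.com/docs/'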

** Affects: wget (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to wget in Ubuntu.
https://bugs.launchpad.net/bugs/1930139

Title:
  the --no-directories option incorrectly documented in the man page

Status in wget package in Ubuntu:
  New

Bug description:
  man page shows:

 -nd
 --no-directories
 Do not create a hierarchy of directories when retrieving 
recursively.  With this option turned on, all files will get saved
 to the current directory, without clobbering (if a name shows up 
more than once, the filenames will get extensions .n).

  
  The way that's written implies that the -nd option would conflict with the -P 
option.  But when -nd is combined with --directory-prefix (-P), wget honors the 
prefix and also downloads to a flat non-hierarchical "structure".  The behavior 
is sensible but the docs are wrong (-nd does not necessarily download to the 
current dir).

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/wget/+bug/1930139/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1929087] Re: sfdisk refuses to write GPT table between sector 34 and 2047

2021-05-20 Thread Bill Yikes
A secondary bug manifests from this, whereby sfdisk chokes on its own
output and therefore cannot restore its own backup.  E.g. suppose
another tool is used to put a BIOS boot partition from sector 34 to
2047, as follows:

$ sgdisk --clear -a 1 --new=1:34:2047 -c 1:"BIOS boot"
--typecode=1:$(sgdisk --list-types | sed -ne
's/.*\(\).bios.*/\1/gip') /dev/sdb

That works fine, and from that we can run "sfdisk -d /dev/sdb >
dump.txt".  But when dump.txt is fed back into sfdisk, it pukes.  Yet
the docs claim "It is recommended to save the layout of your devices.
sfdisk supports two ways." .. "Use  the  --dump  option to save a
description of the device layout to a text file." .. "This can later be
restored by: sfdisk /dev/sda < sda.dump"

It's actually a security issue, because someone can make a non-
restorable backup and have the false sense of security that it is
restorable.  They wouldn't necessarily test restoration either, because
that's a destructive process.
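
A non-destructive way to test whether a given dump restores, sketched
with a sparse file and a loop device so no real disk is at risk:

$ truncate -s 1T /tmp/gpt-test.img            # sparse, uses no real space
$ dev=$(losetup -f --show /tmp/gpt-test.img)
$ sfdisk "$dev" < dump.txt                    # reproduces the choke harmlessly
$ losetup -d "$dev"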

** Description changed:

  According to https://wiki.archlinux.org/title/GRUB#BIOS_systems, it's
  both legal and interesting to place the BIOS BOOT partition from sector
  34 to sector 2047, as follows:
  
  $ sudo sfdisk --no-act -f --label gpt /dev/sdb << EOF
  start=   34, size=2013, name=bios,   type=$(sfdisk 
--label gpt -T | awk '{IGNORECASE = 1;} /bios boot/{print $1}')
  start= 2048, size=12582912, name=swap,   type=$(sfdisk 
--label gpt -T | awk '{IGNORECASE = 1;} /linux swap/{print $1}')
  EOF
  
  The output is:
  
  /dev/sdb1: Sector 34 already used.
  Failed to add #1 partition: Numerical result out of range
  Leaving.
  
- It's a false error.  As a workaround, users must omit the BIOS BOOT
- partition then use gdisk to insert it manually.  This was uncovered in
- 2015 and perhaps never reported to a bug tracker because it's still
- broken.  See https://www.spinics.net/lists/util-linux-ng/msg11253.html
+ It's a false error.  As a workaround, users must use parted or sgdisk
+ instead.  (note fdisk & gdisk are also broken in the same way)
+ 
+ This bug was uncovered in 2015 and perhaps never reported to a bug
+ tracker because it's still broken.  See https://www.spinics.net/lists
+ /util-linux-ng/msg11253.html

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to util-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1929087

Title:
  sfdisk refuses to write GPT table between sector 34 and 2047

Status in util-linux package in Ubuntu:
  New

Bug description:
  According to https://wiki.archlinux.org/title/GRUB#BIOS_systems, it's
  both legal and interesting to place the BIOS BOOT partition from
  sector 34 to sector 2047, as follows:

  $ sudo sfdisk --no-act -f --label gpt /dev/sdb << EOF
  start=   34, size=2013, name=bios,   type=$(sfdisk 
--label gpt -T | awk '{IGNORECASE = 1;} /bios boot/{print $1}')
  start= 2048, size=12582912, name=swap,   type=$(sfdisk 
--label gpt -T | awk '{IGNORECASE = 1;} /linux swap/{print $1}')
  EOF

  The output is:

  /dev/sdb1: Sector 34 already used.
  Failed to add #1 partition: Numerical result out of range
  Leaving.

  It's a false error.  As a workaround, users must use parted or sgdisk
  instead.  (note fdisk & gdisk are also broken in the same way)

  This bug was uncovered in 2015 and perhaps never reported to a bug
  tracker because it's still broken.  See https://www.spinics.net/lists
  /util-linux-ng/msg11253.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/util-linux/+bug/1929087/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1929087] [NEW] sfdisk refuses to write GPT table between sector 34 and 2047

2021-05-20 Thread Bill Yikes
Public bug reported:

According to https://wiki.archlinux.org/title/GRUB#BIOS_systems, it's
both legal and interesting to place the BIOS BOOT partition from sector
34 to sector 2047, as follows:

$ sudo sfdisk --no-act -f --label gpt /dev/sdb << EOF
start=   34, size=2013, name=bios,   type=$(sfdisk --label 
gpt -T | awk '{IGNORECASE = 1;} /bios boot/{print $1}')
start= 2048, size=12582912, name=swap,   type=$(sfdisk --label 
gpt -T | awk '{IGNORECASE = 1;} /linux swap/{print $1}')
EOF

The output is:

/dev/sdb1: Sector 34 already used.
Failed to add #1 partition: Numerical result out of range
Leaving.

It's a false error.  As a workaround, users must omit the BIOS BOOT
partition then use gdisk to insert it manually.  This was uncovered in
2015 and perhaps never reported to a bug tracker because it's still
broken.  See https://www.spinics.net/lists/util-linux-ng/msg11253.html
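
For reference, a sketch of the working alternative with sgdisk, where
ef02 is the conventional GPT typecode for a BIOS boot partition:

$ sgdisk -a 1 --new=1:34:2047 -c 1:'BIOS boot' -t 1:ef02 /dev/sdb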

** Affects: util-linux (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to util-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1929087

Title:
  sfdisk refuses to write GPT table between sector 34 and 2047

Status in util-linux package in Ubuntu:
  New

Bug description:
  According to https://wiki.archlinux.org/title/GRUB#BIOS_systems, it's
  both legal and interesting to place the BIOS BOOT partition from
  sector 34 to sector 2047, as follows:

  $ sudo sfdisk --no-act -f --label gpt /dev/sdb << EOF
  start=   34, size=2013, name=bios,   type=$(sfdisk 
--label gpt -T | awk '{IGNORECASE = 1;} /bios boot/{print $1}')
  start= 2048, size=12582912, name=swap,   type=$(sfdisk 
--label gpt -T | awk '{IGNORECASE = 1;} /linux swap/{print $1}')
  EOF

  The output is:

  /dev/sdb1: Sector 34 already used.
  Failed to add #1 partition: Numerical result out of range
  Leaving.

  It's a false error.  As a workaround, users must omit the BIOS BOOT
  partition then use gdisk to insert it manually.  This was uncovered in
  2015 and perhaps never reported to a bug tracker because it's still
  broken.  See https://www.spinics.net/lists/util-linux-ng/msg11253.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/util-linux/+bug/1929087/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1925381] Re: rsync conceals file deletions from reporting when --dry-run --remove-source-files are used together

2021-04-22 Thread Bill Yikes
For me the fact that the upstream repo moved from bugzilla.samba.org to
github.com is sufficient to diverge from upstream. But to each his own.

My contempt for github is in fact why I reported the bug downstream. I
will not use github but I still intended to make a public record of the
bug, hence why it's here.

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to rsync in Ubuntu.
https://bugs.launchpad.net/bugs/1925381

Title:
  rsync conceals file deletions from reporting when --dry-run --remove-
  source-files are used together

Status in rsync package in Ubuntu:
  Triaged

Bug description:
  Rsync has an astonishing and dangerous bug:

  The dry run feature (-n / --dry-run) inhibits reporting of file
  deletions when --remove-source-files is used. This is quite serious.
  People use --dry-run to see if an outcome will work as expected before
  a live run. When the simulated run shows *less* destruction than the
  live run, the consequences can be serious because rsync may
  unexpectedly destroy the only copy(*) of a file.

  Users rely on --dry-run. Although users probably expect --dry-run to
  have limitations, we don't expect destructive operations to be under
  reported. If it were reversed, such that the live run were less
  destructive than the dry run, this wouldn't be as serious.

  Reproducer:

  $ mkdir -p /tmp/src /tmp/dest
  $ printf '%s\n' 'yada yada' > /tmp/src/foo.txt
  $ printf '%s\n' 'yada yada' > /tmp/src/bar.txt
  $ cp /tmp/src/foo.txt /tmp/dest
  $ ls /tmp/src/ /tmp/dest/
  /tmp/dest/:
  foo.txt

  /tmp/src/:
  bar.txt  foo.txt

  $ rsync -na --info=remove1 --remove-source-files --existing src/* dest/
  (no output)

  $ rsync -a --info=remove1 --remove-source-files --existing src/* dest/
  sender removed foo.txt

  $ ls /tmp/src/ /tmp/dest/
  /tmp/dest/:
  foo.txt

  /tmp/src/:
  bar.txt

  (*) note when I say it can destroy the only copy of a file, another
  circumstance is needed: that is, rsync does not do a checksum by
  default.  It checks for identical files based on superficial
  parameters like name and date.  So it's possible that two files match
  in the default superficial comparison but differ in the actual
  content.  Losing a unique file in this scenario is perhaps a rare
  corner case, but this bug should be fixed nonetheless.  In the typical
  case of losing files at the source, there is still a significant
  inconvenience of trying to identify what files to copy back.
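
  A sketch of the stricter comparison hinted at above: rsync's
  -c/--checksum flag makes the match content-based rather than
  name/size/date-based, at the cost of reading every file:

  $ rsync -ac --info=remove1 --remove-source-files --existing src/* dest/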

  Note this bug is similar but differs in a few ways:
  https://bugzilla.samba.org/show_bug.cgi?id=3844

  I've marked this as a security vulnerability because it causes
  unexpected data loss due to --dry-run creating a false expectation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/rsync/+bug/1925381/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1925381] Re: rsync conceals file deletions from reporting when --dry-run --remove-source-files are used together

2021-04-21 Thread Bill Yikes
** Description changed:

  Rsync has an astonishing and dangerous bug:
  
  The dry run feature (-n / --dry-run) fails to report file deletions when
  --remove-source-files is used. This is quite serious. People use --dry-
  run to see if an outcome will work as expected before a live run. When
  the simulated run shows *less* destruction than the live run, the
  consequences can be serious because rsync may unexpectedly destroy the
- only copy of a file.
+ only copy(*) of a file.
  
  Users rely on --dry-run. Although users probably expect --dry-run to
  have limitations, we don't expect destructive operations to be under
  reported. If it were reversed, such that the live run were less
  destructive than the dry run, this wouldn't be as serious.
  
  Reproducer:
  
  $ mkdir -p /tmp/src /tmp/dest
  $ printf '%s\n' 'yada yada' > /tmp/src/foo.txt
  $ printf '%s\n' 'yada yada' > /tmp/src/bar.txt
  $ cp /tmp/src/foo.txt /tmp/dest
  $ ls /tmp/src/ /tmp/dest/
  /tmp/dest/:
  foo.txt
  
  /tmp/src/:
  bar.txt  foo.txt
  
  $ rsync -na --info=remove1 --remove-source-files --existing src/* dest/
  (no output)
  
  $ rsync -a --info=remove1 --remove-source-files --existing src/* dest/
  sender removed foo.txt
  
  $ ls /tmp/src/ /tmp/dest/
  /tmp/dest/:
  foo.txt
  
  /tmp/src/:
  bar.txt
  
+ (*) note when I say it can destroy the only copy of a file, another
+ circumstance is needed: that is, rsync does not do a checksum by
+ default.  It checks for identical files based on parameters like name
+ and date.  So it's possible that two files match in the default
+ comparison but differ in the actual content.  Losing a unique file in
+ this scenario is perhaps a rare corner case, but this bug should be
+ fixed nonetheless.
+ 
  Note this bug is similar but differs in a few ways:
  https://bugzilla.samba.org/show_bug.cgi?id=3844
  
  I've marked this as a security vulnerability because it causes
  unexpected data loss due to --dry-run creating a false expectation.

** Description changed:

  Rsync has an astonishing and dangerous bug:
  
  The dry run feature (-n / --dry-run) fails to report file deletions when
  --remove-source-files is used. This is quite serious. People use --dry-
  run to see if an outcome will work as expected before a live run. When
  the simulated run shows *less* destruction than the live run, the
  consequences can be serious because rsync may unexpectedly destroy the
  only copy(*) of a file.
  
  Users rely on --dry-run. Although users probably expect --dry-run to
  have limitations, we don't expect destructive operations to be under
  reported. If it were reversed, such that the live run were less
  destructive than the dry run, this wouldn't be as serious.
  
  Reproducer:
  
  $ mkdir -p /tmp/src /tmp/dest
  $ printf '%s\n' 'yada yada' > /tmp/src/foo.txt
  $ printf '%s\n' 'yada yada' > /tmp/src/bar.txt
  $ cp /tmp/src/foo.txt /tmp/dest
  $ ls /tmp/src/ /tmp/dest/
  /tmp/dest/:
  foo.txt
  
  /tmp/src/:
  bar.txt  foo.txt
  
  $ rsync -na --info=remove1 --remove-source-files --existing src/* dest/
  (no output)
  
  $ rsync -a --info=remove1 --remove-source-files --existing src/* dest/
  sender removed foo.txt
  
  $ ls /tmp/src/ /tmp/dest/
  /tmp/dest/:
  foo.txt
  
  /tmp/src/:
  bar.txt
  
  (*) note when I say it can destroy the only copy of a file, another
  circumstance is needed: that is, rsync does not do a checksum by
- default.  It checks for identical files based on parameters like name
- and date.  So it's possible that two files match in the default
- comparison but differ in the actual content.  Losing a unique file in
- this scenario is perhaps a rare corner case, but this bug should be
- fixed nonetheless.
+ default.  It checks for identical files based on superficial parameters
+ like name and date.  So it's possible that two files match in the
+ default superficial comparison but differ in the actual content.  Losing
+ a unique file in this scenario is perhaps a rare corner case, but this
+ bug should be fixed nonetheless.
  
  Note this bug is similar but differs in a few ways:
  https://bugzilla.samba.org/show_bug.cgi?id=3844
  
  I've marked this as a security vulnerability because it causes
  unexpected data loss due to --dry-run creating a false expectation.

** Description changed:

  Rsync has an astonishing and dangerous bug:
  
- The dry run feature (-n / --dry-run) fails to report file deletions when
- --remove-source-files is used. This is quite serious. People use --dry-
- run to see if an outcome will work as expected before a live run. When
- the simulated run shows *less* destruction than the live run, the
- consequences can be serious because rsync may unexpectedly destroy the
- only copy(*) of a file.
+ The dry run feature (-n / --dry-run) inhibits reporting of file
+ deletions when --remove-source-files is used. This is quite serious.
+ People use --dry-run to see if an outcome will work as expected before a
+ live run. When the simulated run shows *le

[Touch-packages] [Bug 1925381] [NEW] rsync conceals file deletions from reporting when --dry-run --remove-source-files are used together

2021-04-21 Thread Bill Yikes
Public bug reported:

Rsync has an astonishing and dangerous bug:

The dry run feature (-n / --dry-run) fails to report file deletions when
--remove-source-files is used. This is quite serious. People use --dry-
run to see if an outcome will work as expected before a live run. When
the simulated run shows *less* destruction than the live run, the
consequences can be serious because rsync may unexpectedly destroy the
only copy of a file.

Users rely on --dry-run. Although users probably expect --dry-run to
have limitations, we don't expect destructive operations to be
under-reported. If it were reversed, such that the live run were less
destructive than the dry run, this wouldn't be as serious.

Reproducer:

$ mkdir -p /tmp/src /tmp/dest
$ cd /tmp
$ printf '%s\n' 'yada yada' > /tmp/src/foo.txt
$ printf '%s\n' 'yada yada' > /tmp/src/bar.txt
$ cp /tmp/src/foo.txt /tmp/dest
$ ls /tmp/src/ /tmp/dest/
/tmp/dest/:
foo.txt

/tmp/src/:
bar.txt  foo.txt

$ rsync -na --info=remove1 --remove-source-files --existing src/* dest/
(no output)

$ rsync -a --info=remove1 --remove-source-files --existing src/* dest/
sender removed foo.txt

$ ls /tmp/src/ /tmp/dest/
/tmp/dest/:
foo.txt

/tmp/src/:
bar.txt

Note this bug is similar but differs in a few ways:
https://bugzilla.samba.org/show_bug.cgi?id=3844

I've marked this as a security vulnerability because it causes
unexpected data loss due to --dry-run creating a false expectation.

** Affects: rsync (Ubuntu)
 Importance: Undecided
 Status: New

** Information type changed from Private Security to Public

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to rsync in Ubuntu.
https://bugs.launchpad.net/bugs/1925381

Title:
  rsync conceals file deletions from reporting when --dry-run --remove-
  source-files are used together

Status in rsync package in Ubuntu:
  New

Bug description:
  Rsync has an astonishing and dangerous bug:

  The dry run feature (-n / --dry-run) fails to report file deletions
  when --remove-source-files is used. This is quite serious. People use
  --dry-run to see if an outcome will work as expected before a live
  run. When the simulated run shows *less* destruction than the live
  run, the consequences can be serious because rsync may unexpectedly
  destroy the only copy of a file.

  Users rely on --dry-run. Although users probably expect --dry-run to
  have limitations, we don't expect destructive operations to be
  under-reported. If it were reversed, such that the live run were less
  destructive than the dry run, this wouldn't be as serious.

  Reproducer:

  $ mkdir -p /tmp/src /tmp/dest
  $ cd /tmp
  $ printf '%s\n' 'yada yada' > /tmp/src/foo.txt
  $ printf '%s\n' 'yada yada' > /tmp/src/bar.txt
  $ cp /tmp/src/foo.txt /tmp/dest
  $ ls /tmp/src/ /tmp/dest/
  /tmp/dest/:
  foo.txt

  /tmp/src/:
  bar.txt  foo.txt

  $ rsync -na --info=remove1 --remove-source-files --existing src/* dest/
  (no output)

  $ rsync -a --info=remove1 --remove-source-files --existing src/* dest/
  sender removed foo.txt

  $ ls /tmp/src/ /tmp/dest/
  /tmp/dest/:
  foo.txt

  /tmp/src/:
  bar.txt

  Note this bug is similar but differs in a few ways:
  https://bugzilla.samba.org/show_bug.cgi?id=3844

  I've marked this as a security vulnerability because it causes
  unexpected data loss due to --dry-run creating a false expectation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/rsync/+bug/1925381/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1916341] [NEW] poppler-utils/pdfimages list function names the wrong color in output table

2021-02-20 Thread Bill Yikes
Public bug reported:

When running:

$ pdfimages -list grayscale-document.pdf

the output is:

```
   page   num  type   width height color comp bpc  enc interp  object ID x-ppi y-ppi  size ratio
   ----------------------------------------------------------------------------------------------
      1     0 image    2556  3288  rgb     3   8  jpeg   no         7  0   301   300  200K  0.8%
      2     1 image    2556  3288  rgb     3   8  jpeg   no        12  0   301   300  275K  1.5%
      3     2 image    2556  3288  rgb     3   8  jpeg   no        17  0   301   300  395K  0.8%
      4     3 image    2556  3288  rgb     3   8  jpeg   no        22  0   301   300  583K  2.8%
      5     4 image    2556  3288  rgb     3   8  jpeg   no        27  0   301   300  317K  1.3%
...
```

"rgb" is incorrect, which is evident after running: 'pdfimages -all -f 1
-l 1 grayscale-document.pdf foo && identify -format "%[type]"
foo-000.jpg', which correctly outputs "Grayscale".

Strangely, when "pdfimages -list" is fed a bilevel document, it states
that every page has color "gray".
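
To cross-check every page rather than just the first, a loop along
these lines works (a sketch; it assumes ImageMagick's identify and the
grayscale-document.pdf name used above):

$ pdfimages -all grayscale-document.pdf img
$ for f in img-*; do printf '%s: %s\n' "$f" \
    "$(identify -format '%[type]' "$f")"; done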

** Affects: poppler (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to poppler in Ubuntu.
https://bugs.launchpad.net/bugs/1916341

Title:
  poppler-utils/pdfimages list function names the wrong color in output
  table

Status in poppler package in Ubuntu:
  New

Bug description:
  When running:

  $ pdfimages -list grayscale-document.pdf

  the output is:

  ```
     page   num  type   width height color comp bpc  enc interp  object ID x-ppi y-ppi  size ratio
     ----------------------------------------------------------------------------------------------
        1     0 image    2556  3288  rgb     3   8  jpeg   no         7  0   301   300  200K  0.8%
        2     1 image    2556  3288  rgb     3   8  jpeg   no        12  0   301   300  275K  1.5%
        3     2 image    2556  3288  rgb     3   8  jpeg   no        17  0   301   300  395K  0.8%
        4     3 image    2556  3288  rgb     3   8  jpeg   no        22  0   301   300  583K  2.8%
        5     4 image    2556  3288  rgb     3   8  jpeg   no        27  0   301   300  317K  1.3%
  ...
  ```

  "rgb" is incorrect, which is evident after running: 'pdfimages -all -f
  1 -l 1 grayscale-document.pdf foo && identify -format "%[type]"
  foo-000.jpg', which correctly outputs "Grayscale".

  Strangely, when "pdfimages -list" is fed a bilevel document, it states
  that every page has color "gray".

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/poppler/+bug/1916341/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 910272] Re: USB->Parallel adapter produces crappy device URI and CUPS "usb" backend cannot cope with it

2020-08-19 Thread Bill Yikes
Same problem for me.  I have an old parallel-only printer.  It had been
working through a dedicated print server, but that server died and the
PC has no parallel port, so I connected a USB-to-LPT cable.  As I
attach the cable, /var/log/syslog shows:

parport0: fix this legacy no-device port driver!
lp0: using parport0 (polling)

No /dev/usb/lp0 exists.  I still set "DeviceURI
parallel:/dev/usb/lp0" since that is what the instructions say
(https://wiki.ubuntu.com/DebuggingPrintingProblems#USB_-.3E_Parallel_adapter),
and indeed that fails.  I also tried "DeviceURI parallel:/dev/parport0",
which CUPS rejected.  Then I tried "DeviceURI parallel:/dev/lp0".  CUPS
accepted it, I sent a test print, and it just sits in the job queue with
the message "Printer not connected; will retry in 30 seconds".

# usb_printerid /dev/lp0
Error: Invalid argument: GET_DEVICE_ID on '/dev/lp0'
# usb_printerid /dev/parport0
Error: Invalid argument: GET_DEVICE_ID on '/dev/parport0'

At another moment, I got a different error:

# usb_printerid /dev/lp0
Error: Device or resource busy: can't open '/dev/lp0'

This is on Bionic.
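
For anyone else stuck here, these are the raw-device sanity checks I
would try next (a sketch assuming the /dev/lp0 node from above; writing
to the device may block if the printer is not ready):

# ls -l /dev/lp0 /dev/parport0            (confirm the nodes and group)
# lpinfo -v | grep -i -e parallel -e lp   (which device URIs CUPS sees)
# echo 'printer self-test' > /dev/lp0     (bypass CUPS entirely)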

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to cups in Ubuntu.
https://bugs.launchpad.net/bugs/910272

Title:
  USB->Parallel adapter produces crappy device URI and CUPS "usb"
  backend cannot cope with it

Status in cups package in Ubuntu:
  Fix Released
Status in cups source package in Oneiric:
  Won't Fix

Bug description:
  Parallel port dot matrix printer Panasonic KX-P2124 connected via usb-
  parallel cable to HP6600 PC; running Ubuntu 11.10 system located on
  external hard drive attached by usb.  Worked on Ubuntu version 8 using
  URI "parallel:/dev/usb/lp0".  Understand that usblp has been
  deprecated; replace with ???  Cups reports printer not connected.
  CUPS version is (from UbuntuSoftwareCenter) - "cups 1.5.0-8ubuntu6".

  #Researched existing bugs - lots of info - no results.  No "Help" menu
  found by selecting upper right located "Gear"/"Printers; could not
  find any "Wizard."

  Thank you for your assistance,
  nvsoar
  
  w8@w8-FJ463AAR-ABA-a6528p:~$ uname -a
  Linux w8-FJ463AAR-ABA-a6528p 3.0.0-15-generic #24-Ubuntu SMP Mon Dec 12 
15:25:25 UTC 2011 i686 i686 i386 GNU/Linux

  w8@w8-FJ463AAR-ABA-a6528p:~$ lsusb
  Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
  Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
  Bus 001 Device 002: ID 067b:3507 Prolific Technology, Inc. PL3507 ATAPI6 
Bridge
  Bus 001 Device 004: ID 058f:6377 Alcor Micro Corp. Multimedia Card Reader
  Bus 002 Device 002: ID 050d:0002 Belkin Components
  #Device immediately above is usb to parallel cable
   
  w8@w8-FJ463AAR-ABA-a6528p:~$ lsmod | grep lp
  lp 17455  0 
  parport40930  4 parport_pc,ppdev,uss720,lp

  w8@w8-FJ463AAR-ABA-a6528p:~$ lsmod | grep ppdev
  ppdev  12849  0 
  parport40930  4 parport_pc,ppdev,uss720,lp

  w8@w8-FJ463AAR-ABA-a6528p:~$ lsmod | grep parport_pc
  parport_pc 32114  0 
  parport40930  4 parport_pc,ppdev,uss720,lp

  w8@w8-FJ463AAR-ABA-a6528p:~$ ls -l /dev/usb/lp* /dev/bus/usb/*/*
  ls: cannot access /dev/usb/lp*: No such file or directory
  crw-rw-r-- 1 root root 189,   0 2011-12-30 15:04 /dev/bus/usb/001/001
  crw-rw-r-- 1 root root 189,   1 2011-12-30 15:04 /dev/bus/usb/001/002
  crw-rw-r-- 1 root root 189,   3 2011-12-30 15:04 /dev/bus/usb/001/004
  crw-rw-r-- 1 root root 189, 128 2011-12-30 15:04 /dev/bus/usb/002/001
  crw-rw-r-- 1 root lp   189, 129 2011-12-30 15:25 /dev/bus/usb/002/002

  w8@w8-FJ463AAR-ABA-a6528p:~$ dmesg | grep par
  [0.00] Booting paravirtualized kernel on bare hardware
  [0.00] vt handoff: transparent VT on vt#7
  [0.210399] hpet0: 3 comparators, 32-bit 25.00 MHz counter
  [   28.754567] uss720: protocols (eg. bitbang) over USS720 usb to parallel 
cables
  [   28.893725] type=1400 audit(1325286282.534:2): apparmor="STATUS" 
operation="profile_load" name="/sbin/dhclient" pid=705 comm="apparmor_parser"
  [   28.893736] type=1400 audit(1325286282.534:3): apparmor="STATUS" 
operation="profile_replace" name="/sbin/dhclient" pid=755 comm="apparmor_parser"
  [   28.894175] type=1400 audit(1325286282.534:4): apparmor="STATUS" 
operation="profile_load" name="/usr/lib/NetworkManager/nm-dhcp-client.action" 
pid=755 comm="apparmor_parser"
  [   28.894324] type=1400 audit(1325286282.534:5): apparmor="STATUS" 
operation="profile_replace" 
name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=705 
comm="apparmor_parser"
  [   28.894438] type=1400 audit(1325286282.534:6): apparmor="STATUS" 
operation="profile_load" name="/usr/lib/connman/scripts/dhclient-script" 
pid=755 comm="apparmor_parser"
  [   28.894592] type=1400 audit(1325286282.534:7): apparmor="STATUS" 
operation="profile_replace" name="/usr/l

[Touch-packages] [Bug 1890836] Re: udev rule read but "RUN" command is not executed

2020-08-08 Thread Bill Yikes
Bug report to improve debugging output:

https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1890890

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to systemd in Ubuntu.
https://bugs.launchpad.net/bugs/1890836

Title:
  udev rule read but "RUN" command is not executed

Status in systemd package in Ubuntu:
  Invalid

Bug description:
  The following was introduced as
  "/etc/udev/rules.d/99-harvest_camera.rules":

  ACTION=="add", KERNEL=="sd*[!0-9]", ENV{ID_FS_UUID_ENC}=="C355-A42D",
  RUN+="/usr/bin/sudo -u bill /home/bill/scripts/harvest_camera.sh"

  When the device is attached, it gets mounted but the harvest_camera.sh
  script does not execute.  The udev_log=debug parameter is set in
  /etc/udev/udev.conf.  Verbosity was also maximized by running "udevadm
  control --log-priority=debug".  After running "udevadm monitor
  --environment --udev", the drive was attached.  The output shows a
  device being created (/dev/sda1) and the UUID in the rule matches.
  But the script simply never executes.  The /var/log/syslog file shows:

  "
  Reading rules file: /etc/udev/rules.d/99-harvest_camera.rules
  Successfully forked off 'n/a' as PID 2201.
  1-2.3: Worker [2201] is forced for processing SEQNUM=7973.
  "

  So the logs make it look as though the script was executed.  The
  script is simply a stub that does a "touch /tmp/IhaveExecuted" and a
  call to zenity to give a popup announcement.  But none of that happens
  (no popup and no new file in /tmp/).  With maximum verbosity, there is
  no way to further diagnose this.  Deeper detail is needed that shows
  which matching criteria in the rules are being tried and what the
  result is.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1890836/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1890890] [NEW] more debugging verbosity for udev rules needed

2020-08-08 Thread Bill Yikes
Public bug reported:

I wrote a udev rule that would not trigger when expected.  I spent 2
days working on it, trying to understand what the problem was.  I
finally figured it out -- it was a matching problem:

https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1890836

The most verbose output possible is with the log level set to "debug".
This level of logging does not show users what criteria are being
checked, or with what result, so users are working in the dark.  We
have to guess what udev is doing, and we're bad at it: if we could
guess correctly, we probably would have written the rule correctly in
the first place.

Consider procmail, an application where users write rules containing
matching criteria.  When procmail doesn't work as expected, its logs
show in detail which criteria match and which do not.  This is what
udev needs.

** Affects: systemd (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to systemd in Ubuntu.
https://bugs.launchpad.net/bugs/1890890

Title:
  more debugging verbosity for udev rules needed

Status in systemd package in Ubuntu:
  New

Bug description:
  I wrote a udev rule that would not trigger when expected.  I spent 2
  days working on it, trying to understand what the problem was.  I
  finally figured it out -- it was a matching problem:

  https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1890836

  The most verbose output possible is with the log level set to "debug".
  This level of logging does not show users what criteria are being
  checked and what the result is.  So users are working in the dark.  We
  have to guess what udev is doing.  And we're bad at it, because if we
  could guess correctly we probably would have written the rule
  correctly in the first place.

  Consider procmail.  This is an application where users write rules
  that contain matching criteria.  When Procmail doesn't work as
  expected, the logs show in detail which criteria match and which do
  not.  This is what udev needs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1890890/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1890836] Re: udev rule read but "RUN" command is not executed

2020-08-08 Thread Bill Yikes
PEBKAC

** Changed in: systemd (Ubuntu)
   Status: Incomplete => Invalid

** Changed in: systemd (Ubuntu)
 Assignee: (unassigned) => Bill Yikes (yik3s)

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to systemd in Ubuntu.
https://bugs.launchpad.net/bugs/1890836

Title:
  udev rule read but "RUN" command is not executed

Status in systemd package in Ubuntu:
  Invalid

Bug description:
  The following was introduced as
  "/etc/udev/rules.d/99-harvest_camera.rules":

  ACTION=="add", KERNEL=="sd*[!0-9]", ENV{ID_FS_UUID_ENC}=="C355-A42D",
  RUN+="/usr/bin/sudo -u bill /home/bill/scripts/harvest_camera.sh"

  When the device is attached, it gets mounted but the harvest_camera.sh
  script does not execute.  The udev_log=debug parameter is set in
  /etc/udev/udev.conf.  Verbosity was also maximized by running "udevadm
  control --log-priority=debug".  After running "udevadm monitor
  --environment --udev", the drive was attached.  The output shows a
  device being created (/dev/sda1) and the UUID in the rule matches.
  But the script simply never executes.  The /var/log/syslog file shows:

  "
  Reading rules file: /etc/udev/rules.d/99-harvest_camera.rules
  Successfully forked off 'n/a' as PID 2201.
  1-2.3: Worker [2201] is forced for processing SEQNUM=7973.
  "

  So the logs make it look as though the script was executed.  The
  script is simply a stub that does a "touch /tmp/IhaveExecuted" and a
  call to zenity to give a popup announcement.  But none of that happens
  (no popup and no new file in /tmp/).  With maximum verbosity, there is
  no way to further diagnose this.  Deeper detail is needed that shows
  which matching criteria in the rules are being tried and what the
  result is.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1890836/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1890836] Re: udev rule read but "RUN" command is not executed

2020-08-08 Thread Bill Yikes
It turns out the problem arises from cargo-cult programming on my
part.  The bang in KERNEL=="sd*[!0-9]" is what I copied out of
stackexchange, and it's exactly what prevents the rule from matching.
I suspected that early on, but ruled it out because when the device
first connects it must appear as /dev/sda (the partition table must be
read before there can be an sda1).  Now I figure the uevent that
carries ID_FS_UUID_ENC is the one for the partition, sda1, which the
whole-disk pattern in my custom rule deliberately excludes.
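
For completeness, a corrected rule that matches the partition rather
than the whole disk would presumably be (untested sketch; same UUID and
script as the original report):

ACTION=="add", KERNEL=="sd*[0-9]", ENV{ID_FS_UUID_ENC}=="C355-A42D",
RUN+="/usr/bin/sudo -u bill /home/bill/scripts/harvest_camera.sh"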

In any case, the bug is not as I reported.  I'm sorry this turned out to
be a non-bug and I'm sorry I wasted your time Mr. Streetman.  This can
be closed.

I may open a new bug report to improve the debug output, because we
shouldn't have to work in the dark and guess about what the matching
criteria is doing.

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to systemd in Ubuntu.
https://bugs.launchpad.net/bugs/1890836

Title:
  udev rule read but "RUN" command is not executed

Status in systemd package in Ubuntu:
  Invalid

Bug description:
  The following was introduced as
  "/etc/udev/rules.d/99-harvest_camera.rules":

  ACTION=="add", KERNEL=="sd*[!0-9]", ENV{ID_FS_UUID_ENC}=="C355-A42D",
  RUN+="/usr/bin/sudo -u bill /home/bill/scripts/harvest_camera.sh"

  When the device is attached, it gets mounted but the harvest_camera.sh
  script does not execute.  The udev_log=debug parameter is set in
  /etc/udev/udev.conf.  Verbosity was also maximized by running "udevadm
  control --log-priority=debug".  After running "udevadm monitor
  --environment --udev", the drive was attached.  The output shows a
  device being created (/dev/sda1) and the UUID in the rule matches.
  But the script simply never executes.  The /var/log/syslog file shows:

  "
  Reading rules file: /etc/udev/rules.d/99-harvest_camera.rules
  Successfully forked off 'n/a' as PID 2201.
  1-2.3: Worker [2201] is forced for processing SEQNUM=7973.
  "

  So the logs make it look as though the script was executed.  The
  script is simply a stub that does a "touch /tmp/IhaveExecuted" and a
  call to zenity to give a popup announcement.  But none of that happens
  (no popup and no new file in /tmp/).  With maximum verbosity, there is
  no way to further diagnose this.  Deeper detail is needed that shows
  which matching criteria in the rules are being tried and what the
  result is.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1890836/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1890836] Re: udev rule read but "RUN" command is not executed

2020-08-07 Thread Bill Yikes
It fails for me on two different systems:

* Ubuntu 20.04 (Focal)
* Linux Mint 19.2 (Tina, which is based on Bionic)

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to systemd in Ubuntu.
https://bugs.launchpad.net/bugs/1890836

Title:
  udev rule read but "RUN" command is not executed

Status in systemd package in Ubuntu:
  Incomplete

Bug description:
  The following was introduced as
  "/etc/udev/rules.d/99-harvest_camera.rules":

  ACTION=="add", KERNEL=="sd*[!0-9]", ENV{ID_FS_UUID_ENC}=="C355-A42D",
  RUN+="/usr/bin/sudo -u bill /home/bill/scripts/harvest_camera.sh"

  When the device is attached, it gets mounted but the harvest_camera.sh
  script does not execute.  The udev_log=debug parameter is set in
  /etc/udev/udev.conf.  Verbosity was also maximized by running "udevadm
  control --log-priority=debug".  After running "udevadm monitor
  --environment --udev", the drive was attached.  The output shows a
  device being created (/dev/sda1) and the UUID in the rule matches.
  But the script simply never executes.  The /var/log/syslog file shows:

  "
  Reading rules file: /etc/udev/rules.d/99-harvest_camera.rules
  Successfully forked off 'n/a' as PID 2201.
  1-2.3: Worker [2201] is forced for processing SEQNUM=7973.
  "

  So the logs make it look as though the script was executed.  The
  script is simply a stub that does a "touch /tmp/IhaveExecuted" and a
  call to zenity to give a popup announcement.  But none of that happens
  (no popup and no new file in /tmp/).  With maximum verbosity, there is
  no way to further diagnose this.  Deeper detail is needed that shows
  which matching criteria in the rules are being tried and what the
  result is.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1890836/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1890836] [NEW] udev rule read but "RUN" command is not executed

2020-08-07 Thread Bill Yikes
Public bug reported:

The following was introduced as
"/etc/udev/rules.d/99-harvest_camera.rules":

ACTION=="add", KERNEL=="sd*[!0-9]", ENV{ID_FS_UUID_ENC}=="C355-A42D",
RUN+="/usr/bin/sudo -u bill /home/bill/scripts/harvest_camera.sh"

When the device is attached, it gets mounted but the harvest_camera.sh
script does not execute.  The udev_log=debug parameter is set in
/etc/udev/udev.conf.  Verbosity was also maximized by running "udevadm
control --log-priority=debug".  After running "udevadm monitor
--environment --udev", the drive was attached.  The output shows a
device being created (/dev/sda1) and the UUID in the rule matches.  But
the script simply never executes.  The /var/log/syslog file shows:

"
Reading rules file: /etc/udev/rules.d/99-harvest_camera.rules
Successfully forked off 'n/a' as PID 2201.
1-2.3: Worker [2201] is forced for processing SEQNUM=7973.
"

So the logs make it look as though the script was executed.  The script
is simply a stub that does a "touch /tmp/IhaveExecuted" and a call to
zenity to give a popup announcement.  But none of that happens (no popup
and no new file in /tmp/).  With maximum verbosity, there is no way to
further diagnose this.  Deeper detail is needed that shows which
matching criteria in the rules are being tried and what the result is.

** Affects: systemd (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to systemd in Ubuntu.
https://bugs.launchpad.net/bugs/1890836

Title:
  udev rule read but "RUN" command is not executed

Status in systemd package in Ubuntu:
  New

Bug description:
  The following was introduced as
  "/etc/udev/rules.d/99-harvest_camera.rules":

  ACTION=="add", KERNEL=="sd*[!0-9]", ENV{ID_FS_UUID_ENC}=="C355-A42D",
  RUN+="/usr/bin/sudo -u bill /home/bill/scripts/harvest_camera.sh"

  When the device is attached, it gets mounted but the harvest_camera.sh
  script does not execute.  The udev_log=debug parameter is set in
  /etc/udev/udev.conf.  Verbosity was also maximized by running "udevadm
  control --log-priority=debug".  After running "udevadm monitor
  --environment --udev", the drive was attached.  The output shows a
  device being created (/dev/sda1) and the UUID in the rule matches.
  But the script simply never executes.  The /var/log/syslog file shows:

  "
  Reading rules file: /etc/udev/rules.d/99-harvest_camera.rules
  Successfully forked off 'n/a' as PID 2201.
  1-2.3: Worker [2201] is forced for processing SEQNUM=7973.
  "

  So the logs make it look as though the script was executed.  The
  script is simply a stub that does a "touch /tmp/IhaveExecuted" and a
  call to zenity to give a popup announcement.  But none of that happens
  (no popup and no new file in /tmp/).  With maximum verbosity, there is
  no way to further diagnose this.  Deeper detail is needed that shows
  which matching criteria in the rules are being tried and what the
  result is.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1890836/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1890520] Re: catastrophic bug with deluser command -- destroys large number of system files

2020-08-07 Thread Bill Yikes
Notice also there is a serious transparency problem.  The output only
shows files for which removal *failed*, which acutely heightens the
destruction: the tool potentially destroyed *thousands* of files as I
sat there and let it run.  It gives no indication of what is being
destroyed, so the admin has to trust that their understanding of the
scope of removal is accurate.  In the absence of errors, an admin would
let it run through to the end, resulting in maximum damage.

This tool badly needs a --simulate option.  And regardless of
simulation, it should show which files are affected.
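
In the meantime, a manual preview that roughly stands in for the
missing --simulate would be (a sketch; rygel is the user from the
original report, and -xdev keeps find on one filesystem whereas
deluser scans everything):

# find / -xdev -user rygel -print 2>/dev/null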

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to adduser in Ubuntu.
https://bugs.launchpad.net/bugs/1890520

Title:
  catastrophic bug with deluser command -- destroys large number of
  system files

Status in adduser package in Ubuntu:
  New

Bug description:
  I installed rygel and created a specific rygel user to run it with (as
  directed by the docs). I then realized a specific user was not needed.
  To reverse my steps, I ran deluser which proceeded to delete lots of
  files that have nothing to do with the rygel user:

  root@host:~# adduser --home /home/rygel --disabled-password --disabled-login 
--gecos 'Rygel media server' rygel
  root@host:~# sudo su - rygel
  rygel@host:$ systemctl start rygel
  rygel@host:$ exit
  root@host:~# deluser --remove-home --remove-all-files rygel
  Looking for files to backup/remove ...
  /usr/sbin/deluser: Cannot handle special file /etc/systemd/system/mdm.service
  /usr/sbin/deluser: Cannot handle special file 
/etc/systemd/system/samba-ad-dc.service
  /usr/sbin/deluser: Cannot handle special file 
/etc/systemd/system/cgmanager.service
  /usr/sbin/deluser: Cannot handle special file 
/etc/systemd/system/cgproxy.service
  /usr/sbin/deluser: Cannot handle special file /dev/vcsa7
  /usr/sbin/deluser: Cannot handle special file /dev/vcsu7
  /usr/sbin/deluser: Cannot handle special file /dev/vcs7
  /usr/sbin/deluser: Cannot handle special file /dev/gpiochip0
  /usr/sbin/deluser: Cannot handle special file /dev/dvdrw
  /usr/sbin/deluser: Cannot handle special file /dev/dvd
  /usr/sbin/deluser: Cannot handle special file /dev/cdrw
  /usr/sbin/deluser: Cannot handle special file /dev/zfs
  /usr/sbin/deluser: Cannot handle special file /dev/vhost-vsock
  /usr/sbin/deluser: Cannot handle special file /dev/vhost-net
  /usr/sbin/deluser: Cannot handle special file /dev/uhid
  /usr/sbin/deluser: Cannot handle special file /dev/vhci
  /usr/sbin/deluser: Cannot handle special file /dev/userio
  /usr/sbin/deluser: Cannot handle special file /dev/nvram
  /usr/sbin/deluser: Cannot handle special file /dev/btrfs-control
  /usr/sbin/deluser: Cannot handle special file /dev/cuse
  /usr/sbin/deluser: Cannot handle special file /dev/autofs
  /usr/sbin/deluser: Cannot handle special file /dev/sde
  /usr/sbin/deluser: Cannot handle special file /dev/sdd
  /usr/sbin/deluser: Cannot handle special file /dev/sdc
  /usr/sbin/deluser: Cannot handle special file /dev/sdb
  /usr/sbin/deluser: Cannot handle special file /dev/sg5
  /usr/sbin/deluser: Cannot handle special file /dev/sg4
  /usr/sbin/deluser: Cannot handle special file /dev/sg3
  /usr/sbin/deluser: Cannot handle special file /dev/sg2
  /usr/sbin/deluser: Cannot handle special file /dev/vcsa6
  /usr/sbin/deluser: Cannot handle special file /dev/vcsu6
  /usr/sbin/deluser: Cannot handle special file /dev/vcs6
  /usr/sbin/deluser: Cannot handle special file /dev/vcsa5
  /usr/sbin/deluser: Cannot handle special file /dev/vcsu5
  /usr/sbin/deluser: Cannot handle special file /dev/vcs5
  /usr/sbin/deluser: Cannot handle special file /dev/vcsa4
  /usr/sbin/deluser: Cannot handle special file /dev/vcsu4
  /usr/sbin/deluser: Cannot handle special file /dev/vcs4
  /usr/sbin/deluser: Cannot handle special file /dev/vcsa3
  /usr/sbin/deluser: Cannot handle special file /dev/vcsu3
  /usr/sbin/deluser: Cannot handle special file /dev/vcs3
  /usr/sbin/deluser: Cannot handle special file /dev/vcsa2
  /usr/sbin/deluser: Cannot handle special file /dev/vcsu2
  /usr/sbin/deluser: Cannot handle special file /dev/vcs2
  /usr/sbin/deluser: Cannot handle special file /dev/hidraw2
  /usr/sbin/deluser: Cannot handle special file /dev/hidraw1
  /usr/sbin/deluser: Cannot handle special file /dev/cdrom
  /usr/sbin/deluser: Cannot handle special file /dev/hidraw0
  /usr/sbin/deluser: Cannot handle special file /dev/fb0
  /usr/sbin/deluser: Cannot handle special file /dev/i2c-5
  /usr/sbin/deluser: Cannot handle special file /dev/i2c-4
  /usr/sbin/deluser: Cannot handle special file /dev/i2c-3
  /usr/sbin/deluser: Cannot handle special file /dev/i2c-2
  /usr/sbin/deluser: Cannot handle special file /dev/i2c-1
  /usr/sbin/deluser: Cannot handle special file /dev/i2c-0
  /usr/sbin/deluser: Cannot handle special file /dev/rtc
  /usr/sbin/deluser: Cannot handle special file /dev/stderr
  /usr/sbin/deluser: Cannot handle s

[Touch-packages] [Bug 1890827] [NEW] udev and udevadm documentation missing

2020-08-07 Thread Bill Yikes
Public bug reported:

The man page for udevadm neglects to mention options that seem to be
documented only on stackexchange.  E.g.

udevadm monitor --environment

udevadm control --reload-rules

From the man page, it's unclear what the difference is between /udevadm
trigger/ and /udevadm test/.  The description should tell the user
enough to work out which one to use in a given situation.  E.g. I wrote
a new rule and it's not triggering: is the trigger command or the test
command suitable for diagnosing the problem?
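
For the record, my current working understanding of the difference, as
a sketch (assuming the device of interest is /dev/sdb1, and a udevadm
recent enough to accept device names as arguments):

$ udevadm test "$(udevadm info -q path -n /dev/sdb1)"
  (simulates rule processing and prints which rules applied)
$ sudo udevadm trigger --action=add /dev/sdb1
  (replays a real uevent through udevd, so RUN+= commands execute)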

The documentation in /usr/share/doc/udev/ is strangely specific to some
esoteric network interface naming.  There is no basic description of
what udev's purpose is or what its capabilities are.  One of the most
notable tasks is to react to the insertion of USB drives, and there is
no mention of this.  The /etc/fstab table also has a role in mounting
USB drives, but there is no mention as to how udev works with
/etc/fstab.  I've found that if I remove a line in /etc/fstab and attach
the drive pertaining to that line, it gets mounted anyway.  If the fstab
file supersedes udev, then this should be mentioned in the
documentation.

** Affects: systemd (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to systemd in Ubuntu.
https://bugs.launchpad.net/bugs/1890827

Title:
  udev and udevadm documentation missing

Status in systemd package in Ubuntu:
  New

Bug description:
  The man page for udevadm neglects to mention options which seem to
  only be documented in stackexchange.  E.g.

  udevadm monitor --environment

  udevadm control --reload-rules

  From the man page, it's unclear what the difference is between
  /udevadm trigger/ and /udevadm test/.  The description should tell the
  user enough to work out which one they need to use for a certain
  situation.  E.g. I wrote a new rule and it's not triggering.  Is the
  trigger command or the test command suitable for diagnosing the
  problem?

  The documentation in /usr/share/doc/udev/ is strangely specific to
  some esoteric network interface naming.  There is no basic description
  of what udev's purpose is or what its capabilities are.  One of the
  most notable tasks is to react to the insertion of USB drives, and
  there is no mention of this.  The /etc/fstab table also has a role in
  mounting USB drives, but there is no mention as to how udev works with
  /etc/fstab.  I've found that if I remove a line in /etc/fstab and
  attach the drive pertaining to that line, it gets mounted anyway.  If
  the fstab file supersedes udev, then this should be mentioned in the
  documentation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1890827/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[Touch-packages] [Bug 1890654] Re: udev service does not log

2020-08-07 Thread Bill Yikes
Sorry, I meant to say /var/log/syslog (not /var/log/system) shows the
device was attached.

The verbosity in /var/log/syslog increases when udev_log=debug is set,
but there is still no /var/log/udev.
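
FWIW, on a systemd system the messages may land in the journal rather
than in a file.  A sketch for checking (assuming the unit name
systemd-udevd, as on Focal):

$ journalctl -b -u systemd-udevd --no-pager | tail -n 50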

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to systemd in Ubuntu.
https://bugs.launchpad.net/bugs/1890654

Title:
  udev service does not log

Status in systemd package in Ubuntu:
  New

Bug description:
  By default udev is supposed to log to /var/log.  In Ubuntu 20 it is
  not logging.  Logging was then explicitly enabled by uncommenting this
  line in /etc/udev/udev.conf:

  "udev_log=info"

  After running "systemctl restart udev" and plugging in a USB drive,
  there is still no udev log in /var/log.  Yet /var/log/system shows
  that a device was in fact attached.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1890654/+subscriptions

-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp