[Bug 1976341] Re: the .netrc man page neglects to disclose the format for the password string

2022-06-05 Thread Bill Yikes
Also note that wget checks the syntax of the ~/.netrc file every time it
runs with default options, and it emits a warning when the bash-style
quoting that FTP & Fetchmail expect is present.  Reported here:

https://savannah.gnu.org/bugs/index.php?62586

** Bug watch added: GNU Savannah Bug Tracker #62586
   http://savannah.gnu.org/bugs/?62586

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1976341

Title:
  the .netrc man page neglects to disclose the format for the password
  string

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/netkit-ftp/+bug/1976341/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1977567] Re: security oversight that makes users vulnerable to doxxing

2022-06-03 Thread Bill Yikes
** Description changed:

  Neomutt gives no possible way to send a PGP-encrypted msg and then save
  an unencrypted local copy. This forces users to choose from the
  following workflows:
  
  1) Encrypt the msg only to the recipients & store a copy of it, after
  which the sender can never again see the payload they sent. In this
  case the body of the msg is just a useless & space-wasting blob. The
  sender can have a record that they sent a msg (the metadata) but no
  way to recall what they sent. In fact the payload is a risk with zero
  benefit, because in the event that the recipient’s key is compromised,
  the sender’s copy can then be read by an adversary.
  
  2) The sender adds an “encrypt-to” config option to gpg.conf that causes
  all msgs sent to be encrypted to themselves. This enables the sender to
  keep an accessible record of what they sent out. One side-effect is that
  they may choose to keep email records longer than they keep their
  private key, and so when they lose or delete their private key or
  password, they can no longer access records of what they sent. That’s
  not serious, but consider this scenario: Alice anonymously sends a
  highly sensitive PGP-encrypted msg to wikileaks & forgets that she has
  everything set to encrypt to self. Wikileaks (or someone forcing
  wikileaks) can run pgpdump on the encrypted payload and see Alice’s
  keyID. Doxxed!
  
  Both options 1 & 2 need improvement.
  
  Approach 1 can be improved by giving users the option to store metadata
  only. Mutt should save only the headers (perhaps including the original
  payload size), but delete the payload itself both for security and for
  wiser use of storage space.
  
  Approach 2 should perhaps remain available for users who want that
  option (everyone has their own threat model), but there needs to be a
  3rd option: give users the possibility to store a plaintext copy so
  they are not always at risk of accidentally doxxing themselves.
+ 
+ (edit)
+ The fcc_clear option is that 3rd option. So a lot of this is moot now.
+ But note that two problems still remain:
+ 
+ * wasted space if a user chooses approach one
+ 
+ * the fcc_clear behavior leaves no trace that the msg was sent encrypted
+ which is a bit annoying. It’s useful to know when looking at old sent
+ msgs whether or not a msg was encrypted.  Ideally mutt should add a
+ header that lists encrypted recipients.  E.g.:
+ 
+ X-Mutt-PGP-Encrypted-To: 0xabc123 0xdef456

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1977567

Title:
  security oversight that makes users vulnerable to doxxing

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/neomutt/+bug/1977567/+subscriptions



[Bug 1977567] [NEW] security oversight that makes users vulnerable to doxxing

2022-06-03 Thread Bill Yikes
Public bug reported:

Neomutt gives no possible way to send a PGP-encrypted msg and then save
an unencrypted local copy. This forces users to choose from the
following workflows:

1) Encrypt the msg only to the recipients & store a copy of it, after
which the sender can never again see the payload they sent. In this case
the body of the msg is just a useless & space-wasting blob. The sender
can have a record that they sent a msg (the metadata) but no way to
recall what they sent. In fact the payload is a risk with zero benefit,
because in the event that the recipient’s key is compromised, the
sender’s copy can then be read by an adversary.

2) The sender adds an “encrypt-to” config option to gpg.conf that causes
all msgs sent to be encrypted to themselves. This enables the sender to
keep an accessible record of what they sent out. One side-effect is that
they may choose to keep email records longer than they keep their
private key, and so when they lose or delete their private key or
password, they can no longer access records of what they sent. That’s
not serious, but consider this scenario: Alice anonymously sends a
highly sensitive PGP-encrypted msg to wikileaks & forgets that she has
everything set to encrypt to self. Wikileaks (or someone forcing
wikileaks) can run pgpdump on the encrypted payload and see Alice’s
keyID. Doxxed!

Both options 1 & 2 need improvement.

Approach 1 can be improved by giving users the option to store metadata
only. Mutt should save only the headers (perhaps including the original
payload size), but delete the payload itself both for security and for
wiser use of storage space.

Approach 2 should perhaps remain available for users who want that
option (everyone has their own threat model), but there needs to be a
3rd option: give users the possibility to store a plaintext copy so they
are not always at risk of accidentally doxxing themselves.
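
For reference, the gpg.conf setting described in approach 2 looks like
this (the key ID is a placeholder taken from the header example style,
not a real key):

```
# ~/.gnupg/gpg.conf
# Encrypt every outgoing msg to the sender's own key as well.
# 0xabc123 is hypothetical -- substitute your own key ID.
encrypt-to 0xabc123
```

Removing or commenting out this one line is what Alice would have needed
to do before mailing anything anonymously.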

** Affects: neomutt (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1977567

Title:
  security oversight that makes users vulnerable to doxxing

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/neomutt/+bug/1977567/+subscriptions



[Bug 1977492] [NEW] Lynx cannot visit .onion sites

2022-06-03 Thread Bill Yikes
Public bug reported:

Running this:

`torsocks lynx
'https://c6usaa6obkiahck7rkn2phbffb3strd375pfpxmawuexnmectjizyjad.onion/about'`

results in:

  “Alert!: Unable to connect to remote host. … lynx: Can't access
startfile …”

The same malfunction occurs with other onion sites as well. But note
that running `torsocks w3m ` works without issue, so it’s a
lynx-specific problem.


Unrelated, but worth noting in case this bug report is seen by the admin of the 
Lynx website:

If you visit the Lynx homepage using a graphical browser over Tor, a
Sucuri blockade denies access. But if you run “torsocks lynx
https://lynx.invisible-island.net/lynx.html” suddenly your Tor IP
becomes acceptable to their access rules. This is a broken security
policy. Tor users should be treated equally regardless of browser. If
Tor is not actually a security threat, then Sucuri should be dumped.

** Affects: lynx (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1977492

Title:
  Lynx cannot visit .onion sites

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/lynx/+bug/1977492/+subscriptions



[Bug 1976361] [NEW] man page “passes the buck” to a dead end for .netrc docs

2022-05-31 Thread Bill Yikes
Public bug reported:

The man page says:

“See the ftp(1) man page for details of the syntax of the ~/.netrc
file.”

There are a few problems with this:

1) the fetchmail pkg is not dependent on the netkit-ftp package, so the
FTP man page is not necessarily installed.

2) even when FTP is installed, the .netrc manpage is vague and neglects
to specify the syntax of the password field.

3) different apps use the .netrc file and the syntax they expect is
inconsistent (see this bug:
).
Apart from the FTP man page failing to document the password syntax, can
users really expect fetchmail to use the same syntax rules as the FTP
client?

I had quite a complex password in .netrc, and fetchmail was not parsing
it correctly, so the mail server received incorrect passwords from
fetchmail. After a lot of trial and error I almost gave up. Someone in a
forum made a SWAG (silly wild ass guess) and said try the bash style of
quoting on the password and see if that works. It worked, but no user
should have to go through that guesswork. Fetchmail’s password syntax
should be detailed in the fetchmail man page.

** Affects: fetchmail (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1976361

Title:
  man page “passes the buck” to a dead end for .netrc docs

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/fetchmail/+bug/1976361/+subscriptions



[Bug 1976341] [NEW] the .netrc man page neglects to disclose the format for the password string

2022-05-31 Thread Bill Yikes
Public bug reported:

The FTP man page references the .netrc man page for the .netrc file
format.  The .netrc man page simply states that the “password” token is
followed by the actual password, but it does not specify the format of
that string.

The problem is that different applications have different expectations
for how that password string is represented.  If an actual password
contains both a single quote and a double quote, cURL expects the
password to be entirely unquoted in .netrc (in fact, curl expects all
passwords to be unquoted and even treats surrounding quotes as part of
the password).  Fetchmail, on the other hand, references the FTP man
page, and from testing it’s clear that fetchmail expects bash-style
quoting.  Take this password for example:

   foo'123"bar

cURL expects .netrc to have → machine … username … password foo'123"bar

fetchmail expects .netrc to have → machine … username … password
foo"'"123'"'bar

Consequently curl and fetchmail cannot both make use of the same .netrc
record.  And there is no basis for reporting a bug against curl or
fetchmail because the format is not documented.  It’s interesting to
note that IBM is apparently the only organization to even attempt to
produce a spec for the password string:

https://www.ibm.com/docs/en/zos/2.3.0?topic=ftp-netrc-data-set

but also note that IBM’s spec is broken, because it gives no instruction
for the situation where a password contains both a single and double
quote.  Perhaps the IBM docs can be used as a precursor to deriving a
properly documented password string for the .netrc file.
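
The bash-style quoting that fetchmail appears to expect can be generated
mechanically.  As a sketch (an observation, not an official spec:
Python's shlex module happens to implement POSIX-shell quoting rules),
note that the quoted form shown above is only one of several equivalent
spellings:

```python
import shlex

password = 'foo\'123"bar'      # the example password from this report

# One valid bash-style quoting (equivalent to foo"'"123'"'bar above).
quoted = shlex.quote(password)

# Round trip: splitting with shell rules recovers the original password.
assert shlex.split(quoted)[0] == password
print(quoted)
```

Because several distinct quotings decode to the same password, a proper
.netrc spec would have to say which spellings a parser must accept.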

** Affects: netkit-ftp (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1976341

Title:
  the .netrc man page neglects to disclose the format for the password
  string

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/netkit-ftp/+bug/1976341/+subscriptions



[Bug 1951704] [NEW] Profanity freezes in a hung state when uplink connection is severed

2021-11-20 Thread Bill Yikes
Public bug reported:

When the Internet is unplugged, Profanity hangs.  It’s so frozen there
is no way to quit gracefully -- no way to enter “/quit”.  The only way
to terminate is to kill the process externally (“pkill profanity”).

I’ve seen Profanity hang at other times as well, so it may not always be
related to loss of connectivity.

version → 0.10.0

** Affects: profanity (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1951704

Title:
  Profanity freezes in a hung state when uplink connection is severed

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/profanity/+bug/1951704/+subscriptions



[Bug 1929241] Re: [feature request] add a proxy CLI option to debootstrap

2021-10-29 Thread Bill Yikes
Now that ~7 months have passed, I don't recall if I tried the env var.
Probably not, because nothing in the docs suggests that the env var has
any effect.  The http_proxy environment variable is a convention. Some
apps honor the convention and some don't. In the case at hand, there's
nothing to even suggest that debootstrap is proxy-capable.

If it is proxy-capable, great, but we still have a bug b/c it's
undocumented.
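
For context, the http_proxy convention under discussion normally looks
like this on the command line (a sketch only; the proxy address and
mirror are placeholders, and whether debootstrap honors the variable is
exactly what remains undocumented):

```
$ http_proxy=http://127.0.0.1:3128 \
    debootstrap --arch amd64 stable /mnt/target http://deb.debian.org/debian
```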

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1929241

Title:
  [feature request] add a proxy CLI option to debootstrap

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/debootstrap/+bug/1929241/+subscriptions



[Bug 1947005] Re: linphone gui has no dialpad if Sway+xWayland is used

2021-10-13 Thread Bill Yikes
I was able to simply enter a phone number in the search field, and it
worked.  So the UI is not broken.

There is still a bug here though: no man page.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1947005

Title:
  linphone gui has no dialpad if Sway+xWayland is used

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linphone-desktop/+bug/1947005/+subscriptions



[Bug 1947005] Re: linphone gui has no dialpad if Sway+xWayland is used

2021-10-13 Thread Bill Yikes
The attached image shows what Linphone 4.4.21 looks like -- the
defective version.

** Attachment added: "Linphone 4.4.21"
   
https://bugs.launchpad.net/ubuntu/+source/linphone-desktop/+bug/1947005/+attachment/5532508/+files/linphone_screencap_nokeypad.png

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1947005

Title:
  linphone gui has no dialpad if Sway+xWayland is used

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linphone-desktop/+bug/1947005/+subscriptions



[Bug 1947005] Re: linphone gui has no dialpad if Sway+xWayland is used

2021-10-13 Thread Bill Yikes
** Description changed:

  When Linphone is installed on a system that uses Sway & Xwayland (instead
  of Gnome), the app launches with no way to dial a phone number.  There
  is no dialpad, just a field where the SIP account of the other party can
  be entered.  The contacts form also omits phone number from the fields,
  so there's no way around the missing keypad.
  
  There is also no man page and no documentation in /usr/share/doc/, which
  I believe fails the minimum required quality standards of Debian and
  Ubuntu.
+ 
+ Note that some Ubuntu users who have no issue reaching a keypad see this
+ screen:
+ 
+ https://lcom.static.linuxfound.org/images/stories/41373/linphone-VOIP.png
+ 
+ but version 4.4.21 shows something much different.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1947005

Title:
  linphone gui has no dialpad if Sway+xWayland is used

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linphone-desktop/+bug/1947005/+subscriptions



[Bug 1947005] [NEW] linphone gui has no dialpad if Sway+xWayland is used

2021-10-13 Thread Bill Yikes
Public bug reported:

When Linphone is installed on a system that uses Sway & Xwayland (instead
of Gnome), the app launches with no way to dial a phone number.  There
is no dialpad, just a field where the SIP account of the other party can
be entered.  The contacts form also omits phone number from the fields,
so there's no way around the missing keypad.

There is also no man page and no documentation in /usr/share/doc/, which
I believe fails the minimum required quality standards of Debian and
Ubuntu.

** Affects: linphone-desktop (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1947005

Title:
  linphone gui has no dialpad if Sway+xWayland is used

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linphone-desktop/+bug/1947005/+subscriptions



[Bug 1946079] Re: ppmraw device no longer produces output (regression)

2021-10-05 Thread Bill Yikes
Upstream report:

https://bugs.ghostscript.com/show_bug.cgi?id=704500

** Bug watch added: Ghostscript (AFPL) Bugzilla #704500
   http://bugs.ghostscript.com/show_bug.cgi?id=704500

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1946079

Title:
  ppmraw device no longer produces output (regression)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ghostscript/+bug/1946079/+subscriptions



[Bug 1946079] [NEW] ppmraw device no longer produces output (regression)

2021-10-05 Thread Bill Yikes
Public bug reported:

In version 9.26, the following command worked as expected -- it produced
a PPM file:

  $ gs -sDEVICE=ppmraw -q -r600 -sPAPERSIZE=a4 -dFIXEDMEDIA -sPageList=4
-o /tmp/raster.ppm vector.pdf

But in ghostscript version 9.53.3, that command produces no PPM file,
and the terminal output is empty. That is, there is no error, warning,
or indication of progress.

FWIW, the PDF in my tests was produced by LaTeX.
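
One thing worth noting when reproducing: -q is Ghostscript's quiet flag,
so an empty terminal is partly expected.  Rerunning without it (a
diagnostic sketch, not something verified in this report) may surface an
error message:

```
$ gs -sDEVICE=ppmraw -r600 -sPAPERSIZE=a4 -dFIXEDMEDIA -sPageList=4 \
    -o /tmp/raster.ppm vector.pdf
```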

** Affects: ghostscript (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1946079

Title:
  ppmraw device no longer produces output (regression)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ghostscript/+bug/1946079/+subscriptions



[Bug 1944040] [NEW] (bad docs) The special network id “all” is undocumented in wpa_cli

2021-09-18 Thread Bill Yikes
Public bug reported:

In wpa_cli:

> help enable_network
commands:
  enable_network <network id> = enable a network

That's insufficient.  What happens when someone runs “select_network 5”
followed at some point by “save_config”?  It permanently disables all
networks except network 5.  Users who want to get back to a normal
situation, where all networks are enabled so that a network is
automatically selected wherever they are, are then left running
“enable_network 0”..“enable_network 1”..“enable_network 2”..etc.  Pain
in the ass.

This webpage mentions that there is a special network id “all”:

https://w1.fi/wpa_supplicant/devel/ctrl_iface_page.html

So after selecting a specific network, one can run “enable_network all”
to get back to a normal situation.  This possibility should be
documented in the enable_network help text as well as the select_network
text so those who use select_network are informed on how to reverse
their action.
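
Concretely, the recovery sequence described above looks like this at the
interactive wpa_cli prompt (following the excerpt style above; “all” is
the special id from the w1.fi page, and saving only takes effect when
update_config=1 is set in wpa_supplicant.conf):

```
> enable_network all
> save_config
```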

** Affects: wpasupplicant (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1944040

Title:
  (bad docs) The special network id “all” is undocumented in wpa_cli

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/wpasupplicant/+bug/1944040/+subscriptions



[Bug 1942483] [NEW] calcurse import of ICS files is lossy (location, description, and UID are lost)

2021-09-02 Thread Bill Yikes
Public bug reported:

On an Android device, new events were entered into v5.1.2 of this
3rd-party calendar app:

  https://f-droid.org/en/packages/com.simplemobiletools.calendar.pro/

Events contained a Title ("Summary" field), Location, and Description.
The events were exported to an ICS file and pulled using adb.
Inspection confirmed that all the data made it into the ICS file.  The
ICS file was then imported into calcurse.  Only the Title field and
times survived the import.

I thought perhaps calcurse simply is not showing the data.  So as a test
the records were then re-exported as an ical file.  Indeed the LOCATION
and DESCRIPTION fields are missing.  When the ical file is re-imported
into Simple Calendar Pro, the records are duplicated but obviously with
the lost data.

It's unclear how Simple Calendar Pro guards against duplication (if at
all), but it's likely using the UID field.  UID is also a field that the
original records contained but calcurse lost, so there is little hope of
avoiding dupes when reimporting records.
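
For reference, a minimal VEVENT carrying the four fields in question
(the values are made up; the property names are standard RFC 5545)
looks like:

```
BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//example//hypothetical//EN
BEGIN:VEVENT
UID:made-up-uid-1234@example.org
DTSTAMP:20210902T080000Z
DTSTART:20210902T090000Z
DTEND:20210902T100000Z
SUMMARY:Dentist appointment
LOCATION:123 Main St
DESCRIPTION:Bring the referral letter
END:VEVENT
END:VCALENDAR
```

A lossless import would need to round-trip UID, LOCATION, and
DESCRIPTION, not just SUMMARY and the times.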

** Affects: calcurse (Ubuntu)
 Importance: Undecided
 Status: New

** Summary changed:

- calcurse import of ICS files is lossy (location & description is lost)
+ calcurse import of ICS files is lossy (location, description, and UID are 
lost)

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1942483

Title:
  calcurse import of ICS files is lossy (location, description, and UID
  are lost)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/calcurse/+bug/1942483/+subscriptions



[Bug 1940691] Re: sfdisk fails to overwrite disks that contain a pre-existing linux partition

2021-08-20 Thread Bill Yikes
** Description changed:

  This command should non-interactively obliterate whatever partition
  table is on /dev/sde, and create a new table with a linux partition that
  spans the whole disk:
  
-   $ sfdisk --label gpt /dev/sde <<< 'start=2048, type='"$(sfdisk --label
+   $ sfdisk --label gpt /dev/sde <<< 'start=2048, type='"$(sfdisk --label
  gpt -T | awk '{IGNORECASE = 1;} /linux filesystem/{print $1}')"
  
  Sometimes it works correctly; sometimes not.  It seems to work correctly
  as long as the pre-existing partition table does not already contain a
  linux partition.  E.g. if the existing table just contains an exFAT
  partition, there's no issue.  But if there is a linux partition, it
  gives this output:
  
  -
  Old situation:
  
  Device StartEndSectors   Size Type
  /dev/sde1   2048 1953525134 1953523087 931.5G Linux root (x86-64)
  
  >>> Created a new GPT disklabel (GUID: ).
  /dev/sde1: Sector 2048 already used.
  Failed to add #1 partition: Numerical result out of range
  Leaving.
  -
  
  sfdisk should *always* overwrite the target disk unconditionally.  This
  is what the dump of the target drive looks like in the failure case:
  
  $ sfdisk -d /dev/sdd
  label: gpt
  label-id: 
  device: /dev/sdd
  unit: sectors
  first-lba: 34
  last-lba: 1953525134
  grain: 33553920
  sector-size: 512
  
  /dev/sdd1 : start=2048, size=  1953523087,
  type=4F68BCE3-E8CD-4DB1-96E7-FBCAF984B709, uuid=, name="Linux
  x86-64 root (/)"
  
  $ sfdisk -v
  sfdisk from util-linux 2.36.1
  
  Even the workaround is broken.  That is, running the following:
  
  $ wipefs -a /dev/sde
  
  should put the disk in a state that can be overwritten.  But whatever
  residual metadata it leaves behind still triggers the sfdisk bug:
  
  $ sfdisk --label gpt /dev/sde <<< 'start=2048, type='"$(sfdisk --label gpt -T 
| awk '{IGNORECASE = 1;} /linux filesystem/{print $1}')"
  Checking that no-one is using this disk right now ... OK
  
  Disk /dev/sde: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
- Disk model: Disk
+ Disk model: Disk
  Units: sectors of 1 * 512 = 512 bytes
  Sector size (logical/physical): 512 bytes / 512 bytes
  I/O size (minimum/optimal): 4096 bytes / 33553920 bytes
  
  >>> Created a new GPT disklabel (GUID: ).
  /dev/sde1: Sector 2048 already used.
  Failed to add #1 partition: Numerical result out of range
  Leaving.
+ 
+ Workaround 2 (also fails):
+ 
+ $ dd if=/dev/zero of=/dev/sde bs=1M count=512
+ 
+ $ sfdisk -d /dev/sde
+ sfdisk: /dev/sde: does not contain a recognized partition table
+ 
+ ^ the nuclear option did the right thing, but sfdisk still fails to
+ partition the drive (same error).
+ 
+ The *only* workaround that works is an interactive partitioning with
+ gdisk.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1940691

Title:
  sfdisk fails to overwrite disks that contain a pre-existing linux
  partition

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/util-linux/+bug/1940691/+subscriptions



[Bug 1940691] [NEW] sfdisk fails to overwrite disks that contain a pre-existing linux partition

2021-08-20 Thread Bill Yikes
Public bug reported:

This command should non-interactively obliterate whatever partition
table is on /dev/sde, and create a new table with a linux partition that
spans the whole disk:

  $ sfdisk --label gpt /dev/sde <<< 'start=2048, type='"$(sfdisk --label
gpt -T | awk '{IGNORECASE = 1;} /linux filesystem/{print $1}')"

Sometimes it works correctly; sometimes not.  It seems to work correctly
as long as the pre-existing partition table does not already contain a
linux partition.  E.g. if the existing table just contains an exFAT
partition, there's no issue.  But if there is a linux partition, it
gives this output:

-
Old situation:

Device StartEndSectors   Size Type
/dev/sde1   2048 1953525134 1953523087 931.5G Linux root (x86-64)

>>> Created a new GPT disklabel (GUID: ).
/dev/sde1: Sector 2048 already used.
Failed to add #1 partition: Numerical result out of range
Leaving.
-

sfdisk should *always* overwrite the target disk unconditionally.  This
is what the dump of the target drive looks like in the failure case:

$ sfdisk -d /dev/sdd
label: gpt
label-id: 
device: /dev/sdd
unit: sectors
first-lba: 34
last-lba: 1953525134
grain: 33553920
sector-size: 512

/dev/sdd1 : start=2048, size=  1953523087,
type=4F68BCE3-E8CD-4DB1-96E7-FBCAF984B709, uuid=, name="Linux
x86-64 root (/)"

$ sfdisk -v
sfdisk from util-linux 2.36.1

Even the workaround is broken.  That is, running the following:

$ wipefs -a /dev/sde

should put the disk in a state that can be overwritten.  But whatever
residual metadata it leaves behind still triggers the sfdisk bug:

$ sfdisk --label gpt /dev/sde <<< 'start=2048, type='"$(sfdisk --label gpt -T | 
awk '{IGNORECASE = 1;} /linux filesystem/{print $1}')"
Checking that no-one is using this disk right now ... OK

Disk /dev/sde: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: Disk
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 4096 bytes / 33553920 bytes

>>> Created a new GPT disklabel (GUID: ).
/dev/sde1: Sector 2048 already used.
Failed to add #1 partition: Numerical result out of range
Leaving.

Workaround 2 (also fails):

$ dd if=/dev/zero of=/dev/sde bs=1M count=512

$ sfdisk -d /dev/sde
sfdisk: /dev/sde: does not contain a recognized partition table

^ the nuclear option did the right thing, but sfdisk still fails to
partition the drive (same error).

The *only* workaround that works is an interactive partitioning with
gdisk.
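
For clarity, the one-liner can be decomposed: the here-string is just a
one-line sfdisk script with the Linux-filesystem type GUID substituted
in (the GUID below is the one from the dump above):

```shell
# Build the sfdisk input script separately so it can be inspected first.
type_guid='4F68BCE3-E8CD-4DB1-96E7-FBCAF984B709'
printf 'start=2048, type=%s\n' "$type_guid"
# The printed line is exactly what gets piped into: sfdisk --label gpt /dev/sde
```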

** Affects: util-linux (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1940691

Title:
  sfdisk fails to overwrite disks that contain a pre-existing linux
  partition

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/util-linux/+bug/1940691/+subscriptions



[Bug 1940001] Re: Segmentation fault when using “-s max”

2021-08-15 Thread Bill Yikes
** Description changed:

  I'm starting with a 2.5gb image of a TomTom device which was created
  using dd.  There is no partition table, so the vfat filesystem begins on
  the very first sector.
  
  $ dd if=/dev/sdd of=image_file_2.5gb.dd
  
  This is a migration to a 4gb sd card.  So I used fdisk to see the full
  size of the target media, and allocated a file to match:
  
  $ fallocate -l 4008706048 image_file_4gb.img
  
  Copied the contents bit-for-bit:
  
  $ dd if=image_file_2.5gb.dd of=image_file_4gb.img conv=notrunc
  
  Finally the vfat filesystem thereon must be stretched:
  
  $ fatresize -s max image_file_4gb.img
  fatresize 1.1.0 (20201114)
  part(start=0, end=7829503, length=7829504)
  Segmentation fault
  
  It resembles this bug:
  
  https://github.com/ya-mouse/fatresize/issues/6
  
  but that bug was supposedly fixed 3 yrs prior to version 1.1.0.
+ 
+ 
+ Workaround:
+ 
+ It's worth noting that the segfault can be avoided if the smaller image
+ is copied to a larger SD card, and “fatresize -s max” is run on the
+ unmounted device (e.g. /dev/sdd).

** Description changed:

  I'm starting with a 2.5gb image of a TomTom device which was created
  using dd.  There is no partition table, so the vfat filesystem begins on
  the very first sector.
  
  $ dd if=/dev/sdd of=image_file_2.5gb.dd
  
  This is a migration to a 4gb sd card.  So I used fdisk to see the full
  size of the target media, and allocated a file to match:
  
  $ fallocate -l 4008706048 image_file_4gb.img
  
  Copied the contents bit-for-bit:
  
  $ dd if=image_file_2.5gb.dd of=image_file_4gb.img conv=notrunc
  
  Finally the vfat filesystem thereon must be stretched:
  
  $ fatresize -s max image_file_4gb.img
  fatresize 1.1.0 (20201114)
  part(start=0, end=7829503, length=7829504)
  Segmentation fault
  
  It resembles this bug:
  
  https://github.com/ya-mouse/fatresize/issues/6
  
  but that bug was supposedly fixed 3 yrs prior to version 1.1.0.
  
- 
  Workaround:
  
  It's worth noting that the segfault can be avoided if the smaller image
  is copied to a larger SD card, and “fatresize -s max” is run on the
- unmounted device (e.g. /dev/sdd).
+ unmounted device (e.g. /dev/sdd) instead of running it on an image file.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1940001

Title:
  Segmentation fault when using “-s max”

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/fatresize/+bug/1940001/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1940001] [NEW] Segmentation fault when using “-s max”

2021-08-15 Thread Bill Yikes
Public bug reported:

I'm starting with a 2.5gb image of a TomTom device which was created
using dd.  There is no partition table, so the vfat filesystem begins on
the very first sector.

$ dd if=/dev/sdd of=image_file_2.5gb.dd

This is a migration to a 4gb sd card.  So I used fdisk to see the full
size of the target media, and allocated a file to match:

$ fallocate -l 4008706048 image_file_4gb.img

Copied the contents bit-for-bit:

$ dd if=image_file_2.5gb.dd of=image_file_4gb.img conv=notrunc

Finally the vfat filesystem thereon must be stretched:

$ fatresize -s max image_file_4gb.img
fatresize 1.1.0 (20201114)
part(start=0, end=7829503, length=7829504)
Segmentation fault

It resembles this bug:

https://github.com/ya-mouse/fatresize/issues/6

but that bug was supposedly fixed 3 yrs prior to version 1.1.0.

** Affects: fatresize (Ubuntu)
 Importance: Undecided
 Status: New

-- 
https://bugs.launchpad.net/bugs/1940001

Title:
  Segmentation fault when using “-s max”

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/fatresize/+bug/1940001/+subscriptions



[Bug 1939911] [NEW] kitty gives “ValueError: Failed to create GLFWwindow” under Sway

2021-08-13 Thread Bill Yikes
Public bug reported:

Running kitty with Xwayland and Sway installed gives:


[225 13:13:30.143763] [glfw error 65543]: EGL: Failed to create context: 
Arguments are inconsistent
[225 13:13:30.152170] Traceback (most recent call last):
  File "/usr/bin/../lib/kitty/kitty/main.py", line 344, in main
_main()
  File "/usr/bin/../lib/kitty/kitty/main.py", line 337, in _main
run_app(opts, cli_opts, bad_lines)
  File "/usr/bin/../lib/kitty/kitty/main.py", line 160, in __call__
_run_app(opts, args, bad_lines)
  File "/usr/bin/../lib/kitty/kitty/main.py", line 133, in _run_app
window_id = create_os_window(
ValueError: Failed to create GLFWwindow


It's apparently a recurrence of this prematurely closed upstream bug
report:

https://github.com/kovidgoyal/kitty/issues/98

Following the suggestions in that discussion, my user was already in the
audio and video groups.  I installed mesa-utils and ran glxgears, which
shows animated gears.  This further suggests that the underlying
machinery is in place.

** Affects: kitty (Ubuntu)
 Importance: Undecided
 Status: New

-- 
https://bugs.launchpad.net/bugs/1939911

Title:
  kitty gives “ValueError: Failed to create GLFWwindow” under Sway

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/kitty/+bug/1939911/+subscriptions



[Bug 1938719] Re: (feature request) support for non-stdio plugins (like Hydroxide)

2021-08-03 Thread Bill Yikes
> Lucas, it is still a support request rather than a feature request
since the feature is already there

It's not.  You misunderstood the ticket.  Your first response mirrors my
opening statement, but neglects everything that followed.

The "plugin" option only works with stdio.  This ticket is to implement
a new capability: to spawn an executable that does *not* talk to
fetchmail via standard i/o.  My opening statement and your response is
the /workaround/ because that capability is lacking.

-- 
https://bugs.launchpad.net/bugs/1938719

Title:
  (feature request) support for non-stdio plugins (like Hydroxide)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/fetchmail/+bug/1938719/+subscriptions



[Bug 1938719] [NEW] (feature request) support for non-stdio plugins (like Hydroxide)

2021-08-02 Thread Bill Yikes
Public bug reported:

Protonmail-Hydroxide users are inconvenienced by having to manually
execute this command prior to running fetchmail:

$ torsocks hydroxide imap

The fetchmailrc stanza looks like this:

skip protonmail-hydroxide via 127.0.0.1
protocol  imap
port  1143
username  "billyikes"
sslproto  ''
fetchall

The workflow above functions, but it's a nuisance to have to execute a
daemon manually before running fetchmail.  In principle, it would be
ideal to add: 'plugin "torsocks hydroxide imap"', but the problem is
that fetchmail has the limitation of having to interact with plugins
through standard I/O, which is unsupported by hydroxide.

I suggest adding the following options:

stdio <boolean>
plugin_is_a_daemon <boolean>
kill_plugin <boolean>

If the stdio boolean is true, then the plugin mechanism works the way it
always has.  If the boolean is false, then fetchmail connects to the IP
and port given in the stanza.

If plugin_is_a_daemon is true, then fetchmail should check whether the
process is already running and if not fetchmail should launch it and
disown it before fetching the mail.

If kill_plugin is true, fetchmail should kill the plugin after each
fetch.

Perhaps those 3 options could be consolidated into fewer options but I'm
merely brainstorming the degree of control that would be useful to
users.
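
To make the proposal concrete, a stanza using the suggested options
might look like this (the option names are the proposals above, not
existing fetchmail keywords, so this is a sketch rather than working
configuration):

```
skip protonmail-hydroxide via 127.0.0.1
protocol  imap
port  1143
username  "billyikes"
plugin  "torsocks hydroxide imap"
stdio  false
plugin_is_a_daemon  true
kill_plugin  false
```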

*Alternatively*

Perhaps there is a hack that avoids a fetchmail code change.  For
example, would it work to layer in socat alongside hydroxide? Such as:

plugin "torsocks hydroxide imap & socat STDIO TCP:127.0.0.1:%h:%p"

I'm not really proficient with socat (it's very complex), but if there is
a way to launch hydroxide and combine socat as middleware to keep
fetchmail code simple, then this feature request could boil down to
simply adding an example of a hydroxide configuration to the man page.

FYI, Hydroxide lives here: https://github.com/emersion/hydroxide

BTW, I know the upstream tracker is normally the appropriate place for
feature requests, but gitlab.com is inaccessible to myself and probably
many others (largely people using Tor or whose ISP uses CGNAT).  So I've
posted it here just to get it on record, in hopes that an upstream
developer sees it.

** Affects: fetchmail (Ubuntu)
 Importance: Undecided
 Status: New

** Description changed:

  Protonmail-Hydroxide users are inconvenienced with having to manually
  execute this command prior to running fetchmail:
  
  $ torsocks hydroxide imap
  
  The fetchmailrc stanza looks like this:
  
  skip protonmail-hydroxide via 127.0.0.1
- protocol  imap
- port  1143
- username  "billyikes"
- sslproto  ''
- fetchall
+ protocol  imap
+ port  1143
+ username  "billyikes"
+ sslproto  ''
+ fetchall
  
  The workflow above functions, but it's a nuisance to have to execute a
  daemon manually before running fetchmail.  In principle, it would be
  ideal to add: 'plugin "torsocks hydroxide imap"', but the problem is
- that fetchmail has the limitation of having to use standard I/O, which
- is unsupported by hydroxide.
+ that fetchmail has the limitation of having to interact with plugins
+ through standard I/O, which is unsupported by hydroxide.
  
  I suggest adding the following options:
  
  stdio 
  plugin_is_a_daemon 
  kill_plugin 
  
  If the stdio boolean is true, then the plugin mechanism works the way it
  always has.  If the boolean is false, then fetchmail connects to the IP
  and port given in the stanza.
  
  If plugin_is_a_daemon is true, then fetchmail should check whether the
  process is already running and if not fetchmail should launch it and
  disown it before fetching the mail.
  
  If kill_plugin is true, fetchmail should kill the plugin after each
  fetch.
  
  Perhaps those 3 options could be consolidated into fewer options but I'm
  merely brainstorming the degree of control that would be useful to
  users.
  
  *Alternatively*
  
  Perhaps there is a hack that avoids a fetchmail code change.  For
  example, would it work to layer in socat alongside hydroxide? Such as:
  
  plugin "torsocks hydroxide imap & socat STDIO TCP:127.0.0.1:%h:%p"
  
  I'm not really proficient with socat (it's very complex), but if there is
  a way to launch hydroxide and combine socat as middleware to keep
  fetchmail code simple, then this feature request could boil down to
  simply adding an example of a hydroxide configuration to the man page.
  
  FYI, Hydroxide lives here: https://github.com/emersion/hydroxide

** Description changed:

  Protonmail-Hydroxide users are inconvenienced with having to manually
  execute this command prior to running fetchmail:
  
  $ torsocks hydroxide imap
  
  The fetchmailrc stanza looks like this:
  
  skip protonmail-hydroxide via 127.0.0.1
  protocol  imap
  port  1143
  username  "billyikes"
  sslproto  ''
  fetchall
  
  The workflow above 

[Bug 1938548] Re: screen capture broken; ImageMagick "import" command fails

2021-07-30 Thread Bill Yikes
** Description changed:

  When running Xorg+xwayland+wayland+sway, the ImageMagick "import"
  command was executed on the commandline.  The pointer was then dragged
across an Ungoogled Chromium window.  It should have drawn a rectangle to
  show the area being snapshot'd, but no box appeared.  When the mouse
  button was released, this error was printed in the terminal:
  
  import-im6.q16: unable to read X window image `': Resource temporarily 
unavailable @ error/xwindow.c/XImportImage/4977.
  import-im6.q16: missing an image filename `/tmp/sample.png' @ 
error/import.c/ImportImageCommand/1276.
  
  No image file was produced.
  
  Chromium only works in Wayland for some people and only with some
  special parameters passed to it.  I could not get Ungoogled Chromium to
  run in Wayland, so I had to run it like normal on Xwayland.
+ 
+ This bug is mirrored here:
+ 
+ https://bugs.launchpad.net/ubuntu/+source/imagemagick/+bug/1938556
+ 
+ Most likely ImageMagick is the culprit.

-- 
https://bugs.launchpad.net/bugs/1938548

Title:
  screen capture broken; ImageMagick "import" command fails

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/xwayland/+bug/1938548/+subscriptions



[Bug 1938556] [NEW] screen capture broken-- "import" command fails in Wayland

2021-07-30 Thread Bill Yikes
Public bug reported:

The import command fails to work in environments configured with the
following components:

* Xorg
* Wayland
* Xwayland
* Sway

With all of those packages installed, import fails.  More details of
this bug are mirrored in the following bug against Xwayland:

https://bugs.launchpad.net/ubuntu/+source/xwayland/+bug/1938548

Is the bug in Xwayland or ImageMagick?  I'm not certain, but the fact
that grim, slurp, and grimshot work in this environment suggests that
the problem is most likely with ImageMagick.

** Affects: imagemagick (Ubuntu)
 Importance: Undecided
 Status: New

-- 
https://bugs.launchpad.net/bugs/1938556

Title:
  screen capture broken-- "import" command fails in Wayland

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/imagemagick/+bug/1938556/+subscriptions



[Bug 1834636] Re: Import selection rectangle leaves trails visible in screenshot

2021-07-30 Thread Bill Yikes
I've noticed that this bug manifests whenever the screen is animated.
And sometimes the screen /appears/ to be still, but perhaps some crappy
javascript does something like repeated refreshing to make the still
image "animated".

A good reproducer: use a browser other than Tor Browser to access a
Cloudflared website like opensecrets.org, and do so over Tor.  In this
case, Cloudflare will push a CAPTCHA.  Using import on that captcha
consistently results in trails of selection rectangles.

When the screen is truly still, the import command works fine,
generally.  In any case, I concur that this bug certainly exists.

-- 
https://bugs.launchpad.net/bugs/1834636

Title:
  Import selection rectangle leaves trails visible in screenshot

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/imagemagick/+bug/1834636/+subscriptions



[Bug 1938548] [NEW] screen capture broken; ImageMagick "import" command fails

2021-07-30 Thread Bill Yikes
Public bug reported:

When running Xorg+xwayland+wayland+sway, the ImageMagick "import"
command was executed on the commandline.  The pointer was then dragged
across an Ungoogled Chromium window.  It should have drawn a rectangle to
show the area being snapshot'd, but no box appeared.  When the mouse
button was released, this error was printed in the terminal:

import-im6.q16: unable to read X window image `': Resource temporarily 
unavailable @ error/xwindow.c/XImportImage/4977.
import-im6.q16: missing an image filename `/tmp/sample.png' @ 
error/import.c/ImportImageCommand/1276.

No image file was produced.

Chromium only works in Wayland for some people and only with some
special parameters passed to it.  I could not get Ungoogled Chromium to
run in Wayland, so I had to run it like normal on Xwayland.

** Affects: xwayland (Ubuntu)
 Importance: Undecided
 Status: New

-- 
https://bugs.launchpad.net/bugs/1938548

Title:
  screen capture broken; ImageMagick "import" command fails

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/xwayland/+bug/1938548/+subscriptions



[Bug 1937874] [NEW] one --accept-regex expression negates another

2021-07-23 Thread Bill Yikes
Public bug reported:

This command should theoretically fetch all PDFs on a page:

$ wget -v -d -r --level 1 --adjust-extension --no-clobber --no-directories\
   --accept-regex 'administrative-orders/.*/administrative-order-matter-'\
   --accept-regex 'administrative-orders.*.pdf'\
   --accept-regex 'administrative-orders.page[^&]*$'\
   --directory-prefix=/tmp\
   
'https://www.ncua.gov/regulation-supervision/enforcement-actions/administrative-orders?page=56'

But it fails to grab any of them, giving the output:

---
Deciding whether to enqueue 
"https://www.ncua.gov/files/administrative-orders/AO14-0241-R4.pdf".
https://www.ncua.gov/files/administrative-orders/AO14-0241-R4.pdf is 
excluded/not-included through regex.
Decided NOT to load it.
---

That's bogus.  The workaround is to remove this option:

--accept-regex 'administrative-orders.page[^&]*$'

But that should not be necessary.  Adding an --accept-* clause should
never cause another --accept-* clause to become invalidated and it
should not shrink the set of fetched files.
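
The expected OR semantics can be sanity-checked outside wget.  The URL
rejected in the log does match the second pattern, so an accept list
interpreted as "accept if ANY pattern matches" should have enqueued it
(this is just an illustration with Python's regex engine, not wget's
actual matching code):

```python
import re

# The URL that wget's debug output says was "excluded/not-included".
url = "https://www.ncua.gov/files/administrative-orders/AO14-0241-R4.pdf"

# The three --accept-regex expressions from the command above.
patterns = [
    r"administrative-orders/.*/administrative-order-matter-",
    r"administrative-orders.*.pdf",
    r"administrative-orders.page[^&]*$",
]

# The PDF URL matches the second pattern, so with OR semantics the
# accept list should have admitted it.
matches = [bool(re.search(p, url)) for p in patterns]
print(matches)  # [False, True, False]
```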

** Affects: wget (Ubuntu)
 Importance: Undecided
 Status: New

-- 
https://bugs.launchpad.net/bugs/1937874

Title:
  one --accept-regex expression negates another

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/wget/+bug/1937874/+subscriptions



[Bug 1937850] [NEW] the -L / --relative option breaks --accept-regex

2021-07-23 Thread Bill Yikes
Public bug reported:

This code should in principle (per the docs) fetch a few *.pdf files:

$ wget -r --level 1 --adjust-extension --relative --no-clobber --no-directories\
   --domains=ncua.gov --accept-regex 'administrative-orders/.*.pdf'\
   
'https://www.ncua.gov/regulation-supervision/enforcement-actions/administrative-orders?page=22=year=desc='

But it misses all *.pdf files.  When the --relative option is removed,
the PDF files are downloaded.  However, when you examine the tree-top
HTML file, the links pointing to PDF files actually are relative.

** Affects: wget (Ubuntu)
 Importance: Undecided
 Status: New

-- 
https://bugs.launchpad.net/bugs/1937850

Title:
  the -L / --relative option breaks --accept-regex

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/wget/+bug/1937850/+subscriptions



[Bug 1937311] [NEW] copy-dir-locals-to-file-locals-prop-line command does nothing in LaTeX mode

2021-07-22 Thread Bill Yikes
Public bug reported:

When in latex-mode with a latex source file in the buffer, running:

  M-x copy-dir-locals-to-file-locals-prop-line

does nothing.  It doesn't matter whether there is an existing line with
-*- -*- or not.

It's a bit annoying, because there are 4 latex engines to choose from in
the configuration menu, and it would be convenient to be able to make
emacs choose the correct engine for the file at hand (because different
files work with different engines).

Discovered on GNU Emacs 27.1
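
For reference, the kind of prop line the command would be expected to
write into a LaTeX buffer looks like the first line below (assuming
AUCTeX's TeX-engine variable; the variable name and value here are an
example, not output from the command):

```latex
% -*- mode: latex; TeX-engine: xetex; -*-
\documentclass{article}
\begin{document}
Hello.
\end{document}
```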

** Affects: emacs (Ubuntu)
 Importance: Undecided
 Status: New

-- 
https://bugs.launchpad.net/bugs/1937311

Title:
  copy-dir-locals-to-file-locals-prop-line command does nothing in LaTeX
  mode

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/emacs/+bug/1937311/+subscriptions



[Bug 1937082] [NEW] gunzip fails to extract data from stdin when the zip holds multiple files

2021-07-21 Thread Bill Yikes
Public bug reported:

There is no good reason for this command to fail:

$ wget --quiet -O -
https://web.archive.org/web/20210721004028/freefontsdownload.net/download/76451/lucida_fax.zip
| gunzip -

The output is:

[InternetShortcut]
URL=HOMESITEfree-lucida_fax-font-76451.htm
gzip: stdin has more than one entry--rest ignored

What's happening is gzip logic has gotten tangled up with the gunzip
logic. If gzip receives an input stream, it's sensible that the
resulting archive contain just one file. But there's no reason gunzip
should not be able to produce multiple files from a zip stream.  Note
that -c was not given to gunzip, so it should not have the constraints
that use of stdout would impose.

The man page is also a problem. The gunzip portion of the aggregated man
page makes no statement about how stdin is expected to operate.  At a
minimum, it should say that a minus ("-") directs gunzip to read from
stdin.
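
As a practical matter, a zip archive (unlike a gzip stream) has a
central directory and can hold many members, so extracting every member
from a piped stream requires a zip-aware tool that buffers the input
first.  A sketch of that idea in standard-library Python (my own
workaround, not a gzip patch):

```python
import io
import zipfile

def extract_all_from_stream(stream, dest="."):
    """Buffer a zip byte stream (ZipFile needs a seekable file object)
    and extract every member, not just the first one."""
    data = io.BytesIO(stream.read())
    with zipfile.ZipFile(data) as zf:
        zf.extractall(dest)
        return zf.namelist()

# Piped usage would look like:
#   names = extract_all_from_stream(sys.stdin.buffer)
```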

** Affects: gzip (Ubuntu)
 Importance: Undecided
 Status: New

-- 
https://bugs.launchpad.net/bugs/1937082

Title:
  gunzip fails to extract data from stdin when the zip holds multiple
  files

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/gzip/+bug/1937082/+subscriptions



[Bug 1936953] [NEW] evince no longer renders grayscale DjVu docs

2021-07-20 Thread Bill Yikes
Public bug reported:

Evince used to have no problem showing a grayscale djvu file. Version
3.22.1 was fine. Now evince version 3.38.2 presents an error: 'Unable to
open document "grayscale_sample.djvu".  File type DjVu image
(image/vnd.djvu) is not supported.'  Now evince can only render bilevel
DjVu images.

To reproduce, start with a grayscale *.pgm file. Use ImageMagick if
needed to obtain one, as follows:

$ convert logo: sample_grayscale.pgm

Convert to djvu:

$ c44 sample_grayscale.pgm sample_grayscale.djvu

Finally:

$ evince sample_grayscale.djvu

(broken)

** Affects: evince (Ubuntu)
 Importance: Undecided
 Status: New

-- 
https://bugs.launchpad.net/bugs/1936953

Title:
  evince no longer renders grayscale DjVu docs

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/evince/+bug/1936953/+subscriptions



[Bug 1935581] Re: evince no longer shows bookmarks for DjVu files

2021-07-18 Thread Bill Yikes
** Description changed:

  Evince used to have no problem showing bookmarks when viewing a djvu
  file.  Version 3.22.1 was fine.  Now evince version 3.38.2 just shows a
  blank pane for bookmarks.
  
  I'm not sure if the bug is in evince or in djvused, so the bug report is
  mirrored for each package.  The djvused bug report has more details:
  
  https://bugs.launchpad.net/ubuntu/+source/djvulibre/+bug/1935580
+ 
+ Note that I would have reported this upstream, but I was unable to
+ register on gitlab.gnome.org.  I forgot what the issue was, but I
+ probably failed some anti-bot test.

-- 
https://bugs.launchpad.net/bugs/1935581

Title:
  evince no longer shows bookmarks for DjVu files

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/evince/+bug/1935581/+subscriptions



[Bug 1754431] Re: multiple urls not opening on commandline

2021-07-14 Thread Bill Yikes
On torbrowser-launcher 0.3.3, same issues persist.  URLs passed on the
commandline are ignored.

I also concur with the feature request/bug regarding the ability to add
tabs to existing sessions.  It appears the capability was there at one
time but later removed.  Specifically, I believe the "-allow-remote" CLI
option would allow additional URLs to be passed in to an existing
process, but that option is no longer recognized.  It would be a useful
capability - please restore it.

BTW, I should say the title is a little misleading.  It's not only cases
of multiple URLs that trigger browser inaction.  If one tries to pass
just one URL on the commandline, even that is ignored.

-- 
https://bugs.launchpad.net/bugs/1754431

Title:
  multiple urls not opening on commandline

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/torbrowser-launcher/+bug/1754431/+subscriptions



[Bug 1935749] [NEW] [string totitle] does not actually convert to title case

2021-07-10 Thread Bill Yikes
Public bug reported:

The [string totitle] operation is falsely named and falsely documented.
The actual behavior of this operation gives /sentence case/, where only
the first character of the whole string is capitalized.  Title case
requires the first character of every word in the string to be
capitalized.

I suggest:

* rename "totitle" to "tosentence"
* create a new "totitle" function that actually delivers title casing

Falling short of the above, if it's determined that changing this
behavior would trigger too many problems with pre-existing code, at a
minimum the man page should be updated to make the false naming clear.
It should start with something like "Does not return title case but
actually returns sentence case" and other mentions of "title case" in
the description should be replaced with "sentence case".
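
The distinction the report draws maps onto operations other languages
expose explicitly; Python, for instance, separates the two (shown only
as an illustration of the terminology, not as Tcl code):

```python
s = "the quick brown fox"

# Sentence case: only the first character of the string is capitalized.
# This is what Tcl's [string totitle] actually does.
print(s.capitalize())  # The quick brown fox

# Title case: the first character of every word is capitalized.
# This is what the name "totitle" implies.
print(s.title())       # The Quick Brown Fox
```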

** Affects: tcl8.6 (Ubuntu)
 Importance: Undecided
 Status: New

-- 
https://bugs.launchpad.net/bugs/1935749

Title:
  [string totitle] does not actually convert to title case

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/tcl8.6/+bug/1935749/+subscriptions


[Bug 1935724] Re: adding a "like" clause breaks "collate nocase"

2021-07-09 Thread Bill Yikes
** Description changed:

  /* This shows 1800+ records.  Note that foo_tbl values are all title
  case (except one record), while all values in bar_tbl are all uppercase.
  This is why "collate nocase" is important */
  
  select foo_tbl.name,trim(bar_tbl.name),foo_tbl.host,bar_tbl.host from
  foo_tbl join bar_tbl on bar_tbl.uid = foo_tbl.uid where
  trim(foo_tbl.name) = trim(bar_tbl.name) collate nocase;
  
  /* Adding 'and foo_tbl.host like "%"' should have no effect, but in fact
  only 1 record is shown.  foo_tbl has 1 record where the capitalization
  matches.  This indicates that the new "like" condition is breaking the
  "collate nocase" */
  
  select foo_tbl.name,trim(bar_tbl.name),foo_tbl.host,bar_tbl.host from
  foo_tbl join bar_tbl on bar_tbl.uid = foo_tbl.uid where
  trim(foo_tbl.name) = trim(bar_tbl.name) and foo_tbl.host like '%'
  collate nocase;
  
- /* Workaround:  This hacks around the above problem shows 1800+ records.
- */
+ /* Workaround:  This works around the above problem and shows 1800+ records
+ without having to give up the "like" clause. But it's ugly. */
  
  select * from (select foo_tbl.name,trim(bar_tbl.name),foo_tbl.host,host
  from foo_tbl join bar_tbl on bar_tbl.uid = foo_tbl.id where
  trim(foo_tbl.name) = trim(bar_tbl.name) collate nocase) where
  foo_tbl.host like '%';

-- 
https://bugs.launchpad.net/bugs/1935724

Title:
  adding a "like" clause breaks "collate nocase"

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/sqlite/+bug/1935724/+subscriptions


[Bug 1935724] [NEW] adding a "like" clause breaks "collate nocase"

2021-07-09 Thread Bill Yikes
Public bug reported:

/* This shows 1800+ records.  Note that foo_tbl values are all title
case (except one record), while all values in bar_tbl are all uppercase.
This is why "collate nocase" is important */

select foo_tbl.name,trim(bar_tbl.name),foo_tbl.host,bar_tbl.host from
foo_tbl join bar_tbl on bar_tbl.uid = foo_tbl.uid where
trim(foo_tbl.name) = trim(bar_tbl.name) collate nocase;

/* Adding 'and foo_tbl.host like "%"' should have no effect, but in fact
only 1 record is shown.  foo_tbl has 1 record where the capitalization
matches.  This indicates that the new "like" condition is breaking the
"collate nocase" */

select foo_tbl.name,trim(bar_tbl.name),foo_tbl.host,bar_tbl.host from
foo_tbl join bar_tbl on bar_tbl.uid = foo_tbl.uid where
trim(foo_tbl.name) = trim(bar_tbl.name) and foo_tbl.host like '%'
collate nocase;

/* Workaround:  This works around the above problem and shows 1800+
records. */

select * from (select foo_tbl.name,trim(bar_tbl.name),foo_tbl.host,host
from foo_tbl join bar_tbl on bar_tbl.uid = foo_tbl.id where
trim(foo_tbl.name) = trim(bar_tbl.name) collate nocase) where
foo_tbl.host like '%';
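
My reading of what's happening (an interpretation, not confirmed
upstream): SQLite's postfix COLLATE binds to the expression it
immediately follows, so in the failing query the trailing "collate
nocase" attaches to the LIKE's '%' operand rather than to the earlier
"=", which falls back to BINARY comparison.  Keeping the collation next
to the equality restores the expected rows.  A minimal sketch with
throwaway tables:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE foo_tbl (uid INTEGER, name TEXT, host TEXT);
    CREATE TABLE bar_tbl (uid INTEGER, name TEXT);
    INSERT INTO foo_tbl VALUES (1, 'Alice', 'example.org');
    INSERT INTO bar_tbl VALUES (1, 'ALICE');
""")

base = (
    "SELECT foo_tbl.name, bar_tbl.name FROM foo_tbl "
    "JOIN bar_tbl ON bar_tbl.uid = foo_tbl.uid "
)

# COLLATE kept next to the '=' comparison: the case-insensitive match
# survives the extra LIKE clause.
rows = con.execute(
    base + "WHERE trim(foo_tbl.name) = trim(bar_tbl.name) COLLATE NOCASE "
           "AND foo_tbl.host LIKE '%'"
).fetchall()

# COLLATE trailing the LIKE (as in the failing query): it attaches to
# the LIKE's pattern, the '=' uses BINARY, and the row is lost.
rows2 = con.execute(
    base + "WHERE trim(foo_tbl.name) = trim(bar_tbl.name) "
           "AND foo_tbl.host LIKE '%' COLLATE NOCASE"
).fetchall()

print(rows)   # [('Alice', 'ALICE')]
print(rows2)  # []
```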

** Affects: sqlite (Ubuntu)
 Importance: Undecided
 Status: New

-- 
https://bugs.launchpad.net/bugs/1935724

Title:
  adding a "like" clause breaks "collate nocase"

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/sqlite/+bug/1935724/+subscriptions


[Bug 1935689] [NEW] djvm loses bookmarks on merges with -create

2021-07-09 Thread Bill Yikes
Public bug reported:

Merging multiple djvu files should not be a lossy operation, apart from
metadata that only has 1 slot (e.g. author, title, etc.).  The bookmarks
(aka the outline) are lost, not merged:

$ djvused -e 'print-outline' file-a.djvu

(bookmarks
 ("foo"
  "#1" ) )

$ djvused -e 'print-outline' file-b.djvu

(bookmarks
 ("bar"
  "#1" ) )

$ djvm -c mergetest.djvu file-a.djvu file-b.djvu

$ djvused -e 'print-outline' mergetest.djvu

(no output)

This data loss is also undocumented.  While it wouldn't be necessary to
document obvious unavoidable data loss such as doc-wide fields of author
and title, the loss of bookmarks is due to an incomplete implementation.
And it's unexpected.  So this should either be fixed, or the man page
should be updated perhaps with a BUGS section warning users of the data
loss.
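
A manual workaround sketch (my own approach, untested against djvulibre
itself): dump each file's outline with "djvused -e 'print-outline'"
before merging, combine the s-expressions with the second file's page
targets shifted by the first file's page count, and re-apply the result
with set-outline.  The merge step, at the string level:

```python
import re

def merge_outlines(outline_a, outline_b, pages_in_a):
    """Merge two djvused outline s-expressions (given as strings),
    shifting the "#N" page targets of the second file by the first
    file's page count.  String-level sketch; assumes the simple "#N"
    target form shown in the report."""
    def inner(s):
        # Drop the enclosing (bookmarks ... ) wrapper.
        s = s.strip()
        assert s.startswith("(bookmarks") and s.endswith(")")
        return s[len("(bookmarks"):-1].strip()

    shifted_b = re.sub(
        r'"#(\d+)"',
        lambda m: '"#%d"' % (int(m.group(1)) + pages_in_a),
        inner(outline_b),
    )
    return "(bookmarks\n %s\n %s )" % (inner(outline_a), shifted_b)
```

The merged text could then be written to a file and loaded back with
something like "djvused mergetest.djvu -e 'set-outline merged.txt' -s"
(file names here are examples).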

** Affects: djvulibre (Ubuntu)
 Importance: Undecided
 Status: New

-- 
https://bugs.launchpad.net/bugs/1935689

Title:
  djvm loses bookmarks on merges with -create

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/djvulibre/+bug/1935689/+subscriptions


[Bug 1935600] [NEW] Does not show bookmarks for DjVu files

2021-07-08 Thread Bill Yikes
Public bug reported:

qpdfview version 0.4.18 just shows a blank pane for bookmarks.

I originally thought the bug was in djvused, but after more testing I've
confirmed that djvused is correctly producing the bookmarks.  All the
details about the problem are here:

https://bugs.launchpad.net/ubuntu/+source/djvulibre/+bug/1935580

** Affects: qpdfview (Ubuntu)
 Importance: Undecided
 Status: New

-- 
https://bugs.launchpad.net/bugs/1935600

Title:
  Does not show bookmarks for DjVu files

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qpdfview/+bug/1935600/+subscriptions


[Bug 1935601] [NEW] Does not show bookmarks for DjVu files

2021-07-08 Thread Bill Yikes
Public bug reported:

Okular version 20.12.3 just shows a blank pane for bookmarks.

I originally thought the bug was in djvused, but after more testing I've
confirmed that djvused is correctly producing the bookmarks. All the
details about the problem are here:

https://bugs.launchpad.net/ubuntu/+source/djvulibre/+bug/1935580

** Affects: okular (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1935601

Title:
  Does not show bookmarks for DjVu files

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/okular/+bug/1935601/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1935580] Re: djvused lost the ability to add bookmarks

2021-07-08 Thread Bill Yikes
I just ran another test:  I used the old evince (3.22.1) to open the
djvu file produced by the new djvused command.  The bookmarks appeared
properly.  This suggests that all three viewers are broken in the same
way.  Perhaps they all rely on a library that broke.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1935580

Title:
  djvused lost the ability to add bookmarks

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/djvulibre/+bug/1935580/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1935580] Re: djvused lost the ability to add bookmarks

2021-07-08 Thread Bill Yikes
** Description changed:

  Following these steps to add bookmarks using the set-outline
  instruction:
  
  https://ebooks.stackexchange.com/questions/7866/how-insert-the-outline-
  the-bookmarks-into-djvu
  
  there is no error.  And if I follow that with:
  
  $ djvused foo.djvu -e 'print-outline'
  
- it faithfully reproduces the outline I gave it.  But when that djvu file
- is opened in the following viewers:
+ it faithfully reproduces the outline I gave it.
+ 
+ This technique was also tested and had the same effect:
+ 
+ https://superuser.com/questions/1170248/how-to-embed-bookmarks-to-djvu-
+ file-using-djvused-djvulibre
+ 
+ When the resulting djvu file is opened in the following viewers:
  
  * evince v.3.38.2
  * okular (with okular-extra-backends)
  * qpdfview (with qpdfview-djvu-plugin)
  
  in all 3 cases the bookmarks pane shows nothing.  So either djvused is
  broken or all three viewers are broken.  I believe okular and evince
  have different backends, so djvused is the likely culprit.
  
  djvulibre-bin version is 3.5.28.  I did not have this problem with
  djvulibre-bin 3.5.27.1 & evince 3.22.1.
  
  Evince bug mirrored here since it's unclear which one has the bug:
  
  https://bugs.launchpad.net/ubuntu/+source/evince/+bug/1935581

** Description changed:

  Following these steps to add bookmarks using the set-outline
  instruction:
  
  https://ebooks.stackexchange.com/questions/7866/how-insert-the-outline-
  the-bookmarks-into-djvu
  
  there is no error.  And if I follow that with:
  
  $ djvused foo.djvu -e 'print-outline'
  
  it faithfully reproduces the outline I gave it.
  
  This technique was also tested and had the same effect:
  
- https://superuser.com/questions/1170248/how-to-embed-bookmarks-to-djvu-
- file-using-djvused-djvulibre
+ https://superuser.com/a/1466837
  
  When the resulting djvu file is opened in the following viewers:
  
  * evince v.3.38.2
  * okular (with okular-extra-backends)
  * qpdfview (with qpdfview-djvu-plugin)
  
  in all 3 cases the bookmarks pane shows nothing.  So either djvused is
  broken or all three viewers are broken.  I believe okular and evince
  have different backends, so djvused is the likely culprit.
  
  djvulibre-bin version is 3.5.28.  I did not have this problem with
  djvulibre-bin 3.5.27.1 & evince 3.22.1.
  
  Evince bug mirrored here since it's unclear which one has the bug:
  
  https://bugs.launchpad.net/ubuntu/+source/evince/+bug/1935581

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1935580

Title:
  djvused lost the ability to add bookmarks

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/djvulibre/+bug/1935580/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1935580] Re: djvused lost the ability to add bookmarks

2021-07-08 Thread Bill Yikes
** Description changed:

  Following these steps to add bookmarks using the set-outline
  instruction:
  
  https://ebooks.stackexchange.com/questions/7866/how-insert-the-outline-
  the-bookmarks-into-djvu
  
  there is no error.  And if I follow that with:
  
  $ djvused foo.djvu -e 'print-outline'
  
  it faithfully reproduces the outline I gave it.  But when that djvu file
  is opened in the following viewers:
  
  * evince v.3.38.2
  * okular (with okular-extra-backends)
  * qpdfview (with qpdfview-djvu-plugin)
  
  in all 3 cases the bookmarks pane shows nothing.  So either djvused is
  broken or all three viewers are broken.  I believe okular and evince
  have different backends, so djvused is the likely culprit.
  
  djvulibre-bin version is 3.5.28.  I did not have this problem with
  djvulibre-bin 3.5.27.1 & evince 3.22.1.
+ 
+ Evince bug mirrored here since it's unclear which one has the bug:
+ 
+ https://bugs.launchpad.net/ubuntu/+source/evince/+bug/1935581

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1935580

Title:
  djvused lost the ability to add bookmarks

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/djvulibre/+bug/1935580/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1935581] [NEW] evince no longer shows bookmarks for DjVu files

2021-07-08 Thread Bill Yikes
Public bug reported:

Evince used to have no problem showing bookmarks when viewing a djvu
file.  Version 3.22.1 was fine.  Now evince version 3.38.2 just shows a
blank pane for bookmarks.

I'm not sure if the bug is in evince or in djvused, so the bug report is
mirrored for each package.  The djvused bug report has more details:

https://bugs.launchpad.net/ubuntu/+source/djvulibre/+bug/1935580

** Affects: evince (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1935581

Title:
  evince no longer shows bookmarks for DjVu files

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/evince/+bug/1935581/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1935580] [NEW] djvused lost the ability to add bookmarks

2021-07-08 Thread Bill Yikes
Public bug reported:

Following these steps to add bookmarks using the set-outline
instruction:

https://ebooks.stackexchange.com/questions/7866/how-insert-the-outline-
the-bookmarks-into-djvu

there is no error.  And if I follow that with:

$ djvused foo.djvu -e 'print-outline'

it faithfully reproduces the outline I gave it.  But when that djvu file
is opened in the following viewers:

* evince v.3.38.2
* okular (with okular-extra-backends)
* qpdfview (with qpdfview-djvu-plugin)

in all 3 cases the bookmarks pane shows nothing.  So either djvused is
broken or all three viewers are broken.  I believe okular and evince
have different backends, so djvused is the likely culprit.

djvulibre-bin version is 3.5.28.  I did not have this problem with
djvulibre-bin 3.5.27.1 & evince 3.22.1.
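For reference, the outline that set-outline consumes is a small
s-expression, as described in the linked stackexchange steps.  The sketch
below writes one to a file (the name outline.txt and the chapter titles
are arbitrary examples) and shows, commented out, the djvused invocations
that were used:

```shell
# Write an outline in the documented s-expression form:
# (bookmarks ("title" "#page") ...), with nesting for sub-sections.
cat > outline.txt <<'EOF'
(bookmarks
 ("Chapter 1" "#1")
 ("Chapter 2" "#10"
  ("Section 2.1" "#12")))
EOF
# With djvulibre-bin installed, the round-trip in question is:
#   djvused foo.djvu -e 'set-outline outline.txt' -s
#   djvused foo.djvu -e 'print-outline'
grep -q '^(bookmarks' outline.txt && echo "outline file written"
```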

** Affects: djvulibre (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1935580

Title:
  djvused lost the ability to add bookmarks

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/djvulibre/+bug/1935580/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1934699] [NEW] Error messages indistinguishable from app or firejail

2021-07-05 Thread Bill Yikes
Public bug reported:

It's really annoying that firejail simply tags error msgs with "Error:",
and not "Firejail error:".

How are users supposed to know if the error is reported by Firejail, or
the app that firejail is running?

This code is littered with anonymous error messages that make
troubleshooting difficult:

https://github.com/netblue30/firejail/blob/master/src/firejail/main.c

E.g.

fprintf(stderr, "Error: too many environment variables\n");

** Affects: firejail (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1934699

Title:
  Error messages indistinguishable from app or firejail

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/firejail/+bug/1934699/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1934698] [NEW] "too short arguments" error

2021-07-05 Thread Bill Yikes
Public bug reported:

This command results in "Error: too short arguments":

$ firejail pastebinit -a '' -b paste.debian.net -i - <<< "hello world"

Even if a "--" parameter terminator is used to prevent firejail from
treating a non-firejail argument as a firejail argument like this:

$ firejail -- pastebinit -a '' -b paste.debian.net -i - <<< "hello
world"

it still results in "Error: too short arguments".  The offending code is
here:

https://github.com/netblue30/firejail/blob/master/src/firejail/main.c#L1028

It's of course overstepping for firejail to impose requirements on args
passed to other applications.  In the example at hand, the "-a ''"
ensures that the author of a pastebin remains unnamed in the event that
pastebinit would otherwise default to something like $(whoami).

This bug triggers in firejail version 0.9.64.4.

** Affects: firejail (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1934698

Title:
  "too short arguments" error

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/firejail/+bug/1934698/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1934570] [NEW] emacs ediff mode treated badly

2021-07-03 Thread Bill Yikes
Public bug reported:

Ratpoison is probably working as designed here, but the design should
probably change.  When running ediff-buffers in emacs, a tiny control
window is spawned.  This control window needs focus to accept commands
while the two buffers under comparison each get half of a split window.
So one window is for keyboard input, and the other window is for
viewing.

In ratpoison, that tiny control window is expanded to consume the whole
frame thus hiding the comparison window.  While it may be possible for
users to create a small tile for the control frame and get the
comparison window in an adjacent frame, it would be extremely tedious to
do that every time someone needs to diff two files.  Perhaps a
workaround within emacs would be to embed the control window as a 3rd
split, but then it takes up more screen estate and still imposes undue
effort on the user.

Ratpoison needs a mechanism to handle that situation more elegantly.
One idea: make the control window a transparent overlay of the
comparison window.  I'm not sure if that requires ratpoison to detect
and give special treatment to emacs, but in any case the status quo
isn't good.

** Affects: ratpoison (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1934570

Title:
  emacs ediff mode treated badly

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ratpoison/+bug/1934570/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1934155] Re: fetchmail can no longer connect to underwood & gives false error msg (TLS issues)

2021-07-02 Thread Bill Yikes
I see that I overlooked the NEWS file.  That's more detailed than I'm
used to seeing.  As I was just now skimming through it, it was clear
that moves made to protect users from weak algorithms assumed they're on
an untunneled connection, which is not always the case.  Sometimes the
SSL is just used for verification and the crypto is just redundant.

I should also mention that I struggled with the "no sslcertck" syntax.
All the options I had been using to that point were in "keyword value"
format, and "no sslcertck" is an exceptional transpose of it.  I first
tried "sslcertck no" because I was sure the key-value order wouldn't
flip.  But in fact the "keyword" really includes a space.  Coupled with
the inconsistency of some keywords starting with "set", I felt I
couldn't trust the man page.  Adding to the confusion, some options take
arguments and some do not, but the Keyword/Option Summary table doesn't
show any BNF and it omits the argument token, making it so we have to
work out from the wording of the "Function" column whether an option is
unary or takes an argument.
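A toy parser — not fetchmail's actual code — shows why the transposed
form trips up anyone expecting a uniform "keyword value" layout:

```python
def naive_parse(line):
    # Split on the first run of whitespace, assuming "keyword value" format.
    parts = line.split(None, 1)
    return (parts[0], parts[1] if len(parts) > 1 else None)

# Ordinary options parse as expected...
assert naive_parse("sslproto 'SSL3+'") == ("sslproto", "'SSL3+'")

# ...but the negated form yields "no" as the keyword, because the real
# keyword ("no sslcertck") contains a space.  A reader (or parser)
# expecting "sslcertck no" is surprised either way.
assert naive_parse("no sslcertck") == ("no", "sslcertck")
```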

Anyway, I appreciate your help and I hope my feedback helps for future
revisions.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1934155

Title:
  fetchmail can no longer connect to underwood & gives false error msg
  (TLS issues)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/fetchmail/+bug/1934155/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1934526] [NEW] Unable to bind hide_window key, another instance/window has it.

2021-07-02 Thread Bill Yikes
Public bug reported:

Opening a second terminal window is blocked.  To reproduce from the
Ratpoison window manager, type "c-t !" to run a script that contains:

$ terminator $geometry --command="screen -S full -c $configfile"

That launches fine, but within that screen session run "sudo -iu root"
followed by:

$ terminator --command='/usr/bin/screen -c $configforroot'

It cannot open a new window.  It says:

ConfigBase::load: Unable to open /etc/xdg/terminator/config ([Errno 2] No such 
file or directory: '/etc/xdg/terminator/config')
Unable to connect to DBUS Server, proceeding as standalone

** (terminator:2367): WARNING **: 14:17:48.988: Binding 'a' 
failed!
Unable to bind hide_window key, another instance/window has it.

This happens with Terminator version 2.1.0 in the Ratpoison window
manager.  Untested on GNOME.

** Affects: terminator (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1934526

Title:
  Unable to bind hide_window key, another instance/window has it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/terminator/+bug/1934526/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1934155] Re: fetchmail can no longer connect to underwood & gives false error msg (TLS issues)

2021-07-02 Thread Bill Yikes
** Changed in: fetchmail (Ubuntu)
   Status: Incomplete => New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1934155

Title:
  fetchmail can no longer connect to underwood & gives false error msg
  (TLS issues)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/fetchmail/+bug/1934155/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1934155] Re: fetchmail can no longer connect to underwood & gives false error msg (TLS issues)

2021-07-02 Thread Bill Yikes
I've also noticed that "auto" and "TLS1+" have the same meaning when
passed to --sslproto.  That's part of the problem.  So now there is
duplication, and no way to get opportunistic crypto.  If auto was
previously opportunistic, then the change to remove it should be rolled
back.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1934155

Title:
  fetchmail can no longer connect to underwood & gives false error msg
  (TLS issues)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/fetchmail/+bug/1934155/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1934155] Re: fetchmail can no longer connect to underwood & gives false error msg (TLS issues)

2021-07-01 Thread Bill Yikes
I appreciate the quick response, but I have to say both of you
overlooked some of the data I presented.  I was able to work around the
bug, but there are still bugs here.  Having read the man page several
more times, I continue to find more anomalies than answers, which I
will elaborate on here.

To be clear, fetchmail changed, not my config.  This config works on
fetchmail 6.3.26 but fails on 6.4.16:

```
skip underwood-onion via 127.0.0.1
protocol imap
port 12345
username "billyikes"
fetchall
```
Nothing in your response suggests that I should expect this config to be
treated differently by the two versions of fetchmail.  But it is.  Verbose
output shows that version 6.4.16 sends a "STARTTLS" even though the config
does not call for SSL/TLS in any way.  That's a bug.  The man page says
"sslproto auto" is a default, which superficially seems reasonable until you
notice that "auto" actually imposes TLS.  This is a distortion of what users
expect "automatic" to mean.

My first attempt at a fix for underwood was to add these lines (which fixed 
some of my stanzas for other servers):
~~~
sslproto 'SSL3+'
no sslcertck
~~~
Intuition suggests this means: "If the server demands SSL, then be permissive
and allow SSL3 or newer, but don't bother checking whether the cert is good or
not; just use it if the server insists."  But what we find is that the behavior
is not intuitive w.r.t. the options.  And because it fails the rule of least
astonishment, I'm calling it a bug.  Moreover, the new manpage states:

"This option has a dual use, out of historic fetchmail behaviour. It
controls both the SSL/TLS  protocol  version  and,  if --ssl  is  not
specified, the STARTTLS behaviour (upgrading the protocol to an SSL or
TLS connection in-band). Some other options may however make TLS
mandatory."

To say sslproto "controls STARTTLS behaviour" without fully specifying
how that behavior is controlled is a recipe for confusion.  An empty
argument is said to disable STARTTLS, but that was always the case.  So
there are two problems here.  The first problem is that STARTTLS is
technical under-the-hood jargon that end users probably should not be
responsible for understanding.  Users are basically told whether SSL/TLS
is used, not what low-level commands are being passed to make that
happen.  The second problem is that auto states "require TLS", but does
not mention the low-level "STARTTLS" that the section mentioned earlier.
To developers, perhaps it is clear that "TLS" and "STARTTLS" are one and
the same, but it's not clear to users who don't know the underlying
details expressed in RFC specs.  IOW, STARTTLS is mentioned just enough
to confuse, and not enough to be understandable.

Apart from the misleading & non-intuitive wording, some things are
plainly false.  E.g. this statement from the man page is simply not
true:

"Only if this option [sslproto] and --ssl are both missing for a poll,
there will be opportunistic TLS for POP3  and  IMAP,  where  fetchmail
will attempt to upgrade to TLSv1 or newer."

If that were a true statement, then the first stanza at the top of this
post would work.  This is a contradiction in the man page.  That is,
"sslproto auto" ironically states that TLS is mandated while
simultaneously saying "auto" is also a default, and yet this behavior is
not opportunistic.  Opportunistic means: accommodate (or even
proactively request) higher levels of security, but ultimately settle
for no security at all if availability demands it.

The "sslproto ''" empty string option is bizarre because it strictly
disables STARTTLS without stating whether SSL is also disabled, apart
from saying that it's incompatible with --ssl.  To expressly /force/ no
encryption can be useful but only in limited & obscure test situations.
It's still good that it exists, but what about the much more common case
of users who need opportunistic connections?  The empty string denies
the opportunity for crypto while all the other sslproto options deny the
possibility of no encryption.

Even more strange is "SSL23".  How often does someone want to insist on
SSL2 or SSL3, but not the higher options?  There should be a "sslproto
SSL2+".

And most importantly there should be a "sslproto none+" to truly deliver
opportunistic sessions.
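To make the terminology concrete, here is a toy model — not fetchmail
code, and the policy names are mine — of the difference between a
mandatory and an opportunistic session-security policy:

```python
def negotiate(server_supports_tls, policy):
    """Toy model of session-security policies (illustrative only)."""
    if policy == "mandatory":
        # Current "auto"/"TLS1+" behavior: no TLS means no session.
        return "tls" if server_supports_tls else "abort"
    if policy == "opportunistic":
        # Requested behavior: prefer TLS, but settle for plaintext
        # if availability demands it.
        return "tls" if server_supports_tls else "plaintext"
    raise ValueError("unknown policy")

# Against a server with no TLS (e.g. an onion mail server where the
# tunnel already provides crypto), only the opportunistic policy works:
assert negotiate(False, "mandatory") == "abort"
assert negotiate(False, "opportunistic") == "plaintext"
# When the server does offer TLS, both policies use it:
assert negotiate(True, "opportunistic") == "tls"
```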

Regarding this suggestion:
```
poll underwood2hj3pwd.onion via 127.0.0.1
plugin "socat STDIO SOCKS4A:127.0.0.1:%h:%p,socksport=9050"
```
which I expanded to:
```
skip underwood2hj3pwd.onion via 127.0.0.1
plugin "socat STDIO SOCKS4A:127.0.0.1:%h:%p,socksport=9050"
protocol   imap
port   143
username   "billyikes"
fetchall
```
result was:
```
fetchmail: running socat STDIO SOCKS4A:127.0.0.1:%h:%p,socksport=9050 (host 
127.0.0.1 service 143)
2021/07/01 16:25:44 socat[2202] E socks: connect request rejected or failed
fetchmail: socket error while fetching from 

[Bug 1934214] [NEW] dual screens are overlapping upon resolution change

2021-06-30 Thread Bill Yikes
Public bug reported:

There is a laptop with an LVDS-1 and a VGA-1 (for the external display).

The first problem is that the native resolution of the external LCD
(VGA-1) is 1680x1050, but that's not the default.  The default
resolution is much lower than 1680x1050.  I'm not sure if that's a xorg
bug or a ratpoison bug.

Anyway, I used xrandr to correct it using these steps:

$ cvt 1680 1050
$ xrandr --newmode...
$ xrandr --addmode...
$ xrandr --output...

The result: the resolution increases noticeably, but whenever the right
frame is in focus about 1/3rd of the image is mirrored on the left
frame.  And when the left frame is in focus, part of the image is
mirrored on the right frame.  This only happens when the resolution is
changed.

This is a serious show-stopper issue.  It's hard to tolerate a low
resolution, and the overlapping after increasing the resolution is also
intolerable.

This could be the same bug, but it's vaguely written:

  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=958535

** Affects: ratpoison (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1934214

Title:
  dual screens are overlapping upon resolution change

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ratpoison/+bug/1934214/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1934155] Re: fetchmail can no longer connect to underwood & gives false error msg (TLS issues)

2021-06-30 Thread Bill Yikes
** Description changed:

  Version 6.4.16 is unable to fetch mail from the underwood onion site.
  This is the output when trying to connect:
  
  fetchmail: normal termination, status 2
  fetchmail: 6.4.16 querying underwood-onion (protocol IMAP) at Wed 30 Jun 2021 
02:10:52 PM UTC: poll started
  fetchmail: Trying to connect to 127.0.0.1/12345...connected.
  fetchmail: IMAP< * OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS 
ID ENABLE IDLE AUTH=PLAIN AUTH=LOGIN] Dovecot ready.
  fetchmail: IMAP> A0001 CAPABILITY
  fetchmail: IMAP< * CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID 
ENABLE IDLE AUTH=PLAIN AUTH=LOGIN
  fetchmail: IMAP< A0001 OK Pre-login capabilities listed, post-login 
capabilities have more.
  fetchmail: IMAP> A0002 STARTTLS
  fetchmail: IMAP< A0002 BAD TLS support isn't enabled.
  fetchmail: 127.0.0.1: upgrade to TLS failed.
  fetchmail: Unknown login or authentication error on billyikes@127.0.0.1
  fetchmail: socket error while fetching from billyikes@underwood-onion
  
  This worked with past versions.  To reproduce, use this stanza in
  .fetchmailrc:
  
  skip underwood-onion via 127.0.0.1
  protocol   imap
  port   12345
  username   "billyikes"
  sslproto   'SSL3+'
   no sslcertck
  fetchall
  
  Note that past working stanzas did not need "sslproto" or "no sslcertck"
  but were introduced to after upgrading to 6.4.16.
  
  run these commands:
  
  $ socat TCP4-LISTEN:12345,reuseaddr,fork
  SOCKS4A:127.0.0.1:underwood2hj3pwd.onion:143,socksport=9050
  
  $ fetchmail -v -d0 underwood-onion
  
  $ pkill socat
  
  It's the same outcome if "sslproto 'SSL23'" is used instead.
  
  This is one report, but there are a few bugs here:
  
  1) inability to complete a handshake over weak TLS protocols. It's an
  onion site, so SSL is not needed for crypto (it's there for a different
  purpose).  So if fetchmail is judging the crypto to be insecure, it's
  overzealous in this case.
  
  2) the "Unknown login or authentication error" is not only a false
  error, it's alarming.  It's the worst kind of false error because it
  tells the user that there's a problem with their account.
  
  3) there is no per-account SOCKS4a config parameter, so users are pushed
  into this inconvenient and ugly hack of running socat and piping through
  that.  The "plugin" parameter does not help in this case because
  fetchmail still attempts to resolve the underwood2hj3pwd.onion outside
  of the proxy.
  
- Bug number 3 has always existed, but 1 & 2 are new regressions.
+ Bug (3) has always existed, but (1) & (2) are new regressions.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1934155

Title:
  fetchmail can no longer connect to underwood & gives false error msg
  (TLS issues)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/fetchmail/+bug/1934155/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1934040] Re: openssl s_client's '-ssl2' & '-ssl3' options gone, prematurely!

2021-06-30 Thread Bill Yikes
** Description changed:

  SSL2 and SSL3 have been hastily removed, apparently by developers who
  are unaware that these protocols serve purposes other than encryption.
  SSL2/3 is *still used* on onion sites.  Why?  For verification.  An
  onion site has inherent encryption, so it matters not how weak the SSL
  crypto is when the purpose is purely to verify that the server is owned
  by who they say it's owned by.
  
  Proof that disclosure attacks on ssl2/3 are irrelevant to onion sites:
  
  https://blog.torproject.org/tls-certificate-for-onion-site
+ https://community.torproject.org/onion-services/overview/
  
  So here is a real world impact case.  Suppose you get your email from
  one of these onion mail servers:
  
  http://onionmail.info/directory.html
  
  Some (if not all) use ssl2/3 on top of Tor's inherent onion tunnel.
  They force users to use ssl2/3, so even if a user configures the client
  not to impose TLS, the server imposes it.  And it's reasonable because
  the ssl2/3 vulns are orthogonal to the use case.
  
  Some users will get lucky and use a mail client that still supports
  ssl2/3.  But there's still a problem: users can no longer use openssl to
  obtain the fingerprint to pin.  e.g.
  
  $ openssl s_client -proxy 127.0.0.1:8118 -connect xhfheq5i37waj6qb.onion:110 
-showcerts
  CONNECTED(0003)
  140124399195456:error:1408F10B:SSL routines:ssl3_get_record:wrong version 
number:../ssl/record/ssl3_record.c:331:
  ---
  no peer certificate available
  ---
  No client certificate CA names sent
  ---
  SSL handshake has read 44 bytes and written 330 bytes
  Verification: OK
  ---
  New, (NONE), Cipher is (NONE)
  Secure Renegotiation IS NOT supported
  Compression: NONE
  Expansion: NONE
  No ALPN negotiated
  Early data was not sent
  Verify return code: 0 (ok)
  ---
  
  That's openssl version 1.1.1k
  
  Being denied the ability to pin the SSL cert is actually a *degradation*
  of security.  Cert Pinning is particularly useful with self-signed
  certs, as is often the scenario with onion sites.
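A sketch of what pinning amounts to (illustrative only; the byte strings
below are fake placeholders for a certificate's DER encoding, and the
helper name is mine):

```python
import hashlib

def fingerprint(der_bytes):
    # Pinning stores a digest of the server's certificate and compares it
    # against the certificate presented at connect time.  A self-signed
    # cert pins just as well as a CA-signed one, which is why losing the
    # ability to fetch the cert via s_client degrades security.
    return hashlib.sha256(der_bytes).hexdigest()

pinned = fingerprint(b"fake-der-cert")
assert fingerprint(b"fake-der-cert") == pinned   # same cert: accept
assert fingerprint(b"tampered-cert") != pinned   # changed cert: reject
```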

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1934040

Title:
  openssl s_client's '-ssl2' & '-ssl3' options gone, prematurely!

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/openssl/+bug/1934040/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1934155] Re: fetchmail can no longer connect to underwood & gives false error msg (TLS issues)

2021-06-30 Thread Bill Yikes
** Description changed:

  Version 6.4.16 is unable to fetch mail from the underwood onion site.
  This is the output when trying to connect:
  
  fetchmail: normal termination, status 2
  fetchmail: 6.4.16 querying underwood-onion (protocol IMAP) at Wed 30 Jun 2021 
02:10:52 PM UTC: poll started
  fetchmail: Trying to connect to 127.0.0.1/12345...connected.
  fetchmail: IMAP< * OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS 
ID ENABLE IDLE AUTH=PLAIN AUTH=LOGIN] Dovecot ready.
  fetchmail: IMAP> A0001 CAPABILITY
  fetchmail: IMAP< * CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID 
ENABLE IDLE AUTH=PLAIN AUTH=LOGIN
  fetchmail: IMAP< A0001 OK Pre-login capabilities listed, post-login 
capabilities have more.
  fetchmail: IMAP> A0002 STARTTLS
  fetchmail: IMAP< A0002 BAD TLS support isn't enabled.
  fetchmail: 127.0.0.1: upgrade to TLS failed.
  fetchmail: Unknown login or authentication error on billyikes@127.0.0.1
  fetchmail: socket error while fetching from billyikes@underwood-onion
  
  This worked with past versions.  To reproduce, use this stanza in
  .fetchmailrc:
  
  skip underwood-onion via 127.0.0.1
  protocol   imap
  port   12345
  username   "billyikes"
  sslproto   'SSL3+'
   no sslcertck
  fetchall
  
  Note that past working stanzas did not need "sslproto" or "no sslcertck"
  but were introduced to after upgrading to 6.4.16.
  
  run these commands:
  
  $ socat TCP4-LISTEN:12345,reuseaddr,fork
  SOCKS4A:127.0.0.1:underwood2hj3pwd.onion:143,socksport=9050
  
  $ fetchmail -v -d0 underwood-onion
  
  $ pkill socat
  
+ It's the same outcome if "sslproto 'SSL23'" is used instead.
+ 
  This is one report, but there are a few bugs here:
  
  1) inability to complete a handshake over weak TLS protocols. It's an
  onion site, so SSL is not needed for crypto (it's there for a different
  purpose).  So if fetchmail is judging the crypto to be insecure, it's
  overzealous in this case.
  
  2) the "Unknown login or authentication error" is not only a false
  error, it's alarming.  It's the worst kind of false error because it
  tells the user that there's a problem with their account.
  
  3) there is no per-account SOCKS4a config parameter, so users are pushed
  into this inconvenient and ugly hack of running socat and piping through
  that.  The "plugin" parameter does not help in this case because
  fetchmail still attempts to resolve the underwood2hj3pwd.onion outside
  of the proxy.
  
- Bug \3 has always existed, but 1 & 2 are new regressions.
+ Bug number 3 has always existed, but 1 & 2 are new regressions.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1934155

Title:
  fetchmail can no longer connect to underwood & gives false error msg
  (TLS issues)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/fetchmail/+bug/1934155/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1934155] [NEW] fetchmail can no longer connect to underwood & gives false error msg (TLS issues)

2021-06-30 Thread Bill Yikes
Public bug reported:

Version 6.4.16 is unable to fetch mail from the underwood onion site.
This is the output when trying to connect:

fetchmail: normal termination, status 2
fetchmail: 6.4.16 querying underwood-onion (protocol IMAP) at Wed 30 Jun 2021 
02:10:52 PM UTC: poll started
fetchmail: Trying to connect to 127.0.0.1/12345...connected.
fetchmail: IMAP< * OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID 
ENABLE IDLE AUTH=PLAIN AUTH=LOGIN] Dovecot ready.
fetchmail: IMAP> A0001 CAPABILITY
fetchmail: IMAP< * CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID 
ENABLE IDLE AUTH=PLAIN AUTH=LOGIN
fetchmail: IMAP< A0001 OK Pre-login capabilities listed, post-login 
capabilities have more.
fetchmail: IMAP> A0002 STARTTLS
fetchmail: IMAP< A0002 BAD TLS support isn't enabled.
fetchmail: 127.0.0.1: upgrade to TLS failed.
fetchmail: Unknown login or authentication error on billyikes@127.0.0.1
fetchmail: socket error while fetching from billyikes@underwood-onion

This worked with past versions.  To reproduce, use this stanza in
.fetchmailrc:

skip underwood-onion via 127.0.0.1
protocol   imap
port   12345
username   "billyikes"
sslproto   'SSL3+'
 no sslcertck
fetchall

Note that past working stanzas did not need "sslproto" or "no sslcertck";
these options had to be introduced after upgrading to 6.4.16.

Run these commands:

$ socat TCP4-LISTEN:12345,reuseaddr,fork
SOCKS4A:127.0.0.1:underwood2hj3pwd.onion:143,socksport=9050

$ fetchmail -v -d0 underwood-onion

$ pkill socat

This is one report, but there are a few bugs here:

1) inability to connect or complete a handshake with weak TLS protocols. It's an
onion site, so SSL is not needed for crypto (it's there for a different
purpose).  So if fetchmail is judging the crypto to be insecure, it's
overzealous in this case.

2) the "Unknown login or authentication error" is not only a false
error, it's alarming.  It's the worst kind of false error because it
tells the user that there's a problem with their account.

3) there is no per-account SOCKS4a config parameter, so users are pushed
into this inconvenient and ugly hack of running socat and piping through
that.  The "plugin" parameter does not help in this case because
fetchmail still attempts to resolve the underwood2hj3pwd.onion outside
of the proxy.

Bug number 3 has always existed, but 1 & 2 are new regressions.

** Affects: fetchmail (Ubuntu)
 Importance: Undecided
 Status: New

** Description changed:

  Version 6.4.16 is unable to fetch mail from the underwood onion site.
  This is the output when trying to connect:
  
  fetchmail: normal termination, status 2
  fetchmail: 6.4.16 querying underwood-onion (protocol IMAP) at Wed 30 Jun 2021 
02:10:52 PM UTC: poll started
  fetchmail: Trying to connect to 127.0.0.1/12345...connected.
  fetchmail: IMAP< * OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS 
ID ENABLE IDLE AUTH=PLAIN AUTH=LOGIN] Dovecot ready.
  fetchmail: IMAP> A0001 CAPABILITY
  fetchmail: IMAP< * CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID 
ENABLE IDLE AUTH=PLAIN AUTH=LOGIN
  fetchmail: IMAP< A0001 OK Pre-login capabilities listed, post-login 
capabilities have more.
  fetchmail: IMAP> A0002 STARTTLS
  fetchmail: IMAP< A0002 BAD TLS support isn't enabled.
  fetchmail: 127.0.0.1: upgrade to TLS failed.
  fetchmail: Unknown login or authentication error on billyikes@127.0.0.1
  fetchmail: socket error while fetching from billyikes@underwood-onion
  
  This worked with past versions.  To reproduce, use this stanza in
  .fetchmailrc:
  
  skip underwood-onion via 127.0.0.1
- protocol   imap
- port   12345
- username   "billyikes"
- sslproto   'SSL3+'
-   no sslcertck
- fetchall
+ protocol   imap
+ port   12345
+ username   "billyikes"
+ sslproto   'SSL3+'
+  no sslcertck
+ fetchall
  
  Note that past working stanzas did not need "sslproto" or "no sslcertck";
  these options had to be introduced after upgrading to 6.4.16.
  
  Run these commands:
  
  $ socat TCP4-LISTEN:12345,reuseaddr,fork
  SOCKS4A:127.0.0.1:underwood2hj3pwd.onion:143,socksport=9050
  
  $ fetchmail -v -d0 underwood-onion
  
  $ pkill socat
  
  This is one report, but there are a few bugs here:
  
  1) inability to connect or complete a handshake with weak TLS protocols. It's an
  onion site, so SSL is not needed for crypto (it's there for a different
  purpose).  So if fetchmail is judging the crypto to be insecure, it's
  overzealous in this case.
  
  2) the "Unknown login or authentication error" is not only a false
  error, it's alarming.  It's the worst kind of false error because it
  tells the user that there's a problem with their account.
  
  3) there is no per-account SOCKS4a config parameter, so users are pushed
  into this inconvenient and ugly hack of running socat and piping through
  that.  The "plugin" parameter does not help in 

[Bug 1934044] [NEW] openssl removed ssl2/3 and broke cURL because curl uses openssl instead of libssl

2021-06-29 Thread Bill Yikes
Public bug reported:

cURL supports a -ssl3 option (and rightly so), but openssl removed it
prematurely (see
https://bugs.launchpad.net/ubuntu/+source/openssl/+bug/1934040).  The
fallout:

torsocks curl --insecure --ssl-allow-beast -ssl3 -vvI 
https://xhfheq5i37waj6qb.onion:110 2>&1 
*   Trying 127.42.42.0:110...
* Connected to xhfheq5i37waj6qb.onion (127.42.42.0) port 110 (#0)
* OpenSSL was built without SSLv3 support
* Closing connection 0

Is it possible that curl's check for ssl3 is flawed?  I say that because
both curl and fetchmail are dependent on the same libssl package, and yet
fetchmail can still do ssl3 but curl can't.  Neither curl nor fetchmail
names "openssl" as a dependency.  So curl perhaps should not look to the
openssl package to detect ssl3 capability.

SSL3 is still useful for onion sites, so curl should do what is necessary
to retain that capability.
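
One quick way to see which SSL backend a given curl binary actually
consults (and hence where the SSLv3 capability decision is really made) is
shown below; the backend name printed is an example, not a claim about any
particular build:

```shell
# The first line of `curl --version` names the SSL library curl was built
# against (e.g. OpenSSL/1.1.1k).  That build-time backend, not the
# installed libssl package alone, determines whether -ssl3 can work.
curl --version | head -n 1
```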

** Affects: curl (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1934044

Title:
  openssl removed ssl2/3 and broke cURL because curl uses openssl
  instead of libssl

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/curl/+bug/1934044/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1934040] [NEW] openssl s_client's '-ssl2' & '-ssl3' options gone, prematurely!

2021-06-29 Thread Bill Yikes
Public bug reported:

SSL2 and SSL3 have been hastily removed, apparently by developers who
are unaware that these protocols serve purposes other than encryption.
SSL2/3 is *still used* on onion sites.  Why?  For verification.  An
onion site has inherent encryption, so it matters not how weak the SSL
crypto is when the purpose is purely to verify that the server is owned
by whom it claims to be owned.

Proof that disclosure attacks on ssl2/3 are irrelevant to onion sites:

https://blog.torproject.org/tls-certificate-for-onion-site

So here is a real world impact case.  Suppose you get your email from
one of these onion mail servers:

http://onionmail.info/directory.html

Some (if not all) use ssl2/3 on top of Tor's inherent onion tunnel.
They force users to use ssl2/3, so even if a user configures the client
not to impose TLS, the server imposes it.  And it's reasonable because
the ssl2/3 vulns are orthogonal to the use case.

Some users will get lucky and use a mail client that still supports
ssl2/3.  But there's still a problem: users can no longer use openssl to
obtain the fingerprint to pin.  e.g.

$ openssl s_client -proxy 127.0.0.1:8118 -connect xhfheq5i37waj6qb.onion:110 
-showcerts
CONNECTED(0003)
140124399195456:error:1408F10B:SSL routines:ssl3_get_record:wrong version 
number:../ssl/record/ssl3_record.c:331:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 44 bytes and written 330 bytes
Verification: OK
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 0 (ok)
---

That's openssl version 1.1.1k

Being denied the ability to pin the SSL cert is actually a *degradation*
of security.  Cert Pinning is particularly useful with self-signed
certs, as is often the scenario with onion sites.
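
When s_client can reach a server, the fingerprint to pin is normally
extracted by piping the presented cert through `openssl x509`.  The same
extraction step can be demonstrated offline against a throwaway
self-signed cert (all file names and the CN below are illustrative):

```shell
# Create a throwaway self-signed cert, standing in for the kind served by
# an onion host (names below are illustrative).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=example.onion" \
  -keyout /tmp/pin-demo.key -out /tmp/pin-demo.crt -days 1 2>/dev/null

# Print the SHA-256 fingerprint a client would pin.
openssl x509 -in /tmp/pin-demo.crt -noout -fingerprint -sha256
```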

** Affects: openssl (Ubuntu)
 Importance: Undecided
 Status: New

** Description changed:

  SSL2 and SSL3 have been hastily removed, apparently by developers who
  are unaware that these protocols serve purposes other than encryption.
  SSL2/3 is *still used* on onion sites.  Why?  For verification.  An
  onion site has inherent encryption, so it matters not how weak the SSL
  crypto is when the purpose is purely to verify that the server is owned
  by whom it claims to be owned.
  
  Proof that disclosure attacks on ssl2/3 are irrelevant to onion sites:
  
  https://blog.torproject.org/tls-certificate-for-onion-site
  
  So here is a real world impact case.  Suppose you get your email from
  one of these onion mail servers:
  
  http://onionmail.info/directory.html
  
  Some (if not all) use ssl2/3 on top of Tor's inherent onion tunnel.
  They force users to use ssl2/3, so even if a user configures the client
  not to impose TLS, the server imposes it.  And it's reasonable because
  the ssl2/3 vulns are orthogonal to the use case.
  
  Some users will get lucky and use a mail client that still supports
  ssl2/3.  But there's still a problem: users can no longer use openssl to
  obtain the fingerprint to pin.  e.g.
  
  $ openssl s_client -proxy 127.0.0.1:8118 -connect xhfheq5i37waj6qb.onion:110 
-showcerts
  CONNECTED(0003)
  140124399195456:error:1408F10B:SSL routines:ssl3_get_record:wrong version 
number:../ssl/record/ssl3_record.c:331:
  ---
  no peer certificate available
  ---
  No client certificate CA names sent
  ---
  SSL handshake has read 44 bytes and written 330 bytes
  Verification: OK
  ---
  New, (NONE), Cipher is (NONE)
  Secure Renegotiation IS NOT supported
  Compression: NONE
  Expansion: NONE
  No ALPN negotiated
  Early data was not sent
  Verify return code: 0 (ok)
  ---
  
+ That's openssl version 1.1.1k
  
- That's openssl version 1.1.1k
+ Being denied the ability to pin the SSL cert is actually a *degradation*
+ of security.  Cert Pinning is particularly useful with self-signed
+ certs, as is often the scenario with onion sites.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1934040

Title:
  openssl s_client's '-ssl2' & '-ssl3' options gone, prematurely!

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/openssl/+bug/1934040/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1933913] Re: "Xorg -configure" results in "modprobe: FATAL: Module fbcon not found in directory /lib/modules/5.10.0-7-amd64"

2021-06-29 Thread Bill Yikes
** Description changed:

  With x11 not running, this was executed: "Xorg -configure".  It should
  simply build a configuration file.  The output below appears in the
  terminal with an error.  It manages to create a config file anyway, but
  what it creates causes "startx" to fall over.  So I am forced to run x11
  without a config file (which is a problem for me because I have two
  displays and need to change the RightOf/LeftOf setting).
  
  OUTPUT:
  
  X.Org X Server 1.20.11
  X Protocol Version 11, Revision 0
  Build Operating System: linux Debian
  Current Operating System: Linux billyikes 5.10.0-7-amd64 #1 SMP Debian 
5.10.40-1 (2021-05-28) x86_64
  Kernel command line: BOOT_IMAGE=/boot/vmlinuz-5.10.0-7-amd64 
root=/dev/mapper/grp-root ro 
rd.luks.name=UUID=279a24dc-1014-6495-38cc-75ce88144f44=cryptdisk quiet
  Build Date: 13 April 2021  04:07:31PM
  xorg-server 2:1.20.11-1 (https://www.debian.org/support)
  Current version of pixman: 0.40.0
- Before reporting problems, check http://wiki.x.org
- to make sure that you have the latest version.
+ Before reporting problems, check http://wiki.x.org
+ to make sure that you have the latest version.
  Markers: (--) probed, (**) from config file, (==) default setting,
- (++) from command line, (!!) notice, (II) informational,
- (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
+ (++) from command line, (!!) notice, (II) informational,
+ (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
  (==) Log file: "/var/log/Xorg.0.log", Time: Tue Jun 29 02:19:12 2021
  List of video drivers:
- amdgpu
- ati
- intel
- nouveau
- qxl
- radeon
- vmware
- modesetting
- fbdev
- vesa
+ amdgpu
+ ati
+ intel
+ nouveau
+ qxl
+ radeon
+ vmware
+ modesetting
+ fbdev
+ vesa
  (++) Using config file: "/root/xorg.conf.new"
  (==) Using config directory: "/etc/X11/xorg.conf.d"
  (==) Using system config directory "/usr/share/X11/xorg.conf.d"
  modprobe: FATAL: Module fbcon not found in directory 
/lib/modules/5.10.0-7-amd64
  intel: waited 2020 ms for i915.ko driver to load
  Number of created screens does not match number of detected devices.
-   Configuration failed.
+   Configuration failed.
  (EE) Server terminated with error (2). Closing log file.
+ 
+ 
+ WORKAROUND:
+ 
+ run "xrandr --current" to see the device identifiers, then run something
+ like "xrandr --output VGA-1 --left-of LVDS-1".  Unfortunately, it's not
+ permanent.
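
Until the config-file problem is fixed, one common way to make the xrandr
workaround stick is to run it from a session startup file.  This is a
sketch: it assumes a startx-style session that sources one of these files,
and the same output names as in the workaround above:

```shell
# Added to ~/.xprofile or ~/.xinitrc (before the line that starts the
# window manager) so the layout is applied at every X startup.
xrandr --output VGA-1 --left-of LVDS-1
```
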

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1933913

Title:
  "Xorg -configure" results in "modprobe: FATAL: Module fbcon not found
  in directory /lib/modules/5.10.0-7-amd64"

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+bug/1933913/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1933913] [NEW] "Xorg -configure" results in "modprobe: FATAL: Module fbcon not found in directory /lib/modules/5.10.0-7-amd64"

2021-06-28 Thread Bill Yikes
Public bug reported:

With x11 not running, this was executed: "Xorg -configure".  It should
simply build a configuration file.  The output below appears in the
terminal with an error.  It manages to create a config file anyway, but
what it creates causes "startx" to fall over.  So I am forced to run x11
without a config file (which is a problem for me because I have two
displays and need to change the RightOf/LeftOf setting).

OUTPUT:

X.Org X Server 1.20.11
X Protocol Version 11, Revision 0
Build Operating System: linux Debian
Current Operating System: Linux billyikes 5.10.0-7-amd64 #1 SMP Debian 
5.10.40-1 (2021-05-28) x86_64
Kernel command line: BOOT_IMAGE=/boot/vmlinuz-5.10.0-7-amd64 
root=/dev/mapper/grp-root ro 
rd.luks.name=UUID=279a24dc-1014-6495-38cc-75ce88144f44=cryptdisk quiet
Build Date: 13 April 2021  04:07:31PM
xorg-server 2:1.20.11-1 (https://www.debian.org/support)
Current version of pixman: 0.40.0
Before reporting problems, check http://wiki.x.org
to make sure that you have the latest version.
Markers: (--) probed, (**) from config file, (==) default setting,
(++) from command line, (!!) notice, (II) informational,
(WW) warning, (EE) error, (NI) not implemented, (??) unknown.
(==) Log file: "/var/log/Xorg.0.log", Time: Tue Jun 29 02:19:12 2021
List of video drivers:
amdgpu
ati
intel
nouveau
qxl
radeon
vmware
modesetting
fbdev
vesa
(++) Using config file: "/root/xorg.conf.new"
(==) Using config directory: "/etc/X11/xorg.conf.d"
(==) Using system config directory "/usr/share/X11/xorg.conf.d"
modprobe: FATAL: Module fbcon not found in directory /lib/modules/5.10.0-7-amd64
intel: waited 2020 ms for i915.ko driver to load
Number of created screens does not match number of detected devices.
  Configuration failed.
(EE) Server terminated with error (2). Closing log file.

** Affects: xorg (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1933913

Title:
  "Xorg -configure" results in "modprobe: FATAL: Module fbcon not found
  in directory /lib/modules/5.10.0-7-amd64"

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/xorg/+bug/1933913/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1933037] Re: (bad docs) man page is inconsistent with wiki, & also inconsistent with system defaults

2021-06-20 Thread Bill Yikes
** Description changed:

  Linux systems are typically configured with either a "wheel" group or a
  "sudo" group to specify users that get sudo privs.  In the case of
  Debian's default config, a "sudo" group exists in /etc/group and also
  the /etc/sudoers file assumes a "sudo" group.  Yet the latest
  wpa_supplicant.conf man page gives all examples using the non-existent
  "wheel" group:
  
  
https://manpages.debian.org/testing/wpasupplicant/wpa_supplicant.conf.5.en.html
  
  The wiki:
  
  https://wiki.debian.org/WiFi/HowToUse#wpa_supplicant
  
  is incomplete, because it neglects the setting that enables users to
  configure the wifi APs.  The wiki assumes:
  
  ctrl_interface=/run/wpa_supplicant
  
  while the man page not only shows a different directory, it shows a
  different syntax:
  
  ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=wheel
  
  Debian-independent docs out in the wild are all over the map on this, which
  exacerbates confusion.  In an attempt to get a grasp on sanity, you would
  think users could simply check which directory was created by the
  wpasupplicant installer.  The answer: it created both
  /var/run/wpa_supplicant and /run/wpa_supplicant -- which obviously
  perpetuates confusion.
  
  Considering /var is short for "variable", and a wifi config is naturally
  quite variable, it's tempting to go with /var/run/wpa_supplicant/.
  Normally the man page would also be the first port of call, but because
  it uses "wheel", it's hard to trust the man page.  At the same time this
  apparently well-written distro-agnostic guide demonstrates using
  /run/wpa_supplicant/:
  
  https://shapeshed.com/linux-wifi
  
  There's another reason to distrust the man page: these config params
  worked on Debian Lenny-based distros:
  
   ctrl_interface=/var/run/wpa_supplicant
   ctrl_interface_group=netdev
  
  Notice that ctrl_interface_group is a separate parameter.  Was that
  parameter made obsolete, or is it simply undocumented?  It should be
  documented in the man page, and if it's obsolete then the manpage should
  still document it and mark it obsolete so users know to adapt their
  configs.
  
  And whoa, where does netdev come from?  A:
  /usr/share/doc/wpasupplicant/README.Debian.gz, which shows:
  
  ctrl_interface=DIR=/run/wpa_supplicant GROUP=netdev
  
  and yes, that's also /run/ without the leading /var/, thus contradicting
  the man page.  Wait, it gets better-- README.gz (not README.Debian.gz)
  shows:
  
  ctrl_interface=/var/run/wpa_supplicant
  ctrl_interface_group=wheel
  
  Does that mean upstream uses /var/run/ & Debian intends to use /run/?
  
  What does "update_config=1" mean?  We see it here: https://shapeshed.com
- /linux-wifi but no mention in any of the man pages, the wiki, or in any
- of the /usr/share/doc/wpasupplicant/* files.  We should expect to find
- it here:
+ /linux-wifi but no mention in any of the man pages,  or in any of the
+ /usr/share/doc/wpasupplicant/* files.  Very brief mention in the wiki,
+ but who's to say the user's network is up to visit the wiki?  We should
+ expect to find it here:
  
https://manpages.debian.org/testing/wpasupplicant/wpa_supplicant.conf.5.en.html
  but that's just a few examples-- no proper documentation.
  "update_config" is covered by arch linux:
  https://wiki.archlinux.org/title/wpa_supplicant/. It turns out to be a
  minimal config option to support wpa_cli, yet the wpa_cli man page makes
  no mention of this option that's critical for wpa_cli to operate.
  
  Please make the docs a bit less painful; less schizophrenic.
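
For comparison, here is a minimal wpa_supplicant.conf assembled from the
fragments discussed above that is sufficient for wpa_cli to operate.  It
is a sketch, not authoritative: the group name, the directory, and the
example network block are assumptions that vary by distro.

```
ctrl_interface=DIR=/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="example-ap"
    psk="example-passphrase"
}
```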

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1933037

Title:
  (bad docs) man page is inconsistent with wiki, & also inconsistent
  with system defaults

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/wpasupplicant/+bug/1933037/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1933037] Re: (bad docs) man page is inconsistent with wiki, & also inconsistent with system defaults

2021-06-20 Thread Bill Yikes
** Description changed:

  Linux systems are typically configured with either a "wheel" group or a
  "sudo" group to specify users that get sudo privs.  In the case of
  Debian's default config, a "sudo" group exists in /etc/group and also
  the /etc/sudoers file assumes a "sudo" group.  Yet the latest
  wpa_supplicant.conf man page gives all examples using the non-existent
  "wheel" group:
  
  
https://manpages.debian.org/testing/wpasupplicant/wpa_supplicant.conf.5.en.html
  
  The wiki:
  
  https://wiki.debian.org/WiFi/HowToUse#wpa_supplicant
  
  is incomplete, because it neglects the setting that enables users to
  configure the wifi APs.  The wiki assumes:
  
  ctrl_interface=/run/wpa_supplicant
  
  while the man page not only shows a different directory, it shows a
  different syntax:
  
  ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=wheel
  
  Debian-independent docs out in the wild are all over the map on this, which
  exacerbates confusion.  In an attempt to get a grasp on sanity, you would
  think users could simply check which directory was created by the
  wpasupplicant installer.  The answer: it created both
  /var/run/wpa_supplicant and /run/wpa_supplicant -- which obviously
  perpetuates confusion.
  
  Considering /var is short for "variable", and a wifi config is naturally
  quite variable, it's tempting to go with /var/run/wpa_supplicant/.
  Normally the man page would also be the first port of call, but because
  it uses "wheel", it's hard to trust the man page.  At the same time this
  apparently well-written distro-agnostic guide demonstrates using
  /run/wpa_supplicant/:
  
  https://shapeshed.com/linux-wifi
  
  There's another reason to distrust the man page: these config params
  worked on Debian Lenny-based distros:
  
   ctrl_interface=/var/run/wpa_supplicant
   ctrl_interface_group=netdev
  
  Notice that ctrl_interface_group is a separate parameter.  Was that
  parameter made obsolete, or is it simply undocumented?  It should be
  documented in the man page, and if it's obsolete then the manpage should
  still document it and mark it obsolete so users know to adapt their
  configs.
  
  And whoa, where does netdev come from?  A:
  /usr/share/doc/wpasupplicant/README.Debian.gz, which shows:
  
  ctrl_interface=DIR=/run/wpa_supplicant GROUP=netdev
  
  and yes, that's also /run/ without the leading /var/, thus contradicting
  the man page.  Wait, it gets better-- README.gz (not README.Debian.gz)
  shows:
  
  ctrl_interface=/var/run/wpa_supplicant
  ctrl_interface_group=wheel
  
  Does that mean upstream uses /var/run/ & Debian intends to use /run/?
  
  What does "update_config=1" mean?  We see it here: https://shapeshed.com
  /linux-wifi but no mention in any of the man pages, the wiki, or in any
  of the /usr/share/doc/wpasupplicant/* files.  We should expect to find
  it here:
  
https://manpages.debian.org/testing/wpasupplicant/wpa_supplicant.conf.5.en.html
  but that's just a few examples-- no proper documentation.
+ "update_config" is covered by arch linux:
+ https://wiki.archlinux.org/title/wpa_supplicant/. It turns out to be a
+ minimal config option to support wpa_cli, yet the wpa_cli man page makes
+ no mention of this option that's critical for wpa_cli to operate.
  
  Please make the docs a bit less painful; less schizophrenic.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1933037

Title:
  (bad docs) man page is inconsistent with wiki, & also inconsistent
  with system defaults

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/wpasupplicant/+bug/1933037/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1933037] Re: (bad docs) man page is inconsistent with wiki, & also inconsistent with system defaults

2021-06-20 Thread Bill Yikes
** Description changed:

  Linux systems are typically configured with either a "wheel" group or a
  "sudo" group to specify users that get sudo privs.  In the case of
  Debian's default config, a "sudo" group exists in /etc/group and also
  the /etc/sudoers file assumes a "sudo" group.  Yet the latest
  wpa_supplicant.conf man page gives all examples using the non-existent
  "wheel" group:
  
  
https://manpages.debian.org/testing/wpasupplicant/wpa_supplicant.conf.5.en.html
  
  The wiki:
  
  https://wiki.debian.org/WiFi/HowToUse#wpa_supplicant
  
  is incomplete, because it neglects the setting that enables users to
  configure the wifi APs.  The wiki assumes:
  
  ctrl_interface=/run/wpa_supplicant
  
  while the man page not only shows a different directory, it shows a
  different syntax:
  
  ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=wheel
  
  Debian-independent docs out in the wild are all over the map on this, which
  exacerbates confusion.  In an attempt to get a grasp on sanity, you would
  think users could simply check which directory was created by the
  wpasupplicant installer.  The answer: it created both
  /var/run/wpa_supplicant and /run/wpa_supplicant -- which obviously
  perpetuates confusion.
  
  Considering /var is short for "variable", and a wifi config is naturally
  quite variable, it's tempting to go with /var/run/wpa_supplicant/.
  Normally the man page would also be the first port of call, but because
  it uses "wheel", it's hard to trust the man page.  At the same time this
  apparently well-written distro-agnostic guide demonstrates using
  /run/wpa_supplicant/:
  
  https://shapeshed.com/linux-wifi
  
  There's another reason to distrust the man page: these config params
  worked on Debian Lenny-based distros:
  
   ctrl_interface=/var/run/wpa_supplicant
   ctrl_interface_group=netdev
  
  Notice that ctrl_interface_group is a separate parameter.  Was that
  parameter made obsolete, or is it simply undocumented?  It should be
  documented in the man page, and if it's obsolete then the manpage should
  still document it and mark it obsolete so users know to adapt their
  configs.
  
  And whoa, where does netdev come from?  A:
  /usr/share/doc/wpasupplicant/README.Debian.gz, which shows:
  
  ctrl_interface=DIR=/run/wpa_supplicant GROUP=netdev
  
  and yes, that's also /run/ without the leading /var/, thus contradicting
  the man page.  Wait, it gets better-- README.gz (not README.Debian.gz)
  shows:
  
  ctrl_interface=/var/run/wpa_supplicant
  ctrl_interface_group=wheel
  
  Does that mean upstream uses /var/run/ & Debian intends to use /run/?
  
+ What does "update_config=1" mean?  We see it here: https://shapeshed.com
+ /linux-wifi but no mention in any of the man pages, the wiki, or in any
+ of the /usr/share/doc/wpasupplicant/* files.  We should expect to find
+ it here:
+ 
https://manpages.debian.org/testing/wpasupplicant/wpa_supplicant.conf.5.en.html
+ but that's just a few examples-- no proper documentation.
+ 
  Please make the docs a bit less painful; less schizophrenic.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1933037

Title:
  (bad docs) man page is inconsistent with wiki, & also inconsistent
  with system defaults

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/wpasupplicant/+bug/1933037/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1933037] Re: (bad docs) man page is inconsistent with wiki, & also inconsistent with system defaults

2021-06-20 Thread Bill Yikes
** Description changed:

  Linux systems are typically configured with either a "wheel" group or a
  "sudo" group to specify users that get sudo privs.  In the case of
  Debian's default config, a "sudo" group exists in /etc/group and also
  the /etc/sudoers file assumes a "sudo" group.  Yet the latest
  wpa_supplicant.conf man page gives all examples using the non-existent
  "wheel" group:
  
  
https://manpages.debian.org/testing/wpasupplicant/wpa_supplicant.conf.5.en.html
  
  The wiki:
  
  https://wiki.debian.org/WiFi/HowToUse#wpa_supplicant
  
  is incomplete, because it neglects the setting that enables users to
  configure the wifi APs.  The wiki assumes:
  
  ctrl_interface=/run/wpa_supplicant
  
  while the man page not only shows a different directory, it shows a
  different syntax:
  
  ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=wheel
  
  Debian-independent docs out in the wild are all over the map on this, which
  exacerbates confusion.  In an attempt to get a grasp on sanity, you would
  think users could simply check which directory was created by the
  wpasupplicant installer.  The answer: it created both
  /var/run/wpa_supplicant and /run/wpa_supplicant -- which obviously
  perpetuates confusion.
  
  Considering /var is short for "variable", and a wifi config is naturally
  quite variable, it's tempting to go with /var/run/wpa_supplicant/.
  Normally the man page would also be the first port of call, but because
  it uses "wheel", it's hard to trust the man page.  At the same time this
  apparently well-written distro-agnostic guide demonstrates using
  /run/wpa_supplicant/:
  
  https://shapeshed.com/linux-wifi
  
  There's another reason to distrust the man page: these config params
  worked on Debian Lenny-based distros:
  
-  ctrl_interface=/var/run/wpa_supplicant
-  ctrl_interface_group=netdev
+  ctrl_interface=/var/run/wpa_supplicant
+  ctrl_interface_group=netdev
  
  Notice that ctrl_interface_group is a separate parameter.  Was that
  parameter made obsolete, or is it simply undocumented?  It should be
  documented in the man page, and if it's obsolete then the manpage should
  still document it and mark it obsolete so users know to adapt their
  configs.
  
- Please make the docs a bit less schizophrenic.
+ And whoa, where does netdev come from?  A:
+ /usr/share/doc/wpasupplicant/README.Debian.gz, which shows:
+ 
+ ctrl_interface=DIR=/run/wpa_supplicant GROUP=netdev
+ 
+ and yes, that's also /run/ without the leading /var/, thus contradicting
+ the man page.  Wait, it gets better-- README.gz (not README.Debian.gz)
+ shows:
+ 
+ ctrl_interface=/var/run/wpa_supplicant
+ ctrl_interface_group=wheel
+ 
+ Does that mean upstream uses /var/run/ & Debian intends to use /run/?
+ 
+ Please make the docs a bit less painful; less schizophrenic.
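
For what it's worth, on current systemd-based Debian/Ubuntu systems /var/run
is a compatibility symlink to /run, so the two spellings in the various docs
usually name the same directory.  A quick probe (a sketch only; output
depends on the local system):

```shell
# On systemd-based systems /var/run is a compatibility symlink to /run,
# which is why the installer appears to "create both" directories.
readlink -f /var/run
# If wpa_supplicant is running, its control sockets live here:
ls /run/wpa_supplicant 2>/dev/null || echo "no control sockets present"
```

That symlink would explain why docs citing /var/run/wpa_supplicant and
/run/wpa_supplicant both work in practice, even though the inconsistency in
the documentation itself still deserves fixing.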

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1933037

Title:
  (bad docs) man page is inconsistent with wiki, & also inconsistent
  with system defaults

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/wpasupplicant/+bug/1933037/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1933037] Re: (bad docs) man page is inconsistent with wiki, & also inconsistent with system defaults

2021-06-20 Thread Bill Yikes
** Description changed:

  Linux systems are typically configured with either a "wheel" group or a
  "sudo" group to specify users that get sudo privs.  In the case of
  Debian's default config, a "sudo" group exists in /etc/group and also
  the /etc/sudoers file assumes a "sudo" group.  Yet the latest
  wpa_supplicant.conf man page gives all examples using the non-existent
  "wheel" group:
  
  
https://manpages.debian.org/testing/wpasupplicant/wpa_supplicant.conf.5.en.html
  
  The wiki:
  
  https://wiki.debian.org/WiFi/HowToUse#wpa_supplicant
  
  is incomplete, because it neglects the setting that enables users to
  configure the wifi APs.  The wiki assumes:
  
  ctrl_interface=/run/wpa_supplicant
  
  while the man page not only shows a different directory, it shows a
  different syntax:
  
  ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=wheel
  
  Debian-independent docs out in the wild are all over the map on this,
  which exacerbates confusion.  In an attempt to get a grasp on sanity, you would
  think users could simply check which directory was created by the
  wpasupplicant installer.  The answer: it created both
  /var/run/wpa_supplicant and /run/wpa_supplicant -- which obviously
  perpetuates confusion.
  
  Considering /var is short for "variable", and a wifi config is naturally
  quite variable, it's tempting to go with /var/run/wpa_supplicant/.
  Normally the man page would also be the first port of call, but because
  it uses "wheel", it's hard to trust the man page.  At the same time this
  apparently well-written distro-agnostic guide demonstrates using
  /run/wpa_supplicant/:
  
  https://shapeshed.com/linux-wifi
  
+ There's another reason to distrust the man page: these config params
+ worked on Debian Lenny-based distros:
+ 
+  ctrl_interface=/var/run/wpa_supplicant
+  ctrl_interface_group=netdev
+ 
+ Notice that ctrl_interface_group is a separate parameter.  Was that
+ parameter made obsolete, or is it simply undocumented?  It should be
+ documented in the man page, and if it's obsolete then the manpage should
+ still document it and mark it obsolete so users know to adapt their
+ configs.
+ 
  Please make the docs a bit less schizophrenic.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1933037

Title:
  (bad docs) man page is inconsistent with wiki, & also inconsistent
  with system defaults

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/wpasupplicant/+bug/1933037/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1933037] [NEW] (bad docs) man page is inconsistent with wiki, & also inconsistent with system defaults

2021-06-20 Thread Bill Yikes
Public bug reported:

Linux systems are typically configured with either a "wheel" group or a
"sudo" group to specify users that get sudo privs.  In the case of
Debian's default config, a "sudo" group exists in /etc/group and also
the /etc/sudoers file assumes a "sudo" group.  Yet the latest
wpa_supplicant.conf man page gives all examples using the non-existent
"wheel" group:

https://manpages.debian.org/testing/wpasupplicant/wpa_supplicant.conf.5.en.html

The wiki:

https://wiki.debian.org/WiFi/HowToUse#wpa_supplicant

is incomplete, because it neglects the setting that enables users to
configure the wifi APs.  The wiki assumes:

ctrl_interface=/run/wpa_supplicant

while the man page not only shows a different directory, it shows a
different syntax:

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=wheel

Debian-independent docs out in the wild are all over the map on this, which
exacerbates confusion.  In an attempt to get a grasp on sanity, you would
think users could simply check which directory was created by the
wpasupplicant installer.  The answer: it created both
/var/run/wpa_supplicant and /run/wpa_supplicant -- which obviously
perpetuates confusion.

Considering /var is short for "variable", and a wifi config is naturally
quite variable, it's tempting to go with /var/run/wpa_supplicant/.
Normally the man page would also be the first port of call, but because
it uses "wheel", it's hard to trust the man page.  At the same time this
apparently well-written distro-agnostic guide demonstrates using
/run/wpa_supplicant/:

https://shapeshed.com/linux-wifi

Please make the docs a bit less schizophrenic.

** Affects: wpasupplicant (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1933037

Title:
  (bad docs) man page is inconsistent with wiki, & also inconsistent
  with system defaults

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/wpasupplicant/+bug/1933037/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1931815] Re: the %{remote_ip} output format is broken on proxied connections

2021-06-17 Thread Bill Yikes
The socks.c code shows that cURL does not even attempt DNS resolution on
SOCKS4a.  Strictly speaking, the SOCKS4a spec expects apps to /attempt/
DNS resolution before contacting the socks server.  I won't complain on
this point though because the status quo is favorable to Tor users (as
it protects them from DNS leaks).

The fallout is that the SOCKS server does not give feedback to the app
on the IP it settles on in the socks4a scenario.  This means cURL has no
possible way of knowing which IP to express in the %{remote_ip} output.

After seeing the code I'm calling out these bugs:

bug 1) SOCKS4a: Considering that Curl_resolve() is unconditionally
bypassed, when a user supplies both --resolve and also demands socks4a
cURL will neglect to honor the --resolve option even though the two
options are theoretically compatible.  This is a minor bug because
socks4 can be used instead as a workaround.  But certainly the man page
should at a minimum disclose the artificial incompatibility between
socks4a and --resolve.

bug 2) SOCKS4a docs: cURL has some discretion whether to attempt DNS
resolution or not.  Yet the docs do not clarify.  Users should get
reassurance in the man page that using socks4a unconditionally refrains
from internal DNS resolution.

bug 3) SOCKS4: Since cURL *must* do DNS resolution, cURL must also know
what the target IP is.  Thus cURL should properly return the
%{remote_ip} value.

bug 4) The docs for %{remote_ip} should tell users what to expect for
that value.  The man page is vague enough to be useless.

Workaround: if proxy users need to know which IP cURL connected to, they
must do their own DNS resolution manually outside of cURL (e.g. using
dig), supply the IP & hostname via --resolve, and use SOCKS4 or SOCKS5
(not SOCKS4a).
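
That workaround can be sketched as a small wrapper.  This is an illustration
only: the proxy address is the usual local Tor SOCKS port, `resolve_then_fetch`
is a hypothetical helper name, and the function is defined but not run here
since it needs a live proxy and network access:

```shell
# Hedged sketch of the workaround: resolve DNS outside curl, pin the result
# with --resolve, and use SOCKS4 so curl can report a meaningful %{remote_ip}.
# Wrapped in a function so nothing touches the network when this is sourced.
resolve_then_fetch() {
  host=$1 url=$2
  ip=$(dig +short "$host" | head -n1)          # manual DNS resolution
  curl --ssl --socks4 127.0.0.1:9050 \
       --resolve "$host:443:$ip" \
       -L --head -w "%{url_effective} @ %{remote_ip}\n" "$url"
}
```

Note this trades away the DNS-leak protection that SOCKS4a gives Tor users,
since the resolution happens locally.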

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1931815

Title:
  the %{remote_ip} output format is broken on proxied connections

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/curl/+bug/1931815/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1931815] Re: the %{remote_ip} output format is broken on proxied connections

2021-06-17 Thread Bill Yikes
According to the SOCKS4a spec:

  https://www.openssh.com/txt/socks4.protocol
  https://www.openssh.com/txt/socks4a.protocol

With SOCKS4 cURL *must* do DNS resolution and pass the selected IP to
the SOCKS server.  OTOH, SOCKS4a gives cURL the option to resolve.  If
cURL fails at DNS resolution, it's expected to send the hostname and
0.0.0.x.  So generally cURL should succeed at DNS resolution and thus
have the IP of the target server.  Yet it's not sharing that info with
the user.

SOCKS4a was a bad test case because we can't know whether or not cURL
did the DNS resolution.  A better test case is with SOCKS4 as follows:

curl --ssl --socks4 127.0.0.1:9050 -L --head -w '(effective URL =>
"%{url_effective} @ %{remote_ip}")' "$target_url"

We are assured per the SOCKS4 protocol that cURL *must* do DNS
resolution, so cURL must know the remote IP address.  Yet it still
neglects to correctly set the %{remote_ip} value.  This is certainly a
bug.

Secondary bug--

the manpage states: "remote_ip The remote IP address of the most
recently done connection - can be either IPv4 or IPv6 (Added in 7.29.0)"

The man page is ambiguous. The /rule of least astonishment/ would have
the user naturally expecting "remote IP" to be the target IP whenever
cURL knows it.  Since the behavior is non-intuitive, the man page should
state in detail what the user should expect to receive for the
%{remote_ip} value.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1931815

Title:
  the %{remote_ip} output format is broken on proxied connections

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/curl/+bug/1931815/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1931815] [NEW] the %{remote_ip} output format is broken on proxied connections

2021-06-13 Thread Bill Yikes
Public bug reported:

This is how a Tor user would use cURL to grab a header, and also expect
to be told which IP address was contacted:

curl --ssl --socks4a 127.0.0.1:9050 -L --head -w '(effective URL =>
"%{url_effective} @ %{remote_ip}")' "$target_url"

It's broken because the "remote_ip" is actually just printed as the
127.0.0.1 (likely that of the proxy server not the remote target host).

tested on curl ver 7.52.1

** Affects: curl (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1931815

Title:
  the %{remote_ip} output format is broken on proxied connections

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/curl/+bug/1931815/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1930437] Re: urlscan does not work on HTML fragments

2021-06-01 Thread Bill Yikes
** Description changed:

  This yields no output:
  
  curl -s 'https://www.veridiancu.org' | sed -ne '/https://www.veridiancu.org' | python -c 'from bs4 import
+ BeautifulSoup; import sys; print(BeautifulSoup(sys.stdin.read()).form)'
+ | urlscan -n
+ 
+ which might give a clue about what the problem is.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1930437

Title:
  urlscan does not work on HTML fragments

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/urlscan/+bug/1930437/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1930437] [NEW] urlscan does not work on HTML fragments

2021-06-01 Thread Bill Yikes
Public bug reported:

This yields no output:

curl -s 'https://www.veridiancu.org' | sed -ne '/https://bugs.launchpad.net/bugs/1930437

Title:
  urlscan does not work on HTML fragments

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/urlscan/+bug/1930437/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1332344] Re: Urlscan has no Debian maintainer. I have a maintained fork.

2021-06-01 Thread Bill Yikes
I don't have Github access and in fact try to avoid Microsoft services
as much as possible.  I suggest moving off github for these reasons:

https://git.sdf.org/humanacollaborator/humanacollabora/src/branch/master/github.md

There are some decent alternatives here:

https://git.sdf.org/humanacollaborator/humanacollabora/src/branch/master/forge_comparison.md

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1332344

Title:
  Urlscan has no Debian maintainer. I have a maintained fork.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/urlscan/+bug/1332344/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 978587] Re: apt should ensure .deb are not corrupted before handing them to dpkg

2021-05-29 Thread Bill Yikes
This is actually a security issue and it's surprising it's gone unfixed
for 9 years.  It's inconsistent for apt to check the hash on deb files
that it downloads, but then neglect to do so on user-supplied deb files.
The status quo is a recipe for disaster.  To exacerbate the problem, the
man page does not document the inconsistency or the fact that .  There
are a variety of ways to fix this:

1) apt could refuse to accept local .deb files
2) apt could require local .deb files to be supplied with a hash string (which 
would need a new CLI arg)
3) apt could print the hash to the string and instruct the user to confirm 
whether the hash matches
4) apt could check the repos it's aware of to see if the hash matches anything 
served by a trusted repo.  If not, follow option 1 or 3 above.

It's also important to note that users don't generally know how deb
files are structured.  Should they be
responsible for knowing whether a hash is embedded within the deb file
or not?  Particularly when the man page makes no mention of it?
Generally, the user might know that hashes are checked by the apt-*
tools one way or another.  The apt suite of tools (and docs for it) keep
the user in the dark, and yet the user is responsible for knowing how it
works.  The user is not served well in this case.
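
Option 3 above can at least be approximated by hand today; a minimal sketch
(the .deb path is a placeholder, and the printed hash would have to be
compared manually against the SHA256 field in a trusted repo's Packages
index):

```shell
# Print the SHA256 of a local .deb for manual comparison against a trusted
# repository's published hash (a dummy file stands in for a real .deb here).
deb=/tmp/example.deb
printf 'stand-in for a real .deb\n' > "$deb"
sha256sum "$deb" | awk '{print $1}'
```

Having apt do this comparison itself, against the repos it already trusts, is
what the proposal asks for.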

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/978587

Title:
  apt should ensure .deb are not corrupted before handing them to dpkg

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/apt/+bug/978587/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1930139] [NEW] the --no-directories option incorrectly documented in the man page

2021-05-29 Thread Bill Yikes
Public bug reported:

man page shows:

   -nd
   --no-directories
       Do not create a hierarchy of directories when retrieving
       recursively.  With this option turned on, all files will get saved
       to the current directory, without clobbering (if a name shows up
       more than once, the filenames will get extensions .n).


The way that's written implies that the -nd option would conflict with the -P 
option.  But when -nd is combined with --directory-prefix (-P), wget honors the 
prefix and also downloads to a flat non-hierarchical "structure".  The behavior 
is sensible but the docs are wrong (-nd does not necessarily download to the 
current dir).

** Affects: wget (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1930139

Title:
  the --no-directories option incorrectly documented in the man page

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/wget/+bug/1930139/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1929239] Re: debootstrap documentation bug

2021-05-21 Thread Bill Yikes
There is also a problem with the man page, which says:

   --second-stage
  Complete the bootstrapping process.  Other arguments are 
generally not needed.

To "complete the bootstrapping process" is vague.  Naturally everyone
wants to complete the bootstrapping process.  But according to
https://www.debian.org/releases/bullseye/arm64/apds03.en.html, the
--second-stage is *only* for situations where the installation host and
the target host have different architectures.  The man page needs to be
more clear.
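
For contrast, the two-stage flow the installation guide actually describes
looks roughly like this (a sketch only: arch, suite, mirror, and mount point
are placeholder values, and the functions are defined but deliberately not
run here):

```shell
# --foreign/--second-stage are only needed when the build host and the
# target hardware have different architectures.
first_stage() {    # run on the build host
  debootstrap --arch=arm64 --foreign bullseye /mnt/target \
      http://deb.debian.org/debian
}
second_stage() {   # run later, on the target hardware itself
  chroot /mnt/target /debootstrap/debootstrap --second-stage
}
```

A same-architecture bootstrap needs neither flag, which is exactly the
distinction the man page should spell out.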

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1929239

Title:
  debootstrap documentation bug

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/debootstrap/+bug/1929239/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1929241] [NEW] [feature request] add a proxy CLI option to debootstrap

2021-05-21 Thread Bill Yikes
Public bug reported:

debootstrap connects to the Internet but provides no proxy option, and
the man page makes no mention of whether the http_proxy environment
variable is honored.  Ideally, Tor users would benefit most from SOCKS4a
support.  I suggest modeling after cURL ("--socks4a 127.0.0.1:9050").

** Affects: debootstrap (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1929241

Title:
  [feature request] add a proxy CLI option to debootstrap

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/debootstrap/+bug/1929241/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1929239] [NEW] debootstrap documentation bug

2021-05-21 Thread Bill Yikes
Public bug reported:

The documentation for debootstrap is broken and also needs
reorganization.  It lives here:

https://www.debian.org/releases/bullseye/arm64/apds03.en.html

To put this in "Appendix D: Random Bits" is a bit unsettling.  It should
have a chapter of its own sitting in parallel to the installer guides.

The main bug is telling users to run commands inside chroot that must
run outside of chroot.  The first paragraph says: "$ symbolizes a
command to be entered in the user's current system, while # refers to a
command entered in the Debian chroot."  Then every command given on that
page follows a hash.  E.g. "# mke2fs -j /dev/sda6".  That's impossible.
You can't chroot before you have a filesystem.

Normally the best place to report this would be the "www.debian.org"
/project/ in the Debian bug tracker, but the Debian bug tracker only
functions by email, and last time I checked the mail server did not accept
email from me.  If someone would mirror this upstream it would be
appreciated.

** Affects: debootstrap (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1929239

Title:
  debootstrap documentation bug

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/debootstrap/+bug/1929239/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1929087] Re: sfdisk refuses to write GPT table between sector 34 and 2047

2021-05-20 Thread Bill Yikes
A secondary bug manifests from this, whereby sfdisk chokes on its own
output and therefore cannot restore its own backup.  E.g. suppose
another tool is used to put a BIOS boot partition from sector 34 to
2047, as follows:

$ sgdisk --clear -a 1 --new=1:34:2047 -c 1:"BIOS boot"
--typecode=1:$(sgdisk --list-types | sed -ne
's/.*\(\).bios.*/\1/gip') /dev/sdb

That works fine, and from that we can run "sfdisk -d /dev/sdb >
dump.txt".  But when dump.txt is fed back into sfdisk, it pukes.  Yet
the docs claim "It is recommended to save the layout of your devices.
sfdisk supports two ways." .. "Use  the  --dump  option to save a
description of the device layout to a text file." .. "This can later be
restored by: sfdisk /dev/sda < sda.dump"

It's actually a security issue, because someone can make a non-
restorable backup and have the false sense of security that it is
restorable.  They wouldn't necessarily test restoration either, because
that's a destructive process.

** Description changed:

  According to https://wiki.archlinux.org/title/GRUB#BIOS_systems, it's
  both legal and interesting to place the BIOS BOOT partition from sector
  34 to sector 2047, as follows:
  
  $ sudo sfdisk --no-act -f --label gpt /dev/sdb << EOF
  start=   34, size=2013, name=bios,   type=$(sfdisk 
--label gpt -T | awk '{IGNORECASE = 1;} /bios boot/{print $1}')
  start= 2048, size=12582912, name=swap,   type=$(sfdisk 
--label gpt -T | awk '{IGNORECASE = 1;} /linux swap/{print $1}')
  EOF
  
  The output is:
  
  /dev/sdb1: Sector 34 already used.
  Failed to add #1 partition: Numerical result out of range
  Leaving.
  
- It's a false error.  As a workaround, users must omit the BIOS BOOT
- partition then use gdisk to insert it manually.  This was uncovered in
- 2015 and perhaps never reported to a bug tracker because it's still
- broken.  See https://www.spinics.net/lists/util-linux-ng/msg11253.html
+ It's a false error.  As a workaround, users must use parted or sgdisk
+ instead.  (note fdisk & gdisk are also broken in the same way)
+ 
+ This bug was uncovered in 2015 and perhaps never reported to a bug
+ tracker because it's still broken.  See https://www.spinics.net/lists
+ /util-linux-ng/msg11253.html

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1929087

Title:
  sfdisk refuses to write GPT table between sector 34 and 2047

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/util-linux/+bug/1929087/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1929087] [NEW] sfdisk refuses to write GPT table between sector 34 and 2047

2021-05-20 Thread Bill Yikes
Public bug reported:

According to https://wiki.archlinux.org/title/GRUB#BIOS_systems, it's
both legal and interesting to place the BIOS BOOT partition from sector
34 to sector 2047, as follows:

$ sudo sfdisk --no-act -f --label gpt /dev/sdb << EOF
start=   34, size=2013, name=bios,   type=$(sfdisk --label 
gpt -T | awk '{IGNORECASE = 1;} /bios boot/{print $1}')
start= 2048, size=12582912, name=swap,   type=$(sfdisk --label 
gpt -T | awk '{IGNORECASE = 1;} /linux swap/{print $1}')
EOF

The output is:

/dev/sdb1: Sector 34 already used.
Failed to add #1 partition: Numerical result out of range
Leaving.

It's a false error.  As a workaround, users must omit the BIOS BOOT
partition then use gdisk to insert it manually.  This was uncovered in
2015 and perhaps never reported to a bug tracker because it's still
broken.  See https://www.spinics.net/lists/util-linux-ng/msg11253.html

** Affects: util-linux (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1929087

Title:
  sfdisk refuses to write GPT table between sector 34 and 2047

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/util-linux/+bug/1929087/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1925381] Re: rsync conceals file deletions from reporting when --dry-run --remove-source-files are used together

2021-04-22 Thread Bill Yikes
For me the fact that the upstream repo moved from bugzilla.samba.org to
github.com is sufficient to diverge from upstream. But to each his own.

My contempt for github is in fact why I reported the bug downstream. I
will not use github but I still intended to make a public record of the
bug, hence why it's here.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1925381

Title:
  rsync conceals file deletions from reporting when --dry-run --remove-
  source-files are used together

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/rsync/+bug/1925381/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1925381] Re: rsync conceals file deletions from reporting when --dry-run --remove-source-files are used together

2021-04-21 Thread Bill Yikes
** Description changed:

  Rsync has an astonishing and dangerous bug:
  
  The dry run feature (-n / --dry-run) fails to report file deletions when
  --remove-source-files is used. This is quite serious. People use --dry-
  run to see if an outcome will work as expected before a live run. When
  the simulated run shows *less* destruction than the live run, the
  consequences can be serious because rsync may unexpectedly destroy the
- only copy of a file.
+ only copy(*) of a file.
  
  Users rely on --dry-run. Although users probably expect --dry-run to
  have limitations, we don't expect destructive operations to be under
  reported. If it were reversed, such that the live run were less
  destructive than the dry run, this wouldn't be as serious.
  
  Reproducer:
  
  $ mkdir -p /tmp/src /tmp/dest
  $ printf '%s\n' 'yada yada' > /tmp/src/foo.txt
  $ printf '%s\n' 'yada yada' > /tmp/src/bar.txt
  $ cp /tmp/src/foo.txt /tmp/dest
  $ ls /tmp/src/ /tmp/dest/
  /tmp/dest/:
  foo.txt
  
  /tmp/src/:
  bar.txt  foo.txt
  
  $ rsync -na --info=remove1 --remove-source-files --existing src/* dest/
  (no output)
  
  $ rsync -a --info=remove1 --remove-source-files --existing src/* dest/
  sender removed foo.txt
  
  $ ls /tmp/src/ /tmp/dest/
  /tmp/dest/:
  foo.txt
  
  /tmp/src/:
  bar.txt
  
+ (*) note when I say it can destroy the only copy of a file, another
+ circumstance is needed: that is, rsync does not do a checksum by
+ default.  It checks for identical files based on parameters like name
+ and date.  So it's possible that two files match in the default
+ comparison but differ in the actual content.  Losing a unique file in
+ this scenario is perhaps a rare corner case, but this bug should be
+ fixed nonetheless.
+ 
  Note this bug is similar but differs in a few ways:
  https://bugzilla.samba.org/show_bug.cgi?id=3844
  
  I've marked this as a security vulnerability because it causes
  unexpected data loss due to --dry-run creating a false expectation.

** Description changed:

  Rsync has an astonishing and dangerous bug:
  
  The dry run feature (-n / --dry-run) fails to report file deletions when
  --remove-source-files is used. This is quite serious. People use --dry-
  run to see if an outcome will work as expected before a live run. When
  the simulated run shows *less* destruction than the live run, the
  consequences can be serious because rsync may unexpectedly destroy the
  only copy(*) of a file.
  
  Users rely on --dry-run. Although users probably expect --dry-run to
  have limitations, we don't expect destructive operations to be under
  reported. If it were reversed, such that the live run were less
  destructive than the dry run, this wouldn't be as serious.
  
  Reproducer:
  
  $ mkdir -p /tmp/src /tmp/dest
  $ printf '%s\n' 'yada yada' > /tmp/src/foo.txt
  $ printf '%s\n' 'yada yada' > /tmp/src/bar.txt
  $ cp /tmp/src/foo.txt /tmp/dest
  $ ls /tmp/src/ /tmp/dest/
  /tmp/dest/:
  foo.txt
  
  /tmp/src/:
  bar.txt  foo.txt
  
  $ rsync -na --info=remove1 --remove-source-files --existing src/* dest/
  (no output)
  
  $ rsync -a --info=remove1 --remove-source-files --existing src/* dest/
  sender removed foo.txt
  
  $ ls /tmp/src/ /tmp/dest/
  /tmp/dest/:
  foo.txt
  
  /tmp/src/:
  bar.txt
  
  (*) note when I say it can destroy the only copy of a file, another
  circumstance is needed: that is, rsync does not do a checksum by
- default.  It checks for identical files based on parameters like name
- and date.  So it's possible that two files match in the default
- comparison but differ in the actual content.  Losing a unique file in
- this scenario is perhaps a rare corner case, but this bug should be
- fixed nonetheless.
+ default.  It checks for identical files based on superficial parameters
+ like name and date.  So it's possible that two files match in the
+ default superficial comparison but differ in the actual content.  Losing
+ a unique file in this scenario is perhaps a rare corner case, but this
+ bug should be fixed nonetheless.
  
  Note this bug is similar but differs in a few ways:
  https://bugzilla.samba.org/show_bug.cgi?id=3844
  
  I've marked this as a security vulnerability because it causes
  unexpected data loss due to --dry-run creating a false expectation.

** Description changed:

  Rsync has an astonishing and dangerous bug:
  
- The dry run feature (-n / --dry-run) fails to report file deletions when
- --remove-source-files is used. This is quite serious. People use --dry-
- run to see if an outcome will work as expected before a live run. When
- the simulated run shows *less* destruction than the live run, the
- consequences can be serious because rsync may unexpectedly destroy the
- only copy(*) of a file.
+ The dry run feature (-n / --dry-run) inhibits reporting of file
+ deletions when --remove-source-files is used. This is quite serious.
+ People use --dry-run to see if an outcome will work as expected before a
+ live run. When the simulated run shows 

[Bug 1925381] [NEW] rsync conceals file deletions from reporting when --dry-run --remove-source-files are used together

2021-04-21 Thread Bill Yikes
Public bug reported:

Rsync has an astonishing and dangerous bug:

The dry run feature (-n / --dry-run) fails to report file deletions when
--remove-source-files is used. This is quite serious. People use --dry-
run to see if an outcome will work as expected before a live run. When
the simulated run shows *less* destruction than the live run, the
consequences can be serious because rsync may unexpectedly destroy the
only copy of a file.

Users rely on --dry-run. Although users probably expect --dry-run to
have limitations, we don't expect destructive operations to be under
reported. If it were reversed, such that the live run were less
destructive than the dry run, this wouldn't be as serious.

Reproducer:

$ mkdir -p /tmp/src /tmp/dest
$ printf '%s\n' 'yada yada' > /tmp/src/foo.txt
$ printf '%s\n' 'yada yada' > /tmp/src/bar.txt
$ cp /tmp/src/foo.txt /tmp/dest
$ ls /tmp/src/ /tmp/dest/
/tmp/dest/:
foo.txt

/tmp/src/:
bar.txt  foo.txt

$ rsync -na --info=remove1 --remove-source-files --existing src/* dest/
(no output)

$ rsync -a --info=remove1 --remove-source-files --existing src/* dest/
sender removed foo.txt

$ ls /tmp/src/ /tmp/dest/
/tmp/dest/:
foo.txt

/tmp/src/:
bar.txt

Note that this bug is similar to the following report, but differs in a few ways:
https://bugzilla.samba.org/show_bug.cgi?id=3844

I've marked this as a security vulnerability because it causes
unexpected data loss due to --dry-run creating a false expectation.
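Until this is fixed, a manual pre-check can approximate what the live run would delete. This is only a sketch under my own assumptions (plain file trees, no rsync filter rules); the comm/find logic is mine, not anything rsync provides:

```shell
# List the files a live 'rsync -a --remove-source-files --existing src/ dest/'
# would remove from src/: with --existing, those are exactly the files
# present in both trees. Sketch only -- assumes plain file trees and no
# rsync filter rules; requires bash for process substitution.
list_would_remove() {
  comm -12 <(cd "$1" && find . -type f | sort) \
           <(cd "$2" && find . -type f | sort)
}
```

Running it against the reproducer trees above prints `./foo.txt`, matching what the live run actually removes.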

** Affects: rsync (Ubuntu)
 Importance: Undecided
 Status: New

** Information type changed from Private Security to Public

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1925381

Title:
  rsync conceals file deletions from reporting when --dry-run --remove-
  source-files are used together

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/rsync/+bug/1925381/+subscriptions


[Bug 1924609] Re: onion sites inaccessible due to internal DNS lookup

2021-04-20 Thread Bill Yikes
"Wontfix" is probably the most fitting status of the possibilities that
are given.

I would have reported upstream if it didn't require using gitlab.com
(which pushes CAPTCHAs).  Upstream forges often end up in unusable or
controversial places like GitHub or gitlab.com, which actually
discourages bug reporting
(https://infosec.exchange/@bojkotiMalbona/104637098084869887).

We need another status: "reportUpstream".  Bugs could sit in a
"reportUpstream" state until someone with access to the upstream bug
tracker mirrors the report, at which point it could transition to
wontfix or invalid.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1924609

Title:
  onion sites inaccessible due to internal DNS lookup

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/fetchmail/+bug/1924609/+subscriptions


[Bug 1924618] Re: add a "delete after" option

2021-04-16 Thread Bill Yikes
For the record, I should mention that I just noticed this is covered in
the FaQ:

https://www.fetchmail.info/fetchmail-FAQ.html#G5

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1924618

Title:
  add a "delete after" option

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/fetchmail/+bug/1924618/+subscriptions


[Bug 1924609] Re: onion sites inaccessible due to internal DNS lookup

2021-04-16 Thread Bill Yikes
///scenario 3: using mapaddress (requires root or tor controller
access)///

With this configuration:
```
poll uw via 10.40.40.46
protocol imap
port 993
username "billyikes"
fetchall
```
/etc/tor/torrc:
```
mapaddress 10.40.40.46 underwood2hj3pwd.onion
```

the terminal output is nothing:
```
$ fetchmail -v uw
```
and the log output is:
```
fetchmail: starting fetchmail 6.3.26 daemon
fetchmail: 6.3.26 querying uw (protocol IMAP) at Fri 16 Apr 2021 04:20:37 PM 
EDT: poll started
fetchmail: Trying to connect to 10.40.40.46/993...connection failed.
fetchmail: connection to 10.40.40.46:993 [10.40.40.46/993] failed: Connection 
timed out.
fetchmail: Connection errors for this poll:
name 0: connection to 10.40.40.46:993 [10.40.40.46/993] failed: Connection 
timed out.
fetchmail: IMAP connection to uw failed: Connection timed out
fetchmail: 6.3.26 querying uw (protocol IMAP) at Fri 16 Apr 2021 04:22:48 PM 
EDT: poll completed
fetchmail: Query status=2 (SOCKET)
```

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1924609

Title:
  onion sites inaccessible due to internal DNS lookup

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/fetchmail/+bug/1924609/+subscriptions


[Bug 1924609] Re: onion sites inaccessible due to internal DNS lookup

2021-04-16 Thread Bill Yikes
///scenario 1: using torsocks///

With this configuration:
```
poll uw via underwood2hj3pwd.onion
protocol imap
port 993
username "billyikes"
fetchall
```
the terminal output is:
```
$ torsocks fetchmail uw
1618601733 ERROR torsocks[16571]: Unable to resolve. Status reply: 4 (in 
socks5_recv_resolve_reply() at socks5.c:683)
gethostbyname failed for myhost
Non-recoverable failure in name resolutionCannot find my own host in hosts 
database to qualify it!
Trying to continue with unqualified hostname.
DO NOT report broken Received: headers, HELO/EHLO lines or similar problems!
DO repair your /etc/hosts, DNS, NIS or LDAP instead.
fetchmail: can't poll specified hosts with another fetchmail running at 15369.
```
and the log output is:
```
1618602068 ERROR torsocks[17358]: Connection refused to Tor SOCKS (in 
socks5_recv_connect_reply() at socks5.c:549)
fetchmail: Connection errors for this poll:
name 0: connection to underwood2hj3pwd.onion:993 [127.42.42.0/993] failed: 
Connection refused.
fetchmail: IMAP connection to uw failed: Connection refused
```

///scenario 2: using plugin for socat///

With this configuration:
```
poll uw via underwood2hj3pwd.onion
plugin   "socat STDIO SOCKS4A:127.0.0.1:%h:%p,socksport=9050"
protocol imap
port 993
username "billyikes"
fetchall
```
the terminal had no output:
```
$ fetchmail uw
```
and the log output is:
```
fetchmail: starting fetchmail 6.3.26 daemon
fetchmail: couldn't find canonical DNS name of uw (underwood2hj3pwd.onion): 
Name or service not known
fetchmail: Query status=11 (DNS)
```

/// Version info ///

Abridged output of "env LC_ALL=C fetchmail -V":
```
This is fetchmail release 6.3.26+GSS+NTLM+SDPS+SSL-SSLv3+NLS+KRB5.

Copyright (C) 2002, 2003 Eric S. Raymond
Copyright (C) 2004 Matthias Andree, Eric S. Raymond,
   Robert M. Funk, Graham Wilson
Copyright (C) 2005 - 2012 Sunil Shetye
Copyright (C) 2005 - 2015 Matthias Andree
Fetchmail comes with ABSOLUTELY NO WARRANTY. This is free software, and you
are welcome to redistribute it under certain conditions. For details,
please see the file COPYING in the source or documentation directory.
This product includes software developed by the OpenSSL Project
for use in the OpenSSL Toolkit. (http://www.openssl.org/)

Fallback MDA: (none)
Linux cypher 4.9.0-15-amd64 #1 SMP Debian 4.9.258-1 (2021-03-08) x86_64 
GNU/Linux
Taking options from command line and /home/user/.fetchmailrc
Poll interval is 1800 seconds
Logfile is /home/user/logs/fetchmail.log
Idfile is /home/user/.fetchids
Fetchmail will forward misaddressed multidrop messages to user.
Options for retrieving from billyikes@uw:
  Mail will be retrieved via underwood2hj3pwd.onion
  True name of server is underwood2hj3pwd.onion.
  Protocol is IMAP (using service 993).
  All available authentication methods will be tried.
  SSL trusted certificate directory: /etc/ssl/certs
  Server nonresponse timeout is 300 seconds (default).
  Default mailbox selected.
  All messages will be retrieved (--all on).
  Fetched messages will not be kept on the server (--keep off).
  Old messages will not be flushed before message retrieval (--flush off).
  Oversized messages will not be flushed before message retrieval (--limitflush 
off).
  Rewrite of server-local addresses is enabled (--norewrite off).
  Carriage-return stripping is disabled (stripcr off).
  Carriage-return forcing is disabled (forcecr off).
  Interpretation of Content-Transfer-Encoding is enabled (pass8bits off).
  MIME decoding is disabled (mimedecode off).
  Idle after poll is disabled (idle off).
  Nonempty Status lines will be kept (dropstatus off)
  Delivered-To lines will be kept (dropdelivered off)
  Fetch message size limit is 100 (--fetchsizelimit 100).
  Do binary search of UIDs during 3 out of 4 polls (--fastuidl 4).
  Messages will be SMTP-forwarded to: localhost (default)
  Single-drop mode: 1 local name recognized.
  Server connections will be made via plugin socat STDIO 
SOCKS4A:127.0.0.1:%h:%p,socksport=9050 (--plugin socat STDIO 
SOCKS4A:127.0.0.1:%h:%p,socksport=9050).
  No UIDs saved from this host.
```

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1924609

Title:
  onion sites inaccessible due to internal DNS lookup

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/fetchmail/+bug/1924609/+subscriptions


[Bug 1924622] Re: add a security feature to randomize the fetch schedule

2021-04-16 Thread Bill Yikes
Matthias, I appreciate the tip about FETCHMAILHOME, which seems to imply
that multiple instances can potentially run at the same time.  I will
explore your suggested workaround.  Since I access onion servers, I have
a lot of wiring outside of fetchmail anyway.

Note that I don't personally have a threat model that makes this
capability important to me, but I raised the feature request because it
would be a useful security improvement for many to have.  If the daemon
could handle the scheduling, it would ease things for the user and
improve security for many including those who don't have such an
exciting threat model.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1924622

Title:
  add a security feature to randomize the fetch schedule

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/fetchmail/+bug/1924622/+subscriptions


[Bug 1924618] Re: add a "delete after" option

2021-04-16 Thread Bill Yikes
I appreciate everyone's quick response.  First I will address the
privacy matter.

In the infosec discipline, we have the "principle of least privilege",
which essentially means it's a bad idea to grant more access than what's
needed for the task.  If someone only needs 90 days of server storage
per message, why should the server admins have the privilege of seeing
further back?  It makes no sense.

As far as not knowing what the email service provider (ESP) does, that's
indeed true, but this does not eliminate the security benefit for three
reasons:

1) [mass surveillance case] The ESP may publish a retention policy that
contractually obligates them.  They may opt not to comply but it's still
a security benefit to place the ESP in a position of being out of
compliance should they retain "deleted" email beyond a threshold that
the user controls.  The state of non-compliance greatly limits how the
ESP uses the data because getting caught compromises their bottom line.
Yahoo was dragged into one court after it provided evidence in another
court that wasn't supposed to exist per the privacy policy.  It makes no
sense to give up this protection.

2) [targeted surveillance case] In the case of warranted, targeted
surveillance which overrides retention policy, data deleted before the
warrant is served is effectively protected.  If 5+ years of email is
sitting on a server when a warrant is served, the warrant can force
disclosure of all that data.  If only ~90 days + contractual retention
are on the server when the warrant is served, then that's all that's
available.  Warrants can't be served from a time machine.

3) [targeted unwarranted surveillance case] In the case of snoops
illegally probing email without a warrant or consent, they can of course
exfiltrate the data they're after.  Getting the data is only part of the
equation.  How they can use illegally obtained data is limited.  If they
just walk into court with that they will suffer consequences.  They
don't want to reveal their illegal ways to the public over a small case.
It's a very costly hand to play.  They will look for plausible ways to
obtain the data legally and go that route (parallel construction).  If
fetchmail needlessly leaves data on the server, it creates an attack
surface for parallel construction.

> Please re-file this feature request (without the privacy "reason") as
new issue here: https://gitlab.com/fetchmail/fetchmail/-/issues

I cannot access gitlab.com because I am blocked by a variety of Tor-
hostile MACFANG-dependent mechanisms.  In an open and free world, bugs
should indeed be reported as far upstream as possible.  "Possible" is
the keyword, as public projects are increasingly making their way into
access-restricted forges like gitlab.com.

There are many reasons why gitlab.com should be avoided expressed here:
https://git.sdf.org/humanacollaborator/humanacollabora/src/branch/master
/gitlab-dot-com.md

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1924618

Title:
  add a "delete after" option

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/fetchmail/+bug/1924618/+subscriptions


[Bug 1924622] [NEW] add a security feature to randomize the fetch schedule

2021-04-15 Thread Bill Yikes
Public bug reported:

Suppose you have ~6-12 accounts all fetched over Tor.  If they all fetch
at the same time, the accounts could easily be correlated together --
which is particularly problematic if you hold multiple accounts at the
same provider.  And even if you only have one account, having a fixed
delay between fetches is also compromising.

The "interval" option doesn't cut it because the first fetch still hits
all servers at once and the schedule is still predictable.  I therefore
propose expanding the "interval" parameter to fetch at random times if a
range is given.  E.g. "interval 5.3-25" means fetch as early as 5.3 min
& no later than 25 min since the last fetch, randomized after each
fetch.
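A rough sketch of the proposed semantics, driven from a shell loop as a user-side workaround (the function name, the loop, and the use of bash's RANDOM are my own assumptions, not fetchmail options):

```shell
# Pick a random whole-second delay between min and max (inclusive),
# e.g. rand_delay 318 1500 for the "interval 5.3-25" (minutes) example above.
# Requires bash for $RANDOM.
rand_delay() {
  local min=$1 max=$2
  echo $(( min + RANDOM % (max - min + 1) ))
}

# Hypothetical driver: one-shot polls at randomized times instead of
# fetchmail's fixed daemon interval:
#   while true; do sleep "$(rand_delay 318 1500)"; fetchmail --silent; done
```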

** Affects: fetchmail (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1924622

Title:
  add a security feature to randomize the fetch schedule

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/fetchmail/+bug/1924622/+subscriptions


[Bug 1924618] [NEW] add a "delete after" option

2021-04-15 Thread Bill Yikes
Public bug reported:

Getmail has a "delete_after" option, as do MUAs like Claws Mail.  Why
not fetchmail?

It would be a useful feature to have.  Perhaps the most common use case
is this: someone has both a desktop and mobile phone MUA, which confines
them to IMAP if they don't want the complexity of having to run their
own IMAP server.  But this means email sits on the server potentially
indefinitely, which is terrible in terms of security.  Ideally a good
compromise is for the desktop to POP3 download msgs and delete them 90
days later.  The mobile device would then use IMAP to access just the
past 90 days of messages.

To some extent this would give the privacy benefit of providers like
Microsoft and Google not having years of email to snoop through at any
time.  It's also essential when you have a small provider like Danwin
with a small storage limit.
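For comparison, getmail's version of this feature looks roughly like the rc fragment below. This is reconstructed from memory of getmail's documentation, with placeholder server/account values; the option names and section layout should be verified against the getmail man page:

```
[retriever]
type = SimplePOP3SSLRetriever
server = pop.example.org
username = billyikes

[destination]
type = Maildir
path = ~/Maildir/

[options]
delete = false
delete_after = 90
```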

** Affects: fetchmail (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1924618

Title:
  add a "delete after" option

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/fetchmail/+bug/1924618/+subscriptions


[Bug 1924609] [NEW] onion sites inaccessible due to internal DNS lookup

2021-04-15 Thread Bill Yikes
Public bug reported:

Fetchmail works over Tor but only if the server is a clearnet host.  So for 
example a Yahoo config might look like this:
```
poll imap.mail.yahoo.com
plugin "socat STDIO SOCKS4A:127.0.0.1:%h:%p,socksport=9050"
protocol   imap
port   993
interval   3
username   "billyikes"
ssl
sslcertck
sslfingerprint "6F:C8:F1:EB:A0:55:3D:35:5B:2E:31:7F:6B:F8:A3:B4"
fetchall
```
If the server is an onion server, it's a disaster because fetchmail attempts to 
resolve the hostname internally and it can't handle *.onion hosts.  The 
following gives an error like "cannot resolve":
```
poll underwood2hj3pwd.onion
plugin "socat STDIO SOCKS4A:127.0.0.1:%h:%p,socksport=9050"
protocol   imap
port   993
username   "billyikes"
fetchall
```
The documentation does not state that hostnames must be clearnet hostnames.  So 
at the very minimum that limitation should be documented.  But really, Tor 
should be supported officially and ideally without the "plugin" hack.  This is 
the workaround:
```
skip underwood-onion via 127.0.0.1
protocol   imap
port   12345
username   "billyikes"
fetchall
```
run:

socat TCP4-LISTEN:12345,reuseaddr,fork SOCKS4A:127.0.0.1:underwood2hj3pwd.onion:110,socksport=9050 &

then run "fetchmail underwood-onion".  It's a nasty hack; it makes daemon mode 
problematic because a socat tunnel can't just be left up indefinitely.  We 
should be able to write something like:
```
poll underwood2hj3pwd.onion
socks4a    "127.0.0.1:9050"
protocol   imap
port   993
username   "billyikes"
fetchall
```

** Affects: fetchmail (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1924609

Title:
  onion sites inaccessible due to internal DNS lookup

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/fetchmail/+bug/1924609/+subscriptions


[Bug 1921628] Re: unpaper: error: no input or output files given.

2021-03-28 Thread Bill Yikes
yikes, a destructive manifestation of the same bug:

If a list of files is given with the --overwrite option, it actually
clobbers one file with the output of the other.

hmm.. actually I've misunderstood the man page:

--overwrite
   Allow overwriting existing files. Otherwise the program terminates 
with an error if an output file to be written already exists.

I thought "overwrite" meant to overwrite the source doc, not that it was
okay to overwrite the 2nd in a list.  So in the end this may just be a
documentation problem.  I suggest making the docs clearer.  I
believe this is incorrect BNF:

  unpaper [options] {input-pattern output-pattern | input-file(s)
output-file(s)}

braces are invalid BNF; they should be parentheses, and there should be
angle brackets like this:

  unpaper [options] (<input-pattern> <output-pattern> | <input-file(s)>
<output-file(s)>)

then the non-obvious things in angle brackets should be defined
separately, like pattern.  Patterns come in many forms.  I tried giving
'*.pgm' as a pattern and it was rejected.

Also, I suggest an "--in-place" option of sorts so users can operate
directly on the source and not give an output file.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1921628

Title:
  unpaper: error: no input or output files given.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/unpaper/+bug/1921628/+subscriptions


[Bug 1921628] Re: unpaper: error: no input or output files given.

2021-03-28 Thread Bill Yikes
The workaround is to give up the --overwrite option, create a temp file,
and copy it back over the source:

$ unpaper -t pbm raw_pg-000_im_th35.pbm raw_pg-000_unpapered.pbm && mv -v raw_pg-000_unpapered.pbm raw_pg-000_im_th35.pbm
Processing sheet #1: raw_pg-000_im_th35.pbm -> raw_pg-000_unpapered.pbm
[image2 @ 0x5561ee6a2ae0] Using AVStream.codec to pass codec parameters to 
muxers is deprecated, use AVStream.codecpar instead.
[image2 @ 0x5561ee6a2ae0] Encoder did not produce proper pts, making some up.
'raw_pg-000_unpapered.pbm' -> 'raw_pg-000_im_th35.pbm'

I probably should have mentioned --overwrite in the subject of this
report.. can't change it now.
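As a general pattern, the temp-file dance can be wrapped in a small helper so it reads like an in-place operation. The helper name is my own, and it assumes the tool takes input then output as its last two arguments:

```shell
# Run a two-argument in/out filter "in place" via a temporary output file,
# e.g.: run_inplace "unpaper -t pbm" raw_pg-000_im_th35.pbm
# Hypothetical helper, not an unpaper feature.  $cmd is deliberately
# unquoted so it may carry its own options via word splitting.
run_inplace() {
  local cmd=$1 f=$2
  local tmp="${f}.tmp.$$"   # fresh name, so the tool won't see an existing output
  $cmd "$f" "$tmp" && mv -- "$tmp" "$f"
}
```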

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1921628

Title:
  unpaper: error: no input or output files given.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/unpaper/+bug/1921628/+subscriptions


[Bug 1921628] [NEW] unpaper: error: no input or output files given.

2021-03-28 Thread Bill Yikes
Public bug reported:

$ unpaper -t pbm --overwrite raw_pg-000_im_th35.pbm
unpaper: error: no input or output files given.

Try 'man unpaper' for more information.

$ ls -l raw_pg-000_im_th35.pbm
-rw-r--r-- 1 user user 1052713 Mar 28 09:45 raw_pg-000_im_th35.pbm

So the file exists, but unpaper falls over with this bogus error.  This
has worked for me previously but not today.  Thus, it may be difficult
to reproduce.

$ unpaper --version
6.1

** Affects: unpaper (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1921628

Title:
  unpaper: error: no input or output files given.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/unpaper/+bug/1921628/+subscriptions


[Bug 1916937] Re: unpaper truncates some images

2021-02-25 Thread Bill Yikes
Here's another sample input (attached) which shows the problem more
clearly.  Most right sidebar is lost.

** Attachment added: "grayscale input that unpaper botches"
   
https://bugs.launchpad.net/ubuntu/+source/unpaper/+bug/1916937/+attachment/5467253/+files/tiaa-004.pgm

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1916937

Title:
  unpaper truncates some images

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/unpaper/+bug/1916937/+subscriptions


[Bug 1916937] Re: unpaper truncates some images

2021-02-25 Thread Bill Yikes
This comment attaches the output file showing the missing postmark date.

** Attachment added: "output file"
   
https://bugs.launchpad.net/ubuntu/+source/unpaper/+bug/1916937/+attachment/5467123/+files/sample_b20.pbm

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1916937

Title:
  unpaper truncates some images

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/unpaper/+bug/1916937/+subscriptions


[Bug 1916937] Re: unpaper truncates some images

2021-02-25 Thread Bill Yikes
I should also mention the workaround that works, which is to use
ImageMagick instead of unpaper, as follows:

$ convert sample.pgm -rotate 90 -threshold 80% -type bilevel sample_th80.pbm

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1916937

Title:
  unpaper truncates some images

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/unpaper/+bug/1916937/+subscriptions


[Bug 1916937] [NEW] unpaper truncates some images

2021-02-25 Thread Bill Yikes
Public bug reported:

An envelope was scanned sideways to a PNG file, in grayscale.
Imagemagick "convert" was used to produce a PGM file.  Then unpaper was
used to rotate it and bilevel it.  This often works fine with no issues,
but exceptionally for one image in particular, unpaper cuts a box out of
the center of the image and shifts over the return address side of the
envelope, ultimately causing corruption.  This is the command used:

$ unpaper -v -t pbm -b 0.2 --pre-rotate 90 sample.pgm sample_b20.pbm

output was:

---
Processing sheet #1: sample.pgm -> sample_b20.pbm
pre-rotating 90 degrees.
input-file for sheet 1: sample.pgm
output-file for sheet 1: sample_b20.pbm
sheet size: 2747x1156
...
noise-filter ... deleted 20062 clusters.
blur-filter... deleted 387 pixels.
auto-masking (1373,578): 0,0,2746,1155 (invalid detection, using full page size)
gray-filter... deleted 13774552 pixels.
auto-masking (1373,578): -7,0,1458,1155
detected rotation left: [-7,0,1458,1155]: 0.003491
detected rotation right: [-7,0,1458,1155]: -0.054105
rotation average: -0.025307  deviation: 0.040726  rotation-scan-deviation 
(maximum): 0.017453  [-7,0,1458,1155]
out of deviation range - NO ROTATING
rotate (1373,578): 0.00
auto-masking (1373,578): -7,0,1458,1155
centering mask [-7,0,1458,1155] (1373,578): 647, 0
border detected: (0,25,1,21) in [0,0,2746,1155]
aligning mask [0,25,2745,1134] (0,22): 0, -3
writing output.
[image2 @ 0x561e104d1540] Using AVStream.codec to pass codec parameters to 
muxers is deprecated, use AVStream.codecpar instead.
[image2 @ 0x561e104d1540] Encoder did not produce proper pts, making some up.
---

The input file is attached.

It's easy to see the problem by noticing the postmark date on the input
image, then seeing that the month and day are missing from the resulting
image.

** Affects: unpaper (Ubuntu)
 Importance: Undecided
 Status: New

** Attachment added: "input file"
   https://bugs.launchpad.net/bugs/1916937/+attachment/5467122/+files/sample.pgm

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1916937

Title:
  unpaper truncates some images

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/unpaper/+bug/1916937/+subscriptions

