Bug#954977: "Daily Limit Exceeded" connection error

2020-03-26 Thread jnqnfe
Package: evolution
Version: 3.36.0-1
Severity: grave

I am currently seeing the following error come up in evolution when
trying to connect to my gmail accounts. Presumably it is a knock-on
effect of increased usage brought on by the COVID-19 lockdown.

Someone needs to get the quota bumped up.

Filing against evolution; apologies if it should have been filed
against gnome-online-accounts, I was not certain.

```
Daily Limit Exceeded. The quota will be reset at midnight Pacific Time
(PT). You may monitor your quota usage and adjust limits in the API
Console: 
https://console.developers.google.com/apis/api/caldav.googleapis.com/quotas?project=44438659992
```



Bug#929819: [firefox] package v67

2019-05-31 Thread jnqnfe
Package: firefox
Version: 66.0.5-1
Severity: critical

Firefox v67 was released 10 days ago and includes critical security
fixes (as I'm sure I don't need to point out, they always do). Please
update the package on the unstable channel.

I am aware that we are currently in a freeze period for the next stable
release, but many Debian users, like myself and my family, actually run
'unstable'/'Sid', and the long delays in getting critical security fixes
like this onto the unstable channel impact our security.

I understand that running unstable may not officially be considered a
correct use of Debian, but with the exception of server use, people
don't want to wait two years for new major versions of significant
userland packages. I have been using this channel for some years now
and rarely notice bugs introduced by it. The only real problem stems
from freezes that delay security updates.

Regards, :)



Bug#913271: segfault - broken rust compiling

2018-11-08 Thread jnqnfe
Package: llvm-7
Version: 1:7.0.1~+rc2-1
Severity: grave

I've just updated my Sid install and found that building Rust crates
with Cargo now fails with a segfault.

Initially I fired a bug report at cargo to kick things off, but I've
now discovered that it relates to the llvm-7 update, as switching the
llvm-7 packages back to the testing versions fixes the problem.
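
For reference, switching back amounts to something like the following (a
rough sketch; it assumes a 'testing' entry in sources.list, and the
llvm-7 binary package names listed are assumptions - adjust to whatever
is actually installed):

# Downgrade the llvm-7 packages to the versions currently in testing:
sudo apt-get install -t testing llvm-7 llvm-7-dev libllvm7

# Rebuild any crate from scratch to confirm the segfault is gone:
cargo clean && cargo build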



Bug#799675: [virtualbox] missing packages in sid

2015-09-21 Thread jnqnfe
Package: virtualbox
Version: 5.0.4-dfsg-3
Severity: grave

Certain virtualbox packages seem to have vanished from sid with the
latest update.

I noticed two packages being held back when performing an upgrade on a
sid host - virtualbox and virtualbox-qt (5.0.4-dfsg-2 -> 5.0.4-dfsg-3).
When doing a full-upgrade, aptitude complains about unmet dependencies
- they pre-depend on virtualbox-dkms 5.0.4-dfsg-3, but 5.0.4-dfsg-2 is
installed. A 5.0.4-dfsg-3 copy of virtualbox-dkms is clearly not
available. In fact the virtualbox-dkms package is no longer available
from the sid repo at all... Nor is the alternative virtualbox-source.

Similarly in a VM, the virtualbox-guest-x11 and virtualbox-guest-utils
updates were successfully installed, however no virtualbox-guest-dkms
update was available; the existing copy was marked as obsolete and
removed, and it is no longer available from the repo.
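
For what it's worth, the mismatch is visible directly from apt's
metadata; something along these lines (standard apt/aptitude tooling,
nothing package-specific assumed):

# Show which versions of the affected packages the sid archive currently offers:
apt-cache policy virtualbox virtualbox-qt virtualbox-dkms virtualbox-source

# Simulate the full-upgrade to see the unmet pre-dependency without changing anything:
sudo aptitude -s full-upgrade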



Bug#797227: segfault - gst_memory_unmap, libgstreamer

2015-08-31 Thread jnqnfe
On Mon, 2015-08-31 at 17:36 +0300, Sebastian Dröge wrote:
> On Mo, 2015-08-31 at 15:29 +0100, jnqnfe wrote:
> > 
> > > Can someone who is still able to reproduce this with 1.5.90 from
> > > experimental also install debug symbols for libc6, libglib2.0-0, 
> > > all the GStreamer packages and then
> > >  a) run with valgrind --track-origins=yes
> > >  b) get a new backtrace with gdb?
> > > 
> > > Thanks!
> > 
> > I do not believe anyone has reported being able to reproduce the 
> > issue with 1.5.90 packages.
> > 
> > Do you want me to revert to the troublesome packages and get a more
> > complete bt for you?
> 
> That would be great, yes. Thanks :)

Ok, pasted below! (full bt further down)

#0  0x7fff97f50ff0 in gst_memory_unmap (mem=0x7fff, 
info=info@entry=0x7fff87867980) at gstmemory.c:339
#1  0x7fff97f26f76 in gst_buffer_unmap (buffer=,
info=0x7fff87867980) at gstbuffer.c:1622
#2  0x7fff85442294 in gst_faad_set_format (dec=0x7fffc3399aa0
[GstFaad], caps=) at gstfaad.c:326
#3  0x7fff90d4be04 in gst_audio_decoder_do_caps (caps=, dec=) at gstaudiodecoder.c:866
#4  0x7fff90d4be04 in gst_audio_decoder_do_caps (dec=0x7fffc3399aa0
[GstFaad]) at gstaudiodecoder.c:1737
#5  0x7fff90d4f18f in gst_audio_decoder_chain (pad=0x7fffc2a0b6e0
[GstPad], parent=0x7fffc3399aa0 [GstFaad], buffer=0x7fffc2a0c840) at
gstaudiodecoder.c:1756
#6  0x7fff97f55e1f in gst_pad_push_data (data=,
type=, pad=) at gstpad.c:3830
#7  0x7fff97f55e1f in gst_pad_push_data (pad=0x7fffc2a0b280
[GstPad], type=2429874528, data=0x7fffc2a0c840) at gstpad.c:4063
#8  0x7fff97a9a564 in gst_base_parse_push_frame
(parse=0x7fffc331fa30 [GstAacParse], frame=0x7fff87867c60) at
gstbaseparse.c:2304
#9  0x7fff97a9b132 in gst_base_parse_chain (pad=0x7fff,
parent=0x7fffc331fa30 [GstAacParse], buffer=0x7fffc2a0c840) at
gstbaseparse.c:2824
#10 0x7fff97f55e1f in gst_pad_push_data (data=,
type=, pad=) at gstpad.c:3830
#11 0x7fff97f55e1f in gst_pad_push_data (pad=0x7fffc2a0ae20
[GstPad], type=2544478928, data=0x7fffc2a0c840) at gstpad.c:4063
#12 0x7fff908ccb4c in gst_multi_queue_loop (object=,
sq=, mq=) at gstmultiqueue.c:1229
#13 0x7fff908ccb4c in gst_multi_queue_loop (pad=0x7fff) at
gstmultiqueue.c:1484
#14 0x7fff97f83b61 in gst_task_func (task=0x7fffc28fc4d0 [GstTask])
at gsttask.c:316
#15 0x7fffee6a92e8 in g_thread_pool_thread_proxy (data=) at /tmp/buildd/glib2.0-2.44.1/./glib/gthreadpool.c:307
#16 0x7fffee6a8955 in g_thread_proxy (data=0x7fffc0a73ca0) at
/tmp/buildd/glib2.0-2.44.1/./glib/gthread.c:764
#17 0x77bc70a4 in start_thread (arg=0x7fff87868700) at
pthread_create.c:309
#18 0x7707c07d in clone () at
../sysdeps/unix/sysv/linux/x86_64/clone.S:111




#0  0x7fff97f50ff0 in gst_memory_unmap (mem=0x7fff, 
info=info@entry=0x7fff87867980) at gstmemory.c:339
__func__ = "gst_memory_unmap"
#1  0x7fff97f26f76 in gst_buffer_unmap (buffer=,
info=0x7fff87867980) at gstbuffer.c:1622
__func__ = "gst_buffer_unmap"
#2  0x7fff85442294 in gst_faad_set_format (dec=0x7fffc3399aa0
[GstFaad], caps=) at gstfaad.c:326
samplerate = 22050
channels = 2 '\002'
faad = 0x7fffc3399aa0 [GstFaad]
str = 0x7fff97370580
buf = 0x7fffc2a0c730
value = 
map = {memory = 0x7fff, flags = GST_MAP_READ, data =
0x7fffc2590270 "\025\b", , size = 5,
maxsize = 12, user_data = {0x7fffc23f0068, 0x7fff991b7460,
0x7fffee955808 <g_value_get_boxed+88>, 0x7fff97f551b0 },
_gst_reserved = {0x7fff878679f0, 0x7fffc2a0b6e0, 0x7fff97f4439f
<gst_event_parse_caps+127>, 0x7fffc3399aa0}}
cdata = 0x7fffc2590270 "\025\b", 
csize = 5
__func__ = "gst_faad_set_format"
__FUNCTION__ = "gst_faad_set_format"
#3  0x7fff90d4be04 in gst_audio_decoder_do_caps (caps=, dec=) at gstaudiodecoder.c:866
klass = 0x7fffc35d5c00
res = -1665004384
caps = 0x7fffc33e8370
#4  0x7fff90d4be04 in gst_audio_decoder_do_caps (dec=0x7fffc3399aa0
[GstFaad]) at gstaudiodecoder.c:1737
caps = 0x7fffc33e8370
#5  0x7fff90d4f18f in gst_audio_decoder_chain (pad=0x7fffc2a0b6e0
[GstPad], parent=0x7fffc3399aa0 [GstFaad], buffer=0x7fffc2a0c840) at
gstaudiodecoder.c:1756
ret = -1029650368
__PRETTY_FUNCTION__ = "gst_audio_decoder_chain"
#6  0x7fff97f55e1f in gst_pad_push_data (data=,
type=, pad=) at gstpad.c:3830
chainfunc = 0x7fff90d4f160 
parent = 0x7fffc3399aa0 [GstFaad]
peer = 0x7fffc2a0b6e0 [GstPad]
__PRETTY_FUNCTION__ = "gst_pad_push_data"
#7  0x7fff97f55e1f in gst_pad_push_data (pad=0x7fffc2a0b280
[GstPad], type=2429874528, data=0x7fffc2a0c840) at gstpad.c:4063
peer = 0x7fffc2a0b6e0 [GstPad]
__PRETTY_FUNCTI

Bug#797227: segfault - gst_memory_unmap, libgstreamer

2015-08-31 Thread jnqnfe
On Mon, 2015-08-31 at 12:39 +0300, Sebastian Dröge wrote:
> Hi,
> 
> On Sun, 30 Aug 2015 00:04:02 +0100 jnqnfe <jnq...@gmail.com> wrote:
> 
> > Upgrading to gstreamer1.0-plugins-bad from experimental as someone
> > suggested resulted in the following package changes:
> > 
> > The following NEW packages will be installed:
> > [...]
> > Iceweasel indeed no longer crashes now. Running in gdb I instead 
> > get 
> > a SIGPIPE failure.
> 
> Can you get a backtrace of that SIGPIPE? And without running in gdb,
> everything works fine for you now?

Yes, having switched some of the gstreamer packages to 1.5.90 from
experimental, iceweasel no longer crashes during normal use.

The SIGPIPE failure doesn't seem to relate to gstreamer, so I'll stick
a bt for that in a separate bug report, if one doesn't already exist
for it.

> The vimeo link, https://vimeo.com/55640554, pasted in this bug report
> earlier here does not cause any crashes or anything for me, it just
> works fine.

Did you play the video or just load the page? I believe someone said
that you need to play it for the crash to occur.

I should perhaps explain that I did not try the supplied link; I have
iceweasel set to load my previous tabs on startup (of which I currently
have quite a lot). After upgrading iceweasel along with gstreamer
-plugins-bad and dependencies, loading iceweasel then resulted in the
crash. I did not alter the set of tabs during testing. The crash
occurred running iceweasel in safe mode in gdb. Switching gstreamer
-plugins-bad to v1.5.90 from experimental, and loading iceweasel with
the exact same set of tabs as before, I had no issues. I checked in gdb
just for the hell of it and got the SIGPIPE failure (and still do), but
it's fine in normal operation.

> Can someone who is still able to reproduce this with 1.5.90 from
> experimental also install debug symbols for libc6, libglib2.0-0, all
> the GStreamer packages and then
>  a) run with valgrind --track-origins=yes
>  b) get a new backtrace with gdb?
> 
> Thanks!

I do not believe anyone has reported being able to reproduce the issue
with 1.5.90 packages.

Do you want me to revert to the troublesome packages and get a more
complete bt for you?
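
For reference, my understanding is that collecting the requested
diagnostics would amount to roughly the following (a sketch only; the
-dbg package names are assumptions and may not be exhaustive):

# Install debug symbols (adjust the list to the gstreamer packages actually in use):
sudo apt-get install libc6-dbg libglib2.0-0-dbg libgstreamer1.0-0-dbg

# a) run under valgrind (trace children too, since iceweasel is a wrapper script):
valgrind --trace-children=yes --track-origins=yes iceweasel -safe-mode 2> valgrind.log

# b) get a backtrace with gdb once it crashes:
gdb -ex run -ex "thread apply all bt full" --args iceweasel -safe-mode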



Bug#797227: segfault - gst_memory_unmap, libgstreamer

2015-08-28 Thread jnqnfe
Package: iceweasel
Version: 38.2.1esr-1
Severity: grave

The latest iceweasel update causes it to segfault shortly after
starting.

Tested in safe mode under gdb, as outlined by reportbug.

Backtrace:

#0  0x7fff9a3a8ff0 in gst_memory_unmap () from /usr/lib/x86_64
-linux-gnu/libgstreamer-1.0.so.0
No symbol table info available.
#1  0x7fff9a37ef76 in gst_buffer_unmap () from /usr/lib/x86_64
-linux-gnu/libgstreamer-1.0.so.0
No symbol table info available.
#2  0x7fff83841294 in ?? () from /usr/lib/x86_64-linux
-gnu/gstreamer-1.0/libgstfaad.so
No symbol table info available.
#3  0x7fff9034be04 in ?? () from /usr/lib/x86_64-linux
-gnu/libgstaudio-1.0.so.0
No symbol table info available.
#4  0x7fff9034f18f in ?? () from /usr/lib/x86_64-linux
-gnu/libgstaudio-1.0.so.0
No symbol table info available.
#5  0x7fff9a3ade1f in ?? () from /usr/lib/x86_64-linux
-gnu/libgstreamer-1.0.so.0
No symbol table info available.
#6  0x7fff99ef2564 in gst_base_parse_push_frame () from
/usr/lib/x86_64-linux-gnu/libgstbase-1.0.so.0
No symbol table info available.
#7  0x7fff99ef3132 in ?? () from /usr/lib/x86_64-linux
-gnu/libgstbase-1.0.so.0
No symbol table info available.
#8  0x7fff9a3ade1f in ?? () from /usr/lib/x86_64-linux
-gnu/libgstreamer-1.0.so.0
No symbol table info available.
#9  0x7fff8feccb4c in ?? () from /usr/lib/x86_64-linux
-gnu/gstreamer-1.0/libgstcoreelements.so
No symbol table info available.
#10 0x7fff9a3dbb61 in ?? () from /usr/lib/x86_64-linux
-gnu/libgstreamer-1.0.so.0
No symbol table info available.
#11 0x7fffee6a92e8 in ?? () from /lib/x86_64-linux-gnu/libglib
-2.0.so.0
No symbol table info available.
#12 0x7fffee6a8955 in ?? () from /lib/x86_64-linux-gnu/libglib
-2.0.so.0
No symbol table info available.
#13 0x77bc70a4 in start_thread (arg=0x7fff86c69700) at
pthread_create.c:309
__res = optimized out
pd = 0x7fff86c69700
now = optimized out
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140735454549760, 
-7064389635352992634, 0, 140737354125408, 140737193347328,
140735454549760, 7064233029594866822, 7064406699172390022},
mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev
= 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = optimized out
pagesize_m1 = optimized out
sp = optimized out
freesize = optimized out
__PRETTY_FUNCTION__ = start_thread
#14 0x7707c07d in clone () at
../sysdeps/unix/sysv/linux/x86_64/clone.S:111
No locals.


-- Package-specific info:

-- Extensions information
Name: Default theme
Location: /usr/lib/iceweasel/browser/extensions/{972ce4c6-7e08-4474
-a285-3208198ce6fd}
Package: iceweasel
Status: enabled

Name: English (GB) Language Pack locale
Location: /usr/lib/iceweasel/browser/extensions/langpack-en
-g...@iceweasel.mozilla.org.xpi
Package: iceweasel-l10n-en-gb
Status: enabled

Name: HTTPS-Everywhere
Location: ${PROFILE_EXTENSIONS}/https-everywhere-...@eff.org
Status: enabled

Name: NoScript
Location: ${PROFILE_EXTENSIONS}/{73a6fe31-595d-460b-a920
-fcc0f8843232}.xpi
Status: enabled

Name: Video DownloadHelper
Location: ${PROFILE_EXTENSIONS}/{b9db16a4-6edc-47ec-a1f4
-b86292ed211d}.xpi
Status: user-disabled

-- Plugins information
Name: Gnome Shell Integration
Location: /usr/lib/mozilla/plugins/libgnome-shell-browser-plugin.so
Package: gnome-shell
Status: enabled

Name: iTunes Application Detector
Location: /usr/lib/mozilla/plugins/librhythmbox-itms-detection
-plugin.so
Package: rhythmbox-plugins
Status: disabled


-- Addons package information
ii  gnome-shell     3.16.3-1      amd64  graphical shell for the GNOME des
ii  iceweasel       38.2.1esr-1   amd64  Web browser based on Firefox
ii  iceweasel-l10n  1:38.2.1esr-  all    English (United Kingdom) language
ii  rhythmbox-plug  3.2.1-1       amd64  plugins for rhythmbox music playe

-- System Information:
Debian Release: stretch/sid
  APT prefers unstable
  APT policy: (500, 'unstable')
Architecture: amd64 (x86_64)

Kernel: Linux 4.1.0-2-amd64 (SMP w/8 CPU cores)
Locale: LANG=en_GB.utf8, LC_CTYPE=en_GB.utf8 (charmap=UTF-8)
Shell: /bin/sh linked to /bin/dash
Init: systemd (via /run/systemd/system)

Versions of packages iceweasel depends on:
ii  debianutils         4.5.1
ii  fontconfig          2.11.0-6.3
ii  libasound2          1.0.29-1
ii  libatk1.0-0         2.16.0-2
ii  libc6               2.19-19
ii  libcairo2           1.14.2-2
ii  libdbus-1-3         1.8.20-1
ii  libdbus-glib-1-2    0.102-1
ii  libevent-2.0-5      2.0.21-stable-2
ii  libffi6             3.2.1-3
ii  libfontconfig1      2.11.0-6.3
ii  libfreetype6        2.5.2-4
ii  libgcc1             1:5.2.1-15
ii  libgdk-pixbuf2.0-0  2.31.5-1
ii  libglib2.0-0        2.44.1-1.1
ii  libgtk2.0-0         2.24.28-1
ii  libhunspell-1.3-0   1.3.3-3+b1
ii  libnspr4            2:4.10.9-1
ii  libnss3

Bug#794912: libcmis-0.5-5: please use libboost-date-time 1.57 or 1.58 ASAP

2015-08-07 Thread jnqnfe
Hey. Although there may not be security issues within this package
itself, and while the descriptions of severity [1] may not perfectly
cover this unfortunate scenario, an update to this package (and others)
is urgently needed in order for Sid users to be able to install
critical security patches (since I imagine temporarily rolling back the
gcc5 transition isn't going to happen). Since this bug report is all
about fixing this dependency breakage, I feel it is perfectly
appropriate to raise the severity as I have, to help ensure that the
maintainers take proper note of how urgently this issue needs
resolving. Frankly I don't give a damn if the maintainer disagrees with
the severity; they can ignore or change it, and at least it may have
helped grab their attention. I think it's worth risking making them a
little annoyed over a possibly inappropriately set severity level if it
helps make them aware of the security issues going on here.

Regards :)

[1] https://www.debian.org/Bugs/Developers#severities

On Sat, 2015-08-08 at 02:46 +0200, Christoph Anton Mitterer wrote:
> Hey.
> 
> I appreciate that you try to push in that matter,... but strictly
> speaking, there is no security issue in this package, and also the
> severity wouldn't be justified.
> 
> Some maintainers may not be too happy about that...
> 
> 
> Cheers,
> Chris.





Bug#794913: libphonenumber6: please use libboost-date-time 1.57 or 1.58 ASAP

2015-08-07 Thread jnqnfe
Hey. Although there may not be security issues within this package
itself, and while the descriptions of severity [1] may not perfectly
cover this unfortunate scenario, an update to this package (and others)
is urgently needed in order for Sid users to be able to install
critical security patches (since I imagine temporarily rolling back the
gcc5 transition isn't going to happen). Since this bug report is all
about fixing this dependency breakage, I feel it is perfectly
appropriate to raise the severity as I have, to help ensure that the
maintainers take proper note of how urgently this issue needs
resolving. Frankly I don't give a damn if the maintainer disagrees with
the severity; they can ignore or change it, and at least it may have
helped grab their attention. I think it's worth risking making them a
little annoyed over a possibly inappropriately set severity level if it
helps make them aware of the security issues going on here.

Regards :)

[1] https://www.debian.org/Bugs/Developers#severities

On Sat, 2015-08-08 at 02:47 +0200, Christoph Anton Mitterer wrote:
> Hey.
> 
> I appreciate that you try to push in that matter,... but strictly
> speaking, there is no security issue in this package, and also the
> severity wouldn't be justified.
> 
> Some maintainers may not be too happy about that...
> 
> 
> Cheers,
> Chris.





Bug#778712: libparted2: Breakage of RAID GPT header

2015-02-20 Thread jnqnfe
Control: severity -1 normal
Control: close -1
thanks

On Fri, 2015-02-20 at 15:12 -0500, Phillip Susi wrote:
> I'm sorry; I misread what you said.  I thought you said you had
> removed the information about the individual disks that were members
> of the array.

No problem.

> At this point the array contains a protective MBR that lists one
> partition of type ee that occupies the whole array.  Fdisk looks at
> sdb and sees the same thing.  Following the MBR is the GPT, part of
> which is missing from sdb, so fdisk treats it as corrupt, and falls
> back to printing only the MBR.

Yes, I'm with you.

> > So the phantom sdb1 device was not there when only fdisk was used
> > (fdisk4), but does appear after using parted, whether using parted
> > to create the partition table (fdisk 2, fdisk3), or as in the last
> > test, only to view information (parted -l) after using fdisk
> > (fdisk5).
> 
> I see now.  I think you are running into a cache aliasing issue here.
> That is to say, that the MBR of sdb was read into the cache while the
> drive was still blank, and when parted creates the gpt on the array,
> it does in fact create that protective mbr partition, but fdisk does
> not see it on sdb yet, since it is still holding the cached data from
> earlier.  Note that at this point fdisk reports that there is no
> partition table of any kind, not just no sdb1.  If you run blockdev
> --flushbufs and then repeat the fdisk -l, sdb1 should show up.

I agree now that this might just be an fdisk caching issue, but I don't
think this bit is quite as you describe. The actions taken and results
were as follows:
1) RAID array recreated.
2) fdisk used to create GPT table on md126.
3) fdisk -l, showing no issues and no info from MBR.
4) parted -l, pointing out corrupt GPT table.
5) fdisk -l, now showing info from the MBR and the error.

So on the basis that fdisk writes the same protective MBR that parted
does, it seems fdisk is failing to flush its cache and see the problem
when asked to display info immediately following creation of the
partition tables. Then either parted triggered a cache flush (a shared
cache, I presume?), or else fdisk managed to flush the cache the second
time around.

So in conclusion, this whole confusing mess resulted from a combination
of:
1) parted being incapable of understanding RAID array membership.
2) fdisk also being incapable of understanding RAID array membership.
3) fdisk failing to flush a cache of partition info.

I'll reduce the severity of this bug report and close it now then.

Thank you for helping get to the bottom of this.

I will try to do a little further testing tomorrow to try and nail down
more precise details of the caching behaviour, and then report that
against fdisk along with a request for fdisk to also add understanding
of RAID array membership.
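
For that follow-up, the check suggested above boils down to something
like this (a small sketch, using /dev/sdb as in this report):

# Flush the kernel's cached view of the member disk, then re-check with fdisk:
sudo blockdev --flushbufs /dev/sdb
sudo fdisk -l /dev/sdb
# If the phantom sdb1 only shows up after the flush, it was indeed a stale-cache artefact.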

Thanks again :)





Bug#778712: libparted2: Breakage of RAID GPT header

2015-02-20 Thread jnqnfe
On Fri, 2015-02-20 at 10:16 -0500, Phillip Susi wrote:
> On 2/19/2015 2:24 PM, jnqnfe wrote:
> > Firstly, I am not running fdisk or parted on the raw member disks,
> > I am simply running generic 'fdisk -l' and 'parted -l' commands,
> > which return information about all disks. To simplify matters I
> > removed information about other disks in my system from the output
> > I supplied, leaving only that pertaining to the array and array
> > member disks.
> 
> You did not; the output you supplied listed both sda and sdb.

What? I very carefully went through every one of them before sending,
to ensure that only information about the array (md126) and the array
members (sdb and sdc) was included. I have just checked back over every
one of the files attached to the original bug report and none of them
contains any info about sda.

Do please note that in some of them sdc has been output before sdb, so
perhaps you didn't look carefully enough and misread sdc for sda in
these cases? I really don't know otherwise why on earth you think I've
sent info about sda.

> The GPT is 16 KiB but starts on sector 2, hence the last 2 sectors
> fall onto the second disk.

Okay, I'll take your word on that; it sufficiently explains why parted
thinks there's corruption.
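
For my own notes, the arithmetic works out as follows, assuming
512-byte sectors and the standard 16 KiB GPT entry array starting at
LBA 2:

# 16 KiB entry array = 32 sectors, so it occupies LBA 2..33; the first
# 16 KiB stripe of the array only covers LBA 0..31 on the first member,
# hence the last 2 sectors of the entry array land on the second member.
echo $(( 16384 / 512 ))   # 32 sectors in the entry array
echo $(( 2 + 32 - 1 ))    # 33 = last LBA of the entry array, beyond sector 31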

> Because parted does not know anything about raid.  I suppose it might
> be nice if it could detect it and ignore those drives, but doing so
> would require adding a dependency on udev or blkid.  I'll mull the
> idea over.

Okay. I do think that it would be a very good idea for parted to do
this.

We can put that stuff to one side then and focus on this phantom sdb1
device...

> > Furthermore, if you look at the fdisk output I supplied, you
> > should notice that when I created the partition table with fdisk,
> > everything was initially fine; no 'dev/sdb1' device exists (see
> > fdisk4). However after running 'parted -l' to see what parted makes
> > of the result of using fdisk, and then re-running 'fdisk -l' (I
> > just happened to do so to be certain everything was fine, and found
> > to my surprise it was not), you can see that now all of a sudden a
> > /dev/sdb1' device exists.
> 
> sdb1 shows up in fdisk2.

Yes, but please review the initial bug report for when I created each of
the output files. I ran three tests using different tools to create the
GPT headers, first with gparted, then with parted, then with fdisk.
Before each test I deleted and recreated the RAID array to try and
achieve a fresh start (which checking fdisk and parted info after doing
so confirmed was a successful means of resetting things). Files fdisk1
and parted1 demonstrate the state of things directly after recreating
the RAID array, without yet attempting to write the partition table.

So, fdisk2 and parted2 show the state of things after using gparted to
write a GPT table to the array, and thus this phantom sdb1 device
exists, which fdisk doesn't like.

Starting afresh, I then did the same thing but using parted. You can see
the state of things afterwards in fdisk3 and parted3. Again, as you can
see in fdisk3, this phantom sdb1 device exists which fdisk doesn't like.
No difference from using gparted.

Finally I started things afresh once more and used fdisk to create the
GPT partition table. The state of things after this according to fdisk
(which I checked first) can be seen in fdisk4, showing no sign of this
phantom sdb1 device. So everything seemed fine at that point according
to fdisk. I then checked the state of things with parted, which you can
see in file parted4. Then I checked fdisk one more time, and that
phantom sdb1 device is back, as can be seen in fdisk5.

So the phantom sdb1 device was not there when only fdisk was used
(fdisk4), but does appear after using parted, whether using parted to
create the partition table (fdisk 2, fdisk3), or as in the last test,
only to view information (parted -l) after using fdisk (fdisk5).

As I said in my last email, I am not outright claiming that parted is
definitely directly responsible for creating this phantom device, but it
is a pretty damning coincidence that it has so far only appeared after
running parted.

> The moment you created the GPT table on the raid array, it included
> the protective MBR partition, and that is what fdisk is reporting
> since the GPT is corrupt ( when viewed through the lens of the single
> disk ).  lsblk uses the blkid database which does recognize that the
> disks are array components and filters them out.

Okay, I am aware that a protective MBR may be written alongside the GPT
tables and that the protective MBR may contain a partition entry
covering the entire disk. So you're suggesting that this may be what
this phantom sdb1 device is? Interesting.
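
If it helps, a quick way to sanity-check that interpretation (a sketch;
the exact TYPE string blkid reports depends on the fake-raid flavour,
for Intel firmware RAID it is typically isw_raid_member):

# blkid should identify the member disks as array components rather than GPT disks:
sudo blkid /dev/sdb /dev/sdc
# lsblk filters members via the same database, which is why no sdb1 shows up there:
lsblk -o NAME,SIZE,TYPE,FSTYPE /dev/sdb /dev/md126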

But, what is the explanation for it not appearing in fdisk output after
using fdisk to create the GPT tables in test #3? And furthermore what is
the explanation for it then suddenly appearing after then running
'parted -l'? And if that is the case then that would imply

Bug#778712: libparted2: Breakage of RAID GPT header

2015-02-19 Thread jnqnfe
On Wed, 2015-02-18 at 23:27 -0500, Phillip Susi wrote:
> All of the error messages shown in the logs you sent so far involve
> the raw disks ( sdb, etc ) rather than the raid array.  You certainly
> should not be running fdisk or parted on the raw disk, and responding
> to the error messages by saying it should fix the problem ( since the
> problem is only a result of looking at an individual disk instead of
> the whole array ).

Firstly, I am not running fdisk or parted on the raw member disks, I am
simply running generic 'fdisk -l' and 'parted -l' commands, which return
information about all disks. To simplify matters I removed information
about other disks in my system from the output I supplied, leaving only
that pertaining to the array and array member disks.

I disagree that the problems reported against the member disks should
just be ignored.

Why does parted think and report that one of the member disks has
corrupt GPT tables?
1) The array was set up with 16KB block striping, which is surely plenty
to contain the entire MBR block and primary GPT table within the one
member disk; so it's not as if this results from part of the GPT header
being on one disk and the rest on another, which otherwise would
understandably produce such an error. Unless I am wrong and this is
happening, why does parted think there is a corruption?
2) Why is parted examining GPT headers of member disks at all? It should
recognise that these disks are members of a RAID array and thus skip
looking for and reading partition headers on them, otherwise it just
results in confusion for the user (and potentially other issues if it
changes anything). Parted's behaviour should be changed here accordingly
to skip seeking this information on array members.

Furthermore, if you look at the fdisk output I supplied, you should
notice that when I created the partition table with fdisk, everything
was initially fine; no '/dev/sdb1' device exists (see fdisk4). However
after running 'parted -l' to see what parted makes of the result of
using fdisk, and then re-running 'fdisk -l' (I just happened to do so to
be certain everything was fine, and found to my surprise it was not),
you can see that now all of a sudden a '/dev/sdb1' device exists.

The 'GPT PMBR size mismatch' error reported by fdisk is related to this
device, which per its name is apparently a sub-component of one of the
array member disks, but I did not create any partition, and this device
does not appear in lsblk output. So where does this 'sdb1' device come
from? As just stated, it does not exist after purely creating the
partition table with fdisk, but it does suddenly exist after running
'parted -l'. Perhaps I am wrong and parted is not actually messing up
the actual partition data on the disk (I haven't examined the disk),
perhaps it is simply generating and storing information about this
phantom device in the file system somewhere, which fdisk is then picking
up on. So, what is going on here?

> You stated that parted modified the disk when you didn't tell it to,
> but did not show exactly what command you gave that lead to this, and
> more importantly, what if any, error messages parted threw and how you
> responded to them.

To be more clear, parted seems to be creating some phantom 'sdb1'
device, which then fdisk isn't happy with. As described above, I have no
idea why parted is creating this. I do not know absolutely that it is
parted that created it, but it does consistently appear after using
parted, which makes it pretty likely. I also do not know for certain
that this device is something actually being written to disk, or whether
it is being saved into the filesystem, but it is being stored somewhere
for fdisk to then discover and complain about, and this persists across
reboots.

As already stated, the necessary details of what I did are described in
my previous message. Here is a small amount of additional detail
however:
1) When checking fdisk, I specifically ran 'fdisk -l'. To then generate
the output files I simply ran 'fdisk -l > fdisk1 2>&1'. I then edited
the output file in gedit to remove details about other disks that would
be irrelevant.
2) For parted output, I similarly ran 'parted -l' and 'parted -l >
parted1 2>&1' respectively, and edited the output files with gedit as
with fdisk.
3) See the initial bug report for further detail (e.g. the list and
order of actions taken). I have excluded nothing that should be at all
relevant; I may have had my mail client open, but as I say, nothing
that could be relevant has been left out.
4) As already described, the only errors that occurred are those
present in the output files attached to the initial bug report. I
responded to them only as exactly described in the initial bug report,
i.e. I saved output from 'fdisk -l' and 'parted -l' into files to attach
to the bug report, as described above and previously.



Bug#778697: libparted2: error on opening with regard to RAID member devices

2015-02-18 Thread jnqnfe
Package: libparted2
Severity: grave

With a 'fake RAID' RAID0 device constructed using motherboard firmware,
which has a GPT partition table set up using fdisk (gparted failed me -
see #778683), opening gparted now results in errors relating to the
RAID member disks.

Specifically, I firstly get the following error regarding the first disk
in the RAID array:
Title: Libparted bug found!
Message: Invalid argument during seek for read on /dev/sdb

Clicking on 'ignore' then results in this followup message:
Title: Libparted bug found!
Message: The backup GPT table is corrupt, but the primary appears OK, so
that will be used.

fdisk seems perfectly happy with the setup. Presumably libparted is not
processing the member disks as actually being part of an array.

Marking as grave on the off chance of data loss, with libparted not
processing things properly here and the possibility of users fiddling
with things in gparted as a result (e.g. trying to correct the
'unrecognised partition table' status of member disks, wiping out their
array).

Should array members even be listed in gparted?





Bug#778697: libparted2: error on opening with regard to RAID member devices

2015-02-18 Thread jnqnfe
These errors actually disappeared after a reboot :/ ...

I guess that means I should have refreshed something, or fdisk should
have refreshed something, and thus this can be closed?





Bug#778712: libparted2: Breakage of RAID GPT header

2015-02-18 Thread jnqnfe
Package: libparted2
Version: 3.2-6
Severity: grave

libparted2 breaks my RAID GPT header!

There appears to be a disagreement between parted and fdisk as to the
correct size. fdisk is happy after creating a GPT partition table, but
parted is not, and seems to be forcibly applying what it believes to be
correct (ignoring the fact that it was only asked to display info, not
modify anything). Having done so, however, parted is still not happy,
and now neither is fdisk. Letting parted create the partition table
just leaves both unhappy, reporting the same issues.

In testing reproducibility of my issue here I deleted and recreated the
array, and proceeded to test as documented below, which explains things
more clearly.

**Please pay particular attention to what happened at the very end of
test #3, which is why I marked this as severity grave!

I would appreciate a quick turnaround on this issue, so I can get on
with actually using this RAID array without fear of breaking it simply
by running parted -l or opening gparted.

Background
==========
I have a 'fake-raid' RAID0 array, created from two HDDs using my
motherboard firmware. This is not used for root, just data.

sdb and sdc are the RAID members here and the RAID device is md126.

fdisk -l and parted -l output (cut down to only the devices in question)
generated during this procedure is attached.

Test#1 - gparted
================
1) Deleted and recreated the RAID array (in MB firmware).
2) Checked fdisk -l and parted -l (see fdisk1 and parted1 output files).
fdisk is happy, parted only complains about unrecognised disk labels.
3) In gparted, with device md126 selected, I asked it to create a GPT
partition table. This was done with no errors reported.

gparted shows warnings for both sdb and sdc. The warning for sdc is just
an unrecognised disk label warning, but the warning against sdb is:
'Both the primary and backup GPT tables are corrupt.'!

Checking fdisk -l, I see a 'GPT PMBR size mismatch' error.

I created the fdisk2 and parted2 files at this stage.

For some reason fdisk now sees a device '/dev/sdb1', with size equal to
that of the full array. I had not created any partitions yet.

Test#2 - parted
===============
1) Deleted and recreated the RAID array (in MB firmware).
2) Checked fdisk and parted to make sure things had been reset
correctly, they were.
3) Ran: sudo parted /dev/md126 mktable GPT
This ran with no errors directly reported.
4) Checked parted -l, which reported the same corruption issue above
(see parted3).
5) Checked fdisk -l, which reported the GPT PMBR size mismatch error as
before (see fdisk3).

Test#3 - fdisk
==============
1) Deleted and recreated the RAID array (in MB firmware).
2) Checked fdisk and parted to make sure things had been reset
correctly, they were.
3) Ran: sudo fdisk /dev/md126
g (create a new empty GPT partition table)
v (verify) - no errors, looked good to me
w (write) - no errors:
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
4) Checked fdisk -l output, which looks absolutely fine (see fdisk4).
5) Checked parted -l. This still complains about a corrupt GPT header.
(See parted4).
6) Happened to check fdisk -l again; now it's reporting the GPT PMBR
size mismatch error from before (see fdisk5).

So it seems that the 'parted -l' command tried to forcibly correct the
issue it was unhappy with, breaking what fdisk appeared to have done
correctly.
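
If useful, one way to pin down whether 'parted -l' really rewrites
anything would be to hash the on-disk GPT region before and after
running it (a sketch; 34 sectors covers the protective MBR, the GPT
header and the default entry array):

# Hash the primary GPT region of the array:
sudo dd if=/dev/md126 bs=512 count=34 2>/dev/null | sha256sum
sudo parted -l > /dev/null
# Re-hash; a changed digest would confirm that 'parted -l' modified the disk:
sudo dd if=/dev/md126 bs=512 count=34 2>/dev/null | sha256sum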

Disk /dev/sdc: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdb: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/md126: 1.8 TiB, 2000381018112 bytes, 3906994176 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 16384 bytes / 32768 bytes

GPT PMBR size mismatch (3906994175 != 1953525167) will be corrected by w(rite).

Disk /dev/sdc: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdb: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x

Device Boot StartEndSectors  Size Id Type
/dev/sdb1   1 3906994175 3906994175  1.8T ee GPT

Disk /dev/md126: 1.8 TiB, 2000381018112 bytes, 3906994176 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 16384 bytes / 32768 bytes
Disklabel type: gpt

Bug#778712: libparted2: Breakage of RAID GPT header

2015-02-18 Thread jnqnfe
On Wed, 2015-02-18 at 16:35 -0500, Phillip Susi wrote:
> On 2/18/2015 4:05 PM, jnqnfe wrote:
> > Background = I have a 'fake-raid' RAID0 array,
> > created from two HDDs using my motherboard firmware. This is not
> > used for root, just data.
> 
> FYI, unless you have to dual boot with windows, you should avoid using
> fakeraid and stick with conventional linux software raid, which is
> much better supported.

Fine, fair enough, I am not dual booting so I may switch as you suggest.
Thanks for the tip.

> > sdb and sdc are the RAID members here and the RAID device is
> > md126.
> 
> Then you need to only manipulate md126 and ignore sdb and sdc.  Most
> of what you seem to be reporting involves looking directly at the
> individual disks, which you must not do as that will present a
> partial/corrupt view of the raid array.  In other words, if the first
> few sectors of the raid array map to sdb, then sdb will appear to have
> a partition table in its sector 0 that describes a disk that is twice
> the size, since this partition table is actually describing the raid
> array and not the individual disk.

I am not doing anything at all to the member disks; I am only
manipulating the array (md126) and providing the output of 'fdisk -l' /
'parted -l' (with unnecessary info about other disks removed).

> The one thing you mention that I can't write off as user error is but
> parted is not and seems to be forcibly applying what it believes to be
> correct (ignoring the fact that it was only asked to display info, not
> modify anything).  Can you provide more details here?  Exactly what
> command did you run and what changed before vs. after?  Parted should
> not be modifying anything on the disk unless you tell it to.  Normally
> it will throw a warning telling you something is wrong with the disk
> and ask if you want it to fix it and you have to answer fix for it
> to modify the disk.

I did only exactly as described in my previous message, nothing more,
nothing less.





Bug#718225: live-build should authenticate files it downloads

2015-01-03 Thread jnqnfe
Control: found -1 0.99-1

This security issue stretches back as far as the git history goes, to
0.99-1. Updating the affected-versions record accordingly, which should
also help produce a correct listing against Debian releases in the
security trackers...

