Hi everyone,

A teammate ran a test and confirmed that the patch Juerg attached solves
the problem - on an MBR disk over 2TB, it limits the automatic growth to
2TB rather than destroying the partition table.
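For context, the 2TB ceiling isn't arbitrary: an MBR partition entry stores its size as a 32-bit count of 512-byte sectors, which is exactly the mbr_max_512 value in the patch below. A quick sketch of the arithmetic:

```shell
# MBR stores partition start/size as 32-bit counts of 512-byte sectors,
# so the highest addressable sector is 2^32 - hence the ~2 TiB ceiling.
mbr_max_512=$((2 ** 32))            # 4294967296 sectors
limit_bytes=$((mbr_max_512 * 512))  # 2199023255552 bytes = 2 TiB
echo "${mbr_max_512} sectors = ${limit_bytes} bytes"
```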

Thomas, if you agree that this issue is Severity: important and believe
that the release team would also agree, would you please upload this
quite minimal fix to unstable? I can file a corresponding bug in the BTS
tomorrow, after which you can make an unblock request. The release policy
allows a fix of this severity to migrate to Jessie by Dec 5 for optional
or extra packages, so if nothing goes wrong, we are not too late.

Thanks so much!

- Jimmy
On Oct 22, 2014 9:36 AM, "Juerg Haefliger" <[email protected]> wrote:

> On Wed, Oct 22, 2014 at 4:29 PM, Thomas Goirand <[email protected]> wrote:
> >
> > On 10/22/2014 03:34 PM, Juerg Haefliger wrote:
> > >
> > >
> > > On Wed, Oct 22, 2014 at 8:53 AM, Thomas Goirand <[email protected]
> > > <mailto:[email protected]>> wrote:
> > >>
> > >> Please don't top-quote.
> > >>
> > >> On 10/22/2014 01:49 PM, Jimmy Kaplowitz wrote:
> > >> > In this case, the current behavior is actively destructive to the
> > >> > partition table on disks over 2 TB - critical data loss bugs would
> > >> > be RC bugs in scope for a stable update, and so are also probably
> > >> > in scope for work at this point in the process.
> > >>
> > >> I don't agree with the above. In the cloud case, there's no data loss
> > >> if you boot a VM with an image; what happens is that it simply doesn't
> > >> work in the worst case, but it will not destroy your HDD image.
> > >>
> > >> > While I agree that maxing out the
> > >> > partition at 2 TB is far from ideal, it's much better than the
> > >> > status quo of destroying the usability of the disk. We don't have
> > >> > a way to use the disk size in deciding whether to install
> > >> > cloud-initramfs-growroot; while our images are 10 GB (sparse),
> > >> > they are often used with much larger disks.
> > >>
> > >> FYI, the upstream commit that adds the "do not destroy the HDD if
> > >> it's > 2TB" behavior goes this way:
> > >>
> > >> diff --git a/bin/growpart b/bin/growpart
> > >> index bbac2fe..49a1792 100755
> > >> --- a/bin/growpart
> > >> +++ b/bin/growpart
> > >> @@ -160,6 +160,7 @@ mbr_resize() {
> > >>         local dump_mod=${TEMP_D}/dump.mod
> > >>         local tmp="${TEMP_D}/tmp.out"
> > >>         local err="${TEMP_D}/err.out"
> > >> +       local mbr_max_512="4294967296"
> > >>
> > >>         local _devc cyl _w1 heads _w2 sectors _w3 tot dpart
> > >>         local pt_start pt_size pt_end max_end new_size change_info
> > >> @@ -174,6 +175,9 @@ mbr_resize() {
> > >>         tot=$((${cyl}*${heads}*${sectors}))
> > >>
> > >>         debug 1 "geometry is ${MBR_CHS}. total size=${tot}"
> > >> +       [ "$tot" -gt "$mbr_max_512" ] &&
> > >> +               debug 1 "WARN: disk is larger than 2TB. additional space will go unused."
> > >> +
> > >>         rqe sfd_dump sfdisk ${MBR_CHS} --unit=S --dump "${DISK}" \
> > >>                 >"${dump_out}" ||
> > >>                 fail "failed to dump sfdisk info for ${DISK}"
> > >> @@ -217,6 +221,10 @@ mbr_resize() {
> > >>                 [ -n "${max_end}" ] ||
> > >>                 fail "failed to get max_end for partition ${PART}"
> > >>
> > >> +       if [ "$max_end" -gt "$mbr_max_512" ]; then
> > >> +               max_end=$mbr_max_512;
> > >> +       fi
> > >> +
> > >>         debug 1 "max_end=${max_end} tot=${tot} pt_end=${pt_end}" \
> > >>                 "pt_start=${pt_start} pt_size=${pt_size}"
> > >>         [ $((${pt_end})) -eq ${max_end} ] &&
> > >>
> > >> In the current version of cloud-utils, there's no such thing as
> > >> "max_end", and patching it really isn't trivial (please read the
> > >> source code and try to make sense of it, then tell me what you
> > >> think...).
> > >
> > > Hmm... There is max_end in the current version. How about this:
> > >
> > > --- cloud-utils-0.26/bin/growpart.orig    2014-10-22 09:23:07.455279618 +0200
> > > +++ cloud-utils-0.26/bin/growpart    2014-10-22 09:26:46.627273142 +0200
> > > @@ -119,6 +119,7 @@ tmp="${TEMP_D}/tmp.out"
> > >  err="${TEMP_D}/err.out"
> > >  orig_bin="${TEMP_D}/orig.save"
> > >  RESTORE_HUMAN="${TEMP_D}/recovery"
> > > +mbr_max_512="4294967296"
> > >
> > >  # --show-pt-geometry outputs something like
> > >  #     /dev/sda: 164352 cylinders, 4 heads, 32 sectors/track
> > > @@ -130,6 +131,8 @@ sfdisk "${disk}" --show-pt-geometry > "$
> > >  tot=$((${cyl}*${heads}*${sectors}))
> > >
> > >  debug 1 "geometry is $CHS. total size=${tot}"
> > > +[ "$tot" -gt "$mbr_max_512" ] &&
> > > +    debug 1 "WARN: disk is larger than 2TB. additional space will go unused."
> > >  sfdisk ${CHS} -uS -d "${disk}" > "${dump_out}" 2>"${err}" ||
> > >      fail "failed to dump sfdisk info for ${disk}"
> > >
> > > @@ -166,6 +169,10 @@ max_end=$(awk '
> > >      [ -n "${max_end}" ] ||
> > >      fail "failed to get max_end for partition ${part}"
> > >
> > > +if [ "$max_end" -gt "$mbr_max_512" ]; then
> > > +    max_end=$mbr_max_512
> > > +fi
> > > +
> > >  debug 1 "max_end=${max_end} tot=${tot} pt_end=${pt_end}
> > > pt_start=${pt_start} pt_size=${pt_size}"
> > >  [ $((${pt_end})) -eq ${max_end} ] &&
> > >     nochange "partition ${part} is size ${pt_size}. it cannot be grown"
> >
> > Ah, cool!
> >
> > Could you send this as an attachment, so we can just apply it to the
> > current version? Then maybe Jimmy can test it, then we (or even *I*)
> > upload the fix...
>
> Attached.
>
> ...Juerg
>
>
> > Cheers,
> >
> > Thomas
> >
> >
> > --
> > To UNSUBSCRIBE, email to [email protected]
> > with a subject of "unsubscribe". Trouble? Contact
> [email protected]
> > Archive: https://lists.debian.org/[email protected]
> >
>
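For anyone who wants to sanity-check the clamp without a multi-terabyte disk, here is a minimal standalone sketch of the logic both patches add (the sector value for max_end below is hypothetical; the real script derives it from sfdisk's dump output):

```shell
# Clamp max_end to 2^32 512-byte sectors, as the patch does.
mbr_max_512="4294967296"
max_end="5860533168"   # hypothetical: last usable sector of a ~3 TB disk

if [ "$max_end" -gt "$mbr_max_512" ]; then
        max_end=$mbr_max_512
fi

echo "max_end=${max_end}"
```

With a value below the limit, max_end passes through unchanged; above it, growth stops at the 2 TB boundary instead of wrapping around and corrupting the partition table.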
