[Sts-sponsors] [Bug 1663280] Re: Serious performance degradation of math functions

2019-02-05 Thread Brian Murray
Hello Oleg, or anyone else affected,

Accepted glibc into xenial-proposed. The package will build now and be
available at https://launchpad.net/ubuntu/+source/glibc/2.23-0ubuntu11
in a few hours, and then in the -proposed repository.

Please help us by testing this new package.  See
https://wiki.ubuntu.com/Testing/EnableProposed for documentation on how
to enable and use -proposed.  Your feedback will aid us in getting this
update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug,
mentioning the version of the package you tested and change the tag from
verification-needed-xenial to verification-done-xenial. If it does not
fix the bug for you, please add a comment stating that, and change the
tag to verification-failed-xenial. In either case, without details of
your testing we will not be able to proceed.

Further information regarding the verification process can be found at
https://wiki.ubuntu.com/QATeam/PerformingSRUVerification .  Thank you in
advance for helping!

N.B. The updated package will be released to -updates after the bug(s)
fixed by this package have been verified and the package has been in
-proposed for a minimum of 7 days.

** Changed in: glibc (Ubuntu Xenial)
   Status: In Progress => Fix Committed

** Tags added: verification-needed verification-needed-xenial

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1663280

Title:
  Serious performance degradation of math functions

Status in GLibC:
  Fix Released
Status in glibc package in Ubuntu:
  Fix Released
Status in glibc source package in Xenial:
  Fix Committed
Status in glibc source package in Zesty:
  Won't Fix
Status in glibc package in Fedora:
  Fix Released

Bug description:
  SRU Justification
  =================

  [Impact]

   * Severe performance hit on many maths-heavy workloads. For example,
  a user reports linpack performance of 13 Gflops on Trusty and Bionic
  and 3.9 Gflops on Xenial.

   * Because the impact is so large (>3x) and Xenial is supported until
  2021, the fix should be backported.

   * The fix avoids an AVX-SSE transition penalty. It stops
  _dl_runtime_resolve() from using 256-bit AVX instructions, which touch the
  upper halves of the vector registers, so the processor no longer needs to
  save and restore those upper halves.

  [Test Case]

  First, you need a suitable Intel machine. Users report that Sandy
  Bridge, Ivy Bridge, Haswell, and Broadwell CPUs are affected, and I
  have been able to reproduce it on a Skylake CPU in an Azure VM.
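
  If you are not sure whether a given machine exposes AVX at all (a
  precondition for hitting the penalty), a quick check along the following
  lines can help. This is not part of the original test case; it is a small
  hedged helper relying on GCC/Clang's __builtin_cpu_supports, and the file
  name avxcheck.c is made up:

  /* avxcheck.c - hypothetical helper, not from the original report. */
  #include <stdio.h>

  int main(void) {
    __builtin_cpu_init();   /* initialise CPU feature detection */
    if (__builtin_cpu_supports("avx"))
      printf("AVX available: this machine can exercise the affected path\n");
    else
      printf("no AVX: the AVX-SSE transition penalty does not apply here\n");
    return 0;
  }

  $ gcc -O2 -o avxcheck avxcheck.c && ./avxcheck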

  Create the following C file, exp.c:

  #include <stdio.h>
  #include <math.h>

  int main () {
    double a, b;
    for (a = b = 0.0; b < 2.0; b += 0.0005) a += exp(b);
    printf("%f\n", a);
    return 0;
  }

  $ gcc -O3 -march=x86-64 -o exp exp.c -lm

  With the current version of glibc:

  $ time ./exp
  ...
  real    0m1.349s
  user    0m1.349s

  
  $ time LD_BIND_NOW=1 ./exp
  ...
  real    0m0.625s
  user    0m0.621s

  Observe that LD_BIND_NOW makes a big difference as it avoids the call
  to _dl_runtime_resolve.

  With the proposed update:

  $ time ./exp
  ...
  real    0m0.625s
  user    0m0.621s

  
  $ time LD_BIND_NOW=1 ./exp
  ...

  real    0m0.631s
  user    0m0.631s

  Observe that the normal case is faster, and LD_BIND_NOW makes a
  negligible difference.
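
  As a side note, if the coarse output of time(1) is too noisy on a shared
  machine, the same loop can be timed in-process. The variant below is only
  a convenience sketch (the file name exp_timed.c is made up), not part of
  the SRU test case; it assumes POSIX clock_gettime is available:

  /* exp_timed.c - hedged variant of exp.c that times the loop itself. */
  #include <math.h>
  #include <stdio.h>
  #include <time.h>

  int main(void) {
    struct timespec t0, t1;
    double a = 0.0, b;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (b = 0.0; b < 2.0; b += 0.0005)
      a += exp(b);                      /* same workload as exp.c */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("sum=%f elapsed=%.6fs\n", a, secs);
    return 0;
  }

  $ gcc -O3 -march=x86-64 -o exp_timed exp_timed.c -lm && ./exp_timed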

  [Regression Potential]

  glibc is the nightmare case for regressions, as a change could affect
  pretty much anything, and this patch touches a key component (the dynamic
  loader).

  We can be fairly confident in the fix in general - it is already in the
  glibc shipped in Bionic, Debian and some RPM-based distros. The backport
  is based on the patches in the release/2.23/master branch of the upstream
  glibc repository, and it was straightforward.

  Obviously that doesn't remove all risk. There is also a fair bit of
  Ubuntu-specific patching in glibc, so other distros are of limited value
  for ruling out bugs. I have therefore done the following testing, and I'm
  happy to do more as required. All testing has been done:
   - on an Azure VM (affected by the change), with proposed package
   - on a local VM (not affected by the change), with proposed package

   * Boot with the upgraded libc6.

   * Watch a YouTube video in Firefox over VNC.

   * Build some C code (debuild of zlib).

   * Test Java by installing and running Eclipse.

  Autopkgtest also passes.

  [Original Description]

  The bug [0] was introduced in Glibc 2.23 [1] and fixed in Glibc 2.25
  [2]. All Ubuntu versions starting from 16.04 are affected because they
  use either Glibc 2.23 or 2.24. The bug introduces a serious (2x-4x)
  performance degradation of the math functions (pow, exp/exp2/exp10,
  log/log2/log10, sin/cos/sincos/tan, asin/acos/atan/atan2,
  sinh/cosh/tanh, asinh/acosh/atanh) provided by libm. It can be
  reproduced on any AVX-capable x86-64 machine.
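
  As a quick way to confirm which glibc a machine actually runs (and hence
  whether it falls in the affected 2.23/2.24 range), something like the
  following can be used. This is only a convenience sketch using glibc's
  gnu_get_libc_version(); "ldd --version" gives the same information:

  /* glibc_version.c - hedged helper, not part of the original report. */
  #include <gnu/libc-version.h>
  #include <stdio.h>

  int main(void) {
    /* Affected releases per this report: 2.23 and 2.24; fixed in 2.25. */
    printf("glibc version: %s\n", gnu_get_libc_version());
    return 0;
  }

  $ gcc -o glibc_version glibc_version.c && ./glibc_version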

  @strikov: According to a quite 

[Sts-sponsors] [Bug 1573594] Re: Missing null termination in PROTOCOL_BINARY_CMD_SASL_LIST_MECHS response handling

2019-02-05 Thread Eric Desrochers
Ionna,

Let's then request the SRU verification team to drop the package from
trusty-proposed.
If SASL is not supported in the Trusty pkg, there is no point in completing
the SRU for Trusty.

Additionally, since Trusty is near its EOL, I don't see a good justification
for the effort/work needed to enable SASL in the package.

Also, enabling SASL would IMHO not be considered a "bug fix", but a "new
feature".

For all the reasons above, let's simply drop the pkg from trusty-
proposed.

- Eric

-- 
You received this bug notification because you are a member of STS
Sponsors, which is subscribed to the bug report.
https://bugs.launchpad.net/bugs/1573594

Title:
  Missing null termination in PROTOCOL_BINARY_CMD_SASL_LIST_MECHS
  response handling

Status in libmemcached package in Ubuntu:
  Fix Released
Status in libmemcached source package in Trusty:
  Fix Committed
Status in libmemcached source package in Xenial:
  Fix Committed
Status in libmemcached source package in Bionic:
  Fix Committed
Status in libmemcached source package in Cosmic:
  Fix Committed
Status in libmemcached source package in Disco:
  Fix Released
Status in libmemcached package in Debian:
  New

Bug description:
  [Impact]

  When connecting to a server using SASL,
  memcached_sasl_authenticate_connection() reads the list of supported
  mechanisms [1] from the server via the command
  PROTOCOL_BINARY_CMD_SASL_LIST_MECHS. The server's response is a string
  containing supported authentication mechanisms, which gets stored into
  the (uninitialized) destination buffer without null termination [2].

  The buffer then gets passed to sasl_client_start [3] which treats it
  as a null-terminated string [4], reading uninitialised bytes in the
  buffer.

  As the buffer lives on the stack, an attacker who can place strings on
  the stack before the connection is made might be able to tamper with the
  authentication. (A minimal sketch of this pattern follows the references
  below.)

  [1] libmemcached/sasl.cc:174
  [2] libmemcached/response.cc:619
  [3] libmemcached/sasl.cc:231
  [4] http://linux.die.net/man/3/sasl_client_start
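
  The following is a minimal sketch of the pattern described above, not the
  actual libmemcached code; the names (handle_mech_list, mech_buf) are made
  up for illustration. The mechanism-list response lands in a stack buffer
  that is later consumed as a C string, and the SRU fix is equivalent to
  zero-initialising that buffer so a terminating NUL is always present:

  #include <stdio.h>
  #include <string.h>

  /* Sketch: copy a (non NUL-terminated) server response into a stack
     buffer and hand it on as a C string. */
  static void handle_mech_list(const char *resp, size_t resp_len) {
    char mech_buf[256] = {0};   /* the fix: start from all-zero bytes */

    /* copy at most sizeof(mech_buf) - 1 bytes so a NUL always remains */
    size_t n = resp_len < sizeof(mech_buf) - 1 ? resp_len
                                               : sizeof(mech_buf) - 1;
    memcpy(mech_buf, resp, n);

    /* now safe to treat as a C string, e.g. when handing it to an API
       like sasl_client_start() that expects NUL termination */
    printf("mechanisms: %s\n", mech_buf);
  }

  int main(void) {
    /* simulated wire data: no terminating NUL */
    const char raw[] = {'P','L','A','I','N',' ','C','R','A','M','-','M','D','5'};
    handle_mech_list(raw, sizeof(raw));
    return 0;
  }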

  [Test Case]

  This bug is difficult to reproduce since it depends on the contents of the
  stack. However, here is a test case, run with the fix applied on Bionic,
  which shows that the fix does not cause any problems.

  For testing you need

  1) A memcached server.
     You can set one up by following the instructions in [1],
     or (what I did) create one in the cloud [2].

  2) A client test program to connect to the memcached server.
     One can be found in [3].
     This simple test connects to a memcached server and tests basic get/set
     operations. (A rough sketch of such a client is also included after the
     references below.)
     Copy-paste the C code into a file (sasl_test.c) and compile with:
     gcc -o sasl_test -O2 sasl_test.c -lmemcached -pthread

  3) On a machine with the updated version of libmemcached, in which the fix
     is applied:
     jo@bionic-vm:~$ dpkg -l | grep libmemcached
     ii  libhashkit-dev:amd64     1.0.18-4.2ubuntu0.18.04.1  amd64  libmemcached hashing functions and algorithms (development files)
     ii  libhashkit2:amd64        1.0.18-4.2ubuntu0.18.04.1  amd64  libmemcached hashing functions and algorithms
     ii  libmemcached-dbg:amd64   1.0.18-4.2ubuntu0.18.04.1  amd64  Debug Symbols for libmemcached
     ii  libmemcached-dev:amd64   1.0.18-4.2ubuntu0.18.04.1  amd64  C and C++ client library to the memcached server (development files)
     ii  libmemcached-tools       1.0.18-4.2ubuntu0.18.04.1  amd64  Commandline tools for talking to memcached via libmemcached
     ii  libmemcached11:amd64     1.0.18-4.2ubuntu0.18.04.1  amd64  C and C++ client library to the memcached server
     ii  libmemcachedutil2:amd64  1.0.18-4.2ubuntu0.18.04.1  amd64  library implementing connection pooling for libmemcached

     Run the sasl_test binary:
     # ./sasl_test [username] [password] [server]

     In my case, using the credentials and the server created in step 1:
     jo@bionic-vm:~$ ./sasl_test 88BAB0 1A99094B77C8935ED9F1461C767DB1F9 mc2.dev.eu.ec2.memcachier.com
     Get/Set success!

  [1] https://blog.couchbase.com/sasl-memcached-now-available/
  [2] https://www.memcachier.com/
  [3] https://blog.memcachier.com/2014/11/05/ubuntu-libmemcached-and-sasl-support/
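
  In case the blog post in [3] ever becomes unavailable: the sketch below
  approximates what such a test client does, written against the public
  libmemcached API (memcached_create, memcached_set_sasl_auth_data,
  memcached_set/memcached_get). It is an illustration under the assumption
  that the server speaks the binary protocol with SASL on port 11211 and
  that libmemcached was built with SASL support; it is not a copy of the
  program from [3]:

  /* sasl_test.c (sketch) - approximate SASL get/set test client.
     Build: gcc -o sasl_test -O2 sasl_test.c -lmemcached -pthread */
  #include <libmemcached/memcached.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  int main(int argc, char **argv) {
    if (argc != 4) {
      fprintf(stderr, "usage: %s <username> <password> <server>\n", argv[0]);
      return 1;
    }

    memcached_st *memc = memcached_create(NULL);
    /* SASL requires the binary protocol */
    memcached_behavior_set(memc, MEMCACHED_BEHAVIOR_BINARY_PROTOCOL, 1);
    memcached_set_sasl_auth_data(memc, argv[1], argv[2]);
    memcached_server_add(memc, argv[3], 11211);

    const char *key = "sasl_test_key";
    const char *val = "sasl_test_value";
    memcached_return_t rc = memcached_set(memc, key, strlen(key),
                                          val, strlen(val), (time_t)0, 0);
    if (rc != MEMCACHED_SUCCESS) {
      fprintf(stderr, "set failed: %s\n", memcached_strerror(memc, rc));
      memcached_free(memc);
      return 1;
    }

    size_t len = 0;
    uint32_t flags = 0;
    char *got = memcached_get(memc, key, strlen(key), &len, &flags, &rc);
    if (rc == MEMCACHED_SUCCESS && got != NULL && len == strlen(val)
        && memcmp(got, val, len) == 0)
      printf("Get/Set success!\n");
    else
      fprintf(stderr, "get failed: %s\n", memcached_strerror(memc, rc));

    free(got);
    memcached_free(memc);
    return 0;
  }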

  [Regression Potential]

  This fix initialises the buffer to 0.
  Any potential regression would most likely manifest as authentication
  failures when using SASL.

  * When running autopkgtest for xenial/armhf, it fails on gearmand:
    http://autopkgtest.ubuntu.com/packages/g/gearmand/xenial/armhf
    However, this is a long-standing issue with gearmand and is not related
    to the current SRU.

  
  [Other Info]

  This bug affects trusty and later.

  * rmadison:
   libmemcached | 1.0.8-1ubuntu2 | trusty  | source
   libmemcached | 1.0.18-4.1 | xenial  | source
   libmemcached |