Hello community,

here is the log from the commit of package onednn for openSUSE:Factory checked in at 2021-06-04 22:43:13
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/onednn (Old)
 and      /work/SRC/openSUSE:Factory/.onednn.new.1898 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "onednn"

Fri Jun  4 22:43:13 2021 rev:4 rq:897415 version:2.2.3

Changes:
--------
--- /work/SRC/openSUSE:Factory/onednn/onednn.changes    2021-04-14 10:11:32.989551512 +0200
+++ /work/SRC/openSUSE:Factory/.onednn.new.1898/onednn.changes  2021-06-04 22:43:27.375122186 +0200
@@ -1,0 +2,47 @@
+Thu Jun  3 01:38:56 UTC 2021 - Ferdinand Thiessen <r...@fthiessen.de>
+
+- Update to version 2.2.3
+  * Fixed a bug in int8 depthwise convolution primitive with groups
+    and 1d spatial size for processors with AVX-512 and AVX2 support
+  * Fixed correctness issue for PReLU primitive
+  * Fixed correctness issue in reorder for blocked layouts with
+    zero padding
+  * Improved performance of weights reorders used by BRGEMM-based
+    convolution primitive for processors with AVX-512 support
+  * Added -fp-model=precise build flag for DPC++ code
+  * Fixed potential memory leak in matmul primitive
+  * Fixed performance of matmul primitive when fused with bias
+    update and sum
+  * Fixed a bug in matmul primitive when writing to non-contiguous
+    destination buffer
+- Add upstream patch for GCC11 support
+  * 0001-common-gpu-include-thread-and-limit-headers-to-fix-G.patch
+
+-------------------------------------------------------------------
+Thu May 27 08:10:13 UTC 2021 - Jan Engelhardt <jeng...@inai.de>
+
+- Update descriptions.
+
+-------------------------------------------------------------------
+Wed May 26 13:29:27 UTC 2021 - Guillaume GARDET <guillaume.gar...@opensuse.org>
+
+- Update to 2.2.2, changes:
+  * Fixed performance regression in fp32 forward inner product for
+  shapes with number of output channels equal to 1 for processors
+  with Intel AVX-512 support (714b1fd)
+  * Fixed performance regression in forward convolutions with groups
+  for processors with Intel AVX-512 support (3555d4a)
+  * Removed -std=c++11 build flag for DPC++ headers (1fcb867)
+  * Fixed buffer access in initializing workspace in RNN
+  implementation on GPU (9b03091)
+  * Fixed a bug in convolution with 1x1 kernel and mixed
+  strides on processors with Intel AVX-512 support (d0b3e3f)
+  * Used getauxval on Linux to get CPU features for AArch64
+  systems (25c4cea)
+  * Added -fp-model=precise build flag for DPC++ code (3e40e5e)
+  * Fixed out-of-bounds writes in elementwise primitive on
+  Intel Processor Graphics (bcf823c)
+- Fix build with Arm Compute Library:
+  * onednn-1045.patch
+
+-------------------------------------------------------------------

Old:
----
  onednn-2.2.1.tar.gz

New:
----
  0001-common-gpu-include-thread-and-limit-headers-to-fix-G.patch
  oneDNN-2.2.3.tar.gz
  onednn-1045.patch

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ onednn.spec ++++++
--- /var/tmp/diff_new_pack.1tK6MF/_old  2021-06-04 22:43:28.263123166 +0200
+++ /var/tmp/diff_new_pack.1tK6MF/_new  2021-06-04 22:43:28.267123170 +0200
@@ -31,12 +31,16 @@
 
 %define libname libdnnl2
 Name:           onednn
-Version:        2.2.1
+Version:        2.2.3
 Release:        0
-Summary:        Intel(R) Math Kernel Library for Deep Neural Networks
+Summary:        Intel Math Kernel Library for Deep Neural Networks
 License:        Apache-2.0
 URL:            https://01.org/onednn
-Source0:        https://github.com/oneapi-src/oneDNN/archive/v%{version}/%{name}-%{version}.tar.gz
+Source0:        https://github.com/oneapi-src/oneDNN/archive/v%{version}/oneDNN-%{version}.tar.gz
+# PATCH-FIX-UPSTREAM onednn-1045.patch -- https://github.com/oneapi-src/oneDNN/pull/1045
+Patch0:         onednn-1045.patch
+# PATCH-FIX-UPSTREAM 0001-common-gpu-include-thread-and-limit-headers-to-fix-G.patch
+Patch1:         0001-common-gpu-include-thread-and-limit-headers-to-fix-G.patch
 BuildRequires:  cmake
 BuildRequires:  doxygen
 BuildRequires:  fdupes
@@ -57,18 +61,18 @@
 Provides:       oneDNN = %{version}
 
 %description
-Intel(R) Math Kernel Library for Deep Neural Networks (Intel(R) MKL-DNN) is an
+Intel Math Kernel Library for Deep Neural Networks (Intel MKL-DNN) is an
 open-source performance library for deep-learning applications. The library
 accelerates deep-learning applications and frameworks on Intel architecture.
 Intel MKL-DNN contains vectorized and threaded building blocks that you can use
 to implement deep neural networks (DNN) with C and C++ interfaces.
 
 %package -n benchdnn
-Summary:        Header files of Intel(R) Math Kernel Library
+Summary:        Header files of Intel Math Kernel Library
 Requires:       %{libname} = %{version}
 
 %description -n benchdnn
-Intel(R) Math Kernel Library for Deep Neural Networks (Intel(R) MKL-DNN) is an
+Intel Math Kernel Library for Deep Neural Networks (Intel MKL-DNN) is an
 open-source performance library for deep-learning applications. The library
 accelerates deep-learning applications and frameworks on Intel architecture.
 Intel MKL-DNN contains vectorized and threaded building blocks that you can use
@@ -77,43 +81,42 @@
 This package only includes the benchmark utility including its input files.
 
 %package devel
-Summary:        Header files of Intel(R) Math Kernel Library
+Summary:        Header files of Intel Math Kernel Library
 Requires:       %{libname} = %{version}
 Provides:       mkl-dnn-devel = %{version}
 Obsoletes:      mkl-dnn-devel <= %{version}
 Provides:       oneDNN-devel = %{version}
 
 %description devel
-Intel(R) Math Kernel Library for Deep Neural Networks (Intel(R) MKL-DNN) is an
+Intel Math Kernel Library for Deep Neural Networks (Intel MKL-DNN) is an
 open-source performance library for deep-learning applications. The library
 accelerates deep-learning applications and frameworks on Intel architecture.
 Intel MKL-DNN contains vectorized and threaded building blocks that you can use
 to implement deep neural networks (DNN) with C and C++ interfaces.
 
 This package includes the required headers and library files to develop software
-with the Intel(R) MKL-DNN.
+with the Intel MKL-DNN.
 
 %package doc
-Summary:        Reference documentation for the Intel(R) Math Kernel Library
+Summary:        Reference documentation for the Intel Math Kernel Library
 BuildArch:      noarch
 
 %description doc
-The reference documentation for the Intel(R) Math Kernel Library can be installed
+The reference documentation for the Intel Math Kernel Library can be installed
 with this package.
 
 %package -n %{libname}
-Summary:        Header files of Intel(R) Math Kernel Library
+Summary:        Header files of Intel Math Kernel Library
 
 %description -n %{libname}
-Intel(R) Math Kernel Library for Deep Neural Networks (Intel(R) MKL-DNN) is an
+Intel Math Kernel Library for Deep Neural Networks (Intel MKL-DNN) is an
 open-source performance library for deep-learning applications. The library
 accelerates deep-learning applications and frameworks on Intel architecture.
 Intel MKL-DNN contains vectorized and threaded building blocks that you can use
 to implement deep neural networks (DNN) with C and C++ interfaces.
 
 %prep
-%setup -q -n oneDNN-%{version}
-%autopatch -p1
+%autosetup -p1 -n oneDNN-%{version}
 
 %build
 %cmake \
@@ -167,6 +170,7 @@
 %{_datadir}/benchdnn
 
 %files devel
+%doc README.md
 %{_includedir}/mkl-dnn
 %{_includedir}/mkldnn*.h*
 %{_includedir}/dnnl*.h*
@@ -185,7 +189,6 @@
 
 %files -n %{libname}
 %license LICENSE
-%doc README.md
 %{_libdir}/libdnnl.so.*
 %{_libdir}/libmkldnn.so.*
 

++++++ 0001-common-gpu-include-thread-and-limit-headers-to-fix-G.patch ++++++
From cfbefd8d744d4cdcdf3dd2f18576f487b36911b6 Mon Sep 17 00:00:00 2001
From: Denis Samoilov <denis.samoy...@intel.com>
Date: Fri, 2 Apr 2021 19:46:22 -0700
Subject: [PATCH] common, gpu: include thread and limit headers to fix GCC 11
 build issues

---
 src/common/primitive_cache.hpp      | 1 +
 src/gpu/jit/ngen/ngen_auto_swsb.hpp | 1 +
 2 files changed, 2 insertions(+)

diff --git a/src/common/primitive_cache.hpp b/src/common/primitive_cache.hpp
index 73cb1224f..05a3e53e5 100644
--- a/src/common/primitive_cache.hpp
+++ b/src/common/primitive_cache.hpp
@@ -20,6 +20,7 @@
 #include <future>
 #include <list>
 #include <memory>
+#include <thread>
 #include <unordered_map>
 
 #include "c_types_map.hpp"
diff --git a/src/gpu/jit/ngen/ngen_auto_swsb.hpp b/src/gpu/jit/ngen/ngen_auto_swsb.hpp
index de3417af3..62ef2a571 100644
--- a/src/gpu/jit/ngen/ngen_auto_swsb.hpp
+++ b/src/gpu/jit/ngen/ngen_auto_swsb.hpp
@@ -33,6 +33,7 @@
 
 #include <list>
 #include <map>
+#include <limits>
 
 namespace ngen {
 namespace autoswsb {
-- 
2.26.2

++++++ onednn-1045.patch ++++++
From a94acd4e2dfaf51552dd2a60b059df1c1f14e452 Mon Sep 17 00:00:00 2001
From: Alexandre Truong <alexandre.tru...@arm.com>
Date: Wed, 28 Apr 2021 10:32:35 +0100
Subject: [PATCH] cpu: aarch64: missing include for arm_compute::Scheduler

---
 src/cpu/aarch64/acl_indirect_gemm_convolution.hpp | 1 +
 1 file changed, 1 insertion(+)

diff --git a/src/cpu/aarch64/acl_indirect_gemm_convolution.hpp b/src/cpu/aarch64/acl_indirect_gemm_convolution.hpp
index 86d2bed73..040311f8c 100644
--- a/src/cpu/aarch64/acl_indirect_gemm_convolution.hpp
+++ b/src/cpu/aarch64/acl_indirect_gemm_convolution.hpp
@@ -26,6 +26,7 @@
 
 #include "arm_compute/runtime/FunctionDescriptors.h"
 #include "arm_compute/runtime/NEON/NEFunctions.h"
+#include "arm_compute/runtime/Scheduler.h"
 
 namespace dnnl {
 namespace impl {
