Intel(R) Open Volume Kernel Library (Intel(R) Open VKL) is a collection of
high-performance volume computation kernels, developed at Intel. The
target users of Open VKL are graphics application engineers who want to
improve the performance of their volume rendering applications by
leveraging Open VKL’s performance-optimized kernels, which include
volume traversal and sampling functionality for a variety of volumetric
data formats. The kernels are optimized for the latest Intel(R) processors
with support for SSE, AVX, AVX2, and AVX-512 instructions.

Open VKL provides a C API, and also supports applications written with
the Intel(R) Implicit SPMD Program Compiler (Intel(R) ISPC) through an
ISPC interface to the core volume algorithms. This makes it possible to
write a renderer in ISPC that automatically vectorizes and leverages
SSE, AVX, AVX2, and AVX-512 instructions. ISPC also supports runtime
code selection, so it will pick the best code path for your application.

https://www.openvkl.org/

Signed-off-by: Naveen Saini <[email protected]>
---
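Note for reviewers (a minimal sketch, not part of the recipe): once this
recipe is in place, the library and its example binaries can be pulled
into an image with a conf fragment such as the following, assuming the
dynamic layer's dependency (meta-oe) is present in the build. The
package names come from PN and the PACKAGES split in the recipe below.

```
IMAGE_INSTALL_append = " openvkl openvkl-examples"
```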
 .../recipes-oneapi/openvkl/openvkl_0.13.0.bb  | 34 +++++++++++++++++++
 1 file changed, 34 insertions(+)
 create mode 100644 dynamic-layers/openembedded-layer/recipes-oneapi/openvkl/openvkl_0.13.0.bb

diff --git a/dynamic-layers/openembedded-layer/recipes-oneapi/openvkl/openvkl_0.13.0.bb b/dynamic-layers/openembedded-layer/recipes-oneapi/openvkl/openvkl_0.13.0.bb
new file mode 100644
index 00000000..edf7509d
--- /dev/null
+++ b/dynamic-layers/openembedded-layer/recipes-oneapi/openvkl/openvkl_0.13.0.bb
@@ -0,0 +1,34 @@
+SUMMARY  = "Intel(R) Open Volume Kernel Library"
+DESCRIPTION = "Intel(R) Open Volume Kernel Library (Intel(R) Open VKL) is a \
+collection of high-performance volume computation kernels. The target users \
+of Open VKL are graphics application engineers who want to improve the \
+performance of their volume rendering applications by leveraging Open VKL’s \
+performance-optimized kernels, which include volume traversal and sampling \
+functionality for a variety of volumetric data formats. The kernels are optimized \
+for the latest Intel(R) processors with support for SSE, AVX, AVX2, and AVX-512 \
+instructions."
+HOMEPAGE = "https://www.openvkl.org/"
+
+LICENSE  = "Apache-2.0 & BSD-3-Clause & MIT & Zlib"
+LIC_FILES_CHKSUM = "file://LICENSE.txt;md5=3b83ef96387f14655fc854ddc3c6bd57 \
+                    file://third-party-programs.txt;md5=90d62b467dd4fdf3c7d3d897fbac7437"
+
+inherit pkgconfig cmake
+
+S = "${WORKDIR}/git"
+
+SRC_URI = "git://github.com/openvkl/openvkl.git;protocol=https \
+            "
+SRCREV = "84b9d78ead12f369f37cee77d985da9d13c07ae1"
+
+COMPATIBLE_HOST = '(x86_64).*-linux'
+
+DEPENDS = "ispc-native rkcommon embree"
+
+EXTRA_OECMAKE += " \
+                  -DISPC_EXECUTABLE=${STAGING_BINDIR_NATIVE}/ispc  \
+                  "
+PACKAGES =+ "${PN}-examples"
+FILES_${PN}-examples = "\
+                     ${bindir} \
+                     "
-- 
2.32.0

View/Reply Online (#7139): https://lists.yoctoproject.org/g/meta-intel/message/7139