Yes, it is still the case that you cannot enable the C++ or Fortran APIs (or
the High-Level APIs) when thread safety is enabled. --enable-unsupported can
override this behavior.
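
For reference, a hypothetical configure invocation combining these flags might
look like the following (the prefix and the exact set of extra options are
assumptions, not taken from this thread):

```shell
# Build a thread-safe HDF5 *with* the C++ API anyway.
# --enable-unsupported acknowledges that this combination is not
# officially supported; use at your own risk.
./configure --prefix=/usr/local \
            --enable-threadsafe \
            --enable-cxx \
            --enable-unsupported
make && make install
```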

Scot 


> On Sep 22, 2016, at 12:36 PM, Elvis Stansvik <elvis.stans...@orexplore.com> 
> wrote:
> 
> 2016-09-22 19:23 GMT+02:00 Elvis Stansvik <elvis.stans...@orexplore.com>:
>> 2016-09-22 19:17 GMT+02:00 Dana Robinson <derob...@hdfgroup.org>:
>>> Hi Elvis,
>>> 
>>> Did you build your HDF5 library with thread-safety enabled
>>> (--enable-threadsafe w/ configure)?
>> 
>> Hi Dana, and thanks for the quick reply. I think we just e-mailed past
>> each other (see my previous mail).
>> 
>> I wrongly called it --thread-safe in that mail, but it was
>> --enable-threadsafe I was referring to. But yes, I'm pretty sure this
>> is the problem.
>> 
>> I'm rebuilding the Arch package now with --enable-threadsafe.
> 
> I spoke a little too soon. I now found this bug filed against the Arch 
> package:
> 
>    https://bugs.archlinux.org/task/33805
> 
> The reporter asked the package maintainer to add --enable-threadsafe,
> but the package maintainer closed the bug saying that
> --enable-threadsafe is not compatible with the Fortran build (in Arch,
> the C++ and Fortran APIs are bundled into one package
> hdf5-cpp-fortran).
> 
> Does anyone know if that is still the case? If so, I can't reopen a bug
> against the package asking for --enable-threadsafe to be added, but I
> could open a bug asking for the package to be split, I guess.
> 
> Elvis
> 
>> 
>> Elvis
>> 
>>> 
>>> Dana Robinson
>>> Software Engineer
>>> The HDF Group
>>> 
>>> Get Outlook for Android
>>> 
>>> From: Elvis Stansvik
>>> Sent: Thursday, September 22, 12:43
>>> Subject: [Hdf-forum] Simply using the library from separate threads (C++
>>> API)
>>> To: HDF Users Discussion List
>>> 
>>> Hi all,
>>> 
>>> I'm using the C++ API to read HDF5 files from separate threads (no
>>> writing). None of my threads read the same file, but they do execute
>>> simultaneously. The reason I'm using threading is not to speed things
>>> up or get better throughput, but simply to not block the UI (it's a Qt
>>> application) while reading. So this is not about "Parallel HDF5" or
>>> anything, just plain use of the serial library from multiple threads.
>>> 
>>> This has been working fine when testing on Ubuntu 16.04 (our target
>>> OS), which has HDF5 1.8.16. I recently tested on my personal Arch
>>> Linux machine though, which has HDF5 1.10.0, and got this segmentation
>>> fault:
>>> 
>>> (gdb) bt
>>> #0  0x00007ffff67c57d9 in H5SL_search () from /usr/lib/libhdf5.so.100
>>> #1  0x00007ffff678dd19 in H5P_copy_plist () from /usr/lib/libhdf5.so.100
>>> #2  0x00007ffff66a7fc0 in H5F_new () from /usr/lib/libhdf5.so.100
>>> #3  0x00007ffff66a8f55 in H5F_open () from /usr/lib/libhdf5.so.100
>>> #4  0x00007ffff66a155d in H5Fopen () from /usr/lib/libhdf5.so.100
>>> #5  0x00007ffff6b79546 in H5::H5File::p_get_file(char const*, unsigned int, H5::FileCreatPropList const&, H5::FileAccPropList const&) () from /usr/lib/libhdf5_cpp.so.100
>>> #6  0x00007ffff6b79750 in H5::H5File::H5File(char const*, unsigned int, H5::FileCreatPropList const&, H5::FileAccPropList const&) () from /usr/lib/libhdf5_cpp.so.100
>>> #7  0x000000000041f00e in HDF5ImageReader::RequestInformation (this=0x7fffbc002de0, request=0x7fffbc010da0, inputVector=0x0, outputVector=0x7fffbc0039d0) at /home/estan/Projekt/orexplore/dev/src/insight/src/model/HDF5ImageReader.cpp:91
>>> #8  0x00007fffee8200d0 in vtkExecutive::CallAlgorithm(vtkInformation*, int, vtkInformationVector**, vtkInformationVector*) () from /usr/lib/libvtkCommonExecutionModel.so.1
>>> #9  0x00007fffee837fa9 in vtkStreamingDemandDrivenPipeline::ExecuteInformation(vtkInformation*, vtkInformationVector**, vtkInformationVector*) () from /usr/lib/libvtkCommonExecutionModel.so.1
>>> #10 0x00007fffee81ce05 in vtkDemandDrivenPipeline::ProcessRequest(vtkInformation*, vtkInformationVector**, vtkInformationVector*) () from /usr/lib/libvtkCommonExecutionModel.so.1
>>> #11 0x00007fffee835c55 in vtkStreamingDemandDrivenPipeline::ProcessRequest(vtkInformation*, vtkInformationVector**, vtkInformationVector*) () from /usr/lib/libvtkCommonExecutionModel.so.1
>>> #12 0x00007fffee816e1a in vtkCompositeDataPipeline::ForwardUpstream(vtkInformation*) () from /usr/lib/libvtkCommonExecutionModel.so.1
>>> #13 0x00007fffee81ccb5 in vtkDemandDrivenPipeline::ProcessRequest(vtkInformation*, vtkInformationVector**, vtkInformationVector*) () from /usr/lib/libvtkCommonExecutionModel.so.1
>>> #14 0x00007fffee835c55 in vtkStreamingDemandDrivenPipeline::ProcessRequest(vtkInformation*, vtkInformationVector**, vtkInformationVector*) () from /usr/lib/libvtkCommonExecutionModel.so.1
>>> #15 0x00007fffee816e1a in vtkCompositeDataPipeline::ForwardUpstream(vtkInformation*) () from /usr/lib/libvtkCommonExecutionModel.so.1
>>> #16 0x00007fffee81ccb5 in vtkDemandDrivenPipeline::ProcessRequest(vtkInformation*, vtkInformationVector**, vtkInformationVector*) () from /usr/lib/libvtkCommonExecutionModel.so.1
>>> #17 0x00007fffee835c55 in vtkStreamingDemandDrivenPipeline::ProcessRequest(vtkInformation*, vtkInformationVector**, vtkInformationVector*) () from /usr/lib/libvtkCommonExecutionModel.so.1
>>> #18 0x00007fffee816e1a in vtkCompositeDataPipeline::ForwardUpstream(vtkInformation*) () from /usr/lib/libvtkCommonExecutionModel.so.1
>>> #19 0x00007fffee81ccb5 in vtkDemandDrivenPipeline::ProcessRequest(vtkInformation*, vtkInformationVector**, vtkInformationVector*) () from /usr/lib/libvtkCommonExecutionModel.so.1
>>> #20 0x00007fffee835c55 in vtkStreamingDemandDrivenPipeline::ProcessRequest(vtkInformation*, vtkInformationVector**, vtkInformationVector*) () from /usr/lib/libvtkCommonExecutionModel.so.1
>>> #21 0x00007fffee836482 in vtkStreamingDemandDrivenPipeline::Update(int) () from /usr/lib/libvtkCommonExecutionModel.so.1
>>> #22 0x00007ffff1289a76 in vtkAbstractVolumeMapper::GetBounds() () from /usr/lib/libvtkRenderingCore.so.1
>>> #23 0x00007ffff13459f9 in vtkVolume::GetBounds() () from /usr/lib/libvtkRenderingCore.so.1
>>> #24 0x000000000043f235 in createVolume (image=..., from=0, to=2.7803999378532183, opacityFunction=..., colorFunction=...) at /home/estan/Projekt/orexplore/dev/src/insight/src/view/Pipeline.cpp:123
>>> #25 0x00000000004295c4 in CreateVolume::operator() (this=0x829248, image=...) at /home/estan/Projekt/orexplore/dev/src/insight/src/view/Pipeline.h:45
>>> #26 0x000000000042bc7a in QtConcurrent::MappedEachKernel::const_iterator, CreateVolume>::runIteration (this=0x829210, it=..., result=0x7fffbc002da8) at /usr/include/qt/QtConcurrent/qtconcurrentmapkernel.h:176
>>> #27 0x000000000042bd5d in QtConcurrent::MappedEachKernel::const_iterator, CreateVolume>::runIterations (this=0x829210, sequenceBeginIterator=..., begin=1, end=2, results=0x7fffbc002da8) at /usr/include/qt/QtConcurrent/qtconcurrentmapkernel.h:186
>>> #28 0x000000000042c4e1 in QtConcurrent::IterateKernel::const_iterator, vtkSmartPointer >::forThreadFunction (this=0x829210) at /usr/include/qt/QtConcurrent/qtconcurrentiteratekernel.h:256
>>> #29 0x000000000042bedc in QtConcurrent::IterateKernel::const_iterator, vtkSmartPointer >::threadFunction (this=0x829210) at /usr/include/qt/QtConcurrent/qtconcurrentiteratekernel.h:218
>>> #30 0x00007ffff7bd5cfd in QtConcurrent::ThreadEngineBase::run() () from /usr/lib/libQt5Concurrent.so.5
>>> #31 0x00007ffff489a01f in ?? () from /usr/lib/libQt5Core.so.5
>>> #32 0x00007ffff489dd78 in ?? () from /usr/lib/libQt5Core.so.5
>>> #33 0x00007fffeb3f5454 in start_thread () from /usr/lib/libpthread.so.0
>>> #34 0x00007fffec5f07df in clone () from /usr/lib/libc.so.6
>>> (gdb)
>>> 
>>> Before I start digging into what is happening here, I'd just like to
>>> ask: do I have to do something special when using the HDF5 library
>>> from two different threads? I'm not reading the same files or
>>> anything; it's simply two completely separate usages of the library in
>>> threads that run in parallel. Does the library have any global
>>> structures that must be initialized before spawning any threads that
>>> use it? The reason I'm a little worried is that perhaps I've just been
>>> lucky when running under Ubuntu / HDF5 1.8.16.
>>> 
>>> My usage in each thread basically looks like:
>>> 
>>> 1) Create an H5::H5File
>>> 2) Open a dataset using file.openDataSet
>>> 3) Get the dataspace for the dataset and select a hyperslab
>>> 4) Create a memory dataspace
>>> 5) Perform a single read(...) operation from the dataset dataspace
>>>    to the memory dataspace
>>> 
>>> And it's always different files that the threads work with. Is there
>>> some step 0 I'm missing?
>>> 
>>> Thanks in advance for any advice.
>>> 
>>> Elvis
>>> 
>>> _______________________________________________
>>> Hdf-forum is for HDF software users discussion.
>>> Hdf-forum@lists.hdfgroup.org
>>> http://lists.hdfgroup.org/mailman/listinfo/hdf-forum_lists.hdfgroup.org
>>> Twitter: https://twitter.com/hdf5
>>> 
> 

