Re: [petsc-users] Parallel writing in HDF5-1.12.0 when some processors have no data to write

2020-06-14 Thread Danyang Su
Hi Jed,

I cannot reproduce the same problem in C (attached example). The C code works fine 
on HDF5 1.12.0. I am still confused about why this happens. Anyway, I will use 
HDF5-1.10.6 with PETSc-3.13 for the moment.
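
A minimal C sketch of this kind of test (not the attached file; the file name, dataset
name, and sizes below are made up for illustration) would have every rank call H5Dwrite
collectively while the last rank, which has nothing to write, selects nothing in both
the file and memory dataspaces:

#include <mpi.h>
#include <hdf5.h>

int main(int argc, char **argv)
{
    int rank, nproc, i;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nproc);

    /* Create the file for parallel access */
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
    hid_t file = H5Fcreate("coll_zero.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

    /* One row of 8 values per writing rank; the last rank writes nothing */
    hsize_t dims[1] = {8 * (hsize_t)(nproc - 1)};
    hid_t filespace = H5Screate_simple(1, dims, NULL);
    hid_t dset = H5Dcreate(file, "IntArray", H5T_NATIVE_INT, filespace,
                           H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

    hsize_t count[1] = {8}, offset[1] = {8 * (hsize_t)rank};
    hid_t memspace = H5Screate_simple(1, count, NULL);
    int buf[8];
    for (i = 0; i < 8; i++) buf[i] = rank;

    if (rank == nproc - 1) {
        /* No data on this rank: select nothing, but still participate */
        H5Sselect_none(filespace);
        H5Sselect_none(memspace);
    } else {
        H5Sselect_hyperslab(filespace, H5S_SELECT_SET, offset, NULL, count, NULL);
    }

    /* Collective transfer property list */
    hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
    H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);

    /* Every rank makes this call, including the one with an empty selection */
    H5Dwrite(dset, H5T_NATIVE_INT, memspace, filespace, dxpl, buf);

    H5Pclose(dxpl); H5Sclose(memspace); H5Sclose(filespace);
    H5Dclose(dset); H5Pclose(fapl); H5Fclose(file);
    MPI_Finalize();
    return 0;
}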

Thanks,

Danyang

On 2020-06-13, 1:39 PM, "Jed Brown"  wrote:

Can you reproduce in C?  You're missing three extra arguments that exist in 
the Fortran interface.

https://support.hdfgroup.org/HDF5/doc/RM/RM_H5D.html#Dataset-Create

Danyang Su  writes:

> Hi Jed,
>
> Attached is the example for your test.  
>
> This example uses H5Sselect_none to tell the H5Dwrite call that there will be no data.
> The 4th process HAS to participate since we are in collective mode.
> The code is ported and adapted from the C example at https://support.hdfgroup.org/ftp/HDF5/examples/parallel/coll_test.c
>
> The compiler flags in the makefile are the same as those used in my own code.
>
> To compile the code, please run 'make all'
> To test the code, please run 'mpiexec -n 4 ./hdf5_zero_data'. Any number of processes greater than 4 should also reproduce the problem.
>
> The code may crash on HDF5 1.12.0 but works fine on HDF5 1.10.x.
>
> The following platforms have been tested:
>   Macos-Mojave + GNU-8.2 + HDF5-1.12.0 -> Works fine
>   Ubuntu-16.04 + GNU-5.4 + HDF5-1.12.0 -> Crashes
>   Ubuntu-16.04 + GNU-7.5 + HDF5-1.12.0 -> Crashes
>   Ubuntu-16.04 + GNU-5.4 + HDF5-1.10.x -> Works fine
>   Centos-7 + Intel2018 + HDF5-1.12.0  -> Works fine
>
> Possible error when code crashes
>  At line 6686 of file H5_gen.F90
>  Fortran runtime error: Index '1' of dimension 1 of array 'buf' above upper bound of 0
>
> Thanks,
>
> Danyang
>
> On 2020-06-12, 6:05 AM, "Jed Brown"  wrote:
>
> Danyang Su  writes:
>
> > Hi Jed,
> >
> > Thanks for double-checking.
> >
> > The HDF5 1.10.6 version also works, but versions from 1.12.x stop working.
>
> I'd suggest making a reduced test case in order to submit a bug report.
>
> This was the relevant change in PETSc for hdf5-1.12.
>
> https://gitlab.com/petsc/petsc/commit/806daeb7de397195b5132278177f4d5553f9f612
>
> > Attached is the code section where I have the problem.
> >
> > !c write the dataset collectively
> > !!!
> >  CODE CRASHES HERE IF SOME PROCESSORS HAVE NO DATA TO WRITE
> > !!!
> > call h5dwrite_f(dset_id, H5T_NATIVE_DOUBLE, dataset, hdf5_dsize,   &
> >                 hdf5_ierr, file_space_id=filespace,                &
> >                 mem_space_id=memspace, xfer_prp = xlist_id)
> >
> > Please let me know if there is something wrong in the code that causes the problem.
>
> !c
> !c This example uses H5Sselect_none to tell the H5Dwrite call that 
> !c there will be no data. 4-th process HAS to participate since 
> !c we are in a collective mode.
> !c 
> !c The code is ported and modified based on the C example 
> !c from https://support.hdfgroup.org/ftp/HDF5/examples/parallel/coll_test.c
> !c by Danyang Su on June 12, 2020.
> !c
> !c To compile the code, please run 'make all'
> !c To test the code, please run 'mpiexec -n 4 ./hdf5_zero_data'
> !c
> !c IMPORTANT NOTE
> !c The code may crash on HDF5 1.12.0 but works fine on HDF5 1.10.x.
> !c 
> !c The following platforms have been tested:
> !c Macos-Mojave + GNU-8.2 + HDF5-1.12.0 -> Works fine
> !c Ubuntu-16.04 + GNU-5.4 + HDF5-1.12.0 -> Crashes
> !c Ubuntu-16.04 + GNU-7.5 + HDF5-1.12.0 -> Crashes
> !c Ubuntu-16.04 + GNU-5.4 + HDF5-1.10.x -> Works fine
> !c Centos-7 + Intel2018 + HDF5-1.12.0 -> Works fine
> !c
> !c Possible error when code crashes
> !c At line 6686 of file H5_gen.F90
> !c Fortran runtime error: Index '1' of dimension 1 of array 'buf' above upper bound of 0
> !c 
>
> program hdf5_zero_data
>
>
> #include 
>
>   use petscsys
>   use hdf5
>
>   implicit none 
>
>   character(len=10), parameter :: h5File_Name = "SDS_row.h5"
>   character(len=8), parameter :: DatasetName = "IntArray"
>   integer, parameter :: nx = 8, ny = 5, ndim = 2
>
>   integer :: i
>   integer :: hdf5_ierr   ! HDF5 error code
>   PetscErrorCode :: ierr
>
>   integer(HID_T) :: file_id  ! File identifier
>   integer(HID_T) :: dset_id  ! Dataset identifier
>   integer(HID_T) :: space_id ! Space identifier
>   integer(HID_T) :: plist_id ! 

Re: [petsc-users] Parallel writing in HDF5-1.12.0 when some processors have no data to write

2020-06-13 Thread Danyang Su
Hi Jed,

These three extra arguments to h5dcreate_f are optional, so I am not sure they are what 
causes the problem. The error comes from h5dwrite_f, where all the arguments are provided.

I will try to see if I can reproduce the problem in C.

Thanks,

Danyang

On 2020-06-13, 1:39 PM, "Jed Brown"  wrote:

Can you reproduce in C?  You're missing three extra arguments that exist in 
the Fortran interface.

https://support.hdfgroup.org/HDF5/doc/RM/RM_H5D.html#Dataset-Create

Danyang Su  writes:

> Hi Jed,
>
> Attached is the example for your test.  
>
> This example uses H5Sselect_none to tell the H5Dwrite call that there will be no data.
> The 4th process HAS to participate since we are in collective mode.
> The code is ported and adapted from the C example at https://support.hdfgroup.org/ftp/HDF5/examples/parallel/coll_test.c
>
> The compiler flags in the makefile are the same as those used in my own code.
>
> To compile the code, please run 'make all'
> To test the code, please run 'mpiexec -n 4 ./hdf5_zero_data'. Any number of processes greater than 4 should also reproduce the problem.
>
> The code may crash on HDF5 1.12.0 but works fine on HDF5 1.10.x.
>
> The following platforms have been tested:
>   Macos-Mojave + GNU-8.2 + HDF5-1.12.0 -> Works fine
>   Ubuntu-16.04 + GNU-5.4 + HDF5-1.12.0 -> Crashes
>   Ubuntu-16.04 + GNU-7.5 + HDF5-1.12.0 -> Crashes
>   Ubuntu-16.04 + GNU-5.4 + HDF5-1.10.x -> Works fine
>   Centos-7 + Intel2018 + HDF5-1.12.0  -> Works fine
>
> Possible error when code crashes
>  At line 6686 of file H5_gen.F90
>  Fortran runtime error: Index '1' of dimension 1 of array 'buf' above upper bound of 0
>
> Thanks,
>
> Danyang
>
> On 2020-06-12, 6:05 AM, "Jed Brown"  wrote:
>
> Danyang Su  writes:
>
> > Hi Jed,
> >
> > Thanks for double-checking.
> >
> > The HDF5 1.10.6 version also works, but versions from 1.12.x stop working.
>
> I'd suggest making a reduced test case in order to submit a bug report.
>
> This was the relevant change in PETSc for hdf5-1.12.
>
> https://gitlab.com/petsc/petsc/commit/806daeb7de397195b5132278177f4d5553f9f612
>
> > Attached is the code section where I have the problem.
> >
> > !c write the dataset collectively
> > !!!
> >  CODE CRASHES HERE IF SOME PROCESSORS HAVE NO DATA TO WRITE
> > !!!
> > call h5dwrite_f(dset_id, H5T_NATIVE_DOUBLE, dataset, hdf5_dsize,   &
> >                 hdf5_ierr, file_space_id=filespace,                &
> >                 mem_space_id=memspace, xfer_prp = xlist_id)
> >
> > Please let me know if there is something wrong in the code that causes the problem.
>
> !c
> !c This example uses H5Sselect_none to tell the H5Dwrite call that 
> !c there will be no data. 4-th process HAS to participate since 
> !c we are in a collective mode.
> !c 
> !c The code is ported and modified based on the C example 
> !c from https://support.hdfgroup.org/ftp/HDF5/examples/parallel/coll_test.c
> !c by Danyang Su on June 12, 2020.
> !c
> !c To compile the code, please run 'make all'
> !c To test the code, please run 'mpiexec -n 4 ./hdf5_zero_data'
> !c
> !c IMPORTANT NOTE
> !c The code may crash on HDF5 1.12.0 but works fine on HDF5 1.10.x.
> !c 
> !c The following platforms have been tested:
> !c Macos-Mojave + GNU-8.2 + HDF5-1.12.0 -> Works fine
> !c Ubuntu-16.04 + GNU-5.4 + HDF5-1.12.0 -> Crashes
> !c Ubuntu-16.04 + GNU-7.5 + HDF5-1.12.0 -> Crashes
> !c Ubuntu-16.04 + GNU-5.4 + HDF5-1.10.x -> Works fine
> !c Centos-7 + Intel2018 + HDF5-1.12.0 -> Works fine
> !c
> !c Possible error when code crashes
> !c At line 6686 of file H5_gen.F90
> !c Fortran runtime error: Index '1' of dimension 1 of array 'buf' above upper bound of 0
> !c 
>
> program hdf5_zero_data
>
>
> #include 
>
>   use petscsys
>   use hdf5
>
>   implicit none 
>
>   character(len=10), parameter :: h5File_Name = "SDS_row.h5"
>   character(len=8), parameter :: DatasetName = "IntArray"
>   integer, parameter :: nx = 8, ny = 5, ndim = 2
>
>   integer :: i
>   integer :: hdf5_ierr   ! HDF5 error code
>   PetscErrorCode :: ierr
>
>   integer(HID_T) :: file_id  ! File identifier
>   integer(HID_T) :: dset_id  ! Dataset identifier
>   integer(HID_T) :: space_id ! Space identifier
>   integer(HID_T) 

Re: [petsc-users] Parallel writing in HDF5-1.12.0 when some processors have no data to write

2020-06-13 Thread Jed Brown
Can you reproduce in C?  You're missing three extra arguments that exist in the 
Fortran interface.

https://support.hdfgroup.org/HDF5/doc/RM/RM_H5D.html#Dataset-Create

Danyang Su  writes:

> Hi Jed,
>
> Attached is the example for your test.  
>
> This example uses H5Sselect_none to tell the H5Dwrite call that there will be no data.
> The 4th process HAS to participate since we are in collective mode.
> The code is ported and adapted from the C example at https://support.hdfgroup.org/ftp/HDF5/examples/parallel/coll_test.c
>
> The compiler flags in the makefile are the same as those used in my own code.
>
> To compile the code, please run 'make all'
> To test the code, please run 'mpiexec -n 4 ./hdf5_zero_data'. Any number of processes greater than 4 should also reproduce the problem.
>
> The code may crash on HDF5 1.12.0 but works fine on HDF5 1.10.x.
>
> The following platforms have been tested:
>   Macos-Mojave + GNU-8.2 + HDF5-1.12.0 -> Works fine
>   Ubuntu-16.04 + GNU-5.4 + HDF5-1.12.0 -> Crashes
>   Ubuntu-16.04 + GNU-7.5 + HDF5-1.12.0 -> Crashes
>   Ubuntu-16.04 + GNU-5.4 + HDF5-1.10.x -> Works fine
>   Centos-7 + Intel2018 + HDF5-1.12.0  -> Works fine
>
> Possible error when code crashes
>  At line 6686 of file H5_gen.F90
>  Fortran runtime error: Index '1' of dimension 1 of array 'buf' above upper bound of 0
>
> Thanks,
>
> Danyang
>
> On 2020-06-12, 6:05 AM, "Jed Brown"  wrote:
>
> Danyang Su  writes:
>
> > Hi Jed,
> >
> > Thanks for double-checking.
> >
> > The HDF5 1.10.6 version also works, but versions from 1.12.x stop working.
>
> I'd suggest making a reduced test case in order to submit a bug report.
>
> This was the relevant change in PETSc for hdf5-1.12.
>
> https://gitlab.com/petsc/petsc/commit/806daeb7de397195b5132278177f4d5553f9f612
>
> > Attached is the code section where I have the problem.
> >
> > !c write the dataset collectively
> > !!!
> >  CODE CRASHES HERE IF SOME PROCESSORS HAVE NO DATA TO WRITE
> > !!!
> > call h5dwrite_f(dset_id, H5T_NATIVE_DOUBLE, dataset, hdf5_dsize,   &
> >                 hdf5_ierr, file_space_id=filespace,                &
> >                 mem_space_id=memspace, xfer_prp = xlist_id)
> >
> > Please let me know if there is something wrong in the code that causes the problem.
>
> !c
> !c This example uses H5Sselect_none to tell the H5Dwrite call that 
> !c there will be no data. 4-th process HAS to participate since 
> !c we are in a collective mode.
> !c 
> !c The code is ported and modified based on the C example 
> !c from https://support.hdfgroup.org/ftp/HDF5/examples/parallel/coll_test.c
> !c by Danyang Su on June 12, 2020.
> !c
> !c To compile the code, please run 'make all'
> !c To test the code, please run 'mpiexec -n 4 ./hdf5_zero_data'
> !c
> !c IMPORTANT NOTE
> !c The code may crash on HDF5 1.12.0 but works fine on HDF5 1.10.x.
> !c 
> !c The following platforms have been tested:
> !c Macos-Mojave + GNU-8.2 + HDF5-1.12.0 -> Works fine
> !c Ubuntu-16.04 + GNU-5.4 + HDF5-1.12.0 -> Crashes
> !c Ubuntu-16.04 + GNU-7.5 + HDF5-1.12.0 -> Crashes
> !c Ubuntu-16.04 + GNU-5.4 + HDF5-1.10.x -> Works fine
> !c Centos-7 + Intel2018 + HDF5-1.12.0 -> Works fine
> !c
> !c Possible error when code crashes
> !c At line 6686 of file H5_gen.F90
> !c Fortran runtime error: Index '1' of dimension 1 of array 'buf' above upper bound of 0
> !c 
>
> program hdf5_zero_data
>
>
> #include 
>
>   use petscsys
>   use hdf5
>
>   implicit none 
>
>   character(len=10), parameter :: h5File_Name = "SDS_row.h5"
>   character(len=8), parameter :: DatasetName = "IntArray"
>   integer, parameter :: nx = 8, ny = 5, ndim = 2
>
>   integer :: i
>   integer :: hdf5_ierr   ! HDF5 error code
>   PetscErrorCode :: ierr
>
>   integer(HID_T) :: file_id  ! File identifier
>   integer(HID_T) :: dset_id  ! Dataset identifier
>   integer(HID_T) :: space_id ! Space identifier
>   integer(HID_T) :: plist_id ! Property list identifier
>   integer(HID_T) :: FileSpace ! File dataspace identifier
>   integer(HID_T) :: MemSpace  ! Memory dataspace identifier
>
>   integer(HSIZE_T) :: dimsf(2)   ! Dataset dimensions
>   integer(HSIZE_T) :: hcount(2)  ! hyperslab selection parameters
>   integer(HSIZE_T) :: offset(2)  ! hyperslab selection parameters
>
>   integer, allocatable :: data(:)! Dataset to write
>
>   integer :: mpi_size, mpi_rank;
>
>   !c Initialize MPI.
>   call PetscInitialize(Petsc_Null_Character,ierr)  
>   CHKERRQ(ierr)
>   call MPI_Comm_rank(Petsc_Comm_World,mpi_rank,ierr)
>   CHKERRQ(ierr)
>   call 

Re: [petsc-users] Parallel writing in HDF5-1.12.0 when some processors have no data to write

2020-06-12 Thread Danyang Su
Hi Jed,

Attached is the example for your test.  

This example uses H5Sselect_none to tell the H5Dwrite call that there will be no data.
The 4th process HAS to participate since we are in collective mode.
The code is ported and adapted from the C example at 
https://support.hdfgroup.org/ftp/HDF5/examples/parallel/coll_test.c

The compiler flags in the makefile are the same as those used in my own code.

To compile the code, please run 'make all'
To test the code, please run 'mpiexec -n 4 ./hdf5_zero_data'. Any number of 
processes greater than 4 should also reproduce the problem.

The code may crash on HDF5 1.12.0 but works fine on HDF5 1.10.x.

The following platforms have been tested:
  Macos-Mojave + GNU-8.2 + HDF5-1.12.0 -> Works fine
  Ubuntu-16.04 + GNU-5.4 + HDF5-1.12.0 -> Crashes
  Ubuntu-16.04 + GNU-7.5 + HDF5-1.12.0 -> Crashes
  Ubuntu-16.04 + GNU-5.4 + HDF5-1.10.x -> Works fine
  Centos-7 + Intel2018 + HDF5-1.12.0  -> Works fine

Possible error when code crashes
 At line 6686 of file H5_gen.F90
 Fortran runtime error: Index '1' of dimension 1 of array 'buf' above upper bound of 0

Thanks,

Danyang

On 2020-06-12, 6:05 AM, "Jed Brown"  wrote:

Danyang Su  writes:

> Hi Jed,
>
> Thanks for double-checking.
>
> The HDF5 1.10.6 version also works, but versions from 1.12.x stop working.

I'd suggest making a reduced test case in order to submit a bug report.

This was the relevant change in PETSc for hdf5-1.12.


https://gitlab.com/petsc/petsc/commit/806daeb7de397195b5132278177f4d5553f9f612

> Attached is the code section where I have the problem.
>
> !c write the dataset collectively
> !!!
>  CODE CRASHES HERE IF SOME PROCESSORS HAVE NO DATA TO WRITE
> !!!
> call h5dwrite_f(dset_id, H5T_NATIVE_DOUBLE, dataset, hdf5_dsize,   &
>                 hdf5_ierr, file_space_id=filespace,                &
>                 mem_space_id=memspace, xfer_prp = xlist_id)
>
> Please let me know if there is something wrong in the code that causes the problem.



hdf5_zero_data.F90
Description: Binary data


makefile
Description: Binary data


Re: [petsc-users] Parallel writing in HDF5-1.12.0 when some processors have no data to write

2020-06-12 Thread Jed Brown
Danyang Su  writes:

> Hi Jed,
>
> Thanks for double-checking.
>
> The HDF5 1.10.6 version also works, but versions from 1.12.x stop working.

I'd suggest making a reduced test case in order to submit a bug report.

This was the relevant change in PETSc for hdf5-1.12.

https://gitlab.com/petsc/petsc/commit/806daeb7de397195b5132278177f4d5553f9f612

> Attached is the code section where I have the problem.
>
> !c write the dataset collectively
> !!!
>  CODE CRASHES HERE IF SOME PROCESSORS HAVE NO DATA TO WRITE
> !!!
> call h5dwrite_f(dset_id, H5T_NATIVE_DOUBLE, dataset, hdf5_dsize,   &
>                 hdf5_ierr, file_space_id=filespace,                &
>                 mem_space_id=memspace, xfer_prp = xlist_id)
>
> Please let me know if there is something wrong in the code that causes the 
> problem.


Re: [petsc-users] Parallel writing in HDF5-1.12.0 when some processors have no data to write

2020-06-12 Thread Danyang Su
Hi Jed,

Thanks for double-checking.

The HDF5 1.10.6 version also works, but versions from 1.12.x stop working.

Attached is the code section where I have the problem.

!c write the dataset collectively
!!!
 CODE CRASHES HERE IF SOME PROCESSORS HAVE NO DATA TO WRITE
!!!
call h5dwrite_f(dset_id, H5T_NATIVE_DOUBLE, dataset, hdf5_dsize,   &
                hdf5_ierr, file_space_id=filespace,                &
                mem_space_id=memspace, xfer_prp = xlist_id)

Please let me know if there is something wrong in the code that causes the 
problem.
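
For comparison, a rough C-level equivalent of the h5dwrite_f call above, written as a
small helper so it compiles on its own (a sketch only; the handle names simply mirror
the Fortran variables and are assumed to have been created the same way):

#include <hdf5.h>

/* Sketch: C counterpart of the Fortran h5dwrite_f call above. */
static herr_t write_collective(hid_t dset_id, hid_t memspace,
                               hid_t filespace, hid_t xlist_id,
                               const double *dataset)
{
    /* In C there is no dims argument and no separate error argument:
       hdf5_dsize and hdf5_ierr exist only in the Fortran wrapper. */
    return H5Dwrite(dset_id, H5T_NATIVE_DOUBLE, memspace, filespace,
                    xlist_id, dataset);
}

The dims argument (hdf5_dsize) and the error code (hdf5_ierr) exist only in the Fortran
wrapper layer (H5_gen.F90), which is where the "above upper bound of 0" runtime error
reported elsewhere in this thread is raised.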

Thanks,

Danyang

On 2020-06-11, 8:32 PM, "Jed Brown"  wrote:

Danyang Su  writes:

> Hi Barry,
>
> The HDF5 calls fail. I reconfigured PETSc with HDF5 1.10.5 and it works fine on 
> different platforms. So it is more likely that there is a bug in the latest HDF5 
> version.

I would double-check that you have not subtly violated a collective 
requirement in the interface, then report to upstream.



example.F90
Description: Binary data


Re: [petsc-users] Parallel writing in HDF5-1.12.0 when some processors have no data to write

2020-06-11 Thread Jed Brown
Danyang Su  writes:

> Hi Barry,
>
> The HDF5 calls fail. I reconfigured PETSc with HDF5 1.10.5 and it works fine 
> on different platforms. So it is more likely that there is a bug in the latest 
> HDF5 version.

I would double-check that you have not subtly violated a collective requirement 
in the interface, then report to upstream.
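
One way to double-check the collective behaviour from the C side is to query the
transfer property list after the write and see whether the operation really was
collective and, if not, why (a sketch; the helper name is made up, and these query
routines are only available in a parallel build of HDF5):

#include <stdio.h>
#include <stdint.h>
#include <hdf5.h>

/* Call after a collective H5Dwrite/H5Dread with the dxpl that was used. */
static void report_collective_io(hid_t dxpl_id)
{
    H5D_mpio_actual_io_mode_t mode;
    uint32_t local_cause = 0, global_cause = 0;

    H5Pget_mpio_actual_io_mode(dxpl_id, &mode);
    H5Pget_mpio_no_collective_cause(dxpl_id, &local_cause, &global_cause);

    /* Zero causes mean no rank broke collectivity; a nonzero bit encodes
       the reason collective I/O was demoted to independent I/O. */
    printf("actual io mode = %d, local cause = 0x%x, global cause = 0x%x\n",
           (int)mode, (unsigned)local_cause, (unsigned)global_cause);
}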


Re: [petsc-users] Parallel writing in HDF5-1.12.0 when some processors have no data to write

2020-06-11 Thread Danyang Su
Hi Barry,

The HDF5 calls fail. I reconfigured PETSc with HDF5 1.10.5 and it works fine on 
different platforms. So it is more likely that there is a bug in the latest HDF5 
version.

Thanks.

All the best,

Danyang



On June 11, 2020 5:58:28 a.m. PDT, Barry Smith  wrote:
>
>Are you making HDF5 calls that fail or is it PETSc routines calling
>HDF5 that fail? 
>
>Regardless, it sounds like the easiest fix is to switch back to the
>previous HDF5 and wait for HDF5 to fix what sounds to be a bug.
>
>   Barry
>
>
>> On Jun 11, 2020, at 1:05 AM, Danyang Su  wrote:
>> 
>> Hi All,
>>  
>> Sorry to send the previous incomplete email accidentally. 
>>  
>> After updating to HDF5-1.12.0, I get a problem when some processors have no
>> data to write or do not need to write. Since parallel writing is collective,
>> I cannot exclude those processors from the write. The old version does not
>> seem to have this problem. So far, the problem only occurs on Linux with the
>> GNU compiler. The same code has no problem with the Intel compiler, or with
>> the latest GNU compiler on macOS.
>>  
>> I have already included h5sselect_none in the code for those processors
>> without data, but it does not take effect. The problem is documented in the
>> following link (How do you write data when one process doesn't have or need
>> to write data?):
>> https://support.hdfgroup.org/HDF5/hdf5-quest.html#par-nodata
>
>>  
>> A similar problem has also been reported on the HDF Forum by others:
>> https://forum.hdfgroup.org/t/bug-on-hdf5-1-12-0-fortran-parallel/6864
>
>>  
>> Any suggestion for that?
>>  
>> Thanks,
>>  
>> Danyang

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

Re: [petsc-users] Parallel writing in HDF5-1.12.0 when some processors have no data to write

2020-06-11 Thread Barry Smith

  Are you making HDF5 calls that fail or is it PETSc routines calling HDF5 that 
fail? 

  Regardless, it sounds like the easiest fix is to switch back to the previous 
HDF5 and wait for HDF5 to fix what sounds to be a bug.

   Barry


> On Jun 11, 2020, at 1:05 AM, Danyang Su  wrote:
> 
> Hi All,
>  
> Sorry to send the previous incomplete email accidentally. 
>  
> After updating to HDF5-1.12.0, I get a problem when some processors have no 
> data to write or do not need to write. Since parallel writing is collective, 
> I cannot exclude those processors from the write. The old version does not 
> seem to have this problem. So far, the problem only occurs on Linux with the 
> GNU compiler. The same code has no problem with the Intel compiler, or with 
> the latest GNU compiler on macOS. 
>  
> I have already included h5sselect_none in the code for those processors 
> without data, but it does not take effect. The problem is documented in the 
> following link (How do you write data when one process doesn't have or need 
> to write data?):
>  https://support.hdfgroup.org/HDF5/hdf5-quest.html#par-nodata 
> 
>  
> A similar problem has also been reported on the HDF Forum by others:
> https://forum.hdfgroup.org/t/bug-on-hdf5-1-12-0-fortran-parallel/6864 
> 
>  
> Any suggestion for that?
>  
> Thanks,
>  
> Danyang



[petsc-users] Parallel writing in HDF5-1.12.0 when some processors have no data to write

2020-06-11 Thread Danyang Su
Hi All,

 

After updating to HDF5-1.12.0, I get a problem when some processors have no 
data to write or do not need to write. Since parallel writing is collective, 
I cannot exclude those processors from the write. The old version does not 
seem to have this problem. So far, the problem only occurs on Linux with the 
GNU compiler. The same code has no problem with the Intel compiler, or with 
the latest GNU compiler on macOS.

 

It looks like it is caused by the zero-sized memory space. However, as documented in