Dear petsc team,

Thank you for all your answers. I really appreciate it.

Best regards,
Medane


> On 7 Feb 2025, at 15:22, Matthew Knepley <knep...@gmail.com> wrote:
> 
> On Fri, Feb 7, 2025 at 8:20 AM medane.tchako...@univ-fcomte.fr wrote:
>> Please find below the output from the previous code, running on only one 
>> processor.
>> 
>> Mat Object: 1 MPI process
>>   type: seqdense
>> 7.2003197397953400e-01 3.9777780919128602e-01 9.8405227390177075e-01 1.4405427480322786e-01
>> 6.1793966542126100e-02 7.3036588248200474e-02 7.3851607000756303e-01 9.9650445216117589e-01
>> 1.0022337819588500e-02 1.0386628927366459e-01 4.0114727059134836e-01 1.0677308875937896e-01
>> 1.4463931936456476e-01 2.5078039364333193e-01 5.2764865382548720e-01 9.8905332488367748e-01
>> 
>> buffer[0] = 7.200320e-01
>> buffer[1] = 6.179397e-02
>> buffer[2] = 1.002234e-02
>> buffer[3] = 1.446393e-01
>> buffer[4] = 0.000000e+00
>> buffer[5] = 0.000000e+00
>> buffer[6] = 0.000000e+00
>> buffer[7] = 0.000000e+00
>> buffer[8] = 3.977778e-01
>> buffer[9] = 7.303659e-02
>> buffer[10] = 1.038663e-01
>> buffer[11] = 2.507804e-01
>> buffer[12] = 0.000000e+00
>> buffer[13] = 0.000000e+00
>> buffer[14] = 0.000000e+00
>> buffer[15] = 0.000000e+00
>> 
>> Mat Object: 1 MPI process
>>   type: seqdense
>> 7.2003197397953400e-01 3.9777780919128602e-01 9.8405227390177075e-01 1.4405427480322786e-01
>> 6.1793966542126100e-02 7.3036588248200474e-02 7.3851607000756303e-01 9.9650445216117589e-01
>> 1.0022337819588500e-02 1.0386628927366459e-01 4.0114727059134836e-01 1.0677308875937896e-01
>> 1.4463931936456476e-01 2.5078039364333193e-01 5.2764865382548720e-01 9.8905332488367748e-01
>> 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00
>> 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00
>> 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00
>> 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00
>> 
>> 
>> I was expecting "buffer" to contain only the data entries from R_part. 
>> Please let me know if this is the expected behavior and I am missing 
>> something.
> 
> As Jose already pointed out, MatDenseGetSubMatrix() does not copy. It gives 
> you a Mat front end to the same data, but with changed sizes. In this case, 
> the LDA (leading dimension of the storage) is still 8, the row count of 
> R_full, not 4, so consecutive columns of R_part are 8 entries apart in the 
> array. Iterating over the first 16 entries contiguously therefore also picks 
> up the trailing rows of each column, which belong to R_full but not to 
> R_part (the zeros you see).
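> 
> For reference, here is a minimal sketch of reading only the R_part entries 
> by querying the stride with MatDenseGetLDA() (assuming a real-scalar PETSc 
> build; the names m, n, lda, and a are just for illustration):
> 
>   PetscInt           m, n, lda;
>   const PetscScalar *a;
> 
>   PetscCall(MatGetLocalSize(R_part, &m, NULL)); /* local rows of the submatrix (4 here) */
>   PetscCall(MatGetSize(R_part, NULL, &n));      /* columns of the submatrix (4 here) */
>   PetscCall(MatDenseGetLDA(R_part, &lda));      /* leading dimension of the parent storage (8 here) */
>   PetscCall(MatDenseGetArrayRead(R_part, &a));
>   for (PetscInt j = 0; j < n; j++) {            /* column-major: columns are lda apart, not m */
>     for (PetscInt i = 0; i < m; i++) {
>       PetscCall(PetscPrintf(PETSC_COMM_SELF, "R_part(%d,%d) = %e\n", (int)i, (int)j, (double)a[i + j * lda]));
>     }
>   }
>   PetscCall(MatDenseRestoreArrayRead(R_part, &a));
> 
> With m = 4 and lda = 8, the index i + j * lda steps over the storage rows 
> that belong only to R_full.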
> 
>   Thanks,
> 
>     Matt
>  
>> Thanks,
>> Medane
>> 
>> 
>> 
>> > On 7 Feb 2025, at 11:34, Pierre Jolivet <pie...@joliv.et> wrote:
>> > 
>> > 
>> > 
>> >> On 7 Feb 2025, at 11:05 AM, medane.tchako...@univ-fcomte.fr wrote:
>> >> 
>> >> 
>> >> Dear all,
>> >> 
>> >> I have been getting incoherent data entries from the code below when 
>> >> printing the array. Maybe I'm doing something wrong.
>> > 
>> > What is incoherent?
>> > Everything looks OK to me.
>> > 
>> > Thanks,
>> > Pierre
>> > 
>> >> ----------------
>> >> 
>> >>   PetscInt    nlines = 8; // rows
>> >>   PetscInt    ncols  = 4; // columns
>> >>   PetscMPIInt rank;
>> >>   PetscMPIInt size;
>> >> 
>> >>   // Initialize PETSc
>> >>   PetscCall(PetscInitialize(&argc, &args, NULL, NULL));
>> >>   PetscCallMPI(MPI_Comm_rank(MPI_COMM_WORLD, &rank));
>> >>   PetscCallMPI(MPI_Comm_size(MPI_COMM_WORLD, &size));
>> >> 
>> >>   Mat      R_full;
>> >>   Mat      R_part;
>> >>   PetscInt idx_first_row         = 0;
>> >>   PetscInt idx_one_plus_last_row = nlines / 2;
>> >>   PetscCall(MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, nlines, ncols, NULL, &R_full));
>> >> 
>> >>   // Get the sub matrix (rows [0, nlines/2), all columns)
>> >>   PetscCall(MatDenseGetSubMatrix(R_full, idx_first_row, idx_one_plus_last_row, PETSC_DECIDE, PETSC_DECIDE, &R_part));
>> >>   // Fill the sub matrix with random entries
>> >>   PetscCall(MatSetRandom(R_part, NULL));
>> >>   // View the sub matrix
>> >>   PetscCall(MatView(R_part, PETSC_VIEWER_STDOUT_WORLD));
>> >> 
>> >>   // Get the array from the sub matrix and print its entries
>> >>   PetscScalar *buffer;
>> >>   PetscCall(MatDenseGetArray(R_part, &buffer));
>> >>   PetscInt idx_end = (nlines / 2) * ncols;
>> >> 
>> >>   for (int i = 0; i < idx_end; i++) {
>> >>     PetscCall(PetscPrintf(PETSC_COMM_SELF, "buffer[%d] = %e \n", i, buffer[i]));
>> >>   }
>> >> 
>> >>   // Restore the array to the sub matrix
>> >>   PetscCall(MatDenseRestoreArray(R_part, &buffer));
>> >>   // Restore the sub matrix
>> >>   PetscCall(MatDenseRestoreSubMatrix(R_full, &R_part));
>> >>   // View the initial matrix
>> >>   PetscCall(MatView(R_full, PETSC_VIEWER_STDOUT_WORLD));
>> >> 
>> >>   PetscCall(MatDestroy(&R_full));
>> >> 
>> >>   PetscCall(PetscFinalize());
>> >>   return 0;
>> >> 
>> >> ----------------
>> >> 
>> >> 
>> >> Thanks
>> >> Medane
>> > 
>> > 
>> 
> 
> 
> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/
