Jones Beene wrote:
> Most viable concepts for commercial vehicles which would utilize LENR need
> to have efficient water-splitting as part of the package. Compressed
> hydrogen gas as the alternative - that is probably a non-starter for safety
> reasons,
>
Do you mean the hydrogen or the deuterium?
The question is about vectors. I think it will work, but haven't tested.
Barry Smith writes:
> We seem to be emphasizing using MatSetValuesCOO() for GPUs (can also be for
> CPUs); in the main branch you can find a simple example in
> src/mat/tutorials/ex18.c which demonstrates its use.
>
>
-march=native, which is also recognized by the new Intel compilers (icx), which
are based on LLVM.
Ernesto Prudencio via petsc-users writes:
> Hi all.
>
> When compiling PETSc with INTEL compilers, we have been using the options
> "-Ofast -xHost". Is there an equivalent to -xHost for GNU compi
Robin wrote:
> Just Google atomic or molecular self-assembly.
>
I don't see how this could apply to making a cathode. Perhaps you could
explain in a little more detail?
Robin wrote:
> I wonder if atomic/molecular self-assembly could be used to create uniform
> structures of exactly the right size and
> shape for the NAE?
>
What do you mean by "self-assembly"? What RNA and ribosomes do?
Here is a preprint of an ICCF-23 paper:
Storms, E. *The Nature of the D+D Fusion Reaction in Palladium and Nickel
(preprint)*. in *ICCF-23*. 2021. Xiamen, China.
https://www.lenr-canr.org/acrobat/StormsEthenatureob.pdf
Could you explain more of what you mean by "local matrix"? If you're thinking
of finite elements, then that doesn't exist and can't readily be constructed
except at assembly time (see MATIS, for example). If you mean an overlapping
block, then MatGetSubMatrix() or MatGetSubMatrices(), as used in
Years ago, Peter Hagelstein wrote one of the best essays I know of about
science and human nature:
https://www.lenr-canr.org/acrobat/Hagelsteinontheoryan.pdf
He wrote another wide-ranging paper in JCMNS 35:
"Theory and Experiments in Condensed Matter Nuclear Science"
https://www.lenr-canr.org/a
e
> mentioning already exist?
>
> Or you are simply sketching out what is going to be needed?
>
> Thank you,
>
> -Alfredo
>
>
> On Thu, Mar 10, 2022 at 3:40 PM Zhang, Hong wrote:
>
>>
>> > On Mar 10, 2022, at 2:51 PM, Jed Brown wrote:
>
Would the order be inferred by the number of vectors in TSBDFSetStepVecs(TS ts,
PetscInt num_steps, const PetscReal *times, const Vec *vecs)?
"Zhang, Hong" writes:
> It is clear to me now. You need TSBDFGetStepVecs(TS ts, PetscInt *num_steps,
> const PetscReal **times, const V
I meant:
JOURNAL OF CONDENSED MATTER NUCLEAR SCIENCE *Vol. 35* is uploaded.
https://www.lenr-canr.org/acrobat/BiberianJPjcondensedzh.pdf
+1 (binding)
Checked signatures, checksums, and licences
+1 (binding)
JOURNAL OF CONDENSED MATTER NUCLEAR SCIENCE Vol. 34 is uploaded.
https://www.lenr-canr.org/acrobat/BiberianJPjcondensedzh.pdf
ods. But I doubt it is the best
>> solution to Alfredo’s problem. Alfredo, can you elaborate a bit on what you
>> would like to do? TSBDF_Restart is already using the previous solution to
>> restart the integration with first-order BDF.
>>
>> Hong(Mr.)
>>
>>
Can you restart using small low-order steps?
Hong, does (or should) your trajectory stuff support an exact checkpointing
scheme for BDF?
I think we could add an interface to access the stored steps, but there are few
things other than checkpointing that would make sense mathematically. Would yo
Good points Jarek. I mentioned it in the initial email, but I think we
should keep it optional to start with. I'd rather get the basics in place
first, as I think we are going to find some interesting scenarios as we try
and put rules around it. Even if only release managers touch it in the
short t
Thanks Kaxil!
Here is a specific example from pip, a bugfix in 22.0.3 (note the file
added to `news`):
https://github.com/pypa/pip/pull/10869/files
Which is moved from that news file to NEWS.rst here during release (note
the `news` deletions):
https://github.com/pypa/pip/commit/44018de50cafba2544
nges
- New cool doc section (#)
```
Commit the newfragments (`git add chart/newsfragments && git commit -m
"demo" -n` ), then remove the `--draft` flag and observe the fragments are
deleted and the release notes are in `chart/RELEASE_NOTES.rst`.
Thanks,
Jed
I regret to announce that Charles Entenmann died on February 24, 2022. See:
http://www.infinite-energy.com/resources/charles-entenmann.html
https://www.legacy.com/us/obituaries/newsday/name/charles-entenmann-obituary?id=33396300
I think SNESLineSearchApply_Basic will clarify these points. Note that the
pre-check is applied before "taking the step", so X is x_k. You're right that
the sign is flipped on search direction Y, as it's using -lambda below.
/* precheck */
ierr = SNESLineSearchPreCheck(linesearch,X,Y,&change
Ivermectin improves the prognosis for patients infected with parasites. It
does nothing to prevent or cure COVID. Double blind tests of ivermectin
only show positive results in places where parasites are widespread, such
as India. See:
https://astralcodexten.substack.com/p/ivermectin-much-more-tha
ling to help bulk-generate missing
newsfragments
- Minor tweaks to some other release-manager specific tooling (e.g. chart
artifacthub changelog generator)
Please check out the feature branch and experiment! I'm eager to hear your
feedback.
https://github.com/apache/airflow/pull/22003
Thanks,
Jed
+1 (binding)
Verified signatures (though I signed them), licenses, and checksums. Ran
through a couple installs with different config and ran a DAG.
This is a small problem for which direct Householder QR may be fast enough
(depending on the rest of your application). For multi-node, you can use TSQR
(backward stable like Householder) or Cholesky (unstable).
julia> A = rand(20, 200);
julia> @time Q, R = qr(A);
0.866989 seconds (14 all
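The stability gap between Householder QR and Cholesky QR can be sketched in a few lines. This is a toy illustration (sizes and conditioning are my own choices, not tied to the Julia timing above): Householder QR keeps Q orthogonal to machine precision, while forming R from the Gram matrix loses orthogonality like eps * cond(A)^2.

```python
# Toy sketch (assumed sizes/conditioning): compare orthogonality error
# ||Q^T Q - I|| for Householder QR vs Cholesky-based QR on an
# ill-conditioned tall-skinny matrix.
import numpy as np

rng = np.random.default_rng(0)
m, n = 200, 20
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
S = np.diag(np.logspace(0, -6, n))        # singular values: condition number 1e6
A = U @ S @ V.T

Qh, _ = np.linalg.qr(A)                   # Householder QR (backward stable)
R = np.linalg.cholesky(A.T @ A).T         # CholeskyQR: R from the Gram matrix
Qc = A @ np.linalg.inv(R)

err_house = np.linalg.norm(Qh.T @ Qh - np.eye(n))
err_chol = np.linalg.norm(Qc.T @ Qc - np.eye(n))
# Householder stays orthogonal to ~machine eps; CholeskyQR loses
# roughly eps * cond(A)^2 digits of orthogonality.
print(err_house < 1e-12, err_chol > 1e-8)
```

The same effect is why TSQR (communication-avoiding but Householder-stable) is preferred over CholeskyQR for ill-conditioned panels.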
I wrote:
> I asked them if they plan to give out more than one prize. I will report
> back if they respond.
>
They say they haven't decided yet. They have not decided the prize amount
either, contrary to what Celani reported.
I wrote:
> Maybe they plan to give more than one prize. It says $2 million or $3
> million. Maybe that means $2 million to one person and another $1 or $2
> million to another.
>
I asked them if they plan to give out more than one prize. I will report
back if they respond.
Jones Beene wrote:
> Time's a wasting. This prize should be claimed by someone we know, no?
>
Maybe they plan to give more than one prize. It says $2 million or $3
million. Maybe that means $2 million to one person and another $1 or $2
million to another.
>
Jürg Wyttenbach wrote:
> Only a fool would sell the Gates/Page blood suckers a working LENR
> reaction.
>
If they are giving a prize with no strings attached, why not show them the
reaction? As long as they do not demand a share of the intellectual
property, what harm can they do?
> And of cour
to prevent such crashes.
>
>> On Feb 27, 2022, at 4:24 PM, Jed Brown wrote:
>>
>> I assume this would be running VecWAXPY on CPU (and GPU) with some empty
>> ranks? I'd be mildly concerned about allocating GPU memory because a crash
>> here would be rea
n n seconds it could automatically run a few levels of streams
> (taking presumably well less than a few seconds) and adjust suitable the
> output. If the user runs, for example, 10min they surely don't mind .5
> seconds to get more useful information.
>
>
>
>> On Fe
Probably not implied by -log_view alone, but -streams_view or some such doing
it automatically would save having to context switch elsewhere to obtain that
data.
Barry Smith writes:
> We should think about have -log_view automatically running streams on
> subsets of ranks and using the resu
This is pretty typical. You see the factorization time is significantly better
(because they're more compute-limited), but MatMult and MatSolve are about the
same because they are limited by memory bandwidth. On most modern
architectures, the bandwidth is saturated with 16 cores or so.
https://pet
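A back-of-envelope sketch of the bandwidth argument (all numbers below are invented for illustration, not taken from any log in this thread): an AIJ SpMV streams roughly 12 bytes per nonzero, so timing MatMult gives an achieved-bandwidth estimate to compare against the machine's STREAM number.

```python
# Rough achieved-bandwidth estimate for an SpMV (MatMult).
# AIJ streams ~12 bytes per nonzero (8-byte double value + 4-byte column
# index); vector traffic is neglected here. nnz and time_s are assumed.
nnz = 50_000_000        # nonzeros in the matrix (assumed)
time_s = 0.004          # measured MatMult time in seconds (assumed)
bytes_per_nnz = 12
achieved_gbs = nnz * bytes_per_nnz / time_s / 1e9
print(f"{achieved_gbs:.0f} GB/s")  # compare against the machine's STREAM bandwidth
```

If this number is near the STREAM figure, adding cores can't speed up MatMult; that is the saturation the message above describes.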
Is there a reason that the twederation plugin doesn't work?
https://github.com/inmysocks/TW5-TWederation/tree/master/Federation-core
On Wednesday, February 23, 2022 at 11:59:41 PM UTC+1 cj.v...@gmail.com
wrote:
> https://youtu.be/IKJZEKJp9Ck
>
> A short video to introduce a first step in a littl
At the DARP workshop Francesco Celani said that the Anthropocene Institute
is offering a $2 million prize for a "simple/reproducible LENR experiment."
I don't see anything about this at https://www.iccf24.org/
There is one slide about it here:
https://arpa-e.energy.gov/sites/default/files/2021LENR_w
Have you tried Bob?
You can get around all the setup by using the executable version. The
newest version is here
https://github.com/OokTech/TW5-BobEXE/releases/tag/1.7.3b1
Just download the executable for your system, put it in a folder and run it.
On Wednesday, February 23, 2022 at 6:56:50 PM
It would be good to report a reduced test case upstream. They may not fix it,
but a lot of things related to static libraries don't work without coaxing and
they'll never get fixed if people who use CMake with static libraries don't
make their voices heard.
"Palmer, Bruce J via petsc-users" wr
GELOG here for more details:
https://airflow.apache.org/docs/apache-airflow/2.2.4/changelog.html
Container images are published at:
https://hub.docker.com/r/apache/airflow/tags/?page=1&name=2.2.4
Thanks,
Jed
Hello,
Apache Airflow 2.2.4 (based on RC1) has been accepted.
3 “+1” binding votes received:
- Jed Cunningham
- Kaxil Naik
- Ephraim Anierobi
Vote thread:https://lists.apache.org/thread/pgxczr9qdnfqptxg7k6op518l0yk429z
I'll continue with the release process, and the release announcement
The website for ICCF-24 has been updated to include the Call for Papers and
other items.
https://www.iccf24.org/
If you can share before/after output from -log_view, it would likely help
localize.
Another unintrusive thing (if you're allowed to run Linux perf) is to
$ perf record --call-graph dwarf -F99 ./app
[... runs ...]
$ perf script | stackcollapse-perf | flamegraph > flame.svg
and open flame.svg in
+1 (Binding)
Verified licenses, signatures, and checksums.
ck documentation about context usage in Python/@task (#18868)
- Clean up dynamic `start_date` values from docs (#19607)
- Docs for multiple pool slots (#20257)
- Update upgrading.rst with detailed code example of how to resolve
post-upgrade warning (#19993)
*Misc*:
- Deprecate some functions in the experimental API (#19931)
- Deprecate smart sensors (#20151)
Thanks,
Jed
We need to make these docs more explicit, but the short answer is configure
with --download-kokkos --download-kokkos-kernels and run almost any example
with -dm_mat_type aijkokkos -dm_vec_type kokkos. If you run with -log_view, you
should see that all the flops take place on the device and there
Note that operations that don't have communication (like VecAXPY and
VecPointwiseMult) are already non-blocking on streams. (A recent Thrust update
helped us recover what had silently become blocking in a previous release.) For
multi-rank, operations like MatMult require communication and MPI do
See: https://www.lenr-canr.org/acrobat/PanethFthepublica.pdf
The Publications of Fritz Paneth and Kurt Peters: Precursor to the
Discovery of Cold Fusion
Contents
Introduction.
Paneth, F. and K. Peters, On the transmutation of hydrogen into helium.
Ber., 1926. 59: p. 2039 (translation).
Paneth, F.
+1 (Binding)
Verified licenses, signatures, and checksums.
VecDuplicateVecs isn't implemented in petsc4py, but it internally just loops
over VecDuplicate so you can use
qs = [x.duplicate() for i in range(4)]
y.maxpy(alphas, qs)
where the Python binding here handles qs being a Python array.
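For clarity, here is what that maxpy call computes, sketched with numpy stand-ins so it runs without petsc4py (the names mirror the snippet above; the semantics of Vec.maxpy are y <- y + sum_i alphas[i] * qs[i]):

```python
# numpy stand-in for petsc4py's y.maxpy(alphas, qs):
#   y <- y + sum_i alphas[i] * qs[i]
import numpy as np

y = np.array([1.0, 2.0, 3.0])
qs = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
alphas = [2.0, 3.0]
for a, q in zip(alphas, qs):
    y += a * q
print(y.tolist())  # [3.0, 5.0, 3.0]
```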
Samar Khatiwala writes:
> Hello,
>
> I’d like to create an
Matthew and Jed,
>
> Brilliant. Thank you so much!
>
> Your changes work like a charm Matthew (I tested your branch on the gmsh
> file I sent) and thank you so much for your advice Jed. The loss of one
> order of convergence for an inf-sup stable pressure discretization seems
Susanne, do you want PetscFE to make the serendipity (8-node) finite element
space or do you just want to read these meshes? I.e., would it be okay with you
if the coordinates were placed in a Q_2 (9-node, biquadratic) finite element
space?
This won't matter if you're traversing the dofs per ed
CB Sites wrote:
> I really like how the Chubb brothers worked on it from the solid state POV.
>
(It was uncle Talbot and his nephew Scott, both deceased.)
+1 (binding)
> actually not allocate the nonzeros and just stores the nonzero structure. But
> if this is not the case then of course I just duplicate the matrix.
>
> Thanks for the feedback.
>
>> Gesendet: Donnerstag, den 03.02.2022 um 03:09 Uhr
>> Von: "Jed Brown"
>> A
"Evstafyeva,Tamara" writes:
> Thanks for your prompt reply. I am attaching the makefile; the line for
> execution “make all -j 4”
>
> I guess using both was my attempt at trying multiple things until they work –
> using either one or the other produced the same error for me.
petsc.pc isn't bei
Hmm, usually we don't use BOTH the makefile includes and pkgconfig (as in
Makefile.user). You can use either. If you share the whole file and the command
line that executes, I think it'll be easy enough to fix.
"Evstafyeva,Tamara" writes:
> To whom it may concern,
>
> I am using a code that ut
The other day Francesco Celani and his friend asked me if I know of any
papers that discuss the role of H in the bulk Pd cold fusion. Can H enhance
the reaction? Is there an H-D reaction? I said I don't recall any papers
like that. It turns out they already found one, which I added to the
library:
Marius Buerkle writes:
> Thanks for they reply. Yes the example works, this is how I was doing it
> before. But the matrix is rather big and i need a matrix with the same
> structure at various points in my code. So it was convenient to create the
> matrix with preallocate, destroy it after us
I've created some .pdfs of P&L reports, and set the report to use the
two-column layout. I just discovered this option doesn't size the output to
the printed page, so the far right totals (the number I'm most interested in
- the net P/L for the period) are missing.
Changing the report layout t
Matthew Knepley writes:
> On Wed, Feb 2, 2022 at 6:08 PM Jorti, Zakariae via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
>
>> Hello,
>>
>> I am using a TS to solve a differential algebraic equation (DAE).
>> I do not provide the Jacobian matrix but instead set the TS to use a
>> finite differ
I hesitate to mention this because it might be a copyright violation, but
this book has been uploaded in Acrobat format:
*Developments in Electrochemistry - Science Inspired by Martin Fleischmann*,
ed. D. Pletcher, Z.Q. Tian, and D.E.G. Williams. 2014: Wiley.
https://www.lenr-forum.com/attachment
There have been a few signs of increased interest in the field by official
agencies, especially defense agencies. I have uploaded some recent
PowerPoint slides from ARPA-E LENR Workshop, and a document from the
Norwegian Defense Research Establishment (Forsvarets forskningsinstitutt --
FFI).
https
Stefano Zampini writes:
> Il giorno mar 1 feb 2022 alle ore 18:34 Jed Brown ha
> scritto:
>
>> Patrick Sanan writes:
>>
>> > Am Di., 1. Feb. 2022 um 16:20 Uhr schrieb Jed Brown :
>> >
>> >> Patrick Sanan writes:
>>
Patrick Sanan writes:
> Am Di., 1. Feb. 2022 um 16:20 Uhr schrieb Jed Brown :
>
>> Patrick Sanan writes:
>>
>> > Sorry about the delay on this. I can reproduce.
>> >
>> > This regression appears to be a result of this optimization:
>> >
Patrick Sanan writes:
> Sorry about the delay on this. I can reproduce.
>
> This regression appears to be a result of this optimization:
> https://gitlab.com/petsc/petsc/-/merge_requests/4273
Thanks for tracking this down. Is there a reason to prefer preallocating twice
ierr = MatPreallocato
Carlson
Sent: Saturday, January 29, 2022 01:31
To: Jed Taylor
Cc: Gnucash Users
Subject: Re: [GNC] Can't get Scheduled Transactions to work
Jed,
Scheduled transactions have had an issue for a long time that may explain part
of what you are seeing, as it involves templates showi
2022-02-01?
-Original Message-
From: gnucash-user
On Behalf Of Jed Taylor
Sent: Friday, January 28, 2022 15:49
To: gnucash-user@gnucash.org
Subject: [GNC] Can't get Scheduled Transactions to work
GnuCash v4.6 on Windows 10.
I'm working on setting up the books for my condo assoc
it to execute the ST process.
Shouldn't what's in the Review Transactions G/J match *exactly* with what's
been posted to G/J? Why the extra transaction dated today when it knows
that all the others are the first day of the month?
Thanks.
-Original Message-
From: gnucash
GnuCash v4.6 on Windows 10.
I'm working on setting up the books for my condo association, which has 20
units. With 20 units, I'd like to have GnuCash automatically post the
monthly assessment due from each unit, which is a different amount for each,
on the first of the month to the G/J.
I'
Here is a report from lenr-forum:
"There has been another successful replication [of the Pd Ni-mesh
experiment] by a third party Japanese publicly traded company. They are
working on the final report now, and when done it will be posted here on
the forum. Not sure what reactor, but results were 64
Let's move Crusher stuff to petsc-maint. If top/htop doesn't make it obvious
why there is no memory, I think you should follow up with OLCF support.
Mark Adams writes:
> Something is very messed up on Crusher. I've never seen this "Cannot
> allocate memory", but see it for everything:
>
> 13:1
"multigrid as a solver" generally means stationary (Richardson) iterations:
-ksp_type richardson -pc_type hypre
This might not converge, and you'll almost certainly see faster convergence if
you use it with a Krylov method. -ksp_type cg -pc_type hypre if your problem is
SPD.
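As a rough illustration of why wrapping the preconditioner in a Krylov method pays off, here is a toy sketch (my own 1-D Laplacian example with a bare Richardson iteration, not hypre) comparing iteration counts on an SPD system:

```python
# Toy comparison: stationary Richardson iteration vs CG on the same SPD
# system (1-D Laplacian). CG needs dramatically fewer iterations.
import numpy as np

def richardson(A, b, omega, tol=1e-8, maxit=10000):
    x = np.zeros_like(b)
    for k in range(1, maxit + 1):
        r = b - A @ x
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k
        x = x + omega * r   # stationary update; converges only if omega < 2/lambda_max
    return x, maxit

def cg(A, b, tol=1e-8, maxit=10000):
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rr = r @ r
    for k in range(1, maxit + 1):
        Ap = A @ p
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) < tol * np.linalg.norm(b):
            return x, k
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x, maxit

n = 50
A = np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
b = np.ones(n)
_, it_rich = richardson(A, b, omega=0.49)  # lambda_max ~ 4, so omega < 0.5
_, it_cg = cg(A, b)
print(it_cg < it_rich)
```

With a real multigrid preconditioner both counts shrink, but the same ordering usually holds, which is why -ksp_type richardson is mostly useful as a smoother or for debugging.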
sijie tang writes
Barry Smith writes:
>> What is the command line option to turn
>> PetscLogGpuTimeBegin/PetscLogGpuTimeEnd into a no-op even when -log_view is
>> on? I know it'll mess up attribution, but it'll still tell us how long the
>> solve took.
>
> We don't have an API for this yet. It is slightly tri
Barry Smith writes:
>> On Jan 25, 2022, at 11:55 AM, Jed Brown wrote:
>>
>> Barry Smith writes:
>>
>>> Thanks Mark, far more interesting. I've improved the formatting to make it
>>> easier to read (and fixed width font for email readi
Barry Smith writes:
> Thanks Mark, far more interesting. I've improved the formatting to make it
> easier to read (and fixed width font for email reading)
>
> * Can you do same run with say 10 iterations of Jacobi PC?
>
> * PCApply performance (looks like GAMG) is terrible! Problems too sm
Mark Adams writes:
> adding Suyash,
>
> I found the/a problem. Using ex56, which has a crappy decomposition, using
> one MPI process/GPU is much faster than using 8 (64 total). (I am looking
> at ex13 to see how much of this is due to the decomposition)
> If you only use 8 processes it seems that
Here is an obituary of Martin Fleischmann by D. Williams:
https://www.chemistryworld.com/opinion/martin-fleischmann-1927-2012/5401.article
Some of this is pleasing. It reminds me of what McKubre and others said
about Martin. Unfortunately, the parts about cold fusion are nonsense.
Either Williams
"Paul T. Bauman" writes:
> 1. `rocgdb` will be in your PATH when the `rocm` module is loaded. This is
> gdb, but with some extra AMDGPU goodies. AFAIK, you cannot, yet, do
> stepping through a kernel in the source (only the ISA), but you can query
> device variables in host code, print their valu
Barry Smith writes:
> We should make it easy to turn off the logging and synchronizations (from
> PetscLogGpu) for everything Vec and below, and everything Mat and below to
> remove all the synchronizations needed for the low level timing. I think we
> can do that by having PetscLogGpu take
Barry Smith writes:
> Norm, AXPY, pointwisemult roughly the same.
These are where I think we need to start. The bandwidth they are achieving is
supposed to be possible with just one chiplet.
Mark, can we compare with Spock?
Barry Smith via petsc-dev writes:
> The PetscLogGpuTimeBegin()/End was written by Hong so it works with events
> to get a GPU timing, it is not supposed to include the CPU kernel launch times
> or the time to move the scalar arguments to the GPU. It may not be perfect
> but it is the best we
o not plan to involve
> myself in any brand new serious benchmarking studies in my current lifetime,
> doing one correctly is a massive undertaking IMHO.
>
>> On Jan 22, 2022, at 6:43 PM, Jed Brown wrote:
>>
>> This isn't so much more or less work, but work in more
(which is why PETSc has its hacked-up ones). I submit a properly performance
> study is a full-time job and everyone always has those.
>
>> On Jan 22, 2022, at 2:11 PM, Jed Brown wrote:
>>
>> Barry Smith writes:
>>
>>>> On Jan 22, 2022, at 12:15 PM, Jed
We could create a communicator for the MPI ranks in the first shared-memory
node, then enumerate their mapping (NUMA and core affinity, and what GPUs they
see).
Barry Smith writes:
> I suggested years ago that -log_view automatically print useful information
> about the GPU setup (when GPUs
Barry Smith writes:
>> On Jan 22, 2022, at 12:15 PM, Jed Brown wrote:
>> Barry, when you did the tech reports, did you make an example to reproduce
>> on other architectures? Like, run this one example (it'll run all the
>> benchmarks across different sizes) an
Mark Adams writes:
> On Sat, Jan 22, 2022 at 12:29 PM Jed Brown wrote:
>
>> Mark Adams writes:
>>
>> >>
>> >>
>> >>
>> >> > VecPointwiseMult 402 1.0 2.9605e-01 3.6 1.05e+08 1.0 0.0e+00
>> 0.0e+00
>> >> 0
Mark Adams writes:
>>
>>
>>
>> > VecPointwiseMult 402 1.0 2.9605e-01 3.6 1.05e+08 1.0 0.0e+00 0.0e+00
>> 0.0e+00 0 0 0 0 0 5 1 0 0 0 22515 70608 0 0.00e+000
>> 0.00e+00 100
>> > VecScatterBegin 400 1.0 1.6791e-01 6.0 0.00e+00 0.0 3.7e+05 1.6e+04
>> 0.0e+00 0 0 62
Mark Adams writes:
> as far as streams, does it know to run on the GPU? You don't specify
> something like -G 1 here for GPUs. I think you just get them all.
No, this isn't GPU code. BabelStream is a common STREAM suite for different
programming models, though I think it doesn't support MPI wit
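For a quick sanity check without BabelStream, a triad-style estimate can be improvised in numpy. This is only a rough sketch: numpy materializes a temporary for `b + 3.0 * c`, so the figure is a loose lower bound compared with a tuned STREAM kernel.

```python
# Improvised STREAM-triad bandwidth estimate (rough lower bound; numpy's
# temporary array adds extra traffic compared with a fused C kernel).
import numpy as np
import time

n = 2_000_000
a = np.zeros(n)
b = np.ones(n)
c = np.full(n, 2.0)
t0 = time.perf_counter()
a[:] = b + 3.0 * c          # triad: 2 loads + 1 store of 8-byte doubles (ignoring the temporary)
dt = time.perf_counter() - t0
gbs = 3 * 8 * n / dt / 1e9
print(gbs > 0)
```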
Mark Adams writes:
> On Fri, Jan 21, 2022 at 9:55 PM Barry Smith wrote:
>
>>
>> Interesting, Is this with all native Kokkos kernels or do some kokkos
>> kernels use rocm?
>>
>
> Ah, good question. I often run with tpl=0 but I did not specify here on
> Crusher. In looking at the log files I see
>
"Paul T. Bauman" writes:
> On Fri, Jan 21, 2022 at 8:52 AM Paul T. Bauman wrote:
>> Yes. The way HYPRE's memory model is setup is that ALL GPU allocations are
>> "native" (i.e. [cuda,hip]Malloc) or, if unified memory is enabled, then ALL
>> GPU allocations are unified memory (i.e. [cuda,hip]Mall
Mark Adams writes:
>>
>>
>>
>> > Is there a way to tell from log_view data that hypre is running on the
>> GPU?
>>
>> Is it clear from data transfer within PCApply?
>>
>>
> Well, this does not look right. '-mat_type hypre' fails. I guess we have to
> get that working or could/should it work with
"Paul T. Bauman" writes:
> On Fri, Jan 21, 2022 at 8:19 AM Jed Brown wrote:
>
>> Mark Adams writes:
>>
>> > Two questions about hypre on HIP:
>> >
>> > * I am doing this now. Is this correct?
>> >
>> > '--do
Mark Adams writes:
> Two questions about hypre on HIP:
>
> * I am doing this now. Is this correct?
>
> '--download-hypre',
> '--download-hypre-configure-arguments=--enable-unified-memory',
> '--with-hypre-gpuarch=gfx90a',
It's recommended to use --with-hip-arch=gfx90a, which forwards
When applying suggestions, it should offer to "instant fixup" (apply it to some
prior commit in this branch, but not in any other branches). That instant fixup
should highlight commits that changed nearby lines.
When you make an inline comment and the author changes those lines of code, it
now
Junchao Zhang writes:
> I don't see values using PetscUnlikely() today.
It's usually premature optimization and PetscUnlikelyDebug makes it too easy to
skip important checks. But at the time when I added PetscUnlikely, it was
important for CHKERRQ(ierr). Specifically, without PetscUnlikely, man