> os << _value;
>
> This doesn't work if the parameter type is something like a
> std::vector... it won't even compile. Any ideas on dealing with
> this? For now, we've just commented out that line of code (we don't
> need to print the parameters) and continued on but what is the
> correct s
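One way to handle parameter types that lack an operator<<, sketched here as a minimal example (the helper name print_parameter is hypothetical, not libMesh API): provide an overload for std::vector so the element type's operator<< gets used instead.

  #include <iostream>
  #include <vector>

  // Generic case: any type that already supports operator<<
  template <typename T>
  void print_parameter (std::ostream & os, const T & value)
  {
    os << value;
  }

  // Overload for std::vector: stream the elements one by one
  template <typename T>
  void print_parameter (std::ostream & os, const std::vector<T> & value)
  {
    for (std::size_t i = 0; i != value.size(); ++i)
      os << value[i] << ' ';
  }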
>> Instead of
>> just enabled or disabled, would it make more sense to add a third (and
>> new default) option, that enables reference counting for dbg and devel
>> modes but disables it for opt/prof?
>
> Yes... as soon as possible.
(let's hope this gets out before my battery dies.)
The answer is
> We've currently got side boundary ids stored in a:
>
> std::multimap<const Elem*, std::pair<unsigned short int, short int> >
>   _boundary_side_id;
>
> Wouldn't it make more sense to use:
>
> std::map<std::pair<const Elem*, unsigned short int>, short int>?
>
> The side number (the unsigned short int) is an input to the
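For reference, the two storage layouts under discussion look roughly like this (the template arguments above were reconstructed from context, so treat them as a best guess); the map form makes a lookup by (element, side) a single find():

  #include <map>
  #include <utility>

  class Elem; // forward declaration, as in libMesh

  // current layout: Elem* -> (side number, boundary id), possibly several per Elem
  typedef std::multimap<const Elem*,
                        std::pair<unsigned short int, short int> > CurrentStorage;

  // proposed layout: (Elem*, side number) -> boundary id
  typedef std::map<std::pair<const Elem*, unsigned short int>,
                   short int> ProposedStorage;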
> Ok - something about the way this was "fixed" has broken stuff for me. If I
> now read a 2D Exodus mesh (using a Mesh created with dim=2) and then write the
> solution out it comes out having a dimension of _3_.
>
> To see this you can just run the attached exodus file (generated using Cub
> I noticed that any 2D mesh embedded in three dimensions (ie.
> LIBMESH_DIM=3) is saved as a "flat" 2D mesh in the Exodus output. I
> think the reason is that the Exodus file is opened with the dimension
> set to the mesh dimension and not the spatial dimension (as it should be
> if I understood t
>>> Roy's right, that's not the problem, but I think this is:
>>>
>>> MeshBase::const_element_iterator el =
>>>   mesh.local_elements_begin();
>>
>>> What happens if you change the loop to
>>>
>>> MeshBase::const_element_iterator el =
>>>   mesh.active_local_elements_begin();
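Spelled out, the suggested fix is just to iterate over active local elements; a minimal sketch (header names are from memory for that era of libMesh):

  #include "mesh_base.h"
  #include "elem.h"

  void visit_active_local (const MeshBase & mesh)
  {
    MeshBase::const_element_iterator       el     = mesh.active_local_elements_begin();
    const MeshBase::const_element_iterator end_el = mesh.active_local_elements_end();

    for (; el != end_el; ++el)
      {
        const Elem * elem = *el;
        // ... do per-element work here (assembly, output, etc.) ...
      }
  }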
(I'm copying this to the -devel list, feel free to take -users off any reply
to keep its traffic down)
> Okay, so dxyz seems to be appropriate for storing the metric - that
> problem is solved then. Now, one last thing: All DOFs are at the
> vertices (and we have only one DOF per vertex). Ie the g
>> So I am looking into the size of Nodes and Elems, with an eye to adding some
>> state information while hopefully not increasing the size of the objects by
>> much (or at all).
>
> Interesting idea, but I wonder how much overhead the inefficiencies in
> the default "new" allocator add. We do q
So I am looking into the size of Nodes and Elems, with an eye to adding some
state information while hopefully not increasing the size of the objects by
much (or at all).
I used the attached code, which produces the following:
// ariel(15)$ ./nodebits
// (a table of Node and Elem sizes on a 32-bit machine followed here; it is cut off in this preview)
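The attached program is not in this archive; a rough stand-in that reports the same kind of numbers (class names real, header paths and build details assumed) would be:

  #include <iostream>
  #include "dof_object.h"
  #include "node.h"
  #include "elem.h"

  int main ()
  {
    std::cout << "sizeof(DofObject) = " << sizeof(DofObject) << '\n'
              << "sizeof(Node)      = " << sizeof(Node)      << '\n'
              << "sizeof(Elem)      = " << sizeof(Elem)      << '\n';
    return 0;
  }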
> When you take an unstructured triangulated mesh and you do one
> trisection first, you'll always end up with triangles that have at most
> a single vertex with valence other than 6.
Ah, I didn't catch that step...
>>> Any idea if this renumbering would break something else in libmesh?
>>
>> T
> To do this in a smooth way, it
> is best to internally renumber a triangle's nodes so that the
> problematic node is the first node of the triangle (ie Elem::_nodes[0]
> returns that node).
As Roy suggested, might it not be the case that *all* the nodes in a given
triangle are problem nodes?
>
> I'm not sure about Qhull. I don't think it's had any releases since 2003.
> This in itself isn't bad, but from
> http://www.qhull.org/html/qh-in.htm#library, I see:
>
> Warning: Qhull was not designed for calling from other programs. There is
> neither API nor Qhull classes. It can be done, but
>> Going forward, Qhull (http://www.qhull.org/) looks like it will fit the bill
>> for building the convex hull, and then we just need to write an intersection
>> test... We could go so far as to derive the BoundingBox and
>> BoundingConvexHull from the same base class, so that you can use either t
>>> Right now we can get a bounding box for two processors easily
>>> enough, and (I am sure you are gonna love this...) overload '&&' to
>>> return true if they intersect.
>>
>> If you must use operator overloading, at least use '&' instead?
>> Bitwise AND resembles an intersection operation more
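As a concrete (purely illustrative, not libMesh's actual bounding-box class) example of the overload being debated, operator& here answers "do these axis-aligned boxes overlap?":

  struct Point3 { double x, y, z; };

  struct BoundingBox
  {
    Point3 min, max;

    // true if the two axis-aligned boxes intersect
    bool operator& (const BoundingBox & other) const
    {
      return !(other.min.x > max.x || other.max.x < min.x ||
               other.min.y > max.y || other.max.y < min.y ||
               other.min.z > max.z || other.max.z < min.z);
    }
  };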
>> There is actually a commented-out section of code in MeshCommunication that
>> used to do this using bounding spheres, but I think for my meshes with
>> boundary-layers that would just have an annoying amount of false positives.
>
> What about boundary layers that aren't aligned with a coordina
> Anyone know why the doxygen "Classes" link on sourceforge now takes
> you to the "Class List" instead of the other "Classes" page (where the
> classes are listed in multiple columns)? I find the "Class List" to
> be harder to use... just because you have to scroll so far down to get
> to a clas
>> (2) compute a bounding box for the local elements, and communicate
>> these
>> globally in an alltoall or something
>
> Wow - didn't realize Nemesis was such a pain in the ass...
Well, the 'elem_cmaps' tell you the neighboring processor id (by element /
by face), but I do not think it is requ
(I'm moving this part to the devel list.)
On 11/12/08 7:34 PM, "Roy Stogner" <[EMAIL PROTECTED]> wrote:
>> Ah... right again. Right now find_neighbors() blows away all neighbor
>> information at the beginning, but I am about to change that for another
>> reason... (finishing the nemesis stuff).
>> Remind me again what was the reason for numbering it this way? I'm
>> sure there was one, I just can't remember what it was.
>
> I'm not sure there was a reason, or at least I can't find anything that jogs
> my memory looking back through the logs. One possible culprit, though - the
> orderi
> Remind me again what was the reason for numbering it this way? I'm
> sure there was one, I just can't remember what it was.
I'm not sure there was a reason, or at least I can't find anything that jogs
my memory looking back through the logs. One possible culprit, though - the
ordering of our p
I have just checked in a small change to the DofMap that will report the
[first,last) local degree of freedom indices for a specified variable.
It will only work when the hackish --node_major_dofs is not specified, which
should not be a problem for anyone except me.
It could easily be extended to
> ...
>
> Okay, done.
>
>> And at any rate, if we *don't* resize() then we have to return the
>> size in the status object, which seems unnecessarily awkward...
>
> That's already implicitly happening. Should we now change the return
> type from Status to void? We now already know the size, we
>> What about instead using MPI_Probe on the receiving side to get the message
>> size and resizing the buffer when need be?
>
> Okay, one more question: what's our definition of "when need be"? If
> we're willing to shrink as well as enlarge the buffer, then it shouldn't
> hurt to call vector::r
> Unless MPI_Probe involves some performance hit (and I don't see why it
> should) this is the way to go.
Along these lines, since send_recv is so integral in a lot of algorithms,
should we implement it internally with
(1) nonblocking send
(2) blocking probe to get recv size
(3) resize recv buffer (a sketch of this pattern follows below)
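A hedged sketch of that pattern with raw MPI calls (not the actual Parallel:: implementation; buffers assumed non-empty for brevity):

  #include <mpi.h>
  #include <vector>

  void send_receive (const std::vector<double> & send_buf, int dest,
                     std::vector<double> & recv_buf,       int source,
                     int tag, MPI_Comm comm)
  {
    // (1) non-blocking send of our data
    MPI_Request request;
    MPI_Isend (const_cast<double*>(&send_buf[0]), int(send_buf.size()),
               MPI_DOUBLE, dest, tag, comm, &request);

    // (2) blocking probe to learn how big the incoming message is
    MPI_Status status;
    MPI_Probe (source, tag, comm, &status);
    int count = 0;
    MPI_Get_count (&status, MPI_DOUBLE, &count);

    // (3) resize the receive buffer, then do the actual receive
    recv_buf.resize (count);
    MPI_Recv (&recv_buf[0], count, MPI_DOUBLE, source, tag, comm, &status);

    // make sure our send completed before returning
    MPI_Wait (&request, MPI_STATUS_IGNORE);
  }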
>>> Does anyone have any idea as to what to name the "non-nice" methods?
>>> receive_array? fill? I'm trying to think of something that's
>>> descriptive enough but that won't require users to call
>>> Parallel::nonblocking_receive_into_pre_sized_vector().
>>
>> Is it possible to resize for the
> Does anyone have any idea as to what to name the "non-nice" methods?
> receive_array? fill? I'm trying to think of something that's
> descriptive enough but that won't require users to call
> Parallel::nonblocking_receive_into_pre_sized_vector().
Is it possible to resize for the user in the ca
> 2.) *Any* refinement pattern for the pyramid will likely involve a
> splitting where some children have a different type than the parent.
> Since this would be the only element with such an inhomogeneous
> splitting, I'd like to discuss the best way to do this in the present
> context of the embe
> Wouldn't the above loop over a _lot_ (like millions in some cases) of
> unnecessary nodes when using Serial mesh?
Well, the exact same thing happens when you loop over active_local_elements.
For the case of a serial mesh (or any mesh for that matter) you are *really*
iterating over all the eleme
> Another option is that when you provide the constraint... you provide the
> constant (and it's stored in the constraint system). This isn't a bad idea...
> and will probably use less storage.
That's what I was envisioning...
As for the node bc ids,
> Now the questions shifts to how to provid
>> At the same time with JFNK we have a problem with the current way we
>> deal with hanging node constraints in libMesh. For JFNK we need to
>> fill the residual of a constrained DOF with something like u - ( A1 *
>> Parent1Value + A2 * Parent2Value + etc.). Currently
>> constrain_element_vect
> At the same time with JFNK we have a problem with the current way we
> deal with hanging node constraints in libMesh. For JFNK we need to
> fill the residual of a constrained DOF with something like u - ( A1 *
> Parent1Value + A2 * Parent2Value + etc.). Currently
> constrain_element_vector() d
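Written out as a tiny helper (not libMesh API, just the formula above made explicit), the constrained row's residual is u minus the weighted sum of its constraining ("parent") values:

  #include <vector>

  // R_c = u_c - sum_i ( A_i * parent_value_i )
  double constrained_residual (double u_c,
                               const std::vector<double> & weights,        // the A_i
                               const std::vector<double> & parent_values)  // Parent1Value, ...
  {
    double r = u_c;
    for (std::size_t i = 0; i != weights.size(); ++i)
      r -= weights[i] * parent_values[i];
    return r;
  }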
>> So I have upgraded the wiki version and restored the old content.
>>
>> Now I can successfully log in, but when I click the 'main page' link it
>> takes me back to the main page and I am now no longer logged in.
>>
>> Can someone else try to log in and see if they can edit a page?
>
> Logging
So I have upgraded the wiki version and restored the old content.
Now I can successfully log in, but when I click the 'main page' link it
takes me back to the main page and I am now no longer logged in.
Can someone else try to log in and see if they can edit a page?
-Ben
--
> How hard was it to go through the code and change all of the #ifdefs
> and such?
Not too bad with that perl one-liner. It took me longer to find the proper
autoconf-way to get LIBMESH_ in front of everything...
-Ben
I am about to check in a substantial change set that prefixes every variable
in include/base/libmesh_config.h with LIBMESH_ to avoid conflicts with
external packages.
I found an autoconf macro designed to do just that. The process is a little
convoluted and deserves some explanation.
Because of
>> Will be easy, but I bet those are not most of the ones which conflict? More
>> difficult will be, for example, HAVE_STDLIB_H, which comes indirectly when
>> configure looks for . Similarly, SIZEOF_INT etc... ?
>
> I have vague memories of talking about this problem (with John?) at some
> p
> So I think the best option is to prefix all of our #defines with LIBMESH_.
>
> Ben... since it sounds like you have a handle on the situation... you have the
> greenlight from me ;-) If you don't do it, I'll probably do it pretty soon.
> Just so it's clear, it's not just HAVE_MPI... it's qu
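To make the effect of the prefixing concrete (symbol names taken from the thread, values illustrative), the change amounts to:

  /* before: bare symbols in libmesh_config.h that can collide with other
     packages' autoconf-generated headers */
  #define HAVE_MPI 1
  #define SIZEOF_INT 4

  /* after the LIBMESH_ prefixing */
  #define LIBMESH_HAVE_MPI 1
  #define LIBMESH_SIZEOF_INT 4

  /* ...and code that used to test "#ifdef HAVE_MPI" now tests */
  #ifdef LIBMESH_HAVE_MPI
  /* MPI-specific code paths */
  #endif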
John,
What is the output of the following on a ranger node:
praesepe(1)$ numactl --hardware
available: 4 nodes (0-3)
node 0 size: 4095 MB
node 0 free: 1146 MB
node 1 size: 4096 MB
node 1 free: 1089 MB
node 2 size: 4096 MB
node 2 free: 1039 MB
node 3 size: 4096 MB
node 3 free: 1162 MB
node distanc
> The trouble would be the dense rows and columns that each new scalar
> would add to the sparsity pattern.
It adds a column to every row, but that is not a problem for all the other
rows, right? It is just one more entry that gets stored in that sparse row?
The major issue is that the last row
> Let's try it the other way round: Is there any easy test example that
> I could run on "my" cluster that you would expect to scale well with
> the number of CPUs I'm typically using? (It should be a test example
> that uses a sufficiently high number of dofs, solves some linear
> systems, and re
Even though projecting the vector is not the bottleneck I was observing,
I've decided to play with it anyway. The results are bizarre to say the
least. The following is the total time used by ProjectVector::operator() on
a number of processors. This is the code that does the computation, and
the
(I am forking part of this to the devel list)
On 9/9/08 10:28 AM, "John Peterson" <[EMAIL PROTECTED]> wrote:
>
> What about this parallel_for stuff? Even if you're not using threads,
> this still introduces an O(N_elem) step in the ConstElemRange
> constructor. This might be more noticeable whe
> I know there's still a lot of work (and cleanup) left to do... but it
> is technically working. One thing I know for sure is that it's
> probably leaking memory like a bitch ;-) I'll deal with that next
> week though ;-)
ex4 on two processors doesn't work yet! ;-)
> Thanks for all the help
N
Should the trilinos list get queried? Is that a bug??
> What do you think Ben? Shouldn't be too hard right? After a bit of looking
> around it appears that the dof_map can generate the full sparsity pattern
> which shouldn't be too hard to translate into the CrsGraph that Epetra wants.
Right.
Derek,
I just checked in a change which should construct the map properly for a
sparse matrix in trilinos. Now, I remember thinking this was gonna be
harder than what I just did, so maybe I missed something?
In any case, check it out...
-Ben
-
> Let's coordinate on this. Like I mentioned a little while ago, the hardest
> thing for me to get a grasp on is the parallel map / graph and how that
> translates into an Epetra_Map nudge, nudge wink wink
I will focus my effort on initializing the sparse matrix sparsity pattern...
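A hedged sketch of what that initialization can look like with the Epetra classes mentioned above (not the code that was checked in; error checking omitted, rows assumed non-empty):

  #include <Epetra_MpiComm.h>
  #include <Epetra_Map.h>
  #include <Epetra_CrsGraph.h>
  #include <vector>

  Epetra_CrsGraph * build_graph (int n_global_dofs, int n_local_dofs,
                                 int first_local_dof,
                                 const std::vector<std::vector<int> > & sparsity,
                                 MPI_Comm comm)
  {
    Epetra_MpiComm epetra_comm (comm);

    // one row per local dof, rows numbered contiguously per processor
    Epetra_Map row_map (n_global_dofs, n_local_dofs, /*index_base=*/0, epetra_comm);

    // per-row lengths from the DofMap-provided sparsity pattern
    std::vector<int> row_lengths (n_local_dofs);
    for (int i = 0; i != n_local_dofs; ++i)
      row_lengths[i] = int(sparsity[i].size());

    Epetra_CrsGraph * graph = new Epetra_CrsGraph (Copy, row_map, &row_lengths[0]);
    for (int i = 0; i != n_local_dofs; ++i)
      graph->InsertGlobalIndices (first_local_dof + i,
                                  int(sparsity[i].size()),
                                  const_cast<int*>(&sparsity[i][0]));
    graph->FillComplete ();
    return graph;
  }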
>> You know, this is how I originally had things set up, but it seems
>> to be at
>> odds with the "make install" target. Try configuring trilinos with
>> the
>> --prefix="$TRILINOS_DIR" , then do a make install, and you will see
>> what I
>> mean.
>
> Aha! Now, what was in there makes more sens
> I just committed a change that modifies what the $TRILINOS_DIR environment
> variable should be set to. I'm not really sure what I was thinking before...
> but now it makes more sense.
>
> What you do is set it to the build directory in which you built Trilinos.
> This is more in line with the
>> Uh oh. What specifically are you referring to being not implemented?
>
> Specifically... all of the Trilinos stuff has NotImplemented throws in it.
>
> Of course, people really shouldn't be hitting that anyway (we don't advertise
> the Trilinos support yet). But, if some enterprising person
> Actually I just had a thought that on quadratic isoparametric elements
> the Jacobian will be a linear function, so is the idea of the 2*order+1
> to cover this case?
That was my original thought. Even on distorted bilinear quadrilaterals the
Jacobian will be non-constant.
This predates Roy's
>>> Especially when T is a char - that is 24 bytes on a x86_64 in overhead!
>>
>> Yeah, because of the pointers... I believe sizeof(char) is still 1 on
>> x86_64. That's pretty bad...
>
> A more feasible option might be a single vector. Then take the
> current char** and align it out in a strai
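A minimal sketch of that "single vector" layout (names hypothetical): one contiguous block of bytes plus an offset table, instead of one small heap allocation per entry:

  #include <vector>

  struct FlatStorage
  {
    std::vector<unsigned char> data;    // all entries stored back to back
    std::vector<unsigned int>  offset;  // offset[s] = where block s starts in data

    unsigned char & operator() (unsigned int s, unsigned int i)
    { return data[offset[s] + i]; }
  };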
> I think I may have found the leak...
>
> Check out clear_dofs() in dof_object.h
>
> After deleting _n_v_comp[s], we test to see if there are any dof_ids
> to be deleted:
>
> if (this->n_vars(s) != 0) // This has only been allocated if
> {
Are there any outstanding issues in the way of 0.6.3? I just commited a fix
for ex12, it runs to completion again.
-Ben
> The biggest libMesh thing to me is the sparsity graph. Do you have
> epetra_matrix and epetra_vector building the correct parallel sparsity
> Map based on our sparsity graph yet? If so, then truly there isn't
> much work to do.
The DofMap builds the proper sparsity pattern, but the Epetra side
> Ben... where are we at on the Trilinos integration with libMesh? I
> know that you did some work a little while ago to put an eptra_matrix
> object in there. How much of that is working?? How long do you think
> it would take someone to finish off epetra_matrix and epetra_vector?
Actually wou
John and I can probably do that. John wrote the original implementation,
and I am pretty sure the node ordering is solid as we have used it before.
Am I correct that the issue here is the face node ordering?
-Ben
On 8/8/08 10:46 AM, "Derek Gaston" <[EMAIL PROTECTED]> wrote:
> The exodusII-4.p
> Or
> perhaps this needs to be done at the top of the file before the c++
> includes?
Of course, that is the only thing that makes sense... Duh.
Perhaps we can do that in the file, along with a hefty amount of comments??
-Ben
---
> And I don't see a good solution. We're not going to convince the
> libstdc++ people to take out a debugging check or even to make the
> debug-enabling macros more fine-grained. It makes no sense to disable
> the METHOD=dbg STL checks, since that's the point of using dbg instead
> of devel. We
> Specifically, look for the statement "global_ids.clear();" in
> mesh_communication_global_indices.C. The for loop immediately
> following is where all the CPU time is being sucked up. But I don't
> see what the problem is. Nothing inside a libmesh_assert() there is
> expensive. Putting std::
> The hard (or at least tedious) part may be fixing our I/O classes to
> write out and read in solutions with per-subdomain variables. I'm not
> familiar with the nitty-gritty details of our output formats, but I
> wouldn't be surprised if they didn't all support such a thing.
I'm thinking for m
> It's also unclear yet whether it's worth doing this sort of thing at all
> given that MVAPICH is already multi-core aware and the on-node communications
> are done via shared memory.
Yeah, that is what I would like to take advantage of. My thought process is
that you may want to group "nearest
> Definitely interesting numbers. What I find most interesting is that
> MVAPICH2 has higher latency than MVAPICH1... any ideas about that?
>
> Do you have an idea about how you would actually implement this using
> Metis / ParMetis?
>
> Derek
>
>
Check out attached...
I've been doing some MPI profiling on my 4-socket, dual-core per node
Opteron cluster. I've been curious for a while about "multilevel domain
decomposition" for this class of architectures - e.g.
(1) partition into the number of nodes
(2) partition each subdomain into the nu
>> DistributedVector vec(n_global, n_local, array_of_remote_indices);
>>
>> Where "vec" is a vector which stores n_local entries, and also has "ghost
>> storage" for array_of_remote_indices.size() entries.
>
> We'd also need an "unsigned int first_local_index" argument for the
> offset of n_local
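Putting those pieces together, the constructor being discussed (a proposal from this thread, not an existing API) might look like:

  #include <vector>

  template <typename T>
  class GhostedVector
  {
  public:
    GhostedVector (unsigned int n_global,
                   unsigned int n_local,
                   unsigned int first_local_index,                  // offset of our local block
                   const std::vector<unsigned int> & ghost_indices) // remote entries we mirror
      : _values (n_local + ghost_indices.size()),
        _n_global (n_global),
        _first_local_index (first_local_index),
        _ghost_indices (ghost_indices)
    {}

  private:
    std::vector<T> _values;   // local block followed by ghost storage
    unsigned int _n_global;
    unsigned int _first_local_index;
    std::vector<unsigned int> _ghost_indices;
  };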
> Also: Ben, is current_local_solution a serial vector? It certainly
> looks that way to me, but I've always been a little confused by the
> solution/current_local_solution divide, so maybe I'm missing some sort
> of PETSc magic under the hood.
In the beginning...
I understood a little about par
Assertion `min_procid != libMesh::processor_id() || obj' failed.
[1] src/mesh/parallel_mesh.C, line 546, compiled Jun 23 2008 at 17:21:20
terminate called after throwing an instance of 'libMesh::LogicError'
what(): Error in libMesh internal logic
> In case my sparse terse comments in the code a
> I agree. I've always thought it would be cool to have a home-grown
> DistributedMatrix to go with it as well (LibMesh's own SparseMatrix
> implementation) so I'd like to see it stay as a leaf class if
> possible...
That would be my intent. But by pushing the majority of the implementation
into
> Assuming you mean NumericVector not DistributedVector, that sounds
> like an excellent idea
Actually, I meant DistributedVector<>, and the inheritance would change.
But your point is well taken. The implementation could just as easily be
done in NumericVector<>, and then the DistributedVector<>
> OTOH, Derek, this may be why your "back of the envelope" problem size
> calculation caused the machines to swap ;-)
FWIW, I have no swap on my compute nodes. (No disk for that matter too.)
I'd rather have a memory allocation request that does not fit in RAM kill
the process than swap!!
-ben
>> I've got a good guess as to where the memory spikes are occurring:
>> check out __libmesh_petsc_snes_residual() in petsc_nonlinear_solver.C.
>> Is that X_local a serial vector?
>
> Good catch Roy... that's along the lines of what I was thinking.
>
>> I think I noticed this problem and tried to
> But as a temporary workaround, would you go to partitioner.C and
> uncomment the "don't repartition in parallel" code on lines 47-48?
> I'm not yet sure whether there's a bug in Ben's redistribution code or
> whether that's just triggering a bug in my core or refinement code,
> but I at least can
> I do like CPPUNIT. It has its drawbacks... but there's not really
> any reason to use anything else. In particular Boost.Test has _many_
> drawbacks (as Boost stuff often does)... mainly that it's just tough
> to work with. CPPUNIT is straightforward and does its job well.
OK, the libme
> That's a nasty one. I would never have seen that -- I'd be too focused
> on making sure the weights and points were correct to worry if the
> vector was too long!
Yeah, if you ran on pure tets you likely were OK as the extra entries were
default constructed to 0 weight. When I ran a hex or pris
Have we settled in on CPPUNIT as our unit-testing framework? I'm ready to
reorganize the unit test directory if so.
On a totally related issue, I was just expanding the quadrature unit tests
and found a bug in QGauss for Tets at 5th-order. The issue is that the
points/weights vector is resized t
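For reference, the expanded tests follow this pattern (CppUnit; the QGauss calls and header names are from memory for that libMesh era, so treat them as a best guess). The weights of a rule on the reference tetrahedron must sum to its volume, 1/6:

  #include <cppunit/TestFixture.h>
  #include <cppunit/extensions/HelperMacros.h>
  #include "quadrature_gauss.h"

  class QuadratureTest : public CppUnit::TestFixture
  {
    CPPUNIT_TEST_SUITE (QuadratureTest);
    CPPUNIT_TEST (testTetWeightSum);
    CPPUNIT_TEST_SUITE_END ();

  public:
    void testTetWeightSum ()
    {
      QGauss qrule (3, FIFTH);   // 3D, 5th-order Gauss rule
      qrule.init (TET4);

      double sum = 0.;
      for (unsigned int qp = 0; qp != qrule.n_points(); ++qp)
        sum += qrule.w(qp);

      // reference tetrahedron volume is 1/6
      CPPUNIT_ASSERT_DOUBLES_EQUAL (1./6., sum, 1.e-12);
    }
  };

  CPPUNIT_TEST_SUITE_REGISTRATION (QuadratureTest);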
> If that's not the case (ie, we're relying on
> Petsc to do off processor adding for us) then by all means let's use
> FEVector...
Yeah, we expect that PETSc does that for us.
> Hope that helps...
It does. This is a very common usage -- summing local contributions to
shared, remote entries, so
> They are actually redefining "underscore." I have no clue why but
> that can't be good programming practice.
This is invigorating me to finish the Trilinos integration. It will be nice
to have a functional alternative to solve linear systems on parallel
platforms.
Along these lines, Derek, an
No issues under CentOS-5.1 with gcc-4.1.1.
On 6/16/08 2:20 PM, "Derek Gaston" <[EMAIL PROTECTED]> wrote:
> No problems here on OSX with gcc 4.0.1. Nor do I have a problem on
> our cluster here running Suse with gcc 4.1.2.
>
> Derek
>
>
> On Jun 16, 2008, at 12:50 PM, Roy Stogner wrote:
>
>>
> Hey! If it's "Beat Up On Roy" day, could we celebrate by pointing to
> some specific obfuscated examples? :-) I'm not using GCC4.3 yet, so
> I'm not going to try to guess what redundant parentheses would be
> necessary to make that compiler happy, but if I've been writing code
> that makes hum
> Is the lack of extra curly braces now a warning, and therefore we
> should code accordingly?
Yeah, in gcc-4.3 it warns that it is ambiguous which 'if' the 'else'
corresponds to. My personal feeling is that for compound clauses which fit
on one screen height it is pretty clear provided you have d
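For anyone who has not hit it yet, the warning fires on code shaped like this; the braced form is what gcc-4.3 wants:

  void do_one_thing();
  void do_another();

  void example (bool a, bool b)
  {
    // warns: the else visually lines up with "if (a)" but binds to "if (b)"
    if (a)
      if (b)
        do_one_thing();
      else
        do_another();

    // warning-free, and unambiguous to the reader
    if (a)
      {
        if (b)
          do_one_thing();
        else
          do_another();
      }
  }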
John asked a few days ago if the OSX case-preserving (but insensitive)
filesystem ever caused me issues. The answer was no, but now...
#include <Epetra_Vector.h>
Happily finds the libMesh epetra_vector.h instead of Epetra_Vector.h in the
trilinos install directory.
Any better suggestion than renaming the libmes
Is all the latest trilinos stuff synced up in SVN? I'm gonna try and push
it a little further...
Is anyone out there using pyramids?
I've got a hybrid hex/tet/pyramid mesh in exodusII format. (ExodusII has
undocumented support for pyramids, meaning I guessed at their node ordering
since it is not in the old documentation.) So now I get negative Jacobians
on a *subset* of my pyramids, which
>> This might be a good place to start? It describes the
>> MPI_Errhandler_create and MPI_Errhandler_set interfaces.
>
> Thanks, but according to this, MPI's default behavior should already
> be what I want: to do an MPI_Abort on any errors.
>
> Of course, even when I do an MPI_Abort(COMM_WORLD
Send it to Bill et al via consulting on the TACC portal. Which level of
optimization is the Intel compiler using? In the past when it has taken
long on a single file Intel has found bugs in their compiler optimization
strategies, maybe this is an example of that...
Let me know what kind of perfo
>> It's already too late to save the FE classes, but please, the
>> DofObjects are innocent; there's no need to kill again!
>>
>
> I'm sorry Roy but I agree with Ben,
> I think that the FE classes help make a difference, taking libMesh
> one step ahead.
> But, I have to say that I love templat
>>> Just eliminating that unnecessary, incomplete copy reduced the memory usage
>>> to 980MB, which I consider a huge savings. So needless to say we will not
>>> copy the dof objects when they do not contain complete information.
>>
>> Wow! I'd consider that a big savings too.
>
> I'll third
A question recently came up on the user's list regarding the old_dof_object.
Specifically, is it OK to clear it after system projection.
I couldn't think of a reason why not, but I also figured it had a pretty
small memory footprint. Not so, especially on 64-bit machines.
I've got a ~1 million e
>> I would think that in the case when libMesh initializes MPI we can assume
>> responsibility for parallel error handling. We should install an error
>> handler which calls MPI_Abort(), but save the original error handler and
>> restore it in the current equivalent of libMesh::finalize()
>
> By
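A hedged sketch of that save-install-restore idea with the MPI-1 error handler calls mentioned earlier in the thread (illustrative only, not libMesh's actual implementation):

  #include <mpi.h>
  #include <cstdio>

  static MPI_Errhandler previous_handler;

  static void abort_on_mpi_error (MPI_Comm * comm, int * errcode, ...)
  {
    std::fprintf (stderr, "MPI error %d -- aborting\n", *errcode);
    MPI_Abort (*comm, *errcode);
  }

  void install_error_handler ()     // e.g. when libMesh initializes MPI
  {
    MPI_Errhandler_get (MPI_COMM_WORLD, &previous_handler);

    MPI_Errhandler handler;
    MPI_Errhandler_create (&abort_on_mpi_error, &handler);
    MPI_Errhandler_set (MPI_COMM_WORLD, handler);
  }

  void restore_error_handler ()     // e.g. from the equivalent of libMesh::finalize()
  {
    MPI_Errhandler_set (MPI_COMM_WORLD, previous_handler);
  }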
>>> Do nothing. My MPI library is pretty good about figuring out that
>>> when one process dies, the rest can't network write to it anymore and
>>> should exit. I'll bet other MPI libraries are just as good. This is
>>> basically what happens when there's a segfault, after all. The "do
>>> not
> Apparently the C++ standard is as follows:
>
> | 15.3 - Handling an exception [except.handle]
> ...
> | -9- If no matching handler is found in a program, the function
> terminate() is called;
>
> In other words, unless there's actually a try block waiting to catch
> any error()-thrown e
> Seems like a paradox: we can't call MPI_Abort from within error(),
> because error() can't be sure there isn't some enclosing code waiting
> to catch its exception, but we do have to be able to call MPI_Abort
> when error() is called if there is no catch waiting.
I like what you propose.
At ti
Go for it!!
>> How well do exceptions work with threads?
>
> They're thread-safe, and they don't propagate from thread to thread.
> So if one thread throws an exception and is able to recover from it,
> the other threads aren't even supposed to get bothered by the process.
>
> I'm not sure how well
So today's check-in should allow us to properly repartition a parallel mesh.
The next obvious necessary step is redistribution, and I'm gonna work on
that next.
But after that, what is next? Eventually the concept of System::solution
and System::current_local_solution need to merge, where we only
> On the other hand, while I don't think "unchanging element ordering"
> is a feature worthy of regression testing itself, it might be a useful
> feature to make other regression tests easier. Running "diff
> current.gmv gold.gmv" would certainly be easier than trying to reorder
> the results bef
On more than one processor:
ParallelMesh mesh(3);
const unsigned int ps = 10;
MeshTools::Generation::build_cube (mesh,
                                   ps, ps, ps,
                                   -1., 1.,
                                   -1., 1.,
                                   -1., 1.,
                                   HEX8);
> print_trace() gives a whole stack trace, and currently only prints
> function names (uncomment those four lines to print more), but the
> same code could be modified to only print out the details of the frame
> from which print_trace() was called. That might be as good as
> __FILE__ and __LINE__.
>>> This sounds like the ideal (in the sense of "least bad") solution.
>>
>> Good plan. Are inline functions pretty much equivalent to #define macros for
>> all intents and purposes? I had thought there was a reason where you might
>> still need a macro every now and then, but I honestly can't r
>> This sounds like the ideal (in the sense of "least bad") solution.
>
> Good plan. Are inline functions pretty much equivalent to #define macros for
> all intents and purposes? I had thought there was a reason where you might
> still need a macro every now and then, but I honestly can't remem
Well, it finally happened... My sloppy
#define error() ...
caught up with me. SunStudio uses error as a member variable, and has the
string error(good) in a constructor initialization list. Of course, that is
a perfectly sensible thing for them to do. ;-)
I've replaced all those #define's in lib
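The flavor of the replacement (a sketch only, not the exact macro that went in): give the macro a library-specific prefix so it cannot collide with anyone else's identifiers.

  #include <iostream>
  #include <cstdlib>

  // instead of the sloppy "#define error() ..."
  #define libmesh_error()                                                      \
    do {                                                                       \
      std::cerr << "Error at " << __FILE__ << ":" << __LINE__ << std::endl;    \
      std::abort();                                                            \
    } while (0)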
> Are there any objections? I'm hoping that the number of people who
> are doing DG with high p elements and who can't conveniently
> regenerate their old solution files is zero.
No objections here. The new EquationSystems IO stuff is not in an official
release, so if we are really motivated we