Re: [crossfire] 2.0 object-type refactoring

2006-11-05 Thread Mark Wedel
Alex Schultz wrote:
 Mark Wedel wrote:
 Yes, all of this should go through a common code path, but saying those 
 checks should be in the 'command' code may not be the right answer, depending 
 on what the command code is defined as.
 What I'm saying, is that 'character initiated' actions should be
 separated from 'raw actions'. In particular, in some way such that all
 'character initiated' calls follow a common code path that only applies
 to the 'character initiated' aspects. I was thinking that only 'raw
 actions' should be in the callback system. Though now that I think about
 it, one could potentially add in a central location, pl_ob_apply(),
 pl_ob_drop() and similar functions, which would do 'character initiated'
 aspects then run the callback (ob_apply() and ob_drop, etc.).

  I'm not sure if it should really make any difference how the 
operation/callback is happening.

  If there is a callback for dropping objects, then any way they are dropped 
(explicit player command, monster dying, chest being burned up, etc.) should all 
use that same callback.

  This gives us consistent behavior.  This is better than having different 
things happen depending on the path, which doesn't look right from a player 
perspective.

  That said, for some operations, there is likely need for some upper function 
that finds/gets the parameters that the callback needs.

  In the apply example, you'd have the case where the player does 'apply bow'. 
In that case, the apply command has to find the object of the correct name, and 
then do the callback.

  If the player applies it via mouseclick in the client, the server has to look 
it up by tag, and then pass it into the callback function.

  Then there could be the other methods (player steps on something like a 
teleporter that causes the apply logic to come into play).



___
crossfire mailing list
crossfire@metalforge.org
http://mailman.metalforge.org/mailman/listinfo/crossfire


Re: [crossfire] 2.0 object-type refactoring

2006-11-03 Thread Mark Wedel
Alex Schultz wrote:
 Well, I'm thinking that since we do have a BASECLASS, we might as well use
 it for transitional fallbacks, OR better yet make a TRANSITIONCLASS or
 something, which the BASECLASS is based on. I would say that would be
 just as easy to follow as things like apply_fallback(), and in fact one
 can make names like that a convention for transitional functions.
 Also, fallbacks could potentially be handled very easily and generically
 in the functions like ob_apply(), such as in this example:

  The only issue with using the baseclass for fallbacks of old code is that the 
calling structure has to be the same.

  If the new callbacks have more/different parameters than the old function, 
then you have to go back to the old function and change it around a little bit. 
  Any functions so called may need other changes - they may generate error 
messages if the object type isn't handled for example, or possibly other 
issues. 
  The question then becomes whether it is worth doing those changes/debugging 
vs just changing the function, moving it to the new code and callback method.

  I also think these different callback mechanisms need more thought and 
discussion.  The idea of per-object callbacks has also been mentioned.  How all 
of these interact needs to be clearly defined.

  For example, is the BASECLASS callback always made, regardless of anything 
else in the object?  Is it only made if there isn't a specific callback 
registered, etc.?  Depending which is done, that is a major difference, and has 
to be reflected in the code that is written.

  If it is always called, to a great extent, it then limits the usefulness of 
the callback structure as a whole (it limits what can be done with the object).

  If the actual type handlers make the callback to the base function, that is 
fine - it just needs to be clearly defined that is what should be done.  And in 
that case, it could perhaps be used as a transition.

  I still wonder how much sense that makes, as in that case, the type 
callback would know what the legacy function is, and wouldn't really need a 
baseclass mechanism to figure it out, simply because the old function is 
a single function that covers everything.


   I was thinking override would be determined by the object (or person) 
 using the objects themselves.

   In the curse example, you could check the person unapplying it, and if 
 they are the WIZ, let them do so.  Sure, you could pass in parameters for 
 that, but I doubt that is necessary.
   
 Well, the thing is, I believe it would be beneficial to have the extra
 flexibility of being able to make these calls and have the caller make
 overrides for reasons other than WIZ and such. It's not that it's
 currently needed, but I'd prefer to leave the opportunities for such
 open if reasonably possible.

  But how do you reasonably do that, if you don't know what overrides are 
needed?

  Only thing I can really think of is to just have an integer parameter that 
you pass in, which could contain various bitmasked values that could be 
extended later.

  The main problem there is that if some override is added, there may be 20 
callbacks you need to add code to.  Which is why having that in the top level 
handler makes it easier - only one place that the code needs to be written.

 
   In all these cases, that code will be someplace - it is just a question of 
 where.

   For some things, doing it in the command code isn't ideal, because there 
 are several ways for the command to be issued (some are processed by the 
 client and sent as lower level protocol commands in addition to the text 
 commands the player can use, etc.) - in those cases, you want the check code 
 in the function all of those call so it is in one place, not 3.
 Well, IMHO if the command code doesn't have a unified method for calling
 something like apply or drop, then perhaps that needs to be cleaned
 up, so putting those checks in the command code would be practical.

  It depends on how you define the command code in this context.

  At some level, all the drop stuff eventually calls the same function, and in 
that function, it can check for various attributes (god given equipment, etc).

  However, there are probably at least 4 paths to get to that function, 
depending on how the object is dropped.

  Picking stuff up probably has even more paths if you think about it.  There 
are the automatic pickup modes, there is mouse control (right click), there is 
keyboard control.  You can also get objects indirectly added to your inventory 
(being hit by an arrow).  Using shops can sort of change inventory (coins added 
to make change, etc.).

  Yes, all of this should go through a common code path, but saying those 
checks should be in the 'command' code may not be the right answer, depending 
on what the command code is defined as.


Re: [crossfire] 2.0 object-type refactoring

2006-11-02 Thread Alex Schultz
Mark Wedel wrote:
 Alex Schultz wrote:
   
 Another thought, perhaps instead of the else though, we could use the
 BASECLASS via another ob_methods structure as mentioned in a different
 mail, to implement the falling back to the old function. Not only is it
 well suited to falling back to the old function, it IMHO might prove a
 useful system to have in place for after the conversion is complete,
 particularly once getting into per-arch and per-object ones with plugin
 support.
 

   I personally wouldn't be overly concerned about neatness while in the 
 transition.  After all, it is a transition, and will eventually go away, so 
 being too concerned about how it works doesn't make tons of sense.
 snip
   I'd think for generic callbacks, it depends on its purpose.  For the 
 transition purpose, I think it'd have to fall back to a single function for 
 all object types.  I'd personally rather just see a call like 
 'apply_fallback()' type of thing vs a 'BASECLASS[APPLY]()' type of thing - if 
 there is only 1 function for each type, it is easier to follow the code if 
 you see exactly what the function is, and don't have to go look at BASECLASS 
 to see what it is doing, etc.
   
Well, I'm thinking that since we do have a BASECLASS, we might as well use
it for transitional fallbacks, OR better yet make a TRANSITIONCLASS or
something, which the BASECLASS is based on. I would say that would be
just as easy to follow as things like apply_fallback(), and in fact one
can make names like that a convention for transitional functions.
Also, fallbacks could potentially be handled very easily and generically
in the functions like ob_apply(), such as in this example:

int ob_apply(object *ob, object *pl) {
    ob_methods *methods;
    for (methods = type_methods[ob->type]; methods; methods = methods->fallback)
        if (methods->apply)
            return methods->apply(ob, pl);
    return 0;
}

Also, things like the generic callback for 'drop': that generic code can
be moved straight away to the BASECLASS, and having the separate
TRANSITIONCLASS helps organization by showing the difference
between purely transitional functions and other generic callbacks.
Eventually one could simply remove TRANSITIONCLASS and the framework
would still be useful for BASECLASS and other things. Sure, one could
put the calls to the generic callbacks straight into ob_apply(), however
due to the repeated code for calling callbacks, I would find it nicer to
avoid differences in their code beyond what is necessary.

   I was thinking override would be determined by the object (or person) 
 using the objects themselves.

   In the curse example, you could check the person unapplying it, and if 
 they are the WIZ, let them do so.  Sure, you could pass in parameters for 
 that, but I doubt that is necessary.
   
Well, the thing is, I believe it would be beneficial to have the extra
flexibility of being able to make these calls and have the caller make
overrides for reasons other than WIZ and such. It's not that it's
currently needed, but I'd prefer to leave the opportunities for such
open if reasonably possible.

   In all these cases, that code will be someplace - it is just a question of 
 where.

   For some things, doing it in the command code isn't ideal, because there 
 are several ways for the command to be issued (some are processed by the 
 client and sent as lower level protocol commands in addition to the text 
 commands the player can use, etc.) - in those cases, you want the check code 
 in the function all of those call so it is in one place, not 3.
Well, IMHO if the command code doesn't have a unified method for calling
something like apply or drop, then perhaps that needs to be cleaned
up, so putting those checks in the command code would be practical.

Alex Schultz



Re: [crossfire] 2.0 object-type refactoring

2006-11-01 Thread Mark Wedel
Alex Schultz wrote:
 Another thought, perhaps instead of the else though, we could use the
 BASECLASS via another ob_methods structure as mentioned in a different
 mail, to implement the falling back to the old function. Not only is it
 well suited to falling back to the old function, it IMHO might prove a
 useful system to have in place for after the conversion is complete,
 particularly once getting into per-arch and per-object ones with plugin
 support.

  I personally wouldn't be overly concerned about neatness while in the 
transition.  After all, it is a transition, and will eventually go away, so 
being too concerned about how it works doesn't make tons of sense.

  Per arch/per object I think gets into some other issues.  Registering them 
isn't hard - the more serious question is whether those callbacks are in 
addition to the ones normally for that object, or replace them (or does 
something control that?).  But that is a future discussion.

  I'd think for generic callbacks, it depends on its purpose.  For the 
transition purpose, I think it'd have to fall back to a single function for 
all object types.  I'd personally rather just see a call like 
'apply_fallback()' type of thing vs a 'BASECLASS[APPLY]()' type of thing - if 
there is only 1 function for each type, it is easier to follow the code if you 
see exactly what the function is, and don't have to go look at BASECLASS to 
see what it is doing, etc.

 
 snip
   The previous example I had was about applying unpaid objects.  I can't 
 think of any case where that should be allowed, and if there was some 
 specific reason, I think a flag on the object should control that, and not 
 object type (as presumably, that object would have some behavior like 
 existing objects).  I think it is cleaner to have it like:
 snip

   (another different case here is dropping of damned/cursed objects - 
 regardless of type, that shouldn't be allowed save for something special 
 (DM))
   
 Well, IMHO those dropping and unpaid restrictions should either be in
 the command code, or give the callback an 'override' parameter, because
 of the save for something special cases. Either works for me, but I am
 inclined against the 'override' parameter as it IMHO creates unnecessary
 complexity.

  I was thinking override would be determined by the object (or person) using 
the objects themselves.

  In the curse example, you could check the person unapplying it, and if they 
are the WIZ, let them do so.  Sure, you could pass in parameters for that, but 
I doubt that is necessary.

  In all these cases, that code will be someplace - it is just a question of 
where.

  For some things, doing it in the command code isn't ideal, because there are 
several ways for the command to be issued (some are processed by the client and 
sent as lower level protocol commands in addition to the text commands the 
player can use, etc.) - in those cases, you want the check code in the function 
all of those call so it is in one place, not 3.





Re: [crossfire] 2.0 object-type refactoring

2006-10-31 Thread Alex Schultz
Mark Wedel wrote:
 snip
   In that mode, I still think it would be easy to do an object type at a 
 time - you'd have something like:

   ob_apply()
 {
     if (we have callback) do_callback()
     else call_old_apply_function()
 }

   type of thing.
   
Yes, I agree and do now believe that doing it one-type-at-a-time would
work better overall than one-callback-type-at-a-time. The way you say
above is about what I'm thinking.

Another thought, perhaps instead of the else though, we could use the
BASECLASS via another ob_methods structure as mentioned in a different
mail, to implement the falling back to the old function. Not only is it
well suited to falling back to the old function, it IMHO might prove a
useful system to have in place for after the conversion is complete,
particularly once getting into per-arch and per-object ones with plugin
support.

 snip
   The previous example I had was about applying unpaid objects.  I can't 
 think of any case where that should be allowed, and if there was some 
 specific reason, I think a flag on the object should control that, and not 
 object type (as presumably, that object would have some behavior like 
 existing objects).  I think it is cleaner to have it like:
 snip

   (another different case here is dropping of damned/cursed objects - 
 regardless of type, that shouldn't be allowed save for something special 
 (DM))
   
Well, IMHO those dropping and unpaid restrictions should either be in
the command code, or give the callback an 'override' parameter, because
of the save for something special cases. Either works for me, but I am
inclined against the 'override' parameter as it IMHO creates unnecessary
complexity.

Alex Schultz



Re: [crossfire] 2.0 object-type refactoring

2006-10-30 Thread Mark Wedel
Alex Schultz wrote:
 It may be better working on getting all of this infrastructure/callback 
 methods in place, and implementing one type of object using them instead of 
 focusing on the specific callback method (the web page mentioned doing all 
 the apply stuff).
   Why?  That way, if someone does write code for a new object type, all the 
 pieces are in place so they can use the new method.  If only the apply 
 callback is implemented, they can use that logic, but are left using the old 
 coding method for the other areas, which then needs to be converted down the 
 road.

   Also, by doing all the different callback methods, it provides better 
 coverage of the callback logic, eg, if there are issues with any of the 
 methods, they will be found sooner vs later.
   
 Well, my original thinking, was that if I were to do it by completing an
 object type, instead of a callback type, then it would be somewhat of a
 pain to make it redirect to legacy methods for other object types not
 yet implemented in the new system. What's your view on that issue?

  Some of it depends on how this is done.

  If you replaced the specific calls to apply, drop, etc as they are sprinkled 
throughout the code, then doing one type of function is easier.

  However, what I was thinking is those functions would still exist, but 
instead of the case statements they have now, they would have something like:

  if (callback_exists_for_this_type) do_callback();
  else { do switch statement }

  In that model, doing a few types at a time is easier to do.

  I also think you still want to have common top level/generic functions so 
that you can have some common code.

  An example of this (albeit a trivial one) is checking that objects are paid 
for before doing an apply.  Rather than have that duplicated in every object 
type apply function, you have it in the one generic one that calls the specific 
ones.

  A better case here may be something like examine/describe item, where many 
objects may have some specific/unique properties, but many also have common 
properties (value, cursed, applied, etc).

 
 Second, if we're going to be writing all new functions, we should use some 
 other 'best practices'.  In particular, anything that would return a string 
 value should be modified so that a string pointer (and length of that 
 buffer) is passed in, instead of using static return buffers.  This will 
 assist if we ever thread the server, but it will also help prevent a few 
 bugs (related to this, sprintf, strcat, and strcpy should never be used, but 
 rather snprintf, strncat, and strncpy instead).
 Well, I wouldn't say necessarily writing all new functions: I believe
 some existing functions can be tweaked fairly easily to fit into the new
 system. However I believe that when moving functions that map well over,
 we should hand review them anyways, and perhaps use a checklist of
 things such as those issues you mention like avoiding static buffer returns.

  I think a lot of existing code can be re-used.  I'm not sure how many 
functions can be re-used completely.

  I suppose the apply code is a reasonable example.  For some object types, the 
logic has been broken apart into its own function.

  But some object types are still handled within a case statement.  In those 
cases, you'll basically need to write a new function.

  But this really needs to be thought out at the time of the callbacks, as 
you'll need to set up the function pointers to be consistent.

  To be honest, the only case I can think of right now would be the 
describe/examine object logic.


  Other thought not related to the above:
Given this is a major restructure, would it make sense to collapse/redo some of 
the types, particularly the equipable equipment?

With the change of equipment (body info), for the most part, there is not the 
need for all the types right now (boots, shield, helmet, etc.).  Really, the 
only types (or subtypes) needed are likely:

WEAPON - some classes or gods are prohibited from using them, so we need to 
know what are weapons and what are not.
BOW - like weapons above, but these also fire other objects.
ARMOR - some classes can't use armor, so need this info.  Helmets, boots, 
gloves, shields, etc. fall into this.
MISC - anything else equipable that is not in the above that conveys 
adjustments to the player (eg, wands/rods/horns are not here, because they do 
not provide ac or protections to the player simply by equipping them).  But 
things like rings, amulets, necklaces (some of these don't exist but are being 
used as examples) would be in this category.




Re: [crossfire] 2.0 object-type refactoring

2006-10-30 Thread Nicolas Weeger (Laposte)
Hello.

 For a while now I've been throwing the idea of separating the server's
 object-type-specific code by object types. Currently things are rather
 disorganized with code for how an object type works being spread all
 around the code. I plan to begin to implement the plan on the following
 wiki page soon if nobody has any objections:
http://wiki.metalforge.net/doku.php/dev_todo:refactor

Sounds ok to me. I guess not everything is planned, so there'll probably be 
adjustments later on, but fine for me :)

 Also, would be nice for some others to help a bit once it's started :)

I'll try to help when I get time.


Nicolas



Re: [crossfire] 2.0 object-type refactoring

2006-10-30 Thread Mark Wedel
Alex Schultz wrote:

 Well, the way I was thinking of doing it, would be to replace specific
 calls to apply, drop, etc as they are sprinkled throughout the code, and
 keep common code such as you say, in a common subdirectory of the
 types directory, and then have the callback called by ob_apply(), call
 something like ob_apply_common() at the start of its code.
 IMHO, with this method, the best way to handle legacy non-separated
 logic would be to move that legacy logic to the types/common
 subdirectory, and set it up so ob_apply() and such will call that if it
 can't find a callback registered for the type.

  That works I suppose.  You do get into various questions that have no clear 
answer.  Like 'should the main drop function be in the commands.c file (since 
it is a command the player issues) or should it be in the types/common.c 
file?'

  I'd be somewhat inclined to have the command logic still in the command 
(c_...) files, and from there, it does the callback.  It seems (to me at 
least) that is a little better - the basic command logic is still in the 
commands file, and the actual object specific drop is in the types area.  But 
I can also see the other point - let's put all the common object operations 
together.

  I also do think that if there are standards in place that all objects should 
follow (like not letting unpaid objects be used), that should be enforced, and 
not something that is affected by object type.  If some objects really should 
change the behaviour, I'd be more inclined to say they should have specific 
flags for such variations - one reason there are so many object types right 
now is that there is some of that 'if type==FOO, I behave 95% like BAR, except 
for this one little case'.  The problem this leads to is down the road, someone 
asks 'which do I use, FOO or BAR?', where if there was just a flag which 
controlled that basic action, that would be clearer.

  The other reason I'd like there to be common functions is that I think it 
will make debugging a little easier.  With a single function that then makes 
the callbacks if necessary, I can easily set a breakpoint in there to catch 
when any object of any type does that action.  Otherwise, there isn't any easy 
way to catch 'if any object of any type is dropped' type of thing, unless 
there is a common function that every object type uses 100% of the time.

  But it probably is more stylistically - to me, this seems cleaner:

void action()
{
    some common code
    make object callback
}

vs

void action_for_type()
{
    make_common_call_for_all_types()
    do my specific code
}

  It just seems that if everything is going to call 
make_common_call_for_all_types(), why not just call that in the first place?


   Other thought not related to the above:
 Given this is a major restructure, would it make sense to collapse/redo some 
 of the types, particularly the equipable equipment?
 IMHO This should be done as well, though one question is: would it be
 worth completely redoing the type numbers and converting all of the
 maps/arches? Or should we try to keep the numbers the same where
 possible, and have the loader change type values that were collapsed to
 the new value?

  If maps are changing the object types, they are broken.  While it is 
sometimes done, changing object types is pretty much never good behaviour, 
because other aspects of the object are not updated.

  However, for the archetypes, they should be updated with the new numbers - it 
shouldn't be hard to do, and is definitely the right approach.





Re: [crossfire] 2.0 object-type refactoring

2006-10-30 Thread Alex Schultz
Mark Wedel wrote:
 Alex Schultz wrote:

   
 Well, the way I was thinking of doing it, would be to replace specific
 calls to apply, drop, etc as they are sprinkled throughout the code, and
 keep common code such as you say, in a common subdirectory of the
 types directory, and then have the callback called by ob_apply(), call
  something like ob_apply_common() at the start of its code.
 IMHO, with this method, the best way to handle legacy non-separated
 logic would be to move that legacy logic to the types/common
 subdirectory, and set it up so ob_apply() and such will call that if it
 can't find a callback registered for the type.
 

   That works I suppose.  You do get into various questions that have no 
 clear answer.  Like 'should the main drop function be in the commands.c file 
 (since it is a command the player issues) or should it be in the 
 types/common.c file?'

   I'd be somewhat inclined to have the command logic still in the command 
 (c_...) files, and from there, it does the callback.  It seems (to me at 
 least) that is a little better - the basic command logic is still in the 
 commands file, and the actual object specific drop is in the types area.  
 But I can also see the other point - let's put all the common object 
 operations together.
   
To me it also seems best to keep command logic in the command files, and
have that do the callback. However IMHO only the bits about decoding the
command, and providing certain types of player feedback, should be
handled in the command files. That way, it is easy to write code that
performs an action that is normally a player action, reusing the common
code, without giving the player feedback that isn't valid when the
player didn't actually trigger the action.
Basically, I believe that player-actions should keep their
command-decoding, player-feedback, and other code that should only
happen when a player is triggering the action directly, in the command
code, but delegate all action code to the callback system.
Also, along the same sort of lines, unpaid checks should for the most
part be done in the command code as opposed to the actual action code.

 snip
   
   The other reason I'd like there to be common functions is that I think it 
 will make debugging a little easier.  With a single function that then makes 
 the callbacks if necessary, I can easily set a breakpoint in there to catch 
 when any object of any type does that action.  Otherwise, there isn't any 
 easy way to catch 'if any object of any type is dropped' type of thing, 
 unless there is a common function that every object type uses 100% of the 
 time.
   
Well, I was planning on having ob_apply(), ob_drop() functions to run
the callbacks. I would say that's even easier to do breakpoints and
debugging with than running a callback directly from many places in the
code (such as the commands files), as it would be if we just inserted a
callback check before the existing case statements. I do think there
should be common functions, but I believe they should be in a central
file relating to this callback system.

   But it probably is more stylistically - to me, this seems cleaner:

 void action()
 {
     some common code
     make object callback
 }

 vs

 void action_for_type()
 {
     make_common_call_for_all_types()
     do my specific code
 }

   It just seems that if everything is going to call 
 make_common_call_for_all_types(), why not just call that in the first place?
   
Well, the issue IMHO, is for things such as applying, what if one
specific type wants to put restrictions on when one can apply an object?
Due to that, it seems to me that the second option would be a more
flexible and desirable way to call the common code for all types. Also,
I believe that such common code should have a more specific name than
make_common_call_for_all_types(), perhaps more along the lines of
something like can_apply() and complete_apply(), with can_apply() doing
common checks on if it can be applied, and complete_apply() doing common
code about what applying the object does.
(also, just a note for others reading, for the sake of clarity I'll
note: in the second case, the action() function becomes simply void
action() { make object callback }, eventually extended to include
things like integrating into the plugin system.)

Alex Schultz



Re: [crossfire] 2.0 object-type refactoring

2006-10-29 Thread Alex Schultz
Alex Schultz wrote:
 For a while now I've been throwing the idea of separating the server's
 object-type-specific code by object types. Currently things are rather
 disorganized with code for how an object type works being spread all
 around the code. I plan to begin to implement the plan on the following
 wiki page soon if nobody has any objections:
 http://wiki.metalforge.net/doku.php/dev_todo:refactor
 Also, would be nice for some others to help a bit once it's started :)
Just a note: since my original mailing list posting, I made a small
revision to the page/plan based on IRC discussion.
http://wiki.metalforge.net/doku.php/dev_todo:refactor?rev=1162054825&do=diff

Alex Schultz



Re: [crossfire] 2.0 object-type refactoring

2006-10-29 Thread Mark Wedel

  Few questions:
Use the form server/types/foo.c or server/types/foo/bar.c depending on whether 
the object type requires multiple C files to be clean.

  Given that the SVN repository is called server, it is unclear exactly what 
server refers to.

If we include the repository name, is it:
server/server/types
server/common (example of existing directory)

Or

server/types
common/ (being the existing directory)

  IMO, I'd prefer the second form - put the types directory at the top level, 
relative to the server directory.  This is how the socket code is done; it 
doesn't have any real disadvantage, but could have some advantages (better for 
unit tests?  Better for a stand alone program that does simulations, etc.?)

Function pointers:
It's not specifically mentioned how the callbacks are registered.  I see one 
of two ways:

  register_apply_callback(type, funcpointer);

  or

  register_callback(type, funcpointer, callback_type)

  Both have pros and cons.  The first case has the advantage that the compiler 
can do actual type checking on funcpointer.  The second has the advantage that 
you don't need as many registration functions, and it makes it easier to add 
new callbacks (add the new callback_type #define, update the number of 
callback types, and recompile).  One could even do things like:

callbacks[ob-type][CALLBACK_TYPE]()

  It's also not really mentioned how the callbacks themselves are registered 
(that code has to be someplace).  Rather than having a single function that 
registers all the callbacks for all the object types, I'd suggest that each 
type file have something like init_callbacks_weapons (or similar).  Then in 
some other place, you have one function that calls all the init_callbacks - 
having a function MAX_TYPES lines long is much better than one MAX_TYPES * 
MAX_CALLBACK_METHODS lines long.

  The other advantage of this is that the individual type files can then 
declare every function except the init function static, so they can't be 
called from other files.

  I suppose that one could use various system calls to look up the function 
names at runtime and call them (presuming a standard naming scheme, like 
init_callbacks_46, where 46 is the type number for that object), but that may 
be overly complicating things.

  Other thoughts:
It may be better to get all of this infrastructure and the callback methods in 
place, then implement one type of object using them, instead of focusing on 
one specific callback method (the web page mentioned doing all the apply 
stuff). 
  Why?  That way, if someone writes code for a new object type, all the pieces 
are in place so they can use the new method.  If only the apply callback is 
implemented, they can use that logic for apply, but are left using the old 
coding method for the other areas, which then needs to be converted down the 
road.

  Also, doing all the different callback methods provides better coverage of 
the callback logic; e.g., if there are issues with any of the methods, they 
will be found sooner rather than later.

  Second, if we're going to be writing all new functions, we should adopt some 
other 'best practices'.  In particular, anything that would return a string 
value should be modified so that a string pointer (and the length of that 
buffer) is passed in, instead of using static return buffers.  This will 
assist if we ever thread the server, and it will also help prevent a few bugs 
(related to this, sprintf, strcat, and strcpy should never be used; use 
snprintf, strncat, and strncpy instead).







Re: [crossfire] 2.0 object-type refactoring

2006-10-29 Thread Alex Schultz
Mark Wedel wrote:
 snip

   IMO, I'd prefer the second form - put the types directory at the top level, 
 relative to the server directory.  This is how the socket code is, doesn't 
 have 
 any real disadvantage, but could have some advantages (better for unit tests? 
 Better for stand alone program that does simulations, etc?)
   
I was originally thinking of as a subdirectory of the server directory
as in the first form, but now that I think of it, what you suggest I
think is a bit better.
 Function pointers:
 It's not specifically mentioned how the callbacks are registered.  I see one 
 of two ways:

   register_apply_callback(type, funcpointer);

   or

   register_callback(type, funcpointer, callback_type)
 snip
I'm currently thinking of using the first form, largely so it matches what I
plan on using for calling the callback itself. I decided on using ob_apply(),
ob_drop(), etc. instead of my old idea of cf_method(ob, type, ...), primarily
due to compiler type checking.

   It's also not really mentioned on how the callbacks themselves are 
 registered 
 (that code has to be someplace).  And rather than having a single function 
 that 
 registers all the callbacks for all the object types, I'd suggest that each 
 type 
 file has something like init_callbacks_weapons (or something).  Then in some 
 other place, you have that function that calls all the init_callbacks - 
 having a 
 function of MAX_TYPES lines longer is much better than one of MAX_TYPES * 
 MAX_CALLBACK_METHODS lines long.
   
Yes, that makes sense, and my original plan was to use such init functions.
Good idea with how everything except the init could be static; that's a nice
bonus I hadn't thought of.
 snip

   Other thoughts:
 It may be better working on getting all of this infrastructure/callback 
 methods 
 in place, and implementing one type of object using them instead of focusing 
 on 
 the specific callback method (the web page mentioned doing all the apply 
 stuff). 
   Why?  That way, if someone does write code for a new object type, all the 
 pieces are in place so they can use the new method.  If only the apply 
 callback 
 is implemented, they can use that logic, but are left using old coding method 
 for the other areas, that then needs to be converted down the road.

   Also, by doing all the different callback methods, it provides better 
 coverage 
 of the callback logic, eg, if there are issues with any of the methods, they 
 will be found sooner vs later.
   
Well, my original thinking was that if I were to do it by completing an object
type instead of a callback type, it would be somewhat of a pain to make it
redirect to the legacy methods for object types not yet implemented in the new
system. What's your view on that issue?

 Second, if we're going to be writing all new functions, we should use some 
 other 
 'best practices'.  In particular, for anything that would return a string 
 value, 
 they should be modified so that a string pointer (and length of that pointer) 
 is 
 passed in, instead of using static return buffers.  This will assist if we 
 ever 
 thread the server, but it will also help prevent a few bugs (related to this, 
 sprintf, strcat, and strcpy should never be used, but rather snprintf, 
 strncat, 
 and strncpy instead)
Well, I wouldn't say we're necessarily writing all new functions: I believe
some existing functions can be tweaked fairly easily to fit into the new
system. However, I believe that when moving over functions that map well, we
should hand-review them anyway, perhaps using a checklist of issues such as
the ones you mention, like avoiding static buffer returns.

Alex Schultz
