Hi, Thonsew

First of all, thanks for trying to make Wesnoth faster. I've looked
through the t_token changes. It seems like a good thing to do, but
there are a couple of things to keep in mind:

1) Switching to t_token is a good solution, but it might not be the
best one in some cases:
 - Firstly, some places in the AI routines which now use string/token
comparison can be fixed in a better way - for example, the
ai_special='guardian' state should be a known boolean state instead;
after that, by adding a unit::get_state(state_t) accessor, we can
speed the lookup up to O(1) - currently there are two O(log N) map
lookups involved (see the sketch after this list).
 - Secondly, in some cases the AI tries to duplicate logic which
should be handled by the 'game rules' part of the game, in order to
take some special abilities into account - it's better to rework the
code to avoid that.
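
To illustrate the first point, here is a minimal sketch (in C++, with
illustrative names only - not the actual Wesnoth unit class) of what a
known boolean state with an O(1) accessor could look like:

  #include <bitset>

  class unit {
  public:
      enum state_t { STATE_GUARDIAN, STATE_SLOWED, STATE_POISONED,
                     NUM_STATES };

      // O(1): a plain bit test, no string keys, no map walk.
      bool get_state(state_t state) const { return known_states_[state]; }
      void set_state(state_t state, bool value) { known_states_[state] = value; }

  private:
      std::bitset<NUM_STATES> known_states_;
  };

The AI check then becomes unit.get_state(unit::STATE_GUARDIAN) instead
of comparing the ai_special attribute against the string 'guardian' on
every pass through the loop.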

2) As of now, t_token is not opt-in, so it affects a lot of the code,
including parts which do not care about speed at all. There is also an
increased risk of new bugs (those can be sorted out) and an increased
risk of slowdowns in other workloads (as other people here have been
mentioning). It might be better to code the t_token mechanism in a way
that allows it to be opt-in, and only convert those pieces of the code
which profiling shows to be performance hotspots. I am absolutely OK
with the kind of boilerplate code that I see in the C++ changes, but
it's not really going to affect performance in places which are called
1-2 times per AI turn - instead, I think we should focus our attention
on the hotspots and leave the rest of the code unchanged unless the
benefits outweigh the costs and risks of side effects.
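
To make 'opt-in' a bit more concrete, here is a rough sketch of one
possible approach (again with illustrative names - this is not the
actual t_token implementation): an interned-string type with an
explicit constructor, so only code that deliberately creates tokens is
affected, and everything else keeps using plain std::string:

  #include <set>
  #include <string>

  class interned_token {
  public:
      // Explicit construction is what makes it opt-in: nothing is
      // converted to a token unless the caller asks for it.
      explicit interned_token(const std::string& s)
          : ptr_(&*pool().insert(s).first) {}

      // Equal strings intern to the same storage, so comparison is a
      // single pointer compare rather than character-by-character.
      bool operator==(const interned_token& other) const
      { return ptr_ == other.ptr_; }

      const std::string& str() const { return *ptr_; }

  private:
      static std::set<std::string>& pool() {
          static std::set<std::string> p;   // shared intern pool
          return p;
      }
      const std::string* ptr_;
  };

With something like this, only the hot paths identified by profiling
would construct tokens; the rest of the code (and the Lua-facing API)
would stay exactly as it is.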

3) The Lua interface and Lua code are targeted towards outside
developers, so, IMO, ease of use should be a big concern. I see that
people who are actively using Lua in their projects are complaining
that the changes would decrease ease of use. Also, if the behavior
changes, that is a problem, because we can't easily fix UMC code.

I will spend some time this weekend profiling the code to find out
for myself where things stand now.
One idea I have is to measure the effect of fixing only the
allocation/deallocation of strings, which would make those
( cfg[attribute] == value ) comparisons faster without unexpected
side effects.
I.e., theoretically, we could have a
config::has_attribute_with_value(const char *attribute, const char
*value) which would return true or false without needing any
std::string allocation and deallocation; there is a rough sketch of
the idea below. I am interested to find out how much that kind of fix
can affect performance.
Also, I'd like to see if I can remove the need to compare strings or
tokens in the places which sit on the AI's main loop - I see that in
some cases it's possible to do so.
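
Here is a minimal sketch of that idea, assuming (for illustration
only - this is not the real config class) that attributes live in a
string-keyed map; a C++14 transparent comparator lets the lookup take
a const char* key without building a temporary std::string:

  #include <cstring>
  #include <functional>
  #include <map>
  #include <string>

  class config_sketch {
  public:
      bool has_attribute_with_value(const char* attribute,
                                    const char* value) const {
          // std::less<> allows find() to take a const char* directly,
          // so no temporary std::string is allocated for the key.
          const auto it = attributes_.find(attribute);
          return it != attributes_.end()
              && std::strcmp(it->second.c_str(), value) == 0;
      }

      void set(const char* attribute, const char* value) {
          attributes_[attribute] = value;
      }

  private:
      std::map<std::string, std::string, std::less<>> attributes_;
  };

On the hot path, cfg.has_attribute_with_value("ai_special", "guardian")
would then do one map lookup and one strcmp, with no string
construction at all.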

For the record, my overall opinion about the changes is that they are
good, but we should be cautious about pushing them into places which
do not need/welcome them. It is great to have a faster set of
functions, but they should not be mandatory if using them properly is
harder.

-- 
Cheers, Iurii Chernyi (Crab)
