So my thought was that in an ideal rewrite, there would be 1 thread per map
and 1 thread per player (and maybe a few others that do stuff in the
background).

That keeps things relatively simple - you just have per-map and per-player
thread locks - if a thread holds the lock, it can do whatever it wants.

This also helps crossfire use multiple CPU cores.

For maps, it makes the most sense, because right now one slow map slows the
entire game down (be that someone dropping a pile of items in a store, or
some map with lots of spell effects).  With each map contained in its own
thread, people on that map see the lag, but other players don't.  And
certain map operations (load/save) are also known to take a while.

To me, it makes more sense to let the system library deal with thread 
scheduling vs implementing some scheduler within crossfire itself.  Threads are 
a fairly cheap resource, and crossfire still would not need a lot of them.
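
As a very rough sketch of that locking model (the struct and function names
below are made up for illustration, not the actual server code), each map
could own a mutex, and its worker thread would hold that lock for the whole
tick; anything else that needs to touch the map takes the same lock first:

/* Minimal sketch of one-thread-per-map locking with POSIX threads.
 * Names are illustrative, not crossfire's real API. */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

struct map {
    pthread_mutex_t lock;   /* anything touching this map must hold this */
    const char *path;
    int done;               /* set when the server wants the thread to exit */
};

/* Stand-in for the real per-tick map processing (movement, spells, etc.). */
static void process_map_tick(struct map *m) {
    printf("processing %s\n", m->path);
}

static void *map_thread(void *arg) {
    struct map *m = arg;
    struct timespec tick = { 0, 120 * 1000 * 1000 };  /* roughly one tick */
    while (!m->done) {
        pthread_mutex_lock(&m->lock);   /* whole tick runs under the map lock */
        process_map_tick(m);
        pthread_mutex_unlock(&m->lock);
        nanosleep(&tick, NULL);
    }
    return NULL;
}

int main(void) {
    struct map m = { .path = "example/map", .done = 0 };
    pthread_t tid;
    pthread_mutex_init(&m.lock, NULL);
    pthread_create(&tid, NULL, map_thread, &m);
    nanosleep(&(struct timespec){ 1, 0 }, NULL);  /* let it run a bit */
    m.done = 1;   /* a real server would synchronize shutdown properly */
    pthread_join(tid, NULL);
    return 0;
}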

Nathan is correct in that the problem is that until the drop (or pickup)
command completes, the server is not doing anything else.  The approach he
presents would also work (and not require threading) - building a list of
items to drop, and dropping X of those per tick (X could be more than one).
But he is also correct that how much stuff is already on the square affects
performance - dropping 1 item on a space that already has 1000 items will
be slow, and dropping 20 items on a space that is completely empty may be
pretty fast.
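
A rough sketch of that per-tick drop queue (all names here are made up, not
the real server code):

#include <stdio.h>

#define DROPS_PER_TICK 5   /* "X": tune so one huge drop can't stall a tick */

struct item {
    struct item *next;
    const char *name;
};

struct player {
    struct item *pending_drops;   /* items queued by the drop command */
};

/* Stand-in for the real "move this object onto the player's square" logic. */
static void drop_one(struct player *pl, struct item *it) {
    (void)pl;
    printf("dropped %s\n", it->name);
}

/* Called once per tick per player: drain at most DROPS_PER_TICK queued
 * items so the rest of the server keeps running between drops. */
void process_pending_drops(struct player *pl) {
    for (int i = 0; i < DROPS_PER_TICK && pl->pending_drops != NULL; i++) {
        struct item *it = pl->pending_drops;
        pl->pending_drops = it->next;
        it->next = NULL;
        drop_one(pl, it);
    }
}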

One way this could be handled is tracking the number of objects on a space
and, if that object count is >X, disallowing the dropping of more items
there ("this area is overflowing with stuff, and you just can't find an
area to drop that %s").
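
The check itself is trivial if the square keeps a running count (again,
the names and the threshold here are made up):

/* Sketch of refusing drops onto an already crowded square.  Assumes the
 * square keeps a count of objects, updated on insert/remove. */
#define MAX_OBJECTS_PER_SQUARE 500   /* the ">X" threshold */

struct map_square {
    int ob_count;   /* maintained whenever an object is inserted or removed */
};

/* Returns 1 if dropping another item here is allowed, 0 if the square is
 * already too full and the "overflowing with stuff" message should be sent. */
int can_drop_here(const struct map_square *sq) {
    return sq->ob_count < MAX_OBJECTS_PER_SQUARE;
}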

There is still the problem with pickup - when cleaning out a dungeon, the
player's inventory can get quite long, especially since there tend to be
many variations of an item.  Limiting pickup to X items/tick helps out
here, but could also be annoying: you move over a space, not everything is
picked up, you move back, move off again, notice not everything was picked
up again, and repeat as needed.
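
If pickup were capped that way, the least it could do is report what was
left behind rather than silently stopping - something like this
(illustrative names only):

#define PICKUPS_PER_TICK 10   /* "X": how many items one tick may pick up */

struct item { struct item *next; };
struct square { struct item *inv; };
struct player { struct item *inv; };

/* Move at most PICKUPS_PER_TICK items from the square into the player's
 * inventory; return how many are still left so the caller can tell the
 * player the pickup is incomplete. */
int pickup_some(struct player *pl, struct square *sq) {
    int taken = 0, left = 0;
    while (sq->inv != NULL && taken < PICKUPS_PER_TICK) {
        struct item *it = sq->inv;
        sq->inv = it->next;
        it->next = pl->inv;   /* prepend to the player's inventory */
        pl->inv = it;
        taken++;
    }
    for (struct item *it = sq->inv; it != NULL; it = it->next)
        left++;
    return left;
}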

Some of the problem, as noted above, is just the sheer number of items.
While having lots of variation is interesting, take 5 different types of
swords with minor variations: each can also have different artifact
(of Lythander, etc) values and different +magic values, so those 5 swords
can turn into 100 variations of an item (say 5 base types x 4 artifact
variants x 5 magic values), which increases the item count when dealing
with merges.

Linked lists are also not particularly efficient when they have to be
searched - using something like a tree (organized on the merging criteria)
could greatly speed up the insertions if it is well balanced, though it can
be a bit less efficient when walking it.
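
For example, POSIX tsearch() (a red-black tree in glibc, so reasonably
balanced) could replace the linear scan for a mergeable stack.  This is
only a sketch - the names are made up and the real merge test compares far
more fields than this:

#include <search.h>
#include <string.h>

struct item {
    const char *name;
    int magic;        /* +magic value */
    int nrof;         /* stack count */
};

/* Two items merge only if every field the game cares about matches;
 * here that is just (name, magic). */
static int merge_cmp(const void *a, const void *b) {
    const struct item *x = a, *y = b;
    int c = strcmp(x->name, y->name);
    return c ? c : (x->magic - y->magic);
}

/* Insert: an O(log n) search for a mergeable stack instead of an O(n) scan
 * of the space's list.  tree_root points to a void* that starts as NULL. */
void insert_or_merge(void **tree_root, struct item *it) {
    struct item **slot = (struct item **)tsearch(it, tree_root, merge_cmp);
    if (*slot != it)                   /* an equal stack was already there */
        (*slot)->nrof += it->nrof;     /* merge; caller can free 'it' */
    /* else: it was inserted as a new node */
}

Walking the tree in merge order (twalk()) is the part that is a bit
clumsier than walking a flat list.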

Another thought: rather than dumping items onto the space's linked list and
marking that it needs to be sorted, the space could keep a separate linked
list containing the 'incoming' items - the fact that it is non-empty means
the items on it are not sorted and still need to get added/merged into the
normal list.  But this type of deferral gets complicated - you now have to
deal with the case where the player wants to pick up an item they just
(mistakenly) dropped, and there are now 2 lists to look through.
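
A sketch of that two-list layout, with the lookup problem visible in
find_on_square() (names made up, not the real code):

#include <stddef.h>

struct item {
    struct item *next;
    int id;
};

struct map_square {
    struct item *inv;        /* the normal, sorted/merged list */
    struct item *incoming;   /* unsorted, not yet merged; non-empty = work pending */
};

/* Cheap drop: just prepend to the incoming list - no searching, no merging. */
void drop_deferred(struct map_square *sq, struct item *it) {
    it->next = sq->incoming;
    sq->incoming = it;
}

/* Stand-in for the real sorted insert/merge of one item into sq->inv;
 * here it only prepends. */
static void merge_into_inventory(struct map_square *sq, struct item *it) {
    it->next = sq->inv;
    sq->inv = it;
}

/* Flush the incoming list when the square's contents are actually needed
 * (a client looks at it, the map gets saved, etc.). */
void flush_incoming(struct map_square *sq) {
    while (sq->incoming != NULL) {
        struct item *it = sq->incoming;
        sq->incoming = it->next;
        merge_into_inventory(sq, it);
    }
}

/* The complication: "pick up the item I just dropped" now has to search
 * both lists. */
struct item *find_on_square(struct map_square *sq, int id) {
    for (struct item *it = sq->incoming; it != NULL; it = it->next)
        if (it->id == id) return it;
    for (struct item *it = sq->inv; it != NULL; it = it->next)
        if (it->id == id) return it;
    return NULL;
}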

In some ways, not merging items on the ground actually makes sense - things 
don't automatically group together in real life, so having a space with '10 
arrows, 3 swords, 4 arrows, ..' actually makes some sense.

IIRC, another performance issue (hack) was the shop menus, which basically
use the merging logic to generate a more concise list - but when people
unload 1000 items into the shop, the logic used (making a new list, copying
objects onto it, etc) is not very efficient.