Jules Bean wrote:
Jack Kelly wrote: -snip-
Essentially you have to choose some form of identity (relational DB fans
might call it a key) for your players and rooms. My gut feeling is
that String will do fine; it's nice for debugging, gives rise to a
natural way to write your admin commands and so on. However, it may not
be quite good enough (100 mobs all called `an orc' need individual keys)
and you may end up using an Int or Int64 for your key.
Perhaps doing the keying on an Int and then having a [String] of names
that entities can be called is the way to go. List comprehensions will
still work:
let ents = [ ent | ent <- allents, name `elem` (Entity.aliases ent) ] in
-- ...
but so will addressing an object uniquely when needed:
let target = listToMaybe [ ent | ent <- allents, id == (Entity.id ent) ] in
-- ...
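For concreteness, here is a minimal sketch of what such an Entity record might look like; the field names, `byName`, and `byId` are my own inventions, not anything from an existing codebase:

```haskell
import Data.Maybe (listToMaybe)

data Entity = Entity
  { entId   :: Int       -- unique key
  , aliases :: [String]  -- names the entity answers to
  } deriving (Show, Eq)

-- every entity answering to a given name
byName :: String -> [Entity] -> [Entity]
byName name ents = [ e | e <- ents, name `elem` aliases e ]

-- the (at most one) entity with a given key
byId :: Int -> [Entity] -> Maybe Entity
byId i ents = listToMaybe [ e | e <- ents, entId e == i ]
```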
Once you've chosen a key, you can do something very like the C++
structure: -snip-
I'm thinking of doing the following (based on some suggestions from
#haskell):
data Room = Room {...}
type Zone = Data.Map.Map String Room
type World = Data.Map.Map String Zone
type ZoneID = String
type RoomID = String
data Location = Location { zone :: ZoneID, room :: RoomID}
data Player = Player { location :: Location, ... }
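Under those definitions, resolving a Location to an actual Room is just two Map lookups chained in Maybe. A sketch under the assumption that a missing zone or room should yield Nothing (`lookupRoom` and the Room body are stand-ins):

```haskell
import qualified Data.Map as M

data Room = Room { description :: String } deriving Show

type ZoneID = String
type RoomID = String
type Zone   = M.Map RoomID Room
type World  = M.Map ZoneID Zone

data Location = Location { zone :: ZoneID, room :: RoomID }

-- Nothing if either the zone or the room is missing
lookupRoom :: World -> Location -> Maybe Room
lookupRoom w (Location z r) = M.lookup z w >>= M.lookup r
```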
Aside: When should I use newtype vs type? From what I understand, both
create type aliases that are only relevant at compile time, but newtype
disallows things like:
-- Bad
newtype Fred = Fred String
putStrLn $ Fred "this won't work"
As a rule of thumb, is it `better' to newtype things so that the type
system can trip you up?
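On the aside: `type` is a pure synonym that the compiler freely interchanges with its definition, while `newtype` creates a genuinely distinct type with no runtime cost. A small illustration of the kind of mix-up it catches, using newtype versions of the IDs from the snippet above:

```haskell
newtype ZoneID = ZoneID String deriving (Eq, Ord, Show)
newtype RoomID = RoomID String deriving (Eq, Ord, Show)

describe :: ZoneID -> RoomID -> String
describe (ZoneID z) (RoomID r) = z ++ "/" ++ r

ok :: String
ok = describe (ZoneID "midgaard") (RoomID "temple")

-- Rejected at compile time, whereas with plain `type` synonyms
-- the swapped arguments would sail through:
-- bad = describe (RoomID "temple") (ZoneID "midgaard")
```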
I should mention, actually, that there is also the naive solution:
-snip-
That does seem like a bad idea.
There are always two techniques for maintaining invariants.
In solution 1, you would simply not store the player list in the room
structure at all. Instead, you would only store the room in the player,
and whenever you wanted to answer the question "which players are in
this room?" you would do a quick search through all players to find them:
playersInRoom :: Room -> [Player]
playersInRoom r = [p | p <- allplayers, room p == r]
-- Just like a database! In SQL we'd say:
-- SELECT p FROM allplayers WHERE p.room = r
Of course the disadvantage to this is the linear search through all the
players. Depending on how often this is performed, this may actually
cost you less than explicitly caching the value inside the room.
I'd not thought of doing it that way. Maybe because keeping a couple of
pointers in sync in C is less painful than coding up a search to
generate an appropriate list. Learning to stop looking through the
imperative lens is hard.
In solution 2, of course, whenever you move a player from room A to
room B you do it via a function which is explicitly designed to keep
everything up to date:
movePlayer :: PlayerID -> RoomID -> RoomID -> Game ()
movePlayer pid raid rbid = do
  p  <- getPlayer pid
  setPlayer pid (p { room = rbid })
  ra <- getRoom raid
  rb <- getRoom rbid
  setRoom raid (ra { players = players ra \\ [pid] })
  setRoom rbid (rb { players = pid : players rb })
Which I've written in an imaginary Game monad, and using rather ugly
low-level combinators get/set Room/Player. You could make it look much
more pleasant in practice with some better chosen combinators.
Game looks like it'd behave like some kind of specialisation of State,
but since there'd be responses sent out to the sockets, I'm thinking of
something like StateT GameState IO (). Is it worth having the StateT
because IO already handles some concept of state? If so, will running
something like:
forkIO $ evalStateT gameLoop initialstate
work, or will the IO actions be queued up until the end of gameLoop? The
idea of building a Game monad is attractive either way, because I'll
still need functions like movePlayer whichever path is chosen.
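For what it's worth, `StateT GameState IO` does behave the way you'd hope: each lifted IO action runs at the point it is sequenced, not at the end of gameLoop, and `forkIO $ evalStateT gameLoop initialState` simply runs the whole loop in its own thread. A hedged sketch, with GameState and the loop body invented for illustration:

```haskell
import Control.Concurrent (forkIO)
import Control.Monad.IO.Class (liftIO)
import Control.Monad.State (StateT, evalStateT, gets, modify)

data GameState = GameState { tick :: Int }

type Game = StateT GameState IO

gameLoop :: Game ()
gameLoop = do
  modify (\s -> s { tick = tick s + 1 })
  n <- gets tick
  liftIO $ putStrLn ("tick " ++ show n)  -- runs immediately, not queued
  -- ... read input, dispatch commands, recurse ...

main :: IO ()
main = do
  _ <- forkIO (evalStateT gameLoop (GameState 0))
  return ()  -- the forked thread dies with main; a real server would block here
```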
Still I think solution 1 (don't store redundant data which can get out
of sync) has a lot to recommend it and solution 2 may be premature
optimization; i.e. implement solution 2, which is a kind of caching of
computed data, only once you've convinced yourself that recalculating
that data every time really is slowing you down.
I think I'll go with solution 1. It's conceptually simpler and if it's
slow I can profile and optimise later, right?
The simplest way to do this is to bundle all your big shared mutable
world into a single MVar. What this amounts to is perfect brute force
serialisation of the actual modification part: i.e. all world
modifications share a global lock. This is easy to implement and easy to
reason about.
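That brute-force scheme amounts to `modifyMVar_` over the whole world; a minimal sketch, with World reduced to a placeholder:

```haskell
import Control.Concurrent.MVar

type World = [String]  -- placeholder for the real world structure

-- every mutation goes through the one MVar, so all world
-- modifications are serialised under a single global lock
updateWorld :: MVar World -> (World -> World) -> IO ()
updateWorld var f = modifyMVar_ var (return . f)

main :: IO ()
main = do
  w <- newMVar []
  updateWorld w ("a player logged in" :)
  readMVar w >>= print
```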
I think that my fear of doing this was again instinctive premature
optimisation.
If that turns out to be too restrictive, then you split up the MVars
into smaller pieces, but then you have to think a bit harder to convince
yourself it is safe.
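One assumed shape that splitting might take: an MVar per zone, so that actions within a zone take only that zone's lock (cross-zone moves would then need a consistent lock ordering to stay deadlock-free, which is exactly the harder reasoning mentioned above):

```haskell
import qualified Data.Map as M
import Control.Concurrent.MVar

type RoomID = String
type Zone   = M.Map RoomID [String]  -- placeholder: player names per room

-- the outer map is read-only; each zone carries its own lock
type World = M.Map String (MVar Zone)

withZone :: World -> String -> (Zone -> IO Zone) -> IO ()
withZone w zid f =
  case M.lookup zid w of
    Nothing  -> return ()         -- unknown zone: nothing to do
    Just var -> modifyMVar_ var f
```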
Another idea that I picked up from the Simple TCP server
(http://sequence.complete.org/node/258) is to have a single thread