Alright, so, they've learnt sooner than I expected lol. Although the game
is really simple sooo, I guess that's to be expected lmao. Apparently 5 of
the 7 players have reached the stalemate; the other two haven't figured it
out yet, but I guess they will eventually, poor bots.

To put it into perspective, here are the initial average rewards over the
first 1,000 trials:

NomicLearningBOT_0: Mean Reward: 3.452.
NomicLearningBOT_1: Mean Reward: 4.984.
NomicLearningBOT_2: Mean Reward: 2.486.
NomicLearningBOT_3: Mean Reward: 5.671.
NomicLearningBOT_4: Mean Reward: 5.311.
NomicLearningBOT_5: Mean Reward: 2.464.
NomicLearningBOT_6: Mean Reward: 3.460.

After 50,000:

NomicLearningBOT_0: Mean Reward: 4.975.
NomicLearningBOT_1: Mean Reward: 4.675.
NomicLearningBOT_2: Mean Reward: 1.061.
NomicLearningBOT_3: Mean Reward: 4.850.
NomicLearningBOT_4: Mean Reward: 4.792.
NomicLearningBOT_5: Mean Reward: 2.580.
NomicLearningBOT_6: Mean Reward: 4.920.

Yeah, they're stuck around 4.8 or so and the max-turns-per-game limit has
started to kick in.

Anyways, all of these results are really messy. I'll make something tidier
and more essay-worthy later; this is just a sort of proof of concept to
myself lol, and the results are interesting and cool.
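For the curious, here's a rough Python sketch of the game loop from my
earlier email (split N points, majority vote, 20 points to win). The
hand-coded "rational" voting and proposing policies are my guess at what
the bots converge to, not the actual NEAT networks, but they show the same
stalemate: nobody votes yes on a proposal that hands someone else the win.

```python
# Minimal sketch of the simplified "pseudonomic" game: each turn one player
# proposes a split of N points (N = number of players); a majority vote
# passes it; first to 20 points wins. The hand-coded policies below stand
# in for the trained bots and illustrate the stalemate.
N_PLAYERS = 7
POINTS_PER_PROPOSAL = N_PLAYERS  # the proposer splits this many points
WIN_AT = 20
MAX_TURNS = 200

def rational_vote(voter, scores, proposal):
    new = [s + p for s, p in zip(scores, proposal)]
    if new[voter] >= WIN_AT:           # it makes me win: yes
        return True
    if any(s >= WIN_AT for s in new):  # it makes someone else win: no
        return False
    return proposal[voter] > 0         # otherwise: yes iff I gain points

def play():
    scores = [0] * N_PLAYERS
    for turn in range(MAX_TURNS):
        proposer = turn % N_PLAYERS
        # greedy proposal: one point to everyone (bribe the majority)...
        proposal = [1] * N_PLAYERS
        # ...unless the proposer could win outright, then grab everything
        if scores[proposer] + POINTS_PER_PROPOSAL >= WIN_AT:
            proposal = [0] * N_PLAYERS
            proposal[proposer] = POINTS_PER_PROPOSAL
        votes = sum(rational_vote(v, scores, proposal)
                    for v in range(N_PLAYERS))
        if votes > N_PLAYERS // 2:
            scores = [s + p for s, p in zip(scores, proposal)]
        if any(s >= WIN_AT for s in scores):
            return scores, turn, True
    return scores, MAX_TURNS, False
```

Running `play()` with these policies, everyone climbs in lockstep until a
winning grab becomes possible, every such grab gets vetoed 6-to-1, and the
game runs out the max-turns clock with no winner, which matches what the
bots settle into.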

On Tue, Feb 26, 2019 at 7:33 PM Cuddle Beam <> wrote:

> If anyone is interested, I’ve simplified the pseudonomic game to just
> turns of proposing an assignment of N points, where N is the number of
> players, and with 20 points you win. It emulates how in a dynastic (or
> here, even) nomic it’s super common for win conditions to be having enough
> of a token. And there is no social “rule inertia” where you have to
> respect previous rules/connotations; the power of a proposal is to change
> anything (within what’s “dynastic”, i.e. the basic “nomic”, like the
> proposing mechanism, doesn’t change), so if you want to keep a previous
> split of points, it’s only because you so choose.
> It’s running on my computer right now, let’s see what happens.
> I suspect they’ll eventually (learn to) eternally stalemate, because the
> winning move needs the majority to agree to it, which they won’t (once
> rational enough), because it will make them lose. So nobody ever wins.
> On Wed, 20 Feb 2019 at 10:50, Cuddle Beam <> wrote:
>> For a good while now I've wanted to figure out a way to have little
>> machine-learning bots play nomic and learn and improve at the game to see
>> what kind of emergent strategies they develop. Problem is, real nomic is
>> real fucking complicated lmao.
>> So, I figured I could try to simplify it somehow. Also, there's a lot of
>> premade neural network code out there which looks real cool and makes all
>> of this less tedious lol. I had in mind to use NEAT (NeuroEvolution of
>> Augmenting Topologies).
>> Anyways, instead of trying to make the little robot players compete at a
>> game of nomic that resembles code, like Nomyx was, I'd simplify it to a
>> sort of grid-based Pachinko (via Unity for its physics); let's call it
>> Pachinkonomic. Also, in order to get generations and such, I'd make the
>> Pachinkonomic "dynastic" and make the game end once a player has "won".
>> Balls would fall from the top (randomly maybe?) and the players would
>> each have a cup at the bottom. Once a player has enough balls to win, the
>> game restarts, new population, yadda yadda. (I'd probably need to have a
>> lot of "tables" of play too, where the players can sit for a game of
>> Pachinkonomic, to get big enough populations... Although my computer is a
>> bit of a wuss and having so much physics going on at the same time might
>> give it a stroke, so I might simplify the Pachinko to something else
>> lmao, we'll see.)
>> Each turn, a player would propose a change to the pins in the grid,
>> either removing or adding any amount of pins, in pantomime of how we can
>> change pretty much anything in a nomic as well. The players "see" this
>> proposal (input neurons on each point of the grid) and vote on whether to
>> pass it or not. And right after, pachinko balls fall and all players get
>> a payout.
>> To avoid possible bias based on where on the bottom of the pachinko board
>> the player's cup is, the pachinko board would be cylindrical. That way,
>> all players start in pretty much the same initial conditions, except for
>> what kind of player they got next to them, but they're blind to that
>> anyways.
>> The balls emulate the super common notion that there's "wealth" in a game
>> of nomic, and with enough wealth, you win.
>> So yeah. A bunch of robotic players trying to control a common Pachinko
>> board (that emulates a real fucking simple "nomic") to get the highest
>> payoff. Also, Pachinko is easily very visual which is real nice too.
>> What do you think?
