This is an automated email from the ASF dual-hosted git repository.

jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
     new 005f677  setup.sh and fix visualization in dqn_run_test.py (#11051)
005f677 is described below

commit 005f67759fac7bcf451e31b42c30b6c6ca24586a
Author: Pedro Larroy <928489+lar...@users.noreply.github.com>
AuthorDate: Thu May 31 03:36:29 2018 +0900

    setup.sh and fix visualization in dqn_run_test.py (#11051)
    
    fix type error: type of action needs to be int
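
Illustrative sketch (not part of the commit): why the int() cast is needed.
Assuming nd refers to mxnet.ndarray as in dqn_run_test.py, argmax_channel()
returns a float NDArray, so asscalar() yields a numpy float, while the game
step expects a plain integer action index.

    from mxnet import nd

    q_values = nd.array([[0.1, 0.9, 0.3]])          # one row of Q-values
    raw = nd.argmax_channel(q_values).asscalar()    # numpy.float32, e.g. 1.0
    action = int(raw)                               # plain int, usable as an
                                                    # action index
    print(type(raw), type(action))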
---
 example/reinforcement-learning/dqn/README.md       | Bin 2146 -> 2230 bytes
 example/reinforcement-learning/dqn/dqn_run_test.py |   8 +++++---
 example/reinforcement-learning/dqn/setup.sh        |   7 ++++++-
 3 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/example/reinforcement-learning/dqn/README.md b/example/reinforcement-learning/dqn/README.md
index fd32667..4547904 100644
Binary files a/example/reinforcement-learning/dqn/README.md and b/example/reinforcement-learning/dqn/README.md differ
diff --git a/example/reinforcement-learning/dqn/dqn_run_test.py b/example/reinforcement-learning/dqn/dqn_run_test.py
old mode 100644
new mode 100755
index 2abf273..e8f36b9
--- a/example/reinforcement-learning/dqn/dqn_run_test.py
+++ b/example/reinforcement-learning/dqn/dqn_run_test.py
@@ -1,3 +1,5 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
 # Licensed to the Apache Software Foundation (ASF) under one
 # or more contributor license agreements.  See the NOTICE file
 # distributed with this work for additional information
@@ -89,8 +91,8 @@ def calculate_avg_reward(game, qnet, test_steps=125000, exploartion=0.05):
                     current_state = game.current_state()
                     state = nd.array(current_state.reshape((1,) + current_state.shape),
                                      ctx=qnet.ctx) / float(255.0)
-                    action = nd.argmax_channel(
-                        qnet.forward(is_train=False, data=state)[0]).asscalar()
+                    action = int(nd.argmax_channel(
+                        qnet.forward(is_train=False, data=state)[0]).asscalar())
             else:
                 action = npy_rng.randint(action_num)
 
@@ -120,7 +122,7 @@ def main():
                         help='Running Context. E.g `-c gpu` or `-c gpu1` or `-c cpu`')
     parser.add_argument('-e', '--epoch-range', required=False, type=str, default='22',
                         help='Epochs to run testing. E.g `-e 0,80`, `-e 0,80,2`')
-    parser.add_argument('-v', '--visualization', required=False, type=int, default=0,
+    parser.add_argument('-v', '--visualization', action='store_true',
                         help='Visualize the runs.')
     parser.add_argument('--symbol', required=False, type=str, default="nature",
                         help='type of network, nature or nips')
diff --git a/example/reinforcement-learning/dqn/setup.sh b/example/reinforcement-learning/dqn/setup.sh
index 3fcfacb..3069fef 100755
--- a/example/reinforcement-learning/dqn/setup.sh
+++ b/example/reinforcement-learning/dqn/setup.sh
@@ -22,9 +22,14 @@ set -x
 
 pip install opencv-python
 pip install scipy
+pip install pygame
 
 # Install arcade learning environment
-sudo apt-get install libsdl1.2-dev libsdl-gfx1.2-dev libsdl-image1.2-dev cmake
+if [[ "$OSTYPE" == "linux-gnu" ]]; then
+    sudo apt-get install libsdl1.2-dev libsdl-gfx1.2-dev libsdl-image1.2-dev cmake
+elif [[ "$OSTYPE" == "darwin"* ]]; then
+    brew install sdl sdl_image sdl_mixer sdl_ttf portmidi
+fi
 git clone g...@github.com:mgbellemare/Arcade-Learning-Environment.git || true
 pushd .
 cd Arcade-Learning-Environment

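Illustrative sketch (not from the patch) of the argparse change above: with
action='store_true' the --visualization option becomes a boolean flag, so it
is passed without a value instead of as an integer.

    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument('-v', '--visualization', action='store_true',
                        help='Visualize the runs.')

    print(parser.parse_args([]).visualization)       # False (flag omitted)
    print(parser.parse_args(['-v']).visualization)   # True  (no value needed)
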
-- 
To stop receiving notification emails like this one, please contact
j...@apache.org.
