Hi,

This is not about training time, but about the mini_batch size for prediction.

I measured the time for one batch and the time per position.

mini_batch   one batch        one position  Memory required (Caffe's log)
    1       0.002330 sec       2.33ms          4435968 (4.2MB)
    2       0.002440 sec       1.22ms
    4       0.002608 sec       0.65ms
    5       0.002717 sec       0.54ms         22179840 (21.2MB)
    6       0.003915 sec       0.65ms         26615808
    7       0.004107 sec       0.58ms         31051776
    8       0.004141 sec       0.51ms         35487744
   16       0.007400 sec       0.46ms         70975488
   32       0.012268 sec       0.38ms        141950976
   64       0.023358 sec       0.36ms        283901952
  128       0.044951 sec       0.35ms        567803904
  256       0.088478 sec       0.34ms       1135607808
  512       0.175728 sec       0.34ms       2271215616
 1024       0.352346 sec       0.34ms       4542431232
 2048       (error: out of memory)         9084862464 (8.5GB)
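The per-position column is just the per-batch time divided by mini_batch;
a quick check in Python, with a few per-batch times copied from the table
above (the printed values may round slightly differently from the table):

```python
# Per-position time = per-batch time / mini_batch size.
# Per-batch times (seconds) taken from the measurements above.
batch_times = {1: 0.002330, 8: 0.004141, 32: 0.012268, 1024: 0.352346}

for n, t in sorted(batch_times.items()):
    print(f"mini_batch={n:5d}: {t / n * 1000.0:.2f} ms/position")
```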


The per-batch time is nearly constant up to mini_batch = 5, then starts to grow.
At mini_batch = 32 and above, the per-position time stays flat at about 0.34 ms,
and mini_batch = 2048 fails with an out-of-memory error.
I don't know how the learning speed changes with the mini-batch size.

I think this result depends on the GPU and the DCNN size.
Perhaps the per-batch speed also depends on the CPU and the GPU memory bus speed?
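The memory column in the table scales exactly linearly: each row is
mini_batch times the mini_batch=1 figure of 4,435,968 bytes. A minimal
check of that pattern:

```python
# Caffe's reported memory grows linearly with mini_batch:
# mini_batch * 4,435,968 bytes (the mini_batch=1 figure from the log).
PER_POSITION_BYTES = 4435968

def required_bytes(mini_batch):
    return mini_batch * PER_POSITION_BYTES

for n in (5, 8, 2048):
    print(f"mini_batch={n:5d}: {required_bytes(n):>11d} bytes")
```

The computed values match the logged ones, e.g. 35,487,744 bytes for
mini_batch = 8 and 9,084,862,464 bytes (8.5GB) for mini_batch = 2048,
which is well beyond the 4GB card.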

The DCNN has 12 layers: one 5x5 layer with 128 filters, then eleven 3x3 layers with 128 filters.
The input has two channels, black and white stones.
Training used 15.6 million positions from GoGoD; accuracy is 41.5%.
batch_size=256, 700,000 iterations (11.5 epochs), 106 hours.
The *.caffemodel file is 5.6MB. GPU is a GTX 980 with 4GB.
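As a rough sanity check on the model size, one can count just the
convolution weights of the layers as listed. This ignores biases and
whatever output layer the network uses, since those are not described
here (that is my assumption, not something stated above):

```python
# Rough weight count for the 12-layer DCNN described above.
# Layer 1: 5x5 conv, 2 input channels -> 128 filters.
# Layers 2-12: 3x3 conv, 128 -> 128 filters, 11 layers.
first = 5 * 5 * 2 * 128          # first-layer weights
rest = 11 * (3 * 3 * 128 * 128)  # remaining 3x3 layers
total = first + rest
print(total, "weights ~", total * 4 / 1e6, "MB at float32")
```

This comes out around 1.63 million weights, about 6.5MB at float32, the
same ballpark as the reported 5.6MB file; the gap presumably depends on
how the output layer is actually structured.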

Regards,
Hiroshi Yamashita

_______________________________________________
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go