On Tue, 6 Mar 2018 12:52:14, Robert Kern wrote:
> I would just recommend using one of the codebases to initialize the
> network, save the network out to disk, and load up the initialized network
> in each of the different codebases for training. That way you are sure that
> they are both starting f
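A minimal sketch of that save-then-load approach in NumPy (the file name, parameter names, and shapes here are illustrative, not from the thread):

```python
import numpy as np

# Initialize parameters once, in one codebase, from a fixed seed.
rng = np.random.RandomState(0)

# Hypothetical two-layer network parameters (shapes chosen for illustration).
params = {
    "W1": rng.randn(4, 3) * 0.01,
    "b1": np.zeros(3),
    "W2": rng.randn(3, 2) * 0.01,
    "b2": np.zeros(2),
}

# Save the initialized parameters to disk.
np.savez("init_params.npz", **params)

# Each codebase then loads the identical starting point before training.
loaded = np.load("init_params.npz")
same = all(np.array_equal(params[k], loaded[k]) for k in params)
```

Both implementations then begin training from bit-identical weights, so any later divergence comes from the training code itself rather than the initialization.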
On Wed, Mar 7, 2018 at 1:10 PM, Marko Asplund wrote:
>
> However, the results look very different when using random initialization.
> With respect to exact cost this is of course expected, but what I find
> troublesome is that after N training iterations the cost starts approaching
> zero with the Num