>
> Aha - it makes complete sense that the Python workload would show up
> somewhere in the profiler. I suppose I wasn't expecting it in exception.jl,
> but if that is where the work happens that is fine.
It's an artifact of macro expansion not tracking where the expanded code
came from
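A rough Python analogy of that attribution effect (this is not PyCall itself, just a stand-in wrapper): when real work runs inside a wrapper function, the profiler charges the time (cumulatively) to the wrapper's own line, just as macro-expanded pycall work shows up at exception.jl:78.

    import cProfile, io, pstats, time

    def heavy():
        # stand-in for the actual Python workload
        time.sleep(0.05)

    def wrapped_call(f):
        # stand-in for the macro-expanded call site: time spent in f
        # is also attributed (cumulatively) to this wrapper's line
        return f()

    pr = cProfile.Profile()
    pr.enable()
    wrapped_call(heavy)
    pr.disable()

    buf = io.StringIO()
    pstats.Stats(pr, stream=buf).print_stats()
    print("wrapped_call" in buf.getvalue())  # True

So seeing the hot line inside the wrapper (or inside exception.jl) just means the work is routed through it, not that the wrapper itself is the cost.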
Aha - it makes complete sense that the Python workload would show up
somewhere in the profiler. I suppose I wasn't expecting it in exception.jl,
but if that is where the work happens that is fine.
I'm copying the results from TF into my memory - if TF doesn't create new
ones every time I can
>
> When passing arrays to Python, the PyCall default is already to use NumPy
> wrappers that pass the data without copying.
Yes, sorry for being unclear. My point was that this wrapper function
('tfJuliaInterface.pass_image_to_ff') might not be taking advantage of the
existing NumPy-based no-copy wrappers.
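The no-copy default mentioned above can be checked from the NumPy side: np.shares_memory reports whether two arrays use the same underlying buffer, which is a quick way to tell a zero-copy wrapper from a fresh copy.

    import numpy as np

    a = np.arange(12, dtype=np.float64).reshape(3, 4)

    view = np.asarray(a)           # wraps the same buffer: no copy
    copy = np.array(a, copy=True)  # forces a fresh allocation

    print(np.shares_memory(a, view))  # True
    print(np.shares_memory(a, copy))  # False

The same check applies to any array handed across the Julia/Python boundary, as long as you can get both handles on the Python side.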
On Friday, April 15, 2016 at 12:14:53 AM UTC-4, Isaiah wrote:
>
> Your profiling result is not necessarily unreasonable. The listed line
> number (exception.jl:78) is where the macro-wrapped code is actually
> executed, and "pass_image_to_ff" sounds like it could be expensive.
>
> Is the Julia-PyCall-TF wrapper copying data in that function?
Your profiling result is not necessarily unreasonable. The listed line
number (exception.jl:78) is where the macro-wrapped code is actually
executed, and "pass_image_to_ff" sounds like it could be expensive.
Is the Julia-PyCall-TF wrapper copying data in that function? If so, then
it may be
The relevant lines are:
    pycall(tfJuliaInterface.pass_image_to_ff, Void,
           weights, means, covs,
           model.sess, model.deepdrive, model.input_tensor)
pyerr_check("pass image to ff")
def pass_image_to_ff(weights, means, covs, sess, NN, image):
feed = {NN.input_image: image,
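On the copying question above: a quick way to see why a per-call copy matters for image-sized arrays is to time a no-copy wrap against an explicit copy (array size and iteration count here are illustrative, not taken from the actual model).

    import numpy as np, time

    img = np.random.rand(512, 512)   # illustrative image-sized array (~2 MB)

    t0 = time.perf_counter()
    for _ in range(200):
        np.asarray(img)              # no-copy wrap of an existing ndarray
    t_nocopy = time.perf_counter() - t0

    t0 = time.perf_counter()
    for _ in range(200):
        img.copy()                   # fresh allocation + memcpy each call
    t_copy = time.perf_counter() - t0

    print(t_copy > t_nocopy)

If the wrapper (or the feed_dict mechanism itself) forces a copy on every frame, that overhead scales with image size and call frequency, which would match the profile.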