Hello!

I am using an RK3399 Firefly board with LLVM 8.0.0 and Ubuntu 18.04.
I ran VGG16 on the board using the following code.

    import tvm
    from tvm import te
    import tvm.relay as relay
    from tvm.contrib import graph_runtime
    import numpy as np
    import topi
    from tvm.relay import testing

    # Build targets: Mali GPU kernels via OpenCL, host code for aarch64
    target_host = 'llvm -target=aarch64-linux-gnu'
    target_mali_gpu = tvm.target.mali()
    ctx_mali_gpu = tvm.runtime.cl(0)

    dtype = 'float32'
    batch_size = 1
    data_shape = (1, 3, 224, 224)
    out_shape = (1, 1000)

    # VGG16 workload from relay.testing
    mod, params = relay.testing.vgg.get_workload(
        num_layers=16, batch_size=batch_size, image_shape=(3, 224, 224))
    opt_level = 3

    # Compile for the Mali GPU
    with relay.build_config(opt_level=opt_level):
        graph, lib, params = relay.build_module.build(
            mod, target_mali_gpu, target_host, params=params)

    # Run with random input data on the OpenCL context
    data = tvm.nd.array(
        np.random.uniform(-1, 1, size=data_shape).astype(dtype), ctx_mali_gpu)
    moduleg = graph_runtime.create(graph, lib, ctx_mali_gpu)
    moduleg.set_input("data", data)
    moduleg.set_input(**params)
    moduleg.run()
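
For reference, reading the output back would look roughly like this (a minimal sketch using the `out_shape` and `dtype` defined above; execution never gets this far because of the crash):

    # Sketch: copy the network output to the host and take the top-1 class id
    out = moduleg.get_output(0, tvm.nd.empty(out_shape, dtype))
    print("top-1 class id:", np.argmax(out.asnumpy()))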

When I run the code, the following error occurs:

    error: Compiler frontend failed (error code 59)

    terminate called after throwing an instance of 'dmlc::Error'
      what():  [15:28:19] /home/firefly/Desktop/tvm/src/runtime/workspace_pool.cc:115: Check failed: allocated_.size() == 1 (3 vs. 1) :
    Stack trace:
      [bt] (0) /home/firefly/Desktop/tvm/build/libtvm.so(tvm::runtime::WorkspacePool::Pool::Release(DLContext, tvm::runtime::DeviceAPI*)+0x4a0) [0x7fa74ede20]
      [bt] (1) /home/firefly/Desktop/tvm/build/libtvm.so(tvm::runtime::WorkspacePool::~WorkspacePool()+0x48) [0x7fa74ec8b0]
      [bt] (2) /home/firefly/Desktop/tvm/build/libtvm.so(tvm::runtime::cl::OpenCLThreadEntry::~OpenCLThreadEntry()+0x18) [0x7fa753df10]
      [bt] (3) /lib/aarch64-linux-gnu/libc.so.6(__call_tls_dtors+0x48) [0x7fab874620]

    Aborted (core dumped)

The same code works fine with the older TVM version on Ubuntu 16.04; the problem only appears after updating.
Is this a bug inside TVM, or is there a problem with my code?




