NNVM defines the graph structure and how to write a pass function. What
you referred to includes a concrete definition of the operator interface,
e.g. the LAPACK/BLAS interface.

A similar project is DLPack (https://github.com/dmlc/dlpack); it currently
defines the tensor interface, but not yet the operator level.
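For context, DLPack's tensor interface is essentially a plain struct
describing a memory buffer. A minimal Python sketch of the idea (field
names loosely follow DLPack's DLTensor, but this is an illustration, not
the actual C ABI):

```python
from dataclasses import dataclass
from typing import Tuple

# Simplified sketch of a DLPack-style tensor descriptor.
# Field names loosely follow DLPack's DLTensor; the real thing
# is a C struct shared across frameworks without copying data.
@dataclass
class TensorDesc:
    data: int               # address of the underlying buffer
    device: str             # e.g. "cpu" or "gpu"
    ndim: int
    dtype: str              # e.g. "float32"
    shape: Tuple[int, ...]
    strides: Tuple[int, ...]  # in elements, row-major here

t = TensorDesc(data=0, device="cpu", ndim=2, dtype="float32",
               shape=(2, 3), strides=(3, 1))
```

Because it is just a description of memory, two frameworks that agree on
this struct can exchange tensors without any copy or conversion.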

A complete DL/ML stack will include

1. Frontend that users will see
2. Computation graph IR, a serializable format to define the computation
flow, such as c = a*b+1
3. Operator IR, a common operator interface, e.g. +: a:tensor, b:tensor,
res:tensor
4. Compilers to optimize these IRs
5. Executor to run the workloads
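To make layers 2, 3, and 5 concrete, here is a toy sketch (all names made
up for illustration; not the actual NNVM/MXNet APIs) of a graph IR for
c = a*b + 1, a common operator interface, and a trivial executor:

```python
# Layer 3: operator IR -- a common interface, e.g. +: a, b -> res.
OPS = {
    "mul": lambda a, b: a * b,
    "add": lambda a, b: a + b,
}

# Layer 2: computation graph IR for c = a*b + 1, as a list of nodes.
# Each node is (output name, op name, input names). Since it is plain
# data, it is trivially serializable (e.g. to JSON).
GRAPH = [
    ("t0", "mul", ("a", "b")),
    ("c",  "add", ("t0", "one")),
]

# Layer 5: a trivial executor that walks the graph in topological order.
def run(graph, inputs):
    env = dict(inputs)
    for out, op, args in graph:
        env[out] = OPS[op](*(env[x] for x in args))
    return env

env = run(GRAPH, {"a": 3.0, "b": 4.0, "one": 1.0})
# env["c"] == 13.0
```

The point of splitting the layers this way is that any frontend can emit
the same node list, and any backend that implements the operator
interface can execute it.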

Currently we have

1. apache/incubator-mxnet
2. dmlc/NNVM
3. dmlc/DLPack, dmlc/TVM/topi
4. Passes in dmlc/NNVM, dmlc/TVM
5. Executor in apache/incubator-mxnet
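On the "pass function" point: a compiler pass is just a graph-to-graph
transformation. A toy constant-folding pass over a node list of
(output, op, inputs) triples might look like this (illustrative only;
NNVM's real pass API differs):

```python
# Toy graph-to-graph pass: fold away operations whose inputs are all
# known constants. Illustrative sketch, not NNVM's actual pass API.
OPS = {"mul": lambda a, b: a * b, "add": lambda a, b: a + b}

def fold_constants(graph, consts):
    consts = dict(consts)   # known constant values, e.g. {"one": 1.0}
    folded = []
    for out, op, args in graph:
        if all(x in consts for x in args):
            # All inputs known: evaluate now, drop the node.
            consts[out] = OPS[op](*(consts[x] for x in args))
        else:
            folded.append((out, op, args))
    return folded, consts

graph = [("t0", "mul", ("a", "b")), ("c", "add", ("t0", "one"))]
new_graph, consts = fold_constants(graph, {"a": 2.0, "b": 3.0, "one": 1.0})
# With all inputs constant, the whole graph collapses:
# new_graph == [] and consts["c"] == 7.0
```

Real passes (shape inference, memory planning, operator fusion) have the
same shape: graph in, graph out, which is why they compose well.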

On Wed, Aug 23, 2017 at 1:54 PM, Markus Weimer <[email protected]> wrote:

> Warming up this thread :)
>
> I thought a bit about this and wonder: Would a shared type system of
> all the ML projects in the ASF be a good place to start?
>
> Many data science projects I observe seem to be using several of our
> tools, mixed with other open source and proprietary software. This
> introduces all sorts of inefficiencies and breakages, many of which
> have their root cause in the different type systems used (e.g., "wait,
> -1 means missing value here? I thought we used 0??").
>
> If we could agree on a type system here in the ASF, it could provide a
> north star for the community at large. Maybe we can even collaborate
> with Apache Avro to make sure that the whole type system has a defined
> serialized form.
>
> WDYT? Do you share these observations? Is a shared type
> system feasible and interesting? If so, we can start a cross-project
> thread on it.
>
> Thanks,
>
> Markus
>
