[GitHub] [incubator-tvm] srkreddy1238 commented on a change in pull request #5617: [TENSORFLOW]StatefulPartitionedCall/PartitionedCall Ops support added

2020-05-29 Thread GitBox


srkreddy1238 commented on a change in pull request #5617:
URL: https://github.com/apache/incubator-tvm/pull/5617#discussion_r432813198



##
File path: python/tvm/relay/frontend/tensorflow.py
##
@@ -2895,16 +2909,25 @@ def _parse_import_prerequisites(self, graph):
         which are not supported
         """
         missing_operators = set()
+        from tensorflow.python.framework import op_def_registry
         for node in graph.node:
+            getOpDef = op_def_registry._registered_ops.get if hasattr(op_def_registry,\
+                "_registered_ops") else op_def_registry.get
+            op_def = getOpDef(node.op)
             if node.op == "Placeholder" or node.op == 'PlaceholderWithDefault':
                 pass
             elif node.op == "Const":
                 pass
+            elif node.op in ["PartitionedCall", "StatefulPartitionedCall"]:
+                pass
             else:
                 if any([node.op in t for t in [_identity_list, _convert_map,
                                                _convert_map_rnn,
                                                _control_flow_nodes]]):
                     pass
+                elif op_def is not None and op_def.is_stateful:
+                    self._main_graph_proto._stateful_ops_list.append(node.op)

Review comment:
   No need for another list. If needed, you may append "(StatefulOperator)" to
   node.op.
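
   For illustration, a minimal sketch of that suggestion (the helper and its
   parameters are illustrative, not the PR's actual code; it assumes the op
   lookup already done in _parse_import_prerequisites):

       def collect_missing_ops(graph, get_op_def, supported):
           # One set holds every unsupported op; a stateful op just carries
           # an annotation on its name instead of going into a second list.
           missing_operators = set()
           for node in graph.node:
               if node.op in supported:
                   continue
               op_def = get_op_def(node.op)
               if op_def is not None and op_def.is_stateful:
                   missing_operators.add("{} (StatefulOperator)".format(node.op))
               else:
                   missing_operators.add(node.op)
           return missing_operators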









[GitHub] [incubator-tvm] srkreddy1238 commented on a change in pull request #5617: [TENSORFLOW]StatefulPartitionedCall/PartitionedCall Ops support added

2020-05-29 Thread GitBox


srkreddy1238 commented on a change in pull request #5617:
URL: https://github.com/apache/incubator-tvm/pull/5617#discussion_r432813134



##
File path: python/tvm/relay/frontend/tensorflow.py
##
@@ -2773,6 +2774,12 @@ def from_tensorflow(self, graph, layout="NHWC", shape=None, outputs=None):
         if freezed_ops:
             raise Exception("Graph is not frozen. Provide a frozen graph. "
                             "Found operators {}".format(freezed_ops))
+        stateful_ops = [op for op in missing_operators
+                        if op in self._main_graph_proto._stateful_ops_list]
+        if stateful_ops:
+            raise Exception("Found stateful operators in this graph {}. " \

Review comment:
   Again, this raises an exception listing only the stateful missing ops
   (it doesn't show the normal missing ops). I don't think we need a
   separate list for stateful missing ops. Just add them to the
   missing_operators list and display everything as one report.
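
   A minimal sketch of that consolidated report (the message text is
   illustrative, and it assumes the op names were already annotated as
   suggested above):

       def report_missing(missing_operators):
           # Single consolidated error: stateful ops appear alongside the
           # rest, already tagged with "(StatefulOperator)".
           if missing_operators:
               raise NotImplementedError(
                   "The following operators are not implemented: {}"
                   .format(missing_operators))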









[GitHub] [incubator-tvm] srkreddy1238 commented on a change in pull request #5617: [TENSORFLOW]StatefulPartitionedCall/PartitionedCall Ops support added

2020-05-28 Thread GitBox


srkreddy1238 commented on a change in pull request #5617:
URL: https://github.com/apache/incubator-tvm/pull/5617#discussion_r431629053



##
File path: python/tvm/relay/frontend/tensorflow.py
##
@@ -2896,15 +2903,29 @@ def _parse_import_prerequisites(self, graph):
         """
         missing_operators = set()
         for node in graph.node:
+            try:
+                from tensorflow.python.framework import op_def_registry
+            except ImportError as e:
+                raise ImportError(
+                    "Unable to import tensorflow which is required {}".format(e))
+            getOpDef = op_def_registry._registered_ops.get if hasattr(op_def_registry,\
+                "_registered_ops") else op_def_registry.get
+            op_def = getOpDef(node.op)
             if node.op == "Placeholder" or node.op == 'PlaceholderWithDefault':
                 pass
             elif node.op == "Const":
                 pass
+            elif node.op in ["PartitionedCall", "StatefulPartitionedCall"]:
+                pass
             else:
                 if any([node.op in t for t in [_identity_list, _convert_map,
                                                _convert_map_rnn,
                                                _control_flow_nodes]]):
                     pass
+                elif op_def is not None and op_def.is_stateful:
+                    raise Exception("Found a stateful operator in this graph {}. "\

Review comment:
   The objective of this function is to give a consolidated list of
   unsupported ops. Any approach is fine as long as that purpose is met,
   i.e. it shows all unsupported ops instead of raising on the first one.









[GitHub] [incubator-tvm] srkreddy1238 commented on a change in pull request #5617: [TENSORFLOW]StatefulPartitionedCall/PartitionedCall Ops support added

2020-05-28 Thread GitBox


srkreddy1238 commented on a change in pull request #5617:
URL: https://github.com/apache/incubator-tvm/pull/5617#discussion_r431620598



##
File path: python/tvm/relay/frontend/tensorflow.py
##
@@ -2896,15 +2903,29 @@ def _parse_import_prerequisites(self, graph):
         """
         missing_operators = set()
         for node in graph.node:
+            try:
+                from tensorflow.python.framework import op_def_registry

Review comment:
   Hence we don't need this try; we can join it with the other tf imports
   above :)
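
   A minimal sketch of hoisting it next to the other TF imports, with a
   version-tolerant lookup (the helper name _get_op_def is illustrative):

       try:
           from tensorflow.python.framework import op_def_registry
       except ImportError:
           op_def_registry = None

       def _get_op_def(op_name):
           # Works on both TF 1.x and 2.x: 1.x exposes the private
           # _registered_ops dict, 2.x exposes a public get() function.
           if op_def_registry is None:
               return None
           if hasattr(op_def_registry, "_registered_ops"):
               return op_def_registry._registered_ops.get(op_name)
           return op_def_registry.get(op_name)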









[GitHub] [incubator-tvm] srkreddy1238 commented on a change in pull request #5617: [TENSORFLOW]StatefulPartitionedCall/PartitionedCall Ops support added

2020-05-22 Thread GitBox


srkreddy1238 commented on a change in pull request #5617:
URL: https://github.com/apache/incubator-tvm/pull/5617#discussion_r429331590



##
File path: python/tvm/relay/frontend/tensorflow.py
##
@@ -2896,15 +2903,29 @@ def _parse_import_prerequisites(self, graph):
         """
         missing_operators = set()
         for node in graph.node:
+            try:
+                from tensorflow.python.framework import op_def_registry

Review comment:
   Can you confirm whether op_def_registry is part of all TF versions?
   Please confirm. If it is not in 1.x, we shouldn't error out, as the
   frontend is currently compatible with TF 1.x.

##
File path: tests/python/frontend/tensorflow/test_forward.py
##
@@ -3179,10 +3183,342 @@ def test_forward_isfinite():
     _verify_infiniteness_ops(tf.is_finite, "isfinite")


+def _test_spop_placeholder_one():
+    with tf.Graph().as_default():

Review comment:
   I'd advise using descriptive names instead of _one / _two, etc.
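
   A hypothetical rename along those lines (the names are illustrative;
   pick ones that say what each case actually exercises):

       def _test_spop_placeholder_without_shape_info():
           pass  # was _test_spop_placeholder_one

       def _test_spop_placeholder_with_default_value():
           pass  # was _test_spop_placeholder_two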
   

##
File path: python/tvm/relay/frontend/tensorflow.py
##
@@ -3155,6 +3176,91 @@ def _convert_control_flow_operator(self, node, inputs, attrs, control_flow_node_

         return op

+    def _partition_call_operator(self, inputs, attr):
+        """
+        Convert the Relay Partition call ops into Relay Function calls and
+        function definitions from Tensorflow graph library attribute to Relay
+        global functions
+
+        Parameters
+        ----------
+        node: TensorFlow graph node object.
+            A TensorFlow graph node object.
+
+        inputs : List[tvm.relay.Expr]
+            List of input symbols.
+
+        attrs : Dict[tvm.Attrs]
+            Dict of operator attributes.
+
+        Returns
+        -------
+        op : tvm.relay.Expr
+            Converted relay expression.
+        """
+
+        try:
+            from tensorflow.python.framework import function_def_to_graph
+        except ImportError as e:
+            raise ImportError(
+                "Unable to import tensorflow which is required {}".format(e))
+
+        main_graph_proto = self._main_graph_proto
+        outer_graph_def = main_graph_proto._graph
+
+        node_func_name = attr.get('f').name
+        func = next((f for f in outer_graph_def.library.function
+                     if f.signature.name == node_func_name), None)
+        if func:
+            devices = set(node.device for node in func.node_def)
+            if len(devices) > 1:
+                raise Exception("Found inconsistent Device assignment in the "\
+                                "Stateful Partitioned SubGraph. Rejecting "\
+                                "the subgraph ")
+            # Convert function definition to graph
+            func_input_shapes = func.attr["_input_shapes"].list.shape
+            subgraph, _ = function_def_to_graph.\
+                function_def_to_graph_def(func, func_input_shapes)
+
+            # Computing subgraph's input shape dictionary
+            subgraph_shape_dict, input_expr_dict = {}, {}
+            for f_arg, input in zip(func.signature.input_arg, inputs):
+                input_expr_dict[f_arg.name] = input
+                subgraph_shape_dict[f_arg.name] = _infer_shape(input, main_graph_proto._mod)
+
+            func_name = 'func_{}'.format(func.signature.name)
+            try:
+                global_func = main_graph_proto._mod[func_name]

Review comment:
   Is this the case where the function is called multiple times within a
   graph?
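
   If so, a minimal sketch of the lookup-or-convert pattern the truncated
   try block appears to set up (the cache dict and convert_fn are
   hypothetical, not the PR's actual code):

       _converted_funcs = {}  # hypothetical cache, keyed by function name

       def get_or_convert(func_name, convert_fn):
           # Convert the library function on first use; later
           # PartitionedCall nodes that reference the same function reuse
           # the cached Relay function instead of re-converting it.
           if func_name not in _converted_funcs:
               _converted_funcs[func_name] = convert_fn()
           return _converted_funcs[func_name]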

##
File path: python/tvm/relay/frontend/tensorflow.py
##
@@ -2896,15 +2903,29 @@ def _parse_import_prerequisites(self, graph):
         """
         missing_operators = set()
         for node in graph.node:
+            try:
+                from tensorflow.python.framework import op_def_registry
+            except ImportError as e:
+                raise ImportError(
+                    "Unable to import tensorflow which is required {}".format(e))
+            getOpDef = op_def_registry._registered_ops.get if hasattr(op_def_registry,\
+                "_registered_ops") else op_def_registry.get
+            op_def = getOpDef(node.op)
             if node.op == "Placeholder" or node.op == 'PlaceholderWithDefault':
                 pass
             elif node.op == "Const":
                 pass
+            elif node.op in ["PartitionedCall", "StatefulPartitionedCall"]:
+                pass
             else:
                 if any([node.op in t for t in [_identity_list, _convert_map,
                                                _convert_map_rnn,
                                                _control_flow_nodes]]):
                     pass
+                elif op_def is not None and op_def.is_stateful:
+                    raise Exception("Found a stateful operator in this graph {}. "\

Review comment:
   It's still better to add this to the missing-op list (with extended
   info if needed).