Bazel Error Parsing tf.estimator Model
Solution 1:
I ran into this same problem when I was trying to find the input/output nodes of a model I had trained with a custom tf.Estimator. The error comes from the fact that what export_savedmodel produces is a servable (which, as I understand it currently, is a GraphDef plus other metadata) and not simply a GraphDef.
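For context, here is a minimal sketch of the export step that produces such a servable; the toy feature column, estimator, and paths are hypothetical stand-ins for your own model:
# -*- coding: utf-8 -*-
import tensorflow as tf

# Hypothetical toy model; substitute your own estimator.
feature_col = tf.feature_column.numeric_column("x", shape=[4])
estimator = tf.estimator.DNNClassifier(
    feature_columns=[feature_col], hidden_units=[16], n_classes=3)

def input_fn():
    # Dummy data just so a checkpoint exists to export from.
    return {"x": tf.random_normal([8, 4])}, tf.zeros([8], dtype=tf.int32)

estimator.train(input_fn=input_fn, steps=1)

# The receiver fn defines the Placeholder that becomes the input node.
def serving_input_receiver_fn():
    inputs = {"x": tf.placeholder(tf.float32, shape=[None, 4], name="x")}
    return tf.estimator.export.ServingInputReceiver(inputs, inputs)

# Writes saved_model.pb plus a variables/ directory: a servable,
# not a bare GraphDef.
estimator.export_savedmodel("/tmp/exported", serving_input_receiver_fn)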
To find the input and output nodes, you can do the following:
# -*- coding: utf-8 -*-
import tensorflow as tf
from tensorflow.saved_model import tag_constants

with tf.Session(graph=tf.Graph()) as sess:
    gf = tf.saved_model.loader.load(
        sess,
        [tag_constants.SERVING],
        "/path/to/saved/model/")
    nodes = gf.graph_def.node
    print([n.name + " -> " + n.op for n in nodes
           if n.op in ('Softmax', 'Placeholder')])

# ... ['Placeholder -> Placeholder',
#      'dnn/head/predictions/probabilities -> Softmax']
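Alternatively, if your TensorFlow installation includes the saved_model_cli tool that ships with the pip package, you can read the same names from the SignatureDefs without writing any code:
#!/bin/bash
# Prints every SignatureDef with its input/output tensor names,
# dtypes, and shapes.
saved_model_cli show --dir "/path/to/saved/model/" --all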
I used the canned DNNEstimator as well, so the OP's nodes should be the same as mine; for other readers, your operation names may differ from Placeholder and Softmax depending on your classifier.
Now that you have the names of your input/output nodes, you can freeze the graph, which is addressed here:
If you want to work with the values of your trained parameters, for example to quantize weights, you'll need to run tensorflow/python/tools/freeze_graph.py to convert the checkpoint values into embedded constants within the graph file itself.
#!/bin/bash
python ./freeze_graph.py \
--input_graph="/path/to/model/saved_model.pb" \
--input_checkpoint="/MyModel/model.ckpt-xxxx" \
--output_graph="/home/user/pruned_saved_model_or_whatever.pb" \
--input_saved_model_dir="/path/to/model" \
--output_node_names="dnn/head/predictions/probabilities"
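If the freeze succeeds, a quick sanity check is to load the frozen GraphDef and confirm both nodes resolve (a minimal sketch; the file name matches the --output_graph above):
import tensorflow as tf

# Parse the frozen GraphDef written by freeze_graph.py.
graph_def = tf.GraphDef()
with tf.gfile.GFile("/home/user/pruned_saved_model_or_whatever.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")
    # Both tensors should resolve now that the variables are constants.
    print(graph.get_tensor_by_name("Placeholder:0"))
    print(graph.get_tensor_by_name("dnn/head/predictions/probabilities:0"))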
Then, assuming you have graph_transforms built, you can summarize the frozen graph (the build command is sketched below if you still need it).
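If summarize_graph isn't built yet, it has a standard Bazel target in the TensorFlow source tree:
#!/bin/bash
# Build the summarize_graph tool from a TensorFlow source checkout.
bazel build tensorflow/tools/graph_transforms:summarize_graph
Once built, run it against the frozen graph: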
#!/bin/bash
tensorflow/bazel-bin/tensorflow/tools/graph_transforms/summarize_graph \
--in_graph=pruned_saved_model_or_whatever.pb
Output:
Found 1 possible inputs: (name=Placeholder, type=string(7), shape=[?])
No variables spotted.
Found 1 possible outputs: (name=dnn/head/predictions/probabilities, op=Softmax)
Found 256974297 (256.97M) const parameters, 0 (0) variable parameters, and 0
control_edges
Op types used: 155 Const, 41 Identity, 32 RegexReplace, 18 Gather, 9 StridedSlice,
9 MatMul, 6 Shape, 6 Reshape, 6 Relu, 5 ConcatV2, 4 BiasAdd, 4 Add, 3 ExpandDims,
3 Pack, 2 NotEqual, 2 Where, 2 Select, 2 StringJoin, 2 Cast, 2 DynamicPartition,
2 Fill, 2 Maximum, 1 Size, 1 Unique, 1 Tanh, 1 Sum, 1 StringToHashBucketFast,
1 StringSplit, 1 Equal, 1 Squeeze, 1 Square, 1 SparseToDense, 1 SparseSegmentSqrtN,
1 SparseFillEmptyRows, 1 Softmax, 1 FloorDiv, 1 Rsqrt, 1 FloorMod, 1 HashTableV2,
1 LookupTableFindV2, 1 Range, 1 Prod, 1 Placeholder, 1 ParallelDynamicStitch,
1 LookupTableSizeV2, 1 Max, 1 Mul
To use with tensorflow/tools/benchmark:benchmark_model try these arguments:
bazel run tensorflow/tools/benchmark:benchmark_model -- \
--graph=pruned_saved_model_or_whatever.pb --show_flops \
--input_layer=Placeholder --input_layer_type=string \
--input_layer_shape=-1 \
--output_layer=dnn/head/predictions/probabilities
Hope this helps.
Update (2018-12-03):
A related GitHub issue I opened seems to be resolved in a detailed blog post, which is linked at the end of the ticket.