agnesnatasya opened a new pull request #711: URL: https://github.com/apache/singa/pull/711
Implemented the SqueezeNet model using the SqueezeNet 1.1 model from the ONNX model zoo. Compared with the inference notebook at https://github.com/onnx/models/blob/master/vision/classification/imagenet_inference.ipynb , which resizes the image to 256 and then takes a 224 center crop, I resize the image directly to 224 and feed all 224x224 pixels to the model. The two approaches gave different results, and the second one's output was closer to the expected result, so I chose the second one. Is there any significant impact from this difference?
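
For reference, here is a minimal sketch of the two preprocessing paths being compared. This is not the code from this PR; the helper names and the ImageNet mean/std normalization constants are assumptions.

```python
import numpy as np
from PIL import Image


def normalize(x):
    # Standard ImageNet normalization, then HWC -> NCHW with a batch axis.
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    x = (x / 255.0 - mean) / std
    return x.transpose(2, 0, 1)[np.newaxis, :].astype(np.float32)


def preprocess_resize_only(img, size=224):
    # Approach used in this PR: resize the whole image directly to 224x224,
    # so no pixels are discarded.
    img = img.resize((size, size))
    return normalize(np.array(img, dtype=np.float32))


def preprocess_resize_crop(img, resize=256, crop=224):
    # Approach from the ONNX model zoo notebook: scale the shorter side to
    # 256, then take a 224x224 center crop.
    w, h = img.size
    scale = resize / min(w, h)
    img = img.resize((int(round(w * scale)), int(round(h * scale))))
    w, h = img.size
    left, top = (w - crop) // 2, (h - crop) // 2
    img = img.crop((left, top, left + crop, top + crop))
    return normalize(np.array(img, dtype=np.float32))


# Example usage (image path is hypothetical):
# img = Image.open("cat.jpg").convert("RGB")
# x1 = preprocess_resize_only(img)   # (1, 3, 224, 224)
# x2 = preprocess_resize_crop(img)   # (1, 3, 224, 224)
```

The main difference is that the resize-only path keeps the full field of view but distorts the aspect ratio, while the resize-and-crop path preserves the aspect ratio but drops the image borders, which can shift the model's output slightly.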
