In the code that tests the hybrid ImageNet+Places CNN, change the line

model = VGG16_Hubrid_1365(weights='places', include_top=False)

to

model = VGG16_Hubrid_1365(weights='places', include_top=True)

Then preds.shape is (1365L,) and the script works as intended.
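
Here is a minimal sketch of the corrected call with a quick shape check. It assumes the VGG16_hybrid_places_1365 module from the repository is importable; the zero-filled array is only a stand-in to verify the output shape.

import numpy as np
from VGG16_hybrid_places_1365 import VGG16_Hubrid_1365

# include_top=True keeps the fully connected classifier head, so the model
# returns one score per class (1365 classes for the hybrid model)
model = VGG16_Hubrid_1365(weights='places', include_top=True)

# any preprocessed batch of shape (1, 224, 224, 3) will do for a shape check
x = np.zeros((1, 224, 224, 3), dtype='float32')
preds = model.predict(x)[0]
print(preds.shape)   # expected: (1365,)
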
Note, however, that there is a known issue with the pre-trained weights: the 
top-5 classes returned for different images are practically identical, and 
the top-5 confidence levels are very low (roughly 2-3%). See the "Issues" 
page of the GitHub repository, 
https://github.com/GKalliatakis/Keras-VGG16-places365, for updates.
====================================================================

On Tuesday, April 24, 2018 at 4:57:10 PM UTC-4, [email protected] wrote:

> I am testing a version of a Python script (see below) that I found at 
> https://github.com/GKalliatakis/Keras-VGG16-places365 
> to predict the VGG Places 365 CNN classes and probabilities for several 
> test images.
>
> >             1) I found that the image variable "x" has dimension 4  and 
> shape (1L, 224L, 224L, 3L).
>
> >             2) I found that the "preds" variable has dimension 1 and 
> shape (365L,) so that  np.sort(preds)[::-1][0:5] looked something like [ 
> 0.03043453  0.02591384  0.02299733  0.01979745  0.01885794].
>
> >             *** Are these the predicted probabilities? If so, they are 
> quite small. Please advise.
>
> >             3) I also found that "top_preds" variable has dimension 1 
> and shape (5L,) so that np.argsort(preds)[::-1][0:5] looked something like 
> [236  99  21 317  77]. 
>
> >             *** These appear to be the top-5 predicted classes, wherein 
> the SCENE CATEGORIES are: museum/indoor; coffee_shop; art_studio; 
> staircase; and campus.
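
A quick way to pair each of those indices with its probability is to index 
"preds" with the argsort result; this is only a sketch that reuses the 
"preds" and "classes" variables from the script further below:

top_preds = np.argsort(preds)[::-1][0:5]            # indices of the 5 largest scores
for idx in top_preds:
    print('%s  %.4f' % (classes[idx], preds[idx]))  # class label and its score

The values printed this way are exactly the ones np.sort(preds)[::-1][0:5] 
shows, now matched to their class labels.
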
>
> >             HOWEVER, my image was image_path='tank_desert' and when I 
> change the test image to 'tank_forest' I get similar classes, i.e., [99  
> 236  21  77  362].
>
> >             *** I suspect this test script for the VGG16 Places 365 
> model is hardwired for the jpg image 'restaurant.jpg', as shown in the 
> example on the GitHub web page for the Places 365 CNN.
>
> >             *** Note that when I test the attached 'restaurant.jpg' the 
> top-5 classes are [99, 236, 21, 77, 193] corresponding to the categories 
> coffee_shop, museum/indoor, art_studio, campus, inn/outdoor.
>
> >             Is there a way to change a configuration file and/or part 
> of the .py code to allow for new and different test images?
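
One simple option, sketched below rather than taken from the repository, is 
to read the image path from the command line instead of hard-coding it (the 
script name in the usage comment is made up, and the snippet reuses the 
keras.preprocessing image import from your script):

import sys

# hypothetical usage:  python test_places365.py C:\path\to\tank_forest.jpg
if len(sys.argv) > 1:
    img_path = sys.argv[1]
else:
    img_path = 'restaurant.jpg'   # fall back to the repository's example image

img = image.load_img(img_path, target_size=(224, 224))

With that change, each new test image is just a different command-line 
argument and no configuration file is needed.
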
>
> >             Any comments or suggestions that you may offer would be 
> helpful.
>
> > 
>
> > Best Regards,
>
> > Arnold
>
> > 
>
> > PS: I downloaded the categories list locally to my notebook computer 
> because I was having trouble accessing the file online as the code ran.
>
> > PPS: To follow up... I modified the script for the VGG Places 365 CNN 
> to test the VGG16_hybrid_places_1365 CNN (see further below) and found 
> that the variable "x" again has dimension 4 and shape (1L, 224L, 224L, 
> 3L). I also found that "preds" now has dimension 3 and shape (7L, 7L, 
> 512L) and that "top_preds" has shape (5, 7L, 512L), so I get a very 
> large array of top-5 class predictions.
>
> > 
>
> > *** Why are the prediction arrays a different shape in this case? I 
> expected "preds" to have shape (1365L,) and "top_preds" again to have 
> shape (5L,).
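
The difference comes from the include_top argument: with include_top=False 
the network stops at the last pooling layer and returns a 7x7x512 feature 
map for a 224x224 input, while include_top=True adds the fully connected 
classifier that produces the 1365 class scores. A minimal sketch comparing 
the two, using a zero-filled array purely as a stand-in input:

import numpy as np
from VGG16_hybrid_places_1365 import VGG16_Hubrid_1365

x = np.zeros((1, 224, 224, 3), dtype='float32')   # dummy preprocessed batch

features_only = VGG16_Hubrid_1365(weights='places', include_top=False)
print(features_only.predict(x)[0].shape)   # (7, 7, 512): last pooling-layer features

with_head = VGG16_Hubrid_1365(weights='places', include_top=True)
print(with_head.predict(x)[0].shape)       # (1365,): one score per hybrid class
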
>
> > Again, any comments or suggestions that you may offer would be helpful.
>
> > Best,
>
> > Arnold
>
> > ====================================================================
>
> > # script for the VGG Places 365 CNN
> > import keras
> > import numpy as np
> > import os
> > from VGG16_places_365 import VGG16_Places365
> > from keras.preprocessing import image
> > from places_utils import preprocess_input
> >
> > model = VGG16_Places365(weights='places')
> >
> > img_path = r'C:\Users\atunick\VGG16_hybrid_places_1365\tank_desert.jpg'
> > #img_path = 'tank_desert.jpg'
> > img = image.load_img(img_path, target_size=(224, 224))
> > x = image.img_to_array(img)
> > x = np.expand_dims(x, axis=0)
> > x = preprocess_input(x)
> > print x.ndim, x.shape, 'x'
> >
> > predictions_to_return = 5
> > preds = model.predict(x)[0]
> > print 'preds', preds.ndim, preds.shape, np.sort(preds)[::-1][0:5]
> > top_preds = np.argsort(preds)[::-1][0:predictions_to_return]
> > print 'top_preds', top_preds.ndim, top_preds.shape, top_preds
> >
> > # load the class label
> > file_name = r'C:\Users\atunick\VGG16_hybrid_places_1365\categories_places365.txt'
> > #if not os.access(file_name, os.W_OK):
> > #    synset_url = 'https://raw.githubusercontent.com/CSAILVision/places365/master/categories_places365.txt'
> > #    os.system('wget ' + synset_url)
> > classes = list()
> > with open(file_name) as class_file:
> >     for line in class_file:
> >         classes.append(line.strip().split(' ')[0][3:])
> > classes = tuple(classes)
> > #print classes
> > print('--SCENE CATEGORIES:')
> > # output the prediction
> > for i in range(0, 5):
> >     #print top_preds[i]
> >     print(classes[top_preds[i]])
> > print 'completed'
>
> > 
>
> > 
>
> > =============================================
>
> > # script to test the VGG16_hybrid_places_1365 CNN
> > import keras
> > import numpy as np
> > import os
> > from VGG16_hybrid_places_1365 import VGG16_Hubrid_1365
> > from keras.preprocessing import image
> > from places_utils import preprocess_input
> >
> > model = VGG16_Hubrid_1365(weights='places', include_top=False)
> >
> > img_path = r'C:\Users\atunick\VGG16_hybrid_places_1365\tank_desert.jpg'
> > #img_path = 'tank_desert.jpg'
> > img = image.load_img(img_path, target_size=(224, 224))
> > x = image.img_to_array(img)
> > x = np.expand_dims(x, axis=0)
> > x = preprocess_input(x)
> > print x.ndim, x.shape
> >
> > predictions_to_return = 5
> > preds = model.predict(x)[0]
> > print 'preds', preds.ndim, preds.shape, np.sort(preds)[::-1][0:5]
> > top_preds = np.argsort(preds)[::-1][0:predictions_to_return]
> > print 'top_preds', top_preds.ndim, top_preds.shape, top_preds
> > print '  '
> >
> > # load the class label
> > file_name = r'C:\Users\atunick\VGG16_hybrid_places_1365\categories_hybrid1365.txt'
> > #if not os.access(file_name, os.W_OK):
> > #    synset_url = 'https://raw.githubusercontent.com/CSAILVision/places365/master/categories_hybrid1365.txt'
> > #    os.system('wget ' + synset_url)
> > classes = list()
> > with open(file_name) as class_file:
> >     for line in class_file:
> >         classes.append(line.strip().split(' ')[0][3:])
> > classes = tuple(classes)
> > print('--SCENE CATEGORIES:')
> > # output the prediction
> > for i in range(0, 5):
> >     print(classes[top_preds[i]])
> > print 'completed'
>
> > 
>
