tensorflow.python.framework.errors_impl.InvalidArgumentError: Graph execution error

I am training a model on sign language data. There are 26 classes (one folder per alphabet letter). The training data has 30 images per class and the test data has 10 images per class.

When I try to train the model, I get the following error:

File "c:\Users\Rishith Vadher\Downloads\ml.proj\ml.proj\ml project\newtrain.py", line 66, in <module>     classifier.fit(   File "C:\additionalPackages\envs\miniProject\lib\site-packages\keras\utils\traceback_utils.py", line 70, in error_handler           raise e.with_traceback(filtered_tb) from None   File "C:\additionalPackages\envs\miniProject\lib\site-packages\tensorflow\python\eager\execute.py", line 54, in quick_execute       tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name, tensorflow.python.framework.errors_impl.InvalidArgumentError: Graph execution error:  Detected at node 'categorical_crossentropy/softmax_cross_entropy_with_logits' defined at (most recent call last):     File "c:\Users\Rishith Vadher\Downloads\ml.proj\ml.proj\ml project\newtrain.py", line 66, in <module>       classifier.fit(     File "C:\additionalPackages\envs\miniProject\lib\site-packages\keras\utils\traceback_utils.py", line 65, in error_handler           return fn(*args, **kwargs)     File "C:\additionalPackages\envs\miniProject\lib\site-packages\keras\engine\training.py", line 1564, in fit       tmp_logs = self.train_function(iterator)     File "C:\additionalPackages\envs\miniProject\lib\site-packages\keras\engine\training.py", line 1160, in train_function              return step_function(self, iterator)     File "C:\additionalPackages\envs\miniProject\lib\site-packages\keras\engine\training.py", line 1146, in step_function               outputs = model.distribute_strategy.run(run_step, args=(data,))     File "C:\additionalPackages\envs\miniProject\lib\site-packages\keras\engine\training.py", line 1135, in run_step       outputs = model.train_step(data)     File "C:\additionalPackages\envs\miniProject\lib\site-packages\keras\engine\training.py", line 994, in train_step          [[{{node PyFunc}}]] 

This is my code for training.

# Importing the Keras libraries and packages
from keras.models import Sequential
from keras.layers import Convolution2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense, Dropout
import os

os.environ["CUDA_VISIBLE_DEVICES"] = "1"
sz = 128

# Step 1 - Building the CNN

# Initializing the CNN
classifier = Sequential()

# First convolution layer and pooling
classifier.add(Convolution2D(32, (3, 3), input_shape=(sz, sz, 1), activation='relu'))
classifier.add(MaxPooling2D(pool_size=(2, 2)))
# Second convolution layer and pooling
classifier.add(Convolution2D(32, (3, 3), activation='relu'))  # input_shape is going to be the pooled feature maps from the previous convolution layer
classifier.add(MaxPooling2D(pool_size=(2, 2)))
#classifier.add(Convolution2D(32, (3, 3), activation='relu'))  # input_shape is going to be the pooled feature maps from the previous convolution layer
#classifier.add(MaxPooling2D(pool_size=(2, 2)))

# Flattening the layers
classifier.add(Flatten(input_shape=(sz, sz, 1)))

# Adding fully connected layers
classifier.add(Dense(units=128, activation='relu'))
classifier.add(Dropout(0.40))
classifier.add(Dense(units=96, activation='relu'))
classifier.add(Dropout(0.40))
classifier.add(Dense(units=64, activation='relu'))
classifier.add(Dense(units=27, activation='softmax'))  # softmax for more than 2 classes

# Compiling the CNN
classifier.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])  # categorical_crossentropy for more than 2 classes

# Step 2 - Preparing the train/test data and training the model
classifier.summary()

# Code copied from - https://keras.io/preprocessing/image/
from keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
        rescale=1./255,
        shear_range=0.2,
        zoom_range=0.2,
        horizontal_flip=True)

test_datagen = ImageDataGenerator(rescale=1./255)

training_set = train_datagen.flow_from_directory('data2/train',
                                                 target_size=(sz, sz),
                                                 batch_size=10,
                                                 color_mode='grayscale',
                                                 class_mode='categorical')

test_set = test_datagen.flow_from_directory('data2/test',
                                            target_size=(sz, sz),
                                            batch_size=10,
                                            color_mode='grayscale',
                                            class_mode='categorical')

print(training_set)
classifier.fit(
        training_set,
        steps_per_epoch=783,  # No of images in training set
        epochs=5,
        validation_data=test_set,
        validation_steps=260)  # No of images in test set

# Saving the model
model_json = classifier.to_json()
with open("model-bw.json", "w") as json_file:
    json_file.write(model_json)
print('Model Saved')
classifier.save_weights('model-bw.h5')
print('Weights saved')

Please help with this.

Hi @Rivaa_Vadher_UCOE_38, as you said there are 26 classes, but you have defined 27 neurons in the last Dense layer. The output layer should have as many units as there are classes, i.e. 26.
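
As a minimal sketch of just that change (keeping the rest of your model exactly as it is), the output layer could look like this:

classifier.add(Dense(units=26, activation='softmax'))  # one unit per class, matching the 26 alphabet folders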

You also mentioned that each class has 30 training images and 10 test images. That gives 26 * 30 = 780 training images and 26 * 10 = 260 test images, but in model.fit you have assigned 783 to steps_per_epoch and 260 to validation_steps. These arguments count batches, not images, so with a batch size of 10, steps_per_epoch should be 78 and validation_steps should be 26.
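
As a rough sketch, assuming a batch size of 10 and the image counts above, you could compute these values instead of hard-coding them (len() of a flow_from_directory iterator also returns the number of batches per epoch):

import math

batch_size = 10
num_train_images = 26 * 30   # 780 training images
num_test_images = 26 * 10    # 260 test images

# steps count batches, not images
steps_per_epoch = math.ceil(num_train_images / batch_size)    # 78
validation_steps = math.ceil(num_test_images / batch_size)    # 26

classifier.fit(
        training_set,
        steps_per_epoch=steps_per_epoch,
        epochs=5,
        validation_data=test_set,
        validation_steps=validation_steps)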

Could you please change the code according to the above-mentioned points and let us know whether the error is resolved?

Thank You.
