
Implementation of Deep Convolutional Neural Network in Multiclass Categorical Image Classification
Convolutional Neural Networks have been applied to many complex machine learning tasks such as image classification, object identification, autonomous driving, and robotic vision. However, the efficiency and accuracy of a ConvNet architecture depend on a large number of factors. The complexity of the architecture also requires a significant amount of training data and involves a large number of hyperparameters, which increases the computational expense and difficulty. Hence, it is necessary to address these limitations and the techniques for overcoming them to ensure that the architecture performs well on complex visual tasks. This article develops an efficient ConvNet architecture for multiclass categorical image classification. In developing the architecture, a large pool of greyscale images is taken as input and split into training and test datasets. Widely used regularization techniques are applied to reduce overfitting and poor generalization of the network. The hyperparameters are determined by Bayesian optimization with a Gaussian process prior. The ReLU nonlinear activation function is applied after the convolutional layers. A max pooling operation downsamples the data points in the pooling layers. A cross-entropy loss function measures the performance of the architecture, with softmax used in the classification layer. Minibatch gradient descent with the Adam optimizer is used for backpropagation. The developed architecture is validated with a confusion matrix and a classification report.
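The abstract does not include code, but the building blocks it names (ReLU activations after the convolutional layers, max pooling for downsampling, softmax with a cross-entropy loss, and an Adam update step) can be sketched in plain numpy. This is a minimal illustration of those operations, not the paper's actual implementation; all function names and default hyperparameter values here are assumptions.

```python
import numpy as np

def relu(x):
    # ReLU non-linearity, applied after each convolutional layer
    return np.maximum(0.0, x)

def max_pool2d(x, size=2):
    # Non-overlapping max pooling: downsample an (H, W) feature map
    h, w = x.shape
    x = x[:h - h % size, :w - w % size]
    return x.reshape(h // size, size, w // size, size).max(axis=(1, 3))

def softmax(z):
    # Softmax over class scores in the classification layer
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(probs, label):
    # Cross-entropy loss for a single example with integer class label
    return -np.log(probs[label])

def adam_step(w, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    # One Adam update with bias-corrected first/second moment estimates
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v
```

For example, a 4x4 feature map pooled with `size=2` yields a 2x2 map holding the maximum of each block, and `softmax` turns raw class scores into a probability vector whose entries sum to one before `cross_entropy` scores the true class.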