
Important Terminology of ANN

Artificial Neural Networks (ANNs) are complex systems used in machine learning and artificial intelligence. Here are some important terms related to ANNs:

  1. Activation Function: A mathematical function applied to the output of a neuron, determining its firing or activation level. It introduces non-linearity to the network and enables complex mappings between inputs and outputs. Common activation functions include the sigmoid, ReLU (Rectified Linear Unit), and softmax functions (see the first sketch after this list).
  2. Weights: Each connection between neurons in an ANN is associated with a weight. The weight represents the strength or importance of that connection, indicating how much influence the output of one neuron has on the input of another.
  3. Bias: A bias term is an additional parameter associated with each neuron in an ANN. It allows for the shifting of the activation function's output, providing flexibility in the decision boundary of the network.
  4. Feedforward: The process of propagating input data through the network, layer by layer, from the input layer to the output layer. In feedforward neural networks, information flows in one direction without any loops or feedback connections (a single-layer forward pass is sketched after this list).
  5. Backpropagation: A learning algorithm for adjusting the weights of an ANN based on the difference between the network's predicted output and the desired output. It calculates the gradient of the loss function with respect to the weights and uses this information to update the weights in the network.
  6. Loss Function: A function that measures the dissimilarity between the predicted output of an ANN and the true output. It quantifies the error made by the network and is used to guide the training process, minimizing the error through weight adjustments.
  7. Gradient Descent: An optimization algorithm used in the training of neural networks. It iteratively adjusts the weights in the direction of steepest descent of the loss function, aiming to find the global or local minimum of the error surface (the training-loop sketch after this list shows gradient descent working together with the loss function, epochs, batch size, and learning rate).
  8. Epoch: A single pass of the entire training dataset through an ANN. During one epoch, forward propagation, backpropagation, and weight updates take place.
  9. Batch Size: The number of training examples from the dataset used in a single iteration of the gradient descent algorithm. In each batch, the weights are updated based on the average gradient calculated from the examples in the batch.
  10. Learning Rate: A hyperparameter that determines the step size or rate at which the weights are updated during training. It controls the magnitude of weight adjustments based on the calculated gradient, balancing the speed and accuracy of learning.
  11. Overfitting: A phenomenon in machine learning where an ANN learns to perform well on the training data but fails to generalize to unseen or test data. It occurs when the network becomes too complex or when the training dataset is insufficient.
  12. Regularization: Regularization techniques are used to prevent overfitting in ANNs. They introduce additional terms to the loss function or modify the weight update process to discourage excessive complexity in the network (see the final sketch after this list).
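
To make the activation-function term concrete, here is a minimal sketch of the sigmoid, ReLU, and softmax functions in NumPy; the input values are arbitrary and chosen only for illustration:

```python
import numpy as np

def sigmoid(x):
    # Squashes any real value into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    # Passes positive values through unchanged, zeroes out negatives.
    return np.maximum(0.0, x)

def softmax(x):
    # Converts a vector of scores into a probability distribution.
    e = np.exp(x - np.max(x))  # subtract the max for numerical stability
    return e / e.sum()

z = np.array([-2.0, 0.0, 3.0])  # illustrative pre-activation values
print(sigmoid(z))  # approx [0.119, 0.5, 0.953]
print(relu(z))     # [0., 0., 3.]
print(softmax(z))  # non-negative values that sum to 1.0
```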
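
Weights, bias, and feedforward propagation all appear in a single dense layer. The sketch below assumes an arbitrary 3-input, 2-neuron layer with randomly initialized weights; it is illustrative, not a full network:

```python
import numpy as np

def sigmoid(x):
    # Squash each pre-activation into (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 2))     # weights: 3 inputs feeding 2 neurons
b = np.zeros(2)                 # one bias term per neuron
x = np.array([0.5, -1.0, 2.0])  # a single 3-feature input example

# Feedforward: the input flows through the weights and bias,
# then the activation function, with no loops or feedback.
hidden = sigmoid(x @ W + b)
print(hidden)                   # two activations, one per neuron
```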
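
The remaining training terms (backpropagation, loss function, gradient descent, epoch, batch size, and learning rate) can be seen working together in a toy training loop. This sketch fits a one-weight linear model to synthetic data, so the chain-rule gradients are written out by hand rather than computed by a framework; the dataset, hyperparameters, and target values are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic dataset: y = 3x + 1 plus a little noise.
X = rng.uniform(-1, 1, size=(200, 1))
y = 3.0 * X[:, 0] + 1.0 + rng.normal(0, 0.1, size=200)

w, b = 0.0, 0.0       # parameters to learn
learning_rate = 0.1   # step size for each weight update
batch_size = 20       # examples per gradient-descent iteration
epochs = 50           # full passes over the training set

for epoch in range(epochs):
    perm = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        idx = perm[start:start + batch_size]
        xb, yb = X[idx, 0], y[idx]
        pred = w * xb + b               # feedforward
        err = pred - yb
        loss = np.mean(err ** 2)        # mean-squared-error loss
        grad_w = 2 * np.mean(err * xb)  # dL/dw via the chain rule
        grad_b = 2 * np.mean(err)       # dL/db
        w -= learning_rate * grad_w     # gradient-descent update
        b -= learning_rate * grad_b

print(round(w, 2), round(b, 2))  # should end near 3.0 and 1.0
```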
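
As one concrete regularization technique, L2 regularization (often called weight decay) adds a penalty on large weights to the loss function. A minimal sketch, assuming MSE as the data term and a hand-picked penalty coefficient `lam`:

```python
import numpy as np

def l2_regularized_loss(pred, target, weights, lam=0.01):
    # Data term (MSE) plus an L2 penalty that discourages large weights.
    mse = np.mean((pred - target) ** 2)
    penalty = lam * np.sum(weights ** 2)
    return mse + penalty

# During training, the penalty adds 2 * lam * w to each weight's gradient,
# so every update also shrinks the weights slightly toward zero.
```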