The fundamental unit of a neural network is called a neuron. A neuron receives multiple input values (x), each of which is multiplied by a corresponding weight (w) that the model learns during training. The weighted inputs are then summed along with a bias term (b). This result is passed through an activation function, which produces the final output (y). During training, the model continuously adjusts the weights and bias term to minimize the difference between the predicted and actual values of y.
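The computation described above can be sketched in a few lines of Python. This is a minimal illustration, not a production implementation: the weights, inputs, and the choice of a sigmoid activation are assumptions for the example.

```python
import math

def neuron(x, w, b):
    """A single neuron: weighted sum of inputs plus bias, passed through an activation."""
    z = sum(xi * wi for xi, wi in zip(x, w)) + b  # weighted sum + bias term
    return 1 / (1 + math.exp(-z))                 # sigmoid activation squashes z into (0, 1)

# Example (illustrative values): three inputs with three learned weights and a bias
y = neuron(x=[1.0, 2.0, 3.0], w=[0.5, -0.25, 0.1], b=0.2)
```

During training, an optimizer would nudge `w` and `b` so that outputs like `y` move closer to the known target values; the sigmoid here could be swapped for any other activation function, such as ReLU or tanh.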