Optimizer SGD
This function block provides an implementation of the Stochastic Gradient Descent (SGD) optimization algorithm, widely used in training machine learning models.
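At each update step, plain SGD moves the weights a small step against the gradient of the loss; with momentum, a velocity term accumulates past gradients to smooth the updates. The NumPy sketch below is illustrative only (the block itself delegates the actual computation to Keras):

```python
import numpy as np

# Minimal sketch of the SGD update rule with optional momentum.
def sgd_step(weights, grads, velocity, learning_rate=0.01, momentum=0.0):
    # velocity accumulates a decaying sum of past gradients;
    # with momentum=0.0 this reduces to weights -= learning_rate * grads
    velocity = momentum * velocity - learning_rate * grads
    weights = weights + velocity
    return weights, velocity

w = np.array([0.5, -0.3])
v = np.zeros_like(w)
g = np.array([0.1, -0.2])  # gradient of the loss w.r.t. w
w, v = sgd_step(w, v, g, learning_rate=0.1, momentum=0.9)
```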
📥 Inputs
This function block does not require any inputs.
📤 Outputs
The output of this block is the configured optimizer that can be used for model training.
🕹️ Controls
Learning rate
A text input that specifies the learning rate for the optimizer. This is a crucial hyperparameter that controls how much the model weights are adjusted in response to the estimated error each time they are updated.
Momentum
A text input that specifies the momentum value. Momentum helps accelerate SGD in the relevant direction and dampens oscillations.
Centered
A dropdown menu for choosing whether to enable Nesterov momentum (Nesterov Accelerated Gradient). Enabling it can improve convergence properties for certain problems.
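These three controls correspond to the constructor arguments of the Keras SGD optimizer. The sketch below is an assumption about how the control values are wired into the underlying call, not the block's verbatim implementation; the variable names are hypothetical:

```python
import tensorflow as tf

# Hypothetical values read from the block's controls.
learning_rate = 0.001   # "Learning rate" text input
momentum = 0.9          # "Momentum" text input
use_nesterov = True     # "Centered" dropdown (Nesterov momentum on/off)

optimizer = tf.keras.optimizers.SGD(
    learning_rate=learning_rate,
    momentum=momentum,
    nesterov=use_nesterov,
)
```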
🎨 Features
Customizable Hyperparameters
Users can customize the learning rate, momentum, and whether to use Nesterov momentum.
Integration with Keras
This block leverages Keras for optimization, enabling easy integration with Keras-based deep learning models.
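As an illustration of that integration, the optimizer produced by this block can be handed to model.compile like any other Keras optimizer. The small model below is a hypothetical stand-in for whatever model the surrounding workflow builds:

```python
import tensorflow as tf

# Stand-in for the optimizer this block outputs.
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9)

# Hypothetical downstream model that consumes the configured optimizer.
inputs = tf.keras.Input(shape=(4,))
outputs = tf.keras.layers.Dense(1)(inputs)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer=optimizer, loss="mse")
```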
📝 Usage Instructions
1. Set Learning Rate: Enter the desired learning rate in the Learning rate field. A common starting point is 0.001.
2. Set Momentum: Enter the momentum value in the Momentum field. If momentum is not needed, leave it at 0.
3. Toggle Nesterov: Select whether to use Nesterov momentum from the Centered dropdown.
4. Evaluate: Execute the block to configure the SGD optimizer based on the provided parameters (a worked example is sketched below).
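As a concrete illustration, following the steps above with a learning rate of 0.001, momentum left at 0, and Nesterov disabled corresponds roughly to the Keras call below; the exact internal call is an assumption, not the block's verbatim code:

```python
import tensorflow as tf

optimizer = tf.keras.optimizers.SGD(
    learning_rate=0.001,  # step 1: Learning rate field
    momentum=0.0,         # step 2: Momentum field left at 0
    nesterov=False,       # step 3: Centered dropdown left off
)

# Inspect the configuration the block would hand on to training.
print(optimizer.get_config())
```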
📊 Evaluation
When evaluated, this function block produces a configured SGD optimizer that can be utilized in the training phase of a machine learning model.
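To make that concrete, here is a small hand-rolled training step showing how a configured SGD optimizer consumes gradients; it is a generic Keras/TensorFlow sketch with a toy loss, not the block's internal code:

```python
import tensorflow as tf

optimizer = tf.keras.optimizers.SGD(learning_rate=0.1, momentum=0.9)

# Toy parameter and loss standing in for a real model.
w = tf.Variable([1.0, -2.0])

with tf.GradientTape() as tape:
    loss = tf.reduce_sum(tf.square(w))  # loss = ||w||^2

grads = tape.gradient(loss, [w])
optimizer.apply_gradients(zip(grads, [w]))
print(w.numpy())  # weights moved opposite to the gradient
```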
💡 Tips and Tricks
🛠️ Troubleshooting