Fully Connected

This function block implements a fully connected (dense) layer for artificial neural networks. It allows users to customize the output size and activation function, facilitating the design of deep learning models.
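Conceptually, a fully connected layer applies an affine transform followed by an activation function: y = activation(Wx + b). A minimal NumPy sketch of that computation (the shapes here are illustrative and not tied to the block):

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.standard_normal(16)        # input vector with 16 features
W = rng.standard_normal((8, 16))   # weight matrix: 8 output neurons x 16 inputs
b = np.zeros(8)                    # one bias per output neuron

z = W @ x + b                      # affine transform
y = np.maximum(z, 0.0)             # relu activation applied elementwise
print(y.shape)                     # (8,) -- one value per output neuron
```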

πŸ“₯ Inputs

This function block has no inputs; it generates a neural network layer directly.

πŸ“€ Outputs

This function block does not produce data outputs directly; instead, it contributes a layer to the architecture of a neural network model, which can then be evaluated.

πŸ•ΉοΈ Controls

Output Size: A dropdown that sets the number of output neurons in the fully connected layer. Options range from 8 to 512.

Activation Function: A dropdown that selects the activation function used in the layer (see the sketch after this list). Available options include:

  • None

  • relu

  • sigmoid

  • softmax

  • softplus

  • softsign

  • tanh

  • selu

  • elu

  • exponential
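Together, these two controls correspond to the parameters of a Keras Dense layer. A hedged sketch, assuming the block wraps tf.keras.layers.Dense (the block's actual wrapper code is not shown in this documentation); output_size and activation stand in for the two dropdown values:

```python
from tensorflow import keras

output_size = 128     # hypothetical "Output Size" selection (8-512)
activation = "relu"   # hypothetical "Activation Function" selection

# "None" in the dropdown means no activation, i.e. a linear layer.
layer = keras.layers.Dense(
    units=output_size,
    activation=None if activation == "None" else activation,
)
```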

🎨 Features

Customizable Layer Settings: Users can adjust the number of output neurons and the activation function to refine the model's performance during training.

Integration with Keras: The block uses the Keras library to create the layer, allowing it to slot directly into larger models.

πŸ“ Usage Instructions

  1. Add the Block: Place the Fully Connected block in your workflow in AugeLab Studio.

  2. Set Output Size: Choose the desired number of output neurons from the Output Size dropdown.

  3. Select Activation Function: Choose the preferred activation function from the Activation Function dropdown.

  4. Integrate into Model: Connect the output of this block to other layers to build a complete neural network; a sketch of the equivalent Keras code follows.
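Step 4 amounts to composing this layer with others. A minimal sketch of the equivalent Keras model, assuming a hypothetical 784-feature input and 10-class output:

```python
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(784,)),                     # hypothetical input size
    keras.layers.Dense(128, activation="relu"),    # this block's layer
    keras.layers.Dense(10, activation="softmax"),  # a second block as the output
])
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
model.summary()
```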

πŸ“Š Evaluation

This function block contributes a fully connected layer, configured with the selected output size and activation function, to the neural network model.

πŸ’‘ Tips and Tricks

Choosing Activation Functions

  • For binary classification tasks, use sigmoid.

  • For multi-class classification, use softmax.

  • For regression tasks, use a linear output by setting the activation function to None. All three choices are sketched below.
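In Keras terms, those three recommendations correspond to different output-layer configurations. A minimal sketch (the class count of 10 is illustrative):

```python
from tensorflow import keras

binary_output = keras.layers.Dense(1, activation="sigmoid")       # binary classification
multiclass_output = keras.layers.Dense(10, activation="softmax")  # multi-class classification
regression_output = keras.layers.Dense(1, activation=None)        # regression (linear output)
```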

Layer Stacking

When building deeper networks, consider stacking multiple fully connected layers, each with a suitable activation function, to increase model capacity, as in the sketch below.
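For example, a deeper stack might look like the following (layer widths are arbitrary illustrations, not recommendations):

```python
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(784,)),                     # hypothetical input size
    keras.layers.Dense(256, activation="relu"),    # first fully connected layer
    keras.layers.Dense(128, activation="relu"),    # second layer
    keras.layers.Dense(64, activation="relu"),     # third layer
    keras.layers.Dense(10, activation="softmax"),  # output layer
])
```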

Tuning Output Size

Experiment with different output sizes to find the optimal configuration for your specific dataset. Layers with too few neurons may underfit, while too many can lead to overfitting.
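One way to run such an experiment is to train the same architecture with several candidate sizes and compare validation accuracy. A hedged sketch using synthetic stand-in data (replace x_train and y_train with your real dataset; shapes are hypothetical):

```python
import numpy as np
from tensorflow import keras

# Synthetic stand-in data for illustration only.
x_train = np.random.rand(1000, 784).astype("float32")
y_train = np.random.randint(0, 10, size=1000)

for units in (32, 64, 128, 256):   # candidate "Output Size" values
    model = keras.Sequential([
        keras.Input(shape=(784,)),
        keras.layers.Dense(units, activation="relu"),
        keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    history = model.fit(x_train, y_train, validation_split=0.2,
                        epochs=5, verbose=0)
    print(units, max(history.history["val_accuracy"]))
```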

πŸ› οΈ Troubleshooting

Layer Not Working as Expected

Ensure that the output size and activation function are correctly set for your specific use case. Also, verify the connections from previous layers to ensure proper data flow.

Performance Issues

If your model is performing poorly, consider adjusting the output size or activation function, or implementing techniques such as dropout or regularization to prevent overfitting.
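A sketch of both techniques applied around a fully connected layer (the dropout rate and L2 penalty are illustrative starting points, not tuned values):

```python
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(784,)),                    # hypothetical input size
    keras.layers.Dense(
        128,
        activation="relu",
        kernel_regularizer=keras.regularizers.l2(1e-4),  # L2 weight penalty
    ),
    keras.layers.Dropout(0.5),                    # zero out 50% of activations during training
    keras.layers.Dense(10, activation="softmax"),
])
```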
