The researcher focuses on understanding and comparing optimization algorithms in machine learning, particularly Adam, RMSprop, gradient descent (GD), AdaGrad, and AdamW. They systematically compare these methods on metrics such as mean squared error and validation loss, assessing convergence speed and generalization ability. Their work aims to improve the training dynamics of machine learning models across diverse datasets and tasks.
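As a rough illustration of this kind of study, the sketch below fits a toy least-squares model with each of the named optimizers, using PyTorch's built-in implementations, and reports the final mean squared error. The synthetic data, learning rates, and step counts are illustrative assumptions, not taken from the researcher's work.

```python
# Minimal sketch (not the researcher's actual code): compare optimizers
# on a toy linear-regression problem and report final training MSE.
import torch

torch.manual_seed(0)
X = torch.randn(256, 10)                       # synthetic inputs
true_w = torch.randn(10, 1)                    # ground-truth weights
y = X @ true_w + 0.1 * torch.randn(256, 1)     # noisy targets

# Learning rates are illustrative; real comparisons would tune each one.
optimizers = {
    "GD (SGD)": lambda p: torch.optim.SGD(p, lr=0.1),
    "AdaGrad":  lambda p: torch.optim.Adagrad(p, lr=0.1),
    "RMSprop":  lambda p: torch.optim.RMSprop(p, lr=0.01),
    "Adam":     lambda p: torch.optim.Adam(p, lr=0.01),
    "AdamW":    lambda p: torch.optim.AdamW(p, lr=0.01, weight_decay=0.01),
}

for name, make_opt in optimizers.items():
    torch.manual_seed(1)                       # identical init for a fair comparison
    model = torch.nn.Linear(10, 1)
    opt = make_opt(model.parameters())
    for step in range(200):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(model(X), y)
        loss.backward()
        opt.step()
    print(f"{name:10s} final MSE: {loss.item():.4f}")
```

A fuller experiment in this vein would track validation loss over epochs rather than final training error, which is what distinguishes convergence speed from generalization.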
This profile is generated from publicly available publication metadata and is intended for research discovery purposes. Themes, summaries, and trajectories are inferred computationally and may not capture the full scope of the lecturer's work. For authoritative information, please refer to the official KNUST profile.