Abstract
Deep Learning models have proven essential for dealing with the challenges posed by the continuous growth of the data volume acquired from satellites and the increasing complexity of new Remote Sensing applications. To obtain the best performance from such models, their hyperparameters must be fine-tuned. Since the models may contain enormous numbers of trainable parameters, this process demands substantial computational resources. In this work, a method to accelerate hyperparameter optimization on a High-Performance Computing system is proposed: the data batch size is increased during training, leading to a more efficient execution on Graphics Processing Units. The experimental results confirm that this method reduces the runtime of the hyperparameter optimization step by a factor of 3 while achieving the same validation accuracy as a standard training procedure with a fixed batch size.
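The central idea, growing the batch size as training progresses so that later epochs feed the GPU larger and therefore more efficient batches, can be sketched as follows. This is a minimal illustration only, assuming PyTorch, a dummy dataset, and an arbitrary doubling schedule; the actual model, data, and schedule used in the paper are not reproduced here.

```python
# Minimal sketch of a batch-size ramp-up during training.
# Hypothetical schedule and toy data; not the paper's actual setup.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Dummy stand-in data: 1024 samples, 16 features, 4 classes.
X = torch.randn(1024, 16)
y = torch.randint(0, 4, (1024,))
dataset = TensorDataset(X, y)

model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Assumed schedule: double the batch size at fixed epochs so that the GPU
# processes progressively larger batches per step.
batch_size = 32
growth_epochs = {3, 6, 9}  # epochs at which the batch size is doubled

for epoch in range(10):
    if epoch in growth_epochs:
        batch_size *= 2
    # Rebuild the DataLoader whenever the batch size changes.
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: batch_size={batch_size}, last loss={loss.item():.4f}")
```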
Original language | English |
---|---|
Title of host publication | IGARSS 2022 - 2022 IEEE International Geoscience and Remote Sensing Symposium |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Pages | 263-266 |
Number of pages | 4 |
ISBN (Electronic) | 9781665427920 |
DOIs | |
Publication status | Published - 17 Jul 2022 |
Event | 2022 IEEE International Geoscience and Remote Sensing Symposium, IGARSS 2022 - Kuala Lumpur, Malaysia; Duration: 17 Jul 2022 → 22 Jul 2022 |
Publication series
Name | International Geoscience and Remote Sensing Symposium (IGARSS) |
---|---|
Volume | 2022-July |
Conference
Conference | 2022 IEEE International Geoscience and Remote Sensing Symposium, IGARSS 2022 |
---|---|
Country/Territory | Malaysia |
City | Kuala Lumpur |
Period | 17/07/22 → 22/07/22 |
Bibliographical note
Funding Information: This work was performed in the Center of Excellence (CoE) Research on AI- and Simulation-Based Engineering at Exascale (RAISE), receiving funding from the EU’s Horizon 2020 Research and Innovation Framework Programme H2020-INFRAEDI-2019-1 under grant agreement no. 951733. The authors gratefully acknowledge the computing time granted by the JARA Vergabegremium and provided on the JARA Partition, part of the supercomputer JURECA at Forschungszentrum Jülich.
Publisher Copyright:
© 2022 IEEE.
Other keywords
- Batch Size
- Deep Learning
- High-Performance Computing
- Hyperparameter Tuning
- Remote Sensing