Title | Collective Learning of Low-Memory Matrix Adaptation for Large-Scale Black-Box Optimization |
Author | |
Corresponding Author | Duan, Qiqi |
DOI | |
Publication Years | 2022 |
Conference Name | 17th International Conference on Parallel Problem Solving from Nature (PPSN) |
ISSN | 0302-9743 |
EISSN | 1611-3349 |
ISBN | 978-3-031-14720-3 |
Source Title | Lecture Notes in Computer Science |
Volume | 13399 LNCS |
Pages | 281-294 |
Conference Date | SEP 10-14, 2022 |
Conference Place | Dortmund, GERMANY |
Publication Place | GEWERBESTRASSE 11, CHAM, CH-6330, SWITZERLAND |
Publisher | SPRINGER INTERNATIONAL PUBLISHING AG |
Abstract | The growth of computing power can still be driven by parallelism, despite the end of Moore’s law. To cater to this trend, we propose to parallelize the low-memory matrix adaptation evolution strategy (LM-MA-ES), recently proposed for large-scale black-box optimization, aiming to further improve its scalability (w.r.t. CPU cores) on modern distributed computing platforms. To achieve this aim, three key design choices are carefully made and naturally combined within the multilevel learning framework. First, to fit into the memory hierarchy and reduce communication cost, which is critical for parallel performance on modern multi-core computer architectures, the well-known island model with a star interaction network is employed to run multiple concurrent LM-MA-ES instances, each of which can be executed efficiently and serially on a separate island owing to its low computational complexity. Second, to support fast convergence under the multilevel learning framework, we adopt Meta-ES to hierarchically exploit spatial-nonlocal information for global step-size adaptation at the outer-ES level, combined with cumulative step-size adaptation, which exploits temporal-nonlocal information at the inner-ES (i.e., serial LM-MA-ES) level. Third, a set of fitter individuals at the outer-ES level, each represented as a (distribution mean, evolution path, transformation matrix) tuple, is collectively recombined to exploit the desirable genetic repair effect for statistically more stable online learning. Experiments in a cluster computing environment empirically validate the parallel performance of our approach on high-dimensional, memory-costly test functions. Its Python code is available at https://github.com/Evolutionary-Intelligence/D-LM-MA. |
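The following is a minimal, self-contained Python sketch of the three design choices described in the abstract: concurrent inner-ES islands coordinated in a star topology, Meta-ES-style perturbation of the global step-size at the outer level, and collective recombination of the fitter islands' states. It is not the authors' implementation (see the GitHub repository above for the actual D-LM-MA code); all function names and parameter values are hypothetical, a plain truncation-selection ES step stands in for a serial LM-MA-ES run, and the islands here run sequentially rather than on separate CPU cores.

import numpy as np

def sphere(x):
    # Toy fitness function (minimization).
    return float(np.dot(x, x))

def run_island(mean, sigma, fitness, generations=20, offspring=10, rng=None):
    # Stand-in for one serial inner-ES run; a real LM-MA-ES island would also
    # adapt an evolution path and a low-memory transformation matrix here.
    rng = rng if rng is not None else np.random.default_rng()
    for _ in range(generations):
        samples = mean + sigma * rng.standard_normal((offspring, mean.size))
        fits = np.array([fitness(s) for s in samples])
        elite = samples[np.argsort(fits)[: offspring // 2]]
        mean = elite.mean(axis=0)  # recombine the better half
    return mean, sigma, fitness(mean)

def outer_loop(dim=50, n_islands=4, outer_iters=10, sigma0=1.0, seed=0):
    # Star topology: this "master" seeds the islands, collects their final
    # states, and collectively recombines the fitter ones.
    rng = np.random.default_rng(seed)
    mean, sigma = rng.standard_normal(dim), sigma0
    for _ in range(outer_iters):
        # Meta-ES-style outer level: each island receives a multiplicatively
        # perturbed copy of the global step-size.
        results = [run_island(mean.copy(),
                              sigma * np.exp(0.3 * rng.standard_normal()),
                              sphere, rng=rng)
                   for _ in range(n_islands)]
        results.sort(key=lambda r: r[2])            # rank islands by fitness
        fitter = results[: max(1, n_islands // 2)]
        # Collective recombination of the fitter islands; the paper recombines
        # (mean, evolution path, transformation matrix) tuples, whereas only
        # the mean and the (log) step-size are averaged in this sketch.
        mean = np.mean([m for m, _, _ in fitter], axis=0)
        sigma = float(np.exp(np.mean([np.log(s) for _, s, _ in fitter])))
    return mean, sigma

if __name__ == "__main__":
    final_mean, final_sigma = outer_loop()
    print("best fitness:", sphere(final_mean), "step-size:", final_sigma)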
Keywords | |
SUSTech Authorship | Corresponding |
Language | English |
URL | [Source Record] |
Indexed By | |
Funding Project | Shenzhen Fundamental Research Program[JCYJ20200109141235597]; Shenzhen Peacock Plan[KQTD2016112514355531]; Program for Guangdong Introducing Innovative and Entrepreneurial Teams[2017ZT07X386]; National Science Foundation of China[61761136008]; Special Funds for the Cultivation of Guangdong College Students Scientific and Technological Innovation (Climbing Program Special Funds)[pdjh2022c0061] |
WOS Research Area | Computer Science |
WOS Subject | Computer Science, Artificial Intelligence |
WOS Accession No | WOS:000871753400020 |
EI Accession Number | 20223712707331 |
EI Keywords | Computing power; Evolutionary algorithms; Learning systems; Linear transformations; Matrix algebra; Memory architecture |
ESI Classification Code | Computer Systems and Equipment:722; Computer Peripheral Equipment:722.2; Digital Computers and Systems:722.4; Computer Software, Data Handling and Applications:723; Algebra:921.1; Mathematical Transformations:921.3; Optimization Techniques:921.5 |
Scopus EID | 2-s2.0-85137275010 |
Data Source | Scopus |
Citation statistics | Cited Times [WOS]: 0 |
Document Type | Conference paper |
Identifier | http://kc.sustech.edu.cn/handle/2SGJ60CL/401661 |
Department | Southern University of Science and Technology |
Affiliation | 1. Harbin Institute of Technology, Harbin, China; 2. University of Technology Sydney, Sydney, Australia; 3. Southern University of Science and Technology, Shenzhen, China |
First Author Affiliation | Southern University of Science and Technology |
Corresponding Author Affiliation | Southern University of Science and Technology |
Recommended Citation GB/T 7714 | Duan,Qiqi,Zhou,Guochen,Shao,Chang,et al. Collective Learning of Low-Memory Matrix Adaptation for Large-Scale Black-Box Optimization[C]. GEWERBESTRASSE 11, CHAM, CH-6330, SWITZERLAND: SPRINGER INTERNATIONAL PUBLISHING AG, 2022: 281-294. |
Files in This Item: | There are no files associated with this item. |