Please use this identifier to cite or link to this item:
http://hdl.handle.net/11667/109
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor | Brownlee, Alexander E I | - |
dc.contributor.other | EPSRC - Engineering and Physical Sciences Research Council | en_GB |
dc.creator | Brownlee, Alexander E I | - |
dc.creator | Christie, Lee | - |
dc.creator | Woodward, John R | - |
dc.date.accessioned | 2018-04-05T07:43:02Z | - |
dc.date.available | 2018-04-05T07:43:02Z | - |
dc.date.created | 2017-05 | - |
dc.identifier.uri | http://hdl.handle.net/11667/109 | - |
dc.description.abstract | Benchmarks are important to demonstrate the utility of optimisation algorithms, but there is controversy about the practice of benchmarking; we could select instances that present our algorithm favourably, and dismiss those on which our algorithm under-performs. Several papers highlight the pitfalls of benchmarking, some of which arise in the context of the automated design of algorithms, where we use a set of problem instances (benchmarks) to train our algorithm. As with machine learning, if the training set does not reflect the test set, the algorithm will not generalise. This raises some open questions concerning the use of test instances to automatically design algorithms. We use differential evolution and sweep its parameter settings to investigate the practice of benchmarking using the BBOB benchmarks. We make three key findings. Firstly, several benchmark functions are highly correlated. This may lead to the false conclusion that an algorithm performs well in general when it in fact performs poorly on a few key instances, possibly introducing unwanted bias into a resulting automatically designed algorithm. Secondly, the number of evaluations can have a large effect on the conclusion. Finally, a systematic sweep of the parameters shows how performance varies with time across the space of algorithm configurations. This data set includes the experimental results and correlations reported in the paper. | en_GB
dc.description.tableofcontents | Data sets for the paper "Investigating Benchmark Correlations when Comparing Algorithms with Parameter Tuning"; Lee A. Christie, Alexander E.I. Brownlee, John R. Woodward; Proceedings of GECCO 2018, Kyoto, Japan. vote-si1.xlsx - ranks for the coarse-grained sweep. finished-results/*.csv - the output files from which the correlations for the fine-grained sweep were calculated. correlations.csv - Spearman's rank correlations between functions for the fine-grained sweep, covering generations 1-25. Additional details are provided in the readme.txt file; illustrative loading and analysis sketches are given below the metadata record. Dedicated unzip software, for example IZArc, is recommended for accessing the dataset. | en_GB
dc.language.iso | eng | en_GB |
dc.publisher | University of Stirling. Faculty of Natural Sciences. | en_GB |
dc.relation | Brownlee, AEI; Christie, L; Woodward, JR (2018): Data for the paper "Investigating benchmark correlations when comparing algorithms with parameter tuning". University of Stirling. Faculty of Natural Sciences. Dataset. http://hdl.handle.net/11667/109 | en_GB |
dc.relation.isreferencedby | Christie, L.A., Brownlee, A.E.I. and Woodward, J.R. (2018) Investigating Benchmark Correlations when Comparing Algorithms with Parameter Tuning. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. Genetic and Evolutionary Computation Conference 2018, 15.07.2018-19.07.2018. New York: ACM, pp. 209-210. DOI: https://doi.org/10.1145/3205651.3205747 Available from: http://hdl.handle.net/1893/27083 and http://hdl.handle.net/1893/26956 | en_GB
dc.rights | Rights covered by the standard CC-BY 4.0 licence: https://creativecommons.org/licenses/by/4.0/ | en_GB |
dc.subject | benchmarks | en_GB |
dc.subject | BBOB | en_GB |
dc.subject | ranking | en_GB |
dc.subject | differential evolution | en_GB |
dc.subject | continuous optimisation | en_GB |
dc.subject | parameter tuning | en_GB |
dc.subject | automated design of algorithms | en_GB |
dc.subject.classification | ::Information and communication technologies::Artificial Intelligence Technologies | en_GB |
dc.title | Data for the paper "Investigating benchmark correlations when comparing algorithms with parameter tuning" | en_GB |
dc.type | dataset | en_GB |
dc.contributor.email | alexander.brownlee@stir.ac.uk | en_GB |
dc.identifier.rmsid | 1855 | en_GB |
dc.identifier.rmsid | 1067 | en_GB |
dc.identifier.projectid | EP/N002849/1 | en_GB |
dc.identifier.projectid | EP/J017515/1 | en_GB |
dc.title.project | FAIME: A Feature based Framework to Automatically Integrate and Improve Metaheuristics via Examples | en_GB |
dc.title.project | DAASE: Dynamic Adaptive Automated Software Engineering | en_GB |
dc.contributor.affiliation | University of Stirling (Computing Science - CSM Dept) | en_GB |
dc.contributor.affiliation | Queen Mary University of London | en_GB |
dc.date.publicationyear | 2018 | en_GB |
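The abstract above describes ranking differential evolution configurations on the BBOB functions and measuring how strongly pairs of functions agree using Spearman's rank correlation. The sketch below illustrates that kind of analysis only; it is not the authors' code, and the file name and column names (`config_id`, `function`, `fitness`) are hypothetical placeholders to be replaced with the actual layout documented in readme.txt.

```python
# Illustrative sketch: Spearman's rank correlation between two benchmark
# functions, computed over the same set of algorithm configurations.
# File and column names are hypothetical; see readme.txt for the real
# layout of finished-results/*.csv.
import pandas as pd
from scipy.stats import spearmanr

results = pd.read_csv("finished-results/example.csv")  # hypothetical file name

# One row per configuration, one column per benchmark function.
table = results.pivot(index="config_id", columns="function", values="fitness")

# Correlation between the rankings induced by two functions (labels assumed).
rho, p_value = spearmanr(table["f1"], table["f2"])
print(f"Spearman's rho between f1 and f2: {rho:.3f} (p = {p_value:.3g})")
```

A high correlation between two functions indicates that they rank configurations similarly, which is the effect investigated in the associated paper.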
Appears in Collections: University of Stirling Research Data
Files in This Item:
File | Description | Size | Format
---|---|---|---
data.zip | | 1.02 MB | Unknown
readme.txt | | 520 B | Text
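As an alternative to dedicated unzip software such as IZArc, the archive can be unpacked programmatically. Below is a minimal sketch using the Python standard library and pandas, assuming data.zip has been downloaded from this record; the location of correlations.csv inside the archive is an assumption to check against readme.txt, which is a separate file in this item.

```python
# Sketch: unpack data.zip and load the correlation table with pandas.
import zipfile
from pathlib import Path

import pandas as pd

archive = Path("data.zip")   # downloaded from this record
target = Path("data")
target.mkdir(exist_ok=True)

with zipfile.ZipFile(archive) as zf:
    for name in zf.namelist():   # list the archive contents first
        print(name)
    zf.extractall(target)        # unpack everything into ./data

# Assumed path of the fine-grained sweep correlations (generations 1-25).
correlations = pd.read_csv(target / "correlations.csv")
print(correlations.head())
```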
This item is protected by original copyright
Items in DataSTORRE are protected by copyright, with all rights reserved, unless otherwise indicated.