Optimization of Extreme Learning Machine (ELM) Input Weights and Biases Using Bat Algorithm (BA)

Resource Overview

Optimization of Extreme Learning Machine (ELM) input weights and biases using the Bat Algorithm (BA) to enhance model stability and diagnostic accuracy through intelligent parameter tuning.

Detailed Documentation

In the field of machine learning, the Extreme Learning Machine (ELM) has attracted significant attention for its fast training and strong generalization performance. However, ELM's input weights and hidden-layer biases are typically initialized at random, which can make model performance unstable from run to run. To improve the diagnostic accuracy of ELM models, the Bat Algorithm (BA) can be employed to optimize these input weights and biases.

The Bat Algorithm is a swarm-intelligence optimization method that mimics the echolocation behavior bats use when hunting prey. By adjusting the frequency, velocity, and position of each virtual bat, BA efficiently explores the solution space in search of optimal parameter values. Applying BA to ELM parameter selection mitigates the performance fluctuations caused by random initialization while improving the model's convergence speed and diagnostic accuracy.

The optimization process generally involves the following key steps:

1. Initialize the bat population, where each individual encodes one candidate set of ELM input weights and biases. In code, this is naturally structured as a matrix in which each row is a bat's position vector holding the flattened weight-bias parameters.
2. Evaluate each individual with a fitness function such as classification accuracy or mean squared error. This typically requires a routine that trains an ELM with the given parameters and returns its fitness score (see the first sketch below).
3. Update the bat positions according to BA's movement rules so the population gradually approaches the optimal parameter combination. Each bat's frequency is adjusted as f_i = f_min + (f_max - f_min) * β, where β is a random vector drawn uniformly from [0, 1]; its velocity is then updated by the difference between its current position and the global best, v_i(t+1) = v_i(t) + (x_i(t) - x*) * f_i, and its position by x_i(t+1) = x_i(t) + v_i(t+1) (see the second sketch below).

Experimental results demonstrate that BA-optimized ELM achieves higher accuracy and stability in diagnostic tasks. This integration retains ELM's rapid learning characteristics while overcoming the limitations of random initialization through BA's global search capability, providing more reliable solutions for complex classification problems.
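A minimal sketch of steps 1 and 2 is shown below, assuming a classification task with one-hot encoded targets, a sigmoid hidden-layer activation, and NumPy. The function name elm_fitness and all parameter names are illustrative, not part of any specific library.

```python
import numpy as np

def elm_fitness(position, X_train, y_train, X_val, y_val, n_hidden):
    """Train an ELM with the input weights/biases encoded in `position`
    and return validation accuracy as the fitness value (steps 1-2)."""
    n_features = X_train.shape[1]

    # Decode the flat position vector into input weights W and biases b.
    W = position[:n_features * n_hidden].reshape(n_features, n_hidden)
    b = position[n_features * n_hidden:].reshape(1, n_hidden)

    # Hidden-layer output matrix H (sigmoid activation assumed here).
    H = 1.0 / (1.0 + np.exp(-(X_train @ W + b)))

    # Output weights solved analytically via the Moore-Penrose pseudoinverse.
    beta_out = np.linalg.pinv(H) @ y_train      # y_train: one-hot targets

    # Validation accuracy serves as the fitness score.
    H_val = 1.0 / (1.0 + np.exp(-(X_val @ W + b)))
    pred = np.argmax(H_val @ beta_out, axis=1)
    return np.mean(pred == np.argmax(y_val, axis=1))
```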
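The second sketch outlines step 3, a basic Bat Algorithm loop over the flattened weight-bias vectors. It builds on the elm_fitness routine above; the control parameters (population size, frequency range, loudness and pulse-rate schedules) are illustrative defaults rather than tuned settings.

```python
def bat_optimize_elm(X_train, y_train, X_val, y_val, n_hidden,
                     n_bats=30, n_iter=100, f_min=0.0, f_max=2.0,
                     alpha=0.9, gamma=0.9, seed=0):
    """Search ELM input weights/biases with a basic Bat Algorithm (step 3).
    Each row of `positions` is one bat: a flattened (W, b) candidate."""
    rng = np.random.default_rng(seed)
    dim = X_train.shape[1] * n_hidden + n_hidden        # weights + biases

    positions = rng.uniform(-1.0, 1.0, size=(n_bats, dim))   # bat population
    velocities = np.zeros((n_bats, dim))
    loudness = np.ones(n_bats)       # A_i: shrinks when a bat accepts a better solution
    pulse_rate = np.zeros(n_bats)    # r_i: grows over iterations
    r0 = 0.5

    fitness = np.array([elm_fitness(p, X_train, y_train, X_val, y_val, n_hidden)
                        for p in positions])
    best = np.argmax(fitness)
    best_pos, best_fit = positions[best].copy(), fitness[best]

    for t in range(n_iter):
        for i in range(n_bats):
            # Frequency adjustment: f_i = f_min + (f_max - f_min) * beta.
            f_i = f_min + (f_max - f_min) * rng.random()
            # Velocity update based on the distance to the global best position.
            velocities[i] += (positions[i] - best_pos) * f_i
            candidate = positions[i] + velocities[i]

            # Local random walk around the best solution with probability 1 - r_i.
            if rng.random() > pulse_rate[i]:
                candidate = best_pos + 0.01 * loudness.mean() * rng.standard_normal(dim)

            cand_fit = elm_fitness(candidate, X_train, y_train, X_val, y_val, n_hidden)

            # Accept the move if it improves fitness and passes the loudness test,
            # then reduce loudness and raise the pulse emission rate.
            if cand_fit > fitness[i] and rng.random() < loudness[i]:
                positions[i], fitness[i] = candidate, cand_fit
                loudness[i] *= alpha
                pulse_rate[i] = r0 * (1.0 - np.exp(-gamma * (t + 1)))

            if fitness[i] > best_fit:
                best_pos, best_fit = positions[i].copy(), fitness[i]

    return best_pos, best_fit
```

The best position returned by the search can then be decoded back into input weights and biases (exactly as in elm_fitness) and used to compute the final ELM's output weights on the full training set before deployment.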