Nature-Inspired Self-Organizing Collision Avoidance for Drone Swarm Based on Reward-Modulated Spiking Neural Network

Time: 2022-10-31

On October 28, 2022, the Brain-Inspired Cognitive AI Team of the Laboratory of Brain Atlas and Brain-Inspired Intelligence published a paper titled Nature-inspired Self-organizing Collision Avoidance for Drone Swarm Based on Reward-modulated Spiking Neural Network in the Cell Press journal Patterns. Inspired by the distributed, self-organizing swarm intelligence mechanisms found in nature, the team used reward-modulated spiking neural networks to enable online learning in individual drones, which then self-organize through local interactions to produce collective collision-avoidance behaviors.

Swarm behavior is widespread in nature: bees use waggle dances to coordinate and locate good nectar sources, and flocks of birds, schools of fish, and animal herds exhibit spontaneous, orderly patterns without collisions while cooperating to hunt or evade predators. Such natural swarm behaviors are characterized by self-organization, decentralization, and distributed coordination. Each individual has relatively simple learning abilities and interacts with its local environment. The swarm's intelligent behavior emerges from self-organized coordination among these individuals.

In computational modeling, the coupling among individual behaviors often pushes researchers toward centralized control approaches for optimizing swarm behavior. However, such global optimization can be computationally intensive and adapts poorly to environmental changes.

The Brain-Inspired Cognitive AI Team proposed a self-organizing survival and collision-avoidance model for drone swarms, inspired by these decentralized, self-organizing principles in nature. Each drone independently employs a brain-inspired spiking neural network for online reinforcement learning, combining global reward modulation via long-term dopamine signals with local spike-timing-dependent synaptic plasticity. Each individual optimizes its spiking neural network based on observations of other agents within its field of view, achieving efficient, self-organizing interactive learning. Through local interactions between these learning-enabled individuals, collective intelligent behaviors emerge organically.
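For readers unfamiliar with reward-modulated spike-timing-dependent plasticity, the minimal Python sketch below illustrates the general idea of combining a local spike-timing eligibility trace with a global, dopamine-like reward signal. The network sizes, constants, and toy reward here are illustrative assumptions for exposition only, not the paper's settings; the team's actual models are available in the open-source repository linked at the end of this article.

```python
# Minimal sketch of reward-modulated STDP (R-STDP) on one layer of
# leaky integrate-and-fire neurons. All sizes, constants, and the toy
# reward below are illustrative assumptions, not the paper's settings.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 16, 4                       # e.g. coarse field-of-view inputs -> steering actions
w = rng.normal(0.0, 0.1, (n_out, n_in))   # synaptic weights
elig = np.zeros_like(w)                   # eligibility trace accumulating local STDP events
v = np.zeros(n_out)                       # membrane potentials of output neurons

tau_v, tau_e, v_thresh = 10.0, 20.0, 1.0  # membrane / eligibility time constants, spike threshold
a_plus, lr = 0.01, 0.5                    # STDP potentiation step, learning rate

def step(spikes_in, reward, dt=1.0):
    """One simulation step: integrate, spike, update eligibility, apply reward."""
    global v, w, elig
    # Leaky integrate-and-fire dynamics driven by presynaptic spikes.
    v += dt / tau_v * (-v + w @ spikes_in)
    spikes_out = (v >= v_thresh).astype(float)
    v[spikes_out > 0] = 0.0               # reset neurons that fired
    # Local plasticity: coincident pre/post spikes charge the eligibility trace,
    # which decays over time instead of changing the weights immediately.
    elig += a_plus * np.outer(spikes_out, spikes_in)
    elig -= dt / tau_e * elig
    # Global, dopamine-like reward gates whether the eligible changes are applied.
    w += lr * reward * elig
    return spikes_out

# Toy usage: random input spikes and a placeholder reward. In the swarm setting,
# the reward would instead penalize near-collisions observed in the field of view.
for t in range(100):
    spikes_in = (rng.random(n_in) < 0.2).astype(float)
    step(spikes_in, reward=1.0)
```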

Figure 1 shows the self-organizing collision avoidance process in a drone swarm.

This model has been applied to simulated survival-territory experiments inspired by hoverfly-like territoriality, in which swarms maintain stable regions without collisions and avoid encroaching on each other's "territories" in a confined space. Simulation results for swarms of varying sizes showed that the model rapidly learns safe flight strategies and ensures long-term stability and safety for the entire swarm. Real-world experiments with multiple drones in confined areas also validated the model’s ability to quickly learn and adapt in dynamic, uncertain environments, with drones rapidly avoiding each other without collisions (see Figure 2). Compared with artificial neural network-based learning methods, this spiking neural network model demonstrated superior performance and greater stability (see Figure 3).

Figure 2 shows real-world drone swarm survival-territory experiments.

Figure 3 compares different methods under very small collision thresholds:
a. Collision statistics across different swarm sizes.
b. Collision count evolution during training for different models.

Associate Researcher Feifei Zhao explained: “This study is inspired by the self-organizing, distributed intelligent behaviors observed in biological swarms. By combining biologically plausible spiking neural networks with local interactions, we achieved online, self-organizing intelligent decision-making in drone swarms. From group-level decision-making to individual-level online learning models, our approach is closer to biological information processing mechanisms, laying a foundation for developing swarm intelligence that aligns with natural learning, decision-making, and evolutionary principles.”

Researcher Yi Zeng added: “We believe the key contribution of this study lies in demonstrating how local brain-inspired learning and decision-making principles, combined with environmental interactions, can evolve and give rise to group-level self-organized collision avoidance and stable exploratory behaviors. This shows that the scientific principles underlying seemingly complex cognitive functions and intelligent behaviors may not be complex themselves. It strengthens our confidence and determination to tackle even more advanced cognitive functions in the future. For nearly a decade, we've been continuously building the fully spiking neural network-based Brain-inspired Cognitive Intelligence Engine (BrainCog) to help decode the essence of biological intelligence—including human intelligence—and develop brain-inspired AI based on these insights. This paper represents fundamental research and application in BrainCog’s exploration of brain-like learning mechanisms and emergent behavior evolution. We have open-sourced all related models and algorithms and hope to advance brain-inspired AI collaboratively with the academic community.”

Associate Researcher Feifei Zhao is the first author of this paper, with Yi Zeng as the corresponding author. Doctoral students Bing Han, Hongjian Fang, and Zhuoya Zhao also contributed to the study.

Paper title:
Nature-inspired Self-organizing Collision Avoidance for Drone Swarm Based on Reward-modulated Spiking Neural Network

Paper link:
https://www.cell.com/patterns/fulltext/S2666-3899(22)00236-7

Open-source code:
https://github.com/Brain-Cog-Lab/RSNN