Swarm Intelligence 1
The topic of this session is computer simulation beyond physics, namely simulating the collective behaviours of interacting simple-minded agents. The field of agent-based modelling [] is quite vast and we do not have the room to properly introduce it in depth. Nevertheless, the subset of swarm intelligence [] is very close to the system and understanding we have developed so far. The key reference for this section is the paper “Steering Behaviors For Autonomous Characters” by Craig W. Reynolds []. Within this paradigm our particles become the agents, or “boids”, and their rules of engagement are expressed by non-physical forces.
Swarm intelligence studies phenomena involving the systemic behaviour of multiple entities, often observed in nature, such as schools of fish [], flocks of birds [] and even human crowds []. An interesting aspect of these behaviours is that they are loosely or implicitly coordinated rather than driven by centralized control. Moreover, the logic that gives rise to such complex behaviours can be expressed by simple spatial and temporal rules.
In order to simulate interacting agents within the swarm intelligence paradigm, we need to develop the concept of an unstructured and dynamically changing spatial context: the notion of a neighbourhood of nearest neighbours. Given a set of objects, we define the nearest neighbours as the subset of those objects that fall within a proximity range.
There are two ways to implement this concept: (a) select all other particles contained within a sphere of fixed radius centred about each particle; or (b) select a fixed number of particles that are closest. The first implies a fixed-size geometric context, while the second implies a fixed group size.
We will show implementations of both versions because nearest neighbours [] is a very important concept with applications across many fields. It is a soft introduction to non-parametric modelling approaches, which are very useful for real-world applications in cases where the data does not obey an a priori known structure.
The cluster component for computing the fixed-distance nearest neighbours is seen below. It requires a list of positions and the radius of the sphere within which particles must fall to be accepted. The strategy involves first calculating all pairwise particle distances and then applying conditional expressions to filter list items by index.
Distances between all pairs of positions are computed using the Cartesian product, creating a table where each column represents the distances of one particle to all others. The table is symmetric with diagonal elements being zero. The computation is a bit wasteful because of the duplication d(i, j) = d(j, i), but it is simple enough to proceed.
[Table: the all-pairs “Distance” table; each column holds one particle’s distances to all the others, with zeros on the diagonal]
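For readers who prefer text over wires, a minimal Python sketch of the same all-pairs table follows. The component itself is a Grasshopper cluster; the function name and the tuple representation of positions here are ours, purely for illustration.

```python
import math

def distance_table(positions):
    """All-pairs Euclidean distances via the Cartesian product.

    Column j holds the distances of particle j to every particle:
    the table is symmetric, d(i, j) == d(j, i), with a zero diagonal.
    """
    return [[math.dist(p, q) for p in positions] for q in positions]

points = [(0.0, 0.0), (3.0, 4.0), (6.0, 8.0)]
table = distance_table(points)
# table[0] == [0.0, 5.0, 10.0]; table[1][2] == table[2][1] == 5.0
```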
Applying the conditional expressions “distance > 0” and “distance < radius” gets rid of the zero distance between a particle and itself, and of all other particles beyond range. The result is a table of boolean values where each column semantically represents a “select” or “skip” flag. The “dispatch” component uses those flags to split each column into two new lists: “accepted” in “list a” and “ignored” in “list b”.
[Table: the boolean “Filter” table; each column holds the True/False select-or-skip flags for one particle]
A subtle point here is that we do not care about the distances themselves; we only care about which particles are within range. So instead of filtering and outputting the distances, we output the list indices of the accepted particles. We therefore pass to the “dispatch” component a “series” with as many items as the particle list. The result is a jagged table, i.e. it has an uneven number of items per column. This is very similar to the edge table used for mesh topology; here our topology is just dynamic.
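The whole fixed-radius filter can be sketched in a few lines of Python, reusing distance_table and points from the sketch above; the function name is again illustrative, not the component’s.

```python
def neighbours_by_radius(positions, radius):
    """Fixed-radius nearest neighbours, returned as list indices.

    The condition 0 < d < radius plays the role of the boolean
    select/skip flags, and enumerate() plays the role of the
    "series" fed into "dispatch": we keep indices, not distances.
    The result is jagged: columns may have different lengths.
    """
    return [
        [i for i, d in enumerate(column) if 0.0 < d < radius]
        for column in distance_table(positions)
    ]

# neighbours_by_radius(points, 6.0) -> [[1], [0, 2], [1]]
```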
The figure demonstrates the effect of increasing the neighbourhood’s radius using a random set of points. The edges are dynamically generated for groups of points in the same neighbourhood. If the radius is too small and the particles too far apart, each becomes a neighbourhood of its own. As we expand the radius, clusters start to appear. In the extreme case of the radius being too large, we end up with a complete graph, as every point becomes a neighbour of every other point.
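The figure itself is not reproduced here, but the edge generation it depicts can be sketched by deduplicating the jagged neighbour lists; a small sketch building on neighbours_by_radius above, with arbitrary point counts and radii.

```python
import random

def neighbourhood_edges(positions, radius):
    """Unique edges (i, j), i < j, between points in the same neighbourhood."""
    groups = neighbours_by_radius(positions, radius)
    return [(i, j) for i, js in enumerate(groups) for j in js if i < j]

pts = [(random.random(), random.random()) for _ in range(50)]
for r in (0.02, 0.2, 2.0):
    print(r, len(neighbourhood_edges(pts, r)))
# tiny radius -> no edges; huge radius -> complete graph (50 * 49 / 2 edges)
```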
The cluster component for computing the fixed group size nearest neighbours, also known as k-nearest neighbours, is seen below. It requires a list of positions and the number of items per group. The strategy involves calculating all particle distances, sorting them from closest to farthest and then extracting a sub-list of the top “count” items, returning their indices rather than their actual positions.
All distances are computed using the Cartesian product. Next we order each column from the smallest to the largest. This is performed using the “sort list” component which, given a list in its “keys” input parameter, returns a sorted copy of that list via the “keys” output parameter. The list items must be “comparable” in some sense, e.g. numbers carry an “ordinal” sense while colours do not. Sorting from largest to smallest is equivalent to reversing the output list, using the context menu.
Again, what we are interested in is not the actual distances but the first “count” particles in each ordered list. We thus supply the sorting component with a series, via the “values a” input parameter, which will be rearranged along with the “keys”, i.e. the list of distances, so that we can keep track by index of which particles are the closest. To see how this works, let’s focus on the distances of one particle, say “p0”. The distances computed prior to sorting are seen below.
Index:     0     1     2     3     4     5     …     n-1
Distance:  0.0   8.4   4.2   6.3   3.1   9.5   …     6.6
Particle:  0     1     2     3     4     5     …     n-1
After sorting, the distances are ordered from the smallest to the largest but also the particle indices have been swapped along the way. Therefore, we retain knowledge of which particles are the nearest by index. Obviously, the closest one will always be the particle itself at zero distance, since we are using the Cartesian product.
Index:     0     1     2     3     4     5     …     n-1
Distance:  0.0   3.1   4.2   6.3   6.6   8.4   …     9.5
Particle:  0     4     2     3     n-1   1     …     5
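In plain Python, this key/value pairing corresponds to sorting tuples; a minimal sketch reproducing the tables above, using the six concrete distances of “p0” (the function name is ours, standing in for the “sort list” component).

```python
def sort_with_values(keys, values):
    """Sort keys ascending and rearrange values in lockstep,
    mirroring the "keys" / "values a" inputs of "sort list".
    Reversing both outputs gives a largest-to-smallest order."""
    pairs = sorted(zip(keys, values))  # keys must be comparable
    return [k for k, _ in pairs], [v for _, v in pairs]

dists = [0.0, 8.4, 4.2, 6.3, 3.1, 9.5]  # p0's distances, as tabulated above
keys, particles = sort_with_values(dists, range(len(dists)))
# keys      -> [0.0, 3.1, 4.2, 6.3, 8.4, 9.5]
# particles -> [0, 4, 2, 3, 1, 5]
```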
Particles of interest are thus always in the index range “[1, count]”. However, the graph is a bit more complicated in order to account for corner cases, such as a zero value as input or a “count” larger than the number of available particles. The result of this clustering logic forms groups much more aggressively than the nearest neighbours by radius method.
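Pulling the pieces together, a sketch of the k-nearest neighbours logic with those corner cases guarded, reusing distance_table and sort_with_values from the earlier sketches; as before, the names and the clamping policy are ours, not necessarily the cluster’s.

```python
def k_nearest_neighbours(positions, count):
    """Fixed group size (k) nearest neighbours, returned as list indices.

    Index 0 of each sorted column is always the particle itself
    (zero distance), so the useful range is [1, count].  The clamp
    guards the corner cases: count <= 0 yields empty groups, and a
    count beyond the available particles is capped at n - 1.
    """
    n = len(positions)
    k = max(0, min(count, n - 1))
    groups = []
    for column in distance_table(positions):
        _, particles = sort_with_values(column, range(n))
        groups.append(particles[1:1 + k])  # skip self at index 0
    return groups

# k_nearest_neighbours(points, 2) -> [[1, 2], [0, 2], [1, 0]]
```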