Flock vs Predator
In this blog post we will look at an agent-based model of flocking and see how complex behaviour can emerge from relatively simple rules. We will then let predators loose within the flock to see how it responds.
If you have ever looked up in the early evening you might have been greeted by the spectacle of many birds flying in unison, creating elaborate and complex patterns in the sky. Or perhaps you have watched a nature documentary on the Discovery Channel and seen schools of fish forming similarly elaborate patterns.
There are many evolutionary reasons why these patterns may form. The first is the "safety in numbers" concept, whereby grouping many individuals together protects against predators: predators can only attack the edge of the group, and the number of individuals on the edge is significantly smaller than the number in the middle, so the majority gain safety. The pattern may also trick predators into perceiving the group as a single larger, threatening entity, making them less likely to attack in the first place. There is also the possibility that a group pattern large enough to be seen from far away attracts more individuals to join the group.
This phenomenon is known as "flocking" - in this blog post we will investigate how flocks form.
An Agent Based Approach
One way in which we could reverse engineer these patterns is to have a "central controller" that dictates where each flock member should be located at each moment in time. While this may produce results that are seemingly indistinguishable from real-world flocking behaviour, we would be hard pressed to argue that this is what happens in nature. Instead we would like each member to have some level of autonomy and a simple, limited set of rules that, when combined, produce the desired result - if such a model can be found it will allow us to analyse and study flock formation in more detail. Any model containing many members, each following simple rules within an environment, can be classified as an "agent-based model" (members herein being referred to as agents). Agent-based models extend well beyond flocking: they can be used anywhere it is possible to define simple rules for local behaviour and where large-scale population models (e.g. differential equations) do not provide detail on the scales of interest. They have been used to model everything from biologically inspired predator/prey interactions and the spread of infection, through to the social sciences, the spread of ideas and the economy.
Model Design
Flocking models have been studied fairly extensively. In this blog post I will present my own implementation of a flocking model. There will be "prey" agents, large in number, who form flocks to try to protect themselves, and a small number of "predator" agents who try to eat/kill the prey. Each class of agent has its own dynamic rules governing how it moves in space.
Rules for Prey
For the prey agents we will assume that there are 5 forces acting to direct the agents. These forces are equivalent to acceleration vectors: we integrate them over time to change the velocity, and integrate the velocity to get the change in position. We can describe the forces at play as follows (a short code sketch of the first three terms follows the list):
- Flocking Forces - This represents the attraction of an agent towards a concentration of other agents (i.e. joining the flock). To do this we consider the velocity vectors of the agents, denoted $\mathbf{v_i}$. We assume an agent will only flock towards others that are near it, so we introduce a "flocking radius" $r_f$ describing how far the agent can "see". Denoting the set of agents within a neighbourhood of radius $r$ of agent $i$ by $N_{r}(i)$, we define the vector quantity $\mathbf{V_i} = \sum_{j \in N_{r_f}(i)} \mathbf{v_j}$. The strength of this force is set by a parameter $\alpha$, and the flocking force applied to agent $i$ is: $$ \mathbf{F}_{flock}^i = \alpha \frac{\mathbf{V_i}}{|\mathbf{V_i}|}$$
- Repulsive Forces - If we are in a crowd we like our own space; if somebody gets too close we move away. At the most extreme level there is a limit on how close two individuals can get owing to the size of their bodies. The repulsive force acts to stop agents getting too close together. This force operates only within a radius $r_0$ - somebody very far away won't affect how we move. We denote by $|\mathbf{d_{ij}}|$ the scalar distance (magnitude) between agents $i$ and $j$, and use boldface $\mathbf{d_{ij}}$ for the vector displacement. The strength of this force is set by the parameter $\epsilon$. The force applied to agent $i$ is then: $$ \mathbf{F}_{rep}^i = \epsilon \sum_{j \in N_{r_0}(i)}\left(1 - \frac{|\mathbf{d_{ij}}|}{r_0}\right)^{\frac{3}{2}} \frac{\mathbf{d_{ij}}}{|\mathbf{d_{ij}}|} $$
- Self-Propulsion Forces - This force represents a desire to "keep moving". In some situations there is a tendency for sustained movement - in a march, for example, individuals try to keep moving at a constant speed. We represent this with the self-propulsion force, whose strength is controlled via a parameter $\mu$. For agent $i$ we can express this as: $$ \mathbf{F}_{prop}^i = \mu \left(v_0 - |\mathbf{v_i}|\right)\frac{\mathbf{v_i}}{|\mathbf{v_i}|} $$
- Random Forces - In the model we may wish to allow for random forces, so that each agent has its own autonomy. To do this we simply apply a uniform random draw to each component of the force. We will use a parameter $r_{amp}$ (random amplitude) to denote the strength of this force.
- Predator-Repulsive Forces - One of the driving forces for any prey is to escape predators. In this model we assume that this force trumps all others: if a predator is close, an agent will run away and ignore all other desires (flocking, self-propulsion, etc.). This operates in exactly the same way as the repulsive force above, but with repulsion from predators rather than from other agents. We define the radius over which this occurs as $r_p$ and the strength of this force as $\delta$ - these replace $r_0$ and $\epsilon$ in the formula above. Finally, we also apply a "speed limit" to the agents to prevent them travelling arbitrarily quickly.
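To make these definitions concrete, below is a minimal sketch of the flocking, repulsion and self-propulsion terms acting on a single agent. It ignores the periodic boundaries, the random force and the predators, and the function name, arguments and default values are purely illustrative (the defaults simply mirror the parameter values used in the full listing further down).
import numpy as np

def prey_force_sketch(i, pos, vel, r0=0.1, rf=0.25, eps=2.0, alpha=10.0, mu=0.1, v0=0.25):
    """Illustrative force on prey agent i (flocking + repulsion + self-propulsion only)."""
    d = pos - pos[i]                          # displacement vectors to every agent
    dist = np.linalg.norm(d, axis=1)          # |d_ij|
    others = dist > 0                         # exclude agent i itself

    # Flocking: sum the velocities of neighbours within r_f and normalise
    V = vel[others & (dist <= rf)].sum(axis=0)
    Vnorm = np.linalg.norm(V)
    f_flock = alpha * V / Vnorm if Vnorm > 0 else np.zeros(2)

    # Repulsion: push away from any neighbour closer than r_0
    f_rep = np.zeros(2)
    for j in np.where(others & (dist <= r0))[0]:
        f_rep += eps * (1 - dist[j] / r0) ** 1.5 * (pos[i] - pos[j]) / dist[j]

    # Self-propulsion: accelerate or decelerate towards the target speed v_0
    speed = np.linalg.norm(vel[i])
    f_prop = mu * (v0 - speed) * vel[i] / speed if speed > 0 else np.zeros(2)

    return f_flock + f_rep + f_prop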
Rules for Predators
For the predators the dynamics are much simpler. We assume that a predator has infinite vision: it looks for the nearest prey and runs towards it at a fixed speed. However we add a (possibly small) random component to the movement - this could represent indecisiveness on the part of the predator or evasive tactics employed by the prey. The only model parameters for the predators are the predator speed $pv_{max}$ and the proportion of the movement that is random, $pr_{amp}$.
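A single predator update under these rules can be sketched as follows. This ignores the periodic boundaries handled in the full code, and the function name and arguments are again purely illustrative.
import numpy as np

def predator_step_sketch(ppos, prey_pos, dt=0.01, pv_max=1.0, pramp=0.25):
    """Illustrative predator update: chase the nearest prey with a random component."""
    d = prey_pos - ppos                                  # vectors to each prey
    nearest = d[np.argmin(np.linalg.norm(d, axis=1))]    # vector to the closest prey
    chase = nearest / np.linalg.norm(nearest)            # unit chase direction
    rand = np.random.uniform(-1, 1, size=2)
    rand /= np.linalg.norm(rand)                         # unit random direction
    v = (1 - pramp) * chase + pramp * rand               # blend chasing and randomness
    v = pv_max * v / np.linalg.norm(v)                   # move at the fixed predator speed
    return ppos + v * dt                                 # step forward one time step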
Environment
We will assume a simple environment consisting of the unit square. The boundaries will be periodic (i.e. the left edge "connects" to the right edge and the top to the bottom). As in the arcade game Asteroids, this means the dynamics take place on the surface of a torus. This assumption simplifies the dynamics since there is no need to consider boundary conditions - with a fixed "wall" around the environment we would need rules for what happens when an agent tries to cross it (do they "reflect" off walls? Do they "stick"? Do they slow down and change direction to avoid hitting the wall altogether?). For convenience when calculating distances in this periodic world we have a helper function called "buffer" which replicates any agents near the edges and places the copies just outside the unit square - this ensures that distances across the boundaries are calculated correctly.
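For reference, the shortest distance between two points on the unit torus can also be computed directly by checking whether wrapping around an edge is shorter - the helper below is purely illustrative and is not part of the model code, which uses the buffer approach instead.
def torus_distance(ax, ay, bx, by):
    """Shortest distance between (ax, ay) and (bx, by) on the unit torus."""
    dx = abs(ax - bx)
    dy = abs(ay - by)
    dx = min(dx, 1 - dx)    # wrapping around in x may be shorter
    dy = min(dy, 1 - dy)    # wrapping around in y may be shorter
    return (dx**2 + dy**2) ** 0.5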
Other Model Assumptions
There are a few other modelling assumptions. One of the major decisions in any agent-based model is whether to use synchronous or asynchronous updating. We will assume synchronous updating here: every agent makes its move at the same time. This avoids the problem of agents who are updated "later" within a tick of an asynchronous scheme gaining an "advantage" over those updated first (a particular issue for predator-prey relations: if a predator waits until the prey has moved, it is more likely to catch it!).
To avoid the problem of a predator chasing a single prey agent aimlessly we will use an "eat" dynamic: if a predator gets within a small distance of a prey it will "eat" it, and the prey will disappear and instantly re-spawn at a random location in the environment with a random starting velocity. There is no limit on how many prey a predator can eat in a single step, and it is assumed the predator can eat without stopping or slowing down.
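The eat dynamic itself is only a few lines; a minimal sketch (with array names mirroring those used in the full listing below, and a hypothetical function name) might look like this:
import numpy as np

def eat_and_respawn_sketch(dist, x, y, vx, vy, eat=0.025):
    """Respawn any prey within the eating radius at a random position and velocity."""
    eaten = np.where(dist < eat)[0]              # indices of prey within eating range
    for j in eaten:
        x[j], y[j] = np.random.random(), np.random.random()
        vx[j], vy[j] = 2*np.random.random() - 1, 2*np.random.random() - 1
    return len(eaten)                            # number of prey eaten this step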
The Code
An implementation of this model is given below. I use matplotlib's animation feature to animate the system through time. In the animation the prey are represented by small blue crosses and the predators by larger red diamonds.
import numpy as np
from matplotlib import animation
import matplotlib.pyplot as plt
from IPython.display import HTML
%matplotlib inline
###### Model Parameters ######
dt = 0.01 # Time step
num_frames = 250 # Number of animated frames
burn = 1000 # Number of burn in iterations
# Flock Parameters
N = 250 # Number of flocking agents
r0 = 0.1 # Repulsion force range
eps = 2.0 # Repulsion force strength
rf = 0.25 # Flocking force range
alpha = 10.0 # Flocking force strength
v0 = 0.25 # Target speed
vmax = 0.75 # Maximum speed
mu = 0.1 # Self-Propulsion force
ramp = 1.0 # Random force
rp = 0.5 # Predator-repulsion force range
delta = 5.0 # Predator-repulsion force strength
# Predator Parameters
M = 3 # Number of predator agents
pv_max = 1.0 # Predator speed
pramp = 0.25 # Predator random force
eat = 0.025 # Eating radius
###### Fix Random Seed ######
seed = 123
np.random.seed(seed)
###### Helper Functions ######
def buffer(rb, x, y, vx, vy):
"""
buffer(rb, x, y, vx, vy)
Takes copies of agents near the edges
of the unit square and extends outwards
by a buffer margin rb. This makes it
easier to calculate distances between
agents for flocking and repulsive forces
Inputs:
rb - buffer range
x - array of x coordinate positions [0,1]
y - array of y coordinate positions [0,1]
vx - array of x-component of velocity
vy - array of y-component of velocity
Outputs:
    nb - number of buffered agents (originals plus copies)
    xb, yb, vxb, vyb - buffered copies of the inputs
"""
_N = x.shape[0]
    # Allocate the buffer arrays locally (worst case is 4 entries per agent:
    # the original plus two edge copies and one corner copy), so that separate
    # calls for prey and for predators do not overwrite each other
    xb = np.zeros(4*_N)
    yb = np.zeros(4*_N)
    vxb = np.zeros(4*_N)
    vyb = np.zeros(4*_N)
    xb[0:_N] = x
    yb[0:_N] = y
    vxb[0:_N] = vx
    vyb[0:_N] = vy
    # nb holds the index of the last buffered agent (the originals occupy 0.._N-1)
    nb = _N-1
for i in range(nb+1):
if x[i] <= rb:
# left edge
nb+=1
xb[nb] = x[i]+1
yb[nb] = y[i]
vxb[nb] = vx[i]
vyb[nb] = vy[i]
if x[i] >= 1 - rb:
# right edge
nb+=1
xb[nb] = x[i]-1
yb[nb] = y[i]
vxb[nb] = vx[i]
vyb[nb] = vy[i]
if y[i] <= rb:
# bottom edge
nb+=1
xb[nb] = x[i]
yb[nb] = y[i]+1
vxb[nb] = vx[i]
vyb[nb] = vy[i]
if y[i] >= 1 - rb:
# top edge
nb+=1
xb[nb] = x[i]
yb[nb] = y[i]-1
vxb[nb] = vx[i]
vyb[nb] = vy[i]
if (x[i] <= rb) and (y[i] <= rb):
# bottom left corner
nb+=1
xb[nb] = x[i]+1
yb[nb] = y[i]+1
vxb[nb] = vx[i]
vyb[nb] = vy[i]
if (x[i] <= rb) and (y[i] >= 1 - rb):
# top left corner
nb+=1
xb[nb] = x[i]+1
yb[nb] = y[i]-1
vxb[nb] = vx[i]
vyb[nb] = vy[i]
if (x[i] >= 1 - rb) and (y[i] <= rb):
# bottom right corner
nb+=1
xb[nb] = x[i]-1
yb[nb] = y[i]+1
vxb[nb] = vx[i]
vyb[nb] = vy[i]
if (x[i] >= 1 - rb) and (y[i] >= 1 - rb):
# top right corner
nb+=1
xb[nb] = x[i]-1
yb[nb] = y[i]-1
vxb[nb] = vx[i]
vyb[nb] = vy[i]
    # Return the number of buffered agents (nb currently holds the last filled index)
    return nb+1, xb, yb, vxb, vyb
def force(nb, xb, yb, vxb, vyb, x, y, vx, vy, mb, pxb, pyb):
"""
force(nb, xb, yb, vxb, vyb, x, y, vx, vy, mb, pxb, pyb)
Calculate force applied to agents (x,y) divided into:
- flocking force (flockx, flocky)
- repulsive force (repx, repy)
- Self propulsion (fpropx, fpropy)
- Random force (frandx, frandy)
- Predator repulsive force (fpredx, fpredy)
Inputs:
nb - number of buffered agents
xb - buffered agent x position array
yb - buffered agent y position array
vxb - buffered agent velocity x coordinate array
vyb - buffered agent velocity y coordinate array
x - agent x position array
y - agent y position array
vx - agent x coordinate velocity array
vy - agent y coordinate velocity array
    mb - number of buffered predator agents
pxb - predator x-coordinate position array
pyb - predator y-coordinate position array
    Global Variables Called:
    rf - Flocking force range
    r0 - Repulsion force range
    eps - Repulsion force strength
    alpha - Flocking force strength
    mu - Self propulsion force strength
    v0 - Target speed
    ramp - Random force strength
    rp - Predator repulsion force range
    delta - Predator repulsion force strength
    fx, fy - global force arrays (written in place)
Outputs:
fx, fy - x and y coordinate forces
"""
_N = x.shape[0]
for i in range(_N):
repx = 0
repy = 0
flockx = 0
flocky = 0
nflock = 0
fpredx = 0
fpredy = 0
for j in range(nb):
d2 = (xb[j] - x[i])**2 + (yb[j] - y[i])**2
if (d2 <= rf**2) and (i != j):
flockx += vxb[j]
flocky += vyb[j]
nflock += 1
            if (d2 <= r0**2) and (i != j):  # exclude self (distance zero would divide by zero)
d = np.sqrt(d2)
repx += eps*(1- d / r0)**1.5 * (x[i] - xb[j]) / d
repy += eps*(1- d / r0)**1.5 * (y[i] - yb[j]) / d
        normflock = np.sqrt(flockx**2 + flocky**2)
        if nflock == 0 or normflock == 0:
            # nothing to flock towards - avoid dividing by zero
            normflock = 1
flockx = alpha * flockx / normflock
flocky = alpha * flocky / normflock
vnorm = np.sqrt(vx[i]**2 + vy[i]**2)
fpropx = mu * (v0 - vnorm) * (vx[i] / vnorm)
fpropy = mu * (v0 - vnorm) * (vy[i] / vnorm)
frandx = ramp * (2*np.random.random() - 1)
frandy = ramp * (2*np.random.random() - 1)
fx[i] = (flockx + frandx + fpropx + repx)
fy[i] = (flocky + frandy + fpropy + repy)
for k in range(mb):
d2 = (pxb[k] - x[i])**2 + (pyb[k] - y[i])**2
if (d2 <= rp**2):
d = np.sqrt(d2)
fpredx += delta*(1- d / rp)**1.5 * (x[i] - pxb[k]) / d
fpredy += delta*(1- d / rp)**1.5 * (y[i] - pyb[k]) / d
if fpredx**2 + fpredy**2 > 0:
fx[i] = fpredx
fy[i] = fpredy
return fx, fy
def pred_dist(px1, py1, x1, y1):
"""
pred_dist(px1, py1, x1, y1)
Returns an array of distances from predator to an array of prey
Inputs:
px1 - predator x coordinate
py1 - predator y coordinate
x1 - array of prey x coordinates
y1 - array of prey y coordinates
Outputs:
distance array
"""
dx1 = np.abs(px1-x1)
dx2 = np.abs(1+px1-x1)
dx3 = np.abs(px1-(1+x1))
dy1 = np.abs(py1-y1)
dy2 = np.abs(1+py1-y1)
dy3 = np.abs(py1-(1+y1))
dx = np.minimum(np.minimum(dx1, dx2), dx3)
dy = np.minimum(np.minimum(dy1, dy2), dy3)
return np.sqrt(dx**2+dy**2)
def direction(start_x, start_y, target_x, target_y):
"""
direction(start_x, start_y, target_x, target_y)
Find a unit direction vector from a starting point to a target
Inputs:
start_x - starting point x coordinate
start_y - starting point y coordinate
target_x - target point x coordinate
target_y - target point y coordinate
Returns
dir_x - direction in x
dir_y - direction in y
"""
dx1 = target_x - start_x
dx2 = target_x - (start_x + 1)
dx3 = target_x - (start_x - 1)
dy1 = target_y - start_y
dy2 = target_y - (start_y + 1)
dy3 = target_y - (start_y - 1)
    # Pick the displacement with the smallest magnitude in each coordinate,
    # i.e. the shortest route on the torus
    dir_x = min((dx1, dx2, dx3), key=abs)
    dir_y = min((dy1, dy2, dy3), key=abs)
return dir_x, dir_y
def unitize(vec_x, vec_y):
"""
unitize(vec_x, vec_y)
Returns a vector of same direction as input
but with unit size
"""
    norm = np.sqrt(vec_x**2 + vec_y**2)
    if norm == 0:
        return 0.0, 0.0  # degenerate input: no well-defined direction
    return vec_x/norm, vec_y/norm
def update():
"""
update()
Update function that updates position and
velocities of prey and predator agents by
one time step
Inputs:
None
Outputs:
None (updates values of global variables)
"""
global x
global y
global vx
global vy
global px
global py
global vpx
global vpy
global eat_count
x_old = x.copy()
y_old = y.copy()
nb, xb, yb, vxb, vyb = buffer(max(r0,rf), x, y, vx, vy)
mb, pxb, pyb, vpxb, vpyb = buffer(rp, px, py, vpx, vpy)
    fx, fy = force(nb, xb, yb, vxb, vyb, x, y, vx, vy, mb, pxb, pyb)
# Use force to calculate speeds
vx += fx * dt
vy += fy * dt
# Apply speed limit of maximum velocity
vmod = np.sqrt(vx**2 + vy**2)
vmult = np.where(vmod < vmax, vmod, vmax) / vmod
vx *= vmult
vy *= vmult
# Calculate new positions and re-position in unit square
x += vx * dt
y += vy * dt
x = (1 + x) % 1
y = (1 + y) % 1
# Predator calculation
for i in range(M):
pred_x = px[i]
pred_y = py[i]
# Calculate distances to prey
dist = pred_dist(pred_x, pred_y, x_old, y_old)
# If prey within eat range then eat and regenerate prey
for j in range(N):
if dist[j] < eat:
x[j] = np.random.random()
y[j] = np.random.random()
vx[j] = 2*np.random.random() - 1
vy[j] = 2*np.random.random() - 1
eat_count += 1
        # Find closest prey
        closest = np.argmin(dist)
        prey_x = x_old[closest]
        prey_y = y_old[closest]
# Calculate direction to move in
vpx_t, vpy_t = direction(pred_x, pred_y, prey_x, prey_y)
vpx_t, vpy_t = unitize(vpx_t, vpy_t)
# Simulate random component and normalize
vpx_rand = 2*np.random.random() - 1
vpy_rand = 2*np.random.random() - 1
vpx_rand, vpy_rand = unitize(vpx_rand, vpy_rand)
# Combine prey-direction and random components
vpx[i] = (1-pramp)*vpx_t + pramp*vpx_rand
vpy[i] = (1-pramp)*vpy_t + pramp*vpy_rand
vpx[i], vpy[i] = unitize(vpx[i], vpy[i])
# Scale to maximum velocity
vpx[i] *= pv_max
vpy[i] *= pv_max
# Move predator forward
px[i] += vpx[i]*dt
py[i] += vpy[i]*dt
# Re-position predators to unit square
px = (1 + px) % 1
py = (1 + py) % 1
###### Main Code ######
###### Set Up Arrays ######
# Flock arrays
x = np.zeros(N)
y = np.zeros(N)
vx = np.zeros(N)
vy = np.zeros(N)
fx = np.zeros(N)
fy = np.zeros(N)
# Flock buffer arrays
xb = np.zeros([4*N])
yb = np.zeros([4*N])
vxb = np.zeros([4*N])
vyb = np.zeros([4*N])
# Predator arrays
px = np.zeros(M)
py = np.zeros(M)
vpx = np.zeros(M)
vpy = np.zeros(M)
# Predator buffer arrays
pxb = np.zeros([4*M])
pyb = np.zeros([4*M])
vpxb = np.zeros([4*M])
vpyb = np.zeros([4*M])
# Eat counter initialization
eat_count = 0
# Initialize positions and velocities
for i in range(N):
x[i] = np.random.random()
y[i] = np.random.random()
vx[i] = 2*np.random.random() - 1
vy[i] = 2*np.random.random() - 1
for i in range(M):
px[i] = np.random.random()
py[i] = np.random.random()
vpx[i] = 2*np.random.random() - 1
vpy[i] = 2*np.random.random() - 1
# Burn simulations to improve initial conditions
for i in range(burn):
update()
# Reset Eat count
eat_count = 0
###### Animate ######
# Initialize figure
figure = plt.figure()
axes = plt.axes(xlim=(0, 1), ylim=(0, 1))
axes.set_xticks([], [])
axes.set_yticks([], [])
scatter_prey = axes.scatter(x, y, marker='x', s=25, c='Blue')
scatter_pred = axes.scatter(px, py, marker='D', s=100, c='Red')
# Define animation step
def animate(frame):
update()
prey_data = np.array((x, y)).T
scatter_prey.set_offsets(prey_data)
pred_data = np.array((px, py)).T
scatter_pred.set_offsets(pred_data)
# Display animation function
def display_animation(anim):
# Close initialized static plot
plt.close(anim._fig)
# Returns animation to HTML
return HTML(anim.to_jshtml())
# Set up FuncAnimation
anim = animation.FuncAnimation(figure, animate, frames=num_frames, interval=50)
# Call the display function
display_animation(anim)
Since it is a bit hard to follow everything that is going on with regard to eating prey, we can also track the average number of prey eaten over the animation:
print("Number of eaten prey per predator per frame:", eat_count / M / num_frames)
As the dynamics evolve we can see flocks form; these are occasionally interrupted as a predator "breaks" them apart. When this happens the predator can get "confused" and does not know which flock it should follow - showing how flocking is beneficial to the prey agents! By changing parameters we can observe a wide variety of behaviours owing to the relatively large parameter space.
These emergent behaviours are particularly interesting. It is, seemingly, impossible to predict how the dynamics will behave without running a simulation (or if it is predictable, it is not readily apparent). The model rules seem to exhibit a "computational irreducibility" - there is no "short-cut" to computing them. This situation is prevalent in the social sciences: we can study a lot about individuals yet have no way of extending these ideas to populations. As such, many problems in complex systems (e.g. the social sciences, some areas of biology, etc.) are in some sense incompatible with the prevailing scientific principle of reductionism - the study of irreducible computation (and agent-based modelling) will no doubt have a large impact on how we understand these systems in the future.
Suggested Extensions or Improvements
Part of the beauty of agent-based models such as the one presented here is that we can modify the code to investigate different behaviours. Some things that you may want to consider for yourselves include:
- Study parameter space - The obvious first thing to do is to study the parameters of this model. There are a large number of parameters and so a large variation in potential behaviour. In particular we would like to create a "zoo" of behaviours - a taxonomy of what is possible with the model and whether this captures what is observed in nature. We can also look for "reasonable ranges" of parameters that produce desired outputs. This could also include creating summary statistics to describe the behaviours (see the sketch after this list).
- Improved predator dynamics - At the moment the predators are not very smart; they are incapable of working as a team. We may want to consider the effect of adding a "repulsion force" between predators so that they cover the space more effectively. We could also allow the predators to take the speed of prey into account, so a close, fast-moving prey may be less attractive than a slower prey slightly further away. The predators could "learn" too - perhaps after being well fed they can move more quickly or have a larger eating radius? Perhaps we want to give the predators limited vision? We could really go to town with this and implement a reinforcement learning algorithm to look for "successful" strategies for eating prey.
- Non-static population size - The model so far assumes the population sizes stay fixed. We may wish to allow the prey population to decrease as prey are eaten. Perhaps the predators, when well fed, can reproduce, and if they don't eat for a long time they die off.
- Remove homogeneity - Each agent currently has the same dynamic properties. We may wish to allow for some heterogeneity in the populations - some prey may be slower or have poorer vision and so make easier targets. What if we create classes of prey/predator agents? Perhaps some are highly flock-seeking while others prefer to be alone.
- Environment - We have only looked at this model with periodic boundary conditions. What happens if there are "walls" at the edges so prey can get trapped? What happens if we put barriers/obstacles in the environment?
- Different forces - We could also consider adding different forces or replacing the existing ones. For example, we may not believe that agents can effectively calculate forces based on all their neighbours within a radius; perhaps instead we want to refactor the model so that each agent considers only its k nearest neighbours. Perhaps we could imagine that there is a section of the environment where "food" for the prey exists, so there is a force that draws them towards that area. Perhaps we want to implement a force that "slows down" prey that have been moving quickly (effectively the agents get "tired"). It's not hard to think of other possibilities.
- New agent types - Suppose that instead of a simple predator-prey relation we have multiple species, some happily coexisting while others prey on each other. The "prey" in the existing model could have an attractive force towards a species higher up the food chain that eats their predator! We could also investigate "rock-paper-scissors" type behaviour where no one species is "dominant" by design.
- Performance enhancements - The code as presented is reasonably efficient but could be made to run faster. Rewriting the hot loops with Cython/Numba would be the obvious choice, but there are also opportunities for more structural changes that could improve efficiency. For example, we could approximate the forces using a quadtree (as in the Barnes-Hut N-body algorithm).
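As a starting point for the parameter-space study suggested above, a rough sketch of a sweep over two parameters is given below. It assumes the model has been wrapped into a hypothetical run_simulation(...) helper (not defined in this post) that re-initialises the arrays, runs a fixed number of updates and returns the resulting eat_count.
import itertools

M = 3                                # number of predators, as in the listing above
steps = 1000                         # updates per run
alphas = [1.0, 5.0, 10.0, 20.0]      # flocking strengths to try
deltas = [1.0, 5.0, 20.0]            # predator-repulsion strengths to try

results = {}
for a, d in itertools.product(alphas, deltas):
    # run_simulation is a hypothetical wrapper around the model code above
    eaten = run_simulation(alpha=a, delta=d, steps=steps)
    results[(a, d)] = eaten / (M * steps)       # prey eaten per predator per step

for (a, d), rate in sorted(results.items()):
    print(f"alpha={a:5.1f} delta={d:5.1f} -> {rate:.5f} eaten per predator per step")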
Conclusion
In this blog post we have seen how to implement a basic flocking algorithm with predators added to the environment. Although each individual agent in the model is "dumb", complex population-level behaviours emerge from relatively simple rules. This suggests some interesting applications for these methods - for example we could model crowd behaviour at rock concerts or sports events to improve safety, or model evacuation procedures for large buildings/venues to influence their design. We could also use these techniques in video games and CGI for films, where we may wish to generate large crowds that look "natural" rather than repeated/regular.
Hopefully this post acts as a good motivation for the use of agent-based models in general and in particular how they can be used to study emergent behaviours in complex systems.