The simple example is two-dimensional, using the sphere function (pictured right). With each iteration the flies get pulled slightly closer to the lowest point of the sphere, homing in on both the fly with the best overall fitness and their fittest neighbours.
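For reference, the sphere function itself is about as simple as fitness landscapes get: the sum of squared coordinates, minimised at the origin. A minimal sketch (the function name is mine):

```python
def sphere(position):
    """Fitness of a fly on the sphere landscape: sum of squared
    coordinates. Lower is better; the minimum is 0 at the origin."""
    return sum(x * x for x in position)
```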
It’s worth noting that a fly’s neighbours are determined by its position in the list or array, rather than its position in the data. For example: fly[x] will consider the positions of fly[x-1], fly[x+1] and fly[fittest]. Of the two flies immediately before and after it in the list, the less fit is disregarded.
Flies can get stuck in places they believe to be best, purely because none of them has encountered a better location yet. This can lead them to settle in the wrong location on wavy planes, or on planes that are predominantly smooth apart from a narrow, isolated spike in fitness. To aid their search, the flies need to be disturbed on occasion, sending a number of them scattering. If there’s an improvement to be found, this scattering is likely to find it.
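Putting the pieces together, the neighbour rule and the occasional disturbance, one iteration might look like the following. This is only a sketch of my understanding: the ring topology over the list, the uniform random scaling, and the `delta` disturbance threshold are assumptions drawn from the standard DFO formulation, and the function names are mine.

```python
import random

def dfo_step(flies, fitness, lower, upper, delta=0.001):
    """One DFO iteration (a sketch): each dimension of each fly moves
    from its fitter ring neighbour towards the swarm best, unless a
    random disturbance scatters that dimension instead."""
    n = len(flies)
    scores = [fitness(f) for f in flies]
    best = min(range(n), key=lambda i: scores[i])  # swarm-best index
    new_flies = []
    for i, fly in enumerate(flies):
        # Neighbours come from list position, wrapping at the ends;
        # the less fit of the two is disregarded.
        left, right = (i - 1) % n, (i + 1) % n
        nb = left if scores[left] < scores[right] else right
        new = []
        for d in range(len(fly)):
            if random.random() < delta:
                # Disturbance: reset this dimension uniformly at random.
                new.append(random.uniform(lower, upper))
            else:
                new.append(flies[nb][d] +
                           random.random() * (flies[best][d] - fly[d]))
        new_flies.append(new)
    return new_flies
```

With `delta` at zero the swarm converges greedily; nudging it up trades some exploitation for the scattering described above.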
The whole process seems to be a careful balance of exploring the data and exploiting the flies.
In a way, DFO reminds me of the ELO rating system, and it made me wonder how well the pair would work, either in parallel with or in place of ELO for determining the best players of a game. However, the more I thought about it the more it seemed like a convoluted and ineffective idea. It could, however, prove interesting to plug demographic information into it in an attempt to find out where the best players in the world come from.
Of course, we can put something like this to much better use than that. How about finding out where the best place in the world to live would be? If we plugged in the cost and various qualities of visas, housing, electricity, internet and other clearly less important utilities we might be able to find the perfect place to live. Failing that, we’d certainly find an interesting place to consider living.
I’m also quite tempted to plug in some data from AirBnB to see what it spits back.
Either way, that’s quite the chain of cognitive leaps from the original thought: “It reminds me of ELO.”
The given exercise for this was to write a fitness function and adjust the agents’ behaviour, such that they coalesced on the white spot of a grey-scale image. The fitness function itself is rather straightforward – or it should be, black being 0 and white being 255 and all. The less obvious adjustment comes from altering the agents’ behaviour so that they don’t follow a leader. At the time I didn’t realise that having an agent compare itself to the best agent in the collection was unnecessary. It is unnecessary, by the way; I don’t know why Mohammad initially showed us a behaviour that involved it.
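The fitness function from the exercise might be sketched like this, assuming the image is a 2-D grid of grey values (0 = black, 255 = white), that a fly’s position indexes a pixel as (x, y), and that lower fitness is better, so the grey value gets inverted:

```python
def image_fitness(position, image):
    """Fitness of a fly over a grey-scale image (a sketch): a white
    pixel (255) scores 0, a black pixel (0) scores 255, so minimising
    fitness draws the swarm towards the white spot."""
    x, y = int(position[0]), int(position[1])
    return 255 - image[y][x]
```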