Assume that your GA has chromosomes in the following structure:
ch = (g0, g1, g2, g3, g4, g5, g6, g7)
g0–g7 can each be any digit from zero to nine.
The fitness of each chromosome is calculated using the following formula:
f(ch) = (g0 + g1) – (g2 + g3) + (g4 + g5 ) – (g6 + g7)
BODMAS is a thing…
This problem is a maximisation problem.
In this scenario we initially have a population of four chromosomes as shown below: Continue reading
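The fitness calculation above is easy to sketch in code. Here's a minimal Python version; the function name and the randomly generated example population are mine, not from the post:

```python
import random

def fitness(ch):
    """Fitness of an 8-gene chromosome: (g0+g1) - (g2+g3) + (g4+g5) - (g6+g7)."""
    return (ch[0] + ch[1]) - (ch[2] + ch[3]) + (ch[4] + ch[5]) - (ch[6] + ch[7])

# An initial population of four chromosomes, each gene a digit 0-9
random.seed(42)
population = [[random.randint(0, 9) for _ in range(8)] for _ in range(4)]
for ch in population:
    print(ch, fitness(ch))

# Since we're maximising, the best possible chromosome puts 9s in the
# added positions and 0s in the subtracted ones:
assert fitness([9, 9, 0, 0, 9, 9, 0, 0]) == 36
```

Note the sign pattern: genes g2, g3, g6 and g7 are subtracted, so the maximum fitness is 36 and the minimum is -36.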
Once again with definitions and stuff. I’m sure this makes for an absolutely thrilling read. Below we talk about the different types of genetic algorithm. Pretty straightforward.
Genetic algorithms are a form of evolutionary computation pioneered by one John Henry Holland in 1975. At the time, the main limitation of applying early genetic algorithms was computing power. Because apparently my current computer is like 30,000 times more powerful than my first computer. Yeah… Continue reading
It’s been a thousand years since the Great Exodus and little fewer since the Age of Dragons came to an end. The horrors of The Old World have long since been forgotten. Civilisation thrives in the new land of Epimia.
This week on Natural Algorithms: We learn some terminology, Kriss makes a wisecrack and a dog does science!
Okay, so the point here is that we’re looking at different types of fitness functions. We classify the fitness function an algorithm uses by what it returns and by how the algorithm searches for the result. Notably, a continuous or boolean function can be applied to both a full function search and a partial function search. Continue reading
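The return-type distinction is the easy half to show in code. A toy sketch, with names of my own choosing: a boolean fitness function only says pass/fail, while a continuous one gives a graded score the search can climb.

```python
def boolean_fitness(candidate, target):
    """Boolean fitness: a yes/no answer -- the candidate either is the target or it isn't."""
    return candidate == target

def continuous_fitness(candidate, target):
    """Continuous fitness: a graded score -- higher means closer to the target."""
    return -abs(candidate - target)

# The boolean function gives the search nothing to climb:
print(boolean_fitness(7, 10))      # False, same answer as for any wrong guess
# The continuous function ranks wrong guesses, so the search has a gradient:
print(continuous_fitness(7, 10))   # -3
print(continuous_fitness(2, 10))   # -8, so 7 is the better candidate
```

This is why boolean fitness tends to turn a search into trial and error: every wrong candidate looks equally wrong.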
Stochastic Diffusion Search is a search algorithm that can take the form of either a neural network or a swarm, and attempts an optimal allocation of resources. The agents scatter randomly across the search area and keep probing random locations until either they or one of their neighbours find a location that they judge to be good. An agent that finds a good location tries to take a neighbour to that location (which will in turn judge whether it believes the location to be good or bad, and do the same), while an agent that fails to find a good location will, given the opportunity, follow a neighbour that has. Continue reading
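That test-then-diffuse loop is compact enough to sketch. Below is a minimal SDS in Python of my own construction (function names and the example string are mine): each agent holds a hypothesis (a start index for a pattern), tests one randomly chosen component of it per round, and inactive agents either copy an active neighbour's hypothesis or restart at random.

```python
import random

def sds_search(model, search_space, n_agents=20, iterations=100):
    """Minimal Stochastic Diffusion Search: find where `model` starts
    within `search_space` via partial evaluation and diffusion."""
    positions = list(range(len(search_space) - len(model) + 1))
    hyps = [random.choice(positions) for _ in range(n_agents)]
    active = [False] * n_agents

    for _ in range(iterations):
        # Test phase: each agent checks ONE random character of its hypothesis
        for i, h in enumerate(hyps):
            j = random.randrange(len(model))
            active[i] = (search_space[h + j] == model[j])
        # Diffusion phase: inactive agents poll a random other agent
        for i in range(n_agents):
            if not active[i]:
                other = random.randrange(n_agents)
                hyps[i] = hyps[other] if active[other] else random.choice(positions)

    # Return the hypothesis most agents have converged on
    return max(set(hyps), key=hyps.count)

random.seed(1)
space = "the cat sat while the dog did science"
print(sds_search("dog", space))
```

Only the true match passes every partial test, so agents pile up there; partial matches keep losing agents whenever the wrong character gets sampled.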
In this post, we’ll be exploring the application of Dispersive Flies Optimisation, as originally pondered in my previous post. Specifically, we’ll discuss applying DFO to AirBnB data, as the AirBnB data is readily available with very little effort. I will be referring to the data provided for London; however, the datasets for other cities share the same structure.
There are probably loads of ways we can apply DFO to search this information; I’m going to be looking for the best place to stay. Continue reading
It’s quite curious that at the time of writing, this algorithm doesn’t have so much as a Wikipedia page. Heck, a cursory Google search implies that the individual who came up with it is the very same guy who asked for people to think about it.
Yeah, you! I know you’re out there Mohammad!
Dispersive Flies Optimisation
Dispersive Flies Optimisation (DFO) is a swarm intelligence algorithm that aims to find the best piece of data in a matrix. How good a piece of data (referred to as an agent) is, is judged by its fitness. An agent contains data, or I suppose metadata, such as a location, among other possible things. The flies are then scattered across various locations within the data and set to finding the optimal location: the one with either the lowest or the highest possible fitness, depending on the problem. Continue reading
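The core loop is small enough to sketch. Below is a minimal DFO for a minimisation problem, written by me under the usual description of the algorithm (all names and parameter values are mine): each fly moves toward its best ring-neighbour, pulled by the swarm's best fly, and with a small probability each dimension is reset to a random value, which is the "dispersal" that keeps the swarm from getting stuck.

```python
import random

def dfo(objective, dim, bounds, n_flies=20, iterations=200, delta=0.001):
    """Minimal Dispersive Flies Optimisation (minimisation)."""
    lo, hi = bounds
    flies = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_flies)]
    for _ in range(iterations):
        fits = [objective(f) for f in flies]
        best_i = fits.index(min(fits))
        sbest = flies[best_i]
        new = []
        for i, f in enumerate(flies):
            if i == best_i:
                new.append(f[:])  # the swarm's best fly keeps its position
                continue
            # Best of the fly's two ring neighbours
            left, right = flies[(i - 1) % n_flies], flies[(i + 1) % n_flies]
            nb = left if objective(left) < objective(right) else right
            pos = []
            for d in range(dim):
                if random.random() < delta:  # dispersal: random restart
                    pos.append(random.uniform(lo, hi))
                else:
                    pos.append(nb[d] + random.random() * (sbest[d] - f[d]))
            new.append(pos)
        flies = new
    fits = [objective(f) for f in flies]
    return flies[fits.index(min(fits))]

random.seed(0)
best = dfo(lambda x: sum(v * v for v in x), dim=2, bounds=(-5, 5))
print(best)  # should land near [0, 0], the minimum of the sphere function
```

For a maximisation problem like the AirBnB search, you'd either flip the comparison or negate the objective.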
There are two dominant No Free Lunch theorems that relate to computing. One focuses on search and optimisation, while the other is for supervised machine learning.
The Key Point
Eric Cai eloquently describes the No Free Lunch theorem in relation to Machine Learning. He describes the theorem as a series of simplifications and assumptions that apply to a problem and its solution. In turn, he also points out that the simplifications and assumptions that work for one problem and its solution will not necessarily work on another problem, thus making the solution ineffective.
The “No Free Lunch” theorem states that there is no one model that works best for every problem
The idea that a solution cannot simply be picked up and applied to another problem without any work at all is most likely the origin of the name.
The original No Free Lunch Theorems for Optimization paper can be found here, if you’re into that kind of thing.