Unsupervised Learning in an Ensemble of Spiking Neural Networks Mediated by ITDP
19/10/2016
PLOS Computational Biology
Y. Shim, K. Staras, P. Husbands
PLOS
We propose a biologically plausible architecture for unsupervised ensemble learning in a population of spiking neural network classifiers. A mixture of experts type organisation is shown to be effective, with the individual classifier outputs combined via a gating network whose operation is driven by input timing dependent plasticity (ITDP). The ITDP gating mechanism is based on recent experimental findings. An abstract, analytically tractable model of the ITDP driven ensemble architecture is derived from a logical model based on the probabilities of neural firing events. A detailed analysis of this model provides insights that allow it to be extended into a full, biologically plausible, computational implementation of the architecture which is demonstrated on a visual classification task. The extended model makes use of a style of spiking network, first introduced as a model of cortical microcircuits, that is capable of Bayesian inference, effectively performing expectation maximization. The unsupervised ensemble learning mechanism, based around such spiking expectation maximization (SEM) networks whose combined outputs are mediated by ITDP, is shown to perform the visual classification task well and to generalize to unseen data. The combined ensemble performance is significantly better than that of the individual classifiers, validating the ensemble architecture and learning mechanisms. The properties of the full model are analysed in the light of extensive experiments with the classification task, including an investigation into the influence of different input feature selection schemes and a comparison with a hierarchical STDP based ensemble architecture.
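As a rough illustration of the gating mechanism described above, the sketch below is a much-simplified, non-spiking caricature (member reliabilities, latencies and learning rates are invented for the example, and it is not the paper's SEM/ITDP model): each ensemble member emits an output event, a gating weight is potentiated when that event is nearly coincident with a reference input event, and the learned weights then combine the members' votes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Much-simplified, non-spiking caricature of ITDP gating (illustrative numbers,
# not the paper's SEM/ITDP model): 5 ensemble members, 3 classes.  Each member
# emits an output event; reliable members tend to answer correctly and with a
# short latency relative to a reference input event at t = 0 ms.
n_experts, n_classes, n_trials = 5, 3, 200
reliability = np.array([0.9, 0.8, 0.6, 0.5, 0.3])     # assumed member accuracies
gate = np.ones(n_experts) / n_experts                  # gating weights (normalised)
tau, eta = 10.0, 0.05                                  # coincidence time constant, learning rate

for _ in range(n_trials):
    true_class = rng.integers(n_classes)
    correct = rng.random(n_experts) < reliability
    votes = np.where(correct, true_class, rng.integers(n_classes, size=n_experts))
    latency = np.where(correct, rng.normal(5, 1, n_experts), rng.normal(25, 5, n_experts))
    # ITDP-like rule: potentiate a gate weight when the member's output event is
    # nearly coincident with the reference input event (no labels are used).
    gate += eta * np.exp(-np.abs(latency) / tau)
    gate /= gate.sum()

def ensemble_predict(votes):
    scores = np.zeros(n_classes)
    for w, v in zip(gate, votes):
        scores[v] += w
    return int(scores.argmax())

print("learned gate weights:", np.round(gate, 3))
print("ensemble prediction for the last trial:", ensemble_predict(votes), "true:", true_class)
```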
Breeding novel solutions in the brain: a model of Darwinian neurodynamics
28/09/2016
András Szilágyi, István Zacha, Anna Fedor, Harold P. de Vladar, Eörs Szathmáry
F1000Research Ltd.
Background: The fact that surplus connections and neurons are pruned during development is well established. We complement this selectionist picture by a proof-of-principle model of evolutionary search in the brain, that accounts for new variations in theory space. We present a model for Darwinian evolutionary search for candidate solutions in the brain.
Methods: We combine known components of the brain – recurrent neural networks (acting as attractors), the action selection loop and implicit working memory – to provide the appropriate Darwinian architecture. We employ a population of attractor networks with palimpsest memory. The action selection loop is employed with winners-share-all dynamics to select for candidate solutions that are transiently stored in implicit working memory.
Results: We document two processes: selection of stored solutions and evolutionary search for novel solutions. During the replication of candidate solutions attractor networks occasionally produce recombinant patterns, increasing variation on which selection can act. Combinatorial search acts on multiplying units (activity patterns) with hereditary variation and novel variants appear due to (i) noisy recall of patterns from the attractor networks, (ii) noise during transmission of candidate solutions as messages between networks, and, (iii) spontaneously generated, untrained patterns in spurious attractors.
Conclusions: Attractor dynamics of recurrent neural networks can be used to model Darwinian search. The proposed architecture can be used for fast search among stored solutions (by selection) and for evolutionary search when novel candidate solutions are generated in successive iterations. Since all the suggested components are present in advanced nervous systems, we hypothesize that the brain could implement a truly evolutionary combinatorial search system, capable of generating novel variants.
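To make the loop described above concrete, here is a minimal sketch using Hopfield-style attractor networks and a simple bit-string fitness; the palimpsest memory, network sizes and noise levels are stand-ins rather than the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(1)
N, POP, GENS = 64, 20, 60          # pattern length, number of attractor nets, generations
target = rng.choice([-1, 1], N)    # hypothetical task: maximise overlap with a target pattern

def hebb(patterns):
    """Hopfield-style weight matrix built from a list of +-1 patterns (toy palimpsest)."""
    W = sum(np.outer(p, p) for p in patterns) / len(patterns)
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, cue, steps=200, noise=0.05):
    """Noisy asynchronous recall; the noise is one source of heritable variation."""
    s = cue.copy()
    for _ in range(steps):
        i = rng.integers(len(s))
        s[i] = 1 if W[i] @ s + rng.normal(0, noise) >= 0 else -1
    return s

fitness = lambda p: (p == target).mean()

stored = [[rng.choice([-1, 1], N)] for _ in range(POP)]   # one random pattern per network
for gen in range(GENS):
    nets = [hebb(p) for p in stored]
    candidates = [recall(W, rng.choice([-1, 1], N)) for W in nets]   # implicit working memory
    best = max(candidates, key=fitness)                              # action-selection loop
    # "Multiplication with variability": every network is retrained on a mutated copy
    # of the winning pattern (transmission noise).
    for p in stored:
        child = best * np.where(rng.random(N) < 0.02, -1, 1)
        p.append(child)
        if len(p) > 3:          # crude palimpsest: forget the oldest stored pattern
            p.pop(0)
print("best overlap with target:", fitness(best))
```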
Intuition and Insight: Two Processes That Build on Each Other or Fundamentally Differ?
13/09/2016
T. Zander, M. Öllinger, K. G. Volz
Intuition and insight are intriguing phenomena of non-analytical mental functioning: whereas intuition denotes ideas that have been reached by sensing the solution without any explicit representation of it, insight has been understood as the sudden and unexpected apprehension of the solution by recombining the single elements of a problem. By face validity, the two processes appear similar; according to a lay perspective, it is assumed that intuition precedes insight. Yet, predominant scientific conceptualizations of intuition and insight consider the two processes to differ with regard to their (dis-)continuous unfolding. That is, intuition has been understood as an experience-based and gradual process, whereas insight is regarded as a genuinely discontinuous phenomenon. Unfortunately, both processes have been investigated differently and without much reference to each other. In this contribution, we therefore set out to fill this lacuna by examining the conceptualizations of the assumed underlying cognitive processes of both phenomena, and by also referring to the research traditions and paradigms of the respective field. Based on early work put forward by Bowers et al. (1990, 1995), we referred to semantic coherence tasks consisting of convergent word triads (i.e., the solution has the same meaning to all three clue words) and/or divergent word triads (i.e., the solution means something different with respect to each clue word) as an excellent kind of paradigm that may be used in the future to disentangle intuition and insight experimentally. By scrutinizing the underlying mechanisms of intuition and insight, with this theoretical contribution, we hope to launch lacking but needed experimental studies and to initiate scientific cooperation between the research fields of intuition and insight that are currently still separated from each other.
An Attractor Network-Based Model with Darwinian Dynamics
11/08/2016
Harold P. de Vladar, Anna Fedor, András Szilágyi, István Zachar, Eörs Szathmáry
ACM Press
The human brain can generate new ideas, hypotheses and candidate solutions to difficult tasks with surprising ease. We argue that this process has evolutionary dynamics, with multiplication, inheritance and variability all implemented in neural matter. This inspires our model, whose main component is a population of recurrent attractor networks with palimpsest memory that can store correlated patterns. The candidate solutions are represented as output patterns of the attractor networks and they are maintained in implicit working memory until they are evaluated by selection. The best patterns are then multiplied and fed back to the attractor networks as noisy versions of these patterns (inheritance with variability), thus generating a new generation of candidate hypotheses. These components implement a truly Darwinian process which is more efficient than either natural selection on genetic inheritance or learning alone. We argue that this type of evolutionary search with learning can be the basis of high-level cognitive processes, such as problem solving or language.
Entraining and copying of temporal correlations in dissociated cultured neurons
24/07/2016
Terri Roberts, Kevin Staras, Philip Husbands, Andrew Philippides
Springer International Publishing
Here we used multi-electrode array technology to examine the encoding of temporal information in dissociated hippocampal networks. We demonstrate that two connected populations of neurons can be trained to encode a defined time interval, and this memory trace persists for several hours. We also investigate whether the spontaneous firing activity of a trained network can act as a template for copying the encoded time interval to a naive network. Such findings are of general significance for understanding fundamental principles of information storage and replication.
Darwinian Dynamics of Embodied Chaotic Exploration
20/07/2016
Y. Shim, J. Auerbach, P. Husbands
ACM
We present Embodied Chaotic Exploration (ECE), a novel direction of research into a possible candidate for Darwinian neural dynamics, where such dynamics are occurring not at the level of synaptic connections, but rather at the slightly higher and more abstract level of embodied motor pattern attractors. Crucially, the (chaotic) neurodynamics are embodied and it is the whole neuro-body-environment system that must be considered, although the changes occur at the neural level. ECE incrementally explores and learns motor behaviors through an integrated combination of chaotic search and reflex learning. The architecture developed here allows real-time, goal-directed exploration and learning of the possible motor patterns (e.g. for locomotion) of embodied systems of arbitrary morphology. The overall iterative search process formed from this combination is shown to have strong parallels with evolutionary search.
Gaining Insight into Quality Diversity
20/07/2016
Joshua E. Auerbach, Giovanni Iacca and Dario Floreano
ACM New York, NY, USA
Recently there has been a growing movement of researchers that believes innovation and novelty creation, rather than pure optimization, are the true strengths of evolutionary algorithms relative to other forms of machine learning. This idea also provides one possible explanation for why evolutionary processes may exist in nervous systems on top of other forms of learning. One particularly exciting corollary of this is that evolutionary algorithms may be used to produce what Pugh et al. have dubbed Quality Diversity (QD): as many as possible different solutions (according to some characterization), which are all as fit as possible. While the notion of QD implies choosing the dimensions on which to measure diversity and performance, we propose that it may be possible (and desirable) to free the evolutionary process from requiring these dimensions to be defined. Toward that aim, we seek to understand more about QD in general by investigating how algorithms informed by different measures of diversity (or none at all) create QD, when that QD is measured in a diversity of ways.
The Seamless Peer and Cloud Evolution Framework
20/07/2016
G. Leclerc, J.E. Auerbach, G. Iacca, and D. Floreano
ACM New York, NY, USA
Evolutionary algorithms are increasingly being applied to problems that are too computationally expensive to run on a single personal computer due to costly fitness function evaluations and/or large numbers of fitness evaluations. Here, we introduce the Seamless Peer And Cloud Evolution (SPACE) framework, which leverages bleeding edge web technologies to allow the computational resources necessary for running large scale evolutionary experiments to be made available to amateur and professional researchers alike, in a scalable and cost-effective manner, directly from their web browsers. The SPACE framework accomplishes this by distributing fitness evaluations across a heterogeneous pool of cloud compute nodes and peer computers. As a proof of concept, this framework has been attached to the RoboGen™ open-source platform for the co-evolution of robot bodies and brains, but importantly the framework has been built in a modular fashion such that it can be easily coupled with other evolutionary computation systems.
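Purely as an illustration of the division of labour (and not SPACE's actual browser/cloud machinery), the toy below keeps the population in one process and farms fitness evaluations out to a pool of workers; the genome representation and fitness function are placeholders.

```python
import random
from concurrent.futures import ProcessPoolExecutor

# Toy stand-in for distributed evaluation: the "server" keeps the population and
# hands individual fitness evaluations to whatever workers are available (here
# local processes; in SPACE these would be cloud nodes and volunteer browsers).

def evaluate(genome):
    # Placeholder for an expensive evaluation (e.g. a physics-based robot
    # simulation); here just a negated sphere function on a real-valued genome.
    return -sum(x * x for x in genome)

def evolve(pop_size=32, genome_len=10, generations=20, workers=4):
    pop = [[random.uniform(-1, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for _ in range(generations):
            fits = list(pool.map(evaluate, pop))          # evaluations run in parallel
            ranked = [g for _, g in sorted(zip(fits, pop), reverse=True)]
            parents = ranked[: pop_size // 2]
            pop = [
                [x + random.gauss(0, 0.05) for x in random.choice(parents)]
                for _ in range(pop_size)
            ]
    return max(pop, key=evaluate)

if __name__ == "__main__":
    print(evolve())
```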
Agent-based models for the emergence and evolution of grammar
11/05/2016
Philosophical Transactions of the Royal Society B
Luc Steels
Royal Society
Human languages are extraordinarily complex adaptive systems. They feature intricate hierarchical sound structures, are able to express elaborate meanings and use sophisticated syntactic and semantic structures to relate sound to meaning. What are the cognitive mechanisms that speakers and listeners need to create and sustain such a remarkable system? What is the collective evolutionary dynamics that allows a language to self-organize, become more complex and adapt to changing challenges in expressive power? This paper focuses on grammar. It presents a basic cycle observed in the historical language record, whereby meanings move from lexical to syntactic and then to a morphological mode of expression before returning to a lexical mode, and discusses how we can discover and validate mechanisms that can cause these shifts using agent-based models.
This article is part of the themed issue ‘The major synthetic evolutionary transitions’.
Learning to Generate Genotypes with Neural Networks
14/04/2016
Alexander W. Churchill, Siddharth Sigtia, Chrisantha Fernando
MIT
Neural networks and evolutionary computation have a rich intertwined history. They most commonly appear together when an evolutionary algorithm optimises the parameters and topology of a neural network for reinforcement learning problems, or when a neural network is applied as a surrogate fitness function to aid the evolutionary optimisation of expensive fitness functions. In this paper we take a different approach, asking whether a neural network can be used to provide a mutation distribution for an evolutionary algorithm, and what advantages this approach may offer. Two modern neural network models are investigated, a Denoising Autoencoder modified to produce stochastic outputs and the Neural Autoregressive Distribution Estimator. Results show that the neural network approach to learning genotypes is able to solve many difficult discrete problems, such as MaxSat and HIFF, and regularly outperforms other evolutionary techniques.
Neuronal boost to evolutionary dynamics
23/10/2015
P. de Vladar, E. Szathmary
The Royal Society publishing
Standard evolutionary dynamics is limited by the constraints of the genetic system. A central message of evolutionary neurodynamics is that evolutionary dynamics in the brain can happen in a neuronal niche in real time, despite the fact that neurons do not reproduce. We show that Hebbian learning and structural synaptic plasticity broaden the capacity for informational replication and guided variability provided a neuronally plausible mechanism of replication is in place. The synergy between learning and selection is more efficient than the equivalent search by mutation selection. We also consider asymmetric landscapes and show that the learning weights become correlated with the fitness gradient. That is, the neuronal complexes learn the local properties of the fitness landscape, resulting in the generation of variability directed towards the direction of fitness increase, as if mutations in a genetic pool were drawn such that they would increase reproductive success. Evolution might thus be more efficient within evolved brains than among organisms out in the wild.
Usage-based Grammar Learning as Insight Problem Solving
01/09/2015
Emilia Garcia-Casademont and Luc Steels
We report on computational experiments in which a learning agent incrementally acquires grammar from a tutoring agent through situated embodied interactions. The learner is able to detect impasses in routine language processing, such as missing a grammatical construction to integrate a word in the rest of the sentence structure, to move to a meta-level to repair these impasses, primarily based on semantics, and to then expand or restructure its grammar using insights gained from repairs. The paper proposes a cognitive architecture able to support this kind of insight learning and tests it on a grammar learning task.
Problem solving stages in the five square problem
04/08/2015
Anna Fedor, Eörs Szathmáry and Michael Öllinger
According to the restructuring hypothesis, insight problem solving typically progresses through consecutive stages of search, impasse, insight, and search again for someone who solves the task. The order of these stages was determined through self-reports of problem solvers and has never been verified behaviorally. We asked whether individual analysis of the problem solving attempts of participants revealed the same order of problem solving stages as defined by the theory, and whether their subjective feelings corresponded to the problem solving stages they were in. Our participants tried to solve the Five-Square problem in an online task, while we recorded the time and trajectory of their stick movements. After the task they were asked about their feelings related to insight and some of them also had the possibility of reporting impasse while working on the task. We found that the majority of participants did not follow the classic four-stage model of insight, but followed more complex sequences of problem solving stages, with search and impasse recurring several times. This means that the classic four-stage model is not sufficient to describe variability on the individual level. We revised the classic model and we provide a new model that can generate all sequences found. Solvers reported insight more often than non-solvers and non-solvers reported impasse more often than solvers, as expected; but participants did not report impasse more often during behaviorally defined impasse stages than during other stages. This shows that impasse reports might be unreliable indicators of impasse. Our study highlights the importance of individual analysis of problem solving behavior to verify insight theory.
Co-Acquisition of Syntax and Semantics — An Investigation in Spatial Language
01/08/2015
M. Spranger and L. Steels
How to play the Syntax Game
01/08/2015
Luc Steels and Emilia Gracia Casademont
Proceedings of the European Conference on Artificial Life 2015
Insight and Intuition – two sides of the same coin?
01/08/2015
Maze Navigation and Memory with Physical Reservoir Computers
21/07/2015
C. Johnson, A. Philippides, P. Husbands
Proc. Late Breaking Papers ECAL'15
The extent to which an organism’s morphology may shape its behaviour is increasingly studied, but still not well understood (McGeer, 1990; Pfeifer and Bongard, 2007; Nakajima et al., 2015; Caluwaerts et al., 2012; Zhao et al., 2013).
Hauser et al. (2011, 2012) introduced mass-spring-damper (MSD) reservoir networks as morphologically computing abstracted bodies. As these networks are abstracted from biological bodies, the two will share some properties and capabilities, and studying the former may give us useful clues about the latter. We have previously applied small MSD network pairs to the production of reactive behaviour often referred to as ‘minimally cognitive’ (Johnson et al., 2014, 2015). Here we go on to use similar controllers to solve a target-seeking problem for a mobile agent in a maze, which necessitates memory, over a finite but extended period. If MSD networks with relatively few elements but still high dynamic complexity can solve navigation problems requiring this kind of short term memory, then we may speculate that simple organisms can also.
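A minimal sketch of the general idea, under assumptions that differ from the paper's evolved controllers: a small chain of masses coupled by randomised springs and dampers acts as a physical reservoir, and a linear readout trained by ridge regression attempts a simple short-term-memory task (reproducing a delayed copy of the input).

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal MSD-reservoir sketch (parameters illustrative, not the paper's networks).
N, T, dt, delay = 10, 4000, 0.01, 50
k = rng.uniform(50, 150, N)       # spring constants (randomised, as in MSD reservoirs)
d = rng.uniform(0.5, 2.0, N)      # damping coefficients
x = np.zeros(N)                   # displacements
v = np.zeros(N)                   # velocities
u = rng.uniform(-1, 1, T)         # random input force applied to the first mass

states = np.zeros((T, 2 * N))
for t in range(T):
    coupling = np.zeros(N)        # nearest-neighbour spring coupling along the chain
    coupling[:-1] += k[1:] * (x[1:] - x[:-1])
    coupling[1:]  += k[1:] * (x[:-1] - x[1:])
    force = -k * x - d * v + coupling
    force[0] += 5.0 * u[t]
    v += dt * force               # unit masses, semi-implicit Euler integration
    x += dt * v
    states[t] = np.concatenate([x, v])

# Ridge-regression readout predicting u(t - delay) from the reservoir state.
S, y = states[delay:], u[:-delay]
W = np.linalg.solve(S.T @ S + 1e-3 * np.eye(2 * N), S.T @ y)
pred = S @ W
print("memory-task correlation:", np.corrcoef(pred, y)[0, 1].round(3))
```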
Rapid Evolution of Robot Gaits
11/07/2015
GECCO Companion '15 Proceedings of the Companion Publication of the 2015 Annual Conference on Genetic and Evolutionary Computation
Joshua E. Auerbach, Grégoire Heitz, Przemyslaw M. Kornatowski, and Dario Floreano
ACM New York, NY, USA
Incremental embodied chaotic exploration of self-organized motor behaviors with proprioceptor adaptation
26/03/2015
Yoonsik Shim and Phil Husbands
This paper presents a general and fully dynamic embodied artificial neural system, which incrementally explores and learns motor behaviors through an integrated combination of chaotic search and reflex learning. The former uses adaptive bifurcation to exploit the intrinsic chaotic dynamics arising from neuro-body-environment interactions, while the latter is based around proprioceptor adaptation. The overall iterative search process formed from this combination is shown to have a close relationship to evolutionary methods. The architecture developed here allows realtime goal-directed exploration and learning of the possible motor patterns (e.g., for locomotion) of embodied systems of arbitrary morphology. Examples of its successful application to a simple biomechanical model, a simulated swimming robot, and a simulated quadruped robot are given. The tractability of the biomechanical systems allows detailed analysis of the overall dynamics of the search process. This analysis sheds light on the strong parallels with evolutionary search.
Phenotypic plasticity, the Baldwin effect, and the speeding up of evolution: The computational roots of an illusion
19/02/2015
Mauro Santos, Eörs Szathmáry, José F. Fontanari
An increasing number of dissident voices claim that the standard neo-Darwinian view of genes as ‘leaders’ and phenotypes as ‘followers’ during the process of adaptive evolution should be turned on its head. This idea is older than the rediscovery of Mendel’s laws of inheritance, with the turn-of-the-twentieth-century notion eventually labeled as the ‘Baldwin effect’ being one of the many ways in which the standard neo-Darwinian view can be turned around. A condition for this effect is that environmentally induced variation such as phenotypic plasticity or learning is crucial for the initial establishment of a trait. This gives the additional time for natural selection to act on genetic variation, and the adaptive trait can eventually be encoded in the genotype. An influential paper published in the late 1980s claimed the Baldwin effect to happen in computer simulations, and avowed that it was crucial to solve a difficult adaptive task. This generated much excitement among scholars in various disciplines that regard neo-Darwinian accounts of the evolutionary emergence of high-order phenotypic traits such as consciousness or language as almost hopeless. Here, we use analytical and computational approaches to show that a standard population genetics treatment can easily crack what the scientific community has granted as an unsolvable adaptive problem without learning. Evolutionary psychologists and linguists have invoked the (claimed) Baldwin effect to make wild assertions that should not be taken seriously. What the Baldwin effect needs are plausible case-histories.
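For readers unfamiliar with the simulation under scrutiny, the sketch below re-creates the flavour of the Hinton and Nowlan (1987) experiment referred to as the "influential paper published in the late 1980s"; population size, trial counts and allele frequencies are illustrative approximations of the original setup.

```python
import random

# Hinton & Nowlan (1987)-style toy: a 20-locus "needle in a haystack" target of
# all 1s, genotypes over alleles {0, 1, ?}, and random learning trials that can
# fill in the plastic '?' loci (None).  Generation count is reduced here.
random.seed(0)
L, POP, TRIALS, GENS = 20, 1000, 1000, 10

def random_genotype():
    # roughly 25% zeros, 25% ones, 50% plastic '?' loci
    return [random.choice([0, 1, None, None]) for _ in range(L)]

def fitness(g):
    if any(a == 0 for a in g):                   # a fixed wrong allele can never be learned away
        return 1.0
    for t in range(TRIALS):                      # learning: guess the '?' loci each trial
        if all(a == 1 or random.random() < 0.5 for a in g):
            return 1.0 + 19.0 * (TRIALS - t) / TRIALS   # earlier success -> higher fitness
    return 1.0

pop = [random_genotype() for _ in range(POP)]
for gen in range(GENS):
    fits = [fitness(g) for g in pop]
    total = sum(fits)

    def pick():                                  # fitness-proportional (roulette) selection
        r, acc = random.uniform(0, total), 0.0
        for g, f in zip(pop, fits):
            acc += f
            if acc >= r:
                return g
        return pop[-1]

    pop = [pick()[:cut] + pick()[cut:]           # one-point crossover, no mutation
           for cut in (random.randrange(L) for _ in range(POP))]
    fixed_correct = sum(g.count(1) for g in pop) / (POP * L)
    print(f"gen {gen:2d}: fraction of genetically fixed correct alleles = {fixed_correct:.2f}")
```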
Ambiguity and the origins of syntax
06/01/2015
Luc Steels / Emília Garcia Casademont
The paper argues that syntax is motivated by the need to avoid combinatorial search in parsing and semantic ambiguity in interpretation. It reports on a case study for the emergence and sharing of first-order phrase structures in a population of agents playing language games. First-order phrase structures combine words into phrases but do not yet generalise to hierarchical or recursive phrases. To study why human languages exhibit phrase structure, a series of strategies for creating and sharing linguistic conventions are examined, starting from a lexical strategy without syntax and then studying the use of groups, n-grams and patterns. Each time we show in which way a strategy improves on the computational complexity of the previous one.
An Evolutionary Cognitive Architecture Made of a Bag of Networks
14/11/2014
Alexander W. Churchill, Chrisantha Fernando
Springer
A cognitive architecture is presented for modelling some properties of sensorimotor learning in infants, namely the ability to accumulate adaptations and skills over multiple tasks in a manner which allows recombination and re-use of task specific competences. The control architecture invented consists of a population of compartments (units of neuroevolution) each containing networks capable of controlling a robot with many degrees of freedom. The nodes of the network undergo internal mutations, and the networks undergo stochastic structural modifications, constrained by a mutational and recombinational grammar. The nodes used consist of dynamical systems such as dynamic movement primitives, continuous time recurrent neural networks and high-level supervised and unsupervised learning algorithms. Edges in the network represent the passing of information from a sending node to a receiving node. The networks in a compartment operate in parallel and encode a space of possible subsumption-like architectures that are used to successfully evolve a variety of behaviours for a NAO H25 humanoid robot.
Discovering communication through ontogenetic ritualisation
12/10/2014
4th International Conference on Development and Learning and on Epigenetic Robotics (pp. 14–19). IEEE.
Spranger, M., & Steels, L.
IEEE Xplore
The entry into symbolic communication through language, gesture or visual signs is one of the key moments in the mental and social development of infants. It is the point from which they start to have a much better social interaction with their parents, other children and adults, and can begin to observe the massive achievements of cultural accumulation. The question addressed in this paper is how developing robots could autonomously make this important transition in their mental development. Based on observations of the way human infants bootstrap into symbolic communication, we propose that gestural symbolic communication comes before auditory symbolic communication and is discovered through a process of ontogenetic ritualisation. The paper identifies the nature of ontogenetic ritualisation and reports on first experiments to achieve this form of learning in humanoid robots.
Breaking down false barriers to understanding
31/07/2014
Luc Steels
Oxford University Press
This chapter argues that there are four dichotomies underlying contemporary linguistics which are getting in the way of developing adequate theories of language evolution, namely the distinction between competence and performance, synchrony and diachrony, origins of language vs. origins of languages, and competence vs. processing. When we break down these dichotomies we can apply the general theory of selection on a cultural level to explain the many features of human languages. Illustrating this approach, this chapter argues that languages culturally evolve to maximize communicative success...
Modelling Reaction Times in Non-linear Classification Tasks
31/07/2014
Martha Lewis, Anna Fedor, Michael Öllinger, Eörs Szathmáry, Chrisantha Fernando
Springer Link
We investigate reaction times for classification of visual stimuli composed of combinations of shapes, to distinguish between parallel and serial processing of stimuli. Reaction times in a visual XOR task are slower than in AND/OR tasks in which pairs of shapes are categorised. This behaviour is explained by the time needed to perceive shapes in the various tasks, using a parallel drift diffusion model. The parallel model explains reaction times in an extension of the XOR task, up to 7 shapes. Subsequently, the behaviour is explained by a combined model that assumes perceptual chunking, processing shapes within chunks in parallel, and chunks themselves in serial. The pure parallel model also explains reaction times for ALL and EXISTS tasks. An extension to the perceptual chunking model adds time taken to apply a logical rule. We are able to improve the fit to the data by including this extra parameter, but using model selection the extra parameter is not supported. We further simulate the behaviour exhibited using an echo state network, successfully recreating the behaviour seen in humans.
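As a gloss on the modelling approach, the sketch below is a generic parallel drift-diffusion race (not the fitted model or parameters from the paper): one accumulator per shape, with the response time taken as the slowest accumulator's first-passage time plus a non-decision time.

```python
import numpy as np

rng = np.random.default_rng(3)

def first_passage(drift=1.0, bound=1.0, noise=1.0, dt=0.001):
    """First-passage time of one drift-diffusion accumulator (one shape)."""
    x, t = 0.0, 0.0
    while abs(x) < bound:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return t

def parallel_rt(n_shapes, t_nondecision=0.3):
    """Parallel model: all shapes are processed at once and the response is made
    when the slowest accumulator reaches its bound."""
    return t_nondecision + max(first_passage() for _ in range(n_shapes))

for n in (2, 3, 5, 7):
    rts = [parallel_rt(n) for _ in range(200)]
    print(f"{n} shapes: mean RT = {np.mean(rts):.3f} s")
```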
Voxel Robot: A Pneumatic Robot with Deformable Morphology
22/07/2014
Mark Roper, Nikolaos Katsaros, Chrisantha Fernando
The Voxbot is a cubic (voxel) shaped robot actuated by expansion and contraction of its 12 edges designed for running evolutionary experiments, built as cheaply as possible. Each edge was made of a single 10ml medical syringe for pneumatic control. These were connected to an array of 12 servos situated on an external housing and controlled with an Arduino microcontroller from a laptop. With twenty motor primitive commands and the slow response of its pneumatics this robot allows real time controllers to be evolved in situ rather than just in simulation. With simple combinations and sequencing of motor primitives the Voxbot can be made to walk, rotate and crab crawl. The device is available in kit form and is very easy to build and replicate. Other morphologies can be built easily.
Programmable self-assembly with chained soft cells: an algorithm to fold into 2-D shapes
10/07/2014
From Animals to Animats 13, vol. 8575. Lecture Notes in Computer Science. Springer International Publishing, 2014.
Jürg Germann, Joshua Auerbach, Dario Floreano
Programmable self-assembly of chained modules holds potential for the automatic shape formation of morphologically adapted robots. However, current systems are limited to modules of uniform rigidity, which restricts the range of obtainable morphologies and thus the functionalities of the system. To address these challenges, we previously introduced “soft cells” as modules that can obtain different mechanical softness pre-setting. We showed that such a system can obtain a higher diversity of morphologies compared to state-of-the-art systems and we illustrated the system’s potential by demonstrating the self-assembly of complex morphologies. In this paper, we extend our previous work and present an automatic method that exploits our system’s capabilities in order to find a linear chain of soft cells that self-folds into a target 2-D shape.
Online Extreme Evolutionary Learning Machines
10/07/2014
Artificial Life 14: Proceedings of the Fourteenth International Conference on the Synthesis and Simulation of Living Systems. The MIT Press, 2014.
Joshua E. Auerbach, Chrisantha Fernando and Dario Floreano
Recently, the notion that the brain is fundamentally a prediction machine has gained traction within the cognitive science community. Consequently, the ability to learn accurate predictors from experience is crucial to creating intelligent robots. However, in order to make accurate predictions it is necessary to find appropriate data representations from which to learn. Finding such data representations or features is a fundamental challenge for machine learning. Often domain knowledge is employed to design useful features for specific problems, but learning representations in a domain independent manner is highly desirable. While many approaches for automatic feature extraction exist, they are often either computationally expensive or of marginal utility. On the other hand, methods such as Extreme Learning Machines (ELMs) have recently gained popularity as efficient and accurate model learners by employing large collections of fixed, random features. The computational efficiency of these approaches becomes particularly relevant when learning is done fully online, such as is the case for robots learning via their interactions with the world. Selectionist methods, which replace features offering low utility with random replacements, have been shown to produce efficient feature learning in one class of ELM. In this paper we demonstrate that a Darwinian neurodynamic approach of feature replication can improve performance beyond selection alone, and may offer a path towards effective learning of predictive models in robotic agents.
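A minimal sketch of the ELM-plus-selection idea; the feature type, utility measure and replacement fraction are assumptions rather than the paper's algorithm. Fixed random features feed a least-squares readout, and the least useful features are periodically replaced by fresh random ones.

```python
import numpy as np

rng = np.random.default_rng(4)

# ELM-style predictor with selectionist feature replacement (illustrative only).
D_IN, N_FEAT, N = 5, 100, 2000
W = rng.normal(0, 1, (N_FEAT, D_IN))           # random input->feature projections
b = rng.normal(0, 1, N_FEAT)

X = rng.uniform(-1, 1, (N, D_IN))
y = np.sin(3 * X[:, 0]) + X[:, 1] * X[:, 2]    # toy target to be predicted

def features(X):
    return np.tanh(X @ W.T + b)

for epoch in range(10):
    H = features(X)
    beta = np.linalg.lstsq(H, y, rcond=None)[0]          # least-squares readout weights
    err = np.mean((H @ beta - y) ** 2)
    # Utility of a feature ~ how much signal it carries into the readout.
    utility = np.abs(beta) * H.std(axis=0)
    worst = np.argsort(utility)[: N_FEAT // 10]          # bottom 10% get replaced
    W[worst] = rng.normal(0, 1, (len(worst), D_IN))      # selectionist replacement
    b[worst] = rng.normal(0, 1, len(worst))
    print(f"epoch {epoch}: mse = {err:.4f}")
```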
RoboGen: Robot Generation through Artificial Evolution
10/07/2014
Artificial Life 14: Proceedings of the Fourteenth International Conference on the Synthesis and Simulation of Living Systems. The MIT Press, 2014.
J. E. Auerbach, D. Aydin, A. Maesani, P. M. Kornatowski, T. Cieslewski, G. Heitz, P. R. Fernando, I. Loshchilov, L. Daler and D. Floreano
Science instructors from a wide range of disciplines agree that hands-on laboratory components of courses are pedagogically necessary (Freedman, 1997). However, certain shortcomings of current laboratory exercises have been pointed out by several authors (Mataric, 2004; Hofstein and Lunetta, 2004). The overarching theme of these analyses is that hands-on components of courses tend to be formulaic, closed-ended, and at times outdated. To address these issues, we envision a novel platform that is not only a didactic tool but is also an experimental testbed for users to play with different ideas in evolutionary robotics (Nolfi and Floreano, 2000), neural networks, physical simulation, 3D printing, mechanical assembly, and embedded processing.
Here, we introduce RoboGen™: an open-source software and hardware platform designed for the joint evolution of robot morphologies and controllers a la Sims (1994); Lipson and Pollack (2000); Bongard and Pfeifer (2003). RoboGen has been designed specifically to allow evolved robots to be easily manufactured via widely available desktop 3D-printers, and the use of simple, open-source, low-cost, off-the-shelf electronic components. RoboGen features an evolution engine complete with a physics simulator, as well as utilities both for generating design files of body components for 3D printing, and for compiling neural-network controllers to run on an Arduino microcontroller board.
In this paper, we describe the RoboGen platform, and provide some metrics to assess the success of using it as the hands-on component of a masters-level bio-inspired artificial intelligence course.
Nitric Oxide Neuromodulation
30/05/2014
M. O’Shea, P. Husbands, A. Philippides
Springer Encyclopedia of Computational Neuroscience
Neuromodulators are a class of neurotransmitter that diffuse into the region surrounding an emitting neuron and affect potentially large numbers of other neurons by modulating their responses, irrespective of whether or not they are electrically connected to the modulating neuron. Nitric oxide (NO) is a particularly interesting example of a neuromodulator because of its very small size and gaseous state. The type of modulatory signaling NO is involved in is sometimes known as volume signaling and is in sharp contrast to the connectionist point-to-point electrical transmission picture that dominated thinking about the nervous system for many decades, whereby neural signaling could only occur between synaptically connected neurons.
Ultrastructural readout of functional synaptic vesicle pools in hippocampal slices based on FM-dye-labeling and photoconversion
15/05/2014
Vincenzo Marra, Jemima J Burden, Freya Crawford & Kevin Staras
Elsevier
Fast activity-driven turnover of neurotransmitter-filled vesicles at presynaptic terminals is a crucial step in information transfer in the CNS. Characterization of the relationship between the nanoscale organization of synaptic vesicles and their functional properties during transmission is currently of interest. Here we outline a procedure for ultrastructural investigation of functional vesicles in synapses from native mammalian brain tissue. FM dye is injected into the target region of a brain slice and upstream axons are electrically activated to stimulate vesicle turnover and dye uptake. In the presence of diaminobenzidine (DAB), photoactivation of dye-filled vesicles yields an osmiophilic precipitate that is visible in electron micrographs. When combined with serial-section electron microscopy, fundamental ultrastructure-function relationships of presynaptic terminals in native circuits are revealed. We outline the utility of this protocol for the 3D reconstruction of a recycling vesicle pool in CA3–CA1 synapses from an acute hippocampal slice and for the characterization of its anatomically defined docked pool.
A Denoising Autoencoder Guides a Genetic Algorithm
01/04/2014
Alexander W. Churchill, Siddharth Sigtia, and Chrisantha Fernando
An algorithm is described that adaptively learns a non-linear mutation distribution. It works by training a denoising autoencoder (DA) online at each generation of a genetic algorithm to reconstruct a slowly decaying memory of the best genotypes so far. A compressed hidden layer forces the autoencoder to learn hidden features in the training set that can be used to accelerate search on novel problems with similar structure. Its output neurons define a probability distribution that we sample from to produce offspring solutions. The algorithm outperforms a canonical genetic algorithm on several combinatorial optimisation problems, e.g. multidimensional 0/1 knapsack problem, MAXSAT, HIFF, and on parameter optimisation problems, e.g. Rastrigin and Rosenbrock functions.
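A compact sketch of that loop on a toy onemax problem follows; network sizes, learning rates and the simplified elite memory are illustrative rather than the paper's configuration. A small denoising autoencoder is retrained each generation on the current elites and its stochastic outputs are sampled to produce offspring.

```python
import numpy as np

rng = np.random.default_rng(5)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

L, POP, HID, GENS = 40, 60, 15, 60
fitness = lambda g: g.sum()                      # toy problem: onemax

# Denoising autoencoder: corrupt elite genotypes, learn to reconstruct them, then
# sample its output units to propose offspring (a learned mutation distribution).
W1 = rng.normal(0, 0.1, (L, HID)); b1 = np.zeros(HID)
W2 = rng.normal(0, 0.1, (HID, L)); b2 = np.zeros(L)

def train(elites, epochs=30, lr=0.1, corruption=0.1):
    global W1, b1, W2, b2
    for _ in range(epochs):
        for x in elites:
            noisy = np.where(rng.random(L) < corruption, 1 - x, x)   # bit-flip corruption
            h = sigmoid(noisy @ W1 + b1)
            out = sigmoid(h @ W2 + b2)
            d_out = out - x                                          # cross-entropy gradient
            d_h = (d_out @ W2.T) * h * (1 - h)
            W2 -= lr * np.outer(h, d_out); b2 -= lr * d_out
            W1 -= lr * np.outer(noisy, d_h); b1 -= lr * d_h

def sample(parent):
    h = sigmoid(parent @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    return (rng.random(L) < p).astype(int)        # stochastic output layer

pop = rng.integers(0, 2, (POP, L))
for gen in range(GENS):
    order = np.argsort([-fitness(g) for g in pop])
    elites = pop[order[: POP // 4]]               # a plain elite set stands in for the
    train(list(elites))                           # paper's slowly decaying memory
    pop = np.array([sample(elites[rng.integers(len(elites))]) for _ in range(POP)])
print("best fitness:", max(fitness(g) for g in pop), "of", L)
```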
Variation, impasse and insight
28/03/2014
Anna Fedor, Michael Öllinger, Eörs Szathmáry. Affiliation: Parmenides Foundation, Munich, Germany
A vector representation of Fluid Construction Grammar
01/12/2013
Yana Knight, Michael Spranger and Luc Steels
The question of how symbol systems can be instantiated in neural network-like computation is still open. Many technical challenges remain and most proposals do not scale up to realistic examples of symbol processing, for example, language understanding or language production. Here we use a top-down approach. We start from Fluid Construction Grammar, a well-worked-out framework for language processing that is compatible with recent insights into Construction Grammar, and investigate how we could build a neural compiler that automatically translates grammatical constructions and grammatical processing into neural computations. We proceed in two steps. FCG is translated from symbolic processing to numeric processing using a vector symbolic architecture, and this numeric processing is then translated into neural network computation. Our experiments are still in an early stage but already show promise.
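As a pointer to what a vector symbolic architecture provides here, the sketch below uses Holographic Reduced Representations (circular-convolution binding) to encode and query a tiny role/filler structure; the mini-lexicon and roles are hypothetical and far simpler than FCG feature structures.

```python
import numpy as np

rng = np.random.default_rng(6)
D = 1024                                    # dimensionality of the symbol vectors

def vec():          # random vector standing in for a symbol
    return rng.normal(0, 1.0 / np.sqrt(D), D)

def bind(a, b):     # circular convolution (Holographic Reduced Representations)
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=D)

def unbind(c, a):   # circular correlation approximately inverts binding
    return np.fft.irfft(np.fft.rfft(c) * np.conj(np.fft.rfft(a)), n=D)

# Hypothetical mini-lexicon and roles; a "construction" is encoded as a
# superposition of role/filler bindings.
names = ["ball", "kick", "girl", "agent", "patient", "event"]
lexicon = {name: vec() for name in names}
sentence = (bind(lexicon["agent"], lexicon["girl"])
            + bind(lexicon["event"], lexicon["kick"])
            + bind(lexicon["patient"], lexicon["ball"]))

# Query: who is the agent?  Unbind and clean up against the lexicon.
query = unbind(sentence, lexicon["agent"])
best = max(lexicon, key=lambda w: np.dot(query, lexicon[w]))
print("agent ->", best)   # expected: 'girl'
```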
From Blickets to Synapses: Inferring Temporal Causal Networks by Observation.
18/08/2013
Fernando Chrisantha
PubMed
How do human infants learn the causal dependencies between events? Evidence suggests that this remarkable feat can be achieved by observation of only a handful of examples. Many computational models have been produced to explain how infants perform causal inference without explicit teaching about statistics or the scientific method. Here, we propose a spiking neuronal network implementation that can be entrained to form a dynamical model of the temporal and causal relationships between events that it observes. The network uses spike-time dependent plasticity, long-term depression, and heterosynaptic competition rules to implement Rescorla-Wagner-like learning. Transmission delays between neurons allow the network to learn a forward model of the temporal relationships between events. Within this framework, biologically realistic synaptic plasticity rules account for well-known behavioral data regarding cognitive causal assumptions such as backwards blocking and screening-off. These models can then be run as emulators for state inference. Furthermore, this mechanism is capable of copying synaptic connectivity patterns between neuronal networks by observing the spontaneous spike activity from the neuronal circuit that is to be copied, and it thereby provides a powerful method for transmission of circuit functionality between brain regions.
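For reference, the abstract Rescorla-Wagner rule that the spiking network is said to approximate can be stated in a few lines; the sketch below implements that rule alone (not the spiking machinery) and reproduces the classic blocking effect.

```python
# Abstract Rescorla-Wagner rule only (the paper's implementation is a spiking
# network); shown here reproducing the classic "blocking" effect.
alpha, lam = 0.1, 1.0
V = {"A": 0.0, "B": 0.0}          # associative strengths of the two cues

def trial(cues, reinforced):
    prediction = sum(V[c] for c in cues)
    error = (lam if reinforced else 0.0) - prediction
    for c in cues:
        V[c] += alpha * error     # shared prediction error: earlier-trained cues block later ones

for _ in range(50):               # phase 1: cue A alone predicts the outcome
    trial(["A"], True)
for _ in range(50):               # phase 2: compound AB predicts the same outcome
    trial(["A", "B"], True)

print({k: round(v, 3) for k, v in V.items()})   # B stays near 0: blocking
```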
Fluid Construction Grammar for historical and evolutionary linguistics
03/08/2013
Pieter Wellen, Remi van Trijp, Katrien Beuls, Luc Steels
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics
Fluid Construction Grammar (FCG) is an open-source computational grammar formalism that is becoming increasingly popular for studying the history and evolution of language. This demonstration shows how FCG can be used to operationalise the cultural processes and cognitive mechanisms that underlie language evolution and change.
The watchmaker is blind but he is not stupid: Comment on "How life changes itself: The Read-Write (RW) genome" by James Shapiro.
07/07/2013
Chrisantha Fernando
Elsevier
A Darwinian Cognitive Architecture
24/06/2013
Frontiers in Cognitive Science
Alexander W. Churchill, Vera Vasas, Goren Gordon, Chrisantha Fernando
We present a Darwinian cognitive architecture capable of evolving both the goals and the controllers of a variety of robots. Our Python software evolves a set of cognitive representations of tasks and solutions that have modularity, compositionality, systematicity, and productivity as required by the physical symbol system hypothesis, and yet also is consistent with the dynamical systems and connectionist approaches in cognitive science. Thus it is an attempt at a synthesis. We demonstrate that our architecture allows the accumulation of adaptations across tasks, i.e. transfer learning or lifetime learning. This is lacking in most other work in intrinsic motivation. Our architecture autonomously creates its own games to play, i.e. generates intrinsically motivated challenges for itself, and then tries to make progress on the games it has invented during behaviour in the world. Games are selected by the robot on the basis of how much progress can be made on the games. This is just a start towards a formal definition of what are the necessary and sufficient criteria for a game worth playing. An important aspect of our work is that a range of game selection functions can be specified and tested. Previous work suggests that this cognitive architecture has neuronal plausibility.
We believe this architecture properly combines population based stochastic search in the brain with other machine learning algorithms, benefiting from both. The code is freely available and users can add ’atoms’ to the system and evolve them in the Nao humanoid robot, the two wheeled e-puck robot or in Arduino based 3D printed robots they make themselves.
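A toy illustration of selecting games by how much progress can be made on them; the game names, learnability values and progress window are invented for the example and are not part of the released code.

```python
import random
from collections import defaultdict, deque

# Toy game-selection loop: practise the self-generated game on which the agent
# is currently making the most progress (epsilon-greedy for exploration).
random.seed(0)
learnability = {"reach": 0.02, "grasp": 0.01, "impossible": 0.0}   # hypothetical games
error = {g: 1.0 for g in learnability}
history = defaultdict(lambda: deque(maxlen=20))

def progress(game):
    h = history[game]
    return 0.0 if len(h) < 2 else h[0] - h[-1]     # error drop over the recent window

for step in range(600):
    game = (random.choice(list(learnability)) if random.random() < 0.1
            else max(learnability, key=progress))
    error[game] = max(0.0, error[game] - learnability[game] * random.random())
    history[game].append(error[game])

print({g: len(history[g]) for g in learnability})   # practice gravitates to learnable games
```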
Design for a Darwinian Brain: Part 1. Philosophy and Neuroscience
27/03/2013
Chrisantha Fernando
Springer Berlin Heidelberg
Fodor and Pylyshyn in their 1988 paper denounced the claims of the connectionists, claims that continue to percolate through neuroscience. In it they proposed that a physical symbol system was necessary for open-ended cognition. What is a physical symbol system, and how can one be implemented in the brain? A way to understand them is by comparison of thought to chemistry. Both have systematicity, productivity and compositionality, elements lacking in most computational neuroscience models. To remedy this woeful situation, I examine cognitive architectures capable of open-ended cognition, and think how to implement them in a neuronal substrate. I motivate a cognitive architecture that evolves physical symbol systems in the brain. Part 2 of this paper pair develops this architecture and proposes a possible neuronal implementation.
The draft of the article is available to be downloaded.
Design for a Darwinian Brain: Part 2. Cognitive Architecture
27/03/2013
Chrisantha Fernando, Vera Vasas, Alexander W. Churchill
Springer Berlin Heidelberg
The accumulation of adaptations in an open-ended manner during lifetime learning is a holy grail in reinforcement learning, intrinsic motivation, artificial curiosity, and developmental robotics. We present a design for a cognitive architecture that is capable of specifying an unlimited range of behaviors. We then give examples of how it can stochastically explore an interesting space of adjacent possible behaviors. There are two main novelties; the first is a proper definition of the fitness of self-generated games such that interesting games are expected to evolve. The second is a modular and evolvable behavior language that has systematicity, productivity, and compositionality, i.e. it is a physical symbol system. A part of the architecture has already been implemented on a humanoid robot.
Agent-Based Models of Strategies for the Emergence and Evolution of Grammatical Agreement
17/03/2013
Katrien Beuls, Luc Steels
Grammatical agreement means that features associated with one linguistic unit (for example number or gender) become associated with another unit and then possibly overtly expressed, typically with morphological markers. It is one of the key mechanisms used in many languages to show that certain linguistic units within an utterance grammatically depend on each other. Agreement systems are puzzling because they can be highly complex in terms of what features they use and how they are expressed. Moreover, agreement systems have undergone considerable change in the historical evolution of languages. This article presents language game models with populations of agents in order to find out for what reasons and by what cultural processes and cognitive strategies agreement systems arise. It demonstrates that agreement systems are motivated by the need to minimize combinatorial search and semantic ambiguity, and it shows, for the first time, that once a population of agents adopts a strategy to invent, acquire and coordinate meaningful markers through social learning, linguistic self-organization leads to the spontaneous emergence and cultural transmission of an agreement system. The article also demonstrates how attested grammaticalization phenomena, such as phonetic reduction and conventionalized use of agreement markers, happens as a side effect of additional economizing principles, in particular minimization of articulatory effort and reduction of the marker inventory. More generally, the article illustrates a novel approach for studying how key features of human languages might emerge.
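The agreement-strategy games in the article are considerably richer, but the minimal naming game sketched below shows the underlying self-organisation mechanism they build on: repeated local speaker-hearer interactions drive a shared convention to emerge without central control (all parameters illustrative).

```python
import random

# Minimal naming game: far simpler than the agreement strategies studied in the
# article, but it shows conventions aligning through local interactions.
random.seed(1)
N_AGENTS, ROUNDS = 20, 4000
vocab = [dict() for _ in range(N_AGENTS)]        # each agent's word -> score table
recent = []

def speak(agent):
    if not vocab[agent]:
        vocab[agent][f"w{random.randrange(10**6)}"] = 1.0   # invent a new word
    return max(vocab[agent], key=vocab[agent].get)

for _ in range(ROUNDS):
    speaker, hearer = random.sample(range(N_AGENTS), 2)
    w = speak(speaker)
    if w in vocab[hearer]:                        # success: both collapse onto the word
        vocab[speaker] = {w: 1.0}
        vocab[hearer] = {w: 1.0}
        recent.append(1)
    else:                                         # failure: hearer adopts the word weakly
        vocab[hearer][w] = 0.5
        recent.append(0)

print("success rate over the last 400 games:", sum(recent[-400:]) / 400)
```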
Hidden information transfer in an autonomous swinging robot
James Thorniley and Phil Husbands
This paper describes a hitherto overlooked aspect of the information dynamics of embodied agents, which can be thought of as hidden information transfer. This phenomenon is demonstrated in a minimal model of an autonomous agent.
While it is well known that information transfer is generally low between closely synchronised systems, here we show how it is possible that such close synchronisation may serve to “carry” signals between physically separated endpoints.
This creates seemingly paradoxical situations where transmitted information is not visible at some intermediate point in a network, yet can be seen later after further processing. We discuss how this relates to existing theories relating information transfer to agent behaviour, and the possible explanation by analogy to communication systems.
Solving Classical Insight Problems Without Aha! Experience: 9 Dot, 8 Coin, and Matchstick Arithmetic Problems
The Journal of Problem Solving
Danek Amory H., Wiley Jennifer, Öllinger Michael
Insightful problem solving is a vital part of human thinking, yet very difficult to grasp. Traditionally, insight has been investigated by using a set of established “insight tasks,” assuming that insight has taken place if these problems are solved. Instead of assuming that insight takes place during every solution of the 9 Dot, 8 Coin, and Matchstick Arithmetic Problems, this study explored the likelihood that solutions evoked the “Aha! experience,” which is often regarded as the defining characteristic of insight. It was predicted that the rates of self-reported Aha! experiences might vary based on the necessary degree of constraint relaxation. The main assumption was that the likelihood of experiencing an Aha! would decrease with increasing numbers of constraints that must be relaxed, because several steps are needed to achieve a representational change and solve the problem, and thus, the main feature of suddenness of a solution might be lacking. The results supported this prediction, and demonstrated that in many cases participants do solve these classical insight problems without any Aha! experience. These results show the importance of obtaining insight ratings from participants to determine whether any given problem is solved with insight or not.
Evolution in Cognition 2016 Chairs’ Welcome
GECCO 2016 Proceeding
S. Doncieux, J. Auerbach, R. Duro, H.P. de Vladar
ACM
Evolution by natural selection has shaped life over billions of years leading to the emergence of complex organisms capable of exceptional cognitive abilities. These natural evolutionary processes have inspired the development of Evolutionary Algorithms (EAs), which are optimization algorithms widely popular due to their efficiency and robustness. Beyond their ability to optimize, EAs have also proven to be creative and efficient at generating innovative solutions to novel problems. The combination of these two abilities makes EAs a tool of choice for the resolution of complex problems.
Even though there is evidence that the principle of selection on variation is at play in the human brain, as proposed in Changeux's and Edelman's models of Neural Darwinism [1, 8], and more recently expanded in the theory of Darwinian Neurodynamics by Szathmáry, Fernando and others [9], not much attention has been paid to the possible interaction between evolutionary processes and cognition over physiological time scales. Since the development of human cognition requires years of maturation, it can be expected that artificial cognitive agents will also require months if not years of learning and adaptation. It is in this context that the optimizing and creative abilities of EAs could become an ideal framework that complements, aids in understanding, and facilitates the implementation of cognitive processes. Additionally, a better understanding of how evolution can be implemented as part of an artificial cognitive architecture can lead to new insights into cognition in humans and other higher organisms.
The goals of the workshop are to depict the current state of the art of evolution in cognition and to sketch the main challenges and future directions. In particular, we aim at bringing together different theoretical and empirical approaches that can potentially contribute to the understanding of how evolution and cognition can act together in an algorithmic way in order to solve complex problems. In this workshop we welcome approaches that contribute to an improved understanding of evolution in cognition using robotic agents, in silico computation, as well as mathematical models.