An Aesop fairy story on an Artificial Neural Network
Pietro Terna
Department of Economics and Public Finance, University of Torino, Italy
TITLE: An Aesop fairy story on an Artificial Neural Network
ABSTRACT: At the 2009 SwarmFest we proposed SLAPP, or Swarm-Like Agent Protocol in Python, as a simplified implementation of the original Swarm protocol, choosing Python as a simultaneously simple and complete object-oriented framework. The SLAPP project also has the goal of offering scholars interested in agent-based models a set of programming examples that can be easily understood in all their details and adapted to other applications. Why Python? Quoting from its main web page: “Python is a dynamic object-oriented programming language that can be used for many kinds of software development. It offers strong support for integration with other languages and tools, comes with extensive standard libraries, and can be learned in a few days.”
The next step, introduced at the 2010 SwarmFest, is a new simulation layer written on top of SLAPP, named AESOP (Agents and Emergencies for Simulating Organizations in Python), intended as a simplified way to describe and generate interaction among artificial agents.
We have two main families of agents: (i) bland agents (simple, unspecific, basic, insipid, …), used to populate our simulated world with agents performing basic actions (e.g., zero-intelligence agents in a stock market), and (ii) tasty agents (specialized, with given skills, acting in a discretionary way, …), used to fill important roles within the simulation scenario. The schedules defining and managing the agents’ actions operate either (a) “in the background”, for all the agents or only for the bland ones, but in both cases written internally in the code, or (b) “in the foreground”, explicitly managed via an external scripting system using text files or a spreadsheet formalism.
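As an illustration, here is a minimal sketch of the two agent families and of a “foreground” schedule read from an external script file; the class names, the file name, and the line format are hypothetical and do not reproduce the actual SLAPP/AESOP code.

```python
# Hypothetical sketch: two agent families and a "foreground" schedule
# driven by an external text file (names and format are illustrative).

class BlandAgent:
    """Simple, unspecific agent executing basic actions
    (e.g., a zero-intelligence trader in a stock market)."""
    def __init__(self, number):
        self.number = number

    def act(self, action):
        print("bland agent", self.number, "performs basic action", action)


class TastyAgent(BlandAgent):
    """Specialized agent with given skills, acting in a discretionary way."""
    def act(self, action):
        # a tasty agent may decide whether and how to perform the action
        print("tasty agent", self.number, "evaluates and performs", action)


def run_foreground_schedule(fileName, agents):
    """Read an external script whose lines are '<family> <action>'
    and dispatch each action to the agents of that family."""
    for line in open(fileName):
        family, action = line.split()
        for agent in agents:
            isTasty = isinstance(agent, TastyAgent)
            if (family == "tasty") == isTasty:
                agent.act(action)
```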
Bland and tasty agents are built in distinct sets and with different numbers of elements, via external text files.
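A minimal sketch of this population building follows, reusing the BlandAgent and TastyAgent classes sketched above and assuming a hypothetical text file whose lines are '<family> <how many>' (e.g., 'bland 100', 'tasty 5'); the format is illustrative, not the actual AESOP one.

```python
# Hypothetical sketch: create the agent populations from an external
# text file whose lines are '<family> <how many>'.

def create_agents(fileName):
    families = {"bland": BlandAgent, "tasty": TastyAgent}
    agents = []
    nextNumber = 0
    for line in open(fileName):
        family, howMany = line.split()
        for _ in range(int(howMany)):
            agents.append(families[family](nextNumber))
            nextNumber += 1
    return agents
```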
Tasty agents can be constructed upon an artificial neural network (ANN) structure, trained (1) with sets of examples, or (2) with auto-generated training sets, from a reinforcement learning perspective. In the second case, we use ANNs whose inputs are the available information and the set of possible actions, and whose outputs are guesses about the effects of those actions. The agents containing them choose their actual actions by applying a utility function to the guessed effects. Finally, a previously developed ANN structure (CT learning, http://web.econ.unito.it/terna/ct-era/ct-era.html) can be employed as a third possible solution.
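A minimal sketch of the second mechanism follows, using a tiny untrained feed-forward network with random weights purely for illustration: the network receives the information plus a candidate action and returns a guessed effect, and the agent chooses the action whose guessed effect maximizes its utility. The class name, network sizes, and the exponential utility function are assumptions; the sketch does not reproduce CT learning or the actual AESOP code.

```python
# Hypothetical sketch: an ANN-based tasty agent that guesses the effect
# of each possible action and chooses the one with the highest utility.

import numpy as np

class AnnTastyAgent:
    """Inputs = information + one-hot candidate action; output = guessed effect."""
    def __init__(self, nInfo, nActions, nHidden=5, seed=0):
        rng = np.random.default_rng(seed)
        nInputs = nInfo + nActions              # information + one-hot action
        self.w1 = rng.normal(0.0, 0.1, (nInputs, nHidden))
        self.w2 = rng.normal(0.0, 0.1, nHidden)
        self.nActions = nActions

    def guess_effect(self, info, action):
        # forward pass: guess the effect of taking 'action' given 'info'
        x = np.concatenate([info, np.eye(self.nActions)[action]])
        return float(np.tanh(x @ self.w1) @ self.w2)

    def utility(self, effect):
        # illustrative risk-averse utility applied to the guessed effect
        return -np.exp(-effect)

    def choose_action(self, info):
        # pick the action whose guessed effect maximizes the utility
        return max(range(self.nActions),
                   key=lambda a: self.utility(self.guess_effect(info, a)))


# usage: an agent seeing three pieces of information, choosing among four actions
agent = AnnTastyAgent(nInfo=3, nActions=4)
print(agent.choose_action(np.array([0.2, -0.1, 0.5])))
```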