Simulating Evolution and Learning of Human Vocal Signals
Anne Warlaumont, University of Memphis
Humans produce reflexive vocalizations, such as cries, shrieks, and laughs, that are related to those produced by other primates and remain quite fixed across the lifetime, as well as more flexible, speech-related sounds that are unique to humans and undergo a long period of development. I have been working to develop computational models of the emergence of both types of sounds. In a model of the evolution of reflexive vocal signals, each individual in a population has a genetically specified neural network that translates signal types into muscle activations. The muscle activations set the parameters of a human vocal tract model that generates synthesized vocalizations. A second population of individuals receives these vocalizations as inputs and uses its own neural networks to classify them by signal type. The two populations of signalers and receivers are evolved with a genetic algorithm in which each individual's fitness is based on its communicative success (the first sketch below illustrates this loop). The model evolves distinct signal types and shows that using a realistic vocal tract, as opposed to the abstract vectors used in most previous models, affects both the rate of evolution and the kinds of signals that evolve.

In other work I have been modeling how human children develop motor speech skills. This model features a self-organizing neural network adapted so that learning depends on reinforcement (the second sketch below). The model produces sounds in an exploratory fashion, as appears to occur in human infancy, and is reinforced when it produces sounds that are voiced (as opposed to silent or breathy) and when it produces vowels that sound like those of its target language (either English or Korean). The model successfully acquires the important skill of reliably producing voiced sounds and shows a tendency for its productions to resemble those of its target language.
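As a rough illustration of the co-evolutionary loop in the first model, here is a minimal sketch in Python. All names, network shapes, and population sizes are hypothetical, and the human vocal tract synthesizer is replaced by a fixed random projection from muscle activations to an abstract acoustic space; the real model's synthesis and selection operators are not reproduced here.

    # Minimal sketch of the signaler/receiver co-evolution loop (hypothetical
    # names and sizes; the actual model drives a human vocal tract synthesizer,
    # replaced here by a fixed random "acoustics" projection).
    import numpy as np

    rng = np.random.default_rng(0)

    N_SIGNALS, N_MUSCLES, N_ACOUSTIC, POP = 3, 8, 12, 30
    ACOUSTICS = rng.normal(size=(N_MUSCLES, N_ACOUSTIC))  # stand-in for synthesis

    def signal(weights, sig_type):
        """Signaler network: signal type -> muscle activations -> sound."""
        one_hot = np.eye(N_SIGNALS)[sig_type]
        muscles = np.tanh(one_hot @ weights)   # genetically specified mapping
        return muscles @ ACOUSTICS             # stand-in for vocal synthesis

    def classify(weights, sound):
        """Receiver network: acoustic input -> predicted signal type."""
        return int(np.argmax(sound @ weights))

    def fitness(sig_w, rec_w):
        """Communicative success: fraction of signal types transmitted correctly."""
        return np.mean([classify(rec_w, signal(sig_w, t)) == t
                        for t in range(N_SIGNALS)])

    signalers = rng.normal(size=(POP, N_SIGNALS, N_MUSCLES))
    receivers = rng.normal(size=(POP, N_ACOUSTIC, N_SIGNALS))

    for gen in range(200):
        # Pair each signaler with a random receiver; score each pair on success.
        order = rng.permutation(POP)
        scores = np.array([fitness(signalers[i], receivers[j])
                           for i, j in enumerate(order)])
        # Truncation selection plus Gaussian mutation for both populations.
        top = np.argsort(scores)[-POP // 2:]
        parents_s, parents_r = signalers[top], receivers[order[top]]
        signalers = np.repeat(parents_s, 2, axis=0) + 0.1 * rng.normal(size=signalers.shape)
        receivers = np.repeat(parents_r, 2, axis=0) + 0.1 * rng.normal(size=receivers.shape)

Truncation selection with Gaussian mutation is used here only as a generic stand-in for whatever selection and variation operators the actual genetic algorithm employs.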
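For the second model, here is a similarly minimal sketch of reinforcement-gated self-organizing learning. The reward function is a hypothetical proxy: the actual model synthesizes sounds with a vocal tract model and rewards voicing and target-language vowel quality, neither of which is computed here.

    # Minimal sketch of reinforcement-gated self-organizing map learning
    # (hypothetical reward proxy in place of auditory evaluation).
    import numpy as np

    rng = np.random.default_rng(1)

    N_NODES, N_MUSCLES = 25, 8
    som = rng.uniform(-1, 1, size=(N_NODES, N_MUSCLES))  # each node stores a muscle pattern

    def reward(muscles):
        """Stand-in for auditory evaluation: 'voicing' is approximated here by
        sufficient activation of the first (laryngeal) muscle dimension."""
        return 1.0 if muscles[0] > 0.5 else 0.0

    for step in range(5000):
        node = rng.integers(N_NODES)                  # exploratory production
        muscles = som[node] + 0.3 * rng.normal(size=N_MUSCLES)
        r = reward(muscles)
        if r > 0:
            # Reinforcement-gated SOM update: the chosen node and its
            # neighbors move toward the rewarded muscle pattern.
            dist = np.abs(np.arange(N_NODES) - node)  # 1-D neighborhood for simplicity
            neighborhood = np.exp(-dist**2 / 8.0)
            som += 0.05 * r * neighborhood[:, None] * (muscles - som)

    voiced = np.mean([reward(m) for m in som])
    print(f"fraction of map producing 'voiced' patterns: {voiced:.2f}")

Because updates occur only on rewarded productions, the map gradually concentrates on muscle patterns that yield reward, analogous to the model's acquisition of reliable voicing.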