This book attempts to marry truth-conditional semantics with cognitive linguistics in the church of computational neuroscience. To this end, it examines the truth-conditional meanings of coordinators, quantifiers, and collective predicates as neurophysiological phenomena that are amenable to a neurocomputational analysis. Drawing inspiration from work on visual processing, and especially the simple/complex cell distinction in early vision (V1), we claim that a similar two-layer architecture is sufficient to learn the truth-conditional meanings of the logical coordinators and quantifiers.

As a prerequisite, much discussion is given over to what a neurologically plausible representation of the meanings of these items would look like. We eventually settle on a representation in terms of correlation, so that, for instance, the semantic input to the universal operators (e.g. and, all) is represented as maximally correlated, while the semantic input to the universal negative operators (e.g. nor, no) is represented as maximally anticorrelated. On the basis of this representation, the hypothesis can be offered that the function of the logical operators is to extract an invariant feature from natural situations: the degree of correlation between parts of the situation. This result sets up an elegant formal analogy to recent models of visual processing, which argue that the function of early vision is to reduce the redundancy inherent in natural images.

Computational simulations are designed in which the logical operators are learned by associating their phonological form with some degree of correlation in the inputs, so that the overall function of the system is a simple kind of pattern recognition. Several learning rules are assayed, especially those of the Hebbian sort, which are the ones with the most neurological support. Learning vector quantization (LVQ) is shown to be a perspicuous and efficient means of learning the patterns.
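The core idea of the abstract can be illustrated with a minimal sketch: each situation is reduced to a single feature, its degree of correlation, and an LVQ1 learner associates operator labels with regions of that feature space. The data, prototypes, and labels below are hypothetical illustrations, not the book's actual simulations.

```python
import random

def lvq1_train(samples, prototypes, lr=0.1, epochs=50):
    """LVQ1 over a 1-D feature (degree of correlation)."""
    protos = dict(prototypes)  # operator label -> prototype position
    for _ in range(epochs):
        random.shuffle(samples)
        for x, label in samples:
            # Winner = prototype nearest to the input feature.
            winner = min(protos, key=lambda lab: abs(protos[lab] - x))
            if winner == label:
                protos[winner] += lr * (x - protos[winner])  # attract
            else:
                protos[winner] -= lr * (x - protos[winner])  # repel
    return protos

def classify(protos, x):
    return min(protos, key=lambda lab: abs(protos[lab] - x))

# Noisy correlation degrees near the two poles: universal operators
# ("and"/"all") near +1, universal negatives ("nor"/"no") near -1.
random.seed(0)
data = [(1.0 - 0.1 * random.random(), "and") for _ in range(50)] + \
       [(-1.0 + 0.1 * random.random(), "nor") for _ in range(50)]
protos = lvq1_train(data, {"and": 0.5, "nor": -0.5})

print(classify(protos, 0.95))   # a highly correlated situation -> "and"
print(classify(protos, -0.9))   # a highly anticorrelated situation -> "nor"
```

After training, the prototypes settle near the two correlation poles, so classification amounts to the simple pattern recognition the abstract describes.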

Key Features

· The discovery of several algorithmic similarities between vision and semantics.
· The support of these findings by means of simulations, and their packaging in a coherent theoretical framework.


This book is designed primarily for the benefit of linguists, and secondarily for logicians, neurocomputationalists, and philosophers of mind and of language. It may even be of benefit to mathematicians. It will certainly be of benefit to all those who want an introduction to natural language semantics, logic, computational neuroscience, and cognitive science.

Table of Contents

Preface
Acknowledgements
1. Modest vs. robust theories of semantics
2. Single neuron modeling
3. Logical measures
4. The representation of coordinator meanings
5. Neuromimetic networks for coordinator meanings
6. The representation of quantifier meanings
7. ANNs for quantifier learning and recognition
8. Inferences among logical operators
9. The failure of subalternacy: reciprocity and center-oriented constructions
10. Networks of real neurons
11. Three generations of Cognitive Science
References
Index


© 2004
Elsevier Science

About the author

Harry Howard

Affiliations and Expertise

Tulane University, New Orleans, USA.