Mathematics for Neuroscientists
2nd Edition
Description
Mathematics for Neuroscientists, Second Edition, presents a comprehensive introduction to the mathematical and computational methods used in neuroscience to describe and model neural components of the brain, from ion channels to single neurons to neural networks and their relation to behavior. The book contains more than 200 figures generated with Matlab code available to students and scholars. Mathematical concepts are introduced hand in hand with the neuroscience, emphasizing the connection between experimental results and theory.
Key Features
- Fully revised material and corrected text
- Additional chapters on extracellular potentials, motion detection and neurovascular coupling
- Revised selection of exercises with solutions
- More than 200 Matlab scripts reproducing the figures, as well as a selection of equivalent Python scripts (see the illustrative sketch below)
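By way of illustration only, the short Python sketch below simulates a leaky integrate-and-fire neuron (the topic of Chapter 11) driven by a constant current step. It is not drawn from the book's script library; the parameter values, variable names, and the use of NumPy and Matplotlib are assumptions made for this example.

```python
# Minimal sketch (not from the book's scripts): leaky integrate-and-fire
# neuron driven by a constant current step. All parameter values below are
# illustrative assumptions.
import numpy as np
import matplotlib.pyplot as plt

tau_m = 20e-3      # membrane time constant (s), assumed
R_m = 10e6         # membrane resistance (Ohm), assumed
V_rest = -70e-3    # resting potential (V), assumed
V_thresh = -54e-3  # spike threshold (V), assumed
V_reset = -80e-3   # reset potential (V), assumed
I_e = 1.8e-9       # injected current (A), assumed

dt = 0.1e-3                      # integration time step (s)
t = np.arange(0.0, 0.5, dt)      # 500 ms of simulated time
V = np.full_like(t, V_rest)      # membrane potential trace
spike_times = []

# Forward-Euler integration of  tau_m dV/dt = -(V - V_rest) + R_m * I_e
for k in range(1, t.size):
    V[k] = V[k-1] + (-(V[k-1] - V_rest) + R_m * I_e) * dt / tau_m
    if V[k] >= V_thresh:         # threshold crossing: record spike and reset
        spike_times.append(t[k])
        V[k] = V_reset

plt.plot(t * 1e3, V * 1e3)
plt.xlabel("time (ms)")
plt.ylabel("membrane potential (mV)")
plt.title("Leaky integrate-and-fire response to a current step")
plt.show()
```

Forward Euler with a hard threshold-and-reset rule is used here purely for brevity; it is the simplest scheme that turns a subthreshold-linear membrane into a spike train.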
Readership
Experimental and computational neuroscientists, and mathematicians
Table of Contents
Chapter 1: Introduction
- Abstract
- 1.1. How to Use This Book
- 1.2. Brain Facts Brief
- 1.3. Mathematical Preliminaries
- 1.4. Units
- 1.5. Sources
- Bibliography
Chapter 2: The Passive Isopotential Cell
- Abstract
- 2.1. Introduction
- 2.2. The Nernst Potential
- 2.3. Membrane Conductance
- 2.4. Membrane Capacitance & Current Balance
- 2.5. Synaptic Conductance
- 2.6. Summary and Sources
- 2.7. Exercises
- Bibliography
Chapter 3: Differential Equations
- Abstract
- 3.1. Exact Solution
- 3.2. Moment Methods⁎
- 3.3. The Laplace Transform⁎
- 3.4. Numerical Methods
- 3.5. Synaptic Input
- 3.6. Summary and Sources
- 3.7. Exercises
- Bibliography
Chapter 4: The Active Isopotential Cell
- Abstract
- 4.1. The Delayed Rectifier Potassium Channel
- 4.2. The Sodium Channel
- 4.3. The Hodgkin–Huxley Equations
- 4.4. The Transient Potassium Channel⁎
- 4.5. The Sodium–Potassium Pump⁎
- 4.6. Summary and Sources
- 4.7. Exercises
- Bibliography
Chapter 5: The Quasi-Active Isopotential Cell
- Abstract
- 5.1. The Quasi-Active Model
- 5.2. Numerical Methods
- 5.3. Exact Solution via Eigenvector Expansion
- 5.4. A Persistent Sodium Current⁎
- 5.5. A Nonspecific Cation Current that is Activated by Hyperpolarization⁎
- 5.6. Linearization of the Sodium–Potassium Pump⁎
- 5.7. Summary and Sources
- 5.8. Exercises
- Bibliography
Chapter 6: The Passive Cable
- Abstract
- 6.1. The Discrete Passive Cable Equation
- 6.2. Exact Solution via Eigenvector Expansion
- 6.3. Numerical Methods
- 6.4. The Passive Cable Equation
- 6.5. Synaptic Input
- 6.6. Summary and Sources
- 6.7. Exercises
- Bibliography
Chapter 7: Fourier Series and Transforms
- Abstract
- 7.1. Fourier Series
- 7.2. The Discrete Fourier Transform
- 7.3. The Fourier Transform
- 7.4. Reconciling the Discrete and Continuous Fourier Transforms
- 7.5. Summary and Sources
- 7.6. Exercises
- Bibliography
Chapter 8: The Passive Dendritic Tree
- Abstract
- 8.1. The Discrete Passive Tree
- 8.2. Eigenvector Expansion
- 8.3. Numerical Methods
- 8.4. The Passive Dendrite Equation
- 8.5. The Equivalent Cylinder⁎
- 8.6. Branched Eigenfunctions⁎
- 8.7. Summary and Sources
- 8.8. Exercises
- Bibliography
Chapter 9: The Active Dendritic Tree
- Abstract
- 9.1. The Active Uniform Cable
- 9.2. On the Interaction of Active Uniform Cables⁎
- 9.3. The Active Nonuniform Cable
- 9.4. The Quasi-Active Cable⁎
- 9.5. The Active Dendritic Tree
- 9.6. Summary and Sources
- 9.7. Exercises
- Bibliography
Chapter 10: Extracellular Potential
- Abstract
- 10.1. Maxwell's Equations
- 10.2. The Wave Equation
- 10.3. From Maxwell to Laplace
- 10.4. The Solution to Laplace's Equation
- 10.5. Extracellular Potential Near a Passive Cable
- 10.6. Extracellular Potential Near Active Cables
- 10.7. Summary and Sources
- 10.8. Exercises
- Bibliography
Chapter 11: Reduced Single Neuron Models
- Abstract
- 11.1. The Leaky Integrate-and-Fire Neuron
- 11.2. Bursting Neurons
- 11.3. Simplified Models of Bursting Neurons
- 11.4. Summary and Sources
- 11.5. Exercises
- Bibliography
Chapter 12: Probability and Random Variables
- Abstract
- 12.1. Events and Random Variables
- 12.2. Binomial Random Variables
- 12.3. Poisson Random Variables
- 12.4. Gaussian Random Variables
- 12.5. Cumulative Distribution Functions
- 12.6. Conditional Probabilities⁎
- 12.7. Sum of Independent Random Variables⁎
- 12.8. Transformation of Random Variables⁎
- 12.9. Random Vectors⁎
- 12.10. Exponential and Gamma Distributed Random Variables
- 12.11. The Homogeneous Poisson Process
- 12.12. Summary and Sources
- 12.13. Exercises
- Bibliography
Chapter 13: Synaptic Transmission and Quantal Release
- Abstract
- 13.1. Basic Synaptic Structure and Physiology
- 13.2. Discovery of Quantal Release
- 13.3. Compound Poisson Model of Synaptic Release
- 13.4. Comparison with Experimental Data
- 13.5. Quantal Analysis at Central Synapses
- 13.6. Facilitation, Potentiation and Depression of Synaptic Transmission
- 13.7. Models of Short-Term Synaptic Plasticity
- 13.8. Summary and Sources
- 13.9. Exercises
- Bibliography
Chapter 14: Neuronal Calcium Signaling⁎
- Abstract
- 14.1. Voltage Gated Calcium Channels
- 14.2. Diffusion, Buffering and Extraction of Cytosolic Calcium
- 14.3. Calcium Release from the Endoplasmic Reticulum
- 14.4. Regulation of Calcium in Spines
- 14.5. Spinal Calcium and Bidirectional Synaptic Plasticity
- 14.6. Presynaptic Calcium and Transmitter Release
- 14.7. Summary and Sources
- 14.8. Exercises
- Bibliography
Chapter 15: Neurovascular Coupling, the BOLD Signal and MRI
- Abstract
- 15.1. The Metabolic Cost of Neural Signaling
- 15.2. Astrocytes
- 15.3. Smooth Muscle
- 15.4. Endothelium
- 15.5. The Neurovascular Unit
- 15.6. How Blood Distorts an Applied Magnetic Field
- 15.7. Nuclear Magnetic Resonance and the BOLD Signal
- 15.8. The Hemodynamic Response
- 15.9. Magnetic Resonance Imaging
- 15.10. Summary and Sources
- 15.11. Exercises
- Bibliography
Chapter 16: The Singular Value Decomposition and Applications⁎
- Abstract
- 16.1. The Singular Value Decomposition
- 16.2. Principal Component Analysis and Spike Sorting
- 16.3. Synaptic Plasticity and Principal Components
- 16.4. Neuronal Model Reduction via Balanced Truncation
- 16.5. Summary and Sources
- 16.6. Exercises
- Bibliography
Chapter 17: Quantification of Spike Train Variability
- Abstract
- 17.1. Interspike Interval Histograms and Coefficient of Variation
- 17.2. Refractory Period
- 17.3. Spike Count Distribution and Fano Factor
- 17.4. Renewal Processes
- 17.5. Return Maps and Serial Correlation Coefficients
- 17.6. Summary and Sources
- 17.7. Exercises
- Bibliography
Chapter 18: Stochastic Processes
- Abstract
- 18.1. Definition and General Properties
- 18.2. Gaussian Processes
- 18.3. Point Processes
- 18.4. The Inhomogeneous Poisson Process
- 18.5. Spectral Analysis
- 18.6. Summary and Sources
- 18.7. Exercises
- Bibliography
Chapter 19: Membrane Noise⁎
- Abstract
- 19.1. Two-State Channel Model
- 19.2. Multi-State Channel Models
- 19.3. The Ornstein–Uhlenbeck Process
- 19.4. Synaptic Noise
- 19.5. Summary and Sources
- 19.6. Exercises
- Bibliography
Chapter 20: Power and Cross-Spectra
- Abstract
- 20.1. Cross-Correlation and Coherence
- 20.2. Estimator Bias and Variance
- 20.3. Numerical Estimate of the Power Spectrum⁎
- 20.4. Summary and Sources
- 20.5. Exercises
- Bibliography
Chapter 21: Natural Light Signals and Phototransduction
- Abstract
- 21.1. Wavelength and Intensity
- 21.2. Spatial Properties of Natural Light Signals
- 21.3. Temporal Properties of Natural Light Signals
- 21.4. A Model of Phototransduction
- 21.5. Summary and Sources
- 21.6. Exercises
- Bibliography
Chapter 22: Firing Rate Codes and Early Vision
- Abstract
- 22.1. Definition of Mean Instantaneous Firing Rate
- 22.2. Visual System and Visual Stimuli
- 22.3. Spatial Receptive Field of Retinal Ganglion Cells
- 22.4. Characterization of Receptive Field Structure
- 22.5. Spatio-Temporal Receptive Fields
- 22.6. Static Non-Linearities⁎
- 22.7. Summary and Sources
- 22.8. Exercises
- Bibliography
Chapter 23: Models of Simple and Complex Cells
- Abstract
- 23.1. Simple Cell Models
- 23.2. Non-Separable Receptive Fields
- 23.3. Receptive Fields of Complex Cells
- 23.4. Motion-Energy Model
- 23.5. Hubel–Wiesel Model
- 23.6. Multiscale Representation of Visual Information
- 23.7. Summary and Sources
- 23.8. Exercises
- Bibliography
Chapter 24: Models of Motion Detection
- Abstract
- 24.1. HRC Model of Motion Detection
- 24.2. Responses to Moving Stimuli
- 24.3. Properties of the Correlation Model
- 24.4. Equivalence with the Motion-Energy Model
- 24.5. Beyond Correlation in Motion Detection
- 24.6. Summary and Sources
- 24.7. Exercises
- Bibliography
Chapter 25: Stochastic Estimation Theory
- Abstract
- 25.1. Minimum Mean-Square Error Estimation
- 25.2. Estimation of Gaussian Signals⁎
- 25.3. Linear Non-Linear (LN) Models⁎
- 25.4. Summary and Sources
- 25.5. Exercises
- Bibliography
Chapter 26: Reverse-Correlation and Spike Train Decoding
- Abstract
- 26.1. Reverse-Correlation
- 26.2. Stimulus Reconstruction
- 26.3. Summary and Sources
- 26.4. Exercises
- Bibliography
Chapter 27: Signal Detection Theory
- Abstract
- 27.1. Testing Hypotheses
- 27.2. Ideal Decision Rules
- 27.3. ROC Curves⁎
- 27.4. Multi-Dimensional Gaussian Signals⁎
- 27.5. Fisher Linear Discriminant⁎
- 27.6. Summary and Sources
- 27.7. Exercises
- Bibliography
Chapter 28: Relating Neuronal Responses and Psychophysics
- Abstract
- 28.1. Single Photon Detection
- 28.2. Signal Detection Theory and Psychophysics
- 28.3. Motion Detection
- 28.4. Summary and Sources
- 28.5. Exercises
- Bibliography
Chapter 29: Population Codes⁎
- Abstract
- 29.1. Cartesian Coordinate Systems
- 29.2. Overcomplete Representations
- 29.3. Frames
- 29.4. Maximum Likelihood
- 29.5. Estimation Error and Cramer–Rao Bound⁎
- 29.6. Population Coding in the Superior Colliculus
- 29.7. Summary and Sources
- 29.8. Exercises
- Bibliography
Chapter 30: Neuronal Networks
- Abstract
- 30.1. Perceptrons
- 30.2. Hopfield Networks
- 30.3. Integrate and Fire Networks
- 30.4. Integrate and Fire Networks with Plastic Synapses
- 30.5. Formation of the Grid Cell Network via STDP
- 30.6. Hodgkin–Huxley Based Networks
- 30.7. Hodgkin–Huxley Based Networks with Plastic Synapses
- 30.8. Rate Based Networks
- 30.9. Brain Maps and Self-Organizing Maps
- 30.10. Summary and Sources
- 30.11. Exercises
- Bibliography
Chapter 31: Solutions to Exercises
- Abstract
- 31.1. Chapter 2
- 31.2. Chapter 3
- 31.3. Chapter 4
- 31.4. Chapter 5
- 31.5. Chapter 6
- 31.6. Chapter 7
- 31.7. Chapter 8
- 31.8. Chapter 9
- 31.9. Chapter 10
- 31.10. Chapter 11
- 31.11. Chapter 12
- 31.12. Chapter 13
- 31.13. Chapter 14
- 31.14. Chapter 15
- 31.15. Chapter 16
- 31.16. Chapter 17
- 31.17. Chapter 18
- 31.18. Chapter 19
- 31.19. Chapter 20
- 31.20. Chapter 21
- 31.21. Chapter 22
- 31.22. Chapter 23
- 31.23. Chapter 24
- 31.24. Chapter 25
- 31.25. Chapter 26
- 31.26. Chapter 27
- 31.27. Chapter 28
- 31.28. Chapter 29
- 31.29. Chapter 30
- Bibliography
Details
- No. of pages: 628
- Language: English
- Copyright: © Academic Press 2017
- Published: 23rd February 2017
- Imprint: Academic Press
- Hardcover ISBN: 9780128018958
- eBook ISBN: 9780128019061
About the Authors
Fabrizio Gabbiani
Dr. Gabbiani is Professor in the Department of Neuroscience at the Baylor College of Medicine. He received the prestigious Alexander von Humboldt Foundation research prize in 2012, recently completed a one-year cross-appointment at the Max Planck Institute of Neurobiology in Martinsried, and has extensive international experience in computational neuroscience. Together with Dr. Cox, Dr. Gabbiani co-authored the first edition of this bestselling book in 2010.
Affiliations and Expertise
Baylor College of Medicine, Houston, TX, USA
Steven Cox
Dr. Cox is Professor of Computational and Applied Mathematics at Rice University. Affiliated with the Center for Neuroscience, the Cognitive Sciences Program, and the Ken Kennedy Institute for Information Technology, he is also Adjunct Professor of Neuroscience at the Baylor College of Medicine. In addition, Dr. Cox has served as Associate Editor for a number of mathematics journals, including Mathematical Medicine and Biology and Inverse Problems. He co-authored the first edition of this title with Dr. Gabbiani.
Affiliations and Expertise
Computational and Applied Mathematics, Rice University, Houston, TX, USA
Reviews
"This is a big book in more than one sense. It has a large page format measuring about 20cm x 27cm making it easy to open up and take in large swathes of text, equations, and figures. More importantly, it covers a very wide range of mathematical methodologies relevant to neuroscience. ...I would highly recommend this book to those with an interest in computational neuroscience who wish to delve more deeply into the biophysics underlying cell-based dynamics and computations, especially if they are interested in flexing their mathematical muscles." --MathSciNet
Amazon Editorial Reviews for First Edition:
"I really think this book is very, very important. This is precisely what has been missing from the field and is badly needed. " --Dr. Kevin Franks, research fellow, Richard Axel's laboratory Columbia University, NYC
"The idea of presenting sufficient maths to understand the theoretical neuroscience, alongside the neuroscience itself, is appealing. The inclusion of Matlab code for all examples and computational figures is an excellent idea. " --David Corney, research fellow, Institute of Ophthalmology, University College London