Connectionism: A Hands-On Approach by Michael R. W. Dawson (auth.)
2005 | 207 Pages | ISBN: 1405130741 | PDF | 3 MB
Connectionism is a "hands-on" introduction to connectionist modeling through practical exercises in different types of connectionist architectures. The book:

- explores three types of connectionist architectures: the distributed associative memory, the perceptron, and the multilayer perceptron
- provides a brief overview of each architecture, a detailed introduction to the program used to explore that network, and a series of practical exercises designed to highlight the advantages and disadvantages of each
- is accompanied by a website at http://www.bcp.psych.ualberta.ca/~mike/Book3/ that includes the software and practice exercises, as well as the files and blank exercise sheets required for performing the exercises
- is designed to be used as a stand-alone volume or alongside Minds and Machines: Connectionism and Psychological Modeling (Michael R. W. Dawson, Blackwell, 2004)

Contents:
Chapter 1: Hands-On Connectionism (pages 1–4)
Chapter 2: The Distributed Associative Memory (pages 5–8)
Chapter 3: The James Program (pages 9–21)
Chapter 4: Introducing Hebb Learning (pages 22–29)
Chapter 5: Limitations of Hebb Learning (pages 30–36)
Chapter 6: Introducing the Delta Rule (pages 38–40)
Chapter 7: Distributed Networks and Human Memory (pages 41–45)
Chapter 8: Limitations of Delta Rule Learning (pages 46–47)
Chapter 9: The Perceptron (pages 49–57)
Chapter 10: The Rosenblatt Program (pages 58–71)
Chapter 11: Perceptrons and Logic Gates (pages 72–80)
Chapter 12: Performing More Logic with Perceptrons (pages 81–85)
Chapter 13: Value Units and Linear Nonseparability (pages 87–90)
Chapter 14: Network by Problem Type Interactions (pages 91–93)
Chapter 15: Perceptrons and Generalization (pages 94–98)
Chapter 16: Animal Learning Theory and Perceptrons (pages 99–107)
Chapter 17: The Multilayer Perceptron (pages 108–113)
Chapter 18: The Rumelhart Program (pages 114–128)
Chapter 19: Beyond the Perceptron's Limits (pages 129–132)
Chapter 20: Symmetry as a Second Case Study (pages 133–136)
Chapter 21: How Many Hidden Units? (pages 137–144)
Chapter 22: Scaling Up with the Parity Problem (pages 145–150)
Chapter 23: Selectionism and Parity (pages 151–156)
Chapter 24: Interpreting a Small Network (pages 157–162)
Chapter 25: Interpreting Networks of Value Units (pages 163–173)
Chapter 26: Interpreting Distributed Representations (pages 174–182)
Chapter 27: Creating Your Own Training Sets (pages 183–187)