- Stanford, California : HeurisTech Press, c1981.
- Description
- Book — 1 online resource (424 pages)
- Stanford, California : HeurisTech Press, c1982.
- Description
- Book — 1 online resource (443 pages)
- Stanford, California : HeurisTech Press, c1982.
- Description
- Book — 1 online resource (659 pages)
4. Evolution, learning, and cognition [1988]
- Singapore ; Teaneck, N.J., USA : World Scientific, ©1988.
- Description
- Book — 1 online resource (x, 411 pages) : illustrations
- Summary
-
- PREFACE; CONTENTS; Part One MATHEMATICAL THEORY; Connectionist Learning Through Gradient Following; INTRODUCTION; CONNECTIONIST SYSTEMS; LEARNING; Supervised Learning vs. Associative Reinforcement Learning; FORMAL ASSUMPTIONS AND NOTATION; BACK-PROPAGATION ALGORITHM FOR SUPERVISED LEARNING; Extended Back-Propagation; REINFORCE ALGORITHMS FOR ASSOCIATIVE REINFORCEMENT LEARNING; Extended REINFORCE Algorithms; DISCUSSION; SUMMARY; REFERENCES; Efficient Stochastic Gradient Learning Algorithm for Neural Network; 1 Introduction; 2 Learning as Stochastic Gradient Descents.
- 3 Convergence Theorems for First Order Schemes; 4 Convergence of the Second Order Schemes; 5 Discussion; References; INFORMATION STORAGE IN FULLY CONNECTED NETWORKS; 1 INTRODUCTION; 1.1 Neural Networks; 1.2 Organisation; 1.3 Notation; 2 THE MODEL OF McCULLOCH-PITTS; 2.1 State-Theoretic Description; 2.2 Associative Memory; 3 THE OUTER-PRODUCT ALGORITHM; 3.1 The Model; 3.2 Storage Capacity; 4 SPECTRAL ALGORITHMS; 4.1 Outer-Products Revisited; 4.2 Constructive Spectral Approaches; 4.3 Basins of Attraction; 4.4 Choice of Eigenvalues; 5 COMPUTER SIMULATIONS; 6 DISCUSSION; A PROPOSITIONS.
- B OUTER-PRODUCT THEOREMS; C PROOFS OF SPECTRAL THEOREMS; References; NEURONIC EQUATIONS AND THEIR SOLUTIONS;
- 1. Introduction; 1.1. Reminiscing; 1.2. The 1961 Model; 1.3. Notation;
- 2. Linear Separable NE; 2.1. Neuronic Equations; 2.2. Polygonal Inequalities; 2.3. Computation of the n-expansion of arbitrary l.s. functions; 2.4. Continuous versus discontinuous behaviour: transitions;
- 3. General Boolean NE; 3.1. Linearization in tensor space; 3.2. Next-state matrix; 3.3. Normal modes, attractors; 3.4. Synthesis of nets: the inverse problem; 3.5. Separable versus Boolean nets.
- Connections with spin formalism; References; The Dynamics of Searches Directed by Genetic Algorithms; The Hyperplane Transformation; The Genetic Algorithm as a Hyperplane-Directed Search Procedure; (1) Description of the genetic algorithm; (2) Effects of the S's on the search generated by a genetic algorithm; (3) An Example; References; PROBABILISTIC NEURAL NETWORKS;
- 1. INTRODUCTION;
- 2. MODELING THE NOISY NEURON; 2.1. Empirical Properties of Neuron and Synapse; 2.2. Model of Shaw and Vasudevan; 2.3. Model of Little; 2.4. Model of Taylor.
- 3. NONEQUILIBRIUM STATISTICAL MECHANICS OF LINEAR MODELS; 3.1. Statistical Law of Motion: Markov Chain and Master Equation; 3.2. Entropy Production in the Neural; 3.3. Macroscopic Forces and Fluxes; 3.4. Conditions for Thermodynamic Equilibrium; 3.5. Implications for Memory Storage: How Dire?; 4. DYNAMICAL PROPERTIES OF NONLINEAR MODELS; 4.1. Views of Statistical Dynamics; 4.2. Multineuron Interactions, Revisited; 4.3. Cognitive Aspects of the Taylor Model; 4.4. Noisy RAMS and Noisy Nets; 5. THE END OF THE BEGINNING; ACKNOWLEDGMENTS; APPENDIX. TRANSITION PROBABILITIES IN 2-NEURON NETWORKS.
(source: Nielsen Book Data)
5. How to build a person : a prolegomenon [1989]
- Pollock, John L.
- Cambridge, Mass. : MIT Press, ©1989.
- Description
- Book — 1 online resource (xi, 189 pages) : illustrations
- Summary
-
Building a person has been an elusive goal in artificial intelligence. This failure, John Pollock argues, is because the problems involved are essentially philosophical; what is needed for the construction of a person is a physical system that mimics human rationality. Pollock describes an exciting theory of rationality and its partial implementation in OSCAR, a computer system whose descendants will literally be persons.

In developing the philosophical superstructure for this bold undertaking, Pollock defends the conception of man as an intelligent machine and argues that mental states are physical states and persons are physical objects, as described in the fable of Oscar, the self-conscious machine.

Pollock brings a unique blend of philosophy and artificial intelligence to bear on the vexing problem of how to construct a physical system that thinks, is self-conscious, has desires, fears, intentions, and a full range of mental states. He brings together an impressive array of technical work in philosophy to drive theory construction in AI. The result is described in his final chapter on "cognitive carpentry." John Pollock is Professor of Philosophy and Cognitive Science at the University of Arizona. A Bradford Book.
(source: Nielsen Book Data)
- Singapore ; Teaneck, N.J. : World Scientific, ©1990.
- Description
- Book — 1 online resource (vi, 222 pages) : illustrations
- Summary
-
- An intelligent image-based computer-aided education system: the prototype BIRDS / A.A. David, O. Thiery & M. Crehange
- PLAYMAKER: a knowledge-based approach to characterizing hydrocarbon plays / G. Biswas [and others]
- An expert system for interpreting mesoscale features in oceanographic satellite images / N. Krishnakumar [and others]
- An expert system for tuning particle beam accelerators / D.L. Lager, H.R. Brand & W.J. Maurer
- Expert system approach to assessments of bleeding predispositions in tonsillectomy/adenoidectomy patients / N.J. Pizzi & J.M. Gerrard
- Expert system approach using graph representation and analysis for variable-stroke internal-combustion engine design / S.N.T. Shen, M.S. Chew & G.F. Issa
- A comparison of two new techniques for conceptual clustering / S.L. Crawford & S.K. Souders
- Querying an object-oriented database using free language / P. Trigano [and others]
- Adaptive planning for air combat maneuvering / I.C. Hayslip, J.P. Rosenking & J. Filbert
- AM/AG model: a hierarchical social system metaphor for distributed problem solving / D.G. Shin & J. Leone
- CAUSA - a tool for model-based knowledge acquisition / W. Dilger & J. Moller
- PRIOPS: a real-time production system architecture for programming and learning in embedded systems / D.E. Parson & G.D. Blank.
(source: Nielsen Book Data)
7. Naturally intelligent systems [1990]
- Caudill, Maureen.
- Cambridge, Mass. : MIT Press, ©1990.
- Description
- Book — 1 online resource (304 pages) : illustrations
- Summary
-
For centuries, people have been fascinated by the possibility of building an artificial system that behaves intelligently. Now there is a new entry in this arena - neural networks. Naturally Intelligent Systems offers a comprehensive introduction to these exciting systems. It provides a technically accurate, yet down-to-earth discussion of neural networks, clearly explaining the underlying concepts of key neural network designs, how they are trained, and why they work. Throughout, the authors present actual applications that illustrate neural networks' utility in the real world.
(source: Nielsen Book Data)
For centuries, people have been fascinated by the possibility of building an artificial system that behaves intelligently. From Mary Shelley's Frankenstein monster to the computer intelligence of HAL in 2001, scientists have been cast in the role of creator of such devices. Now there is a new entry into this arena, neural networks, and "Naturally Intelligent Systems" explores these systems to see how they work and what they can do. Neural networks are not computers in any traditional sense, and they have little in common with earlier approaches to the problem of fabricating intelligent behavior. Instead, they are information processing systems that are physically modeled after the structure of the brain and that are trained to perform a task rather than programmed like a computer. Neural networks, in fact, provide a tool with problem-solving capabilities - and limitations - strikingly similar to those of animals and people. In particular, they are successful in applications such as speech, vision, robotics, and pattern recognition. "Naturally Intelligent Systems" offers a comprehensive introduction to these exciting systems. It provides a technically accurate, yet down-to-earth discussion of neural networks. No particular mathematical background is necessary; it is written for all interested readers. "Naturally Intelligent Systems" clearly explains the underlying concepts of key neural network designs, how they are trained, and why they work. It compares their behavior to the natural intelligence found in animals - and people. Throughout, Caudill and Butler bring the field into focus by presenting actual applications that illustrate neural networks' utility in the real world. Maureen Caudill is President of Adaptics, a neural network consulting company in San Diego, and author of the popular "Neural Network Primer" articles that appear regularly in "AI Expert". Charles Butler is a Senior Principal Scientist at Physical Sciences in Alexandria, Virginia. He is a specialist in neural network application development. A Bradford Book.
(source: Nielsen Book Data)
- Judd, J. Stephen.
- Cambridge, Mass. : MIT Press, ©1990.
- Description
- Book — 1 online resource (150 pages) : illustrations
- Summary
-
- 1. Neural networks: hopes, problems, and goals
- 2. The loading problem
- 3. Other studies of learning
- 4. The intractability of loading
- 5. Subcases
- 6. Shallow architectures
- 7. Memorization and generalization
- 8. Conclusion.
(source: Nielsen Book Data)
- Neurale netværk. English
- Brunak, Søren.
- Singapore ; Teaneck, N.J., USA : World Scientific, ©1990.
- Description
- Book — 1 online resource (180 pages) : illustrations
- Summary
-
Both specialists and laymen will enjoy reading this book. Using a lively, non-technical style and images from everyday life, the authors present the basic principles behind computing and computers. The focus is on those aspects of computation that concern networks of numerous small computational units, whether biological neural networks or artificial electronic devices.
(source: Nielsen Book Data)
10. Applications of learning & planning methods [1991]
- Singapore ; Teaneck, N.J. : World Scientific, 1991.
- Description
- Book — 1 online resource
- Summary
-
- Ch 1. Embedding learning in a general frame-based architecture / T. Tanaka and T.M. Mitchell
- ch. 2. Connectionist learning with Chebychev networks and analyses of its internal representation / A. Namatame
- ch. 3. Layered inductive learning algorithms and their computational aspects / H. Madala
- ch. 4. An approach to combining explanation-based and neural learning algorithms / J.W. Shavlik and G.G. Towell
- ch. 5. The application of symbolic inductive learning to the acquisition and recognition of noisy texture concepts / P.W. Pachowicz
- ch. 6. Automating technology adaptation in design synthesis / J.R. Kipps and D.D. Gajski
- ch. 7. Connectionist production systems in local and hierarchical representation / A. Sohn and J.-L. Gaudiot
- ch. 8. A parallel architecture for AI nonlinear planning / S. Lee and K. Chung
- ch. 9. Heuristic tree search using nonparametric statistical inference methods / W. Zhang and N.S.V. Rao
- ch. 10. An A* approach to robust plan recognition for intelligent interfaces / R.J. Calistri-Yeh
- ch. 11. Differential A*: an adaptive search method illustrated with robot path planning for moving obstacles & goals, and an uncertain environment / K.I. Trovato
- ch. 12. Path planning under uncertainty / F. Yegenoglu and H.E. Stephanou
- ch. 13. Knowledge-based acquisition in real-time path planning in unknown space / N.G. Bourbakis
- ch. 14. Path planning for two cooperating robot manipulators / Q. Xue and P.C.-Y. Sheu.
(source: Nielsen Book Data)
- Russell, Stuart J. (Stuart Jonathan), 1962-
- Cambridge, Mass. : MIT Press, ©1991.
- Description
- Book — 1 online resource (xx, 200 pages) : illustrations
- Summary
-
- Limited rationality
- execution architectures for decision procedures
- metareasoning architecture
- rational metareasoning
- application to game playing
- application to problem solving search
- learning the value of computation
- toward limited rational agents.
(source: Nielsen Book Data)
Like Mookie, the hero of Spike Lee's film "Do the Right Thing", artificially intelligent systems have a hard time knowing what to do in all circumstances. Classical theories of perfect rationality prescribe the "right thing" for any occasion, but no finite agent can compute their prescriptions fast enough. In "Do the Right Thing", the authors argue that a new theoretical foundation for artificial intelligence can be constructed in which rationality is a property of "programs" within a finite architecture, and their behaviour over time in the task environment, rather than a property of individual decisions. "Do the Right Thing" suggests that the rich structure that seems to be exhibited by humans, and ought to be exhibited by AI systems, is a necessary result of the pressure for optimal behaviour operating within a system of strictly limited resources. It provides an outline for the design of new intelligent systems and describes theoretical and practical tools for bringing about intelligent behaviour in finite machines. The tools are applied to game playing and real-time problem solving, with surprising results.
(source: Nielsen Book Data)
- Aleksandrov, V. V. (Viktor Vasilʹevich)
- Singapore ; Teaneck, N.J. : World Scientific, ©1991.
- Description
- Book — 1 online resource (viii, 203 pages) : illustrations (some color)
- Summary
-
- AUTHORS' NOTES AND ACKNOWLEDGEMENTS; INTRODUCTION; 1.1. Objectives of this Book; 1.2. The Seeing Eye and the Knowing Eye
- 1 IMAGE AND COMPUTER; 1.1. A Short History; 1.2. The Computer's Eye; 1.3. A Beetle and an Ant-Hill; 1.4. Features and Models; 2 HOW HUMANS SEE THE WORLD; 2.1. The Eye and the Brain; 2.2. The Level of Preattention; 2.3. Right and Left Vision; 2.4. Images and Words; 3 CONVERSATIONS WITH A COMPUTER; 3.1. From a Point to a Region; 3.2. From a Region to an Object; 3.3. From an Object to a Situation; 4 AN APOLOGIA FOR VISION; 4.1. The Evolution of Vision.
- 4.2. Vision and Thinking; 4.3. Recollection of the Future; 4.4. Cognition through Vision; 5 CREATING A NEW WORLD; 5.1. From Elements to the System; 5.2. Back to Nature; 5.3. Who Do We Think They Are?; CONCLUSIONS; PLATES; REFERENCES; ILLUSTRATIONS; INDEX.
(source: Nielsen Book Data)
- Singapore ; River Edge, N.J. : World Scientific, ©1991.
- Description
- Book — 1 online resource (iii, 159 pages) : illustrations
- Summary
-
- Introduction, C.H. Chen
- combined neural-net/knowledge-based adaptive systems for large scale dynamic control, A.D.C. Holden and S.C. Suddarth
- a connectionist incremental expert system combining production systems and associative memory, H.F. Yin and P. Liang
- optimal hidden units for two-layer nonlinear feedforward networks, T.D. Sanger
- an incremental fine adjustment algorithm for the design of optimal interpolating networks, S.K. Sin and R.J.P. deFigueiredo
- on the asymptotic properties of recurrent neural networks for optimization, J. Wang
- a real-time image segmentation system using a connectionist classifier architecture, W.E. Blanz and S.L. Gish
- segmentation of ultrasonic images with neural network technology
- on automatic active sonar classifier development, T.B. Haley
- on the relationships between statistical pattern recognition and artificial neural networks, C.H. Chen.
(source: Nielsen Book Data)
- Singapore ; River Edge, N.J. : World Scientific, ©1992.
- Description
- Book — 1 online resource (xxiv, 705 pages) : illustrations
- Summary
-
- An introduction to artificial intelligence, N.G. Bourbakis
- fundamental methods for horn logic and AI applications, E. Kounalis and P. Marquis
- applications of genetic algorithms to permutation problems, F. Petry and B. Buckles
- extracting procedural knowledge from software systems using inductive learning in the PM system, R. Reynolds and E. Zannoni
- resource oriented parallel planning, S. Lee and K. Chung
- advanced parsing technology for knowledge based shells, J. Kipps
- analysis and synthesis of intelligent systems, W. Arden
- document analysis and recognition, S.N. Srihari et al
- signal understanding - an AI approach to modulation and classification, J.E. Whelchel et al
- and others.
(source: Nielsen Book Data)
- Dreyfus, Hubert L.
- Cambridge, Mass. : MIT Press, ©1992.
- Description
- Book — 1 online resource (liii, 354 pages)
- Summary
-
- Ten years of research in artificial intelligence (1957-1967)
- Cognitive simulation (1957-1962)
- Semantic information processing (1962-1967)
- Assumptions underlying persistent optimism
- Biological assumption
- Psychological assumption
- Epistemological assumption
- Ontological assumption
- Alternatives to the traditional assumptions
- The role of the body in intelligent behavior
- The situation: orderly behavior without recourse to rules
- The situation as a function of human needs
- Conclusion: the scope and limits of artificial reason
- The limits of artificial intelligence
- The future of artificial intelligence.
(source: Nielsen Book Data)
- Clark, Andy, 1957-
- Cambridge, Mass. : MIT Press, ©1993.
- Description
- Book — 1 online resource (xiii, 252 pages) : illustrations
- Summary
-
- Part 1 Melting the inner code: computational models, syntax, and the folk solids
- connectionism, code, and context
- what networks know
- what networks don't know
- concept, category and prototype. Part 2 From code to process: the presence of a symbol
- the role of representational trajectories
- the cascade of significant virtual machines
- associative learning in a hostile world
- the fate of the folk
- associative engines - the next generation.
(source: Nielsen Book Data)
17. Neural network learning and expert systems [1993]
- Gallant, Stephen I.
- Cambridge, Mass. : MIT Press, ©1993.
- Description
- Book — 1 online resource (xvi, 365 pages) : illustrations
- Summary
-
- 1. Introduction and important definitions
- 2. Representation issues
- 3. Perceptron learning and the pocket algorithm
- 4. Winner-take-all groups or linear machines
- 5. Autoassociators and one-shot learning
- 6. Mean squared error (MSE) algorithms
- 7. Unsupervised learning
- 8. The distributed method and radial basis functions
- 9. Computational learning theory and the BRD algorithm
- 10. Constructive algorithms
- 11. Backpropagation
- 12. Backpropagation : variations and applications
- 13. Simulated annealing and boltzmann machines
- 14. Expert systems and neural networks
- 15. Details of the MACIE system
- 16. Noise, redundancy, fault detection, and bayesian decision theory
- 17. Extracting rules from networks.
(source: Nielsen Book Data)
18. Advances in genetic programming. [Volume 1] [1994]
- Cambridge, Massachusetts : The MIT Press, [1994]
- Description
- Book — 1 online resource (ix, 476 pages) : illustrations
- Summary
-
- A perspective on the work in this book / Kenneth E. Kinnear, Jr.
- Introduction to genetic programming / John R. Koza
- The evolution of evolvability in genetic programming / Lee Altenberg
- Genetic programming and emergent intelligence / Peter J. Angeline
- Scalable learning in genetic programming using automatic function definition / John R. Koza
- Alternatives in automatic function definition : a comparison of performance / Kenneth E. Kinnear, Jr.
- The donut problem : scalability, generalization and breeding policies in genetic programming / Walter Alden Tackett, Aviram Carmi
- Effects of locality in individual and population evolution / Patrik D'haeseleer, Jason Bluming
- The evolution of mental models / Astro Teller
- Evolution of obstacle avoidance behavior : using noise to promote robust solutions / Craig W. Reynolds
- Pygmies and civil servants / Conor Ryan
- Genetic programming using a minimum description length principle / Hitoshi Iba, Hugo de Garis, Taisuke Sato
- Genetic programming in C++ : implementation issues / Mike J. Keith, Martin C. Martin
- A compiling genetic programming system that directly manipulates the machine code / Peter Nordin
- Automatic generation of programs for crawling and walking / Graham Spencer
- Genetic programming for the acquisition of double auction market strategies / Martin Andrews, Richard Prager
- Two scientific applications of genetic programming : stack filters and non-linear equation fitting to chaotic data / Howard Oakley
- The automatic generation of plans for a mobile robot via genetic programming with automatically defined functions / Simon G. Handley
- Competitively evolving decision trees against fixed training cases for natural language processing / Eric V. Siegel
- Cracking and co-evolving randomizers / Jan Jannink
- Optimizing confidence of text classification by evolution of symbolic expressions / Brij Masand
- Evolvable 3D modeling for model-based object recognition systems / Thang Nguyen, Thomas Huang
- Automatically defined features : the simultaneous evolution of 2-dimensional feature detectors and an algorithm for using them / David Andre
- Genetic micro programming of neural networks / Frédéric Gruau.
(source: Nielsen Book Data)
There is increasing interest in genetic programming by both researchers and professional software developers. These twenty-two invited contributions show how a wide variety of problems across disciplines can be solved using this new paradigm. Advances in Genetic Programming reports significant results in improving the power of genetic programming, presenting techniques that can be employed immediately in the solution of complex problems in many areas, including machine learning and the simulation of autonomous behavior. Popular languages such as C and C++ are used in many of the applications and experiments, illustrating how genetic programming is not restricted to symbolic computing languages such as LISP. Researchers interested in getting started in genetic programming will find information on how to begin, on what public domain code is available, and on how to become part of the active genetic programming community via electronic mail. A major focus of the book is on improving the power of genetic programming. Experimental results are presented in a variety of areas, including adding memory to genetic programming, using locality and "demes" to maintain evolutionary diversity, avoiding the traps of local optima by using coevolution, using noise to increase generality, and limiting the size of evolved solutions to improve generality. Significant theoretical results in the understanding of the processes underlying genetic programming are presented, as are several results in the area of automatic function definition. Performance increases are demonstrated by directly evolving machine code, and implementation and design issues for genetic programming in C++ are discussed.
(source: Nielsen Book Data)
19. Circuit complexity and neural networks [1994]
- Parberry, Ian.
- Cambridge, Mass. : MIT Press, ©1994.
- Description
- Book — 1 online resource (xxix, 270 pages) : illustrations
- Summary
-
- Computers and computation
- the discrete neuron
- the Boolean neuron
- alternating circuits
- small, shallow alternating circuits
- threshold circuits
- cyclic networks
- probabilistic neural networks.
(source: Nielsen Book Data)
Neural networks usually work adequately on small problems but can run into trouble when they are scaled up to problems involving large amounts of input data. Circuit Complexity and Neural Networks addresses the important question of how well neural networks scale - that is, how fast the computation time and number of neurons grow as the problem size increases. It surveys recent research in circuit complexity (a robust branch of theoretical computer science) and applies this work to a theoretical understanding of the problem of scalability.
(source: Nielsen Book Data)
- Kearns, Michael J.
- Cambridge, Mass. : MIT Press, ©1994.
- Description
- Book — 1 online resource (xii, 207 pages) : illustrations
- Summary
-
- The probably approximately correct learning model
- Occam's razor
- the Vapnik-Chervonenkis dimension
- weak and strong learning
- learning in the presence of noise
- inherent unpredictability
- reducibility in PAC learning
- learning finite automata by experimentation
- appendix - some tools for probabilistic analysis.
(source: Nielsen Book Data)