 Hastie, Trevor.
 2nd ed.  New York : Springer, c2009.
 Description
 Book — xxii, 745 p. : ill. ; 24 cm.
 Summary

 Introduction. Overview of supervised learning. Linear methods for regression. Linear methods for classification. Basis expansions and regularization. Kernel smoothing methods. Model assessment and selection. Model inference and averaging. Additive models, trees, and related methods. Boosting and additive trees. Neural networks. Support vector machines and flexible discriminants. Prototype methods and nearest-neighbors. Unsupervised learning.
 (source: Nielsen Book Data)
Marine Biology Library (Miller), Science Library (Li and Ma)
Marine Biology Library (Miller)  Status 

Stacks  
Q325.75 .H37 2009  Unknown 
Science Library (Li and Ma)  Status 

Stacks  
Q325.75 .H37 2009  Checked out 
Q325.75 .H37 2009  Unknown 
Q325.75 .H37 2009  Checked out 
 Hastie, Trevor.
 New York : Springer, c2001.
 Description
 Book — xvi, 533 p. : ill. (some col.) ; 25 cm.
 Summary

 Overview of Supervised Learning. Linear Methods for Regression. Linear Methods for Classification. Basis Expansions and Regularization. Kernel Methods. Model Assessment and Selection. Model Inference and Averaging. Additive Models, Trees, and Related Methods. Boosting and Additive Trees. Neural Networks. Support Vector Machines and Flexible Discriminants. Prototype Methods and Nearest Neighbors. Unsupervised Learning.
 (source: Nielsen Book Data)
Science Library (Li and Ma)
Science Library (Li and Ma)  Status 

Stacks  
Q325.75 .H37 2001  Unknown 
Q325.75 .H37 2001  Unknown 
 Hastie, Trevor.
 New York : Springer, c2001.
 Description
 Book — xvi, 533 p. : col. ill. ; 25 cm.
 Summary

 Overview of Supervised Learning. Linear Methods for Regression. Linear Methods for Classification. Basis Expansions and Regularization. Kernel Methods. Model Assessment and Selection. Model Inference and Averaging. Additive Models, Trees, and Related Methods. Boosting and Additive Trees. Neural Networks. Support Vector Machines and Flexible Discriminants. Prototype Methods and Nearest Neighbors. Unsupervised Learning.
 (source: Nielsen Book Data)
 Online
Law Library (Crown)
Law Library (Crown)  Status 

Basement  
Q325.75 .H37 2001  Unknown 
4. The elements of statistical learning [2001]
 Hastie, T.
 New York, NY : Springer, 2001.
 Description
 Book — 1 online resource (546 pages)
 Summary

During the past decade there has been an explosion in computation and information technology. With it has come a vast amount of data in a variety of fields such as medicine, biology, finance, and marketing. The challenge of understanding these data has led to the development of new tools in the field of statistics, and spawned new areas such as data mining, machine learning, and bioinformatics. Many of these tools have common underpinnings but are often expressed with different terminology. This book describes the important ideas in these areas in a common conceptual framework. While the approach is statistical, the emphasis is on concepts rather than mathematics.
 International Workshop on Continual Semi-Supervised Learning (1st : 2021 : Online)
 Cham : Springer, 2022.
 Description
 Book — 1 online resource (xiii, 135 pages) : illustrations (some color).
 Summary

 International Workshop on Continual Semi-Supervised Learning: Introduction, Benchmarks and Baselines. Unsupervised Continual Learning via Pseudo Labels. Transfer and Continual Supervised Learning for Robotic Grasping through Grasping Features. Unsupervised Continual Learning via Self-Adaptive Deep Clustering Approach. Evaluating Continual Learning Algorithms by Generating 3D Virtual Environments. A Benchmark and Empirical Analysis for Replay Methods in Continual Learning. SPeCiaL: Self-Supervised Pretraining for Continual Learning. Distilled Replay: Overcoming Forgetting through Synthetic Samples. Self-supervised Novelty Detection for Continual Learning: A Gradient-based Approach Boosted by Binary Classification.
 (source: Nielsen Book Data)
6. Learning to quantify [2023]
 Esuli, Andrea, author.
 Cham : Springer, 2023.
 Description
 Book — 1 online resource (xvi, 137 pages) : illustrations.
 Summary

  1. The Case for Quantification.
 2. Applications of Quantification.
 3. Evaluation of Quantification Algorithms.
 4. Methods for Learning to Quantify.
 5. Advanced Topics.
 6. The Quantification Landscape.
 7. The Road Ahead.
 Cham : Springer, 2020.
 Description
 Book — 1 online resource (191 pages)
 Summary

 Chapter 1: A Systematic Review on Supervised & Unsupervised Machine Learning Algorithms for Data Science. Chapter 2: Overview of One-Pass and Discard-After-Learn Concepts for Classification and Clustering in Streaming Environment with Constraints. Chapter 3: Distributed Single-Source Shortest Path Algorithms with Two Dimensional Graph Layout. Chapter 4: Using Non-Negative Tensor Decomposition for Unsupervised Textual Influence Modeling. Chapter 5: Survival Support Vector Machines: A Simulation Study and Its Health-related Application. Chapter 6: Semantic Unsupervised Learning for Word Sense Disambiguation. Chapter 7: Enhanced Tweet Hybrid Recommender System using Unsupervised Topic Modeling and Matrix Factorization based Neural Network. Chapter 8: New Applications of a Supervised Computational Intelligence (CI) Approach: Case Study in Civil Engineering.
 (source: Nielsen Book Data)
8. Boosting : foundations and algorithms [2012]
 Schapire, Robert E., author.
 Cambridge, Massachusetts : MIT Press, c2012 ; [Piscataway, New Jersey] : IEEE Xplore, [2012]
 Description
 Book — 1 online resource (xv, 526 pages) : illustrations
 Summary

 Foundations of machine learning
 Using AdaBoost to minimize training error
 Direct bounds on the generalization error
 The margins explanation for boosting's effectiveness
 Game theory, online learning, and boosting
 Loss minimization and generalizations of boosting
 Boosting, convex optimization, and information geometry
 Using confidence-rated weak predictions
 Multiclass classification problems
 Learning to rank
 Attaining the best possible accuracy
 Optimally efficient boosting
 Boosting in continuous time
(source: Nielsen Book Data)
 Schapire, Robert E.
 Cambridge, MA : MIT Press, ©2012.
 Description
 Book — 1 online resource (xv, 526 pages) : illustrations.
 Summary

 Foundations of machine learning
 Using AdaBoost to minimize training error
 Direct bounds on the generalization error
 The margins explanation for boosting's effectiveness
 Game theory, online learning, and boosting
 Loss minimization and generalizations of boosting
 Boosting, convex optimization, and information geometry
 Using confidence-rated weak predictions
 Multiclass classification problems
 Learning to rank
 Attaining the best possible accuracy
 Optimally efficient boosting
 Boosting in continuous time.
(source: Nielsen Book Data)
 Schapire, Robert E.
 Cambridge, MA : MIT Press, ©2012.
 Description
 Book — 1 online resource (xv, 526 pages) : illustrations.
 Summary

 Foundations of machine learning
 Using AdaBoost to minimize training error
 Direct bounds on the generalization error
 The margins explanation for boosting's effectiveness
 Game theory, online learning, and boosting
 Loss minimization and generalizations of boosting
 Boosting, convex optimization, and information geometry
 Using confidence-rated weak predictions
 Multiclass classification problems
 Learning to rank
 Attaining the best possible accuracy
 Optimally efficient boosting
 Boosting in continuous time.
(source: Nielsen Book Data)
11. Boosting : foundations and algorithms [2012]
 Schapire, Robert E.
 Cambridge, MA : MIT Press, ©2012.
 Description
 Book — 1 online resource (xv, 526 pages) : illustrations Digital: data file.
 Summary

 Foundations of machine learning
 Using AdaBoost to minimize training error
 Direct bounds on the generalization error
 The margins explanation for boosting's effectiveness
 Game theory, online learning, and boosting
 Loss minimization and generalizations of boosting
 Boosting, convex optimization, and information geometry
 Using confidence-rated weak predictions
 Multiclass classification problems
 Learning to rank
 Attaining the best possible accuracy
 Optimally efficient boosting
 Boosting in continuous time.
(source: Nielsen Book Data)
12. Boosting : foundations and algorithms [2012]
 Schapire, Robert E.
 Cambridge, MA : MIT Press, ©2012.
 Description
 Book — 1 online resource (xv, 526 pages) : illustrations Digital: data file.
 Summary

 Foundations of machine learning
 Using AdaBoost to minimize training error
 Direct bounds on the generalization error
 The margins explanation for boosting's effectiveness
 Game theory, online learning, and boosting
 Loss minimization and generalizations of boosting
 Boosting, convex optimization, and information geometry
 Using confidence-rated weak predictions
 Multiclass classification problems
 Learning to rank
 Attaining the best possible accuracy
 Optimally efficient boosting
 Boosting in continuous time.
(source: Nielsen Book Data)
13. Semi-supervised learning [2006]
 Cambridge, Mass. : MIT Press, c2006.
 Description
 Book — x, 508 p. : ill. ; 26 cm.
 Summary

A comprehensive review of an area of machine learning that deals with the use of unlabeled data in classification problems: state-of-the-art algorithms, a taxonomy of the field, applications, benchmark experiments, and directions for future research. In the field of machine learning, semi-supervised learning (SSL) occupies the middle ground between supervised learning (in which all training examples are labeled) and unsupervised learning (in which no label data are given). Interest in SSL has increased in recent years, particularly because of application domains in which unlabeled data are plentiful, such as images, text, and bioinformatics. This first comprehensive overview of SSL presents state-of-the-art algorithms, a taxonomy of the field, selected applications, benchmark experiments, and perspectives on ongoing and future research. Semi-Supervised Learning first presents the key assumptions and ideas underlying the field: smoothness, cluster or low-density separation, manifold structure, and transduction. The core of the book is the presentation of SSL methods, organized according to algorithmic strategies. After an examination of generative models, the book describes algorithms that implement the low-density separation assumption, graph-based methods, and algorithms that perform two-step learning. The book then discusses SSL applications and offers guidelines for SSL practitioners by analyzing the results of extensive benchmark experiments. Finally, the book looks at interesting directions for SSL research. The book closes with a discussion of the relationship between semi-supervised learning and transduction.
(source: Nielsen Book Data)
Engineering Library (Terman)
Engineering Library (Terman)  Status 

Stacks  
Q325.75 .S42 2006  Unknown 
 New York : Nova Science Publishers, [2018]
 Description
 Book — 1 online resource (241 pages)
 Summary

 Intro; SEMI-SUPERVISED LEARNING: BACKGROUND, APPLICATIONS AND FUTURE DIRECTIONS; CONTENTS; PREFACE; Introduction to This Book; Target Audience; Acknowledgments;
 Chapter 1: CONSTRAINED DATA SELF-REPRESENTATIVE GRAPH CONSTRUCTION; Abstract; 1. Introduction; 2. Constrained Data Self-Representative Graph Construction; 3. Kernelized Variants; 3.1. Hilbert Space; 3.2. Column Generation; 4. Performance Evaluation; 4.1. Label Propagation; 4.1.1. Gaussian Random Fields; 4.1.2. Local and Global Consistency; 4.2. Experimental Results; 4.2.1. Comparison among Several Graph Construction Methods; 4.2.2. Stability of the Proposed Method; 4.2.3. Sensitivity to Parameters; 4.2.4. Computational Complexity and CPU Time; Acknowledgments; Conclusion; References;
 Chapter 2: INJECTING RANDOMNESS INTO GRAPHS: AN ENSEMBLE SEMI-SUPERVISED LEARNING FRAMEWORK; Abstract; 1. Introduction; 2. Background; 2.1. Graph-Based Semi-Supervised Learning; 2.2. Ensemble Learning and Random Forests; 2.3. Anchor Graph; 3. Random Multi-Graphs; 3.1. Problem Formulation; 3.2. Algorithm; 3.3. Graph Construction; 3.4. Semi-Supervised Inference; 3.5. Inductive Extension; 3.6. Randomness as Regularization; 4. Experiments; 4.1. Data Sets; 4.2. Experimental Results; 4.3. Impact of Parameters; 4.4. Hyperspectral Image Classification; Acknowledgments; Conclusion; References;
 Chapter 3: LABEL PROPAGATION VIA KERNEL FLEXIBLE MANIFOLD EMBEDDING; Abstract; 1. Introduction; 2. Related Work; 2.1. Semi-Supervised Discriminant Analysis; 2.2. Semi-Supervised Discriminant Embedding; 2.3. Laplacian Regularized Least Square; 2.4. Review of the Flexible Manifold Embedding Framework; 3. Kernel Flexible Manifold Embedding; 3.1. The Objective Function; 3.2. Optimal Solution; 3.3. The Algorithm; 3.4. Difference between KFME and Existing Methods; 3.4.1. Difference between KFME and FME; 3.4.2. Difference between KFME and Other Methods; 4. Experimental Results; 4.1. Datasets; 4.2. Method Comparison; 4.3. Results Analysis; 4.4. Stability with Respect to Graph; Acknowledgments; Conclusion; References;
 Chapter 4: FAST GRAPH-BASED SEMI-SUPERVISED LEARNING AND ITS APPLICATIONS; Abstract; 1. Introduction; 2. Related Work; 2.1. Scalable Graph-Based SSL/TL Methods; 2.2. Scalable Graph Construction Methods; 2.3. Robust Graph-Based SSL/TL Methods; 3. Minimum Tree Cut Method; 3.1. Notations; 3.2. The Proposed Method; 3.3. The Tree Labeling Algorithm; 3.4. Generate a Spanning Tree from a Graph; 4. Insensitiveness to Graph Construction; 5. Experiments; 5.1. Data Set; 5.1.1. UCI Data Set; 5.1.2. Image; 5.1.3. Text; 5.2. Graph Construction; 5.3. Accuracy; 5.4. Speed; 5.5. Robustness; 5.6. Effect of Different Spanning Tree and Ensemble of Multiple Spanning Trees; 6. Applications in Text Extraction; 6.1. Interactive Text Extraction in Natural Scene Images; 6.2. Document Image Binarization; Conclusion and Future Work; References
 弱监督学习实用指南 : 用更少的数据做更多的事情 = Practical weak supervision : doing more with less data
 Practical weak supervision : doing more with less data. Chinese
 Tok, WeeHyong, author.
 Di 1 ban (1st edition). Nanjing : Dong nan da xue chu ban she = Southeast University Press, 2023.
 Description
 Book — 1 online resource (209 pages)
 Summary

Today, most data scientists and data engineers train learning models on high-quality labeled datasets. But building training sets by hand is so time-consuming and expensive that many companies never complete their machine learning projects. This book offers a more practical approach: Wee Hyong Tok, Amit Bahree, and Senja Filipi show how to create products using weakly supervised learning models. You will learn how to build natural language processing and computer vision projects on weakly labeled datasets using Snorkel (a spin-off of the Stanford AI Lab). Because many machine learning projects never make it out of the lab, the book also provides guidance on using the resulting deep learning models in real-world cases. Topics include: the latest advances in weak supervision and how to use them in the data science process; weak supervision and data programming with Snorkel AI; code examples for labeling text and image datasets with Snorkel; text and image classification with weakly labeled datasets; and considerations for handling large datasets with Snorkel and scaling labeling with Spark clusters.
 Jang, Yeona.
 Cambridge, Mass. : Massachusetts Institute of Technology. Laboratory for Computer Science, c1993.
 Description
 Book — 162 p. : ill. ; 28 cm.
SAL3 (off-campus storage)
SAL3 (off-campus storage)  Status 

Stacks  
135101  Available 
17. Semi-supervised learning [2006]
 Cambridge, Mass. : MIT Press, ©2006.
 Description
 Book — 1 online resource (x, 508 pages) : illustrations Digital: data file.
 Summary

 Series Foreword; Preface; 1. Introduction to Semi-Supervised Learning; 2. A Taxonomy for Semi-Supervised Learning Methods; 3. Semi-Supervised Text Classification Using EM; 4. Risks of Semi-Supervised Learning: How Unlabeled Data Can Degrade Performance of Generative Classifiers; 5. Probabilistic Semi-Supervised Clustering with Constraints; 6. Transductive Support Vector Machines; 7. Semi-Supervised Learning Using Semi-Definite Programming; 8. Gaussian Processes and the Null-Category Noise Model; 9. Entropy Regularization; 10. Data-Dependent Regularization; 11. Label Propagation and Quadratic Criterion; 12. The Geometric Basis of Semi-Supervised Learning; 13. Discrete Regularization; 14. Semi-Supervised Learning with Conditional Harmonic Mixing; 15. Graph Kernels by Spectral Transforms; 16. Spectral Methods for Dimensionality Reduction; 17. Modifying Distances; 18. Large-Scale Algorithms; 19. Semi-Supervised Protein Classification Using Cluster Kernels; 20. Prediction of Protein Function from Networks; 21. Analysis of Benchmarks; 22. An Augmented PAC Model for Semi-Supervised Learning; 23. Metric-Based Approaches for Semi-Supervised Regression and Classification; 24. Transductive Inference and Semi-Supervised Learning; 25. A Discussion of Semi-Supervised Learning and Transduction; References; Notation and Symbols; Contributors; Index.
(source: Nielsen Book Data)
A comprehensive review of an area of machine learning that deals with the use of unlabeled data in classification problems: state-of-the-art algorithms, a taxonomy of the field, applications, benchmark experiments, and directions for future research. In the field of machine learning, semi-supervised learning (SSL) occupies the middle ground between supervised learning (in which all training examples are labeled) and unsupervised learning (in which no label data are given). Interest in SSL has increased in recent years, particularly because of application domains in which unlabeled data are plentiful, such as images, text, and bioinformatics. This first comprehensive overview of SSL presents state-of-the-art algorithms, a taxonomy of the field, selected applications, benchmark experiments, and perspectives on ongoing and future research. Semi-Supervised Learning first presents the key assumptions and ideas underlying the field: smoothness, cluster or low-density separation, manifold structure, and transduction. The core of the book is the presentation of SSL methods, organized according to algorithmic strategies. After an examination of generative models, the book describes algorithms that implement the low-density separation assumption, graph-based methods, and algorithms that perform two-step learning. The book then discusses SSL applications and offers guidelines for SSL practitioners by analyzing the results of extensive benchmark experiments. Finally, the book looks at interesting directions for SSL research. The book closes with a discussion of the relationship between semi-supervised learning and transduction.
(source: Nielsen Book Data)
18. Graph-based semi-supervised learning [2014]
 Subramanya, Amarnag, author.
 Cham, Switzerland : Springer, [2014]
 Description
 Book — 1 online resource (xiii, 111 pages) : illustrations
 Summary

 Introduction; Graph Construction; Learning and Inference; Scalability; Applications; Future Work; Bibliography; Authors' Biographies; Index.
 (source: Nielsen Book Data)
19. Introduction to semi-supervised learning [2009]
 Zhu, Xiaojin, Ph. D.
 Cham, Switzerland : Springer, ©2009.
 Description
 Book — 1 online resource (xi, 116 pages) : color illustrations
 Summary

 Introduction to Statistical Machine Learning; Overview of Semi-Supervised Learning; Mixture Models and EM; Co-Training; Graph-Based Semi-Supervised Learning; Semi-Supervised Support Vector Machines; Human Semi-Supervised Learning; Theory and Outlook.
 (source: Nielsen Book Data)
 Hastie, Trevor author.
 2nd ed.  New York : Springer, [2009]
 Description
 Book — xxii, 745 pages : illustrations (some color) ; 24 cm.
 Summary

 Introduction
 Overview of supervised learning
 Linear methods for regression
 Linear methods for classification
 Basis expansions and regularization
 Kernel smoothing methods
 Model assessment and selection
 Model inference and averaging
 Additive models, trees, and related methods
 Boosting and additive trees
 Neural networks
 Support vector machines and flexible discriminants
 Prototype methods and nearest-neighbors
 Unsupervised learning
 Random forests
 Ensemble learning
 Undirected graphical models
 High-dimensional problems : p >> N.
(source: Nielsen Book Data)
 Online
Law Library (Crown)
Law Library (Crown)  Status 

Basement  
Q325.75 .H37 2009  Unknown 