- Molkova, Liudmila.
- 1st edition - Birmingham : Packt Publishing, Limited, 2023
- Description
- Book — 1 online resource (336 p.)
- Summary
-
- Cover
- Title Page
- Copyright and Credits
- Dedication
- Foreword
- Contributors
- Table of Contents
- Preface
- Part 1: Introducing Distributed Tracing
- Chapter 1: Observability Needs of Modern Applications
- Understanding why logs and counters are not enough
- Logs
- Events
- Metrics and counters
- What's missing?
- Introducing distributed tracing
- Span
- Tracing - building blocks
- Reviewing context propagation
- In-process propagation
- Out-of-process propagation
- Ensuring consistency and structure
- Building application topology
- Resource attributes
- Performance analysis overview
- The baseline
- Investigating performance issues
- Summary
- Questions
- Further reading
- Chapter 2: Native Monitoring in .NET
- Technical requirements
- Building a sample application
- Log correlation
- On-demand logging with dotnet-monitor
- Monitoring with runtime counters
- Enabling auto-collection with OpenTelemetry
- Installing and configuring OpenTelemetry
- Exploring auto-generated telemetry
- Debugging
- Performance
- Summary
- Questions
- Chapter 3: The .NET Observability Ecosystem
- Technical requirements
- Configuring cloud storage
- Using instrumentations for popular libraries
- Instrumenting application
- Leveraging infrastructure
- Configuring secrets
- Configuring observability on Dapr
- Tracing
- Metrics
- Instrumenting serverless environments
- AWS Lambda
- Azure Functions
- Summary
- Questions
- Chapter 4: Low-Level Performance Analysis with Diagnostic Tools
- Technical requirements
- Investigating common performance problems
- Memory leaks
- Thread pool starvation
- Profiling
- Inefficient code
- Debugging locks
- Using diagnostics tools in production
- Continuous profiling
- The dotnet-monitor tool
- Summary
- Questions
- Part 2: Instrumenting .NET Applications
- Chapter 5: Configuration and Control Plane
- Technical requirements
- Controlling costs with sampling
- Head-based sampling
- Tail-based sampling
- Enriching and filtering telemetry
- Span processors
- Customizing instrumentations
- Resources
- Metrics
- Customizing context propagation
- Processing a pipeline with the OpenTelemetry Collector
- Summary
- Questions
- Chapter 6: Tracing Your Code
- Technical requirements
- Tracing with System.Diagnostics or the OpenTelemetry API shim
- Tracing with System.Diagnostics
- Tracing with the OpenTelemetry API shim
- Using ambient context
- Recording events
- When to use events
- The ActivityEvent API
- Correlating spans with links
- Using links
- Testing your instrumentation
- Intercepting activities
- Filtering relevant activities
- Summary
- Questions
- Chapter 7: Adding Custom Metrics
- Technical requirements
- Metrics in .NET - past and present
- Cardinality
- When to use metrics
- Reporting metrics
- Using counters
- The Counter class
- The UpDownCounter class
- The ObservableCounter class
- Fryman, Lowell, author.
- Cambridge, MA : Morgan Kaufmann, [2017]
- Description
- Book — 1 online resource (1 volume) : illustrations.
- Summary
-
- 1. Purpose, Scope and Audience
- 2. Executive Call to Action: How Chief Data Officers and Business Sponsors Can Empower Results
- 3. Assessing Conditions, Controls and Capabilities
- 4. Detailed Playbook Activities
- 5. Aligning the Language of Business: The Business Glossary
- 6. The Business Data Governance Toolkit
- 7. Playbook Deployment
- 8. Data Governance as an Operations Process
- 9. Governing Big Data and Analytics
- 10. Rapid Playbook Deployment.
- (source: Nielsen Book Data)
4. The distributed systems video collection [2016]
- O'Reilly Software Architecture Conference (2016 : New York, N.Y.)
- [Place of publication not identified] : O'Reilly Media, 2016.
- Description
- Video — 1 online resource (1 streaming video file (9 hr., 27 min., 43 sec.))
- Summary
-
"Scaling up and out - and back down again - at a moment's notice is essential for many large scale applications today. Scaling and performance at scale doesn't have to be a nightmare, this collection covers concrete ways to ensure your distributed architecture is resilient, robust, and able to seamlessly interact with databases, APIs, and customers."--Resource description page
- IFIP/IEEE International Workshop on Distributed Systems: Operations and Management (17th : 2006 : Dublin, Ireland)
- Berlin ; New York : Springer, c2006.
- Description
- Book — xiii, 282 p. : ill. ; 24 cm.
- Summary
-
This book constitutes the refereed proceedings of the 17th IFIP/IEEE International Workshop on Distributed Systems, Operations and Management, DSOM 2006, held in Dublin, Ireland in October 2006 in the course of the 2nd International Week on Management of Networks and Services, Manweek 2006. The 21 revised full papers and 4 revised short papers presented were carefully reviewed and selected from 85 submissions. The papers are organized in topical sections on performance of management protocols, complexity of service management, ontologies and network management, management of next generation network and services, business and service management, security and policy based management, short papers, and supporting approaches for network management.
(source: Nielsen Book Data)
| SAL3 (off-campus storage) | Status |
|---|---|
| Stacks: QA76.9 .D5 I33835 2006 | Available |
- [Place of publication not identified] : Packt Publishing, 2017.
- Description
- Video — 1 online resource (1 streaming video file (15 hr., 16 min., 15 sec.)) Digital: data file.
- Summary
-
"This course is an end-to-end, practical guide to using Hive for Big Data processing ... Hive helps you leverage the power of Distributed computing and Hadoop for Analytical processing. Its interface is like an old friend: the very SQL like HiveQL. This course will fill in all the gaps between SQL and what you need to use Hive. End-to-End: The course is an end-to-end guide for using Hive: whether you are analyst who wants to process data or an Engineer who needs to build custom functionality or optimize performance - everything you'll need is right here."--Resource description page
- Sapaty, Peter, author.
- Cham, Switzerland : Springer, 2017.
- Description
- Book — 1 online resource (xvii, 284 pages) : illustrations (some color) Digital: text file.PDF.
- Summary
-
- Chapter 1. Introduction
- Chapter 2. Some Theoretical Background
- Chapter 3. Spatial Grasp Model
- Chapter 4. SGL Detailed Specification
- Chapter 5. Main Spatial Mechanisms in SGL
- Chapter 6. SGL Networked Interpreter
- Chapter 7. Creation, Activation and Management of a Distributed World
- Chapter 8. Parallel and Distributed Network Operations
- Chapter 9. Solving Social Problems
- Chapter 10. Automated Command and Control
- Chapter 11. Collective Robotics
- Chapter 12. Conclusions.
- (source: Nielsen Book Data)
- Sapaty, Peter.
- Cham : Springer, 2021.
- Description
- Book — 1 online resource
- Summary
-
- Introduction
- Spatial Grasp Model and Technology
- Spatial Grasp Language (SGL)
- Symbiosis of Different Worlds in SGT
- Global Network Management under Spatial Grasp Paradigm
- Simulating Distributed and Global Consciousness under SGT
- Fighting Global Viruses under SGT
- Decision-Centric and Mosaic-Based Organizations under SGT
- Conclusions.
- (source: Nielsen Book Data)
- Oram, Andrew, author.
- First edition. - Sebastopol, CA : O'Reilly Media, 2019.
- Description
- Book — 1 online resource (1 volume) : illustrations
- Summary
-
Many modern programming languages include libraries to handle performance, multiple cores, structured data, errors or failures, and other tasks. Ballerina builds many of these tasks directly into the language. With this brief introduction, developers and software architects will learn how Ballerina can speed development and reduce failures in today's cloud native, distributed environments. Ballerina is a general-purpose cloud native programming language that specializes in integration. On the surface, it looks like many other C-style languages, but Ballerina also contains features that incorporate current best practices for web programming, microservices, and Agile- or DevOps-oriented development. In this report, O'Reilly editor Andy Oram helps you understand what Ballerina offers and how it solves modern development problems. You'll explore how: High-level Ballerina features make it easier to conduct network activities This language is designed around DevOps practices with an IDE-based build system Ballerina includes a module for continuous integration and testing Modules for deploying programs on Docker, Kubernetes, or AWS Lambda are included Ballerina also features compiler extensions, security, concurrency, and error checking.
- Keune, Nicholas Alan, author.
- First edition. - Sebastopol, CA : O'Reilly Media, [2019]
- Description
- Book — 1 online resource (1 volume)
- Summary
-
Miniservices provide a valuable middle ground between monoliths and microservices. As Nicholas Keune explains in this report, miniservices are suited for application landscapes involving data-intensive workloads that span monoliths and microservices or cross the traditional boundaries of a service context. Drawn from the work of many development teams, the report gives a model and language to data-centric system attributes so that they can be considered more proactively in the design discussion. Combining monolithic corporate or third-party systems with microservices requires a design pattern to balance both local and global aspects of the data lifecycle. The approach advocated here, called a data discourse, is both flexible and bounded by guiding principles that help bring data discussions into early architectural conversations. Using real-world experiences and use cases, the report focuses on three of the most commonly observed attributes in a miniservice: consistency, transactionality, and proximity. The examples illustrate how design discussions about data discourses lead to miniservice creation, and how miniservices help solve otherwise difficult architectural challenges. With this report, you'll learn: What miniservices are and how they offer solutions to challenges What data discourses are and how to use them How data discourses and miniservices help shift design discussions around data.
- Pasupuleti, Pradeep, author.
- Birmingham : Packt Publishing, 2015.
- Description
- Book — 1 online resource : illustrations
- Summary
-
- Cover; Copyright; Credits; About the Authors; Acknowledgement; About the Reviewer; www.PacktPub.com; Table of Contents; Preface
- Chapter 1: The Need for Data Lake; Before the Data Lake; Need for a Data Lake; Defining Data Lake; Key benefits of Data Lake; Challenges in implementing a Data Lake; When to go for a Data Lake implementation; Data Lake architecture; Architectural considerations; Architectural composition; Architectural details; Understanding Data Lake layers; Understanding Data Lake tiers; Summary
- Chapter 2: Data Intake; Understanding Intake tier zones; Source System Zone functionalities; Understanding connectivity processing; Understanding Intake Processing for data variety; Transient Landing Zone functionalities; File validation checks; Data Integrity checks; Raw Storage Zone functionalities; Data lineage processes; Deep Integrity checks; Security and governance; Information Lifecycle Management; Practical Data Ingestion scenarios; Architectural guidance; Structured data use cases; Semi-structured and Unstructured data use cases; Big Data tools and technologies; Ingestion of structured data; Ingestion of streaming data; Summary
- Chapter 3: Data Integration, Quality, and Enrichment; Introduction to the Data Management Tier; Understanding Data Integration; Introduction to Data Integration; Prominent features of Data Integration; Practical Data Integration scenarios; The workings of Data Integration; Raw data discovery; Data quality assessment; Data cleansing; Data transformations; Data enrichment; Collect Metadata and track data lineage; Traditional data integration versus Data Lake; Data pipelines; Data partitioning; Scale on demand; Data ingest parallelism; Extensibility; Big Data tools and technologies; Syncsort; Use case scenarios for Syncsort; Talend; Use case scenarios for Talend; Pentaho; Use case scenarios for Pentaho; Summary
- Chapter 4: Data Discovery and Consumption; Understanding the Data Consumption tier; Data Consumption - Traditional versus Data Lake; An introduction to Data Consumption; Practical Data Consumption scenarios; Data Discovery and metadata; Enabling Data Discovery; Data classification; Relation extraction; Indexing data; Performing Data Discovery; Semantic search; Faceted search; Fuzzy search; Data Provisioning and metadata; Data publication; Data subscription; Data Provisioning functionalities; Data formatting; Data selection; Data Provisioning approaches; Post-provisioning processes; Architectural guidance; Data discovery; Big Data tools and technologies; Data Provisioning; Big Data tools and technologies; Summary
- Chapter 5: Data Governance; Understanding Data Governance; Introduction to Data Governance; The need for Data Governance; Governing Big Data in the Data Lake; Data Governance - traditional versus Data Lake; Practical Data Governance scenarios; Data Governance components; Metadata management and lineage tracking; Data security and privacy
(source: Nielsen Book Data)
- Sapaty, Peter, author.
- United Kingdom : Emerald Publishing, 2019.
- Description
- Book — 1 online resource
- Summary
-
- Preface
- Acknowledgements
- Chapter 1. Introduction
- Chapter 2. World Security Areas, Bodies, and Measures
- Chapter 3. Spatial Grasp Model and Technology (SGT)
- Chapter 4. Spatial Grasp Language (SGL)
- Chapter 5. Security Related Management Examples under SGT
- Chapter 6. Networked Security Related Solutions
- Chapter 7. Managing Security by Spatial Control of Moving Objects
- Chapter 8. Investigating Nuclear War Dangers under SGT
- Chapter 9. Distributed Mosaic-Based Organizations
- Chapter 10. Conclusions.
- (source: Nielsen Book Data)
- Dubhashi, Dipa, author.
- Birmingham, UK : Packt Publishing, 2016.
- Description
- Book — 1 online resource : illustrations.
- Summary
-
The ultimate guide to managing, building, and deploying large-scale clusters with Apache Mesos About This Book * Master the architecture of Mesos and intelligently distribute your task across clusters of machines * Explore a wide range of tools and platforms that Mesos works with * This real-world comprehensive and robust tutorial will help you become an expert Who This Book Is For The book aims to serve DevOps engineers and system administrators who are familiar with the basics of managing a Linux system and its tools What You Will Learn * Understand the Mesos architecture * Manually spin up a Mesos cluster on a distributed infrastructure * Deploy a multi-node Mesos cluster using your favorite DevOps * See the nuts and bolts of scheduling, service discovery, failure handling, security, monitoring, and debugging in an enterprise-grade, production cluster deployment * Use Mesos to deploy big data frameworks, containerized applications, or even custom build your own applications effortlessly In Detail Apache Mesos is open source cluster management software that provides efficient resource isolations and resource sharing distributed applications or frameworks. This book will take you on a journey to enhance your knowledge from amateur to master level, showing you how to improve the efficiency, management, and development of Mesos clusters. The architecture is quite complex and this book will explore the difficulties and complexities of working with Mesos. We begin by introducing Mesos, explaining its architecture and functionality. Next, we provide a comprehensive overview of Mesos features and advanced topics such as high availability, fault tolerance, scaling, and efficiency. Furthermore, you will learn to set up multi-node Mesos clusters on private and public clouds. We will also introduce several Mesos-based scheduling and management frameworks or applications to enable the easy deployment, discovery, load balancing, and failure handling of long-running services. 
Next, you will find out how a Mesos cluster can be easily set up and monitored using the standard deployment and configuration management tools. This advanced guide will show you how to deploy important big data processing frameworks such as Hadoop, Spark, and Storm on Mesos and big data storage frameworks such as Cassandra, Elasticsearch, and Kafka. Style and approach This advanced guide provides a detailed step-by-step account of deploying a Mesos cluster. It will demystify the concepts behind Mesos.
(source: Nielsen Book Data)
14. Microservices development cookbook : design and build independently deployable, modular services [2018]
- Osman, Paul, author.
- Birmingham, UK : Packt Publishing, 2018.
- Description
- Book — 1 online resource (1 volume) : illustrations
- Summary
-
- Table of Contents: Breaking the Monolith; Edge Services; Interservice Communication; Client Patterns; Reliability Patterns; Data Modelling; Monitoring; Scaling; Continuous Integration & Delivery.
- (source: Nielsen Book Data)
- Saxena, Shilpi, author.
- Birmingham, UK : Packt Publishing, 2017.
- Description
- Book — 1 online resource (1 volume) : illustrations
- Summary
-
A practical guide to help you tackle different real-time data processing and analytics problems using the best tools for each scenario About This Book * Learn about the various challenges in real-time data processing and use the right tools to overcome them * This book covers popular tools and frameworks such as Spark, Flink, and Apache Storm to solve all your distributed processing problems * A practical guide filled with examples, tips, and tricks to help you perform efficient Big Data processing in real-time Who This Book Is For If you are a Java developer who would like to be equipped with all the tools required to devise an end-to-end practical solution on real-time data streaming, then this book is for you. Basic knowledge of real-time processing would be helpful, and knowing the fundamentals of Maven, Shell, and Eclipse would be great. What You Will Learn * Get an introduction to the established real-time stack * Understand the key integration of all the components * Get a thorough understanding of the basic building blocks for real-time solution designing * Garnish the search and visualization aspects for your real-time solution * Get conceptually and practically acquainted with real-time analytics * Be well equipped to apply the knowledge and create your own solutions In Detail With the rise of Big Data, there is an increasing need to process large amounts of data continuously, with a shorter turnaround time. Real-time data processing involves continuous input, processing and output of data, with the condition that the time required for processing is as short as possible. This book covers the majority of the existing and evolving open source technology stack for real-time processing and analytics. You will get to know about all the real-time solution aspects, from the source to the presentation to persistence. Through this practical book, you'll be equipped with a clear understanding of how to solve challenges on your own. 
We'll cover topics such as how to set up components, basic executions, integrations, advanced use cases, alerts, and monitoring. You'll be exposed to the popular tools used in real-time processing today such as Apache Spark, Apache Flink, and Storm. Finally, you will put your knowledge to practical use by implementing all of the techniques in the form of a practical, real-world use case. By the end of this book, you will have a solid understanding of all the aspects of real-time data processing and analytics, and will know how to deploy the solutions in production environments in the best possible manner. Style and Approach In this practical guide to real-time analytics, each chapter begins with a basic high-level concept of the topic, followed by a practical, hands-on implementation of each concept, where you can see the working and execution of it. The book is written in a DIY style, with plenty of practical use cases, well-explained code examples, and relevant screenshots and diagrams.
(source: Nielsen Book Data)
16. SAP HANA platform migration [2020]
- Quintero, Dino, author.
- First edition (March 2020). - Poughkeepsie, NY : IBM Corporation, IBM Redbooks, 2020.
- Description
- Book — 1 online resource (1 volume) : illustrations
- Mehrotra, Shrey, author.
- Birmingham, UK : Packt Publishing, 2019.
- Description
- Book — 1 online resource (1 volume) : illustrations
- Summary
-
- Table of Contents: Introduction to Apache Spark; Apache Spark Installation; Spark RDD; Spark DataFrame and Dataset; Spark Architecture and Application Execution Flow; Spark SQL; Spark Streaming, Machine Learning, and Graph Analysis; Spark Optimizations.
- (source: Nielsen Book Data)
- Gidley, Scott, author.
- First edition. - Sebastopol, CA : O'Reilly Media, 2019.
- Description
- Book — 1 online resource (1 volume) : illustrations
- Summary
-
Data is changing everything. Many industries today are being fundamentally transformed through the accumulation and analysis of large quantities of data, stored in diversified but flexible repositories known as data lakes. Whether your company has just begun to think about big data or has already initiated a strategy for handling it, this practical ebook shows you how to plan a successful data lake migration. You'll learn the value of data lakes, their structure, and the problems they attempt to solve. Using Zaloni's data lake maturity model, you'll then explore your organization's readiness for putting a data lake into action. Do you have the tools and data architectures to support big data analysis? Are your people and processes prepared? The data lake maturity model will help you rate your organization's readiness. This report includes: The structure and purpose of a data lake Descriptive, predictive, and prescriptive analytics Data lake curation, self-service, and the use of data lake zones How to rate your organization using the data lake maturity model A complete checklist to help you determine your strategic path forward.
- Abbasi, Muhammad Asif, author.
- Birmingham, UK : Packt Publishing, 2017.
- Description
- Book — 1 online resource (1 volume) : illustrations
- Summary
-
Learn about the fastest-growing open source project in the world, and find out how it revolutionizes big data analytics About This Book * Exclusive guide that covers how to get up and running with fast data processing using Apache Spark * Explore and exploit various possibilities with Apache Spark using real-world use cases in this book * Want to perform efficient data processing at real time? This book will be your one-stop solution. Who This Book Is For This guide appeals to big data engineers, analysts, architects, software engineers, even technical managers who need to perform efficient data processing on Hadoop at real time. Basic familiarity with Java or Scala will be helpful. The assumption is that readers will be from a mixed background, but would be typically people with background in engineering/data science with no prior Spark experience and want to understand how Spark can help them on their analytics journey. What You Will Learn * Get an overview of big data analytics and its importance for organizations and data professionals * Delve into Spark to see how it is different from existing processing platforms * Understand the intricacies of various file formats, and how to process them with Apache Spark. * Realize how to deploy Spark with YARN, MESOS or a Stand-alone cluster manager. * Learn the concepts of Spark SQL, SchemaRDD, Caching and working with Hive and Parquet file formats * Understand the architecture of Spark MLLib while discussing some of the off-the-shelf algorithms that come with Spark. * Introduce yourself to the deployment and usage of SparkR. * Walk through the importance of Graph computation and the graph processing systems available in the market * Check the real world example of Spark by building a recommendation engine with Spark using ALS. * Use a Telco data set, to predict customer churn using Random Forests. In Detail Spark juggernaut keeps on rolling and getting more and more momentum each day. 
Spark provides key capabilities in the form of Spark SQL, Spark Streaming, Spark ML and Graph X all accessible via Java, Scala, Python and R. Deploying the key capabilities is crucial whether it is on a Standalone framework or as a part of existing Hadoop installation and configuring with Yarn and Mesos. The next part of the journey after installation is using key components, APIs, Clustering, machine learning APIs, data pipelines, parallel programming. It is important to understand why each framework component is key, how widely it is being used, its stability and pertinent use cases. Once we understand the individual components, we will take a couple of real life advanced analytics examples such as 'Building a Recommendation system', 'Predicting customer churn' and so on. The objective of these real life examples is to give the reader confidence of using Spark for real-world problems. Style and approach With the help of practical examples and real-world use cases, this guide will take you from scratch to building efficient data applications using Apache Spark. You will learn all about this excellent data processing engine in a step-by-step manner, taking one aspect of it at a time. This highly practical guide will include how to work with data pipelines, dataframes, clustering, SparkSQL, parallel programming, and such insightful topics with the help of real-world use cases.
(source: Nielsen Book Data)