Course Overview

Design, build, and deploy containerized applications on Red Hat OpenShift

Red Hat OpenShift Developer II: Building Kubernetes Applications (DO288) teaches you how to design, build, and deploy containerized software applications on an OpenShift cluster.

Whether you are migrating existing applications or writing container-native applications, you will learn how to boost developer productivity powered by Red Hat® OpenShift Container Platform, a containerized application platform that allows enterprises to manage container deployments and scale their applications using Kubernetes.

The skills you learn in this course can be applied using all versions of Red Hat OpenShift, including Red Hat OpenShift on AWS (ROSA), Azure Red Hat OpenShift (ARO), and Red Hat OpenShift Container Platform.

This course is based on Red Hat OpenShift 4.12.

Note: This course is five days long; duration may vary by delivery. For full course details, scheduling, and pricing, select your location and then select “Get started” in the right-hand menu.

Course Objectives

  • Explore features for developers in the Red Hat OpenShift web console
  • Build and publish container images for Red Hat OpenShift
  • Manage container deployments on Red Hat OpenShift
  • Create and deploy multi-container applications on Red Hat OpenShift
  • Deploy multi-container applications by using Helm charts and Kustomize
  • Create health checks to monitor and improve application reliability
  • Create CI/CD workflows by using Red Hat OpenShift Pipelines

Course Content

Red Hat OpenShift Container Platform for Developers
Define the Red Hat OpenShift architecture, concepts, and terminology, and set up the developer environment.

Deploying Simple Applications
Deploy simple applications by using the Red Hat OpenShift web console and command-line tools.

Building and Publishing Container Images
Build, deploy, and manage the lifecycle of container images by using a container registry.

Managing Red Hat OpenShift Builds
Describe the Red Hat OpenShift build process and build container images.

Managing Red Hat OpenShift Deployments
Describe the different Red Hat OpenShift deployment strategies and how to monitor the health of applications.

Deploying Multi-container Applications
Deploy multi-container applications by using Red Hat OpenShift templates, Helm charts, and Kustomize.

Continuous Deployment using Red Hat OpenShift Pipelines
Implement CI/CD workflows by using Red Hat OpenShift Pipelines.
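Red Hat OpenShift Pipelines is based on the open source Tekton project, so a CI/CD workflow is declared as Kubernetes resources. As a minimal sketch of the idea (the pipeline name, image path, and task wiring below are hypothetical illustrations, not course material):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: app-pipeline            # hypothetical name
spec:
  tasks:
    - name: build-image
      taskRef:
        name: buildah           # ClusterTask shipped with OpenShift Pipelines
        kind: ClusterTask
      params:
        - name: IMAGE
          value: image-registry.openshift-image-registry.svc:5000/myapp/myapp
    - name: deploy
      runAfter: [build-image]
      taskRef:
        name: openshift-client  # runs oc commands against the cluster
        kind: ClusterTask
```

Each task runs in its own pod, and `runAfter` orders the graph, which is the basic shape of the workflows this module covers.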

Note: Course outline is subject to change with technology advances and as the nature of the underlying job evolves. For questions or confirmation on a specific objective or topic, contact one of our training specialists.

Course Overview

Course description

Plan, implement, and manage OpenShift clusters at scale

Red Hat OpenShift Administration III: Scaling Kubernetes Deployments in the Enterprise (DO380) expands upon the skills required to plan, implement, and manage OpenShift® clusters in the enterprise. You will learn how to support a growing number of stakeholders, applications, and users to achieve large-scale deployments.

This course is based on Red Hat® OpenShift Container Platform 4.10.

Note: This course is five days long; duration may vary by delivery. For full course details, scheduling, and pricing, select your location and then select “Get started” in the right-hand menu.

Course summary

– Manage OpenShift cluster operators and install additional operators.

– Automate OpenShift management tasks using Ansible® playbooks.

– Create and schedule cluster administration jobs.

– Implement GitOps workflows using Jenkins.

– Integrate OpenShift with enterprise authentication.

– Query and visualize cluster-wide logs, metrics, and alerts.

– Manage both shared, file-based storage and non-shared, block-based storage.

– Manage machine pools and machine configurations.

Course Objectives

This course builds upon the essential skills required to configure and manage an OpenShift 4.x cluster, teaching the enhanced skills needed to operate production environments at scale, including:

  • Automating Day 2 tasks to establish production clusters with higher performance and availability.
  • Integrating OpenShift with enterprise authentication, storage, CI/CD, and GitOps systems to improve the productivity of IT operations and compliance with the organization’s standards.
  • Applying troubleshooting techniques to identify issues with cluster operators and compute capacity.

Course Content

Move from Kubernetes to OpenShift

Demonstrate that OpenShift is Kubernetes by deploying Kubernetes-native applications on OpenShift.


Introduce automation on OpenShift

Automate OpenShift administration tasks using bash scripts and Ansible playbooks.


Manage operators with OpenShift

Deploy Kubernetes Operators and configure OpenShift cluster operators.


Implement GitOps with Jenkins

Implement a GitOps workflow using containerized Jenkins to administer an OpenShift cluster.


Configure enterprise authentication

Integrate OpenShift with enterprise identity providers.


Configure trusted TLS certificates

Configure OpenShift with trusted TLS certificates for external access to cluster services and applications.


Configure dedicated node pools

Configure a subset of the cluster nodes for special workloads.


Configure persistent storage

Configure storage providers and storage classes to ensure cluster user access to persistent storage.


Manage cluster monitoring and metrics

Configure and manage the OpenShift monitoring stack.


Provision and inspect cluster logging

Deploy, query, and troubleshoot cluster-wide logging.


Recover failed worker nodes

Inspect, troubleshoot, and remediate worker nodes in a variety of failure scenarios.


Note: Course outline is subject to change with technology advances and as the nature of the underlying job evolves. For questions or confirmation on a specific objective or topic, contact one of our Red Hatters.

Course Overview

Deploy, manage, and troubleshoot containerized applications running as Kubernetes workloads in OpenShift clusters.

Course Description

Red Hat OpenShift Administration I: Managing Containers and Kubernetes (DO180) prepares OpenShift cluster administrators to manage Kubernetes workloads and to collaborate with developers, DevOps engineers, system administrators, and SREs to ensure the availability of application workloads. This course focuses on managing typical end-user applications that are often accessible from a web or mobile UI and that represent most cloud-native and containerized workloads. Managing applications also includes deploying and updating their dependencies, such as databases, messaging, and authentication systems.

The skills that you learn in this course apply to all versions of OpenShift, including Red Hat OpenShift on AWS (ROSA), Azure Red Hat OpenShift, and OpenShift Container Platform.

This course is based on Red Hat OpenShift 4.12.

Note: This course is five days long; duration may vary by delivery. For full course details, scheduling, and pricing, select your location and then select “Get started” in the right-hand menu.

Course Content Summary

– Managing OpenShift clusters from the command-line interface and from the web console.

– Troubleshooting network connectivity between applications inside and outside an OpenShift cluster.

– Connecting Kubernetes workloads to storage for application data.

– Configuring Kubernetes workloads for high availability and reliability.

– Managing updates to container images, settings, and Kubernetes manifests of an application.

Course Objectives

Impact on the Organization

This course is intended to develop the skills needed to manage Red Hat OpenShift clusters and support containerized applications that are highly available, resilient, and scalable. Red Hat OpenShift is an enterprise-hardened application platform based on Kubernetes that provides a common set of APIs and abstractions that enable application portability across cloud providers and traditional data centers. Red Hat OpenShift adds consistency and portability of operational processes across these environments and can also be deployed as a managed service. An external SRE team shares the responsibility of managing Red Hat OpenShift clusters with a customer’s IT operations team when using a managed OpenShift offering such as Red Hat OpenShift on AWS (ROSA) or Azure Red Hat OpenShift.


Impact on the Individual

As a result of attending this course, students will understand the architecture of Red Hat OpenShift clusters and of Kubernetes applications, and will be able to deploy, manage, and troubleshoot applications on OpenShift. Students will also be able to identify and escalate application and infrastructure issues to development teams, operations teams, and IT vendors.

Course Content

Introduction to Kubernetes and OpenShift

Identify the main Kubernetes cluster services and OpenShift platform services, and monitor them from the web console.


Kubernetes and OpenShift Command-Line Interfaces and APIs

Access an OpenShift cluster from the command line, and query its Kubernetes API resources to assess the health of a cluster.


Run Applications as Containers and Pods

Run and troubleshoot containerized applications as unmanaged Kubernetes pods.


Deploy Managed and Networked Applications on Kubernetes

Deploy applications and expose them to network access from inside and outside a Kubernetes cluster.


Manage Storage for Application Configuration and Data

Externalize application configurations in Kubernetes resources, and provision storage volumes for persistent data files.


Configure Applications for Reliability

Configure applications to work with Kubernetes for high availability and resilience.


Manage Application Updates

Manage reproducible application updates and rollbacks of code and configurations.

Course Overview

Build essential skills to implement agile and DevOps development processes and workflows.

DevOps practices have enabled organizations to undergo a digital transformation, moving from a monolithic waterfall approach to a rapidly deploying cloud-based agile process. This transformation requires a team of developers trained to use tools that enable them to spend more time coding and testing and less time troubleshooting. Red Hat DevOps Pipelines and Processes: CI/CD with Jenkins, Git, and Test-Driven Development (TDD) is a practical introduction to DevOps for developers that teaches students the necessary skills and technologies for automated building and deploying of cloud-native applications.

Course Objectives

  • Manage version control with Git
  • Build and execute Jenkins pipelines
  • Implement release strategies
  • Build applications with test-driven development (TDD)
  • Perform security scanning and code analysis of applications
  • Monitor applications and pipelines
  • Consume and troubleshoot pipelines

Course Content

Introduction to continuous integration and continuous deployment (CI/CD)
Describe the principles of DevOps and the role of Jenkins.

Integrate source code with version control
Manage source code changes with Git version control.

Test applications
Describe the foundational principles behind comprehensive application testing and implement unit, integration, and functional testing.

Build applications with test-driven development
Implement and build application features with TDD.
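As a hedged illustration of the red-green TDD cycle this module describes (the `cart_total` function and its tests are hypothetical examples, not taken from the course): write a failing test first, then the simplest code that makes it pass.

```python
import unittest

# Step 1 ("red"): write the tests first, before cart_total exists;
# running them at that point fails with a NameError.
class TestCartTotal(unittest.TestCase):
    def test_total_sums_item_prices(self):
        self.assertEqual(cart_total([2.50, 1.25]), 3.75)

    def test_total_of_empty_cart_is_zero(self):
        self.assertEqual(cart_total([]), 0)

# Step 2 ("green"): write the simplest code that makes both tests pass.
def cart_total(prices):
    return sum(prices)
```

Run the tests with `python -m unittest` after each step; the third TDD step ("refactor") then improves the code while the tests stay green.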

Author pipelines
Create basic pipelines to run Jenkins jobs.

Deploy applications with pipelines
Safely and automatically deploy applications to Red Hat OpenShift Container Platform.

Implement pipeline security and monitoring
Manage the security and monitor the performance of pipelines.

Consume pipelines
Use and troubleshoot CI/CD pipelines for automated deployment and automated testing.

Course Overview

This course takes a hands-on approach to developing an application and creating a quality product using DevOps with Azure. It provides a blend of real-world examples and hands-on exercises to help you learn key concepts and techniques.

Learn everything you need to get started with DevOps on Microsoft Azure, including automation, testing, development, and the provisioning of services. You’ll learn all about the practical aspects of DevOps by understanding how different teams (such as development, QA, cloud, and build engineers) collaborate to develop an application and create high-quality products with Azure. Streamline your software development lifecycle with Microsoft Azure’s integrated cloud tools and resources.

Course Objectives

The course begins by giving you an overview of PaaS and aPaaS. You’ll also learn about Visual Studio Team Services (VSTS) and its integration with Eclipse IDE. You’ll see how to configure the application code for automated compilation and run a unit test.

As you progress, you’ll explore continuous development with Microsoft Azure Web Apps by learning to create different environments for deploying web applications. You’ll also explore the difference between Azure Web Apps and Azure App Service Environments. Next, you’ll gain insight into end-to-end automation for deploying an application in PaaS. When you complete this course, you will feel confident and excited to apply your skills in real-life business scenarios.

After completing this course, you will be able to:

  • Explore the features of PaaS and aPaaS in DevOps
  • Use Visual Studio Team Services (VSTS) to manage code versions
  • Understand and configure continuous integration in VSTS
  • Build different environments for continuously deploying an application
  • Configure role-based access to enable secure access for Azure Web Apps
  • Execute an end-to-end automation process
  • Test an app for performance using JMeter
  • Create and configure Traffic Manager with endpoints
  • Understand disaster recovery and high availability of Azure Web Apps

Course Content

Lesson 1: Visual Studio Team Services Fundamentals

  • Overview of Visual Studio Team Services (VSTS)
  • Integrating VSTS with Visual Studio IDE
  • Managing Code Using VSTS and Visual Studio

Lesson 2: Microsoft Azure Fundamentals

  • What is Cloud Computing?
  • Azure Web Apps
  • Azure Data and Storage
  • Azure Web App Key Concepts

Lesson 3: Agile with Visual Studio Team Services

  • Introducing Agile in VSTS
  • Working with Kanban Boards

Lesson 4: Continuous Integration with VSTS

  • Overview of Continuous Integration in VSTS
  • Customizing Your CI Build

Lesson 5: Continuous Deployment with VSTS

  • An Overview of Continuous Deployment in VSTS
  • Extending the Release Definition

Lesson 6: Continuous Monitoring with VSTS

  • Performance Testing Using VSTS
  • Azure Web Apps Troubleshooting

Course Overview

If you are an experienced Nutanix administrator, this course will serve as a deep dive that gives you a rich, nuanced understanding of the Nutanix platform, and will help you get the most out of your Nutanix solutions. AAPM is divided into six major sections, each focused on performance improvements and advanced administration techniques for different aspects of your clusters:

– Storage: Take a deep dive into AOS storage services, different aspects of Acropolis Distributed Storage, storage optimization, and storage best practices for application workloads.

– Networks: Learn how to optimize physical and virtual workloads, as well as how to implement Flow Virtual Networking and Virtual Private Clouds (VPCs).

– VMs: Learn about sizing the CVM and Prism Central VMs, alternate methods of VM provisioning (such as via CLI), how to work with GPUs, and how to improve VM storage and network performance.

– Security: Understand important features such as authentication, RBAC, IAM, and encryption. Learn how to use essential security products, such as Flow Security Central and Flow Network Security.

– Analyzing Problems: Explore ways to monitor and identify health issues, network performance, VM performance, and cluster performance.

– Business Continuity and Disaster Recovery: Learn about Nutanix data backup, web-scale data protection, protection from ransomware, self-service restore, and third-party integrations. You will also learn how to use protection domains and Nutanix Leap for disaster recovery.

Course Content

1: Exploring Nutanix Storage Features

  • Understanding Nutanix AOS Services and AOS Storage Services
  • Exploring Storage Components
  • AOS Storage Data Pathing

Hands-on Labs for Section 1

  • Creating a Storage Container
  • Updating Reported Capacity

2: Creating a Highly Available, Performant, and Resilient Storage Layer

  • Creating Highly Available, Resilient Infrastructure
  • Storage Optimization and Data Efficiency
  • Optimizing and Planning for New Workloads
  • Storage Best Practices for Application Workloads

Hands-on Labs for Section 2

  • Observing Nutanix Cloning Efficiency
  • Reserving Rebuild Capacity in AHV
  • Observing the Rebuild Process
  • Disabling Rebuild Capacity Reservation
  • Creating a Storage Container with Deduplication Enabled
  • Reviewing Deduplication Savings
  • Enabling Replication Factor 1 and Creating a Storage Container

3: Optimizing Physical and Virtual Networks in AOS

  • Optimizing Physical & Virtual Networks
  • Best Practices

Hands-on Labs for Section 3

  • Managing Virtual Switches and Uplinks
  • Viewing Virtual Switches from Prism Element
  • Configuring CVM Network Segmentation
  • Configuring QoS Traffic Marking

4: Optimizing Overlay Networks Using Flow Networking

  • Optimizing Physical & Virtual Networks
  • Implementing Flow Networking
  • Implementing VPCs
  • Overlay Network Use Cases

Hands-on Labs for Section 4

  • Enabling Flow Networking
  • Creating an External Subnet
  • Creating a VPC
  • Creating VMs using the Overlay Subnets
  • Configuring Local and Remote Gateways
  • Establishing a VPN Connection
  • Verifying VPN Connectivity

5: Optimizing VM Performance

  • Sizing the CVM & Prism Central
  • Alternate Methods of Provisioning User VMs
  • Working with GPUs in AHV
  • Improving VM Storage and Network Performance

Hands-on Labs for Section 5

  • Creating VMs with the REST API
  • Configuring VirtIO Multi-Queue
  • Configuring Volumes Block Storage

6: Analyzing Nutanix Cluster Security Options

  • Nutanix Security Technologies
  • User Authentication and Permissions
  • Hardening AHV and the CVM
  • Using Flow Network Security & Flow Security Central
  • Data Encryption with Nutanix
  • Managing Log Files

Hands-on Labs for Section 6

  • Configuring Cluster Lockdown
  • Replacing Default SSL Certificates
  • Configuring Syslog Integration
  • Managing User Permissions

7: Microsegmentation with Flow Network Security

  • Flow Policy Constructs
  • Security Policy Models and Types
  • Enabling Microsegmentation
  • Creating and Applying Policies

Hands-on Labs for Section 7

  • Enabling Flow Microsegmentation
  • Creating Categories
  • Creating VMs and Assigning Categories
  • Configuring Isolation and Application Security Policies

8: Analyzing Problems in Nutanix Clusters

  • Evaluating Cluster Health
  • Network Packet Capture and Inspection
  • Acropolis Service Failures
  • Ensuring Efficient Physical Resource Consumption with Machine Learning
  • Application Monitoring and Discovery
  • Monitoring Performance

Hands-on Labs for Section 8

  • Creating a Prism Central Performance Monitoring Dashboard
  • Creating Charts to Analyze Metrics Using Prism Central
  • Creating Charts to Analyze Entities Using Prism Element

9: Business Continuity

  • Assessing Business Continuity and Disaster Recovery
  • High Availability and Data Protection
  • Third Party Backup Integrations
  • Best Practices

Hands-on Labs for Section 9

  • Configuring Self Service Restore

10: Implementing Disaster Recovery

  • Replicating Data with AOS
  • Disaster Recovery Orchestration
  • Disaster Recovery with Protection Domains
  • Getting Started with Nutanix Leap
  • Protecting Against Ransomware

Hands-on Labs for Section 10

  • Enabling Nutanix Leap
  • Configuring an Availability Zone
  • Configuring a Protection Policy
  • Creating Production and Test VLANs
  • Preparing VMs for Nutanix Leap
  • Configuring a Recovery Plan
  • Performing Test and Planned Failover

Course Overview

Learn how to use Red Hat Ansible Automation for Networking to remotely automate configuration of network devices, test and validate the current network state, and perform compliance checks to detect and correct configuration drift.

Course Objectives

Deploy Ansible
Install Ansible and create Ansible inventories.

Run commands and plays
Execute ad hoc commands and prepare Ansible playbooks.

Parameterize Ansible
Control tasks with loops and conditions.

Administer Ansible
Safeguard information with Ansible Vault and manage inventories.

Automate simple network operations
Gather network information with Ansible and configure network devices.

Automate complex operations
Solve new MACD challenges and overcome real-world obstacles.
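The ad hoc commands and playbooks covered above can be sketched as follows. This is a hedged illustration only: it assumes Cisco IOS devices in a hypothetical inventory group named `routers` and the `cisco.ios` collection installed; the course's actual labs may use different platforms and modules.

```yaml
---
- name: Back up router configurations      # hypothetical play
  hosts: routers                           # hypothetical inventory group
  gather_facts: false
  tasks:
    - name: Gather device facts
      cisco.ios.ios_facts:

    - name: Save a backup of the running config
      cisco.ios.ios_config:
        backup: yes
```

A comparable ad hoc command would be `ansible routers -m cisco.ios.ios_facts`.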

Note: Course outline is subject to change with technology advances and as the nature of the underlying job evolves. For questions or confirmation on a specific objective or topic, please contact us.

Course Content

  • Install and configure Red Hat Ansible Automation for Networking on a management system
  • Use Ansible to run ad hoc commands and playbooks to automate tasks
  • Write effective Ansible playbooks for network automation
  • Gather information about network infrastructure configuration and backup
  • Automate specific network administration use cases, including configuration of routers and switches, ports, VLANs, SNMP monitoring, and routing protocols
  • Use Ansible playbooks to target devices from various hardware vendors, including Cisco, Juniper, and Arista

Course Overview

This course follows instructional design principles across all three lessons, ensuring that you repeat and reinforce your knowledge at every step. Every minute of this one-day course incrementally takes you to the next level.

Course Objectives

To ensure that your container-based applications sail into production without hiccups, you need robust container orchestration. This course teaches you the art of container management with Kubernetes.

The course provides the knowledge needed to:

  • Understand and classify software design patterns according to the cloud-native paradigm
  • Apply best practices in Kubernetes with design patterns
  • Access the Kubernetes API programmatically using client libraries
  • Extend Kubernetes with custom resources and controllers
  • Integrate access control mechanisms and interact with the resource lifecycle in Kubernetes
  • Develop and run custom schedulers in Kubernetes
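To make the "access the Kubernetes API programmatically" objective concrete without requiring a live cluster, the sketch below builds, in plain Python, the same apps/v1 Deployment manifest structure that a client library serializes and submits to the API server. The names and image are hypothetical placeholders.

```python
import json

def make_deployment(name, image, replicas=1):
    """Build a Kubernetes apps/v1 Deployment manifest as a plain dict.

    A client library serializes an equivalent structure and POSTs it to
    /apis/apps/v1/namespaces/{namespace}/deployments on the API server.
    """
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            # The selector must match the Pod template labels.
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

# "web" and "nginx:1.25" are placeholder values.
manifest = make_deployment("web", "nginx:1.25", replicas=3)
body = json.dumps(manifest)  # the JSON body a client would send
```

Client libraries wrap exactly this request/response cycle in typed objects, which is what the lesson on official and community-maintained clients explores.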

Course Content

LESSON 1: KUBERNETES DESIGN PATTERNS

  • Software Design Patterns
  • Kubernetes Design Patterns

LESSON 2: KUBERNETES CLIENT LIBRARIES

  • Accessing Kubernetes API
  • Official Client Libraries
  • Community Maintained Client Libraries

LESSON 3: KUBERNETES EXTENSIONS

  • Kubernetes Extension Points
  • Extending Kubernetes Clients
  • Extending Kubernetes API
  • Kubernetes Dynamic Admission Control
  • Extending Kubernetes Scheduler
  • Extending Kubernetes Infrastructure

Course Overview

The Implementing DevOps Solutions and Practices Using Cisco Platforms course teaches you how to automate application deployment, enable automated configuration, enhance management and improve scalability of cloud microservices and infrastructure processes on Cisco® platforms. Learn to integrate Docker and Kubernetes to create advanced capabilities and flexibility in application deployment.

Course Objectives

After completing this course, you should be able to:

  • Describe the DevOps philosophy and practices, and how they apply to real-life challenges
  • Explain container-based architectures and available tooling provided by Docker
  • Describe application packaging into containers and start building secure container images
  • Utilize container networking and deploy a three-tier network application
  • Explain the concepts of configuration item (CI) pipelines and what tooling is available
  • Implement a basic pipeline with GitLab CI that builds and deploys applications
  • Implement automated build testing and validation
  • Describe DevOps principles applied to infrastructure
  • Implement on-demand test environments and explain how to integrate them with an existing pipeline
  • Implement tooling for metric and log collection, analysis, and alerting
  • Describe the benefits of application health monitoring, telemetry, and chaos engineering in the context of improving the stability and reliability of the ecosystem
  • Describe how to implement secure DevOps workflows by safely handling sensitive data and validating applications
  • Explain design and operational concepts related to using a mix of public and private cloud deployments
  • Describe modern application design and microservices architectures
  • Describe the building blocks of Kubernetes and how to use its APIs to deploy an application
  • Explain advanced Kubernetes deployment patterns and implement an automated pipeline
  • Explain how monitoring, logging, and visibility concepts apply to Kubernetes

Course Content

Introducing the DevOps Model

  • DevOps Philosophy
  • DevOps Practices

Introducing Containers

  • Container-Based Architectures
  • Linux Containers
  • Docker Overview
  • Docker Commands

Packaging an Application Using Docker

  • Dockerfiles
  • Golden Images
  • Safe Processing Practices

Deploying a Multitier Application

  • Linux Networking
  • Docker Networking
  • Docker Compose

Introducing CI/CD

  • Continuous Integration
  • CI Tools
  • DevOps Pipelines

Building the DevOps Flow

  • GitLab Overview
  • GitLab CI Overview
  • Continuous Delivery with GitLab

Validating the Application Build Process

  • Automated Testing in the CI Flow

Building an Improved Deployment Flow

  • Post-Deployment Validation
  • Release Deployment Strategies

Extending DevOps Practices to the Entire Infrastructure

  • Introduction to NetDevOps
  • Infrastructure as Code

Implementing On-Demand Test Environments at the Infrastructure Level

  • Configuration Management Tools
  • Terraform Overview
  • Ansible Overview
  • Ansible Inventory File
  • Use the Cisco IOS Core Configuration Module
  • Jinja2 and Ansible Templates
  • Basic Jinja2 with YAML
  • Configuration Templating with Ansible

Monitoring in NetDevOps

  • Introduction to Monitoring, Metrics and Logs
  • Introduction to Elasticsearch, Beats and Kibana
  • Introduction to Prometheus and Instrumenting Python Code for Observability

Engineering for Visibility and Stability

  • Application Health and Performance
  • AppDynamics Overview
  • Chaos Engineering Principles

Securing DevOps Workflows

  • DevSecOps Overview
  • Application Security in the CI/CD Pipeline
  • Infrastructure Security in the CI/CD Pipeline

Exploring Multicloud Strategies

  • Application Deployment to Multiple Environments
  • Public Cloud Terminology Primer
  • Tracking and Projecting Public Cloud Costs
  • High Availability and Disaster Recovery Design Considerations
  • IaC for Repeatable Public Cloud Consumption
  • Cloud Services Strategy Comparison

Examining Application and Deployment Architectures

  • The Twelve-Factor Application
  • Microservices Architectures

Describing Kubernetes

  • Kubernetes Concepts: Nodes, Pods and Clusters
  • Kubernetes Concepts: Storage
  • Kubernetes Concepts: Networking
  • Kubernetes Concepts: Security
  • Kubernetes API Overview

Integrating Multiple Data Center Deployments with Kubernetes

  • Kubernetes Deployment Patterns
  • Kubernetes Failure Scenarios
  • Kubernetes Load-Balancing Techniques
  • Kubernetes Namespaces
  • Kubernetes Deployment via CI/CD Pipelines

Monitoring and Logging in Kubernetes

  • Kubernetes Resource Metrics Pipeline
  • Kubernetes Full Metrics Pipeline and Logging

Labs:

  • Interact with GitLab Continuous Integration
  • Explore Docker Command-Line Tools
  • Package and Run a WebApp Container
  • Build and Deploy Multiple Containers to Create a Three-Tier Application
  • Explore Docker Networking
  • Build and Deploy an Application Using Docker Compose
  • Implement a Pipeline in GitLab CI
  • Automate the Deployment of an Application
  • Validate the Application Build Process
  • Validate the Deployment and Fix the Infrastructure
  • Build a YAML IaC Specification for the Test Environment
  • Manage On-Demand Test Environments with Terraform
  • Build Ansible Playbooks to Manage Infrastructure
  • Integrate the Testing Environment in the CI/CD Pipeline
  • Implement Pre-Deployment Health Checks
  • Set Up Logging for the Application Servers and Visualize with Kibana
  • Create System Dashboard Focused on Metrics
  • Use Alerts Through Kibana
  • Instrument Application Monitoring
  • Use Alerts and Thresholds to Notify Webhook Listener and Webex Teams Room
  • Secure Infrastructure in the CI/CD Pipeline
  • Explore Kubernetes Setup and Deploy an Application
  • Explore and Modify a Kubernetes CI/CD Pipeline
  • Kubernetes Monitoring and Metrics – ELK

Course Overview

This five-hour class equips you to containerize workloads in Docker containers, deploy them to Kubernetes clusters provided by Google Kubernetes Engine, and scale those workloads to handle increased traffic. Students also learn how to continuously deploy new code in a Kubernetes cluster to provide application updates.


Virtual Learning

This interactive training can be taken from any location, whether your office or your home, and is delivered by a live trainer. No delegates share a classroom with the instructor; all delegates are connected virtually. Because virtual delegates do not travel to this course, Global Knowledge will send you all the information you need before the course starts so that you can test your login.

Course Objectives

At the end of the course, you will be able to:

  • Understand container basics.
  • Containerize an existing application.
  • Understand Kubernetes concepts and principles.
  • Deploy applications to Kubernetes using the CLI.
  • Set up a continuous delivery pipeline using Jenkins.

Course Content

This course includes presentations and hands-on labs.

Module 1: Introduction to Containers and Docker

Acquaint yourself with containers, Docker, and the Google Container Registry.

  • Create a container.
  • Package a container using Docker.
  • Store a container image in Google Container Registry.
  • Launch a Docker container.

Module 2: Kubernetes Basics

Deploy an application with microservices in a Kubernetes cluster.

  • Provision a complete Kubernetes cluster using Kubernetes Engine.
  • Deploy and manage Docker containers using kubectl.
  • Break an application into microservices using Kubernetes Deployments and Services.

Module 3: Deploying to Kubernetes

Create and manage Kubernetes deployments.

  • Create a Kubernetes deployment.
  • Trigger, pause, resume, and roll back updates.
  • Understand and build canary deployments.
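A canary deployment in Kubernetes is commonly built by running a second Deployment whose Pods carry the same Service selector label as the stable version, so a small fraction of traffic reaches the new release. A minimal sketch with hypothetical names:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-canary               # hypothetical name
spec:
  replicas: 1                    # 1 canary vs. e.g. 9 stable ≈ 10% of traffic
  selector:
    matchLabels:
      app: web
      track: canary
  template:
    metadata:
      labels:
        app: web                 # matched by the shared Service selector
        track: canary
    spec:
      containers:
        - name: web
          image: gcr.io/myproject/web:v2   # hypothetical new version
```

Because the Service selects only `app: web`, it load-balances across both Deployments; the extra `track` label lets you scale or delete the canary independently.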

Module 4: Continuous Deployment with Jenkins

Build a continuous delivery pipeline.

  • Provision Jenkins in your Kubernetes cluster.
  • Create a Jenkins pipeline.
  • Implement a canary deployment using Jenkins.