seed: example course content

ALE Seeder 2026-01-01 00:00:00 +00:00
commit 9c6ce01b67
30 changed files with 2119 additions and 0 deletions

COURSE.md
---
id: devops-ale-example
title: "Continuous Integration & Delivery — DevOps (ALE Example)"
year: 2023
language: en
instructors:
- "Assoc. Prof. Panche Ribarski, PhD"
- "Assoc. Prof. Milos Jovanovik, PhD"
description: >
Example DevOps course content repo for ALE Lite v0.1. This repository hosts learning materials
and mode-aware activities as Markdown files. Instructors sync content into ALE explicitly using the
“Sync Course Content” action in the VS Code extension.
---
# Continuous Integration & Delivery — DevOps (ALE Example)
This repository is a **course content repo** for ALE Lite v0.1.
## What is in this repo?
- `materials/` — lecture materials converted to Markdown (chunked by headings for referencing).
- `activities/` — scored activities with behavior controlled by `mode` (`understanding`, `lab`, `homework`, `test`).
- `course.yml` — course metadata (machine-friendly; optional in v0.1).
- `COURSE.md` — this file (human-friendly course metadata and orientation).
## How ALE uses this repo
1. An instructor links the repo URL + branch in ALE.
2. The instructor clicks **Sync Course Content**.
3. ALE ingests `materials/` and `activities/`.
4. When new activities are detected, ALE creates corresponding grade activities in Moodle (dev/local uses moodle-mock).
## Authoring rules (v0.1)
- Each `materials/*.md` file must have YAML front matter: `id`, `title`.
- Each `activities/*.md` file must have YAML front matter: scheduling + grading fields, including `mode`.
- Activity tasks use the **multi-line block format** (Type, Points, Prompt, Refs, Rubric, Runner).
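A minimal task block in this format might look like the sketch below (the task text and point split are illustrative, the optional Runner line is omitted, and the ref id is taken from this repo's materials):

```markdown
## T1
Type: essay
Points: 10
Prompt: In one or two sentences, explain why container orchestration is needed.
Refs:
- mat-01-kubernetes-primer#kubernetes-background
Rubric:
- Identifies the problem of managing many containers manually (6)
- Names at least one orchestration capability (scaling, self-healing, rollouts) (4)
```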

README.md
# DevOps (ALE Example Course)
This is an example course repo for ALE Lite activity mode.
## Contents
- `course.yml` — course metadata (documentation / optional for v0.1)
- `materials/` — markdown materials (chunked by headings for refs)
  - `01..05` — original DevOps/Kubernetes fundamentals materials
  - `06-kubernetes-intro.md` — Kubernetes intro material set, integrated from generated content
  - `07-git.md` — Git ALE Lite material set, integrated from generated content
- `activities/` — mode-aware scored activities
- `01-kubernetes-understanding.md` (`mode: understanding`)
- `02-working-with-pods-lab.md` (`mode: lab`)
- `03-kubernetes-fundamentals-test.md` (`mode: test`)
- `04-kubernetes-intro-understanding.md` (`mode: understanding`)
- `05-kubernetes-intro-lab.md` (`mode: lab`)
- `06-kubernetes-intro-homework.md` (`mode: homework`)
- `07-kubernetes-intro-test.md` (`mode: test`)
- `08-git-understanding.md` (`mode: understanding`)
- `09-git-lab.md` (`mode: lab`)
- `10-git-homework.md` (`mode: homework`)
- `11-git-test.md` (`mode: test`)
## Source notes
- Activity content was adapted from the prior unit/test authoring set and normalized to unified task blocks (`T1`, `T2`, ...).
- Additional Kubernetes intro material/activity set was integrated from generated content and remapped to DevOps-local IDs.
- Additional Git ALE Lite material/activity set was integrated from generated content and remapped to DevOps-local IDs.

---
id: act-01-kubernetes-understanding
title: "Unit 01 — Kubernetes Primer"
mode: understanding
open_at: "2026-02-01T00:00:00+01:00"
close_at: "2026-12-31T23:59:59+01:00"
retakes_enabled: true
max_attempts: 999
grade_max: 100
---
# Unit 01 — Kubernetes Primer
## T1
Type: essay
Points: 25
Prompt: In 2-3 sentences, explain what Kubernetes is.
Refs:
- mat-01-kubernetes-primer#kubernetes-background
Rubric:
- Correctly defines Kubernetes as a container orchestration platform (15)
- Mentions at least one practical benefit (scaling, self-healing, or rollout support) (10)
## T2
Type: single_choice
Points: 25
Prompt: Which Kubernetes control plane component stores cluster state? A) kube-scheduler B) etcd C) kube-proxy. Write only one letter.
Refs:
- mat-01-kubernetes-primer#control-plane
Rubric:
- Selects the correct option B (etcd) (20)
- Provides a brief one-sentence justification (5)
## T3
Type: multiple_choice
Points: 25
Prompt: Select all statements that are correct about Kubernetes. A) Kubernetes can automatically scale workloads. B) Services provide stable endpoints even when Pod IPs change. C) Deployments support rolling updates and rollbacks. D) kube-proxy stores cluster state. Select all that apply. Write letters only (example: A, B, C).
Refs:
- mat-01-kubernetes-primer#control-plane
- mat-02-k8s-objects-and-getting-started#services
- mat-02-k8s-objects-and-getting-started#deployments-controllers
Rubric:
- Selects A, B, and C (21)
- Excludes D (4)
## T4
Type: file
Points: 25
Prompt: Create @solutions/unit01-file-answer.md with a short mini-report that includes (a) what Kubernetes is, (b) the correct single-choice answer from T2, and (c) the correct multiple-choice selections from T3 with one-sentence rationale.
Refs:
- mat-01-kubernetes-primer#kubernetes-background
- mat-01-kubernetes-primer#control-plane
- mat-02-k8s-objects-and-getting-started#services
Rubric:
- File exists at requested path and is readable (8)
- Includes section (a), section (b), and section (c) content (12)
- Content is clear and concise (5)

---
id: act-02-working-with-pods-lab
title: "Unit 02 — Working with Pods"
mode: lab
open_at: "2026-02-01T00:00:00+01:00"
close_at: "2026-12-31T23:59:59+01:00"
retakes_enabled: true
max_attempts: 999
grade_max: 100
---
# Unit 02 — Working with Pods
## T1
Type: essay
Points: 10
Prompt: Explain why Pods are called “the atomic unit of scheduling” and why we typically avoid deploying Pods directly (mention what Controllers provide).
Refs:
- mat-03-working-with-pods#previously
- mat-03-working-with-pods#static-pods-vs-controllers
Rubric:
- Correctly explains “atomic unit of scheduling” (4)
- Explains why direct Pods are limited and what Controllers add (6)
## T2
Type: essay
Points: 15
Prompt: List at least 6 examples of “pod augmentation data” and explain (one sentence each) why they are useful.
Refs:
- mat-03-working-with-pods#pod-augmentation-data
- mat-03-working-with-pods#some-pod-features-why-they-matter
Rubric:
- ≥6 correct items listed (9)
- Each item has a correct purpose statement (6)
## T3
Type: essay
Points: 10
Prompt: Describe the high-level process of deploying a Pod from manifest to running state (use numbered steps).
Refs:
- mat-03-working-with-pods#deploying-pods-high-level
Rubric:
- Includes manifest creation + posting to API (3)
- Includes API authz/validation and scheduling (4)
- Includes kubelet monitoring/execution (3)
## T4
Type: essay
Points: 15
Prompt: Explain how networking works for single-container vs multi-container Pods, including how containers communicate inside the same Pod.
Refs:
- mat-04-pod-networking-lifecycle-and-patterns#pods-and-shared-networking
Rubric:
- Correctly states shared IP/ports/routing in multi-container pods (7)
- Correctly explains localhost communication within a pod (8)
## T5
Type: essay
Points: 10
Prompt: Summarize the Pod lifecycle phases and provide guidance on restartPolicy values for short-lived vs long-lived apps.
Refs:
- mat-04-pod-networking-lifecycle-and-patterns#pod-lifecycle
- mat-04-pod-networking-lifecycle-and-patterns#pod-restart-policy
Rubric:
- Lifecycle phases described correctly (5)
- Restart policy guidance correct (5)
## T6
Type: essay
Points: 10
Prompt: Define Pod immutability and explain what you should do when you need to change a Pod's metadata.
Refs:
- mat-04-pod-networking-lifecycle-and-patterns#pod-immutability
Rubric:
- Correctly defines immutability (5)
- Correctly states “create a new Pod” approach (5)
## T7
Type: essay
Points: 10
Prompt: Describe at least two multi-container patterns (sidecar and init container) and give one realistic use case for each.
Refs:
- mat-04-pod-networking-lifecycle-and-patterns#multi-container-pod-patterns
Rubric:
- Correct pattern definitions (6)
- Appropriate use cases (4)
## T8
Type: file
Points: 20
Prompt: Create a lab write-up in @solutions/unit02-lab.md that includes: (1) creating a fresh k3d cluster (1 server + 1 agent), (2) deploying a Pod using a manifest, (3) verifying with kubectl get/watch, (4) inspecting with -o wide and -o yaml, (5) using kubectl describe, logs, and exec, and (6) cleaning up the cluster.
Refs:
- mat-05-kubectl-for-pods#apply-a-manifest
- mat-05-kubectl-for-pods#inspect-pods-with-kubectl-get
- mat-05-kubectl-for-pods#kubectl-describe
- mat-05-kubectl-for-pods#kubectl-logs
- mat-05-kubectl-for-pods#kubectl-exec
Rubric:
- Shows correct kubectl apply/get/watch usage (6)
- Includes -o wide and -o yaml output snippets (5)
- Includes describe + logs + exec evidence (6)
- Includes cluster delete and brief reflection (3)
## T9
Type: file
Points: 10
Prompt: Homework: Create an nginx image with static content, upload it to DockerHub, write a Pod manifest that uses it, deploy it to your cluster, and watch it being deployed. Submit a short report in @solutions/unit02-homework.md describing commands used and evidence (screenshots or pasted outputs).
Refs:
- mat-03-working-with-pods#deploying-pods-high-level
- mat-05-kubectl-for-pods#apply-a-manifest
Rubric:
- Describes image build and push steps (4)
- Pod manifest described correctly (3)
- Deployment + watch evidence included (3)
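A minimal Pod manifest for this homework might look like the following sketch (the image name is a placeholder for your own DockerHub image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-static
  labels:
    app: nginx-static
spec:
  containers:
    - name: web
      image: youruser/nginx-static:1.0   # placeholder: your DockerHub image tag
      ports:
        - containerPort: 80
```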

---
id: act-03-kubernetes-fundamentals-test
title: "Test 01 — Kubernetes Fundamentals Quiz"
mode: test
open_at: "2026-02-10T00:00:00+01:00"
close_at: "2026-12-31T23:59:59+01:00"
retakes_enabled: true
max_attempts: 5
grade_max: 100
time_limit_seconds: 1800
---
# Test 01 — Kubernetes Fundamentals Quiz
## T1
Type: mcq
Points: 25
Prompt: Kubernetes is best described as:
Refs:
- mat-01-kubernetes-primer#kubernetes-background
Choices:
- [x] A container orchestration platform for managing containerized workloads
- [ ] A package manager for Linux distributions
- [ ] A CI server used only for running unit tests
## T2
Type: short
Points: 25
Prompt: Single-choice (write only A, B, or C): Which Kubernetes control plane component stores cluster state?
Refs:
- mat-01-kubernetes-primer#control-plane
Answer: B
## T3
Type: mcq
Points: 25
Prompt: Multiple-choice: Which statement about Kubernetes Services is correct?
Refs:
- mat-02-k8s-objects-and-getting-started#services
Choices:
- [x] Services provide a stable network endpoint for Pods
- [ ] Services are used to build container images
- [ ] Services replace Deployments and StatefulSets
## T4
Type: file
Points: 25
Prompt: Submit @solutions/pod-commands.md listing at least five kubectl commands for pod inspection and debugging.
Refs:
- mat-05-kubectl-for-pods#inspect-pods-with-kubectl-get
- mat-05-kubectl-for-pods#kubectl-describe
- mat-05-kubectl-for-pods#kubectl-logs
- mat-05-kubectl-for-pods#kubectl-exec
Rubric:
- Includes at least five relevant kubectl commands (10)
- Commands are appropriate for inspection/debugging tasks (10)
- Notes are clear and concise (5)

---
id: act-04-kubernetes-intro-understanding
title: Kubernetes Intro - Understanding
mode: understanding
open_at: 2026-02-01T00:00:00+01:00
close_at: 2026-12-31T23:59:59+01:00
retakes_enabled: true
max_attempts: 999
grade_max: 100
---
## T1
Type: mcq
Points: 20
Prompt: Which description best matches what Kubernetes does?
Refs:
- mat-06-kubernetes-intro#what-kubernetes-is
- mat-06-kubernetes-intro#orchestration
Choices:
- [x] It deploys applications and can scale, self-heal, and manage rollouts/rollbacks.
- [ ] It is a programming language for writing cloud-native applications.
- [ ] It is a container image format used to distribute applications.
- [ ] It is a single server operating system kernel.
## T2
Type: short
Points: 15
Prompt: What does the abbreviation K8s mean?
Refs:
- mat-06-kubernetes-intro#whats-in-the-name-and-k8s
Answer: The 8 replaces the eight letters between K and s in Kubernetes.
## T3
Type: single_choice
Points: 20
Prompt: For high availability, which control plane sizing is recommended? A) 1 control plane node B) 2 control plane nodes C) 3 or 5 control plane nodes D) 7 control plane nodes
Refs:
- mat-06-kubernetes-intro#control-plane-nodes-and-worker-nodes
## T4
Type: multiple_choice
Points: 25
Prompt: Which items are part of the control plane services described in the material? A) API server B) Scheduler C) Cluster store (etcd) D) Kubelet E) Controller manager/controllers F) Kube-proxy
Refs:
- mat-06-kubernetes-intro#control-plane-services
- mat-06-kubernetes-intro#worker-node-components
## T5
Type: essay
Points: 20
Prompt: Explain Kubernetes' declarative model using desired state, observed state, and reconciliation, and contrast it with an imperative approach using one concrete example (failure recovery or an update).
Refs:
- mat-06-kubernetes-intro#desired-state-observed-state-and-reconciliation
- mat-06-kubernetes-intro#declarative-versus-imperative
- mat-06-kubernetes-intro#self-healing-and-rolling-updates-example
Rubric:
- Defines desired state, observed state, and reconciliation correctly. (8)
- Describes the YAML->API server->cluster store flow and the role of controllers/watch loops. (6)
- Provides a concrete example (replica self-heal or image update) and contrasts with imperative scripting. (6)

---
id: act-05-kubernetes-intro-lab
title: Kubernetes Intro - Lab
mode: lab
open_at: 2026-02-01T00:00:00+01:00
close_at: 2026-12-31T23:59:59+01:00
retakes_enabled: true
max_attempts: 999
grade_max: 100
---
## T1
Type: short
Points: 10
Prompt: Write the kubectl command that lists the nodes in the current cluster.
Refs:
- mat-06-kubernetes-intro#verify-cluster-with-kubectl-get-nodes
Answer: kubectl get nodes
## T2
Type: folder
Points: 35
Prompt: Create folder @solutions/kubernetes-intro/lab/manifests containing pod.yaml and deployment.yaml that express desired state for a simple app (image, ports, and replicas) using the Pod and Deployment nesting described.
Refs:
- mat-06-kubernetes-intro#object-nesting-containers-pods-deployments
- mat-06-kubernetes-intro#desired-state-observed-state-and-reconciliation
Rubric:
- Folder exists at the required path and contains exactly pod.yaml and deployment.yaml. (5)
- pod.yaml is a valid Pod manifest that runs a container image and exposes at least one container port. (15)
- deployment.yaml is a valid Deployment manifest that references a Pod template and sets replicas. (15)
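The container-in-Pod-in-Deployment nesting the prompt refers to can be sketched as follows (names, image, and replica count are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-app
spec:
  replicas: 3                    # desired state: three Pod replicas
  selector:
    matchLabels:
      app: simple-app
  template:                      # the Pod template nested inside the Deployment
    metadata:
      labels:
        app: simple-app
    spec:
      containers:                # containers nested inside the Pod spec
        - name: app
          image: nginx:1.25      # illustrative image
          ports:
            - containerPort: 80
```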
## T3
Type: file
Points: 25
Prompt: Create file @solutions/kubernetes-intro/lab/kubeconfig_notes.md explaining (a) what kubectl does on every command and (b) the purpose of clusters, users, contexts, and current-context in kubeconfig.
Refs:
- mat-06-kubernetes-intro#what-kubectl-does-per-command
- mat-06-kubernetes-intro#kubeconfig-structure-clusters-users-contexts-current-context
Rubric:
- Correctly explains the three kubectl steps (REST request, target cluster via current context, credentials via current context). (10)
- Correctly explains clusters, users, contexts, and current-context and how they relate. (10)
- Writing is clear, structured with headings/bullets, and uses accurate terminology from the material. (5)
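The kubeconfig structure described in (b) can be sketched as follows (the endpoint, names, and credential are placeholders):

```yaml
apiVersion: v1
kind: Config
clusters:                        # cluster definitions: name + API server endpoint
  - name: dev-cluster
    cluster:
      server: https://127.0.0.1:6443    # placeholder endpoint
users:                           # credentials kubectl presents
  - name: dev-user
    user:
      token: PLACEHOLDER-TOKEN
contexts:                        # a context pairs a cluster with a user
  - name: dev
    context:
      cluster: dev-cluster
      user: dev-user
current-context: dev             # kubectl targets this context on every command
```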
## T4
Type: mcq
Points: 15
Prompt: In Docker Desktop, which Kubernetes option is emphasized as necessary to create a multi-node cluster (not single-node)?
Refs:
- mat-06-kubernetes-intro#deploy-docker-desktop-built-in-multi-node-cluster
Choices:
- [x] kind (sign-in required)
- [ ] kubeadm
- [ ] minikube
- [ ] k3s
## T5
Type: essay
Points: 15
Prompt: In 6-8 sentences, explain why Kubernetes Services are needed for stable networking to Pods, and name at least two events that cause Pod IP churn.
Refs:
- mat-06-kubernetes-intro#services-and-stable-networking
- mat-06-kubernetes-intro#pod-lifecycle
- mat-06-kubernetes-intro#pods-as-the-unit-of-scaling
Rubric:
- Explains why clients cannot reliably connect to individual Pods and connects this to Pod mortality/IP churn. (7)
- Names at least two sources of churn (failures, rollouts, scale up, scale down) and explains them. (5)
- Describes the Service front end (stable name/IP/port) and back end (labels + healthy Pods + load balancing). (3)

---
id: act-06-kubernetes-intro-homework
title: Kubernetes Intro - Homework
mode: homework
open_at: 2026-02-01T00:00:00+01:00
close_at: 2026-12-31T23:59:59+01:00
retakes_enabled: true
max_attempts: 999
grade_max: 100
---
## T1
Type: essay
Points: 30
Prompt: Compare control plane nodes and worker nodes by listing the main responsibilities and naming at least three specific components/services on each side.
Refs:
- mat-06-kubernetes-intro#control-plane-and-worker-nodes
- mat-06-kubernetes-intro#control-plane-services
- mat-06-kubernetes-intro#worker-node-components
Rubric:
- Correctly distinguishes control plane responsibilities vs worker responsibilities. (10)
- Correctly names and attributes at least three control plane components/services (e.g., API server, scheduler, controllers/controller manager, cluster store). (10)
- Correctly names and attributes at least three worker node components (kubelet, runtime, kube-proxy) and what they do. (10)
## T2
Type: mcq
Points: 20
Prompt: Which control plane component is described as the front end that all commands and requests go through?
Refs:
- mat-06-kubernetes-intro#the-api-server
Choices:
- [x] API server
- [ ] kubelet
- [ ] kube-proxy
- [ ] container runtime
## T3
Type: short
Points: 20
Prompt: What distributed database is used as the basis for the Kubernetes cluster store?
Refs:
- mat-06-kubernetes-intro#the-cluster-store-etcd-high-availability-and-split-brain
Answer: etcd
## T4
Type: single_choice
Points: 15
Prompt: If etcd enters a split-brain condition, what behavior is described? A) etcd deletes all cluster state B) etcd goes into read-only mode preventing updates C) etcd automatically adds replicas until quorum is restored D) etcd converts the cluster to single-node mode
Refs:
- mat-06-kubernetes-intro#the-cluster-store-etcd-high-availability-and-split-brain
## T5
Type: multiple_choice
Points: 15
Prompt: Which statements about Pods are consistent with the material? A) Pods are the unit of scheduling B) All containers in a Pod can be scheduled to different nodes C) A Pod is ready only when all its containers are running D) Pods are immutable and updates replace them with new Pods
Refs:
- mat-06-kubernetes-intro#pod-scheduling-atomic-readiness
- mat-06-kubernetes-intro#pods-and-containers
- mat-06-kubernetes-intro#pod-immutability

---
id: act-07-kubernetes-intro-test
title: Kubernetes Intro - Test
mode: test
open_at: 2026-02-01T00:00:00+01:00
close_at: 2026-12-31T23:59:59+01:00
retakes_enabled: true
max_attempts: 2
grade_max: 100
time_limit_seconds: 1800
---
## T1
Type: mcq
Points: 20
Prompt: Kubernetes is best described as:
Refs:
- mat-06-kubernetes-intro#what-kubernetes-is
- mat-06-kubernetes-intro#kubernetes-from-40k-feet
Choices:
- [x] A system that deploys and manages applications and their components across a cluster.
- [ ] A container image registry for distributing OCI images.
- [ ] A virtualization hypervisor that runs virtual machines directly on hardware.
- [ ] A source control system for YAML manifests.
## T2
Type: mcq
Points: 20
Prompt: For high availability, production clusters typically run control plane nodes in which quantity?
Refs:
- mat-06-kubernetes-intro#control-plane-nodes-and-worker-nodes
Choices:
- [ ] 1
- [ ] 2
- [x] 3 or 5
- [ ] 10
## T3
Type: short
Points: 15
Prompt: Name the main agent on each worker node that watches the API server for tasks and reports status back.
Refs:
- mat-06-kubernetes-intro#kubelet
Answer: kubelet
## T4
Type: short
Points: 15
Prompt: Name the consensus algorithm used by etcd to keep writes consistent.
Refs:
- mat-06-kubernetes-intro#raft-consensus-and-write-consistency
Answer: Raft
## T5
Type: mcq
Points: 15
Prompt: The Kubernetes scheduler primarily:
Refs:
- mat-06-kubernetes-intro#the-scheduler
Choices:
- [x] Watches for new tasks and assigns them to appropriate nodes.
- [ ] Stores desired state as a distributed key-value database.
- [ ] Implements node-level networking and load balancing on each worker.
- [ ] Builds container images and pushes them to registries.
## T6
Type: mcq
Points: 15
Prompt: By default, what is the filename of the kubeconfig file used by kubectl (inside the .kube directory)?
Refs:
- mat-06-kubernetes-intro#configure-kubectl-and-kubeconfig
- mat-06-kubernetes-intro#kubeconfig-structure-clusters-users-contexts-current-context
Choices:
- [ ] kubeconfig.yaml
- [x] config
- [ ] cluster.conf
- [ ] kubectl.json

---
id: act-08-git-understanding
title: Git Understanding
mode: understanding
open_at: 2026-02-01T00:00:00+01:00
close_at: 2026-12-31T23:59:59+01:00
retakes_enabled: true
max_attempts: 999
grade_max: 100
---
## T1
Type: single_choice
Points: 10
Prompt: Which option is NOT listed as a benefit of using source control in the lecture materials? A) Tracking changes over time B) Collaborative work on the same codebase C) Encrypting all source code automatically D) Reverting to previous versions
Refs:
- mat-07-git#why-version-control
## T2
Type: multiple_choice
Points: 10
Prompt: Select ALL statements that match Git as described in the materials. A) Git works like a stream of snapshots, not deltas B) Git is centralized and requires the server for nearly every operation C) Cloning gives you a full copy of nearly all data/history by default D) Git supports branching and merging for parallel development
Refs:
- mat-07-git#types-of-version-control-systems
- mat-07-git#git-overview
- mat-07-git#snapshots-not-deltas
- mat-07-git#cloning-a-repository-with-git-clone
## T3
Type: essay
Points: 15
Prompt: Explain the roles of the working directory, staging area, and git directory, and relate them to the three file states (modified, staged, committed).
Refs:
- mat-07-git#working-directory-staging-area-git-directory
- mat-07-git#file-states-modified-staged-committed
Rubric:
- Correctly defines working directory, staging area, and git directory (.git) and their roles (6)
- Correctly explains modified, staged, committed and how files move between states (6)
- Uses correct Git terminology and connects states to areas clearly (3)
## T4
Type: file
Points: 10
Prompt: Create file @solutions/act-01-git-understanding/t4-version-and-config.txt containing the commands you ran and terminal output for: git --version, git config --global user.name, git config --global user.email, and git config --list.
Refs:
- mat-07-git#installing-git-and-verifying
- mat-07-git#first-time-setup-with-git-config
Rubric:
- Includes git --version command and its output (3)
- Includes user.name and user.email queries (or setting + query) with output (4)
- Includes git config --list output (2)
- Output is readable and clearly separated by command (1)
## T5
Type: file
Points: 15
Prompt: Create file @solutions/act-01-git-understanding/t5-init-add-commit.txt containing the commands you ran and terminal output while you: git init a new repo, create README.md, git status, git add README.md, git status -s, git commit -m "Initial commit", and finally git log --oneline.
Refs:
- mat-07-git#getting-a-repository-with-git-init
- mat-07-git#checking-status-with-git-status
- mat-07-git#staging-with-git-add
- mat-07-git#short-status-format
- mat-07-git#committing-with-git-commit
- mat-07-git#viewing-history-with-git-log
Rubric:
- Shows repository initialization (git init) and at least one commit (4)
- Includes both git status and git status -s outputs at meaningful points (4)
- Demonstrates staging with git add and committing with git commit -m (4)
- Includes git log --oneline output showing the commit (2)
- Commands and outputs are clearly labeled (1)
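For reference, the T5 sequence can be sketched as a single shell session (run in an empty scratch directory; the identity values are placeholders so the commit succeeds on a fresh machine):

```shell
set -e
mkdir scratch-repo && cd scratch-repo

git init                                  # create an empty repository
git config user.name "Student Name"       # placeholder identity, repo-local
git config user.email "student@example.com"

echo "# My project" > README.md
git status                                # README.md shown as untracked
git add README.md                         # stage the file
git status -s                             # short format: "A  README.md"
git commit -m "Initial commit"
git log --oneline                         # one line: <hash> Initial commit
```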
## T6
Type: multiple_choice
Points: 10
Prompt: In git status -s output, which interpretations are correct? Select ALL that apply. A) 'M file.txt' means the file is modified and staged B) ' M file.txt' means the file is modified but not staged C) '?? file.txt' means the file is ignored by .gitignore D) 'AM file.txt' means the file was added to staging and then modified again
Refs:
- mat-07-git#short-status-format
## T7
Type: single_choice
Points: 10
Prompt: Which command shows what you have staged that will go into your next commit (compared to the last commit)? A) git diff B) git diff --staged C) git status -s D) git log --oneline
Refs:
- mat-07-git#viewing-changes-with-git-diff
## T8
Type: essay
Points: 10
Prompt: Describe how .gitignore patterns work (including at least one use of '!' negation) and give a small example .gitignore for ignoring build/ and *.log but keeping keep.log tracked.
Refs:
- mat-07-git#ignoring-files-with-gitignore
Rubric:
- Correctly describes how patterns are matched (glob-style) and that comments/blank lines are handled (4)
- Correctly explains negation with '!' and uses it appropriately (3)
- Provides an example .gitignore that matches the requested behavior (3)
## T9
Type: file
Points: 10
Prompt: Create file @solutions/act-01-git-understanding/t9-undo.txt containing commands and output for: (1) make a commit with a wrong message then fix it with git commit --amend, (2) stage a file then unstage it using git reset HEAD <file> or git restore --staged <file>, (3) modify a tracked file then discard the modification using git checkout -- <file> or git restore <file>.
Refs:
- mat-07-git#amending-the-last-commit
- mat-07-git#unstaging-files
- mat-07-git#discarding-local-modifications
- mat-07-git#restore-versus-reset-and-safety
Rubric:
- Shows a commit being amended (commands + output/log evidence) (4)
- Demonstrates unstaging a staged file (commands + output) (3)
- Demonstrates discarding a modification safely (commands + output) (2)
- Commands and outputs are clearly labeled (1)
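The three undo scenarios can be reproduced with a sketch like this (scratch repository, placeholder identity; the modern `git restore` forms are used, with the older equivalents noted in comments):

```shell
set -e
mkdir undo-demo && cd undo-demo
git init
git config user.name "Student Name"        # placeholder identity
git config user.email "student@example.com"

# (1) fix a wrong commit message with --amend
echo "v1" > notes.txt
git add notes.txt
git commit -m "Initail commit"             # typo on purpose
git commit --amend -m "Initial commit"     # rewrites the last commit
git log --oneline                          # shows the corrected message

# (2) stage a change, then unstage it
echo "v2" > notes.txt
git add notes.txt
git restore --staged notes.txt             # older form: git reset HEAD notes.txt
git status -s                              # " M notes.txt": modified, not staged

# (3) discard the local modification
git restore notes.txt                      # older form: git checkout -- notes.txt
git status -s                              # clean: no output
```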

activities/09-git-lab.md
---
id: act-09-git-lab
title: Git Lab
mode: lab
open_at: 2026-02-01T00:00:00+01:00
close_at: 2026-12-31T23:59:59+01:00
retakes_enabled: true
max_attempts: 999
grade_max: 100
---
## T1
Type: folder
Points: 20
Prompt: Create folder @solutions/act-01-git-lab/t1-branching-basics/ containing commands.txt (commands+output) and graph.txt (output of git log --oneline --decorate --graph --all) for a repo where you created a branch, made at least one commit on each branch, and switched between branches.
Refs:
- mat-07-git#branching
- mat-07-git#creating-and-switching-branches
- mat-07-git#divergent-history-and-viewing-the-graph
Rubric:
- commands.txt shows branch creation, switching, and at least one commit on each branch (10)
- graph.txt shows both branches in the graph (6)
- Outputs are readable and clearly separated by command (4)
## T2
Type: single_choice
Points: 10
Prompt: In the materials, what is HEAD used for? A) It is the server-side copy of your repository B) It is a special pointer that tells Git what branch you are currently on C) It is a list of ignored files D) It is the file that stores your username and email
Refs:
- mat-07-git#head-pointer-and-current-branch
## T3
Type: file
Points: 20
Prompt: Create file @solutions/act-01-git-lab/t3-merge.txt containing commands+output for a merge scenario: create a topic branch, make changes, switch back to the base branch, make another commit, then git merge the topic branch and show the resulting git log --oneline --decorate --graph --all.
Refs:
- mat-07-git#fast-forward-merges
- mat-07-git#three-way-merges
- mat-07-git#merge-conflicts-and-resolution
- mat-07-git#divergent-history-and-viewing-the-graph
Rubric:
- Shows creation of a topic branch and commits on both branches (8)
- Shows a successful merge and the final graph output (8)
- If a conflict occurred, shows resolution steps (edit, git add, git commit) (3)
- Commands and outputs are clearly labeled (1)
## T4
Type: multiple_choice
Points: 10
Prompt: Select ALL correct statements about merging from the materials. A) Fast-forward merge happens when there is no divergent history B) Three-way merge creates a merge commit with 2+ parents C) Fast-forward merge always creates a merge commit D) After merging a topic branch, it is common to delete it with git branch -d
Refs:
- mat-07-git#fast-forward-merges
- mat-07-git#three-way-merges
- mat-07-git#deleting-branches-after-merge
## T5
Type: file
Points: 20
Prompt: Create file @solutions/act-01-git-lab/t5-remotes.txt containing commands+output for: showing remotes (git remote -v), adding or confirming origin (git remote add ... if needed), and pushing at least one branch to GitHub with git push (use git push -u origin <branch> if setting upstream).
Refs:
- mat-07-git#showing-and-adding-remotes
- mat-07-git#pushing
- mat-07-git#tracking-branches-upstream
Rubric:
- Includes git remote -v output showing at least one remote (6)
- Demonstrates a push to GitHub with command and output (8)
- Shows upstream setup (git push -u) or explains via output that branch is tracking (4)
- Commands and outputs are clearly labeled (2)
## T6
Type: essay
Points: 20
Prompt: Explain what merge conflict markers mean and list the exact steps to resolve a merge conflict until Git considers it resolved.
Refs:
- mat-07-git#merge-conflicts-and-resolution
Rubric:
- Correctly explains the meaning of conflict markers (<<<<<<<, =======, >>>>>>>) (8)
- Lists the resolution workflow: edit, stage (git add), commit (git commit) (8)
- Mentions checking state with git status and/or verifying final history (4)
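The conflict-and-resolution workflow from T6 can be reproduced end-to-end with a sketch like this (scratch repository, placeholder identity; the default branch name is captured so the sketch works on any Git version):

```shell
set -e
mkdir merge-demo && cd merge-demo
git init
base=$(git symbolic-ref --short HEAD)       # default branch name (main or master)
git config user.name "Student Name"         # placeholder identity
git config user.email "student@example.com"

echo "base" > file.txt
git add file.txt && git commit -m "base"

git checkout -b topic
echo "topic change" > file.txt
git commit -am "topic edit"

git checkout "$base"
echo "main change" > file.txt
git commit -am "main edit"

git merge topic || true                     # conflict: both branches changed file.txt
grep "<<<<<<<" file.txt                     # conflict markers are now in the file

echo "resolved change" > file.txt           # edit: pick the final content
git add file.txt                            # stage the resolution
git commit -m "merge topic"                 # conclude the merge
git log --oneline --decorate --graph --all  # merge commit has two parents
```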

---
id: act-10-git-homework
title: Git Homework
mode: homework
open_at: 2026-02-01T00:00:00+01:00
close_at: 2026-12-31T23:59:59+01:00
retakes_enabled: true
max_attempts: 999
grade_max: 100
---
## T1
Type: essay
Points: 20
Prompt: Compare merge and rebase as integration strategies and explain how they differ in the history they produce (use the terminology from the materials).
Refs:
- mat-07-git#merge-versus-rebase
- mat-07-git#basic-rebasing-workflow
Rubric:
- Correctly describes merge as combining endpoints and potentially creating a merge commit (8)
- Correctly describes rebase as replaying commits onto a new base and changing commit history (8)
- Discusses when a cleaner linear history is desired versus keeping merge structure (4)
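The linear history that rebase produces can be seen in a small sketch (scratch repository, placeholder identity; the two branches touch different files so the rebase completes without conflicts):

```shell
set -e
mkdir rebase-demo && cd rebase-demo
git init
base=$(git symbolic-ref --short HEAD)       # default branch name (main or master)
git config user.name "Student Name"         # placeholder identity
git config user.email "student@example.com"

echo "a" > a.txt && git add a.txt && git commit -m "base 1"

git checkout -b topic
echo "t" > t.txt && git add t.txt && git commit -m "topic 1"

git checkout "$base"
echo "b" > b.txt && git add b.txt && git commit -m "base 2"

git log --oneline --graph --all             # divergent history: two tips

git checkout topic
git rebase "$base"                          # replay topic commits onto the new base tip
git log --oneline --graph --all             # linear history: topic now sits on base 2
```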
## T2
Type: single_choice
Points: 10
Prompt: Which statement best describes git fetch as presented in the materials? A) It fetches from a remote and immediately merges into your current branch B) It downloads remote data without merging it into your current work C) It deletes remote branches that are missing locally D) It creates an annotated tag
Refs:
- mat-07-git#fetching-and-pulling
## T3
Type: file
Points: 20
Prompt: Create file @solutions/act-01-git-homework/t3-rebase.txt containing commands+output for a rebase exercise: create a topic branch with commits, make at least one new commit on the base branch, then rebase the topic branch onto the base branch and show git log --oneline --decorate --graph --all before and after.
Refs:
- mat-07-git#basic-rebasing-workflow
- mat-07-git#divergent-history-and-viewing-the-graph
Rubric:
- Shows topic branch commits and a separate base-branch commit before rebasing (8)
- Shows rebase command(s) and successful completion (8)
- Includes before-and-after graph outputs (3)
- Commands and outputs are clearly labeled (1)
## T4
Type: multiple_choice
Points: 10
Prompt: Select ALL correct statements about fetch/pull/push and tracking branches based on the materials. A) git pull fetches and then merges into your current branch B) git fetch downloads data but does not merge it into your current branch C) git push -u origin <branch> sets the upstream tracking relationship D) git fetch is the same as git pull
Refs:
- mat-07-git#fetching-and-pulling
- mat-07-git#pushing
- mat-07-git#tracking-branches-upstream
## T5
Type: file
Points: 15
Prompt: Create file @solutions/act-01-git-homework/t5-tags.txt containing commands+output for: creating an annotated tag (git tag -a v1.4 -m "my version 1.4" or similar) and showing it with git show <tag>.
Refs:
- mat-07-git#lightweight-and-annotated-tags
- mat-07-git#listing-and-showing-tags
Rubric:
- Shows creation of an annotated tag with a message (6)
- Shows git show output for the tag (6)
- Commands and outputs are clearly labeled (3)
## T6
Type: single_choice
Points: 10
Prompt: According to the materials, what happens when you git checkout a tag name? A) Git creates a new branch with that tag name automatically B) Git switches you into a detached HEAD state C) Git deletes the tag after checkout D) Git pushes the tag to origin
Refs:
- mat-07-git#checking-out-tags-and-detached-head
## T7
Type: file
Points: 15
Prompt: Create file @solutions/act-01-git-homework/t7-aliases.txt containing commands+output for: defining alias.unstage and alias.last using git config --global, verifying them via git config --list (or git config alias.unstage/alias.last), and running at least one of the aliases in a repository.
Refs:
- mat-07-git#git-aliases
- mat-07-git#unstaging-files
- mat-07-git#viewing-history-with-git-log
Rubric:
- Shows alias definitions using git config --global (6)
- Shows verification of aliases in config output (4)
- Demonstrates executing an alias and shows its output (4)
- Commands and outputs are clearly labeled (1)

`activities/11-git-test.md`
---
id: act-11-git-test
title: Git Test
mode: test
open_at: 2026-02-01T00:00:00+01:00
close_at: 2026-12-31T23:59:59+01:00
retakes_enabled: true
max_attempts: 2
grade_max: 100
time_limit_seconds: 1800
---
## T1
Type: single_choice
Points: 10
Prompt: Which statement matches the Git storage model described in the materials? A) Git stores only line-by-line deltas between versions B) Git stores a stream of snapshots of your project over time C) Git stores changes only on a central server D) Git stores only file timestamps, not content
Refs:
- mat-07-git#snapshots-not-deltas
## T2
Type: multiple_choice
Points: 10
Prompt: In the short status format, which statements are correct? Select ALL that apply. A) The left column refers to the staging area B) The right column refers to the working directory C) '??' means a file is untracked D) '??' means a file is staged
Refs:
- mat-07-git#short-status-format
## T3
Type: single_choice
Points: 10
Prompt: Which command shows the differences you have staged for the next commit? A) git diff B) git diff --staged C) git status D) git log
Refs:
- mat-07-git#viewing-changes-with-git-diff
## T4
Type: single_choice
Points: 10
Prompt: Which command initializes a new Git repository in the current directory? A) git init B) git clone C) git commit D) git remote add
Refs:
- mat-07-git#getting-a-repository-with-git-init
## T5
Type: single_choice
Points: 10
Prompt: In Git, a branch is best described as: A) A lightweight movable pointer to a commit B) A full copy of the repository stored on a server C) A list of ignored files D) A merge conflict marker
Refs:
- mat-07-git#what-a-branch-is
## T6
Type: multiple_choice
Points: 10
Prompt: Select ALL correct statements about HEAD and switching branches. A) HEAD is a special pointer that tells Git what branch you are currently on B) Switching branches updates files in your working directory to match that branch snapshot C) HEAD is the same thing as origin D) git checkout -b <name> both creates a branch and switches to it
Refs:
- mat-07-git#head-pointer-and-current-branch
- mat-07-git#creating-and-switching-branches
## T7
Type: single_choice
Points: 10
Prompt: A fast-forward merge occurs when: A) The branches have diverged and Git must create a new merge commit B) The current branch is directly behind the branch being merged, so Git only moves the pointer forward C) A merge conflict is present D) You are in detached HEAD state
Refs:
- mat-07-git#fast-forward-merges
## T8
Type: multiple_choice
Points: 10
Prompt: Select ALL steps that are part of resolving a merge conflict. A) Edit the conflicted file to resolve markers B) Stage the resolved file with git add C) Finish by committing (git commit) D) Delete the remote branch with git push origin --delete
Refs:
- mat-07-git#merge-conflicts-and-resolution
## T9
Type: single_choice
Points: 10
Prompt: In the materials, git pull is described as: A) Fetching remote data without merging B) Fetching and then merging into your current branch C) Creating an annotated tag D) Renaming a remote
Refs:
- mat-07-git#fetching-and-pulling
## T10
Type: file
Points: 10
Prompt: Create file @solutions/act-11-git-test/t10-practical.txt containing commands+output that show: a repo with at least 2 commits on the base branch, one additional branch with one commit, and the outputs of git status -s and git log --oneline --decorate --graph --all.
Refs:
- mat-07-git#checking-status-with-git-status
- mat-07-git#short-status-format
- mat-07-git#viewing-history-with-git-log
- mat-07-git#creating-and-switching-branches
- mat-07-git#divergent-history-and-viewing-the-graph
Rubric:
- Output shows at least 2 commits on the base branch and at least 1 commit on another branch (5)
- Includes git status -s output (2)
- Includes git log --oneline --decorate --graph --all output showing the branch structure (2)
- Commands and outputs are clearly labeled (1)

`course.yml`
id: devops-ale-example
title: "DevOps (ALE Example Course)"
version: "0.1"
language: "en"
description: >
Example ALE course content repo for DevOps, starting with Kubernetes fundamentals activities.
This repo is intended for dev/local testing with ALE Lite v0.1.
paths:
materials: "materials/"
activities: "activities/"
authors_note: >
Materials were converted from the provided slide deck (KIII-07-Kubernetes.pdf).
year: 2023
term: "Continuous Integration and Delivery"

---
id: mat-01-kubernetes-primer
title: "Kubernetes Primer"
---
# Kubernetes primer
## Kubernetes background
Kubernetes is an application orchestrator that helps you:
- deploy applications
- scale them up/down based on demand
- self-heal when things break
- do rolling updates and rollbacks
- and more
## Cloud-native and microservice apps
A cloud-native app typically demands:
- auto-scaling
- self-healing
- rolling updates
- rollbacks
A microservices app typically demands:
- lots of small, specialized, independent parts that work together
- e.g., front-end, back-end, database, many services in a mesh
## Where Kubernetes comes from
- Early public cloud era: AWS popularized modern cloud computing
- Google already ran large containerized apps (e.g., Search/Gmail) and developed in-house orchestration systems (Borg, Omega)
- Kubernetes was created from those lessons and donated to the CNCF in 2014 as an open-source project
## Kubernetes and Docker (historical view)
At first:
- Docker build tools packaged apps as containers
- Kubernetes made scheduling/orchestration decisions
- Docker runtime was installed on each worker node
Now:
- Kubernetes uses a container runtime layer via CRI (Container Runtime Interface)
## What's in the name (K8s)
- “Kubernetes” is often shortened to “K8s” (“kates”)
- “Kubernetes” comes from Greek meaning “helmsman” (the person who steers a ship)
## Kubernetes from 40K feet (cluster view)
A Kubernetes cluster runs applications on:
- control plane nodes
- worker nodes
## Control plane
The control plane is a collection of services that control and run everything.
Common services:
- API Server
- cluster store (etcd)
- controller manager (reconciles desired vs current state)
- scheduler
## Worker nodes
Worker nodes are where user applications run.
Worker node work logic:
- watch the API server for work assignments
- execute work assignments
- report back to the API server
### Worker node components
- kubelet: main agent on every worker, executes tasks and reports
- container runtime: pulls images, starts/stops containers (via CRI; e.g., containerd)
- kube-proxy: local networking (routing, iptables, load balancing)

---
id: mat-02-k8s-objects-and-getting-started
title: "Kubernetes Objects & Getting Started Locally"
---
# Kubernetes objects and getting started
## Declarative model (desired state)
Kubernetes is declarative:
- you declare the desired state in a manifest (YAML)
- you post it to the API server
- Kubernetes stores desired state
- controllers continuously reconcile current state to desired state
Example scenarios:
- desired replicas = 10, 2 fail → controller schedules 2 new replicas
- change replicas from 3 to 6 → controller schedules 3 more replicas
- update app version from 1.0 to 2.0 → controller rolls out 2.0 gradually and rolls back if needed
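The desired state in these scenarios is what you would declare in a manifest. A minimal sketch, written out with a here-doc (the `web` name, labels, and `nginx` image are illustrative, not from the materials):

```shell
# sketch of a desired-state manifest: 10 replicas of an illustrative app
cat > deploy.yml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 10
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
EOF
# posting it to the API server declares desired state;
# controllers then reconcile toward it:
#   kubectl apply -f deploy.yml
```

Changing `replicas: 10` to another number and re-applying is all it takes to scale; the controllers do the rest.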
## Pods
A Pod is the atomic unit of scheduling in Kubernetes.
- simplest use: run one container per pod
- advanced: multiple containers in one pod share network stack (same IP), volumes, IPC/shared memory
Important:
- scale by increasing the number of pods, not containers inside a pod
- pod lifecycle is “brutal”: pods are replaced (immutable); they do not “come back” after death
## Deployments (controllers)
In practice you rarely deploy pods directly.
You use higher-level controllers such as Deployments to get:
- self-healing
- scaling
- rollouts/rollbacks
Common controller names you may encounter:
- Deployment
- DaemonSet
- StatefulSet
## Services
Pods are ephemeral (they can and will die).
Services provide a stable facade:
- stable DNS name
- load balancing
- “service discovery” style routing to healthy pods
## Getting Kubernetes locally: installers overview
Kubernetes is complex (control plane, etcd, kubelet, networking…). Installers help.
Examples of installers/distributions:
- K3s
- K3d (runs K3s in Docker)
- kind (Kubernetes in Docker)
- MicroK8s
- Minikube
- Docker Desktop (Windows/Mac)
## kubectl
kubectl is the CLI tool for controlling clusters via the API server.
Version rule of thumb:
- kubectl should be within one minor version of the cluster control plane.
Config:
- uses local kubeconfig file: ~/.kube/config
- supports switching between clusters
## K3d quick start
Create a cluster:
- k3d cluster create mycluster
Delete it:
- k3d cluster delete mycluster
Example verification commands:
- kubectl get nodes
- kubectl get pods --all-namespaces
## K3d complex setup
Example: 3 server nodes and 5 agent nodes:
- k3d cluster create complex -s 3 -a 5
K3d also supports a config file (useful for versioning and sharing setup).
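The `complex` cluster above can also be captured in such a config file. A sketch, assuming a recent k3d config schema (the `apiVersion` value varies across k3d releases):

```shell
# sketch of a k3d config mirroring: k3d cluster create complex -s 3 -a 5
cat > k3d-complex.yml <<'EOF'
apiVersion: k3d.io/v1alpha5   # schema version; check your k3d release
kind: Simple
metadata:
  name: complex
servers: 3
agents: 5
EOF
# with k3d installed you would create the cluster from it:
#   k3d cluster create --config k3d-complex.yml
```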

---
id: mat-03-working-with-pods
title: "Working with Pods"
---
# Working with Pods
## Previously
Key reminders:
- Pods are the atomic unit of scheduling.
- Pods die and are immutable.
- We usually do not deploy Pods directly (we use Controllers such as Deployments, DaemonSets, StatefulSets).
## Pods
Anything you run on Kubernetes runs in a Pod.
Typical effects:
- Deploy an app → a new Pod is created.
- Terminate an app → its Pod is destroyed.
- Scale up → new Pods are scheduled.
- Scale down → some Pods are destroyed.
Pods act as:
- containers with extra metadata
- a wrapper object that can bring containers “close together”
## Pod augmentation data
Pods can be augmented with:
- labels and annotations
- restart policies
- probes (startup/readiness/liveness)
- affinity and anti-affinity
- termination control
- security policies
- resource requests and limits
Helpful CLI exploration:
- `kubectl explain pods --recursive`
- `kubectl explain pod.spec.restartPolicy`
## Some Pod features (why they matter)
- **Labels**: group Pods and associate them with other objects.
- **Annotations**: attach extra info used by third-party/experimental tools.
- **Probes**: inspect container behavior and health.
- **Affinity/Anti-affinity**: control where Pods are deployed.
- **Termination control**: graceful termination.
- **Security policies**: enforce security features.
- **Resource requests/limits**: minimum required vs maximum allowed resources.
## Static Pods vs Controllers
If you deploy a Pod directly:
- it gets deployed (IP, DNS)
- kubelet can restart it for some time (depending on restart policy)
- but: no scaling, no rolling updates, no self-healing across node failure
- if the node dies → the Pod is gone
If a Pod is managed by a Controller:
- control plane monitors it
- if a Pod is irreparable, a new Pod is created
- new Pod means new IP/DNS → Pods should be treated as stateless
## Deploying Pods (high level)
Workflow:
1. Define the manifest (YAML).
2. Post it to the Kubernetes API (via kubectl).
3. API server authenticates/authorizes and validates.
4. Scheduler assigns to a worker node.
5. Kubelet starts monitoring it.
If managed by a Controller:
- config is added to the cluster store and the control plane monitors it.
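Step 1 of the workflow can be sketched as a minimal Pod manifest (the `hello-pod` name and `nginx` image are illustrative, not from the materials):

```shell
# step 1: define the manifest (YAML)
cat > pod.yml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    app: hello
spec:
  containers:
  - name: web
    image: nginx:1.25
EOF
# step 2: post it to the API server (requires a running cluster):
#   kubectl apply -f pod.yml
```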

---
id: mat-04-pod-networking-lifecycle-and-patterns
title: "Pod Networking, Lifecycle, and Multi-Container Patterns"
---
# Pod networking, lifecycle, and patterns
## Anatomy of a Pod (namespaces)
A Pod isolates resources via namespaces such as:
- net namespace (IP, ports, routing table)
- pid namespace (isolated process tree)
- mnt namespace (filesystems and volumes)
- UTS namespace (hostname)
- IPC namespace (Unix domain sockets and shared memory)
## Pods and shared networking
Every Pod has its own network namespace:
- its own IP
- its own TCP/UDP port range
- its own routing table
Single-container Pod:
- the container uses the Pod's network namespace directly.
Multi-container Pod:
- all containers share the Pod's IP/ports/routing table
- container-to-container communication is via `localhost:<port>`
## Kubernetes pod network overlay
- Each Pod gets a unique IP that is routable inside the cluster's **pod network**.
- The pod network is a flat overlay network that allows Pod-to-Pod communication even across nodes on different underlay networks.
## Pod lifecycle
Typical phases:
1. Pending (accepted but not yet running)
2. Running
3. Succeeded, for short-lived apps that complete; long-lived apps simply remain Running
## Pod restart policy
Possible configs:
- Always (default)
- OnFailure
- Never
Guidance:
- short-lived apps should be `OnFailure` or `Never`
- long-lived apps can be either, but are typically managed via Controllers
- short-lived apps are often wrapped in Jobs (e.g., CronJobs)
## Pod immutability
Pods are immutable.
If you need to change metadata, create a new Pod.
## Multi-container Pod patterns
Common patterns:
- **Sidecar**: performs a secondary task for the main container (logging, metrics, service mesh, …)
- adapter (variation): reformats output (e.g., nginx logs → prometheus)
- ambassador (variation): brokers connectivity to external systems
- **Init container**: guaranteed to start and finish before the main container (pull content, setup privileges, …)
## Pod hostnames
- Every container in a Pod inherits its hostname from the Pod name.
- All containers in a multi-container Pod share the same hostname.
- Use DNS-safe Pod names: `a-z`, `0-9`, `-`, `.`
## Pod DNS (example format)
Pods can have DNS names in the format:
- `pod-ip-address.my-namespace.pod.cluster-domain.example`
Example (default namespace; cluster domain `cluster.local`):
- `172-17-0-3.default.pod.cluster.local`

---
id: mat-05-kubectl-for-pods
title: "kubectl for Pods"
---
# kubectl for Pods
## Apply a manifest
Deploy a Pod:
- `kubectl apply -f pod.yml`
Check Pods:
- `kubectl get pods`
- watch continuously:
- `kubectl get pods --watch`
## Inspect pods with kubectl get
Useful output formats:
- `-o wide` (more columns)
- `-o yaml` (full YAML output)
Tip:
- compare desired state (`spec`) vs observed state (`status`).
## kubectl describe
- `kubectl describe pods <pod-name>`
Useful for events and deep status information.
## kubectl logs
- `kubectl logs <pod-name>`
- for a specific container:
- `kubectl logs <pod-name> --container <container-name>`
## kubectl exec
Run a command in a running Pod:
- `kubectl exec <pod-name> -- <cmd>`
Get a tty:
- `kubectl exec -it <pod-name> -- sh`
For multi-container pods, add `--container <container-name>` as needed.

---
id: mat-06-kubernetes-intro
title: Kubernetes Intro
---
# Kubernetes Intro
## What Kubernetes is
Kubernetes is an orchestrator for containerized, cloud-native microservices applications. In practice, it deploys applications, scales them, helps them recover from failures, and manages updates.
## Orchestration
An orchestrator deploys applications and dynamically responds to changes. Kubernetes can deploy apps, scale up/down, self-heal, perform rollouts and rollbacks, and more.
## Containerization
Containerization packages applications and their dependencies as images, then runs them as containers. Containers are often compared to the next generation of virtual machines: smaller, faster, and more portable, while still commonly used alongside VMs.
## Cloud native
Cloud-native applications exhibit properties such as auto-scaling, self-healing, automated updates, and rollbacks. Simply running an application in a public cloud does not automatically make it cloud-native.
## Microservices
A microservices application is composed of multiple small, specialized services that work together (for example: web front-end, catalog, cart, authentication, logging, and store). A common pattern is to deploy each microservice as one or more containers so each feature can be scaled and updated independently.
## Where Kubernetes came from
Kubernetes was developed by Google engineers partly in response to the rise of modern cloud computing (popularized by AWS) and explosive container adoption (accelerated by Docker). It was open-sourced in 2014 and donated to the Cloud Native Computing Foundation (CNCF). At its core, Kubernetes abstracts infrastructure and simplifies application portability.
## Kubernetes and Docker
Early Kubernetes versions used Docker as the container runtime for low-level tasks such as creating, starting, and stopping containers. Over time, Docker became bloated and alternatives emerged. Kubernetes introduced the Container Runtime Interface (CRI) so the runtime layer could be swapped. Kubernetes 1.24 removed support for Docker as a runtime, and many new clusters use containerd as the default runtime. Docker, containerd, and Kubernetes work with OCI-standard images and containers.
## Container runtime interface and OCI
The CRI makes the container runtime pluggable so clusters can select runtimes for different needs (for example isolation or performance). OCI standards define common formats and behaviors for images and containers so tooling can interoperate.
## Kubernetes, Borg, and CNCF
Google historically orchestrated containers at scale with internal tools (Borg and Omega). Kubernetes is not an open-source version of them, but shares related ideas and lessons learned. Kubernetes is an open-source CNCF project under the Apache 2.0 license.
## What's in the name and K8s
The word "Kubernetes" comes from the Greek word for "helmsman" (a ship's steersman). The logo is a ship's wheel. You will often see Kubernetes shortened to K8s (pronounced "kates"), where the 8 replaces the eight letters between K and s.
## Kubernetes as the operating system of the cloud
Kubernetes is sometimes called the operating system of the cloud because it abstracts differences between cloud platforms similarly to how operating systems abstract differences between servers:
- Operating systems schedule application processes onto server resources.
- Kubernetes schedules application microservices onto cloud and data center resources.
This abstraction enables hybrid cloud, multi-cloud, and cloud migration scenarios.
## Application scheduling analogy
Just as developers usually do not choose CPU cores or memory DIMMs directly (the OS schedules work), Kubernetes hides much of the complexity of selecting nodes, zones, and volumes. You describe what you want, and Kubernetes decides where and how to run it.
# Kubernetes principles of operation
## Kubernetes from 40K feet
Kubernetes is both a cluster and an orchestrator.
### Kubernetes as a cluster
A Kubernetes cluster is one or more nodes providing compute and other resources for applications.
### Control plane nodes and worker nodes
Clusters have control plane nodes and worker nodes. Control plane nodes implement the Kubernetes intelligence and must be Linux; worker nodes run your applications and can be Linux or Windows. Production clusters typically use three or five control plane nodes for high availability.
### Kubernetes as an orchestrator
Kubernetes deploys and manages applications across nodes and failure zones, handles failures, scales when demand changes, and manages rollouts and rollbacks.
## Control plane and worker nodes
The control plane is the collection of system services that provide the "brains" of Kubernetes: it exposes the API, schedules apps, implements self-healing, and manages scaling operations. Worker nodes run business applications.
## Control plane services
### The API server
The API server is the front end of Kubernetes. All commands and requests go through it, and internal control plane components also communicate through it. It exposes a RESTful API over HTTPS and enforces authentication and authorization. A typical deploy/update flow is: write a YAML manifest, post it to the API server, authenticate/authorize, persist it in the cluster store, and schedule the app to nodes.
### The cluster store etcd high availability and split brain
The cluster store holds the desired state of applications and cluster components and is the only stateful part of the control plane. It is based on etcd. Many clusters run an etcd replica on every control plane node for high availability, though some large clusters run etcd separately for performance. etcd prefers an odd number of replicas to reduce split brain risk during network partitions. If etcd experiences split brain, it enters read-only mode: applications continue running, but Kubernetes cannot scale or update them.
### RAFT consensus and write consistency
etcd uses the RAFT consensus algorithm to keep writes consistent and prevent corruption.
### Controllers and the controller manager
Controllers implement much of the cluster intelligence. They run as background watch loops that reconcile observed state with desired state. Kubernetes also runs a controller manager that spawns and manages controllers.
### The scheduler
The scheduler watches the API server for new work and assigns it to healthy worker nodes. It filters nodes that cannot run the task and ranks the remaining ones using factors such as available CPU/memory and whether required images are already present. If no suitable node exists, tasks can remain pending; with node autoscaling enabled, pending tasks can trigger adding nodes.
### The cloud controller manager
In public-cloud clusters, a cloud controller manager integrates with cloud services such as instances, load balancers, and storage (for example provisioning a load balancer when an app requests one).
## Worker node components
### Kubelet
The kubelet is the main Kubernetes agent on every worker node. It watches the API server for new tasks, instructs the runtime to execute tasks, and reports status back to the API server.
### Runtime
Each worker node has one or more runtimes that execute tasks such as pulling images and managing container lifecycle operations. Many newer clusters use containerd; some platforms use other CRI-compatible runtimes.
### Kube-proxy
kube-proxy implements cluster networking on the node and load balances traffic to tasks running on that node.
## Packaging apps for Kubernetes
Workloads such as containers (and other types) need to be wrapped in Pods to run on Kubernetes. Pods are then commonly wrapped in higher-level controllers (such as Deployments) to add features like scaling, self-healing, and rollouts.
### Object nesting containers pods deployments
A common nesting is: container (app + dependencies) inside a Pod (so it can run on Kubernetes) inside a Deployment (adds self-healing, scaling, and rollout/rollback behaviors). You post the Deployment YAML to the API server as the desired state.
## The declarative model and desired state
Kubernetes prefers the declarative model, centered on desired state, observed state, and reconciliation.
### Desired state observed state and reconciliation
Desired state is what you want; observed (actual/current) state is what you have; reconciliation is the process of keeping observed state aligned with desired state. You describe desired state in a YAML manifest, post it to the API server, Kubernetes persists it in the cluster store, and controllers reconcile the cluster until observed state matches the desired state and keep watching for drift.
### Declarative versus imperative
Imperative approaches rely on sequences of platform-specific commands and scripts. Declarative approaches describe the end state in a platform-agnostic way and fit well with version control. They also enable self-healing, autoscaling, and rolling updates.
### Self-healing and rolling updates example
If desired state requests 10 replicas and failures reduce the observed state, controllers create replacement Pods to return to 10. If a manifest changes to a newer image version, controllers replace old Pods with new Pods running the updated version.
## Pods
Pods are the unit of scheduling in Kubernetes.
### Pods and containers
Single-container Pods are common, which is why "Pod" and "container" are sometimes used interchangeably. Multi-container Pods are also used (for example with service meshes, initialization helpers, or tightly coupled helpers such as log scrapers).
### Multi-container pods and sidecars
A sidecar is a helper container that runs in the same Pod as the main container, providing additional services (for example encryption and telemetry in a service-mesh sidecar). Multi-container Pods can help keep each container focused on a single responsibility.
### Pod anatomy shared execution environment
A Pod is a shared execution environment for one or more containers. The environment includes a network stack, shared memory, volumes, and more. Containers in the same Pod share a Pod IP address and can communicate via localhost.
### Pod scheduling atomic readiness
Kubernetes schedules Pods (not individual containers). All containers in a Pod are scheduled to the same node. A Pod is marked ready only when all its containers are running.
### Pods as the unit of scaling
Scaling up adds Pods; scaling down deletes Pods. You do not scale by adding containers to an existing Pod.
### Pod lifecycle
Pods are created, live, and die. When a Pod dies, Kubernetes replaces it with a new Pod with a new ID and a new IP address. This encourages loose coupling so apps tolerate individual Pod failures.
### Pod immutability
Pods are immutable. To change a Pod, you replace it with a new one rather than modifying it in place.
## Deployments
Deployments are a common higher-level controller for stateless apps. They add self-healing, scaling, rolling updates, and versioned rollbacks for Pods.
## Services and stable networking
Pod IPs change due to failures, rollouts, and scaling. Services provide stable networking for groups of Pods by giving a stable front end (DNS name, IP, and port) and load balancing to a dynamic back end of healthy Pods (selected via labels).
# Getting Kubernetes
## Install everything with Docker Desktop
Docker Desktop can install Docker, provide a built-in multi-node Kubernetes cluster, and install and configure kubectl. Some examples that integrate with cloud load balancers or cloud storage require a cloud cluster instead of Docker Desktop.
### Create a Docker account
Create a free Personal Docker account.
### Install Docker Desktop and verify docker and kubectl
Install Docker Desktop (version 4.38 or newer) and verify:
- docker --version
- kubectl version --client=true -o yaml
### Deploy Docker Desktop built-in multi-node cluster
In Docker Desktop settings:
- Ensure "Use containerd for pulling and storing images" is enabled.
- Enable Kubernetes and choose the "kind (sign-in required)" option (the alternative creates a single-node cluster).
- Set Node(s) to 3 and enable showing system containers (advanced).
Apply and restart, then wait for the cluster to show as running.
### Verify cluster with kubectl get nodes
Use:
- kubectl get nodes
You should see three nodes: one control plane and two workers (in the Docker Desktop example). Docker Desktop runs the nodes as containers, but the kubectl experience is the same.
## Linode Kubernetes Engine LKE
LKE is a hosted Kubernetes service where the provider builds and manages the control plane and cluster operations. You configure kubectl to connect to the hosted cluster.
### Create an LKE cluster
From the Linode Cloud console: create a Kubernetes cluster with a label, region, Kubernetes version, and node pool.
### Configure kubectl and kubeconfig
kubectl uses a kubeconfig file called config in your home directory's hidden .kube folder. You may need to create ~/.kube and place the downloaded kubeconfig there (renamed to config), or merge it with an existing config.
### Switch contexts and test LKE cluster
Use kubectl config get-contexts and kubectl config use-context to select the context for the cluster you want to target, then test with kubectl get nodes.
## More about kubectl and kubeconfig
### What kubectl does per command
Each kubectl command is converted to an HTTP REST request, sent to the cluster defined by the current context in the kubeconfig, using the credentials in that context.
### Kubeconfig structure clusters users contexts current context
A kubeconfig defines clusters, users (credentials), contexts that map a user to a cluster, and a current-context that kubectl uses by default.
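Those four parts can be sketched as a kubeconfig file (the server address, names, and token are placeholders, not real cluster data):

```shell
# sketch of a kubeconfig: clusters, users, contexts, current-context
cat > kubeconfig-sketch.yml <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: lke-demo
  cluster:
    server: https://203.0.113.10:6443
users:
- name: lke-admin
  user:
    token: PLACEHOLDER-TOKEN
contexts:
- name: lke-demo-ctx
  context:
    cluster: lke-demo
    user: lke-admin
current-context: lke-demo-ctx
EOF
```

Switching with `kubectl config use-context` simply rewrites the `current-context` entry.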

`materials/07-git.md`
---
id: mat-07-git
title: Git
---
# Introduction to source control
## Why version control
When you work on software, you inevitably create multiple versions of the same artifact (source code, docs, config). Source control (version control) is used to:
- track changes over time
- collaborate safely on the same codebase
- keep a detailed history and understand “who changed what, when, and why”
- revert to previous versions
- integrate with development tooling (IDEs, build automation, CI/CD)
## Types of version control systems
Version control systems are commonly grouped into:
- **Local**: versioning kept locally on one machine.
- **Centralized**: a central server stores the history; clients check out working copies.
- **Distributed**: every clone contains the full history; collaboration happens by exchanging commits.
## Git overview
Git is a distributed version control system (DVCS) used to track changes in source code during software development. It supports collaboration between multiple developers, branching and merging, and reverting to previous versions. Git works like a **stream of snapshots**, not deltas.
# Git data model and states
## Snapshots not deltas
Git stores data as a series of snapshots of the project over time. When you commit, Git records a snapshot of what you staged.
## Working directory staging area git directory
A Git project can be viewed as three “places”:
- **Working directory**: your checked-out files where you edit.
- **Staging area** (index): where you build the next commit by selecting specific changes.
- **Git directory** (`.git`): the database where Git stores committed snapshots and metadata.
## File states modified staged committed
Git describes file state using three key states:
- **Modified**: changed in the working directory but not committed.
- **Staged**: marked to go into the next commit snapshot.
- **Committed**: safely stored in the Git database.
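A file can be walked through all three states in a throwaway repo (assuming Git is installed; file and commit names are illustrative):

```shell
# walk one file through untracked -> staged -> committed -> modified
cd "$(mktemp -d)"
git init -q .
git config user.name "Demo"
git config user.email "demo@example.com"
echo "v1" > notes.txt          # untracked: Git does not know it yet
git add notes.txt              # staged: selected for the next snapshot
git commit -q -m "add notes"   # committed: stored in the .git database
echo "v2" >> notes.txt         # modified: changed but not staged
git status -s                  # prints " M notes.txt"
```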
## Integrity and sha1
Everything in Git is checksummed. Git uses a SHA-1 checksum to identify content; if file contents change, the checksum changes. This supports integrity: you can't change content “silently” without changing the identifier.
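This content addressing is easy to see with `git hash-object`, which works even outside a repository:

```shell
# the same bytes always hash to the same 40-character SHA-1 ID
h1=$(printf 'hello\n' | git hash-object --stdin)
h2=$(printf 'hello\n' | git hash-object --stdin)
h3=$(printf 'hello!\n' | git hash-object --stdin)
echo "$h1"   # identical input -> identical ID; h3 differs
```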
# Getting started
## Installing Git and verifying
After installing Git, verify with:
```bash
git --version
```
## First-time setup with git config
Set your identity (used in commits):
```bash
git config --global user.name "Name Surname"
git config --global user.email "user@provider.com"
```
Inspect settings:
```bash
git config --list
git config user.name
git config user.email
```
## Getting help
Use Git's built-in help:
```bash
git help <verb>
git <verb> --help
```
## Getting a repository with git init
To start version control in an existing directory:
```bash
cd /path/to/project
git init
```
This creates a `.git` directory (the repository “skeleton”). To begin tracking existing files and make an initial commit:
```bash
git add <files...>
git commit -m "Initial project version"
```
## Cloning a repository with git clone
To get a copy of an existing repository:
```bash
git clone <url>
```
Unlike a “checkout” in some other systems, cloning pulls down a full copy of nearly all data, including project history.
# Recording changes
## Tracked and untracked files
Files are either:
- **Untracked**: not yet in Git's database.
- **Tracked**: known to Git (from the last snapshot and any staged files). Tracked files can be unmodified, modified, or staged.
## Checking status with git status
Use `git status` to see:
- what branch you're on
- which files are staged
- which files are modified but not staged
- which files are untracked
```bash
git status
```
## Short status format
Use the short format:
```bash
git status -s
```
It uses a two-column code. The left column refers to the staging area; the right column refers to the working directory. Examples:
- `??` untracked file
- `A ` added to staging area
- `M ` modified and staged
- ` M` modified but not staged
- `AM` added to staging area and then modified again
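These codes can be reproduced in a throwaway repository (a sketch; `notes.txt` is an example file name):

```shell
# Walk one file through the untracked -> staged -> staged-and-modified states
# and capture the short status at each step.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
echo one > notes.txt
st1=$(git status -s)    # "?? notes.txt" — untracked
git add notes.txt
st2=$(git status -s)    # "A  notes.txt" — staged new file
echo two >> notes.txt
st3=$(git status -s)    # "AM notes.txt" — staged, then modified again
printf '%s\n%s\n%s\n' "$st1" "$st2" "$st3"
```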
## Staging with git add
Stage new files or stage changes of tracked files:
```bash
git add <file>
git add .
```
Staging is how you choose what will go into the next commit snapshot.
## Viewing changes with git diff
To see what you changed but have **not staged** yet:
```bash
git diff
```
To see what you staged that will go into the next commit:
```bash
git diff --staged
```
(`--cached` is a synonym of `--staged`.)
## Committing with git commit
Create a commit (a snapshot of what you staged):
```bash
git commit -m "message"
```
## Skipping the staging area with git commit -a
For tracked files, you can skip explicit staging and commit modifications directly:
```bash
git commit -a -m "message"
```
## Removing files with git rm
Remove a file from your working directory **and** stage its removal:
```bash
git rm <file>
```
To keep the file in your working directory but remove it from Git tracking (stage an “untrack”):
```bash
git rm --cached <file>
```
## Moving or renaming files with git mv
Rename/move a file:
```bash
git mv oldname newname
```
## Ignoring files with .gitignore
To ignore generated files, secrets, build outputs, etc., use a `.gitignore` file. Pattern rules include:
- Blank lines are ignored.
- Lines starting with `#` are comments.
- Standard glob patterns work (`*`, `?`, `[a-z]`).
- A leading `/` matches from the repository root.
- A trailing `/` matches directories.
- `!` negates a pattern.
- `**` can match nested directories.
A repository can have one `.gitignore` at the root, and it can also have additional `.gitignore` files in subdirectories; nested rules apply only under that directory.
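A sketch of these rules in action, using `git check-ignore` to test which paths the patterns match (the file and directory names are made-up examples):

```shell
# Build a throwaway repo with a .gitignore exercising the rules above,
# then ask Git which paths it would ignore.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
cat > .gitignore <<'EOF'
# comments and blank lines are skipped
*.log
!important.log
/build/
**/tmp/
EOF
git check-ignore -q debug.log && echo "debug.log is ignored"
git check-ignore -q important.log || echo "important.log is NOT ignored (negated)"
```

`git check-ignore` exits 0 when a path matches an ignore pattern, so it is a convenient way to debug a `.gitignore` without creating the files.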
## Viewing history with git log
View commit history:
```bash
git log
```
A compact visualization is:
```bash
git log --oneline --decorate --graph --all
```
# Undoing mistakes
## Amending the last commit
To change the last commit (for example, fix the message or add missing staged changes):
```bash
git commit --amend
```
## Unstaging files
If you staged a file but want to unstage it:
```bash
git reset HEAD <file>
```
Newer Git also provides `git restore`:
```bash
git restore --staged <file>
```
## Discarding local modifications
To discard modifications in the working directory (dangerous; it overwrites your local changes):
```bash
git checkout -- <file>
```
Or with `git restore`:
```bash
git restore <file>
```
## Restore versus reset and safety
`git restore` was introduced in Git 2.23.0 as an alternative to using `git reset`/`git checkout` for some common “undo” cases. Commands that discard changes are dangerous: once you overwrite local modifications, recovering them can be difficult.
# Branching
## What a branch is
Branching means you diverge from the main line of development and continue work without “messing” with the main line. Git branching is lightweight and switching branches is fast.
In Git, a **branch** is a lightweight movable pointer to a commit. The default branch name is often `master` (and on GitHub the default branch is commonly `main`).
## HEAD pointer and current branch
Git keeps a special pointer called **HEAD** to know what branch you're currently on. When you commit, the current branch pointer moves forward.
## Creating and switching branches
Create a new branch:
```bash
git branch testing
```
Switch to a branch:
```bash
git checkout testing
```
Create and switch in one step:
```bash
git checkout -b <newbranchname>
```
Newer Git provides `git switch`:
```bash
git switch <branchname>
git switch -c <newbranchname>
```
When you check out another branch, your working directory files change to match the snapshot of that branch.
## Branches are cheap
A branch is represented by a small file that contains the 40-character SHA-1 checksum of the commit it points to. Creating and deleting branches is therefore cheap.
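You can peek at this plumbing yourself. The sketch below assumes Git 2.28 or newer (for `init -b`) and the default "files" ref storage backend:

```shell
# A branch is one small file holding a commit id, and HEAD records
# which branch is currently checked out.
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main
git config user.email demo@example.com
git config user.name "Demo User"
git commit -q --allow-empty -m "first"
cat .git/HEAD                # ref: refs/heads/main
sha=$(cat .git/refs/heads/main)
git branch testing           # creating a branch just writes another ref file
cat .git/refs/heads/testing  # the same 40-character commit id
```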
## Divergent history and viewing the graph
If you make commits on different branches, history diverges. You can visualize with:
```bash
git log --oneline --decorate --graph --all
```
# Merging
## Fast-forward merges
If the branch you merge has not diverged (your current branch is directly behind it), Git can do a **fast-forward** merge: it just moves the branch pointer forward.
## Three-way merges
If branches have diverged, Git performs a **three-way merge** using two branch tips and their common ancestor, creating a new merge commit (with 2+ parents).
Merge example:
```bash
git merge <branchname>
```
## Deleting branches after merge
After merging a topic branch, you often delete it:
```bash
git branch -d <branchname>
```
## Merge conflicts and resolution
If two branches change the same lines, Git can't merge automatically. You will see conflicts (e.g., in `git status`), and the conflicted file contains conflict markers like:
```text
<<<<<<< HEAD
... your current branch ...
=======
... the other branch ...
>>>>>>> <branchname>
```
Resolution workflow:
1. Open the file and resolve the conflict by editing.
2. Stage the resolved file (`git add <file>`).
3. Finish by committing (often `git commit` after a merge conflict).
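The whole workflow can be sketched end-to-end in a throwaway repository (`greeting.txt`, `topic`, and the commit messages are made-up examples):

```shell
# Manufacture a conflict by editing the same line on two branches,
# then resolve it with the edit / add / commit sequence above.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name "Demo User"
echo "original line" > greeting.txt
git add greeting.txt && git commit -qm "base"
git checkout -q -b topic
echo "topic version" > greeting.txt
git commit -qam "topic change"
git checkout -q -                      # back to the starting branch
echo "main version" > greeting.txt
git commit -qam "main change"
git merge topic || true                # stops with a conflict in greeting.txt
grep -c '<<<<<<<' greeting.txt         # conflict markers are present
echo "merged version" > greeting.txt   # step 1: edit to resolve
git add greeting.txt                   # step 2: stage the resolved file
git commit -qm "merge topic"           # step 3: commit to finish the merge
```

The final commit is a merge commit with two parents, which you can confirm with `git log --graph`.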
## Merge versus rebase
Merging and rebasing are two ways of integrating changes from one branch into another.
- **Merge** takes the endpoints of branches and merges them, creating a merge commit (when needed).
- **Rebase** replays commits from one branch onto another base, creating a “cleaner” linear history, but changing the history of the rebased commits.
## Basic rebasing workflow
A common rebase flow is to move a topic branch onto the updated base branch:
```bash
git checkout <topic-branch>
git rebase <base-branch>
```
After resolving any conflicts during a rebase, you continue with:
```bash
git rebase --continue
```
# Remotes
## Remote repositories and origin
A remote repository is a Git repository hosted elsewhere (for example on GitHub). When you clone, Git typically adds a remote named `origin`.
## Showing and adding remotes
Show remotes:
```bash
git remote
git remote -v
```
Add a remote:
```bash
git remote add <shortname> <url>
```
## Fetching and pulling
Fetch downloads data from a remote without merging it into your current work:
```bash
git fetch <remote>
```
Pull fetches and then merges into your current branch (when your branch is set up to track a remote branch):
```bash
git pull
```
## Pushing
Push commits to a remote branch:
```bash
git push <remote> <branch>
```
## Tracking branches upstream
To set the upstream (tracking) relationship when pushing:
```bash
git push -u origin <branch>
```
## Renaming and removing remotes
Rename a remote:
```bash
git remote rename <old> <new>
```
Remove a remote:
```bash
git remote remove <name>
```
## Deleting remote branches
Delete a branch on the remote:
```bash
git push origin --delete <branch>
```
# Tagging and releases
## Lightweight and annotated tags
Tags are typically used to mark releases.
- **Lightweight tag**: just a pointer to a commit.
- **Annotated tag**: stored as a full object with metadata and a message.
Create an annotated tag:
```bash
git tag -a v1.4 -m "my version 1.4"
```
Create a lightweight tag:
```bash
git tag v1.4-lw
```
## Listing and showing tags
List tags:
```bash
git tag
git tag -l "v1.*"
```
Show information for a tag:
```bash
git show v1.4
```
## Checking out tags and detached HEAD
If you check out a tag, Git puts you in a **detached HEAD** state. You can inspect the code, but commits you make won't belong to a named branch unless you create one.
```bash
git checkout v2.0.0
```
# Productivity
## Git aliases
You can define aliases to shorten commands:
```bash
git config --global alias.co checkout
git config --global alias.br branch
git config --global alias.ci commit
git config --global alias.st status
```
Aliases can also wrap longer commands, for example:
```bash
git config --global alias.unstage 'reset HEAD --'
git config --global alias.last 'log -1 HEAD'
```
## Credential helpers
To avoid typing credentials repeatedly, Git can use credential helpers (depending on your OS) to cache or store credentials for Git operations.
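As a sketch, the built-in `cache` helper (which keeps credentials in memory for a limited time) can be enabled for a single repository; helper availability is OS-dependent, with `store`, `osxkeychain` (macOS), or `manager` (Windows) as alternatives:

```shell
# Enable the in-memory credential cache for one repository only
# (drop the repo-local setup and add --global to apply it everywhere).
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config credential.helper cache
helper=$(git config credential.helper)
echo "$helper"   # cache
```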
# Collaboration on GitHub
## Forking and pull requests
A common collaboration pattern on GitHub is:
1. Fork the project (create your copy on GitHub).
2. Clone your fork locally.
3. Add the original repository as an `upstream` remote.
4. Create a topic branch, make commits, and push to your fork.
5. Open a pull request from your fork/topic branch into the upstream repository.
6. Keep your fork updated by fetching from upstream and integrating changes.
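The remote layout behind that flow can be sketched with local bare repositories standing in for GitHub (`upstream.git`, `fork.git`, and `fix-typo` are made-up names):

```shell
# Simulate the fork workflow: two bare repos play the roles of the
# original project and your fork; "work" is your local clone.
tmp=$(mktemp -d) && cd "$tmp"
git init -q --bare upstream.git          # the original project
git init -q --bare fork.git              # step 1: your fork
git clone -q fork.git work               # step 2: clone your fork
cd work
git config user.email demo@example.com
git config user.name "Demo User"
git remote add upstream ../upstream.git  # step 3: original repo as a remote
git checkout -q -b fix-typo              # step 4: topic branch
git commit -q --allow-empty -m "fix typo"
git push -q origin fix-typo              # push to your fork
git fetch -q upstream                    # step 6: sync with upstream
git remote -v                            # shows both origin and upstream
```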
## Suggested class assignment and homework workflow
The following sequence matches the lecture's suggested lab/homework ideas:
- Create a new repository on GitHub and connect it to a local project.
- Make commits, inspect status/diff/log, and push to GitHub.
- Create a new branch, make changes, and merge back.
- Create a three-branch workflow (for example `develop`, `test`, `production`) and practice merging changes through the branches.
- Fork a colleague's repository, make changes in a topic branch, and submit a pull request.

# Sample Submission: Q4 File-Judge
- `kubectl get pods`
- `kubectl get pods -o wide`
- `kubectl describe pod nginx-demo`
- `kubectl logs nginx-demo`
- `kubectl exec -it nginx-demo -- sh`

# Sample Answer: File-Like Question (Q4)
## Pod inspection/debug commands
1. `kubectl get pods -n default`
2. `kubectl get pods -o wide`
3. `kubectl describe pod nginx-demo`
4. `kubectl logs nginx-demo`
5. `kubectl exec -it nginx-demo -- sh`
## Short notes
- `get` shows current pod state.
- `describe` shows events and scheduling details.
- `logs` helps troubleshoot application output.
- `exec` lets you inspect container internals.

# Sample Answer: Multiple-Choice Question
Selected answer:
- Q3: `Services provide a stable network endpoint for Pods`

# Sample Answer: Single-Choice Question
Selected answer:
- Q2: `B`

# Sample Answer: Easy Task (What Kubernetes Is)
Kubernetes is a platform for orchestrating containerized applications. It helps teams deploy, scale, and manage containers across a cluster of machines. In practice, it keeps applications running and makes rollouts more reliable.

# Unit 01 File Task Sample (T4)
## (a) What Kubernetes is
Kubernetes is a system for orchestrating containerized applications across a cluster.
## (b) Correct single-choice answer from T2
B) etcd
## (c) Correct multiple-choice selections from T3
A, B, C are correct because they describe real Kubernetes capabilities and behavior, while D is incorrect because cluster state is stored in etcd.

# Unit 01 Sample: Multiple-Choice Question (T3)
A, B, C

# Unit 01 Sample: Simple Task (T1)
Kubernetes is a container orchestration platform for running applications in clusters. It helps teams deploy, scale, and recover workloads automatically across multiple machines. This makes operations more reliable than managing containers manually.

# Unit 01 Sample: Single-Choice (T2)
Answer: **B**
Reason: `etcd` stores Kubernetes cluster state and configuration data.