DevOps Weekly #1
Hey! Welcome to my first weekly update. I've combed through a lot of material to pull out the best DevOps-related bits for you. I do the searching so you don't have to, so sit back and enjoy what I've found!
Insights on the Kubernetes v1.30 Update
Structured Parameters for Dynamic Resource Allocation (KEP-4381): This enhancement improves upon the dynamic resource allocation feature first introduced in v1.26. It allows third-party Dynamic Resource Allocation (DRA) drivers to describe resources using a structured model predefined by Kubernetes. This model makes the parameters less opaque, enabling components like Cluster Autoscaler and job schedulers to make more informed decisions without needing to consult third-party controllers.
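For orientation, here's a minimal sketch of the user-facing side of DRA, assuming the v1.30 alpha API (resource.k8s.io/v1alpha2) and a hypothetical ResourceClass named example-gpu-class published by a driver; field names may still shift while the feature is in alpha.

```yaml
# Hedged DRA sketch (alpha API, subject to change): a pod requests a device
# through a ResourceClaim instead of a node-level extended resource.
apiVersion: resource.k8s.io/v1alpha2
kind: ResourceClaim
metadata:
  name: gpu-claim
spec:
  resourceClassName: example-gpu-class   # hypothetical class published by a DRA driver
---
apiVersion: v1
kind: Pod
metadata:
  name: dra-demo
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    resources:
      claims:
      - name: gpu                         # references the pod-level claim below
  resourceClaims:
  - name: gpu
    source:
      resourceClaimName: gpu-claim
```

With structured parameters, the scheduler and Cluster Autoscaler can reason about what such a claim needs without calling out to the driver's controller first.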
Node Memory Swap Support (KEP-2400): Memory swap support on Linux nodes is undergoing a major change for increased system stability. The previously available UnlimitedSwap behavior is being removed because it could compromise node stability. The default setting is now NoSwap, where the kubelet can run on a node with active swap space, but pods do not use any swap. An alternative mode, LimitedSwap, allows pods controlled use of swap space for their virtual memory.
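This is a kubelet-level setting. A minimal sketch of a kubelet configuration that opts into LimitedSwap (leaving memorySwap unset keeps the NoSwap default):

```yaml
# KubeletConfiguration fragment: allow pods limited use of node swap.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  NodeSwap: true          # swap support feature gate
memorySwap:
  swapBehavior: LimitedSwap
```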
Support for User Namespaces in Pods (KEP-127): This feature, moving to beta in v1.30, enhances pod isolation on Linux systems to mitigate high/critical CVEs. It supports pods with and without volumes, custom UID/GID ranges, and more, providing improved security.
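In practice, opting a pod into its own user namespace is a single field. A minimal sketch (assuming the UserNamespacesSupport feature gate is on for the beta feature; the image is just for illustration):

```yaml
# Pod running in its own user namespace: in-container UIDs/GIDs are mapped
# to unprivileged ranges on the host.
apiVersion: v1
kind: Pod
metadata:
  name: userns-demo
spec:
  hostUsers: false          # false = give this pod a separate user namespace
  containers:
  - name: app
    image: nginx:1.25       # placeholder image
```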
Structured Authorization Configuration (KEP-3221): Also advancing to beta, this feature allows for more complex authorization configurations. It supports creating authorization chains with multiple webhooks and fine-grained control, including explicit deny policies and pre-filtering of requests using CEL rules. The configuration can be dynamically reloaded by the API server.
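To give a feel for the format, here's a hedged sketch of an AuthorizationConfiguration file passed to the API server via --authorization-config; the webhook name, kubeconfig path, and CEL match condition are placeholders.

```yaml
# Sketch of a structured authorization chain: a CEL-filtered webhook,
# followed by the built-in Node and RBAC authorizers.
apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthorizationConfiguration
authorizers:
- type: Webhook
  name: example-policy-webhook          # placeholder name
  webhook:
    authorizedTTL: 300s
    unauthorizedTTL: 30s
    timeout: 3s
    subjectAccessReviewVersion: v1
    matchConditionSubjectAccessReviewVersion: v1
    failurePolicy: Deny
    connectionInfo:
      type: KubeConfigFile
      kubeConfigFile: /etc/kubernetes/authz-webhook.kubeconfig   # placeholder path
    matchConditions:
    # Only send resource requests (not /healthz and friends) to this webhook.
    - expression: has(request.resourceAttributes)
- type: Node
  name: node
- type: RBAC
  name: rbac
```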
Container Resource-Based Pod Autoscaling (KEP-1610): This feature, graduating to stable, extends the HorizontalPodAutoscaler so it can scale on the resource usage of individual containers rather than the aggregate resource use of the whole pod.
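A minimal sketch of the stable API: an HPA that scales a hypothetical Deployment named web on the CPU utilization of just its app container, ignoring sidecars.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                  # hypothetical target
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: ContainerResource    # scale on one container, not the whole pod
    containerResource:
      name: cpu
      container: app           # hypothetical container name
      target:
        type: Utilization
        averageUtilization: 70
```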
CEL for Admission Control (KEP-3488): The integration of Common Expression Language (CEL) for admission control introduces a dynamic way to define and enforce complex, fine-grained policies through the Kubernetes API. This enhancement is aimed at improving the security, governance, and flexibility of Kubernetes clusters.
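As a hedged illustration, this surfaces as ValidatingAdmissionPolicy (GA in v1.30): the policy below rejects Deployments with more than 5 replicas and is bound cluster-wide. The names and the replica limit are made up for the example.

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: replica-limit
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
    - apiGroups: ["apps"]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE"]
      resources: ["deployments"]
  validations:
  # CEL expression evaluated against the incoming object.
  - expression: "object.spec.replicas <= 5"
    message: "deployments may not request more than 5 replicas"
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: replica-limit-binding
spec:
  policyName: replica-limit
  validationActions: ["Deny"]
```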
4 Ways to Improve Your CI/CD Pipeline
This article offers practical ways to improve your CI/CD pipeline. Main concepts:
Parallelization and Caching: Speed up builds with parallel processing and caching (see the sketch after this list).
Code Quality Checks: Use static analysis and automated testing for error detection.
Deployment Strategies: Adopt Blue-Green and Canary models for reliable releases.
Secure Secrets: Store sensitive data securely using AWS Secrets Manager or Azure Key Vault.
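To ground the first point, here's a hedged sketch in GitHub Actions syntax (any CI system with job parallelism and a cache works the same way): tests are split across a matrix of parallel jobs and the dependency cache is keyed on the lockfile. Job names, paths, and the shard flag are placeholders.

```yaml
name: ci
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        shard: [1, 2, 3, 4]          # run four test shards in parallel
    steps:
      - uses: actions/checkout@v4
      - uses: actions/cache@v4        # reuse dependencies between runs
        with:
          path: ~/.npm
          key: npm-${{ hashFiles('package-lock.json') }}
      - run: npm ci
      - run: npm test -- --shard=${{ matrix.shard }}/4   # placeholder shard flag
```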
Authentication and Authorization with Istio and OPA on Kubernetes
Even if you're not currently using Istio or Open Policy Agent (OPA), this article offers valuable insights into sophisticated authentication (AuthN) and authorization (AuthZ) implementations. These tools can significantly reduce the burden on developers by shifting the work from writing and maintaining in-app AuthN/AuthZ code to managing YAML configuration in the application's Helm chart.
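For a flavor of what that configuration looks like, here's a hedged sketch: a RequestAuthentication that validates JWTs, plus a CUSTOM AuthorizationPolicy that delegates decisions to OPA. The issuer, selector labels, and the opa-ext-authz provider name (which would have to be registered as an extensionProvider in the mesh config) are assumptions.

```yaml
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: jwt-authn
spec:
  selector:
    matchLabels:
      app: my-app                                                  # hypothetical workload label
  jwtRules:
  - issuer: "https://issuer.example.com"                           # placeholder issuer
    jwksUri: "https://issuer.example.com/.well-known/jwks.json"    # placeholder JWKS endpoint
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: opa-authz
spec:
  selector:
    matchLabels:
      app: my-app
  action: CUSTOM
  provider:
    name: opa-ext-authz                 # assumed ext_authz extensionProvider name
  rules:
  - to:
    - operation:
        paths: ["/api/*"]               # only delegate API routes to OPA
```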
Virtual IPs for AWS EKS network (Calico)
This guide offers a straightforward approach to installing Calico on AWS EKS. Calico assigns pod IP addresses from its own virtual range, so pods stop consuming addresses from your VPC subnets and you avoid running up against their limits.
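As a rough sketch of the idea (assuming the Tigera operator install path), the Installation resource lets you hand Calico a pod CIDR of its own, outside the VPC; the CIDR below is a placeholder.

```yaml
# Calico as the CNI on EKS, with pod IPs drawn from Calico's own pool
# instead of the VPC subnets.
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  kubernetesProvider: EKS
  cni:
    type: Calico
  calicoNetwork:
    ipPools:
    - cidr: 172.16.0.0/16       # placeholder virtual range, not part of the VPC
      encapsulation: VXLAN
      natOutgoing: Enabled
```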
How to Shift Left Security in Infrastructure as Code Using AWS CDK and Checkmarx KICS
This concise guide demonstrates how to combine AWS CDK with Checkmarx KICS (Keeping Infrastructure as Code Secure), enabling automatic security validation of your infrastructure code.
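A hedged sketch of how the two fit together in CI (GitHub Actions syntax assumed): cdk synth writes CloudFormation templates into cdk.out/, and KICS scans that directory. The fail-on threshold and image tag are illustrative.

```yaml
name: iac-scan
on: [pull_request]
jobs:
  kics:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npx cdk synth          # emits CloudFormation templates into cdk.out/
      - run: |
          docker run --rm -v "$PWD/cdk.out:/scan" checkmarx/kics:latest \
            scan -p /scan -t CloudFormation --fail-on high
```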
Use headless clusters in Amazon DocumentDB for cost-effective multi-Region resiliency
Learn how to operate Amazon DocumentDB in headless mode in secondary Regions. This approach not only saves costs but also keeps data replicated to the secondary Region, so you can spin up a compute instance within 15 minutes and shift workloads there in the event of an outage.

