Looking forward to the new Kubernetes 1.13 features? At Sysdig we follow Kubernetes development closely, and here we bring you a brief intro to the new features you will find in the next version of Kubernetes.
Kubernetes 1.13 will be out on December 3, 2018, so here we go: What’s new in Kubernetes 1.13?
Kubernetes configuration management
#600 Dynamic audit configuration (alpha)
The Kubernetes audit configuration allows you to forward audit events to a remote API using a webhook. From Kubernetes 1.13 onwards you can set up a dynamic audit backend that supports pushing AuditSink API objects (remote endpoints) at runtime. Learn how to configure this new feature here. You can inspect these Kubernetes audit events using Falco.
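As a minimal sketch, an AuditSink object could look like the following (the webhook URL is just an example, and you will also need to enable the relevant feature gate and API group on the API-server):

    apiVersion: auditregistration.k8s.io/v1alpha1
    kind: AuditSink
    metadata:
      name: example-sink
    spec:
      policy:
        level: Metadata          # how much of each audit event to record
        stages:
        - ResponseComplete
      webhook:
        throttle:
          qps: 10
          burst: 15
        clientConfig:
          url: "https://audit.example.com/sink"   # example remote endpoint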
#598 Support webhook conversion for custom resources (alpha)
Prior to version 1.13, it was possible to define multiple versions of the same Custom Resource Definition only as long as you used the same schema for all of them (i.e. if you added a new field, you needed to add it to all versions). Now, different CRD versions can have different schemas, and you can define a conversion webhook to translate objects between versions.
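A sketch of a CRD declaring two versions and a conversion webhook could look like this (the group, names and webhook URL are example values, and the corresponding feature gate needs to be enabled):

    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: crontabs.example.com
    spec:
      group: example.com
      scope: Namespaced
      names:
        plural: crontabs
        singular: crontab
        kind: CronTab
      versions:
      - name: v1beta1            # old schema, still served
        served: true
        storage: false
      - name: v1                 # new schema, used for storage
        served: true
        storage: true
      conversion:
        strategy: Webhook        # delegate version conversion to a webhook
        webhookClientConfig:
          url: "https://conversion.example.com/convert"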
#576 API-server dry-run (alpha ⇒ beta)
Dry-run mode lets you emulate a real API request and see whether the request would have succeeded (admission controller chain, validation, merge conflicts, …) and/or what would have happened, without actually modifying the state. The response body for the request is supposed to be as close as possible to a non-dry-run response. This core feature enables other user-level features, like the kubectl diff subcommand.
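For example, assuming you have a deployment.yaml manifest and the dry-run feature enabled on the API-server, you can ask for a server-side dry-run like this:

    kubectl apply --server-dry-run -f deployment.yaml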
#491 Kubectl diff command (alpha ⇒ beta)
kubectl diff will give you a preview of the changes that kubectl apply would make on your cluster. This feature, while simple to describe, is really handy in the everyday job of a cluster operator. Note that you need to enable the dry-run feature on the API-server for this command to work.
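For instance, with the same example manifest as above:

    kubectl diff -f deployment.yaml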
#11 Simplify Kubernetes cluster creation (beta ⇒ stable)
This feature covers the overarching use case for kubeadm: facilitating Kubernetes cluster creation. It’s not specific to the Kubernetes 1.13 version, but several documentation improvements will land in this release, covering subjects like the different phases of kubeadm init, new sub-commands for kubeadm alpha, or the inclusion of CoreDNS in the deployment examples.
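As a sketch of the phases mechanism, in 1.13 you should be able to run a single init phase on its own (preflight is just one of several available phases):

    kubeadm init phase preflight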
Kubernetes core components
#166 Taint based eviction (alpha ⇒ beta)
Taint based evictions move from alpha to beta state in Kubernetes 1.13. When this feature is enabled (TaintBasedEvictions=true in --feature-gates), the taints are automatically added by the NodeController (or kubelet), and the former logic for evicting pods from nodes based on the Ready NodeCondition is disabled.
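With node conditions expressed as taints, pods can control how long they stay bound to a failing node through tolerations. A minimal sketch (the pod name and the 60 second value are just examples):

    apiVersion: v1
    kind: Pod
    metadata:
      name: eviction-tolerant-pod
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sleep", "3600"]
      tolerations:
      - key: "node.kubernetes.io/unreachable"
        operator: "Exists"
        effect: "NoExecute"
        tolerationSeconds: 60   # evict 60s after the node becomes unreachable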
#593 Scheduler can be configured to score a subset of the cluster nodes (alpha ⇒ beta)
Before Kubernetes 1.12, kube-scheduler needed to check the feasibility of every node in the cluster and then score the feasible ones; the node with the highest score was selected to run the pod(s). Now the Kubernetes scheduler can be configured to only consider a percentage of the nodes, as long as it can find enough feasible nodes in that set. This improves the scheduler’s performance in large clusters.
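As a sketch, this is how the knob could look in the (still alpha) scheduler component config; treat the exact API version and the 50% value as assumptions for illustration:

    apiVersion: kubescheduler.config.k8s.io/v1alpha1
    kind: KubeSchedulerConfiguration
    percentageOfNodesToScore: 50   # score at most half of the cluster's nodes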
#579 Updated plugin mechanism for kubectl (alpha ⇒ beta)
kubectl supports extensions that add new sub-commands or override existing ones, allowing for new and custom features not included in the main distribution of kubectl. This repository offers a nice extension example. With the redesigned mechanism, this feature has re-qualified to beta.
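A quick sketch of how the beta mechanism works: any executable on your PATH named kubectl-<name> becomes a new sub-command (kubectl-hello is a hypothetical example):

    # create a trivial plugin
    cat <<'EOF' > /usr/local/bin/kubectl-hello
    #!/bin/sh
    echo "hello from a kubectl plugin"
    EOF
    chmod +x /usr/local/bin/kubectl-hello
    # kubectl discovers and executes kubectl-hello
    kubectl hello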
#589 Move frequent Kubelet heartbeats to Lease API (alpha)
In versions of Kubernetes prior to 1.13, NodeStatus is the heartbeat from the node. This version introduces node leases, a lighter, more scalable heartbeat indicator. Node leases are renewed frequently, while NodeStatus is reported from node to master only when there is some change or enough time has passed. Read more about this feature in the Kubernetes node documentation.
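If you enable the NodeLease feature gate, you should be able to watch the per-node Lease objects being renewed in the namespace documented for this feature:

    kubectl get leases --namespace kube-node-lease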
Hardware support
#606 Support 3rd party device monitoring plugins (alpha)
In order to monitor resources provided by device plugins, monitoring agents need to be able to discover the set of devices that are in-use on the node and obtain metadata to describe which container the metric should be associated with. The kubelet now provides a gRPC service (PodResources) to enable this feature. Support for the “PodResources service” is still in alpha.
Storage
#351 Make raw block devices available for consumption via a persistent volume source (alpha ⇒ beta)
BlockVolume is enabled by default in Kubernetes 1.13; you can access a raw block device just by setting the value of volumeMode to Block. The ability to use a raw block device without a filesystem abstraction allows Kubernetes to provide better support for high-performance applications that need high I/O throughput and low latency, like databases.
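A minimal sketch of a raw block PVC and a pod consuming it as a device (names, size and device path are example values):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: raw-block-pvc
    spec:
      accessModes:
      - ReadWriteOnce
      volumeMode: Block          # request the device itself, not a filesystem
      resources:
        requests:
          storage: 10Gi
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: raw-block-consumer
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sleep", "3600"]
        volumeDevices:           # volumeDevices instead of volumeMounts
        - name: data
          devicePath: /dev/xvda  # the raw device appears at this path
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: raw-block-pvc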
#304 Add resize call support for FlexVolume to support volume resizing like LVM expansion (alpha)
PVC resizing was originally introduced in Kubernetes 1.8, and since then several volume plugins have added support for this feature. In this release, FlexVolumes join them: if you are using a FlexVolume and the underlying driver supports the operation, the PV can now be expanded just by updating the PVC in Kubernetes.
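For example, assuming a PVC named my-flex-pvc whose StorageClass sets allowVolumeExpansion: true and whose FlexVolume driver implements the resize call, requesting more space is just a PVC update:

    kubectl patch pvc my-flex-pvc -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'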
#178 Add support for out-of-tree CSI volume plugins in Kubernetes (beta ⇒ stable)
Container Storage Interface (CSI) is a specification to expose arbitrary storage systems to Kubernetes containerized workloads. Until now, Kubernetes volume plugins have been in-tree, which means that they are linked, compiled, built and shipped with the core Kubernetes binaries. In 1.13 this technology is considered stable, allowing third-party vendors to create and distribute CSI volume plugins out of the Kubernetes tree.
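Once a vendor ships a CSI driver, consuming it looks like any other StorageClass; a sketch with a hypothetical driver name:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: vendor-csi-sc
    provisioner: csi.example.com   # hypothetical out-of-tree CSI driver
    reclaimPolicy: Delete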
#490 Make the scheduler aware of a pod’s volume’s topology constraints, such as zone or node (beta ⇒ stable)
This feature allows you to control where a volume is provisioned and scheduled, and it also enables local volume binding. It can be used, for example, to couple a volume with a specific topology zone. This feature was present in previous Kubernetes versions, but it graduates to stable in v1.13.
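A sketch of a topology-aware StorageClass (the provisioner and zone are example values):

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: topology-aware-sc
    provisioner: kubernetes.io/gce-pd          # example in-tree provisioner
    volumeBindingMode: WaitForFirstConsumer    # delay binding until a pod is scheduled
    allowedTopologies:
    - matchLabelExpressions:
      - key: failure-domain.beta.kubernetes.io/zone
        values:
        - us-central1-a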
Kubernetes cloud integration
#629 AWS ALB ingress controller (alpha)
A much-requested feature: Kubernetes Ingress resources can now be satisfied by provisioning Amazon Application Load Balancers on demand, as long as AWS integration credentials have been configured for the cluster. Read more about this feature here.
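A sketch of an Ingress annotated for the ALB controller (the annotations follow the aws-alb-ingress-controller project; names and values are examples):

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: example-ingress
      annotations:
        kubernetes.io/ingress.class: alb                  # handled by the ALB controller
        alb.ingress.kubernetes.io/scheme: internet-facing
    spec:
      rules:
      - http:
          paths:
          - path: /*
            backend:
              serviceName: example-service
              servicePort: 80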
#630 Amazon elastic block store CSI driver (alpha)
The Amazon Elastic Block Store CSI driver provides a CSI interface to manage the lifecycle of EBS volumes. The driver is still in alpha state and not supported in Kubernetes versions prior to 1.12. Basic volume operations that can already be used are: CreateVolume/DeleteVolume, ControllerPublishVolume/ControllerUnpublishVolume, NodeStageVolume/NodeUnstageVolume, NodePublishVolume/NodeUnpublishVolume and Volume Scheduling.
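A sketch of a StorageClass backed by the driver (the provisioner name comes from the aws-ebs-csi-driver project; the rest are example values):

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: ebs-csi-sc
    provisioner: ebs.csi.aws.com
    volumeBindingMode: WaitForFirstConsumer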
#631 External AWS CCM (alpha)
The cloud-controller-manager is a daemon that embeds cloud-specific control loops. Since cloud providers develop and release at a different pace compared to the Kubernetes project, abstracting the provider-specific code to the cloud-controller-manager binary allows cloud vendors to evolve independently. This Kubernetes version debuts the alpha AWS cloud controller manager.
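Very roughly, the moving pieces look like this (a sketch only; the binary name follows the kubernetes/cloud-provider-aws project, and most required flags are omitted):

    # each kubelet starts with the in-tree cloud integration disabled
    kubelet --cloud-provider=external
    # the AWS cloud controller manager runs the cloud-specific control loops instead
    aws-cloud-controller-manager --cloud-provider=aws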
#586 Azure availability zones (alpha ⇒ beta)
Kubernetes v1.12 added support for Azure availability zones (AZ). Nodes in an availability zone will be added with the label failure-domain.beta.kubernetes.io/zone=<region>-<AZ>, and topology-aware provisioning is added for the Azure managed disks storage class. This Kubernetes version graduates Azure availability zones from alpha to beta stage.
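For instance, you can list the nodes that live in one Azure availability zone by selecting on that label (southeastasia-1 is just an example <region>-<AZ> value):

    kubectl get nodes -l failure-domain.beta.kubernetes.io/zone=southeastasia-1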
#604 Support Azure cross resource group nodes (alpha ⇒ beta)
Kubernetes v1.12 added support for cross resource group (RG) nodes and unmanaged (such as on-prem) nodes in the Azure cloud provider. This Kubernetes version graduates cross resource group nodes from alpha to beta stage.
Deprecations
#622 Drop support for etcd2
All documentation references and support for v2 of etcd have been removed in this version.