What’s new in Kubernetes 1.17?


Kubernetes 1.17 is about to be released! This short-cycle release is focused on small improvements and house cleaning. There are implementation optimizations all over the place, new features like the promising topology aware routing, and improvements to the dual-stack support. Here is the list of what’s new in Kubernetes 1.17.

Kubernetes 1.17 – Editor’s pick:

These are the features we find most exciting in this release (your mileage may vary): the new topology aware routing, the improved dual-stack support, and the machine-readable kubeadm output.

Kubernetes 1.17 core

#1053 Kubeadm machine/structured output

Stage: Alpha
Feature group: cluster-lifecycle

The most common way to deploy a Kubernetes cluster is via automated tools, like the kubeadm command, or tools that rely on it, like Terraform. The current output of kubeadm is not structured, so even a small change to it can break the integration with those other tools.

This alpha feature allows getting the output from kubeadm in machine-readable, structured formats like JSON, YAML, or Go templates.

While the default output prints something like this:

$ kubeadm token list
TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
7vg8cr.pks5g06s84aisb27   <invalid>   2019-06-05T17:13:55+03:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token

Using the -o or --experimental-output flag, you can get a structured version:

$ kubeadm token list -o json
{
    "kind": "BootstrapToken",
    "apiVersion": "output.kubeadm.k8s.io/v1alpha1",
    "creationTimestamp": null,
    "token": "7vg8cr.pks5g06s84aisb27",
    "description": "The default bootstrap token generated by 'kubeadm init'.",
    "expires": "2019-06-05T14:13:55Z",
    "usages": [
        "authentication",
        "signing"
    ],
    "groups": [
        "system:bootstrappers:kubeadm:default-node-token"
    ]
}

Until the Kubernetes documentation gets updated, you can check some examples in the PR and KEP pages for this feature.

#1143 Clarify use of node-role labels within Kubernetes and migrate old components

Stage: Alpha
Feature group: architecture

The initial goal for the node-role.kubernetes.io namespace for labels was to provide a grouping convention for cluster users. These labels are optional, only meant for displaying cluster information in management tools, and similar non-critical use cases.

Against the usage guidelines, some core and related projects started depending on them to vary their behavior, which could lead to problems in some clusters.

This feature summarizes the work done to clarify the proper use of the node-role labels, so they won’t be misused again, and it removes the dependency on them where needed.

This feature implies a change of behavior in some cases, which can be reversed with the LegacyNodeRoleBehavior and NodeDisruptionExclusion feature gates. You can learn more in the Kubernetes documentation.

#382 Taint node by Condition

Stage: Graduating to Stable
Feature group: scheduling

In beta since the 1.12 Kubernetes release, this feature finally graduates to stable.

The Taint node by condition feature causes the node controller to dynamically create taints corresponding to observed node conditions. The user can choose to ignore some of the node’s problems (represented as Node conditions) by adding appropriate pod tolerations.
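As a minimal sketch (pod and container names here are illustrative), a pod that should still be scheduled on nodes reporting memory pressure can tolerate the corresponding taint:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tolerant-pod   # hypothetical name
spec:
  containers:
  - name: app
    image: nginx
  tolerations:
  # Tolerate the taint that the node controller adds when the
  # MemoryPressure node condition is true
  - key: node.kubernetes.io/memory-pressure
    operator: Exists
    effect: NoSchedule
```

Without this toleration, the scheduler will avoid placing the pod on any node carrying that taint.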

#548 Schedule DaemonSet Pods by kube-scheduler

Stage: Graduating to Stable

Feature group: scheduling

Enabled by default since the 1.12 Kubernetes release, this feature finally graduates to stable.

Instead of being scheduled by the DaemonSet controller, these pods are now scheduled by the default scheduler. This means that DaemonSet pods are created in the Pending state, and that the scheduler considers pod priority and preemption for them.

#495 Configurable Pod Process Namespace Sharing

Stage: Graduating to Stable

Feature group: node

In beta since the 1.12 Kubernetes release, this feature finally graduates to stable.

Users can configure containers within a pod to share a common PID namespace by setting an option in the PodSpec. More on this in the Kubernetes documentation: share process namespace.
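A minimal sketch of the PodSpec option (pod and container names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-pid-pod   # hypothetical name
spec:
  shareProcessNamespace: true   # all containers share one PID namespace
  containers:
  - name: app
    image: nginx
  - name: debugger
    image: busybox
    command: ["sleep", "3600"]
    # This container can now see (and signal) the nginx processes
```

This is handy for sidecars that need to inspect or signal processes in a sibling container.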

#589 Move frequent Kubelet heartbeats to Lease API

Stage: Graduating to Stable

Feature group: node

The node-leases feature complements the existing NodeStatus, introducing a lighter, more scalable heartbeat indicator.

Read more in the release for 1.14 of the What’s new in Kubernetes series.

Network

#563 Add IPv4/IPv6 dual-stack support

Stage: Major Change to Alpha

Feature group: network

This feature summarizes the work done to natively support dual-stack mode in your cluster, so you can assign both IPv4 and IPv6 addresses to a given pod.

Read more in the release for 1.16 of the What’s new in Kubernetes series.

In 1.17, there are three main improvements related to this feature:

  • kube-proxy now supports dual stack with EndpointSlices and IPVS.
  • Now you can set podIPs using the downward API, with the status.podIPs field.
  • --node-cidr-mask-size-ipv6 now defaults to /64, instead of mirroring the /24 value from IPv4.
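As a sketch of the downward API improvement (pod and variable names are illustrative), a container can read the pod's IPs, both IPv4 and IPv6 in a dual-stack cluster, through an environment variable:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dual-stack-demo   # hypothetical name
spec:
  containers:
  - name: app
    image: nginx
    env:
    # Expose the pod's IP addresses to the container
    # via the downward API
    - name: MY_POD_IPS
      valueFrom:
        fieldRef:
          fieldPath: status.podIPs
```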

Dual stack is a big project, so expect new improvements in the following Kubernetes releases before this feature leaves the alpha stage.

#536 Topology aware routing of services

Stage: Graduating to Alpha

Feature group: network

Optimizing network traffic is essential to improve performance (and reduce costs) in complex Kubernetes deployments. Service Topology optimizes traffic by keeping it between pods that are close to each other.

This feature is enabled by the ServiceTopology feature gate:

--feature-gates="ServiceTopology=true"

Configuration is done at the Service level via the topologyKeys setting, which contains an ordered list of topology keys. Traffic will only be routed to endpoints whose topology labels match:

["kubernetes.io/hostname", "topology.kubernetes.io/zone", "*"]

In this example, traffic will be sent to endpoints within the same hostname if possible; if not, it will fall back to endpoints within the same zone. As a last resort, it will use any available endpoint.
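Put together, a Service using this preference order could look like the following sketch (Service name, selector, and port are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service   # hypothetical name
spec:
  selector:
    app: my-app      # hypothetical selector
  ports:
  - port: 80
  topologyKeys:
  - "kubernetes.io/hostname"        # prefer endpoints on the same node
  - "topology.kubernetes.io/zone"   # then endpoints in the same zone
  - "*"                             # finally, any endpoint
```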

You can read more about Service Topology on this comprehensive post.

#752 EndpointSlice API

Stage: Graduating to Beta

Feature group: network

The new EndpointSlice API splits a Service's endpoints across several EndpointSlice resources. This solves many problems in the current API related to big Endpoints objects. The new API is also designed to support other future features, like multiple IPs per pod.

Read more in the release for 1.16 of the What’s new in Kubernetes series.

#980 Finalizer Protection for Service LoadBalancers

Stage: Graduating to Stable

Feature group: network

There are various corner cases where cloud resources are orphaned after the associated Service is deleted. Finalizer Protection for Service LoadBalancers was introduced to prevent this from happening.

Read more in the release for 1.15 of the What’s new in Kubernetes series.

Kubernetes 1.17 API

#1152 Avoid serializing the same object independently for every watcher

Stage: Graduating to Stable

Feature group: api-machinery

This optimization of kube-apiserver improves performance when many watchers are observing the same set of objects. The problem manifests in clusters with several thousand nodes, where simple operations like creating an Endpoints object can take several seconds to complete.

The problem was traced to the serialization of the objects: the old implementation serialized each object once per watcher. The new implementation uses a cache to serialize each object only once for all watchers.

You can read more about this optimization in the implementation details.

#575 Defaulting of Custom Resources

Stage: Graduating to Stable

Feature group: api-machinery

This covers two features aiming to facilitate the JSON handling and processing associated with CustomResourceDefinitions.

Read more in the 1.15 release of the What’s new in Kubernetes series.
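As a sketch of defaulting, a CRD can declare a default value in its structural schema, which the API server applies when the field is omitted (group, kind, and field names here are illustrative):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com   # hypothetical CRD
spec:
  group: example.com
  names:
    kind: Widget
    plural: widgets
  scope: Namespaced
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              replicas:
                type: integer
                default: 1   # applied by the API server if the field is omitted
```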

#956 Add Watch Bookmarks support

Stage: Graduating to Stable

Feature group: api-machinery

The “bookmark” watch event is used as a checkpoint, indicating that all objects up to a given resourceVersion requested by the client have already been sent. The API can skip sending all these events, avoiding unnecessary processing on both sides.

Read more in the release for 1.15 of the What’s new in Kubernetes series.

Storage

#177 Snapshot / Restore Volume Support for Kubernetes (CRD + External Controller)

Stage: Graduating to Beta

Feature group: storage

In alpha since the 1.12 Kubernetes release, this feature finally graduates to beta.

Similar to how API resources PersistentVolume and PersistentVolumeClaim are used to provision volumes for users and administrators, VolumeSnapshotContent and VolumeSnapshot API resources can be provided to create volume snapshots for users and administrators. Read more about volume snapshots here.
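A minimal sketch of requesting a snapshot of an existing claim (the snapshot class and PVC names are hypothetical, and the v1beta1 API group matches the beta stage of this feature):

```yaml
apiVersion: snapshot.storage.k8s.io/v1beta1   # beta API in Kubernetes 1.17
kind: VolumeSnapshot
metadata:
  name: my-snapshot   # hypothetical name
spec:
  volumeSnapshotClassName: csi-snapclass   # hypothetical snapshot class
  source:
    persistentVolumeClaimName: my-pvc      # hypothetical existing PVC
```

The CSI driver backing the PVC must support snapshots for this to work.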

#554 Dynamic Maximum volume count

Stage: Graduating to Stable

Feature group: storage

In beta since the 1.12 Kubernetes release, this feature finally graduates to stable.

When the dynamic volume limits feature is enabled, Kubernetes automatically determines the node type and supports the appropriate number of attachable volumes for the node and vendor.

You can read more about dynamic volume limits in the Kubernetes documentation.

#557 Kubernetes CSI topology support

Stage: Graduating to Stable

Feature group: storage

Topology allows Kubernetes to make intelligent decisions when dynamically provisioning volumes by getting scheduler input on the best place to provision a volume for a pod. To achieve feature parity with in-tree storage plugins, the topology capabilities have been implemented for out-of-tree CSI storage plugins.

Read more in the release for 1.14 of the What’s new in Kubernetes series.

#559 Provide environment variables expansion in sub path mount

Stage: Graduating to Stable

Feature group: storage

Systems often need to define mount paths depending on environment variables. The previous workaround was to create a sidecar container with symbolic links. To avoid this boilerplate, the subPathExpr field allows expanding environment variables into the subPath.
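For example, a pod can mount a per-pod directory inside a shared volume, named after the pod itself (pod name, volume, and host path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo   # hypothetical name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo hello > /logs/out.txt && sleep 3600"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    volumeMounts:
    - name: workdir
      mountPath: /logs
      # Expands to a directory named after the pod inside the volume
      subPathExpr: $(POD_NAME)
  volumes:
  - name: workdir
    hostPath:
      path: /var/log/pods-demo   # hypothetical host directory
```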

Read more in the release for 1.15 of the What’s new in Kubernetes series.

#625 In-tree storage plugin to CSI Driver Migration

Stage: Graduating to Beta

Feature group: storage

Storage plugins were originally in-tree, inside the Kubernetes codebase, increasing the complexity of the base code and hindering extensibility. Moving all this code to loadable plugins will reduce the development costs and will make it more modular and extensible.

Read more in the release for 1.15 of the What’s new in Kubernetes series.

Other Kubernetes 1.17 features

#837 Promote Cloud Provider Labels to GA

Stage: Graduating to Stable

Feature group: cloud-provider

When nodes and volume resources are created, three labels are applied to them to provide information related to the cloud provider. After being in beta for some time, these labels are being promoted to stable, which requires a naming change. The existing labels:

  • beta.kubernetes.io/instance-type
  • failure-domain.beta.kubernetes.io/zone
  • failure-domain.beta.kubernetes.io/region

will be renamed to drop the 'beta' prefix:

  • node.kubernetes.io/instance-type
  • topology.kubernetes.io/zone
  • topology.kubernetes.io/region

The old labels are marked as deprecated and will be completely removed in Kubernetes 1.21.
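Manifests that select nodes by these labels should move to the stable names; for example, a hypothetical pod pinned to a zone (pod name and zone value are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: zonal-pod   # hypothetical name
spec:
  nodeSelector:
    # Stable label name; previously failure-domain.beta.kubernetes.io/zone
    topology.kubernetes.io/zone: us-east-1a   # hypothetical zone
  containers:
  - name: app
    image: nginx
```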

#960 Behavior-driven conformance testing

Stage: Graduating to Stable
Feature group: architecture

This feature summarizes the efforts to improve the testing suite for the Kubernetes API. The goal is to check not only which API endpoints are tested, but also to what extent the behavior of each endpoint is covered by the tests.

If you are interested in testing tools, check out how the behaviors have been defined and the plan to migrate the current tests to the new format.

#714 Break apart the kubernetes test tarball

Stage: Graduating to Stable

Feature group: testing

The kubernetes-test.tar.gz file included in the Kubernetes release artifacts contains test resources, both portable and platform-specific. This file has slowly grown to 1.5GB, which complicates and slows down the testing process.

From now on, this file will be split into seven smaller, platform-specific versions.

#1043 RunAsUserName for Windows

Stage: Graduating to Beta

Feature group: windows

Now that Kubernetes has support for Group Managed Service Accounts, we can use the Windows-specific runAsUserName property to define which user will run a container's entrypoint.
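A minimal sketch of the pod-level setting (pod name, image, and user are illustrative; the property can also be set per container):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: windows-pod   # hypothetical name
spec:
  securityContext:
    windowsOptions:
      # Windows user that will run the entrypoint of every
      # container in the pod
      runAsUserName: "ContainerUser"
  containers:
  - name: app
    image: mcr.microsoft.com/windows/servercore:ltsc2019
  nodeSelector:
    kubernetes.io/os: windows
```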

Read more in the release for 1.16 of the What’s new in Kubernetes series.


That’s all folks! Exciting as always; get ready to upgrade your clusters if you intend to use any of these features.

If you liked this, you might want to check out our previous What’s new in Kubernetes editions covering 1.16, 1.15, and 1.14.

And, if you enjoy keeping up to date with the Kubernetes ecosystem, subscribe to our container newsletter, a monthly email with the coolest stuff happening in the cloud-native ecosystem.
