Falco 0.13.0 Released: Kubernetes Audit Events Support

We recently released Falco 0.13.0, which is probably the most exciting release since Falco’s 0.1.0 release almost two and a half years ago. With 0.13.0, we’re adding support for a second stream of events — Kubernetes Audit Events. This release also lays the groundwork for additional event sources to be easily added.


Kubernetes Audit Events

Falco now supports a second source of events in addition to system call events: Kubernetes Audit Events. An improved implementation of Kubernetes audit events was introduced in Kubernetes v1.11 and provides a log of requests and responses to the kube-apiserver. Since virtually all cluster management tasks are done through the API server, the audit log is a way to track the changes made to your cluster. Examples of this include:

  • Creating/destroying pods, services, deployments, daemonsets, etc.
  • Creating/updating/removing config maps or secrets
  • Attempts to subscribe to changes to any endpoint

Once you’ve configured your cluster with audit logging and created an audit policy to determine which events you’d like to pass along to Falco, you can write Falco rules that process these events and send notifications for suspicious or other notable activity.
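
For example, a minimal audit policy that passes full request and response bodies for configmap changes, and only metadata for everything else, might look like this (illustrative; tune the levels and resources for your cluster):

```yaml
# audit-policy.yaml (illustrative sketch)
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
  # Log full request and response bodies for configmap changes
  - level: RequestResponse
    resources:
      - group: ""          # core API group
        resources: ["configmaps"]
  # Log only metadata for everything else
  - level: Metadata
```

The policy file is passed to the kube-apiserver via --audit-policy-file, and a webhook configuration pointing at Falco's endpoint is passed via --audit-webhook-config-file.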

Falco Changes

The overall architecture of Falco remains the same: events are matched against sets of rules, with each rule identifying suspicious or notable behavior. What's new is that there are now two parallel, independent streams of events, read separately and matched separately against the sets of rules, instead of just one.

To receive Kubernetes audit events, Falco embeds a civetweb-based webserver that listens on a configurable port and accepts POST requests on a configurable endpoint. The body of the POST request is the JSON-encoded audit event, and filter fields in a rule's condition or output extract data from that JSON object.
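
In falco.yaml, the embedded webserver is configured with a block like the following (the port and endpoint shown are assumed defaults; check the falco.yaml shipped with your version):

```yaml
webserver:
  enabled: true                    # start the embedded civetweb server
  listen_port: 8765                # port the kube-apiserver webhook posts to
  k8s_audit_endpoint: /k8s_audit   # endpoint accepting POSTed audit events
```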

As part of this we’ve introduced a new attribute for rules: source. Now, a given rule is tied to either system call events or Kubernetes audit events, via the source attribute. If not specified, the source defaults to syscall. Rules with source syscall are matched against system call events. Rules with source k8s_audit are matched against Kubernetes audit events.
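
As a sketch, here is a minimal pair of rules showing how source selects the event stream (the rule names and conditions are invented for illustration):

```yaml
# Matched against system call events (source defaults to syscall)
- rule: Terminal shell in container
  desc: A shell was spawned inside a container
  condition: container.id != host and proc.name = bash
  output: Shell spawned in a container (command=%proc.cmdline)
  priority: WARNING

# Matched against Kubernetes audit events
- rule: Pod Created in kube-system
  desc: A pod was created in the kube-system namespace
  condition: ka.verb=create and ka.target.resource=pods and ka.target.namespace=kube-system
  output: Pod created in kube-system (user=%ka.user.name pod=%ka.target.name)
  priority: WARNING
  source: k8s_audit
```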

Kubernetes Audit Rules

As part of this release we are also shipping new rules to take advantage of the Kubernetes audit event source. Rules devoted to Kubernetes audit events are in k8s_audit_rules.yaml. When installed as a native package, Falco installs this rules file to /etc/falco/, so they are available for use.

There are three classes of rules. The first class of rules looks for suspicious/exceptional activity. This includes things like:

  • Any activity by a user outside of a set of allowed users, or by the anonymous user.
  • Creating a pod with an image outside of a set of allowed images.
  • Creating a privileged pod, a pod mounting a sensitive filesystem from the host, or a pod using host networking.
  • Creating a NodePort service.
  • Creating a configmap containing likely private credentials such as passwords, aws keys, etc.
  • Attaching or execing to a running pod.
  • Creating a namespace outside of a set of allowed namespaces.
  • Creating a pod or service account in the kube-system or kube-public namespaces.
  • Trying to modify or delete a system ClusterRole.
  • Creating a ClusterRoleBinding to the cluster-admin role.
  • Creating a ClusterRole with wildcarded verbs or resources (i.e. overly permissive).
  • Creating a ClusterRole with write permissions or a ClusterRole that can exec to pods.

A second class of rules tracks resources being created or destroyed, including:

  • Deployments
  • Services
  • ConfigMaps
  • Namespaces
  • Service accounts
  • Role/ClusterRoles
  • Role/ClusterRoleBindings

The final class of rules simply displays any audit event received by Falco. This rule is disabled by default, as it can be quite noisy. It’s easy to enable by overriding a macro in the rules file.
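
Enabling it amounts to overriding the gating macro in a rules file loaded after k8s_audit_rules.yaml (e.g. a local rules file). The macro name below is illustrative; use the macro actually defined alongside the catch-all rule in k8s_audit_rules.yaml:

```yaml
# Illustrative override: a condition that is effectively always
# true for audit events, turning the catch-all rule on.
- macro: consider_all_k8s_audit_events
  condition: (ka.verb exists)
```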

Kubernetes Audit Rule Example

One of the rules in k8s_audit_rules.yaml is the following (simplified a bit for clarity):

- macro: modify
  condition: ka.verb in (create,update,patch)

- macro: configmap
  condition: ka.target.resource=configmaps

- macro: contains_private_credentials
  condition: >
      (ka.req.configmap.obj contains "aws_access_key_id" or
       ka.req.configmap.obj contains "aws-access-key-id" or
       ka.req.configmap.obj contains "aws_s3_access_key_id" or
       ka.req.configmap.obj contains "aws-s3-access-key-id" or
       ka.req.configmap.obj contains "password" or
       ka.req.configmap.obj contains "passphrase")

- rule: Configmap contains private credentials
  desc: >
    Detect configmap operations with map containing a private credential (aws key, password, etc.)
  condition: configmap and modify and contains_private_credentials
  output: Kubernetes configmap with private credential (user=%ka.user.name verb=%ka.verb configmap=%ka.req.configmap.name config=%ka.req.configmap.obj)
  priority: WARNING
  source: k8s_audit
  tags: [k8s]

The rule Configmap contains private credentials checks for a configmap being created or modified where the contents of the configmap contain possibly sensitive items like aws keys or passwords.

If we create a configmap like the following (note the aws access key!):

apiVersion: v1
data:
  ui.properties: |
    color.good=purple
    color.bad=yellow
    allow.textmode=true
  access.properties: |
    aws_access_key_id = MY-ID
    aws_secret_access_key = MY-KEY
kind: ConfigMap
metadata:
  creationTimestamp: 2016-02-18T18:52:05Z
  name: my-config
  namespace: default
  resourceVersion: "516"
  selfLink: /api/v1/namespaces/default/configmaps/my-config
  uid: b4952dc3-d670-11e5-8cd0-68f728db1985

Assuming that Kubernetes audit logging is enabled, creating the configmap results in the following JSON object in the audit log (again, simplified for clarity):

{
    "kind": "Event",
    "apiVersion": "audit.k8s.io/v1beta1",
    "metadata": {
        "creationTimestamp": "2018-10-20T00:18:28Z"
    },
    "auditID": "33fa264e-1124-4252-af9e-2ce6e45fe07d",
    "stage": "ResponseComplete",
    "requestURI": "/api/v1/namespaces/default/configmaps",
    "verb": "create",
    "user": {
        "username": "minikube-user"
    },
    "objectRef": {
        "resource": "configmaps",
        "namespace": "default",
        "name": "my-config",
    },
    "responseStatus": {
        "code": 201
    },
    "requestObject": {
        "kind": "ConfigMap",
        "apiVersion": "v1",
        "metadata": {
            "name": "my-config",
            "namespace": "default"
        },
        "data": {
            "access.properties": "aws_access_key_id = MY-ID\naws_secret_access_key = MY-KEY\n",
            "ui.properties": "color.good=purple\ncolor.bad=yellow\nallow.textmode=true\n"
        }
    },
    "annotations": {
        "authorization.k8s.io/decision": "allow",
        "authorization.k8s.io/reason": ""
    }
}

When the rule runs against the audit event, the configmap macro checks that the value of the property objectRef->resource is "configmaps". The modify macro checks that the value of verb is one of create, update, or patch. Finally, contains_private_credentials looks at the configmap contents under requestObject->data to see if they contain any of the sensitive strings named in the macro.
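
The evaluation above can be sketched in Python as a standalone approximation of the rule logic (this is not Falco's actual engine, just the three macros re-implemented against a raw audit event):

```python
import json

# Substrings the contains_private_credentials macro looks for
# (taken from the rule shown above).
SENSITIVE_SUBSTRINGS = [
    "aws_access_key_id", "aws-access-key-id",
    "aws_s3_access_key_id", "aws-s3-access-key-id",
    "password", "passphrase",
]


def matches_rule(event: dict) -> bool:
    """Re-implement the rule's three macros against a raw audit event."""
    # macro configmap: objectRef->resource is "configmaps"
    is_configmap = event.get("objectRef", {}).get("resource") == "configmaps"
    # macro modify: verb is one of create, update, patch
    is_modify = event.get("verb") in ("create", "update", "patch")
    # macro contains_private_credentials: any sensitive substring
    # anywhere in the configmap data
    data = json.dumps(event.get("requestObject", {}).get("data", {}))
    has_creds = any(s in data for s in SENSITIVE_SUBSTRINGS)
    return is_configmap and is_modify and has_creds


event = {
    "verb": "create",
    "objectRef": {"resource": "configmaps", "name": "my-config"},
    "requestObject": {"data": {"access.properties": "aws_access_key_id = MY-ID\n"}},
}
print(matches_rule(event))  # -> True
```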

If they do, a Falco event is generated:

17:18:28.428398080: Warning Kubernetes configmap with private credential (user=minikube-user verb=create configmap=my-config config={"access.properties":"aws_access_key_id = MY-ID\naws_secret_access_key = MY-KEY\n","ui.properties":"color.good=purple\ncolor.bad=yellow\nallow.textmode=true\n"})

The output string is used to print essential information about the audit event, including:

  • the user %ka.user.name, verb %ka.verb, and configmap name %ka.req.configmap.name
  • the full configmap contents %ka.req.configmap.obj

For the full details on Kubernetes Audit Support, check out the wiki.

Other Changes

Of course, we’ve also included the normal round of rules updates and other new features, including:

  • Properly load/unload the kernel module when the Falco service is started/stopped.
  • Reload all config files and rules files on SIGHUP. This allows transparent reloads of configuration without having to restart Falco.
  • Adding netcat to the Falco container, which makes it easier to pass events to external sources using the program output.
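
For example, the program output in falco.yaml can pipe alerts to netcat (the host and port below are placeholders):

```yaml
program_output:
  enabled: true
  # Each alert is written to the program's stdin; nc forwards it
  # to a remote collector (placeholder host/port).
  program: "nc alert-collector.example.com 8080"
```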

Further Information

For the full set of changes in this release, please look at the release's changelog on GitHub. The release is available via the usual channels: RPM/Debian packages, Falco Docker images and GitHub.

Let us know if you have any issues over in the Sysdig open source Slack team, or continue exploring the Falco open-source security stack.




Online session: Building an Open Source Container Security Stack

In this session, Sysdig and Anchore present how you can use Falco and Anchore Engine to build a complete open source container security stack for Docker and Kubernetes.

This online session will live demo:

  • Using Falco, NATS and Kubeless to build a Kubernetes response engine and implement real-time attack remediation with security playbooks using FaaS.
  • How Anchore Engine can detect software vulnerabilities in your images, and how it can be integrated with Jenkins, Kubernetes and Falco.

Watch now!
