
Installing Knative Eventing using YAML files

This topic describes how to install Knative Eventing by applying YAML files using the kubectl CLI.

Prerequisites

Before installing Knative, you must meet the following prerequisites:

  • For prototyping purposes, Knative works on most local deployments of Kubernetes. For example, you can use a local, one-node cluster that has 3 CPUs and 4 GB of memory.

    Tip

You can install a local distribution of Knative for development purposes by using the Knative Quickstart plugin.

  • For production purposes, it is recommended that:

    • If you have only one node in your cluster, you need at least 6 CPUs, 6 GB of memory, and 30 GB of disk storage.
    • If you have multiple nodes in your cluster, for each node you need at least 2 CPUs, 4 GB of memory, and 20 GB of disk storage.
    • You have a cluster that uses Kubernetes v1.27 or newer.
    • You have installed the kubectl CLI.
    • Your Kubernetes cluster must have access to the internet, because Kubernetes needs to be able to fetch images. To pull from a private registry, see Deploying images from a private container registry.

Caution

The system requirements provided are recommendations only. The requirements for your installation might vary, depending on whether you use optional components, such as a networking layer.

Verifying image signatures

Knative releases from 1.9 onwards are signed with cosign.

  1. Install cosign and jq.

  2. Extract the images from a manifest and verify the signatures.

curl -sSL https://github.com/knative/serving/releases/download/knative-v1.10.1/serving-core.yaml \
  | grep 'gcr.io/' | awk '{print $2}' | sort | uniq \
  | xargs -n 1 \
    cosign verify -o text \
      --certificate-identity=signer@knative-releases.iam.gserviceaccount.com \
      --certificate-oidc-issuer=https://accounts.google.com

Note

Knative images are signed in keyless mode. To learn more about keyless signing, see Keyless Signatures. The signing identity (subject) for Knative releases is signer@knative-releases.iam.gserviceaccount.com, and the issuer is https://accounts.google.com.
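
To verify a single image rather than a whole manifest, you can pass its reference to cosign directly. The Eventing controller image below assumes the standard gcr.io/knative-releases path; adjust it if your images come from elsewhere:

    cosign verify -o text \
      --certificate-identity=signer@knative-releases.iam.gserviceaccount.com \
      --certificate-oidc-issuer=https://accounts.google.com \
      gcr.io/knative-releases/knative.dev/eventing/cmd/controller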

Install Knative Eventing

To install Knative Eventing:

  1. Install the required custom resource definitions (CRDs) by running the command:

    kubectl apply -f https://storage.googleapis.com/knative-nightly/eventing/latest/eventing-crds.yaml
    
  2. Install the core components of Eventing by running the command:

    kubectl apply -f https://storage.googleapis.com/knative-nightly/eventing/latest/eventing-core.yaml
    

    Info

    For information about the YAML files in Knative Eventing, see Description Tables for YAML Files.
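
    As an optional check before continuing, you can confirm that the CRDs were registered by listing them; the grep pattern matches the knative.dev API groups:

    kubectl get crd | grep knative.dev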

Verify the installation

Success

Monitor the Knative components until all of the components show a STATUS of Running or Completed. You can do this by running the following command and inspecting the output:

kubectl get pods -n knative-eventing

Example output:

NAME                                   READY   STATUS    RESTARTS   AGE
eventing-controller-7995d654c7-qg895   1/1     Running   0          2m18s
eventing-webhook-fff97b47c-8hmt8       1/1     Running   0          2m17s
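
Alternatively, you can block until all pods in the namespace are ready; the 300-second timeout here is an arbitrary choice:

    kubectl wait pod --all --for=condition=Ready -n knative-eventing --timeout=300s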

Optional: Install a default Channel (messaging) layer

The following sections show instructions for installing a default Channel layer. Follow the procedure for the Channel of your choice:

Apache Kafka Channel

The following commands install the KafkaChannel and run event routing in a system namespace. The knative-eventing namespace is used by default.

  1. Install the Kafka controller by running the following command:

    kubectl apply -f https://storage.googleapis.com/knative-nightly/eventing-kafka-broker/latest/eventing-kafka-controller.yaml
    
  2. Install the KafkaChannel data plane by running the following command:

    kubectl apply -f https://storage.googleapis.com/knative-nightly/eventing-kafka-broker/latest/eventing-kafka-channel.yaml
    
  3. If you're upgrading from the previous version, run the following command:

    kubectl apply -f https://storage.googleapis.com/knative-nightly/eventing-kafka-broker/latest/eventing-kafka-post-install.yaml
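
    The KafkaChannel data plane needs to know how to reach your Kafka cluster. As a sketch, assuming a Kafka cluster reachable at my-cluster-kafka-bootstrap.kafka:9092 (a placeholder), you would set the bootstrap servers in the kafka-channel-config ConfigMap:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kafka-channel-config
      namespace: knative-eventing
    data:
      # Replace with the address of your Kafka bootstrap servers.
      bootstrap.servers: my-cluster-kafka-bootstrap.kafka:9092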
    

In-Memory (standalone)

Warning

This simple standalone implementation runs in-memory and is not suitable for production use cases.

  • Install an in-memory implementation of Channel by running the command:

    kubectl apply -f https://storage.googleapis.com/knative-nightly/eventing/latest/in-memory-channel.yaml
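
    Once installed, a channel of this type can be created directly. A minimal example, where test-channel is an arbitrary name:

    apiVersion: messaging.knative.dev/v1
    kind: InMemoryChannel
    metadata:
      name: test-channel
      namespace: default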
    
NATS Channel

  1. Install NATS Streaming for Kubernetes.

  2. Install the NATS Streaming Channel by running the command:

    kubectl apply -f https://storage.googleapis.com/knative-nightly/eventing-natss/latest/eventing-natss.yaml
    

You can change the default channel implementation by following the instructions described in the Configure Channel defaults section.
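
As a sketch of what that configuration looks like, the cluster-wide default channel is set in the default-ch-webhook ConfigMap in the knative-eventing namespace:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: default-ch-webhook
      namespace: knative-eventing
    data:
      default-ch-config: |
        # Channel implementation used when none is specified.
        clusterDefault:
          apiVersion: messaging.knative.dev/v1
          kind: InMemoryChannel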

Optional: Install a Broker layer

The following sections show instructions for installing the Broker layer. Follow the procedure for the Broker of your choice:

Apache Kafka Broker

The following commands install the Apache Kafka Broker and run event routing in a system namespace. The knative-eventing namespace is used by default.

  1. Install the Kafka controller by running the following command:

    kubectl apply -f https://storage.googleapis.com/knative-nightly/eventing-kafka-broker/latest/eventing-kafka-controller.yaml
    
  2. Install the Kafka Broker data plane by running the following command:

    kubectl apply -f https://storage.googleapis.com/knative-nightly/eventing-kafka-broker/latest/eventing-kafka-broker.yaml
    
  3. If you're upgrading from the previous version, run the following command:

    kubectl apply -f https://storage.googleapis.com/knative-nightly/eventing-kafka-broker/latest/eventing-kafka-post-install.yaml
    

For more information, see the Kafka Broker documentation.
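
Once the data plane is running, a Broker backed by Kafka is created by setting the broker class annotation. A minimal sketch, where example-kafka-broker is an arbitrary name and kafka-broker-config refers to the default configuration ConfigMap shipped with the control plane manifest:

    apiVersion: eventing.knative.dev/v1
    kind: Broker
    metadata:
      annotations:
        # Selects the Kafka Broker implementation.
        eventing.knative.dev/broker.class: Kafka
      name: example-kafka-broker
      namespace: default
    spec:
      config:
        apiVersion: v1
        kind: ConfigMap
        name: kafka-broker-config
        namespace: knative-eventing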

MT-Channel-based

This implementation of Broker uses Channels and runs event routing components in a system namespace, providing a smaller and simpler installation.

  • Install this implementation of Broker by running the command:

    kubectl apply -f https://storage.googleapis.com/knative-nightly/eventing/latest/mt-channel-broker.yaml
    

    To customize which Broker Channel implementation is used, update the following ConfigMap to specify which configurations are used for which namespaces:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: config-br-defaults
      namespace: knative-eventing
    data:
      default-br-config: |
        # This is the cluster-wide default broker channel.
        clusterDefault:
          brokerClass: MTChannelBasedBroker
          apiVersion: v1
          kind: ConfigMap
          name: imc-channel
          namespace: knative-eventing
        # This allows you to specify different defaults per namespace;
        # in this case the "some-namespace" namespace will use the Kafka
        # channel ConfigMap by default. (This is an example only; you must
        # also install the KafkaChannel to make use of it.)
        namespaceDefaults:
          some-namespace:
            brokerClass: MTChannelBasedBroker
            apiVersion: v1
            kind: ConfigMap
            name: kafka-channel
            namespace: knative-eventing
    

    The referenced imc-channel and kafka-channel example ConfigMaps would look like:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: imc-channel
      namespace: knative-eventing
    data:
      channel-template-spec: |
        apiVersion: messaging.knative.dev/v1
        kind: InMemoryChannel
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kafka-channel
      namespace: knative-eventing
    data:
      channel-template-spec: |
        apiVersion: messaging.knative.dev/v1alpha1
        kind: KafkaChannel
        spec:
          numPartitions: 3
          replicationFactor: 1
    

Warning

To use the KafkaChannel, ensure that it is installed on your cluster, as described earlier in this topic.
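
With the Broker layer installed, you can create a Broker in any namespace as a quick smoke test; default is an arbitrary name here:

    apiVersion: eventing.knative.dev/v1
    kind: Broker
    metadata:
      name: default
      namespace: default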

RabbitMQ Broker

For more information, see the RabbitMQ Broker in GitHub.

Install optional Eventing extensions

The following sections show instructions for installing each Eventing extension.

Apache Kafka Sink

  1. Install the Kafka controller by running the command:

    kubectl apply -f https://storage.googleapis.com/knative-nightly/eventing-kafka-broker/latest/eventing-kafka-controller.yaml
    
  2. Install the Kafka Sink data plane by running the command:

    kubectl apply -f https://storage.googleapis.com/knative-nightly/eventing-kafka-broker/latest/eventing-kafka-sink.yaml
    

For more information, see the Kafka Sink documentation.
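
A KafkaSink resource then maps incoming CloudEvents to a Kafka topic. A minimal sketch, where the topic and bootstrap server address are placeholders you would replace:

    apiVersion: eventing.knative.dev/v1alpha1
    kind: KafkaSink
    metadata:
      name: my-kafka-sink
      namespace: default
    spec:
      # Placeholder topic; it must exist in your Kafka cluster.
      topic: mytopic
      bootstrapServers:
        - my-cluster-kafka-bootstrap.kafka:9092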

GitHub Source

A single-tenant GitHub source creates one Knative Service per GitHub source.

A multi-tenant GitHub source only creates one Knative Service, which handles all GitHub sources in the cluster. This source does not support logging or tracing configuration.

  • To install a single-tenant GitHub source, run the command:

    kubectl apply -f https://storage.googleapis.com/knative-nightly/eventing-github/latest/github.yaml
    
  • To install a multi-tenant GitHub source, run the command:

    kubectl apply -f https://storage.googleapis.com/knative-nightly/eventing-github/latest/mt-github.yaml
    

To learn more, try the GitHub source sample.
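
As a sketch of what a source object looks like, the following GitHubSource forwards pull request events to a hypothetical event-display Service; my-org/my-repo and githubsecret are placeholders you would replace:

    apiVersion: sources.knative.dev/v1alpha1
    kind: GitHubSource
    metadata:
      name: github-source-sample
      namespace: default
    spec:
      eventTypes:
        - pull_request
      # Placeholder: your GitHub owner/repository.
      ownerAndRepository: my-org/my-repo
      accessToken:
        secretKeyRef:
          name: githubsecret
          key: accessToken
      secretToken:
        secretKeyRef:
          name: githubsecret
          key: secretToken
      sink:
        ref:
          apiVersion: serving.knative.dev/v1
          kind: Service
          name: event-display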

Apache Kafka Source

  1. Install the Apache Kafka Source by running the command:

    kubectl apply -f https://storage.googleapis.com/knative-nightly/eventing-kafka-broker/latest/eventing-kafka-source.yaml
    
  2. If you're upgrading from the previous version, run the following command:

    kubectl apply -f https://storage.googleapis.com/knative-nightly/eventing-kafka-broker/latest/eventing-kafka-post-install.yaml
    

To learn more, try the Apache Kafka source sample.
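
A KafkaSource then streams events from one or more topics to a sink. A minimal sketch, with placeholder topic, consumer group, bootstrap address, and a hypothetical event-display sink:

    apiVersion: sources.knative.dev/v1beta1
    kind: KafkaSource
    metadata:
      name: kafka-source
      namespace: default
    spec:
      consumerGroup: knative-group
      bootstrapServers:
        - my-cluster-kafka-bootstrap.kafka:9092
      topics:
        - knative-demo-topic
      sink:
        ref:
          apiVersion: serving.knative.dev/v1
          kind: Service
          name: event-display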

Apache CouchDB Source

  • Install the Apache CouchDB Source by running the command:

    kubectl apply -f https://storage.googleapis.com/knative-nightly/eventing-couchdb/latest/couchdb.yaml
    

To learn more, read the Apache CouchDB source documentation.

VMware Sources and Bindings

  • Install VMware Sources and Bindings by running the command:

    kubectl apply -f https://storage.googleapis.com/knative-nightly/sources-for-knative/latest/release.yaml
    

To learn more, try the VMware sources and bindings samples.
