cf-operator enables the deployment of BOSH Releases, especially Cloud Foundry, to Kubernetes.
It’s implemented as a k8s operator, an active controller component which acts upon custom k8s resources.
- Incubation Proposal: Containerizing Cloud Foundry
- Slack: #quarks-dev on https://slack.cloudfoundry.org
- Backlog: Pivotal Tracker
- Docker: https://hub.docker.com/r/cfcontainerization/cf-operator/tags
cf-operator assumes that the cluster root CA is also used for signing CSRs via the certificates.k8s.io API and will embed this CA in the generated certificate secrets. If your cluster is set up to use a different cluster-signing CA, the generated certificates will have the wrong CA embedded. See https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/ for more information on cluster trust.
Using the helm chart
cf-operator can be installed via helm. Make sure you have a running Kubernetes cluster and that tiller is reachable.
See the releases page for up-to-date instructions on how to install the operator.
For more information about the cf-operator helm chart and how to configure it, please refer to deploy/helm/cf-operator/README.md
Recovering from a crash
If the operator pod crashes, it cannot be restarted in the same namespace before the existing mutating webhook configuration for that namespace is removed. The operator uses mutating webhooks to modify pods on the fly, and Kubernetes fails to create pods if the webhook server is unreachable. The webhook configurations are installed cluster-wide and don't belong to a single namespace, just like custom resources.
To remove the webhook configurations for the cf-operator namespace, run:

```bash
CF_OPERATOR_NAMESPACE=cf-operator
kubectl delete mutatingwebhookconfiguration "cf-operator-hook-$CF_OPERATOR_NAMESPACE"
kubectl delete validatingwebhookconfiguration "cf-operator-hook-$CF_OPERATOR_NAMESPACE"
```
From Kubernetes 1.15 onwards, it is possible to instead patch the webhook configurations for the cf-operator namespace, so that the operator's own pods are excluded from mutation and the operator pod can be recreated:
```bash
CF_OPERATOR_NAMESPACE=cf-operator
kubectl patch mutatingwebhookconfigurations "cf-operator-hook-$CF_OPERATOR_NAMESPACE" -p '
webhooks:
- name: mutate-pods.quarks.cloudfoundry.org
  objectSelector:
    matchExpressions:
    - key: name
      operator: NotIn
      values:
      - "cf-operator"
'
```
cf-operator watches four different types of custom resources:

- BOSHDeployment
- QuarksJob
- QuarksSecret
- QuarksStatefulSet

cf-operator requires the corresponding CRDs to be installed in the cluster in order to work as expected. By default, cf-operator applies the CRDs in your cluster automatically.

To verify that the CRDs are installed:
```
$ kubectl get crds
NAME                                         CREATED AT
boshdeployments.quarks.cloudfoundry.org      2019-06-25T07:08:37Z
quarksjobs.quarks.cloudfoundry.org           2019-06-25T07:08:37Z
quarkssecrets.quarks.cloudfoundry.org        2019-06-25T07:08:37Z
quarksstatefulsets.quarks.cloudfoundry.org   2019-06-25T07:08:37Z
```
BOSH releases consume two types of variables, explicit and implicit ones.
Implicit variables have to be created before creating a BOSH deployment resource.
For example, a secret named nats-deployment.var-custom-password will be used to fill ((custom-password)) placeholders in the BOSH manifest.
The name of the secret has to follow this scheme: `<deployment-name>.var-<variable-name>`
Missing implicit variables are treated as an error.
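As a sketch, an implicit variable secret for the ((custom-password)) example above could be created with a manifest like the following. The namespace and the data key are assumptions (the expected key may depend on the variable's type); adjust them to your setup:

```yaml
apiVersion: v1
kind: Secret
metadata:
  # Follows the <deployment-name>.var-<variable-name> scheme
  name: nats-deployment.var-custom-password
  namespace: cf-operator
type: Opaque
stringData:
  # Assumed data key for a password-type variable
  password: some-secret-value
```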
Explicit variables are declared in the BOSH manifest. They are generated automatically upon deployment and stored in secrets.
The naming scheme is the same as for implicit variables.
If an explicit variable secret already exists, it will not be generated. This allows users to set their own passwords, etc.
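The shared naming scheme can be illustrated with a small shell helper. This function is hypothetical, not part of cf-operator; it only demonstrates how a variable's secret name is composed:

```shell
#!/bin/sh
# Builds the secret name for a BOSH variable:
#   <deployment-name>.var-<variable-name>
var_secret_name() {
  printf '%s.var-%s\n' "$1" "$2"
}

# e.g. the secret backing ((custom-password)) in the nats-deployment example
var_secret_name nats-deployment custom-password
# prints: nats-deployment.var-custom-password
```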
Using your fresh installation
With a running cf-operator pod, you can try one of the example files (see docs/examples/bosh-deployment/boshdeployment-with-custom-variable.yaml), as follows:

```bash
kubectl -n cf-operator apply -f docs/examples/bosh-deployment/boshdeployment-with-custom-variable.yaml
```
The above will spawn two pods in your cf-operator namespace (which needs to be created upfront), running the BOSH nats release.
You can access the cf-operator logs by following the operator pod's output:

```bash
kubectl logs -f -n cf-operator cf-operator
```
Or look at the k8s event log:

```bash
kubectl get events -n cf-operator --watch
```
Modifying the deployment
The main input to the operator is the BOSHDeployment custom resource and the corresponding manifest config map or secret. Changes to the `data` fields of either of those will trigger the operator to recalculate the desired state and apply the required changes from the current state.
Besides that, there are more changes the user can make which will trigger an update of the deployment:

- ops files can be added to or removed from the BOSH deployment
- existing ops file config maps and secrets can be modified
- generated secrets for explicit variables can be modified
- secrets for implicit variables have to be created by the user beforehand anyway, but can also be changed after the initial deployment
Development and Tests
For more information about the operator development, see docs/development.md
For more information about testing, see docs/testing.md
For more information about building the operator from source, see docs/building.md
For more information about how to develop a BOSH release using Quarks and SCF, see the SCFv3 docs
Nice tools to use
It provides an easy way to navigate through your k8s resources, while watching changes to them live. Main features that can be helpful for containerized CF are:

- immediate access to resource YAML definitions
- immediate access to service endpoints
- immediate access to pod/container logs
- sorting resources (e.g. pods) by CPU or memory consumption
- immediate access to a secure shell in a container
A toolkit with different features around k8s and Cloud Foundry:

- top, to get an overview of the CPU/memory/load of the cluster, per namespace and per pod
- logs, to download all logs from all pods to your local system
- pod-exec, to open a shell inside containers; it can execute commands in different containers simultaneously
- node-exec, to open a shell on nodes; it can execute commands on different nodes simultaneously
Allows you to tail multiple pods on k8s, and multiple containers within a pod.
A more user-friendly way to navigate your k8s cluster resources.