To get stateful applications running in Kubernetes, you need persistent storage for your data to live on. This allows your data to survive even when your deployments or pods are deleted and recreated.

Currently, rook installs into two namespaces: rook-ceph-system, where the rook operator lives, and rook-ceph, where the rook cluster lives.
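
Assuming you are installing from the example manifests that ship with the rook release (they live under cluster/examples/kubernetes/ceph in the v0.9 tree), the operator is deployed first, which in that layout also creates the rook-ceph-system namespace:

$ cd cluster/examples/kubernetes/ceph
$ kubectl apply -f operator.yaml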

$ kubectl -n rook-ceph-system get po
NAME                                 READY   STATUS    RESTARTS   AGE
rook-ceph-agent-2c4gv                1/1     Running   0          3d15h
rook-ceph-agent-7rc25                1/1     Running   0          3d15h
rook-ceph-operator-d97564799-6hsbc   1/1     Running   0          3d15h
rook-discover-ngzld                  1/1     Running   0          3d15h
rook-discover-vh7hk                  1/1     Running   0          3d15h
$

The rook-ceph-system namespace will have pods for the rook-ceph-operator, rook-ceph-agent and rook-discover. During deployment, ensure the pods in the rook-ceph-system namespace are running before deploying the rook cluster.
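
For example, one way to block until everything in the operator namespace reports Ready (the timeout value here is arbitrary):

$ kubectl -n rook-ceph-system wait --for=condition=Ready pod --all --timeout=300s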

Review and adapt the ceph cluster manifest in the examples before deploying. Pay particular attention to which nodes to use for storage and which drives on those nodes to use for the ceph OSDs.
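
As a rough sketch, a v0.9-style CephCluster manifest that pins storage to specific nodes and drives looks like the following; the node names match the prepare jobs shown below, but the ceph image tag and the sdb device names are placeholders you must adapt to your environment:

apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v13.2.4   # placeholder; use the image recommended for your rook release
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3
    allowMultiplePerNode: false
  storage:
    useAllNodes: false
    useAllDevices: false
    nodes:
    - name: worker01
      devices:
      - name: sdb              # placeholder; list the actual drives to hand to ceph
    - name: worker02
      devices:
      - name: sdb

Deploy the adapted manifest with kubectl apply -f cluster.yaml (or whatever file name you used).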

$ kubectl -n rook-ceph get po
NAME                                   READY   STATUS      RESTARTS   AGE
rook-ceph-mgr-a-68cb58b456-gc5cm       1/1     Running     0          3d15h
rook-ceph-mon-a-86c4c974df-znjfl       1/1     Running     0          3d15h
rook-ceph-mon-b-6b998896d-dp4sj        1/1     Running     0          3d15h
rook-ceph-mon-c-7448f99444-phctb       1/1     Running     0          3d15h
rook-ceph-osd-0-65f6d4c879-rxnjd       1/1     Running     0          3d15h
rook-ceph-osd-1-dbc8c85d-fxfjt         1/1     Running     0          3d15h
rook-ceph-osd-prepare-worker01-k6t8k   0/2     Completed   0          3d15h
rook-ceph-osd-prepare-worker02-h98vg   0/2     Completed   0          3d15h
rook-ceph-tools-bffbf4d8f-47dgx        1/1     Running     0          3d15h
$

Upon deploying, you will notice a prepare job for each storage node in your cluster. Ceph monitors and OSDs are deployed based on the options in your cluster manifest. At this point, you should have a working ceph cluster.

To ensure your ceph cluster is running as expected, install the rook toolbox and run the ceph client commands to get cluster status.
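
The toolbox itself ships as a manifest alongside the other examples (toolbox.yaml in a v0.9-style layout); deploy it and note the resulting pod name for the exec commands below:

$ kubectl apply -f toolbox.yaml
$ kubectl -n rook-ceph get po   # look for the rook-ceph-tools pod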

$ kubectl -n rook-ceph exec -it rook-ceph-tools-bffbf4d8f-47dgx -- ceph -s
  cluster:
    id:     5898da63-4ec0-45a1-a0fe-8e13bb6fc157
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum b,a,c
    mgr: a(active)
    osd: 2 osds: 2 up, 2 in

  data:
    pools:   1 pools, 100 pgs
    objects: 0  objects, 0 B
    usage:   28 GiB used, 202 GiB / 230 GiB avail
    pgs:     100 active+clean

$ kubectl -n rook-ceph exec -it rook-ceph-tools-bffbf4d8f-47dgx -- ceph df
GLOBAL:
    SIZE        AVAIL       RAW USED     %RAW USED
    230 GiB     202 GiB       28 GiB         12.20
POOLS:
    NAME            ID     USED     %USED     MAX AVAIL     OBJECTS
    replicapool     1       0 B         0       190 GiB           0
$

Deploy the storage class manifest and make it default if needed.
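
A minimal sketch of that manifest, modeled on the v0.9 example storageclass.yaml (the parameter names and replica count here are assumptions; compare against the manifest in your checked-out tag):

apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  replicated:
    size: 2                    # assumption; pick a replica count that fits your OSD count
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: ceph.rook.io/block
parameters:
  blockPool: replicapool
  clusterNamespace: rook-ceph
  fstype: xfs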

$ kubectl patch sc rook-ceph-block -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
storageclass.storage.k8s.io/rook-ceph-block patched (no change)
$
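
With the storage class in place and set as default, a stateful workload can request block storage through an ordinary PersistentVolumeClaim. The name example-pvc below is purely illustrative, and storageClassName can be omitted once the class is the default:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: rook-ceph-block
  resources:
    requests:
      storage: 5Gi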

One tip worth mentioning: be sure to install using manifests from the same release version. Do not install from the master branch.
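
If you have not already cloned the rook repository, do that first, since the example manifests ship inside the repo:

$ git clone https://github.com/rook/rook.git
$ cd rook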

You can list the available releases with `git tag -l`. Check out the desired tag and install its manifests.

$ git checkout v0.9.3