rook-agent failed to delete PV pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5
core@core-02 ~ $ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
rbd2 253:32 0 5G 0 disk /var/lib/rkt/pods/run/4c350eed-d9c6-4258-bab0-109d43f2bfac/stage1/rootfs/opt/stage2/hyperkube/rootfs/var/lib/kubelet/pods/a95feeee-fbce-11e7-a451-001c422fc6d5/volumes/rook.io~rook/pvc-a953c80d-fbce-11e7-a451-001c422fc6d5
rbd0 253:0 0 10G 0 disk /var/lib/rkt/pods/run/4c350eed-d9c6-4258-bab0-109d43f2bfac/stage1/rootfs/opt/stage2/hyperkube/rootfs/var/lib/kubelet/pods/fcd0f7dc-fbcb-11e7-a451-001c422fc6d5/volumes/rook.io~rook/pvc-fccbc5ec-fbcb-11e7-a451-001c422fc6d5
sdd 8:48 0 10G 0 disk
sdb 8:16 0 10G 0 disk
rbd1 253:16 0 5G 0 disk /var/lib/rkt/pods/run/4c350eed-d9c6-4258-bab0-109d43f2bfac/stage1/rootfs/opt/stage2/hyperkube/rootfs/var/lib/kubelet/pods/0a117d2d-fbcc-11e7-a451-001c422fc6d5/volumes/rook.io~rook/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5
sdc 8:32 0 10G 0 disk
sda 8:0 0 18.5G 0 disk
|-sda4 8:4 0 1G 0 part
|-sda2 8:2 0 2M 0 part
|-sda9 8:9 0 16.1G 0 part /
|-sda7 8:7 0 64M 0 part
|-sda3 8:3 0 1G 0 part
| `-usr 254:0 0 1016M 1 crypt /usr
|-sda1 8:1 0 128M 0 part /boot
`-sda6 8:6 0 128M 0 part /usr/share/oem
apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  ports:
  - port: 5432
  selector:
    app: global
    role: postgres
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pv-claim
  labels:
    app: global
    role: postgres
spec:
  storageClassName: rook-block
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres
  labels:
    app: global
    role: postgres
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: global
        role: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:9.6
        imagePullPolicy: IfNotPresent
        env:
        - name: PGDATA
          value: /var/lib/postgresql/data/pgdata
        ports:
        - containerPort: 5432
          name: postgres-port
        volumeMounts:
        - name: postgres-db
          mountPath: /var/lib/postgresql/data/pgdata
          subPath: "postgresql-db"
      volumes:
      - name: postgres-db
        persistentVolumeClaim:
          claimName: postgres-pv-claim
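For context, the failure was hit by applying the manifests above and then deleting them while the volume was still attached. A rough reproduction sketch (assuming the manifests are saved as postgres.yaml, a hypothetical filename):

```shell
# Hypothetical reproduction; assumes the manifests above are saved as postgres.yaml.
kubectl apply -f postgres.yaml

# Delete the PVC and deployment while the volume is still mapped on the node;
# the rook-agent then cannot find the PV object during unmount (see agent logs).
kubectl delete -f postgres.yaml
```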
> kubectl -n rook exec -it rook-tools -- bash
root@rook-tools:/# rbd status replicapool/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5
Watchers:
watcher=10.44.0.0:0/295039780 client.4225 cookie=18446462598732840962
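The watcher above indicates the image is still mapped on the node even though the PV object has been deleted. One possible manual cleanup from the node is sketched below (an assumption, not a confirmed fix; it assumes /dev/rbd1 is the stale mapping shown in the lsblk output above):

```shell
# Sketch of manual cleanup; assumes /dev/rbd1 is the stale mapping from lsblk.
# Unmount the global mount path first (path taken from the agent logs).
sudo umount /var/lib/kubelet/plugins/rook.io/rook/mounts/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5
# Then unmap the RBD device so the Ceph watcher is released.
sudo rbd unmap /dev/rbd1
```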
> krs logs rook-agent-h7s71
2018-01-17 21:06:01.957844 I | rook: starting Rook v0.6.0-150.g2b5acad.dirty with arguments '/usr/local/bin/rook agent'
2018-01-17 21:06:01.958020 I | rook: flag values: --help=false, --log-level=INFO
2018-01-17 21:06:01.959715 I | rook: starting rook agent
2018-01-17 21:06:01.978782 I | exec: Running command: modinfo -F parm rbd
2018-01-17 21:06:01.991782 I | exec: Running command: modprobe rbd single_major=Y
2018-01-17 21:06:02.085531 I | flexvolume: Rook Flexvolume configured
2018-01-17 21:06:02.085877 I | flexvolume: Listening on unix socket for Kubernetes volume attach commands.
2018-01-17 21:06:02.097012 W | flexvolume: NOTE: The Kubelet must be restarted on this node since this pod appears to be running on a Kubernetes version prior to 1.8. More details can be found in the Rook docs at https://rook.io/docs/rook/master/common-problems.html#kubelet-restart
2018-01-17 21:06:02.097185 I | agent-cluster: start watching cluster resources
2018-01-17 21:18:43.222941 I | flexdriver: calling agent to attach volume replicapool/pvc-fccbc5ec-fbcb-11e7-a451-001c422fc6d5
2018-01-17 21:18:43.254815 I | flexvolume: Creating Volume attach Resource rook-system/pvc-fccbc5ec-fbcb-11e7-a451-001c422fc6d5: {Image:pvc-fccbc5ec-fbcb-11e7-a451-001c422fc6d5 Pool:replicapool ClusterName:rook StorageClass:rook-block MountDir:/var/lib/kubelet/pods/fcd0f7dc-fbcb-11e7-a451-001c422fc6d5/volumes/rook.io~rook/pvc-fccbc5ec-fbcb-11e7-a451-001c422fc6d5 FsName: Path: RW:rw FsType: VolumeName:pvc-fccbc5ec-fbcb-11e7-a451-001c422fc6d5 Pod:prometheus-k8s-0 PodID:fcd0f7dc-fbcb-11e7-a451-001c422fc6d5 PodNamespace:monitoring}
2018-01-17 21:18:43.267037 I | ceph-volumeattacher: attaching volume replicapool/pvc-fccbc5ec-fbcb-11e7-a451-001c422fc6d5 cluster rook
2018-01-17 21:18:43.283928 I | cephmon: parsing mon endpoints: rook-ceph-mon0=10.106.21.163:6790,rook-ceph-mon1=10.108.42.15:6790,rook-ceph-mon2=10.96.218.16:6790
2018-01-17 21:18:43.284156 I | op-mon: loaded: maxMonID=2, mons=map[rook-ceph-mon0:0xc4202d7600 rook-ceph-mon1:0xc4202d77c0 rook-ceph-mon2:0xc4202d7920], mapping=&{Node:map[rook-ceph-mon2:0xc4201be640 rook-ceph-mon0:0xc4202d7b40 rook-ceph-mon1:0xc4201be3a0] Port:map[]}
2018-01-17 21:18:43.284586 I | exec: Running command: rbd map replicapool/pvc-fccbc5ec-fbcb-11e7-a451-001c422fc6d5 --id admin --cluster=rook --keyring=/tmp/rook.keyring645059175 -m 10.106.21.163:6790,10.108.42.15:6790,10.96.218.16:6790 --conf=/dev/null
2018-01-17 21:18:47.731553 I | flexdriver: ERROR: logging before flag.Parse: I0117 21:18:43.549188 12943 mount_linux.go:379] `fsck` error fsck from util-linux 2.25.2
fsck.ext2: Bad magic number in super-block while trying to open /dev/rbd0
/dev/rbd0:
The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem. If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
or
e2fsck -b 32768 <device>
ERROR: logging before flag.Parse: E0117 21:18:43.576729 12943 mount_linux.go:140] Mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/plugins/rook.io/rook/mounts/pvc-fccbc5ec-fbcb-11e7-a451-001c422fc6d5 --scope -- mount -o rw,defaults /dev/rbd0 /var/lib/kubelet/plugins/rook.io/rook/mounts/pvc-fccbc5ec-fbcb-11e7-a451-001c422fc6d5
Output: Running as unit run-12968.scope.
mount: wrong fs type, bad option, bad superblock on /dev/rbd0,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so.
ERROR: logging before flag.Parse: I0117 21:18:43.609092 12943 mount_linux.go:404] Disk "/dev/rbd0" appears to be unformatted, attempting to format as type: "ext4" with options: [-F /dev/rbd0]
ERROR: logging before flag.Parse: I0117 21:18:47.719224 12943 mount_linux.go:408] Disk successfully formatted (mkfs): ext4 - /dev/rbd0 /var/lib/kubelet/plugins/rook.io/rook/mounts/pvc-fccbc5ec-fbcb-11e7-a451-001c422fc6d5
2018-01-17 21:18:47.731934 I | flexdriver: Ignore error about Mount failed: exit status 32. Kubernetes does this to check whether the volume has been formatted. It will format and retry again. https://github.com/kubernetes/kubernetes/blob/release-1.7/pkg/util/mount/mount_linux.go#L360
2018-01-17 21:18:47.732077 I | flexdriver: formatting volume pvc-fccbc5ec-fbcb-11e7-a451-001c422fc6d5 devicePath /dev/rbd0 deviceMountPath /var/lib/kubelet/plugins/rook.io/rook/mounts/pvc-fccbc5ec-fbcb-11e7-a451-001c422fc6d5 fs with options [rw]
2018-01-17 21:18:47.732171 I | flexdriver: mounting global mount path /var/lib/kubelet/plugins/rook.io/rook/mounts/pvc-fccbc5ec-fbcb-11e7-a451-001c422fc6d5 on /var/lib/kubelet/pods/fcd0f7dc-fbcb-11e7-a451-001c422fc6d5/volumes/rook.io~rook/pvc-fccbc5ec-fbcb-11e7-a451-001c422fc6d5
2018-01-17 21:18:47.807473 I | flexdriver:
2018-01-17 21:18:47.807871 I | flexdriver: volume replicapool/pvc-fccbc5ec-fbcb-11e7-a451-001c422fc6d5 has been attached and mounted
2018-01-17 21:19:03.679318 I | flexdriver: calling agent to attach volume replicapool/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5
2018-01-17 21:19:03.684080 I | flexvolume: Creating Volume attach Resource rook-system/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5: {Image:pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5 Pool:replicapool ClusterName:rook StorageClass:rook-block MountDir:/var/lib/kubelet/pods/0a117d2d-fbcc-11e7-a451-001c422fc6d5/volumes/rook.io~rook/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5 FsName: Path: RW:rw FsType: VolumeName:pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5 Pod:postgres-1888422428-n0z4q PodID:0a117d2d-fbcc-11e7-a451-001c422fc6d5 PodNamespace:default}
2018-01-17 21:19:03.690396 I | ceph-volumeattacher: attaching volume replicapool/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5 cluster rook
2018-01-17 21:19:03.700942 I | cephmon: parsing mon endpoints: rook-ceph-mon0=10.106.21.163:6790,rook-ceph-mon1=10.108.42.15:6790,rook-ceph-mon2=10.96.218.16:6790
2018-01-17 21:19:03.701008 I | op-mon: loaded: maxMonID=2, mons=map[rook-ceph-mon0:0xc4205c5d80 rook-ceph-mon1:0xc4205c5de0 rook-ceph-mon2:0xc4205c5e20], mapping=&{Node:map[rook-ceph-mon1:0xc4205c5f00 rook-ceph-mon2:0xc420250240 rook-ceph-mon0:0xc4205c5e60] Port:map[]}
2018-01-17 21:19:03.701306 I | exec: Running command: rbd map replicapool/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5 --id admin --cluster=rook --keyring=/tmp/rook.keyring291509402 -m 10.96.218.16:6790,10.106.21.163:6790,10.108.42.15:6790 --conf=/dev/null
2018-01-17 21:19:07.002254 I | flexdriver: ERROR: logging before flag.Parse: I0117 21:19:03.835792 13356 mount_linux.go:379] `fsck` error fsck from util-linux 2.25.2
fsck.ext2: Bad magic number in super-block while trying to open /dev/rbd1
/dev/rbd1:
The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem. If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
or
e2fsck -b 32768 <device>
ERROR: logging before flag.Parse: E0117 21:19:03.884910 13356 mount_linux.go:140] Mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/plugins/rook.io/rook/mounts/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5 --scope -- mount -o rw,defaults /dev/rbd1 /var/lib/kubelet/plugins/rook.io/rook/mounts/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5
Output: Running as unit run-13374.scope.
mount: wrong fs type, bad option, bad superblock on /dev/rbd1,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so.
ERROR: logging before flag.Parse: I0117 21:19:03.946502 13356 mount_linux.go:404] Disk "/dev/rbd1" appears to be unformatted, attempting to format as type: "ext4" with options: [-F /dev/rbd1]
ERROR: logging before flag.Parse: I0117 21:19:06.954245 13356 mount_linux.go:408] Disk successfully formatted (mkfs): ext4 - /dev/rbd1 /var/lib/kubelet/plugins/rook.io/rook/mounts/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5
2018-01-17 21:19:07.003627 I | flexdriver: Ignore error about Mount failed: exit status 32. Kubernetes does this to check whether the volume has been formatted. It will format and retry again. https://github.com/kubernetes/kubernetes/blob/release-1.7/pkg/util/mount/mount_linux.go#L360
2018-01-17 21:19:07.003780 I | flexdriver: formatting volume pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5 devicePath /dev/rbd1 deviceMountPath /var/lib/kubelet/plugins/rook.io/rook/mounts/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5 fs with options [rw]
2018-01-17 21:19:07.003892 I | flexdriver: mounting global mount path /var/lib/kubelet/plugins/rook.io/rook/mounts/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5 on /var/lib/kubelet/pods/0a117d2d-fbcc-11e7-a451-001c422fc6d5/volumes/rook.io~rook/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5
2018-01-17 21:19:07.141955 I | flexdriver:
2018-01-17 21:19:07.142184 I | flexdriver: volume replicapool/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5 has been attached and mounted
2018-01-17 21:34:04.930337 I | flexdriver: unmounting mount dir: /var/lib/kubelet/pods/0a117d2d-fbcc-11e7-a451-001c422fc6d5/volumes/rook.io~rook/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5
2018-01-17 21:34:04.952159 E | flexdriver: Unmount volume at mount dir /var/lib/kubelet/pods/0a117d2d-fbcc-11e7-a451-001c422fc6d5/volumes/rook.io~rook/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5 failed: failed to get persistent volume pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5: persistentvolumes "pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5" not found
2018-01-17 21:34:05.551752 I | flexdriver: unmounting mount dir: /var/lib/kubelet/pods/0a117d2d-fbcc-11e7-a451-001c422fc6d5/volumes/rook.io~rook/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5
2018-01-17 21:34:05.562658 E | flexdriver: Unmount volume at mount dir /var/lib/kubelet/pods/0a117d2d-fbcc-11e7-a451-001c422fc6d5/volumes/rook.io~rook/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5 failed: failed to get persistent volume pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5: persistentvolumes "pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5" not found
2018-01-17 21:34:06.682312 I | flexdriver: unmounting mount dir: /var/lib/kubelet/pods/0a117d2d-fbcc-11e7-a451-001c422fc6d5/volumes/rook.io~rook/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5
2018-01-17 21:34:06.694397 E | flexdriver: Unmount volume at mount dir /var/lib/kubelet/pods/0a117d2d-fbcc-11e7-a451-001c422fc6d5/volumes/rook.io~rook/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5 failed: failed to get persistent volume pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5: persistentvolumes "pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5" not found
2018-01-17 21:34:08.784047 I | flexdriver: unmounting mount dir: /var/lib/kubelet/pods/0a117d2d-fbcc-11e7-a451-001c422fc6d5/volumes/rook.io~rook/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5
2018-01-17 21:34:08.798115 E | flexdriver: Unmount volume at mount dir /var/lib/kubelet/pods/0a117d2d-fbcc-11e7-a451-001c422fc6d5/volumes/rook.io~rook/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5 failed: failed to get persistent volume pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5: persistentvolumes "pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5" not found
2018-01-17 21:34:12.979695 I | flexdriver: unmounting mount dir: /var/lib/kubelet/pods/0a117d2d-fbcc-11e7-a451-001c422fc6d5/volumes/rook.io~rook/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5
2018-01-17 21:34:12.998820 E | flexdriver: Unmount volume at mount dir /var/lib/kubelet/pods/0a117d2d-fbcc-11e7-a451-001c422fc6d5/volumes/rook.io~rook/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5 failed: failed to get persistent volume pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5: persistentvolumes "pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5" not found
2018-01-17 21:34:21.089621 I | flexdriver: unmounting mount dir: /var/lib/kubelet/pods/0a117d2d-fbcc-11e7-a451-001c422fc6d5/volumes/rook.io~rook/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5
2018-01-17 21:34:21.101838 E | flexdriver: Unmount volume at mount dir /var/lib/kubelet/pods/0a117d2d-fbcc-11e7-a451-001c422fc6d5/volumes/rook.io~rook/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5 failed: failed to get persistent volume pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5: persistentvolumes "pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5" not found
2018-01-17 21:34:37.216931 I | flexdriver: unmounting mount dir: /var/lib/kubelet/pods/0a117d2d-fbcc-11e7-a451-001c422fc6d5/volumes/rook.io~rook/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5
2018-01-17 21:34:37.228335 E | flexdriver: Unmount volume at mount dir /var/lib/kubelet/pods/0a117d2d-fbcc-11e7-a451-001c422fc6d5/volumes/rook.io~rook/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5 failed: failed to get persistent volume pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5: persistentvolumes "pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5" not found
2018-01-17 21:35:09.333736 I | flexdriver: unmounting mount dir: /var/lib/kubelet/pods/0a117d2d-fbcc-11e7-a451-001c422fc6d5/volumes/rook.io~rook/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5
2018-01-17 21:35:09.343836 E | flexdriver: Unmount volume at mount dir /var/lib/kubelet/pods/0a117d2d-fbcc-11e7-a451-001c422fc6d5/volumes/rook.io~rook/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5 failed: failed to get persistent volume pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5: persistentvolumes "pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5" not found
2018-01-17 21:36:13.505298 I | flexdriver: unmounting mount dir: /var/lib/kubelet/pods/0a117d2d-fbcc-11e7-a451-001c422fc6d5/volumes/rook.io~rook/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5
2018-01-17 21:36:13.519035 E | flexdriver: Unmount volume at mount dir /var/lib/kubelet/pods/0a117d2d-fbcc-11e7-a451-001c422fc6d5/volumes/rook.io~rook/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5 failed: failed to get persistent volume pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5: persistentvolumes "pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5" not found
2018-01-17 21:37:49.607009 I | flexdriver: calling agent to attach volume replicapool/pvc-a953c80d-fbce-11e7-a451-001c422fc6d5
2018-01-17 21:37:49.610485 I | flexvolume: Creating Volume attach Resource rook-system/pvc-a953c80d-fbce-11e7-a451-001c422fc6d5: {Image:pvc-a953c80d-fbce-11e7-a451-001c422fc6d5 Pool:replicapool ClusterName:rook StorageClass:rook-block MountDir:/var/lib/kubelet/pods/a95feeee-fbce-11e7-a451-001c422fc6d5/volumes/rook.io~rook/pvc-a953c80d-fbce-11e7-a451-001c422fc6d5 FsName: Path: RW:rw FsType: VolumeName:pvc-a953c80d-fbce-11e7-a451-001c422fc6d5 Pod:postgres-1888422428-gpndf PodID:a95feeee-fbce-11e7-a451-001c422fc6d5 PodNamespace:default}
2018-01-17 21:37:49.616661 I | ceph-volumeattacher: attaching volume replicapool/pvc-a953c80d-fbce-11e7-a451-001c422fc6d5 cluster rook
2018-01-17 21:37:49.622794 I | cephmon: parsing mon endpoints: rook-ceph-mon0=10.106.21.163:6790,rook-ceph-mon1=10.108.42.15:6790,rook-ceph-mon2=10.96.218.16:6790
2018-01-17 21:37:49.622847 I | op-mon: loaded: maxMonID=2, mons=map[rook-ceph-mon1:0xc420382e80 rook-ceph-mon2:0xc420382fe0 rook-ceph-mon0:0xc420382da0], mapping=&{Node:map[rook-ceph-mon2:0xc4203831a0 rook-ceph-mon0:0xc420383040 rook-ceph-mon1:0xc4203830e0] Port:map[]}
2018-01-17 21:37:49.623567 I | exec: Running command: rbd map replicapool/pvc-a953c80d-fbce-11e7-a451-001c422fc6d5 --id admin --cluster=rook --keyring=/tmp/rook.keyring979537201 -m 10.106.21.163:6790,10.108.42.15:6790,10.96.218.16:6790 --conf=/dev/null
2018-01-17 21:37:50.938542 I | flexdriver: ERROR: logging before flag.Parse: I0117 21:37:49.761529 30511 mount_linux.go:379] `fsck` error fsck from util-linux 2.25.2
fsck.ext2: Bad magic number in super-block while trying to open /dev/rbd2
/dev/rbd2:
The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem. If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
or
e2fsck -b 32768 <device>
ERROR: logging before flag.Parse: E0117 21:37:49.796748 30511 mount_linux.go:140] Mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/plugins/rook.io/rook/mounts/pvc-a953c80d-fbce-11e7-a451-001c422fc6d5 --scope -- mount -o rw,defaults /dev/rbd2 /var/lib/kubelet/plugins/rook.io/rook/mounts/pvc-a953c80d-fbce-11e7-a451-001c422fc6d5
Output: Running as unit run-30532.scope.
mount: wrong fs type, bad option, bad superblock on /dev/rbd2,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so.
ERROR: logging before flag.Parse: I0117 21:37:49.821543 30511 mount_linux.go:404] Disk "/dev/rbd2" appears to be unformatted, attempting to format as type: "ext4" with options: [-F /dev/rbd2]
ERROR: logging before flag.Parse: I0117 21:37:50.924345 30511 mount_linux.go:408] Disk successfully formatted (mkfs): ext4 - /dev/rbd2 /var/lib/kubelet/plugins/rook.io/rook/mounts/pvc-a953c80d-fbce-11e7-a451-001c422fc6d5
2018-01-17 21:37:50.938749 I | flexdriver: Ignore error about Mount failed: exit status 32. Kubernetes does this to check whether the volume has been formatted. It will format and retry again. https://github.com/kubernetes/kubernetes/blob/release-1.7/pkg/util/mount/mount_linux.go#L360
2018-01-17 21:37:50.938973 I | flexdriver: formatting volume pvc-a953c80d-fbce-11e7-a451-001c422fc6d5 devicePath /dev/rbd2 deviceMountPath /var/lib/kubelet/plugins/rook.io/rook/mounts/pvc-a953c80d-fbce-11e7-a451-001c422fc6d5 fs with options [rw]
2018-01-17 21:37:50.939095 I | flexdriver: mounting global mount path /var/lib/kubelet/plugins/rook.io/rook/mounts/pvc-a953c80d-fbce-11e7-a451-001c422fc6d5 on /var/lib/kubelet/pods/a95feeee-fbce-11e7-a451-001c422fc6d5/volumes/rook.io~rook/pvc-a953c80d-fbce-11e7-a451-001c422fc6d5
2018-01-17 21:37:51.026264 I | flexdriver:
2018-01-17 21:37:51.026787 I | flexdriver: volume replicapool/pvc-a953c80d-fbce-11e7-a451-001c422fc6d5 has been attached and mounted
2018-01-17 21:38:15.681678 I | flexdriver: unmounting mount dir: /var/lib/kubelet/pods/0a117d2d-fbcc-11e7-a451-001c422fc6d5/volumes/rook.io~rook/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5
2018-01-17 21:38:15.707012 E | flexdriver: Unmount volume at mount dir /var/lib/kubelet/pods/0a117d2d-fbcc-11e7-a451-001c422fc6d5/volumes/rook.io~rook/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5 failed: failed to get persistent volume pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5: persistentvolumes "pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5" not found
2018-01-17 21:40:18.134657 I | flexdriver: unmounting mount dir: /var/lib/kubelet/pods/0a117d2d-fbcc-11e7-a451-001c422fc6d5/volumes/rook.io~rook/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5
2018-01-17 21:40:18.148850 E | flexdriver: Unmount volume at mount dir /var/lib/kubelet/pods/0a117d2d-fbcc-11e7-a451-001c422fc6d5/volumes/rook.io~rook/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5 failed: failed to get persistent volume pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5: persistentvolumes "pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5" not found
2018-01-17 21:42:20.233874 I | flexdriver: unmounting mount dir: /var/lib/kubelet/pods/0a117d2d-fbcc-11e7-a451-001c422fc6d5/volumes/rook.io~rook/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5
2018-01-17 21:42:20.245085 E | flexdriver: Unmount volume at mount dir /var/lib/kubelet/pods/0a117d2d-fbcc-11e7-a451-001c422fc6d5/volumes/rook.io~rook/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5 failed: failed to get persistent volume pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5: persistentvolumes "pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5" not found
2018-01-17 21:44:22.363189 I | flexdriver: unmounting mount dir: /var/lib/kubelet/pods/0a117d2d-fbcc-11e7-a451-001c422fc6d5/volumes/rook.io~rook/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5
2018-01-17 21:44:22.374993 E | flexdriver: Unmount volume at mount dir /var/lib/kubelet/pods/0a117d2d-fbcc-11e7-a451-001c422fc6d5/volumes/rook.io~rook/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5 failed: failed to get persistent volume pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5: persistentvolumes "pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5" not found
2018-01-17 21:46:24.451096 I | flexdriver: unmounting mount dir: /var/lib/kubelet/pods/0a117d2d-fbcc-11e7-a451-001c422fc6d5/volumes/rook.io~rook/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5
2018-01-17 21:46:24.465643 E | flexdriver: Unmount volume at mount dir /var/lib/kubelet/pods/0a117d2d-fbcc-11e7-a451-001c422fc6d5/volumes/rook.io~rook/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5 failed: failed to get persistent volume pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5: persistentvolumes "pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5" not found
2018-01-17 21:48:26.591674 I | flexdriver: unmounting mount dir: /var/lib/kubelet/pods/0a117d2d-fbcc-11e7-a451-001c422fc6d5/volumes/rook.io~rook/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5
2018-01-17 21:48:26.605288 E | flexdriver: Unmount volume at mount dir /var/lib/kubelet/pods/0a117d2d-fbcc-11e7-a451-001c422fc6d5/volumes/rook.io~rook/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5 failed: failed to get persistent volume pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5: persistentvolumes "pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5" not found
2018-01-17 21:50:28.709227 I | flexdriver: unmounting mount dir: /var/lib/kubelet/pods/0a117d2d-fbcc-11e7-a451-001c422fc6d5/volumes/rook.io~rook/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5
2018-01-17 21:50:28.724254 E | flexdriver: Unmount volume at mount dir /var/lib/kubelet/pods/0a117d2d-fbcc-11e7-a451-001c422fc6d5/volumes/rook.io~rook/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5 failed: failed to get persistent volume pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5: persistentvolumes "pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5" not found
> krs logs rook-operator-3244829837-jdstq
2018-01-17 21:05:58.269913 I | rook: starting Rook v0.6.0-150.g2b5acad.dirty with arguments '/usr/local/bin/rook operator'
2018-01-17 21:05:58.270098 I | rook: flag values: --help=false, --log-level=INFO, --mon-healthcheck-interval=45s, --mon-out-timeout=5m0s
2018-01-17 21:05:58.272051 I | rook: starting operator
2018-01-17 21:06:00.964577 I | op-k8sutil: creating cluster role rook-agent
2018-01-17 21:06:01.160973 I | op-agent: discovered flexvolume dir path from source NodeConfigKubelet. value: /etc/kubernetes/volumeplugins
2018-01-17 21:06:01.179124 I | op-agent: rook-agent daemonset started
2018-01-17 21:06:01.192984 I | operator: rook-provisioner started
2018-01-17 21:06:01.193191 I | op-cluster: start watching clusters in all namespaces
2018-01-17 21:06:13.946750 I | op-cluster: starting cluster in namespace rook
2018-01-17 21:06:20.005154 I | op-mon: start running mons
2018-01-17 21:06:20.016964 I | exec: Running command: ceph-authtool --create-keyring /var/lib/rook/rook/mon.keyring --gen-key -n mon. --cap mon 'allow *'
2018-01-17 21:06:25.484836 I | exec: Running command: ceph-authtool --create-keyring /var/lib/rook/rook/client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mgr 'allow *' --cap mds 'allow'
2018-01-17 21:06:30.518405 I | op-mon: creating mon secrets for a new cluster
2018-01-17 21:06:30.544298 I | op-mon: saved mon endpoints to config map map[data: maxMonId:-1 mapping:{"node":{},"port":{}}]
2018-01-17 21:06:30.544880 I | cephmon: writing config file /var/lib/rook/rook/rook.config
2018-01-17 21:06:30.545159 I | cephmon: generated admin config in /var/lib/rook/rook
2018-01-17 21:06:30.574292 I | op-mon: Found 2 running nodes without mons
2018-01-17 21:06:30.769443 I | op-mon: mon rook-ceph-mon0 running at 10.106.21.163:6790
2018-01-17 21:06:30.920783 I | op-mon: saved mon endpoints to config map map[data:rook-ceph-mon0=10.106.21.163:6790 maxMonId:2 mapping:{"node":{"rook-ceph-mon0":{"Name":"172.17.8.101","Address":"172.17.8.101"},"rook-ceph-mon1":{"Name":"172.17.8.102","Address":"172.17.8.102"},"rook-ceph-mon2":{"Name":"172.17.8.101","Address":"172.17.8.101"}},"port":{}}]
2018-01-17 21:06:30.921406 I | cephmon: writing config file /var/lib/rook/rook/rook.config
2018-01-17 21:06:30.921669 I | cephmon: generated admin config in /var/lib/rook/rook
2018-01-17 21:06:30.922080 I | cephmon: writing config file /var/lib/rook/rook/rook.config
2018-01-17 21:06:30.922306 I | cephmon: generated admin config in /var/lib/rook/rook
2018-01-17 21:06:30.970971 I | op-mon: mons created: 1
2018-01-17 21:06:30.971529 I | op-mon: waiting for mon quorum
2018-01-17 21:06:30.971868 I | exec: Running command: ceph mon_status --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring --format json --out-file /tmp/533452541
2018-01-17 21:06:34.405582 I | op-mon: Ceph monitors formed quorum
2018-01-17 21:06:34.444969 I | op-mon: mon rook-ceph-mon0 running at 10.106.21.163:6790
2018-01-17 21:06:34.461989 I | op-mon: mon rook-ceph-mon1 running at 10.108.42.15:6790
2018-01-17 21:06:34.499103 I | op-mon: saved mon endpoints to config map map[mapping:{"node":{"rook-ceph-mon0":{"Name":"172.17.8.101","Address":"172.17.8.101"},"rook-ceph-mon1":{"Name":"172.17.8.102","Address":"172.17.8.102"},"rook-ceph-mon2":{"Name":"172.17.8.101","Address":"172.17.8.101"}},"port":{}} data:rook-ceph-mon0=10.106.21.163:6790,rook-ceph-mon1=10.108.42.15:6790 maxMonId:2]
2018-01-17 21:06:34.499620 I | cephmon: writing config file /var/lib/rook/rook/rook.config
2018-01-17 21:06:34.499867 I | cephmon: generated admin config in /var/lib/rook/rook
2018-01-17 21:06:34.500161 I | cephmon: writing config file /var/lib/rook/rook/rook.config
2018-01-17 21:06:34.500314 I | cephmon: generated admin config in /var/lib/rook/rook
2018-01-17 21:06:34.571662 I | op-mon: replicaset rook-ceph-mon0 already exists
2018-01-17 21:06:34.598303 I | op-mon: mons created: 2
2018-01-17 21:06:34.598397 I | op-mon: waiting for mon quorum
2018-01-17 21:06:34.598632 I | exec: Running command: ceph mon_status --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring --format json --out-file /tmp/939971128
2018-01-17 21:06:34.857050 W | op-mon: failed to find initial monitor rook-ceph-mon1 in mon map
2018-01-17 21:06:39.857681 I | exec: Running command: ceph mon_status --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring --format json --out-file /tmp/617947703
2018-01-17 21:06:40.051329 W | op-mon: failed to find initial monitor rook-ceph-mon1 in mon map
2018-01-17 21:06:45.051626 I | exec: Running command: ceph mon_status --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring --format json --out-file /tmp/955457066
2018-01-17 21:06:46.661709 I | op-mon: Ceph monitors formed quorum
2018-01-17 21:06:46.689677 I | op-mon: mon rook-ceph-mon0 running at 10.106.21.163:6790
2018-01-17 21:06:46.707675 I | op-mon: mon rook-ceph-mon1 running at 10.108.42.15:6790
2018-01-17 21:06:46.718042 I | op-mon: mon rook-ceph-mon2 running at 10.96.218.16:6790
2018-01-17 21:06:46.749188 I | op-mon: saved mon endpoints to config map map[mapping:{"node":{"rook-ceph-mon0":{"Name":"172.17.8.101","Address":"172.17.8.101"},"rook-ceph-mon1":{"Name":"172.17.8.102","Address":"172.17.8.102"},"rook-ceph-mon2":{"Name":"172.17.8.101","Address":"172.17.8.101"}},"port":{}} data:rook-ceph-mon0=10.106.21.163:6790,rook-ceph-mon1=10.108.42.15:6790,rook-ceph-mon2=10.96.218.16:6790 maxMonId:2]
2018-01-17 21:06:46.749965 I | cephmon: writing config file /var/lib/rook/rook/rook.config
2018-01-17 21:06:46.750113 I | cephmon: generated admin config in /var/lib/rook/rook
2018-01-17 21:06:46.750678 I | cephmon: writing config file /var/lib/rook/rook/rook.config
2018-01-17 21:06:46.751124 I | cephmon: generated admin config in /var/lib/rook/rook
2018-01-17 21:06:46.756755 I | op-mon: replicaset rook-ceph-mon0 already exists
2018-01-17 21:06:46.760989 I | op-mon: replicaset rook-ceph-mon1 already exists
2018-01-17 21:06:46.765377 I | op-mon: mons created: 3
2018-01-17 21:06:46.765406 I | op-mon: waiting for mon quorum
2018-01-17 21:06:46.765528 I | exec: Running command: ceph mon_status --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring --format json --out-file /tmp/041619329
2018-01-17 21:06:47.198299 W | op-mon: failed to find initial monitor rook-ceph-mon2 in mon map
2018-01-17 21:06:52.198536 I | exec: Running command: ceph mon_status --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring --format json --out-file /tmp/648058604
2018-01-17 21:06:52.614327 W | op-mon: failed to find initial monitor rook-ceph-mon2 in mon map
2018-01-17 21:06:57.614673 I | exec: Running command: ceph mon_status --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring --format json --out-file /tmp/329567067
2018-01-17 21:06:58.027696 I | op-mon: Ceph monitors formed quorum
2018-01-17 21:06:58.031800 I | op-cluster: creating initial crushmap
2018-01-17 21:06:58.031817 I | cephclient: setting crush tunables to firefly
2018-01-17 21:06:58.031900 I | exec: Running command: ceph osd crush tunables firefly --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring --format plain --out-file /tmp/146755326
2018-01-17 21:06:58.236228 I | exec: adjusted tunables profile to firefly
2018-01-17 21:06:58.236439 I | cephclient: succeeded setting crush tunables to profile firefly:
2018-01-17 21:06:58.237443 I | exec: Running command: crushtool -c /tmp/657954373 -o /tmp/884881632
2018-01-17 21:06:58.258699 I | exec: Running command: ceph osd setcrushmap -i /tmp/884881632 --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring --format json --out-file /tmp/326387903
2018-01-17 21:06:59.250349 I | exec: 3
2018-01-17 21:06:59.250639 I | op-cluster: created initial crushmap
2018-01-17 21:06:59.254390 I | op-mgr: start running mgr
2018-01-17 21:06:59.256684 I | exec: Running command: ceph auth get-or-create-key mgr.rook-ceph-mgr0 mon allow * --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring --format json --out-file /tmp/208973586
2018-01-17 21:06:59.471790 I | exec: Running command: ceph mgr module enable prometheus --force --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring --format json --out-file /tmp/334806345
2018-01-17 21:06:59.690369 I | op-mgr: rook-ceph-mgr0 service started
2018-01-17 21:06:59.711060 I | op-mgr: rook-ceph-mgr0 deployment started
2018-01-17 21:06:59.711147 I | op-api: starting the Rook api
2018-01-17 21:06:59.791778 I | op-api: API service running at 10.98.107.241:8124
2018-01-17 21:06:59.862311 I | op-k8sutil: creating role rook-api in namespace rook
2018-01-17 21:06:59.906470 I | op-api: api deployment started
2018-01-17 21:06:59.906729 I | op-osd: start running osds in namespace rook
2018-01-17 21:06:59.930073 I | op-k8sutil: creating role rook-ceph-osd in namespace rook
2018-01-17 21:07:00.157094 W | op-osd: useAllNodes is set to false and no nodes are specified, no OSD pods are going to be created
2018-01-17 21:07:00.167591 I | exec: Running command: ceph osd set noscrub --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring --format json --out-file /tmp/898323476
2018-01-17 21:07:01.288940 I | exec: noscrub is set
2018-01-17 21:07:01.289263 I | exec: Running command: ceph osd set nodeep-scrub --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring --format json --out-file /tmp/292626531
2018-01-17 21:07:02.329003 I | exec: nodeep-scrub is set
2018-01-17 21:07:02.332178 I | op-osd: completed running osds in namespace rook
2018-01-17 21:07:02.332453 I | exec: Running command: ceph osd unset noscrub --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring --format json --out-file /tmp/913346150
2018-01-17 21:07:03.341725 I | exec: noscrub is unset
2018-01-17 21:07:03.343072 I | exec: Running command: ceph osd unset nodeep-scrub --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring --format json --out-file /tmp/235435149
2018-01-17 21:07:04.357200 I | exec: nodeep-scrub is unset
2018-01-17 21:07:04.357444 I | op-cluster: Done creating rook instance in namespace rook
2018-01-17 21:07:04.364251 I | op-pool: start watching pool resources in namespace rook
2018-01-17 21:07:04.364421 I | op-object: start watching object store resources in namespace rook
2018-01-17 21:07:04.364480 I | op-file: start watching filesystem resource in namespace rook
2018-01-17 21:07:04.377830 E | op-cluster: failed to add finalizer to cluster crd. failed to add finalizer to cluster. Operation cannot be fulfilled on clusters.rook.io "rook": the object has been modified; please apply your changes to the latest version and try again
2018-01-17 21:07:04.377926 I | op-cluster: update to cluster rook
2018-01-17 21:07:04.377949 I | op-cluster: no supported updates made to the cluster
2018-01-17 21:07:04.377986 I | op-cluster: update to cluster rook
2018-01-17 21:07:04.378001 I | op-cluster: no supported updates made to the cluster
2018-01-17 21:07:07.283380 I | op-pool: creating pool replicapool in namespace rook
2018-01-17 21:07:07.283532 I | exec: Running command: ceph osd pool create replicapool 0 replicated --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring --format json --out-file /tmp/212839048
2018-01-17 21:07:07.959419 I | exec: pool 'replicapool' created
2018-01-17 21:07:07.961894 I | exec: Running command: ceph osd pool set replicapool size 1 --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring --format json --out-file /tmp/005966407
2018-01-17 21:07:09.085179 I | exec: set pool 1 size to 1
2018-01-17 21:07:09.085535 I | exec: Running command: ceph osd pool application enable replicapool replicapool --yes-i-really-mean-it --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring --format json --out-file /tmp/117628154
2018-01-17 21:07:10.096331 I | exec: enabled application 'replicapool' on pool 'replicapool'
2018-01-17 21:07:10.096725 I | cephclient: creating pool replicapool succeeded, buf:
2018-01-17 21:07:10.096782 I | op-pool: created pool replicapool
2018-01-17 21:16:06.662706 I | op-cluster: update to cluster rook
2018-01-17 21:16:06.663543 I | op-cluster: updating cluster rook
2018-01-17 21:16:36.679206 I | op-mon: start running mons
2018-01-17 21:16:36.683613 I | cephmon: parsing mon endpoints: rook-ceph-mon0=10.106.21.163:6790,rook-ceph-mon1=10.108.42.15:6790,rook-ceph-mon2=10.96.218.16:6790
2018-01-17 21:16:36.683670 I | op-mon: loaded: maxMonID=2, mons=map[rook-ceph-mon0:0xc42020b8e0 rook-ceph-mon1:0xc42020bd60 rook-ceph-mon2:0xc4201beb40], mapping=&{Node:map[rook-ceph-mon0:0xc4201bed00 rook-ceph-mon1:0xc4201bee00 rook-ceph-mon2:0xc4201bf0c0] Port:map[]}
2018-01-17 21:16:36.690841 I | op-mon: saved mon endpoints to config map map[data:rook-ceph-mon0=10.106.21.163:6790,rook-ceph-mon1=10.108.42.15:6790,rook-ceph-mon2=10.96.218.16:6790 maxMonId:2 mapping:{"node":{"rook-ceph-mon0":{"Name":"172.17.8.101","Address":"172.17.8.101"},"rook-ceph-mon1":{"Name":"172.17.8.102","Address":"172.17.8.102"},"rook-ceph-mon2":{"Name":"172.17.8.101","Address":"172.17.8.101"}},"port":{}}]
2018-01-17 21:16:36.691613 I | cephmon: writing config file /var/lib/rook/rook/rook.config
2018-01-17 21:16:36.691838 I | cephmon: generated admin config in /var/lib/rook/rook
2018-01-17 21:16:36.885648 I | op-mgr: start running mgr
2018-01-17 21:16:36.888182 I | op-mgr: the mgr keyring was already generated
2018-01-17 21:16:36.888438 I | exec: Running command: ceph mgr module enable prometheus --force --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring --format json --out-file /tmp/483753688
2018-01-17 21:16:38.008016 I | op-mgr: rook-ceph-mgr0 service already exists
2018-01-17 21:16:38.017326 I | op-mgr: rook-ceph-mgr0 deployment already exists
2018-01-17 21:16:38.017451 I | op-api: starting the Rook api
2018-01-17 21:16:38.040293 I | op-api: api service already running
2018-01-17 21:16:38.046248 I | op-k8sutil: role rook-api already exists in namespace rook. updating if needed.
2018-01-17 21:16:38.066807 I | op-api: api deployment already exists
2018-01-17 21:16:38.066895 I | op-osd: start running osds in namespace rook
2018-01-17 21:16:38.072133 I | op-k8sutil: role rook-ceph-osd already exists in namespace rook. updating if needed.
2018-01-17 21:16:38.091939 I | exec: Running command: ceph osd set noscrub --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring --format json --out-file /tmp/487268951
2018-01-17 21:16:39.002586 I | exec: noscrub is set
2018-01-17 21:16:39.003175 I | exec: Running command: ceph osd set nodeep-scrub --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring --format json --out-file /tmp/524490186
2018-01-17 21:16:40.025718 I | exec: nodeep-scrub is set
2018-01-17 21:16:40.039224 I | op-osd: osd replica set started for node 172.17.8.101
2018-01-17 21:17:19.595381 I | op-osd: completed running osds in namespace rook
2018-01-17 21:17:19.595524 I | exec: Running command: ceph osd unset noscrub --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring --format json --out-file /tmp/442858892
2018-01-17 21:17:20.550536 I | exec: noscrub is unset
2018-01-17 21:17:20.551076 I | exec: Running command: ceph osd unset nodeep-scrub --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring --format json --out-file /tmp/179130747
2018-01-17 21:17:21.565704 I | exec: nodeep-scrub is unset
2018-01-17 21:17:21.565791 I | op-cluster: Done creating rook instance in namespace rook
2018-01-17 21:17:21.574114 I | op-cluster: update to cluster rook
2018-01-17 21:17:21.574141 I | op-cluster: no supported updates made to the cluster
2018-01-17 21:17:21.574896 I | op-cluster: update to cluster rook
2018-01-17 21:17:21.574914 I | op-cluster: no supported updates made to the cluster
2018-01-17 21:18:38.802646 I | op-provisioner: creating volume with configuration {pool:replicapool clusterName:rook fstype:}
2018-01-17 21:18:38.802675 I | exec: Running command: rbd create replicapool/pvc-fc28b4e5-fbcb-11e7-a451-001c422fc6d5 --size 5120 --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring
2018-01-17 21:18:39.918945 I | op-provisioner: creating volume with configuration {pool:replicapool clusterName:rook fstype:}
2018-01-17 21:18:39.918980 I | exec: Running command: rbd create replicapool/pvc-fccbc5ec-fbcb-11e7-a451-001c422fc6d5 --size 10240 --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring
2018-01-17 21:18:40.163974 I | exec: Running command: rbd ls -l replicapool --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring --format json
2018-01-17 21:18:40.260797 I | op-provisioner: Rook block image created: pvc-fc28b4e5-fbcb-11e7-a451-001c422fc6d5
2018-01-17 21:18:40.263794 I | op-provisioner: successfully created Rook Block volume &FlexVolumeSource{Driver:rook.io/rook,FSType:,SecretRef:nil,ReadOnly:false,Options:map[string]string{image: pvc-fc28b4e5-fbcb-11e7-a451-001c422fc6d5,pool: replicapool,storageClass: rook-block,},}
2018-01-17 21:18:41.245343 I | exec: Running command: rbd ls -l replicapool --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring --format json
2018-01-17 21:18:41.350986 I | op-provisioner: Rook block image created: pvc-fccbc5ec-fbcb-11e7-a451-001c422fc6d5
2018-01-17 21:18:41.351022 I | op-provisioner: successfully created Rook Block volume &FlexVolumeSource{Driver:rook.io/rook,FSType:,SecretRef:nil,ReadOnly:false,Options:map[string]string{image: pvc-fccbc5ec-fbcb-11e7-a451-001c422fc6d5,pool: replicapool,storageClass: rook-block,},}
2018-01-17 21:19:01.935667 I | op-provisioner: creating volume with configuration {pool:replicapool clusterName:rook fstype:}
2018-01-17 21:19:01.935766 I | exec: Running command: rbd create replicapool/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5 --size 5120 --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring
2018-01-17 21:19:02.250984 I | exec: Running command: rbd ls -l replicapool --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring --format json
2018-01-17 21:19:02.364704 I | op-provisioner: Rook block image created: pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5
2018-01-17 21:19:02.364739 I | op-provisioner: successfully created Rook Block volume &FlexVolumeSource{Driver:rook.io/rook,FSType:,SecretRef:nil,ReadOnly:false,Options:map[string]string{image: pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5,pool: replicapool,storageClass: rook-block,},}
2018-01-17 21:33:33.342437 I | op-provisioner: Deleting volume pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5
2018-01-17 21:33:33.371170 I | exec: Running command: rbd rm replicapool/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5 --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring
E0117 21:33:33.486065 5 controller.go:1044] Deletion of volume "pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5" failed: Failed to delete rook block image replicapool/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5: failed to delete image pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5 in pool replicapool: Failed to complete : exit status 16. output: 2018-01-17 21:33:33.479518 7f1de7fff700 -1 librbd::image::RemoveRequest: 0x563dba9fa8f0 check_image_watchers: image has watchers - not removing
E0117 21:33:33.487825 5 goroutinemap.go:165] Operation for "delete-pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5[0a5b3492-fbcc-11e7-a451-001c422fc6d5]" failed. No retries permitted until 2018-01-17 21:33:33.987800159 +0000 UTC m=+1655.809844307 (durationBeforeRetry 500ms). Error: Failed to delete rook block image replicapool/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5: failed to delete image pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5 in pool replicapool: Failed to complete : exit status 16. output: 2018-01-17 21:33:33.479518 7f1de7fff700 -1 librbd::image::RemoveRequest: 0x563dba9fa8f0 check_image_watchers: image has watchers - not removing
2018-01-17 21:33:46.619538 I | op-provisioner: Deleting volume pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5
2018-01-17 21:33:46.619981 I | exec: Running command: rbd rm replicapool/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5 --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring
E0117 21:33:46.665127 5 controller.go:1044] Deletion of volume "pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5" failed: Failed to delete rook block image replicapool/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5: failed to delete image pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5 in pool replicapool: Failed to complete : exit status 16. output: 2018-01-17 21:33:46.658881 7f1df3fff700 -1 librbd::image::RemoveRequest: 0x55d63bfda8f0 check_image_watchers: image has watchers - not removing
E0117 21:33:46.665339 5 goroutinemap.go:165] Operation for "delete-pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5[0a5b3492-fbcc-11e7-a451-001c422fc6d5]" failed. No retries permitted until 2018-01-17 21:33:47.665305687 +0000 UTC m=+1669.487349836 (durationBeforeRetry 1s). Error: Failed to delete rook block image replicapool/pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5: failed to delete image pvc-0a060ee8-fbcc-11e7-a451-001c422fc6d5 in pool replicapool: Failed to complete : exit status 16. output: 2018-01-17 21:33:46.658881 7f1df3fff700 -1 librbd::image::RemoveRequest: 0x55d63bfda8f0 check_image_watchers: image has watchers - not removing
2018-01-17 21:37:48.237593 I | op-provisioner: creating volume with configuration {pool:replicapool clusterName:rook fstype:}
2018-01-17 21:37:48.237693 I | exec: Running command: rbd create replicapool/pvc-a953c80d-fbce-11e7-a451-001c422fc6d5 --size 5120 --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring
2018-01-17 21:37:48.401570 I | exec: Running command: rbd ls -l replicapool --cluster=rook --conf=/var/lib/rook/rook/rook.config --keyring=/var/lib/rook/rook/client.admin.keyring --format json
2018-01-17 21:37:48.470524 I | op-provisioner: Rook block image created: pvc-a953c80d-fbce-11e7-a451-001c422fc6d5
2018-01-17 21:37:48.471686 I | op-provisioner: successfully created Rook Block volume &FlexVolumeSource{Driver:rook.io/rook,FSType:,SecretRef:nil,ReadOnly:false,Options:map[string]string{image: pvc-a953c80d-fbce-11e7-a451-001c422fc6d5,pool: replicapool,storageClass: rook-block,},}