Created June 3, 2019 08:22
Gist lukasheinrich/8b06ac34ecd059c4a538666ee85426b4
---- kindnetd pod log: panics with an i/o timeout reaching the API server ----
2019-06-03T08:21:45.456119607Z stdout F hostIP = 172.17.0.4
2019-06-03T08:21:45.456157435Z stdout F podIP = 172.17.0.4
2019-06-03T08:22:15.44426374Z stderr F panic: Get https://10.96.0.1:443/api/v1/nodes: dial tcp 10.96.0.1:443: i/o timeout
2019-06-03T08:22:15.444311239Z stderr F
2019-06-03T08:22:15.444317729Z stderr F goroutine 1 [running]:
2019-06-03T08:22:15.44432285Z stderr F main.main()
2019-06-03T08:22:15.444330284Z stderr F /src/main.go:84 +0x423
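The pod log lines above are in the CRI container log format: an RFC3339Nano timestamp, the stream (`stdout`/`stderr`), a tag (`F` for a full line, `P` for a partial one), then the message. A minimal Python sketch for splitting such a line into its fields (the function name is illustrative, not part of any library):

```python
def parse_cri_line(line):
    """Split one CRI-format container log line into its four fields:
    "<RFC3339Nano timestamp> <stream> <tag> <message>"."""
    timestamp, stream, tag, message = line.split(" ", 3)
    return {
        "time": timestamp,          # e.g. "2019-06-03T08:21:45.456119607Z"
        "stream": stream,           # "stdout" or "stderr"
        "full": tag == "F",         # "F" = full line, "P" = partial line
        "msg": message,
    }

entry = parse_cri_line("2019-06-03T08:21:45.456119607Z stdout F hostIP = 172.17.0.4")
print(entry["stream"], entry["msg"])  # stdout hostIP = 172.17.0.4
```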
---- kindnetd pod log after restart: node routes are added successfully ----
2019-06-03T08:22:16.481333505Z stdout F hostIP = 172.17.0.4
2019-06-03T08:22:16.481391139Z stdout F podIP = 172.17.0.4
2019-06-03T08:22:16.545990083Z stdout F Handling node with IP: 172.17.0.2
2019-06-03T08:22:16.546042944Z stdout F Node kind-control-plane has CIDR 10.244.0.0/24
2019-06-03T08:22:16.547735677Z stdout F Adding route {Ifindex: 0 Dst: 10.244.0.0/24 Src: <nil> Gw: 172.17.0.2 Flags: [] Table: 0}
2019-06-03T08:22:16.547774666Z stdout F Handling node with IP: 172.17.0.4
2019-06-03T08:22:16.547789928Z stdout F handling current node
2019-06-03T08:22:16.552029885Z stdout F Handling node with IP: 172.17.0.3
2019-06-03T08:22:16.552058766Z stdout F Node kind-worker2 has CIDR 10.244.1.0/24
2019-06-03T08:22:16.552067357Z stdout F Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.17.0.3 Flags: [] Table: 0}
2019-06-03T08:22:26.556717629Z stdout F Handling node with IP: 172.17.0.2
2019-06-03T08:22:26.556769824Z stdout F Node kind-control-plane has CIDR 10.244.0.0/24
2019-06-03T08:22:26.556775732Z stdout F Handling node with IP: 172.17.0.4
2019-06-03T08:22:26.556779493Z stdout F handling current node
2019-06-03T08:22:26.556784762Z stdout F Handling node with IP: 172.17.0.3
2019-06-03T08:22:26.55678842Z stdout F Node kind-worker2 has CIDR 10.244.1.0/24
2019-06-03T08:22:36.561044707Z stdout F Handling node with IP: 172.17.0.2
2019-06-03T08:22:36.561102623Z stdout F Node kind-control-plane has CIDR 10.244.0.0/24
2019-06-03T08:22:36.561124668Z stdout F Handling node with IP: 172.17.0.4
2019-06-03T08:22:36.561128721Z stdout F handling current node
2019-06-03T08:22:36.561134559Z stdout F Handling node with IP: 172.17.0.3
2019-06-03T08:22:36.561137765Z stdout F Node kind-worker2 has CIDR 10.244.1.0/24
---- journalctl for containerd on kind-worker ----
-- Logs begin at Mon 2019-06-03 08:20:37 UTC, end at Mon 2019-06-03 08:22:30 UTC. --
Jun 03 08:20:37 kind-worker systemd[1]: Starting containerd container runtime...
Jun 03 08:20:37 kind-worker systemd[1]: Started containerd container runtime.
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.465965995Z" level=info msg="starting containerd" revision= version=1.2.6-0ubuntu1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.468491256Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.469217648Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.469447390Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.469687391Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.470091839Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.470548260Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.470799986Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.474957953Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.475084370Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.475177175Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.475512938Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.475597593Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.475676602Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.475764168Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.475913154Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.476024360Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.476585244Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.478352203Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.478512066Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.478592269Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.486857560Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.486984412Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.487060892Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.487130080Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.487197664Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.487277716Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.487347578Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.487513370Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.487614506Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.487696157Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.487766271Z" level=info msg="loading plugin "io.containerd.grpc.v1.cri"..." type=io.containerd.grpc.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.488084601Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntime:{Type:io.containerd.runtime.v1.linux Engine: Root: Options:<nil>} UntrustedWorkloadRuntime:{Type: Engine: Root: Options:<nil>} Runtimes:map[] NoPivot:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginConfTemplate:} Registry:{Mirrors:map[docker.io:{Endpoints:[https://registry-1.docker.io]}] Auths:map[]} StreamServerAddress:127.0.0.1 StreamServerPort:0 EnableSelinux:false SandboxImage:k8s.gcr.io/pause:3.1 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.488224167Z" level=info msg="Connect containerd service"
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.488410134Z" level=info msg="Get image filesystem path "/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs""
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.488625556Z" level=error msg="Failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.488950394Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.490910421Z" level=info msg=serving... address="/run/containerd/containerd.sock"
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.491179616Z" level=info msg="containerd successfully booted in 0.026503s"
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.511045542Z" level=info msg="Start subscribing containerd event"
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.522756643Z" level=info msg="Start recovering state"
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.523858617Z" level=warning msg="The image docker.io/kindest/kindnetd:0.1.0 is not unpacked."
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.534400143Z" level=warning msg="The image k8s.gcr.io/coredns:1.3.1 is not unpacked."
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.535538991Z" level=warning msg="The image k8s.gcr.io/etcd:3.3.10 is not unpacked."
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.540922253Z" level=warning msg="The image k8s.gcr.io/ip-masq-agent:v2.4.1 is not unpacked."
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.542098209Z" level=warning msg="The image k8s.gcr.io/kube-apiserver:v1.14.2 is not unpacked."
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.543388801Z" level=warning msg="The image k8s.gcr.io/kube-controller-manager:v1.14.2 is not unpacked."
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.544301826Z" level=warning msg="The image k8s.gcr.io/kube-proxy:v1.14.2 is not unpacked."
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.545017118Z" level=warning msg="The image k8s.gcr.io/kube-scheduler:v1.14.2 is not unpacked."
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.545684183Z" level=warning msg="The image k8s.gcr.io/pause:3.1 is not unpacked."
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.546356294Z" level=warning msg="The image sha256:19bb968f77bba3a5b5f56b5c033d71f699c22bdc8bbe9412f0bfaf7f674a64cc is not unpacked."
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.547034842Z" level=warning msg="The image sha256:2c4adeb21b4ff8ed3309d0e42b6b4ae39872399f7b37e0856e673b13c4aba13d is not unpacked."
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.547639690Z" level=warning msg="The image sha256:5c24210246bb67af5f89150e947211a1c2a127fb3825eb18507c1039bc6e86f8 is not unpacked."
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.548162513Z" level=warning msg="The image sha256:5eeff402b659832b64b5634061eb3825008abb549e1d873faf3908beecea8dfc is not unpacked."
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.548721569Z" level=warning msg="The image sha256:8be94bdae1399076ac29223a7f10230011d195e355dfc7027fa02dc95d34065f is not unpacked."
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.549377377Z" level=warning msg="The image sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e is not unpacked."
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.549980639Z" level=warning msg="The image sha256:eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c is not unpacked."
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.550651169Z" level=warning msg="The image sha256:ee18f350636d8e51ebb3749d1d7a1928da1d6e6fc0051852a6686c19b706c57c is not unpacked."
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.551270388Z" level=warning msg="The image sha256:f227066bdc5f9aa2f8a9bb54854e5b7a23c6db8fce0f927e5c4feef8a9e74d46 is not unpacked."
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.551665433Z" level=info msg="Start event monitor"
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.551713863Z" level=info msg="Start snapshots syncer"
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.551725025Z" level=info msg="Start streaming server"
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.551894130Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:2c4adeb21b4ff8ed3309d0e42b6b4ae39872399f7b37e0856e673b13c4aba13d,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.552489248Z" level=info msg="ImageUpdate event &ImageUpdate{Name:k8s.gcr.io/etcd:3.3.10,Labels:map[string]string{io.cri-containerd.image: managed,},}"
Jun 03 08:21:13 kind-worker containerd[44]: time="2019-06-03T08:21:13.484196511Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 03 08:21:30 kind-worker containerd[44]: time="2019-06-03T08:21:30.050617212Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 03 08:21:35 kind-worker containerd[44]: time="2019-06-03T08:21:35.169191184Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 03 08:21:40 kind-worker containerd[44]: time="2019-06-03T08:21:40.170467487Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 03 08:21:42 kind-worker containerd[44]: time="2019-06-03T08:21:42.911689047Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jun 03 08:21:42 kind-worker containerd[44]: time="2019-06-03T08:21:42.912338445Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 03 08:21:43 kind-worker containerd[44]: time="2019-06-03T08:21:43.140892936Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-proxy-q6qbj,Uid:9e5f9619-85d8-11e9-bdc2-0242ac110002,Namespace:kube-system,Attempt:0,}"
Jun 03 08:21:43 kind-worker containerd[44]: time="2019-06-03T08:21:43.184555849Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kindnet-h2bsq,Uid:9e5f6fdf-85d8-11e9-bdc2-0242ac110002,Namespace:kube-system,Attempt:0,}"
Jun 03 08:21:43 kind-worker containerd[44]: time="2019-06-03T08:21:43.209141213Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/71c5255ddc47d268b5c9ce42d7dc909bc6c5bf9c6db97c05017f96242d5838b0/shim.sock" debug=false pid=198
Jun 03 08:21:43 kind-worker containerd[44]: time="2019-06-03T08:21:43.221242779Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/965f998e96cf6f2a41d92e52f3e42375fd20b8ec9bb14f1669a73223f391be25/shim.sock" debug=false pid=211
Jun 03 08:21:43 kind-worker containerd[44]: time="2019-06-03T08:21:43.251994196Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:ip-masq-agent-kcr75,Uid:9e63c396-85d8-11e9-bdc2-0242ac110002,Namespace:kube-system,Attempt:0,}"
Jun 03 08:21:43 kind-worker containerd[44]: time="2019-06-03T08:21:43.319897802Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/e1dc64c900d66d89c0b884abe34ceb614b9db09f2fb1bab96255212533370de7/shim.sock" debug=false pid=232
Jun 03 08:21:43 kind-worker containerd[44]: time="2019-06-03T08:21:43.554693266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-h2bsq,Uid:9e5f6fdf-85d8-11e9-bdc2-0242ac110002,Namespace:kube-system,Attempt:0,} returns sandbox id "71c5255ddc47d268b5c9ce42d7dc909bc6c5bf9c6db97c05017f96242d5838b0""
Jun 03 08:21:43 kind-worker containerd[44]: time="2019-06-03T08:21:43.562490243Z" level=info msg="CreateContainer within sandbox "71c5255ddc47d268b5c9ce42d7dc909bc6c5bf9c6db97c05017f96242d5838b0" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
Jun 03 08:21:43 kind-worker containerd[44]: time="2019-06-03T08:21:43.732246863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:ip-masq-agent-kcr75,Uid:9e63c396-85d8-11e9-bdc2-0242ac110002,Namespace:kube-system,Attempt:0,} returns sandbox id "e1dc64c900d66d89c0b884abe34ceb614b9db09f2fb1bab96255212533370de7""
Jun 03 08:21:43 kind-worker containerd[44]: time="2019-06-03T08:21:43.739513953Z" level=info msg="CreateContainer within sandbox "e1dc64c900d66d89c0b884abe34ceb614b9db09f2fb1bab96255212533370de7" for container &ContainerMetadata{Name:ip-masq-agent,Attempt:0,}"
Jun 03 08:21:43 kind-worker containerd[44]: time="2019-06-03T08:21:43.924008726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q6qbj,Uid:9e5f9619-85d8-11e9-bdc2-0242ac110002,Namespace:kube-system,Attempt:0,} returns sandbox id "965f998e96cf6f2a41d92e52f3e42375fd20b8ec9bb14f1669a73223f391be25""
Jun 03 08:21:43 kind-worker containerd[44]: time="2019-06-03T08:21:43.928842516Z" level=info msg="CreateContainer within sandbox "965f998e96cf6f2a41d92e52f3e42375fd20b8ec9bb14f1669a73223f391be25" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jun 03 08:21:44 kind-worker containerd[44]: time="2019-06-03T08:21:44.856502232Z" level=info msg="CreateContainer within sandbox "71c5255ddc47d268b5c9ce42d7dc909bc6c5bf9c6db97c05017f96242d5838b0" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id "d9ca045855adbb3372f01f146ea8d094be8cdbce2de0aa60fe49f970c0cd7379""
Jun 03 08:21:44 kind-worker containerd[44]: time="2019-06-03T08:21:44.933928142Z" level=info msg="StartContainer for "d9ca045855adbb3372f01f146ea8d094be8cdbce2de0aa60fe49f970c0cd7379""
Jun 03 08:21:44 kind-worker containerd[44]: time="2019-06-03T08:21:44.952405814Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/d9ca045855adbb3372f01f146ea8d094be8cdbce2de0aa60fe49f970c0cd7379/shim.sock" debug=false pid=352
Jun 03 08:21:45 kind-worker containerd[44]: time="2019-06-03T08:21:45.171794032Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 03 08:21:45 kind-worker containerd[44]: time="2019-06-03T08:21:45.538834038Z" level=info msg="StartContainer for "d9ca045855adbb3372f01f146ea8d094be8cdbce2de0aa60fe49f970c0cd7379" returns successfully"
Jun 03 08:21:46 kind-worker containerd[44]: time="2019-06-03T08:21:46.448154231Z" level=info msg="CreateContainer within sandbox "e1dc64c900d66d89c0b884abe34ceb614b9db09f2fb1bab96255212533370de7" for &ContainerMetadata{Name:ip-masq-agent,Attempt:0,} returns container id "806e9d11c7e8760f33cdaeae2ab9cea80d6f53d015eaba1109b01ee69d1172ec""
Jun 03 08:21:46 kind-worker containerd[44]: time="2019-06-03T08:21:46.449356817Z" level=info msg="StartContainer for "806e9d11c7e8760f33cdaeae2ab9cea80d6f53d015eaba1109b01ee69d1172ec""
Jun 03 08:21:46 kind-worker containerd[44]: time="2019-06-03T08:21:46.467150623Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/806e9d11c7e8760f33cdaeae2ab9cea80d6f53d015eaba1109b01ee69d1172ec/shim.sock" debug=false pid=404
Jun 03 08:21:47 kind-worker containerd[44]: time="2019-06-03T08:21:47.100385203Z" level=info msg="StartContainer for "806e9d11c7e8760f33cdaeae2ab9cea80d6f53d015eaba1109b01ee69d1172ec" returns successfully"
Jun 03 08:21:47 kind-worker containerd[44]: time="2019-06-03T08:21:47.231771945Z" level=info msg="CreateContainer within sandbox "965f998e96cf6f2a41d92e52f3e42375fd20b8ec9bb14f1669a73223f391be25" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id "7477cc71001f9db1e43bd9e417c984e1c4b1b38525e550b5fe8bfbd29b3d3620""
Jun 03 08:21:47 kind-worker containerd[44]: time="2019-06-03T08:21:47.232771489Z" level=info msg="StartContainer for "7477cc71001f9db1e43bd9e417c984e1c4b1b38525e550b5fe8bfbd29b3d3620""
Jun 03 08:21:47 kind-worker containerd[44]: time="2019-06-03T08:21:47.233977794Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/7477cc71001f9db1e43bd9e417c984e1c4b1b38525e550b5fe8bfbd29b3d3620/shim.sock" debug=false pid=461
Jun 03 08:21:47 kind-worker containerd[44]: time="2019-06-03T08:21:47.459889328Z" level=info msg="StartContainer for "7477cc71001f9db1e43bd9e417c984e1c4b1b38525e550b5fe8bfbd29b3d3620" returns successfully"
Jun 03 08:21:50 kind-worker containerd[44]: time="2019-06-03T08:21:50.173149991Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 03 08:21:55 kind-worker containerd[44]: time="2019-06-03T08:21:55.175001692Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 03 08:22:00 kind-worker containerd[44]: time="2019-06-03T08:22:00.176146764Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 03 08:22:05 kind-worker containerd[44]: time="2019-06-03T08:22:05.177223831Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 03 08:22:10 kind-worker containerd[44]: time="2019-06-03T08:22:10.179014365Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 03 08:22:15 kind-worker containerd[44]: time="2019-06-03T08:22:15.180583790Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 03 08:22:15 kind-worker containerd[44]: time="2019-06-03T08:22:15.459249907Z" level=info msg="Finish piping stdout of container "d9ca045855adbb3372f01f146ea8d094be8cdbce2de0aa60fe49f970c0cd7379""
Jun 03 08:22:15 kind-worker containerd[44]: time="2019-06-03T08:22:15.459316940Z" level=info msg="Finish piping stderr of container "d9ca045855adbb3372f01f146ea8d094be8cdbce2de0aa60fe49f970c0cd7379""
Jun 03 08:22:15 kind-worker containerd[44]: time="2019-06-03T08:22:15.510505732Z" level=info msg="TaskExit event &TaskExit{ContainerID:d9ca045855adbb3372f01f146ea8d094be8cdbce2de0aa60fe49f970c0cd7379,ID:d9ca045855adbb3372f01f146ea8d094be8cdbce2de0aa60fe49f970c0cd7379,Pid:370,ExitStatus:2,ExitedAt:2019-06-03 08:22:15.46005987 +0000 UTC,}"
Jun 03 08:22:15 kind-worker containerd[44]: time="2019-06-03T08:22:15.576572310Z" level=info msg="shim reaped" id=d9ca045855adbb3372f01f146ea8d094be8cdbce2de0aa60fe49f970c0cd7379
Jun 03 08:22:16 kind-worker containerd[44]: time="2019-06-03T08:22:16.157993779Z" level=info msg="CreateContainer within sandbox "71c5255ddc47d268b5c9ce42d7dc909bc6c5bf9c6db97c05017f96242d5838b0" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
Jun 03 08:22:16 kind-worker containerd[44]: time="2019-06-03T08:22:16.211748562Z" level=info msg="CreateContainer within sandbox "71c5255ddc47d268b5c9ce42d7dc909bc6c5bf9c6db97c05017f96242d5838b0" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id "14fbe60262eed73b03e17d52ce534aa67a06e418537a1a346b4d551c488d7cd7""
Jun 03 08:22:16 kind-worker containerd[44]: time="2019-06-03T08:22:16.212766798Z" level=info msg="StartContainer for "14fbe60262eed73b03e17d52ce534aa67a06e418537a1a346b4d551c488d7cd7""
Jun 03 08:22:16 kind-worker containerd[44]: time="2019-06-03T08:22:16.213730006Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/14fbe60262eed73b03e17d52ce534aa67a06e418537a1a346b4d551c488d7cd7/shim.sock" debug=false pid=610
Jun 03 08:22:16 kind-worker containerd[44]: time="2019-06-03T08:22:16.493662691Z" level=info msg="StartContainer for "14fbe60262eed73b03e17d52ce534aa67a06e418537a1a346b4d551c488d7cd7" returns successfully"
---- CoreDNS pod log (10.244.0.2): upstream queries to 169.254.169.254 time out ----
2019-06-03T08:22:10.014512935Z stdout F .:53
2019-06-03T08:22:10.015512884Z stdout F 2019-06-03T08:22:10.015Z [INFO] CoreDNS-1.3.1
2019-06-03T08:22:10.015617255Z stdout F 2019-06-03T08:22:10.015Z [INFO] linux/amd64, go1.11.4, 6b56a9c
2019-06-03T08:22:10.015669539Z stdout F CoreDNS-1.3.1
2019-06-03T08:22:10.015734891Z stdout F linux/amd64, go1.11.4, 6b56a9c
2019-06-03T08:22:10.015819823Z stdout F 2019-06-03T08:22:10.015Z [INFO] plugin/reload: Running configuration MD5 = 599b9eb76b8c147408aed6a0bbe0f669
2019-06-03T08:22:17.025963018Z stdout F 2019-06-03T08:22:17.021Z [ERROR] plugin/errors: 2 2431816329507456001.5677201652194948739. HINFO: read udp 10.244.0.2:50950->169.254.169.254:53: i/o timeout
2019-06-03T08:22:19.022451157Z stdout F 2019-06-03T08:22:19.022Z [ERROR] plugin/errors: 2 2431816329507456001.5677201652194948739. HINFO: read udp 10.244.0.2:52940->169.254.169.254:53: i/o timeout
2019-06-03T08:22:20.016589195Z stdout F 2019-06-03T08:22:20.016Z [ERROR] plugin/errors: 2 2431816329507456001.5677201652194948739. HINFO: read udp 10.244.0.2:58301->169.254.169.254:53: i/o timeout
2019-06-03T08:22:22.019052875Z stdout F 2019-06-03T08:22:22.018Z [ERROR] plugin/errors: 2 2431816329507456001.5677201652194948739. HINFO: read udp 10.244.0.2:41563->169.254.169.254:53: i/o timeout
2019-06-03T08:22:25.019815566Z stdout F 2019-06-03T08:22:25.019Z [ERROR] plugin/errors: 2 2431816329507456001.5677201652194948739. HINFO: read udp 10.244.0.2:52615->169.254.169.254:53: i/o timeout
2019-06-03T08:22:28.020796136Z stdout F 2019-06-03T08:22:28.020Z [ERROR] plugin/errors: 2 2431816329507456001.5677201652194948739. HINFO: read udp 10.244.0.2:57173->169.254.169.254:53: i/o timeout
2019-06-03T08:22:28.705969414Z stdout F 2019-06-03T08:22:28.705Z [ERROR] plugin/errors: 2 dl-cdn.alpinelinux.org.c.eco-emissary-99515.internal. A: read udp 10.244.0.2:32975->169.254.169.254:53: i/o timeout
2019-06-03T08:22:28.706175925Z stdout F 2019-06-03T08:22:28.705Z [ERROR] plugin/errors: 2 dl-cdn.alpinelinux.org.c.eco-emissary-99515.internal. AAAA: read udp 10.244.0.2:59617->169.254.169.254:53: i/o timeout
2019-06-03T08:22:30.707076497Z stdout F 2019-06-03T08:22:30.706Z [ERROR] plugin/errors: 2 dl-cdn.alpinelinux.org.c.eco-emissary-99515.internal. A: read udp 10.244.0.2:53832->169.254.169.254:53: i/o timeout
2019-06-03T08:22:30.707161131Z stdout F 2019-06-03T08:22:30.706Z [ERROR] plugin/errors: 2 dl-cdn.alpinelinux.org.c.eco-emissary-99515.internal. AAAA: read udp 10.244.0.2:51944->169.254.169.254:53: i/o timeout
2019-06-03T08:22:31.021001131Z stdout F 2019-06-03T08:22:31.020Z [ERROR] plugin/errors: 2 2431816329507456001.5677201652194948739. HINFO: read udp 10.244.0.2:57706->169.254.169.254:53: i/o timeout
2019-06-03T08:22:31.207079099Z stdout F 2019-06-03T08:22:31.206Z [ERROR] plugin/errors: 2 dl-cdn.alpinelinux.org.c.eco-emissary-99515.internal. A: read udp 10.244.0.2:54285->169.254.169.254:53: i/o timeout
2019-06-03T08:22:31.20718545Z stdout F 2019-06-03T08:22:31.206Z [ERROR] plugin/errors: 2 dl-cdn.alpinelinux.org.c.eco-emissary-99515.internal. AAAA: read udp 10.244.0.2:50985->169.254.169.254:53: i/o timeout
2019-06-03T08:22:32.708006523Z stdout F 2019-06-03T08:22:32.707Z [ERROR] plugin/errors: 2 dl-cdn.alpinelinux.org.c.eco-emissary-99515.internal. A: read udp 10.244.0.2:48678->169.254.169.254:53: i/o timeout
2019-06-03T08:22:32.708375233Z stdout F 2019-06-03T08:22:32.707Z [ERROR] plugin/errors: 2 dl-cdn.alpinelinux.org.c.eco-emissary-99515.internal. AAAA: read udp 10.244.0.2:36594->169.254.169.254:53: i/o timeout
2019-06-03T08:22:33.208173755Z stdout F 2019-06-03T08:22:33.207Z [ERROR] plugin/errors: 2 dl-cdn.alpinelinux.org.c.eco-emissary-99515.internal. AAAA: read udp 10.244.0.2:58547->169.254.169.254:53: i/o timeout
2019-06-03T08:22:33.208315219Z stdout F 2019-06-03T08:22:33.207Z [ERROR] plugin/errors: 2 dl-cdn.alpinelinux.org.c.eco-emissary-99515.internal. A: read udp 10.244.0.2:43313->169.254.169.254:53: i/o timeout
2019-06-03T08:22:34.021755805Z stdout F 2019-06-03T08:22:34.021Z [ERROR] plugin/errors: 2 2431816329507456001.5677201652194948739. HINFO: read udp 10.244.0.2:33589->169.254.169.254:53: i/o timeout
2019-06-03T08:22:37.022313306Z stdout F 2019-06-03T08:22:37.021Z [ERROR] plugin/errors: 2 2431816329507456001.5677201652194948739. HINFO: read udp 10.244.0.2:43092->169.254.169.254:53: i/o timeout
2019-06-03T08:22:10.635187887Z stdout F .:53
2019-06-03T08:22:10.635593489Z stdout F 2019-06-03T08:22:10.635Z [INFO] CoreDNS-1.3.1
2019-06-03T08:22:10.635711304Z stdout F 2019-06-03T08:22:10.635Z [INFO] linux/amd64, go1.11.4, 6b56a9c
2019-06-03T08:22:10.635792187Z stdout F CoreDNS-1.3.1
2019-06-03T08:22:10.635851897Z stdout F linux/amd64, go1.11.4, 6b56a9c
2019-06-03T08:22:10.635951201Z stdout F 2019-06-03T08:22:10.635Z [INFO] plugin/reload: Running configuration MD5 = 599b9eb76b8c147408aed6a0bbe0f669
2019-06-03T08:22:17.639562276Z stdout F 2019-06-03T08:22:17.639Z [ERROR] plugin/errors: 2 8018551262369446637.5952826493433791981. HINFO: read udp 10.244.0.3:55481->169.254.169.254:53: i/o timeout
2019-06-03T08:22:20.639516108Z stdout F 2019-06-03T08:22:20.639Z [ERROR] plugin/errors: 2 8018551262369446637.5952826493433791981. HINFO: read udp 10.244.0.3:53238->169.254.169.254:53: i/o timeout
2019-06-03T08:22:21.638785549Z stdout F 2019-06-03T08:22:21.638Z [ERROR] plugin/errors: 2 8018551262369446637.5952826493433791981. HINFO: read udp 10.244.0.3:47622->169.254.169.254:53: i/o timeout
2019-06-03T08:22:22.638600734Z stdout F 2019-06-03T08:22:22.638Z [ERROR] plugin/errors: 2 8018551262369446637.5952826493433791981. HINFO: read udp 10.244.0.3:56828->169.254.169.254:53: i/o timeout
2019-06-03T08:22:25.63941919Z stdout F 2019-06-03T08:22:25.639Z [ERROR] plugin/errors: 2 8018551262369446637.5952826493433791981. HINFO: read udp 10.244.0.3:36085->169.254.169.254:53: i/o timeout
2019-06-03T08:22:28.639854103Z stdout F 2019-06-03T08:22:28.639Z [ERROR] plugin/errors: 2 8018551262369446637.5952826493433791981. HINFO: read udp 10.244.0.3:52280->169.254.169.254:53: i/o timeout
2019-06-03T08:22:31.640334719Z stdout F 2019-06-03T08:22:31.640Z [ERROR] plugin/errors: 2 8018551262369446637.5952826493433791981. HINFO: read udp 10.244.0.3:49661->169.254.169.254:53: i/o timeout
2019-06-03T08:22:33.710163542Z stdout F 2019-06-03T08:22:33.709Z [ERROR] plugin/errors: 2 dl-cdn.alpinelinux.org.c.eco-emissary-99515.internal. A: read udp 10.244.0.3:41028->169.254.169.254:53: i/o timeout
2019-06-03T08:22:33.710626413Z stdout F 2019-06-03T08:22:33.710Z [ERROR] plugin/errors: 2 dl-cdn.alpinelinux.org.c.eco-emissary-99515.internal. AAAA: read udp 10.244.0.3:51352->169.254.169.254:53: i/o timeout
2019-06-03T08:22:34.641098966Z stdout F 2019-06-03T08:22:34.640Z [ERROR] plugin/errors: 2 8018551262369446637.5952826493433791981. HINFO: read udp 10.244.0.3:55867->169.254.169.254:53: i/o timeout
2019-06-03T08:22:35.711482413Z stdout F 2019-06-03T08:22:35.711Z [ERROR] plugin/errors: 2 dl-cdn.alpinelinux.org.c.eco-emissary-99515.internal. A: read udp 10.244.0.3:36158->169.254.169.254:53: i/o timeout
2019-06-03T08:22:35.711554284Z stdout F 2019-06-03T08:22:35.711Z [ERROR] plugin/errors: 2 dl-cdn.alpinelinux.org.c.eco-emissary-99515.internal. AAAA: read udp 10.244.0.3:45545->169.254.169.254:53: i/o timeout
2019-06-03T08:22:36.211547547Z stdout F 2019-06-03T08:22:36.211Z [ERROR] plugin/errors: 2 dl-cdn.alpinelinux.org.c.eco-emissary-99515.internal. A: read udp 10.244.0.3:38521->169.254.169.254:53: i/o timeout
2019-06-03T08:22:36.211646691Z stdout F 2019-06-03T08:22:36.211Z [ERROR] plugin/errors: 2 dl-cdn.alpinelinux.org.c.eco-emissary-99515.internal. AAAA: read udp 10.244.0.3:34426->169.254.169.254:53: i/o timeout
Containers: 3
 Running: 3
 Paused: 0
 Stopped: 0
Images: 12
Server Version: 17.09.0-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 06b9cb35161009dcb7123345749fef02f7cea8e0
runc version: 3f2f8b84a77f73d38244dd690525642a72156c64
init version: 949e6fa
Kernel Version: 4.4.0-101-generic
Operating System: Ubuntu 14.04.5 LTS
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 7.305GiB
Name: travis-job-1746d25f-fc3a-4b25-a2df-5f2588d81e20
ID: DH3M:23FP:35CF:LCVT:ROBH:CV5W:C5W2:JSP4:7G7W:NH4L:6FOS:WJOW
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
2019-06-03T08:21:02.73810079Z stderr F 2019-06-03 08:21:02.737928 I | etcdmain: etcd Version: 3.3.10
2019-06-03T08:21:02.738215842Z stderr F 2019-06-03 08:21:02.738180 I | etcdmain: Git SHA: 27fc7e2
2019-06-03T08:21:02.738290397Z stderr F 2019-06-03 08:21:02.738262 I | etcdmain: Go Version: go1.10.4
2019-06-03T08:21:02.738353381Z stderr F 2019-06-03 08:21:02.738308 I | etcdmain: Go OS/Arch: linux/amd64
2019-06-03T08:21:02.738403526Z stderr F 2019-06-03 08:21:02.738376 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
2019-06-03T08:21:02.74075257Z stderr F 2019-06-03 08:21:02.740652 I | embed: peerTLS: cert = /etc/kubernetes/pki/etcd/peer.crt, key = /etc/kubernetes/pki/etcd/peer.key, ca = , trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file =
2019-06-03T08:21:02.742378398Z stderr F 2019-06-03 08:21:02.742276 I | embed: listening for peers on https://172.17.0.2:2380
2019-06-03T08:21:02.742534617Z stderr F 2019-06-03 08:21:02.742476 I | embed: listening for client requests on 127.0.0.1:2379
2019-06-03T08:21:02.742639462Z stderr F 2019-06-03 08:21:02.742599 I | embed: listening for client requests on 172.17.0.2:2379
2019-06-03T08:21:02.747713142Z stderr F 2019-06-03 08:21:02.747598 I | etcdserver: name = kind-control-plane
2019-06-03T08:21:02.74783013Z stderr F 2019-06-03 08:21:02.747792 I | etcdserver: data dir = /var/lib/etcd
2019-06-03T08:21:02.747910684Z stderr F 2019-06-03 08:21:02.747881 I | etcdserver: member dir = /var/lib/etcd/member
2019-06-03T08:21:02.747981889Z stderr F 2019-06-03 08:21:02.747931 I | etcdserver: heartbeat = 100ms
2019-06-03T08:21:02.748031535Z stderr F 2019-06-03 08:21:02.748007 I | etcdserver: election = 1000ms
2019-06-03T08:21:02.748098398Z stderr F 2019-06-03 08:21:02.748071 I | etcdserver: snapshot count = 10000
2019-06-03T08:21:02.748291286Z stderr F 2019-06-03 08:21:02.748161 I | etcdserver: advertise client URLs = https://172.17.0.2:2379
2019-06-03T08:21:02.748372093Z stderr F 2019-06-03 08:21:02.748342 I | etcdserver: initial advertise peer URLs = https://172.17.0.2:2380
2019-06-03T08:21:02.748466889Z stderr F 2019-06-03 08:21:02.748401 I | etcdserver: initial cluster = kind-control-plane=https://172.17.0.2:2380
2019-06-03T08:21:02.753674985Z stderr F 2019-06-03 08:21:02.753520 I | etcdserver: starting member b8e14bda2255bc24 in cluster 38b0e74a458e7a1f
2019-06-03T08:21:02.753793564Z stderr F 2019-06-03 08:21:02.753760 I | raft: b8e14bda2255bc24 became follower at term 0
2019-06-03T08:21:02.753870379Z stderr F 2019-06-03 08:21:02.753843 I | raft: newRaft b8e14bda2255bc24 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
2019-06-03T08:21:02.753945487Z stderr F 2019-06-03 08:21:02.753892 I | raft: b8e14bda2255bc24 became follower at term 1
2019-06-03T08:21:02.760298302Z stderr F 2019-06-03 08:21:02.760162 W | auth: simple token is not cryptographically signed
2019-06-03T08:21:02.76505777Z stderr F 2019-06-03 08:21:02.764939 I | etcdserver: starting server... [version: 3.3.10, cluster version: to_be_decided]
2019-06-03T08:21:02.768251677Z stderr F 2019-06-03 08:21:02.768129 I | embed: ClientTLS: cert = /etc/kubernetes/pki/etcd/server.crt, key = /etc/kubernetes/pki/etcd/server.key, ca = , trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file =
2019-06-03T08:21:02.768629057Z stderr F 2019-06-03 08:21:02.768579 I | etcdserver: b8e14bda2255bc24 as single-node; fast-forwarding 9 ticks (election ticks 10)
2019-06-03T08:21:02.7693792Z stderr F 2019-06-03 08:21:02.769307 I | etcdserver/membership: added member b8e14bda2255bc24 [https://172.17.0.2:2380] to cluster 38b0e74a458e7a1f
2019-06-03T08:21:02.95449313Z stderr F 2019-06-03 08:21:02.954335 I | raft: b8e14bda2255bc24 is starting a new election at term 1
2019-06-03T08:21:02.954618202Z stderr F 2019-06-03 08:21:02.954584 I | raft: b8e14bda2255bc24 became candidate at term 2
2019-06-03T08:21:02.954744604Z stderr F 2019-06-03 08:21:02.954710 I | raft: b8e14bda2255bc24 received MsgVoteResp from b8e14bda2255bc24 at term 2
2019-06-03T08:21:02.95481267Z stderr F 2019-06-03 08:21:02.954780 I | raft: b8e14bda2255bc24 became leader at term 2
2019-06-03T08:21:02.954883663Z stderr F 2019-06-03 08:21:02.954856 I | raft: raft.node: b8e14bda2255bc24 elected leader b8e14bda2255bc24 at term 2
2019-06-03T08:21:02.955471396Z stderr F 2019-06-03 08:21:02.955384 I | etcdserver: setting up the initial cluster version to 3.3
2019-06-03T08:21:02.955636562Z stderr F 2019-06-03 08:21:02.955600 I | etcdserver: published {Name:kind-control-plane ClientURLs:[https://172.17.0.2:2379]} to cluster 38b0e74a458e7a1f
2019-06-03T08:21:02.955828485Z stderr F 2019-06-03 08:21:02.955772 I | embed: ready to serve client requests
2019-06-03T08:21:02.958006436Z stderr F 2019-06-03 08:21:02.957921 I | embed: serving client requests on 127.0.0.1:2379
2019-06-03T08:21:02.958160861Z stderr F 2019-06-03 08:21:02.958097 I | embed: ready to serve client requests
2019-06-03T08:21:02.960473599Z stderr F 2019-06-03 08:21:02.960391 I | embed: serving client requests on 172.17.0.2:2379
2019-06-03T08:21:02.962215828Z stderr F 2019-06-03 08:21:02.962131 N | etcdserver/membership: set the initial cluster version to 3.3
2019-06-03T08:21:02.962372116Z stderr F 2019-06-03 08:21:02.962336 I | etcdserver/api: enabled capabilities for version 3.3
2019-06-03T08:21:07.676954324Z stderr F proto: no coders for int
2019-06-03T08:21:07.676982574Z stderr F proto: no encoder for ValueSize int [GetProperties]
2019-06-03T08:21:29.991016354Z stderr F 2019-06-03 08:21:29.990833 W | etcdserver: request "header:<ID:13557078548131390913 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/certificatesigningrequests/csr-vp9jh\" mod_revision:0 > success:<request_put:<key:\"/registry/certificatesigningrequests/csr-vp9jh\" value_size:732 >> failure:<>>" with result "size:16" took too long (295.202556ms) to execute
2019-06-03T08:21:29.99667442Z stderr F 2019-06-03 08:21:29.996466 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-proxy-mwhdn\" " with result "range_response_count:1 size:2062" took too long (206.450091ms) to execute
2019-06-03T08:21:30.091982697Z stderr F 2019-06-03 08:21:30.085564 W | etcdserver: read-only range request "key:\"/registry/minions/kind-control-plane\" " with result "range_response_count:1 size:2081" took too long (227.648244ms) to execute
2019-06-03T08:21:30.153145463Z stderr F 2019-06-03 08:21:30.151801 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/kindnet-s2cz8.15a4a28b1d709acc\" " with result "range_response_count:1 size:442" took too long (148.491149ms) to execute
2019-06-03T08:21:35.321778526Z stderr F 2019-06-03 08:21:35.321602 W | etcdserver: read-only range request "key:\"/registry/minions/kind-worker\" " with result "range_response_count:0 size:5" took too long (130.330537ms) to execute
2019-06-03T08:21:35.322431895Z stderr F 2019-06-03 08:21:35.322332 W | etcdserver: read-only range request "key:\"/registry/minions/kind-worker2\" " with result "range_response_count:0 size:5" took too long (158.922558ms) to execute
2019-06-03T08:21:36.322068813Z stderr F 2019-06-03 08:21:36.314334 W | etcdserver: read-only range request "key:\"/registry/minions/kind-worker2\" " with result "range_response_count:0 size:5" took too long (651.130015ms) to execute
2019-06-03T08:21:36.451436598Z stderr F 2019-06-03 08:21:36.441455 W | etcdserver: read-only range request "key:\"/registry/minions/kind-worker\" " with result "range_response_count:0 size:5" took too long (750.321747ms) to execute
2019-06-03T08:21:36.451453409Z stderr F 2019-06-03 08:21:36.441755 W | etcdserver: read-only range request "key:\"foo\" " with result "range_response_count:0 size:5" took too long (123.019067ms) to execute
2019-06-03T08:21:37.3552064Z stderr F 2019-06-03 08:21:37.354347 W | etcdserver: read-only range request "key:\"/registry/minions/kind-worker\" " with result "range_response_count:0 size:5" took too long (163.079826ms) to execute
2019-06-03T08:21:37.355245437Z stderr F 2019-06-03 08:21:37.354853 W | etcdserver: read-only range request "key:\"/registry/minions/kind-worker2\" " with result "range_response_count:0 size:5" took too long (191.374897ms) to execute
2019-06-03T08:21:38.130960692Z stderr F 2019-06-03 08:21:38.130809 W | etcdserver: read-only range request "key:\"/registry/minions/kind-worker\" " with result "range_response_count:0 size:5" took too long (439.106431ms) to execute
2019-06-03T08:21:38.131610045Z stderr F 2019-06-03 08:21:38.131516 W | etcdserver: read-only range request "key:\"/registry/minions/kind-worker2\" " with result "range_response_count:0 size:5" took too long (468.216841ms) to execute
2019-06-03T08:21:44.003182099Z stderr F 2019-06-03 08:21:44.003019 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-fb8b8dccf-cb5wv\" " with result "range_response_count:1 size:1440" took too long (132.453049ms) to execute
2019-06-03T08:22:14.636615589Z stderr F 2019-06-03 08:22:14.636391 W | etcdserver: read-only range request "key:\"/registry/leases/kube-node-lease/kind-control-plane\" " with result "range_response_count:1 size:328" took too long (448.897373ms) to execute
2019-06-03T08:22:14.839703468Z stderr F 2019-06-03 08:22:14.839535 W | etcdserver: read-only range request "key:\"/registry/ingress\" range_end:\"/registry/ingrest\" count_only:true " with result "range_response_count:0 size:5" took too long (132.126979ms) to execute
2019-06-03T08:22:14.840574704Z stderr F 2019-06-03 08:22:14.839773 W | etcdserver: read-only range request "key:\"/registry/podsecuritypolicy\" range_end:\"/registry/podsecuritypolicz\" count_only:true " with result "range_response_count:0 size:7" took too long (181.617144ms) to execute
2019-06-03T08:22:26.704179245Z stdout F fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/main/x86_64/APKINDEX.tar.gz
2019-06-03T08:22:31.706529357Z stdout F ERROR: http://dl-cdn.alpinelinux.org/alpine/v3.9/main: temporary error (try again later)
2019-06-03T08:22:31.70658168Z stdout F WARNING: Ignoring APKINDEX.b89edf6e.tar.gz: No such file or directory
2019-06-03T08:22:31.706590816Z stdout F fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/community/x86_64/APKINDEX.tar.gz
2019-06-03T08:22:36.711585201Z stdout F ERROR: http://dl-cdn.alpinelinux.org/alpine/v3.9/community: temporary error (try again later)
2019-06-03T08:22:36.711804801Z stdout F WARNING: Ignoring APKINDEX.737f7e01.tar.gz: No such file or directory
2019-06-03T08:22:36.711830065Z stdout F ERROR: unsatisfiable constraints:
2019-06-03T08:22:36.711836407Z stdout F   curl (missing):
2019-06-03T08:22:36.711842909Z stdout F     required by: world[curl]
2019-06-03T08:22:36.713871532Z stdout F sh: curl: not found
[
    {
        "Id": "94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75",
        "Created": "2019-06-03T08:19:45.321421409Z",
        "Path": "/usr/local/bin/entrypoint",
        "Args": [
            "/sbin/init"
        ],
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 8850,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2019-06-03T08:20:36.966740211Z",
            "FinishedAt": "0001-01-01T00:00:00Z"
        },
        "Image": "sha256:78e965c1c9fb9c36fc91605649a8b880a5982a304827b2317b4d96d6324e0571",
        "ResolvConfPath": "/var/lib/docker/containers/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/resolv.conf",
        "HostnamePath": "/var/lib/docker/containers/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/hostname",
        "HostsPath": "/var/lib/docker/containers/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/hosts",
        "LogPath": "/var/lib/docker/containers/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75-json.log",
        "Name": "/kind-worker",
        "RestartCount": 0,
        "Driver": "overlay2",
        "Platform": "linux",
        "MountLabel": "",
        "ProcessLabel": "",
        "AppArmorProfile": "",
        "ExecIDs": [
            "7bfd04bbaefd1e4d94bcc750b8392525e30462c065bd3c548afe197b3d1f63d4"
        ],
        "HostConfig": {
            "Binds": [
                "/lib/modules:/lib/modules:ro"
            ],
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "json-file",
                "Config": {}
            },
            "NetworkMode": "default",
            "PortBindings": {},
            "RestartPolicy": {
                "Name": "no",
                "MaximumRetryCount": 0
            },
            "AutoRemove": false,
            "VolumeDriver": "",
            "VolumesFrom": null,
            "CapAdd": null,
            "CapDrop": null,
            "Dns": [],
            "DnsOptions": [],
            "DnsSearch": [],
            "ExtraHosts": null,
            "GroupAdd": null,
            "IpcMode": "shareable",
            "Cgroup": "",
            "Links": null,
            "OomScoreAdj": 0,
            "PidMode": "",
            "Privileged": true,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": [
                "seccomp=unconfined",
                "label=disable"
            ],
            "Tmpfs": {
                "/run": "",
                "/tmp": ""
            },
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "runc",
            "ConsoleSize": [
                0,
                0
            ],
            "Isolation": "",
            "CpuShares": 0,
            "Memory": 0,
            "NanoCpus": 0,
            "CgroupParent": "",
            "BlkioWeight": 0,
            "BlkioWeightDevice": [],
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 0,
            "CpuQuota": 0,
            "CpuRealtimePeriod": 0,
            "CpuRealtimeRuntime": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": [],
            "DeviceCgroupRules": null,
            "DiskQuota": 0,
            "KernelMemory": 0,
            "MemoryReservation": 0,
            "MemorySwap": 0,
            "MemorySwappiness": null,
            "OomKillDisable": false,
            "PidsLimit": 0,
            "Ulimits": null,
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0
        },
        "GraphDriver": {
            "Data": {
                "LowerDir": "/var/lib/docker/overlay2/01619ca9cceab982872b8cf30513a9c0199b46d6ab598eff287b0e54f19beaa6-init/diff:/var/lib/docker/overlay2/da1f63c3c0a952e957ad3bb520e5e0a57ccaf60dd75f89435096cb4ac8257cf5/diff:/var/lib/docker/overlay2/3ba6a84757fff2ccb3694e76d9ef0a355dc2c50e55abdf5d932c4fec642607f7/diff:/var/lib/docker/overlay2/7e49fa9eb314d05b89a6fd8d4c51cbc06c240e059d15407df76599df4cd16f50/diff:/var/lib/docker/overlay2/08e4118b04c3b7459790f78a14f4be27f4e3073f6b57150c12a3dd605f87b76e/diff:/var/lib/docker/overlay2/c4a6bd4cb7eea89cc8360390c2ba73842822cb39b948e367d586b0f67d5dc69e/diff:/var/lib/docker/overlay2/5d80e69be58f3f94c5190a4c312e782792aa7111c01d831b8fce88b844b80f93/diff:/var/lib/docker/overlay2/c6fa7be6afe839a7aff73abd8550f9bf6912863b82a34d6b85f6b789eea47570/diff:/var/lib/docker/overlay2/c89a99e1deb7f50e158c24647e7ade4bf7be42df1cd31e7bdb6d4b1a562f62ff/diff:/var/lib/docker/overlay2/9ff4526d4b1fa12af69a76cf07881c29547b3b01eb26bd653ecd88bd67564fd7/diff:/var/lib/docker/overlay2/c5f5b059a639243466c25e750c4a83c8dec5f893c27ec151e54b65f9043197cc/diff:/var/lib/docker/overlay2/55ecbe451ffd0282b3eb8c399c13536bdfb25f488a9e0f9a07a2f80bc130c038/diff:/var/lib/docker/overlay2/9dcef68ae90ef04c31c45c7fb91e7abe1260e9fe4617ce27bbecb407d1631ac0/diff:/var/lib/docker/overlay2/f5baf5bd86f5c10e11e947afbddceadac6acab8bbf7678e14db5015f4e8a1f9a/diff",
                "MergedDir": "/var/lib/docker/overlay2/01619ca9cceab982872b8cf30513a9c0199b46d6ab598eff287b0e54f19beaa6/merged",
                "UpperDir": "/var/lib/docker/overlay2/01619ca9cceab982872b8cf30513a9c0199b46d6ab598eff287b0e54f19beaa6/diff",
                "WorkDir": "/var/lib/docker/overlay2/01619ca9cceab982872b8cf30513a9c0199b46d6ab598eff287b0e54f19beaa6/work"
            },
            "Name": "overlay2"
        },
        "Mounts": [
            {
                "Type": "bind",
                "Source": "/lib/modules",
                "Destination": "/lib/modules",
                "Mode": "ro",
                "RW": false,
                "Propagation": "rprivate"
            },
            {
                "Type": "volume",
                "Name": "ad291680144ef5e22f13ececce880d6cc447cadd0ace510cbfd9a2fa81755031",
                "Source": "/var/lib/docker/volumes/ad291680144ef5e22f13ececce880d6cc447cadd0ace510cbfd9a2fa81755031/_data",
                "Destination": "/var/lib/containerd",
                "Driver": "local",
                "Mode": "",
                "RW": true,
                "Propagation": ""
            }
        ],
        "Config": {
            "Hostname": "kind-worker",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "Tty": true,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                "container=docker"
            ],
            "Cmd": null,
            "Image": "kindest/node:latest",
            "Volumes": {
                "/var/lib/containerd": {}
            },
            "WorkingDir": "",
            "Entrypoint": [
                "/usr/local/bin/entrypoint",
                "/sbin/init"
            ],
            "OnBuild": null,
            "Labels": {
                "io.k8s.sigs.kind.build": "2019-06-03T08:17:39.68225655Z",
                "io.k8s.sigs.kind.cluster": "kind",
                "io.k8s.sigs.kind.role": "worker"
            },
            "StopSignal": "SIGRTMIN+3"
        },
        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "8fa5d5d1e52c63b6bec2867b643b10608187325302c400e9c2651b2d94c82de2",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {},
            "SandboxKey": "/var/run/docker/netns/8fa5d5d1e52c",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "ed8ef390dae5891febc15a4c14dfcb1f6f6f7c8e83a43245f791691efe6b43a3",
            "Gateway": "172.17.0.1",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "172.17.0.4",
            "IPPrefixLen": 16,
            "IPv6Gateway": "",
            "MacAddress": "02:42:ac:11:00:04",
            "Networks": {
                "bridge": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "ada9d4781f74c5f763873acebc4c27b640cc2863cc76d89d71b640cbaef4cf61",
                    "EndpointID": "ed8ef390dae5891febc15a4c14dfcb1f6f6f7c8e83a43245f791691efe6b43a3",
                    "Gateway": "172.17.0.1",
                    "IPAddress": "172.17.0.4",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:11:00:04",
                    "DriverOpts": null
                }
            }
        }
    }
]
-- Logs begin at Mon 2019-06-03 08:20:37 UTC, end at Mon 2019-06-03 08:22:30 UTC. --
Jun 03 08:20:37 kind-worker systemd-journald[38]: Journal started
Jun 03 08:20:37 kind-worker systemd-journald[38]: Runtime journal (/run/log/journal/9d45381a7f6840f383f0f8f5377053fa) is 8.0M, max 373.9M, 365.9M free.
Jun 03 08:20:37 kind-worker systemd-sysctl[37]: Couldn't write 'fq_codel' to 'net/core/default_qdisc', ignoring: No such file or directory
Jun 03 08:20:37 kind-worker systemd-sysusers[32]: Creating group systemd-coredump with gid 999.
Jun 03 08:20:37 kind-worker systemd-sysusers[32]: Creating user systemd-coredump (systemd Core Dumper) with uid 999 and gid 999.
Jun 03 08:20:37 kind-worker systemd[1]: Starting Flush Journal to Persistent Storage...
Jun 03 08:20:37 kind-worker systemd[1]: Started Create System Users.
Jun 03 08:20:37 kind-worker systemd[1]: Starting Create Static Device Nodes in /dev...
Jun 03 08:20:37 kind-worker systemd[1]: Started Create Static Device Nodes in /dev.
Jun 03 08:20:37 kind-worker systemd[1]: Condition check resulted in udev Kernel Device Manager being skipped.
Jun 03 08:20:37 kind-worker systemd[1]: Reached target System Initialization.
Jun 03 08:20:37 kind-worker systemd[1]: Reached target Basic System.
Jun 03 08:20:37 kind-worker systemd[1]: Starting containerd container runtime...
Jun 03 08:20:37 kind-worker systemd[1]: Started kubelet: The Kubernetes Node Agent.
Jun 03 08:20:37 kind-worker systemd[1]: Started Daily Cleanup of Temporary Directories.
Jun 03 08:20:37 kind-worker systemd[1]: Reached target Timers.
Jun 03 08:20:37 kind-worker systemd[1]: Started containerd container runtime.
Jun 03 08:20:37 kind-worker systemd[1]: Reached target Multi-User System.
Jun 03 08:20:37 kind-worker systemd[1]: Reached target Graphical Interface.
Jun 03 08:20:37 kind-worker systemd[1]: Starting Update UTMP about System Runlevel Changes...
Jun 03 08:20:37 kind-worker systemd-journald[38]: Runtime journal (/run/log/journal/9d45381a7f6840f383f0f8f5377053fa) is 8.0M, max 373.9M, 365.9M free.
Jun 03 08:20:37 kind-worker systemd[1]: Started Flush Journal to Persistent Storage.
Jun 03 08:20:37 kind-worker systemd[1]: systemd-update-utmp-runlevel.service: Succeeded.
Jun 03 08:20:37 kind-worker systemd[1]: Started Update UTMP about System Runlevel Changes.
Jun 03 08:20:37 kind-worker systemd[1]: Startup finished in 305ms.
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.465965995Z" level=info msg="starting containerd" revision= version=1.2.6-0ubuntu1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.468491256Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.469217648Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.469447390Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.469687391Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.470091839Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.470548260Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.470799986Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.474957953Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.475084370Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.475177175Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.475512938Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.475597593Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.475676602Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.475764168Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.475913154Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.476024360Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.476585244Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.478352203Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.478512066Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.478592269Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.486857560Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.486984412Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.487060892Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.487130080Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.487197664Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.487277716Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.487347578Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.487513370Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.487614506Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.487696157Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.487766271Z" level=info msg="loading plugin "io.containerd.grpc.v1.cri"..." type=io.containerd.grpc.v1
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.488084601Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntime:{Type:io.containerd.runtime.v1.linux Engine: Root: Options:<nil>} UntrustedWorkloadRuntime:{Type: Engine: Root: Options:<nil>} Runtimes:map[] NoPivot:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginConfTemplate:} Registry:{Mirrors:map[docker.io:{Endpoints:[https://registry-1.docker.io]}] Auths:map[]} StreamServerAddress:127.0.0.1 StreamServerPort:0 EnableSelinux:false SandboxImage:k8s.gcr.io/pause:3.1 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.488224167Z" level=info msg="Connect containerd service" | |
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.488410134Z" level=info msg="Get image filesystem path "/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"" | |
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.488625556Z" level=error msg="Failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" | |
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.488950394Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1 | |
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.490910421Z" level=info msg=serving... address="/run/containerd/containerd.sock" | |
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.491179616Z" level=info msg="containerd successfully booted in 0.026503s" | |
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.511045542Z" level=info msg="Start subscribing containerd event" | |
Jun 03 08:20:37 kind-worker kubelet[43]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. | |
Jun 03 08:20:37 kind-worker kubelet[43]: F0603 08:20:37.519484 43 server.go:193] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory | |
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.522756643Z" level=info msg="Start recovering state" | |
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.523858617Z" level=warning msg="The image docker.io/kindest/kindnetd:0.1.0 is not unpacked." | |
Jun 03 08:20:37 kind-worker systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION | |
Jun 03 08:20:37 kind-worker systemd[1]: kubelet.service: Failed with result 'exit-code'. | |
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.534400143Z" level=warning msg="The image k8s.gcr.io/coredns:1.3.1 is not unpacked." | |
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.535538991Z" level=warning msg="The image k8s.gcr.io/etcd:3.3.10 is not unpacked." | |
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.540922253Z" level=warning msg="The image k8s.gcr.io/ip-masq-agent:v2.4.1 is not unpacked." | |
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.542098209Z" level=warning msg="The image k8s.gcr.io/kube-apiserver:v1.14.2 is not unpacked." | |
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.543388801Z" level=warning msg="The image k8s.gcr.io/kube-controller-manager:v1.14.2 is not unpacked." | |
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.544301826Z" level=warning msg="The image k8s.gcr.io/kube-proxy:v1.14.2 is not unpacked." | |
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.545017118Z" level=warning msg="The image k8s.gcr.io/kube-scheduler:v1.14.2 is not unpacked." | |
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.545684183Z" level=warning msg="The image k8s.gcr.io/pause:3.1 is not unpacked." | |
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.546356294Z" level=warning msg="The image sha256:19bb968f77bba3a5b5f56b5c033d71f699c22bdc8bbe9412f0bfaf7f674a64cc is not unpacked." | |
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.547034842Z" level=warning msg="The image sha256:2c4adeb21b4ff8ed3309d0e42b6b4ae39872399f7b37e0856e673b13c4aba13d is not unpacked." | |
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.547639690Z" level=warning msg="The image sha256:5c24210246bb67af5f89150e947211a1c2a127fb3825eb18507c1039bc6e86f8 is not unpacked." | |
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.548162513Z" level=warning msg="The image sha256:5eeff402b659832b64b5634061eb3825008abb549e1d873faf3908beecea8dfc is not unpacked." | |
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.548721569Z" level=warning msg="The image sha256:8be94bdae1399076ac29223a7f10230011d195e355dfc7027fa02dc95d34065f is not unpacked." | |
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.549377377Z" level=warning msg="The image sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e is not unpacked." | |
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.549980639Z" level=warning msg="The image sha256:eb516548c180f8a6e0235034ccee2428027896af16a509786da13022fe95fe8c is not unpacked." | |
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.550651169Z" level=warning msg="The image sha256:ee18f350636d8e51ebb3749d1d7a1928da1d6e6fc0051852a6686c19b706c57c is not unpacked." | |
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.551270388Z" level=warning msg="The image sha256:f227066bdc5f9aa2f8a9bb54854e5b7a23c6db8fce0f927e5c4feef8a9e74d46 is not unpacked." | |
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.551665433Z" level=info msg="Start event monitor" | |
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.551713863Z" level=info msg="Start snapshots syncer" | |
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.551725025Z" level=info msg="Start streaming server" | |
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.551894130Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:2c4adeb21b4ff8ed3309d0e42b6b4ae39872399f7b37e0856e673b13c4aba13d,Labels:map[string]string{io.cri-containerd.image: managed,},}" | |
Jun 03 08:20:37 kind-worker containerd[44]: time="2019-06-03T08:20:37.552489248Z" level=info msg="ImageUpdate event &ImageUpdate{Name:k8s.gcr.io/etcd:3.3.10,Labels:map[string]string{io.cri-containerd.image: managed,},}" | |
Jun 03 08:20:47 kind-worker systemd[1]: kubelet.service: Service RestartSec=10s expired, scheduling restart. | |
Jun 03 08:20:47 kind-worker systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. | |
Jun 03 08:20:47 kind-worker systemd[1]: Stopped kubelet: The Kubernetes Node Agent. | |
Jun 03 08:20:47 kind-worker systemd[1]: Started kubelet: The Kubernetes Node Agent. | |
Jun 03 08:20:47 kind-worker kubelet[63]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. | |
Jun 03 08:20:47 kind-worker kubelet[63]: F0603 08:20:47.739422 63 server.go:193] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory | |
Jun 03 08:20:47 kind-worker systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION | |
Jun 03 08:20:47 kind-worker systemd[1]: kubelet.service: Failed with result 'exit-code'. | |
Jun 03 08:20:57 kind-worker systemd[1]: kubelet.service: Service RestartSec=10s expired, scheduling restart. | |
Jun 03 08:20:57 kind-worker systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. | |
Jun 03 08:20:57 kind-worker systemd[1]: Stopped kubelet: The Kubernetes Node Agent. | |
Jun 03 08:20:57 kind-worker systemd[1]: Started kubelet: The Kubernetes Node Agent. | |
Jun 03 08:20:58 kind-worker kubelet[70]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. | |
Jun 03 08:20:58 kind-worker kubelet[70]: F0603 08:20:58.078299 70 server.go:193] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory | |
Jun 03 08:20:58 kind-worker systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION | |
Jun 03 08:20:58 kind-worker systemd[1]: kubelet.service: Failed with result 'exit-code'. | |
Jun 03 08:21:08 kind-worker systemd[1]: kubelet.service: Service RestartSec=10s expired, scheduling restart. | |
Jun 03 08:21:08 kind-worker systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. | |
Jun 03 08:21:08 kind-worker systemd[1]: Stopped kubelet: The Kubernetes Node Agent. | |
Jun 03 08:21:08 kind-worker systemd[1]: Started kubelet: The Kubernetes Node Agent. | |
Jun 03 08:21:08 kind-worker kubelet[78]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. | |
Jun 03 08:21:08 kind-worker kubelet[78]: F0603 08:21:08.216227 78 server.go:193] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory | |
Jun 03 08:21:08 kind-worker systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION | |
Jun 03 08:21:08 kind-worker systemd[1]: kubelet.service: Failed with result 'exit-code'. | |
Jun 03 08:21:13 kind-worker containerd[44]: time="2019-06-03T08:21:13.484196511Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" | |
Jun 03 08:21:18 kind-worker systemd[1]: kubelet.service: Service RestartSec=10s expired, scheduling restart. | |
Jun 03 08:21:18 kind-worker systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. | |
Jun 03 08:21:18 kind-worker systemd[1]: Stopped kubelet: The Kubernetes Node Agent. | |
Jun 03 08:21:18 kind-worker systemd[1]: Started kubelet: The Kubernetes Node Agent. | |
Jun 03 08:21:18 kind-worker kubelet[115]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. | |
Jun 03 08:21:18 kind-worker kubelet[115]: F0603 08:21:18.453342 115 server.go:193] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory | |
Jun 03 08:21:18 kind-worker systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION | |
Jun 03 08:21:18 kind-worker systemd[1]: kubelet.service: Failed with result 'exit-code'. | |
Jun 03 08:21:28 kind-worker systemd[1]: kubelet.service: Service RestartSec=10s expired, scheduling restart. | |
Jun 03 08:21:28 kind-worker systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. | |
Jun 03 08:21:28 kind-worker systemd[1]: Stopped kubelet: The Kubernetes Node Agent. | |
Jun 03 08:21:28 kind-worker systemd[1]: Started kubelet: The Kubernetes Node Agent. | |
Jun 03 08:21:28 kind-worker kubelet[125]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. | |
Jun 03 08:21:28 kind-worker kubelet[125]: F0603 08:21:28.715250 125 server.go:193] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory | |
Jun 03 08:21:28 kind-worker systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION | |
Jun 03 08:21:28 kind-worker systemd[1]: kubelet.service: Failed with result 'exit-code'. | |
Jun 03 08:21:28 kind-worker systemd[1]: Stopped kubelet: The Kubernetes Node Agent. | |
Jun 03 08:21:29 kind-worker systemd[1]: Reloading. | |
Jun 03 08:21:29 kind-worker systemd[1]: Configuration file /etc/systemd/system/containerd.service.d/10-restart.conf is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway. | |
Jun 03 08:21:29 kind-worker systemd[1]: Started kubelet: The Kubernetes Node Agent. | |
Jun 03 08:21:29 kind-worker kubelet[159]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. | |
Jun 03 08:21:29 kind-worker kubelet[159]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. | |
Jun 03 08:21:29 kind-worker systemd[1]: Started Kubernetes systemd probe. | |
Jun 03 08:21:29 kind-worker kubelet[159]: I0603 08:21:29.648770 159 server.go:417] Version: v1.14.2 | |
Jun 03 08:21:29 kind-worker kubelet[159]: I0603 08:21:29.648977 159 plugins.go:103] No cloud provider specified. | |
Jun 03 08:21:29 kind-worker kubelet[159]: I0603 08:21:29.648991 159 server.go:754] Client rotation is on, will bootstrap in background | |
Jun 03 08:21:29 kind-worker kubelet[159]: I0603 08:21:29.666436 159 server.go:625] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to / | |
Jun 03 08:21:29 kind-worker kubelet[159]: I0603 08:21:29.666864 159 container_manager_linux.go:261] container manager verified user specified cgroup-root exists: [] | |
Jun 03 08:21:29 kind-worker kubelet[159]: I0603 08:21:29.666880 159 container_manager_linux.go:266] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms} | |
Jun 03 08:21:29 kind-worker kubelet[159]: I0603 08:21:29.666942 159 container_manager_linux.go:286] Creating device plugin manager: true | |
Jun 03 08:21:29 kind-worker kubelet[159]: I0603 08:21:29.667023 159 state_mem.go:36] [cpumanager] initializing new in-memory state store | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.022749 159 kubelet.go:279] Adding pod path: /etc/kubernetes/manifests | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.023056 159 kubelet.go:304] Watching apiserver | |
Jun 03 08:21:30 kind-worker kubelet[159]: W0603 08:21:30.026048 159 util_unix.go:77] Using "/run/containerd/containerd.sock" as endpoint is deprecated, please consider using full url format "unix:///run/containerd/containerd.sock". | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.026120 159 remote_runtime.go:62] parsed scheme: "" | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.026131 159 remote_runtime.go:62] scheme "" not registered, fallback to default scheme | |
Jun 03 08:21:30 kind-worker kubelet[159]: W0603 08:21:30.026155 159 util_unix.go:77] Using "/run/containerd/containerd.sock" as endpoint is deprecated, please consider using full url format "unix:///run/containerd/containerd.sock". | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.026168 159 remote_image.go:50] parsed scheme: "" | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.026173 159 remote_image.go:50] scheme "" not registered, fallback to default scheme | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.026313 159 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{/run/containerd/containerd.sock 0 <nil>}] | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.026324 159 clientconn.go:796] ClientConn switching balancer to "pick_first" | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.026372 159 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc0008719f0, CONNECTING | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.026480 159 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{/run/containerd/containerd.sock 0 <nil>}] | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.026576 159 clientconn.go:796] ClientConn switching balancer to "pick_first" | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.026712 159 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc00041f4d0, CONNECTING | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.026827 159 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc0008719f0, READY | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.027640 159 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc00041f4d0, READY | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.029107 159 kuberuntime_manager.go:210] Container runtime containerd initialized, version: 1.2.6-0ubuntu1, apiVersion: v1alpha2 | |
Jun 03 08:21:30 kind-worker kubelet[159]: W0603 08:21:30.029635 159 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.030954 159 server.go:1037] Started kubelet | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.031369 159 server.go:141] Starting to listen on 0.0.0.0:10250 | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.032035 159 server.go:343] Adding debug handlers to kubelet server. | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.043343 159 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.043567 159 status_manager.go:152] Starting to sync pod status with apiserver | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.044030 159 kubelet.go:1806] Starting kubelet main sync loop. | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.044258 159 kubelet.go:1823] skipping pod synchronization - [container runtime status check may not have completed yet., PLEG is not healthy: pleg has yet to be successful.] | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.044419 159 volume_manager.go:248] Starting Kubelet Volume Manager | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.050192 159 desired_state_of_world_populator.go:130] Desired state populator starts to run | |
Jun 03 08:21:30 kind-worker containerd[44]: time="2019-06-03T08:21:30.050617212Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.051333 159 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.057911 159 clientconn.go:440] parsed scheme: "unix" | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.058035 159 clientconn.go:440] scheme "unix" not registered, fallback to default scheme | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.058124 159 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{unix:///run/containerd/containerd.sock 0 <nil>}] | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.058173 159 clientconn.go:796] ClientConn switching balancer to "pick_first" | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.058272 159 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc000457430, CONNECTING | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.058520 159 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc000457430, READY | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.067026 159 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.067252 159 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.067577 159 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a28b45b8b3c7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf35540281d7cfc7, ext:863938121, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf35540281d7cfc7, ext:863938121, loc:(*time.Location)(0x8018900)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.067857 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.067969 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.069862 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.100605 159 controller.go:115] failed to ensure node lease exists, will retry in 200ms, error: leases.coordination.k8s.io "kind-worker" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.113725 159 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.114944 159 cpu_manager.go:155] [cpumanager] starting with none policy | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.115059 159 cpu_manager.go:156] [cpumanager] reconciling every 10s | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.115113 159 policy_none.go:42] [cpumanager] none policy: Start | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.139328 159 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a28b4ab9b707", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node kind-worker status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf35540286d8d307, ext:947890577, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf35540286d8d307, ext:947890577, loc:(*time.Location)(0x8018900)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.151102 159 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a28b4ab9d81c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node kind-worker status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf35540286d8f41c, ext:947899037, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf35540286d8f41c, ext:947899037, loc:(*time.Location)(0x8018900)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.157034 159 kubelet.go:1823] skipping pod synchronization - container runtime status check may not have completed yet. | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.157101 159 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.157762 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.159792 159 kubelet_node_status.go:72] Attempting to register node kind-worker | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.166352 159 kubelet_node_status.go:94] Unable to register node "kind-worker" with API server: nodes is forbidden: User "system:anonymous" cannot create resource "nodes" in API group "" at the cluster scope | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.166510 159 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a28b4ab9f69a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node kind-worker status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf35540286d9129a, ext:947906842, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf35540286d9129a, ext:947906842, loc:(*time.Location)(0x8018900)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 08:21:30 kind-worker kubelet[159]: W0603 08:21:30.168315 159 manager.go:538] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.169582 159 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a28b4ab9b707", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node kind-worker status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf35540286d8d307, ext:947890577, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf3554028984d599, ext:992717858, loc:(*time.Location)(0x8018900)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a28b4ab9b707" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.173075 159 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get node info: node "kind-worker" not found | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.179732 159 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a28b4ab9d81c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node kind-worker status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf35540286d8f41c, ext:947899037, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf35540289859b53, ext:992768468, loc:(*time.Location)(0x8018900)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a28b4ab9d81c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.181091 159 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a28b4ab9f69a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node kind-worker status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf35540286d9129a, ext:947906842, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf3554028985b6bd, ext:992775488, loc:(*time.Location)(0x8018900)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a28b4ab9f69a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.181889 159 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a28b4dee89c9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf3554028a0da5c9, ext:1001684053, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf3554028a0da5c9, ext:1001684053, loc:(*time.Location)(0x8018900)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.257971 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.302326 159 controller.go:115] failed to ensure node lease exists, will retry in 400ms, error: leases.coordination.k8s.io "kind-worker" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.358401 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.366953 159 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.368284 159 kubelet_node_status.go:72] Attempting to register node kind-worker | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.370099 159 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a28b4ab9b707", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node kind-worker status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf35540286d8d307, ext:947890577, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf35540295f3094f, ext:1201266647, loc:(*time.Location)(0x8018900)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a28b4ab9b707" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.371508 159 kubelet_node_status.go:94] Unable to register node "kind-worker" with API server: nodes is forbidden: User "system:anonymous" cannot create resource "nodes" in API group "" at the cluster scope | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.371859 159 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a28b4ab9d81c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node kind-worker status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf35540286d8f41c, ext:947899037, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf35540295f329f7, ext:1201275013, loc:(*time.Location)(0x8018900)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a28b4ab9d81c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.372975 159 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a28b4ab9f69a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node kind-worker status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf35540286d9129a, ext:947906842, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf35540295f34047, ext:1201280722, loc:(*time.Location)(0x8018900)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a28b4ab9f69a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.462359 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.566310 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.666498 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.703701 159 controller.go:115] failed to ensure node lease exists, will retry in 800ms, error: leases.coordination.k8s.io "kind-worker" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.766760 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.771930 159 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.773069 159 kubelet_node_status.go:72] Attempting to register node kind-worker | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.774485 159 kubelet_node_status.go:94] Unable to register node "kind-worker" with API server: nodes is forbidden: User "system:anonymous" cannot create resource "nodes" in API group "" at the cluster scope | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.774589 159 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a28b4ab9b707", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node kind-worker status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf35540286d8d307, ext:947890577, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355402ae13721c, ext:1606043811, loc:(*time.Location)(0x8018900)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a28b4ab9b707" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.775489 159 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a28b4ab9d81c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node kind-worker status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf35540286d8f41c, ext:947899037, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355402ae13a883, ext:1606057742, loc:(*time.Location)(0x8018900)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a28b4ab9d81c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.832733 159 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a28b4ab9f69a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node kind-worker status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf35540286d9129a, ext:947906842, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355402ae13c217, ext:1606064288, loc:(*time.Location)(0x8018900)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a28b4ab9f69a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.866963 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.967138 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:31 kind-worker kubelet[159]: E0603 08:21:31.067312 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:31 kind-worker kubelet[159]: E0603 08:21:31.068600 159 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope | |
Jun 03 08:21:31 kind-worker kubelet[159]: E0603 08:21:31.093227 159 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope | |
Jun 03 08:21:31 kind-worker kubelet[159]: E0603 08:21:31.094236 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope | |
Jun 03 08:21:31 kind-worker kubelet[159]: E0603 08:21:31.097866 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope | |
Jun 03 08:21:31 kind-worker kubelet[159]: E0603 08:21:31.101180 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope | |
Jun 03 08:21:31 kind-worker kubelet[159]: E0603 08:21:31.168226 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:31 kind-worker kubelet[159]: E0603 08:21:31.268400 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:31 kind-worker kubelet[159]: E0603 08:21:31.368599 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:31 kind-worker kubelet[159]: E0603 08:21:31.468793 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:31 kind-worker kubelet[159]: E0603 08:21:31.505025 159 controller.go:115] failed to ensure node lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "kind-worker" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" | |
Jun 03 08:21:31 kind-worker kubelet[159]: E0603 08:21:31.569063 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:31 kind-worker kubelet[159]: I0603 08:21:31.574754 159 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach | |
Jun 03 08:21:31 kind-worker kubelet[159]: I0603 08:21:31.575981 159 kubelet_node_status.go:72] Attempting to register node kind-worker | |
Jun 03 08:21:31 kind-worker kubelet[159]: E0603 08:21:31.577525 159 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a28b4ab9b707", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node kind-worker status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf35540286d8d307, ext:947890577, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355402e252d47c, ext:2408871166, loc:(*time.Location)(0x8018900)}}, Count:5, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a28b4ab9b707" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 08:21:31 kind-worker kubelet[159]: E0603 08:21:31.578037 159 kubelet_node_status.go:94] Unable to register node "kind-worker" with API server: nodes is forbidden: User "system:anonymous" cannot create resource "nodes" in API group "" at the cluster scope | |
Jun 03 08:21:31 kind-worker kubelet[159]: E0603 08:21:31.578665 159 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a28b4ab9d81c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node kind-worker status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf35540286d8f41c, ext:947899037, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355402e25431b4, ext:2408960568, loc:(*time.Location)(0x8018900)}}, Count:5, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a28b4ab9d81c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 08:21:31 kind-worker kubelet[159]: E0603 08:21:31.579574 159 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a28b4ab9f69a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node kind-worker status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf35540286d9129a, ext:947906842, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355402e2545e52, ext:2408971997, loc:(*time.Location)(0x8018900)}}, Count:5, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a28b4ab9f69a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 08:21:31 kind-worker kubelet[159]: E0603 08:21:31.669232 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:31 kind-worker kubelet[159]: E0603 08:21:31.769408 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:31 kind-worker kubelet[159]: E0603 08:21:31.869582 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:31 kind-worker kubelet[159]: E0603 08:21:31.969768 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:32 kind-worker kubelet[159]: E0603 08:21:32.069930 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:32 kind-worker kubelet[159]: E0603 08:21:32.070147 159 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope | |
Jun 03 08:21:32 kind-worker kubelet[159]: E0603 08:21:32.095070 159 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope | |
Jun 03 08:21:32 kind-worker kubelet[159]: E0603 08:21:32.095720 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope | |
Jun 03 08:21:32 kind-worker kubelet[159]: E0603 08:21:32.098793 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope | |
Jun 03 08:21:32 kind-worker kubelet[159]: E0603 08:21:32.102160 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope | |
Jun 03 08:21:32 kind-worker kubelet[159]: E0603 08:21:32.170094 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:32 kind-worker kubelet[159]: E0603 08:21:32.270288 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:32 kind-worker kubelet[159]: E0603 08:21:32.370485 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:32 kind-worker kubelet[159]: E0603 08:21:32.470710 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:32 kind-worker kubelet[159]: E0603 08:21:32.570914 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:32 kind-worker kubelet[159]: E0603 08:21:32.671076 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:32 kind-worker kubelet[159]: E0603 08:21:32.771320 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:32 kind-worker kubelet[159]: E0603 08:21:32.871717 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:32 kind-worker kubelet[159]: E0603 08:21:32.971934 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:33 kind-worker kubelet[159]: E0603 08:21:33.071523 159 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Jun 03 08:21:33 kind-worker kubelet[159]: E0603 08:21:33.072090 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:33 kind-worker kubelet[159]: E0603 08:21:33.096848 159 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Jun 03 08:21:33 kind-worker kubelet[159]: E0603 08:21:33.097374 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jun 03 08:21:33 kind-worker kubelet[159]: E0603 08:21:33.099732 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope
Jun 03 08:21:33 kind-worker kubelet[159]: E0603 08:21:33.103248 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jun 03 08:21:33 kind-worker kubelet[159]: E0603 08:21:33.105849 159 controller.go:115] failed to ensure node lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "kind-worker" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Jun 03 08:21:33 kind-worker kubelet[159]: E0603 08:21:33.172280 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:33 kind-worker kubelet[159]: I0603 08:21:33.178205 159 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
Jun 03 08:21:33 kind-worker kubelet[159]: I0603 08:21:33.179381 159 kubelet_node_status.go:72] Attempting to register node kind-worker
Jun 03 08:21:33 kind-worker kubelet[159]: E0603 08:21:33.180378 159 kubelet_node_status.go:94] Unable to register node "kind-worker" with API server: nodes is forbidden: User "system:anonymous" cannot create resource "nodes" in API group "" at the cluster scope
Jun 03 08:21:33 kind-worker kubelet[159]: E0603 08:21:33.180658 159 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a28b4ab9b707", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node kind-worker status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf35540286d8d307, ext:947890577, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf3554034ab09e34, ext:4012364475, loc:(*time.Location)(0x8018900)}}, Count:6, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a28b4ab9b707" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Jun 03 08:21:33 kind-worker kubelet[159]: E0603 08:21:33.181560 159 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a28b4ab9d81c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node kind-worker status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf35540286d8f41c, ext:947899037, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf3554034ab0c30c, ext:4012373908, loc:(*time.Location)(0x8018900)}}, Count:6, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a28b4ab9d81c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Jun 03 08:21:33 kind-worker kubelet[159]: E0603 08:21:33.182380 159 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a28b4ab9f69a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node kind-worker status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf35540286d9129a, ext:947906842, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf3554034ab0d85b, ext:4012379368, loc:(*time.Location)(0x8018900)}}, Count:6, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a28b4ab9f69a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Jun 03 08:21:33 kind-worker kubelet[159]: E0603 08:21:33.272563 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:33 kind-worker kubelet[159]: E0603 08:21:33.372789 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:33 kind-worker kubelet[159]: E0603 08:21:33.472981 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:33 kind-worker kubelet[159]: E0603 08:21:33.573214 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:33 kind-worker kubelet[159]: E0603 08:21:33.673630 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:33 kind-worker kubelet[159]: E0603 08:21:33.773793 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:33 kind-worker kubelet[159]: E0603 08:21:33.874019 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:33 kind-worker kubelet[159]: E0603 08:21:33.974306 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:34 kind-worker kubelet[159]: E0603 08:21:34.072920 159 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Jun 03 08:21:34 kind-worker kubelet[159]: E0603 08:21:34.074467 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:34 kind-worker kubelet[159]: E0603 08:21:34.098539 159 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Jun 03 08:21:34 kind-worker kubelet[159]: E0603 08:21:34.099216 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jun 03 08:21:34 kind-worker kubelet[159]: E0603 08:21:34.100525 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope
Jun 03 08:21:34 kind-worker kubelet[159]: E0603 08:21:34.104157 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jun 03 08:21:34 kind-worker kubelet[159]: E0603 08:21:34.174637 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:34 kind-worker kubelet[159]: E0603 08:21:34.274836 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:34 kind-worker kubelet[159]: E0603 08:21:34.375047 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:34 kind-worker kubelet[159]: E0603 08:21:34.475298 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:34 kind-worker kubelet[159]: E0603 08:21:34.575476 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:34 kind-worker kubelet[159]: E0603 08:21:34.675659 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:34 kind-worker kubelet[159]: E0603 08:21:34.777493 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:34 kind-worker kubelet[159]: E0603 08:21:34.877922 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:34 kind-worker kubelet[159]: E0603 08:21:34.978135 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:35 kind-worker kubelet[159]: E0603 08:21:35.074264 159 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Jun 03 08:21:35 kind-worker kubelet[159]: E0603 08:21:35.082048 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:35 kind-worker kubelet[159]: E0603 08:21:35.100132 159 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Jun 03 08:21:35 kind-worker kubelet[159]: E0603 08:21:35.100882 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jun 03 08:21:35 kind-worker kubelet[159]: E0603 08:21:35.101574 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope
Jun 03 08:21:35 kind-worker kubelet[159]: E0603 08:21:35.105067 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jun 03 08:21:35 kind-worker containerd[44]: time="2019-06-03T08:21:35.169191184Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 03 08:21:35 kind-worker kubelet[159]: E0603 08:21:35.169854 159 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Jun 03 08:21:35 kind-worker kubelet[159]: E0603 08:21:35.182237 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:35 kind-worker kubelet[159]: E0603 08:21:35.282448 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:35 kind-worker kubelet[159]: E0603 08:21:35.382936 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:35 kind-worker kubelet[159]: E0603 08:21:35.483184 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:35 kind-worker kubelet[159]: E0603 08:21:35.583410 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:35 kind-worker kubelet[159]: E0603 08:21:35.683582 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:35 kind-worker kubelet[159]: E0603 08:21:35.783780 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:35 kind-worker kubelet[159]: E0603 08:21:35.883977 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:35 kind-worker kubelet[159]: E0603 08:21:35.984170 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:36 kind-worker kubelet[159]: E0603 08:21:36.075916 159 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Jun 03 08:21:36 kind-worker kubelet[159]: E0603 08:21:36.084376 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:36 kind-worker kubelet[159]: E0603 08:21:36.101833 159 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Jun 03 08:21:36 kind-worker kubelet[159]: E0603 08:21:36.102490 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jun 03 08:21:36 kind-worker kubelet[159]: E0603 08:21:36.103469 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope
Jun 03 08:21:36 kind-worker kubelet[159]: E0603 08:21:36.105856 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jun 03 08:21:36 kind-worker kubelet[159]: E0603 08:21:36.184666 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:36 kind-worker kubelet[159]: E0603 08:21:36.284852 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:36 kind-worker kubelet[159]: E0603 08:21:36.307237 159 controller.go:115] failed to ensure node lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "kind-worker" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Jun 03 08:21:36 kind-worker kubelet[159]: I0603 08:21:36.380549 159 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
Jun 03 08:21:36 kind-worker kubelet[159]: I0603 08:21:36.381714 159 kubelet_node_status.go:72] Attempting to register node kind-worker
Jun 03 08:21:36 kind-worker kubelet[159]: E0603 08:21:36.382971 159 kubelet_node_status.go:94] Unable to register node "kind-worker" with API server: nodes is forbidden: User "system:anonymous" cannot create resource "nodes" in API group "" at the cluster scope
Jun 03 08:21:36 kind-worker kubelet[159]: E0603 08:21:36.383326 159 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a28b4ab9b707", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node kind-worker status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf35540286d8d307, ext:947890577, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf35540416bfd823, ext:7214688937, loc:(*time.Location)(0x8018900)}}, Count:7, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a28b4ab9b707" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Jun 03 08:21:36 kind-worker kubelet[159]: E0603 08:21:36.384457 159 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a28b4ab9d81c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node kind-worker status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf35540286d8f41c, ext:947899037, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf35540416c0143f, ext:7214704325, loc:(*time.Location)(0x8018900)}}, Count:7, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a28b4ab9d81c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Jun 03 08:21:36 kind-worker kubelet[159]: E0603 08:21:36.384969 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:36 kind-worker kubelet[159]: E0603 08:21:36.385267 159 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a28b4ab9f69a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node kind-worker status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf35540286d9129a, ext:947906842, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf35540416c027bb, ext:7214709312, loc:(*time.Location)(0x8018900)}}, Count:7, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a28b4ab9f69a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Jun 03 08:21:36 kind-worker kubelet[159]: E0603 08:21:36.485286 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:36 kind-worker kubelet[159]: E0603 08:21:36.585720 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:36 kind-worker kubelet[159]: E0603 08:21:36.686293 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:36 kind-worker kubelet[159]: E0603 08:21:36.787200 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:36 kind-worker kubelet[159]: E0603 08:21:36.887790 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:36 kind-worker kubelet[159]: E0603 08:21:36.988362 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:37 kind-worker kubelet[159]: E0603 08:21:37.083266 159 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Jun 03 08:21:37 kind-worker kubelet[159]: E0603 08:21:37.088780 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:37 kind-worker kubelet[159]: E0603 08:21:37.103756 159 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Jun 03 08:21:37 kind-worker kubelet[159]: E0603 08:21:37.104257 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jun 03 08:21:37 kind-worker kubelet[159]: E0603 08:21:37.105011 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope
Jun 03 08:21:37 kind-worker kubelet[159]: E0603 08:21:37.188994 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:37 kind-worker kubelet[159]: E0603 08:21:37.225122 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jun 03 08:21:37 kind-worker kubelet[159]: E0603 08:21:37.289217 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:37 kind-worker kubelet[159]: E0603 08:21:37.389396 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:37 kind-worker kubelet[159]: E0603 08:21:37.489647 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:37 kind-worker kubelet[159]: E0603 08:21:37.589844 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:37 kind-worker kubelet[159]: E0603 08:21:37.690044 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:37 kind-worker kubelet[159]: E0603 08:21:37.790529 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:37 kind-worker kubelet[159]: E0603 08:21:37.890762 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:37 kind-worker kubelet[159]: E0603 08:21:37.991014 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:38 kind-worker kubelet[159]: E0603 08:21:38.084851 159 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Jun 03 08:21:38 kind-worker kubelet[159]: E0603 08:21:38.091307 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:38 kind-worker kubelet[159]: E0603 08:21:38.105564 159 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Jun 03 08:21:38 kind-worker kubelet[159]: E0603 08:21:38.106321 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jun 03 08:21:38 kind-worker kubelet[159]: E0603 08:21:38.107180 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope
Jun 03 08:21:38 kind-worker kubelet[159]: E0603 08:21:38.191507 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:38 kind-worker kubelet[159]: E0603 08:21:38.226703 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jun 03 08:21:38 kind-worker kubelet[159]: E0603 08:21:38.292012 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:38 kind-worker kubelet[159]: E0603 08:21:38.392200 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:38 kind-worker kubelet[159]: E0603 08:21:38.492408 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:38 kind-worker kubelet[159]: E0603 08:21:38.592608 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:38 kind-worker kubelet[159]: E0603 08:21:38.692786 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:38 kind-worker kubelet[159]: E0603 08:21:38.793286 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:38 kind-worker kubelet[159]: E0603 08:21:38.893642 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:38 kind-worker kubelet[159]: E0603 08:21:38.994061 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:39 kind-worker kubelet[159]: E0603 08:21:39.086307 159 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Jun 03 08:21:39 kind-worker kubelet[159]: E0603 08:21:39.094248 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:39 kind-worker kubelet[159]: E0603 08:21:39.107135 159 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Jun 03 08:21:39 kind-worker kubelet[159]: E0603 08:21:39.107749 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jun 03 08:21:39 kind-worker kubelet[159]: E0603 08:21:39.108999 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope
Jun 03 08:21:39 kind-worker kubelet[159]: E0603 08:21:39.194421 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:39 kind-worker kubelet[159]: E0603 08:21:39.228115 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jun 03 08:21:39 kind-worker kubelet[159]: E0603 08:21:39.294618 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:39 kind-worker kubelet[159]: E0603 08:21:39.394770 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:39 kind-worker kubelet[159]: E0603 08:21:39.495448 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:39 kind-worker kubelet[159]: E0603 08:21:39.595670 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:39 kind-worker kubelet[159]: I0603 08:21:39.651804 159 transport.go:132] certificate rotation detected, shutting down client connections to start using new credentials
Jun 03 08:21:39 kind-worker kubelet[159]: E0603 08:21:39.696014 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:39 kind-worker kubelet[159]: E0603 08:21:39.796430 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:39 kind-worker kubelet[159]: E0603 08:21:39.896594 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:39 kind-worker kubelet[159]: E0603 08:21:39.996750 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:40 kind-worker kubelet[159]: E0603 08:21:40.096915 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:40 kind-worker kubelet[159]: I0603 08:21:40.168817 159 reconciler.go:154] Reconciler: start to sync state
Jun 03 08:21:40 kind-worker containerd[44]: time="2019-06-03T08:21:40.170467487Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 03 08:21:40 kind-worker kubelet[159]: E0603 08:21:40.171078 159 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Jun 03 08:21:40 kind-worker kubelet[159]: E0603 08:21:40.174009 159 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get node info: node "kind-worker" not found
Jun 03 08:21:40 kind-worker kubelet[159]: E0603 08:21:40.197088 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:40 kind-worker kubelet[159]: E0603 08:21:40.300016 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:40 kind-worker kubelet[159]: E0603 08:21:40.400243 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:40 kind-worker kubelet[159]: E0603 08:21:40.500447 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:40 kind-worker kubelet[159]: E0603 08:21:40.600639 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:40 kind-worker kubelet[159]: E0603 08:21:40.700828 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:40 kind-worker kubelet[159]: E0603 08:21:40.800992 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:40 kind-worker kubelet[159]: E0603 08:21:40.901179 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:41 kind-worker kubelet[159]: E0603 08:21:41.001383 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:41 kind-worker kubelet[159]: E0603 08:21:41.101604 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:41 kind-worker kubelet[159]: E0603 08:21:41.201805 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:41 kind-worker kubelet[159]: E0603 08:21:41.301959 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:41 kind-worker kubelet[159]: E0603 08:21:41.402170 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:41 kind-worker kubelet[159]: E0603 08:21:41.502371 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:41 kind-worker kubelet[159]: E0603 08:21:41.602556 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:41 kind-worker kubelet[159]: E0603 08:21:41.702762 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:41 kind-worker kubelet[159]: E0603 08:21:41.802957 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:41 kind-worker kubelet[159]: E0603 08:21:41.903149 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:42 kind-worker kubelet[159]: E0603 08:21:42.003399 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:42 kind-worker kubelet[159]: E0603 08:21:42.103648 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:42 kind-worker kubelet[159]: E0603 08:21:42.203887 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:42 kind-worker kubelet[159]: E0603 08:21:42.304137 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:42 kind-worker kubelet[159]: E0603 08:21:42.404332 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:42 kind-worker kubelet[159]: E0603 08:21:42.504553 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:42 kind-worker kubelet[159]: E0603 08:21:42.604721 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:42 kind-worker kubelet[159]: E0603 08:21:42.704924 159 kubelet.go:2244] node "kind-worker" not found
Jun 03 08:21:42 kind-worker kubelet[159]: E0603 08:21:42.716592 159 controller.go:194] failed to get node "kind-worker" when trying to set owner ref to the node lease: nodes "kind-worker" not found
Jun 03 08:21:42 kind-worker kubelet[159]: I0603 08:21:42.783338 159 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
Jun 03 08:21:42 kind-worker kubelet[159]: I0603 08:21:42.784599 159 kubelet_node_status.go:72] Attempting to register node kind-worker | |
Jun 03 08:21:42 kind-worker kubelet[159]: I0603 08:21:42.790244 159 kubelet_node_status.go:75] Successfully registered node kind-worker | |
Jun 03 08:21:42 kind-worker kubelet[159]: I0603 08:21:42.873760 159 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/9e5f9619-85d8-11e9-bdc2-0242ac110002-xtables-lock") pod "kube-proxy-q6qbj" (UID: "9e5f9619-85d8-11e9-bdc2-0242ac110002") | |
Jun 03 08:21:42 kind-worker kubelet[159]: I0603 08:21:42.873829 159 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/9e5f9619-85d8-11e9-bdc2-0242ac110002-lib-modules") pod "kube-proxy-q6qbj" (UID: "9e5f9619-85d8-11e9-bdc2-0242ac110002") | |
Jun 03 08:21:42 kind-worker kubelet[159]: I0603 08:21:42.873871 159 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-snhxd" (UniqueName: "kubernetes.io/secret/9e5f9619-85d8-11e9-bdc2-0242ac110002-kube-proxy-token-snhxd") pod "kube-proxy-q6qbj" (UID: "9e5f9619-85d8-11e9-bdc2-0242ac110002") | |
Jun 03 08:21:42 kind-worker kubelet[159]: I0603 08:21:42.873916 159 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-cfg" (UniqueName: "kubernetes.io/host-path/9e5f6fdf-85d8-11e9-bdc2-0242ac110002-cni-cfg") pod "kindnet-h2bsq" (UID: "9e5f6fdf-85d8-11e9-bdc2-0242ac110002") | |
Jun 03 08:21:42 kind-worker kubelet[159]: I0603 08:21:42.873953 159 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kindnet-token-ztngt" (UniqueName: "kubernetes.io/secret/9e5f6fdf-85d8-11e9-bdc2-0242ac110002-kindnet-token-ztngt") pod "kindnet-h2bsq" (UID: "9e5f6fdf-85d8-11e9-bdc2-0242ac110002") | |
Jun 03 08:21:42 kind-worker kubelet[159]: I0603 08:21:42.874124 159 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/9e5f9619-85d8-11e9-bdc2-0242ac110002-kube-proxy") pod "kube-proxy-q6qbj" (UID: "9e5f9619-85d8-11e9-bdc2-0242ac110002") | |
Jun 03 08:21:42 kind-worker kubelet[159]: I0603 08:21:42.911217 159 kuberuntime_manager.go:946] updating runtime config through cri with podcidr 10.244.2.0/24 | |
Jun 03 08:21:42 kind-worker containerd[44]: time="2019-06-03T08:21:42.911689047Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." | |
Jun 03 08:21:42 kind-worker kubelet[159]: I0603 08:21:42.912030 159 kubelet_network.go:77] Setting Pod CIDR: -> 10.244.2.0/24 | |
Jun 03 08:21:42 kind-worker containerd[44]: time="2019-06-03T08:21:42.912338445Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" | |
Jun 03 08:21:42 kind-worker kubelet[159]: E0603 08:21:42.912519 159 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized | |
Jun 03 08:21:42 kind-worker kubelet[159]: I0603 08:21:42.974592 159 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config" (UniqueName: "kubernetes.io/configmap/9e63c396-85d8-11e9-bdc2-0242ac110002-config") pod "ip-masq-agent-kcr75" (UID: "9e63c396-85d8-11e9-bdc2-0242ac110002") | |
Jun 03 08:21:42 kind-worker kubelet[159]: I0603 08:21:42.974643 159 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "ip-masq-agent-token-vrrsp" (UniqueName: "kubernetes.io/secret/9e63c396-85d8-11e9-bdc2-0242ac110002-ip-masq-agent-token-vrrsp") pod "ip-masq-agent-kcr75" (UID: "9e63c396-85d8-11e9-bdc2-0242ac110002") | |
Jun 03 08:21:43 kind-worker containerd[44]: time="2019-06-03T08:21:43.140892936Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-proxy-q6qbj,Uid:9e5f9619-85d8-11e9-bdc2-0242ac110002,Namespace:kube-system,Attempt:0,}"
Jun 03 08:21:43 kind-worker systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount611377230.mount: Succeeded.
Jun 03 08:21:43 kind-worker containerd[44]: time="2019-06-03T08:21:43.184555849Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kindnet-h2bsq,Uid:9e5f6fdf-85d8-11e9-bdc2-0242ac110002,Namespace:kube-system,Attempt:0,}"
Jun 03 08:21:43 kind-worker containerd[44]: time="2019-06-03T08:21:43.209141213Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/71c5255ddc47d268b5c9ce42d7dc909bc6c5bf9c6db97c05017f96242d5838b0/shim.sock" debug=false pid=198
Jun 03 08:21:43 kind-worker containerd[44]: time="2019-06-03T08:21:43.221242779Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/965f998e96cf6f2a41d92e52f3e42375fd20b8ec9bb14f1669a73223f391be25/shim.sock" debug=false pid=211
Jun 03 08:21:43 kind-worker containerd[44]: time="2019-06-03T08:21:43.251994196Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:ip-masq-agent-kcr75,Uid:9e63c396-85d8-11e9-bdc2-0242ac110002,Namespace:kube-system,Attempt:0,}"
Jun 03 08:21:43 kind-worker containerd[44]: time="2019-06-03T08:21:43.319897802Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/e1dc64c900d66d89c0b884abe34ceb614b9db09f2fb1bab96255212533370de7/shim.sock" debug=false pid=232
Jun 03 08:21:43 kind-worker systemd[1]: run-containerd-runc-k8s.io-e1dc64c900d66d89c0b884abe34ceb614b9db09f2fb1bab96255212533370de7-runc.EynLAy.mount: Succeeded.
Jun 03 08:21:43 kind-worker containerd[44]: time="2019-06-03T08:21:43.554693266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-h2bsq,Uid:9e5f6fdf-85d8-11e9-bdc2-0242ac110002,Namespace:kube-system,Attempt:0,} returns sandbox id "71c5255ddc47d268b5c9ce42d7dc909bc6c5bf9c6db97c05017f96242d5838b0""
Jun 03 08:21:43 kind-worker containerd[44]: time="2019-06-03T08:21:43.562490243Z" level=info msg="CreateContainer within sandbox "71c5255ddc47d268b5c9ce42d7dc909bc6c5bf9c6db97c05017f96242d5838b0" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
Jun 03 08:21:43 kind-worker containerd[44]: time="2019-06-03T08:21:43.732246863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:ip-masq-agent-kcr75,Uid:9e63c396-85d8-11e9-bdc2-0242ac110002,Namespace:kube-system,Attempt:0,} returns sandbox id "e1dc64c900d66d89c0b884abe34ceb614b9db09f2fb1bab96255212533370de7""
Jun 03 08:21:43 kind-worker containerd[44]: time="2019-06-03T08:21:43.739513953Z" level=info msg="CreateContainer within sandbox "e1dc64c900d66d89c0b884abe34ceb614b9db09f2fb1bab96255212533370de7" for container &ContainerMetadata{Name:ip-masq-agent,Attempt:0,}"
Jun 03 08:21:43 kind-worker containerd[44]: time="2019-06-03T08:21:43.924008726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q6qbj,Uid:9e5f9619-85d8-11e9-bdc2-0242ac110002,Namespace:kube-system,Attempt:0,} returns sandbox id "965f998e96cf6f2a41d92e52f3e42375fd20b8ec9bb14f1669a73223f391be25""
Jun 03 08:21:43 kind-worker containerd[44]: time="2019-06-03T08:21:43.928842516Z" level=info msg="CreateContainer within sandbox "965f998e96cf6f2a41d92e52f3e42375fd20b8ec9bb14f1669a73223f391be25" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jun 03 08:21:44 kind-worker systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount011890916.mount: Succeeded.
Jun 03 08:21:44 kind-worker containerd[44]: time="2019-06-03T08:21:44.856502232Z" level=info msg="CreateContainer within sandbox "71c5255ddc47d268b5c9ce42d7dc909bc6c5bf9c6db97c05017f96242d5838b0" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id "d9ca045855adbb3372f01f146ea8d094be8cdbce2de0aa60fe49f970c0cd7379""
Jun 03 08:21:44 kind-worker containerd[44]: time="2019-06-03T08:21:44.933928142Z" level=info msg="StartContainer for "d9ca045855adbb3372f01f146ea8d094be8cdbce2de0aa60fe49f970c0cd7379""
Jun 03 08:21:44 kind-worker containerd[44]: time="2019-06-03T08:21:44.952405814Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/d9ca045855adbb3372f01f146ea8d094be8cdbce2de0aa60fe49f970c0cd7379/shim.sock" debug=false pid=352
Jun 03 08:21:45 kind-worker systemd[1]: run-containerd-runc-k8s.io-d9ca045855adbb3372f01f146ea8d094be8cdbce2de0aa60fe49f970c0cd7379-runc.0ywqzi.mount: Succeeded.
Jun 03 08:21:45 kind-worker containerd[44]: time="2019-06-03T08:21:45.171794032Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 03 08:21:45 kind-worker kubelet[159]: E0603 08:21:45.172257 159 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Jun 03 08:21:45 kind-worker containerd[44]: time="2019-06-03T08:21:45.538834038Z" level=info msg="StartContainer for "d9ca045855adbb3372f01f146ea8d094be8cdbce2de0aa60fe49f970c0cd7379" returns successfully"
Jun 03 08:21:45 kind-worker systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount214677405.mount: Succeeded.
Jun 03 08:21:46 kind-worker systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount421701215.mount: Succeeded.
Jun 03 08:21:46 kind-worker systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount897554762.mount: Succeeded.
Jun 03 08:21:46 kind-worker systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount631261187.mount: Succeeded.
Jun 03 08:21:46 kind-worker containerd[44]: time="2019-06-03T08:21:46.448154231Z" level=info msg="CreateContainer within sandbox "e1dc64c900d66d89c0b884abe34ceb614b9db09f2fb1bab96255212533370de7" for &ContainerMetadata{Name:ip-masq-agent,Attempt:0,} returns container id "806e9d11c7e8760f33cdaeae2ab9cea80d6f53d015eaba1109b01ee69d1172ec""
Jun 03 08:21:46 kind-worker containerd[44]: time="2019-06-03T08:21:46.449356817Z" level=info msg="StartContainer for "806e9d11c7e8760f33cdaeae2ab9cea80d6f53d015eaba1109b01ee69d1172ec""
Jun 03 08:21:46 kind-worker containerd[44]: time="2019-06-03T08:21:46.467150623Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/806e9d11c7e8760f33cdaeae2ab9cea80d6f53d015eaba1109b01ee69d1172ec/shim.sock" debug=false pid=404
Jun 03 08:21:47 kind-worker containerd[44]: time="2019-06-03T08:21:47.100385203Z" level=info msg="StartContainer for "806e9d11c7e8760f33cdaeae2ab9cea80d6f53d015eaba1109b01ee69d1172ec" returns successfully"
Jun 03 08:21:47 kind-worker systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount086464489.mount: Succeeded.
Jun 03 08:21:47 kind-worker systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount211877980.mount: Succeeded.
Jun 03 08:21:47 kind-worker systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount307055115.mount: Succeeded.
Jun 03 08:21:47 kind-worker containerd[44]: time="2019-06-03T08:21:47.231771945Z" level=info msg="CreateContainer within sandbox "965f998e96cf6f2a41d92e52f3e42375fd20b8ec9bb14f1669a73223f391be25" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id "7477cc71001f9db1e43bd9e417c984e1c4b1b38525e550b5fe8bfbd29b3d3620""
Jun 03 08:21:47 kind-worker containerd[44]: time="2019-06-03T08:21:47.232771489Z" level=info msg="StartContainer for "7477cc71001f9db1e43bd9e417c984e1c4b1b38525e550b5fe8bfbd29b3d3620""
Jun 03 08:21:47 kind-worker containerd[44]: time="2019-06-03T08:21:47.233977794Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/7477cc71001f9db1e43bd9e417c984e1c4b1b38525e550b5fe8bfbd29b3d3620/shim.sock" debug=false pid=461
Jun 03 08:21:47 kind-worker systemd[1]: run-containerd-runc-k8s.io-7477cc71001f9db1e43bd9e417c984e1c4b1b38525e550b5fe8bfbd29b3d3620-runc.MNiXKy.mount: Succeeded.
Jun 03 08:21:47 kind-worker containerd[44]: time="2019-06-03T08:21:47.459889328Z" level=info msg="StartContainer for "7477cc71001f9db1e43bd9e417c984e1c4b1b38525e550b5fe8bfbd29b3d3620" returns successfully"
Jun 03 08:21:50 kind-worker containerd[44]: time="2019-06-03T08:21:50.173149991Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 03 08:21:50 kind-worker kubelet[159]: E0603 08:21:50.173765 159 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Jun 03 08:21:50 kind-worker kubelet[159]: E0603 08:21:50.192930 159 summary_sys_containers.go:47] Failed to get system container stats for "/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/system.slice/kubelet.service": failed to get cgroup stats for "/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/system.slice/kubelet.service": failed to get container info for "/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/system.slice/kubelet.service": unknown container "/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/system.slice/kubelet.service"
Jun 03 08:21:55 kind-worker containerd[44]: time="2019-06-03T08:21:55.175001692Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 03 08:21:55 kind-worker kubelet[159]: E0603 08:21:55.175385 159 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Jun 03 08:22:00 kind-worker containerd[44]: time="2019-06-03T08:22:00.176146764Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 03 08:22:00 kind-worker kubelet[159]: E0603 08:22:00.176328 159 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Jun 03 08:22:00 kind-worker kubelet[159]: E0603 08:22:00.212800 159 summary_sys_containers.go:47] Failed to get system container stats for "/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/system.slice/kubelet.service": failed to get cgroup stats for "/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/system.slice/kubelet.service": failed to get container info for "/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/system.slice/kubelet.service": unknown container "/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/system.slice/kubelet.service"
Jun 03 08:22:05 kind-worker containerd[44]: time="2019-06-03T08:22:05.177223831Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 03 08:22:05 kind-worker kubelet[159]: E0603 08:22:05.177524 159 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Jun 03 08:22:10 kind-worker containerd[44]: time="2019-06-03T08:22:10.179014365Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 03 08:22:10 kind-worker kubelet[159]: E0603 08:22:10.179730 159 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Jun 03 08:22:10 kind-worker kubelet[159]: E0603 08:22:10.245210 159 summary_sys_containers.go:47] Failed to get system container stats for "/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/system.slice/kubelet.service": failed to get cgroup stats for "/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/system.slice/kubelet.service": failed to get container info for "/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/system.slice/kubelet.service": unknown container "/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/system.slice/kubelet.service"
Jun 03 08:22:15 kind-worker containerd[44]: time="2019-06-03T08:22:15.180583790Z" level=error msg="Failed to load cni configuration" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 03 08:22:15 kind-worker kubelet[159]: E0603 08:22:15.180921 159 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Jun 03 08:22:15 kind-worker containerd[44]: time="2019-06-03T08:22:15.459249907Z" level=info msg="Finish piping stdout of container "d9ca045855adbb3372f01f146ea8d094be8cdbce2de0aa60fe49f970c0cd7379""
Jun 03 08:22:15 kind-worker containerd[44]: time="2019-06-03T08:22:15.459316940Z" level=info msg="Finish piping stderr of container "d9ca045855adbb3372f01f146ea8d094be8cdbce2de0aa60fe49f970c0cd7379""
Jun 03 08:22:15 kind-worker containerd[44]: time="2019-06-03T08:22:15.510505732Z" level=info msg="TaskExit event &TaskExit{ContainerID:d9ca045855adbb3372f01f146ea8d094be8cdbce2de0aa60fe49f970c0cd7379,ID:d9ca045855adbb3372f01f146ea8d094be8cdbce2de0aa60fe49f970c0cd7379,Pid:370,ExitStatus:2,ExitedAt:2019-06-03 08:22:15.46005987 +0000 UTC,}"
Jun 03 08:22:15 kind-worker systemd[1]: run-containerd-io.containerd.runtime.v1.linux-k8s.io-d9ca045855adbb3372f01f146ea8d094be8cdbce2de0aa60fe49f970c0cd7379-rootfs.mount: Succeeded.
Jun 03 08:22:15 kind-worker containerd[44]: time="2019-06-03T08:22:15.576572310Z" level=info msg="shim reaped" id=d9ca045855adbb3372f01f146ea8d094be8cdbce2de0aa60fe49f970c0cd7379
Jun 03 08:22:16 kind-worker containerd[44]: time="2019-06-03T08:22:16.157993779Z" level=info msg="CreateContainer within sandbox "71c5255ddc47d268b5c9ce42d7dc909bc6c5bf9c6db97c05017f96242d5838b0" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
Jun 03 08:22:16 kind-worker containerd[44]: time="2019-06-03T08:22:16.211748562Z" level=info msg="CreateContainer within sandbox "71c5255ddc47d268b5c9ce42d7dc909bc6c5bf9c6db97c05017f96242d5838b0" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id "14fbe60262eed73b03e17d52ce534aa67a06e418537a1a346b4d551c488d7cd7""
Jun 03 08:22:16 kind-worker containerd[44]: time="2019-06-03T08:22:16.212766798Z" level=info msg="StartContainer for "14fbe60262eed73b03e17d52ce534aa67a06e418537a1a346b4d551c488d7cd7""
Jun 03 08:22:16 kind-worker containerd[44]: time="2019-06-03T08:22:16.213730006Z" level=info msg="shim containerd-shim started" address="/containerd-shim/k8s.io/14fbe60262eed73b03e17d52ce534aa67a06e418537a1a346b4d551c488d7cd7/shim.sock" debug=false pid=610
Jun 03 08:22:16 kind-worker systemd[1]: run-containerd-runc-k8s.io-14fbe60262eed73b03e17d52ce534aa67a06e418537a1a346b4d551c488d7cd7-runc.lUpkTG.mount: Succeeded.
Jun 03 08:22:16 kind-worker containerd[44]: time="2019-06-03T08:22:16.493662691Z" level=info msg="StartContainer for "14fbe60262eed73b03e17d52ce534aa67a06e418537a1a346b4d551c488d7cd7" returns successfully"
Jun 03 08:22:20 kind-worker kubelet[159]: E0603 08:22:20.269361 159 summary_sys_containers.go:47] Failed to get system container stats for "/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/system.slice/kubelet.service": failed to get cgroup stats for "/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/system.slice/kubelet.service": failed to get container info for "/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/system.slice/kubelet.service": unknown container "/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/system.slice/kubelet.service"
Jun 03 08:22:30 kind-worker kubelet[159]: E0603 08:22:30.290330 159 summary_sys_containers.go:47] Failed to get system container stats for "/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/system.slice/kubelet.service": failed to get cgroup stats for "/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/system.slice/kubelet.service": failed to get container info for "/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/system.slice/kubelet.service": unknown container "/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/system.slice/kubelet.service"
2019-06-03T08:22:16.481333505Z stdout F hostIP = 172.17.0.4
2019-06-03T08:22:16.481391139Z stdout F podIP = 172.17.0.4
2019-06-03T08:22:16.545990083Z stdout F Handling node with IP: 172.17.0.2
2019-06-03T08:22:16.546042944Z stdout F Node kind-control-plane has CIDR 10.244.0.0/24
2019-06-03T08:22:16.547735677Z stdout F Adding route {Ifindex: 0 Dst: 10.244.0.0/24 Src: <nil> Gw: 172.17.0.2 Flags: [] Table: 0}
2019-06-03T08:22:16.547774666Z stdout F Handling node with IP: 172.17.0.4
2019-06-03T08:22:16.547789928Z stdout F handling current node
2019-06-03T08:22:16.552029885Z stdout F Handling node with IP: 172.17.0.3
2019-06-03T08:22:16.552058766Z stdout F Node kind-worker2 has CIDR 10.244.1.0/24
2019-06-03T08:22:16.552067357Z stdout F Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.17.0.3 Flags: [] Table: 0}
2019-06-03T08:22:26.556717629Z stdout F Handling node with IP: 172.17.0.2
2019-06-03T08:22:26.556769824Z stdout F Node kind-control-plane has CIDR 10.244.0.0/24
2019-06-03T08:22:26.556775732Z stdout F Handling node with IP: 172.17.0.4
2019-06-03T08:22:26.556779493Z stdout F handling current node
2019-06-03T08:22:26.556784762Z stdout F Handling node with IP: 172.17.0.3
2019-06-03T08:22:26.55678842Z stdout F Node kind-worker2 has CIDR 10.244.1.0/24
2019-06-03T08:22:36.561044707Z stdout F Handling node with IP: 172.17.0.2
2019-06-03T08:22:36.561102623Z stdout F Node kind-control-plane has CIDR 10.244.0.0/24
2019-06-03T08:22:36.561124668Z stdout F Handling node with IP: 172.17.0.4
2019-06-03T08:22:36.561128721Z stdout F handling current node
2019-06-03T08:22:36.561134559Z stdout F Handling node with IP: 172.17.0.3
2019-06-03T08:22:36.561137765Z stdout F Node kind-worker2 has CIDR 10.244.1.0/24
2019-06-03T08:21:45.456119607Z stdout F hostIP = 172.17.0.4
2019-06-03T08:21:45.456157435Z stdout F podIP = 172.17.0.4
2019-06-03T08:22:15.44426374Z stderr F panic: Get https://10.96.0.1:443/api/v1/nodes: dial tcp 10.96.0.1:443: i/o timeout
2019-06-03T08:22:15.444311239Z stderr F
2019-06-03T08:22:15.444317729Z stderr F goroutine 1 [running]:
2019-06-03T08:22:15.44432285Z stderr F main.main()
2019-06-03T08:22:15.444330284Z stderr F /src/main.go:84 +0x423
2019-06-03T08:21:45.360989243Z stdout F hostIP = 172.17.0.3
2019-06-03T08:21:45.361036809Z stdout F podIP = 172.17.0.3
2019-06-03T08:22:15.3461429Z stderr F panic: Get https://10.96.0.1:443/api/v1/nodes: dial tcp 10.96.0.1:443: i/o timeout
2019-06-03T08:22:15.34619124Z stderr F
2019-06-03T08:22:15.346196754Z stderr F goroutine 1 [running]:
2019-06-03T08:22:15.346201212Z stderr F main.main()
2019-06-03T08:22:15.34620588Z stderr F /src/main.go:84 +0x423
2019-06-03T08:22:15.842770036Z stdout F hostIP = 172.17.0.3
2019-06-03T08:22:15.842845438Z stdout F podIP = 172.17.0.3
2019-06-03T08:22:15.942847069Z stdout F Handling node with IP: 172.17.0.2
2019-06-03T08:22:15.942872395Z stdout F Node kind-control-plane has CIDR 10.244.0.0/24
2019-06-03T08:22:15.942878476Z stdout F Adding route {Ifindex: 0 Dst: 10.244.0.0/24 Src: <nil> Gw: 172.17.0.2 Flags: [] Table: 0}
2019-06-03T08:22:15.942884046Z stdout F Handling node with IP: 172.17.0.4
2019-06-03T08:22:15.942887562Z stdout F Node kind-worker has CIDR 10.244.2.0/24
2019-06-03T08:22:15.942891266Z stdout F Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 172.17.0.4 Flags: [] Table: 0}
2019-06-03T08:22:15.9428948Z stdout F Handling node with IP: 172.17.0.3
2019-06-03T08:22:15.942907303Z stdout F handling current node
2019-06-03T08:22:26.039853886Z stdout F Handling node with IP: 172.17.0.2
2019-06-03T08:22:26.039896209Z stdout F Node kind-control-plane has CIDR 10.244.0.0/24
2019-06-03T08:22:26.039902507Z stdout F Handling node with IP: 172.17.0.4
2019-06-03T08:22:26.039905989Z stdout F Node kind-worker has CIDR 10.244.2.0/24
2019-06-03T08:22:26.039909541Z stdout F Handling node with IP: 172.17.0.3
2019-06-03T08:22:26.039912944Z stdout F handling current node
2019-06-03T08:22:36.141176711Z stdout F Handling node with IP: 172.17.0.2
2019-06-03T08:22:36.141227478Z stdout F Node kind-control-plane has CIDR 10.244.0.0/24
2019-06-03T08:22:36.141262362Z stdout F Handling node with IP: 172.17.0.4
2019-06-03T08:22:36.141268444Z stdout F Node kind-worker has CIDR 10.244.2.0/24
2019-06-03T08:22:36.141283036Z stdout F Handling node with IP: 172.17.0.3
2019-06-03T08:22:36.141288219Z stdout F handling current node
2019-06-03T08:21:29.411605039Z stdout F hostIP = 172.17.0.2
2019-06-03T08:21:29.41166972Z stdout F podIP = 172.17.0.2
2019-06-03T08:21:59.386443553Z stderr F panic: Get https://10.96.0.1:443/api/v1/nodes: dial tcp 10.96.0.1:443: i/o timeout
2019-06-03T08:21:59.386504187Z stderr F
2019-06-03T08:21:59.38651149Z stderr F goroutine 1 [running]:
2019-06-03T08:21:59.386516478Z stderr F main.main()
2019-06-03T08:21:59.386522002Z stderr F /src/main.go:84 +0x423
2019-06-03T08:22:00.245782896Z stdout F hostIP = 172.17.0.2
2019-06-03T08:22:00.245828179Z stdout F podIP = 172.17.0.2
2019-06-03T08:22:00.341275168Z stdout F Handling node with IP: 172.17.0.2
2019-06-03T08:22:00.341321003Z stdout F handling current node
2019-06-03T08:22:00.345266185Z stdout F Handling node with IP: 172.17.0.4
2019-06-03T08:22:00.345322972Z stdout F Node kind-worker has CIDR 10.244.2.0/24
2019-06-03T08:22:00.345330486Z stdout F Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 172.17.0.4 Flags: [] Table: 0}
2019-06-03T08:22:00.345373173Z stdout F Handling node with IP: 172.17.0.3
2019-06-03T08:22:00.345378766Z stdout F Node kind-worker2 has CIDR 10.244.1.0/24
2019-06-03T08:22:00.345384247Z stdout F Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.17.0.3 Flags: [] Table: 0}
2019-06-03T08:22:10.353119681Z stdout F Handling node with IP: 172.17.0.2
2019-06-03T08:22:10.353186894Z stdout F handling current node
2019-06-03T08:22:10.353195261Z stdout F Handling node with IP: 172.17.0.4
2019-06-03T08:22:10.353199845Z stdout F Node kind-worker has CIDR 10.244.2.0/24
2019-06-03T08:22:10.353270434Z stdout F Handling node with IP: 172.17.0.3
2019-06-03T08:22:10.353285527Z stdout F Node kind-worker2 has CIDR 10.244.1.0/24
2019-06-03T08:22:20.363294244Z stdout F Handling node with IP: 172.17.0.2
2019-06-03T08:22:20.363345122Z stdout F handling current node
2019-06-03T08:22:20.363352424Z stdout F Handling node with IP: 172.17.0.4
2019-06-03T08:22:20.363357365Z stdout F Node kind-worker has CIDR 10.244.2.0/24
2019-06-03T08:22:20.363542172Z stdout F Handling node with IP: 172.17.0.3
2019-06-03T08:22:20.363560051Z stdout F Node kind-worker2 has CIDR 10.244.1.0/24
2019-06-03T08:22:30.440535823Z stdout F Handling node with IP: 172.17.0.2
2019-06-03T08:22:30.440613539Z stdout F handling current node
2019-06-03T08:22:30.440625584Z stdout F Handling node with IP: 172.17.0.4
2019-06-03T08:22:30.440631099Z stdout F Node kind-worker has CIDR 10.244.2.0/24
2019-06-03T08:22:30.440665598Z stdout F Handling node with IP: 172.17.0.3
2019-06-03T08:22:30.440679339Z stdout F Node kind-worker2 has CIDR 10.244.1.0/24
2019-06-03T08:21:01.75819253Z stderr F Flag --insecure-port has been deprecated, This flag will be removed in a future version. | |
2019-06-03T08:21:01.758514635Z stderr F I0603 08:21:01.758421 1 server.go:559] external host was not specified, using 172.17.0.2 | |
2019-06-03T08:21:01.758700473Z stderr F I0603 08:21:01.758646 1 server.go:146] Version: v1.14.2 | |
2019-06-03T08:21:02.607892529Z stderr F I0603 08:21:02.607730 1 plugins.go:158] Loaded 9 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook. | |
2019-06-03T08:21:02.608022855Z stderr F I0603 08:21:02.607986 1 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota. | |
2019-06-03T08:21:02.609610282Z stderr F E0603 08:21:02.609527 1 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted | |
2019-06-03T08:21:02.609743847Z stderr F E0603 08:21:02.609696 1 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted | |
2019-06-03T08:21:02.609863098Z stderr F E0603 08:21:02.609830 1 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted | |
2019-06-03T08:21:02.609971505Z stderr F E0603 08:21:02.609939 1 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted | |
2019-06-03T08:21:02.610075548Z stderr F E0603 08:21:02.610026 1 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted | |
2019-06-03T08:21:02.610164872Z stderr F E0603 08:21:02.610118 1 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted | |
2019-06-03T08:21:02.61026243Z stderr F I0603 08:21:02.610217 1 plugins.go:158] Loaded 9 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook. | |
2019-06-03T08:21:02.610311197Z stderr F I0603 08:21:02.610288 1 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota. | |
2019-06-03T08:21:02.613354436Z stderr F I0603 08:21:02.613266 1 client.go:352] parsed scheme: "" | |
2019-06-03T08:21:02.613469776Z stderr F I0603 08:21:02.613421 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T08:21:02.61361749Z stderr F I0603 08:21:02.613577 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T08:21:02.613874368Z stderr F I0603 08:21:02.613836 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:02.615185543Z stderr F W0603 08:21:02.614957 1 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... | |
2019-06-03T08:21:03.602145131Z stderr F I0603 08:21:03.601884 1 client.go:352] parsed scheme: "" | |
2019-06-03T08:21:03.602190862Z stderr F I0603 08:21:03.601908 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T08:21:03.602198771Z stderr F I0603 08:21:03.601956 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T08:21:03.602216309Z stderr F I0603 08:21:03.602018 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:03.613822104Z stderr F I0603 08:21:03.613679 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:03.624146163Z stderr F I0603 08:21:03.624000 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:03.624545428Z stderr F I0603 08:21:03.624478 1 client.go:352] parsed scheme: "" | |
2019-06-03T08:21:03.624561332Z stderr F I0603 08:21:03.624497 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T08:21:03.624612729Z stderr F I0603 08:21:03.624537 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T08:21:03.624620343Z stderr F I0603 08:21:03.624581 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:03.635772507Z stderr F I0603 08:21:03.635634 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:03.661480342Z stderr F I0603 08:21:03.661310 1 master.go:233] Using reconciler: lease | |
2019-06-03T08:21:03.662003687Z stderr F I0603 08:21:03.661921 1 client.go:352] parsed scheme: "" | |
2019-06-03T08:21:03.662021679Z stderr F I0603 08:21:03.661941 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T08:21:03.662073417Z stderr F I0603 08:21:03.661983 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T08:21:03.662205633Z stderr F I0603 08:21:03.662162 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:03.673098325Z stderr F I0603 08:21:03.672982 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:03.675696192Z stderr F I0603 08:21:03.675589 1 client.go:352] parsed scheme: "" | |
2019-06-03T08:21:03.675728178Z stderr F I0603 08:21:03.675610 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T08:21:03.675797308Z stderr F I0603 08:21:03.675650 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T08:21:03.675806421Z stderr F I0603 08:21:03.675716 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:03.686029515Z stderr F I0603 08:21:03.685907 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:03.68706035Z stderr F I0603 08:21:03.686968 1 client.go:352] parsed scheme: "" | |
2019-06-03T08:21:03.687081158Z stderr F I0603 08:21:03.686988 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T08:21:03.687088726Z stderr F I0603 08:21:03.687027 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T08:21:03.687112855Z stderr F I0603 08:21:03.687094 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:03.699097745Z stderr F I0603 08:21:03.698972 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:03.699754995Z stderr F I0603 08:21:03.699669 1 client.go:352] parsed scheme: "" | |
2019-06-03T08:21:03.699772091Z stderr F I0603 08:21:03.699685 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T08:21:03.699833711Z stderr F I0603 08:21:03.699724 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T08:21:03.699841218Z stderr F I0603 08:21:03.699769 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:03.711236519Z stderr F I0603 08:21:03.711121 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:03.71193908Z stderr F I0603 08:21:03.711860 1 client.go:352] parsed scheme: "" | |
2019-06-03T08:21:03.712108592Z stderr F I0603 08:21:03.712068 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T08:21:03.712261738Z stderr F I0603 08:21:03.712208 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T08:21:03.712471801Z stderr F I0603 08:21:03.712415 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:03.725329816Z stderr F I0603 08:21:03.725165 1 client.go:352] parsed scheme: "" | |
2019-06-03T08:21:03.725371838Z stderr F I0603 08:21:03.725189 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T08:21:03.725379036Z stderr F I0603 08:21:03.725231 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T08:21:03.725384357Z stderr F I0603 08:21:03.725268 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:03.725630356Z stderr F I0603 08:21:03.725567 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:03.738338992Z stderr F I0603 08:21:03.738217 1 client.go:352] parsed scheme: "" | |
2019-06-03T08:21:03.738380128Z stderr F I0603 08:21:03.738243 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T08:21:03.738387975Z stderr F I0603 08:21:03.738284 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T08:21:03.738813595Z stderr F I0603 08:21:03.738433 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:03.739157875Z stderr F I0603 08:21:03.738905 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:03.752249226Z stderr F I0603 08:21:03.752100 1 client.go:352] parsed scheme: "" | |
2019-06-03T08:21:03.752291837Z stderr F I0603 08:21:03.752126 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T08:21:03.75422225Z stderr F I0603 08:21:03.754126 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:03.763281809Z stderr F I0603 08:21:03.756495 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T08:21:03.763517311Z stderr F I0603 08:21:03.763467 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:03.773338072Z stderr F I0603 08:21:03.773216 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:03.773881027Z stderr F I0603 08:21:03.773801 1 client.go:352] parsed scheme: "" | |
2019-06-03T08:21:03.773963834Z stderr F I0603 08:21:03.773927 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T08:21:03.774065405Z stderr F I0603 08:21:03.774030 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T08:21:03.77417286Z stderr F I0603 08:21:03.774138 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:03.786618382Z stderr F I0603 08:21:03.786482 1 client.go:352] parsed scheme: "" | |
2019-06-03T08:21:03.786661428Z stderr F I0603 08:21:03.786507 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T08:21:03.786715467Z stderr F I0603 08:21:03.786546 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T08:21:03.786781333Z stderr F I0603 08:21:03.786645 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:03.786963696Z stderr F I0603 08:21:03.786915 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:03.799772862Z stderr F I0603 08:21:03.799633 1 client.go:352] parsed scheme: "" | |
2019-06-03T08:21:03.799815178Z stderr F I0603 08:21:03.799657 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T08:21:03.799822785Z stderr F I0603 08:21:03.799707 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T08:21:03.800123195Z stderr F I0603 08:21:03.799835 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:03.800207994Z stderr F I0603 08:21:03.800149 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:03.812359362Z stderr F I0603 08:21:03.812217 1 client.go:352] parsed scheme: "" | |
2019-06-03T08:21:03.812412473Z stderr F I0603 08:21:03.812242 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T08:21:03.812419987Z stderr F I0603 08:21:03.812286 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T08:21:03.81247917Z stderr F I0603 08:21:03.812392 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:03.812670571Z stderr F I0603 08:21:03.812629 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:03.825348631Z stderr F I0603 08:21:03.825207 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:03.827034422Z stderr F I0603 08:21:03.826937 1 client.go:352] parsed scheme: "" | |
2019-06-03T08:21:03.827139762Z stderr F I0603 08:21:03.827102 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T08:21:03.827317516Z stderr F I0603 08:21:03.827235 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T08:21:03.827494632Z stderr F I0603 08:21:03.827450 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:03.83837641Z stderr F I0603 08:21:03.838237 1 client.go:352] parsed scheme: "" | |
2019-06-03T08:21:03.838428523Z stderr F I0603 08:21:03.838263 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T08:21:03.838436243Z stderr F I0603 08:21:03.838305 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T08:21:03.838523717Z stderr F I0603 08:21:03.838483 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:03.838853642Z stderr F I0603 08:21:03.838788 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:03.851890912Z stderr F I0603 08:21:03.851763 1 client.go:352] parsed scheme: "" | |
2019-06-03T08:21:03.851929691Z stderr F I0603 08:21:03.851787 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T08:21:03.851983375Z stderr F I0603 08:21:03.851918 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:03.852340661Z stderr F I0603 08:21:03.852278 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T08:21:03.852379747Z stderr F I0603 08:21:03.852337 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:03.864095743Z stderr F I0603 08:21:03.863966 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:03.86536711Z stderr F I0603 08:21:03.865274 1 client.go:352] parsed scheme: "" | |
2019-06-03T08:21:03.865503746Z stderr F I0603 08:21:03.865452 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T08:21:03.865774472Z stderr F I0603 08:21:03.865705 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T08:21:03.86596842Z stderr F I0603 08:21:03.865918 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:03.876084423Z stderr F I0603 08:21:03.875958 1 client.go:352] parsed scheme: "" | |
2019-06-03T08:21:03.876116476Z stderr F I0603 08:21:03.875985 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T08:21:03.876169478Z stderr F I0603 08:21:03.876065 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T08:21:03.876291201Z stderr F I0603 08:21:03.876245 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:03.876433048Z stderr F I0603 08:21:03.876374 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:03.886656369Z stderr F I0603 08:21:03.886526 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:03.887275659Z stderr F I0603 08:21:03.887178 1 client.go:352] parsed scheme: "" | |
2019-06-03T08:21:03.887293289Z stderr F I0603 08:21:03.887196 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T08:21:03.887349569Z stderr F I0603 08:21:03.887235 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T08:21:03.887369529Z stderr F I0603 08:21:03.887301 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:03.899601525Z stderr F I0603 08:21:03.899472 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:03.993556895Z stderr F I0603 08:21:03.993409 1 client.go:352] parsed scheme: "" | |
2019-06-03T08:21:03.99359931Z stderr F I0603 08:21:03.993435 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T08:21:03.993606665Z stderr F I0603 08:21:03.993477 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T08:21:03.993663459Z stderr F I0603 08:21:03.993524 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:04.005330586Z stderr F I0603 08:21:04.005190 1 client.go:352] parsed scheme: "" | |
2019-06-03T08:21:04.00537102Z stderr F I0603 08:21:04.005213 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T08:21:04.005379005Z stderr F I0603 08:21:04.005252 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T08:21:04.005469335Z stderr F I0603 08:21:04.005418 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:04.005753752Z stderr F I0603 08:21:04.005689 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:04.018330308Z stderr F I0603 08:21:04.018201 1 client.go:352] parsed scheme: "" | |
2019-06-03T08:21:04.018372959Z stderr F I0603 08:21:04.018225 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T08:21:04.018380768Z stderr F I0603 08:21:04.018266 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T08:21:04.018447993Z stderr F I0603 08:21:04.018369 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:04.018659688Z stderr F I0603 08:21:04.018609 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:04.03023966Z stderr F I0603 08:21:04.030109 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:04.031072523Z stderr F I0603 08:21:04.030969 1 client.go:352] parsed scheme: "" | |
2019-06-03T08:21:04.031093644Z stderr F I0603 08:21:04.030987 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T08:21:04.031100715Z stderr F I0603 08:21:04.031041 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T08:21:04.031383583Z stderr F I0603 08:21:04.031312 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:04.042963934Z stderr F I0603 08:21:04.042814 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:04.043695034Z stderr F I0603 08:21:04.043604 1 client.go:352] parsed scheme: "" | |
2019-06-03T08:21:04.043748069Z stderr F I0603 08:21:04.043622 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T08:21:04.043773174Z stderr F I0603 08:21:04.043661 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T08:21:04.0440007Z stderr F I0603 08:21:04.043948 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:04.055201495Z stderr F I0603 08:21:04.055077 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:04.055957524Z stderr F I0603 08:21:04.055844 1 client.go:352] parsed scheme: "" | |
2019-06-03T08:21:04.056027748Z stderr F I0603 08:21:04.055862 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T08:21:04.056035374Z stderr F I0603 08:21:04.055896 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T08:21:04.056048855Z stderr F I0603 08:21:04.055940 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:04.066857414Z stderr F I0603 08:21:04.066662 1 client.go:352] parsed scheme: "" | |
2019-06-03T08:21:04.067060857Z stderr F I0603 08:21:04.066998 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T08:21:04.067203741Z stderr F I0603 08:21:04.067146 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T08:21:04.067313828Z stderr F I0603 08:21:04.066888 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:04.06736505Z stderr F I0603 08:21:04.067305 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:04.07972825Z stderr F I0603 08:21:04.079598 1 client.go:352] parsed scheme: "" | |
2019-06-03T08:21:04.079759063Z stderr F I0603 08:21:04.079622 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T08:21:04.079766522Z stderr F I0603 08:21:04.079690 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T08:21:04.079927483Z stderr F I0603 08:21:04.079859 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:04.080203357Z stderr F I0603 08:21:04.080132 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:04.092706302Z stderr F I0603 08:21:04.092510 1 client.go:352] parsed scheme: "" | |
2019-06-03T08:21:04.092758358Z stderr F I0603 08:21:04.092535 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T08:21:04.092765623Z stderr F I0603 08:21:04.092578 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T08:21:04.09280465Z stderr F I0603 08:21:04.092699 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:04.093079509Z stderr F I0603 08:21:04.092952 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:04.10609637Z stderr F I0603 08:21:04.105883 1 client.go:352] parsed scheme: "" | |
2019-06-03T08:21:04.106131202Z stderr F I0603 08:21:04.105906 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T08:21:04.106138636Z stderr F I0603 08:21:04.105943 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T08:21:04.106174619Z stderr F I0603 08:21:04.106017 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:04.106306806Z stderr F I0603 08:21:04.106221 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:04.118268342Z stderr F I0603 08:21:04.118120 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:04.119144748Z stderr F I0603 08:21:04.119024 1 client.go:352] parsed scheme: "" | |
2019-06-03T08:21:04.119300041Z stderr F I0603 08:21:04.119219 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T08:21:04.119485509Z stderr F I0603 08:21:04.119410 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T08:21:04.119657433Z stderr F I0603 08:21:04.119584 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:04.131425749Z stderr F I0603 08:21:04.131287 1 client.go:352] parsed scheme: "" | |
2019-06-03T08:21:04.131457733Z stderr F I0603 08:21:04.131311 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T08:21:04.131474838Z stderr F I0603 08:21:04.131371 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T08:21:04.131625015Z stderr F I0603 08:21:04.131547 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:04.131864761Z stderr F I0603 08:21:04.131778 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:04.144435914Z stderr F I0603 08:21:04.144245 1 client.go:352] parsed scheme: "" | |
2019-06-03T08:21:04.144522235Z stderr F I0603 08:21:04.144270 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T08:21:04.144531893Z stderr F I0603 08:21:04.144311 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T08:21:04.144588721Z stderr F I0603 08:21:04.144418 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:04.14480942Z stderr F I0603 08:21:04.144752 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:04.156564016Z stderr F I0603 08:21:04.156441 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:04.157531998Z stderr F I0603 08:21:04.157462 1 client.go:352] parsed scheme: "" | |
2019-06-03T08:21:04.157611872Z stderr F I0603 08:21:04.157575 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T08:21:04.157730614Z stderr F I0603 08:21:04.157694 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T08:21:04.157862441Z stderr F I0603 08:21:04.157814 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}] | |
2019-06-03T08:21:04.170149811Z stderr F I0603 08:21:04.169967 1 client.go:352] parsed scheme: "" | |
2019-06-03T08:21:04.170229307Z stderr F I0603 08:21:04.169990 1 client.go:352] scheme "" not registered, fallback to default scheme | |
2019-06-03T08:21:04.170238518Z stderr F I0603 08:21:04.170029 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}] | |
2019-06-03T08:21:04.170316241Z stderr F I0603 08:21:04.170126       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.170469988Z stderr F I0603 08:21:04.170417       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.186200041Z stderr F I0603 08:21:04.186064       1 client.go:352] parsed scheme: ""
2019-06-03T08:21:04.186231273Z stderr F I0603 08:21:04.186089       1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T08:21:04.186239192Z stderr F I0603 08:21:04.186152       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T08:21:04.186299588Z stderr F I0603 08:21:04.186246       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.186643382Z stderr F I0603 08:21:04.186547       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.197567446Z stderr F I0603 08:21:04.197394       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.198562368Z stderr F I0603 08:21:04.198478       1 client.go:352] parsed scheme: ""
2019-06-03T08:21:04.19907359Z stderr F I0603 08:21:04.198997       1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T08:21:04.199291604Z stderr F I0603 08:21:04.199177       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T08:21:04.199449149Z stderr F I0603 08:21:04.199407       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.209625602Z stderr F I0603 08:21:04.209469       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.210177992Z stderr F I0603 08:21:04.210076       1 client.go:352] parsed scheme: ""
2019-06-03T08:21:04.210194721Z stderr F I0603 08:21:04.210125       1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T08:21:04.210216968Z stderr F I0603 08:21:04.210186       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T08:21:04.210546021Z stderr F I0603 08:21:04.210461       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.230092837Z stderr F I0603 08:21:04.229906       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.230397347Z stderr F I0603 08:21:04.230319       1 client.go:352] parsed scheme: ""
2019-06-03T08:21:04.230491369Z stderr F I0603 08:21:04.230453       1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T08:21:04.23062675Z stderr F I0603 08:21:04.230583       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T08:21:04.230867393Z stderr F I0603 08:21:04.230813       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.244237594Z stderr F I0603 08:21:04.244112       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.244899812Z stderr F I0603 08:21:04.244789       1 client.go:352] parsed scheme: ""
2019-06-03T08:21:04.24513198Z stderr F I0603 08:21:04.245044       1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T08:21:04.245317089Z stderr F I0603 08:21:04.245236       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T08:21:04.245532114Z stderr F I0603 08:21:04.245458       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.256139682Z stderr F I0603 08:21:04.256027       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.256821177Z stderr F I0603 08:21:04.256737       1 client.go:352] parsed scheme: ""
2019-06-03T08:21:04.256840132Z stderr F I0603 08:21:04.256755       1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T08:21:04.256882379Z stderr F I0603 08:21:04.256818       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T08:21:04.256942089Z stderr F I0603 08:21:04.256915       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.267885142Z stderr F I0603 08:21:04.267716       1 client.go:352] parsed scheme: ""
2019-06-03T08:21:04.267914453Z stderr F I0603 08:21:04.267740       1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T08:21:04.267921435Z stderr F I0603 08:21:04.267780       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T08:21:04.267976128Z stderr F I0603 08:21:04.267916       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.268334572Z stderr F I0603 08:21:04.268272       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.280124787Z stderr F I0603 08:21:04.279956       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.280476396Z stderr F I0603 08:21:04.280409       1 client.go:352] parsed scheme: ""
2019-06-03T08:21:04.280548576Z stderr F I0603 08:21:04.280517       1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T08:21:04.280641447Z stderr F I0603 08:21:04.280611       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T08:21:04.280751103Z stderr F I0603 08:21:04.280713       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.292601995Z stderr F I0603 08:21:04.292481       1 client.go:352] parsed scheme: ""
2019-06-03T08:21:04.292630395Z stderr F I0603 08:21:04.292506       1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T08:21:04.292647077Z stderr F I0603 08:21:04.292566       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T08:21:04.292797377Z stderr F I0603 08:21:04.292730       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.293051515Z stderr F I0603 08:21:04.292969       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.305040856Z stderr F I0603 08:21:04.304897       1 client.go:352] parsed scheme: ""
2019-06-03T08:21:04.305096102Z stderr F I0603 08:21:04.304984       1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T08:21:04.305154244Z stderr F I0603 08:21:04.305037       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T08:21:04.305188811Z stderr F I0603 08:21:04.305136       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.305427615Z stderr F I0603 08:21:04.305349       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.316882937Z stderr F I0603 08:21:04.316765       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.317780337Z stderr F I0603 08:21:04.317671       1 client.go:352] parsed scheme: ""
2019-06-03T08:21:04.317796853Z stderr F I0603 08:21:04.317729       1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T08:21:04.317868575Z stderr F I0603 08:21:04.317785       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T08:21:04.317891045Z stderr F I0603 08:21:04.317832       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.330459693Z stderr F I0603 08:21:04.330287       1 client.go:352] parsed scheme: ""
2019-06-03T08:21:04.330582986Z stderr F I0603 08:21:04.330517       1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T08:21:04.330625097Z stderr F I0603 08:21:04.330586       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T08:21:04.330825251Z stderr F I0603 08:21:04.330753       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.331042297Z stderr F I0603 08:21:04.330975       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.343099968Z stderr F I0603 08:21:04.342951       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.343820802Z stderr F I0603 08:21:04.343725       1 client.go:352] parsed scheme: ""
2019-06-03T08:21:04.343942576Z stderr F I0603 08:21:04.343884       1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T08:21:04.34407841Z stderr F I0603 08:21:04.344026       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T08:21:04.344211704Z stderr F I0603 08:21:04.344163       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.366761242Z stderr F I0603 08:21:04.366525       1 client.go:352] parsed scheme: ""
2019-06-03T08:21:04.366791414Z stderr F I0603 08:21:04.366550       1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T08:21:04.366798238Z stderr F I0603 08:21:04.366592       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T08:21:04.366902746Z stderr F I0603 08:21:04.366845       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.367160854Z stderr F I0603 08:21:04.367064       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.377627372Z stderr F I0603 08:21:04.377498       1 client.go:352] parsed scheme: ""
2019-06-03T08:21:04.377659539Z stderr F I0603 08:21:04.377522       1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T08:21:04.377704579Z stderr F I0603 08:21:04.377590       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T08:21:04.377880765Z stderr F I0603 08:21:04.377839       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.378230457Z stderr F I0603 08:21:04.378142       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.390172143Z stderr F I0603 08:21:04.390020       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.391409247Z stderr F I0603 08:21:04.391295       1 client.go:352] parsed scheme: ""
2019-06-03T08:21:04.391545433Z stderr F I0603 08:21:04.391498       1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T08:21:04.391725814Z stderr F I0603 08:21:04.391661       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T08:21:04.391881275Z stderr F I0603 08:21:04.391816       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.40404281Z stderr F I0603 08:21:04.403891       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.404697508Z stderr F I0603 08:21:04.404617       1 client.go:352] parsed scheme: ""
2019-06-03T08:21:04.404790222Z stderr F I0603 08:21:04.404763       1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T08:21:04.404937335Z stderr F I0603 08:21:04.404887       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T08:21:04.405064623Z stderr F I0603 08:21:04.405020       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.415705679Z stderr F I0603 08:21:04.415503       1 client.go:352] parsed scheme: ""
2019-06-03T08:21:04.415757017Z stderr F I0603 08:21:04.415528       1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T08:21:04.415763128Z stderr F I0603 08:21:04.415564       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T08:21:04.415778364Z stderr F I0603 08:21:04.415654       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.416001965Z stderr F I0603 08:21:04.415919       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.42845205Z stderr F I0603 08:21:04.428309       1 client.go:352] parsed scheme: ""
2019-06-03T08:21:04.428483285Z stderr F I0603 08:21:04.428332       1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T08:21:04.428568853Z stderr F I0603 08:21:04.428525       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.428954753Z stderr F I0603 08:21:04.428872       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T08:21:04.429027736Z stderr F I0603 08:21:04.428949       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.442202598Z stderr F I0603 08:21:04.442023       1 client.go:352] parsed scheme: ""
2019-06-03T08:21:04.442249238Z stderr F I0603 08:21:04.442047       1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T08:21:04.442264124Z stderr F I0603 08:21:04.442087       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T08:21:04.442359297Z stderr F I0603 08:21:04.442193       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.442511527Z stderr F I0603 08:21:04.442428       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.463559688Z stderr F I0603 08:21:04.463429       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.464189241Z stderr F I0603 08:21:04.464083       1 client.go:352] parsed scheme: ""
2019-06-03T08:21:04.464207073Z stderr F I0603 08:21:04.464103       1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T08:21:04.464245306Z stderr F I0603 08:21:04.464193       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T08:21:04.464321257Z stderr F I0603 08:21:04.464283       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.475967319Z stderr F I0603 08:21:04.475795       1 client.go:352] parsed scheme: ""
2019-06-03T08:21:04.475998063Z stderr F I0603 08:21:04.475819       1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T08:21:04.476005378Z stderr F I0603 08:21:04.475858       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T08:21:04.476128183Z stderr F I0603 08:21:04.476058       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.476435163Z stderr F I0603 08:21:04.476365       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.48680551Z stderr F I0603 08:21:04.486667       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.488212638Z stderr F I0603 08:21:04.488132       1 client.go:352] parsed scheme: ""
2019-06-03T08:21:04.488230112Z stderr F I0603 08:21:04.488150       1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T08:21:04.488358807Z stderr F I0603 08:21:04.488305       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T08:21:04.488481326Z stderr F I0603 08:21:04.488420       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.510918147Z stderr F I0603 08:21:04.510732       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.511337992Z stderr F I0603 08:21:04.511263       1 client.go:352] parsed scheme: ""
2019-06-03T08:21:04.511432779Z stderr F I0603 08:21:04.511395       1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T08:21:04.511536604Z stderr F I0603 08:21:04.511501       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T08:21:04.511694013Z stderr F I0603 08:21:04.511633       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.522222243Z stderr F I0603 08:21:04.522099       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.522992023Z stderr F I0603 08:21:04.522912       1 client.go:352] parsed scheme: ""
2019-06-03T08:21:04.523087831Z stderr F I0603 08:21:04.523051       1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T08:21:04.523197541Z stderr F I0603 08:21:04.523153       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T08:21:04.523315739Z stderr F I0603 08:21:04.523276       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.53573199Z stderr F I0603 08:21:04.535537       1 client.go:352] parsed scheme: ""
2019-06-03T08:21:04.535792063Z stderr F I0603 08:21:04.535563       1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T08:21:04.535800168Z stderr F I0603 08:21:04.535603       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T08:21:04.535880307Z stderr F I0603 08:21:04.535714       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.536072227Z stderr F I0603 08:21:04.535977       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.548558167Z stderr F I0603 08:21:04.548381       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.549153205Z stderr F I0603 08:21:04.549015       1 client.go:352] parsed scheme: ""
2019-06-03T08:21:04.549180539Z stderr F I0603 08:21:04.549036       1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T08:21:04.549187614Z stderr F I0603 08:21:04.549076       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T08:21:04.549275635Z stderr F I0603 08:21:04.549119       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.561538689Z stderr F I0603 08:21:04.561295       1 client.go:352] parsed scheme: ""
2019-06-03T08:21:04.561647697Z stderr F I0603 08:21:04.561319       1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T08:21:04.561684684Z stderr F I0603 08:21:04.561357       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T08:21:04.561690413Z stderr F I0603 08:21:04.561474       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.561778803Z stderr F I0603 08:21:04.561733       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.573056435Z stderr F I0603 08:21:04.572924       1 client.go:352] parsed scheme: ""
2019-06-03T08:21:04.573089458Z stderr F I0603 08:21:04.572950       1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T08:21:04.57314055Z stderr F I0603 08:21:04.573037       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T08:21:04.573231341Z stderr F I0603 08:21:04.573193       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.573565431Z stderr F I0603 08:21:04.573428       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.585934126Z stderr F I0603 08:21:04.585797       1 client.go:352] parsed scheme: ""
2019-06-03T08:21:04.585968171Z stderr F I0603 08:21:04.585823       1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T08:21:04.585976119Z stderr F I0603 08:21:04.585884       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T08:21:04.586097308Z stderr F I0603 08:21:04.585984       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.586448547Z stderr F I0603 08:21:04.586345       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.598953956Z stderr F I0603 08:21:04.598770       1 client.go:352] parsed scheme: ""
2019-06-03T08:21:04.599055595Z stderr F I0603 08:21:04.598796       1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T08:21:04.599085457Z stderr F I0603 08:21:04.598836       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T08:21:04.599110726Z stderr F I0603 08:21:04.598927       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.599229461Z stderr F I0603 08:21:04.599144       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.612348622Z stderr F I0603 08:21:04.612224       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.61371581Z stderr F I0603 08:21:04.613562       1 client.go:352] parsed scheme: ""
2019-06-03T08:21:04.613849487Z stderr F I0603 08:21:04.613768       1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T08:21:04.613962807Z stderr F I0603 08:21:04.613910       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T08:21:04.614111358Z stderr F I0603 08:21:04.614050       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.626223212Z stderr F I0603 08:21:04.626026       1 client.go:352] parsed scheme: ""
2019-06-03T08:21:04.626274231Z stderr F I0603 08:21:04.626052       1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T08:21:04.626282759Z stderr F I0603 08:21:04.626096       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T08:21:04.626307736Z stderr F I0603 08:21:04.626174       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.626614941Z stderr F I0603 08:21:04.626499       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.638744394Z stderr F I0603 08:21:04.638551       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.63943573Z stderr F I0603 08:21:04.639331       1 client.go:352] parsed scheme: ""
2019-06-03T08:21:04.639511028Z stderr F I0603 08:21:04.639474       1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T08:21:04.639683563Z stderr F I0603 08:21:04.639582       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T08:21:04.639701769Z stderr F I0603 08:21:04.639631       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.651653172Z stderr F I0603 08:21:04.651526       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.652419331Z stderr F I0603 08:21:04.652330       1 client.go:352] parsed scheme: ""
2019-06-03T08:21:04.652437813Z stderr F I0603 08:21:04.652345       1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T08:21:04.652483413Z stderr F I0603 08:21:04.652427       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T08:21:04.652538369Z stderr F I0603 08:21:04.652508       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.663091222Z stderr F I0603 08:21:04.662939       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.664905129Z stderr F I0603 08:21:04.664760       1 client.go:352] parsed scheme: ""
2019-06-03T08:21:04.664928267Z stderr F I0603 08:21:04.664848       1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T08:21:04.66498135Z stderr F I0603 08:21:04.664923       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T08:21:04.665016831Z stderr F I0603 08:21:04.664982       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.676942364Z stderr F I0603 08:21:04.676790       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.677908034Z stderr F I0603 08:21:04.677803       1 client.go:352] parsed scheme: ""
2019-06-03T08:21:04.677926492Z stderr F I0603 08:21:04.677859       1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T08:21:04.67800083Z stderr F I0603 08:21:04.677915       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T08:21:04.678009888Z stderr F I0603 08:21:04.677961       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.68873269Z stderr F I0603 08:21:04.688615       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:04.837146484Z stderr F W0603 08:21:04.836976       1 genericapiserver.go:344] Skipping API batch/v2alpha1 because it has no resources.
2019-06-03T08:21:04.843559533Z stderr F W0603 08:21:04.843379       1 genericapiserver.go:344] Skipping API node.k8s.io/v1alpha1 because it has no resources.
2019-06-03T08:21:04.846580035Z stderr F W0603 08:21:04.846415       1 genericapiserver.go:344] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
2019-06-03T08:21:04.847303329Z stderr F W0603 08:21:04.847185       1 genericapiserver.go:344] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
2019-06-03T08:21:04.849018466Z stderr F W0603 08:21:04.848899       1 genericapiserver.go:344] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
2019-06-03T08:21:05.765012835Z stderr F E0603 08:21:05.764821       1 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
2019-06-03T08:21:05.765081746Z stderr F E0603 08:21:05.764871       1 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
2019-06-03T08:21:05.765088421Z stderr F E0603 08:21:05.764950       1 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
2019-06-03T08:21:05.7651794Z stderr F E0603 08:21:05.765030       1 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
2019-06-03T08:21:05.765190377Z stderr F E0603 08:21:05.765090       1 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
2019-06-03T08:21:05.765203802Z stderr F E0603 08:21:05.765110       1 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
2019-06-03T08:21:05.765251794Z stderr F I0603 08:21:05.765180       1 plugins.go:158] Loaded 9 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.
2019-06-03T08:21:05.765266094Z stderr F I0603 08:21:05.765189       1 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
2019-06-03T08:21:05.767497795Z stderr F I0603 08:21:05.767360       1 client.go:352] parsed scheme: ""
2019-06-03T08:21:05.767517854Z stderr F I0603 08:21:05.767388       1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T08:21:05.767524555Z stderr F I0603 08:21:05.767454       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T08:21:05.767639083Z stderr F I0603 08:21:05.767576       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:05.779094041Z stderr F I0603 08:21:05.778964       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:05.779965002Z stderr F I0603 08:21:05.779854       1 client.go:352] parsed scheme: ""
2019-06-03T08:21:05.780007818Z stderr F I0603 08:21:05.779908       1 client.go:352] scheme "" not registered, fallback to default scheme
2019-06-03T08:21:05.780035497Z stderr F I0603 08:21:05.779989       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 <nil>}]
2019-06-03T08:21:05.780199466Z stderr F I0603 08:21:05.780112       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:05.799149182Z stderr F I0603 08:21:05.798945       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
2019-06-03T08:21:07.395807689Z stderr F I0603 08:21:07.395624       1 secure_serving.go:116] Serving securely on [::]:6443
2019-06-03T08:21:07.395877659Z stderr F I0603 08:21:07.395753       1 autoregister_controller.go:139] Starting autoregister controller
2019-06-03T08:21:07.395887273Z stderr F I0603 08:21:07.395763       1 cache.go:32] Waiting for caches to sync for autoregister controller
2019-06-03T08:21:07.400126419Z stderr F I0603 08:21:07.399985       1 available_controller.go:320] Starting AvailableConditionController
2019-06-03T08:21:07.400250172Z stderr F I0603 08:21:07.400215       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
2019-06-03T08:21:07.400434136Z stderr F I0603 08:21:07.400393       1 controller.go:81] Starting OpenAPI AggregationController
2019-06-03T08:21:07.406474786Z stderr F I0603 08:21:07.401714       1 crd_finalizer.go:242] Starting CRDFinalizer
2019-06-03T08:21:07.406505277Z stderr F I0603 08:21:07.401754       1 apiservice_controller.go:94] Starting APIServiceRegistrationController
2019-06-03T08:21:07.406511421Z stderr F I0603 08:21:07.401768       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
2019-06-03T08:21:07.406516721Z stderr F I0603 08:21:07.401787       1 crdregistration_controller.go:112] Starting crd-autoregister controller
2019-06-03T08:21:07.406553782Z stderr F I0603 08:21:07.401794       1 controller_utils.go:1027] Waiting for caches to sync for crd-autoregister controller
2019-06-03T08:21:07.514800394Z stderr F I0603 08:21:07.509605       1 customresource_discovery_controller.go:208] Starting DiscoveryController
2019-06-03T08:21:07.514835282Z stderr F I0603 08:21:07.509666       1 naming_controller.go:284] Starting NamingConditionController
2019-06-03T08:21:07.514840508Z stderr F I0603 08:21:07.509681       1 establishing_controller.go:73] Starting EstablishingController
2019-06-03T08:21:07.617403771Z stderr F E0603 08:21:07.617253       1 controller.go:148] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.17.0.2, ResourceVersion: 0, AdditionalErrorMsg:
2019-06-03T08:21:07.696085017Z stderr F I0603 08:21:07.695912       1 cache.go:39] Caches are synced for autoregister controller
2019-06-03T08:21:07.700562677Z stderr F I0603 08:21:07.700408       1 cache.go:39] Caches are synced for AvailableConditionController controller
2019-06-03T08:21:07.701993511Z stderr F I0603 08:21:07.701865       1 controller_utils.go:1034] Caches are synced for crd-autoregister controller
2019-06-03T08:21:07.702011392Z stderr F I0603 08:21:07.701908       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
2019-06-03T08:21:08.393795799Z stderr F I0603 08:21:08.393591       1 controller.go:107] OpenAPI AggregationController: Processing item
2019-06-03T08:21:08.393836281Z stderr F I0603 08:21:08.393634       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
2019-06-03T08:21:08.393844641Z stderr F I0603 08:21:08.393649       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
2019-06-03T08:21:08.42227488Z stderr F I0603 08:21:08.422087       1 storage_scheduling.go:113] created PriorityClass system-node-critical with value 2000001000
2019-06-03T08:21:08.429855178Z stderr F I0603 08:21:08.429669       1 storage_scheduling.go:113] created PriorityClass system-cluster-critical with value 2000000000
2019-06-03T08:21:08.429895403Z stderr F I0603 08:21:08.429716       1 storage_scheduling.go:122] all system priority classes are created successfully or already exist.
2019-06-03T08:21:08.448775467Z stderr F I0603 08:21:08.448589       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/cluster-admin
2019-06-03T08:21:08.46289808Z stderr F I0603 08:21:08.461464       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:discovery
2019-06-03T08:21:08.467712032Z stderr F I0603 08:21:08.467530       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:basic-user
2019-06-03T08:21:08.474712988Z stderr F I0603 08:21:08.473873       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
2019-06-03T08:21:08.489543209Z stderr F I0603 08:21:08.489366       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/admin
2019-06-03T08:21:08.494287351Z stderr F I0603 08:21:08.494104       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/edit
2019-06-03T08:21:08.498588939Z stderr F I0603 08:21:08.498417       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/view
2019-06-03T08:21:08.511871412Z stderr F I0603 08:21:08.511724       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
2019-06-03T08:21:08.5165423Z stderr F I0603 08:21:08.516407       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
2019-06-03T08:21:08.52095769Z stderr F I0603 08:21:08.520827       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
2019-06-03T08:21:08.524778329Z stderr F I0603 08:21:08.524647       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:heapster
2019-06-03T08:21:08.528725806Z stderr F I0603 08:21:08.528604       1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node
2019-06-03T08:21:08.532664598Z stderr F I0603 08:21:08.532486 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector | |
2019-06-03T08:21:08.535931373Z stderr F I0603 08:21:08.535805 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-proxier | |
2019-06-03T08:21:08.539339355Z stderr F I0603 08:21:08.539169 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin | |
2019-06-03T08:21:08.542743242Z stderr F I0603 08:21:08.542542 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper | |
2019-06-03T08:21:08.545638314Z stderr F I0603 08:21:08.545472 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator | |
2019-06-03T08:21:08.550170859Z stderr F I0603 08:21:08.550037 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator | |
2019-06-03T08:21:08.554168255Z stderr F I0603 08:21:08.553971 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager | |
2019-06-03T08:21:08.558349184Z stderr F I0603 08:21:08.558134 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler | |
2019-06-03T08:21:08.561731829Z stderr F I0603 08:21:08.561533 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:kube-dns | |
2019-06-03T08:21:08.565458728Z stderr F I0603 08:21:08.565288 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner | |
2019-06-03T08:21:08.5685387Z stderr F I0603 08:21:08.568361 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher | |
2019-06-03T08:21:08.571757059Z stderr F I0603 08:21:08.571610 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:aws-cloud-provider | |
2019-06-03T08:21:08.575390646Z stderr F I0603 08:21:08.575255 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient | |
2019-06-03T08:21:08.578971824Z stderr F I0603 08:21:08.578853 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient | |
2019-06-03T08:21:08.583696733Z stderr F I0603 08:21:08.583522 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler | |
2019-06-03T08:21:08.589907354Z stderr F I0603 08:21:08.589725 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner | |
2019-06-03T08:21:08.594147626Z stderr F I0603 08:21:08.593990 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller | |
2019-06-03T08:21:08.597678158Z stderr F I0603 08:21:08.597544 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller | |
2019-06-03T08:21:08.601362932Z stderr F I0603 08:21:08.601199 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller | |
2019-06-03T08:21:08.605007984Z stderr F I0603 08:21:08.604880 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller | |
2019-06-03T08:21:08.60954644Z stderr F I0603 08:21:08.609435 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller | |
2019-06-03T08:21:08.612878254Z stderr F I0603 08:21:08.612771 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller | |
2019-06-03T08:21:08.618185814Z stderr F I0603 08:21:08.618095 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller | |
2019-06-03T08:21:08.62279661Z stderr F I0603 08:21:08.622628 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller | |
2019-06-03T08:21:08.62685463Z stderr F I0603 08:21:08.626644 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector | |
2019-06-03T08:21:08.630017032Z stderr F I0603 08:21:08.629909 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler | |
2019-06-03T08:21:08.63379706Z stderr F I0603 08:21:08.633686 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller | |
2019-06-03T08:21:08.637646143Z stderr F I0603 08:21:08.637556 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller | |
2019-06-03T08:21:08.641533174Z stderr F I0603 08:21:08.641447 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller | |
2019-06-03T08:21:08.645714039Z stderr F I0603 08:21:08.645548 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder | |
2019-06-03T08:21:08.650772718Z stderr F I0603 08:21:08.650653 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector | |
2019-06-03T08:21:08.656460743Z stderr F I0603 08:21:08.656345 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller | |
2019-06-03T08:21:08.66081279Z stderr F I0603 08:21:08.660705 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller | |
2019-06-03T08:21:08.664442411Z stderr F I0603 08:21:08.664312 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller | |
2019-06-03T08:21:08.667964303Z stderr F I0603 08:21:08.667825 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller | |
2019-06-03T08:21:08.671350754Z stderr F I0603 08:21:08.671223 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller | |
2019-06-03T08:21:08.674784544Z stderr F I0603 08:21:08.674619 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller | |
2019-06-03T08:21:08.67895624Z stderr F I0603 08:21:08.678855 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller | |
2019-06-03T08:21:08.682136851Z stderr F I0603 08:21:08.681995 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller | |
2019-06-03T08:21:08.685515939Z stderr F I0603 08:21:08.685398 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller | |
2019-06-03T08:21:08.710744195Z stderr F I0603 08:21:08.710541 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller | |
2019-06-03T08:21:08.750190866Z stderr F I0603 08:21:08.750014 1 storage_rbac.go:195] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller | |
2019-06-03T08:21:08.790362356Z stderr F I0603 08:21:08.790120 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin | |
2019-06-03T08:21:08.830451436Z stderr F I0603 08:21:08.830301 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery | |
2019-06-03T08:21:08.869866637Z stderr F I0603 08:21:08.869680 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user | |
2019-06-03T08:21:08.910405973Z stderr F I0603 08:21:08.910207 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer | |
2019-06-03T08:21:08.950168301Z stderr F I0603 08:21:08.950036 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier | |
2019-06-03T08:21:08.990241244Z stderr F I0603 08:21:08.990071 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager | |
2019-06-03T08:21:09.030572018Z stderr F I0603 08:21:09.030424 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns | |
2019-06-03T08:21:09.070491034Z stderr F I0603 08:21:09.070338 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler | |
2019-06-03T08:21:09.110872878Z stderr F I0603 08:21:09.110731 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:aws-cloud-provider | |
2019-06-03T08:21:09.15001627Z stderr F I0603 08:21:09.149871 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler | |
2019-06-03T08:21:09.189938222Z stderr F I0603 08:21:09.189802 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:node | |
2019-06-03T08:21:09.230453905Z stderr F I0603 08:21:09.230285 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller | |
2019-06-03T08:21:09.269975511Z stderr F I0603 08:21:09.269805 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller | |
2019-06-03T08:21:09.311113895Z stderr F I0603 08:21:09.310961 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller | |
2019-06-03T08:21:09.350107868Z stderr F I0603 08:21:09.349968 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller | |
2019-06-03T08:21:09.390279022Z stderr F I0603 08:21:09.390127 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller | |
2019-06-03T08:21:09.430011347Z stderr F I0603 08:21:09.429851 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller | |
2019-06-03T08:21:09.470219966Z stderr F I0603 08:21:09.470069 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller | |
2019-06-03T08:21:09.510825016Z stderr F I0603 08:21:09.510618 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller | |
2019-06-03T08:21:09.550459798Z stderr F I0603 08:21:09.550328 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector | |
2019-06-03T08:21:09.593670222Z stderr F I0603 08:21:09.593497 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler | |
2019-06-03T08:21:09.629881339Z stderr F I0603 08:21:09.629698 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller | |
2019-06-03T08:21:09.669801433Z stderr F I0603 08:21:09.669619 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller | |
2019-06-03T08:21:09.710466522Z stderr F I0603 08:21:09.710312 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller | |
2019-06-03T08:21:09.749930643Z stderr F I0603 08:21:09.749778 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder | |
2019-06-03T08:21:09.790238126Z stderr F I0603 08:21:09.790101 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector | |
2019-06-03T08:21:09.83055341Z stderr F I0603 08:21:09.830379 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller | |
2019-06-03T08:21:09.869955718Z stderr F I0603 08:21:09.869803 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller | |
2019-06-03T08:21:09.911072027Z stderr F I0603 08:21:09.910932 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller | |
2019-06-03T08:21:09.949930868Z stderr F I0603 08:21:09.949776 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller | |
2019-06-03T08:21:09.990228617Z stderr F I0603 08:21:09.990055 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller | |
2019-06-03T08:21:10.030181995Z stderr F I0603 08:21:10.029981 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller | |
2019-06-03T08:21:10.070644513Z stderr F I0603 08:21:10.070496 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller | |
2019-06-03T08:21:10.110387223Z stderr F I0603 08:21:10.110212 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller | |
2019-06-03T08:21:10.150986449Z stderr F I0603 08:21:10.150849 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller | |
2019-06-03T08:21:10.19000001Z stderr F I0603 08:21:10.189840 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller | |
2019-06-03T08:21:10.230285454Z stderr F I0603 08:21:10.230083 1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller | |
2019-06-03T08:21:10.26811853Z stderr F I0603 08:21:10.267994 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io | |
2019-06-03T08:21:10.270431131Z stderr F I0603 08:21:10.270244 1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system | |
2019-06-03T08:21:10.310613876Z stderr F I0603 08:21:10.310434 1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system | |
2019-06-03T08:21:10.350037251Z stderr F I0603 08:21:10.349874 1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system | |
2019-06-03T08:21:10.39011364Z stderr F I0603 08:21:10.389985 1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system | |
2019-06-03T08:21:10.430212786Z stderr F I0603 08:21:10.430033 1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system | |
2019-06-03T08:21:10.470117371Z stderr F I0603 08:21:10.469930 1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system | |
2019-06-03T08:21:10.512099418Z stderr F I0603 08:21:10.511914 1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public | |
2019-06-03T08:21:10.548157524Z stderr F I0603 08:21:10.548009 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io | |
2019-06-03T08:21:10.550600158Z stderr F I0603 08:21:10.550490 1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system | |
2019-06-03T08:21:10.590295982Z stderr F I0603 08:21:10.590112 1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system | |
2019-06-03T08:21:10.624808795Z stderr F I0603 08:21:10.624615 1 controller.go:606] quota admission added evaluator for: endpoints | |
2019-06-03T08:21:10.634254926Z stderr F I0603 08:21:10.634087 1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system | |
2019-06-03T08:21:10.670542383Z stderr F I0603 08:21:10.670402 1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system | |
2019-06-03T08:21:10.711068771Z stderr F I0603 08:21:10.710828 1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system | |
2019-06-03T08:21:10.752303258Z stderr F I0603 08:21:10.752143 1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system | |
2019-06-03T08:21:10.790544944Z stderr F I0603 08:21:10.790394 1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public | |
2019-06-03T08:21:10.827281662Z stderr F W0603 08:21:10.827119 1 lease.go:222] Resetting endpoints for master service "kubernetes" to [172.17.0.2] | |
2019-06-03T08:21:11.288959462Z stderr F I0603 08:21:11.288805 1 controller.go:606] quota admission added evaluator for: serviceaccounts | |
2019-06-03T08:21:12.005151827Z stderr F I0603 08:21:12.004986 1 controller.go:606] quota admission added evaluator for: deployments.apps | |
2019-06-03T08:21:12.335634891Z stderr F I0603 08:21:12.335435 1 controller.go:606] quota admission added evaluator for: daemonsets.apps | |
2019-06-03T08:21:13.065635391Z stderr F I0603 08:21:13.065418 1 controller.go:606] quota admission added evaluator for: daemonsets.extensions | |
2019-06-03T08:21:14.139446044Z stderr F I0603 08:21:14.139298 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io | |
2019-06-03T08:21:27.015672076Z stderr F I0603 08:21:27.015548 1 controller.go:606] quota admission added evaluator for: replicasets.apps | |
2019-06-03T08:21:27.610821593Z stderr F I0603 08:21:27.610667 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps | |
2019-06-03T08:21:29.498784535Z stderr F E0603 08:21:29.493110 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} | |
2019-06-03T08:21:36.321970833Z stderr F I0603 08:21:36.315074 1 trace.go:81] Trace[1687493254]: "Get /api/v1/nodes/kind-worker2" (started: 2019-06-03 08:21:35.66271731 +0000 UTC m=+34.071854482) (total time: 652.311958ms): | |
2019-06-03T08:21:36.322014155Z stderr F Trace[1687493254]: [652.311958ms] [652.29597ms] END | |
2019-06-03T08:21:36.450915432Z stderr F I0603 08:21:36.449248 1 trace.go:81] Trace[1361152691]: "Get /api/v1/nodes/kind-worker" (started: 2019-06-03 08:21:35.69055733 +0000 UTC m=+34.099694487) (total time: 758.655434ms): | |
2019-06-03T08:21:36.450964904Z stderr F Trace[1361152691]: [758.655434ms] [758.641503ms] END |
2019-06-03T08:21:02.748188447Z stderr F I0603 08:21:02.748078 1 serving.go:319] Generated self-signed cert in-memory
2019-06-03T08:21:03.166077751Z stderr F I0603 08:21:03.165929 1 controllermanager.go:155] Version: v1.14.2
2019-06-03T08:21:03.166640919Z stderr F I0603 08:21:03.166561 1 secure_serving.go:116] Serving securely on 127.0.0.1:10257
2019-06-03T08:21:03.167146816Z stderr F I0603 08:21:03.167080 1 deprecated_insecure_serving.go:51] Serving insecurely on [::]:10252
2019-06-03T08:21:03.167345074Z stderr F I0603 08:21:03.167299 1 leaderelection.go:217] attempting to acquire leader lease kube-system/kube-controller-manager...
2019-06-03T08:21:07.583632112Z stderr F E0603 08:21:07.583485 1 leaderelection.go:306] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "endpoints" in API group "" in the namespace "kube-system"
2019-06-03T08:21:11.073083458Z stderr F I0603 08:21:11.072954 1 leaderelection.go:227] successfully acquired lease kube-system/kube-controller-manager
2019-06-03T08:21:11.074018554Z stderr F I0603 08:21:11.073366 1 event.go:209] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"8b760115-85d8-11e9-bdc2-0242ac110002", APIVersion:"v1", ResourceVersion:"161", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kind-control-plane_86c006a6-85d8-11e9-a9bc-0242ac110002 became leader
2019-06-03T08:21:11.280555501Z stderr F I0603 08:21:11.280372 1 plugins.go:103] No cloud provider specified.
2019-06-03T08:21:11.282505564Z stderr F I0603 08:21:11.282362 1 controller_utils.go:1027] Waiting for caches to sync for tokens controller
2019-06-03T08:21:11.382879381Z stderr F I0603 08:21:11.382715 1 controller_utils.go:1034] Caches are synced for tokens controller
2019-06-03T08:21:11.396657019Z stderr F I0603 08:21:11.396505 1 controllermanager.go:497] Started "endpoint"
2019-06-03T08:21:11.397114651Z stderr F I0603 08:21:11.397027 1 endpoints_controller.go:166] Starting endpoint controller
2019-06-03T08:21:11.397925073Z stderr F I0603 08:21:11.397206 1 controller_utils.go:1027] Waiting for caches to sync for endpoint controller
2019-06-03T08:21:11.629343344Z stderr F I0603 08:21:11.629160 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for events.events.k8s.io
2019-06-03T08:21:11.629548667Z stderr F I0603 08:21:11.629457 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for cronjobs.batch
2019-06-03T08:21:11.629619657Z stderr F I0603 08:21:11.629517 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for podtemplates
2019-06-03T08:21:11.629654303Z stderr F I0603 08:21:11.629585 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpoints
2019-06-03T08:21:11.629663328Z stderr F I0603 08:21:11.629619 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.extensions
2019-06-03T08:21:11.629706394Z stderr F I0603 08:21:11.629669 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.apps
2019-06-03T08:21:11.629738654Z stderr F I0603 08:21:11.629704 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for controllerrevisions.apps
2019-06-03T08:21:11.629778673Z stderr F I0603 08:21:11.629751 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
2019-06-03T08:21:11.630850506Z stderr F I0603 08:21:11.630751 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for jobs.batch
2019-06-03T08:21:11.630913425Z stderr F I0603 08:21:11.630847 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
2019-06-03T08:21:11.631126552Z stderr F I0603 08:21:11.631076 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
2019-06-03T08:21:11.631300124Z stderr F W0603 08:21:11.631250 1 shared_informer.go:311] resyncPeriod 63208603678164 is smaller than resyncCheckPeriod 78776112956132 and the informer has already started. Changing it to 78776112956132
2019-06-03T08:21:11.632217228Z stderr F I0603 08:21:11.632147 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for limitranges
2019-06-03T08:21:11.632271816Z stderr F I0603 08:21:11.632193 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.extensions
2019-06-03T08:21:11.632406761Z stderr F I0603 08:21:11.632365 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
2019-06-03T08:21:11.632476485Z stderr F W0603 08:21:11.632393 1 shared_informer.go:311] resyncPeriod 64078709594068 is smaller than resyncCheckPeriod 78776112956132 and the informer has already started. Changing it to 78776112956132
2019-06-03T08:21:11.633148726Z stderr F I0603 08:21:11.633079 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for serviceaccounts
2019-06-03T08:21:11.633217076Z stderr F I0603 08:21:11.633149 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.apps
2019-06-03T08:21:11.633225303Z stderr F I0603 08:21:11.633194 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for statefulsets.apps
2019-06-03T08:21:11.633393115Z stderr F I0603 08:21:11.633343 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.extensions
2019-06-03T08:21:11.633654008Z stderr F I0603 08:21:11.633611 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
2019-06-03T08:21:11.633947962Z stderr F I0603 08:21:11.633908 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.extensions
2019-06-03T08:21:11.63429272Z stderr F I0603 08:21:11.634250 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.apps
2019-06-03T08:21:11.634544241Z stderr F I0603 08:21:11.634482 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
2019-06-03T08:21:11.634574463Z stderr F I0603 08:21:11.634534 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
2019-06-03T08:21:11.634606787Z stderr F E0603 08:21:11.634567 1 resource_quota_controller.go:171] initial monitor sync has error: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
2019-06-03T08:21:11.63485269Z stderr F I0603 08:21:11.634797 1 controllermanager.go:497] Started "resourcequota"
2019-06-03T08:21:11.639288197Z stderr F I0603 08:21:11.639165 1 resource_quota_controller.go:276] Starting resource quota controller
2019-06-03T08:21:11.639433615Z stderr F I0603 08:21:11.639388 1 controller_utils.go:1027] Waiting for caches to sync for resource quota controller
2019-06-03T08:21:11.639544446Z stderr F I0603 08:21:11.639508 1 resource_quota_monitor.go:301] QuotaMonitor running
2019-06-03T08:21:11.657287406Z stderr F I0603 08:21:11.657108 1 controllermanager.go:497] Started "disruption"
2019-06-03T08:21:11.657579284Z stderr F I0603 08:21:11.657504 1 disruption.go:286] Starting disruption controller
2019-06-03T08:21:11.657701543Z stderr F I0603 08:21:11.657657 1 controller_utils.go:1027] Waiting for caches to sync for disruption controller
2019-06-03T08:21:11.678187204Z stderr F I0603 08:21:11.677991 1 controllermanager.go:497] Started "cronjob"
2019-06-03T08:21:11.67852199Z stderr F I0603 08:21:11.678439 1 cronjob_controller.go:94] Starting CronJob Manager
2019-06-03T08:21:11.699197649Z stderr F I0603 08:21:11.699011 1 node_ipam_controller.go:99] Sending events to api server.
2019-06-03T08:21:21.702783543Z stderr F I0603 08:21:21.702627 1 range_allocator.go:78] Sending events to api server.
2019-06-03T08:21:21.702992948Z stderr F I0603 08:21:21.702940 1 range_allocator.go:99] No Service CIDR provided. Skipping filtering out service addresses.
2019-06-03T08:21:21.70301304Z stderr F I0603 08:21:21.702956 1 range_allocator.go:105] Node kind-control-plane has no CIDR, ignoring
2019-06-03T08:21:21.703054171Z stderr F I0603 08:21:21.703015 1 controllermanager.go:497] Started "nodeipam"
2019-06-03T08:21:21.703061981Z stderr F W0603 08:21:21.703029 1 core.go:175] configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes.
2019-06-03T08:21:21.703080075Z stderr F W0603 08:21:21.703036 1 controllermanager.go:489] Skipping "route"
2019-06-03T08:21:21.70332502Z stderr F I0603 08:21:21.703059 1 node_ipam_controller.go:167] Starting ipam controller
2019-06-03T08:21:21.703376993Z stderr F I0603 08:21:21.703350 1 controller_utils.go:1027] Waiting for caches to sync for node controller
2019-06-03T08:21:21.787643659Z stderr F I0603 08:21:21.787508 1 controllermanager.go:497] Started "horizontalpodautoscaling"
2019-06-03T08:21:21.787827642Z stderr F I0603 08:21:21.787529 1 horizontal.go:156] Starting HPA controller
2019-06-03T08:21:21.787944712Z stderr F I0603 08:21:21.787897 1 controller_utils.go:1027] Waiting for caches to sync for HPA controller
2019-06-03T08:21:21.840486553Z stderr F I0603 08:21:21.840342 1 controllermanager.go:497] Started "statefulset"
2019-06-03T08:21:21.840636068Z stderr F I0603 08:21:21.840464 1 stateful_set.go:151] Starting stateful set controller
2019-06-03T08:21:21.840744498Z stderr F I0603 08:21:21.840700 1 controller_utils.go:1027] Waiting for caches to sync for stateful set controller
2019-06-03T08:21:21.847181888Z stderr F I0603 08:21:21.847007 1 node_lifecycle_controller.go:77] Sending events to api server
2019-06-03T08:21:21.847211339Z stderr F E0603 08:21:21.847069 1 core.go:161] failed to start cloud node lifecycle controller: no cloud provider provided
2019-06-03T08:21:21.847226524Z stderr F W0603 08:21:21.847081 1 controllermanager.go:489] Skipping "cloud-node-lifecycle"
2019-06-03T08:21:21.864986032Z stderr F I0603 08:21:21.864791 1 controllermanager.go:497] Started "persistentvolume-expander"
2019-06-03T08:21:21.865466689Z stderr F I0603 08:21:21.865246 1 expand_controller.go:153] Starting expand controller
2019-06-03T08:21:21.865583298Z stderr F I0603 08:21:21.865538 1 controller_utils.go:1027] Waiting for caches to sync for expand controller
2019-06-03T08:21:22.362357508Z stderr F I0603 08:21:22.362219 1 garbagecollector.go:130] Starting garbage collector controller
2019-06-03T08:21:22.362402443Z stderr F I0603 08:21:22.362294 1 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller
2019-06-03T08:21:22.362425659Z stderr F I0603 08:21:22.362333 1 graph_builder.go:308] GraphBuilder running
2019-06-03T08:21:22.362514184Z stderr F I0603 08:21:22.362475 1 controllermanager.go:497] Started "garbagecollector"
2019-06-03T08:21:22.385476381Z stderr F I0603 08:21:22.385335 1 controllermanager.go:497] Started "job"
2019-06-03T08:21:22.385566882Z stderr F I0603 08:21:22.385481 1 job_controller.go:143] Starting job controller
2019-06-03T08:21:22.38557543Z stderr F I0603 08:21:22.385510 1 controller_utils.go:1027] Waiting for caches to sync for job controller
2019-06-03T08:21:22.401944899Z stderr F I0603 08:21:22.401761 1 node_lifecycle_controller.go:292] Sending events to api server.
2019-06-03T08:21:22.402135043Z stderr F I0603 08:21:22.402079 1 node_lifecycle_controller.go:325] Controller is using taint based evictions.
2019-06-03T08:21:22.402245344Z stderr F I0603 08:21:22.402203 1 taint_manager.go:175] Sending events to api server.
2019-06-03T08:21:22.402660351Z stderr F I0603 08:21:22.402579 1 node_lifecycle_controller.go:390] Controller will reconcile labels.
2019-06-03T08:21:22.402713819Z stderr F I0603 08:21:22.402605 1 node_lifecycle_controller.go:403] Controller will taint node by condition.
2019-06-03T08:21:22.402734644Z stderr F I0603 08:21:22.402635 1 controllermanager.go:497] Started "nodelifecycle"
2019-06-03T08:21:22.402776674Z stderr F I0603 08:21:22.402733 1 node_lifecycle_controller.go:427] Starting node controller
2019-06-03T08:21:22.402870054Z stderr F I0603 08:21:22.402823 1 controller_utils.go:1027] Waiting for caches to sync for taint controller
2019-06-03T08:21:22.60613532Z stderr F I0603 08:21:22.605973 1 controllermanager.go:497] Started "pvc-protection"
2019-06-03T08:21:22.606433932Z stderr F I0603 08:21:22.606362 1 pvc_protection_controller.go:99] Starting PVC protection controller | |
2019-06-03T08:21:22.606556978Z stderr F I0603 08:21:22.606507 1 controller_utils.go:1027] Waiting for caches to sync for PVC protection controller | |
2019-06-03T08:21:22.856239424Z stderr F I0603 08:21:22.856042 1 controllermanager.go:497] Started "replicationcontroller" | |
2019-06-03T08:21:22.85627752Z stderr F I0603 08:21:22.856142 1 replica_set.go:182] Starting replicationcontroller controller | |
2019-06-03T08:21:22.856283818Z stderr F I0603 08:21:22.856171 1 controller_utils.go:1027] Waiting for caches to sync for ReplicationController controller | |
2019-06-03T08:21:23.106583047Z stderr F I0603 08:21:23.106394 1 controllermanager.go:497] Started "csrsigning" | |
2019-06-03T08:21:23.106809176Z stderr F I0603 08:21:23.106591 1 certificate_controller.go:113] Starting certificate controller | |
2019-06-03T08:21:23.106917438Z stderr F I0603 08:21:23.106871 1 controller_utils.go:1027] Waiting for caches to sync for certificate controller | |
2019-06-03T08:21:23.255598803Z stderr F I0603 08:21:23.255417 1 controllermanager.go:497] Started "csrcleaner" | |
2019-06-03T08:21:23.255804002Z stderr F I0603 08:21:23.255460 1 cleaner.go:81] Starting CSR cleaner controller | |
2019-06-03T08:21:23.506543732Z stderr F I0603 08:21:23.506338 1 controllermanager.go:497] Started "ttl" | |
2019-06-03T08:21:23.506583225Z stderr F I0603 08:21:23.506408 1 ttl_controller.go:116] Starting TTL controller | |
2019-06-03T08:21:23.506590311Z stderr F I0603 08:21:23.506432 1 controller_utils.go:1027] Waiting for caches to sync for TTL controller | |
2019-06-03T08:21:23.767330433Z stderr F I0603 08:21:23.767152 1 controllermanager.go:497] Started "namespace" | |
2019-06-03T08:21:23.76736835Z stderr F I0603 08:21:23.767223 1 namespace_controller.go:186] Starting namespace controller | |
2019-06-03T08:21:23.767374814Z stderr F I0603 08:21:23.767247 1 controller_utils.go:1027] Waiting for caches to sync for namespace controller | |
2019-06-03T08:21:24.006257053Z stderr F I0603 08:21:24.006064 1 controllermanager.go:497] Started "serviceaccount" | |
2019-06-03T08:21:24.00629986Z stderr F I0603 08:21:24.006128 1 serviceaccounts_controller.go:115] Starting service account controller | |
2019-06-03T08:21:24.006305993Z stderr F I0603 08:21:24.006151 1 controller_utils.go:1027] Waiting for caches to sync for service account controller | |
2019-06-03T08:21:24.256824328Z stderr F I0603 08:21:24.256628 1 controllermanager.go:497] Started "bootstrapsigner" | |
2019-06-03T08:21:24.256904218Z stderr F I0603 08:21:24.256743 1 controller_utils.go:1027] Waiting for caches to sync for bootstrap_signer controller | |
2019-06-03T08:21:24.510326172Z stderr F I0603 08:21:24.510174 1 controllermanager.go:497] Started "persistentvolume-binder" | |
2019-06-03T08:21:24.511013393Z stderr F I0603 08:21:24.510937 1 pv_controller_base.go:270] Starting persistent volume controller | |
2019-06-03T08:21:24.511032809Z stderr F I0603 08:21:24.510970 1 controller_utils.go:1027] Waiting for caches to sync for persistent volume controller | |
2019-06-03T08:21:24.756291117Z stderr F W0603 08:21:24.756104 1 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. | |
2019-06-03T08:21:24.757766229Z stderr F I0603 08:21:24.757320 1 controllermanager.go:497] Started "attachdetach" | |
2019-06-03T08:21:24.75778793Z stderr F I0603 08:21:24.757396 1 attach_detach_controller.go:323] Starting attach detach controller | |
2019-06-03T08:21:24.757838106Z stderr F I0603 08:21:24.757405 1 controller_utils.go:1027] Waiting for caches to sync for attach detach controller | |
2019-06-03T08:21:25.00613342Z stderr F I0603 08:21:25.005934 1 controllermanager.go:497] Started "clusterrole-aggregation" | |
2019-06-03T08:21:25.006171319Z stderr F I0603 08:21:25.006007 1 clusterroleaggregation_controller.go:148] Starting ClusterRoleAggregator | |
2019-06-03T08:21:25.006177622Z stderr F I0603 08:21:25.006032 1 controller_utils.go:1027] Waiting for caches to sync for ClusterRoleAggregator controller | |
2019-06-03T08:21:25.256524992Z stderr F I0603 08:21:25.256379 1 controllermanager.go:497] Started "podgc" | |
2019-06-03T08:21:25.256588627Z stderr F I0603 08:21:25.256429 1 gc_controller.go:76] Starting GC controller | |
2019-06-03T08:21:25.256594816Z stderr F I0603 08:21:25.256452 1 controller_utils.go:1027] Waiting for caches to sync for GC controller | |
2019-06-03T08:21:25.506081797Z stderr F I0603 08:21:25.505881 1 controllermanager.go:497] Started "daemonset" | |
2019-06-03T08:21:25.506134868Z stderr F I0603 08:21:25.505952 1 daemon_controller.go:267] Starting daemon sets controller | |
2019-06-03T08:21:25.506142263Z stderr F I0603 08:21:25.505977 1 controller_utils.go:1027] Waiting for caches to sync for daemon sets controller | |
2019-06-03T08:21:25.756948113Z stderr F I0603 08:21:25.756715 1 controllermanager.go:497] Started "replicaset" | |
2019-06-03T08:21:25.757006769Z stderr F I0603 08:21:25.756804 1 replica_set.go:182] Starting replicaset controller | |
2019-06-03T08:21:25.757013615Z stderr F I0603 08:21:25.756834 1 controller_utils.go:1027] Waiting for caches to sync for ReplicaSet controller | |
2019-06-03T08:21:25.906205319Z stderr F E0603 08:21:25.905985 1 prometheus.go:138] failed to register depth metric certificate: duplicate metrics collector registration attempted | |
2019-06-03T08:21:25.906293905Z stderr F E0603 08:21:25.906145 1 prometheus.go:150] failed to register adds metric certificate: duplicate metrics collector registration attempted | |
2019-06-03T08:21:25.906303193Z stderr F E0603 08:21:25.906229 1 prometheus.go:162] failed to register latency metric certificate: duplicate metrics collector registration attempted | |
2019-06-03T08:21:25.906463914Z stderr F E0603 08:21:25.906356 1 prometheus.go:174] failed to register work_duration metric certificate: duplicate metrics collector registration attempted | |
2019-06-03T08:21:25.906523473Z stderr F E0603 08:21:25.906419 1 prometheus.go:189] failed to register unfinished_work_seconds metric certificate: duplicate metrics collector registration attempted | |
2019-06-03T08:21:25.906536245Z stderr F E0603 08:21:25.906447 1 prometheus.go:202] failed to register longest_running_processor_microseconds metric certificate: duplicate metrics collector registration attempted | |
2019-06-03T08:21:25.906594083Z stderr F E0603 08:21:25.906532 1 prometheus.go:214] failed to register retries metric certificate: duplicate metrics collector registration attempted | |
2019-06-03T08:21:25.90665661Z stderr F I0603 08:21:25.906608 1 controllermanager.go:497] Started "csrapproving" | |
2019-06-03T08:21:25.906788791Z stderr F I0603 08:21:25.906740 1 certificate_controller.go:113] Starting certificate controller | |
2019-06-03T08:21:25.906842443Z stderr F I0603 08:21:25.906796 1 controller_utils.go:1027] Waiting for caches to sync for certificate controller | |
2019-06-03T08:21:26.157117703Z stderr F E0603 08:21:26.156921 1 core.go:77] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail | |
2019-06-03T08:21:26.157169588Z stderr F W0603 08:21:26.156982 1 controllermanager.go:489] Skipping "service" | |
2019-06-03T08:21:26.406779822Z stderr F I0603 08:21:26.406488 1 controllermanager.go:497] Started "deployment" | |
2019-06-03T08:21:26.406907396Z stderr F I0603 08:21:26.406580 1 deployment_controller.go:152] Starting deployment controller | |
2019-06-03T08:21:26.406931715Z stderr F I0603 08:21:26.406608 1 controller_utils.go:1027] Waiting for caches to sync for deployment controller | |
2019-06-03T08:21:26.656173936Z stderr F I0603 08:21:26.656012 1 controllermanager.go:497] Started "tokencleaner" | |
2019-06-03T08:21:26.656299475Z stderr F W0603 08:21:26.656243 1 controllermanager.go:489] Skipping "ttl-after-finished" | |
2019-06-03T08:21:26.656335265Z stderr F I0603 08:21:26.656076 1 tokencleaner.go:116] Starting token cleaner controller | |
2019-06-03T08:21:26.656381047Z stderr F I0603 08:21:26.656330 1 controller_utils.go:1027] Waiting for caches to sync for token_cleaner controller | |
2019-06-03T08:21:26.756667365Z stderr F I0603 08:21:26.756482 1 controller_utils.go:1034] Caches are synced for token_cleaner controller | |
2019-06-03T08:21:26.906617853Z stderr F I0603 08:21:26.906434 1 controllermanager.go:497] Started "pv-protection" | |
2019-06-03T08:21:26.906765189Z stderr F W0603 08:21:26.906469 1 controllermanager.go:489] Skipping "root-ca-cert-publisher" | |
2019-06-03T08:21:26.907115373Z stderr F E0603 08:21:26.907037 1 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies" | |
2019-06-03T08:21:26.907160614Z stderr F I0603 08:21:26.907098 1 pv_protection_controller.go:81] Starting PV protection controller | |
2019-06-03T08:21:26.907181097Z stderr F I0603 08:21:26.907121 1 controller_utils.go:1027] Waiting for caches to sync for PV protection controller | |
2019-06-03T08:21:26.917535891Z stderr F I0603 08:21:26.917407 1 controller_utils.go:1027] Waiting for caches to sync for garbage collector controller | |
2019-06-03T08:21:26.934650251Z stderr F W0603 08:21:26.933859 1 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="kind-control-plane" does not exist | |
2019-06-03T08:21:26.955292904Z stderr F I0603 08:21:26.955112 1 controller_utils.go:1034] Caches are synced for stateful set controller | |
2019-06-03T08:21:26.957420449Z stderr F I0603 08:21:26.957312 1 controller_utils.go:1034] Caches are synced for GC controller | |
2019-06-03T08:21:26.958607693Z stderr F I0603 08:21:26.958517 1 controller_utils.go:1034] Caches are synced for ReplicaSet controller | |
2019-06-03T08:21:26.967179656Z stderr F I0603 08:21:26.967048 1 controller_utils.go:1034] Caches are synced for attach detach controller | |
2019-06-03T08:21:26.967286341Z stderr F I0603 08:21:26.967243 1 controller_utils.go:1034] Caches are synced for expand controller | |
2019-06-03T08:21:26.967491656Z stderr F I0603 08:21:26.967441 1 controller_utils.go:1034] Caches are synced for namespace controller | |
2019-06-03T08:21:26.988422324Z stderr F I0603 08:21:26.988254 1 controller_utils.go:1034] Caches are synced for HPA controller | |
2019-06-03T08:21:26.997694707Z stderr F I0603 08:21:26.997513 1 controller_utils.go:1034] Caches are synced for endpoint controller | |
2019-06-03T08:21:27.003806654Z stderr F I0603 08:21:27.003665 1 controller_utils.go:1034] Caches are synced for node controller | |
2019-06-03T08:21:27.003839699Z stderr F I0603 08:21:27.003699 1 range_allocator.go:157] Starting range CIDR allocator | |
2019-06-03T08:21:27.00384662Z stderr F I0603 08:21:27.003721 1 controller_utils.go:1027] Waiting for caches to sync for cidrallocator controller | |
2019-06-03T08:21:27.006438111Z stderr F I0603 08:21:27.006329 1 controller_utils.go:1034] Caches are synced for service account controller | |
2019-06-03T08:21:27.006877705Z stderr F I0603 08:21:27.006806 1 controller_utils.go:1034] Caches are synced for ClusterRoleAggregator controller | |
2019-06-03T08:21:27.008347323Z stderr F I0603 08:21:27.008262 1 controller_utils.go:1034] Caches are synced for TTL controller | |
2019-06-03T08:21:27.010025794Z stderr F I0603 08:21:27.009946 1 controller_utils.go:1034] Caches are synced for deployment controller | |
2019-06-03T08:21:27.011673151Z stderr F I0603 08:21:27.011581 1 controller_utils.go:1034] Caches are synced for PVC protection controller | |
2019-06-03T08:21:27.011726214Z stderr F I0603 08:21:27.011639 1 controller_utils.go:1034] Caches are synced for PV protection controller | |
2019-06-03T08:21:27.011731446Z stderr F I0603 08:21:27.011658 1 controller_utils.go:1034] Caches are synced for persistent volume controller | |
2019-06-03T08:21:27.02321591Z stderr F I0603 08:21:27.023072 1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"8c04849d-85d8-11e9-bdc2-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"195", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-fb8b8dccf to 2 | |
2019-06-03T08:21:27.052919178Z stderr F I0603 08:21:27.052737 1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-fb8b8dccf", UID:"94f6f4fc-85d8-11e9-bdc2-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"333", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-fb8b8dccf-cb5wv | |
2019-06-03T08:21:27.0723454Z stderr F I0603 08:21:27.072117 1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-fb8b8dccf", UID:"94f6f4fc-85d8-11e9-bdc2-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"333", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-fb8b8dccf-lkfhd | |
2019-06-03T08:21:27.085851314Z stderr F I0603 08:21:27.085693 1 controller_utils.go:1034] Caches are synced for job controller | |
2019-06-03T08:21:27.09102257Z stderr F E0603 08:21:27.090810 1 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again | |
2019-06-03T08:21:27.092094472Z stderr F E0603 08:21:27.091987 1 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again | |
2019-06-03T08:21:27.104901236Z stderr F I0603 08:21:27.104768 1 controller_utils.go:1034] Caches are synced for cidrallocator controller | |
2019-06-03T08:21:27.128439069Z stderr F I0603 08:21:27.128297 1 range_allocator.go:310] Set node kind-control-plane PodCIDR to 10.244.0.0/24 | |
2019-06-03T08:21:27.158216238Z stderr F I0603 08:21:27.158039 1 controller_utils.go:1034] Caches are synced for ReplicationController controller | |
2019-06-03T08:21:27.158350851Z stderr F I0603 08:21:27.158042 1 controller_utils.go:1034] Caches are synced for disruption controller | |
2019-06-03T08:21:27.158366272Z stderr F I0603 08:21:27.158316 1 disruption.go:294] Sending events to api server. | |
2019-06-03T08:21:27.257330866Z stderr F I0603 08:21:27.257149 1 controller_utils.go:1034] Caches are synced for bootstrap_signer controller | |
2019-06-03T08:21:27.307114293Z stderr F I0603 08:21:27.306959 1 controller_utils.go:1034] Caches are synced for certificate controller | |
2019-06-03T08:21:27.307820328Z stderr F I0603 08:21:27.307749 1 controller_utils.go:1034] Caches are synced for certificate controller | |
2019-06-03T08:21:27.329418988Z stderr F I0603 08:21:27.329241 1 log.go:172] [INFO] signed certificate with serial number 327474727781264750827961006478248569663437768845 | |
2019-06-03T08:21:27.603255757Z stderr F I0603 08:21:27.603082 1 controller_utils.go:1034] Caches are synced for taint controller | |
2019-06-03T08:21:27.603403582Z stderr F I0603 08:21:27.603191 1 node_lifecycle_controller.go:1159] Initializing eviction metric for zone: | |
2019-06-03T08:21:27.603411515Z stderr F W0603 08:21:27.603270 1 node_lifecycle_controller.go:833] Missing timestamp for Node kind-control-plane. Assuming now as a timestamp. | |
2019-06-03T08:21:27.603418261Z stderr F I0603 08:21:27.603312 1 node_lifecycle_controller.go:1009] Controller detected that all Nodes are not-Ready. Entering master disruption mode. | |
2019-06-03T08:21:27.603562251Z stderr F I0603 08:21:27.603473 1 taint_manager.go:198] Starting NoExecuteTaintManager | |
2019-06-03T08:21:27.603664881Z stderr F I0603 08:21:27.603584 1 event.go:209] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-control-plane", UID:"89703616-85d8-11e9-bdc2-0242ac110002", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node kind-control-plane event: Registered Node kind-control-plane in Controller | |
2019-06-03T08:21:27.606269765Z stderr F I0603 08:21:27.606154 1 controller_utils.go:1034] Caches are synced for daemon sets controller | |
2019-06-03T08:21:27.627913862Z stderr F I0603 08:21:27.627700 1 event.go:209] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"8ca65230-85d8-11e9-bdc2-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"220", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-s2cz8 | |
2019-06-03T08:21:27.628266265Z stderr F I0603 08:21:27.628151 1 event.go:209] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"8c36f42a-85d8-11e9-bdc2-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"204", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-mwhdn | |
2019-06-03T08:21:27.646890522Z stderr F I0603 08:21:27.640245 1 controller_utils.go:1034] Caches are synced for resource quota controller | |
2019-06-03T08:21:27.663239028Z stderr F I0603 08:21:27.662935 1 controller_utils.go:1034] Caches are synced for garbage collector controller | |
2019-06-03T08:21:27.66329577Z stderr F I0603 08:21:27.662967 1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage | |
2019-06-03T08:21:27.673573844Z stderr F I0603 08:21:27.673341 1 event.go:209] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"ip-masq-agent", UID:"8cac15ba-85d8-11e9-bdc2-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"227", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ip-masq-agent-7mlrv | |
2019-06-03T08:21:27.700131643Z stderr F E0603 08:21:27.699852 1 daemon_controller.go:302] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"8c36f42a-85d8-11e9-bdc2-0242ac110002", ResourceVersion:"204", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63695146872, loc:(*time.Location)(0x724ce00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0002b75c0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), 
ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0013d5580), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0002b75e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0002b7620), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.14.2", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", 
ValueFrom:(*v1.EnvVarSource)(0xc0002b7680)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc000b540f0), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0010ab4a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001c39e00), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), 
RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00161c3b0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0010ab4e8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
2019-06-03T08:21:27.718728054Z stderr F I0603 08:21:27.718559 1 controller_utils.go:1034] Caches are synced for garbage collector controller
2019-06-03T08:21:30.084786337Z stderr F I0603 08:21:30.084656 1 log.go:172] [INFO] signed certificate with serial number 632041505758720968903678686549894147814218769357
2019-06-03T08:21:30.176585215Z stderr F I0603 08:21:30.176408 1 log.go:172] [INFO] signed certificate with serial number 517044012426615110385843436754368079094010153170
2019-06-03T08:21:42.396411944Z stderr F W0603 08:21:42.396207 1 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="kind-worker2" does not exist
2019-06-03T08:21:42.422077032Z stderr F I0603 08:21:42.421912 1 event.go:209] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"ip-masq-agent", UID:"8cac15ba-85d8-11e9-bdc2-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"420", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ip-masq-agent-rxptr
2019-06-03T08:21:42.423670153Z stderr F I0603 08:21:42.423553 1 event.go:209] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"8ca65230-85d8-11e9-bdc2-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"415", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-nbhv7
2019-06-03T08:21:42.426655468Z stderr F I0603 08:21:42.426549 1 range_allocator.go:310] Set node kind-worker2 PodCIDR to 10.244.1.0/24
2019-06-03T08:21:42.447744745Z stderr F I0603 08:21:42.447566 1 event.go:209] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"8c36f42a-85d8-11e9-bdc2-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"408", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-lbbkh
2019-06-03T08:21:42.470298383Z stderr F E0603 08:21:42.470094 1 daemon_controller.go:302] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"8c36f42a-85d8-11e9-bdc2-0242ac110002", ResourceVersion:"408", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63695146872, loc:(*time.Location)(0x724ce00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001e59940), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), 
ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc00190bec0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001e59960), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001e59980), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.14.2", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", 
ValueFrom:(*v1.EnvVarSource)(0xc001e599c0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0017f8f00), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0017fb6e8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0017eac00), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), 
RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0017e8138)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0017fb728)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:1, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:1, ObservedGeneration:1, UpdatedNumberScheduled:1, NumberAvailable:1, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again | |
2019-06-03T08:21:42.604317386Z stderr F W0603 08:21:42.604130 1 node_lifecycle_controller.go:833] Missing timestamp for Node kind-worker2. Assuming now as a timestamp.
2019-06-03T08:21:42.604786542Z stderr F I0603 08:21:42.604564 1 event.go:209] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker2", UID:"9e2144fe-85d8-11e9-bdc2-0242ac110002", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node kind-worker2 event: Registered Node kind-worker2 in Controller
2019-06-03T08:21:42.793224341Z stderr F W0603 08:21:42.793074 1 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="kind-worker" does not exist
2019-06-03T08:21:42.807084207Z stderr F I0603 08:21:42.806594 1 event.go:209] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"8c36f42a-85d8-11e9-bdc2-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"456", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-q6qbj
2019-06-03T08:21:42.825263217Z stderr F I0603 08:21:42.825073 1 event.go:209] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"8ca65230-85d8-11e9-bdc2-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"448", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-h2bsq
2019-06-03T08:21:42.842924197Z stderr F I0603 08:21:42.842787 1 range_allocator.go:310] Set node kind-worker PodCIDR to 10.244.2.0/24
2019-06-03T08:21:42.854577566Z stderr F I0603 08:21:42.854425 1 event.go:209] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"ip-masq-agent", UID:"8cac15ba-85d8-11e9-bdc2-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ip-masq-agent-kcr75
2019-06-03T08:21:43.026846893Z stderr F E0603 08:21:43.026528 1 daemon_controller.go:302] kube-system/ip-masq-agent failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-masq-agent", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/ip-masq-agent", UID:"8cac15ba-85d8-11e9-bdc2-0242ac110002", ResourceVersion:"445", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63695146873, loc:(*time.Location)(0x724ce00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"ip-masq-agent", "k8s-app":"ip-masq-agent", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00202ca80), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"ip-masq-agent", "k8s-app":"ip-masq-agent", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"config", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), 
Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc00202a500), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"ip-masq-agent", Image:"k8s.gcr.io/ip-masq-agent:v2.4.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"config", ReadOnly:false, MountPath:"/etc/config", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc00203b630), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002034558), 
ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"ip-masq-agent", DeprecatedServiceAccount:"ip-masq-agent", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001fcf500), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"OnDelete", RollingUpdate:(*v1.RollingUpdateDaemonSet)(nil)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0020345a8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:2, NumberMisscheduled:0, DesiredNumberScheduled:2, NumberReady:1, ObservedGeneration:1, UpdatedNumberScheduled:2, NumberAvailable:1, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "ip-masq-agent": the object has been modified; please apply your changes to the latest version and try again | |
2019-06-03T08:21:47.511344602Z stderr F I0603 08:21:47.511119 1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello", UID:"a1259afd-85d8-11e9-bdc2-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"530", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-6d6586c69c to 1
2019-06-03T08:21:47.541526875Z stderr F I0603 08:21:47.541251 1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-6d6586c69c", UID:"a12a83b2-85d8-11e9-bdc2-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"531", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-6d6586c69c-g46df
2019-06-03T08:21:47.60486977Z stderr F W0603 08:21:47.604596 1 node_lifecycle_controller.go:833] Missing timestamp for Node kind-worker. Assuming now as a timestamp.
2019-06-03T08:21:47.604912436Z stderr F I0603 08:21:47.604650 1 event.go:209] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"9e5d5fdd-85d8-11e9-bdc2-0242ac110002", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node kind-worker event: Registered Node kind-worker in Controller
2019-06-03T08:22:12.606490853Z stderr F I0603 08:22:12.606276 1 node_lifecycle_controller.go:1036] Controller detected that some Nodes are Ready. Exiting master disruption mode.
2019-06-03T08:21:47.118961711Z stderr F W0603 08:21:47.116266 1 server_others.go:267] Flag proxy-mode="" unknown, assuming iptables proxy
2019-06-03T08:21:47.19875902Z stderr F I0603 08:21:47.188678 1 server_others.go:146] Using iptables Proxier.
2019-06-03T08:21:47.198796119Z stderr F I0603 08:21:47.189224 1 server.go:562] Version: v1.14.2
2019-06-03T08:21:47.241684799Z stderr F I0603 08:21:47.239027 1 conntrack.go:52] Setting nf_conntrack_max to 131072
2019-06-03T08:21:47.242616949Z stderr F I0603 08:21:47.239494 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
2019-06-03T08:21:47.242647507Z stderr F I0603 08:21:47.239594 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
2019-06-03T08:21:47.242652581Z stderr F I0603 08:21:47.240941 1 config.go:202] Starting service config controller
2019-06-03T08:21:47.242746384Z stderr F I0603 08:21:47.241731 1 controller_utils.go:1027] Waiting for caches to sync for service config controller
2019-06-03T08:21:47.242759683Z stderr F I0603 08:21:47.241144 1 config.go:102] Starting endpoints config controller
2019-06-03T08:21:47.242764898Z stderr F I0603 08:21:47.242458 1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
2019-06-03T08:21:47.342069793Z stderr F I0603 08:21:47.341910 1 controller_utils.go:1034] Caches are synced for service config controller
2019-06-03T08:21:47.342960331Z stderr F I0603 08:21:47.342583 1 controller_utils.go:1034] Caches are synced for endpoints config controller
2019-06-03T08:21:29.846765767Z stderr F W0603 08:21:29.846422 1 server_others.go:267] Flag proxy-mode="" unknown, assuming iptables proxy
2019-06-03T08:21:30.145166968Z stderr F I0603 08:21:30.137936 1 server_others.go:146] Using iptables Proxier.
2019-06-03T08:21:30.145200843Z stderr F I0603 08:21:30.138577 1 server.go:562] Version: v1.14.2
2019-06-03T08:21:30.212614109Z stderr F I0603 08:21:30.212139 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
2019-06-03T08:21:30.212777659Z stderr F I0603 08:21:30.212180 1 conntrack.go:52] Setting nf_conntrack_max to 131072
2019-06-03T08:21:30.212797626Z stderr F I0603 08:21:30.212248 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
2019-06-03T08:21:30.212820876Z stderr F I0603 08:21:30.212290 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
2019-06-03T08:21:30.213915263Z stderr F I0603 08:21:30.213788 1 config.go:202] Starting service config controller
2019-06-03T08:21:30.214998963Z stderr F I0603 08:21:30.214413 1 controller_utils.go:1027] Waiting for caches to sync for service config controller
2019-06-03T08:21:30.221465084Z stderr F I0603 08:21:30.221357 1 config.go:102] Starting endpoints config controller
2019-06-03T08:21:30.229871956Z stderr F I0603 08:21:30.229631 1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
2019-06-03T08:21:30.338165322Z stderr F I0603 08:21:30.338033 1 controller_utils.go:1034] Caches are synced for endpoints config controller
2019-06-03T08:21:30.338329525Z stderr F I0603 08:21:30.338275 1 controller_utils.go:1034] Caches are synced for service config controller
2019-06-03T08:21:47.498338622Z stderr F W0603 08:21:47.497874 1 server_others.go:267] Flag proxy-mode="" unknown, assuming iptables proxy
2019-06-03T08:21:47.522470867Z stderr F I0603 08:21:47.522314 1 server_others.go:146] Using iptables Proxier.
2019-06-03T08:21:47.530963523Z stderr F I0603 08:21:47.523208 1 server.go:562] Version: v1.14.2
2019-06-03T08:21:47.57317287Z stderr F I0603 08:21:47.573017 1 conntrack.go:52] Setting nf_conntrack_max to 131072
2019-06-03T08:21:47.573613562Z stderr F I0603 08:21:47.573530 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
2019-06-03T08:21:47.573832176Z stderr F I0603 08:21:47.573778 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
2019-06-03T08:21:47.574867211Z stderr F I0603 08:21:47.574773 1 config.go:102] Starting endpoints config controller
2019-06-03T08:21:47.575053368Z stderr F I0603 08:21:47.574921 1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
2019-06-03T08:21:47.575220223Z stderr F I0603 08:21:47.575152 1 config.go:202] Starting service config controller
2019-06-03T08:21:47.575332853Z stderr F I0603 08:21:47.575272 1 controller_utils.go:1027] Waiting for caches to sync for service config controller
2019-06-03T08:21:47.676359369Z stderr F I0603 08:21:47.676204 1 controller_utils.go:1034] Caches are synced for endpoints config controller
2019-06-03T08:21:47.676542135Z stderr F I0603 08:21:47.676365 1 controller_utils.go:1034] Caches are synced for service config controller
2019-06-03T08:20:59.84138405Z stderr F I0603 08:20:59.834445 1 serving.go:319] Generated self-signed cert in-memory
2019-06-03T08:21:01.499778036Z stderr F W0603 08:21:01.498944 1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
2019-06-03T08:21:01.499824427Z stderr F W0603 08:21:01.498972 1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
2019-06-03T08:21:01.499832392Z stderr F W0603 08:21:01.498989 1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
2019-06-03T08:21:01.523562427Z stderr F I0603 08:21:01.515735 1 server.go:142] Version: v1.14.2
2019-06-03T08:21:01.52358876Z stderr F I0603 08:21:01.515820 1 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
2019-06-03T08:21:01.523596347Z stderr F W0603 08:21:01.517378 1 authorization.go:47] Authorization is disabled
2019-06-03T08:21:01.523602789Z stderr F W0603 08:21:01.517390 1 authentication.go:55] Authentication is disabled
2019-06-03T08:21:01.523607927Z stderr F I0603 08:21:01.517400 1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251
2019-06-03T08:21:01.523614237Z stderr F I0603 08:21:01.517971 1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
2019-06-03T08:21:01.523633928Z stderr F E0603 08:21:01.522568 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: Get https://172.17.0.2:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 172.17.0.2:6443: connect: connection refused
2019-06-03T08:21:01.523640276Z stderr F E0603 08:21:01.522812 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://172.17.0.2:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.17.0.2:6443: connect: connection refused
2019-06-03T08:21:01.523646159Z stderr F E0603 08:21:01.522899 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: Get https://172.17.0.2:6443/api/v1/nodes?limit=500&resourceVersion=0: dial tcp 172.17.0.2:6443: connect: connection refused
2019-06-03T08:21:01.523651644Z stderr F E0603 08:21:01.522979 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: Get https://172.17.0.2:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: dial tcp 172.17.0.2:6443: connect: connection refused
2019-06-03T08:21:01.523656351Z stderr F E0603 08:21:01.523042 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: Get https://172.17.0.2:6443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: dial tcp 172.17.0.2:6443: connect: connection refused
2019-06-03T08:21:01.523662643Z stderr F E0603 08:21:01.523103 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: Get https://172.17.0.2:6443/apis/apps/v1/replicasets?limit=500&resourceVersion=0: dial tcp 172.17.0.2:6443: connect: connection refused
2019-06-03T08:21:01.523666928Z stderr F E0603 08:21:01.523170 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: Get https://172.17.0.2:6443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0: dial tcp 172.17.0.2:6443: connect: connection refused
2019-06-03T08:21:01.523671464Z stderr F E0603 08:21:01.523230 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: Get https://172.17.0.2:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: dial tcp 172.17.0.2:6443: connect: connection refused
2019-06-03T08:21:01.523699354Z stderr F E0603 08:21:01.523290 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: Get https://172.17.0.2:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 172.17.0.2:6443: connect: connection refused
2019-06-03T08:21:01.523704353Z stderr F E0603 08:21:01.523368 1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: Get https://172.17.0.2:6443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 172.17.0.2:6443: connect: connection refused
2019-06-03T08:21:07.604250827Z stderr F E0603 08:21:07.604090 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
2019-06-03T08:21:07.615371247Z stderr F E0603 08:21:07.615211 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
2019-06-03T08:21:07.615467114Z stderr F E0603 08:21:07.615430 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
2019-06-03T08:21:07.635089658Z stderr F E0603 08:21:07.634916 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
2019-06-03T08:21:07.635193346Z stderr F E0603 08:21:07.635143 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
2019-06-03T08:21:07.635343573Z stderr F E0603 08:21:07.635295 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
2019-06-03T08:21:07.635458695Z stderr F E0603 08:21:07.635419 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
2019-06-03T08:21:07.643173545Z stderr F E0603 08:21:07.643042 1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
2019-06-03T08:21:07.643304169Z stderr F E0603 08:21:07.643254 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
2019-06-03T08:21:07.643449318Z stderr F E0603 08:21:07.643402 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
2019-06-03T08:21:08.606932212Z stderr F E0603 08:21:08.606812 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
2019-06-03T08:21:08.617015856Z stderr F E0603 08:21:08.616872 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
2019-06-03T08:21:08.621071365Z stderr F E0603 08:21:08.620951 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
2019-06-03T08:21:08.637041544Z stderr F E0603 08:21:08.636906 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
2019-06-03T08:21:08.64031929Z stderr F E0603 08:21:08.640217 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
2019-06-03T08:21:08.646782693Z stderr F E0603 08:21:08.646644 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
2019-06-03T08:21:08.647900075Z stderr F E0603 08:21:08.647782 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
2019-06-03T08:21:08.651675756Z stderr F E0603 08:21:08.651589 1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
2019-06-03T08:21:08.656797703Z stderr F E0603 08:21:08.656700 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
2019-06-03T08:21:08.65962569Z stderr F E0603 08:21:08.659512 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
2019-06-03T08:21:10.519526968Z stderr F I0603 08:21:10.519338 1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
2019-06-03T08:21:10.619753113Z stderr F I0603 08:21:10.619568 1 controller_utils.go:1034] Caches are synced for scheduler controller
2019-06-03T08:21:10.619841364Z stderr F I0603 08:21:10.619723 1 leaderelection.go:217] attempting to acquire leader lease kube-system/kube-scheduler...
2019-06-03T08:21:10.627703702Z stderr F I0603 08:21:10.627569 1 leaderelection.go:227] successfully acquired lease kube-system/kube-scheduler
-- Logs begin at Mon 2019-06-03 08:20:37 UTC, end at Mon 2019-06-03 08:22:30 UTC. -- | |
Jun 03 08:20:37 kind-worker systemd[1]: Started kubelet: The Kubernetes Node Agent. | |
Jun 03 08:20:37 kind-worker kubelet[43]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. | |
Jun 03 08:20:37 kind-worker kubelet[43]: F0603 08:20:37.519484 43 server.go:193] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory | |
Jun 03 08:20:37 kind-worker systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION | |
Jun 03 08:20:37 kind-worker systemd[1]: kubelet.service: Failed with result 'exit-code'. | |
Jun 03 08:20:47 kind-worker systemd[1]: kubelet.service: Service RestartSec=10s expired, scheduling restart. | |
Jun 03 08:20:47 kind-worker systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. | |
Jun 03 08:20:47 kind-worker systemd[1]: Stopped kubelet: The Kubernetes Node Agent. | |
Jun 03 08:20:47 kind-worker systemd[1]: Started kubelet: The Kubernetes Node Agent. | |
Jun 03 08:20:47 kind-worker kubelet[63]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. | |
Jun 03 08:20:47 kind-worker kubelet[63]: F0603 08:20:47.739422 63 server.go:193] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory | |
Jun 03 08:20:47 kind-worker systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION | |
Jun 03 08:20:47 kind-worker systemd[1]: kubelet.service: Failed with result 'exit-code'. | |
Jun 03 08:20:57 kind-worker systemd[1]: kubelet.service: Service RestartSec=10s expired, scheduling restart. | |
Jun 03 08:20:57 kind-worker systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. | |
Jun 03 08:20:57 kind-worker systemd[1]: Stopped kubelet: The Kubernetes Node Agent. | |
Jun 03 08:20:57 kind-worker systemd[1]: Started kubelet: The Kubernetes Node Agent. | |
Jun 03 08:20:58 kind-worker kubelet[70]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. | |
Jun 03 08:20:58 kind-worker kubelet[70]: F0603 08:20:58.078299 70 server.go:193] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory | |
Jun 03 08:20:58 kind-worker systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION | |
Jun 03 08:20:58 kind-worker systemd[1]: kubelet.service: Failed with result 'exit-code'. | |
Jun 03 08:21:08 kind-worker systemd[1]: kubelet.service: Service RestartSec=10s expired, scheduling restart. | |
Jun 03 08:21:08 kind-worker systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. | |
Jun 03 08:21:08 kind-worker systemd[1]: Stopped kubelet: The Kubernetes Node Agent. | |
Jun 03 08:21:08 kind-worker systemd[1]: Started kubelet: The Kubernetes Node Agent. | |
Jun 03 08:21:08 kind-worker kubelet[78]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. | |
Jun 03 08:21:08 kind-worker kubelet[78]: F0603 08:21:08.216227 78 server.go:193] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory | |
Jun 03 08:21:08 kind-worker systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION | |
Jun 03 08:21:08 kind-worker systemd[1]: kubelet.service: Failed with result 'exit-code'. | |
Jun 03 08:21:18 kind-worker systemd[1]: kubelet.service: Service RestartSec=10s expired, scheduling restart. | |
Jun 03 08:21:18 kind-worker systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. | |
Jun 03 08:21:18 kind-worker systemd[1]: Stopped kubelet: The Kubernetes Node Agent. | |
Jun 03 08:21:18 kind-worker systemd[1]: Started kubelet: The Kubernetes Node Agent. | |
Jun 03 08:21:18 kind-worker kubelet[115]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. | |
Jun 03 08:21:18 kind-worker kubelet[115]: F0603 08:21:18.453342 115 server.go:193] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory | |
Jun 03 08:21:18 kind-worker systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION | |
Jun 03 08:21:18 kind-worker systemd[1]: kubelet.service: Failed with result 'exit-code'. | |
Jun 03 08:21:28 kind-worker systemd[1]: kubelet.service: Service RestartSec=10s expired, scheduling restart. | |
Jun 03 08:21:28 kind-worker systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. | |
Jun 03 08:21:28 kind-worker systemd[1]: Stopped kubelet: The Kubernetes Node Agent. | |
Jun 03 08:21:28 kind-worker systemd[1]: Started kubelet: The Kubernetes Node Agent. | |
Jun 03 08:21:28 kind-worker kubelet[125]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. | |
Jun 03 08:21:28 kind-worker kubelet[125]: F0603 08:21:28.715250 125 server.go:193] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory | |
Jun 03 08:21:28 kind-worker systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION | |
Jun 03 08:21:28 kind-worker systemd[1]: kubelet.service: Failed with result 'exit-code'. | |
Jun 03 08:21:28 kind-worker systemd[1]: Stopped kubelet: The Kubernetes Node Agent. | |
Jun 03 08:21:29 kind-worker systemd[1]: Started kubelet: The Kubernetes Node Agent. | |
Jun 03 08:21:29 kind-worker kubelet[159]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. | |
Jun 03 08:21:29 kind-worker kubelet[159]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. | |
Jun 03 08:21:29 kind-worker kubelet[159]: I0603 08:21:29.648770 159 server.go:417] Version: v1.14.2 | |
Jun 03 08:21:29 kind-worker kubelet[159]: I0603 08:21:29.648977 159 plugins.go:103] No cloud provider specified. | |
Jun 03 08:21:29 kind-worker kubelet[159]: I0603 08:21:29.648991 159 server.go:754] Client rotation is on, will bootstrap in background | |
Jun 03 08:21:29 kind-worker kubelet[159]: I0603 08:21:29.666436 159 server.go:625] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to / | |
Jun 03 08:21:29 kind-worker kubelet[159]: I0603 08:21:29.666864 159 container_manager_linux.go:261] container manager verified user specified cgroup-root exists: [] | |
Jun 03 08:21:29 kind-worker kubelet[159]: I0603 08:21:29.666880 159 container_manager_linux.go:266] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms} | |
Jun 03 08:21:29 kind-worker kubelet[159]: I0603 08:21:29.666942 159 container_manager_linux.go:286] Creating device plugin manager: true | |
Jun 03 08:21:29 kind-worker kubelet[159]: I0603 08:21:29.667023 159 state_mem.go:36] [cpumanager] initializing new in-memory state store | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.022749 159 kubelet.go:279] Adding pod path: /etc/kubernetes/manifests | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.023056 159 kubelet.go:304] Watching apiserver | |
Jun 03 08:21:30 kind-worker kubelet[159]: W0603 08:21:30.026048 159 util_unix.go:77] Using "/run/containerd/containerd.sock" as endpoint is deprecated, please consider using full url format "unix:///run/containerd/containerd.sock". | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.026120 159 remote_runtime.go:62] parsed scheme: "" | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.026131 159 remote_runtime.go:62] scheme "" not registered, fallback to default scheme | |
Jun 03 08:21:30 kind-worker kubelet[159]: W0603 08:21:30.026155 159 util_unix.go:77] Using "/run/containerd/containerd.sock" as endpoint is deprecated, please consider using full url format "unix:///run/containerd/containerd.sock". | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.026168 159 remote_image.go:50] parsed scheme: "" | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.026173 159 remote_image.go:50] scheme "" not registered, fallback to default scheme | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.026313 159 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{/run/containerd/containerd.sock 0 <nil>}] | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.026324 159 clientconn.go:796] ClientConn switching balancer to "pick_first" | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.026372 159 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc0008719f0, CONNECTING | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.026480 159 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{/run/containerd/containerd.sock 0 <nil>}] | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.026576 159 clientconn.go:796] ClientConn switching balancer to "pick_first" | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.026712 159 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc00041f4d0, CONNECTING | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.026827 159 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc0008719f0, READY | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.027640 159 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc00041f4d0, READY | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.029107 159 kuberuntime_manager.go:210] Container runtime containerd initialized, version: 1.2.6-0ubuntu1, apiVersion: v1alpha2 | |
Jun 03 08:21:30 kind-worker kubelet[159]: W0603 08:21:30.029635 159 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.030954 159 server.go:1037] Started kubelet | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.031369 159 server.go:141] Starting to listen on 0.0.0.0:10250 | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.032035 159 server.go:343] Adding debug handlers to kubelet server. | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.043343 159 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.043567 159 status_manager.go:152] Starting to sync pod status with apiserver | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.044030 159 kubelet.go:1806] Starting kubelet main sync loop. | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.044258 159 kubelet.go:1823] skipping pod synchronization - [container runtime status check may not have completed yet., PLEG is not healthy: pleg has yet to be successful.] | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.044419 159 volume_manager.go:248] Starting Kubelet Volume Manager | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.050192 159 desired_state_of_world_populator.go:130] Desired state populator starts to run | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.051333 159 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.057911 159 clientconn.go:440] parsed scheme: "unix" | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.058035 159 clientconn.go:440] scheme "unix" not registered, fallback to default scheme | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.058124 159 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{unix:///run/containerd/containerd.sock 0 <nil>}] | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.058173 159 clientconn.go:796] ClientConn switching balancer to "pick_first" | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.058272 159 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc000457430, CONNECTING | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.058520 159 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc000457430, READY | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.067026 159 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.067252 159 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.067577 159 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a28b45b8b3c7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf35540281d7cfc7, ext:863938121, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf35540281d7cfc7, ext:863938121, loc:(*time.Location)(0x8018900)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.067857 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.067969 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.069862 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.100605 159 controller.go:115] failed to ensure node lease exists, will retry in 200ms, error: leases.coordination.k8s.io "kind-worker" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.113725 159 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.114944 159 cpu_manager.go:155] [cpumanager] starting with none policy | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.115059 159 cpu_manager.go:156] [cpumanager] reconciling every 10s | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.115113 159 policy_none.go:42] [cpumanager] none policy: Start | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.139328 159 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a28b4ab9b707", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node kind-worker status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf35540286d8d307, ext:947890577, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf35540286d8d307, ext:947890577, loc:(*time.Location)(0x8018900)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.151102 159 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a28b4ab9d81c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node kind-worker status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf35540286d8f41c, ext:947899037, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf35540286d8f41c, ext:947899037, loc:(*time.Location)(0x8018900)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.157034 159 kubelet.go:1823] skipping pod synchronization - container runtime status check may not have completed yet. | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.157101 159 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.157762 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.159792 159 kubelet_node_status.go:72] Attempting to register node kind-worker | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.166352 159 kubelet_node_status.go:94] Unable to register node "kind-worker" with API server: nodes is forbidden: User "system:anonymous" cannot create resource "nodes" in API group "" at the cluster scope | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.166510 159 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a28b4ab9f69a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node kind-worker status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf35540286d9129a, ext:947906842, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf35540286d9129a, ext:947906842, loc:(*time.Location)(0x8018900)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 08:21:30 kind-worker kubelet[159]: W0603 08:21:30.168315 159 manager.go:538] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.169582 159 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a28b4ab9b707", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node kind-worker status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf35540286d8d307, ext:947890577, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf3554028984d599, ext:992717858, loc:(*time.Location)(0x8018900)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a28b4ab9b707" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.173075 159 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get node info: node "kind-worker" not found | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.179732 159 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a28b4ab9d81c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node kind-worker status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf35540286d8f41c, ext:947899037, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf35540289859b53, ext:992768468, loc:(*time.Location)(0x8018900)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a28b4ab9d81c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.181091 159 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a28b4ab9f69a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node kind-worker status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf35540286d9129a, ext:947906842, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf3554028985b6bd, ext:992775488, loc:(*time.Location)(0x8018900)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a28b4ab9f69a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.181889 159 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a28b4dee89c9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf3554028a0da5c9, ext:1001684053, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf3554028a0da5c9, ext:1001684053, loc:(*time.Location)(0x8018900)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.257971 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.302326 159 controller.go:115] failed to ensure node lease exists, will retry in 400ms, error: leases.coordination.k8s.io "kind-worker" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.358401 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.366953 159 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.368284 159 kubelet_node_status.go:72] Attempting to register node kind-worker | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.370099 159 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a28b4ab9b707", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node kind-worker status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf35540286d8d307, ext:947890577, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf35540295f3094f, ext:1201266647, loc:(*time.Location)(0x8018900)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a28b4ab9b707" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.371508 159 kubelet_node_status.go:94] Unable to register node "kind-worker" with API server: nodes is forbidden: User "system:anonymous" cannot create resource "nodes" in API group "" at the cluster scope | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.371859 159 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a28b4ab9d81c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node kind-worker status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf35540286d8f41c, ext:947899037, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf35540295f329f7, ext:1201275013, loc:(*time.Location)(0x8018900)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a28b4ab9d81c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.372975 159 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a28b4ab9f69a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node kind-worker status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf35540286d9129a, ext:947906842, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf35540295f34047, ext:1201280722, loc:(*time.Location)(0x8018900)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a28b4ab9f69a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.462359 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.566310 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.666498 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.703701 159 controller.go:115] failed to ensure node lease exists, will retry in 800ms, error: leases.coordination.k8s.io "kind-worker" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.766760 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.771930 159 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach | |
Jun 03 08:21:30 kind-worker kubelet[159]: I0603 08:21:30.773069 159 kubelet_node_status.go:72] Attempting to register node kind-worker | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.774485 159 kubelet_node_status.go:94] Unable to register node "kind-worker" with API server: nodes is forbidden: User "system:anonymous" cannot create resource "nodes" in API group "" at the cluster scope | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.774589 159 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a28b4ab9b707", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node kind-worker status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf35540286d8d307, ext:947890577, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355402ae13721c, ext:1606043811, loc:(*time.Location)(0x8018900)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a28b4ab9b707" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.775489 159 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a28b4ab9d81c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node kind-worker status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf35540286d8f41c, ext:947899037, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355402ae13a883, ext:1606057742, loc:(*time.Location)(0x8018900)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a28b4ab9d81c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.832733 159 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a28b4ab9f69a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node kind-worker status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf35540286d9129a, ext:947906842, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355402ae13c217, ext:1606064288, loc:(*time.Location)(0x8018900)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a28b4ab9f69a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.866963 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:30 kind-worker kubelet[159]: E0603 08:21:30.967138 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:31 kind-worker kubelet[159]: E0603 08:21:31.067312 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:31 kind-worker kubelet[159]: E0603 08:21:31.068600 159 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope | |
Jun 03 08:21:31 kind-worker kubelet[159]: E0603 08:21:31.093227 159 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope | |
Jun 03 08:21:31 kind-worker kubelet[159]: E0603 08:21:31.094236 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope | |
Jun 03 08:21:31 kind-worker kubelet[159]: E0603 08:21:31.097866 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope | |
Jun 03 08:21:31 kind-worker kubelet[159]: E0603 08:21:31.101180 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope | |
Jun 03 08:21:31 kind-worker kubelet[159]: E0603 08:21:31.168226 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:31 kind-worker kubelet[159]: E0603 08:21:31.268400 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:31 kind-worker kubelet[159]: E0603 08:21:31.368599 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:31 kind-worker kubelet[159]: E0603 08:21:31.468793 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:31 kind-worker kubelet[159]: E0603 08:21:31.505025 159 controller.go:115] failed to ensure node lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "kind-worker" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" | |
Jun 03 08:21:31 kind-worker kubelet[159]: E0603 08:21:31.569063 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:31 kind-worker kubelet[159]: I0603 08:21:31.574754 159 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach | |
Jun 03 08:21:31 kind-worker kubelet[159]: I0603 08:21:31.575981 159 kubelet_node_status.go:72] Attempting to register node kind-worker | |
Jun 03 08:21:31 kind-worker kubelet[159]: E0603 08:21:31.577525 159 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a28b4ab9b707", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node kind-worker status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf35540286d8d307, ext:947890577, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355402e252d47c, ext:2408871166, loc:(*time.Location)(0x8018900)}}, Count:5, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a28b4ab9b707" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 08:21:31 kind-worker kubelet[159]: E0603 08:21:31.578037 159 kubelet_node_status.go:94] Unable to register node "kind-worker" with API server: nodes is forbidden: User "system:anonymous" cannot create resource "nodes" in API group "" at the cluster scope | |
Jun 03 08:21:31 kind-worker kubelet[159]: E0603 08:21:31.578665 159 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a28b4ab9d81c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node kind-worker status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf35540286d8f41c, ext:947899037, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355402e25431b4, ext:2408960568, loc:(*time.Location)(0x8018900)}}, Count:5, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a28b4ab9d81c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 08:21:31 kind-worker kubelet[159]: E0603 08:21:31.579574 159 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a28b4ab9f69a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node kind-worker status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf35540286d9129a, ext:947906842, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf355402e2545e52, ext:2408971997, loc:(*time.Location)(0x8018900)}}, Count:5, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a28b4ab9f69a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 08:21:31 kind-worker kubelet[159]: E0603 08:21:31.669232 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:31 kind-worker kubelet[159]: E0603 08:21:31.769408 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:31 kind-worker kubelet[159]: E0603 08:21:31.869582 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:31 kind-worker kubelet[159]: E0603 08:21:31.969768 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:32 kind-worker kubelet[159]: E0603 08:21:32.069930 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:32 kind-worker kubelet[159]: E0603 08:21:32.070147 159 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope | |
Jun 03 08:21:32 kind-worker kubelet[159]: E0603 08:21:32.095070 159 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope | |
Jun 03 08:21:32 kind-worker kubelet[159]: E0603 08:21:32.095720 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope | |
Jun 03 08:21:32 kind-worker kubelet[159]: E0603 08:21:32.098793 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope | |
Jun 03 08:21:32 kind-worker kubelet[159]: E0603 08:21:32.102160 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope | |
Jun 03 08:21:32 kind-worker kubelet[159]: E0603 08:21:32.170094 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:32 kind-worker kubelet[159]: E0603 08:21:32.270288 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:32 kind-worker kubelet[159]: E0603 08:21:32.370485 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:32 kind-worker kubelet[159]: E0603 08:21:32.470710 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:32 kind-worker kubelet[159]: E0603 08:21:32.570914 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:32 kind-worker kubelet[159]: E0603 08:21:32.671076 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:32 kind-worker kubelet[159]: E0603 08:21:32.771320 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:32 kind-worker kubelet[159]: E0603 08:21:32.871717 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:32 kind-worker kubelet[159]: E0603 08:21:32.971934 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:33 kind-worker kubelet[159]: E0603 08:21:33.071523 159 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope | |
Jun 03 08:21:33 kind-worker kubelet[159]: E0603 08:21:33.072090 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:33 kind-worker kubelet[159]: E0603 08:21:33.096848 159 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope | |
Jun 03 08:21:33 kind-worker kubelet[159]: E0603 08:21:33.097374 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope | |
Jun 03 08:21:33 kind-worker kubelet[159]: E0603 08:21:33.099732 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope | |
Jun 03 08:21:33 kind-worker kubelet[159]: E0603 08:21:33.103248 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope | |
Jun 03 08:21:33 kind-worker kubelet[159]: E0603 08:21:33.105849 159 controller.go:115] failed to ensure node lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "kind-worker" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" | |
Jun 03 08:21:33 kind-worker kubelet[159]: E0603 08:21:33.172280 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:33 kind-worker kubelet[159]: I0603 08:21:33.178205 159 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach | |
Jun 03 08:21:33 kind-worker kubelet[159]: I0603 08:21:33.179381 159 kubelet_node_status.go:72] Attempting to register node kind-worker | |
Jun 03 08:21:33 kind-worker kubelet[159]: E0603 08:21:33.180378 159 kubelet_node_status.go:94] Unable to register node "kind-worker" with API server: nodes is forbidden: User "system:anonymous" cannot create resource "nodes" in API group "" at the cluster scope | |
Jun 03 08:21:33 kind-worker kubelet[159]: E0603 08:21:33.180658 159 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a28b4ab9b707", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node kind-worker status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf35540286d8d307, ext:947890577, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf3554034ab09e34, ext:4012364475, loc:(*time.Location)(0x8018900)}}, Count:6, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a28b4ab9b707" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 08:21:33 kind-worker kubelet[159]: E0603 08:21:33.181560 159 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a28b4ab9d81c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node kind-worker status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf35540286d8f41c, ext:947899037, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf3554034ab0c30c, ext:4012373908, loc:(*time.Location)(0x8018900)}}, Count:6, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a28b4ab9d81c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 08:21:33 kind-worker kubelet[159]: E0603 08:21:33.182380 159 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a28b4ab9f69a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node kind-worker status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf35540286d9129a, ext:947906842, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf3554034ab0d85b, ext:4012379368, loc:(*time.Location)(0x8018900)}}, Count:6, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a28b4ab9f69a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 08:21:33 kind-worker kubelet[159]: E0603 08:21:33.272563 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:33 kind-worker kubelet[159]: E0603 08:21:33.372789 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:33 kind-worker kubelet[159]: E0603 08:21:33.472981 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:33 kind-worker kubelet[159]: E0603 08:21:33.573214 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:33 kind-worker kubelet[159]: E0603 08:21:33.673630 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:33 kind-worker kubelet[159]: E0603 08:21:33.773793 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:33 kind-worker kubelet[159]: E0603 08:21:33.874019 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:33 kind-worker kubelet[159]: E0603 08:21:33.974306 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:34 kind-worker kubelet[159]: E0603 08:21:34.072920 159 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope | |
Jun 03 08:21:34 kind-worker kubelet[159]: E0603 08:21:34.074467 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:34 kind-worker kubelet[159]: E0603 08:21:34.098539 159 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope | |
Jun 03 08:21:34 kind-worker kubelet[159]: E0603 08:21:34.099216 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope | |
Jun 03 08:21:34 kind-worker kubelet[159]: E0603 08:21:34.100525 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope | |
Jun 03 08:21:34 kind-worker kubelet[159]: E0603 08:21:34.104157 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope | |
Jun 03 08:21:34 kind-worker kubelet[159]: E0603 08:21:34.174637 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:34 kind-worker kubelet[159]: E0603 08:21:34.274836 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:34 kind-worker kubelet[159]: E0603 08:21:34.375047 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:34 kind-worker kubelet[159]: E0603 08:21:34.475298 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:34 kind-worker kubelet[159]: E0603 08:21:34.575476 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:34 kind-worker kubelet[159]: E0603 08:21:34.675659 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:34 kind-worker kubelet[159]: E0603 08:21:34.777493 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:34 kind-worker kubelet[159]: E0603 08:21:34.877922 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:34 kind-worker kubelet[159]: E0603 08:21:34.978135 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:35 kind-worker kubelet[159]: E0603 08:21:35.074264 159 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope | |
Jun 03 08:21:35 kind-worker kubelet[159]: E0603 08:21:35.082048 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:35 kind-worker kubelet[159]: E0603 08:21:35.100132 159 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope | |
Jun 03 08:21:35 kind-worker kubelet[159]: E0603 08:21:35.100882 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope | |
Jun 03 08:21:35 kind-worker kubelet[159]: E0603 08:21:35.101574 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope | |
Jun 03 08:21:35 kind-worker kubelet[159]: E0603 08:21:35.105067 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope | |
Jun 03 08:21:35 kind-worker kubelet[159]: E0603 08:21:35.169854 159 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized | |
Jun 03 08:21:35 kind-worker kubelet[159]: E0603 08:21:35.182237 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:35 kind-worker kubelet[159]: E0603 08:21:35.282448 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:35 kind-worker kubelet[159]: E0603 08:21:35.382936 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:35 kind-worker kubelet[159]: E0603 08:21:35.483184 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:35 kind-worker kubelet[159]: E0603 08:21:35.583410 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:35 kind-worker kubelet[159]: E0603 08:21:35.683582 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:35 kind-worker kubelet[159]: E0603 08:21:35.783780 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:35 kind-worker kubelet[159]: E0603 08:21:35.883977 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:35 kind-worker kubelet[159]: E0603 08:21:35.984170 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:36 kind-worker kubelet[159]: E0603 08:21:36.075916 159 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope | |
Jun 03 08:21:36 kind-worker kubelet[159]: E0603 08:21:36.084376 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:36 kind-worker kubelet[159]: E0603 08:21:36.101833 159 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope | |
Jun 03 08:21:36 kind-worker kubelet[159]: E0603 08:21:36.102490 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope | |
Jun 03 08:21:36 kind-worker kubelet[159]: E0603 08:21:36.103469 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope | |
Jun 03 08:21:36 kind-worker kubelet[159]: E0603 08:21:36.105856 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope | |
Jun 03 08:21:36 kind-worker kubelet[159]: E0603 08:21:36.184666 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:36 kind-worker kubelet[159]: E0603 08:21:36.284852 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:36 kind-worker kubelet[159]: E0603 08:21:36.307237 159 controller.go:115] failed to ensure node lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "kind-worker" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" | |
Jun 03 08:21:36 kind-worker kubelet[159]: I0603 08:21:36.380549 159 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach | |
Jun 03 08:21:36 kind-worker kubelet[159]: I0603 08:21:36.381714 159 kubelet_node_status.go:72] Attempting to register node kind-worker | |
Jun 03 08:21:36 kind-worker kubelet[159]: E0603 08:21:36.382971 159 kubelet_node_status.go:94] Unable to register node "kind-worker" with API server: nodes is forbidden: User "system:anonymous" cannot create resource "nodes" in API group "" at the cluster scope | |
Jun 03 08:21:36 kind-worker kubelet[159]: E0603 08:21:36.383326 159 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a28b4ab9b707", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node kind-worker status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf35540286d8d307, ext:947890577, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf35540416bfd823, ext:7214688937, loc:(*time.Location)(0x8018900)}}, Count:7, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a28b4ab9b707" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 08:21:36 kind-worker kubelet[159]: E0603 08:21:36.384457 159 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a28b4ab9d81c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node kind-worker status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf35540286d8f41c, ext:947899037, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf35540416c0143f, ext:7214704325, loc:(*time.Location)(0x8018900)}}, Count:7, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a28b4ab9d81c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 08:21:36 kind-worker kubelet[159]: E0603 08:21:36.384969 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:36 kind-worker kubelet[159]: E0603 08:21:36.385267 159 event.go:191] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kind-worker.15a4a28b4ab9f69a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"kind-worker", UID:"kind-worker", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node kind-worker status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"kind-worker"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf35540286d9129a, ext:947906842, loc:(*time.Location)(0x8018900)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf35540416c027bb, ext:7214709312, loc:(*time.Location)(0x8018900)}}, Count:7, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "kind-worker.15a4a28b4ab9f69a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) | |
Jun 03 08:21:36 kind-worker kubelet[159]: E0603 08:21:36.485286 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:36 kind-worker kubelet[159]: E0603 08:21:36.585720 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:36 kind-worker kubelet[159]: E0603 08:21:36.686293 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:36 kind-worker kubelet[159]: E0603 08:21:36.787200 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:36 kind-worker kubelet[159]: E0603 08:21:36.887790 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:36 kind-worker kubelet[159]: E0603 08:21:36.988362 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:37 kind-worker kubelet[159]: E0603 08:21:37.083266 159 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope | |
Jun 03 08:21:37 kind-worker kubelet[159]: E0603 08:21:37.088780 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:37 kind-worker kubelet[159]: E0603 08:21:37.103756 159 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope | |
Jun 03 08:21:37 kind-worker kubelet[159]: E0603 08:21:37.104257 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope | |
Jun 03 08:21:37 kind-worker kubelet[159]: E0603 08:21:37.105011 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope | |
Jun 03 08:21:37 kind-worker kubelet[159]: E0603 08:21:37.188994 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:37 kind-worker kubelet[159]: E0603 08:21:37.225122 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope | |
Jun 03 08:21:37 kind-worker kubelet[159]: E0603 08:21:37.289217 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:37 kind-worker kubelet[159]: E0603 08:21:37.389396 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:37 kind-worker kubelet[159]: E0603 08:21:37.489647 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:37 kind-worker kubelet[159]: E0603 08:21:37.589844 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:37 kind-worker kubelet[159]: E0603 08:21:37.690044 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:37 kind-worker kubelet[159]: E0603 08:21:37.790529 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:37 kind-worker kubelet[159]: E0603 08:21:37.890762 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:37 kind-worker kubelet[159]: E0603 08:21:37.991014 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:38 kind-worker kubelet[159]: E0603 08:21:38.084851 159 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope | |
Jun 03 08:21:38 kind-worker kubelet[159]: E0603 08:21:38.091307 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:38 kind-worker kubelet[159]: E0603 08:21:38.105564 159 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope | |
Jun 03 08:21:38 kind-worker kubelet[159]: E0603 08:21:38.106321 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope | |
Jun 03 08:21:38 kind-worker kubelet[159]: E0603 08:21:38.107180 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope | |
Jun 03 08:21:38 kind-worker kubelet[159]: E0603 08:21:38.191507 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:38 kind-worker kubelet[159]: E0603 08:21:38.226703 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope | |
Jun 03 08:21:38 kind-worker kubelet[159]: E0603 08:21:38.292012 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:38 kind-worker kubelet[159]: E0603 08:21:38.392200 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:38 kind-worker kubelet[159]: E0603 08:21:38.492408 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:38 kind-worker kubelet[159]: E0603 08:21:38.592608 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:38 kind-worker kubelet[159]: E0603 08:21:38.692786 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:38 kind-worker kubelet[159]: E0603 08:21:38.793286 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:38 kind-worker kubelet[159]: E0603 08:21:38.893642 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:38 kind-worker kubelet[159]: E0603 08:21:38.994061 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:39 kind-worker kubelet[159]: E0603 08:21:39.086307 159 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope | |
Jun 03 08:21:39 kind-worker kubelet[159]: E0603 08:21:39.094248 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:39 kind-worker kubelet[159]: E0603 08:21:39.107135 159 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope | |
Jun 03 08:21:39 kind-worker kubelet[159]: E0603 08:21:39.107749 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: nodes "kind-worker" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope | |
Jun 03 08:21:39 kind-worker kubelet[159]: E0603 08:21:39.108999 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "system:anonymous" cannot list resource "pods" in API group "" at the cluster scope | |
Jun 03 08:21:39 kind-worker kubelet[159]: E0603 08:21:39.194421 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:39 kind-worker kubelet[159]: E0603 08:21:39.228115 159 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope | |
Jun 03 08:21:39 kind-worker kubelet[159]: E0603 08:21:39.294618 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:39 kind-worker kubelet[159]: E0603 08:21:39.394770 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:39 kind-worker kubelet[159]: E0603 08:21:39.495448 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:39 kind-worker kubelet[159]: E0603 08:21:39.595670 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:39 kind-worker kubelet[159]: I0603 08:21:39.651804 159 transport.go:132] certificate rotation detected, shutting down client connections to start using new credentials | |
Jun 03 08:21:39 kind-worker kubelet[159]: E0603 08:21:39.696014 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:39 kind-worker kubelet[159]: E0603 08:21:39.796430 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:39 kind-worker kubelet[159]: E0603 08:21:39.896594 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:39 kind-worker kubelet[159]: E0603 08:21:39.996750 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:40 kind-worker kubelet[159]: E0603 08:21:40.096915 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:40 kind-worker kubelet[159]: I0603 08:21:40.168817 159 reconciler.go:154] Reconciler: start to sync state | |
Jun 03 08:21:40 kind-worker kubelet[159]: E0603 08:21:40.171078 159 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized | |
Jun 03 08:21:40 kind-worker kubelet[159]: E0603 08:21:40.174009 159 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get node info: node "kind-worker" not found | |
Jun 03 08:21:40 kind-worker kubelet[159]: E0603 08:21:40.197088 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:40 kind-worker kubelet[159]: E0603 08:21:40.300016 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:40 kind-worker kubelet[159]: E0603 08:21:40.400243 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:40 kind-worker kubelet[159]: E0603 08:21:40.500447 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:40 kind-worker kubelet[159]: E0603 08:21:40.600639 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:40 kind-worker kubelet[159]: E0603 08:21:40.700828 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:40 kind-worker kubelet[159]: E0603 08:21:40.800992 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:40 kind-worker kubelet[159]: E0603 08:21:40.901179 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:41 kind-worker kubelet[159]: E0603 08:21:41.001383 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:41 kind-worker kubelet[159]: E0603 08:21:41.101604 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:41 kind-worker kubelet[159]: E0603 08:21:41.201805 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:41 kind-worker kubelet[159]: E0603 08:21:41.301959 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:41 kind-worker kubelet[159]: E0603 08:21:41.402170 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:41 kind-worker kubelet[159]: E0603 08:21:41.502371 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:41 kind-worker kubelet[159]: E0603 08:21:41.602556 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:41 kind-worker kubelet[159]: E0603 08:21:41.702762 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:41 kind-worker kubelet[159]: E0603 08:21:41.802957 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:41 kind-worker kubelet[159]: E0603 08:21:41.903149 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:42 kind-worker kubelet[159]: E0603 08:21:42.003399 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:42 kind-worker kubelet[159]: E0603 08:21:42.103648 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:42 kind-worker kubelet[159]: E0603 08:21:42.203887 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:42 kind-worker kubelet[159]: E0603 08:21:42.304137 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:42 kind-worker kubelet[159]: E0603 08:21:42.404332 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:42 kind-worker kubelet[159]: E0603 08:21:42.504553 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:42 kind-worker kubelet[159]: E0603 08:21:42.604721 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:42 kind-worker kubelet[159]: E0603 08:21:42.704924 159 kubelet.go:2244] node "kind-worker" not found | |
Jun 03 08:21:42 kind-worker kubelet[159]: E0603 08:21:42.716592 159 controller.go:194] failed to get node "kind-worker" when trying to set owner ref to the node lease: nodes "kind-worker" not found | |
Jun 03 08:21:42 kind-worker kubelet[159]: I0603 08:21:42.783338 159 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach | |
Jun 03 08:21:42 kind-worker kubelet[159]: I0603 08:21:42.784599 159 kubelet_node_status.go:72] Attempting to register node kind-worker | |
Jun 03 08:21:42 kind-worker kubelet[159]: I0603 08:21:42.790244 159 kubelet_node_status.go:75] Successfully registered node kind-worker | |
Jun 03 08:21:42 kind-worker kubelet[159]: I0603 08:21:42.873760 159 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/9e5f9619-85d8-11e9-bdc2-0242ac110002-xtables-lock") pod "kube-proxy-q6qbj" (UID: "9e5f9619-85d8-11e9-bdc2-0242ac110002") | |
Jun 03 08:21:42 kind-worker kubelet[159]: I0603 08:21:42.873829 159 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/9e5f9619-85d8-11e9-bdc2-0242ac110002-lib-modules") pod "kube-proxy-q6qbj" (UID: "9e5f9619-85d8-11e9-bdc2-0242ac110002") | |
Jun 03 08:21:42 kind-worker kubelet[159]: I0603 08:21:42.873871 159 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-snhxd" (UniqueName: "kubernetes.io/secret/9e5f9619-85d8-11e9-bdc2-0242ac110002-kube-proxy-token-snhxd") pod "kube-proxy-q6qbj" (UID: "9e5f9619-85d8-11e9-bdc2-0242ac110002") | |
Jun 03 08:21:42 kind-worker kubelet[159]: I0603 08:21:42.873916 159 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-cfg" (UniqueName: "kubernetes.io/host-path/9e5f6fdf-85d8-11e9-bdc2-0242ac110002-cni-cfg") pod "kindnet-h2bsq" (UID: "9e5f6fdf-85d8-11e9-bdc2-0242ac110002") | |
Jun 03 08:21:42 kind-worker kubelet[159]: I0603 08:21:42.873953 159 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kindnet-token-ztngt" (UniqueName: "kubernetes.io/secret/9e5f6fdf-85d8-11e9-bdc2-0242ac110002-kindnet-token-ztngt") pod "kindnet-h2bsq" (UID: "9e5f6fdf-85d8-11e9-bdc2-0242ac110002") | |
Jun 03 08:21:42 kind-worker kubelet[159]: I0603 08:21:42.874124 159 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/9e5f9619-85d8-11e9-bdc2-0242ac110002-kube-proxy") pod "kube-proxy-q6qbj" (UID: "9e5f9619-85d8-11e9-bdc2-0242ac110002") | |
Jun 03 08:21:42 kind-worker kubelet[159]: I0603 08:21:42.911217 159 kuberuntime_manager.go:946] updating runtime config through cri with podcidr 10.244.2.0/24 | |
Jun 03 08:21:42 kind-worker kubelet[159]: I0603 08:21:42.912030 159 kubelet_network.go:77] Setting Pod CIDR: -> 10.244.2.0/24 | |
Jun 03 08:21:42 kind-worker kubelet[159]: E0603 08:21:42.912519 159 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized | |
Jun 03 08:21:42 kind-worker kubelet[159]: I0603 08:21:42.974592 159 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config" (UniqueName: "kubernetes.io/configmap/9e63c396-85d8-11e9-bdc2-0242ac110002-config") pod "ip-masq-agent-kcr75" (UID: "9e63c396-85d8-11e9-bdc2-0242ac110002") | |
Jun 03 08:21:42 kind-worker kubelet[159]: I0603 08:21:42.974643 159 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "ip-masq-agent-token-vrrsp" (UniqueName: "kubernetes.io/secret/9e63c396-85d8-11e9-bdc2-0242ac110002-ip-masq-agent-token-vrrsp") pod "ip-masq-agent-kcr75" (UID: "9e63c396-85d8-11e9-bdc2-0242ac110002")
Jun 03 08:21:45 kind-worker kubelet[159]: E0603 08:21:45.172257 159 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Jun 03 08:21:50 kind-worker kubelet[159]: E0603 08:21:50.173765 159 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Jun 03 08:21:50 kind-worker kubelet[159]: E0603 08:21:50.192930 159 summary_sys_containers.go:47] Failed to get system container stats for "/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/system.slice/kubelet.service": failed to get cgroup stats for "/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/system.slice/kubelet.service": failed to get container info for "/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/system.slice/kubelet.service": unknown container "/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/system.slice/kubelet.service"
Jun 03 08:21:55 kind-worker kubelet[159]: E0603 08:21:55.175385 159 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Jun 03 08:22:00 kind-worker kubelet[159]: E0603 08:22:00.176328 159 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Jun 03 08:22:00 kind-worker kubelet[159]: E0603 08:22:00.212800 159 summary_sys_containers.go:47] Failed to get system container stats for "/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/system.slice/kubelet.service": failed to get cgroup stats for "/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/system.slice/kubelet.service": failed to get container info for "/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/system.slice/kubelet.service": unknown container "/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/system.slice/kubelet.service"
Jun 03 08:22:05 kind-worker kubelet[159]: E0603 08:22:05.177524 159 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Jun 03 08:22:10 kind-worker kubelet[159]: E0603 08:22:10.179730 159 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Jun 03 08:22:10 kind-worker kubelet[159]: E0603 08:22:10.245210 159 summary_sys_containers.go:47] Failed to get system container stats for "/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/system.slice/kubelet.service": failed to get cgroup stats for "/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/system.slice/kubelet.service": failed to get container info for "/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/system.slice/kubelet.service": unknown container "/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/system.slice/kubelet.service"
Jun 03 08:22:15 kind-worker kubelet[159]: E0603 08:22:15.180921 159 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Jun 03 08:22:20 kind-worker kubelet[159]: E0603 08:22:20.269361 159 summary_sys_containers.go:47] Failed to get system container stats for "/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/system.slice/kubelet.service": failed to get cgroup stats for "/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/system.slice/kubelet.service": failed to get container info for "/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/system.slice/kubelet.service": unknown container "/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/system.slice/kubelet.service"
Jun 03 08:22:30 kind-worker kubelet[159]: E0603 08:22:30.290330 159 summary_sys_containers.go:47] Failed to get system container stats for "/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/system.slice/kubelet.service": failed to get cgroup stats for "/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/system.slice/kubelet.service": failed to get container info for "/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/system.slice/kubelet.service": unknown container "/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/docker/94d8130870c7e18bc71ede2585ca6ccb4a1cef8a6c93c63634ffd6e762ca9a75/system.slice/kubelet.service" |
Initializing machine ID from random generator.