OpenShift events (processed)
@benjaminapetersen · Last active March 16, 2017

A list generated from the source files referenced in the comments below:

OPENSHIFT 
  BuildConfig
    Warning
      FailedCreate
        Pod already exists
        Error creating
      invalidOutput
        Error starting build 
      HandleBuildError
        Build has error

  ?
    Normal 
      Starting 
        Starting kube-proxy

  Deployment
    Warning
      FailedCreate 
        Error creating deployer pod since another pod with the same name (%q) exists
      Failed
        Deployer pod %q has gone missing
      FailedCancellation 
        Succeeded before cancel recorded
      FailedRetry
        About to stop retrying
    Normal
      DeploymentCancelled
        Deployment cancelled 

  DeploymentConfig 
    Warning
      DeploymentCancellationFailed
        Failed to cancel deployment %q superceded by version
      DeploymentCleanupFailed
        Couldn't clean up deployments
      DeploymentCreationFailed
        Couldn't deploy version
      ReplicationControllerScaleFailed
        ?
      ReplicationControllerCleanupFailed
        Couldn't clean up replication controllers
    Normal
      DeploymentCancelled
        Cancelled deployment %q superceded by version 
      DeploymentAwaitingCancellation
        Deployment of version %d awaiting cancellation of older running deployments
      DeploymentCreated
        ?
      ReplicationControllerScaled
        Scaled replication controller

  Unidler 
    Normal 
      NeedPods
        The service-port %s:%s needs pods

  IngressIP 
    Warning 
      IngressIPRangeFull
        No available ingress ip to allocate
      IngressIPReallocated 
        ?

KUBERNETES
  ResourceLock 
    Normal 
      LeaderElection 
        ?

  Pod
    Warning
      FailedCreate
        Error Creating
      FailedDelete
        Error Deleting
    Normal
      SuccessfulCreate
        Created Pod
      SuccessfulDelete
        Deleted Pod

  Cron 
    Warning 
      UnexpectedJob 
        Saw a job that the controller did not create or forgot
      FailedCreate 
        Error creating job
      FailedGet 
        Get job err 
      FailedUpdate 
        Update job err 
      FailedList 
        List job-pods err 
      FailedDelete 
        Deleted job-pods err  
        Deleted job err
    Normal 
      SawCompletedJob 
        Saw completed job 
      SuccessfulDelete
        Deleted job 
      SuccessfulCreate
        Created job 
        
  Daemon 
    Warning 
      SelectingAll 
        Non-empty selector required 
        
    Normal 
      FailedPlacement
        failed to place pod 
        failed to place pod, host port conflict 

  Deployment 
    Warning 
      SelectingAll 
        non-empty selector required 
      SelectorOverlap 
        ?
      RollbackRevisionNotFound 
        Unable to find last revision 
      RollbackTemplateChanged 
        The rollback revision contains the same template as current deployment
      ReplicaSetCreateError
        ?
    Normal 
      RollbackDone
        Rolled back deployment %q to revision
      ScalingReplicaSet 
        Scaled up replica set 

  Disruption  
    Warning 
      NoPods 
        Failed to get pods 
      NotDeleted
        Pod was expected by PDB to be deleted but wasn't 
    Normal 
      NoPods 
        No matching pods found 
      ExpectedPods 
        Failed to calculate number of expected pods

  Node 
    Normal 
      NodeControllerEviction 
        Marking for deletion Pod
      DeletingAllPods 
        Deleting all Pods from Node

  PetSet
    Normal 
      SuccessfulDelete 
        ?
      SuccessfulCreate 
        ?
    Warning 
      FailedDelete
      FailedCreate

  PodAutoscaler 
    Normal 
      DesiredReplicasComputed 
      DesiredReplicasComputedCustomMetric
      SuccessfulRescale 
    Warning 
      FailedRescale 

  Service 
    Normal 
      Type 
        ? unclear, has something to do with wantsLoadBalancer 
      LoadBalancerSourceRanges
        ?
      LoadbalancerIP
        ?
      ExternalIP
        Count 
        Added 
      UID 
        ?
    Warning 
      LoadBalancerUpdateFailed
        Error updating load balancer

  PVC 
    ?
      RecyclerPod 
        ? not sure 
    Normal 
      ExternalProvisioning 
      VolumeRecycled 
        Volume Recycled 
      VolumeDeleted 
      ProvisioningCleanupFailed
      ProvisioningIgnoreAlpha
      FailedBinding 
        no persistent volumes available for this claim
    Warning 
      ProvisioningFailed

  Kubelet 
    Warning 
      FailedValidation 
        Error validating pod 

  DockerTools 
    Warning 
      FailedToCreateContainer 
        Failed to create docker container
      FailedToStartContainer  
      BackOffStartContainer
        Back-off restarting failed docker container
    Normal
      CreatedContainer 
      StartedContainer 
      InfraChanged 
        Pod infrastructure changed 

  Eviction 
    Warning
      EvictionThresholdMet
        Attempting to reclaim 
    Normal 

  Images 
    Warning 
      InvalidDiskCapacity 
      FreeDiskSpaceFailed 

  Kubelet 
    Warning
      ContainerGCFailed
      ImageGCFailed 
      KubeletSetupFailed 
      MissingClusterDNS
        kubelet does not have ClusterDNS IP configured
      FailedMountVolume
        Unable to mount volumes for pod
      NodeRebooted 
    Normal 
      StartingKubelet 
      NodeReady 
      NodeNotReady 
      NodeHasInsufficientMemory
      NodeHasSufficientMemory
      NodeHasDiskPressure
      NodeHasNoDiskPressure
      NodeOutOfDisk
      NodeHasSufficientDisk
      NodeNotSchedulable
      NodeSchedulable

  KubeRuntime 
    Warning 
      FailedToCreateContainer
        Failed to create container
      FailedToStartContainer
        Failed to start container with id
      BackOffStartContainer
        Back-off restarting failed container
      FailedSync 
        Error syncing pod
      ExceededGracePeriod 
        Container runtime did not kill the pod within specified grace period
    Normal 
      CreatedContainer
        Created container with id 
      StartedContainer
        Started container with id
      SandboxChanged
        Pod sandbox changed, it will be killed and re-created.
      SandboxReceived
        Pod sandbox received, it will be created.

  Prober 
    Warning 
      ContainerUnhealthy 
        probe errored 
        probe failed 
    Normal 

  RKT 
    Warning 
      FailedToStartContainer 
        Failed to start with rkt id 
      FailedToCreateContainer
        Failed to create rkt container 
      FailedPreStopHook
      FailedPostStartHook
    Normal 
      CreatedContainer 
        Created with rkt id 
      StartedContainer 
        Started with rkt id 
      KillingContainer 
        Killing with rkt id 

  OperationExecutor 
    Warning
      FailedMountVolume 
        ?
    Normal 
      
  Scheduler 
    Warning
      FailedScheduling 
        ?
    Normal 
      FailedScheduling
        Binding rejected
      Scheduled 
        Successfully assigned 
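
Every reason/message pair above comes from an Eventf call on a Kubernetes event recorder (the grep output in the comment below shows the actual call sites). A minimal runnable sketch of the pattern, using a stand-in recorder instead of the real k8s.io/kubernetes/pkg/client/record package:

package main

import "fmt"

// EventRecorder mirrors the Eventf signature seen throughout origin and
// kubernetes; the real interface lives in pkg/client/record.
type EventRecorder interface {
	Eventf(object interface{}, eventtype, reason, messageFmt string, args ...interface{})
}

// logRecorder is a stand-in that prints events instead of posting them
// to the API server.
type logRecorder struct{}

func (logRecorder) Eventf(object interface{}, eventtype, reason, messageFmt string, args ...interface{}) {
	fmt.Printf("%-7s %-22s %s\n", eventtype, reason, fmt.Sprintf(messageFmt, args...))
}

func main() {
	var recorder EventRecorder = logRecorder{}
	// Reproduces the DeploymentConfig "DeploymentCancelled" entry above;
	// the deployment name and version number are made up for illustration.
	recorder.Eventf(nil, "Normal", "DeploymentCancelled",
		"Cancelled deployment %q superceded by version %d", "frontend-2", 3)
}
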
benjaminapetersen commented Mar 16, 2017

Fishing around the origin repo with `ag -Q '.Eventf(' | sort | uniq` yields this, which seems pretty clear (add `--nofilename --nonumbers` for just the lines):

$ ag -Q '.Eventf(' | sort | uniq
Build
pkg/build/controller/controller.go:206:			bc.Recorder.Eventf(build, kapi.EventTypeWarning, "FailedCreate", "Pod already exists: %s/%s", podSpec.Namespace, podSpec.Name)
pkg/build/controller/controller.go:219:		bc.Recorder.Eventf(build, kapi.EventTypeWarning, "FailedCreate", "Error creating: %v", err)
pkg/build/controller/controller.go:271:			bc.Recorder.Eventf(build, kapi.EventTypeWarning, "invalidOutput", "Error starting build: %v", e)
pkg/build/controller/factory/factory.go:133:					buildController.Recorder.Eventf(build, kapi.EventTypeWarning, "HandleBuildError", "Build has error: %v", err)

?
pkg/cmd/server/kubernetes/node.go:497:	recorder.Eventf(c.ProxyConfig.NodeRef, kapi.EventTypeNormal, "Starting", "Starting kube-proxy.")

Deployment
pkg/deploy/controller/deployment/controller.go:408:		c.recorder.Eventf(config, eventType, title, fmt.Sprintf("%s: %s", deployment.Name, message))
pkg/deploy/controller/deployment/controller.go:410:		c.recorder.Eventf(deployment, eventType, title, message)

DeploymentConfig
pkg/deploy/controller/deploymentconfig/controller.go:136:				c.recorder.Eventf(config, kapi.EventTypeWarning, "DeploymentCancellationFailed", "Failed to cancel deployment %q superceded by version %d: %s", deployment.Name, config.Status.LatestVersion, err)
pkg/deploy/controller/deploymentconfig/controller.go:140:				c.recorder.Eventf(config, kapi.EventTypeNormal, "DeploymentCancelled", "Cancelled deployment %q superceded by version %d", deployment.Name, config.Status.LatestVersion)
pkg/deploy/controller/deploymentconfig/controller.go:147:		c.recorder.Eventf(config, kapi.EventTypeNormal, "DeploymentAwaitingCancellation", "Deployment of version %d awaiting cancellation of older running deployments", config.Status.LatestVersion)
pkg/deploy/controller/deploymentconfig/controller.go:166:			c.recorder.Eventf(config, kapi.EventTypeWarning, "DeploymentCleanupFailed", "Couldn't clean up deployments: %v", err)
pkg/deploy/controller/deploymentconfig/controller.go:184:		c.recorder.Eventf(config, kapi.EventTypeWarning, "DeploymentCreationFailed", "Couldn't deploy version %d: %s", config.Status.LatestVersion, err)
pkg/deploy/controller/deploymentconfig/controller.go:191:	c.recorder.Eventf(config, kapi.EventTypeNormal, "DeploymentCreated", msg)
pkg/deploy/controller/deploymentconfig/controller.go:197:		c.recorder.Eventf(config, kapi.EventTypeWarning, "DeploymentCleanupFailed", "Couldn't clean up deployments: %v", err)
pkg/deploy/controller/deploymentconfig/controller.go:250:				c.recorder.Eventf(config, kapi.EventTypeWarning, "ReplicationControllerScaleFailed",
pkg/deploy/controller/deploymentconfig/controller.go:255:			c.recorder.Eventf(config, kapi.EventTypeNormal, "ReplicationControllerScaled", "Scaled replication controller %q from %d to %d", copied.Name, oldReplicaCount, newReplicaCount)
pkg/deploy/controller/deploymentconfig/controller.go:265:		c.recorder.Eventf(config, kapi.EventTypeWarning, "ReplicationControllerCleanupFailed", "Couldn't clean up replication controllers: %v", err)

Unidler
pkg/proxy/unidler/unidlerproxy.go:29:	sig.recorder.Eventf(&serviceRef, api.EventTypeNormal, unidlingapi.NeedPodsReason, "The service-port %s:%s needs pods.", serviceRef.Name, port)

Ingress 
pkg/service/controller/ingressip/controller.go:378:				ic.recorder.Eventf(service, kapi.EventTypeWarning, "IngressIPRangeFull", "No available ingress ip to allocate to service %s", change.key)
pkg/service/controller/ingressip/controller.go:480:		ic.recorder.Eventf(service, kapi.EventTypeWarning, "IngressIPReallocated", reallocateMessage)

KubeProxy cmd 
vendor/k8s.io/kubernetes/cmd/kube-proxy/app/server.go:353:				s.Recorder.Eventf(s.Config.NodeRef, api.EventTypeWarning, err.Error(), message)
vendor/k8s.io/kubernetes/cmd/kube-proxy/app/server.go:461:	s.Recorder.Eventf(s.Config.NodeRef, api.EventTypeNormal, "Starting", "Starting kube-proxy.")

ResourceLock - n/a
vendor/k8s.io/kubernetes/pkg/client/leaderelection/resourcelock/endpointslock.go:90:	el.LockConfig.EventRecorder.Eventf(&api.Endpoints{ObjectMeta: el.e.ObjectMeta}, api.EventTypeNormal, "LeaderElection", events)

Record - n/a
vendor/k8s.io/kubernetes/pkg/client/record/event_test.go:359:		recorder.Eventf(item.obj, item.eventtype, item.reason, item.messageFmt, item.elements...)
vendor/k8s.io/kubernetes/pkg/client/record/event_test.go:508:		go recorder.Eventf(ref, api.EventTypeNormal, "Reason-"+string(i), strconv.Itoa(i))
vendor/k8s.io/kubernetes/pkg/client/record/event_test.go:604:		recorder.Eventf(item.obj, item.eventtype, item.reason, item.messageFmt, item.elements...)
vendor/k8s.io/kubernetes/pkg/client/record/event_test.go:884:		recorder.Eventf(item.obj, item.eventtype, item.reason, item.messageFmt, item.elements...)
vendor/k8s.io/kubernetes/pkg/client/record/event_test.go:900:		recorder.Eventf(item.obj, item.eventtype, item.reason, item.messageFmt, item.elements...)


Pod
vendor/k8s.io/kubernetes/pkg/controller/controller_utils.go:504:		r.Recorder.Eventf(object, api.EventTypeWarning, FailedCreatePodReason, "Error creating: %v", err)
vendor/k8s.io/kubernetes/pkg/controller/controller_utils.go:513:		r.Recorder.Eventf(object, api.EventTypeNormal, SuccessfulCreatePodReason, "Created pod: %v", newPod.Name)
vendor/k8s.io/kubernetes/pkg/controller/controller_utils.go:525:		r.Recorder.Eventf(object, api.EventTypeWarning, FailedDeletePodReason, "Error deleting: %v", err)
vendor/k8s.io/kubernetes/pkg/controller/controller_utils.go:528:		r.Recorder.Eventf(object, api.EventTypeNormal, SuccessfulDeletePodReason, "Deleted pod: %v", podID)

Cron 
vendor/k8s.io/kubernetes/pkg/controller/cronjob/controller.go:135:			recorder.Eventf(&sj, api.EventTypeWarning, "UnexpectedJob", "Saw a job that the controller did not create or forgot: %v", j.Name)
vendor/k8s.io/kubernetes/pkg/controller/cronjob/controller.go:149:			recorder.Eventf(&sj, api.EventTypeNormal, "SawCompletedJob", "Saw completed job: %v", j.Name)
vendor/k8s.io/kubernetes/pkg/controller/cronjob/controller.go:213:				recorder.Eventf(&sj, api.EventTypeWarning, "FailedGet", "Get job: %v", err)
vendor/k8s.io/kubernetes/pkg/controller/cronjob/controller.go:222:					recorder.Eventf(&sj, api.EventTypeWarning, "FailedUpdate", "Update job: %v", err)
vendor/k8s.io/kubernetes/pkg/controller/cronjob/controller.go:231:				recorder.Eventf(&sj, api.EventTypeWarning, "FailedList", "List job-pods: %v", err)
vendor/k8s.io/kubernetes/pkg/controller/cronjob/controller.go:244:				recorder.Eventf(&sj, api.EventTypeWarning, "FailedDelete", "Deleted job-pods: %v", utilerrors.NewAggregate(errList))
vendor/k8s.io/kubernetes/pkg/controller/cronjob/controller.go:249:				recorder.Eventf(&sj, api.EventTypeWarning, "FailedDelete", "Deleted job: %v", err)
vendor/k8s.io/kubernetes/pkg/controller/cronjob/controller.go:255:			recorder.Eventf(&sj, api.EventTypeNormal, "SuccessfulDelete", "Deleted job %v", j.Name)
vendor/k8s.io/kubernetes/pkg/controller/cronjob/controller.go:266:		recorder.Eventf(&sj, api.EventTypeWarning, "FailedCreate", "Error creating job: %v", err)
vendor/k8s.io/kubernetes/pkg/controller/cronjob/controller.go:270:	recorder.Eventf(&sj, api.EventTypeNormal, "SuccessfulCreate", "Created job %v", jobResp.Name)

Daemon 
vendor/k8s.io/kubernetes/pkg/controller/daemon/daemoncontroller.go:637:		dsc.eventRecorder.Eventf(ds, api.EventTypeWarning, "SelectingAll", "This daemon set is selecting all pods. A non-empty selector is required.")
vendor/k8s.io/kubernetes/pkg/controller/daemon/daemoncontroller.go:703:			dsc.eventRecorder.Eventf(ds, api.EventTypeNormal, "FailedPlacement", "failed to place pod on %q: %s", node.ObjectMeta.Name, reason.Error())
vendor/k8s.io/kubernetes/pkg/controller/daemon/daemoncontroller.go:706:				dsc.eventRecorder.Eventf(ds, api.EventTypeNormal, "FailedPlacement", "failed to place pod on %q: host port conflict", node.ObjectMeta.Name)

Deployment 
vendor/k8s.io/kubernetes/pkg/controller/deployment/deployment_controller.go:336:		dc.eventRecorder.Eventf(d, api.EventTypeWarning, "SelectingAll", "This deployment is selecting all pods. A non-empty selector is required.")
vendor/k8s.io/kubernetes/pkg/controller/deployment/deployment_controller.go:350:		dc.eventRecorder.Eventf(d, api.EventTypeWarning, "SelectorOverlap", err.Error())
vendor/k8s.io/kubernetes/pkg/controller/deployment/rollback.go:94:	dc.eventRecorder.Eventf(deployment, api.EventTypeWarning, reason, message)
vendor/k8s.io/kubernetes/pkg/controller/deployment/rollback.go:98:	dc.eventRecorder.Eventf(deployment, api.EventTypeNormal, deploymentutil.RollbackDone, message)
vendor/k8s.io/kubernetes/pkg/controller/deployment/sync.go:378:		dc.eventRecorder.Eventf(deployment, api.EventTypeWarning, deploymentutil.FailedRSCreateReason, msg)
vendor/k8s.io/kubernetes/pkg/controller/deployment/sync.go:382:		dc.eventRecorder.Eventf(deployment, api.EventTypeNormal, "ScalingReplicaSet", "Scaled up replica set %s to %d", createdRS.Name, newReplicasCount)
vendor/k8s.io/kubernetes/pkg/controller/deployment/sync.go:527:			dc.eventRecorder.Eventf(deployment, api.EventTypeNormal, "ScalingReplicaSet", "Scaled %s replica set %s to %d", scalingOperation, rs.Name, newScale)

Disruption
vendor/k8s.io/kubernetes/pkg/controller/disruption/disruption.go:488:		dc.recorder.Eventf(pdb, api.EventTypeWarning, "NoPods", "Failed to get pods: %v", err)
vendor/k8s.io/kubernetes/pkg/controller/disruption/disruption.go:492:		dc.recorder.Eventf(pdb, api.EventTypeNormal, "NoPods", "No matching pods found")
vendor/k8s.io/kubernetes/pkg/controller/disruption/disruption.go:497:		dc.recorder.Eventf(pdb, api.EventTypeNormal, "ExpectedPods", "Failed to calculate the number of expected pods: %v", err)
vendor/k8s.io/kubernetes/pkg/controller/disruption/disruption.go:628:			dc.recorder.Eventf(pod, api.EventTypeWarning, "NotDeleted", "Pod was expected by PDB %s/%s to be deleted but it wasn't",

Node
vendor/k8s.io/kubernetes/pkg/controller/node/controller_utils.go:275:	recorder.Eventf(ref, eventtype, reason, "Node %s event: %s", nodeName, event)
vendor/k8s.io/kubernetes/pkg/controller/node/controller_utils.go:288:	recorder.Eventf(ref, api.EventTypeNormal, new_status, "Node %s status is now: %s", node.Name, new_status)
vendor/k8s.io/kubernetes/pkg/controller/node/controller_utils.go:88:		recorder.Eventf(&pod, api.EventTypeNormal, "NodeControllerEviction", "Marking for deletion Pod %s from Node %s", pod.Name, nodeName)

PetSet 
vendor/k8s.io/kubernetes/pkg/controller/petset/fakes.go:171:			f.recorder.Eventf(pet.parent, api.EventTypeNormal, "SuccessfulDelete", "pod: %v", pet.pod.Name)
vendor/k8s.io/kubernetes/pkg/controller/petset/fakes.go:202:	f.recorder.Eventf(p.parent, api.EventTypeNormal, "SuccessfulCreate", "pod: %v", p.pod.Name)
vendor/k8s.io/kubernetes/pkg/controller/petset/fakes.go:299:		f.recorder.Eventf(pet.parent, api.EventTypeNormal, "SuccessfulCreate", "pvc: %v", remaining.Name)
vendor/k8s.io/kubernetes/pkg/controller/petset/fakes.go:317:			f.recorder.Eventf(pet.parent, api.EventTypeNormal, "SuccessfulDelete", "pvc: %v", existing.Name)
vendor/k8s.io/kubernetes/pkg/controller/petset/pet.go:283:		p.recorder.Eventf(obj, api.EventTypeWarning, fmt.Sprintf("Failed%v", reason), fmt.Sprintf("%v, error: %v", msg, err))
vendor/k8s.io/kubernetes/pkg/controller/petset/pet.go:285:		p.recorder.Eventf(obj, api.EventTypeNormal, fmt.Sprintf("Successful%v", reason), msg)

PodAutoscaler 
vendor/k8s.io/kubernetes/pkg/controller/podautoscaler/horizontal.go:180:		a.eventRecorder.Eventf(hpa, api.EventTypeNormal, "DesiredReplicasComputed",
vendor/k8s.io/kubernetes/pkg/controller/podautoscaler/horizontal.go:264:		a.eventRecorder.Eventf(hpa, api.EventTypeNormal, "DesiredReplicasComputedCustomMetric",
vendor/k8s.io/kubernetes/pkg/controller/podautoscaler/horizontal.go:374:			a.eventRecorder.Eventf(hpa, api.EventTypeWarning, "FailedRescale", "New size: %d; reason: %s; error: %v", desiredReplicas, rescaleReason, err.Error())
vendor/k8s.io/kubernetes/pkg/controller/podautoscaler/horizontal.go:377:		a.eventRecorder.Eventf(hpa, api.EventTypeNormal, "SuccessfulRescale", "New size: %d; reason: %s", desiredReplicas, rescaleReason)

Service 
vendor/k8s.io/kubernetes/pkg/controller/service/servicecontroller.go:427:		s.eventRecorder.Eventf(newService, api.EventTypeNormal, "Type", "%v -> %v",
vendor/k8s.io/kubernetes/pkg/controller/service/servicecontroller.go:433:		s.eventRecorder.Eventf(newService, api.EventTypeNormal, "LoadBalancerSourceRanges", "%v -> %v",
vendor/k8s.io/kubernetes/pkg/controller/service/servicecontroller.go:442:		s.eventRecorder.Eventf(newService, api.EventTypeNormal, "LoadbalancerIP", "%v -> %v",
vendor/k8s.io/kubernetes/pkg/controller/service/servicecontroller.go:447:		s.eventRecorder.Eventf(newService, api.EventTypeNormal, "ExternalIP", "Count: %v -> %v",
vendor/k8s.io/kubernetes/pkg/controller/service/servicecontroller.go:453:			s.eventRecorder.Eventf(newService, api.EventTypeNormal, "ExternalIP", "Added: %v",
vendor/k8s.io/kubernetes/pkg/controller/service/servicecontroller.go:462:		s.eventRecorder.Eventf(newService, api.EventTypeNormal, "UID", "%v -> %v",
vendor/k8s.io/kubernetes/pkg/controller/service/servicecontroller.go:692:	s.eventRecorder.Eventf(service, api.EventTypeWarning, "LoadBalancerUpdateFailed", "Error updating load balancer with new hosts %v: %v", hosts, err)

PVC 
vendor/k8s.io/kubernetes/pkg/controller/volume/persistentvolume/pv_controller.go:1419:		ctrl.eventRecorder.Eventf(volume, eventtype, "RecyclerPod", "Recycler pod: %s", message)

Kubelet 
vendor/k8s.io/kubernetes/pkg/kubelet/active_deadline.go:74:	m.recorder.Eventf(pod, api.EventTypeNormal, reason, message)
vendor/k8s.io/kubernetes/pkg/kubelet/config/config.go:344:			recorder.Eventf(pod, api.EventTypeWarning, events.FailedValidation, "Error validating pod %s from %s, ignoring: %v", name, source, err)
vendor/k8s.io/kubernetes/pkg/kubelet/container/helpers.go:159:		irecorder.recorder.Eventf(ref, eventtype, reason, messageFmt, args...)

DockerTools 
vendor/k8s.io/kubernetes/pkg/kubelet/dockertools/docker_manager.go:2088:		dm.recorder.Eventf(pod, api.EventTypeNormal, "InfraChanged", "Pod infrastructure changed, it will be killed and re-created.")
vendor/k8s.io/kubernetes/pkg/kubelet/dockertools/docker_manager.go:2506:				dm.recorder.Eventf(ref, api.EventTypeWarning, events.BackOffStartContainer, "Back-off restarting failed docker container")
vendor/k8s.io/kubernetes/pkg/kubelet/dockertools/docker_manager.go:722:			dm.recorder.Eventf(ref, api.EventTypeWarning, events.FailedToCreateContainer, "Failed to create docker container %q of pod %q with error: %v", container.Name, format.Pod(pod), err)
vendor/k8s.io/kubernetes/pkg/kubelet/dockertools/docker_manager.go:798:		dm.recorder.Eventf(ref, api.EventTypeWarning, events.FailedToCreateContainer, "Failed to create docker container %q of pod %q with error: %v", container.Name, format.Pod(pod), err)
vendor/k8s.io/kubernetes/pkg/kubelet/dockertools/docker_manager.go:817:	dm.recorder.Eventf(ref, api.EventTypeNormal, events.CreatedContainer, createdEventMsg)
vendor/k8s.io/kubernetes/pkg/kubelet/dockertools/docker_manager.go:820:		dm.recorder.Eventf(ref, api.EventTypeWarning, events.FailedToStartContainer,
vendor/k8s.io/kubernetes/pkg/kubelet/dockertools/docker_manager.go:824:	dm.recorder.Eventf(ref, api.EventTypeNormal, events.StartedContainer, "Started container with docker id %v", utilstrings.ShortenString(createResp.ID, 12))

Eviction 
vendor/k8s.io/kubernetes/pkg/kubelet/eviction/eviction_manager.go:295:	m.recorder.Eventf(m.nodeRef, api.EventTypeWarning, "EvictionThresholdMet", "Attempting to reclaim %s", resourceToReclaim)
vendor/k8s.io/kubernetes/pkg/kubelet/eviction/eviction_manager.go:333:		m.recorder.Eventf(pod, api.EventTypeWarning, reason, fmt.Sprintf(message, resourceToReclaim))

Images
vendor/k8s.io/kubernetes/pkg/kubelet/images/image_gc_manager.go:262:		im.recorder.Eventf(im.nodeRef, api.EventTypeWarning, events.InvalidDiskCapacity, err.Error())
vendor/k8s.io/kubernetes/pkg/kubelet/images/image_gc_manager.go:278:			im.recorder.Eventf(im.nodeRef, api.EventTypeWarning, events.FreeDiskSpaceFailed, err.Error())

Kubelet 
vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:1130:			kl.recorder.Eventf(kl.nodeRef, api.EventTypeWarning, events.ContainerGCFailed, err.Error())
vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:1147:			kl.recorder.Eventf(kl.nodeRef, api.EventTypeWarning, events.ImageGCFailed, err.Error())
vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:1228:		kl.recorder.Eventf(kl.nodeRef, api.EventTypeWarning, events.KubeletSetupFailed, err.Error())
vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:1289:		kl.recorder.Eventf(pod, api.EventTypeWarning, "MissingClusterDNS", "kubelet does not have ClusterDNS IP configured and cannot create Pod using %q policy. Falling back to DNSDefault policy.", pod.Spec.DNSPolicy)
vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:1291:		kl.recorder.Eventf(kl.nodeRef, api.EventTypeWarning, "MissingClusterDNS", log)
vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:1522:		kl.recorder.Eventf(pod, api.EventTypeWarning, events.FailedMountVolume, "Unable to mount volumes for pod %q: %v", format.Pod(pod), err)
vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:1653:	kl.recorder.Eventf(pod, api.EventTypeWarning, reason, message)
vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:2089:	kl.recorder.Eventf(kl.nodeRef, api.EventTypeNormal, events.StartingKubelet, "Starting kubelet.")
vendor/k8s.io/kubernetes/pkg/kubelet/kubelet_node_status.go:380:	kl.recorder.Eventf(kl.nodeRef, eventtype, event, "Node %s status is now: %s", kl.nodeName, event)
vendor/k8s.io/kubernetes/pkg/kubelet/kubelet_node_status.go:515:			kl.recorder.Eventf(kl.nodeRef, api.EventTypeWarning, events.NodeRebooted,

Kuberuntime
vendor/k8s.io/kubernetes/pkg/kubelet/kuberuntime/kuberuntime_container.go:75:		m.recorder.Eventf(ref, api.EventTypeWarning, events.FailedToCreateContainer, "Failed to create container with error: %v", err)
vendor/k8s.io/kubernetes/pkg/kubelet/kuberuntime/kuberuntime_container.go:80:		m.recorder.Eventf(ref, api.EventTypeWarning, events.FailedToCreateContainer, "Failed to create container with error: %v", err)
vendor/k8s.io/kubernetes/pkg/kubelet/kuberuntime/kuberuntime_container.go:83:	m.recorder.Eventf(ref, api.EventTypeNormal, events.CreatedContainer, "Created container with id %v", containerID)
vendor/k8s.io/kubernetes/pkg/kubelet/kuberuntime/kuberuntime_container.go:94:		m.recorder.Eventf(ref, api.EventTypeWarning, events.FailedToStartContainer,
vendor/k8s.io/kubernetes/pkg/kubelet/kuberuntime/kuberuntime_container.go:98:	m.recorder.Eventf(ref, api.EventTypeNormal, events.StartedContainer, "Started container with id %v", containerID)
vendor/k8s.io/kubernetes/pkg/kubelet/kuberuntime/kuberuntime_manager.go:554:			m.recorder.Eventf(ref, api.EventTypeNormal, "SandboxChanged", "Pod sandbox changed, it will be killed and re-created.")
vendor/k8s.io/kubernetes/pkg/kubelet/kuberuntime/kuberuntime_manager.go:556:			m.recorder.Eventf(ref, api.EventTypeNormal, "SandboxReceived", "Pod sandbox received, it will be created.")
vendor/k8s.io/kubernetes/pkg/kubelet/kuberuntime/kuberuntime_manager.go:743:			m.recorder.Eventf(ref, api.EventTypeWarning, events.BackOffStartContainer, "Back-off restarting failed container")
vendor/k8s.io/kubernetes/pkg/kubelet/pod_workers.go:185:			p.recorder.Eventf(update.Pod, api.EventTypeWarning, events.FailedSync, "Error syncing pod, skipping: %v", err)
vendor/k8s.io/kubernetes/pkg/kubelet/pod_workers.go:328:			recorder.Eventf(pod, api.EventTypeWarning, events.ExceededGracePeriod, "Container runtime did not kill the pod within specified grace period.")

Prober 
vendor/k8s.io/kubernetes/pkg/kubelet/prober/prober.go:103:				pb.recorder.Eventf(ref, api.EventTypeWarning, events.ContainerUnhealthy, "%s probe errored: %v", probeType, err)
vendor/k8s.io/kubernetes/pkg/kubelet/prober/prober.go:108:				pb.recorder.Eventf(ref, api.EventTypeWarning, events.ContainerUnhealthy, "%s probe failed: %s", probeType, output)

RKT
vendor/k8s.io/kubernetes/pkg/kubelet/rkt/rkt.go:1233:			r.recorder.Eventf(ref, api.EventTypeNormal, events.CreatedContainer, "Created with rkt id %v", uuid)
vendor/k8s.io/kubernetes/pkg/kubelet/rkt/rkt.go:1235:			r.recorder.Eventf(ref, api.EventTypeNormal, events.StartedContainer, "Started with rkt id %v", uuid)
vendor/k8s.io/kubernetes/pkg/kubelet/rkt/rkt.go:1237:			r.recorder.Eventf(ref, api.EventTypeWarning, events.FailedToStartContainer, "Failed to start with rkt id %v with error %v", uuid, failure)
vendor/k8s.io/kubernetes/pkg/kubelet/rkt/rkt.go:1239:			r.recorder.Eventf(ref, api.EventTypeNormal, events.KillingContainer, "Killing with rkt id %v", uuid)
vendor/k8s.io/kubernetes/pkg/kubelet/rkt/rkt.go:1325:			r.recorder.Eventf(ref, api.EventTypeWarning, events.FailedToCreateContainer, "Failed to create rkt container with error: %v", prepareErr)
vendor/k8s.io/kubernetes/pkg/kubelet/rkt/rkt.go:1380:			r.recorder.Eventf(ref, api.EventTypeWarning, events.FailedPreStopHook, msg)
vendor/k8s.io/kubernetes/pkg/kubelet/rkt/rkt.go:1422:			r.recorder.Eventf(ref, api.EventTypeWarning, events.FailedPostStartHook, msg)

OperationsExecutor 
vendor/k8s.io/kubernetes/pkg/volume/util/operationexecutor/operation_executor.go:611:				oe.recorder.Eventf(pod, api.EventTypeWarning, kevents.FailedMountVolume, err.Error())
vendor/k8s.io/kubernetes/pkg/volume/util/operationexecutor/operation_executor.go:861:				oe.recorder.Eventf(volumeToMount.Pod, api.EventTypeWarning, kevents.FailedMountVolume, err.Error())
vendor/k8s.io/kubernetes/pkg/volume/util/operationexecutor/operation_executor.go:891:				oe.recorder.Eventf(volumeToMount.Pod, api.EventTypeWarning, kevents.FailedMountVolume, errMsg)
vendor/k8s.io/kubernetes/pkg/volume/util/operationexecutor/operation_executor.go:908:			oe.recorder.Eventf(volumeToMount.Pod, api.EventTypeWarning, kevents.FailedMountVolume, err.Error())

Scheduler
vendor/k8s.io/kubernetes/plugin/pkg/scheduler/scheduler.go:147:			s.config.Recorder.Eventf(pod, api.EventTypeNormal, "FailedScheduling", "Binding rejected: %v", err)
vendor/k8s.io/kubernetes/plugin/pkg/scheduler/scheduler.go:156:		s.config.Recorder.Eventf(pod, api.EventTypeNormal, "Scheduled", "Successfully assigned %v to %v", pod.Name, dest)
vendor/k8s.io/kubernetes/plugin/pkg/scheduler/scheduler.go:99:		s.config.Recorder.Eventf(pod, api.EventTypeWarning, "FailedScheduling", "%v", err)

benjaminapetersen (author) commented:

Original list, scraped from events monitored in the UI:

// Not comprehensive, however

{
   "Pod":{
      "Killing":{
         "Normal":[
            "mongodb-3-hhp4b: Killing container with docker id 86c76c4edc01: pod \"mongodb-3-hhp4b_myproject(48655a11-0992-11e7-9dc7-080027242396)\" container \"mongodb\" is unhealthy, it will be killed and re-created.",
            "cakephp-mysql-persistent-3-mpng2: Killing container with docker id 35957a8b6224: pod \"cakephp-mysql-persistent-3-mpng2_my-cake(7b9ff60d-082b-11e7-9dc7-080027242396)\" container \"cakephp-mysql-persistent\" is unhealthy, it will be killed and re-created.",
            "cakephp-mysql-persistent-3-mpng2: Killing container with docker id b21506efaf7d: pod \"cakephp-mysql-persistent-3-mpng2_my-cake(7b9ff60d-082b-11e7-9dc7-080027242396)\" container \"cakephp-mysql-persistent\" is unhealthy, it will be killed and re-created.",
            "mysql-2-deploy: Killing container with docker id 4fabe1cd4b96: Need to kill pod."
         ]
      },
      "Scheduled":{
         "Normal":[
            "database-1-hook-post: Successfully assigned database-1-hook-post to 10.0.2.15",
            "cakephp-mysql-persistent-4-hook-pre: Successfully assigned cakephp-mysql-persistent-4-hook-pre to 10.0.2.15",
            "mysql-1-fmksr: Successfully assigned mysql-1-fmksr to 10.0.2.15"
         ]
      },
      "Pulled":{
         "Normal":[
            "database-1-hook-post: Successfully pulled image \"centos/mysql-56-centos7@sha256:f8603dadddf5dc3b4a46333a7c3d9c2496d1fbc1f77cced44fdd2f02732e0b77\"",
            "database-1-0tdk7: Successfully pulled image \"centos/mysql-56-centos7@sha256:f8603dadddf5dc3b4a46333a7c3d9c2496d1fbc1f77cced44fdd2f02732e0b77\"",
            "nginx-deployment-4087004473-rdzh3: Container image \"nginx:1.7.9\" already present on machine"
         ]
      },
      "Created":{
         "Normal":[
            "mongodb-3-hhp4b: Created container with docker id 86c76c4edc01; Security:[seccomp=unconfined]",
            "mongodb-3-hhp4b: Created container with docker id 84c155890e4c; Security:[seccomp=unconfined]",
            "mongodb-3-hhp4b: Created container with docker id cd96c130e1eb; Security:[seccomp=unconfined]",
            "nginx-deployment-4087004473-rdzh3: Created container with docker id dc3dd30dcd4b; Security:[seccomp=unconfined]",
            "nginx-deployment-4087004473-rdzh3: Created container with docker id b4f0bdb29b23; Security:[seccomp=unconfined]",
            "cakephp-mysql-persistent-3-mpng2: Created container with docker id ec3f3e23b205; Security:[seccomp=unconfined]"
         ]
      },
      "Started":{
         "Normal":[
            "mongodb-3-hhp4b: Started container with docker id 86c76c4edc01",
            "mongodb-3-hhp4b: Started container with docker id cd96c130e1eb",
                "nginx-deployment-4087004473-rdzh3: Started container with docker id 534bd528bf1d",
            "cakephp-mysql-persistent-3-mpng2: Started container with docker id ec3f3e23b205"
         ]
      },
      "Pulling":{
         "Normal":[
            "database-1-hook-post: pulling image \"centos/mysql-56-centos7@sha256:f8603dadddf5dc3b4a46333a7c3d9c2496d1fbc1f77cced44fdd2f02732e0b77\"",
            "nodejs-mongo-persistent-7-0l38s: pulling image \"172.30.1.1:5000/myproject/nodejs-mongo-persistent@sha256:71ecf384a1cf9ee0bf4996b6861d3bc597eabb906f0aff18182510359c64e045\"",
            "cakephp-mysql-persistent-4-hook-pre: pulling image \"172.30.1.1:5000/my-cake/cakephp-mysql-persistent@sha256:1f6e92a928fba8081201bac3b6d14845c9270f960586db86dfba67b214e79837\""
         ]
      },
      "Unhealthy":{
         "Warning":[
            "mongodb-3-hhp4b: Readiness probe failed: sh: cannot set terminal process group (-1): Inappropriate ioctl for device\nsh: no job control in this shell\nMongoDB shell version: 3.2.10\nconnecting to: 127.0.0.1:27017/sampledb\n2017-03-15T19:57:06.044+0000 W NETWORK  [thread1] Failed to connect to 127.0.0.1:27017, reason: errno:111 Connection refused\n2017-03-15T19:57:06.044+0000 E QUERY    [thread1] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed :\nconnect@src/mongo/shell/mongo.js:231:14\n@(connect):1:6\n\nexception: connect failed\n",
            "nodejs-mongo-persistent-7-0l38s: Liveness probe failed: Get http://172.17.0.11:8080/pagecount: net/http: request canceled (Client.Timeout exceeded while awaiting headers)",
            "mongodb-2-9t4w8: Liveness probe failed: dial tcp 172.17.0.5:27017: i/o timeout",
            "cakephp-mysql-persistent-3-mpng2: Liveness probe failed: Get http://172.17.0.4:8080/: net/http: request canceled (Client.Timeout exceeded while awaiting headers)",
            "cakephp-mysql-persistent-3-mpng2: Readiness probe failed: Get http://172.17.0.4:8080/health.php: net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
         ]
      },
      "FailedSync":{
         "Warning":[
            "mongodb-3-hhp4b: Error syncing pod, skipping: failed to \"StartContainer\" for \"mongodb\" with CrashLoopBackOff: \"Back-off 5m0s restarting failed container=mongodb pod=mongodb-3-hhp4b_myproject(48655a11-0992-11e7-9dc7-080027242396)\"\n",
            "nginx-deployment-4087004473-3p3b1: Error syncing pod, skipping: failed to \"StartContainer\" for \"nginx\" with CrashLoopBackOff: \"Back-off 5m0s restarting failed container=nginx pod=nginx-deployment-4087004473-3p3b1_nginx(cfc57632-0506-11e7-9dc7-080027242396)\"\n",
            "nginx-deployment-4087004473-75n3m: Error syncing pod, skipping: failed to \"StartContainer\" for \"nginx\" with CrashLoopBackOff: \"Back-off 5m0s restarting failed container=nginx pod=nginx-deployment-4087004473-75n3m_nginx(cfc5f171-0506-11e7-9dc7-080027242396)\"\n",
            "nginx-deployment-4087004473-rdzh3: Error syncing pod, skipping: failed to \"StartContainer\" for \"nginx\" with CrashLoopBackOff: \"Back-off 5m0s restarting failed container=nginx pod=nginx-deployment-4087004473-rdzh3_nginx(cfc5b385-0506-11e7-9dc7-080027242396)\"\n",
            "nginx-deployment-4087004473-rdzh3: Error syncing pod, skipping: failed to \"StartContainer\" for \"nginx\" with RunContainerError: \"runContainer: Error response from daemon: open /dev/mapper/docker-253:0-1311017-81c11895cc77fb4c4edef6d8ddc94f96a34855f095b675d1958d3c5ea0da241f: no such file or directory\"\n"
         ]
      },
      "BackOff":{
         "Warning":[
            "mongodb-3-hhp4b: Back-off restarting failed docker container",
            "nginx-deployment-4087004473-75n3m: Back-off restarting failed docker container",
            "nginx-deployment-4087004473-rdzh3: Back-off restarting failed docker container"
         ]
      },
      "FailedMount":{
         "Warning":[
            "mongodb-2-lpdw7: Unable to mount volumes for pod \"mongodb-2-lpdw7_myproject(f9ee9b3f-0991-11e7-9dc7-080027242396)\": timeout expired waiting for volumes to attach/mount for pod \"myproject\"/\"mongodb-2-lpdw7\". list of unattached/unmounted volumes=[volume-akvyv]",
            "mysql-2-d95lc: Unable to mount volumes for pod \"mysql-2-d95lc_my-cake(3bc7ca39-08f5-11e7-9dc7-080027242396)\": timeout expired waiting for volumes to attach/mount for pod \"my-cake\"/\"mysql-2-d95lc\". list of unattached/unmounted volumes=[mysql-data]"
         ]
      },
      "Failed":{
         "Warning":[
            "nginx-deployment-4087004473-rdzh3: Failed to start container with docker id ffbcb7608dcc with error: Error response from daemon: open /dev/mapper/docker-253:0-1311017-81c11895cc77fb4c4edef6d8ddc94f96a34855f095b675d1958d3c5ea0da241f: no such file or directory"
         ]
      }
   },
   "ReplicationController":{
      "SuccessfulDelete":{
         "Normal":[
            "mongodb-2: Deleted pod: mongodb-2-9t4w8",
            "nodejs-mongo-persistent-8: Deleted pod: nodejs-mongo-persistent-8-70n1j",
            "mysql-1: Deleted pod: mysql-1-401dd",
            "mysql-2: Deleted pod: mysql-2-d95lc"
         ]
      },
      "SuccessfulCreate":{
         "Normal":[
            "database-1: Created pod: database-1-0tdk7",
            "nodejs-mongo-persistent-9: Created pod: nodejs-mongo-persistent-9-3m1kp",
            "mysql-2: Created pod: mysql-2-d95lc",
            "mysql-1: Created pod: mysql-1-fmksr"
         ]
      }
   },
   "DeploymentConfig":{
      "DeploymentCreated":{
         "Normal":[
                     "jenkins: Created new replication controller \"jenkins-1\" for version 1",
            "nodejs-mongo-persistent: Created new replication controller \"nodejs-mongo-persistent-9\" for version 9",
            "mysql: Created new replication controller \"mysql-2\" for version 2",
            "cakephp-mysql-persistent: Created new replication controller \"cakephp-mysql-persistent-4\" for version 4"
         ]
      },
      "Started":{
         "Normal":[
            "database: Running post-hook (\"/bin/true\") for rc myproject/database-1",
            "database: Running mid-hook (\"/bin/true\") for rc myproject/database-1",
            "database: Running pre-hook (\"/bin/true\") for rc myproject/database-1",
            "cakephp-mysql-persistent: Running pre-hook (\"./migrate-database.sh\") for rc my-cake/cakephp-mysql-persistent-4"
         ]
      },
      "ReplicationControllerScaled":{
         "Normal":[
            "mongodb: Scaled replication controller \"mongodb-2\" from 1 to 2",
            "mysql: Scaled replication controller \"mysql-2\" from 1 to 0",
            "mysql: Scaled replication controller \"mysql-1\" from 0 to 1"
         ]
      },
      "Completed":{
         "Normal":[
            "database: The post-hook for rc myproject/database-1 completed successfully",
            "database: The mid-hook for rc myproject/database-1 completed successfully",
            "database: The pre-hook for rc myproject/database-1 completed successfully"
         ]
      }
   },
   "HorizontalPodAutoscaler":{
      "FailedGetScale":{
         "Warning":[
            "nginx-deployment: User \"system:serviceaccount:openshift-infra:hpa-controller\" cannot get extensions.deployments/scale in project \"nginx\""
         ]
      }
   },
   "Deployment":{
      "ScalingReplicaSet":{
         "Normal":[
            "nginx-deployment: Scaled up replica set nginx-deployment-2639649840 to 2",
            "nginx-deployment: Scaled down replica set nginx-deployment-4087004473 to 2",
            "nginx-deployment: Scaled up replica set nginx-deployment-2639649840 to 1"
         ]
      }
   },
   "BuildConfig":{
      "BuildConfigInstantiateFailed":{
         "Warning":[
            "nodejs-mongo-persistent: error instantiating Build from BuildConfig myproject/nodejs-mongo-persistent: builds \"nodejs-mongo-persistent-1\" already exists"
         ]
      }
   }
}
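
A grouping like the JSON above can be rebuilt with a few nested maps. A sketch, assuming events have already been fetched and reduced to the four fields used for grouping (involved object kind, reason, event type, message):

package main

import (
	"encoding/json"
	"fmt"
)

// Event holds the fields used for grouping; they mirror the
// involvedObject.kind, reason, type, and message fields of a
// Kubernetes Event object.
type Event struct {
	Kind    string
	Reason  string
	Type    string // "Normal" or "Warning"
	Message string
}

// groupEvents produces the kind -> reason -> type -> []message
// nesting used in the JSON dump above.
func groupEvents(events []Event) map[string]map[string]map[string][]string {
	out := map[string]map[string]map[string][]string{}
	for _, e := range events {
		if out[e.Kind] == nil {
			out[e.Kind] = map[string]map[string][]string{}
		}
		if out[e.Kind][e.Reason] == nil {
			out[e.Kind][e.Reason] = map[string][]string{}
		}
		out[e.Kind][e.Reason][e.Type] = append(out[e.Kind][e.Reason][e.Type], e.Message)
	}
	return out
}

func main() {
	// One event taken from the dump above.
	grouped := groupEvents([]Event{
		{"Pod", "Scheduled", "Normal", "mysql-1-fmksr: Successfully assigned mysql-1-fmksr to 10.0.2.15"},
	})
	b, _ := json.MarshalIndent(grouped, "", "   ")
	fmt.Println(string(b))
}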
