astefanutti commented on pull request #2292:
URL: https://github.com/apache/camel-k/pull/2292#issuecomment-842925048


   > From my testing cases (routes in Java DSL that succeed in build but fail 
at runtime), see below:
   > 
   > > * Could the Integration status be reconciled from the delegating 
controller, i.e., Deployment, KnativeService or CronJob, rather than going down 
to the Pods level?
   > 
   > I don't think it is. Is there an example of this delegating controller so 
I can take a look at it?
   
   To turn an Integration into Pods, Camel K creates either a Deployment, a 
KnativeService, or a CronJob, depending on the deployment strategy. These are 
the controllers that manage the Integration Pods, and they already take care 
of aggregating the Pod statuses into the Deployment (resp. KnativeService or 
CronJob) status. I would like to make sure we don't reinvent the wheel of 
reconciling Pod statuses if the Deployment (or the other primitives) already 
provides an aggregated status.
   
   > 
   > > * If getting the Pods statuses is the only option, I don't think the 
Integration controller currently watches for Pods changes, so updates may be 
missed;
   > 
   > I found that the running pod's status is the way to inspect the reason 
why the pod failed to start.
   > The integration_controller.go doesn't watch pods, but the monitor_pod.go 
checks whether the pod is properly running to set the integration's running 
status.
   
   The action is called by the controller, so `monitor_pod.go` is invoked for 
events related to the other resources that are watched by 
`integration_controller.go`.
   
   > 
   > > * I'd suggest to call the status update logic from the existing 
`monitor` action, as it's already responsible for reconciling the Integration 
status when it's running, and generally action are bound to phases, not 
resources;
   > 
   > At least looking at how `build/monitor_pod` works, it was my instinct to 
have a similar dedicated and specific monitor_pod.go action. The `CanHandle` 
in monitor_pod.go runs when the integration is in the error state too, so I 
think it should be in its own action.
   
   The main difference with `build/monitor_pod` is that the Build controller 
creates the Pod, while the Integration controller creates a Deployment, a 
KnativeService, or a CronJob, and the `monitor.go` action is responsible for 
reconciling their statuses.
   
   > 
   > > * Generally, the `Error` state is unrecoverable, while 
`CrashLoopBackOff` may recover (which explains why the Pod is still in the 
running phase): should the `CrashLoopBackOff` container state and the 
`PodFailed` phase be handled differently?
   > 
   > While testing I noticed the pod can be in either `CrashLoopBackOff` or 
`Error`, that's why the monitor_pod checks both.
   
   I understand you made the decision to map both the `CrashLoopBackOff` and 
`Error` Pod statuses into the Integration `Error` phase. In the Pod state 
machine, `CrashLoopBackOff` is more of a container condition, and the Pod 
stays in the `Running` phase because retries occur. Should we make a closer 
mapping between Pod and Integration phases?
   
   > 
   > > * Ideally, it'd be great to have an e2e test.
   > 
   > Sure, the tests are coming.
   
   Great!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

