Script 'mail_helper' called by obssrc
Hello community,

here is the log from the commit of package nelm for openSUSE:Factory checked in 
at 2025-08-19 16:45:30
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/nelm (Old)
 and      /work/SRC/openSUSE:Factory/.nelm.new.1085 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "nelm"

Tue Aug 19 16:45:30 2025 rev:16 rq:1300197 version:1.12.2

Changes:
--------
--- /work/SRC/openSUSE:Factory/nelm/nelm.changes        2025-08-04 15:22:53.925107548 +0200
+++ /work/SRC/openSUSE:Factory/.nelm.new.1085/nelm.changes      2025-08-19 16:47:12.541585234 +0200
@@ -1,0 +2,35 @@
+Tue Aug 19 05:38:33 UTC 2025 - Johannes Kastl <opensuse_buildserv...@ojkastl.de>
+
+- Update to version 1.12.2:
+  * Bug Fixes
+    - release namespace deletes after stopping being part of a
+      release (2bba22b)
+
+-------------------------------------------------------------------
+Tue Aug 19 05:35:57 UTC 2025 - Johannes Kastl <opensuse_buildserv...@ojkastl.de>
+
+- Update to version 1.12.1:
+  * Bug Fixes
+    - error "werf.io/show-logs-only-for-containers", expected
+      integer value (209bd1c)
+
+-------------------------------------------------------------------
+Tue Aug 19 05:32:51 UTC 2025 - Johannes Kastl <opensuse_buildserv...@ojkastl.de>
+
+- Update to version 1.12.0:
+  * Features
+    - display logs only from 1 replica by default (configured with
+      annotation werf.io/show-logs-only-for-number-of-replicas)
+      (47072bf)
+
+-------------------------------------------------------------------
+Tue Aug 19 05:24:32 UTC 2025 - Johannes Kastl <opensuse_buildserv...@ojkastl.de>
+
+- Update to version 1.11.0:
+  * Features
+    - greatly decrease Kubernetes apiserver load (7afe7ad)
+  * Bug Fixes
+    - panic "rules file not valid" (075b8e0)
+    - panic "validate rules file" (6bb4e3b)
+
+-------------------------------------------------------------------

Old:
----
  nelm-1.10.0.obscpio

New:
----
  nelm-1.12.2.obscpio

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ nelm.spec ++++++
--- /var/tmp/diff_new_pack.fe7kxB/_old  2025-08-19 16:47:13.653631537 +0200
+++ /var/tmp/diff_new_pack.fe7kxB/_new  2025-08-19 16:47:13.657631703 +0200
@@ -17,7 +17,7 @@
 
 
 Name:           nelm
-Version:        1.10.0
+Version:        1.12.2
 Release:        0
 Summary:        Helm 3 alternative
 License:        Apache-2.0

++++++ _service ++++++
--- /var/tmp/diff_new_pack.fe7kxB/_old  2025-08-19 16:47:13.689633036 +0200
+++ /var/tmp/diff_new_pack.fe7kxB/_new  2025-08-19 16:47:13.697633369 +0200
@@ -3,7 +3,7 @@
     <param name="url">https://github.com/werf/nelm</param>
     <param name="scm">git</param>
     <param name="exclude">.git</param>
-    <param name="revision">v1.10.0</param>
+    <param name="revision">v1.12.2</param>
     <param name="versionformat">@PARENT_TAG@</param>
     <param name="versionrewrite-pattern">v(.*)</param>
     <param name="changesgenerate">enable</param>

++++++ _servicedata ++++++
--- /var/tmp/diff_new_pack.fe7kxB/_old  2025-08-19 16:47:13.733634868 +0200
+++ /var/tmp/diff_new_pack.fe7kxB/_new  2025-08-19 16:47:13.737635035 +0200
@@ -1,6 +1,6 @@
 <servicedata>
 <service name="tar_scm">
                 <param name="url">https://github.com/werf/nelm</param>
-              <param name="changesrevision">f74daf684359e9b6dd3026920ec93b20129ca85e</param></service></servicedata>
+              <param name="changesrevision">f26d944597598dbbadbdc409823e5b7d1d3be685</param></service></servicedata>
 (No newline at EOF)
 

++++++ nelm-1.10.0.obscpio -> nelm-1.12.2.obscpio ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/nelm-1.10.0/CHANGELOG.md new/nelm-1.12.2/CHANGELOG.md
--- old/nelm-1.10.0/CHANGELOG.md        2025-08-01 14:08:23.000000000 +0200
+++ new/nelm-1.12.2/CHANGELOG.md        2025-08-15 13:10:21.000000000 +0200
@@ -1,5 +1,39 @@
 # Changelog
 
+### [1.12.2](https://www.github.com/werf/nelm/compare/v1.12.1...v1.12.2) 
(2025-08-15)
+
+
+### Bug Fixes
+
+* release namespace deletes after stopping being part of a release 
([2bba22b](https://www.github.com/werf/nelm/commit/2bba22bf08f5e14f8b5851e23ab16425698d17d6))
+
+### [1.12.1](https://www.github.com/werf/nelm/compare/v1.12.0...v1.12.1) 
(2025-08-14)
+
+
+### Bug Fixes
+
+* error `"werf.io/show-logs-only-for-containers", expected integer value` 
([209bd1c](https://www.github.com/werf/nelm/commit/209bd1c5ae9201426cb047435cec2fdaf5cfae48))
+
+## [1.12.0](https://www.github.com/werf/nelm/compare/v1.11.0...v1.12.0) 
(2025-08-13)
+
+
+### Features
+
+* display logs only from 1 replica by default (configured with annotation 
`werf.io/show-logs-only-for-number-of-replicas`) 
([47072bf](https://www.github.com/werf/nelm/commit/47072bf102d4a366f3ee00bed07182296c4cece8))
+
+## [1.11.0](https://www.github.com/werf/nelm/compare/v1.10.0...v1.11.0) 
(2025-08-08)
+
+
+### Features
+
+* greatly decrease Kubernetes apiserver load 
([7afe7ad](https://www.github.com/werf/nelm/commit/7afe7ad1b5a9ec0b3c5301f5d51b82d5d51f947e))
+
+
+### Bug Fixes
+
+* panic "rules file not valid" 
([075b8e0](https://www.github.com/werf/nelm/commit/075b8e0f658ce708d140c4c07a6668b74bb6ec21))
+* panic "validate rules file" 
([6bb4e3b](https://www.github.com/werf/nelm/commit/6bb4e3b43b8582a08584b588cd0d7babf88819c8))
+
 ## [1.10.0](https://www.github.com/werf/nelm/compare/v1.9.0...v1.10.0) 
(2025-08-01)
 
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/nelm-1.10.0/README.md new/nelm-1.12.2/README.md
--- old/nelm-1.10.0/README.md   2025-08-01 14:08:23.000000000 +0200
+++ new/nelm-1.12.2/README.md   2025-08-15 13:10:21.000000000 +0200
@@ -48,6 +48,7 @@
     - [Annotation `werf.io/log-regex-skip`](#annotation-werfiolog-regex-skip)
     - [Annotation `werf.io/skip-logs`](#annotation-werfioskip-logs)
     - [Annotation 
`werf.io/skip-logs-for-containers`](#annotation-werfioskip-logs-for-containers)
+    - [Annotation 
`werf.io/show-logs-only-for-number-of-replicas`](#annotation-werfioshow-logs-only-for-number-of-replicas)
     - [Annotation 
`werf.io/show-logs-only-for-containers`](#annotation-werfioshow-logs-only-for-containers)
     - [Annotation 
`werf.io/show-service-messages`](#annotation-werfioshow-service-messages)
     - [Function `werf_secret_file`](#function-werf_secret_file)
@@ -513,6 +514,14 @@
 
 Don't print logs for specified containers during resource tracking.
 
+#### Annotation `werf.io/show-logs-only-for-number-of-replicas`
+
+Format: `<any positive number or zero>` \
+Default: `1` \
+Example: `werf.io/show-logs-only-for-number-of-replicas: "999"`
+
+Print logs only for the specified number of replicas during resource tracking. 
We print logs only for a single replica by default to avoid excessive log 
output and to optimize resource usage.
+
 #### Annotation `werf.io/show-logs-only-for-containers`
 
 Format: `<container_name>[,<container_name>...]` \
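
The hunk above documents the new werf.io/show-logs-only-for-number-of-replicas annotation added in 1.12.0. Since it is ordinary resource metadata, it can be set from a chart template or from Go code; here is a minimal sketch using the standard client-go types (the deployment name, the namespace and the value "3" are invented for illustration — only the annotation key and its documented default of 1 come from the README):

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// buildDeployment returns a Deployment annotated so that nelm's resource
// tracking shows logs for up to three replicas instead of the default one.
func buildDeployment() *appsv1.Deployment {
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "backend",
			Namespace: "production",
			Annotations: map[string]string{
				"werf.io/show-logs-only-for-number-of-replicas": "3",
			},
		},
	}
}

func main() {
	d := buildDeployment()
	fmt.Println(d.Annotations["werf.io/show-logs-only-for-number-of-replicas"])
}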
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/nelm-1.10.0/go.mod new/nelm-1.12.2/go.mod
--- old/nelm-1.10.0/go.mod      2025-08-01 14:08:23.000000000 +0200
+++ new/nelm-1.12.2/go.mod      2025-08-15 13:10:21.000000000 +0200
@@ -1,6 +1,6 @@
 module github.com/werf/nelm
 
-go 1.23
+go 1.23.0
 
 require (
        github.com/Masterminds/semver/v3 v3.3.1
@@ -38,7 +38,7 @@
        github.com/wI2L/jsondiff v0.5.0
        github.com/werf/3p-helm v0.0.0-20250731134240-58a9eff8ec5b
        github.com/werf/common-go v0.0.0-20250520111308-b0eda28dde0d
-       github.com/werf/kubedog v0.13.1-0.20250801120242-28c356abdc84
+       github.com/werf/kubedog v0.13.1-0.20250813095923-12d70b6780b0
        github.com/werf/lockgate v0.1.1
        github.com/werf/logboek v0.6.1
        github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/nelm-1.10.0/go.sum new/nelm-1.12.2/go.sum
--- old/nelm-1.10.0/go.sum      2025-08-01 14:08:23.000000000 +0200
+++ new/nelm-1.12.2/go.sum      2025-08-15 13:10:21.000000000 +0200
@@ -419,8 +419,8 @@
 github.com/werf/3p-helm v0.0.0-20250731134240-58a9eff8ec5b/go.mod 
h1:KDjmOsjFiOmj0fB0+q+0gGvlejPMjTgckLC59bX0BLg=
 github.com/werf/common-go v0.0.0-20250520111308-b0eda28dde0d 
h1:nVN0E4lQdToFiPty19uwj5TF+bCI/kAp5LLG4stWdO4=
 github.com/werf/common-go v0.0.0-20250520111308-b0eda28dde0d/go.mod 
h1:taKDUxKmGfqNOlVx1O0ad5vdV4duKexTLO7Rch9HfeA=
-github.com/werf/kubedog v0.13.1-0.20250801120242-28c356abdc84 
h1:1pdu8pC+Gyen2k59eEin/3lxK7z+qoV1vHt8iU3U9KQ=
-github.com/werf/kubedog v0.13.1-0.20250801120242-28c356abdc84/go.mod 
h1:Y6pesrIN5uhFKqmHnHSoeW4jmVyZlWPFWv5SjB0rUPg=
+github.com/werf/kubedog v0.13.1-0.20250813095923-12d70b6780b0 
h1:E7odWm4YBrYee/g9UyDVJ++C6xrKq2orNrkLiNrrN9k=
+github.com/werf/kubedog v0.13.1-0.20250813095923-12d70b6780b0/go.mod 
h1:gu4EY4hxtiYVDy5o6WE2lRZS0YWqrOV0HS//GTYyrUE=
 github.com/werf/lockgate v0.1.1 h1:S400JFYjtWfE4i4LY9FA8zx0fMdfui9DPrBiTciCrx4=
 github.com/werf/lockgate v0.1.1/go.mod 
h1:0yIFSLq9ausy6ejNxF5uUBf/Ib6daMAfXuCaTMZJzIE=
 github.com/werf/logboek v0.6.1 h1:oEe6FkmlKg0z0n80oZjLplj6sXcBeLleCkjfOOZEL2g=
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/nelm-1.10.0/internal/plan/deploy_failure_plan_builder.go 
new/nelm-1.12.2/internal/plan/deploy_failure_plan_builder.go
--- old/nelm-1.10.0/internal/plan/deploy_failure_plan_builder.go        
2025-08-01 14:08:23.000000000 +0200
+++ new/nelm-1.12.2/internal/plan/deploy_failure_plan_builder.go        
2025-08-15 13:10:21.000000000 +0200
@@ -9,6 +9,7 @@
        "k8s.io/apimachinery/pkg/api/meta"
        "k8s.io/client-go/dynamic"
 
+       "github.com/werf/kubedog/pkg/informer"
        "github.com/werf/kubedog/pkg/trackers/dyntracker/statestore"
        kdutil "github.com/werf/kubedog/pkg/trackers/dyntracker/util"
        "github.com/werf/nelm/internal/common"
@@ -25,6 +26,7 @@
        deployType common.DeployType,
        deployPlan *Plan,
        taskStore *statestore.TaskStore,
+       informerFactory *kdutil.Concurrent[*informer.InformerFactory],
        hookResourcesInfos []*info.DeployableHookResourceInfo,
        generalResourceInfos []*info.DeployableGeneralResourceInfo,
        history release.Historier,
@@ -40,6 +42,7 @@
                releaseNamespace:     releaseNamespace,
                deployType:           deployType,
                taskStore:            taskStore,
+               informerFactory:      informerFactory,
                hookResourceInfos:    hookResourcesInfos,
                generalResourceInfos: generalResourceInfos,
                newRelease:           opts.NewRelease,
@@ -65,6 +68,7 @@
        releaseNamespace     string
        deployType           common.DeployType
        taskStore            *statestore.TaskStore
+       informerFactory      *kdutil.Concurrent[*informer.InformerFactory]
        hookResourceInfos    []*info.DeployableHookResourceInfo
        generalResourceInfos []*info.DeployableGeneralResourceInfo
        newRelease           *release.Release
@@ -143,6 +147,7 @@
                trackDeletionOp := operation.NewTrackResourceAbsenceOperation(
                        info.ResourceID,
                        taskState,
+                       b.informerFactory,
                        b.dynamicClient,
                        b.mapper,
                        operation.TrackResourceAbsenceOperationOptions{
@@ -188,6 +193,7 @@
                trackDeletionOp := operation.NewTrackResourceAbsenceOperation(
                        info.ResourceID,
                        taskState,
+                       b.informerFactory,
                        b.dynamicClient,
                        b.mapper,
                        operation.TrackResourceAbsenceOperationOptions{
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/nelm-1.10.0/internal/plan/deploy_plan_builder.go 
new/nelm-1.12.2/internal/plan/deploy_plan_builder.go
--- old/nelm-1.10.0/internal/plan/deploy_plan_builder.go        2025-08-01 
14:08:23.000000000 +0200
+++ new/nelm-1.12.2/internal/plan/deploy_plan_builder.go        2025-08-15 
13:10:21.000000000 +0200
@@ -16,6 +16,7 @@
        "k8s.io/client-go/dynamic"
        "k8s.io/client-go/kubernetes"
 
+       "github.com/werf/kubedog/pkg/informer"
        "github.com/werf/kubedog/pkg/trackers/dyntracker/logstore"
        "github.com/werf/kubedog/pkg/trackers/dyntracker/statestore"
        kdutil "github.com/werf/kubedog/pkg/trackers/dyntracker/util"
@@ -59,6 +60,7 @@
        deployType common.DeployType,
        taskStore *statestore.TaskStore,
        logStore *kdutil.Concurrent[*logstore.LogStore],
+       informerFactory *kdutil.Concurrent[*informer.InformerFactory],
        standaloneCRDsInfos []*info.DeployableStandaloneCRDInfo,
        hookResourcesInfos []*info.DeployableHookResourceInfo,
        generalResourcesInfos []*info.DeployableGeneralResourceInfo,
@@ -121,6 +123,7 @@
                ignoreLogs:                      opts.IgnoreLogs,
                taskStore:                       taskStore,
                logStore:                        logStore,
+               informerFactory:                 informerFactory,
                deployType:                      deployType,
                plan:                            plan,
                releaseNamespace:                releaseNamespace,
@@ -159,6 +162,7 @@
        ignoreLogs                      bool
        taskStore                       *statestore.TaskStore
        logStore                        *kdutil.Concurrent[*logstore.LogStore]
+       informerFactory                 
*kdutil.Concurrent[*informer.InformerFactory]
        releaseNamespace                string
        deployType                      common.DeployType
        standaloneCRDsInfos             []*info.DeployableStandaloneCRDInfo
@@ -432,6 +436,7 @@
                        opTrackDeletion := 
operation.NewTrackResourceAbsenceOperation(
                                info.ResourceID,
                                taskState,
+                               b.informerFactory,
                                b.dynamicClient,
                                b.mapper,
                                operation.TrackResourceAbsenceOperationOptions{
@@ -730,6 +735,7 @@
                                info.ResourceID,
                                info.Resource().Unstructured(),
                                absenceTaskState,
+                               b.informerFactory,
                                b.kubeClient,
                                b.dynamicClient,
                                b.mapper,
@@ -817,6 +823,7 @@
                                opTrackReadiness := 
operation.NewTrackResourcePresenceOperation(
                                        dep.ResourceID,
                                        taskState,
+                                       b.informerFactory,
                                        b.dynamicClient,
                                        b.mapper,
                                        
operation.TrackResourcePresenceOperationOptions{
@@ -837,6 +844,7 @@
                if trackReadiness {
                        logRegex, _ := info.Resource().LogRegex()
                        logRegexesFor, _ := 
info.Resource().LogRegexesForContainers()
+                       showLogsOnlyForNumberOfReplicas := 
info.Resource().ShowLogsOnlyForNumberOfReplicas()
                        skipLogsFor, _ := 
info.Resource().SkipLogsForContainers()
                        showLogsOnlyFor, _ := 
info.Resource().ShowLogsOnlyForContainers()
                        ignoreReadinessProbes, _ := 
info.Resource().IgnoreReadinessProbeFailsForContainers()
@@ -856,6 +864,7 @@
                        opTrackReadiness = 
operation.NewTrackResourceReadinessOperation(
                                info.ResourceID,
                                taskState,
+                               b.informerFactory,
                                b.logStore,
                                b.staticClient,
                                b.dynamicClient,
@@ -865,6 +874,7 @@
                                        Timeout:                                
  b.readinessTimeout,
                                        NoActivityTimeout:                      
  noActivityTimeout,
                                        
IgnoreReadinessProbeFailsByContainerName: ignoreReadinessProbes,
+                                       SaveLogsOnlyForNumberOfReplicas:        
  showLogsOnlyForNumberOfReplicas,
                                        SaveLogsOnlyForContainers:              
  showLogsOnlyFor,
                                        SaveLogsByRegex:                        
  logRegex,
                                        SaveLogsByRegexForContainers:           
  logRegexesFor,
@@ -926,6 +936,7 @@
                        opTrackDeletion := 
operation.NewTrackResourceAbsenceOperation(
                                info.ResourceID,
                                taskState,
+                               b.informerFactory,
                                b.dynamicClient,
                                b.mapper,
                                operation.TrackResourceAbsenceOperationOptions{
@@ -987,6 +998,7 @@
                                info.ResourceID,
                                info.Resource().Unstructured(),
                                absenceTaskState,
+                               b.informerFactory,
                                b.kubeClient,
                                b.dynamicClient,
                                b.mapper,
@@ -1071,6 +1083,7 @@
                                opTrackReadiness := 
operation.NewTrackResourcePresenceOperation(
                                        dep.ResourceID,
                                        taskState,
+                                       b.informerFactory,
                                        b.dynamicClient,
                                        b.mapper,
                                        
operation.TrackResourcePresenceOperationOptions{
@@ -1091,6 +1104,7 @@
                if trackReadiness {
                        logRegex, _ := info.Resource().LogRegex()
                        logRegexesFor, _ := 
info.Resource().LogRegexesForContainers()
+                       showLogsOnlyForNumberOfReplicas := 
info.Resource().ShowLogsOnlyForNumberOfReplicas()
                        skipLogsFor, _ := 
info.Resource().SkipLogsForContainers()
                        showLogsOnlyFor, _ := 
info.Resource().ShowLogsOnlyForContainers()
                        ignoreReadinessProbes, _ := 
info.Resource().IgnoreReadinessProbeFailsForContainers()
@@ -1110,6 +1124,7 @@
                        opTrackReadiness = 
operation.NewTrackResourceReadinessOperation(
                                info.ResourceID,
                                taskState,
+                               b.informerFactory,
                                b.logStore,
                                b.staticClient,
                                b.dynamicClient,
@@ -1119,6 +1134,7 @@
                                        Timeout:                                
  b.readinessTimeout,
                                        NoActivityTimeout:                      
  noActivityTimeout,
                                        
IgnoreReadinessProbeFailsByContainerName: ignoreReadinessProbes,
+                                       SaveLogsOnlyForNumberOfReplicas:        
  showLogsOnlyForNumberOfReplicas,
                                        SaveLogsOnlyForContainers:              
  showLogsOnlyFor,
                                        SaveLogsByRegex:                        
  logRegex,
                                        SaveLogsByRegexForContainers:           
  logRegexesFor,
@@ -1178,6 +1194,7 @@
                        opTrackDeletion := 
operation.NewTrackResourceAbsenceOperation(
                                info.ResourceID,
                                taskState,
+                               b.informerFactory,
                                b.dynamicClient,
                                b.mapper,
                                operation.TrackResourceAbsenceOperationOptions{
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/nelm-1.10.0/internal/plan/operation/recreate_resource_operation.go 
new/nelm-1.12.2/internal/plan/operation/recreate_resource_operation.go
--- old/nelm-1.10.0/internal/plan/operation/recreate_resource_operation.go      
2025-08-01 14:08:23.000000000 +0200
+++ new/nelm-1.12.2/internal/plan/operation/recreate_resource_operation.go      
2025-08-15 13:10:21.000000000 +0200
@@ -9,6 +9,7 @@
        "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
        "k8s.io/client-go/dynamic"
 
+       "github.com/werf/kubedog/pkg/informer"
        "github.com/werf/kubedog/pkg/trackers/dyntracker"
        "github.com/werf/kubedog/pkg/trackers/dyntracker/statestore"
        "github.com/werf/kubedog/pkg/trackers/dyntracker/util"
@@ -28,6 +29,7 @@
        resource *id.ResourceID,
        unstruct *unstructured.Unstructured,
        absenceTaskState *util.Concurrent[*statestore.AbsenceTaskState],
+       informerFactory *util.Concurrent[*informer.InformerFactory],
        kubeClient kube.KubeClienter,
        dynamicClient dynamic.Interface,
        mapper meta.ResettableRESTMapper,
@@ -37,6 +39,7 @@
                resource:                resource,
                unstruct:                unstruct,
                taskState:               absenceTaskState,
+               informerFactory:         informerFactory,
                kubeClient:              kubeClient,
                dynamicClient:           dynamicClient,
                mapper:                  mapper,
@@ -60,6 +63,7 @@
        resource                *id.ResourceID
        unstruct                *unstructured.Unstructured
        taskState               *util.Concurrent[*statestore.AbsenceTaskState]
+       informerFactory         *util.Concurrent[*informer.InformerFactory]
        kubeClient              kube.KubeClienter
        dynamicClient           dynamic.Interface
        mapper                  meta.ResettableRESTMapper
@@ -78,7 +82,7 @@
                return fmt.Errorf("error deleting resource: %w", err)
        }
 
-       tracker := dyntracker.NewDynamicAbsenceTracker(o.taskState, 
o.dynamicClient, o.mapper, dyntracker.DynamicAbsenceTrackerOptions{
+       tracker := dyntracker.NewDynamicAbsenceTracker(o.taskState, 
o.informerFactory, o.dynamicClient, o.mapper, 
dyntracker.DynamicAbsenceTrackerOptions{
                Timeout:    o.deletionTrackTimeout,
                PollPeriod: o.deletionTrackPollPeriod,
        })
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/nelm-1.10.0/internal/plan/operation/track_resource_absence.go 
new/nelm-1.12.2/internal/plan/operation/track_resource_absence.go
--- old/nelm-1.10.0/internal/plan/operation/track_resource_absence.go   
2025-08-01 14:08:23.000000000 +0200
+++ new/nelm-1.12.2/internal/plan/operation/track_resource_absence.go   
2025-08-15 13:10:21.000000000 +0200
@@ -8,6 +8,7 @@
        "k8s.io/apimachinery/pkg/api/meta"
        "k8s.io/client-go/dynamic"
 
+       "github.com/werf/kubedog/pkg/informer"
        "github.com/werf/kubedog/pkg/trackers/dyntracker"
        "github.com/werf/kubedog/pkg/trackers/dyntracker/statestore"
        "github.com/werf/kubedog/pkg/trackers/dyntracker/util"
@@ -21,17 +22,19 @@
 func NewTrackResourceAbsenceOperation(
        resource *id.ResourceID,
        taskState *util.Concurrent[*statestore.AbsenceTaskState],
+       informerFactory *util.Concurrent[*informer.InformerFactory],
        dynamicClient dynamic.Interface,
        mapper meta.ResettableRESTMapper,
        opts TrackResourceAbsenceOperationOptions,
 ) *TrackResourceAbsenceOperation {
        return &TrackResourceAbsenceOperation{
-               resource:      resource,
-               taskState:     taskState,
-               dynamicClient: dynamicClient,
-               mapper:        mapper,
-               timeout:       opts.Timeout,
-               pollPeriod:    opts.PollPeriod,
+               resource:        resource,
+               taskState:       taskState,
+               informerFactory: informerFactory,
+               dynamicClient:   dynamicClient,
+               mapper:          mapper,
+               timeout:         opts.Timeout,
+               pollPeriod:      opts.PollPeriod,
        }
 }
 
@@ -41,18 +44,19 @@
 }
 
 type TrackResourceAbsenceOperation struct {
-       resource      *id.ResourceID
-       taskState     *util.Concurrent[*statestore.AbsenceTaskState]
-       dynamicClient dynamic.Interface
-       mapper        meta.ResettableRESTMapper
-       timeout       time.Duration
-       pollPeriod    time.Duration
+       resource        *id.ResourceID
+       taskState       *util.Concurrent[*statestore.AbsenceTaskState]
+       informerFactory *util.Concurrent[*informer.InformerFactory]
+       dynamicClient   dynamic.Interface
+       mapper          meta.ResettableRESTMapper
+       timeout         time.Duration
+       pollPeriod      time.Duration
 
        status Status
 }
 
 func (o *TrackResourceAbsenceOperation) Execute(ctx context.Context) error {
-       tracker := dyntracker.NewDynamicAbsenceTracker(o.taskState, 
o.dynamicClient, o.mapper, dyntracker.DynamicAbsenceTrackerOptions{
+       tracker := dyntracker.NewDynamicAbsenceTracker(o.taskState, 
o.informerFactory, o.dynamicClient, o.mapper, 
dyntracker.DynamicAbsenceTrackerOptions{
                Timeout:    o.timeout,
                PollPeriod: o.pollPeriod,
        })
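
Threading informerFactory through the tracker constructors, as in the hunks above, lines up with the 1.11.0 changelog entry "greatly decrease Kubernetes apiserver load": a shared informer keeps one watch per resource type and fans events out to every consumer, instead of each tracked resource polling the apiserver. kubedog's informer package appears here only through its import path, so the following sketch shows the analogous pattern with plain client-go shared informers (the client setup, the 30s resync period and the pod handler are all illustrative):

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Standard kubeconfig loading; error handling trimmed for brevity.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// One shared factory: a single WATCH per resource type feeds every
	// consumer, instead of one poll loop per tracked resource.
	factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)

	podInformer := factory.Core().V1().Pods().Informer()
	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		DeleteFunc: func(obj interface{}) {
			// Absence tracking becomes an event callback rather than a poll.
			fmt.Println("pod deleted:", obj)
		},
	})

	stopCh := make(chan struct{})
	defer close(stopCh)
	factory.Start(stopCh)
	cache.WaitForCacheSync(stopCh, podInformer.HasSynced)

	time.Sleep(10 * time.Second) // a real tracker waits on its own done signal
}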
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/nelm-1.10.0/internal/plan/operation/track_resource_presence.go 
new/nelm-1.12.2/internal/plan/operation/track_resource_presence.go
--- old/nelm-1.10.0/internal/plan/operation/track_resource_presence.go  
2025-08-01 14:08:23.000000000 +0200
+++ new/nelm-1.12.2/internal/plan/operation/track_resource_presence.go  
2025-08-15 13:10:21.000000000 +0200
@@ -8,6 +8,7 @@
        "k8s.io/apimachinery/pkg/api/meta"
        "k8s.io/client-go/dynamic"
 
+       "github.com/werf/kubedog/pkg/informer"
        "github.com/werf/kubedog/pkg/trackers/dyntracker"
        "github.com/werf/kubedog/pkg/trackers/dyntracker/statestore"
        "github.com/werf/kubedog/pkg/trackers/dyntracker/util"
@@ -21,17 +22,19 @@
 func NewTrackResourcePresenceOperation(
        resource *id.ResourceID,
        taskState *util.Concurrent[*statestore.PresenceTaskState],
+       informerFactory *util.Concurrent[*informer.InformerFactory],
        dynamicClient dynamic.Interface,
        mapper meta.ResettableRESTMapper,
        opts TrackResourcePresenceOperationOptions,
 ) *TrackResourcePresenceOperation {
        return &TrackResourcePresenceOperation{
-               resource:      resource,
-               taskState:     taskState,
-               dynamicClient: dynamicClient,
-               mapper:        mapper,
-               timeout:       opts.Timeout,
-               pollPeriod:    opts.PollPeriod,
+               resource:        resource,
+               taskState:       taskState,
+               informerFactory: informerFactory,
+               dynamicClient:   dynamicClient,
+               mapper:          mapper,
+               timeout:         opts.Timeout,
+               pollPeriod:      opts.PollPeriod,
        }
 }
 
@@ -41,18 +44,19 @@
 }
 
 type TrackResourcePresenceOperation struct {
-       resource      *id.ResourceID
-       taskState     *util.Concurrent[*statestore.PresenceTaskState]
-       dynamicClient dynamic.Interface
-       mapper        meta.ResettableRESTMapper
-       timeout       time.Duration
-       pollPeriod    time.Duration
+       resource        *id.ResourceID
+       taskState       *util.Concurrent[*statestore.PresenceTaskState]
+       informerFactory *util.Concurrent[*informer.InformerFactory]
+       dynamicClient   dynamic.Interface
+       mapper          meta.ResettableRESTMapper
+       timeout         time.Duration
+       pollPeriod      time.Duration
 
        status Status
 }
 
 func (o *TrackResourcePresenceOperation) Execute(ctx context.Context) error {
-       tracker := dyntracker.NewDynamicPresenceTracker(o.taskState, 
o.dynamicClient, o.mapper, dyntracker.DynamicPresenceTrackerOptions{
+       tracker := dyntracker.NewDynamicPresenceTracker(o.taskState, 
o.informerFactory, o.dynamicClient, o.mapper, 
dyntracker.DynamicPresenceTrackerOptions{
                Timeout:    o.timeout,
                PollPeriod: o.pollPeriod,
        })
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/nelm-1.10.0/internal/plan/operation/track_resource_readiness_operation.go 
new/nelm-1.12.2/internal/plan/operation/track_resource_readiness_operation.go
--- 
old/nelm-1.10.0/internal/plan/operation/track_resource_readiness_operation.go   
    2025-08-01 14:08:23.000000000 +0200
+++ 
new/nelm-1.12.2/internal/plan/operation/track_resource_readiness_operation.go   
    2025-08-15 13:10:21.000000000 +0200
@@ -11,6 +11,7 @@
        "k8s.io/client-go/dynamic"
        "k8s.io/client-go/kubernetes"
 
+       "github.com/werf/kubedog/pkg/informer"
        "github.com/werf/kubedog/pkg/trackers/dyntracker"
        "github.com/werf/kubedog/pkg/trackers/dyntracker/logstore"
        "github.com/werf/kubedog/pkg/trackers/dyntracker/statestore"
@@ -25,6 +26,7 @@
 func NewTrackResourceReadinessOperation(
        resource *id.ResourceID,
        taskState *util.Concurrent[*statestore.ReadinessTaskState],
+       informerFactory *util.Concurrent[*informer.InformerFactory],
        logStore *util.Concurrent[*logstore.LogStore],
        staticClient kubernetes.Interface,
        dynamicClient dynamic.Interface,
@@ -35,6 +37,7 @@
        return &TrackResourceReadinessOperation{
                resource:                                 resource,
                taskState:                                taskState,
+               informerFactory:                          informerFactory,
                logStore:                                 logStore,
                staticClient:                             staticClient,
                dynamicClient:                            dynamicClient,
@@ -44,6 +47,7 @@
                noActivityTimeout:                        
opts.NoActivityTimeout,
                ignoreReadinessProbeFailsByContainerName: 
opts.IgnoreReadinessProbeFailsByContainerName,
                captureLogsFromTime:                      
opts.CaptureLogsFromTime,
+               saveLogsOnlyForNumberOfReplicas:          
opts.SaveLogsOnlyForNumberOfReplicas,
                saveLogsOnlyForContainers:                
opts.SaveLogsOnlyForContainers,
                saveLogsByRegex:                          opts.SaveLogsByRegex,
                saveLogsByRegexForContainers:             
opts.SaveLogsByRegexForContainers,
@@ -58,6 +62,7 @@
        NoActivityTimeout                        time.Duration
        IgnoreReadinessProbeFailsByContainerName map[string]time.Duration
        CaptureLogsFromTime                      time.Time
+       SaveLogsOnlyForNumberOfReplicas          int
        SaveLogsOnlyForContainers                []string
        SaveLogsByRegex                          *regexp.Regexp
        SaveLogsByRegexForContainers             map[string]*regexp.Regexp
@@ -69,6 +74,7 @@
 type TrackResourceReadinessOperation struct {
        resource                                 *id.ResourceID
        taskState                                
*util.Concurrent[*statestore.ReadinessTaskState]
+       informerFactory                          
*util.Concurrent[*informer.InformerFactory]
        logStore                                 
*util.Concurrent[*logstore.LogStore]
        staticClient                             kubernetes.Interface
        dynamicClient                            dynamic.Interface
@@ -78,6 +84,7 @@
        noActivityTimeout                        time.Duration
        ignoreReadinessProbeFailsByContainerName map[string]time.Duration
        captureLogsFromTime                      time.Time
+       saveLogsOnlyForNumberOfReplicas          int
        saveLogsOnlyForContainers                []string
        saveLogsByRegex                          *regexp.Regexp
        saveLogsByRegexForContainers             map[string]*regexp.Regexp
@@ -89,11 +96,12 @@
 }
 
 func (o *TrackResourceReadinessOperation) Execute(ctx context.Context) error {
-       tracker, err := dyntracker.NewDynamicReadinessTracker(ctx, o.taskState, 
o.logStore, o.staticClient, o.dynamicClient, o.discoveryClient, o.mapper, 
dyntracker.DynamicReadinessTrackerOptions{
+       tracker, err := dyntracker.NewDynamicReadinessTracker(ctx, o.taskState, 
o.logStore, o.informerFactory, o.staticClient, o.dynamicClient, 
o.discoveryClient, o.mapper, dyntracker.DynamicReadinessTrackerOptions{
                Timeout:                                  o.timeout,
                NoActivityTimeout:                        o.noActivityTimeout,
                IgnoreReadinessProbeFailsByContainerName: 
o.ignoreReadinessProbeFailsByContainerName,
                CaptureLogsFromTime:                      o.captureLogsFromTime,
+               SaveLogsOnlyForNumberOfReplicas:          
o.saveLogsOnlyForNumberOfReplicas,
                SaveLogsOnlyForContainers:                
o.saveLogsOnlyForContainers,
                SaveLogsByRegex:                          o.saveLogsByRegex,
                SaveLogsByRegexForContainers:             
o.saveLogsByRegexForContainers,
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/nelm-1.10.0/internal/plan/resourceinfo/deployable_general_resource_info.go 
new/nelm-1.12.2/internal/plan/resourceinfo/deployable_general_resource_info.go
--- 
old/nelm-1.10.0/internal/plan/resourceinfo/deployable_general_resource_info.go  
    2025-08-01 14:08:23.000000000 +0200
+++ 
new/nelm-1.12.2/internal/plan/resourceinfo/deployable_general_resource_info.go  
    2025-08-15 13:10:21.000000000 +0200
@@ -134,7 +134,7 @@
 }
 
 func (i *DeployableGeneralResourceInfo) ShouldKeepOnDelete(releaseName, 
releaseNamespace string) bool {
-       return i.resource.KeepOnDelete() || (i.exists && 
i.getResource.KeepOnDelete(releaseName, releaseNamespace))
+       return i.resource.KeepOnDelete(releaseNamespace) || (i.exists && 
i.getResource.KeepOnDelete(releaseName, releaseNamespace))
 }
 
 func (i *DeployableGeneralResourceInfo) ShouldTrackReadiness(prevRelFailed 
bool) bool {
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/nelm-1.10.0/internal/plan/resourceinfo/deployable_hook_resource_info.go 
new/nelm-1.12.2/internal/plan/resourceinfo/deployable_hook_resource_info.go
--- old/nelm-1.10.0/internal/plan/resourceinfo/deployable_hook_resource_info.go 
2025-08-01 14:08:23.000000000 +0200
+++ new/nelm-1.12.2/internal/plan/resourceinfo/deployable_hook_resource_info.go 
2025-08-15 13:10:21.000000000 +0200
@@ -133,7 +133,7 @@
 }
 
 func (i *DeployableHookResourceInfo) ShouldKeepOnDelete(releaseName, 
releaseNamespace string) bool {
-       return i.resource.KeepOnDelete() || (i.exists && 
i.getResource.KeepOnDelete(releaseName, releaseNamespace))
+       return i.resource.KeepOnDelete(releaseNamespace) || (i.exists && 
i.getResource.KeepOnDelete(releaseName, releaseNamespace))
 }
 
 func (i *DeployableHookResourceInfo) ShouldTrackReadiness(prevRelFailed bool) 
bool {
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/nelm-1.10.0/internal/plan/resourceinfo/deployable_prev_release_general_resource_info.go
 
new/nelm-1.12.2/internal/plan/resourceinfo/deployable_prev_release_general_resource_info.go
--- 
old/nelm-1.10.0/internal/plan/resourceinfo/deployable_prev_release_general_resource_info.go
 2025-08-01 14:08:23.000000000 +0200
+++ 
new/nelm-1.12.2/internal/plan/resourceinfo/deployable_prev_release_general_resource_info.go
 2025-08-15 13:10:21.000000000 +0200
@@ -59,7 +59,7 @@
 }
 
 func (i *DeployablePrevReleaseGeneralResourceInfo) 
ShouldKeepOnDelete(releaseName, releaseNamespace string) bool {
-       return i.resource.KeepOnDelete() || (i.exists && 
i.getResource.KeepOnDelete(releaseName, releaseNamespace))
+       return i.resource.KeepOnDelete(releaseNamespace) || (i.exists && 
i.getResource.KeepOnDelete(releaseName, releaseNamespace))
 }
 
 func (i *DeployablePrevReleaseGeneralResourceInfo) 
ShouldDelete(curReleaseExistingResourcesUIDs []types.UID, releaseName, 
releaseNamespace string, deployType common.DeployType) bool {
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/nelm-1.10.0/internal/plan/resourceinfo/deployable_prev_release_hook_resource_info.go
 
new/nelm-1.12.2/internal/plan/resourceinfo/deployable_prev_release_hook_resource_info.go
--- 
old/nelm-1.10.0/internal/plan/resourceinfo/deployable_prev_release_hook_resource_info.go
    2025-08-01 14:08:23.000000000 +0200
+++ 
new/nelm-1.12.2/internal/plan/resourceinfo/deployable_prev_release_hook_resource_info.go
    2025-08-15 13:10:21.000000000 +0200
@@ -133,7 +133,7 @@
 }
 
 func (i *DeployablePrevReleaseHookResourceInfo) 
ShouldKeepOnDelete(releaseName, releaseNamespace string) bool {
-       return i.resource.KeepOnDelete() || (i.exists && 
i.getResource.KeepOnDelete(releaseName, releaseNamespace))
+       return i.resource.KeepOnDelete(releaseNamespace) || (i.exists && 
i.getResource.KeepOnDelete(releaseName, releaseNamespace))
 }
 
 func (i *DeployablePrevReleaseHookResourceInfo) ShouldTrackReadiness() bool {
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/nelm-1.10.0/internal/plan/resourceinfo/deployable_resources_processor.go 
new/nelm-1.12.2/internal/plan/resourceinfo/deployable_resources_processor.go
--- 
old/nelm-1.10.0/internal/plan/resourceinfo/deployable_resources_processor.go    
    2025-08-01 14:08:23.000000000 +0200
+++ 
new/nelm-1.12.2/internal/plan/resourceinfo/deployable_resources_processor.go    
    2025-08-15 13:10:21.000000000 +0200
@@ -728,7 +728,7 @@
 
        for _, res := range resources {
                if res.GroupVersionKind() == (schema.GroupVersionKind{Kind: 
"Namespace", Version: "v1"}) && res.Name() == p.releaseNamespace {
-                       return fmt.Errorf("release namespace %q cannot be 
deployed as part of the release")
+                       return fmt.Errorf("release namespace %q cannot be 
deployed as part of the release", res.Name())
                }
        }
 
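The one-line fix above supplies the argument that the %q verb never had; before it, the error rendered with %!q(MISSING) in place of the namespace. go vet's printf check flags exactly this class of bug:

package main

import "fmt"

func main() {
	ns := "production"

	// Before the fix: %q has no matching argument, so the message prints as
	// `release namespace %!q(MISSING) cannot be deployed as part of the release`
	// (go vet: "Errorf format %q reads arg #1, but call has 0 args").
	fmt.Println(fmt.Errorf("release namespace %q cannot be deployed as part of the release"))

	// After the fix, the namespace is interpolated as intended.
	fmt.Println(fmt.Errorf("release namespace %q cannot be deployed as part of the release", ns))
}
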
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/nelm-1.10.0/internal/plan/uninstall_plan_builder.go 
new/nelm-1.12.2/internal/plan/uninstall_plan_builder.go
--- old/nelm-1.10.0/internal/plan/uninstall_plan_builder.go     2025-08-01 
14:08:23.000000000 +0200
+++ new/nelm-1.12.2/internal/plan/uninstall_plan_builder.go     2025-08-15 
13:10:21.000000000 +0200
@@ -15,6 +15,7 @@
        "k8s.io/client-go/dynamic"
        "k8s.io/client-go/kubernetes"
 
+       "github.com/werf/kubedog/pkg/informer"
        "github.com/werf/kubedog/pkg/trackers/dyntracker/logstore"
        "github.com/werf/kubedog/pkg/trackers/dyntracker/statestore"
        kdutil "github.com/werf/kubedog/pkg/trackers/dyntracker/util"
@@ -34,6 +35,7 @@
        releaseNamespace string,
        taskStore *statestore.TaskStore,
        logStore *kdutil.Concurrent[*logstore.LogStore],
+       informerFactory *kdutil.Concurrent[*informer.InformerFactory],
        prevReleaseHookResourceInfos 
[]*info.DeployablePrevReleaseHookResourceInfo,
        prevReleaseGeneralResourceInfos 
[]*info.DeployablePrevReleaseGeneralResourceInfo,
        prevRelease *release.Release,
@@ -63,6 +65,7 @@
        return &UninstallPlanBuilder{
                taskStore:                       taskStore,
                logStore:                        logStore,
+               informerFactory:                 informerFactory,
                plan:                            plan,
                releaseName:                     releaseName,
                releaseNamespace:                releaseNamespace,
@@ -94,6 +97,7 @@
        ignoreLogs                      bool
        taskStore                       *statestore.TaskStore
        logStore                        *kdutil.Concurrent[*logstore.LogStore]
+       informerFactory                 
*kdutil.Concurrent[*informer.InformerFactory]
        releaseName                     string
        releaseNamespace                string
        preHookResourcesInfos           
[]*info.DeployablePrevReleaseHookResourceInfo
@@ -290,6 +294,7 @@
                                info.ResourceID,
                                info.Resource().Unstructured(),
                                absenceTaskState,
+                               b.informerFactory,
                                b.kubeClient,
                                b.dynamicClient,
                                b.mapper,
@@ -377,6 +382,7 @@
                                opTrackReadiness := 
operation.NewTrackResourcePresenceOperation(
                                        dep.ResourceID,
                                        taskState,
+                                       b.informerFactory,
                                        b.dynamicClient,
                                        b.mapper,
                                        
operation.TrackResourcePresenceOperationOptions{
@@ -397,6 +403,7 @@
                if trackReadiness {
                        logRegex, _ := info.Resource().LogRegex()
                        logRegexesFor, _ := 
info.Resource().LogRegexesForContainers()
+                       showLogsOnlyForNumberOfReplicas := 
info.Resource().ShowLogsOnlyForNumberOfReplicas()
                        skipLogsFor, _ := 
info.Resource().SkipLogsForContainers()
                        showLogsOnlyFor, _ := 
info.Resource().ShowLogsOnlyForContainers()
                        ignoreReadinessProbes, _ := 
info.Resource().IgnoreReadinessProbeFailsForContainers()
@@ -416,6 +423,7 @@
                        opTrackReadiness = 
operation.NewTrackResourceReadinessOperation(
                                info.ResourceID,
                                taskState,
+                               b.informerFactory,
                                b.logStore,
                                b.staticClient,
                                b.dynamicClient,
@@ -425,6 +433,7 @@
                                        Timeout:                                
  b.readinessTimeout,
                                        NoActivityTimeout:                      
  noActivityTimeout,
                                        
IgnoreReadinessProbeFailsByContainerName: ignoreReadinessProbes,
+                                       SaveLogsOnlyForNumberOfReplicas:        
  showLogsOnlyForNumberOfReplicas,
                                        SaveLogsOnlyForContainers:              
  showLogsOnlyFor,
                                        SaveLogsByRegex:                        
  logRegex,
                                        SaveLogsByRegexForContainers:           
  logRegexesFor,
@@ -486,6 +495,7 @@
                        opTrackDeletion := 
operation.NewTrackResourceAbsenceOperation(
                                info.ResourceID,
                                taskState,
+                               b.informerFactory,
                                b.dynamicClient,
                                b.mapper,
                                operation.TrackResourceAbsenceOperationOptions{
@@ -534,6 +544,7 @@
                        opTrackDeletion := 
operation.NewTrackResourceAbsenceOperation(
                                info.ResourceID,
                                taskState,
+                               b.informerFactory,
                                b.dynamicClient,
                                b.mapper,
                                operation.TrackResourceAbsenceOperationOptions{
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/nelm-1.10.0/internal/resource/common.go 
new/nelm-1.12.2/internal/resource/common.go
--- old/nelm-1.10.0/internal/resource/common.go 2025-08-01 14:08:23.000000000 
+0200
+++ new/nelm-1.12.2/internal/resource/common.go 2025-08-15 13:10:21.000000000 
+0200
@@ -116,6 +116,11 @@
 )
 
 var (
+       annotationKeyHumanShowLogsOnlyForNumberOfReplicas   = 
"werf.io/show-logs-only-for-number-of-replicas"
+       annotationKeyPatternShowLogsOnlyForNumberOfReplicas = 
regexp.MustCompile(`^werf.io/show-logs-only-for-number-of-replicas$`)
+)
+
+var (
        annotationKeyHumanSkipLogs   = "werf.io/skip-logs"
        annotationKeyPatternSkipLogs = regexp.MustCompile(`^werf.io/skip-logs$`)
 )
@@ -443,6 +448,18 @@
                }
        }
 
+       if key, value, found := 
FindAnnotationOrLabelByKeyPattern(unstruct.GetAnnotations(), 
annotationKeyPatternShowLogsOnlyForNumberOfReplicas); found {
+               if value == "" {
+                       return fmt.Errorf("invalid value %q for annotation %q, 
expected non-empty integer value", value, key)
+               }
+
+               if replicas, err := strconv.Atoi(value); err != nil {
+                       return fmt.Errorf("invalid value %q for annotation %q, 
expected integer value", value, key)
+               } else if replicas < 0 {
+                       return fmt.Errorf("invalid value %q for annotation %q, 
expected non-negative integer value", value, key)
+               }
+       }
+
        if key, value, found := 
FindAnnotationOrLabelByKeyPattern(unstruct.GetAnnotations(), 
annotationKeyPatternSkipLogs); found {
                if value == "" {
                        return fmt.Errorf("invalid value %q for annotation %q, 
expected non-empty boolean value", value, key)
@@ -783,8 +800,7 @@
 }
 
 func orphaned(unstruct *unstructured.Unstructured, releaseName, 
releaseNamespace string) bool {
-       if IsHook(unstruct.GetAnnotations()) ||
-               (unstruct.GetKind() == "Namespace" && unstruct.GetName() == 
releaseNamespace) {
+       if IsHook(unstruct.GetAnnotations()) {
                return false
        }
 
@@ -803,6 +819,10 @@
        return false
 }
 
+func isReleaseNamespace(unstruct *unstructured.Unstructured, releaseNamespace 
string) bool {
+       return unstruct.GetKind() == "Namespace" && unstruct.GetName() == 
releaseNamespace
+}
+
 func recreate(unstruct *unstructured.Unstructured) bool {
        deletePolicies := deletePolicies(unstruct.GetAnnotations())
 
@@ -941,6 +961,17 @@
        return showServiceMessages
 }
 
+func showLogsOnlyForNumberOfReplicas(unstruct *unstructured.Unstructured) int {
+       _, value, found := 
FindAnnotationOrLabelByKeyPattern(unstruct.GetAnnotations(), 
annotationKeyPatternShowLogsOnlyForNumberOfReplicas)
+       if !found {
+               return 1
+       }
+
+       replicas := lo.Must(strconv.Atoi(value))
+
+       return replicas
+}
+
 func skipLogs(unstruct *unstructured.Unstructured) bool {
        _, value, found := 
FindAnnotationOrLabelByKeyPattern(unstruct.GetAnnotations(), 
annotationKeyPatternSkipLogs)
        if !found {
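
The new showLogsOnlyForNumberOfReplicas getter can use lo.Must around strconv.Atoi only because the validation added earlier in this hunk has already rejected empty, non-integer and negative values. A dependency-free sketch of the same default-to-1 lookup, with a plain map access standing in for FindAnnotationOrLabelByKeyPattern:

package main

import (
	"fmt"
	"strconv"
)

// showLogsOnlyForNumberOfReplicas mirrors the getter above: an absent
// annotation means the documented default of 1; otherwise the value is
// assumed to have passed validation and must parse, so a failure here is a
// programming error.
func showLogsOnlyForNumberOfReplicas(annotations map[string]string) int {
	value, found := annotations["werf.io/show-logs-only-for-number-of-replicas"]
	if !found {
		return 1
	}

	replicas, err := strconv.Atoi(value)
	if err != nil {
		panic(err) // unreachable after validation, like lo.Must
	}

	return replicas
}

func main() {
	fmt.Println(showLogsOnlyForNumberOfReplicas(nil)) // 1
	fmt.Println(showLogsOnlyForNumberOfReplicas(map[string]string{
		"werf.io/show-logs-only-for-number-of-replicas": "3",
	})) // 3
}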
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/nelm-1.10.0/internal/resource/general_resource.go 
new/nelm-1.12.2/internal/resource/general_resource.go
--- old/nelm-1.10.0/internal/resource/general_resource.go       2025-08-01 
14:08:23.000000000 +0200
+++ new/nelm-1.12.2/internal/resource/general_resource.go       2025-08-15 
13:10:21.000000000 +0200
@@ -147,8 +147,8 @@
        return deleteOnFailed(r.unstruct)
 }
 
-func (r *GeneralResource) KeepOnDelete() bool {
-       return keepOnDelete(r.unstruct)
+func (r *GeneralResource) KeepOnDelete(releaseNamespace string) bool {
+       return keepOnDelete(r.unstruct) || isReleaseNamespace(r.unstruct, 
releaseNamespace)
 }
 
 func (r *GeneralResource) FailMode() multitrack.FailMode {
@@ -183,6 +183,10 @@
        return showServiceMessages(r.unstruct)
 }
 
+func (r *GeneralResource) ShowLogsOnlyForNumberOfReplicas() int {
+       return showLogsOnlyForNumberOfReplicas(r.unstruct)
+}
+
 func (r *GeneralResource) SkipLogs() bool {
        return skipLogs(r.unstruct)
 }
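
Passing releaseNamespace into KeepOnDelete is how the 1.12.2 fix "release namespace deletes after stopping being part of a release" is implemented: the namespace a release lives in is now always kept, annotation or not. A hypothetical standalone version of the combined check:

package main

import "fmt"

// keepOnDelete sketches the new semantics: a resource survives release
// deletion if it carries the keep-on-delete annotation or if it is the
// release namespace itself.
func keepOnDelete(annotated bool, kind, name, releaseNamespace string) bool {
	isReleaseNamespace := kind == "Namespace" && name == releaseNamespace
	return annotated || isReleaseNamespace
}

func main() {
	fmt.Println(keepOnDelete(false, "Namespace", "production", "production")) // true
	fmt.Println(keepOnDelete(false, "ConfigMap", "app-config", "production")) // false
}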
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/nelm-1.10.0/internal/resource/hook_resource.go 
new/nelm-1.12.2/internal/resource/hook_resource.go
--- old/nelm-1.10.0/internal/resource/hook_resource.go  2025-08-01 
14:08:23.000000000 +0200
+++ new/nelm-1.12.2/internal/resource/hook_resource.go  2025-08-15 
13:10:21.000000000 +0200
@@ -155,8 +155,8 @@
        return deleteOnFailed(r.unstruct)
 }
 
-func (r *HookResource) KeepOnDelete() bool {
-       return keepOnDelete(r.unstruct)
+func (r *HookResource) KeepOnDelete(releaseNamespace string) bool {
+       return keepOnDelete(r.unstruct) || isReleaseNamespace(r.unstruct, 
releaseNamespace)
 }
 
 func (r *HookResource) FailMode() multitrack.FailMode {
@@ -191,6 +191,10 @@
        return showServiceMessages(r.unstruct)
 }
 
+func (r *HookResource) ShowLogsOnlyForNumberOfReplicas() int {
+       return showLogsOnlyForNumberOfReplicas(r.unstruct)
+}
+
 func (r *HookResource) SkipLogs() bool {
        return skipLogs(r.unstruct)
 }
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/nelm-1.10.0/pkg/action/progress_printer.go 
new/nelm-1.12.2/pkg/action/progress_printer.go
--- old/nelm-1.10.0/pkg/action/progress_printer.go      2025-08-01 
14:08:23.000000000 +0200
+++ new/nelm-1.12.2/pkg/action/progress_printer.go      2025-08-15 
13:10:21.000000000 +0200
@@ -2,61 +2,97 @@
 
 import (
        "context"
+       "fmt"
+       "sort"
        "time"
+
+       "github.com/gookit/color"
+       "github.com/samber/lo"
+
+       "github.com/werf/nelm/internal/track"
+       "github.com/werf/nelm/pkg/log"
 )
 
-type progressTablePrinter struct {
-       ctx      context.Context
-       cancel   context.CancelFunc
-       interval time.Duration
-       callback func()
-       finished chan bool
+func newProgressPrinter() *progressPrinter {
+       return &progressPrinter{}
 }
 
-func newProgressTablePrinter(ctx context.Context, interval time.Duration, 
callback func()) *progressTablePrinter {
-       return &progressTablePrinter{
-               ctx:      ctx,
-               interval: interval,
-               callback: callback,
-               finished: make(chan bool),
-       }
+type progressPrinter struct {
+       ctxCancelFn context.CancelCauseFunc
+       finishedCh  chan struct{}
 }
 
-func (p *progressTablePrinter) Start() {
-       p.ctx, p.cancel = context.WithCancel(p.ctx)
-       // Cancel function is called inside the goroutine below.
-
+func (p *progressPrinter) Start(ctx context.Context, interval time.Duration, 
tablesBuilder *track.TablesBuilder) {
        go func() {
-               defer p.finish()
-               defer p.cancel()
+               p.finishedCh = make(chan struct{})
 
-               ticker := time.NewTicker(p.interval)
+               ctx, p.ctxCancelFn = context.WithCancelCause(ctx)
+               defer func() {
+                       p.ctxCancelFn(fmt.Errorf("context canceled: table 
printer finished"))
+                       p.finishedCh <- struct{}{}
+               }()
+
+               ticker := time.NewTicker(interval)
                defer ticker.Stop()
 
                for {
                        select {
-                       case <-p.ctx.Done():
-                               p.callback()
-                               return
                        case <-ticker.C:
-                               p.callback()
+                               printTables(ctx, tablesBuilder)
+                       case <-ctx.Done():
+                               printTables(ctx, tablesBuilder)
+                               return
                        }
                }
        }()
 }
 
-func (p *progressTablePrinter) Stop() {
-       if p.cancel != nil {
-               p.cancel()
-       }
+func (p *progressPrinter) Stop() {
+       p.ctxCancelFn(fmt.Errorf("context canceled: table printer stopped"))
 }
 
-func (p *progressTablePrinter) Wait() {
-       if p.cancel != nil {
-               <-p.finished // Wait for graceful stop
-       }
+func (p *progressPrinter) Wait() {
+       <-p.finishedCh
 }
 
-func (p *progressTablePrinter) finish() {
-       p.finished <- true // Used for graceful stop
+func printTables(
+       ctx context.Context,
+       tablesBuilder *track.TablesBuilder,
+) {
+       maxTableWidth := log.Default.BlockContentWidth(ctx) - 2
+       tablesBuilder.SetMaxTableWidth(maxTableWidth)
+
+       if tables, nonEmpty := tablesBuilder.BuildEventTables(); nonEmpty {
+               headers := lo.Keys(tables)
+               sort.Strings(headers)
+
+               for _, header := range headers {
+                       log.Default.InfoBlock(ctx, log.BlockOptions{
+                               BlockTitle: header,
+                       }, func() {
+                               log.Default.Info(ctx, tables[header].Render())
+                       })
+               }
+       }
+
+       if tables, nonEmpty := tablesBuilder.BuildLogTables(); nonEmpty {
+               headers := lo.Keys(tables)
+               sort.Strings(headers)
+
+               for _, header := range headers {
+                       log.Default.InfoBlock(ctx, log.BlockOptions{
+                               BlockTitle: header,
+                       }, func() {
+                               log.Default.Info(ctx, tables[header].Render())
+                       })
+               }
+       }
+
+       if table, nonEmpty := tablesBuilder.BuildProgressTable(); nonEmpty {
+               log.Default.InfoBlock(ctx, log.BlockOptions{
+                       BlockTitle: color.Style{color.Bold, 
color.Blue}.Render("Progress status"),
+               }, func() {
+                       log.Default.Info(ctx, table.Render())
+               })
+       }
 }
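
The rewritten progress printer drops the callback-plus-cancel plumbing of progressTablePrinter in favor of a goroutine that renders on a ticker and performs one final render when its context is canceled. A stripped-down sketch of that lifecycle, with table rendering reduced to a callback; unlike the version above it allocates the finished channel and the cancel func before the goroutine starts, and closes the channel instead of sending on it, so Stop and Wait cannot race Start:

package main

import (
	"context"
	"fmt"
	"time"
)

type printer struct {
	cancel   context.CancelCauseFunc
	finished chan struct{}
}

func (p *printer) Start(ctx context.Context, interval time.Duration, tick func()) {
	p.finished = make(chan struct{})
	ctx, p.cancel = context.WithCancelCause(ctx)

	go func() {
		defer close(p.finished)

		ticker := time.NewTicker(interval)
		defer ticker.Stop()

		for {
			select {
			case <-ticker.C:
				tick()
			case <-ctx.Done():
				tick() // one final render on shutdown, as in printTables
				return
			}
		}
	}()
}

func (p *printer) Stop() { p.cancel(fmt.Errorf("context canceled: table printer stopped")) }
func (p *printer) Wait() { <-p.finished }

func main() {
	var p printer
	p.Start(context.Background(), 25*time.Millisecond, func() { fmt.Println("tick") })
	time.Sleep(80 * time.Millisecond)
	p.Stop()
	p.Wait()
}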
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/nelm-1.10.0/pkg/action/progress_printer_test.go 
new/nelm-1.12.2/pkg/action/progress_printer_test.go
--- old/nelm-1.10.0/pkg/action/progress_printer_test.go 2025-08-01 
14:08:23.000000000 +0200
+++ new/nelm-1.12.2/pkg/action/progress_printer_test.go 1970-01-01 
01:00:00.000000000 +0100
@@ -1,79 +0,0 @@
-package action
-
-import (
-       "context"
-       "time"
-
-       . "github.com/onsi/ginkgo/v2"
-       . "github.com/onsi/gomega"
-)
-
-var _ = Describe("progress printer", func() {
-       Describe("unit testing", func() {
-               Describe("Stop()", func() {
-                       It("should do nothing if not started", func(ctx 
SpecContext) {
-                               progressPrinter := newProgressTablePrinter(ctx, 
0, func() {
-                                       // do nothing
-                               })
-                               
Eventually(progressPrinter.Stop).WithTimeout(time.Second)
-                       })
-               })
-               Describe("Wait()", func() {
-                       It("should do nothing if not started", func(ctx 
SpecContext) {
-                               progressPrinter := newProgressTablePrinter(ctx, 
0, func() {
-                                       // do nothing
-                               })
-                               
Eventually(progressPrinter.Wait).WithTimeout(time.Second)
-                       })
-               })
-       })
-       DescribeTable("functional testing",
-               func(ctx SpecContext, cancelTimeout, stopTimeout, interval 
time.Duration, expectedTimes int) {
-                       ctxNew, cancel := context.WithTimeout(ctx, 
cancelTimeout)
-                       defer cancel()
-
-                       counter := 0
-                       progressPrinter := newProgressTablePrinter(ctxNew, 
interval, func() {
-                               counter++
-                       })
-
-                       progressPrinter.Start()
-
-                       if stopTimeout > 0 {
-                               time.Sleep(stopTimeout)
-                               progressPrinter.Stop()
-                       }
-
-                       progressPrinter.Wait()
-                       Expect(counter).To(Equal(expectedTimes))
-               },
-               Entry(
-                       "should stop using timeout and print 5 times with 
cancelTimeout=1min, stopTimeout=0, interval=25ms",
-                       time.Millisecond*100,
-                       time.Duration(0),
-                       time.Millisecond*25,
-                       5,
-               ),
-               Entry(
-                       "should stop using ctx and print 3 times with 
cancelTimeout=50ms, stopTimeout=0, interval=25ms",
-                       time.Millisecond*50,
-                       time.Duration(0),
-                       time.Millisecond*20,
-                       3,
-               ),
-               Entry(
-                       "should stop using Stop() and print 3 times with 
cancelTimeout=1min, stopTimeout=50ms, interval=25ms",
-                       time.Millisecond*100,
-                       time.Millisecond*50,
-                       time.Millisecond*25,
-                       3,
-               ),
-               Entry(
-                       "should consider timeout=0 as 24 hours and print 3 
times with cancelTimeout=1min, stopTimeout=50ms, interval=25ms",
-                       time.Millisecond*100,
-                       time.Millisecond*50,
-                       time.Millisecond*25,
-                       3,
-               ),
-       )
-})
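
The deleted spec exercised the old newProgressTablePrinter(ctx, interval, fn) helper; 1.12 replaces it with a progressPrinter created by newProgressPrinter() and started via Start(ctx, interval, tablesBuilder), as the hunks below show. The new type's internals are not part of this diff, so the following is only a hypothetical sketch of the same lifecycle (tick until the context is canceled, print once more, then let Wait drain); it takes a plain callback where the real type takes a tables builder:

package main

import (
        "context"
        "fmt"
        "sync"
        "time"
)

// progressPrinter is a stand-in type; only the Start(ctx, interval, ...)
// shape is taken from the diff, everything else is assumed.
type progressPrinter struct {
        wg sync.WaitGroup
}

func newProgressPrinter() *progressPrinter { return &progressPrinter{} }

// Start prints on every tick until ctx is canceled, then prints one final
// time so the last output reflects the terminal state.
func (p *progressPrinter) Start(ctx context.Context, interval time.Duration, print func()) {
        p.wg.Add(1)
        go func() {
                defer p.wg.Done()
                ticker := time.NewTicker(interval)
                defer ticker.Stop()
                for {
                        select {
                        case <-ctx.Done():
                                print()
                                return
                        case <-ticker.C:
                                print()
                        }
                }
        }()
}

func (p *progressPrinter) Wait() { p.wg.Wait() }

func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 120*time.Millisecond)
        defer cancel()

        p := newProgressPrinter()
        p.Start(ctx, 25*time.Millisecond, func() { fmt.Println("tick") })
        p.Wait()
}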
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/nelm-1.10.0/pkg/action/release_install.go new/nelm-1.12.2/pkg/action/release_install.go
--- old/nelm-1.10.0/pkg/action/release_install.go       2025-08-01 14:08:23.000000000 +0200
+++ new/nelm-1.12.2/pkg/action/release_install.go       2025-08-15 13:10:21.000000000 +0200
@@ -7,7 +7,6 @@
        "os"
        "path/filepath"
        "regexp"
-       "sort"
        "time"
 
        "github.com/gookit/color"
@@ -18,6 +17,7 @@
 
        "github.com/werf/3p-helm/pkg/registry"
        "github.com/werf/3p-helm/pkg/werf/helmopts"
+       "github.com/werf/kubedog/pkg/informer"
        "github.com/werf/kubedog/pkg/trackers/dyntracker/logstore"
        "github.com/werf/kubedog/pkg/trackers/dyntracker/statestore"
        kubeutil "github.com/werf/kubedog/pkg/trackers/dyntracker/util"
@@ -101,16 +101,18 @@
 }
 
 func ReleaseInstall(ctx context.Context, releaseName, releaseNamespace string, opts ReleaseInstallOptions) error {
+       ctx, ctxCancelFn := context.WithCancelCause(ctx)
+
        if opts.Timeout == 0 {
-               return releaseInstall(ctx, releaseName, releaseNamespace, opts)
+               return releaseInstall(ctx, ctxCancelFn, releaseName, releaseNamespace, opts)
        }
 
-       ctx, ctxCancelFn := context.WithTimeoutCause(ctx, opts.Timeout, fmt.Errorf("context timed out: action timed out after %s", opts.Timeout.String()))
-       defer ctxCancelFn()
+       ctx, _ = context.WithTimeoutCause(ctx, opts.Timeout, fmt.Errorf("context timed out: action timed out after %s", opts.Timeout.String()))
+       defer ctxCancelFn(fmt.Errorf("context canceled: action finished"))
 
        actionCh := make(chan error, 1)
        go func() {
-               actionCh <- releaseInstall(ctx, releaseName, releaseNamespace, opts)
+               actionCh <- releaseInstall(ctx, ctxCancelFn, releaseName, releaseNamespace, opts)
        }()
 
        for {
@@ -123,7 +125,7 @@
        }
 }
 
-func releaseInstall(ctx context.Context, releaseName, releaseNamespace string, opts ReleaseInstallOptions) error {
+func releaseInstall(ctx context.Context, ctxCancelFn context.CancelCauseFunc, releaseName, releaseNamespace string, opts ReleaseInstallOptions) error {
        currentDir, err := os.Getwd()
        if err != nil {
                return fmt.Errorf("get current working directory: %w", err)
@@ -396,6 +398,8 @@
        logStore := kubeutil.NewConcurrent(
                logstore.NewLogStore(),
        )
+       watchErrCh := make(chan error, 1)
+       informerFactory := informer.NewConcurrentInformerFactory(ctx.Done(), watchErrCh, clientFactory.Dynamic(), informer.ConcurrentInformerFactoryOptions{})
 
        log.Default.Debug(ctx, "Constructing new deploy plan")
        deployPlanBuilder := plan.NewDeployPlanBuilder(
@@ -403,6 +407,7 @@
                deployType,
                taskStore,
                logStore,
+               informerFactory,
                resProcessor.DeployableStandaloneCRDsInfos(),
                resProcessor.DeployableHookResourcesInfos(),
                resProcessor.DeployableGeneralResourcesInfos(),
@@ -493,12 +498,16 @@
        )
 
        log.Default.Debug(ctx, "Starting tracking")
-       progressPrinter := newProgressTablePrinter(ctx, opts.ProgressTablePrintInterval, func() {
-               printTables(ctx, tablesBuilder)
-       })
+       go func() {
+               if err := <-watchErrCh; err != nil {
+                       ctxCancelFn(fmt.Errorf("context canceled: watch error: 
%w", err))
+               }
+       }()
 
+       var progressPrinter *progressPrinter
        if !opts.NoProgressTablePrint {
-               progressPrinter.Start()
+               progressPrinter = newProgressPrinter()
+               progressPrinter.Start(ctx, opts.ProgressTablePrintInterval, tablesBuilder)
        }
 
        log.Default.Debug(ctx, "Executing release install plan")
@@ -554,6 +563,7 @@
                        deployType,
                        deployPlan,
                        taskStore,
+                       informerFactory,
                        resProcessor,
                        newRel,
                        prevRelease,
@@ -573,6 +583,7 @@
                                ctx,
                                taskStore,
                                logStore,
+                               informerFactory,
                                releaseName,
                                releaseNamespace,
                                deployType,
@@ -763,48 +774,6 @@
        })
 }
 
-func printTables(
-       ctx context.Context,
-       tablesBuilder *track.TablesBuilder,
-) {
-       maxTableWidth := log.Default.BlockContentWidth(ctx) - 2
-       tablesBuilder.SetMaxTableWidth(maxTableWidth)
-
-       if tables, nonEmpty := tablesBuilder.BuildEventTables(); nonEmpty {
-               headers := lo.Keys(tables)
-               sort.Strings(headers)
-
-               for _, header := range headers {
-                       log.Default.InfoBlock(ctx, log.BlockOptions{
-                               BlockTitle: header,
-                       }, func() {
-                               log.Default.Info(ctx, tables[header].Render())
-                       })
-               }
-       }
-
-       if tables, nonEmpty := tablesBuilder.BuildLogTables(); nonEmpty {
-               headers := lo.Keys(tables)
-               sort.Strings(headers)
-
-               for _, header := range headers {
-                       log.Default.InfoBlock(ctx, log.BlockOptions{
-                               BlockTitle: header,
-                       }, func() {
-                               log.Default.Info(ctx, tables[header].Render())
-                       })
-               }
-       }
-
-       if table, nonEmpty := tablesBuilder.BuildProgressTable(); nonEmpty {
-               log.Default.InfoBlock(ctx, log.BlockOptions{
-                       BlockTitle: color.Style{color.Bold, color.Blue}.Render("Progress status"),
-               }, func() {
-                       log.Default.Info(ctx, table.Render())
-               })
-       }
-}
-
 func runFailureDeployPlan(
        ctx context.Context,
        releaseName string,
@@ -812,6 +781,7 @@
        deployType common.DeployType,
        failedPlan *plan.Plan,
        taskStore *statestore.TaskStore,
+       informerFactory *kubeutil.Concurrent[*informer.InformerFactory],
        resProcessor *resourceinfo.DeployableResourcesProcessor,
        newRel, prevRelease *release.Release,
        history *release.History,
@@ -831,6 +801,7 @@
                deployType,
                failedPlan,
                taskStore,
+               informerFactory,
                resProcessor.DeployableHookResourcesInfos(),
                resProcessor.DeployableGeneralResourcesInfos(),
                history,
@@ -891,6 +862,7 @@
        ctx context.Context,
        taskStore *statestore.TaskStore,
        logStore *kubeutil.Concurrent[*logstore.LogStore],
+       informerFactory *kubeutil.Concurrent[*informer.InformerFactory],
        releaseName string,
        releaseNamespace string,
        deployType common.DeployType,
@@ -988,6 +960,7 @@
                common.DeployTypeRollback,
                taskStore,
                logStore,
+               informerFactory,
                nil,
                resProcessor.DeployableHookResourcesInfos(),
                resProcessor.DeployableGeneralResourcesInfos(),
@@ -1075,6 +1048,7 @@
                        deployType,
                        rollbackPlan,
                        taskStore,
+                       informerFactory,
                        resProcessor,
                        rollbackRel,
                        failedRelease,
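
The hunks above all follow one cancellation pattern: an outer context.WithCancelCause gives every background goroutine a way to abort the action with a descriptive error, and an inner context.WithTimeoutCause attaches the timeout message that callers can later retrieve via context.Cause. The inner cancel func is deliberately discarded; canceling the outer context through the deferred ctxCancelFn releases the child as well. A minimal sketch of the combined pattern (Go 1.21+, messages made up for illustration):

package main

import (
        "context"
        "errors"
        "fmt"
        "time"
)

func main() {
        // Outer context: cancelable with an explanatory cause.
        ctx, cancel := context.WithCancelCause(context.Background())
        defer cancel(errors.New("context canceled: action finished"))

        // Inner context: the deadline error carries the timeout message.
        // Its own cancel func is dropped, as in the diff; canceling the
        // parent releases this child too.
        ctx, _ = context.WithTimeoutCause(ctx, 50*time.Millisecond,
                errors.New("context timed out: action timed out after 50ms"))

        <-ctx.Done()
        fmt.Println(context.Cause(ctx)) // the timeout cause, not a bare DeadlineExceeded
}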
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/nelm-1.10.0/pkg/action/release_plan_install.go new/nelm-1.12.2/pkg/action/release_plan_install.go
--- old/nelm-1.10.0/pkg/action/release_plan_install.go  2025-08-01 14:08:23.000000000 +0200
+++ new/nelm-1.12.2/pkg/action/release_plan_install.go  2025-08-15 13:10:21.000000000 +0200
@@ -80,16 +80,18 @@
 }
 
 func ReleasePlanInstall(ctx context.Context, releaseName, releaseNamespace string, opts ReleasePlanInstallOptions) error {
+       ctx, ctxCancelFn := context.WithCancelCause(ctx)
+
        if opts.Timeout == 0 {
-               return releasePlanInstall(ctx, releaseName, releaseNamespace, opts)
+               return releasePlanInstall(ctx, ctxCancelFn, releaseName, releaseNamespace, opts)
        }
 
-       ctx, ctxCancelFn := context.WithTimeoutCause(ctx, opts.Timeout, fmt.Errorf("context timed out: action timed out after %s", opts.Timeout.String()))
-       defer ctxCancelFn()
+       ctx, _ = context.WithTimeoutCause(ctx, opts.Timeout, fmt.Errorf("context timed out: action timed out after %s", opts.Timeout.String()))
+       defer ctxCancelFn(fmt.Errorf("context canceled: action finished"))
 
        actionCh := make(chan error, 1)
        go func() {
-               actionCh <- releasePlanInstall(ctx, releaseName, releaseNamespace, opts)
+               actionCh <- releasePlanInstall(ctx, ctxCancelFn, releaseName, releaseNamespace, opts)
        }()
 
        for {
@@ -102,7 +104,7 @@
        }
 }
 
-func releasePlanInstall(ctx context.Context, releaseName, releaseNamespace string, opts ReleasePlanInstallOptions) error {
+func releasePlanInstall(ctx context.Context, ctxCancelFn context.CancelCauseFunc, releaseName, releaseNamespace string, opts ReleasePlanInstallOptions) error {
        currentDir, err := os.Getwd()
        if err != nil {
                return fmt.Errorf("get current working directory: %w", err)
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/nelm-1.10.0/pkg/action/release_rollback.go new/nelm-1.12.2/pkg/action/release_rollback.go
--- old/nelm-1.10.0/pkg/action/release_rollback.go      2025-08-01 14:08:23.000000000 +0200
+++ new/nelm-1.12.2/pkg/action/release_rollback.go      2025-08-15 13:10:21.000000000 +0200
@@ -12,6 +12,7 @@
        "github.com/samber/lo"
        "k8s.io/client-go/kubernetes"
 
+       "github.com/werf/kubedog/pkg/informer"
        "github.com/werf/kubedog/pkg/trackers/dyntracker/logstore"
        "github.com/werf/kubedog/pkg/trackers/dyntracker/statestore"
        kubeutil "github.com/werf/kubedog/pkg/trackers/dyntracker/util"
@@ -63,16 +64,18 @@
 }
 
 func ReleaseRollback(ctx context.Context, releaseName, releaseNamespace string, opts ReleaseRollbackOptions) error {
+       ctx, ctxCancelFn := context.WithCancelCause(ctx)
+
        if opts.Timeout == 0 {
-               return releaseRollback(ctx, releaseName, releaseNamespace, opts)
+               return releaseRollback(ctx, ctxCancelFn, releaseName, releaseNamespace, opts)
        }
 
-       ctx, ctxCancelFn := context.WithTimeoutCause(ctx, opts.Timeout, fmt.Errorf("context timed out: action timed out after %s", opts.Timeout.String()))
-       defer ctxCancelFn()
+       ctx, _ = context.WithTimeoutCause(ctx, opts.Timeout, fmt.Errorf("context timed out: action timed out after %s", opts.Timeout.String()))
+       defer ctxCancelFn(fmt.Errorf("context canceled: action finished"))
 
        actionCh := make(chan error, 1)
        go func() {
-               actionCh <- releaseRollback(ctx, releaseName, releaseNamespace, opts)
+               actionCh <- releaseRollback(ctx, ctxCancelFn, releaseName, releaseNamespace, opts)
        }()
 
        for {
@@ -85,7 +88,7 @@
        }
 }
 
-func releaseRollback(ctx context.Context, releaseName, releaseNamespace string, opts ReleaseRollbackOptions) error {
+func releaseRollback(ctx context.Context, ctxCancelFn context.CancelCauseFunc, releaseName, releaseNamespace string, opts ReleaseRollbackOptions) error {
        homeDir, err := os.UserHomeDir()
        if err != nil {
                return fmt.Errorf("get home directory: %w", err)
@@ -280,6 +283,8 @@
        logStore := kubeutil.NewConcurrent(
                logstore.NewLogStore(),
        )
+       watchErrCh := make(chan error, 1)
+       informerFactory := informer.NewConcurrentInformerFactory(ctx.Done(), watchErrCh, clientFactory.Dynamic(), informer.ConcurrentInformerFactoryOptions{})
 
        log.Default.Debug(ctx, "Constructing new rollback plan")
        deployPlanBuilder := plan.NewDeployPlanBuilder(
@@ -287,6 +292,7 @@
                deployType,
                taskStore,
                logStore,
+               informerFactory,
                resProcessor.DeployableStandaloneCRDsInfos(),
                resProcessor.DeployableHookResourcesInfos(),
                resProcessor.DeployableGeneralResourcesInfos(),
@@ -377,12 +383,16 @@
        )
 
        log.Default.Debug(ctx, "Starting tracking")
-       progressPrinter := newProgressTablePrinter(ctx, opts.ProgressTablePrintInterval, func() {
-               printTables(ctx, tablesBuilder)
-       })
+       go func() {
+               if err := <-watchErrCh; err != nil {
+                       ctxCancelFn(fmt.Errorf("context canceled: watch error: 
%w", err))
+               }
+       }()
 
+       var progressPrinter *progressPrinter
        if !opts.NoProgressTablePrint {
-               progressPrinter.Start()
+               progressPrinter = newProgressPrinter()
+               progressPrinter.Start(ctx, opts.ProgressTablePrintInterval, tablesBuilder)
        }
 
        log.Default.Debug(ctx, "Executing release rollback plan")
@@ -438,6 +448,7 @@
                        deployType,
                        deployPlan,
                        taskStore,
+                       informerFactory,
                        resProcessor,
                        newRel,
                        prevRelease,
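
Each action now hands a watchErrCh to the informer factory and spawns a single reader that turns the first watch error into a context cancellation carrying the cause, which then unwinds the whole deploy. A self-contained sketch of that fan-in, with a stubbed watcher goroutine standing in for the kubedog informer machinery (an assumption; only the channel wiring is taken from the diff):

package main

import (
        "context"
        "errors"
        "fmt"
        "time"
)

func main() {
        ctx, cancel := context.WithCancelCause(context.Background())
        defer cancel(nil)

        watchErrCh := make(chan error, 1)

        // One reader suffices: the first error aborts the whole action.
        go func() {
                if err := <-watchErrCh; err != nil {
                        cancel(fmt.Errorf("context canceled: watch error: %w", err))
                }
        }()

        // Stub watcher reporting a failure after a short delay.
        go func() {
                time.Sleep(10 * time.Millisecond)
                watchErrCh <- errors.New("watch connection lost")
        }()

        <-ctx.Done()
        fmt.Println(context.Cause(ctx))
}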
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/nelm-1.10.0/pkg/action/release_uninstall.go new/nelm-1.12.2/pkg/action/release_uninstall.go
--- old/nelm-1.10.0/pkg/action/release_uninstall.go     2025-08-01 14:08:23.000000000 +0200
+++ new/nelm-1.12.2/pkg/action/release_uninstall.go     2025-08-15 13:10:21.000000000 +0200
@@ -13,6 +13,7 @@
        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/client-go/kubernetes"
 
+       "github.com/werf/kubedog/pkg/informer"
        "github.com/werf/kubedog/pkg/trackers/dyntracker/logstore"
        "github.com/werf/kubedog/pkg/trackers/dyntracker/statestore"
        kubeutil "github.com/werf/kubedog/pkg/trackers/dyntracker/util"
@@ -62,16 +63,18 @@
 }
 
 func ReleaseUninstall(ctx context.Context, releaseName, releaseNamespace string, opts ReleaseUninstallOptions) error {
+       ctx, ctxCancelFn := context.WithCancelCause(ctx)
+
        if opts.Timeout == 0 {
-               return releaseUninstall(ctx, releaseName, releaseNamespace, opts)
+               return releaseUninstall(ctx, ctxCancelFn, releaseName, releaseNamespace, opts)
        }
 
-       ctx, ctxCancelFn := context.WithTimeoutCause(ctx, opts.Timeout, fmt.Errorf("context timed out: action timed out after %s", opts.Timeout.String()))
-       defer ctxCancelFn()
+       ctx, _ = context.WithTimeoutCause(ctx, opts.Timeout, fmt.Errorf("context timed out: action timed out after %s", opts.Timeout.String()))
+       defer ctxCancelFn(fmt.Errorf("context canceled: action finished"))
 
        actionCh := make(chan error, 1)
        go func() {
-               actionCh <- releaseUninstall(ctx, releaseName, releaseNamespace, opts)
+               actionCh <- releaseUninstall(ctx, ctxCancelFn, releaseName, releaseNamespace, opts)
        }()
 
        for {
@@ -84,7 +87,7 @@
        }
 }
 
-func releaseUninstall(ctx context.Context, releaseName, releaseNamespace string, opts ReleaseUninstallOptions) error {
+func releaseUninstall(ctx context.Context, ctxCancelFn context.CancelCauseFunc, releaseName, releaseNamespace string, opts ReleaseUninstallOptions) error {
        currentDir, err := os.Getwd()
        if err != nil {
                return fmt.Errorf("get current working directory: %w", err)
@@ -233,6 +236,8 @@
                logStore := kubeutil.NewConcurrent(
                        logstore.NewLogStore(),
                )
+               watchErrCh := make(chan error, 1)
+               informerFactory := informer.NewConcurrentInformerFactory(ctx.Done(), watchErrCh, clientFactory.Dynamic(), informer.ConcurrentInformerFactoryOptions{})
 
                log.Default.Debug(ctx, "Constructing new uninstall plan")
                uninstallPlanBuilder := plan.NewUninstallPlanBuilder(
@@ -240,6 +245,7 @@
                        releaseNamespace,
                        taskStore,
                        logStore,
+                       informerFactory,
                        resProcessor.DeployablePrevReleaseHookResourcesInfos(),
                        resProcessor.DeployablePrevReleaseGeneralResourcesInfos(),
                        prevRelease,
@@ -295,12 +301,16 @@
                )
 
                log.Default.Debug(ctx, "Starting tracking")
-               progressPrinter := newProgressTablePrinter(ctx, opts.ProgressTablePrintInterval, func() {
-                       printTables(ctx, tablesBuilder)
-               })
+               go func() {
+                       if err := <-watchErrCh; err != nil {
+                               ctxCancelFn(fmt.Errorf("context canceled: watch 
error: %w", err))
+                       }
+               }()
 
+               var progressPrinter *progressPrinter
                if !opts.NoProgressTablePrint {
-                       progressPrinter.Start()
+                       progressPrinter = newProgressPrinter()
+                       progressPrinter.Start(ctx, opts.ProgressTablePrintInterval, tablesBuilder)
                }
 
                log.Default.Debug(ctx, "Executing release uninstall plan")
@@ -347,6 +357,7 @@
                                common.DeployTypeUninstall,
                                uninstallPlan,
                                taskStore,
+                               informerFactory,
                                resProcessor,
                                nil,
                                prevRelease,
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/nelm-1.10.0/trdl_channels.yaml new/nelm-1.12.2/trdl_channels.yaml
--- old/nelm-1.10.0/trdl_channels.yaml  2025-08-01 14:08:23.000000000 +0200
+++ new/nelm-1.12.2/trdl_channels.yaml  2025-08-15 13:10:21.000000000 +0200
@@ -2,12 +2,12 @@
   - name: "1"
     channels:
       - name: alpha
-        version: 1.9.0
+        version: 1.12.2
       - name: beta
-        version: 1.8.0
+        version: 1.10.0
       - name: ea
-        version: 1.8.0
+        version: 1.10.0
       - name: stable
-        version: 1.7.0
+        version: 1.8.0
       - name: rock-solid
-        version: 1.6.0
+        version: 1.8.0

++++++ nelm.obsinfo ++++++
--- /var/tmp/diff_new_pack.fe7kxB/_old  2025-08-19 16:47:14.169653022 +0200
+++ /var/tmp/diff_new_pack.fe7kxB/_new  2025-08-19 16:47:14.173653189 +0200
@@ -1,5 +1,5 @@
 name: nelm
-version: 1.10.0
-mtime: 1754050103
-commit: f74daf684359e9b6dd3026920ec93b20129ca85e
+version: 1.12.2
+mtime: 1755256221
+commit: f26d944597598dbbadbdc409823e5b7d1d3be685
 

++++++ vendor.tar.gz ++++++
/work/SRC/openSUSE:Factory/nelm/vendor.tar.gz /work/SRC/openSUSE:Factory/.nelm.new.1085/vendor.tar.gz differ: char 12, line 1