PR #22257 opened by Raja-89
URL: https://code.ffmpeg.org/FFmpeg/FFmpeg/pulls/22257
Patch URL: https://code.ffmpeg.org/FFmpeg/FFmpeg/pulls/22257.patch

## avfilter/dnn: standardize Torch backend options

This PR aligns the Torch backend option handling with TensorFlow and OpenVINO 
backends.

### Problem
The `optimize` flag existed in the Torch backend but was not accessible to 
users via command line.

### Solution
- Fixed the OFFSET macro to reference the nested struct member via `offsetof(DnnContext, torch_option.x)`
- Exposed the `optimize` option in vf_dnn_processing.c
- Fixed missing `.unit` parameter for proper option grouping

### Usage
```bash
ffmpeg -vf dnn_processing=dnn_backend=torch:model=model.pt:optimize=1
```


From a62b7fde9f4558582c64d405d82cb4838e484f5c Mon Sep 17 00:00:00 2001
From: Raja-89 <[email protected]>
Date: Sun, 22 Feb 2026 21:42:38 +0530
Subject: [PATCH] avfilter/dnn: standardize Torch backend options

This patch aligns the Torch backend option handling with TensorFlow and
OpenVINO backends, making options accessible via command-line interface.

Changes in libavfilter/dnn/dnn_backend_torch.cpp:
- Fix the OFFSET macro to use the nested struct member designator
  offsetof(DnnContext, torch_option.x) instead of offsetof(THOptions, x),
  which computed the offset relative to the wrong struct
- Rename options array from dnn_th_options to dnn_torch_options for consistency
- Update DNN_DEFINE_CLASS to use dnn_torch to match the new options array name

Changes in libavfilter/vf_dnn_processing.c:
- Fix the torch backend flag to use the designated .unit = "backend"
  initializer (was a bare positional "backend") so the option is properly
  grouped with the other backend choices
- Add a user-facing 'optimize' option that maps to
  DnnContext.torch_option.optimize, enabling command-line control like
  :optimize=1

Before this patch, the optimize flag existed in the backend but was not
accessible to users. Now users can control graph executor optimization from
the command line:
  ffmpeg -vf dnn_processing=dnn_backend=torch:model=model.pt:optimize=1

This standardization improves consistency across DNN backends and follows the
same pattern used by TensorFlow and OpenVINO backends.

Tested with:
- Compilation: No errors or warnings
- Help output: ./ffmpeg -h filter=dnn_processing shows optimize option
- Option parsing: Verified with -v debug that optimize=1 is correctly set
- Runtime: Confirmed the flag reaches the backend and controls 
torch::jit::setGraphExecutorOptimize()

Signed-off-by: Raja Rathour <[email protected]>
---
 libavfilter/dnn/dnn_backend_torch.cpp | 6 +++---
 libavfilter/vf_dnn_processing.c       | 5 ++++-
 2 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/libavfilter/dnn/dnn_backend_torch.cpp b/libavfilter/dnn/dnn_backend_torch.cpp
index d3c4966c09..be765ed4b7 100644
--- a/libavfilter/dnn/dnn_backend_torch.cpp
+++ b/libavfilter/dnn/dnn_backend_torch.cpp
@@ -65,9 +65,9 @@ typedef struct THRequestItem {
 } THRequestItem;
 
 
-#define OFFSET(x) offsetof(THOptions, x)
+#define OFFSET(x) offsetof(DnnContext, torch_option.x)
 #define FLAGS AV_OPT_FLAG_FILTERING_PARAM
-static const AVOption dnn_th_options[] = {
+static const AVOption dnn_torch_options[] = {
     { "optimize", "turn on graph executor optimization", OFFSET(optimize), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, 1, FLAGS},
     { NULL }
 };
@@ -667,7 +667,7 @@ static int dnn_flush_th(const DNNModel *model)
 }
 
 extern const DNNModule ff_dnn_backend_torch = {
-    .clazz          = DNN_DEFINE_CLASS(dnn_th),
+    .clazz          = DNN_DEFINE_CLASS(dnn_torch),
     .type           = DNN_TH,
     .load_model     = dnn_load_model_th,
     .execute_model  = dnn_execute_model_th,
diff --git a/libavfilter/vf_dnn_processing.c b/libavfilter/vf_dnn_processing.c
index 0771ceb5fc..d7e453355f 100644
--- a/libavfilter/vf_dnn_processing.c
+++ b/libavfilter/vf_dnn_processing.c
@@ -51,7 +51,10 @@ static const AVOption dnn_processing_options[] = {
     { "openvino",    "openvino backend flag",      0,                        AV_OPT_TYPE_CONST,     { .i64 = DNN_OV },    0, 0, FLAGS, .unit = "backend" },
 #endif
 #if (CONFIG_LIBTORCH == 1)
-    { "torch",       "torch backend flag",         0,                        AV_OPT_TYPE_CONST,     { .i64 = DNN_TH },    0, 0, FLAGS, "backend" },
+    { "torch",       "torch backend flag",         0,                        AV_OPT_TYPE_CONST,     { .i64 = DNN_TH },    0, 0, FLAGS, .unit = "backend" },
+#endif
+#if (CONFIG_LIBTORCH == 1)
+    { "optimize",    "enable graph executor optimization (torch backend)", OFFSET(torch_option.optimize), AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, FLAGS },
 #endif
     { NULL }
 };
-- 
2.52.0

_______________________________________________
ffmpeg-devel mailing list -- [email protected]