Re: [FFmpeg-devel] [PATCH] avcodec/nvenc: High bit depth encoding for HEVC

2024-04-19 Thread Diego Felix de Souza via ffmpeg-devel
Hi Roman and Timo,

Timo is right. As a general rule, hybrid video coding standards allow encoders
to take advantage of encoding an 8-bit input as 10-bit, because the
interpolation filters (inter and intra) and the transform coding then operate
at 10-bit depth. This yields better predictions and reduces banding artifacts
in smooth gradient areas, e.g., in the sky.
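
To make the intuition concrete, here is a toy illustration in C (not NVENC
code, just the rounding argument): averaging two neighbouring samples, as an
interpolation filter does, has to drop the half-sample fraction at 8 bits,
but keeps it when the pipeline carries two extra bits:

#include <stdio.h>

int main(void)
{
    unsigned a8 = 100, b8 = 101;

    /* 8-bit pipeline: the true average 100.5 must round to an integer. */
    unsigned avg8 = (a8 + b8 + 1) >> 1;                 /* 101 */

    /* 10-bit pipeline: promote by a left shift (x4), then average.
     * The two extra bits represent the half sample exactly: 402 == 100.5 * 4. */
    unsigned a10 = a8 << 2, b10 = b8 << 2;
    unsigned avg10 = (a10 + b10 + 1) >> 1;              /* 402 */

    printf("8-bit avg: %u, 10-bit avg: %u (%.2f on the 8-bit scale)\n",
           avg8, avg10, avg10 / 4.0);
    return 0;
}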

In the particular case of the NVIDIA Video Codec SDK, we do a simple 8-bit >
10-bit conversion; no SDR > HDR conversion is performed. Because the video is
then encoded at 10 bits, this results in better de-correlation and hence
better compression at the same quality. We have observed ~3-5% BD-rate savings
from this feature.

Although you are right that the same could be accomplished with an external
filter, I would still humbly ask you to consider including this patch in
FFmpeg. Besides the fact that this patch, as I explained before, is a more
efficient way to achieve the same result in terms of memory accesses and
storage, the same feature is already supported in FFmpeg for AV1 (av1_nvenc).
Hence, it would not make sense for the user to do this one way for AV1 and
another way for HEVC.
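
For completeness, enabling this from the libavcodec API would be a minimal
sketch along these lines (assuming the option keeps the name "highbitdepth"
proposed in this patch, matching av1_nvenc; error handling trimmed):

#include <libavcodec/avcodec.h>
#include <libavutil/opt.h>

/* Minimal sketch: open hevc_nvenc for 8-bit input, encoded as 10-bit. */
static AVCodecContext *open_hevc_nvenc_highbitdepth(int w, int h)
{
    const AVCodec *codec = avcodec_find_encoder_by_name("hevc_nvenc");
    AVCodecContext *ctx;

    if (!codec)
        return NULL;

    ctx = avcodec_alloc_context3(codec);
    if (!ctx)
        return NULL;

    ctx->width     = w;
    ctx->height    = h;
    ctx->time_base = (AVRational){ 1, 25 };
    ctx->pix_fmt   = AV_PIX_FMT_YUV420P;   /* 8-bit input */

    /* Private encoder option proposed in this patch; the same switch
     * already exists on av1_nvenc. */
    av_opt_set_int(ctx->priv_data, "highbitdepth", 1, 0);

    if (avcodec_open2(ctx, codec, NULL) < 0) {
        avcodec_free_context(&ctx);
        return NULL;
    }
    return ctx;
}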

Best regards,

Diego


On 19.04.24, 09:39, "Roman Arzumanyan"  wrote:

Thanks for the explanation, Timo!

I was hoping that the 8>10 bit up-conversion which happens in the driver
might bring some extra benefit, like the SDR > HDR conversion recently
presented by NVIDIA, or some other algorithm that is easier to keep
proprietary. Otherwise, although it is convenient in some use cases, it
doesn't look more tempting than, say, a similar 8>10 bit NPP up-conversion,
which should yield the same (presumably SoL, i.e. speed-of-light) performance.

Thu, 18 Apr 2024 at 16:32, Timo Rothenpieler <t...@rothenpieler.org>:
On 18/04/2024 14:29, Roman Arzumanyan wrote:
> Hi Diego,
> Asking for my own education.
>
> As you've explained, the 8 > 10 bit conversion happens within the
> driver; that's understandable.
> But how does it influence the output? Does it perform some sort of
> proprietary SDR > HDR conversion under the hood that maps the ranges?
> What's going to be the user-observable difference between these two scenarios?
> 1) 8 bit input > HEVC 8 bit profile > 8 bit HEVC output
> 2) 8 bit input > 10 bit up conversion > HEVC 10 bit profile > 10 bit
> HEVC output
>
> Better visual quality? Smaller compressed file size?
> In other words, what's the purpose of this feature besides enabling a new
> Video Codec SDK capability?

Video codecs tend to be more efficient at 10 bit, even if it's just 8-bit
content that's been up-converted to 10 bit.
I.e. yes, it'll (or can, at least; not sure if it's a given) produce
smaller/higher-quality output for the same input.

As for the exact reason, I can't explain it, but it's a well-known concept.



Re: [FFmpeg-devel] [PATCH] avcodec/nvenc: High bit depth encoding for HEVC

2024-04-19 Thread Roman Arzumanyan
Thanks for the explanation, Timo!

I was hoping that the 8>10 bit up-conversion which happens in the driver
might bring some extra benefit, like the SDR > HDR conversion recently
presented by NVIDIA, or some other algorithm that is easier to keep
proprietary. Otherwise, although it is convenient in some use cases, it
doesn't look more tempting than, say, a similar 8>10 bit NPP up-conversion,
which should yield the same (presumably SoL, i.e. speed-of-light) performance.

Thu, 18 Apr 2024 at 16:32, Timo Rothenpieler:

> On 18/04/2024 14:29, Roman Arzumanyan wrote:
> > Hi Diego,
> > Asking for my own education.
> >
> > As you've explained, the 8 > 10 bit conversion happens within the
> > driver; that's understandable.
> > But how does it influence the output? Does it perform some sort of
> > proprietary SDR > HDR conversion under the hood that maps the ranges?
> > What's going to be the user-observable difference between these two scenarios?
> > 1) 8 bit input > HEVC 8 bit profile > 8 bit HEVC output
> > 2) 8 bit input > 10 bit up conversion > HEVC 10 bit profile > 10 bit
> > HEVC output
> >
> > Better visual quality? Smaller compressed file size?
> > In other words, what's the purpose of this feature besides enabling a new
> > Video Codec SDK capability?
>
> Video codecs tend to be more efficient at 10 bit, even if it's just 8-bit
> content that's been up-converted to 10 bit.
> I.e. yes, it'll (or can, at least; not sure if it's a given) produce
> smaller/higher-quality output for the same input.
>
> As for the exact reason, I can't explain it, but it's a well-known concept.
>


Re: [FFmpeg-devel] [PATCH] avcodec/nvenc: High bit depth encoding for HEVC

2024-04-18 Thread Timo Rothenpieler

On 18/04/2024 14:29, Roman Arzumanyan wrote:

Hi Diego,
Asking for my own education.

As you've explained, the 8 > 10 bit conversion happens within the
driver; that's understandable.
But how does it influence the output? Does it perform some sort of
proprietary SDR > HDR conversion under the hood that maps the ranges?
What's going to be the user-observable difference between these two scenarios?

1) 8 bit input > HEVC 8 bit profile > 8 bit HEVC output
2) 8 bit input > 10 bit up conversion > HEVC 10 bit profile > 10 bit 
HEVC output


Better visual quality? Smaller compressed file size?
In other words, what's the purpose of this feature besides enabling a new
Video Codec SDK capability?


Video codecs tend to be more efficient at 10 bit, even if it's just 8-bit
content that's been up-converted to 10 bit.
I.e. yes, it'll (or can, at least; not sure if it's a given) produce
smaller/higher-quality output for the same input.


As for the exact reason, I can't explain it, but it's a well-known concept.


Re: [FFmpeg-devel] [PATCH] avcodec/nvenc: High bit depth encoding for HEVC

2024-04-18 Thread Roman Arzumanyan
Hi Diego,
Asking for my own education.

As you've explained, the 8 > 10 bit conversion happens within the
driver; that's understandable.
But how does it influence the output? Does it perform some sort of
proprietary SDR > HDR conversion under the hood that maps the ranges?
What's going to be the user-observable difference between these two scenarios?
1) 8 bit input > HEVC 8 bit profile > 8 bit HEVC output
2) 8 bit input > 10 bit up conversion > HEVC 10 bit profile > 10 bit HEVC
output

Better visual quality? Smaller compressed file size?
In other words, what's the purpose of this feature besides enabling a new
Video Codec SDK capability?

Thu, 18 Apr 2024 at 13:44, Diego Felix de Souza via ffmpeg-devel <ffmpeg-devel@ffmpeg.org>:

> Hi Timo,
>
> Thank you for your review. Please check my answers below.
>
> Best regards,
>
> Diego
>
> On 17.04.24, 16:27, "Timo Rothenpieler"  wrote:
>
> On 15/04/2024 16:39, Diego Felix de Souza via ffmpeg-devel wrote:
> > From: Diego Felix de Souza 
> >
> > Adding 10-bit encoding support for HEVC if the input is 8-bit. In
> > case of 8-bit input content, NVENC performs an internal CUDA 8 to
> > 10-bit conversion of the input prior to encoding. Currently, only
> > AV1 supports encoding 8-bit content as 10-bit.
>
> I'm not sure about this one.
> Since it's just a "SW", or rather CUDA-based, conversion, this job could
> also be done by scale_cuda, outside of some niche formats it doesn't
> support yet.
> Which would also allow for more consistent command lines across versions.
>
>
> Although it is a software-based solution, the conversion is integrated
> with other kernels inside the driver. In this way, it demands fewer memory
> accesses and achieves better Streaming Multiprocessor (SM) utilization,
> leading to improved performance and lower latency compared to the
> scale_cuda approach. Moreover, the proposed approach consumes less overall
> memory, since it only requires an 8-bit-per-channel frame as input and no
> extra 10-bit frames.
>
>
> > Signed-off-by: Diego Felix de Souza 
> > ---
> >   libavcodec/nvenc.c  | 8 
> >   libavcodec/nvenc_hevc.c | 1 +
> >   2 files changed, 5 insertions(+), 4 deletions(-)
> >
> > diff --git a/libavcodec/nvenc.c b/libavcodec/nvenc.c
> > index 794174a53f..c302cc7dc4 100644
> > --- a/libavcodec/nvenc.c
> > +++ b/libavcodec/nvenc.c
> > @@ -514,7 +514,7 @@ static int nvenc_check_capabilities(AVCodecContext *avctx)
> >   }
> >
> >   ret = nvenc_check_cap(avctx, NV_ENC_CAPS_SUPPORT_10BIT_ENCODE);
> > -if (IS_10BIT(ctx->data_pix_fmt) && ret <= 0) {
> > +if ((IS_10BIT(ctx->data_pix_fmt) || ctx->highbitdepth) && ret <= 0) {
> >   av_log(avctx, AV_LOG_WARNING, "10 bit encode not supported\n");
> >   return AVERROR(ENOSYS);
> >   }
> > @@ -1421,7 +1421,7 @@ static av_cold int nvenc_setup_hevc_config(AVCodecContext *avctx)
> >   }
> >
> >   // force setting profile as main10 if input is 10 bit
> > -if (IS_10BIT(ctx->data_pix_fmt)) {
> > +if (IS_10BIT(ctx->data_pix_fmt) || ctx->highbitdepth) {
>
> Won't this need to be guarded behind NVENC_HAVE_NEW_BIT_DEPTH_API as well?
> Or would this also work fine with older headers, by just setting this?
>
> For older SDK versions, the HEVC Main 10 profile would be selected, but
> the encoded bitstream would still be 8-bit. For the sake of consistency
> and clarity, I will put it behind NVENC_HAVE_NEW_BIT_DEPTH_API as you
> suggested.
>
> >   cc->profileGUID = NV_ENC_HEVC_PROFILE_MAIN10_GUID;
> >   avctx->profile = AV_PROFILE_HEVC_MAIN_10;
> >   }
> > @@ -1435,8 +1435,8 @@ static av_cold int nvenc_setup_hevc_config(AVCodecContext *avctx)
> >   hevc->chromaFormatIDC = IS_YUV444(ctx->data_pix_fmt) ? 3 : 1;
> >
> >   #ifdef NVENC_HAVE_NEW_BIT_DEPTH_API
> > -hevc->inputBitDepth = hevc->outputBitDepth =
> > -IS_10BIT(ctx->data_pix_fmt) ? NV_ENC_BIT_DEPTH_10 : NV_ENC_BIT_DEPTH_8;
> > +hevc->inputBitDepth = IS_10BIT(ctx->data_pix_fmt) ? NV_ENC_BIT_DEPTH_10 : NV_ENC_BIT_DEPTH_8;
> > +hevc->outputBitDepth = (IS_10BIT(ctx->data_pix_fmt) || ctx->highbitdepth) ? NV_ENC_BIT_DEPTH_10 : NV_ENC_BIT_DEPTH_8;
> >   #else
> >   hevc->pixelBitDepthMinus8 = IS_10BIT(ctx->data_pix_fmt) ? 2 : 0;
> >   #endif
> > diff --git a/libavcodec/nvenc_hevc.c b/libavcodec/nvenc_hevc.c
> > index b949cb1bd7..02e9c9c8eb 100644
> > --- a/libavcodec/nvenc_hevc.c
> > +++ b/libavcodec/nvenc_hevc.c
> > @@ -183,6 +183,7 @@ static const AVOption options[] = {
> >   { "fullres",  "Two Pass encoding is enabled where first Pass
> is full resolution",
> >   0,
> AV_OPT_TYPE_CONST, { .i64 = NV_ENC_TWO_PASS_FULL_RESOLUTION },
>   0,  0,   VE, .unit =
> "multipass" },
> >   #endif
> > +{ "highbitdepth", "Enable 10 bit encode for 8 bit
> 

Re: [FFmpeg-devel] [PATCH] avcodec/nvenc: High bit depth encoding for HEVC

2024-04-18 Thread Diego Felix de Souza via ffmpeg-devel
Hi Timo,

Thank you for your review. Please check my answers below.

Best regards,

Diego

On 17.04.24, 16:27, "Timo Rothenpieler"  wrote:


On 15/04/2024 16:39, Diego Felix de Souza via ffmpeg-devel wrote:
> From: Diego Felix de Souza 
>
> Adding 10-bit encoding support for HEVC if the input is 8-bit. In
> case of 8-bit input content, NVENC performs an internal CUDA 8 to
> 10-bit conversion of the input prior to encoding. Currently, only
> AV1 supports encoding 8-bit content as 10-bit.

I'm not sure about this one.
Since it's just a "SW", or rather CUDA-based, conversion, this job could
also be done by scale_cuda, outside of some niche formats it doesn't
support yet.
Which would also allow for more consistent command lines across versions.


Although it is a software-based solution, the conversion is integrated with
other kernels inside the driver. In this way, it demands fewer memory
accesses and achieves better Streaming Multiprocessor (SM) utilization,
leading to improved performance and lower latency compared to the scale_cuda
approach. Moreover, the proposed approach consumes less overall memory, since
it only requires an 8-bit-per-channel frame as input and no extra 10-bit
frames.
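
As a back-of-envelope sketch of that memory argument (my own numbers, not
from the patch, at 1080p):

#include <stdio.h>

int main(void)
{
    const int w = 1920, h = 1080;
    const double mib = 1024.0 * 1024.0;

    /* NV12: 8-bit 4:2:0, 1.5 bytes per pixel (Y plane + interleaved UV). */
    double nv12 = w * h * 1.5 / mib;
    /* P010: 10-bit 4:2:0 stored in 16-bit words, 3 bytes per pixel. This
     * is the extra surface an external up-conversion (e.g. scale_cuda)
     * would have to allocate ahead of the encoder. */
    double p010 = w * h * 3.0 / mib;

    printf("NV12 frame: %.2f MiB\n", nv12);   /* ~2.97 MiB */
    printf("P010 frame: %.2f MiB\n", p010);   /* ~5.93 MiB */
    return 0;
}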


> Signed-off-by: Diego Felix de Souza 
> ---
>   libavcodec/nvenc.c  | 8 
>   libavcodec/nvenc_hevc.c | 1 +
>   2 files changed, 5 insertions(+), 4 deletions(-)
>
> diff --git a/libavcodec/nvenc.c b/libavcodec/nvenc.c
> index 794174a53f..c302cc7dc4 100644
> --- a/libavcodec/nvenc.c
> +++ b/libavcodec/nvenc.c
> @@ -514,7 +514,7 @@ static int nvenc_check_capabilities(AVCodecContext *avctx)
>   }
>
>   ret = nvenc_check_cap(avctx, NV_ENC_CAPS_SUPPORT_10BIT_ENCODE);
> -if (IS_10BIT(ctx->data_pix_fmt) && ret <= 0) {
> +if ((IS_10BIT(ctx->data_pix_fmt) || ctx->highbitdepth) && ret <= 0) {
>   av_log(avctx, AV_LOG_WARNING, "10 bit encode not supported\n");
>   return AVERROR(ENOSYS);
>   }
> @@ -1421,7 +1421,7 @@ static av_cold int nvenc_setup_hevc_config(AVCodecContext *avctx)
>   }
>
>   // force setting profile as main10 if input is 10 bit
> -if (IS_10BIT(ctx->data_pix_fmt)) {
> +if (IS_10BIT(ctx->data_pix_fmt) || ctx->highbitdepth) {

Won't this need to be guarded behind NVENC_HAVE_NEW_BIT_DEPTH_API as well?
Or would this also work fine with older headers, by just setting this?

For older SDK versions, the HEVC Main 10 profile would be selected, but the
encoded bitstream would still be 8-bit. For the sake of consistency and
clarity, I will put it behind NVENC_HAVE_NEW_BIT_DEPTH_API as you suggested.
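
A sketch of how the guarded hunk might look in v2 (exact form to be settled
in the actual patch):

#ifdef NVENC_HAVE_NEW_BIT_DEPTH_API
    /* With the new bit depth API the option can widen the profile. */
    if (IS_10BIT(ctx->data_pix_fmt) || ctx->highbitdepth) {
#else
    /* Older headers: only true 10-bit input selects Main 10. */
    if (IS_10BIT(ctx->data_pix_fmt)) {
#endif
        cc->profileGUID = NV_ENC_HEVC_PROFILE_MAIN10_GUID;
        avctx->profile = AV_PROFILE_HEVC_MAIN_10;
    }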

>   cc->profileGUID = NV_ENC_HEVC_PROFILE_MAIN10_GUID;
>   avctx->profile = AV_PROFILE_HEVC_MAIN_10;
>   }
> @@ -1435,8 +1435,8 @@ static av_cold int nvenc_setup_hevc_config(AVCodecContext *avctx)
>   hevc->chromaFormatIDC = IS_YUV444(ctx->data_pix_fmt) ? 3 : 1;
>
>   #ifdef NVENC_HAVE_NEW_BIT_DEPTH_API
> -hevc->inputBitDepth = hevc->outputBitDepth =
> -IS_10BIT(ctx->data_pix_fmt) ? NV_ENC_BIT_DEPTH_10 : NV_ENC_BIT_DEPTH_8;
> +hevc->inputBitDepth = IS_10BIT(ctx->data_pix_fmt) ? NV_ENC_BIT_DEPTH_10 : NV_ENC_BIT_DEPTH_8;
> +hevc->outputBitDepth = (IS_10BIT(ctx->data_pix_fmt) || ctx->highbitdepth) ? NV_ENC_BIT_DEPTH_10 : NV_ENC_BIT_DEPTH_8;
>   #else
>   hevc->pixelBitDepthMinus8 = IS_10BIT(ctx->data_pix_fmt) ? 2 : 0;
>   #endif
> diff --git a/libavcodec/nvenc_hevc.c b/libavcodec/nvenc_hevc.c
> index b949cb1bd7..02e9c9c8eb 100644
> --- a/libavcodec/nvenc_hevc.c
> +++ b/libavcodec/nvenc_hevc.c
> @@ -183,6 +183,7 @@ static const AVOption options[] = {
>   { "fullres",  "Two Pass encoding is enabled where first Pass is 
> full resolution",
>   0,  
>   AV_OPT_TYPE_CONST, { .i64 = NV_ENC_TWO_PASS_FULL_RESOLUTION },0,
>   0,   VE, .unit = 
> "multipass" },
>   #endif
> +{ "highbitdepth", "Enable 10 bit encode for 8 bit 
> input",OFFSET(highbitdepth),AV_OPT_TYPE_BOOL,  { .i64 = 0 }, 0, 1, VE },

Same question as above, does this always work, even without the new bit
depth API?
If not, it also needs the #ifdef.

Same as above: otherwise the option would give the wrong impression to the
user. I will put it behind NVENC_HAVE_NEW_BIT_DEPTH_API as well.
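
Likewise for the option table, so the flag is simply not exposed when
building against older SDK headers (sketch only):

#ifdef NVENC_HAVE_NEW_BIT_DEPTH_API
    /* Only compiled in when the SDK can actually honour it. */
    { "highbitdepth", "Enable 10 bit encode for 8 bit input",
        OFFSET(highbitdepth), AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, VE },
#endif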

>   #ifdef NVENC_HAVE_LDKFS
>   { "ldkfs","Low delay key frame scale; Specifies the Scene 
> Change frame size increase allowed in case of single frame VBV and CBR",
>   OFFSET(ldkfs),  
>   AV_OPT_TYPE_INT,   { .i64 = 0 }, 0, UCHAR_MAX, VE },
> --
> 2.34.1
>

Re: [FFmpeg-devel] [PATCH] avcodec/nvenc: High bit depth encoding for HEVC

2024-04-17 Thread Timo Rothenpieler

On 15/04/2024 16:39, Diego Felix de Souza via ffmpeg-devel wrote:

From: Diego Felix de Souza 

Adding 10-bit encoding support for HEVC if the input is 8-bit. In
case of 8-bit input content, NVENC performs an internal CUDA 8 to
10-bit conversion of the input prior to encoding. Currently, only
AV1 supports encoding 8-bit content as 10-bit.


I'm not sure about this one.
Since it's just a "SW", or rather CUDA-based, conversion, this job could
also be done by scale_cuda, outside of some niche formats it doesn't 
support yet.

Which would also allow for more consistent command lines across versions.
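
(For reference, that external route would presumably be something like a
"scale_cuda=format=p010le" step ahead of hevc_nvenc, p010le being a 10-bit
semi-planar format; I haven't verified which formats scale_cuda supports in
every release.)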


Signed-off-by: Diego Felix de Souza 
---
  libavcodec/nvenc.c  | 8 
  libavcodec/nvenc_hevc.c | 1 +
  2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/libavcodec/nvenc.c b/libavcodec/nvenc.c
index 794174a53f..c302cc7dc4 100644
--- a/libavcodec/nvenc.c
+++ b/libavcodec/nvenc.c
@@ -514,7 +514,7 @@ static int nvenc_check_capabilities(AVCodecContext *avctx)
  }

  ret = nvenc_check_cap(avctx, NV_ENC_CAPS_SUPPORT_10BIT_ENCODE);
-if (IS_10BIT(ctx->data_pix_fmt) && ret <= 0) {
+if ((IS_10BIT(ctx->data_pix_fmt) || ctx->highbitdepth) && ret <= 0) {
  av_log(avctx, AV_LOG_WARNING, "10 bit encode not supported\n");
  return AVERROR(ENOSYS);
  }
@@ -1421,7 +1421,7 @@ static av_cold int nvenc_setup_hevc_config(AVCodecContext *avctx)
  }

  // force setting profile as main10 if input is 10 bit
-if (IS_10BIT(ctx->data_pix_fmt)) {
+if (IS_10BIT(ctx->data_pix_fmt) || ctx->highbitdepth) {


Won't this need to be guarded behind NVENC_HAVE_NEW_BIT_DEPTH_API as well?
Or would this also work fine with older headers, by just setting this?


  cc->profileGUID = NV_ENC_HEVC_PROFILE_MAIN10_GUID;
  avctx->profile = AV_PROFILE_HEVC_MAIN_10;
  }
@@ -1435,8 +1435,8 @@ static av_cold int nvenc_setup_hevc_config(AVCodecContext *avctx)
  hevc->chromaFormatIDC = IS_YUV444(ctx->data_pix_fmt) ? 3 : 1;

  #ifdef NVENC_HAVE_NEW_BIT_DEPTH_API
-hevc->inputBitDepth = hevc->outputBitDepth =
-IS_10BIT(ctx->data_pix_fmt) ? NV_ENC_BIT_DEPTH_10 : NV_ENC_BIT_DEPTH_8;
+hevc->inputBitDepth = IS_10BIT(ctx->data_pix_fmt) ? NV_ENC_BIT_DEPTH_10 : NV_ENC_BIT_DEPTH_8;
+hevc->outputBitDepth = (IS_10BIT(ctx->data_pix_fmt) || ctx->highbitdepth) ? NV_ENC_BIT_DEPTH_10 : NV_ENC_BIT_DEPTH_8;
  #else
  hevc->pixelBitDepthMinus8 = IS_10BIT(ctx->data_pix_fmt) ? 2 : 0;
  #endif
diff --git a/libavcodec/nvenc_hevc.c b/libavcodec/nvenc_hevc.c
index b949cb1bd7..02e9c9c8eb 100644
--- a/libavcodec/nvenc_hevc.c
+++ b/libavcodec/nvenc_hevc.c
@@ -183,6 +183,7 @@ static const AVOption options[] = {
  { "fullres",  "Two Pass encoding is enabled where first Pass is full 
resolution",
  0,
AV_OPT_TYPE_CONST, { .i64 = NV_ENC_TWO_PASS_FULL_RESOLUTION },0,  
0,   VE, .unit = "multipass" },
  #endif
+{ "highbitdepth", "Enable 10 bit encode for 8 bit 
input",OFFSET(highbitdepth),AV_OPT_TYPE_BOOL,  { .i64 = 0 }, 0, 1, VE },


Same question as above, does this always work, even without the new bit 
depth API?

If not, it also needs the #ifdef.


  #ifdef NVENC_HAVE_LDKFS
  { "ldkfs","Low delay key frame scale; Specifies the Scene Change frame 
size increase allowed in case of single frame VBV and CBR",
  OFFSET(ldkfs),
AV_OPT_TYPE_INT,   { .i64 = 0 }, 0, UCHAR_MAX, VE },
--
2.34.1



[FFmpeg-devel] [PATCH] avcodec/nvenc: High bit depth encoding for HEVC

2024-04-15 Thread Diego Felix de Souza via ffmpeg-devel
From: Diego Felix de Souza 

Adding 10-bit encoding support for HEVC if the input is 8-bit. In
case of 8-bit input content, NVENC performs an internal CUDA 8 to
10-bit conversion of the input prior to encoding. Currently, only
AV1 supports encoding 8-bit content as 10-bit.

Signed-off-by: Diego Felix de Souza 
---
 libavcodec/nvenc.c  | 8 
 libavcodec/nvenc_hevc.c | 1 +
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/libavcodec/nvenc.c b/libavcodec/nvenc.c
index 794174a53f..c302cc7dc4 100644
--- a/libavcodec/nvenc.c
+++ b/libavcodec/nvenc.c
@@ -514,7 +514,7 @@ static int nvenc_check_capabilities(AVCodecContext *avctx)
     }

     ret = nvenc_check_cap(avctx, NV_ENC_CAPS_SUPPORT_10BIT_ENCODE);
-    if (IS_10BIT(ctx->data_pix_fmt) && ret <= 0) {
+    if ((IS_10BIT(ctx->data_pix_fmt) || ctx->highbitdepth) && ret <= 0) {
         av_log(avctx, AV_LOG_WARNING, "10 bit encode not supported\n");
         return AVERROR(ENOSYS);
     }
@@ -1421,7 +1421,7 @@ static av_cold int nvenc_setup_hevc_config(AVCodecContext *avctx)
     }

     // force setting profile as main10 if input is 10 bit
-    if (IS_10BIT(ctx->data_pix_fmt)) {
+    if (IS_10BIT(ctx->data_pix_fmt) || ctx->highbitdepth) {
         cc->profileGUID = NV_ENC_HEVC_PROFILE_MAIN10_GUID;
         avctx->profile = AV_PROFILE_HEVC_MAIN_10;
     }
@@ -1435,8 +1435,8 @@ static av_cold int nvenc_setup_hevc_config(AVCodecContext *avctx)
     hevc->chromaFormatIDC = IS_YUV444(ctx->data_pix_fmt) ? 3 : 1;

 #ifdef NVENC_HAVE_NEW_BIT_DEPTH_API
-    hevc->inputBitDepth = hevc->outputBitDepth =
-        IS_10BIT(ctx->data_pix_fmt) ? NV_ENC_BIT_DEPTH_10 : NV_ENC_BIT_DEPTH_8;
+    hevc->inputBitDepth = IS_10BIT(ctx->data_pix_fmt) ? NV_ENC_BIT_DEPTH_10 : NV_ENC_BIT_DEPTH_8;
+    hevc->outputBitDepth = (IS_10BIT(ctx->data_pix_fmt) || ctx->highbitdepth) ? NV_ENC_BIT_DEPTH_10 : NV_ENC_BIT_DEPTH_8;
 #else
     hevc->pixelBitDepthMinus8 = IS_10BIT(ctx->data_pix_fmt) ? 2 : 0;
 #endif
diff --git a/libavcodec/nvenc_hevc.c b/libavcodec/nvenc_hevc.c
index b949cb1bd7..02e9c9c8eb 100644
--- a/libavcodec/nvenc_hevc.c
+++ b/libavcodec/nvenc_hevc.c
@@ -183,6 +183,7 @@ static const AVOption options[] = {
     { "fullres",  "Two Pass encoding is enabled where first Pass is full resolution",
         0, AV_OPT_TYPE_CONST, { .i64 = NV_ENC_TWO_PASS_FULL_RESOLUTION }, 0, 0, VE, .unit = "multipass" },
 #endif
+    { "highbitdepth", "Enable 10 bit encode for 8 bit input", OFFSET(highbitdepth), AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, VE },
 #ifdef NVENC_HAVE_LDKFS
     { "ldkfs", "Low delay key frame scale; Specifies the Scene Change frame size increase allowed in case of single frame VBV and CBR",
         OFFSET(ldkfs), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, UCHAR_MAX, VE },
--
2.34.1
