If you don't specify the -hwaccel option, full GPU transcoding is disabled: the decoded frames pass through system memory instead of staying on the GPU. For example:

1. Full GPU transcoding (NVDEC decode and NVENC encode, frames stay in GPU memory):

   $ ffmpeg -hwaccel cuvid -c:v h264_cuvid -i test.ts -c:v h264_nvenc ...

2. GPU transcoding via a memory copy (GPU decode and encode, but frames are copied through system memory):

   $ ffmpeg -c:v h264_cuvid -i test.ts -c:v h264_nvenc ...

3. CPU decoding and GPU encoding:

   $ ffmpeg -i test.ts -c:v h264_nvenc ...
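As a rough sketch of how the split I suggested earlier (quoted below) might look in practice, you could run the first 25 channels through the full GPU path and decode the rest on the CPU, then watch the NVDEC/NVENC utilization. The ch*.ts / out*.ts names, the -b:v 4M bitrate, and the channel counts are only placeholders for your own streams and settings; -nostdin just keeps the backgrounded jobs from fighting over the terminal:

   # full GPU path (NVDEC decode + NVENC encode) for the first 25 channels
   $ for i in $(seq 1 25); do
       ffmpeg -nostdin -hwaccel cuvid -c:v h264_cuvid -i ch$i.ts -c:v h264_nvenc -b:v 4M out$i.ts &
     done

   # CPU decode + NVENC encode for the remaining channels
   $ for i in $(seq 26 40); do
       ffmpeg -nostdin -i ch$i.ts -c:v h264_nvenc -b:v 4M out$i.ts &
     done

   # watch the enc/dec utilization columns while the jobs run
   $ nvidia-smi dmon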
On 8/24/18, Dennis Mungai <[email protected]> wrote:
> Is it possible to disable hwaccel decode explicitly in an FFmpeg instance?
>
> Say, -hwaccel none, such that no hardware-accelerated decoder instance is
> initialized on launch?
>
> On Fri, 24 Aug 2018 at 01:14, Yang Zhang <[email protected]> wrote:
>
>> Yes, use the -hwaccel cuvid option on 25 of the hd1080p channels to do full
>> hardware-accelerated transcoding, and use GPU encoding only for the other
>> channels. This should let you transcode more channels and fix the stutter
>> problem.
>>
>> You can use "nvidia-smi dmon" to check the usage of the NVDEC/NVENC chips.
>>
>> On Fri, Aug 24, 2018 at 6:02 AM Dennis Mungai <[email protected]> wrote:
>>
>>> So, to clarify:
>>>
>>> If you were to disable hwaccel decode (and rely solely on software-based
>>> decoding) BUT utilize NVENC on a system with a beefy processor and
>>> adequate RAM, you should be able to eliminate the bottleneck, correct?
>>>
>>> Because what you're implying (by the bottleneck at the decode side) would
>>> be a limitation in how FFmpeg handles filter performance (hwupload, to be
>>> exact), an issue that can be circumvented by eliminating hwaccel in decode.
>>>
>>> On Fri, 24 Aug 2018 at 00:57, Yang Zhang <[email protected]> wrote:
>>>
>>>> The problem is that even on a Tesla P100, which has 3 NVENC chips, there
>>>> is only 1 NVDEC chip, and that is very important: the bottleneck is on
>>>> the decoding side. This is why you get freezing or stuttering. Basically,
>>>> a Tesla P100 can perhaps encode 75 hd1080p channels, but it cannot decode
>>>> more than 25 hd1080p channels.
>>>>
>>>> On Fri, Aug 24, 2018 at 5:51 AM Dennis Mungai <[email protected]> wrote:
>>>>
>>>>> No.
>>>>>
>>>>> This is as tested on the Tesla P100.
>>>>>
>>>>> On Fri, 24 Aug 2018 at 00:26, Pedro Daniel Costa
>>>>> <[email protected]> wrote:
>>>>>
>>>>>> Ok.
>>>>>>
>>>>>> But with the TESLA models, did you have any issues over 40 streams?
>>>>>>
>>>>>> I am planning a budget project to run a minimum of 100 channels.
>>>>>>
>>>>>> -----Original Message-----
>>>>>> From: ffmpeg-user [mailto:[email protected]] On behalf of
>>>>>> Dennis Mungai
>>>>>> Sent: Thursday, 23 August 2018 13:54
>>>>>> To: FFmpeg user questions
>>>>>> Subject: [FFmpeg-user] A question on the Quadro P6000 and maximum
>>>>>> simultaneous NVENC encodes in FFmpeg
>>>>>>
>>>>>> Hello there,
>>>>>>
>>>>>> For users with this specific card (and FFmpeg installed), kindly
>>>>>> clarify the following:
>>>>>>
>>>>>> (a) How many simultaneous encoder sessions can you run on this card?
>>>>>>
>>>>>> (b) Have you run into any issues, such as stuttering and dropped
>>>>>> streams, with multiple concurrent encodes?
>>>>>>
>>>>>> I ask because of a recent case where adding more than ~40 concurrent
>>>>>> encodes would make the output "drop" and stutter massively, as if
>>>>>> FFmpeg itself was hanging, despite the GPU having more than enough
>>>>>> VRAM (24GB?)
>>>>>> and resources (under ~11% utilization according to nvidia-smi),
>>>>>> implying some sort of artificial limitation in effect despite the GPU
>>>>>> support matrix specifying otherwise:
>>>>>> https://developer.nvidia.com/video-encode-decode-gpu-support-matrix
>>>>>>
>>>>>> Your feedback on this is appreciated.
_______________________________________________
ffmpeg-user mailing list
[email protected]
http://ffmpeg.org/mailman/listinfo/ffmpeg-user

To unsubscribe, visit link above, or email
[email protected] with subject "unsubscribe".
