Send Motion-user mailing list submissions to
        motion-user@lists.sourceforge.net

To subscribe or unsubscribe via the World Wide Web, visit
        https://lists.sourceforge.net/lists/listinfo/motion-user
or, via email, send a message with subject or body 'help' to
        motion-user-requ...@lists.sourceforge.net

You can reach the person managing the list at
        motion-user-ow...@lists.sourceforge.net

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Motion-user digest..."


Today's Topics:

   1. Re: Script on_movie_end does not run after OS update
      (Jack Christensen)
   2. Low latency realtime web interface view
      (Антон Пупков)


----------------------------------------------------------------------

Message: 1
Date: Mon, 31 Jan 2022 21:56:41 -0500
From: Jack Christensen <christensen.jac...@gmail.com>
To: motion-user@lists.sourceforge.net
Subject: Re: [Motion-user] Script on_movie_end does not run after OS
        update
Message-ID: <5ef45bc7-e86f-abce-1557-710c35326...@gmail.com>
Content-Type: text/plain; charset="utf-8"; Format="flowed"

I found the issue. My script uses the USER environment variable to build 
some path names, and it is apparently now empty. I've hard-coded it as a 
workaround. I wonder if the updates were a fix for CVE-2021-4034, and 
whether they're working as intended.

When I run the script from the command line, all is well and USER is 
properly populated. But when Motion runs the script, then USER is empty. 
Also I found that if cron runs the script, USER is empty.
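
A more robust fix than hard-coding, in case anyone else hits this: derive 
the name from the effective UID instead of trusting the environment. A 
minimal sketch (the ARCHIVE_DIR variable is just an illustration, not my 
actual script):

```shell
#!/bin/sh
# USER can be empty when the script is started by Motion or cron,
# so fall back to resolving the login name from the effective UID.
USER="${USER:-$(id -un)}"
ARCHIVE_DIR="/home/$USER/motion-output"   # hypothetical path built from USER
echo "running as $USER, syncing $ARCHIVE_DIR"
```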

On 1/31/22 20:28, Ronnie McMaster wrote:
> Does the script still work when run manually?
>
> On Mon, Jan 31, 2022 at 6:19 PM Jack Christensen 
> <christensen.jac...@gmail.com> wrote:
>
>     Hello all,
>
>     I have three Raspberry Pi Zero W machines with Pi cameras running
>     RasPi
>     OS Buster Lite. Today I updated the OS (apt update; apt full-upgrade)
>     and after rebooting, the on_movie_end script no longer executes.
>     This
>     is a simple bash script that uses rsync to forward the video files
>     to an
>     archive machine. The script runs fine if I start it manually.
>
>     I was running Motion 4.3.2 so I installed 4.4.0 and the problem
>     persists. I increased the log level to 9 and it appears as though
>     Motion
>     is executing the script. I find no clues in syslog, auth.log, etc.
>     The
>     script writes to its own log file and there is no output there so
>     I am
>     sure it's not actually running.
>
>     Any help is appreciated!
>
>     Here is part of the Motion log showing the script (allegedly)
>     executing:
>
>     [1:ml1:CAM1] [NTC] [ALL] [Jan 31 20:00:33] motion_detected: Motion
>     detected - starting event 2
>     [1:ml1:CAM1] [INF] [EVT] [Jan 31 20:00:33] event_ffmpeg_newfile:
>     Source
>     FPS 4
>     [1:ml1:CAM1] [INF] [ENC] [Jan 31 20:00:33] ffmpeg_set_quality:
>     libx264
>     codec vbr/crf/bit_rate: 25
>     [1:ml1:CAM1] [NTC] [EVT] [Jan 31 20:00:33] event_newfile: Writing
>     movie
>     to file: /home/jack/motion-output/1-20220131-200032-002.mkv
>     [1:ml1:CAM1] [INF] [ALL] [Jan 31 20:00:33] mlp_tuning:
>     micro-lightswitch!
>     [1:ml1:CAM1] [DBG] [EVT] [Jan 31 20:01:37] exec_command: Executing
>     external command '/home/jack/sync.sh
>     /home/jack/motion-output/1-20220131-200032-002.mkv'
>     [1:ml1:CAM1] [DBG] [EVT] [Jan 31 20:01:37] event_closefile: Saved
>     movie
>     to file: /home/jack/motion-output/1-20220131-200032-002.mkv
>     [1:ml1:CAM1] [NTC] [ALL] [Jan 31 20:01:37] mlp_actions: End of event 2
>
>     Here are the updates applied today:
>
>     Start-Date: 2022-01-31  08:48:05
>     Commandline: apt full-upgrade
>     Requested-By: jack (1000)
>     Upgrade: libpolkit-gobject-1-0:armhf (0.105-25+rpt1,
>     0.105-25+rpt1+deb10u1), rpi-eeprom:armhf (13.3-1~buster,
>     13.5-1~buster),
>     libpolkit-agent-1-0:armhf (0.105-25+rpt1, 0.105-25+rpt1+deb10u1),
>     libpolkit-backend-1-0:armhf (0.105-25+rpt1, 0.105-25+rpt1+deb10u1),
>     policykit-1:armhf (0.105-25+rpt1, 0.105-25+rpt1+deb10u1)
>     End-Date: 2022-01-31  08:48:36
>
>     -- 
>     Jack Christensen
>     Sent from Linux Mint 20.1 using Mozilla Thunderbird
>
>
-------------- next part --------------
An HTML attachment was scrubbed...

------------------------------

Message: 2
Date: Tue, 1 Feb 2022 06:45:39 +0300
From: "Антон Пупков"
        <tosha101...@gmail.com>
To: motion-user@lists.sourceforge.net
Subject: [Motion-user] Low latency realtime web interface view
Message-ID: <1c58c058-c8e9-1624-9866-43b6b25cf...@gmail.com>
Content-Type: text/plain; charset="utf-8"; Format="flowed"

Greetings!


This message is a suggestion for improving the efficiency of the 
program.

I use Motion for video monitoring; the video device is an IP camera. 
The version of Motion that I use is 4.4.0. I have been using Motion 
since version 3.x.

Basically, I need Motion to detect motion, draw a box where motion is 
detected, and write a picture or movie. The grayscale privacy mask is a 
great feature and works fine. What matters most to me is the delay 
between a movement happening in the real world and its appearance in 
the monitoring stream of the Motion web interface (the so-called 
display delay). This latency has many causes.

As the array of pixels from the camera sensor travels to the Motion 
web interface through many electronic devices and processing stages, 
this latency accumulates: the more processing stages, and the heavier 
the processing, the larger the display delay. Yet there are moments 
when you need to see the detected movement very promptly.

Suppose movement is detected in a private house at night, and assume 
it is burglars. Their movement is detected by Motion, but, as I have 
verified experimentally, the display delay is between 2 and 10 
seconds. When the user rushes to the monitor to see what is going on, 
he sees what happened 2 to 10 seconds ago. That is a lot of time: the 
burglars may already be close to the user, or may already have done a 
lot of damage. The sooner the user finds out what happened, the more 
time he has to think and act. Of course, a video or image of the 
detected movement is recorded, but that is for the archive; it is not 
suitable for fast online viewing.

Let us briefly look at what makes up the display delay. The devices 
involved are the IP camera, the LAN, and the computer where Motion 
runs (detection, recording, and display). Assume the IP camera and LAN 
add no display delay, although in practice they can add up to 1-2 
seconds; that part can be influenced by configuring or selecting 
appropriate equipment (IP camera, LAN).

How can this delay be reduced in Motion itself? Experimentally, I 
found that the display delay is affected by the drawing of text and 
graphics on stream frames, and by these settings: fps, video or image 
compression, compression of the stream displayed in the Motion web 
interface, and scaling of the image obtained from the stream to the 
values given in the width and height parameters. I do not count the 
CPU time spent on motion detection itself, since effective, 
high-quality detection requires resources, although its consumption 
can be minimized for a given detection method.

I want to propose a configuration scenario in which Motion consumes a 
minimum of CPU time and thereby introduces a minimum display delay 
(for real-time monitoring). I found several video and image processing 
parameters that can serve this purpose: picture_quality, 
movie_extpipe, movie_extpipe_use, movie_passthrough, width, height, 
framerate, movie_quality, picture_type, picture_exif, movie_bps, 
movie_codec, locate_motion_style, the text_* parameters, and so on.



The picture_quality and movie_quality parameters. When these are set 
to 100%, presumably no quality manipulation of the stream is 
performed. But in fact, doesn't the same processing run at 100% as at 
lower values? If so, processing still occurs: no compression takes 
place, but CPU time is still consumed, just less than at values below 
100%. This is not the same as recording the stream as-is, without 
processing. That is, picture_quality (movie_quality) at 100% and 
picture_passthrough (movie_passthrough) do not do the same thing, even 
though neither appears to perform compression.


The picture_type and movie_codec parameters. The image and video 
encoding algorithms consume CPU time, determined both by the encoding 
format (jpg, png, bmp, and so on) and by the compression ratio within 
that format. The same applies to video: if the codec of the archived 
video differs from the codec the IP camera uses to encode the stream, 
transcoding occurs and additional CPU time is consumed.

The movie_extpipe and movie_extpipe_use parameters. These let you hand 
encoding off to an external program that may be more efficient, can be 
configured quite flexibly through its own flags, and can achieve the 
same compression with less CPU time.
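
A sketch of what that could look like (the conversion specifiers %w, 
%h, %fps and %f should be checked against the Motion documentation for 
your version, and the ffmpeg flags are only an example):

```
movie_extpipe_use on
# Hand raw frames to ffmpeg; the preset trades compression for CPU time.
movie_extpipe ffmpeg -y -f rawvideo -pix_fmt yuv420p -video_size %wx%h -framerate %fps -i pipe:0 -c:v libx264 -preset ultrafast %f.mp4
```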



The movie_passthrough parameter. The stream is written to disk in its 
original encoded form. This is the most efficient option for recording 
archived video: the only cost is the disk write itself. But if you 
need to draw timestamps or a rectangle, you cannot avoid processing 
(transcoding), and CPU consumption rises.


The width, height, and framerate parameters. The larger they are, the 
more pixels must be processed, and the more often, so the more CPU 
time is consumed.


The locate_motion_style and text_* parameters. These determine the 
rendering of graphics on the frames of recorded video and images. They 
consume a significant portion of CPU time; the heavier the stream, the 
more. To save CPU time, a way to disable these mechanisms completely 
should be provided, for example:
locate_motion_drawing off - completely bypasses (never calls) the 
procedure that draws the rectangle and graphics onto frames.
text_drawing off - the same, for text drawing.


I found that the movie_passthrough option also applies to picture 
output (this can be seen in the application logs when a picture with 
movement is saved). What exactly I would like to have in the program 
is the ability to set parameters such that the processing of incoming 
frames from the IP camera is minimal or absent altogether. This is 
needed for the low-performance hardware on which Motion is often used. 
If there is a need to reduce the size of the recorded video or image, 
it can be done directly in the IP camera's video settings.



Modern processors have 4 cores or more. Modern IP cameras provide 1, 
2, 3 streams or more. The first has configurable resolution (the 
high-resolution camera in Motion's settings). The second has a fixed 
resolution such as 720x480 for motion detection (netcam_url). The 
third is, say, 352x288 for live visual monitoring (netcam_url).



Motion implements a so-called high-resolution / normal-resolution 
camera mechanism. I activated this mode with the first stream as my 
high-resolution camera and the third stream as my normal-resolution 
camera for live monitoring; all streams come from a single IP camera. 
But there is an unfortunate behavior. The resolution of the third 
stream is 352x288, while the first stream is, for example, 1920x1080. 
Motion saves images from the high-resolution camera at the resolution 
set in the width and height parameters. I set those according to the 
resolution of the first stream (the highres camera): width 1920, 
height 1080, and found that Motion stretches the third stream (the 
normalres camera) from 352x288 up to 1920x1080. Naturally, this 
consumes a significant amount of CPU time and adds display latency to 
the Motion web interface. Displaying at 352x288 requires far less CPU 
time than stretching to, and displaying at, 1920x1080.
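
To put a number on that, compare the pixel counts per frame at the two 
resolutions:

```python
# Pixels per frame at the two resolutions discussed above.
hi = 1920 * 1080   # main stream
lo = 352 * 288     # monitoring substream
print(hi, lo, round(hi / lo, 1))   # 2073600 101376 20.5
```

So stretching the substream to 1920x1080 means touching roughly 20 
times more pixels per frame than displaying it as-is.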

I had to use the same stream for both netcam_url and 
netcam_highres_url. With that, Motion correctly fulfills the 
highres/normres camera functionality: it detects movement on the 
normal camera and records from the highres camera, as documented.
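
For comparison, a sketch of the setup as I would have liked to run it 
(the RTSP URLs are placeholders; the point is that width and height 
must match the detection stream, not the highres one):

```
# Detect on the low-resolution substream, record from the main stream.
netcam_url         rtsp://192.168.1.10:554/stream2   # e.g. 720x480 substream
netcam_highres_url rtsp://192.168.1.10:554/stream1   # e.g. 1920x1080 main stream
width  720
height 480
movie_passthrough on   # write the encoded stream to disk without transcoding
```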

I propose abandoning the global width, height, and framerate 
parameters altogether. I noticed that there are new composite 
parameters, netcam_params and netcam_high_params. Those are exactly 
where the stream parameters should be specified: resolution, bitrate, 
framerate, actual stream resolution (width, height), I-frame interval, 
CBR/VBR, IP-camera codec-specific options, hardware or software 
decoder - the same stream parameters that are configured in the IP 
camera itself. In the logs I noticed that Motion does not always 
determine the fps (the "Unable determine source fps" error); this 
happens when the stream's maximum resolution is set (probably, at 
maximum resolution the IP camera omits the stream-parameter 
information).

In that case it is especially useful to specify the stream parameters 
directly in Motion's netcam_params parameter. The global framerate 
parameter should then move into the movie_* parameters, i.e. the 
framerate (and bitrate) at which to record the archived video; the 
parameter would become movie_framerate.


Let's go back to four CPU cores and three IP camera streams. As far as 
I know, modern programming practice separates work across processor 
cores/threads. This maps onto the main tasks Motion performs: 
recording archived video (images); motion detection; stream generation 
for live monitoring; and scripts plus splitting (demultiplexing the 
camera output three ways for three processor cores) and other 
operations. Use a separate processor core for each operation.

Expand the highres/normal-res camera functionality as follows:
- netcam_highres_url (uses the first IP camera stream, the highest 
resolution and bitrate, for recording archive video and images),
- netcam_mdetection_url (uses the second IP camera stream, at a 
resolution suitable for detection, e.g. 720x480),
- netcam_monitoring_url (uses the third IP camera stream, at the 
minimum resolution, for live visual control, with minimum CPU 
consumption and display delay).
The IP camera provides these streams completely independently, each at 
the optimal resolution for its task: archival video recording, motion 
detection, and visual monitoring. At the same time, it must remain 
possible to use the same stream for all three operations (for very 
cheap IP cameras, or other devices, that provide only a single 
stream).

Since the 352x288 stream arrives ready-made from the monitoring 
camera, there is no need to perform any processing on it. It can be 
passed through to the web interface as-is, which naturally saves CPU 
time and reduces the display delay. Likewise, the absence of any 
stream processing for motion detection also significantly reduces CPU 
cost and the display delay in the Motion web interface.

Video processing is mainly needed when recording archived video and 
images, when the stream settings in the IP camera are insufficient or 
not flexible enough (for example, when recording in a different codec, 
container, or resolution is needed). Accordingly, all the recording 
operations that preprocess frames according to Motion's video-quality 
parameters (plus stamping timestamps and detected-motion rectangles 
onto the recorded frames - quite a resource-intensive task) would be 
carried out on a dedicated processor core (the first core, for 
example, which works with the first IP camera stream).

Since the fourth processor core demultiplexes the common stream from 
the IP camera into frames of the first, second, and third streams 
synchronously across all three cores (which requires the same fps on 
all three streams from the IP camera), when motion is detected on the 
second stream, the second core "transmits a signal" to the first core 
to record the same frame (image), but in high resolution, to disk.

The third processor core, which works with the third IP camera stream, 
is loaded with almost nothing (except drawing the detected-motion 
indication or text onto frames, and only if the proposed 
locate_motion_drawing and text_drawing parameters are both set to 
"on"); the third stream is sent by Motion to the web interface as-is, 
without any processing at all.

That, too, reduces CPU consumption and display delay. Scaling on the 
web page (via HTML attributes) and decoding are performed by the 
browser itself. In addition, if web-stream parameters in the Motion 
configuration such as stream_maxrate, stream_quality, and stream_grey 
do not reduce CPU consumption, they are unnecessary.

The second processor core works with the second IP camera stream and 
is focused solely on motion detection (its main purpose). No 
processing of the stream (frame, framerate, bitrate, or resolution 
modification, compression, transcoding, etc.) is carried out there 
either, except what is actually necessary for detection.

The only nuance is drawing the rectangle or crosshair that points to 
the changed pixels (the detected motion). But even that drawing is not 
done on the second core. The second core detects the changed pixels 
and calculates the coordinates of the indication rectangle enclosing 
the area with the detected movement, then passes these coordinates to 
the first and third cores. Each of those cores, according to the 
resolution of its own stream, interpolates the rectangle coordinates: 
the first core draws the rectangle into the frames of the recorded 
video and images, and the third core draws it into the web-interface 
stream to indicate the detected movement. If an indication of detected 
movement is wanted (this setting would be separate for the first and 
third cores), the appropriate setting is turned on. Drawing the 
rectangle in the live web monitoring stream will, of course, increase 
the display delay.

Also, in the application logs I noticed that Motion tries to detect 
whether hardware decoding is available: vaapi, vdpau, cuda, and so on. 
This is pretty good stuff; it would be nice to be able to set the 
hardware decoder manually for each of the three streams separately. 
Hardware decoding is useful for all three tasks: archive video 
recording, detection, and the web-interface stream.

There is, it seems, the movie_passthrough option, which passes the 
stream from the IP camera as-is to be written to disk. But decoding is 
necessary to insert a timestamp and other text into the video or 
image, as well as to draw the indication box around the detected 
movement. As I understand it, for this the stream must be decoded, the 
graphics pixels inserted into the frame's pixel array, and then the 
frame encoded again. That costs a lot of CPU time, and a hardware 
decoder is a good solution here. In addition to distributing tasks 
across processor cores, hardware decoding can be used to reduce the 
display delay and free up capacity for more useful work.

The operation of the noise_level and noise_tune parameters, and their 
physical meaning, is not entirely clear to me. If I set noise_tune to 
off, the program freezes at startup.
-------------- next part --------------
An HTML attachment was scrubbed...

------------------------------



------------------------------

Subject: Digest Footer

_______________________________________________
Motion-user mailing list
Motion-user@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/motion-user


------------------------------

End of Motion-user Digest, Vol 188, Issue 2
*******************************************
