Send Motion-user mailing list submissions to motion-user@lists.sourceforge.net
To subscribe or unsubscribe via the World Wide Web, visit
https://lists.sourceforge.net/lists/listinfo/motion-user
or, via email, send a message with subject or body 'help' to
motion-user-requ...@lists.sourceforge.net

You can reach the person managing the list at
motion-user-ow...@lists.sourceforge.net

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Motion-user digest..."

Today's Topics:

   1. Re: Python object-detection post-hook repo (Greg Thomas)
   2. correct aspect ratio with passthrough (Joshua Moninger)
   3. Re: Python object-detection post-hook repo (Yash Sondhi)
   4. Re: Python object-detection post-hook repo (Yash Sondhi)

----------------------------------------------------------------------

Message: 1
Date: Sun, 15 Aug 2021 19:58:37 +1200
From: Greg Thomas <geete...@gmail.com>
To: Motion discussion list <motion-user@lists.sourceforge.net>
Subject: Re: [Motion-user] Python object-detection post-hook repo
Message-ID: <cage5y2dr+dmxtfxbja95v4gnzkztwqg3crxuuhjxjhdyxqk...@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

Hi Peter,

I did something similar: I'm running a Pi4 with the Coral TPU. I send the
images rather than the movies (publishing the filename) to the Pi via
MQTT, and I built a small server application to handle multiple cameras,
using the COCO SSD MobileNet model from the Google TPU example site. I
also limited the objects detected to those I'm interested in, so only a
subset of the standard 90 in the COCO set.

It works pretty well generally speaking, but I'm having difficulty with
night-time images - does your setup handle night-time images/movies
reliably?

Regards

On Fri, 13 Aug 2021, 8:13 am Peter Torelli, <peter.j.tore...@gmail.com>
wrote:

> A few months ago I asked this list if there was a way to natively
> perform object detection inside the main motion code.
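[Editor's sketch] The MQTT hand-off Greg describes could look roughly like
this. The broker host, topic name, and detect() stub are assumptions on my
part (Greg hasn't shared his code); the listener half needs the paho-mqtt
package:

```python
# Sketch of the hand-off Greg describes: motion publishes each saved
# picture's filename over MQTT; a listener on the Pi4 runs detection and
# keeps only an allow-listed subset of the 90 COCO classes.
ALLOWED = {"person", "car", "dog"}   # your subset of COCO's 90 classes

def filter_detections(detections, min_score=0.5):
    """Keep only allow-listed (label, score) pairs above a confidence floor."""
    return [(label, score) for label, score in detections
            if label in ALLOWED and score >= min_score]

def run_listener(broker="pi4.local", topic="cameras/pics"):
    # Wiring sketch only (paho-mqtt 1.x style API); detect() would be your
    # SSD MobileNet inference on the Coral TPU.
    import paho.mqtt.client as mqtt

    def on_message(client, userdata, msg):
        filename = msg.payload.decode()
        hits = filter_detections(detect(filename))  # detect(): your inference
        if hits:
            print(filename, hits)

    client = mqtt.Client()
    client.on_message = on_message
    client.connect(broker)
    client.subscribe(topic)
    client.loop_forever()
```

The nice property of publishing filenames rather than image bytes is that
the broker traffic stays tiny and the Pi only reads frames it will actually
score.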
> Since that isn't feasible, and since I didn't get a response with
> existing projects, I wrote a quick post-processor that uses SSD
> MobileNet trained on COCO, running on the Google Coral USB Accelerator.
> I got tired of wrangling the DeepStream SDK additions to GStreamer on my
> Xavier AGX, so this is a nice compromise.
>
> https://github.com/petertorelli/motion-nnet
>
> Basically it is a server that analyzes the component pictures of an
> event movie, and if it detects one of the 90 classes, it moves that MP4
> to another folder (or you could push it to the cloud, or whatever). It
> cuts back on having to look at lots of shadows during the day, or bugs
> and rain at night. The server is triggered by a call from motion after a
> movie-write event to the client Python script, which takes %f as the
> input parameter. The files must start with %v for the event so that the
> server knows which pictures to scan.
>
> Hopefully someone finds this useful, as I'm glad to not have to sort
> through hundreds of bogus movies. (Caveat: SSD MobileNet isn't as good
> as a big ResNet with classical feature extraction, so you won't get
> everything... this is just a first shot at cutting back on noise.)
>
> Peter
>
> _______________________________________________
> Motion-user mailing list
> Motion-user@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/motion-user
> https://motion-project.github.io/
>
> Unsubscribe: https://lists.sourceforge.net/lists/options/motion-user

-------------- next part --------------
An HTML attachment was scrubbed...

------------------------------

Message: 2
Date: Sun, 15 Aug 2021 11:32:08 +0300
From: Joshua Moninger <joshnig...@yandex.com>
To: motion-user@lists.sourceforge.net
Subject: [Motion-user] correct aspect ratio with passthrough
Message-ID: <20375681629016...@sas1-767e5596d8d7.qloud-c.yandex.net>
Content-Type: text/plain

Hello.

Motion is working well with movie_passthrough on.
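[Editor's sketch] To make the trigger Peter describes concrete: motion's
on_movie_end hook can call a small client with %f (the movie's full path),
while %v (the event number) prefixes the picture filenames. The script name,
host, port, and line-based protocol below are hypothetical; Peter's repo may
use a different transport:

```python
#!/usr/bin/env python3
# Hypothetical hook client for the flow Peter describes.
# In motion.conf:  on_movie_end /usr/local/bin/nnet-client.py %f
import socket
import sys

def message_for(path):
    # Newline-terminated movie path; the server side is assumed to read
    # one path per line, then scan that event's %v-prefixed pictures.
    return path.encode() + b"\n"

def notify(path, host="127.0.0.1", port=9999):
    # Host/port are assumptions for this sketch.
    with socket.create_connection((host, port)) as conn:
        conn.sendall(message_for(path))

if __name__ == "__main__" and len(sys.argv) > 1:
    notify(sys.argv[1])   # motion substitutes %f here
```

Keeping the hook this thin means motion never blocks on inference; the
server can queue events and score the pictures at its own pace.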
Our IP cameras' D1 resolution is a bit narrower than it should be:
704x576 vs 768x576. How can I add

    -bsf:v "h264_metadata=sample_aspect_ratio=4/3"

or -aspect 4:3 to the ffmpeg process?
https://superuser.com/questions/907933/correct-aspect-ratio-without-re-encoding-video-file

The storage medium is a USB flash drive, hence the desire to avoid
additional writes.

Thank you.

------------------------------

Message: 3
Date: Sun, 15 Aug 2021 05:27:29 -0400
From: Yash Sondhi <ysond...@fiu.edu>
To: Motion discussion list <motion-user@lists.sourceforge.net>
Subject: Re: [Motion-user] Python object-detection post-hook repo
Message-ID: <capzyv8jsm01z0agj4rubzqsdmayi9jannugmuywh73cvgtx...@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

Hi Peter,

This is super useful; I have been using motion to monitor the activity of
insects in the field and have the same issue. Would the script also work
for images? Though I guess I could stitch the images into short movies
after the fact.

Cheers

On Thu, 12 Aug 2021, 16:11 Peter Torelli, <peter.j.tore...@gmail.com>
wrote:

> A few months ago I asked this list if there was a way to natively
> perform object detection inside the main motion code. Since that isn't
> feasible, and since I didn't get a response with existing projects, I
> wrote a quick post-processor that uses SSD MobileNet trained on COCO,
> running on the Google Coral USB Accelerator. I got tired of wrangling
> the DeepStream SDK additions to GStreamer on my Xavier AGX, so this is a
> nice compromise.
>
> https://github.com/petertorelli/motion-nnet
>
> Basically it is a server that analyzes the component pictures of an
> event movie, and if it detects one of the 90 classes, it moves that MP4
> to another folder (or you could push it to the cloud, or whatever). It
> cuts back on having to look at lots of shadows during the day, or bugs
> and rain at night.
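[Editor's sketch] On Joshua's aspect-ratio question above: the remux-only
approach from the linked superuser answer avoids re-encoding, so the only
extra write is the output file. One caveat worth checking: h264_metadata's
sample_aspect_ratio sets the *sample* (pixel) aspect ratio, and for 704x576
displayed as 768x576 that works out to 12/11 (768/704), while 4/3 is the
*display* aspect ratio. A sketch that builds the command (untested against
a real camera file):

```python
# Build an ffmpeg remux command that rewrites the H.264 sample aspect
# ratio without re-encoding (-c copy): SAR 12/11 makes a 704x576 stream
# display as 768x576.
import subprocess

def remux_cmd(src, dst, sar="12/11"):
    return ["ffmpeg", "-i", src,
            "-c", "copy",   # stream copy: no re-encode, minimal writes
            "-bsf:v", f"h264_metadata=sample_aspect_ratio={sar}",
            dst]

# To run it:  subprocess.run(remux_cmd("in.mp4", "out.mp4"), check=True)
```

Note this still writes a second file; rewriting the SPS truly in place is
not something ffmpeg's CLI offers.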
> The server is triggered by a call from motion after a movie-write event
> to the client Python script, which takes %f as the input parameter. The
> files must start with %v for the event so that the server knows which
> pictures to scan.
>
> Hopefully someone finds this useful, as I'm glad to not have to sort
> through hundreds of bogus movies. (Caveat: SSD MobileNet isn't as good
> as a big ResNet with classical feature extraction, so you won't get
> everything... this is just a first shot at cutting back on noise.)
>
> Peter

------------------------------

Message: 4
Date: Sun, 15 Aug 2021 05:28:39 -0400
From: Yash Sondhi <ysond...@fiu.edu>
To: Motion discussion list <motion-user@lists.sourceforge.net>
Subject: Re: [Motion-user] Python object-detection post-hook repo
Message-ID: <capzyv8lchdq-v+ocpmwkkaa2dpkswrfu3nqatuwmcnaah0k...@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

Hi Greg,

That sounds super useful; would you be able to share the code for that?

Cheers

On Sun, 15 Aug 2021, 03:59 Greg Thomas, <geete...@gmail.com> wrote:

> Hi Peter,
>
> I did something similar: I'm running a Pi4 with the Coral TPU. I send
> the images rather than the movies (publishing the filename) to the Pi
> via MQTT, and I built a small server application to handle multiple
> cameras, using the COCO SSD MobileNet model from the Google TPU example
> site. I also limited the objects detected to those I'm interested in,
> so only a subset of the standard 90 in the COCO set.
>
> It works pretty well generally speaking, but I'm having difficulty with
> night-time images - does your setup handle night-time images/movies
> reliably?
>
> Regards
>
> On Fri, 13 Aug 2021, 8:13 am Peter Torelli, <peter.j.tore...@gmail.com>
> wrote:
>
>> A few months ago I asked this list if there was a way to natively
>> perform object detection inside the main motion code. [...]
>>
>> Peter

-------------- next part --------------
An HTML attachment was scrubbed...

------------------------------

------------------------------

Subject: Digest Footer

_______________________________________________
Motion-user mailing list
Motion-user@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/motion-user

------------------------------

End of Motion-user Digest, Vol 182, Issue 8
*******************************************