Hi James,

I had a similar need a few years ago and wrote this:
https://github.com/chronomq/chronomq
We've been using it in production for more than two years now and the
project is fairly stable.

On Tuesday, October 20, 2020 at 1:40:59 AM UTC-7 gzh...@gmail.com wrote:

> Hello Uday and Jesper, 
>
> Thank you so much for the answer. I found another solution,
> https://github.com/gocraft/work, and will give it a try. This library 
> uses Redis to store future tasks. 
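>
> From its README, enqueuing a delayed job should look roughly like the 
> sketch below (untested; the job name, arguments, and Redis settings are 
> placeholders I made up):
>
> package main
>
> import (
>     "log"
>
>     "github.com/gocraft/work"
>     "github.com/gomodule/redigo/redis"
> )
>
> func main() {
>     // Redis pool backing the queue (placeholder address).
>     pool := &redis.Pool{
>         MaxActive: 5,
>         MaxIdle:   5,
>         Dial: func() (redis.Conn, error) {
>             return redis.Dial("tcp", "localhost:6379")
>         },
>     }
>
>     enqueuer := work.NewEnqueuer("reminders", pool)
>
>     // Schedule the job ~600 seconds from now, i.e. X minutes before
>     // the meeting opens (placeholder job name and arguments).
>     _, err := enqueuer.EnqueueIn("send_reminder", 600, work.Q{
>         "meeting_id": 42,
>         "channel":    "email",
>     })
>     if err != nil {
>         log.Fatal(err)
>     }
> }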
>
> Best Regards,
>
> James
>
> Uday Kiran Jonnala <juday...@gmail.com> wrote on Tue, Oct 20, 2020 at 9:29 AM:
>
>> For the same scenario, we use the following:
>> - Go cron to schedule the job execution
>> - For crash consistency of the program, a DB (as Jesper also mentioned) 
>> with a DB entry holding each job's schedule information
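>>
>> Roughly, the cron side can look like the sketch below. This assumes the
>> robfig/cron package (one of several "Go cron" options), and loadDueJobs /
>> sendNotification are placeholders for the DB-backed parts:
>>
>> package main
>>
>> import (
>>     "log"
>>
>>     "github.com/robfig/cron/v3"
>> )
>>
>> // Job is a placeholder for a row in the job-schedule table.
>> type Job struct {
>>     MeetingID int
>>     Email     string
>> }
>>
>> // loadDueJobs is a stub; in practice it queries the DB for jobs whose
>> // scheduled time falls inside the next tick window.
>> func loadDueJobs() []Job { return nil }
>>
>> // sendNotification is a stub for the SMS/email transport.
>> func sendNotification(j Job) { log.Printf("notify meeting %d", j.MeetingID) }
>>
>> func main() {
>>     c := cron.New()
>>     // Every minute, pick up due jobs and fire them. Because the schedule
>>     // lives in the DB, a crash only delays notifications by one tick.
>>     if _, err := c.AddFunc("@every 1m", func() {
>>         for _, j := range loadDueJobs() {
>>             sendNotification(j)
>>         }
>>     }); err != nil {
>>         log.Fatal(err)
>>     }
>>     c.Start()
>>     select {} // keep the process alive
>> }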
>>
>>
>> On Monday, October 19, 2020 at 2:33:50 AM UTC-7 jesper.lou...@gmail.com 
>> wrote:
>>
>>> On Mon, Oct 19, 2020 at 9:51 AM Zhihong GUO <gzh...@gmail.com> wrote:
>>>
>>>>
>>>> I am implementing a reminder system. The purpose is to provide an API 
>>>> for a client app to add a meeting, and X minutes before the meeting 
>>>> opens the reminder system sends a notification to the user by SMS or 
>>>> email. So I need something like: when a meeting is created, check its 
>>>> open time and schedule a job that sends the email or SMS just X minutes 
>>>> before the meeting opens. I checked goworker, but it seems there is no 
>>>> way to enqueue a "delayed" job. Any suggestions about the 
>>>> implementation?
>>>>
>>>>
>>> Not knowing your design criteria, I'm just going to make some 
>>> assumptions along the way:
>>>
>>> I'd use a database, probably PostgreSQL. It should serve you up to at 
>>> least something like 100k concurrent meetings managed at any point in 
>>> time. It'll break apart at an even larger scale, but chances are you can 
>>> rewrite with new knowledge if that ever happens. You simply have a table 
>>> tracking the interval of each meeting, and you can use this information 
>>> for a lot of things, including guarding against meeting conflicts and so 
>>> on. On the Go side, you have a job ticker (time.NewTicker(time.Minute)), 
>>> and each time it fires you look in the database to see whether any 
>>> meeting is about to start. PostgreSQL generally handles time/date 
>>> information well enough that you can let it do the work of quickly 
>>> finding the eligible meetings. You can then decide whether you want to 
>>> throw them on a channel internally and have another part of the system 
>>> responsible for sending out the emails, or whether you want to do it in 
>>> the job ticker loop. I'd probably go with the former, because you can 
>>> have a channel for each transport method you have (SMS, Email, Cloud 
>>> Messaging[1], etc.)
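>>>
>>> A rough sketch of that ticker loop is below, assuming a made-up
>>> meetings(id, email, starts_at, notified) schema and the lib/pq driver;
>>> the UPDATE ... RETURNING flips the flag in the same query so a meeting
>>> is only picked up once:
>>>
>>> package main
>>>
>>> import (
>>>     "database/sql"
>>>     "log"
>>>     "time"
>>>
>>>     _ "github.com/lib/pq"
>>> )
>>>
>>> // Notification is what the ticker loop hands off to a transport.
>>> type Notification struct {
>>>     MeetingID int
>>>     Email     string
>>> }
>>>
>>> func main() {
>>>     db, err := sql.Open("postgres",
>>>         "postgres://localhost/reminders?sslmode=disable")
>>>     if err != nil {
>>>         log.Fatal(err)
>>>     }
>>>
>>>     // One channel per transport; only email shown here.
>>>     emailCh := make(chan Notification, 64)
>>>     go func() {
>>>         for n := range emailCh {
>>>             log.Printf("email %s re meeting %d", n.Email, n.MeetingID)
>>>         }
>>>     }()
>>>
>>>     ticker := time.NewTicker(time.Minute)
>>>     for range ticker.C {
>>>         // Grab meetings starting within the next 10 minutes that we
>>>         // have not notified yet, flipping the flag in the same statement.
>>>         rows, err := db.Query(`
>>>             UPDATE meetings
>>>                SET notified = true
>>>              WHERE starts_at <= now() + interval '10 minutes'
>>>                AND NOT notified
>>>             RETURNING id, email`)
>>>         if err != nil {
>>>             log.Print(err)
>>>             continue
>>>         }
>>>         for rows.Next() {
>>>             var n Notification
>>>             if err := rows.Scan(&n.MeetingID, &n.Email); err != nil {
>>>                 log.Print(err)
>>>                 continue
>>>             }
>>>             emailCh <- n
>>>         }
>>>         rows.Close()
>>>     }
>>> }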
>>>
>>> In turn, the Go part of the system simply reacts to events on channels 
>>> and carries out the work. The scheduling parts are handled by the database. 
>>> I think this is a nice split of responsibility in the system, since it will 
>>> simplify both parts: the database doesn't know about transports and their 
>>> design, and the Go system doesn't have to worry about long-term persistence 
>>> of meetings.
>>>
>>> Rationale:
>>>
>>> * Databases can survive a system reboot or an application restart. You 
>>> probably want your meetings persistent. And you want your meetings to be 
>>> stored in a way such that you don't accidentally lose them.
>>> * It is a rather simple polling solution, and it is fast provided you 
>>> have the right indexes created on the database side.
>>> * You can use a partial index, created over the meetings for which you 
>>> have yet to send out notifications (sketched below). This severely cuts 
>>> the index size down.
>>> * One-shot notifications are probably OK for a 10-minute window. If one 
>>> fails to arrive, it becomes irrelevant in less than 10 minutes anyway.
>>> * The relational model is quite strong if you don't know where you are 
>>> going in the long run, as it tends to ensure good bounds on most queries. 
>>> As you learn more about your problem, you can look into switching once 
>>> your database reaches a pain threshold (which is a couple of terabytes 
>>> for Postgres, at the very least).
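>>>
>>> For the sketch above, such a partial index would look roughly like this
>>> (same made-up schema; adjust names to taste):
>>>
>>> package main
>>>
>>> import (
>>>     "database/sql"
>>>     "log"
>>>
>>>     _ "github.com/lib/pq"
>>> )
>>>
>>> func main() {
>>>     db, err := sql.Open("postgres",
>>>         "postgres://localhost/reminders?sslmode=disable")
>>>     if err != nil {
>>>         log.Fatal(err)
>>>     }
>>>     // Partial index covering only rows still awaiting a notification;
>>>     // rows leave the index the moment notified flips to true, so it
>>>     // stays small no matter how much meeting history accumulates.
>>>     if _, err := db.Exec(`
>>>         CREATE INDEX IF NOT EXISTS meetings_pending_idx
>>>             ON meetings (starts_at)
>>>          WHERE NOT notified`); err != nil {
>>>         log.Fatal(err)
>>>     }
>>> }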
>>>
>>> Alternative:
>>>
>>> Use a database, but Go (Hah!) with a more cloud-DB-esque solution. This 
>>> could open up almost infinite scaling, but it often comes at a cost that 
>>> grows linearly with the size of your database and your query rate.
>>>
>>> [1] Consider Google's FCM solution or something similar for this route. 
>>> It demuxes the handling of different device types and transports for 
>>> you, and it takes care of token refreshing, canonical token updates, and 
>>> so on. It makes life much easier for everyone if you don't have to 
>>> struggle with the transport-specific APIs.
>>>

