weixiaonan1 commented on issue #14832:
URL:
https://github.com/apache/dolphinscheduler/issues/14832#issuecomment-1702831830
@ruanwenjun @zixi0825
I think we should discuss these features first before deciding whether to
create a new listener module or to develop based on the alert module.
First and foremost, let us agree on this consensus: we cannot guarantee that
no event/alert is ever lost, because providing that guarantee would make ds fragile.
1. Do we need **Global Alarm Types**? That means the alert instance
accepts all events by default, without being bound to a specific workflow.
    1. overhead: Adding a global alarm type will incur extra overhead. We
need to query the database for global alert instances before creating events.
For example, when a workflow starts, we query the database first, and only if
there is at least one global alert instance do we create a WorkflowStart event
and save it in the database.
    2. abstract interface: If a global alarm type is added, many events will
need to be stored and retrieved later. Do we need to design an interface for
saving and retrieving events? We could then store events in other places (like
Redis or another database) besides the current database. The save method will
be wrapped in a try-catch block, so that a storage failure does not affect the
normal execution of the workflow.
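A minimal sketch of the save/retrieve abstraction described above. All names here (`ListenerEvent`, `ListenerEventStore`, `InMemoryEventStore`, `EventPublisher`) are hypothetical, not existing DolphinScheduler classes; the point is that storage is pluggable and that a failed save is swallowed rather than propagated into workflow execution:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical event object created by the master server.
class ListenerEvent {
    final String type;
    final String workflowName;
    ListenerEvent(String type, String workflowName) {
        this.type = type;
        this.workflowName = workflowName;
    }
}

// Pluggable storage: implementations could back onto the current DS
// database, Redis, or another store.
interface ListenerEventStore {
    void save(ListenerEvent event) throws Exception;
    List<ListenerEvent> poll(int maxCount) throws Exception;
}

// Trivial in-memory implementation, just to make the sketch runnable.
class InMemoryEventStore implements ListenerEventStore {
    private final List<ListenerEvent> events = new ArrayList<>();
    public void save(ListenerEvent event) { events.add(event); }
    public List<ListenerEvent> poll(int maxCount) {
        int n = Math.min(maxCount, events.size());
        List<ListenerEvent> batch = new ArrayList<>(events.subList(0, n));
        events.subList(0, n).clear();
        return batch;
    }
}

class EventPublisher {
    private final ListenerEventStore store;
    EventPublisher(ListenerEventStore store) { this.store = store; }

    // The save is wrapped in try-catch so that a storage failure never
    // interrupts the workflow itself, matching the consensus above that
    // delivery is best-effort rather than guaranteed.
    boolean tryPublish(ListenerEvent event) {
        try {
            store.save(event);
            return true;
        } catch (Exception e) {
            // log and swallow: workflow execution continues
            return false;
        }
    }
}
```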
2. Do we need **more Alarm Event Types**? Such as
workflowAddEvent/workflowUpdateEvent/workflowDeleteEvent/workflowStartEvent, etc.
3. **Flexibility:** In the current alert module, the title and content are
determined when the master server creates an alert, so the message format is
the same for every alert plugin. Do we need to add some flexibility here?
Similar to KafkaListener, plugins could generate messages in different formats
and perform different processing logic for different event types.
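To illustrate the flexibility point: instead of the master fixing title/content up front, the raw event could be handed to each plugin, which formats it however its channel needs. A hedged sketch with hypothetical names (`ListenerPlugin`, `WeChatPlugin`, `KafkaPlugin`):

```java
import java.util.Map;

// Each plugin receives the raw event and decides its own message format.
interface ListenerPlugin {
    String format(String eventType, Map<String, String> eventData);
}

// A chat-style plugin might render a human-readable line...
class WeChatPlugin implements ListenerPlugin {
    public String format(String eventType, Map<String, String> eventData) {
        return "[" + eventType + "] workflow " + eventData.get("workflowName")
                + " on " + eventData.get("host");
    }
}

// ...while a Kafka-style plugin might emit machine-readable JSON for
// downstream consumers, and could also skip event types it does not care about.
class KafkaPlugin implements ListenerPlugin {
    public String format(String eventType, Map<String, String> eventData) {
        return "{\"eventType\":\"" + eventType + "\",\"workflowName\":\""
                + eventData.get("workflowName") + "\"}";
    }
}
```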
4. **Failed Alert Messages:** Should we resend alert messages that failed
to send?
    1. If we resend failed messages: to ensure that messages sent by the
same alert instance stay in order (e.g., workflowStartEvent should precede
workflowEndEvent) without affecting other instances, the current message
processing method needs to change. Each alert instance should process its
events in chronological order.
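The per-instance ordering requirement above can be sketched as one FIFO queue per alert instance: a failed send stays at the head of its own queue and is retried before later events, without blocking or reordering other instances. `PerInstanceDispatcher` is a hypothetical name, not an existing class:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

class PerInstanceDispatcher {
    // One FIFO queue per alert instance id.
    private final Map<Integer, Deque<String>> queues = new HashMap<>();

    void enqueue(int instanceId, String event) {
        queues.computeIfAbsent(instanceId, id -> new ArrayDeque<>()).addLast(event);
    }

    // Drain one instance in chronological order; on a send failure, stop
    // and keep the failed event at the head so it is retried before any
    // later event (workflowStartEvent must precede workflowEndEvent).
    List<String> drain(int instanceId, Predicate<String> sender) {
        List<String> sent = new ArrayList<>();
        Deque<String> queue = queues.getOrDefault(instanceId, new ArrayDeque<>());
        while (!queue.isEmpty()) {
            String event = queue.peekFirst();
            if (!sender.test(event)) {
                break; // leave the failed event queued for the next attempt
            }
            queue.pollFirst();
            sent.add(event);
        }
        return sent;
    }
}
```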
5. **Dynamic Plugin:** In the design of this listener module, we can upload a
custom plugin jar, which is dynamically loaded through URLClassLoader. We can
install/update/uninstall the plugin jar on a separate page in the
`Security Module`. If we develop based on the alert module, should we keep
this design? #14817 mentions it needs a page to support uploading jars of
custom task plugins, so it may be better to keep this design and extend it to
manage various types of dynamic plugins (alert plugins, task plugins, etc.).
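The install/update/uninstall lifecycle above can be sketched with `java.net.URLClassLoader`: install wraps the uploaded jar in its own class loader, and uninstall closes that loader so the jar file can be deleted or replaced. `PluginRegistry` is a hypothetical name, and error handling is omitted:

```java
import java.net.URL;
import java.net.URLClassLoader;
import java.util.HashMap;
import java.util.Map;

class PluginRegistry {
    private final Map<String, URLClassLoader> loaders = new HashMap<>();

    // Install: give each uploaded jar its own class loader so different
    // plugins (and different versions of one plugin) stay isolated.
    void install(String pluginName, URL[] jarUrls) {
        loaders.put(pluginName, new URLClassLoader(jarUrls, getClass().getClassLoader()));
    }

    Class<?> loadPluginClass(String pluginName, String className)
            throws ClassNotFoundException {
        return loaders.get(pluginName).loadClass(className);
    }

    // Uninstall: closing the loader releases the underlying jar file, so
    // the Security Module page can then delete or replace it on disk.
    void uninstall(String pluginName) throws Exception {
        URLClassLoader loader = loaders.remove(pluginName);
        if (loader != null) {
            loader.close();
        }
    }
}
```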