qingwli opened a new issue, #15423: URL: https://github.com/apache/dolphinscheduler/issues/15423
### Search before asking

- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.

### Description

## 1. Goal

Record user operation logs such as create, update, delete, run, and stop jobs, so that item state changes are easy to track.

### Previous Design

https://github.com/apache/dolphinscheduler/issues/5822

#### Realize Method

- **Publish and subscribe mode.** Highly code intrusive, but it is easier to format the different kinds of logs, and the resulting logs are more user-friendly.
- **AOP.** Less code intrusive, but it can only record API-level information: which API was requested, with what params, and the response. Not user-friendly.

The community decided to use the publish and subscribe mode, because we only need to save successful operations.

Another option is a **MyBatis interceptor**: also less code intrusive, and it can record the real SQL that was executed, but raw SQL is hard to understand at the user level.

<img width="534" alt="image" src="https://github.com/apache/dolphinscheduler/assets/20885366/9f693438-95e9-4e8b-86e2-c88d964d5ccf">

#### Schema Design

| User Name | Resource Type | Project Name | Operation Type | Time                |
| --------- | ------------- | ------------ | -------------- | ------------------- |
| admin     | PROJECT       | NewProject   | CREATE         | 2023-12-28 10:40:23 |
| admin     | USER          | NewUser      | CREATE         | 2023-12-28 10:40:23 |

## 2. Deficiencies

The recorded operation granularity is too coarse: it does not show what specific updates the user made.
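The publish and subscribe mode chosen above could be sketched in plain Java as follows. This is only an illustrative sketch: `AuditEvent`, `AuditSubscriber`, and `AuditPublisher` are hypothetical names, not the actual DolphinScheduler classes.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Hypothetical event describing one successful user operation.
record AuditEvent(String userName, String objectType, String objectName, String operationType) {}

// Subscribers receive audit events; one of them would persist the log row.
interface AuditSubscriber {
    void onEvent(AuditEvent event);
}

// Minimal publisher: API services publish an event only after the
// operation has succeeded, so only successful operations are recorded.
class AuditPublisher {
    private final List<AuditSubscriber> subscribers = new CopyOnWriteArrayList<>();

    void subscribe(AuditSubscriber subscriber) {
        subscribers.add(subscriber);
    }

    void publish(AuditEvent event) {
        for (AuditSubscriber subscriber : subscribers) {
            subscriber.onEvent(event);
        }
    }
}
```

A service would call `publish(...)` only after an operation succeeds, e.g. `publisher.publish(new AuditEvent("admin", "PROJECT", "NewProject", "CREATE"))`, which matches the decision to record successful operations only.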
## 3. Classification

### Project

- Project (Create, Update, Delete)
- Workflow (Create, Update, Delete, Import, Copy)
- Workflow Instance (Run, Stop, Kill)
- Task (Create, Update, Delete)
- Task Instance (Run, Stop, Kill)
- Schedule (Create, Update, Delete)
- ETL
- Flink Resource
- File (Create Folder, Update Folder, Delete Folder, Create File, Upload File, Delete File, Update File)
- UDF (Create Folder, Update Folder, Delete Folder, Create UDF, Upload File, Create UDF Function, Delete UDF, Update UDF)
- Task Group (Create, Update, Switch status)

### Datasource

- Datasource (Create, Update, Delete)

### Security

- Tenant (Create, Update, Delete)
- User (Create, Update, Delete)
- Alarm Group (Create, Update, Delete)
- Alarm Instance (Create, Update, Delete)
- Worker Group (Create, Update, Delete)
- Yarn Queue (Create, Update, Delete)
- Environment (Create, Update, Delete)
- Cluster (Create, Update, Delete)
- K8s Namespace (Create, Update, Delete)
- Token (Create, Update, Delete)

## 4. Extract

We use multiple levels of objects. For example:

- Level 1: Project
- Level 2: Workflow
- Level 3: Workflow Instance

or:

- Level 1: Resource
- Level 2: Folder
- Level 3: File

Demo:

- Object Type
  - Project
  - Resource
  - Datasource
  - Security
- Object Id/Enum
  - Workflow
  - Folder
- Detail Id/Enum
  - Workflow instance
  - File

## 5. Achieve Effect

- Step 1: Finish the table design and the basic framework.
- Step 2: Record the important impacting operations and extract the details of each change. For example, renaming project A to B will be shown as `A -> B`.
- Step 3: Record all impacting operations.
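The `A -> B` detail in Step 2 could be produced by a small field-diff helper. The sketch below is hypothetical (the class and method names are not from the codebase) and assumes the changed object's fields are compared as before/after string maps:

```java
import java.util.Map;
import java.util.StringJoiner;

// Hypothetical helper that renders an update detail such as
// "name: A -> B" by diffing an object's fields before and after a change.
class AuditDetailBuilder {

    static String diff(Map<String, String> before, Map<String, String> after) {
        StringJoiner detail = new StringJoiner(", ");
        for (Map.Entry<String, String> entry : after.entrySet()) {
            String old = before.get(entry.getKey());
            // Only record fields whose value actually changed.
            if (old != null && !old.equals(entry.getValue())) {
                detail.add(entry.getKey() + ": " + old + " -> " + entry.getValue());
            }
        }
        return detail.toString();
    }
}
```

For example, `diff(Map.of("name", "A"), Map.of("name", "B"))` yields `"name: A -> B"`, while unchanged fields produce no detail at all.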
## 6. New Schema Design

### Java Enum Design

```java
PROJECT(0, -1, "Project", true),
RESOURCE(1, -1, "Resource", false),
DATASOURCE(2, -1, "Datasource", true),
SECURITY(3, -1, "Security", false),
WORKFLOW(4, 0, "Workflow", true),
WORKFLOW_INSTANCE(5, 4, "Workflow instance", true),
WORKFLOW_INSTANCE1(6, 5, "Workflow instance1", true);

private final int code;
private final int parentCode; // supports multiple levels
private final String name;
private final boolean hasLogs; // if false, the search scope for this object type can only be `All Sub-Levels logs`
private int level; // project 0, workflow 1, workflow instance 2, task 2, task instance 3
```

## 7. UI Design

Search fields:

- User
- Object Type (Project, Security, ...)
- Scope (`All Sub-Levels logs` / `Current level logs`). If the Object Type's `hasLogs` is false, only `All Sub-Levels logs` can be chosen.
- Object Name
- Operation Type

| User Name | Parent Type | Parent Name | Object Type       | Object Name         | Operation Type | Detail          | Time                |
| --------- | ----------- | ----------- | ----------------- | ------------------- | -------------- | --------------- | ------------------- |
| admin     | PROJECT     | ds-test     | PROJECT           | ds-project          | Create         | v-project       | 2023-12-28 10:40:23 |
| admin     | Security    |             | USER              | NewUser             | Delete         |                 | 2023-12-28 10:40:23 |
| admin     | PROJECT     | ds-test     | Workflow          | ds-workflow         | Create         | ds-workflow     | 2023-12-28 10:40:23 |
| NewUser   | Workflow    | ds-workflow | Workflow-instance | Workflow-instance-1 | Run            |                 | 2023-12-28 10:40:23 |
| NewUser   | PROJECT     | ds-test     | Workflow          | ds-workflow         | Update         | add new task... | 2023-12-28 10:40:23 |
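The `parentCode` field is what makes the multi-level search scope work. Below is a runnable sketch, trimmed to a few entries, of how `All Sub-Levels logs` could collect every descendant type; the `codesIncludingSubLevels()` helper is a hypothetical illustration, not actual DolphinScheduler code:

```java
import java.util.ArrayList;
import java.util.List;

// Trimmed version of the object-type enum from the proposal above.
enum AuditObjectType {
    PROJECT(0, -1, "Project", true),
    SECURITY(3, -1, "Security", false),
    WORKFLOW(4, 0, "Workflow", true),
    WORKFLOW_INSTANCE(5, 4, "Workflow instance", true);

    private final int code;
    private final int parentCode; // -1 means a top-level object
    private final String name;
    private final boolean hasLogs;

    AuditObjectType(int code, int parentCode, String name, boolean hasLogs) {
        this.code = code;
        this.parentCode = parentCode;
        this.name = name;
        this.hasLogs = hasLogs;
    }

    boolean hasLogs() {
        return hasLogs;
    }

    // Collect this type's code plus every descendant type's code, so a
    // search scoped to `All Sub-Levels logs` can filter on the whole set.
    List<Integer> codesIncludingSubLevels() {
        List<Integer> codes = new ArrayList<>();
        collect(this.code, codes);
        return codes;
    }

    private static void collect(int code, List<Integer> out) {
        out.add(code);
        for (AuditObjectType type : values()) {
            if (type.parentCode == code) {
                collect(type.code, out);
            }
        }
    }
}
```

With these entries, `AuditObjectType.PROJECT.codesIncludingSubLevels()` walks PROJECT → WORKFLOW → WORKFLOW_INSTANCE and returns `[0, 4, 5]`, which a query can use in an `IN (...)` filter on `object_type`.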
### Database Design

```sql
CREATE TABLE `t_ds_audit_log` (
    `id` bigint(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
    `user_id` int(11) NOT NULL COMMENT 'user id',
    `object_type` int(11) NOT NULL COMMENT 'object type',
    `operation_id` int(11) NOT NULL COMMENT 'operation id',
    `operation_type` int(11) NOT NULL COMMENT 'operation type',
    `time` datetime DEFAULT CURRENT_TIMESTAMP COMMENT 'create time',
    `detail` text DEFAULT NULL COMMENT 'detail',
    PRIMARY KEY (`id`) /*T![clustered_index] CLUSTERED */
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin AUTO_INCREMENT=1;
```

### Are you willing to submit a PR?

- [X] Yes I am willing to submit a PR!

### Code of Conduct

- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
