zhangyue19921010 opened a new pull request #10608:
URL: https://github.com/apache/druid/pull/10608


   
   ### Description
   
   #### Conclusion
   1. If a Historical crashes while downloading/creating segment files and is then restarted with `lazyOnStart` enabled, no corrupted segments will be left on the Historical's disk.
   2. If a served segment is damaged (for example, by maliciously modifying `meta.smoosh`), a Historical node with `lazyOnStart` enabled will not detect the damage, and queries using that segment will fail at runtime.
   
   Here is the workflow of a Historical server loading segments.
   <img width="1783" alt="Screenshot 2020-11-26 5.30.08 PM" src="https://user-images.githubusercontent.com/69956021/100340728-94df2d80-3016-11eb-8a57-91fe7aeb2a3f.png">
   
   Condition 1: When a Historical server restarts, it tries to load the locally cached segments listed in `info_dir`.
   Condition 2: The Historical creates a file named after the corresponding segment ID in `info_dir` only after the segment files have been downloaded/created successfully.
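   To make the crash-safety property concrete, here is a minimal, hypothetical sketch of the marker-file pattern (class and method names are invented for illustration; this is not Druid's actual API): the `info_dir` marker is written only after the segment files are complete, so a crash mid-download leaves no marker behind and the partial segment is skipped on restart.

   ```java
   import java.io.IOException;
   import java.nio.file.Files;
   import java.nio.file.Path;

   // Hypothetical sketch of the marker-file pattern described above.
   // The marker in info_dir is created only AFTER the segment files are
   // fully downloaded, so a crash mid-download leaves no marker and the
   // half-written segment is not picked up by the lazy load on restart.
   public class MarkerFileSketch
   {
     public static void downloadSegment(Path segmentDir, Path infoDir, String segmentId) throws IOException
     {
       Files.createDirectories(segmentDir);
       // ... download/create the real segment files here; a crash at this
       // point leaves segmentDir partially written but no marker file ...
       Files.write(segmentDir.resolve("index.zip"), new byte[]{0});

       // Everything succeeded: now (and only now) create the marker.
       Files.createDirectories(infoDir);
       Files.createFile(infoDir.resolve(segmentId));
     }

     public static boolean shouldLazyLoadOnStart(Path infoDir, String segmentId)
     {
       // On restart, only segments with a marker file are lazily loaded.
       return Files.exists(infoDir.resolve(segmentId));
     }
   }
   ```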
   
   If the Historical crashed while downloading/creating segment files, before the `info_dir` file was created, it will not try to load that segment at startup. Instead, loading of the segment is triggered by the ZK curator, which is a non-lazy action.
   This non-lazy load throws an exception because of the damaged segment files:
   ```
   2020-11-26T05:37:45,121 INFO [SimpleDataSegmentChangeHandler-26] 
org.apache.druid.server.coordination.SegmentLoadDropHandler - Loading segment 
traffic__ops_feed__realtime__second__dev__lazy__test_2020-11-26T03:00:00.000Z_2020-11-26T04:00:00.000Z_2020-11-26T03:30:17.710Z_3
   2020-11-26T05:37:45,170 WARN [SimpleDataSegmentChangeHandler-26] 
org.apache.druid.server.coordination.BatchDataSegmentAnnouncer - No path to 
unannounce 
segment[traffic__ops_feed__realtime__second__dev__lazy__test_2020-11-26T03:00:00.000Z_2020-11-26T04:00:00.000Z_2020-11-26T03:30:17.710Z_3]
   2020-11-26T05:37:45,171 INFO [SimpleDataSegmentChangeHandler-26] 
org.apache.druid.server.SegmentManager - Told to delete a queryable on 
dataSource[traffic__ops_feed__realtime__second__dev__lazy__test] for 
interval[2020-11-26T03:00:00.000Z/2020-11-26T04:00:00.000Z] and 
version[2020-11-26T03:30:17.710Z] that I don't have.
   2020-11-26T05:37:45,171 INFO [SimpleDataSegmentChangeHandler-26] 
org.apache.druid.segment.loading.SegmentLoaderLocalCacheManager - Deleting 
directory[/var/druid/segment-cache/traffic__ops_feed__realtime__second__dev__lazy__test/2020-11-26T03:00:00.000Z_2020-11-26T04:00:00.000Z/2020-11-26T03:30:17.710Z/3]
   2020-11-26T05:37:45,172 WARN [SimpleDataSegmentChangeHandler-26] 
org.apache.druid.segment.loading.StorageLocation - 
SegmentDir[/var/druid/segment-cache/traffic__ops_feed__realtime__second__dev__lazy__test/2020-11-26T03:00:00.000Z_2020-11-26T04:00:00.000Z/2020-11-26T03:30:17.710Z/3]
 is not found under this location[/var/druid/segment-cache]
   2020-11-26T05:37:45,172 WARN [SimpleDataSegmentChangeHandler-26] 
org.apache.druid.server.coordination.SegmentLoadDropHandler - Unable to delete 
segmentInfoCacheFile[/var/druid/segment-cache/info_dir/traffic__ops_feed__realtime__second__dev__lazy__test_2020-11-26T03:00:00.000Z_2020-11-26T04:00:00.000Z_2020-11-26T03:30:17.710Z_3]
   2020-11-26T05:37:45,177 ERROR [SimpleDataSegmentChangeHandler-26] 
org.apache.druid.server.coordination.SegmentLoadDropHandler - Failed to load 
segment for dataSource: 
{class=org.apache.druid.server.coordination.SegmentLoadDropHandler, 
exceptionType=class org.apache.druid.segment.loading.SegmentLoadingException, 
exceptionMessage=Exception loading 
segment[traffic__ops_feed__realtime__second__dev__lazy__test_2020-11-26T03:00:00.000Z_2020-11-26T04:00:00.000Z_2020-11-26T03:30:17.710Z_3],
 segment=DataSegment{binaryVersion=9, 
id=traffic__ops_feed__realtime__second__dev__lazy__test_2020-11-26T03:00:00.000Z_2020-11-26T04:00:00.000Z_2020-11-26T03:30:17.710Z_3,
 loadSpec={type=>s3_zip, bucket=>pqm-druid-dev, 
key=>rtstorage/segments/traffic__ops_feed__realtime__second__dev__lazy__test/2020-11-26T03:00:00.000Z_2020-11-26T04:00:00.000Z/2020-11-26T03:30:17.710Z/3/630170ed-967b-4158-85a7-d2abc284e738/index.zip,
 S3Schema=>s3n}, dimensions=[video_cro_network_id, video_cro_network_name, 
distributor_network_id, distributor_network_name, profile_id, profile_name, 
is_active_device, is_filtered, service_type, platform, ad_unit_type, country, 
country_name, state, state_name, dma, dma_name, syscode, syscode_name, 
tv_network_id, tv_network_name, linear_campaign_type, spot_type], 
metrics=[count, req_ad_request, req_ad_request_with_video_slot, 
req_ad_request_with_midroll_slot, req_resp_time_lt_100ms, 
req_resp_time_lt_300ms, req_resp_time_lt_500ms, req_resp_time_lt_1500ms, 
req_resp_time_gt_1500ms, req_empty_response, 
req_empty_response_with_video_slot, req_empty_response_with_midroll_slot, 
req_err_no_profile, req_err_no_mac_address, req_err_no_signal_id, 
req_err_syscode_not_found, req_err_station_not_found, 
req_err_schedule_not_found, req_err_signal_no_bind_break, 
req_err_break_duration_invalid, req_err_break_no_schedule_ad, 
req_err_schedule_creative_validation_failed, slot_avails, slot_unfilled_avails, 
ad_delivered_ad_primary, ad_delivered_ad_fallback, ad_delivered_ad, 
ad_err_full_avail_no_variant_segment, ad_err_fallback_to_evergreen, 
ad_err_inactive_addressable_order, ack_ad_impression, ack_ad_complete, 
ack_ad_first_quartile, ack_ad_mid_point, ack_ad_third_quartile, ack_ad_click, 
ack_err_psn_message_validation_failed, ack_err_psn_asset_info_invalid, 
ack_err_psn_unknown_message_reference, ack_err_psn_timeout, 
ack_err_psn_insertion_point_time_exceeded, 
ack_err_psn_abnormal_termination_of_playout, ack_err_psn_bit_rate_mismatch, 
ack_err_adm_e_no_ad, ack_err_adm_e_timeout, ack_err_adm_e_security, 
ack_err_adm_e_3p_comp, ack_err_adm_e_unknown, ack_err_adm_e_io, 
ack_err_adm_e_no_render, ack_err_adm_e_parse, ack_err_adm_e_device_limit, 
ack_err_adm_e_render_init, ack_err_vast_100, ack_err_vast_202, 
ack_err_vast_300, ack_err_vast_301, ack_err_vast_302, ack_err_vast_303, 
ack_err_vast_400, ack_err_vast_402, ack_err_vast_403, ack_err_vast_900], 
shardSpec=NumberedShardSpec{partitionNum=3, partitions=0}, 
lastCompactionState=null, size=50079060}}
   org.apache.druid.segment.loading.SegmentLoadingException: Exception loading 
segment[traffic__ops_feed__realtime__second__dev__lazy__test_2020-11-26T03:00:00.000Z_2020-11-26T04:00:00.000Z_2020-11-26T03:30:17.710Z_3]
        at 
org.apache.druid.server.coordination.SegmentLoadDropHandler.loadSegment(SegmentLoadDropHandler.java:263)
 ~[druid-server-0.17.1.jar:0.17.1]
        at 
org.apache.druid.server.coordination.SegmentLoadDropHandler.addSegment(SegmentLoadDropHandler.java:307)
 ~[druid-server-0.17.1.jar:0.17.1]
        at 
org.apache.druid.server.coordination.SegmentLoadDropHandler$1.lambda$addSegment$1(SegmentLoadDropHandler.java:513)
 ~[druid-server-0.17.1.jar:0.17.1]
        at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
[?:1.8.0_221]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
[?:1.8.0_221]
        at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
 [?:1.8.0_221]
        at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
 [?:1.8.0_221]
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
[?:1.8.0_221]
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
[?:1.8.0_221]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_221]
   Caused by: java.lang.NullPointerException
        at 
org.apache.druid.common.utils.SerializerUtils.readString(SerializerUtils.java:61)
 ~[druid-core-0.17.1.jar:0.17.1]
        at 
org.apache.druid.segment.IndexIO$V9IndexLoader.deserializeColumn(IndexIO.java:666)
 ~[druid-processing-0.17.1.jar:0.17.1]
        at 
org.apache.druid.segment.IndexIO$V9IndexLoader.load(IndexIO.java:617) 
~[druid-processing-0.17.1.jar:0.17.1]
        at org.apache.druid.segment.IndexIO.loadIndex(IndexIO.java:194) 
~[druid-processing-0.17.1.jar:0.17.1]
        at 
org.apache.druid.segment.loading.MMappedQueryableSegmentizerFactory.factorize(MMappedQueryableSegmentizerFactory.java:48)
 ~[druid-processing-0.17.1.jar:0.17.1]
        at 
org.apache.druid.segment.loading.SegmentLoaderLocalCacheManager.getSegment(SegmentLoaderLocalCacheManager.java:150)
 ~[druid-server-0.17.1.jar:0.17.1]
        at 
org.apache.druid.server.SegmentManager.getAdapter(SegmentManager.java:198) 
~[druid-server-0.17.1.jar:0.17.1]
        at 
org.apache.druid.server.SegmentManager.loadSegment(SegmentManager.java:157) 
~[druid-server-0.17.1.jar:0.17.1]
        at 
org.apache.druid.server.coordination.SegmentLoadDropHandler.loadSegment(SegmentLoadDropHandler.java:259)
 ~[druid-server-0.17.1.jar:0.17.1]
        ... 9 more
   
   ```
   
   The Historical then deletes all of the damaged files.
   
   If the Historical crashed after the `info_dir` file was created, all of the segment's files are guaranteed to be complete.
   
   
   
   ### What's more!
   If a served segment is damaged (for example, by maliciously modifying `meta.smoosh`), a Historical node with `lazyOnStart` enabled will not detect the damage, and queries using that segment will fail at runtime.
   
   To reproduce this, I deleted the line `video_cro_network_name,0,23837261,23910951` from `meta.smoosh` and restarted the Historical node (with `lazyOnStart` enabled).
   
   The Historical server started up successfully, but queries failed:
   
   ```
   2020-11-26T10:56:14,031 INFO [DruidSchema-Cache-0] 
org.apache.druid.server.log.LoggingRequestLogger - 2020-11-26T10:56:14.026Z     
          
{"queryType":"segmentMetadata","dataSource":{"type":"table","name":"traffic__ops_feed__realtime__second__dev__lazy__test"},"intervals":{"type":"segments","segments":[{"itvl":"2020-11-26T03:00:00.000Z/2020-11-26T04:00:00.000Z","ver":"2020-11-26T03:30:17.710Z","part":0},{"itvl":"2020-11-26T03:00:00.000Z/2020-11-26T04:00:00.000Z","ver":"2020-11-26T03:30:17.710Z","part":1},{"itvl":"2020-11-26T03:00:00.000Z/2020-11-26T04:00:00.000Z","ver":"2020-11-26T03:30:17.710Z","part":2},{"itvl":"2020-11-26T03:00:00.000Z/2020-11-26T04:00:00.000Z","ver":"2020-11-26T03:30:17.710Z","part":3},{"itvl":"2020-11-26T03:00:00.000Z/2020-11-26T04:00:00.000Z","ver":"2020-11-26T03:30:17.710Z","part":4},{"itvl":"2020-11-26T03:00:00.000Z/2020-11-26T04:00:00.000Z","ver":"2020-11-26T03:30:17.710Z","part":5}]},"toInclude":{"type":"all"},"merge":false,"context":{"queryId":"a022fbbf-d2fa-408b-bd21-698645c74f62"},"analysisTypes":[],"usingDefaultInterval":false,"lenientAggregatorMerge":false,"descending":false,"granularity":{"type":"all"}}
   
{"query/time":4,"query/bytes":-1,"success":false,"identity":"allowAll","exception":"java.lang.RuntimeException:
 com.fasterxml.jackson.core.io.JsonEOFException: Unexpected end-of-input: 
expected close marker for Array (start marker at [Source: 
(SequenceInputStream); line: -1, column: -1])\n at [Source: 
(SequenceInputStream); line: -1, column: 8041]"}
   2020-11-26T10:56:14,031 WARN [DruidSchema-Cache-0] 
org.apache.druid.sql.calcite.schema.DruidSchema - Metadata refresh failed, 
trying again soon.
   java.lang.RuntimeException: com.fasterxml.jackson.core.io.JsonEOFException: 
Unexpected end-of-input: expected close marker for Array (start marker at 
[Source: (SequenceInputStream); line: -1, column: -1])
    at [Source: (SequenceInputStream); line: -1, column: 8041]
        at 
org.apache.druid.client.JsonParserIterator.next(JsonParserIterator.java:119) 
~[druid-server-0.17.1.jar:0.17.1]
        at 
org.apache.druid.java.util.common.guava.BaseSequence.makeYielder(BaseSequence.java:90)
 ~[druid-core-0.17.1.jar:0.17.1]
        at 
org.apache.druid.java.util.common.guava.BaseSequence.toYielder(BaseSequence.java:69)
 ~[druid-core-0.17.1.jar:0.17.1]
        at 
org.apache.druid.java.util.common.guava.MappedSequence.toYielder(MappedSequence.java:49)
 ~[druid-core-0.17.1.jar:0.17.1]
        at 
org.apache.druid.java.util.common.guava.ParallelMergeCombiningSequence$ResultBatch.fromSequence(ParallelMergeCombiningSequence.java:847)
 ~[druid-core-0.17.1.jar:0.17.1]
        at 
org.apache.druid.java.util.common.guava.ParallelMergeCombiningSequence$SequenceBatcher.block(ParallelMergeCombiningSequence.java:897)
 ~[druid-core-0.17.1.jar:0.17.1]
        at 
java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3313) 
~[?:1.8.0_221]
        at 
org.apache.druid.java.util.common.guava.ParallelMergeCombiningSequence$SequenceBatcher.getBatchYielder(ParallelMergeCombiningSequence.java:886)
 ~[druid-core-0.17.1.jar:0.17.1]
        at 
org.apache.druid.java.util.common.guava.ParallelMergeCombiningSequence$YielderBatchedResultsCursor.initialize(ParallelMergeCombiningSequence.java:993)
 ~[druid-core-0.17.1.jar:0.17.1]
        at 
org.apache.druid.java.util.common.guava.ParallelMergeCombiningSequence$PrepareMergeCombineInputsAction.compute(ParallelMergeCombiningSequence.java:702)
 ~[druid-core-0.17.1.jar:0.17.1]
        at java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:189) 
~[?:1.8.0_221]
        at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289) 
~[?:1.8.0_221]
        at 
java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056) 
~[?:1.8.0_221]
        at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692) 
~[?:1.8.0_221]
        at 
java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157) 
~[?:1.8.0_221]
   Caused by: com.fasterxml.jackson.core.io.JsonEOFException: Unexpected 
end-of-input: expected close marker for Array (start marker at [Source: 
(SequenceInputStream); line: -1, column: -1])
    at [Source: (SequenceInputStream); line: -1, column: 8041]
        at 
com.fasterxml.jackson.core.base.ParserMinimalBase._reportInvalidEOF(ParserMinimalBase.java:664)
 ~[jackson-core-2.10.1.jar:2.10.1]
        at 
com.fasterxml.jackson.dataformat.smile.SmileParserBase._handleEOF(SmileParserBase.java:746)
 ~[jackson-dataformat-smile-2.10.1.jar:2.10.1]
        at 
com.fasterxml.jackson.dataformat.smile.SmileParser._eofAsNextToken(SmileParser.java:2691)
 ~[jackson-dataformat-smile-2.10.1.jar:2.10.1]
        at 
com.fasterxml.jackson.dataformat.smile.SmileParser.nextToken(SmileParser.java:378)
 ~[jackson-dataformat-smile-2.10.1.jar:2.10.1]
        at 
org.apache.druid.client.JsonParserIterator.next(JsonParserIterator.java:115) 
~[druid-server-0.17.1.jar:0.17.1]
        ... 14 more
   ```
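   One way to catch this kind of damage earlier would be a lightweight sanity check of `meta.smoosh` at startup, even when loading lazily. The following is only an illustrative, hypothetical sketch (not Druid's actual code; the class and method names are invented): it validates that each entry line has the expected `name,fileNum,start,end` shape with a well-formed byte range, so a tampered or truncated file fails fast instead of surfacing as an obscure parse error at query time.

   ```java
   import java.util.List;

   // Hypothetical sketch: validate meta.smoosh-style entry lines of the
   // form "name,smooshFileNumber,startOffset,endOffset". A deleted field
   // or garbled line is detected here instead of failing later during a
   // query with an unrelated-looking deserialization error.
   public class MetaSmooshCheck
   {
     public static boolean isValid(List<String> entryLines)
     {
       for (String line : entryLines) {
         String[] parts = line.split(",");
         if (parts.length != 4) {
           return false; // wrong number of fields
         }
         try {
           long start = Long.parseLong(parts[2]);
           long end = Long.parseLong(parts[3]);
           if (start < 0 || end < start) {
             return false; // impossible byte range
           }
         }
         catch (NumberFormatException e) {
           return false; // non-numeric offset
         }
       }
       return true;
     }
   }
   ```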
   
   
   
   
   
   
   
   
   <hr>
   
   This PR has:
   - [ ] been self-reviewed.
      - [ ] using the [concurrency 
checklist](https://github.com/apache/druid/blob/master/dev/code-review/concurrency.md)
 (Remove this item if the PR doesn't have any relation to concurrency.)
   - [ ] added documentation for new or modified features or behaviors.
   - [ ] added Javadocs for most classes and all non-trivial methods. Linked 
related entities via Javadoc links.
   - [ ] added or updated version, license, or notice information in 
[licenses.yaml](https://github.com/apache/druid/blob/master/licenses.yaml)
   - [ ] added comments explaining the "why" and the intent of the code 
wherever would not be obvious for an unfamiliar reader.
   - [ ] added unit tests or modified existing tests to cover new code paths, 
ensuring the threshold for [code 
coverage](https://github.com/apache/druid/blob/master/dev/code-review/code-coverage.md)
 is met.
   - [ ] added integration tests.
   - [ ] been tested in a test Druid cluster.
   
   
   <hr>
   
   ##### Key changed/added classes in this PR
    * `index.md`

