wangxiaobaidu11 commented on pull request #10920:
URL: https://github.com/apache/druid/pull/10920#issuecomment-990912600
> @wangxiaobaidu11 you don't need to make changes to the druid spark code
for your use case - you can call
`AggregatorFactoryRegistry.register("longUnique", new
LongUniqueAggregatorFactory("", "", 0))` from within your own spark app. That's
definitely still ugly since the AggregatorFactory instance is unnecessary, but
as mentioned in my previous comment this won't be the case for long. If
instantiating an instance is a problem, there is one other temporary
work-around: because all `AggregatorFactoryRegistry` does under the hood is
register subtypes, you can use the public package method `registerSubType`. In
your case, you would call `org.apache.druid.spark.registerSubtype(new
NamedType(classOf[LongUniqueAggregatorFactory], "longUnique"))` from your spark
app. (You can statically import that method if you'd like, leaving just
`registerSubtype(...)`.)
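For concreteness, here is a sketch of the two registration options described above. It assumes the connector API named in the quote (`AggregatorFactoryRegistry.register` and the package-level `org.apache.druid.spark.registerSubtype`) plus the user's own `LongUniqueAggregatorFactory`; it is illustrative and has not been compiled against the connector:

```scala
import com.fasterxml.jackson.databind.jsontype.NamedType
import org.apache.druid.spark.AggregatorFactoryRegistry

object RegisterLongUnique {
  def main(args: Array[String]): Unit = {
    // Option 1: register through the registry. This currently requires a
    // throwaway AggregatorFactory instance, as noted above.
    AggregatorFactoryRegistry.register(
      "longUnique", new LongUniqueAggregatorFactory("", "", 0))

    // Option 2: register only the Jackson subtype directly, which is all the
    // registry does under the hood, so no instance is needed.
    org.apache.druid.spark.registerSubtype(
      new NamedType(classOf[LongUniqueAggregatorFactory], "longUnique"))

    // ... then build the Spark job that writes to Druid as usual ...
  }
}
```

Either call must run before the writer deserializes the aggregator spec, so place it at the start of the Spark application.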
Thanks! I will update it. I have another question:
① when I set:

② Spark runtime info:

③ segments for the same date interval are overshadowed, but I didn't want that to happen:
`21/12/10 16:09:45 WARN SegmentRationalizer: More than one version detected
for interval 2020-01-01T00:00:00.000Z/2020-01-02T00:00:00.000Z on dataSource
test_spark_druid_cube_v4! Some segments will be overshadowed!
21/12/10 16:09:45 WARN SegmentRationalizer: More than one version detected
for interval 2020-01-02T00:00:00.000Z/2020-01-03T00:00:00.000Z on dataSource
test_spark_druid_cube_v4! Some segments will be overshadowed!`

④ I expect the result to be combined segments. How should I set the partitioning?
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]