Hi, team. I want to run the same tests on ARM that the existing x86 CI does. Since building and testing the whole Spark project takes too long, I plan to split the work into multiple jobs to reduce the overall time cost. But I cannot see what the existing CI[1] actually does (it calls many private scripts), so could any CI maintainers explain how the jobs are split and what each job does? For example, PR titles containing [SQL], [INFRA], [ML], [DOC], [CORE], [PYTHON], [k8s], [DSTREAMS], [MLlib], [SCHEDULER], [SS], [YARN], [BUILD], etc. each seem to trigger a different CI job.
@shane knapp, sorry to disturb you. I noticed your email address is from 'berkeley.edu' — are you the right person to ask for help with this? ;-) If so, could you give us some help or advice?

Thank you very much,
Best Regards,
ZhaoBo

[1] https://amplab.cs.berkeley.edu/jenkins

Tianhua huang <huangtianhua...@gmail.com> wrote on Mon, Jul 29, 2019 at 9:38 AM:

> @Sean Owen <sro...@gmail.com> Thank you very much. I saw your comment in
> https://issues.apache.org/jira/browse/SPARK-28519; I will test with the
> modification, check whether other similar tests fail, and address them
> together in one pull request.
>
> On Sat, Jul 27, 2019 at 9:04 PM Sean Owen <sro...@gmail.com> wrote:
>
>> Great, thanks - we can take this to JIRAs now.
>> I think it's worth changing the implementation of atanh if the test
>> value just reflects what Spark does and there's evidence it is a little
>> bit inaccurate. There's an equivalent formula which seems to have
>> better accuracy.
>>
>> On Fri, Jul 26, 2019 at 10:02 PM Takeshi Yamamuro <linguin....@gmail.com>
>> wrote:
>>
>>> Hi, all,
>>>
>>> FYI:
>>> >> @Yuming Wang the results in float8.sql are from PostgreSQL directly?
>>> >> Interesting if it also returns the same less accurate result, which
>>> >> might suggest it's more to do with underlying OS math libraries. You
>>> >> noted that these tests sometimes gave platform-dependent differences
>>> >> in the last digit, so wondering if the test value directly reflects
>>> >> PostgreSQL or just what we happen to return now.
>>>
>>> The results in float8.sql.out were recomputed in Spark/JVM.
>>> The expected output of the PostgreSQL test is here:
>>> https://github.com/postgres/postgres/blob/master/src/test/regress/expected/float8.out#L493
>>>
>>> As you can see in that file (float8.out), results other than atanh
>>> also differ between Spark/JVM and PostgreSQL.
>>> For example, the answers for acosh are:
>>>
>>> -- PostgreSQL
>>> https://github.com/postgres/postgres/blob/master/src/test/regress/expected/float8.out#L487
>>> 1.31695789692482
>>>
>>> -- Spark/JVM
>>> https://github.com/apache/spark/blob/master/sql/core/src/test/resources/sql-tests/results/pgSQL/float8.sql.out#L523
>>> 1.3169578969248166
>>>
>>> btw, the PostgreSQL implementation of atanh just calls atanh in math.h:
>>> https://github.com/postgres/postgres/blob/master/src/backend/utils/adt/float.c#L2606
>>>
>>> Bests,
>>> Takeshi
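For context on the "equivalent formula" Sean mentions: the thread itself doesn't spell it out, but a common rearrangement of the textbook atanh formula uses log1p to avoid cancellation near zero. The sketch below (plain Python rather than Spark's Scala, purely for illustration; the function names are hypothetical, not from Spark or PostgreSQL) compares the two forms against the math library's own atanh — the same kind of last-digit, platform-dependent difference discussed above.

```python
import math

def atanh_naive(x):
    # Textbook formula: atanh(x) = 0.5 * ln((1 + x) / (1 - x)).
    # For small |x|, forming (1 + x) and (1 - x) rounds away low-order
    # bits of x, which can shift the last digit(s) of the result.
    return 0.5 * math.log((1.0 + x) / (1.0 - x))

def atanh_log1p(x):
    # Mathematically equivalent rearrangement that stays accurate near 0:
    # atanh(x) = 0.5 * (log1p(x) - log1p(-x))
    return 0.5 * (math.log1p(x) - math.log1p(-x))

# Compare both forms with the platform's atanh (which PostgreSQL calls
# directly via math.h, per the float.c link above).
for x in (1e-8, 0.1, 0.5):
    print(x, atanh_naive(x), atanh_log1p(x), math.atanh(x))
```

Whether the naive form actually diverges in the last digit depends on the value and the underlying libm, which is consistent with the platform-dependent differences noted earlier in the thread.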