[jira] [Updated] (HADOOP-17452) Upgrade guice to 4.2.3

2021-01-05  Yuming Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated HADOOP-17452:
-
Description: 
Upgrade guice to 4.2.3 to fix a compatibility issue: applications compiled 
against a newer guice (Apache Druid, in the trace below) fail at startup with a 
NoSuchMethodError because the guice on the classpath lacks a method with this 
exact signature:
{noformat}
Exception in thread "main" java.lang.NoSuchMethodError: com.google.inject.util.Types.collectionOf(Ljava/lang/reflect/Type;)Ljava/lang/reflect/ParameterizedType;
    at com.google.inject.multibindings.Multibinder.collectionOfProvidersOf(Multibinder.java:202)
    at com.google.inject.multibindings.Multibinder$RealMultibinder.<init>(Multibinder.java:283)
    at com.google.inject.multibindings.Multibinder$RealMultibinder.<init>(Multibinder.java:258)
    at com.google.inject.multibindings.Multibinder.newRealSetBinder(Multibinder.java:178)
    at com.google.inject.multibindings.Multibinder.newSetBinder(Multibinder.java:150)
    at org.apache.druid.guice.LifecycleModule.getEagerBinder(LifecycleModule.java:115)
    at org.apache.druid.guice.LifecycleModule.configure(LifecycleModule.java:121)
    at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:340)
    at com.google.inject.spi.Elements.getElements(Elements.java:110)
    at com.google.inject.util.Modules$OverrideModule.configure(Modules.java:177)
    at com.google.inject.AbstractModule.configure(AbstractModule.java:62)
    at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:340)
    at com.google.inject.spi.Elements.getElements(Elements.java:110)
    at com.google.inject.util.Modules$OverrideModule.configure(Modules.java:177)
    at com.google.inject.AbstractModule.configure(AbstractModule.java:62)
    at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:340)
    at com.google.inject.spi.Elements.getElements(Elements.java:110)
    at com.google.inject.internal.InjectorShell$Builder.build(InjectorShell.java:138)
    at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:104)
    at com.google.inject.Guice.createInjector(Guice.java:96)
    at com.google.inject.Guice.createInjector(Guice.java:73)
    at com.google.inject.Guice.createInjector(Guice.java:62)
    at org.apache.druid.initialization.Initialization.makeInjectorWithModules(Initialization.java:431)
    at org.apache.druid.cli.GuiceRunnable.makeInjector(GuiceRunnable.java:69)
    at org.apache.druid.cli.ServerRunnable.run(ServerRunnable.java:58)
    at org.apache.druid.cli.Main.main(Main.java:113)
{noformat}
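
For reference, the fix is essentially a version bump in Hadoop's dependency 
management; a minimal sketch of the pom.xml change, assuming the version is 
factored out into a {{guice.version}} property (the property name is an 
assumption here, not something this thread states):
{code:xml}
<!-- hadoop-project/pom.xml (sketch): bump the managed Guice version so that
     applications compiled against Guice 4.2.x Multibinder/Types resolve. -->
<properties>
  <guice.version>4.2.3</guice.version>
</properties>
{code}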

  was:
Upgrade guice to 4.1.0 to fix a compatibility issue:

{noformat}
Exception in thread "main" java.lang.NoSuchMethodError: com.google.inject.util.Types.collectionOf(Ljava/lang/reflect/Type;)Ljava/lang/reflect/ParameterizedType;
    at com.google.inject.multibindings.Multibinder.collectionOfProvidersOf(Multibinder.java:202)
    at com.google.inject.multibindings.Multibinder$RealMultibinder.<init>(Multibinder.java:283)
    at com.google.inject.multibindings.Multibinder$RealMultibinder.<init>(Multibinder.java:258)
    at com.google.inject.multibindings.Multibinder.newRealSetBinder(Multibinder.java:178)
    at com.google.inject.multibindings.Multibinder.newSetBinder(Multibinder.java:150)
    at org.apache.druid.guice.LifecycleModule.getEagerBinder(LifecycleModule.java:115)
    at org.apache.druid.guice.LifecycleModule.configure(LifecycleModule.java:121)
    at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:340)
    at com.google.inject.spi.Elements.getElements(Elements.java:110)
    at com.google.inject.util.Modules$OverrideModule.configure(Modules.java:177)
    at com.google.inject.AbstractModule.configure(AbstractModule.java:62)
    at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:340)
    at com.google.inject.spi.Elements.getElements(Elements.java:110)
    at com.google.inject.util.Modules$OverrideModule.configure(Modules.java:177)
    at com.google.inject.AbstractModule.configure(AbstractModule.java:62)
    at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:340)
    at com.google.inject.spi.Elements.getElements(Elements.java:110)
    at com.google.inject.internal.InjectorShell$Builder.build(InjectorShell.java:138)
    at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:104)
    at com.google.inject.Guice.createInjector(Guice.java:96)
    at com.google.inject.Guice.createInjector(Guice.java:73)
    at com.google.inject.Guice.createInjector(Guice.java:62)
    at org.apache.druid.initialization.Initialization.makeInjectorWithModules(Initialization.java:431)
    at org.apache.druid.cli.GuiceRunnable.makeInjector(GuiceRunnable.java:69)
    at org.apache.druid.cli.ServerRunnable.run(ServerRunnable.java:58)
    at org.apache.druid.cli.Main.main(Main.java:113)
{noformat}



> Upgrade guice to 4.2.3
> --
>
> Key: HADOOP-17452
> URL: https://issues.apache.org/jira/browse/HADOOP-17452
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yuming Wang
>

[jira] [Updated] (HADOOP-17452) Upgrade guice to 4.2.3

2021-01-05  Yuming Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated HADOOP-17452:
-
Summary: Upgrade guice to 4.2.3  (was: Upgrade guice to 4.1.0)

> Upgrade guice to 4.2.3
> --
>
> Key: HADOOP-17452
> URL: https://issues.apache.org/jira/browse/HADOOP-17452
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yuming Wang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Upgrade guice to 4.1.0 to fix a compatibility issue:
> {noformat}
> Exception in thread "main" java.lang.NoSuchMethodError: com.google.inject.util.Types.collectionOf(Ljava/lang/reflect/Type;)Ljava/lang/reflect/ParameterizedType;
>     at com.google.inject.multibindings.Multibinder.collectionOfProvidersOf(Multibinder.java:202)
>     at com.google.inject.multibindings.Multibinder$RealMultibinder.<init>(Multibinder.java:283)
>     at com.google.inject.multibindings.Multibinder$RealMultibinder.<init>(Multibinder.java:258)
>     at com.google.inject.multibindings.Multibinder.newRealSetBinder(Multibinder.java:178)
>     at com.google.inject.multibindings.Multibinder.newSetBinder(Multibinder.java:150)
>     at org.apache.druid.guice.LifecycleModule.getEagerBinder(LifecycleModule.java:115)
>     at org.apache.druid.guice.LifecycleModule.configure(LifecycleModule.java:121)
>     at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:340)
>     at com.google.inject.spi.Elements.getElements(Elements.java:110)
>     at com.google.inject.util.Modules$OverrideModule.configure(Modules.java:177)
>     at com.google.inject.AbstractModule.configure(AbstractModule.java:62)
>     at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:340)
>     at com.google.inject.spi.Elements.getElements(Elements.java:110)
>     at com.google.inject.util.Modules$OverrideModule.configure(Modules.java:177)
>     at com.google.inject.AbstractModule.configure(AbstractModule.java:62)
>     at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:340)
>     at com.google.inject.spi.Elements.getElements(Elements.java:110)
>     at com.google.inject.internal.InjectorShell$Builder.build(InjectorShell.java:138)
>     at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:104)
>     at com.google.inject.Guice.createInjector(Guice.java:96)
>     at com.google.inject.Guice.createInjector(Guice.java:73)
>     at com.google.inject.Guice.createInjector(Guice.java:62)
>     at org.apache.druid.initialization.Initialization.makeInjectorWithModules(Initialization.java:431)
>     at org.apache.druid.cli.GuiceRunnable.makeInjector(GuiceRunnable.java:69)
>     at org.apache.druid.cli.ServerRunnable.run(ServerRunnable.java:58)
>     at org.apache.druid.cli.Main.main(Main.java:113)
> {noformat}






[jira] [Created] (HADOOP-17452) Upgrade guice to 4.1.0

2020-12-31  Yuming Wang (Jira)
Yuming Wang created HADOOP-17452:


 Summary: Upgrade guice to 4.1.0
 Key: HADOOP-17452
 URL: https://issues.apache.org/jira/browse/HADOOP-17452
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Yuming Wang


Upgrade guice to 4.1.0 to fix a compatibility issue:

{noformat}
Exception in thread "main" java.lang.NoSuchMethodError: com.google.inject.util.Types.collectionOf(Ljava/lang/reflect/Type;)Ljava/lang/reflect/ParameterizedType;
    at com.google.inject.multibindings.Multibinder.collectionOfProvidersOf(Multibinder.java:202)
    at com.google.inject.multibindings.Multibinder$RealMultibinder.<init>(Multibinder.java:283)
    at com.google.inject.multibindings.Multibinder$RealMultibinder.<init>(Multibinder.java:258)
    at com.google.inject.multibindings.Multibinder.newRealSetBinder(Multibinder.java:178)
    at com.google.inject.multibindings.Multibinder.newSetBinder(Multibinder.java:150)
    at org.apache.druid.guice.LifecycleModule.getEagerBinder(LifecycleModule.java:115)
    at org.apache.druid.guice.LifecycleModule.configure(LifecycleModule.java:121)
    at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:340)
    at com.google.inject.spi.Elements.getElements(Elements.java:110)
    at com.google.inject.util.Modules$OverrideModule.configure(Modules.java:177)
    at com.google.inject.AbstractModule.configure(AbstractModule.java:62)
    at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:340)
    at com.google.inject.spi.Elements.getElements(Elements.java:110)
    at com.google.inject.util.Modules$OverrideModule.configure(Modules.java:177)
    at com.google.inject.AbstractModule.configure(AbstractModule.java:62)
    at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:340)
    at com.google.inject.spi.Elements.getElements(Elements.java:110)
    at com.google.inject.internal.InjectorShell$Builder.build(InjectorShell.java:138)
    at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:104)
    at com.google.inject.Guice.createInjector(Guice.java:96)
    at com.google.inject.Guice.createInjector(Guice.java:73)
    at com.google.inject.Guice.createInjector(Guice.java:62)
    at org.apache.druid.initialization.Initialization.makeInjectorWithModules(Initialization.java:431)
    at org.apache.druid.cli.GuiceRunnable.makeInjector(GuiceRunnable.java:69)
    at org.apache.druid.cli.ServerRunnable.run(ServerRunnable.java:58)
    at org.apache.druid.cli.Main.main(Main.java:113)
{noformat}







[jira] [Updated] (HADOOP-16152) Upgrade Eclipse Jetty version to 9.4.x

2019-08-26  Yuming Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated HADOOP-16152:
-
Description: 
Some big data projects have upgraded Jetty to 9.4.x, which causes compatibility 
issues.

Spark: 
[https://github.com/apache/spark/blob/02a0cdea13a5eebd27649a60d981de35156ba52c/pom.xml#L146]
Calcite: [https://github.com/apache/calcite/blob/avatica-1.13.0-rc0/pom.xml#L87]
Hive: https://issues.apache.org/jira/browse/HIVE-21211

  was:
Some big data projects have upgraded Jetty to 9.4.x, which causes compatibility 
issues.

Spark: 
[https://github.com/apache/spark/blob/5a92b5a47cdfaea96a9aeedaf80969d825a382f2/pom.xml#L141]
Calcite: [https://github.com/apache/calcite/blob/avatica-1.13.0-rc0/pom.xml#L87]
Hive: https://issues.apache.org/jira/browse/HIVE-21211


> Upgrade Eclipse Jetty version to 9.4.x
> --
>
> Key: HADOOP-16152
> URL: https://issues.apache.org/jira/browse/HADOOP-16152
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Yuming Wang
>Assignee: Yuming Wang
>Priority: Major
> Attachments: HADOOP-16152.v1.patch
>
>
> Some big data projects have upgraded Jetty to 9.4.x, which causes 
> compatibility issues.
> Spark: 
> [https://github.com/apache/spark/blob/02a0cdea13a5eebd27649a60d981de35156ba52c/pom.xml#L146]
> Calcite: 
> [https://github.com/apache/calcite/blob/avatica-1.13.0-rc0/pom.xml#L87]
> Hive: https://issues.apache.org/jira/browse/HIVE-21211






[jira] [Commented] (HADOOP-15679) ShutdownHookManager shutdown time needs to be configurable & extended

2019-07-02  Yuming Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876884#comment-16876884
 ] 

Yuming Wang commented on HADOOP-15679:
--

Hi [~ste...@apache.org], it seems Fix Version/s should be {{2.9.2, 2.8.6, 
3.0.4, 3.1.2}}, not {{2.9.2, 2.8.5, 3.0.4, 3.1.2}}.

> ShutdownHookManager shutdown time needs to be configurable & extended
> -
>
> Key: HADOOP-15679
> URL: https://issues.apache.org/jira/browse/HADOOP-15679
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.8.0, 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 2.9.2, 2.8.5, 3.0.4, 3.1.2
>
> Attachments: HADOOP-15679-001.patch, HADOOP-15679-002.patch, 
> HADOOP-15679-002.patch, HADOOP-15679-003.patch, 
> HADOOP-15679-branch-2-001.patch, HADOOP-15679-branch-2-001.patch, 
> HADOOP-15679-branch-2-003.patch, HADOOP-15679-branch-2-003.patch, 
> HADOOP-15679-branch-2-004.patch, HADOOP-15679-branch-2-004.patch, 
> HADOOP-15679-branch-2.8-005.patch, HADOOP-15679-branch-2.8-005.patch
>
>
> HADOOP-12950 added a timeout on shutdowns to avoid problems with hanging 
> shutdowns. But the timeout is too short for applications where a large flush 
> of data is needed on shutdown.
> A key example of this is Spark apps which save their history to object 
> stores, where the file close() call triggers an upload of the final local 
> cached block of data (could be 32+MB), and then execute the final mutipart 
> commit.
> Proposed
> # make the default sleep time 30s, not 10s
> # make it configurable with a time duration property (with minimum time of 
> 1s.?)
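
For context, the timeout proposed above can be raised via a time-duration 
setting in core-site.xml; a sketch, assuming the property landed as 
{{hadoop.service.shutdown.timeout}} (name taken from the committed patch, not 
from this thread):
{code:xml}
<!-- core-site.xml (sketch): give shutdown hooks more time, e.g. for a Spark
     history upload that still has to flush a 32+MB block and commit on close(). -->
<property>
  <name>hadoop.service.shutdown.timeout</name>
  <value>60s</value>
</property>
{code}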






[jira] [Updated] (HADOOP-16272) Update HikariCP to 2.5.1

2019-04-24  Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated HADOOP-16272:
-
Summary: Update HikariCP to 2.5.1  (was: Update HikariCP to 3.3.1)

> Update HikariCP to 2.5.1
> 
>
> Key: HADOOP-16272
> URL: https://issues.apache.org/jira/browse/HADOOP-16272
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Yuming Wang
>Priority: Major
>







[jira] [Created] (HADOOP-16273) Update mssql-jdbc to 7.2.2.jre8

2019-04-24  Yuming Wang (JIRA)
Yuming Wang created HADOOP-16273:


 Summary: Update mssql-jdbc to 7.2.2.jre8
 Key: HADOOP-16273
 URL: https://issues.apache.org/jira/browse/HADOOP-16273
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Yuming Wang









[jira] [Created] (HADOOP-16272) Update HikariCP to 3.3.1

2019-04-24  Yuming Wang (JIRA)
Yuming Wang created HADOOP-16272:


 Summary: Update HikariCP to 3.3.1
 Key: HADOOP-16272
 URL: https://issues.apache.org/jira/browse/HADOOP-16272
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Yuming Wang









[jira] [Created] (HADOOP-16271) Update okhttp to 3.14.1

2019-04-24  Yuming Wang (JIRA)
Yuming Wang created HADOOP-16271:


 Summary: Update okhttp to 3.14.1
 Key: HADOOP-16271
 URL: https://issues.apache.org/jira/browse/HADOOP-16271
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Yuming Wang









[jira] [Commented] (HADOOP-16180) LocalFileSystem throw Malformed input or input contains unmappable characters

2019-03-21  Yuming Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16798638#comment-16798638
 ] 

Yuming Wang commented on HADOOP-16180:
--

I'm not sure. Maybe it's the way we use it.

> LocalFileSystem throw Malformed input or input contains unmappable characters
> -
>
> Key: HADOOP-16180
> URL: https://issues.apache.org/jira/browse/HADOOP-16180
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0, 3.2.0
>Reporter: Yuming Wang
>Priority: Major
>
> How to reproduce:
> {code:java}
> export LANG=
> export LC_CTYPE="POSIX"
> export LC_NUMERIC="POSIX"
> export LC_TIME="POSIX"
> export LC_COLLATE="POSIX"
> export LC_MONETARY="POSIX"
> export LC_MESSAGES="POSIX"
> export LC_PAPER="POSIX"
> export LC_NAME="POSIX"
> export LC_ADDRESS="POSIX"
> export LC_TELEPHONE="POSIX"
> export LC_MEASUREMENT="POSIX"
> export LC_IDENTIFICATION="POSIX"
> git clone https://github.com/apache/spark.git && cd spark && git checkout 
> v2.4.0
> build/sbt "hive/testOnly *.HiveDDLSuite" -Phive -Phadoop-2.7 
> -Dhadoop.version=2.8.0
> {code}
> Stack trace:
> {noformat}
> Caused by: sbt.ForkMain$ForkError: java.nio.file.InvalidPathException: 
> Malformed input or input contains unmappable characters: 
> /home/jenkins/workspace/SparkPullRequestBuilder@2/target/tmp/warehouse-15474fdf-0808-40ab-946d-1309fb05bf26/DaTaBaSe_I.db/tab_ı
>   at sun.nio.fs.UnixPath.encode(UnixPath.java:147)
>   at sun.nio.fs.UnixPath.<init>(UnixPath.java:71)
>   at sun.nio.fs.UnixFileSystem.getPath(UnixFileSystem.java:281)
>   at java.io.File.toPath(File.java:2234)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getLastAccessTime(RawLocalFileSystem.java:683)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.<init>(RawLocalFileSystem.java:694)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:664)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:987)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:656)
>   at 
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:454)
>   at org.apache.hadoop.hive.metastore.Warehouse.isDir(Warehouse.java:520)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1436)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_with_environment_context(HiveMetaStore.java:1503)
>   ... 112 more{noformat}
> [https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/103328/testReport/org.apache.spark.sql.hive.execution/HiveCatalogedDDLSuite/basic_DDL_using_locale_tr___caseSensitive_true/]
> [https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/103328/testReport/org.apache.spark.sql.hive.execution/HiveDDLSuite/create_Hive_serde_table_and_view_with_unicode_columns_and_comment/]
>  
> It works before https://issues.apache.org/jira/browse/HADOOP-12045.
> We could work around it by resetting the locale:
> {code:java}
> export LANG=en_US.UTF-8
> export LC_CTYPE="en_US.UTF-8"
> export LC_NUMERIC="en_US.UTF-8"
> export LC_TIME="en_US.UTF-8"
> export LC_COLLATE="en_US.UTF-8"
> export LC_MONETARY="en_US.UTF-8"
> export LC_MESSAGES="en_US.UTF-8"
> export LC_PAPER="en_US.UTF-8"
> export LC_NAME="en_US.UTF-8"
> export LC_ADDRESS="en_US.UTF-8"
> export LC_TELEPHONE="en_US.UTF-8"
> export LC_MEASUREMENT="en_US.UTF-8"
> export LC_IDENTIFICATION="en_US.UTF-8"
> {code}
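
The failure is reproducible without Spark or Hive; a minimal Java sketch 
(hypothetical path, run under the POSIX locale set above) that triggers the 
same encoding error:
{code:java}
import java.io.File;

public class LocalePathRepro {
  public static void main(String[] args) {
    // Under LC_ALL=POSIX the JVM selects sun.jnu.encoding=ANSI_X3.4-1968, so a
    // non-ASCII file name cannot be encoded when converted to a java.nio.file.Path.
    File f = new File("/tmp/tab_\u0131"); // U+0131, the dotless i from the reported path
    f.toPath(); // throws java.nio.file.InvalidPathException: Malformed input ...
  }
}
{code}
This is the same {{File.toPath()}} call that HADOOP-12045 introduced into 
{{RawLocalFileSystem}}'s {{getLastAccessTime}}, which is why the suite passed 
before that change.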






[jira] [Commented] (HADOOP-16152) Upgrade Eclipse Jetty version to 9.4.x

2019-03-19  Yuming Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796710#comment-16796710
 ] 

Yuming Wang commented on HADOOP-16152:
--

[~ste...@apache.org] [~jojochuang] Could you help review this patch? The 
conflict only occurs when running the Spark YARN tests. I tried to work around 
the issue on the Spark side, but both approaches failed:
 # Replace {{hadoop-yarn-server-tests}} with {{hadoop-client-minicluster}}; 
various class conflicts remain. [This is an 
example|https://github.com/wangyum/spark-hadoop-client-minicluster] of Spark 
running the YARN tests that way.
 # Pin the Jetty version in the YARN module when testing with hadoop-3 while 
the Spark core module stays on 9.4.x; in this case the pin is evicted by 
9.4.12.v20180830 (a possible sbt override is sketched after the log below):
{noformat}
org.eclipse.jetty:jetty-servlet:9.4.12.v20180830 is selected over 
9.3.24.v20180605
{noformat}
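
One way to hold sbt to the lower version is an explicit override in build.sbt, 
e.g. {{dependencyOverrides += "org.eclipse.jetty" % "jetty-servlet" % 
"9.3.24.v20180605"}} (a sketch; the report does not say this exact override 
was tried).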

> Upgrade Eclipse Jetty version to 9.4.x
> --
>
> Key: HADOOP-16152
> URL: https://issues.apache.org/jira/browse/HADOOP-16152
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Yuming Wang
>Priority: Major
> Attachments: HADOOP-16152.v1.patch
>
>
> Some big data projects have upgraded Jetty to 9.4.x, which causes 
> compatibility issues.
> Spark: 
> [https://github.com/apache/spark/blob/5a92b5a47cdfaea96a9aeedaf80969d825a382f2/pom.xml#L141]
> Calcite: 
> [https://github.com/apache/calcite/blob/avatica-1.13.0-rc0/pom.xml#L87]
> Hive: https://issues.apache.org/jira/browse/HIVE-21211






[jira] [Updated] (HADOOP-16152) Upgrade Eclipse Jetty version to 9.4.x

2019-03-19  Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated HADOOP-16152:
-
Attachment: HADOOP-16152.v1.patch
Status: Patch Available  (was: Open)

Update jetty to 9.4.x according to 
https://www.eclipse.org/jetty/documentation/9.4.x/upgrading-jetty.html

> Upgrade Eclipse Jetty version to 9.4.x
> --
>
> Key: HADOOP-16152
> URL: https://issues.apache.org/jira/browse/HADOOP-16152
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Yuming Wang
>Priority: Major
> Attachments: HADOOP-16152.v1.patch
>
>
> Some big data projects have upgraded Jetty to 9.4.x, which causes 
> compatibility issues.
> Spark: 
> [https://github.com/apache/spark/blob/5a92b5a47cdfaea96a9aeedaf80969d825a382f2/pom.xml#L141]
> Calcite: 
> [https://github.com/apache/calcite/blob/avatica-1.13.0-rc0/pom.xml#L87]
> Hive: https://issues.apache.org/jira/browse/HIVE-21211






[jira] [Commented] (HADOOP-16152) Upgrade Eclipse Jetty version to 9.4.x

2019-03-16  Yuming Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16794371#comment-16794371
 ] 

Yuming Wang commented on HADOOP-16152:
--

Maybe we should make this ticket a subtask of HADOOP-15338, since Jetty 9.4.x 
is required for Java 11 support:
https://www.eclipse.org/lists/jetty-announce/msg00124.html

> Upgrade Eclipse Jetty version to 9.4.x
> --
>
> Key: HADOOP-16152
> URL: https://issues.apache.org/jira/browse/HADOOP-16152
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Yuming Wang
>Priority: Major
>
> Some big data projects have upgraded Jetty to 9.4.x, which causes 
> compatibility issues.
> Spark: 
> [https://github.com/apache/spark/blob/5a92b5a47cdfaea96a9aeedaf80969d825a382f2/pom.xml#L141]
> Calcite: 
> [https://github.com/apache/calcite/blob/avatica-1.13.0-rc0/pom.xml#L87]
> Hive: https://issues.apache.org/jira/browse/HIVE-21211






[jira] [Updated] (HADOOP-16180) LocalFileSystem throw Malformed input or input contains unmappable characters

2019-03-13  Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated HADOOP-16180:
-
Description: 
How to reproduce:
{code:java}
export LANG=
export LC_CTYPE="POSIX"
export LC_NUMERIC="POSIX"
export LC_TIME="POSIX"
export LC_COLLATE="POSIX"
export LC_MONETARY="POSIX"
export LC_MESSAGES="POSIX"
export LC_PAPER="POSIX"
export LC_NAME="POSIX"
export LC_ADDRESS="POSIX"
export LC_TELEPHONE="POSIX"
export LC_MEASUREMENT="POSIX"
export LC_IDENTIFICATION="POSIX"

git clone https://github.com/apache/spark.git && cd spark && git checkout v2.4.0

build/sbt "hive/testOnly *.HiveDDLSuite" -Phive -Phadoop-2.7 
-Dhadoop.version=2.8.0

{code}
Stack trace:
{noformat}
Caused by: sbt.ForkMain$ForkError: java.nio.file.InvalidPathException: 
Malformed input or input contains unmappable characters: 
/home/jenkins/workspace/SparkPullRequestBuilder@2/target/tmp/warehouse-15474fdf-0808-40ab-946d-1309fb05bf26/DaTaBaSe_I.db/tab_ı
at sun.nio.fs.UnixPath.encode(UnixPath.java:147)
at sun.nio.fs.UnixPath.<init>(UnixPath.java:71)
at sun.nio.fs.UnixFileSystem.getPath(UnixFileSystem.java:281)
at java.io.File.toPath(File.java:2234)
at 
org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getLastAccessTime(RawLocalFileSystem.java:683)
at 
org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.<init>(RawLocalFileSystem.java:694)
at 
org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:664)
at 
org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:987)
at 
org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:656)
at 
org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:454)
at org.apache.hadoop.hive.metastore.Warehouse.isDir(Warehouse.java:520)
at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1436)
at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_with_environment_context(HiveMetaStore.java:1503)
... 112 more{noformat}
[https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/103328/testReport/org.apache.spark.sql.hive.execution/HiveCatalogedDDLSuite/basic_DDL_using_locale_tr___caseSensitive_true/]

[https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/103328/testReport/org.apache.spark.sql.hive.execution/HiveDDLSuite/create_Hive_serde_table_and_view_with_unicode_columns_and_comment/]

 

It works before https://issues.apache.org/jira/browse/HADOOP-12045.

We could work around it by resetting the locale:
{code:java}
export LANG=en_US.UTF-8
export LC_CTYPE="en_US.UTF-8"
export LC_NUMERIC="en_US.UTF-8"
export LC_TIME="en_US.UTF-8"
export LC_COLLATE="en_US.UTF-8"
export LC_MONETARY="en_US.UTF-8"
export LC_MESSAGES="en_US.UTF-8"
export LC_PAPER="en_US.UTF-8"
export LC_NAME="en_US.UTF-8"
export LC_ADDRESS="en_US.UTF-8"
export LC_TELEPHONE="en_US.UTF-8"
export LC_MEASUREMENT="en_US.UTF-8"
export LC_IDENTIFICATION="en_US.UTF-8"
{code}

  was:
How to reproduce:
{code:java}
export LANG=
export LC_CTYPE="POSIX"
export LC_NUMERIC="POSIX"
export LC_TIME="POSIX"
export LC_COLLATE="POSIX"
export LC_MONETARY="POSIX"
export LC_MESSAGES="POSIX"
export LC_PAPER="POSIX"
export LC_NAME="POSIX"
export LC_ADDRESS="POSIX"
export LC_TELEPHONE="POSIX"
export LC_MEASUREMENT="POSIX"
export LC_IDENTIFICATION="POSIX"

git clone https://github.com/apache/spark.git && cd spark && git checkout v2.4.0

build/sbt "hive/testOnly *.HiveDDLSuite" -Phive -Phadoop-2.7 
-Dhadoop.version=2.8.0

{code}
Stack trace:
{noformat}
Caused by: sbt.ForkMain$ForkError: java.nio.file.InvalidPathException: 
Malformed input or input contains unmappable characters: 
/home/jenkins/workspace/SparkPullRequestBuilder@2/target/tmp/warehouse-15474fdf-0808-40ab-946d-1309fb05bf26/tab1/尼=2
at sun.nio.fs.UnixPath.encode(UnixPath.java:147)
at sun.nio.fs.UnixPath.<init>(UnixPath.java:71)
at sun.nio.fs.UnixFileSystem.getPath(UnixFileSystem.java:281)
at java.io.File.toPath(File.java:2234)
at 
org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getLastAccessTime(RawLocalFileSystem.java:683)
at 
org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.<init>(RawLocalFileSystem.java:694)
at 
org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:664)
at 
org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:987)
at 
org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:656)
at 
org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:454)
at 
org.apache.hadoop.hive.io.HdfsUtils$HadoopFileStatus.<init>(HdfsUtils.java:211)
at 

[jira] [Commented] (HADOOP-16180) LocalFileSystem throw Malformed input or input contains unmappable characters

2019-03-13  Yuming Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16791848#comment-16791848
 ] 

Yuming Wang commented on HADOOP-16180:
--

Thanks [~ste...@apache.org] I have updated the description.

> LocalFileSystem throw Malformed input or input contains unmappable characters
> -
>
> Key: HADOOP-16180
> URL: https://issues.apache.org/jira/browse/HADOOP-16180
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0, 3.2.0
>Reporter: Yuming Wang
>Priority: Major
>
> How to reproduce:
> {code:java}
> export LANG=
> export LC_CTYPE="POSIX"
> export LC_NUMERIC="POSIX"
> export LC_TIME="POSIX"
> export LC_COLLATE="POSIX"
> export LC_MONETARY="POSIX"
> export LC_MESSAGES="POSIX"
> export LC_PAPER="POSIX"
> export LC_NAME="POSIX"
> export LC_ADDRESS="POSIX"
> export LC_TELEPHONE="POSIX"
> export LC_MEASUREMENT="POSIX"
> export LC_IDENTIFICATION="POSIX"
> git clone https://github.com/apache/spark.git && cd spark && git checkout 
> v2.4.0
> build/sbt "hive/testOnly *.HiveDDLSuite" -Phive -Phadoop-2.7 
> -Dhadoop.version=2.8.0
> {code}
> Stack trace:
> {noformat}
> Caused by: sbt.ForkMain$ForkError: java.nio.file.InvalidPathException: 
> Malformed input or input contains unmappable characters: 
> /home/jenkins/workspace/SparkPullRequestBuilder@2/target/tmp/warehouse-15474fdf-0808-40ab-946d-1309fb05bf26/tab1/尼=2
>   at sun.nio.fs.UnixPath.encode(UnixPath.java:147)
>   at sun.nio.fs.UnixPath.<init>(UnixPath.java:71)
>   at sun.nio.fs.UnixFileSystem.getPath(UnixFileSystem.java:281)
>   at java.io.File.toPath(File.java:2234)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getLastAccessTime(RawLocalFileSystem.java:683)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.<init>(RawLocalFileSystem.java:694)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:664)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:987)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:656)
>   at 
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:454)
>   at 
> org.apache.hadoop.hive.io.HdfsUtils$HadoopFileStatus.<init>(HdfsUtils.java:211)
>   at org.apache.hadoop.hive.ql.metadata.Hive.moveFile(Hive.java:3122)
>   at org.apache.hadoop.hive.ql.metadata.Hive.replaceFiles(Hive.java:3478)
>   at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1650)
>   at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1579)
>   at sun.reflect.GeneratedMethodAccessor209.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.spark.sql.hive.client.Shim_v2_1.loadPartition(HiveShim.scala:1145)
>   at 
> org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$loadPartition$1(HiveClientImpl.scala:788)
>   at 
> scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
>   at 
> org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:287)
>   at 
> org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:225)
>   at 
> org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:224)
>   at 
> org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:270)
>   at 
> org.apache.spark.sql.hive.client.HiveClientImpl.loadPartition(HiveClientImpl.scala:778)
>   at 
> org.apache.spark.sql.hive.HiveExternalCatalog.$anonfun$loadPartition$1(HiveExternalCatalog.scala:885)
>   at 
> scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
>   at 
> org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:99){noformat}
> [https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/103328/testReport/org.apache.spark.sql.hive.execution/HiveCatalogedDDLSuite/basic_DDL_using_locale_tr___caseSensitive_true/]
> [https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/103328/testReport/org.apache.spark.sql.hive.execution/HiveDDLSuite/create_Hive_serde_table_and_view_with_unicode_columns_and_comment/]
>  
> It works before https://issues.apache.org/jira/browse/HADOOP-12045.
> We could work around it by resetting the locale:
> {code:java}
> export LANG=en_US.UTF-8
> export LC_CTYPE="en_US.UTF-8"
> export LC_NUMERIC="en_US.UTF-8"
> export LC_TIME="en_US.UTF-8"
> export LC_COLLATE="en_US.UTF-8"
> export LC_MONETARY="en_US.UTF-8"
> export 

[jira] [Updated] (HADOOP-16180) LocalFileSystem throw Malformed input or input contains unmappable characters

2019-03-13  Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated HADOOP-16180:
-
Description: 
How to reproduce:
{code:java}
export LANG=
export LC_CTYPE="POSIX"
export LC_NUMERIC="POSIX"
export LC_TIME="POSIX"
export LC_COLLATE="POSIX"
export LC_MONETARY="POSIX"
export LC_MESSAGES="POSIX"
export LC_PAPER="POSIX"
export LC_NAME="POSIX"
export LC_ADDRESS="POSIX"
export LC_TELEPHONE="POSIX"
export LC_MEASUREMENT="POSIX"
export LC_IDENTIFICATION="POSIX"

git clone https://github.com/apache/spark.git && cd spark && git checkout v2.4.0

build/sbt "hive/testOnly *.HiveDDLSuite" -Phive -Phadoop-2.7 
-Dhadoop.version=2.8.0

{code}
Stack trace:
{noformat}
Caused by: sbt.ForkMain$ForkError: java.nio.file.InvalidPathException: 
Malformed input or input contains unmappable characters: 
/home/jenkins/workspace/SparkPullRequestBuilder@2/target/tmp/warehouse-15474fdf-0808-40ab-946d-1309fb05bf26/tab1/尼=2
at sun.nio.fs.UnixPath.encode(UnixPath.java:147)
at sun.nio.fs.UnixPath.<init>(UnixPath.java:71)
at sun.nio.fs.UnixFileSystem.getPath(UnixFileSystem.java:281)
at java.io.File.toPath(File.java:2234)
at 
org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getLastAccessTime(RawLocalFileSystem.java:683)
at 
org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.<init>(RawLocalFileSystem.java:694)
at 
org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:664)
at 
org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:987)
at 
org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:656)
at 
org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:454)
at 
org.apache.hadoop.hive.io.HdfsUtils$HadoopFileStatus.<init>(HdfsUtils.java:211)
at org.apache.hadoop.hive.ql.metadata.Hive.moveFile(Hive.java:3122)
at org.apache.hadoop.hive.ql.metadata.Hive.replaceFiles(Hive.java:3478)
at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1650)
at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1579)
at sun.reflect.GeneratedMethodAccessor209.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.spark.sql.hive.client.Shim_v2_1.loadPartition(HiveShim.scala:1145)
at 
org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$loadPartition$1(HiveClientImpl.scala:788)
at 
scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at 
org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:287)
at 
org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:225)
at 
org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:224)
at 
org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:270)
at 
org.apache.spark.sql.hive.client.HiveClientImpl.loadPartition(HiveClientImpl.scala:778)
at 
org.apache.spark.sql.hive.HiveExternalCatalog.$anonfun$loadPartition$1(HiveExternalCatalog.scala:885)
at 
scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at 
org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:99){noformat}
[https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/103328/testReport/org.apache.spark.sql.hive.execution/HiveCatalogedDDLSuite/basic_DDL_using_locale_tr___caseSensitive_true/]

[https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/103328/testReport/org.apache.spark.sql.hive.execution/HiveDDLSuite/create_Hive_serde_table_and_view_with_unicode_columns_and_comment/]

 

It works before https://issues.apache.org/jira/browse/HADOOP-12045.

We could work around it by resetting the locale:
{code:java}
export LANG=en_US.UTF-8
export LC_CTYPE="en_US.UTF-8"
export LC_NUMERIC="en_US.UTF-8"
export LC_TIME="en_US.UTF-8"
export LC_COLLATE="en_US.UTF-8"
export LC_MONETARY="en_US.UTF-8"
export LC_MESSAGES="en_US.UTF-8"
export LC_PAPER="en_US.UTF-8"
export LC_NAME="en_US.UTF-8"
export LC_ADDRESS="en_US.UTF-8"
export LC_TELEPHONE="en_US.UTF-8"
export LC_MEASUREMENT="en_US.UTF-8"
export LC_IDENTIFICATION="en_US.UTF-8"
{code}

  was:
How to reproduce:
{code:java}
export LANG=
export LC_CTYPE="POSIX"
export LC_NUMERIC="POSIX"
export LC_TIME="POSIX"
export LC_COLLATE="POSIX"
export LC_MONETARY="POSIX"
export LC_MESSAGES="POSIX"
export LC_PAPER="POSIX"
export LC_NAME="POSIX"
export LC_ADDRESS="POSIX"
export LC_TELEPHONE="POSIX"
export LC_MEASUREMENT="POSIX"
export LC_IDENTIFICATION="POSIX"

git clone https://github.com/apache/spark.git && cd spark && git 

[jira] [Commented] (HADOOP-16180) LocalFileSystem throw Malformed input or input contains unmappable characters

2019-03-11  Yuming Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16790169#comment-16790169
 ] 

Yuming Wang commented on HADOOP-16180:
--

cc [~fjk]

> LocalFileSystem throw Malformed input or input contains unmappable characters
> -
>
> Key: HADOOP-16180
> URL: https://issues.apache.org/jira/browse/HADOOP-16180
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0, 3.2.0
>Reporter: Yuming Wang
>Priority: Major
>
> How to reproduce:
> {code:java}
> export LANG=
> export LC_CTYPE="POSIX"
> export LC_NUMERIC="POSIX"
> export LC_TIME="POSIX"
> export LC_COLLATE="POSIX"
> export LC_MONETARY="POSIX"
> export LC_MESSAGES="POSIX"
> export LC_PAPER="POSIX"
> export LC_NAME="POSIX"
> export LC_ADDRESS="POSIX"
> export LC_TELEPHONE="POSIX"
> export LC_MEASUREMENT="POSIX"
> export LC_IDENTIFICATION="POSIX"
> git clone https://github.com/apache/spark.git && cd spark && git checkout 
> v2.4.0
> build/sbt "hive/testOnly *.HiveDDLSuite" -Phive -Phadoop-2.7 
> -Dhadoop.version=2.8.0
> {code}
> It works before https://issues.apache.org/jira/browse/HADOOP-12045.
> We could work around it by resetting the locale:
> {code:java}
> export LANG=en_US.UTF-8
> export LC_CTYPE="en_US.UTF-8"
> export LC_NUMERIC="en_US.UTF-8"
> export LC_TIME="en_US.UTF-8"
> export LC_COLLATE="en_US.UTF-8"
> export LC_MONETARY="en_US.UTF-8"
> export LC_MESSAGES="en_US.UTF-8"
> export LC_PAPER="en_US.UTF-8"
> export LC_NAME="en_US.UTF-8"
> export LC_ADDRESS="en_US.UTF-8"
> export LC_TELEPHONE="en_US.UTF-8"
> export LC_MEASUREMENT="en_US.UTF-8"
> export LC_IDENTIFICATION="en_US.UTF-8"
> {code}






[jira] [Updated] (HADOOP-16180) LocalFileSystem throw Malformed input or input contains unmappable characters

2019-03-11  Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated HADOOP-16180:
-
Description: 
How to reproduce:
{code:java}
export LANG=
export LC_CTYPE="POSIX"
export LC_NUMERIC="POSIX"
export LC_TIME="POSIX"
export LC_COLLATE="POSIX"
export LC_MONETARY="POSIX"
export LC_MESSAGES="POSIX"
export LC_PAPER="POSIX"
export LC_NAME="POSIX"
export LC_ADDRESS="POSIX"
export LC_TELEPHONE="POSIX"
export LC_MEASUREMENT="POSIX"
export LC_IDENTIFICATION="POSIX"

git clone https://github.com/apache/spark.git && cd spark && git checkout v2.4.0

build/sbt "hive/testOnly *.HiveDDLSuite" -Phive -Phadoop-2.7 
-Dhadoop.version=2.8.0

{code}
It works before https://issues.apache.org/jira/browse/HADOOP-12045.

We could work around it by resetting the locale:
{code:java}
export LANG=en_US.UTF-8
export LC_CTYPE="en_US.UTF-8"
export LC_NUMERIC="en_US.UTF-8"
export LC_TIME="en_US.UTF-8"
export LC_COLLATE="en_US.UTF-8"
export LC_MONETARY="en_US.UTF-8"
export LC_MESSAGES="en_US.UTF-8"
export LC_PAPER="en_US.UTF-8"
export LC_NAME="en_US.UTF-8"
export LC_ADDRESS="en_US.UTF-8"
export LC_TELEPHONE="en_US.UTF-8"
export LC_MEASUREMENT="en_US.UTF-8"
export LC_IDENTIFICATION="en_US.UTF-8"
{code}

  was:
How to reproduce:
{code}
export LANG=
export LC_CTYPE="POSIX"
export LC_NUMERIC="POSIX"
export LC_TIME="POSIX"
export LC_COLLATE="POSIX"
export LC_MONETARY="POSIX"
export LC_MESSAGES="POSIX"
export LC_PAPER="POSIX"
export LC_NAME="POSIX"
export LC_ADDRESS="POSIX"
export LC_TELEPHONE="POSIX"
export LC_MEASUREMENT="POSIX"
export LC_IDENTIFICATION="POSIX"

git clone https://github.com/apache/spark.git && cd spark && git checkout v2.4.0

build/sbt "hive/testOnly *.HiveDDLSuite" -Phive -Phadoop-2.7 
-Dhadoop.version=2.8.0

{code}
It works before https://issues.apache.org/jira/browse/HADOOP-12045.

We could work around it by resetting the locale:
{code:java}
export LANG=en_US.UTF-8
export LC_CTYPE="en_US.UTF-8"
export LC_NUMERIC="en_US.UTF-8"
export LC_TIME="en_US.UTF-8"
export LC_COLLATE="en_US.UTF-8"
export LC_MONETARY="en_US.UTF-8"
export LC_MESSAGES="en_US.UTF-8"
export LC_PAPER="en_US.UTF-8"
export LC_NAME="en_US.UTF-8"
export LC_ADDRESS="en_US.UTF-8"
export LC_TELEPHONE="en_US.UTF-8"
export LC_MEASUREMENT="en_US.UTF-8"
export LC_IDENTIFICATION="en_US.UTF-8"
{code}


> LocalFileSystem throw Malformed input or input contains unmappable characters
> -
>
> Key: HADOOP-16180
> URL: https://issues.apache.org/jira/browse/HADOOP-16180
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0, 3.2.0
>Reporter: Yuming Wang
>Priority: Major
>
> How to reproduce:
> {code:java}
> export LANG=
> export LC_CTYPE="POSIX"
> export LC_NUMERIC="POSIX"
> export LC_TIME="POSIX"
> export LC_COLLATE="POSIX"
> export LC_MONETARY="POSIX"
> export LC_MESSAGES="POSIX"
> export LC_PAPER="POSIX"
> export LC_NAME="POSIX"
> export LC_ADDRESS="POSIX"
> export LC_TELEPHONE="POSIX"
> export LC_MEASUREMENT="POSIX"
> export LC_IDENTIFICATION="POSIX"
> git clone https://github.com/apache/spark.git && cd spark && git checkout 
> v2.4.0
> build/sbt "hive/testOnly *.HiveDDLSuite" -Phive -Phadoop-2.7 
> -Dhadoop.version=2.8.0
> {code}
> It works before https://issues.apache.org/jira/browse/HADOOP-12045.
> We could work around it by resetting the locale:
> {code:java}
> export LANG=en_US.UTF-8
> export LC_CTYPE="en_US.UTF-8"
> export LC_NUMERIC="en_US.UTF-8"
> export LC_TIME="en_US.UTF-8"
> export LC_COLLATE="en_US.UTF-8"
> export LC_MONETARY="en_US.UTF-8"
> export LC_MESSAGES="en_US.UTF-8"
> export LC_PAPER="en_US.UTF-8"
> export LC_NAME="en_US.UTF-8"
> export LC_ADDRESS="en_US.UTF-8"
> export LC_TELEPHONE="en_US.UTF-8"
> export LC_MEASUREMENT="en_US.UTF-8"
> export LC_IDENTIFICATION="en_US.UTF-8"
> {code}






[jira] [Created] (HADOOP-16180) LocalFileSystem throw Malformed input or input contains unmappable characters

2019-03-11  Yuming Wang (JIRA)
Yuming Wang created HADOOP-16180:


 Summary: LocalFileSystem throw Malformed input or input contains 
unmappable characters
 Key: HADOOP-16180
 URL: https://issues.apache.org/jira/browse/HADOOP-16180
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 3.2.0, 2.8.0
Reporter: Yuming Wang


How to reproduce:
{code}
export LANG=
export LC_CTYPE="POSIX"
export LC_NUMERIC="POSIX"
export LC_TIME="POSIX"
export LC_COLLATE="POSIX"
export LC_MONETARY="POSIX"
export LC_MESSAGES="POSIX"
export LC_PAPER="POSIX"
export LC_NAME="POSIX"
export LC_ADDRESS="POSIX"
export LC_TELEPHONE="POSIX"
export LC_MEASUREMENT="POSIX"
export LC_IDENTIFICATION="POSIX"

git clone https://github.com/apache/spark.git && cd spark && git checkout v2.4.0

build/sbt "hive/testOnly *.HiveDDLSuite" -Phive -Phadoop-2.7 
-Dhadoop.version=2.8.0

{code}
It works before https://issues.apache.org/jira/browse/HADOOP-12045.

We could work around it by resetting the locale:
{code:java}
export LANG=en_US.UTF-8
export LC_CTYPE="en_US.UTF-8"
export LC_NUMERIC="en_US.UTF-8"
export LC_TIME="en_US.UTF-8"
export LC_COLLATE="en_US.UTF-8"
export LC_MONETARY="en_US.UTF-8"
export LC_MESSAGES="en_US.UTF-8"
export LC_PAPER="en_US.UTF-8"
export LC_NAME="en_US.UTF-8"
export LC_ADDRESS="en_US.UTF-8"
export LC_TELEPHONE="en_US.UTF-8"
export LC_MEASUREMENT="en_US.UTF-8"
export LC_IDENTIFICATION="en_US.UTF-8"
{code}






[jira] [Updated] (HADOOP-16152) Upgrade Eclipse Jetty version to 9.4.x

2019-03-01  Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated HADOOP-16152:
-
Description: 
Some big data projects have upgraded Jetty to 9.4.x, which causes compatibility 
issues.

Spark: 
[https://github.com/apache/spark/blob/5a92b5a47cdfaea96a9aeedaf80969d825a382f2/pom.xml#L141]
Calcite: [https://github.com/apache/calcite/blob/avatica-1.13.0-rc0/pom.xml#L87]
Hive: https://issues.apache.org/jira/browse/HIVE-21211

  was:
Some big data projects have upgraded Jetty to 9.4.x, which causes compatibility 
issues.

Spark: 
https://github.com/apache/spark/blob/5a92b5a47cdfaea96a9aeedaf80969d825a382f2/pom.xml#L141
Calcite: https://github.com/apache/calcite/blob/avatica-1.13.0-rc0/pom.xml#L87


> Upgrade Eclipse Jetty version to 9.4.x
> --
>
> Key: HADOOP-16152
> URL: https://issues.apache.org/jira/browse/HADOOP-16152
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Yuming Wang
>Priority: Major
>
> Some big data projects have upgraded Jetty to 9.4.x, which causes 
> compatibility issues.
> Spark: 
> [https://github.com/apache/spark/blob/5a92b5a47cdfaea96a9aeedaf80969d825a382f2/pom.xml#L141]
> Calcite: 
> [https://github.com/apache/calcite/blob/avatica-1.13.0-rc0/pom.xml#L87]
> Hive: https://issues.apache.org/jira/browse/HIVE-21211






[jira] [Commented] (HADOOP-16152) Upgrade Eclipse Jetty version to 9.4.x

2019-02-28  Yuming Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16781307#comment-16781307
 ] 

Yuming Wang commented on HADOOP-16152:
--

It conflicts when running the tests. I [pinned Jetty to 
9.3.24.v20180605|https://github.com/wangyum/spark/blob/5075a4231a5a46254ff393c30fba02f76cb4ddbf/pom.xml#L2844]
 to work around the issue.
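
The pin itself is an ordinary dependencyManagement entry in Spark's pom.xml; 
roughly (a sketch, exact placement assumed):
{code:xml}
<dependencyManagement>
  <dependencies>
    <!-- Sketch: hold Jetty at the version the Hadoop 3.1 minicluster expects -->
    <dependency>
      <groupId>org.eclipse.jetty</groupId>
      <artifactId>jetty-servlet</artifactId>
      <version>9.3.24.v20180605</version>
    </dependency>
  </dependencies>
</dependencyManagement>
{code}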
{code:java}
$ git clone https://github.com/wangyum/spark.git
$ cd spark && git checkout DNR-HADOOP-16152
$ build/sbt  "yarn/testOnly"  -Phadoop-3.1 -Pyarn
{code}
{noformat}
[info] YarnShuffleAuthSuite:
[info] org.apache.spark.deploy.yarn.YarnShuffleAuthSuite *** ABORTED *** (146 
milliseconds)
[info]   org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
java.lang.NoSuchMethodError: 
org.eclipse.jetty.server.session.SessionHandler.getSessionManager()Lorg/eclipse/jetty/server/SessionManager;
[info]   at 
org.apache.hadoop.yarn.server.MiniYARNCluster.startResourceManager(MiniYARNCluster.java:373)
[info]   at 
org.apache.hadoop.yarn.server.MiniYARNCluster.access$300(MiniYARNCluster.java:128)
[info]   at 
org.apache.hadoop.yarn.server.MiniYARNCluster$ResourceManagerWrapper.serviceStart(MiniYARNCluster.java:503)
[info]   at 
org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
[info]   at 
org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
[info]   at 
org.apache.hadoop.yarn.server.MiniYARNCluster.serviceStart(MiniYARNCluster.java:322)
[info]   at 
org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
[info]   at 
org.apache.spark.deploy.yarn.BaseYarnClusterSuite.beforeAll(BaseYarnClusterSuite.scala:86)
[info]   at 
org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:212)
[info]   at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
[info]   at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
[info]   at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:53)
[info]   at 
org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:314)
[info]   at 
org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:507)
[info]   at sbt.ForkMain$Run$2.call(ForkMain.java:296)
[info]   at sbt.ForkMain$Run$2.call(ForkMain.java:286)
[info]   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
[info]   at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[info]   at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[info]   at java.lang.Thread.run(Thread.java:748)
[info]   Cause: java.lang.NoSuchMethodError: 
org.eclipse.jetty.server.session.SessionHandler.getSessionManager()Lorg/eclipse/jetty/server/SessionManager;
[info]   at 
org.apache.hadoop.http.HttpServer2.initializeWebServer(HttpServer2.java:577)
[info]   at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:558)
[info]   at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:119)
[info]   at 
org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:433)
[info]   at 
org.apache.hadoop.yarn.webapp.WebApps$Builder.build(WebApps.java:341)
[info]   at 
org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:432)
[info]   at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:1226)
[info]   at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1335)
[info]   at 
org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
[info]   at 
org.apache.hadoop.yarn.server.MiniYARNCluster.startResourceManager(MiniYARNCluster.java:365)
[info]   at 
org.apache.hadoop.yarn.server.MiniYARNCluster.access$300(MiniYARNCluster.java:128)
[info]   at 
org.apache.hadoop.yarn.server.MiniYARNCluster$ResourceManagerWrapper.serviceStart(MiniYARNCluster.java:503)
[info]   at 
org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
[info]   at 
org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
[info]   at 
org.apache.hadoop.yarn.server.MiniYARNCluster.serviceStart(MiniYARNCluster.java:322)
[info]   at 
org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
[info]   at 
org.apache.spark.deploy.yarn.BaseYarnClusterSuite.beforeAll(BaseYarnClusterSuite.scala:86)
[info]   at 
org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:212)
[info]   at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
[info]   at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
[info]   at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:53)
[info]   at 
org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:314)
[info]   at 
org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:507)
[info]   at sbt.ForkMain$Run$2.call(ForkMain.java:296)
[info]   at sbt.ForkMain$Run$2.call(ForkMain.java:286)

[jira] [Updated] (HADOOP-16087) Backport HADOOP-15549 to branch-3.0

2019-02-28  Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated HADOOP-16087:
-
Fix Version/s: (was: 3.1.3)
   3.0.2

> Backport HADOOP-15549 to branch-3.0
> ---
>
> Key: HADOOP-16087
> URL: https://issues.apache.org/jira/browse/HADOOP-16087
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 3.0.2
>Reporter: Yuming Wang
>Assignee: Todd Lipcon
>Priority: Major
> Fix For: 3.0.2
>
> Attachments: HADOOP-16087-branch-3.0-001.patch, 
> HADOOP-16087-branch-3.0-002.patch
>
>







[jira] [Updated] (HADOOP-16086) Backport HADOOP-15549 to branch-3.1

2019-02-28  Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated HADOOP-16086:
-
Fix Version/s: (was: 3.0.2)
   3.1.3

> Backport HADOOP-15549 to branch-3.1
> ---
>
> Key: HADOOP-16086
> URL: https://issues.apache.org/jira/browse/HADOOP-16086
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 3.0.2
>Reporter: Yuming Wang
>Assignee: Todd Lipcon
>Priority: Major
> Fix For: 3.1.3
>
> Attachments: HADOOP-16086-branch-3.1-001.patch, 
> HADOOP-16086-branch-3.1-002.patch
>
>
> Backport HADOOP-15549 to branch-3.1 to fix IllegalArgumentException:
> {noformat}
> 02:44:34.707 ERROR org.apache.hadoop.hive.ql.exec.Task: Job Submission failed 
> with exception 'java.io.IOException(Cannot initialize Cluster. Please check 
> your configuration for mapreduce.framework.name and the correspond server 
> addresses.)'
> java.io.IOException: Cannot initialize Cluster. Please check your 
> configuration for mapreduce.framework.name and the correspond server 
> addresses.
>   at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:116)
>   at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:109)
>   at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:102)
>   at org.apache.hadoop.mapred.JobClient.init(JobClient.java:475)
>   at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:454)
>   at 
> org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:369)
>   at 
> org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:151)
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199)
>   at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
>   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2183)
>   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1839)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1526)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1227)
>   at 
> org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$runHive$1(HiveClientImpl.scala:730)
>   at 
> org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:283)
>   at 
> org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:221)
>   at 
> org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:220)
>   at 
> org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:266)
>   at 
> org.apache.spark.sql.hive.client.HiveClientImpl.runHive(HiveClientImpl.scala:719)
>   at 
> org.apache.spark.sql.hive.client.HiveClientImpl.runSqlHive(HiveClientImpl.scala:709)
>   at 
> org.apache.spark.sql.hive.StatisticsSuite.createNonPartitionedTable(StatisticsSuite.scala:719)
>   at 
> org.apache.spark.sql.hive.StatisticsSuite.$anonfun$testAlterTableProperties$2(StatisticsSuite.scala:822)
>   at 
> org.apache.spark.sql.test.SQLTestUtilsBase.withTable(SQLTestUtils.scala:284)
>   at 
> org.apache.spark.sql.test.SQLTestUtilsBase.withTable$(SQLTestUtils.scala:283)
>   at 
> org.apache.spark.sql.StatisticsCollectionTestBase.withTable(StatisticsCollectionTestBase.scala:40)
>   at 
> org.apache.spark.sql.hive.StatisticsSuite.$anonfun$testAlterTableProperties$1(StatisticsSuite.scala:821)
>   at 
> org.apache.spark.sql.hive.StatisticsSuite.$anonfun$testAlterTableProperties$1$adapted(StatisticsSuite.scala:820)
>   at scala.collection.immutable.List.foreach(List.scala:392)
>   at 
> org.apache.spark.sql.hive.StatisticsSuite.testAlterTableProperties(StatisticsSuite.scala:820)
>   at 
> org.apache.spark.sql.hive.StatisticsSuite.$anonfun$new$70(StatisticsSuite.scala:851)
>   at 
> scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
>   at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
>   at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
>   at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
>   at org.scalatest.Transformer.apply(Transformer.scala:22)
>   at org.scalatest.Transformer.apply(Transformer.scala:20)
>   at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:186)
>   at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:104)
>   at 
> org.scalatest.FunSuiteLike.invokeWithFixture$1(FunSuiteLike.scala:184)
>   at org.scalatest.FunSuiteLike.$anonfun$runTest$1(FunSuiteLike.scala:196)
>   at org.scalatest.SuperEngine.runTestImpl(Engine.scala:289)
>   at org.scalatest.FunSuiteLike.runTest(FunSuiteLike.scala:196)
>   at 
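The IOException above is raised by the MapReduce client bootstrap when no ClientProtocolProvider matches {{mapreduce.framework.name}}. A minimal sketch of that initialization path, assuming the YARN client jars are on the classpath (the ResourceManager address is illustrative):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Cluster;

public class ClusterInitCheck {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("mapreduce.framework.name", "yarn"); // or "local" for the LocalJobRunner
    conf.set("yarn.resourcemanager.address", "rm-host:8032"); // illustrative address
    // Cluster.initialize() walks the ClientProtocolProvider service loader and
    // throws the "Cannot initialize Cluster" IOException when no provider
    // accepts the configured framework name.
    Cluster cluster = new Cluster(conf);
    System.out.println("initialized: " + cluster.getFileSystem().getUri());
    cluster.close();
  }
}
{code}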

[jira] [Updated] (HADOOP-16152) Upgrade Eclipse Jetty version to 9.4.x

2019-02-28 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated HADOOP-16152:
-
Description: 
Some big data projects have upgraded Jetty to 9.4.x, which causes some 
compatibility issues.

Spark: 
https://github.com/apache/spark/blob/5a92b5a47cdfaea96a9aeedaf80969d825a382f2/pom.xml#L141
Calcite: https://github.com/apache/calcite/blob/avatica-1.13.0-rc0/pom.xml#L87

  was:
Some big data projects have been upgraded to 9.4.x, which causes some 
compatibility issues.

Spark: 
https://github.com/apache/spark/blob/5a92b5a47cdfaea96a9aeedaf80969d825a382f2/pom.xml#L141
Calcite: https://github.com/apache/calcite/blob/avatica-1.13.0-rc0/pom.xml#L87


> Upgrade Eclipse Jetty version to 9.4.x
> --
>
> Key: HADOOP-16152
> URL: https://issues.apache.org/jira/browse/HADOOP-16152
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Yuming Wang
>Priority: Major
>
> Some big data projects have upgraded Jetty to 9.4.x, which causes some 
> compatibility issues.
> Spark: 
> https://github.com/apache/spark/blob/5a92b5a47cdfaea96a9aeedaf80969d825a382f2/pom.xml#L141
> Calcite: https://github.com/apache/calcite/blob/avatica-1.13.0-rc0/pom.xml#L87



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16152) Upgrade Eclipse Jetty version to 9.4.x

2019-02-27 Thread Yuming Wang (JIRA)
Yuming Wang created HADOOP-16152:


 Summary: Upgrade Eclipse Jetty version to 9.4.x
 Key: HADOOP-16152
 URL: https://issues.apache.org/jira/browse/HADOOP-16152
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.2.0
Reporter: Yuming Wang


Some big data projects have upgraded to Jetty 9.4.x, which causes some 
compatibility issues.

Spark: 
https://github.com/apache/spark/blob/5a92b5a47cdfaea96a9aeedaf80969d825a382f2/pom.xml#L141
Calcite: https://github.com/apache/calcite/blob/avatica-1.13.0-rc0/pom.xml#L87
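For reference, the upgrade in downstream projects amounts to a dependency pin of the kind sketched below; the property name and the exact 9.4.x patch release are illustrative assumptions, not copied from any of the linked POMs:

{code:xml}
<!-- Illustrative sketch only; property name and patch version are assumptions. -->
<properties>
  <jetty.version>9.4.20.v20190813</jetty.version>
</properties>

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.eclipse.jetty</groupId>
      <artifactId>jetty-server</artifactId>
      <version>${jetty.version}</version>
    </dependency>
  </dependencies>
</dependencyManagement>
{code}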



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15549) Upgrade to commons-configuration 2.1 regresses task CPU consumption

2019-01-29 Thread Yuming Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1675#comment-1675
 ] 

Yuming Wang commented on HADOOP-15549:
--

Thanks [~ste...@apache.org]. Two new JIRAs have been created:

https://issues.apache.org/jira/browse/HADOOP-16086
 https://issues.apache.org/jira/browse/HADOOP-16087

> Upgrade to commons-configuration 2.1 regresses task CPU consumption
> ---
>
> Key: HADOOP-15549
> URL: https://issues.apache.org/jira/browse/HADOOP-15549
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 3.0.2
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: hadoop-15549.txt
>
>
> HADOOP-13660 upgraded from commons-configuration 1.x to 2.x. 
> commons-configuration is used when parsing the metrics configuration 
> properties file. The new builder API used in the new version apparently makes 
> use of a bunch of very bloated reflection and classloading nonsense to 
> achieve the same goal, and this results in a regression of >100ms of CPU time 
> as measured by a program which simply initializes DefaultMetricsSystem.
> This isn't a big deal for long-running daemons, but for MR tasks which might 
> only run a few seconds on poorly-tuned jobs, this can be noticeable.
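The regression is easy to observe directly. A minimal timing sketch, assuming hadoop-common 3.x on the classpath (the prefix string is an arbitrary choice, not taken from the attached patch):

{code:java}
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;

public class MetricsInitBench {
  public static void main(String[] args) {
    long start = System.nanoTime();
    // First initialization parses hadoop-metrics2.properties through
    // commons-configuration, which is where the extra CPU time goes.
    DefaultMetricsSystem.initialize("bench");
    long elapsedMs = (System.nanoTime() - start) / 1_000_000;
    System.out.println("DefaultMetricsSystem.initialize: " + elapsedMs + " ms");
    DefaultMetricsSystem.shutdown();
  }
}
{code}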



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16087) Backport HADOOP-15549 to branch-3.0

2019-01-29 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated HADOOP-16087:
-
Attachment: HADOOP-16087-branch-3.0-002.patch

> Backport HADOOP-15549 to branch-3.0
> ---
>
> Key: HADOOP-16087
> URL: https://issues.apache.org/jira/browse/HADOOP-16087
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 3.0.2
>Reporter: Yuming Wang
>Priority: Major
> Attachments: HADOOP-16087-branch-3.0-001.patch, 
> HADOOP-16087-branch-3.0-002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16086) Backport HADOOP-15549 to branch-3.1

2019-01-29 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated HADOOP-16086:
-
Attachment: HADOOP-16086-branch-3.1-002.patch

> Backport HADOOP-15549 to branch-3.1
> ---
>
> Key: HADOOP-16086
> URL: https://issues.apache.org/jira/browse/HADOOP-16086
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 3.0.2
>Reporter: Yuming Wang
>Priority: Major
> Attachments: HADOOP-16086-branch-3.1-001.patch, 
> HADOOP-16086-branch-3.1-002.patch
>
>
> Backport HADOOP-15549 to branch-3.1 to fix IllegalArgumentException:
> {noformat}
> 02:44:34.707 ERROR org.apache.hadoop.hive.ql.exec.Task: Job Submission failed 
> with exception 'java.io.IOException(Cannot initialize Cluster. Please check 
> your configuration for mapreduce.framework.name and the correspond server 
> addresses.)'
> java.io.IOException: Cannot initialize Cluster. Please check your 
> configuration for mapreduce.framework.name and the correspond server 
> addresses.
>   at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:116)
>   at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:109)
>   at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:102)
>   at org.apache.hadoop.mapred.JobClient.init(JobClient.java:475)
>   at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:454)
>   at 
> org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:369)
>   at 
> org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:151)
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199)
>   at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
>   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2183)
>   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1839)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1526)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1227)
>   at 
> org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$runHive$1(HiveClientImpl.scala:730)
>   at 
> org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:283)
>   at 
> org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:221)
>   at 
> org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:220)
>   at 
> org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:266)
>   at 
> org.apache.spark.sql.hive.client.HiveClientImpl.runHive(HiveClientImpl.scala:719)
>   at 
> org.apache.spark.sql.hive.client.HiveClientImpl.runSqlHive(HiveClientImpl.scala:709)
>   at 
> org.apache.spark.sql.hive.StatisticsSuite.createNonPartitionedTable(StatisticsSuite.scala:719)
>   at 
> org.apache.spark.sql.hive.StatisticsSuite.$anonfun$testAlterTableProperties$2(StatisticsSuite.scala:822)
>   at 
> org.apache.spark.sql.test.SQLTestUtilsBase.withTable(SQLTestUtils.scala:284)
>   at 
> org.apache.spark.sql.test.SQLTestUtilsBase.withTable$(SQLTestUtils.scala:283)
>   at 
> org.apache.spark.sql.StatisticsCollectionTestBase.withTable(StatisticsCollectionTestBase.scala:40)
>   at 
> org.apache.spark.sql.hive.StatisticsSuite.$anonfun$testAlterTableProperties$1(StatisticsSuite.scala:821)
>   at 
> org.apache.spark.sql.hive.StatisticsSuite.$anonfun$testAlterTableProperties$1$adapted(StatisticsSuite.scala:820)
>   at scala.collection.immutable.List.foreach(List.scala:392)
>   at 
> org.apache.spark.sql.hive.StatisticsSuite.testAlterTableProperties(StatisticsSuite.scala:820)
>   at 
> org.apache.spark.sql.hive.StatisticsSuite.$anonfun$new$70(StatisticsSuite.scala:851)
>   at 
> scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
>   at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
>   at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
>   at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
>   at org.scalatest.Transformer.apply(Transformer.scala:22)
>   at org.scalatest.Transformer.apply(Transformer.scala:20)
>   at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:186)
>   at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:104)
>   at 
> org.scalatest.FunSuiteLike.invokeWithFixture$1(FunSuiteLike.scala:184)
>   at org.scalatest.FunSuiteLike.$anonfun$runTest$1(FunSuiteLike.scala:196)
>   at org.scalatest.SuperEngine.runTestImpl(Engine.scala:289)
>   at org.scalatest.FunSuiteLike.runTest(FunSuiteLike.scala:196)
>   at org.scalatest.FunSuiteLike.runTest$(FunSuiteLike.scala:178)
>   at 

[jira] [Updated] (HADOOP-16087) Backport HADOOP-15549 to branch-3.0

2019-01-29 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated HADOOP-16087:
-
Attachment: HADOOP-16086-branch-3.1-002.patch

> Backport HADOOP-15549 to branch-3.0
> ---
>
> Key: HADOOP-16087
> URL: https://issues.apache.org/jira/browse/HADOOP-16087
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 3.0.2
>Reporter: Yuming Wang
>Priority: Major
> Attachments: HADOOP-16087-branch-3.0-001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16087) Backport HADOOP-15549 to branch-3.0

2019-01-29 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated HADOOP-16087:
-
Attachment: (was: HADOOP-16086-branch-3.1-002.patch)

> Backport HADOOP-15549 to branch-3.0
> ---
>
> Key: HADOOP-16087
> URL: https://issues.apache.org/jira/browse/HADOOP-16087
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 3.0.2
>Reporter: Yuming Wang
>Priority: Major
> Attachments: HADOOP-16087-branch-3.0-001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16087) Backport HADOOP-15549 to branch-3.0

2019-01-29 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated HADOOP-16087:
-
Status: Open  (was: Patch Available)

> Backport HADOOP-15549 to branch-3.0
> ---
>
> Key: HADOOP-16087
> URL: https://issues.apache.org/jira/browse/HADOOP-16087
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 3.0.2
>Reporter: Yuming Wang
>Priority: Major
> Attachments: HADOOP-16087-branch-3.0-001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16086) Backport HADOOP-15549 to branch-3.1

2019-01-29 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated HADOOP-16086:
-
Description: 
Backport HADOOP-15549 to branch-3.1 to fix IllegalArgumentException:
{noformat}
02:44:34.707 ERROR org.apache.hadoop.hive.ql.exec.Task: Job Submission failed 
with exception 'java.io.IOException(Cannot initialize Cluster. Please check 
your configuration for mapreduce.framework.name and the correspond server 
addresses.)'
java.io.IOException: Cannot initialize Cluster. Please check your configuration 
for mapreduce.framework.name and the correspond server addresses.
at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:116)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:109)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:102)
at org.apache.hadoop.mapred.JobClient.init(JobClient.java:475)
at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:454)
at 
org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:369)
at 
org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:151)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199)
at 
org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2183)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1839)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1526)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1227)
at 
org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$runHive$1(HiveClientImpl.scala:730)
at 
org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:283)
at 
org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:221)
at 
org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:220)
at 
org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:266)
at 
org.apache.spark.sql.hive.client.HiveClientImpl.runHive(HiveClientImpl.scala:719)
at 
org.apache.spark.sql.hive.client.HiveClientImpl.runSqlHive(HiveClientImpl.scala:709)
at 
org.apache.spark.sql.hive.StatisticsSuite.createNonPartitionedTable(StatisticsSuite.scala:719)
at 
org.apache.spark.sql.hive.StatisticsSuite.$anonfun$testAlterTableProperties$2(StatisticsSuite.scala:822)
at 
org.apache.spark.sql.test.SQLTestUtilsBase.withTable(SQLTestUtils.scala:284)
at 
org.apache.spark.sql.test.SQLTestUtilsBase.withTable$(SQLTestUtils.scala:283)
at 
org.apache.spark.sql.StatisticsCollectionTestBase.withTable(StatisticsCollectionTestBase.scala:40)
at 
org.apache.spark.sql.hive.StatisticsSuite.$anonfun$testAlterTableProperties$1(StatisticsSuite.scala:821)
at 
org.apache.spark.sql.hive.StatisticsSuite.$anonfun$testAlterTableProperties$1$adapted(StatisticsSuite.scala:820)
at scala.collection.immutable.List.foreach(List.scala:392)
at 
org.apache.spark.sql.hive.StatisticsSuite.testAlterTableProperties(StatisticsSuite.scala:820)
at 
org.apache.spark.sql.hive.StatisticsSuite.$anonfun$new$70(StatisticsSuite.scala:851)
at 
scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
at org.scalatest.Transformer.apply(Transformer.scala:22)
at org.scalatest.Transformer.apply(Transformer.scala:20)
at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:186)
at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:104)
at 
org.scalatest.FunSuiteLike.invokeWithFixture$1(FunSuiteLike.scala:184)
at org.scalatest.FunSuiteLike.$anonfun$runTest$1(FunSuiteLike.scala:196)
at org.scalatest.SuperEngine.runTestImpl(Engine.scala:289)
at org.scalatest.FunSuiteLike.runTest(FunSuiteLike.scala:196)
at org.scalatest.FunSuiteLike.runTest$(FunSuiteLike.scala:178)
at org.scalatest.FunSuite.runTest(FunSuite.scala:1560)
at 
org.scalatest.FunSuiteLike.$anonfun$runTests$1(FunSuiteLike.scala:229)
at 
org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:396)
at scala.collection.immutable.List.foreach(List.scala:392)
at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:384)
at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:379)
at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:461)
at org.scalatest.FunSuiteLike.runTests(FunSuiteLike.scala:229)
at 

[jira] [Created] (HADOOP-16087) Backport HADOOP-15549 to branch-3.0

2019-01-29 Thread Yuming Wang (JIRA)
Yuming Wang created HADOOP-16087:


 Summary: Backport HADOOP-15549 to branch-3.0
 Key: HADOOP-16087
 URL: https://issues.apache.org/jira/browse/HADOOP-16087
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Affects Versions: 3.0.2
Reporter: Yuming Wang
 Attachments: HADOOP-16087-branch-3.1-001.patch





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16086) Backport HADOOP-15549 to branch-3.1

2019-01-29 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated HADOOP-16086:
-
Attachment: HADOOP-16086-branch-3.1-001.patch
Status: Patch Available  (was: Open)

> Backport HADOOP-15549 to branch-3.1
> ---
>
> Key: HADOOP-16086
> URL: https://issues.apache.org/jira/browse/HADOOP-16086
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 3.0.2
>Reporter: Yuming Wang
>Priority: Major
> Attachments: HADOOP-16086-branch-3.1-001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16087) Backport HADOOP-15549 to branch-3.0

2019-01-29 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated HADOOP-16087:
-
Attachment: HADOOP-16087-branch-3.0-001.patch
Status: Patch Available  (was: Open)

> Backport HADOOP-15549 to branch-3.0
> ---
>
> Key: HADOOP-16087
> URL: https://issues.apache.org/jira/browse/HADOOP-16087
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 3.0.2
>Reporter: Yuming Wang
>Priority: Major
> Attachments: HADOOP-16087-branch-3.0-001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16087) Backport HADOOP-15549 to branch-3.0

2019-01-29 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated HADOOP-16087:
-
Attachment: (was: HADOOP-16087-branch-3.1-001.patch)

> Backport HADOOP-15549 to branch-3.0
> ---
>
> Key: HADOOP-16087
> URL: https://issues.apache.org/jira/browse/HADOOP-16087
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 3.0.2
>Reporter: Yuming Wang
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16087) Backport HADOOP-15549 to branch-3.0

2019-01-29 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated HADOOP-16087:
-
Attachment: HADOOP-16087-branch-3.1-001.patch
Status: Patch Available  (was: Open)

> Backport HADOOP-15549 to branch-3.0
> ---
>
> Key: HADOOP-16087
> URL: https://issues.apache.org/jira/browse/HADOOP-16087
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 3.0.2
>Reporter: Yuming Wang
>Priority: Major
> Attachments: HADOOP-16087-branch-3.1-001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16086) Backport HADOOP-15549 to branch-3.1

2019-01-29 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated HADOOP-16086:
-
Description: Backport to branch-3.1

> Backport HADOOP-15549 to branch-3.1
> ---
>
> Key: HADOOP-16086
> URL: https://issues.apache.org/jira/browse/HADOOP-16086
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 3.0.2
>Reporter: Yuming Wang
>Priority: Major
> Attachments: HADOOP-16086-branch-3.1-001.patch
>
>
> Backport to branch-3.1



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16086) Backport HADOOP-15549 to branch-3.1

2019-01-29 Thread Yuming Wang (JIRA)
Yuming Wang created HADOOP-16086:


 Summary: Backport HADOOP-15549 to branch-3.1
 Key: HADOOP-16086
 URL: https://issues.apache.org/jira/browse/HADOOP-16086
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Affects Versions: 3.0.2
Reporter: Yuming Wang






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15549) Upgrade to commons-configuration 2.1 regresses task CPU consumption

2019-01-29 Thread Yuming Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16754869#comment-16754869
 ] 

Yuming Wang commented on HADOOP-15549:
--

Could we backport this patch to {{branch-3.1}}? I hit 
{{IllegalArgumentException}}:
{noformat}
02:44:34.707 ERROR org.apache.hadoop.hive.ql.exec.Task: Job Submission failed 
with exception 'java.io.IOException(Cannot initialize Cluster. Please check 
your configuration for mapreduce.framework.name and the correspond server 
addresses.)'
java.io.IOException: Cannot initialize Cluster. Please check your configuration 
for mapreduce.framework.name and the correspond server addresses.
at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:116)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:109)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:102)
at org.apache.hadoop.mapred.JobClient.init(JobClient.java:475)
at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:454)
at 
org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:369)
at 
org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:151)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199)
at 
org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2183)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1839)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1526)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1227)
at 
org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$runHive$1(HiveClientImpl.scala:730)
at 
org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:283)
at 
org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:221)
at 
org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:220)
at 
org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:266)
at 
org.apache.spark.sql.hive.client.HiveClientImpl.runHive(HiveClientImpl.scala:719)
at 
org.apache.spark.sql.hive.client.HiveClientImpl.runSqlHive(HiveClientImpl.scala:709)
at 
org.apache.spark.sql.hive.StatisticsSuite.createNonPartitionedTable(StatisticsSuite.scala:719)
at 
org.apache.spark.sql.hive.StatisticsSuite.$anonfun$testAlterTableProperties$2(StatisticsSuite.scala:822)
at 
org.apache.spark.sql.test.SQLTestUtilsBase.withTable(SQLTestUtils.scala:284)
at 
org.apache.spark.sql.test.SQLTestUtilsBase.withTable$(SQLTestUtils.scala:283)
at 
org.apache.spark.sql.StatisticsCollectionTestBase.withTable(StatisticsCollectionTestBase.scala:40)
at 
org.apache.spark.sql.hive.StatisticsSuite.$anonfun$testAlterTableProperties$1(StatisticsSuite.scala:821)
at 
org.apache.spark.sql.hive.StatisticsSuite.$anonfun$testAlterTableProperties$1$adapted(StatisticsSuite.scala:820)
at scala.collection.immutable.List.foreach(List.scala:392)
at 
org.apache.spark.sql.hive.StatisticsSuite.testAlterTableProperties(StatisticsSuite.scala:820)
at 
org.apache.spark.sql.hive.StatisticsSuite.$anonfun$new$70(StatisticsSuite.scala:851)
at 
scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
at org.scalatest.Transformer.apply(Transformer.scala:22)
at org.scalatest.Transformer.apply(Transformer.scala:20)
at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:186)
at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:104)
at 
org.scalatest.FunSuiteLike.invokeWithFixture$1(FunSuiteLike.scala:184)
at org.scalatest.FunSuiteLike.$anonfun$runTest$1(FunSuiteLike.scala:196)
at org.scalatest.SuperEngine.runTestImpl(Engine.scala:289)
at org.scalatest.FunSuiteLike.runTest(FunSuiteLike.scala:196)
at org.scalatest.FunSuiteLike.runTest$(FunSuiteLike.scala:178)
at org.scalatest.FunSuite.runTest(FunSuite.scala:1560)
at 
org.scalatest.FunSuiteLike.$anonfun$runTests$1(FunSuiteLike.scala:229)
at 
org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:396)
at scala.collection.immutable.List.foreach(List.scala:392)
at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:384)
at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:379)
at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:461)
at 

[jira] [Commented] (HADOOP-9631) ViewFs should use underlying FileSystem's server side defaults

2016-07-05 Thread Yuming Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15362107#comment-15362107
 ] 

Yuming Wang commented on HADOOP-9631:
-

[~lohit], can you please help me upload the [latest 
patch|https://drive.google.com/open?id=0BxL8Kzbd2N--QW5VaU9wX1o4Z0k]? I'm not 
authorized.

> ViewFs should use underlying FileSystem's server side defaults
> --
>
> Key: HADOOP-9631
> URL: https://issues.apache.org/jira/browse/HADOOP-9631
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, viewfs
>Affects Versions: 2.0.4-alpha
>Reporter: Lohit Vijayarenu
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-9631.trunk.1.patch, HADOOP-9631.trunk.2.patch, 
> HADOOP-9631.trunk.3.patch, HADOOP-9631.trunk.4.patch, TestFileContext.java
>
>
> On a cluster with ViewFS as default FileSystem, creating files using 
> FileContext will always result with replication factor of 1, instead of 
> underlying filesystem default (like HDFS)
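A minimal sketch of the reported behavior, assuming {{fs.defaultFS}} points at a viewfs:// mount table backed by HDFS (the mount path and the expected default of 3 are illustrative):

{code:java}
import java.util.EnumSet;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;

public class ViewFsReplicationCheck {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration(); // fs.defaultFS assumed to be viewfs://...
    FileContext fc = FileContext.getFileContext(conf);
    Path p = new Path("/mounts/data/example.txt"); // illustrative mount path
    fc.create(p, EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE)).close();
    FileStatus st = fc.getFileStatus(p);
    // With the bug, this prints 1 instead of the backing HDFS default (typically 3).
    System.out.println("replication = " + st.getReplication());
  }
}
{code}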



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-9631) ViewFs should use underlying FileSystem's server side defaults

2016-07-02 Thread Yuming Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15360190#comment-15360190
 ] 

Yuming Wang commented on HADOOP-9631:
-

Do the test failures still exist?

> ViewFs should use underlying FileSystem's server side defaults
> --
>
> Key: HADOOP-9631
> URL: https://issues.apache.org/jira/browse/HADOOP-9631
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, viewfs
>Affects Versions: 2.0.4-alpha
>Reporter: Lohit Vijayarenu
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-9631.trunk.1.patch, HADOOP-9631.trunk.2.patch, 
> HADOOP-9631.trunk.3.patch, HADOOP-9631.trunk.4.patch, TestFileContext.java
>
>
> On a cluster with ViewFS as default FileSystem, creating files using 
> FileContext will always result with replication factor of 1, instead of 
> underlying filesystem default (like HDFS)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org