[jira] [Commented] (HADOOP-16404) ABFS default blocksize change(256MB from 512MB)

2019-07-15 Thread Arun Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16885433#comment-16885433
 ] 

Arun Singh commented on HADOOP-16404:
-

[~ste...@apache.org] Could you please help us commit this change?

> ABFS default blocksize change(256MB from 512MB)
> ---
>
> Key: HADOOP-16404
> URL: https://issues.apache.org/jira/browse/HADOOP-16404
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.1.2
>Reporter: Arun Singh
>Assignee: Arun Singh
>Priority: Major
>  Labels: patch
> Fix For: 3.1.2
>
> Attachments: HADOOP-16404.patch
>
>
> We intend to change the default blocksize of the ABFS driver from 512 MB to
> 256 MB.
> After changing the blocksize we ran a series of tests (Spark Tera, Spark
> DFSIO, TPC-DS on Hive) and saw consistent improvements on the order of 4-5%.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-16404) ABFS default blocksize change(256MB from 512MB)

2019-07-10 Thread Arun Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Singh reassigned HADOOP-16404:
---

Assignee: Arun Singh




[jira] [Commented] (HADOOP-16404) ABFS default blocksize change(256MB from 512MB)

2019-07-09 Thread Arun Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16881697#comment-16881697
 ] 

Arun Singh commented on HADOOP-16404:
-

[~ste...@apache.org] Could you please assign this task to me? I will run 
tests against ABFS and share the results.




[jira] [Updated] (HADOOP-16404) ABFS default blocksize change(256MB from 512MB)

2019-07-02 Thread Arun Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Singh updated HADOOP-16404:

Attachment: HADOOP-16404.patch
Status: Patch Available  (was: Open)




[jira] [Issue Comment Deleted] (HADOOP-16404) ABFS default blocksize change(256MB from 512MB)

2019-07-02 Thread Arun Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Singh updated HADOOP-16404:

Comment: was deleted

(was: We intend to change the default blocksize of the ABFS driver from 512 MB 
to 256 MB.

After changing the blocksize we ran a series of tests (Spark Tera, Spark 
DFSIO, TPC-DS on Hive) and saw consistent improvements on the order of 4-5%.)




[jira] [Updated] (HADOOP-16404) ABFS default blocksize change(256MB from 512MB)

2019-07-02 Thread Arun Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Singh updated HADOOP-16404:

Status: Open  (was: Patch Available)




[jira] [Updated] (HADOOP-16404) ABFS default blocksize change(256MB from 512MB)

2019-07-02 Thread Arun Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Singh updated HADOOP-16404:

Attachment: (was: HADOOP-16404.patch)




[jira] [Updated] (HADOOP-16404) ABFS default blocksize change(256MB from 512MB)

2019-07-02 Thread Arun Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Singh updated HADOOP-16404:

   Labels: patch  (was: )
Affects Version/s: 3.1.2
   Attachment: HADOOP-16404.patch
 Target Version/s: 3.1.2
   Status: Patch Available  (was: Open)

We intend to change the default blocksize of the ABFS driver from 512 MB to 
256 MB.

After changing the blocksize we ran a series of tests (Spark Tera, Spark 
DFSIO, TPC-DS on Hive) and saw consistent improvements on the order of 4-5%.




[jira] [Updated] (HADOOP-16404) ABFS default blocksize change(256MB from 512MB)

2019-07-02 Thread Arun Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Singh updated HADOOP-16404:

Attachment: (was: HADOOP-16404-001.patch)




[jira] [Updated] (HADOOP-16404) ABFS default blocksize change(256MB from 512MB)

2019-07-02 Thread Arun Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Singh updated HADOOP-16404:

Attachment: HADOOP-16404-001.patch




[jira] [Updated] (HADOOP-16404) ABFS default blocksize change(256MB from 512MB)

2019-07-02 Thread Arun Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Singh updated HADOOP-16404:

Attachment: (was: HADOOP-16404.patch)




[jira] [Updated] (HADOOP-16404) ABFS default blocksize change(256MB from 512MB)

2019-07-02 Thread Arun Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Singh updated HADOOP-16404:

Attachment: HADOOP-16404.patch




[jira] [Created] (HADOOP-16404) ABFS default blocksize change(256MB from 512MB)

2019-07-02 Thread Arun Singh (JIRA)
Arun Singh created HADOOP-16404:
---

 Summary: ABFS default blocksize change(256MB from 512MB)
 Key: HADOOP-16404
 URL: https://issues.apache.org/jira/browse/HADOOP-16404
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure
Reporter: Arun Singh
 Fix For: 3.1.2


We intend to change the default blocksize of the ABFS driver from 512 MB to 
256 MB.

After changing the blocksize we ran a series of tests (Spark Tera, Spark 
DFSIO, TPC-DS on Hive) and saw consistent improvements on the order of 4-5%.
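For anyone who wants to try the new value before the default changes, the block size the driver reports is configurable in core-site.xml. The property name below, fs.azure.block.size, is taken from the hadoop-azure configuration keys and should be verified against the deployed release; treat this fragment as a sketch:

```xml
<!-- Sketch only: overriding the ABFS block size in core-site.xml.
     The key name fs.azure.block.size is assumed from hadoop-azure's
     ConfigurationKeys; verify it against the deployed Hadoop version. -->
<property>
  <name>fs.azure.block.size</name>
  <value>268435456</value> <!-- 256 MB = 256 * 1024 * 1024 bytes -->
</property>
```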






[jira] [Commented] (HADOOP-12987) HortonWorks Zeppelin issue in HDP 2.4

2016-04-04 Thread Arun Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15225575#comment-15225575
 ] 

Arun Singh commented on HADOOP-12987:
-

Hi Colin,

Understood. Please accept my apology for filing a Hortonworks-related issue in 
the wrong place.

Thanks,
Arun

> HortonWorks Zeppelin issue in HDP 2.4
> -
>
> Key: HADOOP-12987
> URL: https://issues.apache.org/jira/browse/HADOOP-12987
> Project: Hadoop Common
>  Issue Type: Bug
> Environment: SLES11 SP4
>Reporter: Arun Singh
>Assignee: Ali Bajwa
>
> Issue 2: Zeppelin. Zeppelin is a new component in Tech Preview in the latest 
> HDP stack (2.4). I've been following this guide: 
> http://hortonworks.com/hadoop-tutorial/apache-zeppelin-hdp-2-4/
>When installing Zeppelin through the Ambari interface, it errors out with 
> a message saying it can't install the package gcc-gfortran
>  
>If you open the file: 
> /var/lib/ambari-server/resources/stacks/HDP/2.4/services/ZEPPELIN/metainfo.xml
> at line 72, the OS-family entry covers redhat7, redhat6, redhat5 and suse11, 
> and its package list names: gcc-gfortran, blas-devel, lapack-devel, 
> python-devel, python-pip, zeppelin.
> This lists the packages to install on SUSE 11, but you won't find these 
> packages on SUSE 11, as they have different names than the RHEL ones. 
> E.g.: 
> RHEL: gcc-gfortran / SUSE: gcc-fortran 
> RHEL: blas-devel / SUSE: libblas3 (?) 
> RHEL: lapack-devel / SUSE: liblapack3 (?) 
> RHEL: python-dev / SUSE: python-devel 
> RHEL: python-pip / SUSE: doesn't seem to be part of the standard repo 
> Solution: make a custom OS-specific package list for SUSE 11, with the 
> packages named as they are on SUSE 11.
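The fix the reporter proposes can be sketched as an extra OS-specific entry in metainfo.xml. The element names below assume the usual Ambari service-metainfo layout (osSpecific / osFamily / packages / package), and the SUSE package names are the reporter's own guesses, so treat this as illustrative only:

```xml
<!-- Sketch: a SUSE-specific package list for the Zeppelin service.
     Element names assume the standard Ambari metainfo.xml schema;
     the SUSE package names come from the report above and are unverified. -->
<osSpecific>
  <osFamily>suse11</osFamily>
  <packages>
    <package><name>gcc-fortran</name></package>
    <package><name>libblas3</name></package>
    <package><name>liblapack3</name></package>
    <package><name>python-devel</name></package>
    <package><name>zeppelin</name></package>
  </packages>
</osSpecific>
```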





[jira] [Resolved] (HADOOP-12987) HortonWorks Zeppelin issue in HDP 2.4

2016-03-31 Thread Arun Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Singh resolved HADOOP-12987.
-
  Resolution: Information Provided
Release Note: Moved this issue to AMBARI-15659



[jira] [Updated] (HADOOP-12987) HortonWorks Zeppelin issue in HDP 2.4

2016-03-31 Thread Arun Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Singh updated HADOOP-12987:


As suggested, moved this to AMBARI-15659.



[jira] [Resolved] (HADOOP-12986) Hortonworks Data Flow (aka, NiFi)

2016-03-31 Thread Arun Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Singh resolved HADOOP-12986.
-
  Resolution: Invalid
Release Note: As suggested, moved this to NIFI-1715

> Hortonworks Data Flow (aka, NiFi)
> -
>
> Key: HADOOP-12986
> URL: https://issues.apache.org/jira/browse/HADOOP-12986
> Project: Hadoop Common
>  Issue Type: Bug
> Environment: SLES11 SP4
>Reporter: Arun Singh
>Assignee: Ali Bajwa
>
> Issue 1: Hortonworks Data Flow (aka, NiFi). When you run the command 
> "bin/nifi.sh install", it sets up a service file for you so that NiFi will 
> start on boot. Look at the file, especially the "install" section: 
>  
> install() {
> SVC_NAME=nifi
> if [ "x$2" != "x" ] ; then
> SVC_NAME=$2
> fi
> SVC_FILE="/etc/init.d/${SVC_NAME}"
> cp "$0" "${SVC_FILE}"
> sed -i s:NIFI_HOME=.*:NIFI_HOME="${NIFI_HOME}": "${SVC_FILE}"
> sed -i s:PROGNAME=.*:PROGNAME="${SCRIPT_NAME}": "${SVC_FILE}"
> rm -f "/etc/rc2.d/S65${SVC_NAME}"
> ln -s "/etc/init.d/${SVC_NAME}" "/etc/rc2.d/S65${SVC_NAME}"
> rm -f "/etc/rc2.d/K65${SVC_NAME}"
> ln -s "/etc/init.d/${SVC_NAME}" "/etc/rc2.d/K65${SVC_NAME}"
> echo "Service ${SVC_NAME} installed"
> }
>  
> The problem above is that the startup and shutdown files (the "S" and "K" 
> files) are created in a directory "/etc/rc2.d", however this directory exists 
> only on RHEL. On SUSE this directory is slightly different, /etc/init.d/rc2.d
>  
> So when attempting to setup the services file (for bootup purposes), the 
> above command fails on SUSE. Worse, no error checking is performed and it 
> will actually print a successful message! 





[jira] [Commented] (HADOOP-12987) HortonWorks Zeppelin issue in HDP 2.4

2016-03-31 Thread Arun Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15220873#comment-15220873
 ] 

Arun Singh commented on HADOOP-12987:
-

Hi Wei-Ching,

Please excuse my ignorance here, and please guide me:

Is there a way to move this issue to the AMBARI project as suggested, or do I 
have to create a new one?

Thanks,
Arun





[jira] [Assigned] (HADOOP-12988) upgrading the HDP stack through Ambari (from 2.3 to 2.4)

2016-03-31 Thread Arun Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Singh reassigned HADOOP-12988:
---

Assignee: Ali Bajwa

Please let me know if any additional info is needed; we will get it from the 
customer/engineering team. Thanks.

> upgrading the HDP stack through Ambari (from 2.3 to 2.4)
> 
>
> Key: HADOOP-12988
> URL: https://issues.apache.org/jira/browse/HADOOP-12988
> Project: Hadoop Common
>  Issue Type: Bug
> Environment: SLES11 SP4
>Reporter: Arun Singh
>Assignee: Ali Bajwa
>
>  - Issue 3: When upgrading the HDP stack through Ambari (from 2.3 to 2.4), at 
> some point a YARN smoke test is performed. This smoke test fails because it 
> tries to call an API using curl with the --negotiate option. The problem is 
> that on SUSE 11 the shipped version of curl does not understand 
> "--negotiate", grinding the whole upgrade process to a halt. 
>  
> There are quite a few files in Ambari where this seems to be the case, 
> although I personally only encountered it during the YARN component: 
> /var/lib/ambari-server/resources/common-services/RANGER/0.4.0/package/scripts/service_check.py:
>   
> Execute(format("curl -s -o /dev/null -w'%{{http_code}}' --negotiate -u: 
> -k {ranger_external_url}/login.jsp | grep 200"),
> /var/lib/ambari-server/resources/common-services/SPARK/1.2.0.2.2/package/scripts/service_check.py:
> 
> Execute(format("curl -s -o /dev/null -w'%{{http_code}}' --negotiate -u: 
> -khttp://{spark_history_server_host}:{spark_history_ui_port} | grep 200"),
> /var/lib/ambari-server/resources/common-services/YARN/2.1.0.2.0/package/scripts/service_check.py:
>   
> get_app_info_cmd = "curl --negotiate -u : -ksL --connect-timeout " + 
> CURL_CONNECTION_TIMEOUT + " " + info_app_url
> /var/lib/ambari-server/resources/common-services/ATLAS/0.1.0.2.3/package/scripts/params.py:
> 
> smoke_cmd = format('curl --negotiate -u : -b ~/cookiejar.txt -c 
> ~/cookiejar.txt -s -o /dev/null -w 
> "%{{http_code}}"http://{metadata_host}:{metadata_port}/')
> /var/lib/ambari-server/resources/common-services/HIVE/0.12.0.2.0/package/files/templetonSmoke.sh:
> cmd="${kinitcmd}curl --negotiate -u : -s -w 'http_code <%{http_code}>'  
> $ttonurl/status 2>&1"
> /var/lib/ambari-server/resources/common-services/HIVE/0.12.0.2.0/package/files/templetonSmoke.sh:
>   
> cmd="${kinitcmd}curl --negotiate -u : -s -w 'http_code <%{http_code}>'  
> $ttonurl/status?user.name=$smoke_test_user 2>&1"
> /var/lib/ambari-server/resources/common-services/HIVE/0.12.0.2.0/package/files/templetonSmoke.sh:
> cmd="${kinitcmd}curl --negotiate -u : -s -w 'http_code <%{http_code}>' -d 
>  \@${destdir}/show_db.post.txt  $ttonurl/ddl 2>&1"
> For example, in this file: 
> /var/lib/ambari-server/resources/common-services/YARN/2.1.0.2.0/package/scripts/service_check.py
> You will see the code as:
> for rm_webapp_address in params.rm_webapp_addresses_list:
>   info_app_url = params.scheme + "://" + rm_webapp_address + 
> "/ws/v1/cluster/apps/" + application_name
>   get_app_info_cmd = "curl --negotiate -u : -ksL --connect-timeout " + 
> CURL_CONNECTION_TIMEOUT + " " + info_app_url
>   return_code, stdout, _ = get_user_call_output(get_app_info_cmd,
> user=params.smokeuser,
> 
> path='/usr/sbin:/sbin:/usr/local/bin:/bin:/usr/bin',
> )
>  
>  
> There is no code that checks for RHEL vs. SUSE and runs the correct curl 
> invocation. Nor is there code to check the curl version and fall back to an 
> older form of the command when the installed curl does not support 
> --negotiate. The command is simply assumed to work on SUSE 11 (and with any 
> version of curl). 
>  
> Information about the curl installed on the system: 
> hdplab02:~ # curl -V 
> curl 7.45.0 (x86_64-pc-linux-gnu) libcurl/7.45.0 OpenSSL/1.0.2d zlib/1.2.8 
> Protocols: dict file ftp ftps gopher http https imap imaps pop3 pop3s rtsp 
> smb smbs smtp smtps telnet tftp 
> Features: IPv6 Largefile NTLM NTLM_WB SSL libz TLS-SRP UnixSockets
>  
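One way to avoid the hard failure described above would be to probe curl for SPNEGO support before adding --negotiate. Feature names in `curl -V` output vary by build ("SPNEGO" on recent builds, "GSS-Negotiate" on older ones), so this is a heuristic sketch, not Ambari's actual code; the function takes the `curl -V` text as an argument so the check stays testable:

```shell
#!/bin/sh
# Sketch: decide whether the installed curl understands --negotiate by
# scanning its feature list. Pass in "$(curl -V)"; taking the text as an
# argument keeps the check testable without a real curl binary.
has_negotiate() {
  case "$1" in
    *SPNEGO*|*GSS-Negotiate*) return 0 ;;
    *) return 1 ;;
  esac
}
```

A caller could then add --negotiate only when `has_negotiate "$(curl -V)"` succeeds, and otherwise fall back to a plain request; note that the SUSE `curl -V` output quoted above lists no SPNEGO feature, so the fallback branch would be taken there.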





[jira] [Assigned] (HADOOP-12987) HortonWorks Zeppelin issue in HDP 2.4

2016-03-31 Thread Arun Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Singh reassigned HADOOP-12987:
---

Assignee: Ali Bajwa

Let me know if any additional info is needed; I will reach out to the 
customer/engineering team. Thanks.



[jira] [Assigned] (HADOOP-12986) Hortonworks Data Flow (aka, NiFi)

2016-03-31 Thread Arun Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Singh reassigned HADOOP-12986:
---

Assignee: Ali Bajwa

As per your advice. Please bear with me on any missing info, as I am new to 
this. Thanks.



[jira] [Created] (HADOOP-12988) upgrading the HDP stack through Ambari (from 2.3 to 2.4)

2016-03-31 Thread Arun Singh (JIRA)
Arun Singh created HADOOP-12988:
---

 Summary: upgrading the HDP stack through Ambari (from 2.3 to 2.4)
 Key: HADOOP-12988
 URL: https://issues.apache.org/jira/browse/HADOOP-12988
 Project: Hadoop Common
  Issue Type: Bug
 Environment: SLES11 SP4
Reporter: Arun Singh


 - Issue 3: When upgrading the HDP stack through Ambari (from 2.3 to 2.4), at 
some point a YARN smoke test is performed. This smoke test fails because it 
tries to call an API with curl using the --negotiate option. The problem is 
that on SUSE 11 the installed version of curl does not understand 
"--negotiate", grinding the whole upgrade process to a halt. 
 
There are quite a few files in Ambari where this seems to be the case, although 
I personally encountered it only with the YARN component: 
/var/lib/ambari-server/resources/common-services/RANGER/0.4.0/package/scripts/service_check.py:
  Execute(format("curl -s -o /dev/null -w'%{{http_code}}' --negotiate -u: -k {ranger_external_url}/login.jsp | grep 200"),

/var/lib/ambari-server/resources/common-services/SPARK/1.2.0.2.2/package/scripts/service_check.py:
  Execute(format("curl -s -o /dev/null -w'%{{http_code}}' --negotiate -u: -k http://{spark_history_server_host}:{spark_history_ui_port} | grep 200"),

/var/lib/ambari-server/resources/common-services/YARN/2.1.0.2.0/package/scripts/service_check.py:
  get_app_info_cmd = "curl --negotiate -u : -ksL --connect-timeout " + CURL_CONNECTION_TIMEOUT + " " + info_app_url

/var/lib/ambari-server/resources/common-services/ATLAS/0.1.0.2.3/package/scripts/params.py:
  smoke_cmd = format('curl --negotiate -u : -b ~/cookiejar.txt -c ~/cookiejar.txt -s -o /dev/null -w "%{{http_code}}" http://{metadata_host}:{metadata_port}/')

/var/lib/ambari-server/resources/common-services/HIVE/0.12.0.2.0/package/files/templetonSmoke.sh:
  cmd="${kinitcmd}curl --negotiate -u : -s -w 'http_code <%{http_code}>' $ttonurl/status 2>&1"

/var/lib/ambari-server/resources/common-services/HIVE/0.12.0.2.0/package/files/templetonSmoke.sh:
  cmd="${kinitcmd}curl --negotiate -u : -s -w 'http_code <%{http_code}>' $ttonurl/status?user.name=$smoke_test_user 2>&1"

/var/lib/ambari-server/resources/common-services/HIVE/0.12.0.2.0/package/files/templetonSmoke.sh:
  cmd="${kinitcmd}curl --negotiate -u : -s -w 'http_code <%{http_code}>' -d \@${destdir}/show_db.post.txt $ttonurl/ddl 2>&1"


For example, in this file: 
/var/lib/ambari-server/resources/common-services/YARN/2.1.0.2.0/package/scripts/service_check.py
You will see the code as:
for rm_webapp_address in params.rm_webapp_addresses_list:
  info_app_url = params.scheme + "://" + rm_webapp_address + "/ws/v1/cluster/apps/" + application_name

  get_app_info_cmd = "curl --negotiate -u : -ksL --connect-timeout " + CURL_CONNECTION_TIMEOUT + " " + info_app_url

  return_code, stdout, _ = get_user_call_output(get_app_info_cmd,
                                                user=params.smokeuser,
                                                path='/usr/sbin:/sbin:/usr/local/bin:/bin:/usr/bin',
                                                )
 
 
There is no code that checks for RHEL vs. SUSE in order to run the correct 
curl invocation. Nor is there code that checks the installed curl's 
capabilities and falls back to a "deprecated" form of the command when 
--negotiate is not supported. The scripts blindly assume this works on 
SUSE 11 (and with any version of curl). 
 
Information about the curl installed on the system: 
hdplab02:~ # curl -V 
curl 7.45.0 (x86_64-pc-linux-gnu) libcurl/7.45.0 OpenSSL/1.0.2d zlib/1.2.8 
Protocols: dict file ftp ftps gopher http https imap imaps pop3 pop3s rtsp smb 
smbs smtp smtps telnet tftp 
Features: IPv6 Largefile NTLM NTLM_WB SSL libz TLS-SRP UnixSockets
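One possible fallback check, sketched below: builds of curl with Kerberos/SPNEGO support advertise it on the "Features:" line of `curl -V` (as "GSS-Negotiate" in older builds, "SPNEGO"/"GSS-API" in newer ones). `has_negotiate` is a hypothetical helper, not existing Ambari code; it takes the version text as an argument rather than invoking curl itself, so the detection logic can be exercised on its own.

```shell
#!/bin/sh
# Hypothetical helper (not Ambari code): decide whether a given `curl -V`
# output indicates --negotiate support. Older Kerberos-enabled builds list
# "GSS-Negotiate" on the Features line; newer ones list "SPNEGO"/"GSS-API".
has_negotiate() {
    printf '%s\n' "$1" | grep -qiE 'GSS-Negotiate|SPNEGO'
}

# A smoke test could then degrade gracefully instead of failing outright:
#   if has_negotiate "$(curl -V)"; then AUTH="--negotiate -u :"; else AUTH=""; fi
```

Against the Features line from the SUSE 11 system above (no GSS/SPNEGO token), the check correctly reports no --negotiate support.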
 






[jira] [Created] (HADOOP-12987) HortonWorks Zeppelin issue in HDP 2.4

2016-03-31 Thread Arun Singh (JIRA)
Arun Singh created HADOOP-12987:
---

 Summary: HortonWorks Zeppelin issue in HDP 2.4
 Key: HADOOP-12987
 URL: https://issues.apache.org/jira/browse/HADOOP-12987
 Project: Hadoop Common
  Issue Type: Bug
 Environment: SLES11 SP4
Reporter: Arun Singh


Issue 2: Zeppelin. Zeppelin is a new component in Tech Preview in the latest 
HDP stack (2.4). I've been following this guide: 
http://hortonworks.com/hadoop-tutorial/apache-zeppelin-hdp-2-4/
   When installing Zeppelin through the Ambari interface, it errors out with a 
message saying it can't install the package gcc-gfortran
 
   If you open the file: 
/var/lib/ambari-server/resources/stacks/HDP/2.4/services/ZEPPELIN/metainfo.xml 
 Line 72: 
 
<osSpecific>
  <osFamily>redhat7,redhat6,redhat5,suse11</osFamily>
  <packages>
    <package>
      <name>gcc-gfortran</name>
    </package>
    <package>
      <name>blas-devel</name>
    </package>
    <package>
      <name>lapack-devel</name>
    </package>
    <package>
      <name>python-devel</name>
    </package>
    <package>
      <name>python-pip</name>
    </package>
    <package>
      <name>zeppelin</name>
    </package>
  </packages>
</osSpecific>
 

This lists packages to install on SUSE 11, but you won't find these packages 
on SUSE 11, as they have different names than the RHEL ones... 
Eg: 
RHEL: gcc-gfortran 
SUSE: gcc-fortran 

RHEL: blas-devel 
SUSE: libblas3 ? 

RHEL: lapack-devel 
SUSE: liblapack3 ? 

RHEL: python-dev 
SUSE: python-devel 

RHEL: python-pip 
SUSE: doesn't seem to be part of the standard repo 

Solution: Make a custom <osSpecific> entry for SUSE 11, with the correctly 
named packages as they are named on SUSE 11.
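A minimal sketch of such a suse11-specific entry in metainfo.xml is below. Note this is an illustration only: the SUSE package names (gcc-fortran, libblas3, liblapack3) come from the reporter's guesses above and would need verification against the SLES 11 repositories; python-pip is omitted since it does not appear to be in the standard SUSE repo.

```xml
<!-- Hypothetical suse11-only block; RHEL versions would keep their own
     <osSpecific> entry with the original redhat package names. -->
<osSpecific>
  <osFamily>suse11</osFamily>
  <packages>
    <package>
      <name>gcc-fortran</name>
    </package>
    <package>
      <name>libblas3</name>
    </package>
    <package>
      <name>liblapack3</name>
    </package>
    <package>
      <name>python-devel</name>
    </package>
    <package>
      <name>zeppelin</name>
    </package>
  </packages>
</osSpecific>
```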
 






[jira] [Created] (HADOOP-12986) Hortonworks Data Flow (aka, NiFi)

2016-03-31 Thread Arun Singh (JIRA)
Arun Singh created HADOOP-12986:
---

 Summary: Hortonworks Data Flow (aka, NiFi)
 Key: HADOOP-12986
 URL: https://issues.apache.org/jira/browse/HADOOP-12986
 Project: Hadoop Common
  Issue Type: Bug
 Environment: SLES11 SP4
Reporter: Arun Singh


Issue 1: Hortonworks Data Flow (aka, NiFi). When running the command 
"bin/nifi.sh install", it will set up the correct service file for you so that 
NiFi will start on boot. Look at the file, especially the "install" 
section: 
 
install() {
    SVC_NAME=nifi
    if [ "x$2" != "x" ] ; then
        SVC_NAME=$2
    fi

    SVC_FILE="/etc/init.d/${SVC_NAME}"
    cp "$0" "${SVC_FILE}"
    sed -i s:NIFI_HOME=.*:NIFI_HOME="${NIFI_HOME}": "${SVC_FILE}"
    sed -i s:PROGNAME=.*:PROGNAME="${SCRIPT_NAME}": "${SVC_FILE}"
    rm -f "/etc/rc2.d/S65${SVC_NAME}"
    ln -s "/etc/init.d/${SVC_NAME}" "/etc/rc2.d/S65${SVC_NAME}"
    rm -f "/etc/rc2.d/K65${SVC_NAME}"
    ln -s "/etc/init.d/${SVC_NAME}" "/etc/rc2.d/K65${SVC_NAME}"
    echo "Service ${SVC_NAME} installed"
}
 
The problem above is that the startup and shutdown files (the "S" and "K" 
files) are created in the directory "/etc/rc2.d"; however, this directory 
exists only on RHEL. On SUSE the directory is slightly different: 
/etc/init.d/rc2.d
 
So when attempting to set up the service files (for boot purposes), the above 
command fails on SUSE. Worse, no error checking is performed, and it will 
actually print a success message! 
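A distro-aware fix could probe for the runlevel directory instead of hard-coding the RHEL path, and fail loudly when neither candidate exists. The sketch below is hypothetical, not part of nifi.sh: `rc_dir` and its prefix argument are invented names (the prefix exists only so the function can be exercised against a scratch directory tree).

```shell
#!/bin/sh
# Hypothetical helper: resolve the runlevel directory instead of assuming
# /etc/rc2.d. RHEL uses /etc/rc2.d; SUSE 11 uses /etc/init.d/rc2.d.
rc_dir() {
    prefix="$1"    # optional root prefix, used here only for testability
    for d in "$prefix/etc/rc2.d" "$prefix/etc/init.d/rc2.d"; do
        if [ -d "$d" ]; then
            printf '%s\n' "$d"
            return 0
        fi
    done
    echo "Error: no rc2.d directory found; cannot install boot links" >&2
    return 1
}
```

install() would then compute `RC_DIR=$(rc_dir "") || exit 1` and create the S65/K65 links under `${RC_DIR}`, so a missing runlevel directory aborts the install rather than printing a false success message.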



