405 method not allowed

2016-11-06 Thread Vikash Kumar
Hi all,
I am trying to convert the zeppelin-web application from Angular to Aurelia.
When I execute the following code to get the interpreter bindings, I get the
error 405 Method Not Allowed. I get the same error in Postman. I am able to
hit the login API.

let url = BaseUrl.getRestApiBase() + '/notebook/interpreter/bind/' + this.note.id;
this.http.fetch(url, {
  method: 'GET',
  headers: {
    'Content-Type': 'application/x-www-form-urlencoded'
  }
})
  .then(response => response.json())
  .then(data => {
    console.log('Data === ', data);  // Empty
    if ((data || {}).status === 'OK') {
      this.interpreterBindings = data.body;
    } else {
      console.log('Error %o %o', (data || {}).status, (data || {}).message);
    }
  });
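A quick way to narrow down a 405 is to inspect the Allow response header, which servers are supposed to send with a 405 to list the methods the endpoint does accept (the bind endpoint may expect a different verb than GET). Below is a minimal, self-contained sketch of that diagnosis; it stands up a throwaway local server instead of a real Zeppelin instance, and the endpoint path and note id are placeholders:

```python
# Hypothetical reproduction of the 405 diagnosis: read the Allow header on a
# 405 response to learn which methods the endpoint actually accepts.
# A throwaway local server simulates an endpoint that only accepts PUT.
import http.server
import threading
import urllib.request
import urllib.error

class BindHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Simulate an endpoint that rejects GET but advertises PUT.
        self.send_response(405)
        self.send_header('Allow', 'PUT')
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the example quiet

server = http.server.HTTPServer(('127.0.0.1', 0), BindHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = 'http://127.0.0.1:%d/notebook/interpreter/bind/NOTE_ID' % server.server_port

try:
    urllib.request.urlopen(url)  # plain GET
    allowed = None
except urllib.error.HTTPError as err:
    allowed = err.headers.get('Allow')  # which methods the endpoint accepts
server.shutdown()
print('Allowed methods:', allowed)
```

If the printed Allow value names a different method, retrying the request with that method (in the fetch call or in Postman) confirms whether the 405 is a simple verb mismatch.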

Thanks & Regards,
Vikash Kumar





RE: Netty error with spark interpreter

2016-10-19 Thread Vikash Kumar
Hi all,
I solved this problem by excluding the netty-all dependencies from the external jar as
well as from the spark-dependencies project. Spark was also pulling in two different
versions. Adding a single new netty-all-4.0.29.Final dependency to both projects just
worked fine.
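The duplicate-version situation described above can be spotted mechanically by scanning mvn dependency:tree output for artifacts that appear with more than one version. A rough sketch; the sample tree text is illustrative, not the actual output from this build:

```python
# Sketch: find group:artifact coordinates that occur with multiple versions
# in `mvn dependency:tree` output. The sample text below is made up for
# illustration; in practice, read the saved dependencyTree.txt instead.
import re
from collections import defaultdict

sample_tree = """\
[INFO] +- io.netty:netty:jar:3.8.0.Final:compile
[INFO] +- io.netty:netty-all:jar:4.0.29.Final:compile
[INFO] |  \\- io.netty:netty-all:jar:4.0.23.Final:compile
"""

def find_version_conflicts(tree_text):
    versions = defaultdict(set)
    # Matches groupId:artifactId:jar:version coordinates in the tree output.
    for m in re.finditer(r'([\w.\-]+:[\w.\-]+):jar:([\w.\-]+)', tree_text):
        versions[m.group(1)].add(m.group(2))
    # Keep only artifacts that appear with more than one version.
    return {ga: vs for ga, vs in versions.items() if len(vs) > 1}

conflicts = find_version_conflicts(sample_tree)
print(conflicts)
```

Any artifact reported here is a candidate for an exclusion plus a single pinned version, which is what resolved the conflict above.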

Thanks & Regards,
Vikash Kumar
From: Vikash Kumar [mailto:vikash.ku...@resilinc.com]
Sent: Wednesday, October 19, 2016 12:11 PM
To: users@zeppelin.apache.org
Subject: Netty error with spark interpreter

Hi all,
I am trying Zeppelin with Spark, which is throwing the
following error related to netty jar conflicts. I checked my classpath
properly; there are only single versions of the netty-3.8.0 and
netty-all-4.0.29.Final jars.

Other information :
Spark 2.0.0
Scala 2.11
Zeppelin 0.6.2 snapshot
Command to build:
mvn clean install -DskipTests -Drat.ignoreErrors=true 
-Dcheckstyle.skip=true -Denforcer.skip=true -Pspark-2.0 -Dspark.version=2.0.0 
-Pscala-2.11 -Phadoop-2.7 -Pyarn
Queries:
sc.version :- it works fine
sqlContext.sql("show tables").show :- throws error

Running on local mode.
I am attaching my spark log file [zeppelin-interpreter-spark-root.log] and the mvn
dependency:tree result [dependencyTree.txt].


So I am not able to solve this problem.. :(


Thanks & Regards,
Vikash Kumar



Netty error with spark interpreter

2016-10-19 Thread Vikash Kumar
Hi all,
I am trying Zeppelin with Spark, which is throwing the
following error related to netty jar conflicts. I checked my classpath
properly; there are only single versions of the netty-3.8.0 and
netty-all-4.0.29.Final jars.

Other information :
Spark 2.0.0
Scala 2.11
Zeppelin 0.6.2 snapshot
Command to build:
mvn clean install -DskipTests -Drat.ignoreErrors=true 
-Dcheckstyle.skip=true -Denforcer.skip=true -Pspark-2.0 -Dspark.version=2.0.0 
-Pscala-2.11 -Phadoop-2.7 -Pyarn
Queries:
sc.version :- it works fine
sqlContext.sql("show tables").show :- throws error

Running on local mode.
I am attaching my spark log file [zeppelin-interpreter-spark-root.log] and the mvn
dependency:tree result [dependencyTree.txt].


So I am not able to solve this problem.. :(


Thanks & Regards,
Vikash Kumar

[INFO] Scanning for projects...
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for 
org.apache.zeppelin:zeppelin-zengine:jar:0.6.2-SNAPSHOT
[WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
be unique: org.eclipse.jetty.websocket:websocket-client:jar -> duplicate 
declaration of version ${jetty.version} @ 
org.apache.zeppelin:zeppelin-zengine:[unknown-version], 
/usr/hdp/2.4.3.0-227/zeppelin/zeppelin/poc-reportingZeppelin/zeppelin-branch-0.6/zeppelin-zengine/pom.xml,
 line 179, column 15
[WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
be unique: junit:junit:jar -> duplicate declaration of version (?) @ 
org.apache.zeppelin:zeppelin-zengine:[unknown-version], 
/usr/hdp/2.4.3.0-227/zeppelin/zeppelin/poc-reportingZeppelin/zeppelin-branch-0.6/zeppelin-zengine/pom.xml,
 line 259, column 15
[WARNING] 
[WARNING] It is highly recommended to fix these problems because they threaten 
the stability of your build.
[WARNING] 
[WARNING] For this reason, future Maven versions might no longer support 
building such malformed projects.
[WARNING] 
[INFO] 
[INFO] Reactor Build Order:
[INFO] 
[INFO] Zeppelin
[INFO] Zeppelin: Interpreter
[INFO] Zeppelin: Zengine
[INFO] Zeppelin: Display system apis
[INFO] Zeppelin: Spark dependencies
[INFO] Zeppelin: Spark
[INFO] Zeppelin: JDBC interpreter
[INFO] Zeppelin: web Application
[INFO] Zeppelin: Server
[INFO] Zeppelin: Packaging distribution
[INFO] 
[INFO] 
[INFO] Building Zeppelin 0.6.2-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ zeppelin ---
[INFO] org.apache.zeppelin:zeppelin:pom:0.6.2-SNAPSHOT
[INFO] 
[INFO] 
[INFO] Building Zeppelin: Interpreter 0.6.2-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ 
zeppelin-interpreter ---
[INFO] org.apache.zeppelin:zeppelin-interpreter:jar:0.6.2-SNAPSHOT
[INFO] +- org.apache.thrift:libthrift:jar:0.9.2:compile
[INFO] |  +- org.apache.httpcomponents:httpclient:jar:4.3.6:compile
[INFO] |  |  +- commons-logging:commons-logging:jar:1.1.1:compile
[INFO] |  |  \- commons-codec:commons-codec:jar:1.5:compile
[INFO] |  \- org.apache.httpcomponents:httpcore:jar:4.3.3:compile
[INFO] +- com.google.code.gson:gson:jar:2.2:compile
[INFO] +- org.apache.commons:commons-exec:jar:1.3:compile
[INFO] +- org.apache.commons:commons-pool2:jar:2.3:compile
[INFO] +- commons-lang:commons-lang:jar:2.5:compile
[INFO] +- org.slf4j:slf4j-api:jar:1.7.10:compile
[INFO] +- org.slf4j:slf4j-log4j12:jar:1.7.10:compile
[INFO] |  \- log4j:log4j:jar:1.2.17:compile
[INFO] +- junit:junit:jar:4.11:test
[INFO] |  \- org.hamcrest:hamcrest-core:jar:1.3:test
[INFO] +- org.mockito:mockito-all:jar:1.9.0:test
[INFO] +- org.apache.commons:commons-lang3:jar:3.4:compile
[INFO] +- org.apache.maven:maven-plugin-api:jar:3.0:compile
[INFO] |  \- org.apache.maven:maven-artifact:jar:3.0:compile
[INFO] +- org.sonatype.aether:aether-api:jar:1.12:compile
[INFO] +- org.sonatype.aether:aether-util:jar:1.12:compile
[INFO] +- org.sonatype.aether:aether-impl:jar:1.12:compile
[INFO] |  \- org.sonatype.aether:aether-spi:jar:1.12:compile
[INFO] +- org.apache.maven:maven-aether-provider:jar:3.0.3:compile
[INFO] |  +- org.apache.maven:maven-model:jar:3.0.3:compile
[INFO] |  +- org.apache.maven:maven-model-builder:jar:3.0.3:compile
[INFO] |  |  \- org.codehaus.plexus:plexus-interpolation:jar:1.14:compile
[INFO] |  +- org.apache.maven:maven-repository-metadata:jar:3.0.3:compile
[INFO] |  \- org.codehaus.plexus:plexus-component-annotations:jar

zeppelin + helium activation

2016-10-18 Thread Vikash Kumar
Hi all,
Congratulations on the latest release.

I went through the Zeppelin Helium proposal, which looks great and quite advanced;
it will put Zeppelin far ahead of other notebooks. While exploring on the
internet I came across a presentation on Helium [1] where slide 24 shows a
UI to plug in components.
So, at any point can I see (and activate, if it needs some configuration) the Helium
UI in Zeppelin, so that I can design a demo with Zeppelin + Helium?

[1] http://www.slideshare.net/HadoopSummit/apache-zeppelin-helium-and-beyond


Thanks & Regards,
Vikash Kumar



RE: User specific interpreter

2016-10-05 Thread Vikash Kumar
Thanks moon,
   Yes, this task solves my problem, but we have to wait for the 0.7
release. So is there any near-term plan to release the 0.7 version?

Thanks & Regards,
Vikash Kumar
From: moon soo Lee [mailto:m...@apache.org]
Sent: Wednesday, October 5, 2016 6:53 PM
To: users@zeppelin.apache.org
Subject: Re: User specific interpreter

Regarding two interpreter settings,

1.   Phoenix (Accessible only to admin)

2.   Phoenix-custom (Accessible to other user)

I think interpreter authorization [1] can help; it is available on the master
branch (0.7.0-SNAPSHOT).

Thanks,
moon

[1] https://issues.apache.org/jira/browse/ZEPPELIN-945

On Wed, Oct 5, 2016 at 4:26 PM Jongyoul Lee 
<jongy...@gmail.com<mailto:jongy...@gmail.com>> wrote:
Thanks,

I'll think of it more, too. :-) Please keep the status at JIRA.

Regards,
Jongyoul

On Wed, Oct 5, 2016 at 4:04 PM, Vikash Kumar 
<vikash.ku...@resilinc.com<mailto:vikash.ku...@resilinc.com>> wrote:
Hi,
Here you can use two ways:

1.   Add another tenant field in the authentication object and set the value
when you are authenticating the user (along with the principal, ticket and
role). That's the right way.

2.   Use the ticket as the tenant id. Then you need to change the way the
ticket is created.

We are using the second way, as we are not using Shiro for authentication
and it's simple as well.
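A toy sketch of the second option above, encoding the tenant id into the ticket itself so that everything that already passes the ticket around also carries the tenant. The ticket format and helper names are illustrative assumptions, not Zeppelin's actual implementation:

```python
# Illustrative only: a ticket that embeds the tenant id, so the tenant can be
# recovered anywhere the ticket is available. Format and names are assumptions.
import uuid

def create_ticket(tenant_id):
    """Build a ticket string that embeds the tenant id."""
    return '%s:%s' % (tenant_id, uuid.uuid4().hex)

def tenant_from_ticket(ticket):
    """Recover the tenant id from a ticket built by create_ticket."""
    return ticket.split(':', 1)[0]

ticket = create_ticket('acme')
print(tenant_from_ticket(ticket))  # recovers 'acme'
```

The first option (a dedicated tenant field on the authentication object) avoids overloading the ticket's meaning, which is why the poster calls it the right way.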

Thanks & Regards,
Vikash Kumar

From: Jongyoul Lee [mailto:jongy...@gmail.com<mailto:jongy...@gmail.com>]
Sent: Wednesday, October 5, 2016 12:24 PM

To: users@zeppelin.apache.org<mailto:users@zeppelin.apache.org>
Subject: Re: User specific interpreter

Hi Vikash,

I'm also considering passing some tenancy information into the interpreter. It would be
helpful, but we should think about the mix of those two cases, as you do. Do you have
any idea how to handle them nicely?

On Wed, Oct 5, 2016 at 3:16 PM, Vikash Kumar 
<vikash.ku...@resilinc.com<mailto:vikash.ku...@resilinc.com>> wrote:
Hi Jongyoul,
Thanks for your quick response.

I created an interpreter, same as phoenix, which works with the
multi-tenant concept, so only users with their specific tenant_id can access
their data. But the admin user needs another phoenix interpreter that can access
the data for any tenant.

Users without admin permission should not be able to use the phoenix
interpreter.

Just assume there are two interpreters:

1.   Phoenix (Accessible only to admin)

2.   Phoenix-custom (Accessible to other users)


And what about the release date for the 0.7 version?
Thanks & Regards,
Vikash Kumar

From: Jongyoul Lee [mailto:jongy...@gmail.com<mailto:jongy...@gmail.com>]
Sent: Wednesday, October 5, 2016 11:30 AM
To: users@zeppelin.apache.org<mailto:users@zeppelin.apache.org>
Subject: Re: User specific interpreter

Hi,

Can you share your idea in more detail? If you want a new interpreter setting
with an existing interpreter, it's very simple: you can go to the interpreter
tab and create a new one with a different name. Unfortunately, others can see that
new setting and use it. About the multi-user implementation, there are a lot of
requests and we are tracking them with
https://issues.apache.org/jira/browse/ZEPPELIN-1337

Hope this helps,
Jongyoul

On Wed, Oct 5, 2016 at 2:20 PM, Vikash Kumar 
<vikash.ku...@resilinc.com<mailto:vikash.ku...@resilinc.com>> wrote:
Hi all,
Can we create user-specific interpreters? Like, I want to create a
phoenix jdbc interpreter only for the admin user. I am using branch 0.6.2.
And questions regarding:

1.   release date for branch 0.7, so that we can demo Helium

2.   multi-user implementation roadmap?



Thanks & Regards,
Vikash Kumar



--
이종열, Jongyoul Lee, 李宗烈
http://madeng.net



--
이종열, Jongyoul Lee, 李宗烈
http://madeng.net



--
이종열, Jongyoul Lee, 李宗烈
http://madeng.net




User specific interpreter

2016-10-04 Thread Vikash Kumar
Hi all,
Can we create user specific interpreters? Like I want to create 
phoenix jdbc interpreter only for admin user. I am using branch 0.6.2.
And question regarding

1.   release date for branch 7 so that we can demo for Helium

2.Multiuser implementation roadmap?



Thanks & Regards,
Vikash Kumar


RE: Z-Manager Zeppelin installation

2016-09-26 Thread Vikash Kumar
Hi Jesang,
The second option worked for me. I used this way because I
needed to customize the installation with the spark version etc. So now I can
create a script and share it with others, so they can install just by running the
script.
Thanks a lot :)
Thanks & Regards,
Vikash Kumar
From: Jesang Yoon [mailto:yoon...@gmail.com]
Sent: Monday, September 26, 2016 4:56 PM
To: users@zeppelin.apache.org
Subject: Re: Z-Manager Zeppelin installation

Hi Vikash,

From my experience, there is no pre-installation required to run Z-Manager.
What about running your modified script in the Vagrant environment?

https://zeppelin.apache.org/docs/0.7.0-SNAPSHOT/install/virtual_machine.html#create-a-zeppelin-ready-vm

According to the documentation you can execute the script like this:


curl -fsSL 
https://raw.githubusercontent.com/NFLabs/z-manager/master/zeppelin-installer.sh 
| bash

or

cat zeppelin-installer.sh | bash

2016-09-26 19:07 GMT+09:00 Vikash Kumar 
<vikash.ku...@resilinc.com<mailto:vikash.ku...@resilinc.com>>:
Hi all,
I am trying to install Zeppelin with Z-Manager using the
zeppelin-installer.sh script. I downloaded that file and made the required changes.
But when I run it from my machine, it treats each word as a command and
gives errors. Is there any other installation required to run this script, or
how should I run it?


Thanks & Regards,
Vikash Kumar



Z-Manager Zeppelin installation

2016-09-26 Thread Vikash Kumar
Hi all,
I am trying to install Zeppelin with Z-Manager using the
zeppelin-installer.sh script. I downloaded that file and made the required changes.
But when I run it from my machine, it treats each word as a command and
gives errors. Is there any other installation required to run this script, or
how should I run it?


Thanks & Regards,
Vikash Kumar


RE: Hbase configuration storage without data

2016-09-13 Thread Vikash Kumar
Hi,
But the approach of storing the data in a separate file will need to maintain the link
between both files. Also, this approach is not preferable when the data is
obtained on an access basis; in my case, the data which comes from hbase through
phoenix is tenant-based, so storing that data in note.json or in a different
file breaks the point of multi-tenancy.
So, as an approach, can we store only the configuration and retrieve the data when we
are loading the note, by running all the paragraphs on the first load?

But at the same time, i think having data in the note.json helps make
import/export simple and make notebook renderable without running it.

So for import/export, is providing the data good? Data is always confidential
and cannot be shared with anyone in the form of json. In this approach anyone
can open note.json and access the data.
Thanks & Regards,
Vikash Kumar
From: Felix Cheung [mailto:felixcheun...@hotmail.com]
Sent: Wednesday, September 14, 2016 6:24 AM
To: users@zeppelin.apache.org; users@zeppelin.apache.org
Subject: Re: Hbase configuration storage without data

I like that approach - though you should be able to clear the result output before
exporting the note, if all you want is the config? That should remove all output
data, keeping it smaller?


_
From: Mohit Jaggi <mohitja...@gmail.com<mailto:mohitja...@gmail.com>>
Sent: Monday, September 12, 2016 10:38 AM
Subject: Re: Hbase configuration storage without data
To: <users@zeppelin.apache.org<mailto:users@zeppelin.apache.org>>



one option is to keep the data in separate files. notes.json can contain the
code, and the data can be a pointer to /path/to/file. import/export can choose
to include or exclude the data. when it is included, the data files are added to
a tgz file containing notes.json; otherwise you just export notes.json



On Mon, Sep 12, 2016 at 10:33 AM, moon soo Lee 
<m...@apache.org<mailto:m...@apache.org>> wrote:
Right, a big note.json file is a problem.
But at the same time, i think having data in the note.json helps make
import/export simple and makes the notebook renderable without running it.

So far, i didn't see much discussion about this subject on the mailing list or on
the issue tracker.

If there's a good idea that can handle large data while keeping import/export
simple and the ability to render without running, that would be a great starting
point for the discussion.

Thanks,
moon

On Wed, Sep 7, 2016 at 9:40 PM Vikash Kumar 
<vikash.ku...@resilinc.com<mailto:vikash.ku...@resilinc.com>> wrote:
Hi moon,
Yes, that was the way I was using. But is there any plan in future
releases to remove the data from the note and store only the configuration?
Because storing the configuration with data, when there is no max result limit,
will create a big note.json file.

Thanks & Regards,
Vikash Kumar
From: moon soo Lee [mailto:m...@apache.org<mailto:m...@apache.org>]
Sent: Wednesday, September 7, 2016 8:39 PM
To: users@zeppelin.apache.org<mailto:users@zeppelin.apache.org>
Subject: Re: Hbase configuration storage without data

Hi,

For now, code and result data are mixed in note.json, which is represented by
'class Note' [1]. And every notebook storage layer needs to implement
'NotebookRepo.get()' [2] to read note.json from the underlying storage and convert
it into 'class Note'.

As you see from the related API and class definition, NotebookRepo actually doesn't
have any restriction on how 'class Note' is serialized and saved in the storage.

So you can invent a new format, you can exclude result data from saving, and so on.

Hope this helps.

Thanks,
moon

[1] 
https://github.com/apache/zeppelin/blob/master/zeppelin-zengine/src/main/java/org/apache/zeppelin/notebook/Note.java
[2] 
https://github.com/apache/zeppelin/blob/master/zeppelin-zengine/src/main/java/org/apache/zeppelin/notebook/repo/NotebookRepo.java#L47

On Wed, Sep 7, 2016 at 3:47 AM Vikash Kumar 
<vikash.ku...@resilinc.com<mailto:vikash.ku...@resilinc.com>> wrote:
Hi all,
We are storing the note.json configuration into hbase, as it is
stored in the file system. By default, the query data is stored in note.json
along with the configuration. But we want to store the configuration only;
when a user loads the note, the queries should be executed and the data
generated. We are using this feature for the phoenix interpreter. So how can we
remove the data from note.json? Is there any plan for that?


Thanks & Regards,
Vikash Kumar




Hbase configuration storage without data

2016-09-07 Thread Vikash Kumar
Hi all,
We are storing the note.json configuration into hbase, as it is
stored in the file system. By default, the query data is stored in note.json
along with the configuration. But we want to store the configuration only;
when a user loads the note, the queries should be executed and the data
generated. We are using this feature for the phoenix interpreter. So how can we
remove the data from note.json? Is there any plan for that?


Thanks & Regards,
Vikash Kumar
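A minimal sketch of the idea discussed in this thread: strip paragraph result payloads from a note before persisting it, so only the configuration is stored. The field names ('paragraphs', 'results', 'result') are assumptions here and vary across Zeppelin versions:

```python
# Sketch, not Zeppelin's actual storage code: drop paragraph outputs from a
# note dict before saving, keeping only configuration and code.
import copy
import json

def strip_results(note):
    """Return a copy of the note dict with paragraph outputs removed."""
    stripped = copy.deepcopy(note)
    for paragraph in stripped.get('paragraphs', []):
        paragraph.pop('results', None)
        paragraph.pop('result', None)  # assumed older field name
    return stripped

# Illustrative note structure with a large result payload.
note = {
    'id': '2A94M5J1Z',
    'paragraphs': [
        {'text': '%sql show tables',
         'results': {'msg': [{'data': 'huge table dump'}]}},
    ],
}
slim = strip_results(note)
print(json.dumps(slim))
```

A custom NotebookRepo implementation could apply something like this in its save path, and re-run the paragraphs on first load to regenerate the data, which is the behavior requested above.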


RE: Spark error when loading phoenix-spark dependency

2016-09-05 Thread Vikash Kumar
Hi,
I am loading the library through the UI in the spark interpreter as:


1.   org.apache.phoenix:phoenix-spark:4.4.0-HBase-1.1

Excluded: org.scala-lang:scala-library, org.scala-lang:scala-compiler,
org.scala-lang:scala-reflect, org.apache.phoenix:phoenix-core


2.   org.apache.phoenix:phoenix-core:4.4.0-HBase-1.1

Excluded: com.sun.jersey:jersey-core, com.sun.jersey:jersey-server,
com.sun.jersey:jersey-client, org.ow2.asm:asm, io.netty:netty

Thanks and Regards,
Vikash Kumar

From: astros...@gmail.com [mailto:astros...@gmail.com] On Behalf Of Hyung Sung 
Shim
Sent: Tuesday, September 6, 2016 10:47 AM
To: users <users@zeppelin.apache.org>
Subject: Re: Spark error when loading phoenix-spark dependency

Hello.
How did you load the library?


2016-09-06 13:49 GMT+09:00 Vikash Kumar 
<vikash.ku...@resilinc.com<mailto:vikash.ku...@resilinc.com>>:
Hi,
Is there anyone else who is getting the same errors?

Thanks and Regards,
Vikash Kumar

From: Vikash Kumar 
[mailto:vikash.ku...@resilinc.com<mailto:vikash.ku...@resilinc.com>]
Sent: Thursday, September 1, 2016 11:08 AM
To: users@zeppelin.apache.org<mailto:users@zeppelin.apache.org>
Subject: Spark error when loading phoenix-spark dependency

Hi all,
I am getting the following error when loading the
org.apache.phoenix:phoenix-spark:4.4.0-HBase-1.1 dependency from the spark
interpreter. I am using Zeppelin version 0.6.2-SNAPSHOT with Spark 1.6.1 and
hdp 2.7.1.

The packages that I am importing are:
import org.apache.phoenix.spark._
import org.apache.phoenix.spark.PhoenixRDD._
import java.sql.{ Date, Timestamp}
My build command is:
mvn clean package -DskipTests -Drat.ignoreErrors=true
-Dcheckstyle.skip=true -Pspark-1.6 -Dspark.version=1.6.1 -Phadoop-2.6 -Pyarn


java.lang.NoSuchMethodError: org.apache.spark.util.Utils$.resolveURIs(Ljava/lang/String;)Ljava/lang/String;
    at org.apache.spark.repl.SparkILoop$.getAddedJars(SparkILoop.scala:1079)
    at org.apache.spark.repl.SparkILoop.createInterpreter(SparkILoop.scala:210)
    at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:698)
    at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:69)
    at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:93)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:341)
    at org.apache.zeppelin.scheduler.Job.run(Job.java:176)
    at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)





Thanks and Regards,
Vikash Kumar





Spark error when loading phoenix-spark dependency

2016-08-31 Thread Vikash Kumar
Hi all,
I am getting the following error when loading the
org.apache.phoenix:phoenix-spark:4.4.0-HBase-1.1 dependency from the spark
interpreter. I am using Zeppelin version 0.6.2-SNAPSHOT with Spark 1.6.1 and
hdp 2.7.1.

The packages that I am importing are:
import org.apache.phoenix.spark._
import org.apache.phoenix.spark.PhoenixRDD._
import java.sql.{ Date, Timestamp}
My build command is:
mvn clean package -DskipTests -Drat.ignoreErrors=true
-Dcheckstyle.skip=true -Pspark-1.6 -Dspark.version=1.6.1 -Phadoop-2.6 -Pyarn


java.lang.NoSuchMethodError: org.apache.spark.util.Utils$.resolveURIs(Ljava/lang/String;)Ljava/lang/String;
    at org.apache.spark.repl.SparkILoop$.getAddedJars(SparkILoop.scala:1079)
    at org.apache.spark.repl.SparkILoop.createInterpreter(SparkILoop.scala:210)
    at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:698)
    at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:69)
    at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:93)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:341)
    at org.apache.zeppelin.scheduler.Job.run(Job.java:176)
    at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)





Thanks and Regards,
Vikash Kumar


RE: Error when starting zeppelin server

2016-08-02 Thread Vikash Kumar
Hi Jeff,
It is from before 0.6.0; I took this code on a random day 4 months
back.

From: Jeff Zhang [mailto:zjf...@gmail.com]
Sent: Tuesday, August 2, 2016 7:02 PM
To: users@zeppelin.apache.org
Subject: Re: Error when starting zeppelin server

What version of zeppelin do you use ?

On Tue, Aug 2, 2016 at 8:42 PM, Vikash Kumar 
<vikash.ku...@resilinc.com<mailto:vikash.ku...@resilinc.com>> wrote:
Hi All,
We are getting an error when starting the zeppelin daemon. We are
running 4-month-old code into which we made some changes. The same code
runs on HDP 2.3 on a VM, but when running on HDP 2.4 on another system we
are getting this error:

Exception in thread "main" java.lang.SecurityException: class "javax.servlet.ServletRegistration$Dynamic"'s signer information does not match signer information of other classes in the same package
    at java.lang.ClassLoader.checkCerts(ClassLoader.java:898)
    at java.lang.ClassLoader.preDefineClass(ClassLoader.java:668)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:761)
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
    at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at org.apache.zeppelin.server.ZeppelinServer.setupRestApiContextHandler(ZeppelinServer.java:228)
    at org.apache.zeppelin.server.ZeppelinServer.main(ZeppelinServer.java:105)

System Configs :
CentOS 7.2
HDP 2.4
JAVA 1.8
Spark 1.6.1



Thanks and regards,
Vikash Kumar



--
Best Regards

Jeff Zhang


RE: Spark-sql showing no table

2016-07-24 Thread Vikash Kumar
Hi,
I was using the show function. Now I am able to read tables through SQL. I 
solved this issue with three steps:

1.   Copied core-site.xml, hdfs-site.xml and hive-site.xml into the 
ZEPPELIN_HOME/conf folder.

2.   Used the same sqlContext object for each function.

3.   Added import sqlContext.implicits._
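
The three steps above can be sketched in notebook form roughly as follows. This is illustrative only: `sc` is the SparkContext that Zeppelin provides, and the Hive-backed context assumes the copied XML files are on the classpath.

```scala
// Sketch of the fix described above (Spark 1.x-style API).
// Assumes core-site.xml, hdfs-site.xml and hive-site.xml are in
// ZEPPELIN_HOME/conf so the context can reach the Hive metastore.
import org.apache.spark.sql.hive.HiveContext

val sqlContext = new HiveContext(sc)  // step 2: create once, reuse everywhere
import sqlContext.implicits._         // step 3: needed for implicit conversions

val tables = sqlContext.sql("show tables")
z.show(tables)                        // render the result as a table in Zeppelin
```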

From: mina lee [mailto:mina...@apache.org]
Sent: Friday, July 22, 2016 3:06 PM
To: users@zeppelin.apache.org
Subject: Re: Spark-sql showing no table

Hi Vikash,

If you want to render a dataframe as a table with sqlContext, you will need to run
z.show(tables)

On Thu, Jul 14, 2016 at 1:22 PM Vikash Kumar 
<vikash.ku...@resilinc.com<mailto:vikash.ku...@resilinc.com>> wrote:
I am creating a sqlContext from the existing sc.
var tables = sqlContext.sql("show tables")


Thanks and regards,
Vikash Kumar

From: Mohit Jaggi [mailto:mohitja...@gmail.com<mailto:mohitja...@gmail.com>]
Sent: Wednesday, July 13, 2016 10:24 PM
To: users@zeppelin.apache.org<mailto:users@zeppelin.apache.org>
Subject: Re: Spark-sql showing no table

Make sure you use a HiveContext.

On Jul 13, 2016, at 12:42 AM, Vikash Kumar 
<vikash.ku...@resilinc.com<mailto:vikash.ku...@resilinc.com>> wrote:

Hi all,
I am using Spark with Scala to read Phoenix tables and register 
them as temporary tables, which I am able to do.
After that, when I run the query:
%sql show tables
it gives the expected output, but when I run the same 
query with the Scala sqlContext, it neither shows any tables nor gives 
any error.
What should I do now? I have also copied 
core-site.xml, hdfs-site.xml, hbase-site.xml and hive-site.xml into the zeppelin conf 
folder.



RE: Spark-sql showing no table

2016-07-13 Thread Vikash Kumar
I am creating a sqlContext from the existing sc.
var tables = sqlContext.sql("show tables")


Thanks and regards,
Vikash Kumar

From: Mohit Jaggi [mailto:mohitja...@gmail.com]
Sent: Wednesday, July 13, 2016 10:24 PM
To: users@zeppelin.apache.org
Subject: Re: Spark-sql showing no table

Make sure you use a HiveContext.

On Jul 13, 2016, at 12:42 AM, Vikash Kumar 
<vikash.ku...@resilinc.com<mailto:vikash.ku...@resilinc.com>> wrote:

Hi all,
I am using Spark with Scala to read Phoenix tables and register 
them as temporary tables, which I am able to do.
After that, when I run the query:
%sql show tables
it gives the expected output, but when I run the same 
query with the Scala sqlContext, it neither shows any tables nor gives 
any error.
What should I do now? I have also copied 
core-site.xml, hdfs-site.xml, hbase-site.xml and hive-site.xml into the zeppelin conf 
folder.



Spark-sql showing no table

2016-07-13 Thread Vikash Kumar
Hi all,
I am using Spark with Scala to read Phoenix tables and register 
them as temporary tables, which I am able to do.
After that, when I run the query:
%sql show tables
it gives the expected output, but when I run the same 
query with the Scala sqlContext, it neither shows any tables nor gives 
any error.
What should I do now? I have also copied 
core-site.xml, hdfs-site.xml, hbase-site.xml and hive-site.xml into the zeppelin conf 
folder.


RE: Phoenix Interpreter in 0.6 release

2016-07-13 Thread Vikash Kumar
Thank you. I will go with that commit.

From: Jongyoul Lee [mailto:jongy...@gmail.com]
Sent: Wednesday, July 13, 2016 11:43 AM
To: users@zeppelin.apache.org
Subject: Re: Phoenix Interpreter in 0.6 release

Okay, I see your situation.

You can use that feature by setting 'phoenix.TenantId' in your interpreter tab. 
Properties prefixed with 'phoenix.' are passed to the connection with the 
'phoenix.' prefix stripped. Try it again and let me know the result.

The last commit before removing PhoenixInterpreter is 
f786d1387a7ccae0387e470abb44912d5f322d6b. You can check it.

Hope this helps,
JL

On Wed, Jul 13, 2016 at 2:54 PM, Vikash Kumar 
<vikash.ku...@resilinc.com<mailto:vikash.ku...@resilinc.com>> wrote:
Hi,
Phoenix supports multiple tenants, configured with their tenant id [1], so that 
we can create a JDBC connection with an extra tenantId argument. In my case, when a 
user makes a connection to Phoenix, it submits a tenant id.
A user without a tenantId is not allowed to connect to Phoenix, which we can only 
achieve through the Phoenix interpreter.
It's not that I want the Phoenix interpreter in the latest release, but can I 
get the latest Phoenix interpreter code somehow?

[1] https://phoenix.apache.org/multi-tenancy.html
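
For illustration, a tenant-specific Phoenix connection in plain JDBC looks roughly like this. The URL, ZooKeeper host and tenant value are placeholders, not anything from the thread; see [1] for the authoritative description of the property.

```scala
// Hypothetical sketch of a multi-tenant Phoenix JDBC connection.
// "TenantId" is the Phoenix connection property described in [1];
// the URL and tenant value below are made up for illustration.
import java.sql.DriverManager
import java.util.Properties

val props = new Properties()
props.setProperty("TenantId", "tenant1")  // restricts the connection to one tenant

val conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181", props)
// Queries on this connection see only rows belonging to "tenant1".
```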

From: Jongyoul Lee [mailto:jongy...@gmail.com<mailto:jongy...@gmail.com>]
Sent: Wednesday, July 13, 2016 10:36 AM

To: users@zeppelin.apache.org<mailto:users@zeppelin.apache.org>
Subject: Re: Phoenix Interpreter in 0.6 release

Hi,

PhoenixInterpreter and JdbcInterpreter were both based on HiveInterpreter at first, 
and JdbcInterpreter supports running queries simultaneously. If what you mean 
is supporting multiple users, no JDBC-like interpreter supports that 
yet. That feature is scheduled for 0.7.0.

It would help if you could tell me your use cases in detail.

Regards,
JL

On Wed, Jul 13, 2016 at 1:13 PM, Vikash Kumar 
<vikash.ku...@resilinc.com<mailto:vikash.ku...@resilinc.com>> wrote:
Hi,
Phoenix supports multi-tenancy, which we cannot achieve with the JDBC interpreter.

Thanks and regards
Vikash Kumar

From: Jongyoul Lee [mailto:jongy...@gmail.com<mailto:jongy...@gmail.com>]
Sent: Tuesday, July 12, 2016 3:21 PM

To: users@zeppelin.apache.org<mailto:users@zeppelin.apache.org>
Subject: Re: Phoenix Interpreter in 0.6 release

Hi,

Could you please describe what you mean by multi-tenancy? I think there's no 
loss of functionality in moving from PhoenixInterpreter to JdbcInterpreter.

Regards,
JL

On Tue, Jul 12, 2016 at 6:38 PM, Vikash Kumar 
<vikash.ku...@resilinc.com<mailto:vikash.ku...@resilinc.com>> wrote:
Hi,
But previously it was available (the code), which I am not able to get now. And 
what about a multi-tenant environment?
Thanks & Regards,
Vikash Kumar

From: Jongyoul Lee [mailto:jongy...@gmail.com<mailto:jongy...@gmail.com>]
Sent: Tuesday, July 12, 2016 2:55 PM
To: users@zeppelin.apache.org<mailto:users@zeppelin.apache.org>
Subject: Re: Phoenix Interpreter in 0.6 release

Hello,

You can use the Phoenix feature via JdbcInterpreter, which already has an example 
setting. You can also see the document here [1].

Hope this helps,
JL

[1]: http://zeppelin.apache.org/docs/0.6.0/interpreter/jdbc.html

On Tue, Jul 12, 2016 at 6:16 PM, Vikash Kumar 
<vikash.ku...@resilinc.com<mailto:vikash.ku...@resilinc.com>> wrote:
Hi all,
The Phoenix interpreter is absent in the latest release, so how can we 
achieve multi-tenancy with the JDBC interpreter? And what does 
multi-tenancy actually mean in Zeppelin? I am not seeing any code with a tenant 
field.
    And why is the HBase interpreter not included in the earlier version?

Thanks & Regards
Vikash Kumar




--
이종열, Jongyoul Lee, 李宗烈
http://madeng.net



--
이종열, Jongyoul Lee, 李宗烈
http://madeng.net



--
이종열, Jongyoul Lee, 李宗烈
http://madeng.net



--
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


RE: Phoenix Interpreter in 0.6 release

2016-07-12 Thread Vikash Kumar
Hi,
Phoenix supports multi-tenancy, which we cannot achieve with the JDBC interpreter.

Thanks and regards
Vikash Kumar

From: Jongyoul Lee [mailto:jongy...@gmail.com]
Sent: Tuesday, July 12, 2016 3:21 PM
To: users@zeppelin.apache.org
Subject: Re: Phoenix Interpreter in 0.6 release

Hi,

Could you please describe what you mean by multi-tenancy? I think there's no 
loss of functionality in moving from PhoenixInterpreter to JdbcInterpreter.

Regards,
JL

On Tue, Jul 12, 2016 at 6:38 PM, Vikash Kumar 
<vikash.ku...@resilinc.com<mailto:vikash.ku...@resilinc.com>> wrote:
Hi,
But previously it was available (the code), which I am not able to get now. And 
what about a multi-tenant environment?
Thanks & Regards,
Vikash Kumar

From: Jongyoul Lee [mailto:jongy...@gmail.com<mailto:jongy...@gmail.com>]
Sent: Tuesday, July 12, 2016 2:55 PM
To: users@zeppelin.apache.org<mailto:users@zeppelin.apache.org>
Subject: Re: Phoenix Interpreter in 0.6 release

Hello,

You can use the Phoenix feature via JdbcInterpreter, which already has an example 
setting. You can also see the document here [1].

Hope this helps,
JL

[1]: http://zeppelin.apache.org/docs/0.6.0/interpreter/jdbc.html

On Tue, Jul 12, 2016 at 6:16 PM, Vikash Kumar 
<vikash.ku...@resilinc.com<mailto:vikash.ku...@resilinc.com>> wrote:
Hi all,
The Phoenix interpreter is absent in the latest release, so how can we 
achieve multi-tenancy with the JDBC interpreter? And what does 
multi-tenancy actually mean in Zeppelin? I am not seeing any code with a tenant 
field.
And why is the HBase interpreter not included in the earlier version?

Thanks & Regards
Vikash Kumar




--
이종열, Jongyoul Lee, 李宗烈
http://madeng.net



--
이종열, Jongyoul Lee, 李宗烈
http://madeng.net


Phoenix Interpreter in 0.6 release

2016-07-12 Thread Vikash Kumar
Hi all,
The Phoenix interpreter is absent in the latest release, so how can we 
achieve multi-tenancy with the JDBC interpreter? And what does 
multi-tenancy actually mean in Zeppelin? I am not seeing any code with a tenant 
field.
And why is the HBase interpreter not included in the earlier version?

Thanks & Regards
Vikash Kumar



Re: How to remove uglify operation from zeppelin-web

2016-06-20 Thread Vikash Kumar
Hi,

Because when we add our own UI libraries, we get some errors which we are 
not able to trace. I set the uglify option to false, but it still 
compresses. We need to remove uglify for the debugging process.


Thanks & Regards
Vikash Kumar


From: Corneau Damien <cornead...@gmail.com>
Sent: 18 June 2016 07:27:46
To: users@zeppelin.apache.org
Subject: Re: How to remove uglify operation from zeppelin-web


This is happening in the grunt.js

But why would you want to remove it? This is a basic production build rule.

On Jun 17, 2016 22:27, "Vikash Kumar" 
<vikash.ku...@resilinc.com<mailto:vikash.ku...@resilinc.com>> wrote:
Hi all,
How can I remove the uglify step from zeppelin-web so that I can disable 
compression?

Thanks & Regards
Vikash Kumar


RE: Ask opinion regarding 0.6.0 release package

2016-06-17 Thread Vikash Kumar
Hi,
Our company is also working with Spark and Phoenix, so it would be good if you 
added the Phoenix interpreter to the min binary release.

Thanks & Regards

Vikash Kumar
Software Engineer
Resilinc – India Center of Excellence | http://www.resilinc.com/
Mobile: +91-7276111812

From: mina lee [mailto:mina...@apache.org]
Sent: Friday, June 17, 2016 1:32 PM
To: users@zeppelin.apache.org
Subject: Ask opinion regarding 0.6.0 release package

Hi all!

Zeppelin has just started the release process. Prior to creating a release 
candidate, I want to ask users' opinion about how you want it to be packaged.

For the last release (0.5.6), we released one binary package which includes 
all interpreters.
The concern with providing one type of binary package is that the package size 
will be quite big (~600MB).
So I am planning to provide two binary packages:
  - zeppelin-0.6.0-bin-all.tgz (includes all interpreters)
  - zeppelin-0.6.0-bin-min.tgz (includes only most used interpreters)

I am thinking about putting spark (pyspark, sparkr, sql), python, jdbc, shell, 
markdown, and angular in the minimized package.
Could you give your opinion on whether this set is enough, or whether some of 
them are OK to exclude?

The community's opinion will be helpful for making this decision not only for 0.6.0 
but also for the 0.7.0 release, since we are planning to provide only the minimized 
package from 0.7.0. From the 0.7.0 version, interpreters that are not included 
in the binary package will be usable via the dynamic interpreter feature [1], which 
is in progress under [2].

Thanks,
Mina

[1] 
http://zeppelin.apache.org/docs/0.6.0-SNAPSHOT/manual/dynamicinterpreterload.html
[2] https://github.com/apache/zeppelin/pull/908


How to remove uglify operation from zeppelin-web

2016-06-17 Thread Vikash Kumar
Hi all,
How can I remove the uglify step from zeppelin-web so that I can disable 
compression?

Thanks & Regards
Vikash Kumar