I would use the "ps" command on each machine while the job is running to
confirm that every process involved is running as root.
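As a complement to ps, here is a minimal plain-Java sketch (assuming Java 9+ for ProcessHandle; the class name is made up) of the same check done from inside the JVM. Logged from within a task, it would show which OS user each executor actually runs as:

```java
public class WhoRunsMe {
    public static void main(String[] args) {
        // ProcessHandle reports the OS-level owner of this JVM. Called from
        // inside a Spark task, the same expression would show the user the
        // executor runs as -- which is what matters for libpcap access.
        String user = ProcessHandle.current().info().user().orElse("unknown");
        System.out.println("process user: " + user);
    }
}
```

If the driver prints root but the executors print something else, that mismatch is the thing to fix.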

Dean Wampler, Ph.D.
Author: Programming Scala, 2nd Edition
<http://shop.oreilly.com/product/0636920033073.do> (O'Reilly)
Typesafe <http://typesafe.com>
@deanwampler <http://twitter.com/deanwampler>
http://polyglotprogramming.com

On Tue, Apr 28, 2015 at 8:58 PM, Lin Hao Xu <xulin...@cn.ibm.com> wrote:

> btw, from the Spark web UI, the ACL is marked as *root*
>
> Best regards,
>
> Lin Hao XU
> IBM Research China
> Email: xulin...@cn.ibm.com
> My Flickr: http://www.flickr.com/photos/xulinhao/sets
>
>
> From: Dean Wampler <deanwamp...@gmail.com>
> To: Lin Hao Xu/China/IBM@IBMCN
> Cc: Hai Shan Wu/China/IBM@IBMCN, user <user@spark.apache.org>
> Date: 2015/04/29 09:40
> Subject: Re: A problem of using spark streaming to capture network packets
> ------------------------------
>
>
>
> Are the tasks on the slaves also running as root? If not, that might
> explain the problem.
>
> dean
>
>
> On Tue, Apr 28, 2015 at 8:30 PM, Lin Hao Xu <xulin...@cn.ibm.com> wrote:
>
>    1. The full command line is written in a shell script:
>
>    LIB=/home/spark/.m2/repository
>
>    /opt/spark/bin/spark-submit \
>    --class spark.pcap.run.TestPcapSpark \
>    --jars $LIB/org/pcap4j/pcap4j-core/1.4.0/pcap4j-core-1.4.0.jar,$LIB/org/pcap4j/pcap4j-packetfactory-static/1.4.0/pcap4j-packetfactory-static-1.4.0.jar,$LIB/org/slf4j/slf4j-api/1.7.6/slf4j-api-1.7.6.jar,$LIB/org/slf4j/slf4j-log4j12/1.7.6/slf4j-log4j12-1.7.6.jar,$LIB/net/java/dev/jna/jna/4.1.0/jna-4.1.0.jar \
>    /home/spark/napa/napa.jar
>
>    2. We run this script with *sudo*; without sudo we cannot access the
>    network interfaces.
>
>    3. We also tested List<PcapNetworkInterface> nifs = Pcaps.findAllDevs()
>    in a standard Java program, and it worked like a champion.
>
>    Best regards,
>
>    Lin Hao XU
>    IBM Research China
>    Email: xulin...@cn.ibm.com
>    My Flickr: http://www.flickr.com/photos/xulinhao/sets
>
>    From: Dean Wampler <deanwamp...@gmail.com>
>    To: Hai Shan Wu/China/IBM@IBMCN
>    Cc: user <user@spark.apache.org>, Lin Hao Xu/China/IBM@IBMCN
>    Date: 2015/04/28 20:07
>    Subject: Re: A problem of using spark streaming to capture network
>    packets
>    ------------------------------
>
>
>
>
>    It's probably not your code.
>
>    What's the full command line you use to submit the job?
>
>    Are you sure the job on the cluster has access to the network
>    interface? Can you test the receiver by itself without Spark? For example,
>    does this line work as expected:
>
>    List<PcapNetworkInterface> nifs = Pcaps.findAllDevs();
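If Pcaps.findAllDevs() comes back empty on the cluster, a useful cross-check (a sketch using only the JDK; no pcap4j or Spark assumed) is to enumerate interfaces with java.net.NetworkInterface, which needs no special privileges, unlike libpcap, which typically needs root (or CAP_NET_RAW on Linux) to see capture devices:

```java
import java.net.NetworkInterface;
import java.net.SocketException;
import java.util.Collections;

public class ListNifs {
    public static void main(String[] args) throws SocketException {
        // java.net.NetworkInterface enumerates interfaces without special
        // privileges. If this prints names on an executor host but
        // Pcaps.findAllDevs() returns nothing there, the difference points
        // at libpcap permissions rather than the network setup.
        for (NetworkInterface nif : Collections.list(NetworkInterface.getNetworkInterfaces())) {
            System.out.println(nif.getName());
        }
    }
}
```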
>
>    dean
>
>
>    On Mon, Apr 27, 2015 at 4:03 AM, Hai Shan Wu <wuh...@cn.ibm.com> wrote:
>       Hi Everyone
>
>       We use pcap4j to capture network packets and then use Spark
>       Streaming to analyze the captured packets. However, we have hit a
>       strange problem.
>
>       If we run our application on Spark locally (for example,
>       spark-submit --master local[2]), the program runs successfully.
>
>       If we run it on a Spark standalone cluster, the program reports
>       that no NIFs were found.
>
>       I also attach two test files for clarification.
>
>       So anyone can help on this? Thanks in advance!
>
>
>       (See attached file: PcapReceiver.java)
>       (See attached file: TestPcapSpark.java)
>
>       Best regards,
>
>       - Haishan
>
>       Haishan Wu (吴海珊)
>
>       IBM Research - China
>       Tel: 86-10-58748508
>       Fax: 86-10-58748330
>       Email: wuh...@cn.ibm.com
>       Lotus Notes: Hai Shan Wu/China/IBM
>
>
>
>
>
>
