I don't think the accumulo-maven-plugin is in our definition of public API (although it really should be since it's primarily there for users...)

So, probably, "it depends". This would be a good convo to have on dev@ to make sure the other devs see it. Would also be good to actually figure out what the scope of changes required would be (assuming you're already doing the leg-work to make it work w/ modifications on top of 1.7.0).

tl;dr let's talk out specifics on dev@ :)

James Hughes wrote:
Thanks.  In terms of knobs for Kerberos, etc. would that be a small
enough change to make sense in the 1.7.x series?  If so, and if we end
up needing it, I'd definitely be interested in contributing it back.

On Sun, Jun 7, 2015 at 4:36 PM, Josh Elser <[email protected]> wrote:

    Check out the accumulo-maven-plugin:

    https://accumulo.apache.org/release_notes/1.6.0.html#maven-plugin

    This will make it pretty simple to start a single MAC, run your
    tests, and then stop it. A word of caution: I don't think it's
    heavily used, nor do I think it supports configuring all the knobs
    you could configure using MAC in your code (e.g. Kerberos, for one).

    A full pom example can be seen at
    https://github.com/apache/accumulo/blob/1.7.0/maven-plugin/src/it/plugin-test/pom.xml
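    For orientation, a minimal plugin stanza might look like the
    following. This is a sketch only: the `instanceName` and
    `rootPassword` parameter names and the values shown are assumptions
    modeled on that integration-test pom, so verify them against the
    linked example before use.

```xml
<plugin>
  <groupId>org.apache.accumulo</groupId>
  <artifactId>accumulo-maven-plugin</artifactId>
  <version>1.7.0</version>
  <configuration>
    <!-- Hypothetical values; match these to what your ITs expect -->
    <instanceName>plugin-it-instance</instanceName>
    <rootPassword>ITSecret</rootPassword>
  </configuration>
  <executions>
    <execution>
      <id>run-plugin</id>
      <goals>
        <!-- start before the ITs run, stop afterwards -->
        <goal>start</goal>
        <goal>stop</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```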

    James Hughes wrote:

        Josh,

        Thanks.  That's more or less what I expected.

        As we work to transition from Mock to MiniAccumulo, I'd want to
        change from spinning up lots of MockInstances to one MiniAccumulo.
        To understand that pattern, do I basically just need to read
        through the test sub-module and the test/pom.xml? Are there any
        other resources I should be checking out?

        Cheers,

        Jim

        On Sun, Jun 7, 2015 at 1:37 PM, Josh Elser <[email protected]> wrote:

             MiniAccumulo, yes. MockAccumulo, no. In general, we've near
             completely moved away from MockAccumulo. I wouldn't be
             surprised if it gets deprecated and removed soon.

             https://github.com/apache/accumulo/blob/1.7/test/src/test/java/org/apache/accumulo/test/functional/KerberosIT.java

             Apache Directory provides a MiniKdc that can be used easily w/
             MiniAccumulo. Many of the integration tests have already been
             altered to support running w/ or w/o kerberos.
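
             For reference, the basic MiniAccumulo lifecycle those tests
             build on looks roughly like this. This is a sketch, not from
             the thread: class and method names are from the public 1.7
             minicluster API, the Kerberos/MiniKdc wiring is omitted, and
             it needs the Accumulo minicluster jars on the classpath.

```java
import java.io.File;
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.minicluster.MiniAccumuloCluster;

public class MacLifecycleSketch {
    public static void main(String[] args) throws Exception {
        File dir = new File(System.getProperty("java.io.tmpdir"), "mac-test");
        dir.mkdirs();
        // Start one MiniAccumuloCluster and share it across all tests,
        // rather than one MockInstance per test.
        MiniAccumuloCluster mac = new MiniAccumuloCluster(dir, "rootPassword");
        mac.start();
        try {
            Connector conn = mac.getConnector("root", "rootPassword");
            // ... run tests against conn ...
        } finally {
            mac.stop();
        }
    }
}
```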

             James Hughes wrote:

                 Hi all,

                 For GeoMesa, stats writing is quite secondary and
                 optional, so we can sort that out as a follow-on to
                 seeing GeoMesa work with Accumulo 1.7.

                 I haven't had a chance to read in detail yet, so forgive
                 me if this is covered in the docs. Does either Mock or
                 MiniAccumulo provide enough hooks to test out Kerberos
                 integration effectively? I suppose I'm really asking what
                 kind of testing environment a project like GeoMesa would
                 need to use to test out Accumulo 1.7.

                 Even though MockAccumulo has a number of limitations, it
                 is rather effective in unit tests which can be part of a
                 quick build.

                 Thanks,

                 Jim

                 On Sat, Jun 6, 2015 at 11:14 PM, Xu (Simon) Chen <[email protected]> wrote:

                      Nope, I am running the example as the readme file
                      suggested:

                      java -cp ./target/geomesa-quickstart-1.0-SNAPSHOT.jar \
                          org.geomesa.QuickStart -instanceId somecloud \
                          -zookeepers "zoo1:2181,zoo2:2181,zoo3:2181" \
                          -user someuser -password somepwd -tableName sometable

                      I'll raise this question with the geomesa folks, but
                      you're right that I can ignore it for now...

                      Thanks!
                      -Simon


                      On Sat, Jun 6, 2015 at 11:00 PM, Josh Elser <[email protected]> wrote:
         > Are you running it via `mvn exec:java` by chance, or NetBeans?
         >
         > http://mail-archives.apache.org/mod_mbox/accumulo-user/201411.mbox/%[email protected]%3E
         >
         > If that's just a background thread writing in Stats, it might
         > just be a factor of how you're invoking the program, and you
         > can ignore it. I don't know enough about the inner workings of
         > GeoMesa to say one way or the other.
         >
         > Xu (Simon) Chen wrote:
         >>
         >> Josh,
         >>
         >> Everything works well, except for one thing :-)
         >>
         >> I am running the geomesa-quickstart program, which ingests
         >> some data and then performs a simple query:
         >> https://github.com/geomesa/geomesa-quickstart
         >>
         >> For some reason, the following error is emitted consistently
         >> at the end of the execution, after outputting the correct result:
         >> 15/06/07 00:29:22 INFO zookeeper.ZooCache: Zookeeper error, will retry
         >> java.lang.InterruptedException
         >>         at java.lang.Object.wait(Native Method)
         >>         at java.lang.Object.wait(Object.java:503)
         >>         at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1342)
         >>         at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1036)
         >>         at org.apache.accumulo.fate.zookeeper.ZooCache$2.run(ZooCache.java:264)
         >>         at org.apache.accumulo.fate.zookeeper.ZooCache.retry(ZooCache.java:162)
         >>         at org.apache.accumulo.fate.zookeeper.ZooCache.get(ZooCache.java:289)
         >>         at org.apache.accumulo.fate.zookeeper.ZooCache.get(ZooCache.java:238)
         >>         at org.apache.accumulo.core.client.impl.Tables.getTableState(Tables.java:180)
         >>         at org.apache.accumulo.core.client.impl.ConnectorImpl.getTableId(ConnectorImpl.java:82)
         >>         at org.apache.accumulo.core.client.impl.ConnectorImpl.createBatchWriter(ConnectorImpl.java:128)
         >>         at org.locationtech.geomesa.core.stats.StatWriter$$anonfun$write$2.apply(StatWriter.scala:174)
         >>         at org.locationtech.geomesa.core.stats.StatWriter$$anonfun$write$2.apply(StatWriter.scala:156)
         >>         at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
         >>         at org.locationtech.geomesa.core.stats.StatWriter$.write(StatWriter.scala:156)
         >>         at org.locationtech.geomesa.core.stats.StatWriter$.drainQueue(StatWriter.scala:148)
         >>         at org.locationtech.geomesa.core.stats.StatWriter$.run(StatWriter.scala:116)
         >>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
         >>         at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
         >>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
         >>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
         >>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
         >>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
         >>         at java.lang.Thread.run(Thread.java:745)
         >>
         >> This is more annoying than a real problem. I am new to both
         >> accumulo and geomesa, but I am curious what the problem might be.
         >>
         >> Thanks!
         >> -Simon
         >>
         >>
         >> On Sat, Jun 6, 2015 at 8:01 PM, Josh Elser <[email protected]> wrote:
        >  >>
        >  >> Great! Glad to hear it. Please let us know how it works out!
        >  >>
        >  >>
        >  >> Xu (Simon) Chen wrote:
        >  >>>
        >  >>> Josh,
        >  >>>
        >  >>> You're right again.. Thanks!
        >  >>>
        >  >>> My Ansible play actually pushed client.conf to all the server
        >  >>> config directories, but didn't do anything for the clients,
        >  >>> and that's my problem. Now Kerberos is working great for me.
        >  >>>
        >  >>> Thanks again!
        >  >>> -Simon
        >  >>>
        >  >>> On Sat, Jun 6, 2015 at 5:04 PM, Josh Elser <[email protected]> wrote:
         >>>>>
         >>>>> Simon,
         >>>>>
         >>>>> Did you create a client configuration file
         >>>>> (~/.accumulo/config or $ACCUMULO_CONF_DIR/client.conf)? You
         >>>>> need to configure Accumulo clients to actually use SASL when
         >>>>> you're trying to use Kerberos authentication. Your server is
         >>>>> expecting that, but I would venture a guess that your client
         >>>>> isn't.
         >>>>>
         >>>>> See
         >>>>> http://accumulo.apache.org/1.7/accumulo_user_manual.html#_configuration_3
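         >>>>>
         >>>>> Concretely, the client-side piece is just a couple of
         >>>>> properties in that file. A sketch (the value `accumulo` is
         >>>>> an assumption; `kerberos.server.primary` must match the
         >>>>> first component of the server's Kerberos principal):

```
instance.rpc.sasl.enabled=true
kerberos.server.primary=accumulo
```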
         >>>>>
         >>>>>
         >>>>> Xu (Simon) Chen wrote:
         >>>>>>
         >>>>>> Josh,
         >>>>>>
         >>>>>> Thanks. It makes sense...
         >>>>>>
         >>>>>> I used a KerberosToken, but my program got stuck when
         >>>>>> running the following:
         >>>>>> new ZooKeeperInstance(instance, zookeepers).getConnector(user, krbToken)
         >>>>>>
         >>>>>> It looks like my client is stuck here:
         >>>>>> https://github.com/apache/accumulo/blob/master/core/src/main/java/org/apache/accumulo/core/client/impl/ConnectorImpl.java#L70
         >>>>>> failing in the receive part of
         >>>>>> org.apache.accumulo.core.client.impl.thrift.ClientService.Client.authenticate().
         >>>>>>
         >>>>>> On my tservers, I see the following:
         >>>>>>
         >>>>>> 2015-06-06 18:58:19,616 [server.TThreadPoolServer] ERROR: Error occurred during processing of message.
         >>>>>> java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out
         >>>>>>           at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
         >>>>>>           at org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:51)
         >>>>>>           at org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:48)
         >>>>>>           at java.security.AccessController.doPrivileged(Native Method)
         >>>>>>           at javax.security.auth.Subject.doAs(Subject.java:356)
         >>>>>>           at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1622)
         >>>>>>           at org.apache.accumulo.core.rpc.UGIAssumingTransportFactory.getTransport(UGIAssumingTransportFactory.java:48)
         >>>>>>           at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:208)
         >>>>>>           at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
         >>>>>>           at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
         >>>>>>           at org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
         >>>>>>           at java.lang.Thread.run(Thread.java:745)
         >>>>>> Caused by: org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out
         >>>>>>           at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
         >>>>>>           at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
         >>>>>>           at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:182)
         >>>>>>           at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
         >>>>>>           at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:253)
         >>>>>>           at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
         >>>>>>           at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
         >>>>>>           ... 11 more
         >>>>>> Caused by: java.net.SocketTimeoutException: Read timed out
         >>>>>>           at java.net.SocketInputStream.socketRead0(Native Method)
         >>>>>>           at java.net.SocketInputStream.read(SocketInputStream.java:152)
         >>>>>>           at java.net.SocketInputStream.read(SocketInputStream.java:122)
         >>>>>>           at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
         >>>>>>           at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
         >>>>>>           at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
         >>>>>>           ... 17 more
        > >>>>>
        > >>>>> Any ideas why?
        > >>>>>
        > >>>>> Thanks.
        > >>>>> -Simon
        > >>>>>
        > >>>>>
        > >>>>>
        > >>>>>
        > >>>>> On Sat, Jun 6, 2015 at 2:19 PM, Josh Elser <[email protected]> wrote:
         >>>>>>>
         >>>>>>> Make sure you read the JavaDoc on DelegationToken:
         >>>>>>>
         >>>>>>> <snip>
         >>>>>>> Obtain a delegation token by calling {@link
         >>>>>>> SecurityOperations#getDelegationToken(org.apache.accumulo.core.client.admin.DelegationTokenConfig)}
         >>>>>>> </snip>
         >>>>>>>
         >>>>>>> You cannot create a usable DelegationToken as the client itself.
         >>>>>>>
         >>>>>>> Anyways, DelegationTokens are only relevant in cases where
         >>>>>>> the client's Kerberos credentials are unavailable. The most
         >>>>>>> common case is running MapReduce jobs. If you are just
         >>>>>>> interacting with Accumulo through the Java API, the
         >>>>>>> KerberosToken is all you need to use.
         >>>>>>>
         >>>>>>> The user manual likely just needs to be updated. I believe
         >>>>>>> the DelegationTokenConfig was added after I wrote the
         >>>>>>> initial documentation.
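         >>>>>>>
         >>>>>>> For reference, obtaining a delegation token then looks
         >>>>>>> roughly like this. A sketch, not from the thread: names
         >>>>>>> should be checked against the 1.7 Javadoc, and the
         >>>>>>> explicit one-day lifetime is an illustrative choice, not
         >>>>>>> a requirement.

```java
import java.util.concurrent.TimeUnit;
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.admin.DelegationTokenConfig;
import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;

public class DelegationTokenSketch {
    // Assumes conn was already authenticated with a KerberosToken;
    // the server enforces a maximum lifetime regardless of the request.
    static AuthenticationToken obtain(Connector conn) throws Exception {
        DelegationTokenConfig cfg = new DelegationTokenConfig()
                .setTokenLifetime(1, TimeUnit.DAYS);
        return conn.securityOperations().getDelegationToken(cfg);
    }
}
```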
         >>>>>>>
         >>>>>>>
         >>>>>>> Xu (Simon) Chen wrote:
         >>>>>>>>
         >>>>>>>> Hi folks,
         >>>>>>>>
         >>>>>>>> The latest Kerberos doc seems to indicate that
         >>>>>>>> getDelegationToken can be called without any parameters:
         >>>>>>>> https://github.com/apache/accumulo/blob/1.7/docs/src/main/asciidoc/chapters/kerberos.txt#L410
         >>>>>>>>
         >>>>>>>> Yet the source code indicates a DelegationTokenConfig
         >>>>>>>> object must be passed in:
         >>>>>>>> https://github.com/apache/accumulo/blob/1.7/core/src/main/java/org/apache/accumulo/core/client/admin/SecurityOperations.java#L359
         >>>>>>>>
         >>>>>>>> Any ideas on how I should construct the
         >>>>>>>> DelegationTokenConfig object?
         >>>>>>>>
         >>>>>>>> For context, I've been trying to get geomesa to work on
         >>>>>>>> my accumulo 1.7 with kerberos turned on. Right now, the
         >>>>>>>> code is somewhat tied to password auth:
         >>>>>>>> https://github.com/locationtech/geomesa/blob/rc7_a1.7_h2.5/geomesa-core/src/main/scala/org/locationtech/geomesa/core/data/AccumuloDataStoreFactory.scala#L177
         >>>>>>>> My thought is that I should get a KerberosToken first,
         >>>>>>>> and then try to generate a DelegationToken, which is
         >>>>>>>> passed back for later interactions between geomesa and
         >>>>>>>> accumulo.
         >>>>>>>>
         >>>>>>>> Thanks.
         >>>>>>>> -Simon



