Merge cloudera/kudu-examples into the examples subdirectory

This merges the cloudera/kudu-examples[1] repository into the examples
subdirectory.

I scrubbed Cloudera-specific bits, namely:
- Removed the extra license (which was also Apache 2.0)
- Removed the demo-vm-setup directory
- Removed demo vm instructions from the README
- Removed the python directory, as it contained an early version
  of the Python client, which is already integrated.

Fix-ups to improve and integrate the Java and Python examples
will be in follow-ups.

[1]: https://github.com/cloudera/kudu-examples

Change-Id: I4ebaa902b7af6f87bfd1cfba07f0ed2ea6de1460


Project: http://git-wip-us.apache.org/repos/asf/kudu/repo
Commit: http://git-wip-us.apache.org/repos/asf/kudu/commit/c83f6eb3
Tree: http://git-wip-us.apache.org/repos/asf/kudu/tree/c83f6eb3
Diff: http://git-wip-us.apache.org/repos/asf/kudu/diff/c83f6eb3

Branch: refs/heads/master
Commit: c83f6eb35da5708e893ed4dc1540c01da074ea08
Parents: 32d276e 7545dec
Author: Will Berkeley <wdberke...@apache.org>
Authored: Wed Mar 28 14:48:21 2018 -0700
Committer: Will Berkeley <wdberke...@apache.org>
Committed: Wed Mar 28 14:49:39 2018 -0700

----------------------------------------------------------------------
 examples/README.md                              |   6 +
 examples/java/.gitignore                        |   7 +
 examples/java/collectl/README                   |  82 ++++++
 examples/java/collectl/pom.xml                  |  72 +++++
 .../examples/collectl/KuduCollectlExample.java  | 199 +++++++++++++
 examples/java/insert-loadgen/README             |  17 ++
 examples/java/insert-loadgen/pom.xml            |  72 +++++
 .../kududb/examples/loadgen/InsertLoadgen.java  | 110 +++++++
 examples/java/java-sample/README                |  15 +
 examples/java/java-sample/pom.xml               |  72 +++++
 .../java/org/kududb/examples/sample/Sample.java |  77 +++++
 examples/python/dstat-kudu/README.md            |  69 +++++
 examples/python/dstat-kudu/kudu_dstat.py        |  93 ++++++
 examples/python/graphite-kudu/kudu/__init__.py  |   3 +
 .../python/graphite-kudu/kudu/kudu_graphite.py  | 287 +++++++++++++++++++
 examples/python/graphite-kudu/setup.cfg         |   2 +
 examples/python/graphite-kudu/setup.py          |  26 ++
 17 files changed, 1209 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/kudu/blob/c83f6eb3/examples/README.md
----------------------------------------------------------------------
diff --cc examples/README.md
index 0000000,0000000..96f119f
new file mode 100644
--- /dev/null
+++ b/examples/README.md
@@@ -1,0 -1,0 +1,6 @@@
++# Kudu examples
++
++This directory holds example code and tutorials for Kudu.
++
++It was imported from https://github.com/cloudera/kudu-examples at commit
++7545deccb8e12effa17a955ab5b841bdcc5afe85.

http://git-wip-us.apache.org/repos/asf/kudu/blob/c83f6eb3/examples/java/.gitignore
----------------------------------------------------------------------
diff --cc examples/java/.gitignore
index 0000000,0000000..b2dce6e
new file mode 100644
--- /dev/null
+++ b/examples/java/.gitignore
@@@ -1,0 -1,0 +1,7 @@@
++dependency-reduced-pom.xml
++target/
++.classpath
++.project
++.settings
++.idea
++*.iml

http://git-wip-us.apache.org/repos/asf/kudu/blob/c83f6eb3/examples/java/collectl/README
----------------------------------------------------------------------
diff --cc examples/java/collectl/README
index 0000000,0000000..74a2fd7
new file mode 100644
--- /dev/null
+++ b/examples/java/collectl/README
@@@ -1,0 -1,0 +1,82 @@@
++This example implements a simple Java application which listens on a
++TCP socket for time series data corresponding to the Collectl wire protocol.
++The commonly-available 'collectl' tool can be used to send example data
++to the server.
++
++This tutorial assumes that you are running a Kudu master on 'quickstart.cloudera'.
++Otherwise, you can pass another host using '-DkuduMaster=host:port'.
++
++To start the example server:
++
++$ mvn package
++$ java -jar target/kudu-collectl-example-1.0-SNAPSHOT.jar
++
++To start collecting data, run the following command on one or more machines:
++
++$ collectl --export=graphite,127.0.0.1,p=/
++
++(replacing '127.0.0.1' with the IP address of whichever server is running
++the example program).
++
++----
++
++Exploring the data with Impala
++========
++
++First, we need to map the table into Impala:
++
++    CREATE EXTERNAL TABLE `metrics` (
++    `host` STRING,
++    `metric` STRING,
++    `timestamp` INT,
++    `value` DOUBLE
++    )
++    TBLPROPERTIES(
++      'storage_handler' = 'com.cloudera.kudu.hive.KuduStorageHandler',
++      'kudu.table_name' = 'metrics',
++      'kudu.master_addresses' = 'quickstart.cloudera:7051',
++      'kudu.key_columns' = 'host, metric, timestamp'
++    );
++
++Then, we can run some queries:
++
++    [quickstart.cloudera:21000] > select count(distinct metric) from metrics;
++    Query: select count(distinct metric) from metrics
++    +------------------------+
++    | count(distinct metric) |
++    +------------------------+
++    | 23                     |
++    +------------------------+
++    Fetched 1 row(s) in 0.19s
++
++
++Exploring the data with Spark
++========
++
++NOTE: if you are using the Quickstart VM, Spark is not installed by default.
++You can install it by running:
++
++    sudo yum -y install spark-core
++
++Download the Kudu MR jar and run Spark with it on the classpath:
++
++    wget https://repository.cloudera.com/artifactory/cloudera-repos/org/apache/kudu/kudu-spark_2.10/0.10.0/kudu-spark_2.10-0.10.0.jar
++    spark-shell --jars kudu-spark*jar
++
++You can then paste this example script:
++
++    import org.apache.kudu.spark.kudu._
++
++    val df = sqlContext.read.options(Map(
++      "kudu.master" -> "quickstart.cloudera",
++      "kudu.table" -> "metrics")).kudu
++    df.registerTempTable("metrics")
++
++    // Print the first five values
++    sqlContext.sql("select * from metrics limit 5").show()
++    
++    // Calculate the average value of every host/metric pair
++    sqlContext.sql("select host, metric, avg(value) from metrics group by host, metric").show()
++    
++Note that if you are still running the 'collectl' command above, you can see
++the data changing in real time by re-running the queries.

http://git-wip-us.apache.org/repos/asf/kudu/blob/c83f6eb3/examples/java/collectl/pom.xml
----------------------------------------------------------------------
diff --cc examples/java/collectl/pom.xml
index 0000000,0000000..3727201
new file mode 100644
--- /dev/null
+++ b/examples/java/collectl/pom.xml
@@@ -1,0 -1,0 +1,72 @@@
++<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
++  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
++  <modelVersion>4.0.0</modelVersion>
++  <groupId>kudu-collectl-example</groupId>
++  <artifactId>kudu-collectl-example</artifactId>
++  <packaging>jar</packaging>
++  <version>1.0-SNAPSHOT</version>
++  <name>kudu-collectl-example</name>
++
++  <build>
++    <plugins>
++      <plugin>
++        <groupId>org.apache.maven.plugins</groupId>
++        <artifactId>maven-compiler-plugin</artifactId>
++        <version>2.3.1</version>
++        <configuration>
++          <source>1.7</source>
++          <target>1.7</target>
++        </configuration>
++      </plugin>
++      <plugin>
++        <groupId>org.apache.maven.plugins</groupId>
++        <artifactId>maven-shade-plugin</artifactId>
++        <version>2.4</version>
++        <executions>
++          <execution>
++            <phase>package</phase>
++            <goals>
++              <goal>shade</goal>
++            </goals>
++            <configuration>
++              <transformers>
++                <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
++                  <mainClass>org.kududb.examples.collectl.KuduCollectlExample</mainClass>
++                </transformer>
++              </transformers>
++            </configuration>
++          </execution>
++        </executions>
++      </plugin>
++    </plugins>
++  </build>
++
++  <repositories>
++    <repository>
++      <id>cdh.repo</id>
++      <name>Cloudera Repositories</name>
++      <url>https://repository.cloudera.com/artifactory/cloudera-repos</url>
++      <snapshots>
++        <enabled>false</enabled>
++      </snapshots>
++    </repository>
++  </repositories>
++
++  <dependencies>
++
++    <dependency>
++      <groupId>org.apache.kudu</groupId>
++      <artifactId>kudu-client</artifactId>
++      <version>1.1.0</version>
++    </dependency>
++
++    <!-- for logging messages -->
++    <dependency>
++      <groupId>org.slf4j</groupId>
++      <artifactId>slf4j-simple</artifactId>
++      <version>1.7.12</version>
++    </dependency>
++
++  </dependencies>
++
++</project>

http://git-wip-us.apache.org/repos/asf/kudu/blob/c83f6eb3/examples/java/collectl/src/main/java/org/kududb/examples/collectl/KuduCollectlExample.java
----------------------------------------------------------------------
diff --cc examples/java/collectl/src/main/java/org/kududb/examples/collectl/KuduCollectlExample.java
index 0000000,0000000..f7c7d1c
new file mode 100644
--- /dev/null
+++ b/examples/java/collectl/src/main/java/org/kududb/examples/collectl/KuduCollectlExample.java
@@@ -1,0 -1,0 +1,199 @@@
++package org.kududb.examples.collectl;
++
++import java.io.BufferedReader;
++import java.io.InputStreamReader;
++import java.net.ServerSocket;
++import java.net.Socket;
++import java.util.ArrayList;
++import java.util.Collections;
++import java.util.List;
++import java.util.Set;
++import java.util.concurrent.ConcurrentHashMap;
++
++import org.apache.kudu.ColumnSchema;
++import org.apache.kudu.ColumnSchema.ColumnSchemaBuilder;
++import org.apache.kudu.Schema;
++import org.apache.kudu.Type;
++import org.apache.kudu.client.CreateTableOptions;
++import org.apache.kudu.client.Insert;
++import org.apache.kudu.client.KuduClient;
++import org.apache.kudu.client.KuduSession;
++import org.apache.kudu.client.KuduTable;
++import org.apache.kudu.client.OperationResponse;
++import org.apache.kudu.client.RowError;
++import org.apache.kudu.client.SessionConfiguration.FlushMode;
++
++
++public class KuduCollectlExample {
++  private static final int GRAPHITE_PORT = 2003;
++  private static final String TABLE_NAME = "metrics";
++  private static final String ID_TABLE_NAME = "metric_ids";
++
++  private static final String KUDU_MASTER =
++      System.getProperty("kuduMaster", "quickstart.cloudera");
++
++  private KuduClient client;
++  private KuduTable table;
++  private KuduTable idTable;
++
++  private Set<String> existingMetrics = Collections.newSetFromMap(
++    new ConcurrentHashMap<String, Boolean>());
++
++  public static void main(String[] args) throws Exception {
++    new KuduCollectlExample().run();
++  }
++
++  KuduCollectlExample() {
++    this.client = new KuduClient.KuduClientBuilder(KUDU_MASTER).build();
++  }
++  
++  public void run() throws Exception {
++    createTableIfNecessary();
++    createIdTableIfNecessary();
++    this.table = client.openTable(TABLE_NAME);
++    this.idTable = client.openTable(ID_TABLE_NAME);
++    try (ServerSocket listener = new ServerSocket(GRAPHITE_PORT)) {
++      while (true) {
++        Socket s = listener.accept();
++        new HandlerThread(s).start();
++      }
++    }
++  }
++  
++  private void createTableIfNecessary() throws Exception {
++    if (client.tableExists(TABLE_NAME)) {
++      return;
++    }
++    
++    List<ColumnSchema> cols = new ArrayList<>();
++    cols.add(new ColumnSchemaBuilder("host", Type.STRING).key(true).encoding(
++        ColumnSchema.Encoding.DICT_ENCODING).build());
++    cols.add(new ColumnSchemaBuilder("metric", Type.STRING).key(true).encoding(
++        ColumnSchema.Encoding.DICT_ENCODING).build());
++    cols.add(new ColumnSchemaBuilder("timestamp", Type.INT32).key(true).encoding(
++        ColumnSchema.Encoding.BIT_SHUFFLE).build());
++    cols.add(new ColumnSchemaBuilder("value", Type.DOUBLE)
++        .encoding(ColumnSchema.Encoding.BIT_SHUFFLE).build());
++
++    // Need to set this up since we're not pre-partitioning.
++    List<String> rangeKeys = new ArrayList<>();
++    rangeKeys.add("host");
++    rangeKeys.add("metric");
++    rangeKeys.add("timestamp");
++
++    client.createTable(TABLE_NAME, new Schema(cols),
++                       new CreateTableOptions().setRangePartitionColumns(rangeKeys));
++  }
++  
++  private void createIdTableIfNecessary() throws Exception {
++    if (client.tableExists(ID_TABLE_NAME)) {
++      return;
++    }
++    
++    ArrayList<ColumnSchema> cols = new ArrayList<>();
++    cols.add(new ColumnSchemaBuilder("host", Type.STRING).key(true).build());
++    cols.add(new ColumnSchemaBuilder("metric", Type.STRING).key(true).build());
++
++    // Need to set this up since we're not pre-partitioning.
++    List<String> rangeKeys = new ArrayList<>();
++    rangeKeys.add("host");
++    rangeKeys.add("metric");
++
++    client.createTable(ID_TABLE_NAME, new Schema(cols),
++                       new CreateTableOptions().setRangePartitionColumns(rangeKeys));
++  }
++
++  class HandlerThread extends Thread {
++    private Socket socket;
++    private KuduSession session;
++
++    HandlerThread(Socket s) {
++      this.socket = s;
++      this.session = client.newSession();
++      // TODO: AUTO_FLUSH_BACKGROUND would be better for this kind of use case,
++      // but it seems like it's buffering data too long, and only flushing
++      // based on size.
++      // Perhaps we should support a time-based buffering as well?
++      session.setFlushMode(FlushMode.MANUAL_FLUSH);
++      
++      // Increase the number of mutations that we can buffer
++      session.setMutationBufferSpace(10000);
++    }
++    
++    @Override
++    public void run() {
++      try {
++        doRun();
++      } catch (Exception e) {
++        System.err.println("exception handling connection from " + socket);
++        e.printStackTrace();
++      }
++    }
++
++    private void insertIdIfNecessary(String host, String metric) throws Exception {
++      String id = host + "/" + metric;
++      if (existingMetrics.contains(id)) {
++        return;
++      }
++      Insert ins = idTable.newInsert();
++      ins.getRow().addString("host", host);
++      ins.getRow().addString("metric", metric);
++      session.apply(ins);
++      session.flush();
++      // TODO: error handling!
++      //System.err.println("registered new metric " + id);
++      existingMetrics.add(id);
++    }
++    
++    private void doRun() throws Exception {
++      BufferedReader br = new BufferedReader(new InputStreamReader(
++          socket.getInputStream()));
++      socket = null;
++      
++      // Read lines from collectl. Each line should look like:
++      // hostname.example.com/.cpuload.avg1 2.27 1435788059
++      String input;
++      while ((input = br.readLine()) != null) { 
++        String[] fields = input.split(" ");
++        if (fields.length != 3) {
++          throw new Exception("Invalid input: " + input);
++        }
++        String[] hostAndMetric = fields[0].split("/.");
++        if (hostAndMetric.length != 2) {
++          System.err.println("bad line: " + input);
++          throw new Exception("expected /. delimiter between host and metric name. " +
++              "Did you run collectl with --export=graphite,<hostname>,p=/ ?");
++        }
++        String host = hostAndMetric[0];
++        String metric = hostAndMetric[1];
++        insertIdIfNecessary(host, metric);
++        double val = Double.parseDouble(fields[1]);        
++        int ts = Integer.parseInt(fields[2]);
++        
++        Insert insert = table.newInsert();
++        insert.getRow().addString("host", hostAndMetric[0]);
++        insert.getRow().addString("metric", hostAndMetric[1]);
++        insert.getRow().addInt("timestamp", ts);
++        insert.getRow().addDouble("value", val);
++        session.apply(insert);
++        
++        // If there's more data to read, don't flush yet -- better to accumulate
++        // a larger batch.
++        if (!br.ready()) {
++          List<OperationResponse> responses = session.flush();
++          for (OperationResponse r : responses) {
++            if (r.hasRowError()) {
++              RowError e = r.getRowError();
++              // TODO: the client should offer an enum for different row errors, instead
++              // of string comparison!
++              if ("ALREADY_PRESENT".equals(e.getStatus())) {
++                continue;
++              }
++              System.err.println("Error inserting " + e.getOperation().toString()
++                  + ": " + e.toString());
++            }
++          }
++        }
++      }
++    }
++  }
++}

http://git-wip-us.apache.org/repos/asf/kudu/blob/c83f6eb3/examples/java/insert-loadgen/README
----------------------------------------------------------------------
diff --cc examples/java/insert-loadgen/README
index 0000000,0000000..7c0628e
new file mode 100644
--- /dev/null
+++ b/examples/java/insert-loadgen/README
@@@ -1,0 -1,0 +1,17 @@@
++Random insert load generator. This inserts rows as fast as it can using
++AUTO_FLUSH_BACKGROUND mode. All fields are randomized; if inserts fail
++(for example, due to duplicate keys), the program exits with the first
++row error encountered.
++
++To build and run, do the following:
++
++$ mvn package
++$ java -jar target/kudu-insert-loadgen-0.1-SNAPSHOT.jar kudu_master_host kudu_table_name
++
++For example, if you are running the Quickstart VM with the host name
++"quickstart.cloudera", then you can use:
++
++$ java -jar target/kudu-insert-loadgen-0.1-SNAPSHOT.jar quickstart.cloudera test_table
++
++Note: This program will not create the "test_table" table. You must do that
++via other means, such as through impala-shell or using the create-demo-table
++program included in the Kudu source tree.

http://git-wip-us.apache.org/repos/asf/kudu/blob/c83f6eb3/examples/java/insert-loadgen/pom.xml
----------------------------------------------------------------------
diff --cc examples/java/insert-loadgen/pom.xml
index 0000000,0000000..09bc0d1
new file mode 100644
--- /dev/null
+++ b/examples/java/insert-loadgen/pom.xml
@@@ -1,0 -1,0 +1,72 @@@
++<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
++  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
++  <modelVersion>4.0.0</modelVersion>
++  <groupId>kudu-examples</groupId>
++  <artifactId>kudu-insert-loadgen</artifactId>
++  <packaging>jar</packaging>
++  <version>0.1-SNAPSHOT</version>
++  <name>Random Insert Load Generator for Kudu</name>
++
++  <build>
++    <plugins>
++      <plugin>
++        <groupId>org.apache.maven.plugins</groupId>
++        <artifactId>maven-compiler-plugin</artifactId>
++        <version>2.3.1</version>
++        <configuration>
++          <source>1.7</source>
++          <target>1.7</target>
++        </configuration>
++      </plugin>
++      <plugin>
++        <groupId>org.apache.maven.plugins</groupId>
++        <artifactId>maven-shade-plugin</artifactId>
++        <version>2.4</version>
++        <executions>
++          <execution>
++            <phase>package</phase>
++            <goals>
++              <goal>shade</goal>
++            </goals>
++            <configuration>
++              <transformers>
++                <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
++                  <mainClass>org.kududb.examples.loadgen.InsertLoadgen</mainClass>
++                </transformer>
++              </transformers>
++            </configuration>
++          </execution>
++        </executions>
++      </plugin>
++    </plugins>
++  </build>
++
++  <repositories>
++    <repository>
++      <id>cdh.repo</id>
++      <name>Cloudera Repositories</name>
++      <url>https://repository.cloudera.com/artifactory/cloudera-repos</url>
++      <snapshots>
++        <enabled>false</enabled>
++      </snapshots>
++    </repository>
++  </repositories>
++
++  <dependencies>
++
++    <dependency>
++      <groupId>org.apache.kudu</groupId>
++      <artifactId>kudu-client</artifactId>
++      <version>1.1.0</version>
++    </dependency>
++
++    <!-- for logging messages -->
++    <dependency>
++      <groupId>org.slf4j</groupId>
++      <artifactId>slf4j-simple</artifactId>
++      <version>1.7.12</version>
++    </dependency>
++
++  </dependencies>
++
++</project>

http://git-wip-us.apache.org/repos/asf/kudu/blob/c83f6eb3/examples/java/insert-loadgen/src/main/java/org/kududb/examples/loadgen/InsertLoadgen.java
----------------------------------------------------------------------
diff --cc examples/java/insert-loadgen/src/main/java/org/kududb/examples/loadgen/InsertLoadgen.java
index 0000000,0000000..4b76c34
new file mode 100644
--- /dev/null
+++ b/examples/java/insert-loadgen/src/main/java/org/kududb/examples/loadgen/InsertLoadgen.java
@@@ -1,0 -1,0 +1,110 @@@
++package org.kududb.examples.loadgen;
++
++import java.util.ArrayList;
++import java.util.List;
++import java.util.Random;
++import java.util.UUID;
++
++import org.apache.kudu.Schema;
++import org.apache.kudu.Type;
++import org.apache.kudu.client.Insert;
++import org.apache.kudu.client.KuduClient;
++import org.apache.kudu.client.KuduSession;
++import org.apache.kudu.client.KuduTable;
++import org.apache.kudu.client.PartialRow;
++import org.apache.kudu.client.SessionConfiguration;
++
++public class InsertLoadgen {
++  private static class RandomDataGenerator {
++    private final Random rng;
++    private final int index;
++    private final Type type;
++
++    /**
++     * Instantiate a random data generator for a specific field.
++     * @param index The numerical index of the column in the row schema
++     * @param type The type of the data at index {@code index}
++     */
++    public RandomDataGenerator(int index, Type type) {
++      this.rng = new Random();
++      this.index = index;
++      this.type = type;
++    }
++
++    /**
++     * Add random data to the given row for the column at index {@code index}
++     * of type {@code type}
++     * @param row The row to add the field to
++     */
++    void generateColumnData(PartialRow row) {
++      switch (type) {
++        case INT8:
++          row.addByte(index, (byte) rng.nextInt(Byte.MAX_VALUE));
++          return;
++        case INT16:
++          row.addShort(index, (short)rng.nextInt(Short.MAX_VALUE));
++          return;
++        case INT32:
++          row.addInt(index, rng.nextInt(Integer.MAX_VALUE));
++          return;
++        case INT64:
++        case UNIXTIME_MICROS:
++          row.addLong(index, rng.nextLong());
++          return;
++        case BINARY:
++          byte bytes[] = new byte[16];
++          rng.nextBytes(bytes);
++          row.addBinary(index, bytes);
++          return;
++        case STRING:
++          row.addString(index, UUID.randomUUID().toString());
++          return;
++        case BOOL:
++          row.addBoolean(index, rng.nextBoolean());
++          return;
++        case FLOAT:
++          row.addFloat(index, rng.nextFloat());
++          return;
++        case DOUBLE:
++          row.addDouble(index, rng.nextDouble());
++          return;
++        default:
++          throw new UnsupportedOperationException("Unknown type " + type);
++      }
++    }
++  }
++
++  public static void main(String[] args) throws Exception {
++    if (args.length != 2) {
++      System.err.println("Usage: InsertLoadgen kudu_master_host kudu_table");
++      System.exit(1);
++    }
++
++    String masterHost = args[0];
++    String tableName = args[1];
++
++    try (KuduClient client = new KuduClient.KuduClientBuilder(masterHost).build()) {
++      KuduTable table = client.openTable(tableName);
++      Schema schema = table.getSchema();
++      List<RandomDataGenerator> generators = new ArrayList<>(schema.getColumnCount());
++      for (int i = 0; i < schema.getColumnCount(); i++) {
++        generators.add(new RandomDataGenerator(i, schema.getColumnByIndex(i).getType()));
++      }
++
++      KuduSession session = client.newSession();
++      session.setFlushMode(SessionConfiguration.FlushMode.AUTO_FLUSH_BACKGROUND);
++      for (int insertCount = 0; ; insertCount++) {
++        Insert insert = table.newInsert();
++        PartialRow row = insert.getRow();
++        for (int i = 0; i < schema.getColumnCount(); i++) {
++          generators.get(i).generateColumnData(row);
++        }
++        session.apply(insert);
++
++        if (insertCount % 1000 == 0 && session.countPendingErrors() > 0) {
++          throw new RuntimeException(session.getPendingErrors().getRowErrors()[0].toString());
++        }
++      }
++    }
++  }
++}
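The per-type dispatch that RandomDataGenerator performs above can be sketched in Python as well. This is an illustrative analogue only — it uses plain strings for the column types and is not the kudu-python API:

```python
import random
import uuid

# One generator of random values per column type, mirroring the
# switch statement in RandomDataGenerator.generateColumnData().
GENERATORS = {
    "int8":   lambda rng: rng.randint(0, 127),
    "int16":  lambda rng: rng.randint(0, 32767),
    "int32":  lambda rng: rng.randint(0, 2**31 - 1),
    "int64":  lambda rng: rng.randint(-2**63, 2**63 - 1),
    "binary": lambda rng: bytes(rng.randrange(256) for _ in range(16)),
    "string": lambda rng: str(uuid.uuid4()),
    "bool":   lambda rng: rng.random() < 0.5,
    "float":  lambda rng: rng.random(),
    "double": lambda rng: rng.random(),
}

def generate_row(schema, rng=None):
    """Produce one random value per (name, type) column in `schema`."""
    rng = rng or random.Random()
    return {name: GENERATORS[ctype](rng) for name, ctype in schema}

row = generate_row([("key", "int32"), ("value", "string")])
```

Because every field is drawn independently at random, repeated rows can collide on the primary key, which is why the load generator tolerates duplicate-key errors.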

http://git-wip-us.apache.org/repos/asf/kudu/blob/c83f6eb3/examples/java/java-sample/README
----------------------------------------------------------------------
diff --cc examples/java/java-sample/README
index 0000000,0000000..71cd034
new file mode 100644
--- /dev/null
+++ b/examples/java/java-sample/README
@@@ -1,0 -1,0 +1,15 @@@
++This Java client sample creates a table, writes some data, scans it, and
++then deletes the table.
++
++
++To build and run, do the following:
++
++$ mvn package
++$ java -jar target/kudu-java-sample-1.0-SNAPSHOT.jar
++
++This example assumes that you are running the Quickstart VM with the host name
++"quickstart.cloudera". If you are running against a different cluster, pass the
++host name of the Kudu Master using a Java property:
++
++$ java -DkuduMaster=a1216.halxg.cloudera.com -jar target/kudu-java-sample-1.0-SNAPSHOT.jar
++

http://git-wip-us.apache.org/repos/asf/kudu/blob/c83f6eb3/examples/java/java-sample/pom.xml
----------------------------------------------------------------------
diff --cc examples/java/java-sample/pom.xml
index 0000000,0000000..b078b04
new file mode 100644
--- /dev/null
+++ b/examples/java/java-sample/pom.xml
@@@ -1,0 -1,0 +1,72 @@@
++<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
++  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
++  <modelVersion>4.0.0</modelVersion>
++  <groupId>kudu-java-sample</groupId>
++  <artifactId>kudu-java-sample</artifactId>
++  <packaging>jar</packaging>
++  <version>1.0-SNAPSHOT</version>
++  <name>kudu-java-sample</name>
++
++  <build>
++    <plugins>
++      <plugin>
++        <groupId>org.apache.maven.plugins</groupId>
++        <artifactId>maven-compiler-plugin</artifactId>
++        <version>2.3.1</version>
++        <configuration>
++          <source>1.7</source>
++          <target>1.7</target>
++        </configuration>
++      </plugin>
++      <plugin>
++        <groupId>org.apache.maven.plugins</groupId>
++        <artifactId>maven-shade-plugin</artifactId>
++        <version>2.4</version>
++        <executions>
++          <execution>
++            <phase>package</phase>
++            <goals>
++              <goal>shade</goal>
++            </goals>
++            <configuration>
++              <transformers>
++                <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
++                  <mainClass>org.kududb.examples.sample.Sample</mainClass>
++                </transformer>
++              </transformers>
++            </configuration>
++          </execution>
++        </executions>
++      </plugin>
++    </plugins>
++  </build>
++
++  <repositories>
++    <repository>
++      <id>cdh.repo</id>
++      <name>Cloudera Repositories</name>
++      <url>https://repository.cloudera.com/artifactory/cloudera-repos</url>
++      <snapshots>
++        <enabled>false</enabled>
++      </snapshots>
++    </repository>
++  </repositories>
++
++  <dependencies>
++
++    <dependency>
++      <groupId>org.apache.kudu</groupId>
++      <artifactId>kudu-client</artifactId>
++      <version>1.1.0</version>
++    </dependency>
++
++    <!-- for logging messages -->
++    <dependency>
++      <groupId>org.slf4j</groupId>
++      <artifactId>slf4j-simple</artifactId>
++      <version>1.7.12</version>
++    </dependency>
++
++  </dependencies>
++
++</project>

http://git-wip-us.apache.org/repos/asf/kudu/blob/c83f6eb3/examples/java/java-sample/src/main/java/org/kududb/examples/sample/Sample.java
----------------------------------------------------------------------
diff --cc examples/java/java-sample/src/main/java/org/kududb/examples/sample/Sample.java
index 0000000,0000000..5c5285f
new file mode 100644
--- /dev/null
+++ b/examples/java/java-sample/src/main/java/org/kududb/examples/sample/Sample.java
@@@ -1,0 -1,0 +1,77 @@@
++package org.kududb.examples.sample;
++
++import org.apache.kudu.ColumnSchema;
++import org.apache.kudu.Schema;
++import org.apache.kudu.Type;
++import org.apache.kudu.client.*;
++
++import java.util.ArrayList;
++import java.util.List;
++
++public class Sample {
++
++  private static final String KUDU_MASTER = System.getProperty(
++      "kuduMaster", "quickstart.cloudera");
++
++  public static void main(String[] args) {
++    System.out.println("-----------------------------------------------");
++    System.out.println("Will try to connect to Kudu master at " + KUDU_MASTER);
++    System.out.println("Run with -DkuduMaster=myHost:port to override.");
++    System.out.println("-----------------------------------------------");
++    String tableName = "java_sample-" + System.currentTimeMillis();
++    KuduClient client = new KuduClient.KuduClientBuilder(KUDU_MASTER).build();
++
++    try {
++      List<ColumnSchema> columns = new ArrayList<>(2);
++      columns.add(new ColumnSchema.ColumnSchemaBuilder("key", Type.INT32)
++          .key(true)
++          .build());
++      columns.add(new ColumnSchema.ColumnSchemaBuilder("value", Type.STRING)
++          .build());
++      List<String> rangeKeys = new ArrayList<>();
++      rangeKeys.add("key");
++
++      Schema schema = new Schema(columns);
++      client.createTable(tableName, schema,
++                         new CreateTableOptions().setRangePartitionColumns(rangeKeys));
++
++      KuduTable table = client.openTable(tableName);
++      KuduSession session = client.newSession();
++      for (int i = 0; i < 3; i++) {
++        Insert insert = table.newInsert();
++        PartialRow row = insert.getRow();
++        row.addInt(0, i);
++        row.addString(1, "value " + i);
++        session.apply(insert);
++      }
++
++      List<String> projectColumns = new ArrayList<>(1);
++      projectColumns.add("value");
++      KuduScanner scanner = client.newScannerBuilder(table)
++          .setProjectedColumnNames(projectColumns)
++          .build();
++      while (scanner.hasMoreRows()) {
++        RowResultIterator results = scanner.nextRows();
++        while (results.hasNext()) {
++          RowResult result = results.next();
++          System.out.println(result.getString(0));
++        }
++      }
++    } catch (Exception e) {
++      e.printStackTrace();
++    } finally {
++      try {
++        client.deleteTable(tableName);
++      } catch (Exception e) {
++        e.printStackTrace();
++      } finally {
++        try {
++          client.shutdown();
++        } catch (Exception e) {
++          e.printStackTrace();
++        }
++      }
++    }
++  }
++}
++

http://git-wip-us.apache.org/repos/asf/kudu/blob/c83f6eb3/examples/python/dstat-kudu/README.md
----------------------------------------------------------------------
diff --cc examples/python/dstat-kudu/README.md
index 0000000,0000000..a4f443c
new file mode 100644
--- /dev/null
+++ b/examples/python/dstat-kudu/README.md
@@@ -1,0 -1,0 +1,69 @@@
++# Kudu + dstat + Impala
++
++This is an example program that shows how to use the Kudu API in Python to
++load data generated by an external program into a new or existing Kudu table.
++
++## Prerequisites
++
++Make sure the Kudu client library is installed and the kudu Python bindings
++are available. If the client library or the Python bindings live in a
++non-standard location, point these environment variables at the appropriate
++directories:
++
++   LD_LIBRARY_PATH
++   PYTHONPATH
++
++In addition you'll need the `dstat` program; it should be available from your
++distribution's package repository.
++
++## Usage
++
++In this example the `dstat` program generates data about the system load and
++writes it to a named pipe, which the Python program then reads.
++
++To execute this script, simply run:
++
++    python kudu_dstat.py
++
++This will create a table, assuming you have a kudu-master running locally. You
++can view information about the table in the web UI at http://localhost:8051.
++The program will run until it is terminated with Ctrl-C.
++
++To drop the table in Kudu and start fresh, start the program with:
++
++    python kudu_dstat.py drop
++
++To query the data via Impala, create a new external Kudu table using the
++following command in the impala-shell:
++
++    CREATE EXTERNAL TABLE dstat (
++    `ts` BIGINT,
++    `usr` FLOAT,
++    `sys` FLOAT,
++    `idl` FLOAT,
++    `wai` FLOAT,
++    `hiq` FLOAT,
++    `siq` FLOAT,
++    `read` FLOAT,
++    `writ` FLOAT,
++    `recv` FLOAT,
++    `send` FLOAT,
++    `in` FLOAT,
++    `out` FLOAT,
++    `int` FLOAT,
++    `csw` FLOAT
++    )
++    TBLPROPERTIES(
++      'storage_handler' = 'com.cloudera.kudu.hive.KuduStorageHandler',
++      'kudu.table_name' = 'dstat',
++      'kudu.master_addresses' = '127.0.0.1:7051',
++      'kudu.key_columns' = 'ts'
++    );
++
++Now you can query your local system's load using:
++
++    -- How many rows are stored right now?
++    select count(*) from dstat;
++
++    -- Average load in 10s windows
++    select (ts - ts % 10) as mod_ts, avg(usr), avg(sys), avg(idl) from dstat group by mod_ts order by mod_ts;
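As an aside, the windowing expression in that last query can be sketched in plain Python to show what `ts - ts % 10` does; the sample rows below are invented for illustration:

```python
# Each timestamp is snapped down to the start of its 10-second window
# (ts - ts % 10), then values are averaged per window -- the same grouping
# the Impala query performs.
from collections import defaultdict

rows = [(1001, 5.0), (1004, 7.0), (1013, 9.0)]  # made-up (ts, usr) samples

windows = defaultdict(list)
for ts, usr in rows:
    windows[ts - ts % 10].append(usr)

averages = {mod_ts: sum(v) / len(v) for mod_ts, v in sorted(windows.items())}
print(averages)  # {1000: 6.0, 1010: 9.0}
```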

http://git-wip-us.apache.org/repos/asf/kudu/blob/c83f6eb3/examples/python/dstat-kudu/kudu_dstat.py
----------------------------------------------------------------------
diff --cc examples/python/dstat-kudu/kudu_dstat.py
index 0000000,0000000..f1f5bb2
new file mode 100644
--- /dev/null
+++ b/examples/python/dstat-kudu/kudu_dstat.py
@@@ -1,0 -1,0 +1,93 @@@
++import kudu
++import os
++import subprocess
++import sys
++import tempfile
++import time
++from kudu.client import Partitioning
++
++DSTAT_COL_NAMES = ["usr", "sys", "idl", "wai", "hiq", "siq", "read", "writ", "recv", "send",
++                  "in","out","int","csw"]
++
++
++def open_or_create_table(client, table, drop=False):
++  """Based on the default dstat column names, create a new table indexed by a timestamp column."""
++  exists = False
++  if client.table_exists(table):
++    exists = True
++    if drop:
++      client.delete_table(table)
++      exists = False
++
++  if not exists:
++    # Create the schema for the table, basically all float cols
++    builder = kudu.schema_builder()
++    builder.add_column("ts", kudu.int64, nullable=False, primary_key=True)
++    for col in DSTAT_COL_NAMES:
++      builder.add_column(col, kudu.float_)
++    schema = builder.build()
++
++    # Create hash partitioning buckets
++    partitioning = Partitioning().add_hash_partitions('ts', 2)
++
++    client.create_table(table, schema, partitioning)
++
++  return client.table(table)
++
++def append_row(table, line):
++  """The line is the raw string read from stdin; it is split on "," and prepended
++  with the current timestamp."""
++  data = [float(x.strip()) for x in line.split(",")]
++
++  op = table.new_insert()
++  # Convert to microseconds
++  op["ts"] = int(time.time() * 1000000)
++  for c, v in zip(DSTAT_COL_NAMES, data):
++    op[c] = v
++  return op
++
++def start_dstat():
++  tmpdir = tempfile.mkdtemp()
++  path = os.path.join(tmpdir, "dstat.pipe")
++  os.mkfifo(path)
++  proc = subprocess.Popen(["dstat", "-cdngy", "--output", "{0}".format(path)])
++  return proc.pid, path
++
++if __name__ == "__main__":
++
++  drop = False
++
++  if len(sys.argv) > 1:
++    operation = sys.argv[1]
++    if operation in ["drop"]:
++      drop = True
++
++  client = kudu.connect("127.0.0.1", 7051)
++  table = open_or_create_table(client, "dstat", drop)
++
++  # Start dstat
++  dstat_id, pipe_path = start_dstat()
++
++  try:
++    # Create file handle to read from pipe
++    fid = open(pipe_path, "r")
++
++    # Create session object
++    session = client.new_session()
++    counter = 0
++
++    # The dstat output first prints uninteresting lines; skip until we find the header
++    skip = True
++    while True:
++      line = fid.readline()
++      if line.startswith("\"usr\""):
++        skip = False
++        continue
++      if not skip:
++        session.apply(append_row(table, line))
++        counter += 1
++        if counter % 10 == 0:
++          session.flush()
++  except KeyboardInterrupt:
++    if os.path.exists(pipe_path):
++      os.remove(pipe_path)
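The parsing that `append_row` performs can be exercised on its own, without a Kudu server; the helper name, sample line, and `now` parameter below are invented for illustration:

```python
# Kudu-free sketch of append_row(): split a raw dstat CSV line into floats,
# pair each value with its column name, and key the row by the current time
# converted to microseconds.
import time

DSTAT_COL_NAMES = ["usr", "sys", "idl", "wai", "hiq", "siq", "read", "writ",
                   "recv", "send", "in", "out", "int", "csw"]

def parse_dstat_line(line, now=None):
    data = [float(x.strip()) for x in line.split(",")]
    ts = int((now if now is not None else time.time()) * 1000000)
    row = {"ts": ts}
    row.update(zip(DSTAT_COL_NAMES, data))
    return row

row = parse_dstat_line("1.0,2.0,97.0,0.0,0.0,0.0,0,0,0,0,0,0,10,20", now=5.0)
print(row["ts"], row["usr"], row["csw"])  # 5000000 1.0 20.0
```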

http://git-wip-us.apache.org/repos/asf/kudu/blob/c83f6eb3/examples/python/graphite-kudu/kudu/__init__.py
----------------------------------------------------------------------
diff --cc examples/python/graphite-kudu/kudu/__init__.py
index 0000000,0000000..34da70a
new file mode 100644
--- /dev/null
+++ b/examples/python/graphite-kudu/kudu/__init__.py
@@@ -1,0 -1,0 +1,3 @@@
++from pkgutil import extend_path
++__path__ = extend_path(__path__, __name__)
++

http://git-wip-us.apache.org/repos/asf/kudu/blob/c83f6eb3/examples/python/graphite-kudu/kudu/kudu_graphite.py
----------------------------------------------------------------------
diff --cc examples/python/graphite-kudu/kudu/kudu_graphite.py
index 0000000,0000000..86a142e
new file mode 100644
--- /dev/null
+++ b/examples/python/graphite-kudu/kudu/kudu_graphite.py
@@@ -1,0 -1,0 +1,287 @@@
++import sys, os
++
++import re
++import time
++import math
++import kudu
++
++from multiprocessing.pool import ThreadPool
++from django.conf import settings
++
++import graphite
++from graphite.intervals import Interval, IntervalSet
++from graphite.node import LeafNode, BranchNode
++from graphite.readers import FetchInProgress
++from graphite.logger import log
++
++import json
++
++KUDU_MAX_REQUESTS = 10
++KUDU_REQUEST_POOL = ThreadPool(KUDU_MAX_REQUESTS)
++
++class KuduNode(object):
++    def __init__(self):
++        self.child_nodes = []
++    
++    # A node is a leaf if it has no child nodes.
++    def isLeaf(self):
++        return len(self.child_nodes) == 0
++    
++    # Add a child node to this node.
++    def addChildNode(self, node):
++        self.child_nodes.append(node)
++    
++    # Get the child node with the specified name.
++    def getChild(self, name):
++        for node in self.child_nodes:
++            if node.name == name:
++                return node
++        return None
++    
++    def getChildren(self):
++        return self.child_nodes
++
++    
++class KuduTree(KuduNode):
++    pass
++            
++
++class KuduRegularNode(KuduNode):
++    def __init__(self, name):
++        KuduNode.__init__(self)
++        self.name = name
++    
++    def getName(self):
++        return self.name
++ 
++                
++class KuduReader(object):
++    __slots__ = ('kudu_table', 'metric_name')
++    supported = True
++
++    def __init__(self, kudu_table, metric_name):
++        self.kudu_table = kudu_table
++        self.metric_name = metric_name
++
++    def get_intervals(self):
++        return IntervalSet([Interval(0, time.time())])
++
++    def fetch(self, startTime, endTime):
++        def get_data(startTime, endTime):
++            log.info("time range %d-%d" % (startTime, endTime))
++            host, metric = self.metric_name.split("com.")
++            host += "com"
++            s = self.kudu_table.scanner()
++            s.add_predicate(s.range_predicate(0, host, host))
++            s.add_predicate(s.range_predicate(1, metric, metric))
++            s.add_predicate(s.range_predicate(2, startTime, endTime))
++            s.open()
++            values = []
++            while s.has_more_rows():
++              t = s.next_batch().as_tuples()
++              log.info("metric batch: %d" % len(t))
++              values.extend([(time, value) for (_, _, time, value) in t])
++            # TODO: project just the time and value, not host/metric!
++            values.sort()
++            values_length = len(values)
++            
++            if values_length == 0:
++                time_info = (startTime, endTime, 1)
++                datapoints = []
++                return (time_info, datapoints)
++
++            startTime = min(t[0] for t in values)
++            endTime = max(t[0] for t in values)
++            if values_length == 1:
++                time_info = (startTime, endTime, 1)
++                datapoints = [values[0][1]]
++                return (time_info, datapoints)
++            log.info("data: %s" % repr(values))
++                    
++            # 1. Calculate step (in seconds)
++            #    Step will be the lowest time delta between values, or 1 if the delta is smaller
++            step = 1
++            minDelta = None
++            
++            for i in range(0, values_length - 1):
++                (timeI, valueI) = values[i]
++                (timeIplus1, valueIplus1) = values[i + 1]
++                delta = timeIplus1 - timeI
++                
++                if (minDelta == None or delta < minDelta):
++                    minDelta = delta
++            
++            if minDelta > step:
++                step = minDelta
++            
++            # 2. Fill time info table    
++            time_info = (startTime, endTime, step)
++            
++            # 3. Create array of output points
++            number_points = int(math.ceil((endTime - startTime) / step))
++            datapoints = [None for i in range(number_points)]
++            
++            # 4. Fill array of output points
++            cur_index = 0
++            cur_value = None
++            cur_time_stamp = None
++            cur_value_used = None
++            
++            for i in range(0, number_points - 1):
++                
++                data_point_time_stamp = startTime + i * step
++                
++                (cur_time_stamp, cur_value) = values[cur_index]
++                cur_time_stamp = cur_time_stamp
++                
++                while (cur_index + 1 < values_length):
++                    (next_time_stamp, next_value) = values[cur_index + 1]
++                    if next_time_stamp > data_point_time_stamp:
++                        break
++                    (cur_value, cur_time_stamp, cur_value_used) = (next_value, next_time_stamp, False)
++                    cur_index = cur_index + 1
++                    
++                data_point_value = None
++                if (not cur_value_used and cur_time_stamp <= data_point_time_stamp):
++                    cur_value_used = True
++                    data_point_value = cur_value
++                
++                datapoints[i] = data_point_value
++     
++            log.info("data: %s" % repr(datapoints))
++            return (time_info, datapoints)
++
++        job = KUDU_REQUEST_POOL.apply_async(get_data, [startTime, endTime])
++        return FetchInProgress(job.get)
++    
++    
++class KuduFinder(object):
++    def __init__(self, kudu_table=None):
++        self.client = kudu.Client(settings.KUDU_MASTER)
++        self.kudu_table = self.client.open_table(settings.KUDU_TABLE)
++        
++    # Builds a tree of metrics from a flat list of
++    # dot-separated metric names.
++    def _fill_kudu_tree(self, metric_names):
++        tree = KuduTree()
++        
++        for metric_name in metric_names:
++            name_parts = re.split("[./]", metric_name)
++            
++            cur_parent_node = tree
++            cur_node = None
++            
++            for name_part in name_parts:
++                cur_node = cur_parent_node.getChild(name_part)
++                if cur_node == None:
++                    cur_node = KuduRegularNode(name_part)
++                    cur_parent_node.addChildNode(cur_node)
++                cur_parent_node = cur_node
++        
++        return tree
++    
++    
++    def _find_nodes_from_pattern(self, kudu_table, pattern):
++        query_parts = []
++        for part in pattern.split('.'):
++            part = part.replace('*', '.*')
++            part = re.sub(
++                r'{([^{]*)}',
++                lambda x: "(%s)" % x.groups()[0].replace(',', '|'),
++                part,
++            )
++            query_parts.append(part)
++          
++        # Request the metrics
++        t = self.client.open_table("metric_ids")
++        s = t.scanner()
++
++        # Handle a prefix pattern
++        if re.match(".+\\*", pattern):
++          prefix_match = pattern[:-1]
++          if '.com.' in prefix_match:
++            host_prefix, metric_prefix = prefix_match.split(".com.")
++            host_prefix += ".com"
++            s.add_predicate(s.range_predicate(1, metric_prefix, metric_prefix + "\xff"))
++          else:
++            host_prefix = prefix_match
++
++          s.add_predicate(s.range_predicate(0, host_prefix, host_prefix + "\xff"))
++        elif "*" not in pattern:
++          # equality match
++          host, metric = pattern.split(".com.")
++          host += ".com"
++          s.add_predicate(s.range_predicate(0, host, host))
++          s.add_predicate(s.range_predicate(1, metric, metric))
++        s.open()
++
++        metrics = []
++        while s.has_more_rows():
++          t = s.next_batch().as_tuples()
++          log.info("batch: %d" % len(t))
++          metrics.extend(t)
++        metric_names = ["%s/%s" % (host, metric) for (host, metric) in metrics]
++        # Form a tree out of them
++        metrics_tree = self._fill_kudu_tree(metric_names)    
++        
++        for node in self._find_kudu_nodes(kudu_table, query_parts, metrics_tree):
++            yield node
++    
++    def _find_kudu_nodes(self, kudu_table, query_parts, current_branch, path=''):
++        query_regex = re.compile(query_parts[0])
++        for node, node_data, node_name, node_path in self._get_branch_nodes(kudu_table, current_branch, path):
++            dot_count = node_name.count('.') + node_name.count('/')
++    
++            if dot_count:
++                node_query_regex = re.compile(r'\.'.join(query_parts[:dot_count+1]))
++            else:
++                node_query_regex = query_regex
++    
++            if node_query_regex.match(node_name):
++                if len(query_parts) == 1:
++                    yield node
++                elif not node.is_leaf:
++                    for inner_node in self._find_kudu_nodes(
++                        kudu_table,
++                        query_parts[dot_count+1:],
++                        node_data,
++                        node_path,
++                    ):
++                        yield inner_node
++    
++    
++    def _get_branch_nodes(self, kudu_table, input_branch, path):
++        results = input_branch.getChildren()
++        if results:
++            if path:
++                path += '.'
++                
++            branches = []
++            leaves = []
++            
++            for item in results:
++                if item.isLeaf():
++                    leaves.append(item)
++                else:
++                    branches.append(item)
++            
++            if (len(branches) != 0):
++                for branch in branches:
++                    node_name = branch.getName()
++                    node_path = path + node_name
++                    yield BranchNode(node_path), branch, node_name, node_path
++            if (len(leaves) != 0):
++                for leaf in leaves:
++                    node_name = leaf.getName()
++                    node_path = path + node_name
++                    reader = KuduReader(self.kudu_table, node_path)
++                    yield LeafNode(node_path, reader), leaf, node_name, node_path
++
++    def find_nodes(self, query):
++        log.info("q:" + repr(query))
++        try:
++          for node in self._find_nodes_from_pattern(self.kudu_table, query.pattern):
++              yield node
++        except Exception, e:
++          log.exception(e)
++          raise
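The glob-to-regex translation at the top of `_find_nodes_from_pattern` can be tried in isolation; the sample Graphite pattern below is invented:

```python
# Each dot-separated part of a Graphite query pattern has '*' widened to the
# regex '.*' and '{a,b}' alternation rewritten as the regex group '(a|b)',
# mirroring the loop at the start of _find_nodes_from_pattern.
import re

def graphite_parts_to_regexes(pattern):
    query_parts = []
    for part in pattern.split('.'):
        part = part.replace('*', '.*')
        part = re.sub(
            r'{([^{]*)}',
            lambda x: "(%s)" % x.groups()[0].replace(',', '|'),
            part,
        )
        query_parts.append(part)
    return query_parts

parts = graphite_parts_to_regexes("servers.{web,db}*.cpu")
print(parts)  # ['servers', '(web|db).*', 'cpu']
```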

http://git-wip-us.apache.org/repos/asf/kudu/blob/c83f6eb3/examples/python/graphite-kudu/setup.cfg
----------------------------------------------------------------------
diff --cc examples/python/graphite-kudu/setup.cfg
index 0000000,0000000..5e40900
new file mode 100644
--- /dev/null
+++ b/examples/python/graphite-kudu/setup.cfg
@@@ -1,0 -1,0 +1,2 @@@
++[wheel]
++universal = 1

http://git-wip-us.apache.org/repos/asf/kudu/blob/c83f6eb3/examples/python/graphite-kudu/setup.py
----------------------------------------------------------------------
diff --cc examples/python/graphite-kudu/setup.py
index 0000000,0000000..386edeb
new file mode 100644
--- /dev/null
+++ b/examples/python/graphite-kudu/setup.py
@@@ -1,0 -1,0 +1,26 @@@
++# coding: utf-8
++from setuptools import setup
++
++setup(
++    name='graphite-kudu',
++    version='0.0.1',
++    license='BSD',
++    author=u'Dmitry Gryzunov',
++    description=('A plugin for using graphite-web with Kudu as a backend'),
++    py_modules=('kudu.kudu_graphite',),
++    zip_safe=False,
++    include_package_data=True,
++    platforms='any',
++    classifiers=(
++        'Intended Audience :: Developers',
++        'Intended Audience :: System Administrators',
++        'License :: OSI Approved :: BSD License',
++        'Operating System :: OS Independent',
++        'Programming Language :: Python',
++        'Programming Language :: Python :: 2',
++        'Topic :: System :: Monitoring',
++    ),
++    install_requires=(
++        'python-kudu',
++    ),
++)
