streaming of binary files in PySpark

2017-05-22 Thread Yogesh Vyas
Hi,

I want to use Spark Streaming to read binary files from HDFS. The
documentation mentions using binaryRecordsStream(directory, recordLength).
But I don't understand what the record length means. Does it mean
the size of the binary file, or something else?
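
For reference, here is the call I am looking at (a sketch in Scala for brevity;
I believe PySpark's StreamingContext has the same binaryRecordsStream(directory,
recordLength) method — the directory and the 100-byte record length are made-up
values):

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf().setAppName("binary-records-stream")
val ssc = new StreamingContext(conf, Seconds(10))

// My understanding so far: each element of the stream is an Array[Byte] of
// exactly recordLength bytes, so one file holds many records -- please correct
// me if that is wrong.
val records = ssc.binaryRecordsStream("hdfs:///data/incoming", 100)
records.foreachRDD(rdd => println(s"got ${rdd.count()} records"))

ssc.start()
ssc.awaitTermination()
```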


Regards,
Yogesh


Re: Spark Streaming: Custom Receiver OOM consistently

2017-05-22 Thread kant kodali
Well, there are a few things here.

1. What Spark version are you using?
2. You said there is an OOM error, but what cause appears in the
log message or stack trace? OOM can happen for various reasons, and the JVM
usually specifies the cause in the error message.
3. What are the driver and executor memory settings?
4. What is the receive throughput per second, and what is the average
message size?
5. What OS are you using?

StorageLevel.MEMORY_AND_DISK_SER_2 means that the data the receiver accepts
is stored serialized, replicated to a second worker node, and spilled to disk
when it does not fit in memory.
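
For reference, a minimal sketch of a push-style custom receiver using that
storage level (the MessageSource trait below is a stand-in for whatever client
you actually use, e.g. JMS — it is not taken from your code):

```scala
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.receiver.Receiver

// Stand-in for the real messaging client.
trait MessageSource extends Serializable {
  def onMessage(handler: String => Unit): Unit
  def close(): Unit
}

class PushReceiver(newSource: () => MessageSource)
    extends Receiver[String](StorageLevel.MEMORY_AND_DISK_SER_2) {

  @volatile private var source: MessageSource = _

  override def onStart(): Unit = {
    source = newSource()
    // Every delivered message is handed to Spark via store(); with *_SER_2 it
    // is kept serialized, replicated to a second node, and spilled to disk
    // when memory runs short.
    source.onMessage(msg => store(msg))
  }

  override def onStop(): Unit = {
    if (source != null) source.close()
  }
}
```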




On Mon, May 22, 2017 at 5:20 PM, Manish Malhotra <
manish.malhotra.w...@gmail.com> wrote:

> thanks Alonso,
>
> Sorry, but there are some security reservations.
>
> But you can assume the receiver is equivalent to a JMS-based custom
> receiver, where we register a message listener and each message delivered
> by JMS is stored by calling store from the listener.
>
>
> Something like : https://github.com/tbfenet/spark-jms-receiver/blob/
> master/src/main/scala/org/apache/spark/streaming/jms/JmsReceiver.scala
>
> The difference is that this JMS receiver uses a block generator and then
> calls store, whereas I call store directly when I receive a message and
> don't use a block generator.
> Not sure if that is something that would make the memory balloon.
>
> Btw, I also run the same message consumer code standalone and have never
> seen this memory issue.
>
> On Sun, May 21, 2017 at 10:20 AM, Alonso Isidoro Roman wrote:
>
>> could you share the code?
>>
>> Alonso Isidoro Roman
>> about.me/alonso.isidoro.roman
>>
>>
>> 2017-05-20 7:54 GMT+02:00 Manish Malhotra:
>>
>>> Hello,
>>>
>>> I have implemented a Java-based custom receiver, which consumes from a
>>> messaging system, say JMS.
>>> Once a message is received, I call store(object) ... I'm storing a Spark
>>> Row object.
>>>
>>> It runs for around 8 hrs and then goes OOM, and the OOM happens on the
>>> receiver nodes.
>>> I also tried running multiple receivers to distribute the load, but
>>> faced the same issue.
>>>
>>> We must be doing something fundamentally wrong in how we tell the custom
>>> receiver/Spark to release memory, but I'm not able to crack it, at least
>>> not yet.
>>>
>>> Any help is appreciated!!
>>>
>>> Regards,
>>> Manish
>>>
>>>
>>
>


Re: Bizarre UI Behavior after migration

2017-05-22 Thread Miles Crawford
Well, what's happening here is that jobs become "un-finished" - they
complete, and then later on pop back into the "Active" section showing a
small number of complete/in-progress tasks.

In my screenshot, Job #1 completed as normal, and then later on switched
back to active with only 92 tasks... it never seems to change again, it's
stuck in this frozen, active state.


On Mon, May 22, 2017 at 12:50 PM, Vadim Semenov  wrote:

> I believe it shows only the tasks that have actually been executed; if
> there were tasks with no data, they don't get reported.
>
> I might be mistaken; if somebody has a good explanation, I would also like
> to hear it.
>
> On Fri, May 19, 2017 at 5:45 PM, Miles Crawford 
> wrote:
>
>> Hey ya'll,
>>
>> Trying to migrate from Spark 1.6.1 to 2.1.0.
>>
>> I use EMR, and launched a new cluster using EMR 5.5, which runs spark
>> 2.1.0.
>>
>> I updated my dependencies, and fixed a few API changes related to
>> accumulators, and presto! my application was running on the new cluster.
>>
>> But the application UI shows crazy output:
>> https://www.dropbox.com/s/egtj1056qeudswj/sparkwut.png?dl=0
>>
>> The applications seem to complete successfully, but I was wondering if
>> anyone has an idea of what might be going wrong?
>>
>> Thanks,
>> -Miles
>>
>
>


Re: Spark Streaming: Custom Receiver OOM consistently

2017-05-22 Thread Manish Malhotra
thanks Alonso,

Sorry, but there are some security reservations.

But you can assume the receiver is equivalent to a JMS-based custom
receiver, where we register a message listener and each message delivered
by JMS is stored by calling store from the listener.


Something like :
https://github.com/tbfenet/spark-jms-receiver/blob/master/src/main/scala/org/apache/spark/streaming/jms/JmsReceiver.scala

The difference is that this JMS receiver uses a block generator and then
calls store, whereas I call store directly when I receive a message and
don't use a block generator.
Not sure if that is something that would make the memory balloon.

Btw, I also run the same message consumer code standalone and have never
seen this memory issue.

On Sun, May 21, 2017 at 10:20 AM, Alonso Isidoro Roman wrote:

> could you share the code?
>
> Alonso Isidoro Roman
> about.me/alonso.isidoro.roman
>
>
> 2017-05-20 7:54 GMT+02:00 Manish Malhotra:
>
>> Hello,
>>
>> I have implemented a Java-based custom receiver, which consumes from a
>> messaging system, say JMS.
>> Once a message is received, I call store(object) ... I'm storing a Spark
>> Row object.
>>
>> It runs for around 8 hrs and then goes OOM, and the OOM happens on the
>> receiver nodes.
>> I also tried running multiple receivers to distribute the load, but faced
>> the same issue.
>>
>> We must be doing something fundamentally wrong in how we tell the custom
>> receiver/Spark to release memory, but I'm not able to crack it, at least
>> not yet.
>>
>> Any help is appreciated!!
>>
>> Regards,
>> Manish
>>
>>
>


Re: Convert camelCase to snake_case when saving Dataframe/Dataset to parquet?

2017-05-22 Thread Mike Wheeler
Cool. Thanks a lot in advance.

On Mon, May 22, 2017 at 2:12 PM, Bryan Jeffrey 
wrote:

> Mike,
>
> I have code to do that. I'll share it tomorrow.
>
> Get Outlook for Android 
>
>
>
>
> On Mon, May 22, 2017 at 4:53 PM -0400, "Mike Wheeler" <
> rotationsymmetr...@gmail.com> wrote:
>
> Hi Spark User,
>>
>> For Scala case classes, we usually use camelCase (carType) for member
>> fields. However, many data systems use snake_case (car_type) for column
>> names. When saving a Dataset of a case class to parquet, is there any way to
>> automatically convert camelCase to snake_case (carType -> car_type)?
>>
>> Thanks,
>>
>> Mike
>>
>>
>>


Re: Convert camelCase to snake_case when saving Dataframe/Dataset to parquet?

2017-05-22 Thread Bryan Jeffrey
Mike, 




I have code to do that. I'll share it tomorrow. 
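
Until then, a rough sketch of one generic approach (this is not the code I
mentioned above; the usage comment at the bottom assumes a hypothetical
Dataset named cars):

```scala
import org.apache.spark.sql.DataFrame

// "carType" -> "car_type"
def toSnakeCase(name: String): String =
  name.replaceAll("([a-z0-9])([A-Z])", "$1_$2").toLowerCase

// Rename every top-level column before writing; nested struct fields would
// need extra handling.
def withSnakeCaseColumns(df: DataFrame): DataFrame =
  df.columns.foldLeft(df)((acc, c) => acc.withColumnRenamed(c, toSnakeCase(c)))

// usage: withSnakeCaseColumns(cars.toDF()).write.parquet("/path/to/output")
```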




Get Outlook for Android







On Mon, May 22, 2017 at 4:53 PM -0400, "Mike Wheeler" 
 wrote:










Hi Spark User,
For Scala case classes, we usually use camelCase (carType) for member fields.
However, many data systems use snake_case (car_type) for column names. When
saving a Dataset of a case class to parquet, is there any way to automatically
convert camelCase to snake_case (carType -> car_type)?
Thanks,
Mike









Convert camelCase to snake_case when saving Dataframe/Dataset to parquet?

2017-05-22 Thread Mike Wheeler
Hi Spark User,

For Scala case classes, we usually use camelCase (carType) for member fields.
However, many data systems use snake_case (car_type) for column names. When
saving a Dataset of a case class to parquet, is there any way to
automatically convert camelCase to snake_case (carType -> car_type)?

Thanks,

Mike


Re: Bizarre UI Behavior after migration

2017-05-22 Thread Vadim Semenov
I believe it shows only the tasks that have actually been executed; if
there were tasks with no data, they don't get reported.

I might be mistaken; if somebody has a good explanation, I would also like to
hear it.

On Fri, May 19, 2017 at 5:45 PM, Miles Crawford  wrote:

> Hey ya'll,
>
> Trying to migrate from Spark 1.6.1 to 2.1.0.
>
> I use EMR, and launched a new cluster using EMR 5.5, which runs spark
> 2.1.0.
>
> I updated my dependencies, and fixed a few API changes related to
> accumulators, and presto! my application was running on the new cluster.
>
> But the application UI shows crazy output:
> https://www.dropbox.com/s/egtj1056qeudswj/sparkwut.png?dl=0
>
> The applications seem to complete successfully, but I was wondering if
> anyone has an idea of what might be going wrong?
>
> Thanks,
> -Miles
>


Broadcasted Object is empty in executors.

2017-05-22 Thread Pedro Tuero
Hi,
I'm using Spark 2.1.0 on AWS EMR with the Kryo serializer.

I'm broadcasting a Java class:

import com.google.common.base.Splitter;
import com.google.common.collect.HashMultimap;
import com.google.common.collect.HashMultiset;
import com.google.common.collect.Multimap;
import com.google.common.collect.Multiset;
import com.google.common.collect.SetMultimap;
import com.google.common.collect.Sets;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.Set;

public class NameMatcher {

    private static final Logger LOG = LoggerFactory.getLogger(NameMatcher.class);
    private final Splitter splitter;
    private final SetMultimap<String, IdNamed> itemsByWord;
    private final Multiset<IdNamed> wordCount;

    private NameMatcher(Builder builder) {
        splitter = builder.splitter;
        itemsByWord = cloneMultiMap(builder.itemsByWord);
        wordCount = cloneMultiSet(builder.wordCount);
        LOG.info("Matcher itemsByWorld size: {}", itemsByWord.size());
        LOG.info("Matcher wordCount size: {}", wordCount.size());
    }

    private <T> Multiset<T> cloneMultiSet(Multiset<T> multiset) {
        Multiset<T> result = HashMultiset.create();
        result.addAll(multiset);
        return result;
    }

    private <K, V> SetMultimap<K, V> cloneMultiMap(Multimap<K, V> multimap) {
        SetMultimap<K, V> result = HashMultimap.create();
        result.putAll(multimap);
        return result;
    }

    public Set<IdNamed> match(CharSequence text) {
        LOG.info("itemsByWorld Keys {}", itemsByWord.keys());
        LOG.info("QueryMatching: {}", text);
        Multiset<IdNamed> counter = HashMultiset.create();
        Set<IdNamed> result = Sets.newHashSet();
        for (String word : Sets.newHashSet(splitter.split(text))) {
            if (itemsByWord.containsKey(word)) {
                for (IdNamed item : itemsByWord.get(word)) {
                    counter.add(item);
                    if (wordCount.count(item) == counter.count(item)) {
                        result.add(item);
                    }
                }
            }
        }
        return result;
    }
}

So the logs in the constructor are OK:
LOG.info("Matcher itemsByWorld size: {}", itemsByWord.size());
prints the expected sizes. But when calling
nameMatcher.getValue().match(...
in an RDD transformation, the log line in the match method,
 LOG.info("itemsByWorld Keys {}", itemsByWord.keys());
prints an empty list.

This works fine running locally on my computer, but fails with no matches
when running on AWS EMR.
I usually broadcast objects and maps with no problems.
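
One thing I still need to rule out: if I understand Kryo correctly, its default
field serializer skips transient fields, and Guava collections like HashMultimap
keep their contents in transient fields that are rebuilt in readObject, so a
Kryo round trip can come back empty. A minimal, self-contained check (a sketch,
independent of my job):

```scala
import com.esotericsoftware.kryo.Kryo
import com.esotericsoftware.kryo.io.{Input, Output}
import com.google.common.collect.HashMultimap
import java.io.{ByteArrayInputStream, ByteArrayOutputStream}

object KryoRoundTripCheck {
  def main(args: Array[String]): Unit = {
    val original = HashMultimap.create[String, String]()
    original.put("hello", "world")

    // Round-trip with plain Kryo (roughly what a Kryo-configured Spark does
    // when it ships a broadcast value to executors).
    val kryo = new Kryo()
    kryo.setRegistrationRequired(false)
    val bytes = new ByteArrayOutputStream()
    val out = new Output(bytes)
    kryo.writeClassAndObject(out, original)
    out.close()

    val in = new Input(new ByteArrayInputStream(bytes.toByteArray))
    val copy = kryo.readClassAndObject(in)
    in.close()

    // If Kryo cannot handle the Guava type out of the box, the copy may come
    // back empty (or the read may fail); registering dedicated serializers
    // for Guava collections would be the usual remedy.
    println(s"original = $original, copy = $copy")
  }
}
```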
Can anyone give me a clue about what's happening here?
Thank you very much,
Pedro.


Re: couple naive questions on Spark Structured Streaming

2017-05-22 Thread kant kodali
Hi Burak,

My response is inline.

Thanks a lot!

On Mon, May 22, 2017 at 9:26 AM, Burak Yavuz  wrote:

> Hi Kant,
>
>>
>>
>> 1. Can we use Spark Structured Streaming for stateless transformations
>> just like we would do with DStreams, or is Spark Structured Streaming only
>> meant for stateful computations?
>>
>
> Of course you can do stateless transformations. Any map, filter, select
> type of transformation is stateless. Aggregations are generally stateful.
> You could also perform arbitrary stateless aggregations with
> "flatMapGroups" or make them stateful with "flatMapGroupsWithState".
>

*Got it. So Spark Structured Streaming does both stateful and stateless
transformations. In that case I am assuming the DStreams API will be deprecated?
How about groupBy? That is stateful, right?*

>
>
>
>> 2. When we use groupBy and window operations for event-time processing
>> and specify a watermark, does this mean the timestamp field in each message
>> is compared to the processing time of that machine/node, discarding
>> events that are later than the specified threshold? If we don't specify a
>> watermark, I am assuming the processing time won't come into the picture. Is
>> that right? Just trying to understand the interplay between processing time
>> and event time when we do event-time processing.
>>
> Watermarks are tracked with respect to the event time of your data, not
> the processing time of the machine. Please take a look at the blog below
> for more details:
> https://databricks.com/blog/2017/05/08/event-time-aggregation-watermarking-apache-sparks-structured-streaming.html
>

*Thanks for this article. I am not sure if I am interpreting the article
incorrectly, but it looks like the article shows there is indeed a
relationship between processing time and event time. For example,*
*say I set a watermark of 10 minutes and:*

*1. I send one message which has an event timestamp of May 22 2017 1 PM and
a processing time of May 22 2017 1:02 PM*


*2. I send another message which has an event time of May 22 2017 12:55 PM
and a processing time of May 23 2017 1 PM*

*Simply put, say I am just faking my event timestamps to meet the cutoff
specified by the watermark but I am actually sending them a day or a week
later. How does Spark Structured Streaming handle this case?*

>
>
> Best,
> Burak
>


Re: Is there a Kafka sink for Spark Structured Streaming

2017-05-22 Thread Michael Armbrust
There is an RC here.  Please test!

http://apache-spark-developers-list.1001551.n3.nabble.com/VOTE-Apache-Spark-2-2-0-RC2-td21497.html

On Fri, May 19, 2017 at 4:07 PM, kant kodali  wrote:

> Hi Patrick,
>
> I am using 2.1.1 and I tried the above code you sent and I get
>
> "java.lang.UnsupportedOperationException: Data source kafka does not
> support streamed writing"
>
> so yeah, this probably works only from Spark 2.2 onwards. I am not sure
> when it will be officially released.
>
> Thanks!
>
> On Fri, May 19, 2017 at 8:39 AM,  wrote:
>
>> Hi!
>>
>> Is this possible possible in spark 2.1.1?
>>
>> Sent from my iPhone
>>
>> On May 19, 2017, at 5:55 AM, Patrick McGloin 
>> wrote:
>>
>> # Write key-value data from a DataFrame to a Kafka topic specified in an 
>> option
>> query = df \
>>   .selectExpr("CAST(userId AS STRING) AS key", "to_json(struct(*)) AS 
>> value") \
>>   .writeStream \
>>   .format("kafka") \
>>   .option("kafka.bootstrap.servers", "host1:port1,host2:port2") \
>>   .option("topic", "topic1") \
>>   .option("checkpointLocation", "/path/to/HDFS/dir") \
>>   .start()
>>
>> Described here:
>>
>> https://databricks.com/blog/2017/04/26/processing-data-in-apache-kafka-with-structured-streaming-in-apache-spark-2-2.html
>>
>>
>>
>> On 19 May 2017 at 10:45,  wrote:
>>
>>> Is there a Kafka sink for Spark Structured Streaming ?
>>>
>>> Sent from my iPhone
>>>
>>
>>
>


Re: Are tachyon and akka removed from 2.1.1 please

2017-05-22 Thread vincent gromakowski
Akka has been replaced by Netty in 1.6.

On 22 May 2017 at 15:25, "Chin Wei Low" wrote:

> I think akka has been removed since 2.0.
>
> On 22 May 2017 10:19 pm, "Gene Pang"  wrote:
>
>> Hi,
>>
>> Tachyon has been renamed to Alluxio. Here is the documentation for
>> running Alluxio with Spark.
>>
>> Hope this helps,
>> Gene
>>
>> On Sun, May 21, 2017 at 6:15 PM, 萝卜丝炒饭 <1427357...@qq.com> wrote:
>>
>>> Hi all,
>>> I read some papers about the source code; the papers are based on version
>>> 1.2 and refer to Tachyon and Akka. When I read the 2.1 code, I cannot find
>>> the code for Akka or Tachyon.
>>>
>>> Have Tachyon and Akka been removed from 2.1.1?
>>>
>>
>>


Re: couple naive questions on Spark Structured Streaming

2017-05-22 Thread Burak Yavuz
Hi Kant,

>
>
> 1. Can we use Spark Structured Streaming for stateless transformations
> just like we would do with DStreams, or is Spark Structured Streaming only
> meant for stateful computations?
>

Of course you can do stateless transformations. Any map, filter, select
type of transformation is stateless. Aggregations are generally stateful.
You could also perform arbitrary stateless aggregations with "flatMapGroups"
or make them stateful with "flatMapGroupsWithState".



> 2. When we use groupBy and window operations for event-time processing and
> specify a watermark, does this mean the timestamp field in each message is
> compared to the processing time of that machine/node, discarding
> events that are later than the specified threshold? If we don't specify a
> watermark, I am assuming the processing time won't come into the picture. Is
> that right? Just trying to understand the interplay between processing time
> and event time when we do event-time processing.
>
> Watermarks are tracked with respect to the event time of your data, not
the processing time of the machine. Please take a look at the blog below
for more details
https://databricks.com/blog/2017/05/08/event-time-aggregation-watermarking-apache-sparks-structured-streaming.html
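
For concreteness, a minimal end-to-end sketch of an event-time window count
with a watermark (it uses the toy rate source rather than your pipeline; any
streaming DataFrame with a timestamp column works the same way, and the
10-minute watermark / 5-minute window are arbitrary values):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.window

val spark = SparkSession.builder().appName("watermark-sketch").getOrCreate()
import spark.implicits._

// The rate source emits rows with (timestamp, value) columns.
val events = spark.readStream.format("rate").option("rowsPerSecond", 10).load()

// Stateful windowed count keyed on event time. Rows whose event time falls
// more than 10 minutes behind the maximum event time seen so far may be
// dropped, and state for windows older than the watermark is cleaned up.
val counts = events
  .withWatermark("timestamp", "10 minutes")
  .groupBy(window($"timestamp", "5 minutes"))
  .count()

val query = counts.writeStream
  .outputMode("update")
  .format("console")
  .start()

query.awaitTermination()
```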

Best,
Burak


Re: RMSE recommender system

2017-05-22 Thread Chen, Mingrui
Hi,


Try the most popular evaluation metrics for a recommendation system: recall,
precision, and F-score. A training RMSE of 0.08 against a test RMSE of 2.345 is
a large gap and usually points to overfitting. Improving prediction performance
depends on how good the features you use are and whether you choose a proper
model. It's hard to tell more without further details.
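
For example, a minimal sketch of ranking-style evaluation using Spark's built-in
RankingMetrics (toy data only — substitute the top-N recommendations from your
model and a held-out set of items each user actually interacted with):

```scala
import org.apache.spark.mllib.evaluation.RankingMetrics
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("ranking-eval-sketch").getOrCreate()
val sc = spark.sparkContext

// One entry per user: (recommended item ids, ground-truth item ids).
val perUser = sc.parallelize(Seq(
  (Array(1, 2, 3, 4, 5), Array(1, 3, 7)),
  (Array(9, 8, 7, 6, 5), Array(2, 4))
))

val metrics = new RankingMetrics(perUser)
println(s"precision@5 = ${metrics.precisionAt(5)}")
println(s"mean average precision = ${metrics.meanAveragePrecision}")
```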



From: Arun 
Sent: Saturday, May 20, 2017 9:48:10 PM
To: user@spark.apache.org
Subject: RMSE recommender system


Hi all,

I am new to machine learning.

I am working on a recommender system. For the training dataset the RMSE is 0.08,
while on the test data it is 2.345.

What conclusion should I draw, and what steps can I take to improve it?



Sent from Samsung tablet


Re: KMeans Clustering is not Reproducible

2017-05-22 Thread Anastasios Zouzias
Hi Christoph,

Take a look at this; you might end up having a similar case:

http://www.spark.tc/using-sparks-cache-for-correctness-not-just-performance/

If this is not the case, then I agree with you that k-means should be
partitioning-agnostic (although I haven't checked the code yet).
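
For instance, you could rule out recomputation of the random columns by
materializing the assembled data once before both fits (a sketch reusing the
randomData, vecAssembler, and KMeans definitions from your snippet quoted
below):

```scala
// Materialize the assembled training data so both fits see exactly the same
// rows, then compare the costs again.
val materialized = vecAssembler.transform(randomData).cache()
materialized.count()  // force the cache to be populated

val kmeansFixed = new KMeans().setK(10).setSeed(9876L)
println("1 partition:  " +
  kmeansFixed.fit(materialized.repartition(1)).computeCost(materialized))
println("4 partitions: " +
  kmeansFixed.fit(materialized.repartition(4)).computeCost(materialized))
```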

Best,
Anastasios


On Mon, May 22, 2017 at 3:42 PM, Christoph Bruecke 
wrote:

> Hi,
>
> I’m trying to figure out how to use KMeans in order to achieve
> reproducible results. I have found that running the same kmeans instance on
> the same data, with different partitioning will produce different
> clusterings.
>
> Given a simple KMeans run with fixed seed returns different results on the
> same
> training data, if the training data is partitioned differently.
>
> Consider the following example. The same KMeans clustering set up is run on
> identical data. The only difference is the partitioning of the training
> data
> (one partition vs. four partitions).
>
> ```
> import org.apache.spark.sql.DataFrame
> import org.apache.spark.ml.clustering.KMeans
> import org.apache.spark.ml.feature.VectorAssembler
>
> // generate random data for clustering
> val randomData = spark.range(1, 1000).withColumn("a",
> rand(123)).withColumn("b", rand(321))
>
> val vecAssembler = new VectorAssembler().setInputCols(Array("a",
> "b")).setOutputCol("features")
>
> val data = vecAssembler.transform(randomData)
>
> // instantiate KMeans with fixed seed
> val kmeans = new KMeans().setK(10).setSeed(9876L)
>
> // train the model with different partitioning
> val dataWith1Partition = data.repartition(1)
> println("1 Partition: " + kmeans.fit(dataWith1Partition).computeCost(
> dataWith1Partition))
>
> val dataWith4Partition = data.repartition(4)
> println("4 Partition: " + kmeans.fit(dataWith4Partition).computeCost(
> dataWith4Partition))
> ```
>
> I get the following related cost
>
> ```
> 1 Partition: 16.028212597888057
> 4 Partition: 16.14758460544976
> ```
>
> What I want to achieve is that repeated computations of the KMeans
> Clustering should yield identical result on identical training data,
> regardless of the partitioning.
>
> Looking through the Spark source code, I guess the cause is the
> initialization method of KMeans which in turn uses the `takeSample` method,
> which does not seem to be partition agnostic.
>
> Is this behaviour expected? Is there anything I could do to achieve
> reproducible results?
>
> Best,
> Christoph
> -
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>
>


-- 
-- Anastasios Zouzias



Re: Are tachyon and akka removed from 2.1.1 please

2017-05-22 Thread Chin Wei Low
I think akka has been removed since 2.0.

On 22 May 2017 10:19 pm, "Gene Pang"  wrote:

> Hi,
>
> Tachyon has been renamed to Alluxio. Here is the documentation for
>> running Alluxio with Spark.
>
> Hope this helps,
> Gene
>
> On Sun, May 21, 2017 at 6:15 PM, 萝卜丝炒饭 <1427357...@qq.com> wrote:
>
>> Hi all,
>> I read some papers about the source code; the papers are based on version
>> 1.2 and refer to Tachyon and Akka. When I read the 2.1 code, I cannot find
>> the code for Akka or Tachyon.
>>
>> Have Tachyon and Akka been removed from 2.1.1?
>>
>
>


Re: Are tachyon and akka removed from 2.1.1 please

2017-05-22 Thread Gene Pang
Hi,

Tachyon has been renamed to Alluxio. Here is the documentation for running
Alluxio with Spark.

Hope this helps,
Gene

On Sun, May 21, 2017 at 6:15 PM, 萝卜丝炒饭 <1427357...@qq.com> wrote:

> Hi all,
> I read some papers about the source code; the papers are based on version
> 1.2 and refer to Tachyon and Akka. When I read the 2.1 code, I cannot find
> the code for Akka or Tachyon.
>
> Have Tachyon and Akka been removed from 2.1.1?
>


KMeans Clustering is not Reproducible

2017-05-22 Thread Christoph Bruecke
Hi,

I’m trying to figure out how to use KMeans in order to achieve reproducible 
results. I have found that running the same kmeans instance on the same data, 
with different partitioning will produce different clusterings.

Given a simple KMeans run with fixed seed returns different results on the same
training data, if the training data is partitioned differently.

Consider the following example. The same KMeans clustering set up is run on
identical data. The only difference is the partitioning of the training data
(one partition vs. four partitions).

```
import org.apache.spark.sql.DataFrame
import org.apache.spark.ml.clustering.KMeans
import org.apache.spark.ml.feature.VectorAssembler

// generate random data for clustering
val randomData = spark.range(1, 1000).withColumn("a", 
rand(123)).withColumn("b", rand(321))

val vecAssembler = new VectorAssembler().setInputCols(Array("a", 
"b")).setOutputCol("features")

val data = vecAssembler.transform(randomData)

// instantiate KMeans with fixed seed
val kmeans = new KMeans().setK(10).setSeed(9876L)

// train the model with different partitioning
val dataWith1Partition = data.repartition(1)
println("1 Partition: " + 
kmeans.fit(dataWith1Partition).computeCost(dataWith1Partition))

val dataWith4Partition = data.repartition(4)
println("4 Partition: " + 
kmeans.fit(dataWith4Partition).computeCost(dataWith4Partition))
```

I get the following related cost

```
1 Partition: 16.028212597888057
4 Partition: 16.14758460544976
```

What I want to achieve is that repeated computations of the KMeans Clustering 
should yield identical result on identical training data, regardless of the 
partitioning.

Looking through the Spark source code, I guess the cause is the initialization 
method of KMeans which in turn uses the `takeSample` method, which does not 
seem to be partition agnostic.

Is this behaviour expected? Is there anything I could do to achieve 
reproducible results?

Best,
Christoph
-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org



Spark on Mesos failure, when launching a simple job

2017-05-22 Thread ved_kpl
I have been trying to learn spark on mesos, but the spark-shell just keeps on
ignoring the offers. Here is my setup:

All the components are in the same subnet

- 1 mesos master  on EC2 instance (t2.micro)

  command: `mesos-master --work_dir=/tmp/abc --hostname=<master hostname>`

- 2 mesos agents (each with 4 cores, 16 GB ram and 30 GB disk space)

   command: `mesos-slave --master="<master IP>:5050"
--hostname="<agent hostname>" --work_dir=/tmp/abc`

- 1 spark-shell (client) on ec2 instance (t2.micro)
  I have set the following environment variables on this instance before
launching the spark-shell

 export MESOS_NATIVE_JAVA_LIBRARY=/usr/lib/libmesos.so 
 export
SPARK_EXECUTOR_URI=https://d3kbcqa49mib13.cloudfront.net/spark-2.1.1-bin-hadoop2.7.tgz

  and then I launch the spark-shell as follows:

./bin/spark-shell --master mesos://172.31.1.93:5050 (with private IP
of the master)

Once the spark-shell is up, I run the simplest program possible
  
val f = sc.textFile ("/tmp/ok.txt");
f.count()

and it fails. Here are the logs

https://pastebin.ca/3815427

Note: I have not set SPARK_LOCAL_IP on the spark shell.
I am using Mesos 1.2.0 and Spark 2.1.1 on Ubuntu 16.04. I have verified, by
writing a small Node.js-based HTTP client, that the offers from the master
look fine. What could possibly be going wrong here?







--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-on-Mesos-failure-when-launching-a-simple-job-tp28701.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org



unsubscribe

2017-05-22 Thread 信息安全部
unsubscribe