Mailing lists matching spark.apache.org

commits@spark.apache.org
dev@spark.apache.org
issues@spark.apache.org
reviews@spark.apache.org
user@spark.apache.org


dev-subscr...@spark.apache.org

2016-08-30 Thread huanqinghappy
dev-subscr...@spark.apache.org

subscribe user@spark.apache.org

2020-09-09 Thread Joan
I want to subscribe to user@spark.apache.org, thanks a lot.

[jira] [Updated] (SPARK-42642) Make Python the first code example tab

2023-03-01 Thread Allan Folting (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-42642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allan Folting updated SPARK-42642:
--
Description: 
Python is the most approachable and most popular language, so it should be the
default language in code examples. This change makes Python the first code example
tab consistently across the documentation, where applicable.
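
For context, the Spark docs render per-language snippets as tabs generated from
markup in the docs' markdown sources, so a change like this is largely a
reordering of those tab blocks. A sketch of the codetabs convention those pages
use, with Python moved into the first position (the include_example paths here
are illustrative, not taken from this issue):

    <div class="codetabs">

    <div data-lang="python" markdown="1">
    {% include_example python/sql/basic.py %}
    </div>

    <div data-lang="scala" markdown="1">
    {% include_example scala/org/apache/spark/examples/sql/SparkSQLExample.scala %}
    </div>

    </div>

Because the first div becomes the initially selected tab, moving the Python
block ahead of the Scala and Java blocks is what makes it the default on each
rendered page.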

This is continuing the work started with:

https://issues.apache.org/jira/browse/SPARK-42493

where these two pages were updated:

[https://spark.apache.org/docs/latest/sql-getting-started.html]

[https://spark.apache.org/docs/latest/sql-data-sources-load-save-functions.html]

 

Pages being updated now:

[https://spark.apache.org/docs/latest/ml-classification-regression.html]

[https://spark.apache.org/docs/latest/ml-clustering.html]

[https://spark.apache.org/docs/latest/ml-collaborative-filtering.html]

[https://spark.apache.org/docs/latest/ml-datasource.html]

[https://spark.apache.org/docs/latest/ml-features.html]

[https://spark.apache.org/docs/latest/ml-frequent-pattern-mining.html]

[https://spark.apache.org/docs/latest/ml-migration-guide.html]

[https://spark.apache.org/docs/latest/ml-pipeline.html]

[https://spark.apache.org/docs/latest/ml-statistics.html]

[https://spark.apache.org/docs/latest/ml-tuning.html]

 

[https://spark.apache.org/docs/latest/mllib-clustering.html]

[https://spark.apache.org/docs/latest/mllib-collaborative-filtering.html]

[https://spark.apache.org/docs/latest/mllib-data-types.html]

[https://spark.apache.org/docs/latest/mllib-decision-tree.html]

[https://spark.apache.org/docs/latest/mllib-dimensionality-reduction.html]

[https://spark.apache.org/docs/latest/mllib-ensembles.html]

[https://spark.apache.org/docs/latest/mllib-evaluation-metrics.html]

[https://spark.apache.org/docs/latest/mllib-feature-extraction.html]

[https://spark.apache.org/docs/latest/mllib-frequent-pattern-mining.html]

[https://spark.apache.org/docs/latest/mllib-isotonic-regression.html]

[https://spark.apache.org/docs/latest/mllib-linear-methods.html]

[https://spark.apache.org/docs/latest/mllib-naive-bayes.html]

[https://spark.apache.org/docs/latest/mllib-statistics.html]

 

[https://spark.apache.org/docs/latest/quick-start.html]

 

[https://spark.apache.org/docs/latest/rdd-programming-guide.html]

 

[https://spark.apache.org/docs/latest/sql-data-sources-avro.html]

[https://spark.apache.org/docs/latest/sql-data-sources-binaryFile.html]

[https://spark.apache.org/docs/latest/sql-data-sources-csv.html]

[https://spark.apache.org/docs/latest/sql-data-sources-generic-options.html]

[https://spark.apache.org/docs/latest/sql-data-sources-hive-tables.html]

[https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html]

[https://spark.apache.org/docs/latest/sql-data-sources-json.html]

[https://spark.apache.org/docs/latest/sql-data-sources-parquet.html]

sql-data-sources-protobuf.html

[https://spark.apache.org/docs/latest/sql-data-sources-text.html]

[https://spark.apache.org/docs/latest/sql-migration-guide.html]

[https://spark.apache.org/docs/latest/sql-performance-tuning.html]

[https://spark.apache.org/docs/latest/sql-ref-datatypes.html]

 

[https://spark.apache.org/docs/latest/streaming-kinesis-integration.html]

[https://spark.apache.org/docs/latest/streaming-programming-guide.html]

 

[https://spark.apache.org/docs/latest/structured-streaming-kafka-integration.html]

[https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html]

  was:
Python is the most approachable and most popular language so it should be the 
default language in code examples.

Continuing the work started with:

https://issues.apache.org/jira/browse/SPARK-42493

Making Python the first code example tab consistently across the documentation, 
where applicable.

Pages being updated:
[https://spark.apache.org/docs/latest/rdd-programming-guide.html]

[https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html]

[https://spark.apache.org/docs/latest/streaming-programming-guide.html]

[https://spark.apache.org/docs/latest/ml-statistics.html]

[https://spark.apache.org/docs/latest/ml-datasource.html]

[https://spark.apache.org/docs/latest/ml-pipeline.html]

[https://spark.apache.org/docs/latest/ml-features.html]

[https://spark.apache.org/docs/latest/ml-classification-regression.html]

[https://spark.apache.org/docs/latest/ml-clustering.html]

[https://spark.apache.org/docs/latest/ml-collaborative-filtering.html]

[https://spark.apache.org/docs/latest/ml-frequent-pattern-mining.html]

[https://spark.apache.org/docs/latest/ml-tuning.html]

[https://spark.apache.org/docs/latest/ml-migration-guide.html]

[https://spark.apache.org/docs/latest/mllib-data-types.html]

[https://spark.apache.org/docs/latest/mllib-statistics.html]

[https://spark.apache.org/docs/latest/mllib-linear-methods.html]

[https

[jira] [Updated] (SPARK-42642) Make Python the first code example tab

2023-03-01 Thread Allan Folting (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-42642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allan Folting updated SPARK-42642:
--
Description: 
Python is the most approachable and most popular language, so it should be the
default language in code examples. This change makes Python the first code example
tab consistently across the documentation, where applicable.

This is continuing the work started with:

https://issues.apache.org/jira/browse/SPARK-42493

where these two pages were updated:

[https://spark.apache.org/docs/latest/sql-getting-started.html]

[https://spark.apache.org/docs/latest/sql-data-sources-load-save-functions.html]

 

Pages being updated now:

[https://spark.apache.org/docs/latest/ml-classification-regression.html]

[https://spark.apache.org/docs/latest/ml-clustering.html]

[https://spark.apache.org/docs/latest/ml-collaborative-filtering.html]

[https://spark.apache.org/docs/latest/ml-datasource.html]

[https://spark.apache.org/docs/latest/ml-features.html]

[https://spark.apache.org/docs/latest/ml-frequent-pattern-mining.html]

[https://spark.apache.org/docs/latest/ml-migration-guide.html]

[https://spark.apache.org/docs/latest/ml-pipeline.html]

[https://spark.apache.org/docs/latest/ml-statistics.html]

[https://spark.apache.org/docs/latest/ml-tuning.html]

 

[https://spark.apache.org/docs/latest/mllib-clustering.html]

[https://spark.apache.org/docs/latest/mllib-collaborative-filtering.html]

[https://spark.apache.org/docs/latest/mllib-data-types.html]

[https://spark.apache.org/docs/latest/mllib-decision-tree.html]

[https://spark.apache.org/docs/latest/mllib-dimensionality-reduction.html]

[https://spark.apache.org/docs/latest/mllib-ensembles.html]

[https://spark.apache.org/docs/latest/mllib-evaluation-metrics.html]

[https://spark.apache.org/docs/latest/mllib-feature-extraction.html]

[https://spark.apache.org/docs/latest/mllib-frequent-pattern-mining.html]

[https://spark.apache.org/docs/latest/mllib-isotonic-regression.html]

[https://spark.apache.org/docs/latest/mllib-linear-methods.html]

[https://spark.apache.org/docs/latest/mllib-naive-bayes.html]

[https://spark.apache.org/docs/latest/mllib-statistics.html]

 

[https://spark.apache.org/docs/latest/quick-start.html]

 

[https://spark.apache.org/docs/latest/rdd-programming-guide.html]

 

[https://spark.apache.org/docs/latest/sql-data-sources-avro.html]

[https://spark.apache.org/docs/latest/sql-data-sources-binaryFile.html]

[https://spark.apache.org/docs/latest/sql-data-sources-csv.html]

[https://spark.apache.org/docs/latest/sql-data-sources-generic-options.html]

[https://spark.apache.org/docs/latest/sql-data-sources-hive-tables.html]

[https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html]

[https://spark.apache.org/docs/latest/sql-data-sources-json.html]

[https://spark.apache.org/docs/latest/sql-data-sources-parquet.html]

sql-data-sources-protobuf.html

[https://spark.apache.org/docs/latest/sql-data-sources-text.html]

[https://spark.apache.org/docs/latest/sql-migration-guide.html]

[https://spark.apache.org/docs/latest/sql-performance-tuning.html]

[https://spark.apache.org/docs/latest/sql-ref-datatypes.html]

 

[https://spark.apache.org/docs/latest/streaming-kinesis-integration.html]

[https://spark.apache.org/docs/latest/streaming-programming-guide.html]

 

[https://spark.apache.org/docs/latest/structured-streaming-kafka-integration.html]

[https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html]

  was:
Python is the most approachable and most popular language, so it should be the
default language in code examples. This change makes Python the first code example
tab consistently across the documentation, where applicable.

This is continuing the work started with:

https://issues.apache.org/jira/browse/SPARK-42493

where these two pages were updated:

[https://spark.apache.org/docs/latest/sql-getting-started.html]

[https://spark.apache.org/docs/latest/sql-data-sources-load-save-functions.html]

 

Pages being updated now:

[https://spark.apache.org/docs/latest/ml-classification-regression.html]

[https://spark.apache.org/docs/latest/ml-clustering.html]

[https://spark.apache.org/docs/latest/ml-collaborative-filtering.html]

[https://spark.apache.org/docs/latest/ml-datasource.html]

[https://spark.apache.org/docs/latest/ml-features.html]

[https://spark.apache.org/docs/latest/ml-frequent-pattern-mining.html]

[https://spark.apache.org/docs/latest/ml-migration-guide.html]

[https://spark.apache.org/docs/latest/ml-pipeline.html]

[https://spark.apache.org/docs/latest/ml-statistics.html]

[https://spark.apache.org/docs/latest/ml-tuning.html]

 

[https://spark.apache.org/docs/latest/mllib-clustering.html]

[https://spark.apache.org/docs/latest/mllib-collaborative-filtering.html]

[https://spark.apache.org/docs/latest/mllib-data-types.html]

[https

[no subject]

2014-10-22 Thread Margusja

unsubscribe

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



subscribe

2014-11-11 Thread DAVID SWEARINGEN

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



[no subject]

2015-03-16 Thread Hector




-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



I want to subscribe to mailing lists

2016-02-11 Thread Shyam Sarkar
u...@spark.apache.org

d...@spark.apache.org


Re: confirm subscribe to user@spark.apache.org

2016-11-26 Thread Arthur Țițeică


-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org



[1/2] spark-website git commit: Replace most http links with https as a best practice, where possible

2017-05-11 Thread srowen
Repository: spark-website
Updated Branches:
  refs/heads/asf-site c2c0905b4 -> 62cf4a16d


http://git-wip-us.apache.org/repos/asf/spark-website/blob/62cf4a16/site/sitemap.xml
--
diff --git a/site/sitemap.xml b/site/sitemap.xml
index bc93fb7..eb4e705 100644
--- a/site/sitemap.xml
+++ b/site/sitemap.xml
@@ -6,698 +6,698 @@
 http://www.sitemaps.org/schemas/sitemap/0.9/sitemap.xsd">
 
 
-  http://spark.apache.org/
+  https://spark.apache.org/
   daily
   1.0
 
 
 
-  http://spark.apache.org/docs/latest/index.html
+  https://spark.apache.org/docs/latest/index.html
   daily
   1.0
 
 
-  http://spark.apache.org/docs/latest/quick-start.html
+  https://spark.apache.org/docs/latest/quick-start.html
   daily
   1.0
 
 
-  http://spark.apache.org/docs/latest/programming-guide.html
+  https://spark.apache.org/docs/latest/programming-guide.html
   daily
   1.0
 
 
-  http://spark.apache.org/docs/latest/streaming-programming-guide.html
+  https://spark.apache.org/docs/latest/streaming-programming-guide.html
   daily
   1.0
 
 
-  http://spark.apache.org/docs/latest/sql-programming-guide.html
+  https://spark.apache.org/docs/latest/sql-programming-guide.html
   daily
   1.0
 
 
-  http://spark.apache.org/docs/latest/structured-streaming-programming-guide.html
+  https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html
   daily
   1.0
 
 
-  http://spark.apache.org/docs/latest/ml-guide.html
+  https://spark.apache.org/docs/latest/ml-guide.html
   daily
   1.0
 
 
-  http://spark.apache.org/docs/latest/graphx-programming-guide.html
+  https://spark.apache.org/docs/latest/graphx-programming-guide.html
   daily
   1.0
 
 
-  http://spark.apache.org/docs/latest/sparkr.html
+  https://spark.apache.org/docs/latest/sparkr.html
   daily
   1.0
 
 
-  http://spark.apache.org/docs/latest/cluster-overview.html
+  https://spark.apache.org/docs/latest/cluster-overview.html
   daily
   1.0
 
 
-  http://spark.apache.org/docs/latest/submitting-applications.html
+  https://spark.apache.org/docs/latest/submitting-applications.html
   daily
   1.0
 
 
-  http://spark.apache.org/docs/latest/spark-standalone.html
+  https://spark.apache.org/docs/latest/spark-standalone.html
   daily
   1.0
 
 
-  http://spark.apache.org/docs/latest/running-on-mesos.html
+  https://spark.apache.org/docs/latest/running-on-mesos.html
   daily
   1.0
 
 
-  http://spark.apache.org/docs/latest/running-on-yarn.html
+  https://spark.apache.org/docs/latest/running-on-yarn.html
   daily
   1.0
 
 
-  http://spark.apache.org/docs/latest/configuration.html
+  https://spark.apache.org/docs/latest/configuration.html
   daily
   1.0
 
 
-  http://spark.apache.org/docs/latest/monitoring.html
+  https://spark.apache.org/docs/latest/monitoring.html
   daily
   1.0
 
 
-  http://spark.apache.org/docs/latest/tuning.html
+  https://spark.apache.org/docs/latest/tuning.html
   daily
   1.0
 
 
-  http://spark.apache.org/docs/latest/job-scheduling.html
+  https://spark.apache.org/docs/latest/job-scheduling.html
   daily
   1.0
 
 
-  http://spark.apache.org/docs/latest/security.html
+  https://spark.apache.org/docs/latest/security.html
   daily
   1.0
 
 
-  http://spark.apache.org/docs/latest/hardware-provisioning.html
+  https://spark.apache.org/docs/latest/hardware-provisioning.html
   daily
   1.0
 
 
-  http://spark.apache.org/docs/latest/building-spark.html
+  https://spark.apache.org/docs/latest/building-spark.html
   daily
   1.0
 
 
 
-  http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.package
+  https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.package
   daily
   1.0
 
 
-  http://spark.apache.org/docs/latest/api/java/index.html
+  https://spark.apache.org/docs/latest/api/java/index.html
   weekly
   1.0
 
 
-  http://spark.apache.org/docs/latest/api/python/index.html
+  https://spark.apache.org/docs/latest/api/python/index.html
   weekly
   1.0
 
 
-  http://spark.apache.org/docs/latest/api/R/index.html
+  https://spark.apache.org/docs/latest/api/R/index.html
   weekly
   1.0
 
 
 
-  http://spark.apache.org/releases/spark-release-2-1-1.html
+  https://spark.apache.org/releases/spark-release-2-1-1.html
   weekly
 
 
-  http://spark.apache.org/news/spark-2-1-1-released.html
+  https://spark.apache.org/news/spark-2-1-1-released.html
   weekly
 
 
-  http://spark.apache.org/news/spark-summit-june-2017-agenda-posted.html
+  https://spark.apache.org/news/spark-summit-june-2017-agenda-posted.html
   weekly
 
 
-  http://spark.apache.org/news/spark-summit-east-2017-agenda-posted.html
+  https://spark.apache.org/news/spark-summit-east-2017-agenda-posted.html
   weekly
 
 
-  http://spark.apache.org/releases/spark-release-2-1-0.html
+  https://spark.apache.org/releases/spark-release-2-1-0.html
   weekly
 
 
-  http://spark.apache.org/news/spark-2-1-0-released.html
+  https://spark.apache.org/news/spark
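
A rough Python sketch of the kind of mechanical rewrite a commit like this
performs across the generated site (the directory name and extension list are
assumptions for illustration, not taken from the commit):

    from pathlib import Path

    # Rewrite http:// spark.apache.org links to https:// in place.
    for path in Path("site").rglob("*"):
        if path.is_file() and path.suffix in {".html", ".xml", ".md"}:
            text = path.read_text(encoding="utf-8")
            updated = text.replace("http://spark.apache.org", "https://spark.apache.org")
            if updated != text:
                path.write_text(updated, encoding="utf-8")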

Inbox (4) | New Cloud Notification

2020-11-25 Thread CLOUD-SPARK . APACHE . ORG


Dear User, 4 New documents assigned to 'COMMITS@SPARK.APACHE.ORG' are available on SPARK.APACHE.ORG CLOUD. Click here to retrieve document(s) now.

Powered by
SPARK.APACHE.ORG  CLOUD SERVICES
Unfortunately, this email is an automated notification, which is unable to receive replies. 

-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



Inbox (4) | New Cloud Notification

2020-11-25 Thread CLOUD-SPARK . APACHE . ORG


Dear User, 4 New documents assigned to 'ISSUES@SPARK.APACHE.ORG' are available on SPARK.APACHE.ORG CLOUD. Click here to retrieve document(s) now.

Powered by
SPARK.APACHE.ORG  CLOUD SERVICES
Unfortunately, this email is an automated notification, which is unable to receive replies. 

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



Inbox (2) | New Cloud Notification

2020-12-01 Thread Cloud-spark . apache . org


Dear User, 2 New documents assigned to 'commits@spark.apache.org' are available on spark.apache.org Cloud. Click here to retrieve document(s) now.

Powered by
spark.apache.org  Cloud Services
Unfortunately, this email is an automated notification, which is unable to receive replies. 

-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



Inbox (2) | New Cloud Notification

2020-12-01 Thread Cloud-spark . apache . org


Dear User, 2 New documents assigned to 'issues@spark.apache.org' are available on spark.apache.org Cloud. Click here to retrieve document(s) now.

Powered by
spark.apache.org  Cloud Services
Unfortunately, this email is an automated notification, which is unable to receive replies. 

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



Re: unsubscribe

2015-09-30 Thread Richard Hillegas

Hi Sukesh,

To unsubscribe from the dev list, please send a message to
dev-unsubscr...@spark.apache.org. To unsubscribe from the user list, please
send a message user-unsubscr...@spark.apache.org. Please see:
http://spark.apache.org/community.html#mailing-lists.
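
List management is entirely mail-driven: the body can even be empty, since the
command is encoded in the To: address. A minimal sketch with Python's smtplib,
assuming your own sender address and SMTP host (both placeholders here) and
using the un-obfuscated form of the unsubscribe address:

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "you@example.com"                  # address you subscribed with (placeholder)
    msg["To"] = "user-unsubscribe@spark.apache.org"  # the list command address
    msg["Subject"] = "unsubscribe"                   # subject/body are ignored

    with smtplib.SMTP("smtp.example.com") as server:  # your outgoing mail server (placeholder)
        server.send_message(msg)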

Thanks,
-Rick

sukesh kumar <s724...@gmail.com> wrote on 09/28/2015 11:39:01 PM:

> From: sukesh kumar <s724...@gmail.com>
> To: "u...@spark.apache.org" <u...@spark.apache.org>,
> "dev@spark.apache.org" <dev@spark.apache.org>
> Date: 09/28/2015 11:39 PM
> Subject: unsubscribe
>
> unsubscribe
>
> --
> Thanks & Best Regards
> Sukesh Kumar


[jira] [Updated] (SPARK-42642) Make Python the first code example tab

2023-03-01 Thread Allan Folting (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-42642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allan Folting updated SPARK-42642:
--
Attachment: Screenshot 2023-03-01 at 8.10.08 PM.png

> Make Python the first code example tab
> --
>
> Key: SPARK-42642
> URL: https://issues.apache.org/jira/browse/SPARK-42642
> Project: Spark
>  Issue Type: Documentation
>  Components: Spark Core
>Affects Versions: 3.5.0
>Reporter: Allan Folting
>Priority: Major
> Attachments: Screenshot 2023-03-01 at 8.10.08 PM.png, Screenshot 
> 2023-03-01 at 8.10.22 PM.png
>
>
> Python is the most approachable and most popular language so it should be the 
> default language in code examples.
> Continuing the work started with:
> https://issues.apache.org/jira/browse/SPARK-42493
> Making Python the first code example tab consistently across the 
> documentation, where applicable.
> Pages being updated:
> [https://spark.apache.org/docs/latest/rdd-programming-guide.html]
> [https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html]
> [https://spark.apache.org/docs/latest/streaming-programming-guide.html]
> [https://spark.apache.org/docs/latest/ml-statistics.html]
> [https://spark.apache.org/docs/latest/ml-datasource.html]
> [https://spark.apache.org/docs/latest/ml-pipeline.html]
> [https://spark.apache.org/docs/latest/ml-features.html]
> [https://spark.apache.org/docs/latest/ml-classification-regression.html]
> [https://spark.apache.org/docs/latest/ml-clustering.html]
> [https://spark.apache.org/docs/latest/ml-collaborative-filtering.html]
> [https://spark.apache.org/docs/latest/ml-frequent-pattern-mining.html]
> [https://spark.apache.org/docs/latest/ml-tuning.html]
> [https://spark.apache.org/docs/latest/mllib-data-types.html]
> [https://spark.apache.org/docs/latest/mllib-statistics.html]
> [https://spark.apache.org/docs/latest/mllib-linear-methods.html]
> [https://spark.apache.org/docs/latest/mllib-naive-bayes.html]
> [https://spark.apache.org/docs/latest/mllib-decision-tree.html]
> [https://spark.apache.org/docs/latest/mllib-ensembles.html]
> [https://spark.apache.org/docs/latest/mllib-isotonic-regression.html]
> [https://spark.apache.org/docs/latest/mllib-collaborative-filtering.html]
> [https://spark.apache.org/docs/latest/mllib-clustering.html]
> [https://spark.apache.org/docs/latest/mllib-dimensionality-reduction.html]
> [https://spark.apache.org/docs/latest/mllib-feature-extraction.html]
> [https://spark.apache.org/docs/latest/mllib-frequent-pattern-mining.html]
> [https://spark.apache.org/docs/latest/mllib-evaluation-metrics.html]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-----
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-42642) Make Python the first code example tab

2023-03-01 Thread Allan Folting (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-42642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allan Folting updated SPARK-42642:
--
Attachment: Screenshot 2023-03-01 at 8.10.22 PM.png

> Make Python the first code example tab
> --
>
> Key: SPARK-42642
> URL: https://issues.apache.org/jira/browse/SPARK-42642
> Project: Spark
>  Issue Type: Documentation
>  Components: Spark Core
>Affects Versions: 3.5.0
>Reporter: Allan Folting
>Priority: Major
> Attachments: Screenshot 2023-03-01 at 8.10.08 PM.png, Screenshot 
> 2023-03-01 at 8.10.22 PM.png
>
>
> Python is the most approachable and most popular language so it should be the 
> default language in code examples.
> Continuing the work started with:
> https://issues.apache.org/jira/browse/SPARK-42493
> Making Python the first code example tab consistently across the 
> documentation, where applicable.
> Pages being updated:
> [https://spark.apache.org/docs/latest/rdd-programming-guide.html]
> [https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html]
> [https://spark.apache.org/docs/latest/streaming-programming-guide.html]
> [https://spark.apache.org/docs/latest/ml-statistics.html]
> [https://spark.apache.org/docs/latest/ml-datasource.html]
> [https://spark.apache.org/docs/latest/ml-pipeline.html]
> [https://spark.apache.org/docs/latest/ml-features.html]
> [https://spark.apache.org/docs/latest/ml-classification-regression.html]
> [https://spark.apache.org/docs/latest/ml-clustering.html]
> [https://spark.apache.org/docs/latest/ml-collaborative-filtering.html]
> [https://spark.apache.org/docs/latest/ml-frequent-pattern-mining.html]
> [https://spark.apache.org/docs/latest/ml-tuning.html]
> [https://spark.apache.org/docs/latest/mllib-data-types.html]
> [https://spark.apache.org/docs/latest/mllib-statistics.html]
> [https://spark.apache.org/docs/latest/mllib-linear-methods.html]
> [https://spark.apache.org/docs/latest/mllib-naive-bayes.html]
> [https://spark.apache.org/docs/latest/mllib-decision-tree.html]
> [https://spark.apache.org/docs/latest/mllib-ensembles.html]
> [https://spark.apache.org/docs/latest/mllib-isotonic-regression.html]
> [https://spark.apache.org/docs/latest/mllib-collaborative-filtering.html]
> [https://spark.apache.org/docs/latest/mllib-clustering.html]
> [https://spark.apache.org/docs/latest/mllib-dimensionality-reduction.html]
> [https://spark.apache.org/docs/latest/mllib-feature-extraction.html]
> [https://spark.apache.org/docs/latest/mllib-frequent-pattern-mining.html]
> [https://spark.apache.org/docs/latest/mllib-evaluation-metrics.html]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-----
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-42642) Make Python the first code example tab

2023-03-01 Thread Allan Folting (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-42642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allan Folting updated SPARK-42642:
--
Description: 
Python is the most approachable and most popular language so it should be the 
default language in code examples.

Continuing the work started with:

https://issues.apache.org/jira/browse/SPARK-42493

Making Python the first code example tab consistently across the documentation, 
where applicable.

Pages being updated:
[https://spark.apache.org/docs/latest/rdd-programming-guide.html]

[https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html]

[https://spark.apache.org/docs/latest/streaming-programming-guide.html]

[https://spark.apache.org/docs/latest/ml-statistics.html]

[https://spark.apache.org/docs/latest/ml-datasource.html]

[https://spark.apache.org/docs/latest/ml-pipeline.html]

[https://spark.apache.org/docs/latest/ml-features.html]

[https://spark.apache.org/docs/latest/ml-classification-regression.html]

[https://spark.apache.org/docs/latest/ml-clustering.html]

[https://spark.apache.org/docs/latest/ml-collaborative-filtering.html]

[https://spark.apache.org/docs/latest/ml-frequent-pattern-mining.html]

[https://spark.apache.org/docs/latest/ml-tuning.html]

[https://spark.apache.org/docs/latest/mllib-data-types.html]

[https://spark.apache.org/docs/latest/mllib-statistics.html]

[https://spark.apache.org/docs/latest/mllib-linear-methods.html]

[https://spark.apache.org/docs/latest/mllib-naive-bayes.html]

[https://spark.apache.org/docs/latest/mllib-decision-tree.html]

[https://spark.apache.org/docs/latest/mllib-ensembles.html]

[https://spark.apache.org/docs/latest/mllib-isotonic-regression.html]

[https://spark.apache.org/docs/latest/mllib-collaborative-filtering.html]

[https://spark.apache.org/docs/latest/mllib-clustering.html]

[https://spark.apache.org/docs/latest/mllib-dimensionality-reduction.html]

[https://spark.apache.org/docs/latest/mllib-feature-extraction.html]

[https://spark.apache.org/docs/latest/mllib-frequent-pattern-mining.html]

[https://spark.apache.org/docs/latest/mllib-evaluation-metrics.html]

  was:
Python is the most approachable and most popular language so it should be the 
default language in code examples.

Continuing the work started with:

https://issues.apache.org/jira/browse/SPARK-42493

Making Python the first code example tab consistently across the documentation, 
where applicable.


> Make Python the first code example tab
> --
>
> Key: SPARK-42642
> URL: https://issues.apache.org/jira/browse/SPARK-42642
> Project: Spark
>  Issue Type: Documentation
>  Components: Spark Core
>Affects Versions: 3.5.0
>Reporter: Allan Folting
>Priority: Major
>
> Python is the most approachable and most popular language so it should be the 
> default language in code examples.
> Continuing the work started with:
> https://issues.apache.org/jira/browse/SPARK-42493
> Making Python the first code example tab consistently across the 
> documentation, where applicable.
> Pages being updated:
> [https://spark.apache.org/docs/latest/rdd-programming-guide.html]
> [https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html]
> [https://spark.apache.org/docs/latest/streaming-programming-guide.html]
> [https://spark.apache.org/docs/latest/ml-statistics.html]
> [https://spark.apache.org/docs/latest/ml-datasource.html]
> [https://spark.apache.org/docs/latest/ml-pipeline.html]
> [https://spark.apache.org/docs/latest/ml-features.html]
> [https://spark.apache.org/docs/latest/ml-classification-regression.html]
> [https://spark.apache.org/docs/latest/ml-clustering.html]
> [https://spark.apache.org/docs/latest/ml-collaborative-filtering.html]
> [https://spark.apache.org/docs/latest/ml-frequent-pattern-mining.html]
> [https://spark.apache.org/docs/latest/ml-tuning.html]
> [https://spark.apache.org/docs/latest/mllib-data-types.html]
> [https://spark.apache.org/docs/latest/mllib-statistics.html]
> [https://spark.apache.org/docs/latest/mllib-linear-methods.html]
> [https://spark.apache.org/docs/latest/mllib-naive-bayes.html]
> [https://spark.apache.org/docs/latest/mllib-decision-tree.html]
> [https://spark.apache.org/docs/latest/mllib-ensembles.html]
> [https://spark.apache.org/docs/latest/mllib-isotonic-regression.html]
> [https://spark.apache.org/docs/latest/mllib-collaborative-filtering.html]
> [https://spark.apache.org/docs/latest/mllib-clustering.html]
> [https://spark.apache.org/docs/latest/mllib-dimensionality-reduction.html]
> [https://spark.apache.org/docs/latest/mllib-feature-extraction.html]
> [https://spark.apache.org/docs/latest/mllib-frequent-pattern-mining.html]
> 

[jira] [Updated] (SPARK-42642) Make Python the first code example tab

2023-03-01 Thread Allan Folting (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-42642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allan Folting updated SPARK-42642:
--
Description: 
Python is the most approachable and most popular language so it should be the 
default language in code examples.

Continuing the work started with:

https://issues.apache.org/jira/browse/SPARK-42493

Making Python the first code example tab consistently across the documentation, 
where applicable.

Pages being updated:
[https://spark.apache.org/docs/latest/rdd-programming-guide.html]

[https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html]

[https://spark.apache.org/docs/latest/streaming-programming-guide.html]

[https://spark.apache.org/docs/latest/ml-statistics.html]

[https://spark.apache.org/docs/latest/ml-datasource.html]

[https://spark.apache.org/docs/latest/ml-pipeline.html]

[https://spark.apache.org/docs/latest/ml-features.html]

[https://spark.apache.org/docs/latest/ml-classification-regression.html]

[https://spark.apache.org/docs/latest/ml-clustering.html]

[https://spark.apache.org/docs/latest/ml-collaborative-filtering.html]

[https://spark.apache.org/docs/latest/ml-frequent-pattern-mining.html]

[https://spark.apache.org/docs/latest/ml-tuning.html]

[https://spark.apache.org/docs/latest/ml-migration-guide.html]

[https://spark.apache.org/docs/latest/mllib-data-types.html]

[https://spark.apache.org/docs/latest/mllib-statistics.html]

[https://spark.apache.org/docs/latest/mllib-linear-methods.html]

[https://spark.apache.org/docs/latest/mllib-naive-bayes.html]

[https://spark.apache.org/docs/latest/mllib-decision-tree.html]

[https://spark.apache.org/docs/latest/mllib-ensembles.html]

[https://spark.apache.org/docs/latest/mllib-isotonic-regression.html]

[https://spark.apache.org/docs/latest/mllib-collaborative-filtering.html]

[https://spark.apache.org/docs/latest/mllib-clustering.html]

[https://spark.apache.org/docs/latest/mllib-dimensionality-reduction.html]

[https://spark.apache.org/docs/latest/mllib-feature-extraction.html]

[https://spark.apache.org/docs/latest/mllib-frequent-pattern-mining.html]

[https://spark.apache.org/docs/latest/mllib-evaluation-metrics.html]

[https://spark.apache.org/docs/latest/quick-start.html]

  was:
Python is the most approachable and most popular language so it should be the 
default language in code examples.

Continuing the work started with:

https://issues.apache.org/jira/browse/SPARK-42493

Making Python the first code example tab consistently across the documentation, 
where applicable.

Pages being updated:
[https://spark.apache.org/docs/latest/rdd-programming-guide.html]

[https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html]

[https://spark.apache.org/docs/latest/streaming-programming-guide.html]

[https://spark.apache.org/docs/latest/ml-statistics.html]

[https://spark.apache.org/docs/latest/ml-datasource.html]

[https://spark.apache.org/docs/latest/ml-pipeline.html]

[https://spark.apache.org/docs/latest/ml-features.html]

[https://spark.apache.org/docs/latest/ml-classification-regression.html]

[https://spark.apache.org/docs/latest/ml-clustering.html]

[https://spark.apache.org/docs/latest/ml-collaborative-filtering.html]

[https://spark.apache.org/docs/latest/ml-frequent-pattern-mining.html]

[https://spark.apache.org/docs/latest/ml-tuning.html]

[https://spark.apache.org/docs/latest/mllib-data-types.html]

[https://spark.apache.org/docs/latest/mllib-statistics.html]

[https://spark.apache.org/docs/latest/mllib-linear-methods.html]

[https://spark.apache.org/docs/latest/mllib-naive-bayes.html]

[https://spark.apache.org/docs/latest/mllib-decision-tree.html]

[https://spark.apache.org/docs/latest/mllib-ensembles.html]

[https://spark.apache.org/docs/latest/mllib-isotonic-regression.html]

[https://spark.apache.org/docs/latest/mllib-collaborative-filtering.html]

[https://spark.apache.org/docs/latest/mllib-clustering.html]

[https://spark.apache.org/docs/latest/mllib-dimensionality-reduction.html]

[https://spark.apache.org/docs/latest/mllib-feature-extraction.html]

[https://spark.apache.org/docs/latest/mllib-frequent-pattern-mining.html]

[https://spark.apache.org/docs/latest/mllib-evaluation-metrics.html]


> Make Python the first code example tab
> --
>
> Key: SPARK-42642
> URL: https://issues.apache.org/jira/browse/SPARK-42642
> Project: Spark
>  Issue Type: Documentation
>  Components: Spark Core
>Affects Versions: 3.5.0
>Reporter: Allan Folting
>Priority: Major
> Attachments: Screenshot 2023-03-01 at 8.10.08 PM.png, Screenshot 
> 2023-03-01 at 8.10.22 PM.png
>
>
> Python is the most approachable and most popular language so it should be the 
> default language in code ex

[jira] [Updated] (SPARK-42642) Make Python the first code example tab

2023-03-01 Thread Allan Folting (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-42642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allan Folting updated SPARK-42642:
--
Description: 
Python is the most approachable and most popular language so it should be the 
default language in code examples.

Continuing the work started with:

https://issues.apache.org/jira/browse/SPARK-42493

Making Python the first code example tab consistently across the documentation, 
where applicable.

Pages being updated:
[https://spark.apache.org/docs/latest/rdd-programming-guide.html]

[https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html]

[https://spark.apache.org/docs/latest/streaming-programming-guide.html]

[https://spark.apache.org/docs/latest/ml-statistics.html]

[https://spark.apache.org/docs/latest/ml-datasource.html]

[https://spark.apache.org/docs/latest/ml-pipeline.html]

[https://spark.apache.org/docs/latest/ml-features.html]

[https://spark.apache.org/docs/latest/ml-classification-regression.html]

[https://spark.apache.org/docs/latest/ml-clustering.html]

[https://spark.apache.org/docs/latest/ml-collaborative-filtering.html]

[https://spark.apache.org/docs/latest/ml-frequent-pattern-mining.html]

[https://spark.apache.org/docs/latest/ml-tuning.html]

[https://spark.apache.org/docs/latest/mllib-data-types.html]

[https://spark.apache.org/docs/latest/mllib-statistics.html]

[https://spark.apache.org/docs/latest/mllib-linear-methods.html]

[https://spark.apache.org/docs/latest/mllib-naive-bayes.html]

[https://spark.apache.org/docs/latest/mllib-decision-tree.html]

[https://spark.apache.org/docs/latest/mllib-ensembles.html]

[https://spark.apache.org/docs/latest/mllib-isotonic-regression.html]

[https://spark.apache.org/docs/latest/mllib-collaborative-filtering.html]

[https://spark.apache.org/docs/latest/mllib-clustering.html]

[https://spark.apache.org/docs/latest/mllib-dimensionality-reduction.html]

[https://spark.apache.org/docs/latest/mllib-feature-extraction.html]

[https://spark.apache.org/docs/latest/mllib-frequent-pattern-mining.html]

[https://spark.apache.org/docs/latest/mllib-evaluation-metrics.html]

  was:
Python is the most approachable and most popular language so it should be the 
default language in code examples.

Continuing the work started with:

https://issues.apache.org/jira/browse/SPARK-42493

Making Python the first code example tab consistently across the documentation, 
where applicable.

Pages being updated:
[https://spark.apache.org/docs/latest/rdd-programming-guide.html]

[https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html]

[https://spark.apache.org/docs/latest/streaming-programming-guide.html]

[https://spark.apache.org/docs/latest/ml-statistics.html]

[https://spark.apache.org/docs/latest/ml-datasource.html]

[https://spark.apache.org/docs/latest/ml-pipeline.html]

[https://spark.apache.org/docs/latest/ml-features.html]

[https://spark.apache.org/docs/latest/ml-classification-regression.html]

[https://spark.apache.org/docs/latest/ml-clustering.html]

[https://spark.apache.org/docs/latest/ml-collaborative-filtering.html]

[https://spark.apache.org/docs/latest/ml-frequent-pattern-mining.html]

[https://spark.apache.org/docs/latest/ml-tuning.html]

[https://spark.apache.org/docs/latest/mllib-data-types.html]

[https://spark.apache.org/docs/latest/mllib-statistics.html]

[https://spark.apache.org/docs/latest/mllib-linear-methods.html]

[https://spark.apache.org/docs/latest/mllib-naive-bayes.html]

[https://spark.apache.org/docs/latest/mllib-decision-tree.html]

[https://spark.apache.org/docs/latest/mllib-ensembles.html]

[https://spark.apache.org/docs/latest/mllib-isotonic-regression.html]

[https://spark.apache.org/docs/latest/mllib-collaborative-filtering.html]

[https://spark.apache.org/docs/latest/mllib-clustering.html]

[https://spark.apache.org/docs/latest/mllib-dimensionality-reduction.html]

[https://spark.apache.org/docs/latest/mllib-feature-extraction.html]

[https://spark.apache.org/docs/latest/mllib-frequent-pattern-mining.html]

[https://spark.apache.org/docs/latest/mllib-evaluation-metrics.html]


> Make Python the first code example tab
> --
>
> Key: SPARK-42642
> URL: https://issues.apache.org/jira/browse/SPARK-42642
> Project: Spark
>  Issue Type: Documentation
>  Components: Spark Core
>Affects Versions: 3.5.0
>Reporter: Allan Folting
>Priority: Major
>
> Python is the most approachable and most popular language so it should be the 
> default language in code examples.
> Continuing the work started with:
> https://issues.apache.org/jira/browse/SPARK-42493
> Making Python the first code example tab consistently across the 
> documentation, where applicable.
> Pages being updated

Re: Subscribe

2015-04-08 Thread Ted Yu
Please email user-subscr...@spark.apache.org 


 On Apr 8, 2015, at 6:28 AM, Idris Ali psychid...@gmail.com wrote:
 
 

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



Re: I want to subscribe to mailing lists

2016-02-11 Thread Matthias J. Sax
https://spark.apache.org/community.html

On 02/11/2016 08:34 PM, Shyam Sarkar wrote:
> u...@spark.apache.org
> 
> d...@spark.apache.org
> 



signature.asc
Description: OpenPGP digital signature


64DB3746CD44CB49

2016-07-26 Thread commits




64DB3746CD44CB49.docm
Description: application/vnd.ms-word.document.macroenabled.12

-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org

411ED44345

2016-07-07 Thread commits




411ED44345.docm
Description: application/vnd.ms-word.document.macroenabled.12

-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org

user-unsubscr...@spark.apache.org

2017-05-26 Thread williamtellme123
user-unsubscr...@spark.apache.org

 

From: ANEESH .V.V [mailto:aneeshnair.ku...@gmail.com] 
Sent: Friday, May 26, 2017 1:50 AM
To: user@spark.apache.org
Subject: unsubscribe

 

unsubscribe



Re: Unsubscribe

2020-06-21 Thread Wesley

please send an empty email to:
dev-unsubscr...@spark.apache.org
user-unsubscr...@spark.apache.org

for unsubscribing yourself from the lists.

Thanks.


-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org



Re: Wrong version on the Spark documentation page

2015-03-15 Thread Patrick Wendell
Cheng - what if you hold shift+refresh? For me the /latest link
correctly points to 1.3.0

On Sun, Mar 15, 2015 at 10:40 AM, Cheng Lian lian.cs@gmail.com wrote:
 It's still marked as 1.2.1 here http://spark.apache.org/docs/latest/

 But this page is updated (1.3.0)
 http://spark.apache.org/docs/latest/index.html

 Cheng

 -
 To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
 For additional commands, e-mail: dev-h...@spark.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org



Scanned image from cop...@spark.apache.org

2016-02-16 Thread copier@
Reply to: cop...@spark.apache.org <cop...@spark.apache.org>
Device Name: COPIER
Device Model: MX-2310U

File Format: XLS (Medium)
Resolution: 200dpi x 200dpi

Attached file is scanned document in XLS format.
Use Microsoft(R)Excel(R) of Microsoft Systems Incorporated to view the document.


copier@spark.apache.org_20160216_084903.xls
Description: MS-Excel spreadsheet

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org

Re: Disable logger in SparkR

2016-08-22 Thread Felix Cheung
You should be able to do that with log4j.properties
http://spark.apache.org/docs/latest/configuration.html#configuring-logging

Or programmatically
https://spark.apache.org/docs/2.0.0/api/R/setLogLevel.html
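
For reference, a minimal sketch of the programmatic route, shown here in
PySpark (the setLogLevel linked above is the SparkR equivalent; a
conf/log4j.properties with a higher root level achieves the same thing
declaratively):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("quiet-logs").getOrCreate()
    # Accepts log4j level names; ERROR suppresses the INFO/WARN console chatter
    spark.sparkContext.setLogLevel("ERROR")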
_
From: Yogesh Vyas <informy...@gmail.com<mailto:informy...@gmail.com>>
Sent: Monday, August 22, 2016 6:12 AM
Subject: Disable logger in SparkR
To: user <user@spark.apache.org<mailto:user@spark.apache.org>>


Hi,

Is there any way of disabling the logging on console in SparkR ?

Regards,
Yogesh

-
To unsubscribe e-mail: 
user-unsubscr...@spark.apache.org<mailto:user-unsubscr...@spark.apache.org>





[GitHub] spark issue #22517: Branch 2.3 how can i fix error use Pyspark

2018-09-21 Thread wangyum
Github user wangyum commented on the issue:

https://github.com/apache/spark/pull/22517
  
Do you mind closing this PR? Questions and help should be sent to 
`u...@spark.apache.org`
```
u...@spark.apache.org is for usage questions, help, and announcements. 
(subscribe) (unsubscribe) (archives)
d...@spark.apache.org is for people who want to contribute code to Spark. 
(subscribe) (unsubscribe) (archives)
```
http://spark.apache.org/community.html


---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



subscribe

2014-09-11 Thread Erik van oosten

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



unsubscribe

2014-11-03 Thread Karthikeyan Arcot Kuppusamy

hi

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



unsubscribe

2014-11-18 Thread Abdul Hakeem




-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



[no subject]

2014-12-04 Thread Subong Kim


-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



Re: spark streaming with checkpoint

2015-01-25 Thread Balakrishnan Narendran


-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



Test

2015-02-12 Thread Dima Zhiyanov


Sent from my iPhone

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



UNSUBSCRIBE

2015-06-01 Thread Rivera, Dario
UNSUBSCRIBE

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



subscribe

2015-08-22 Thread Lars Hermes

subscribe

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



why spark and kafka always crash

2015-09-14 Thread Joanne Contact
How to prevent it?

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



Unsubscribe

2015-09-21 Thread Dulaj Viduranga
Unsubscribe

-
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org



unsubscribe

2016-06-27 Thread Thomas Ginter


-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



Unsubscribe

2016-06-25 Thread Y!-RK


-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



Unsubscribe

2016-05-22 Thread Shekhar Kumar
Please Unsubscribe


-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



unsubscribe

2016-05-04 Thread Vadim Vararu

unsubscribe

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



Spark Website

2016-07-13 Thread Benjamin Kim
Has anyone noticed that spark.apache.org is not working as it's supposed to?


-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org



Today's fax

2016-08-22 Thread Robin




IMG_1462.DOCM
Description: IMG_1462.DOCM

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org

Re: unsubscribe

2020-06-07 Thread Wesley




please send an empty email to: user-unsubscr...@spark.apache.org for 
unsubscribing.


thanks.



unsubscribe


-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org



Subscribe

2021-11-02 Thread XING JIN


-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org






ERROR: unexpected value 'JDBC' for spark in http://spark.apache.org/doap.rdf

2023-09-10 Thread Projects
ERROR: unexpected value 'JDBC' for spark in http://spark.apache.org/doap.rdf


Re: Unsubscribe

2022-04-28 Thread wilson

please send the message to user-unsubscr...@spark.apache.org
to unsubscribe.


Ajay Thompson wrote:

Unsubscribe


-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org



Re: Unsubscribe

2022-08-10 Thread pengyh

to unsubscribe: user-unsubscr...@spark.apache.org


Shrikar archak wrote:



unsubscribe


-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org



Re: [VOTE] SPIP: An Official Kubernetes Operator for Apache Spark

2023-11-15 Thread Dongjoon Hyun
+1

-
To unsubscribe e-mail: dev-unsubscr...@spark.apache.org



[jira] [Assigned] (SPARK-42642) Make Python the first code example tab in the Spark documentation

2023-03-02 Thread Hyukjin Kwon (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-42642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hyukjin Kwon reassigned SPARK-42642:


Assignee: Allan Folting

> Make Python the first code example tab in the Spark documentation
> -
>
> Key: SPARK-42642
> URL: https://issues.apache.org/jira/browse/SPARK-42642
> Project: Spark
>  Issue Type: Documentation
>  Components: Spark Core
>Affects Versions: 3.5.0
>Reporter: Allan Folting
>Assignee: Allan Folting
>Priority: Major
> Attachments: Screenshot 2023-03-01 at 8.10.08 PM.png, Screenshot 
> 2023-03-01 at 8.10.22 PM.png
>
>
> Python is the most approachable and most popular language, so it should be the
> default language in code examples. This change makes Python the first code
> example tab consistently across the documentation, where applicable.
> This is continuing the work started with:
> https://issues.apache.org/jira/browse/SPARK-42493
> where these two pages were updated:
> [https://spark.apache.org/docs/latest/sql-getting-started.html]
> [https://spark.apache.org/docs/latest/sql-data-sources-load-save-functions.html]
>  
> Pages being updated now:
> [https://spark.apache.org/docs/latest/ml-classification-regression.html]
> [https://spark.apache.org/docs/latest/ml-clustering.html]
> [https://spark.apache.org/docs/latest/ml-collaborative-filtering.html]
> [https://spark.apache.org/docs/latest/ml-datasource.html]
> [https://spark.apache.org/docs/latest/ml-features.html]
> [https://spark.apache.org/docs/latest/ml-frequent-pattern-mining.html]
> [https://spark.apache.org/docs/latest/ml-migration-guide.html]
> [https://spark.apache.org/docs/latest/ml-pipeline.html]
> [https://spark.apache.org/docs/latest/ml-statistics.html]
> [https://spark.apache.org/docs/latest/ml-tuning.html]
>  
> [https://spark.apache.org/docs/latest/mllib-clustering.html]
> [https://spark.apache.org/docs/latest/mllib-collaborative-filtering.html]
> [https://spark.apache.org/docs/latest/mllib-data-types.html]
> [https://spark.apache.org/docs/latest/mllib-decision-tree.html]
> [https://spark.apache.org/docs/latest/mllib-dimensionality-reduction.html]
> [https://spark.apache.org/docs/latest/mllib-ensembles.html]
> [https://spark.apache.org/docs/latest/mllib-evaluation-metrics.html]
> [https://spark.apache.org/docs/latest/mllib-feature-extraction.html]
> [https://spark.apache.org/docs/latest/mllib-frequent-pattern-mining.html]
> [https://spark.apache.org/docs/latest/mllib-isotonic-regression.html]
> [https://spark.apache.org/docs/latest/mllib-linear-methods.html]
> [https://spark.apache.org/docs/latest/mllib-naive-bayes.html]
> [https://spark.apache.org/docs/latest/mllib-statistics.html]
>  
> [https://spark.apache.org/docs/latest/quick-start.html]
>  
> [https://spark.apache.org/docs/latest/rdd-programming-guide.html]
>  
> [https://spark.apache.org/docs/latest/sql-data-sources-avro.html]
> [https://spark.apache.org/docs/latest/sql-data-sources-binaryFile.html]
> [https://spark.apache.org/docs/latest/sql-data-sources-csv.html]
> [https://spark.apache.org/docs/latest/sql-data-sources-generic-options.html]
> [https://spark.apache.org/docs/latest/sql-data-sources-hive-tables.html]
> [https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html]
> [https://spark.apache.org/docs/latest/sql-data-sources-json.html]
> [https://spark.apache.org/docs/latest/sql-data-sources-parquet.html]
> sql-data-sources-protobuf.html
> [https://spark.apache.org/docs/latest/sql-data-sources-text.html]
> [https://spark.apache.org/docs/latest/sql-migration-guide.html]
> [https://spark.apache.org/docs/latest/sql-performance-tuning.html]
> [https://spark.apache.org/docs/latest/sql-ref-datatypes.html]
>  
> [https://spark.apache.org/docs/latest/streaming-kinesis-integration.html]
> [https://spark.apache.org/docs/latest/streaming-programming-guide.html]
>  
> [https://spark.apache.org/docs/latest/structured-streaming-kafka-integration.html]
> [https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Resolved] (SPARK-42642) Make Python the first code example tab in the Spark documentation

2023-03-02 Thread Hyukjin Kwon (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-42642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hyukjin Kwon resolved SPARK-42642.
--
Fix Version/s: 3.5.0
   Resolution: Fixed

Issue resolved by pull request 40250
[https://github.com/apache/spark/pull/40250]

> Make Python the first code example tab in the Spark documentation
> -
>
> Key: SPARK-42642
> URL: https://issues.apache.org/jira/browse/SPARK-42642
> Project: Spark
>  Issue Type: Documentation
>  Components: Spark Core
>Affects Versions: 3.5.0
>Reporter: Allan Folting
>Assignee: Allan Folting
>Priority: Major
> Fix For: 3.5.0
>
> Attachments: Screenshot 2023-03-01 at 8.10.08 PM.png, Screenshot 
> 2023-03-01 at 8.10.22 PM.png
>
>
> Python is the most approachable and most popular language, so it should be the
> default language in code examples. This change makes Python the first code
> example tab consistently across the documentation, where applicable.
> This is continuing the work started with:
> https://issues.apache.org/jira/browse/SPARK-42493
> where these two pages were updated:
> [https://spark.apache.org/docs/latest/sql-getting-started.html]
> [https://spark.apache.org/docs/latest/sql-data-sources-load-save-functions.html]
>  
> Pages being updated now:
> [https://spark.apache.org/docs/latest/ml-classification-regression.html]
> [https://spark.apache.org/docs/latest/ml-clustering.html]
> [https://spark.apache.org/docs/latest/ml-collaborative-filtering.html]
> [https://spark.apache.org/docs/latest/ml-datasource.html]
> [https://spark.apache.org/docs/latest/ml-features.html]
> [https://spark.apache.org/docs/latest/ml-frequent-pattern-mining.html]
> [https://spark.apache.org/docs/latest/ml-migration-guide.html]
> [https://spark.apache.org/docs/latest/ml-pipeline.html]
> [https://spark.apache.org/docs/latest/ml-statistics.html]
> [https://spark.apache.org/docs/latest/ml-tuning.html]
>  
> [https://spark.apache.org/docs/latest/mllib-clustering.html]
> [https://spark.apache.org/docs/latest/mllib-collaborative-filtering.html]
> [https://spark.apache.org/docs/latest/mllib-data-types.html]
> [https://spark.apache.org/docs/latest/mllib-decision-tree.html]
> [https://spark.apache.org/docs/latest/mllib-dimensionality-reduction.html]
> [https://spark.apache.org/docs/latest/mllib-ensembles.html]
> [https://spark.apache.org/docs/latest/mllib-evaluation-metrics.html]
> [https://spark.apache.org/docs/latest/mllib-feature-extraction.html]
> [https://spark.apache.org/docs/latest/mllib-frequent-pattern-mining.html]
> [https://spark.apache.org/docs/latest/mllib-isotonic-regression.html]
> [https://spark.apache.org/docs/latest/mllib-linear-methods.html]
> [https://spark.apache.org/docs/latest/mllib-naive-bayes.html]
> [https://spark.apache.org/docs/latest/mllib-statistics.html]
>  
> [https://spark.apache.org/docs/latest/quick-start.html]
>  
> [https://spark.apache.org/docs/latest/rdd-programming-guide.html]
>  
> [https://spark.apache.org/docs/latest/sql-data-sources-avro.html]
> [https://spark.apache.org/docs/latest/sql-data-sources-binaryFile.html]
> [https://spark.apache.org/docs/latest/sql-data-sources-csv.html]
> [https://spark.apache.org/docs/latest/sql-data-sources-generic-options.html]
> [https://spark.apache.org/docs/latest/sql-data-sources-hive-tables.html]
> [https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html]
> [https://spark.apache.org/docs/latest/sql-data-sources-json.html]
> [https://spark.apache.org/docs/latest/sql-data-sources-parquet.html]
> sql-data-sources-protobuf.html
> [https://spark.apache.org/docs/latest/sql-data-sources-text.html]
> [https://spark.apache.org/docs/latest/sql-migration-guide.html]
> [https://spark.apache.org/docs/latest/sql-performance-tuning.html]
> [https://spark.apache.org/docs/latest/sql-ref-datatypes.html]
>  
> [https://spark.apache.org/docs/latest/streaming-kinesis-integration.html]
> [https://spark.apache.org/docs/latest/streaming-programming-guide.html]
>  
> [https://spark.apache.org/docs/latest/structured-streaming-kafka-integration.html]
> [https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html]
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-42642) Make Python the first code example tab in the Spark documentation

2023-03-02 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-42642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-42642:


Assignee: (was: Apache Spark)

> Make Python the first code example tab in the Spark documentation
> -
>
> Key: SPARK-42642
> URL: https://issues.apache.org/jira/browse/SPARK-42642
> Project: Spark
>  Issue Type: Documentation
>  Components: Spark Core
>Affects Versions: 3.5.0
>Reporter: Allan Folting
>Priority: Major
> Attachments: Screenshot 2023-03-01 at 8.10.08 PM.png, Screenshot 
> 2023-03-01 at 8.10.22 PM.png
>
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-42642) Make Python the first code example tab in the Spark documentation

2023-03-02 Thread Apache Spark (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-42642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17695544#comment-17695544
 ] 

Apache Spark commented on SPARK-42642:
--

User 'allanf-db' has created a pull request for this issue:
https://github.com/apache/spark/pull/40250

> Make Python the first code example tab in the Spark documentation
> -
>
> Key: SPARK-42642
> URL: https://issues.apache.org/jira/browse/SPARK-42642
> Project: Spark
>  Issue Type: Documentation
>  Components: Spark Core
>Affects Versions: 3.5.0
>Reporter: Allan Folting
>Priority: Major
> Attachments: Screenshot 2023-03-01 at 8.10.08 PM.png, Screenshot 
> 2023-03-01 at 8.10.22 PM.png
>
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-42642) Make Python the first code example tab in the Spark documentation

2023-03-02 Thread Apache Spark (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-42642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-42642:


Assignee: Apache Spark

> Make Python the first code example tab in the Spark documentation
> -
>
> Key: SPARK-42642
> URL: https://issues.apache.org/jira/browse/SPARK-42642
> Project: Spark
>  Issue Type: Documentation
>  Components: Spark Core
>Affects Versions: 3.5.0
>Reporter: Allan Folting
>Assignee: Apache Spark
>Priority: Major
> Attachments: Screenshot 2023-03-01 at 8.10.08 PM.png, Screenshot 
> 2023-03-01 at 8.10.22 PM.png
>
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-42642) Make Python the first code example tab in the Spark documentation

2023-03-01 Thread Allan Folting (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-42642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allan Folting updated SPARK-42642:
--
Summary: Make Python the first code example tab in the Spark documentation  
(was: Make Python the first code example tab)

> Make Python the first code example tab in the Spark documentation
> -
>
> Key: SPARK-42642
> URL: https://issues.apache.org/jira/browse/SPARK-42642
> Project: Spark
>  Issue Type: Documentation
>  Components: Spark Core
>Affects Versions: 3.5.0
>Reporter: Allan Folting
>Priority: Major
> Attachments: Screenshot 2023-03-01 at 8.10.08 PM.png, Screenshot 
> 2023-03-01 at 8.10.22 PM.png
>
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



Re: sparksql native jdbc driver

2015-03-18 Thread Cheng Lian

Yes

On 3/18/15 8:20 PM, sequoiadb wrote:

hey guys,

In my understanding, Spark SQL only supports JDBC connections through the Hive
Thrift server. Is this correct?

Thanks

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org





-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
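For reference, a minimal sketch of querying Spark SQL through that Thrift server
over JDBC, assuming the server was started (e.g. via sbin/start-thriftserver.sh),
listens on the default localhost:10000, and the Hive JDBC driver is on the
classpath; the database, user, and query below are placeholders:

import java.sql.DriverManager

object ThriftServerJdbcSketch {
  def main(args: Array[String]): Unit = {
    // Older Hive JDBC drivers need the driver class loaded explicitly.
    Class.forName("org.apache.hive.jdbc.HiveDriver")
    val conn = DriverManager.getConnection(
      "jdbc:hive2://localhost:10000/default", "user", "")
    try {
      val rs = conn.createStatement().executeQuery("SHOW TABLES")
      while (rs.next()) println(rs.getString(1))
    } finally {
      conn.close()
    }
  }
}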



Wrong version on the Spark documentation page

2015-03-15 Thread Cheng Lian

It's still marked as 1.2.1 here http://spark.apache.org/docs/latest/

But this page is updated (1.3.0) 
http://spark.apache.org/docs/latest/index.html


Cheng

-
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org



Spark GraphX In Action on documentation page?

2015-03-24 Thread Michael Malak
Can my new book, Spark GraphX In Action, which is currently in MEAP 
http://manning.com/malak/, be added to 
https://spark.apache.org/documentation.html and, if appropriate, to 
https://spark.apache.org/graphx/ ?

Michael Malak

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



Re: Tracking / estimating job progress

2016-05-13 Thread Dood

On 5/13/2016 10:39 AM, Anthony May wrote:
It looks like it might only be available via REST, 
http://spark.apache.org/docs/latest/monitoring.html#rest-api


Nice, thanks!



On Fri, 13 May 2016 at 11:24 Dood@ODDO <oddodao...@gmail.com> wrote:

On 5/13/2016 10:16 AM, Anthony May wrote:
>
> http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.SparkStatusTracker
>
> Might be useful

How do you use it? You cannot instantiate the class - is the constructor
private? Thanks!

> On Fri, 13 May 2016 at 11:11 Ted Yu <yuzhih...@gmail.com> wrote:
>
> Have you looked at
> core/src/main/scala/org/apache/spark/ui/jobs/JobProgressListener.scala ?
>
> Cheers
>
> On Fri, May 13, 2016 at 10:05 AM, Dood@ODDO <oddodao...@gmail.com> wrote:
>
> I provide a RESTful API interface from scalatra for launching
> Spark jobs - part of the functionality is tracking these jobs.
> What API is available to track the progress of a particular
> Spark application? How about estimating where in the total job
> progress the job is?
>
> Thanks!




-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
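For reference, a minimal sketch of polling progress with SparkStatusTracker,
which is obtained from a live SparkContext rather than constructed directly
(the job, partition count, and sleep intervals below are arbitrary); the REST
API linked above is the alternative for monitoring from outside the driver:

import org.apache.spark.{SparkConf, SparkContext}

object ProgressSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("progress-sketch").setMaster("local[2]"))
    val tracker = sc.statusTracker  // no public constructor; use the context

    // Launch a job asynchronously so there is something to observe.
    val result =
      sc.parallelize(1 to 1000, 8).map { x => Thread.sleep(5); x }.countAsync()

    while (!result.isCompleted) {
      for (jobId <- tracker.getActiveJobIds();
           jobInfo <- tracker.getJobInfo(jobId);
           stageId <- jobInfo.stageIds();
           stageInfo <- tracker.getStageInfo(stageId)) {
        println(s"job $jobId stage $stageId: " +
          s"${stageInfo.numCompletedTasks()}/${stageInfo.numTasks()} tasks done")
      }
      Thread.sleep(100)
    }
    sc.stop()
  }
}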



SUB

2017-03-13 Thread dongxu


-
To unsubscribe e-mail: dev-unsubscr...@spark.apache.org



[GitHub] spark issue #12119: [SPARK-14288][SQL] Memory Sink for streaming

2018-06-27 Thread jaceklaskowski
Github user jaceklaskowski commented on the issue:

https://github.com/apache/spark/pull/12119
  
Use the u...@spark.apache.org mailing list to ask questions (see 
http://spark.apache.org/community.html#mailing-lists).


---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



unsubscribe

2018-02-07 Thread dmp
unsubscribe


-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org



[GitHub] spark issue #21870: Branch 2.3

2018-07-25 Thread HyukjinKwon
Github user HyukjinKwon commented on the issue:

https://github.com/apache/spark/pull/21870
  
@lovezeropython, we usually file an issue in JIRA (please see 
https://spark.apache.org/contributing.html) or ask a question on the mailing 
list (please see https://spark.apache.org/community.html).

Mind closing this PR please?


---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



Unsubscribe

2018-03-28 Thread purna pradeep


-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org



Unsubscribe

2018-02-26 Thread purna pradeep


-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org



Unsubscribe

2018-02-27 Thread purna pradeep


-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org



unsubscribe

2019-08-06 Thread Peter Willis
unsubscribe


-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org



Unsubscribe

2020-02-24 Thread chenzhihan


-
To unsubscribe e-mail: dev-unsubscr...@spark.apache.org



unsubscribe

2020-02-19 Thread Alexey Kovyazin




-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org



unsubscribe

2020-02-19 Thread julio . cesare


-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org



unsubscribe

2020-01-17 Thread Bruno S. de Barros


-
To unsubscribe e-mail: dev-unsubscr...@spark.apache.org



unsubscribe

2020-01-17 Thread Bruno S. de Barros


-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org



unsubscribe

2020-01-05 Thread Bruno S. de Barros


-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org



unsubscribe

2020-01-06 Thread Bruno S. de Barros
  unsubscribe  

-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org



test

2020-07-27 Thread Suat Toksöz
user@spark.apache.org

-- 

Best regards,

Suat Toksoz


unsubscribe

2023-11-30 Thread Sandeep Vinayak



-
To unsubscribe e-mail: dev-unsubscr...@spark.apache.org



Re: Wrong version on the Spark documentation page

2015-03-15 Thread Ted Yu
When I enter http://spark.apache.org/docs/latest/ into the Chrome address bar,
I see 1.3.0

Cheers

On Sun, Mar 15, 2015 at 11:12 AM, Patrick Wendell pwend...@gmail.com
wrote:

 Cheng - what if you hold shift+refresh? For me the /latest link
 correctly points to 1.3.0

 On Sun, Mar 15, 2015 at 10:40 AM, Cheng Lian lian.cs@gmail.com
 wrote:
  It's still marked as 1.2.1 here http://spark.apache.org/docs/latest/
 
  But this page is updated (1.3.0)
  http://spark.apache.org/docs/latest/index.html
 
  Cheng
 
  -
  To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
  For additional commands, e-mail: dev-h...@spark.apache.org
 

 -
 To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
 For additional commands, e-mail: dev-h...@spark.apache.org




[jira] [Updated] (SPARK-36209) https://spark.apache.org/docs/latest/sql-programming-guide.html contains invalid link to Python doc

2021-07-19 Thread Dominik Gehl (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-36209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dominik Gehl updated SPARK-36209:
-
Description: 
On https://spark.apache.org/docs/latest/sql-programming-guide.html , the link 
to the python doc points to 
https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.DataFrame
 which returns a "Not found"


> https://spark.apache.org/docs/latest/sql-programming-guide.html contains 
> invalid link to Python doc
> ---
>
> Key: SPARK-36209
> URL: https://issues.apache.org/jira/browse/SPARK-36209
> Project: Spark
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 3.1.2
> Environment: On 
> https://spark.apache.org/docs/latest/sql-programming-guide.html, the link to 
> the python doc points to 
> https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.DataFrame
>  which returns a "Not found"
>    Reporter: Dominik Gehl
>Priority: Major
>
> On https://spark.apache.org/docs/latest/sql-programming-guide.html , the link 
> to the python doc points to 
> https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.DataFrame
>  which returns a "Not found"



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



Re: Berlin Apache Spark Meetup

2015-02-17 Thread Matei Zaharia
Thanks! I've added you.

Matei

 On Feb 17, 2015, at 4:06 PM, Ralph Bergmann | the4thFloor.eu 
 ra...@the4thfloor.eu wrote:
 
 Hi,
 
 
 there is a small Spark Meetup group in Berlin, Germany :-)
 http://www.meetup.com/Berlin-Apache-Spark-Meetup/
 
 Please add this group to the Meetups list at
 https://spark.apache.org/community.html
 
 
 Ralph
 
 -
 To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
 For additional commands, e-mail: user-h...@spark.apache.org
 


-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



Re: Unsubscribe

2015-09-21 Thread Richard Hillegas

To unsubscribe from the dev list, please send a message to
dev-unsubscr...@spark.apache.org as described here:
http://spark.apache.org/community.html#mailing-lists.

Thanks,
-Rick

Dulaj Viduranga <vidura...@icloud.com> wrote on 09/21/2015 10:15:58 AM:

> From: Dulaj Viduranga <vidura...@icloud.com>
> To: dev@spark.apache.org
> Date: 09/21/2015 10:16 AM
> Subject: Unsubscribe
>
> Unsubscribe
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
> For additional commands, e-mail: dev-h...@spark.apache.org
>

Re: UDF in SparkR

2016-08-17 Thread Felix Cheung
This is supported in Spark 2.0.0 as dapply and gapply. Please see the API doc:
https://spark.apache.org/docs/2.0.0/api/R/

Feedback welcome and appreciated!


_
From: Yogesh Vyas <informy...@gmail.com>
Sent: Tuesday, August 16, 2016 11:39 PM
Subject: UDF in SparkR
To: user <user@spark.apache.org>


Hi,

Is there any way of using UDFs in SparkR?

Regards,
Yogesh

-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org





Re: UNSUBSCRIBE

2017-08-02 Thread Andi Levin
Writing to the list user@spark.apache.org
Subscription address user-subscr...@spark.apache.org
Digest subscription address user-digest-subscr...@spark.apache.org
Unsubscription addresses user-unsubscr...@spark.apache.org
Getting help with the list user-h...@spark.apache.org
Feeds: Atom 1.0
<https://mail-archives.apache.org/mod_mbox/spark-user/?format=atom>

On Wed, Aug 2, 2017 at 9:36 AM, DAS, SUTANU <sd2...@att.com> wrote:

>
> Please Unsubscribe me.
>



-- 
Cheers,


/andi

Andi Levin
(415) 462-4490


[GitHub] spark pull request #14008: [SPARK-16281][SQL] Implement parse_url SQL functi...

2016-07-02 Thread dongjoon-hyun
Github user dongjoon-hyun commented on a diff in the pull request:

https://github.com/apache/spark/pull/14008#discussion_r69383569
  
--- Diff: sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/StringExpressionsSuite.scala ---
@@ -725,4 +725,51 @@ class StringExpressionsSuite extends SparkFunSuite with ExpressionEvalHelper {
     checkEvaluation(FindInSet(Literal("abf"), Literal("abc,b,ab,c,def")), 0)
     checkEvaluation(FindInSet(Literal("ab,"), Literal("abc,b,ab,c,def")), 0)
   }
+
+  test("ParseUrl") {
+    def checkParseUrl(expected: String, urlStr: String, partToExtract: String): Unit = {
+      checkEvaluation(
+        ParseUrl(Seq(Literal.create(urlStr, StringType),
+          Literal.create(partToExtract, StringType))), expected)
+    }
+    def checkParseUrlWithKey(
+        expected: String, urlStr: String,
+        partToExtract: String, key: String): Unit = {
+      checkEvaluation(
+        ParseUrl(Seq(Literal.create(urlStr, StringType), Literal.create(partToExtract, StringType),
+          Literal.create(key, StringType))), expected)
+    }
+
+    checkParseUrl("spark.apache.org", "http://spark.apache.org/path?query=1", "HOST")
+    checkParseUrl("/path", "http://spark.apache.org/path?query=1", "PATH")
+    checkParseUrl("query=1", "http://spark.apache.org/path?query=1", "QUERY")
+    checkParseUrl("Ref", "http://spark.apache.org/path?query=1#Ref", "REF")
+    checkParseUrl("http", "http://spark.apache.org/path?query=1", "PROTOCOL")
+    checkParseUrl("/path?query=1", "http://spark.apache.org/path?query=1", "FILE")
+    checkParseUrl("spark.apache.org:8080", "http://spark.apache.org:8080/path?query=1", "AUTHORITY")
+    checkParseUrl("userinfo", "http://useri...@spark.apache.org/path?query=1", "USERINFO")
+    checkParseUrlWithKey("1", "http://spark.apache.org/path?query=1", "QUERY", "query")
+
+    // Null checking
+    checkParseUrl(null, null, "HOST")
+    checkParseUrl(null, "http://spark.apache.org/path?query=1", null)
+    checkParseUrl(null, null, null)
+    checkParseUrl(null, "test", "HOST")
+    checkParseUrl(null, "http://spark.apache.org/path?query=1", "NO")
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "HOST", "query")
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "QUERY", "quer")
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "QUERY", null)
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "QUERY", "")
+
+    // exceptional cases
+    intercept[java.util.regex.PatternSyntaxException] {
--- End diff --

In other words, Spark of this PR runs the execution for that problematic 
parameter while Hive does not.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
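For readers of this thread, a quick sketch of what the function under review
does once available (assuming a Spark 2.0+ build where this PR's parse_url has
been merged; the app name and master below are placeholders):

import org.apache.spark.sql.SparkSession

object ParseUrlSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("parse-url-sketch").master("local[*]").getOrCreate()

    // Each call extracts one component of the URL; the optional third
    // argument selects a single query parameter.
    spark.sql(
      "SELECT " +
        "parse_url('http://spark.apache.org/path?query=1', 'HOST') AS host, " +
        "parse_url('http://spark.apache.org/path?query=1', 'PATH') AS path, " +
        "parse_url('http://spark.apache.org/path?query=1', 'QUERY', 'query') AS q"
    ).show()
    // Expected: host = spark.apache.org, path = /path, q = 1

    spark.stop()
  }
}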



[GitHub] spark pull request #14008: [SPARK-16281][SQL] Implement parse_url SQL functi...

2016-07-02 Thread dongjoon-hyun
Github user dongjoon-hyun commented on a diff in the pull request:

https://github.com/apache/spark/pull/14008#discussion_r69384995
  
--- Diff: sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/StringExpressionsSuite.scala ---
+    // exceptional cases
+    intercept[java.util.regex.PatternSyntaxException] {
--- End diff --

Thank you, @janplus .


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark pull request #14008: [SPARK-16281][SQL] Implement parse_url SQL functi...

2016-07-03 Thread cloud-fan
Github user cloud-fan commented on a diff in the pull request:

https://github.com/apache/spark/pull/14008#discussion_r69388219
  
--- Diff: sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/StringExpressionsSuite.scala ---
+    // exceptional cases
+    intercept[java.util.regex.PatternSyntaxException] {
--- End diff --

I think it's fine to throw the exception at executor side, no need to 
specially handle literal here.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark pull request #14008: [SPARK-16281][SQL] Implement parse_url SQL functi...

2016-07-03 Thread janplus
Github user janplus commented on a diff in the pull request:

https://github.com/apache/spark/pull/14008#discussion_r69401574
  
--- Diff: sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/StringExpressionsSuite.scala ---
+    // exceptional cases
+    intercept[java.util.regex.PatternSyntaxException] {
--- End diff --

OK, @cloud-fan 


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
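The exception discussed in this thread arises because the key argument is
compiled into a regular expression; a standalone sketch of that failure mode
(the "(&|^)key=([^&]*)" pattern shape is an assumption based on this review
thread, not the exact production code):

import java.util.regex.{Pattern, PatternSyntaxException}

object KeyRegexSketch {
  def main(args: Array[String]): Unit = {
    val key = "???"  // unescaped regex metacharacters
    try {
      // Compiling the lookup pattern fails before any URL is matched.
      Pattern.compile("(&|^)" + key + "=([^&]*)")
    } catch {
      case e: PatternSyntaxException =>
        println(s"key is not a valid regex: ${e.getMessage}")
    }
  }
}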



Re: Hamburg Apache Spark Meetup

2015-02-25 Thread Petar Zecevic


Please add the Zagreb Meetup group, too.

http://www.meetup.com/Apache-Spark-Zagreb-Meetup/

Thanks!

On 18.2.2015. 19:46, Johan Beisser wrote:

If you could also add the Hamburg Apache Spark Meetup, I'd appreciate it.

http://www.meetup.com/Hamburg-Apache-Spark-Meetup/

On Tue, Feb 17, 2015 at 5:08 PM, Matei Zaharia matei.zaha...@gmail.com wrote:

Thanks! I've added you.

Matei


On Feb 17, 2015, at 4:06 PM, Ralph Bergmann | the4thFloor.eu 
ra...@the4thfloor.eu wrote:

Hi,


there is a small Spark Meetup group in Berlin, Germany :-)
http://www.meetup.com/Berlin-Apache-Spark-Meetup/

Please add this group to the Meetups list at
https://spark.apache.org/community.html


Ralph

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org


-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org




-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



[jira] [Updated] (SPARK-19546) Every mail to u...@spark.apache.org is getting blocked

2017-02-10 Thread Shivam Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-19546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivam Sharma updated SPARK-19546:
--
Description: 
Each time I send mail to u...@spark.apache.org, I get an email from 
yahoo-inc saying that "tylerchap...@yahoo-inc.com is no longer with Yahoo! Inc".


  was:
Each time I am sending mail to  u...@spark.apache.org I am getting email from 
yahoo-inc that "tylerchap...@yahoo-inc.com is no longer with Yahoo! Inc".

P

Summary: Every mail to u...@spark.apache.org is getting blocked  (was: 
Every mail to u...@spark.apache.org is blocked)

> Every mail to u...@spark.apache.org is getting blocked
> --
>
> Key: SPARK-19546
> URL: https://issues.apache.org/jira/browse/SPARK-19546
> Project: Spark
>  Issue Type: IT Help
>  Components: Project Infra
>Affects Versions: 2.1.0
>Reporter: Shivam Sharma
>Priority: Minor
>
> Each time I send mail to u...@spark.apache.org, I get an email from 
> yahoo-inc saying that "tylerchap...@yahoo-inc.com is no longer with Yahoo! Inc".



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[GitHub] spark pull request #14008: [SPARK-16281][SQL] Implement parse_url SQL functi...

2016-07-01 Thread dongjoon-hyun
Github user dongjoon-hyun commented on a diff in the pull request:

https://github.com/apache/spark/pull/14008#discussion_r69266712
  
--- Diff: sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/StringExpressionsSuite.scala ---
@@ -725,4 +725,43 @@ class StringExpressionsSuite extends SparkFunSuite with ExpressionEvalHelper {
     checkEvaluation(FindInSet(Literal("abf"), Literal("abc,b,ab,c,def")), 0)
     checkEvaluation(FindInSet(Literal("ab,"), Literal("abc,b,ab,c,def")), 0)
   }
+
+  test("ParseUrl") {
+    def checkParseUrl(expected: String, urlStr: String, partToExtract: String): Unit = {
+      checkEvaluation(
+        ParseUrl(Literal.create(urlStr, StringType), Literal.create(partToExtract, StringType)),
+        expected)
+    }
+    def checkParseUrlWithKey(expected: String, urlStr: String,
+      partToExtract: String, key: String): Unit = {
+      checkEvaluation(
+        ParseUrl(Literal.create(urlStr, StringType), Literal.create(partToExtract, StringType),
+                 Literal.create(key, StringType)), expected)
+    }
+
+    checkParseUrl("spark.apache.org", "http://spark.apache.org/path?query=1", "HOST")
+    checkParseUrl("/path", "http://spark.apache.org/path?query=1", "PATH")
+    checkParseUrl("query=1", "http://spark.apache.org/path?query=1", "QUERY")
+    checkParseUrl("Ref", "http://spark.apache.org/path?query=1#Ref", "REF")
+    checkParseUrl("http", "http://spark.apache.org/path?query=1", "PROTOCOL")
+    checkParseUrl("/path?query=1", "http://spark.apache.org/path?query=1", "FILE")
+    checkParseUrl("spark.apache.org:8080", "http://spark.apache.org:8080/path?query=1", "AUTHORITY")
+    checkParseUrl("jian", "http://j...@spark.apache.org/path?query=1", "USERINFO")
+    checkParseUrlWithKey("1", "http://spark.apache.org/path?query=1", "QUERY", "query")
+
+    // Null checking
+    checkParseUrl(null, null, "HOST")
+    checkParseUrl(null, "http://spark.apache.org/path?query=1", null)
+    checkParseUrl(null, null, null)
+    checkParseUrl(null, "test", "HOST")
+    checkParseUrl(null, "http://spark.apache.org/path?query=1", "NO")
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "HOST", "query")
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "QUERY", "quer")
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "QUERY", null)
--- End diff --

Could you add exceptional cases by using the following statement?
```
intercept[AnalysisException] {
  ...
}
```


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark pull request #14008: [SPARK-16281][SQL] Implement parse_url SQL functi...

2016-07-01 Thread dongjoon-hyun
Github user dongjoon-hyun commented on a diff in the pull request:

https://github.com/apache/spark/pull/14008#discussion_r69266361
  
--- Diff: sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/StringExpressionsSuite.scala ---
+    // arguments checking
+    assert(ParseUrl(Literal("1")).checkInputDataTypes().isFailure)
+    assert(ParseUrl(Literal("1"), Literal("2"),
+            Literal("3"), Literal("4")).checkInputDataTypes().isFailure)
--- End diff --

nit: 2-indentation.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark pull request #14008: [SPARK-16281][SQL] Implement parse_url SQL functi...

2016-07-01 Thread janplus
Github user janplus commented on a diff in the pull request:

https://github.com/apache/spark/pull/14008#discussion_r69297640
  
--- Diff: sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/StringExpressionsSuite.scala ---
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "QUERY", null)
--- End diff --

I am not sure. Is there any exceptional case?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark pull request #14008: [SPARK-16281][SQL] Implement parse_url SQL functi...

2016-07-01 Thread cloud-fan
Github user cloud-fan commented on a diff in the pull request:

https://github.com/apache/spark/pull/14008#discussion_r69303124
  
--- Diff: sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/StringExpressionsSuite.scala ---
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "QUERY", null)
--- End diff --

e.g. invalid url, invalid `part`


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark pull request #14008: [SPARK-16281][SQL] Implement parse_url SQL functi...

2016-07-01 Thread janplus
Github user janplus commented on a diff in the pull request:

https://github.com/apache/spark/pull/14008#discussion_r69307883
  
--- Diff: sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/StringExpressionsSuite.scala ---
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "QUERY", null)
--- End diff --

Oh sorry, I missed the point.





[GitHub] spark pull request #14008: [SPARK-16281][SQL] Implement parse_url SQL functi...

2016-07-01 Thread janplus
Github user janplus commented on a diff in the pull request:

https://github.com/apache/spark/pull/14008#discussion_r69323801
  
--- Diff: sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/StringExpressionsSuite.scala ---
@@ -725,4 +725,43 @@ class StringExpressionsSuite extends SparkFunSuite with ExpressionEvalHelper {
     checkEvaluation(FindInSet(Literal("abf"), Literal("abc,b,ab,c,def")), 0)
     checkEvaluation(FindInSet(Literal("ab,"), Literal("abc,b,ab,c,def")), 0)
   }
+
+  test("ParseUrl") {
+    def checkParseUrl(expected: String, urlStr: String, partToExtract: String): Unit = {
+      checkEvaluation(
+        ParseUrl(Literal.create(urlStr, StringType), Literal.create(partToExtract, StringType)),
+        expected)
+    }
+    def checkParseUrlWithKey(expected: String, urlStr: String,
+        partToExtract: String, key: String): Unit = {
+      checkEvaluation(
+        ParseUrl(Literal.create(urlStr, StringType), Literal.create(partToExtract, StringType),
+          Literal.create(key, StringType)), expected)
+    }
+
+    checkParseUrl("spark.apache.org", "http://spark.apache.org/path?query=1", "HOST")
+    checkParseUrl("/path", "http://spark.apache.org/path?query=1", "PATH")
+    checkParseUrl("query=1", "http://spark.apache.org/path?query=1", "QUERY")
+    checkParseUrl("Ref", "http://spark.apache.org/path?query=1#Ref", "REF")
+    checkParseUrl("http", "http://spark.apache.org/path?query=1", "PROTOCOL")
+    checkParseUrl("/path?query=1", "http://spark.apache.org/path?query=1", "FILE")
+    checkParseUrl("spark.apache.org:8080", "http://spark.apache.org:8080/path?query=1", "AUTHORITY")
+    checkParseUrl("jian", "http://jian@spark.apache.org/path?query=1", "USERINFO")
+    checkParseUrlWithKey("1", "http://spark.apache.org/path?query=1", "QUERY", "query")
+
+    // Null checking
+    checkParseUrl(null, null, "HOST")
+    checkParseUrl(null, "http://spark.apache.org/path?query=1", null)
+    checkParseUrl(null, null, null)
+    checkParseUrl(null, "test", "HOST")
+    checkParseUrl(null, "http://spark.apache.org/path?query=1", "NO")
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "HOST", "query")
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "QUERY", "quer")
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "QUERY", null)
--- End diff --

Since an invalid url and an invalid `part` just produce a `null` result, I wonder in what circumstances an exception would actually be thrown?
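
For context on why no exception surfaces for those two cases: the usual pattern, sketched below under the assumption that the expression catches the parse failure itself (`parseUrlSketch` is a hypothetical helper, not the PR's code), maps both failure modes to `null`:

```scala
import java.net.{MalformedURLException, URL}

// Rough sketch: a malformed url is caught and mapped to null, and an
// unrecognized part falls through to a default case, so neither input
// ever surfaces as an exception to the caller.
def parseUrlSketch(urlStr: String, part: String): String = {
  val url =
    try new URL(urlStr)
    catch { case _: MalformedURLException => return null }
  part match {
    case "HOST"  => url.getHost
    case "PATH"  => url.getPath
    case "QUERY" => url.getQuery
    case _       => null  // invalid part -> null, no exception
  }
}
```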





[GitHub] spark pull request #14008: [SPARK-16281][SQL] Implement parse_url SQL functi...

2016-07-01 Thread dongjoon-hyun
Github user dongjoon-hyun commented on a diff in the pull request:

https://github.com/apache/spark/pull/14008#discussion_r69350632
  
--- Diff: sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/StringExpressionsSuite.scala ---
@@ -725,4 +725,43 @@ class StringExpressionsSuite extends SparkFunSuite with ExpressionEvalHelper {
     checkEvaluation(FindInSet(Literal("abf"), Literal("abc,b,ab,c,def")), 0)
     checkEvaluation(FindInSet(Literal("ab,"), Literal("abc,b,ab,c,def")), 0)
   }
+
+  test("ParseUrl") {
+    def checkParseUrl(expected: String, urlStr: String, partToExtract: String): Unit = {
+      checkEvaluation(
+        ParseUrl(Literal.create(urlStr, StringType), Literal.create(partToExtract, StringType)),
+        expected)
+    }
+    def checkParseUrlWithKey(expected: String, urlStr: String,
+        partToExtract: String, key: String): Unit = {
+      checkEvaluation(
+        ParseUrl(Literal.create(urlStr, StringType), Literal.create(partToExtract, StringType),
+          Literal.create(key, StringType)), expected)
+    }
+
+    checkParseUrl("spark.apache.org", "http://spark.apache.org/path?query=1", "HOST")
+    checkParseUrl("/path", "http://spark.apache.org/path?query=1", "PATH")
+    checkParseUrl("query=1", "http://spark.apache.org/path?query=1", "QUERY")
+    checkParseUrl("Ref", "http://spark.apache.org/path?query=1#Ref", "REF")
+    checkParseUrl("http", "http://spark.apache.org/path?query=1", "PROTOCOL")
+    checkParseUrl("/path?query=1", "http://spark.apache.org/path?query=1", "FILE")
+    checkParseUrl("spark.apache.org:8080", "http://spark.apache.org:8080/path?query=1", "AUTHORITY")
+    checkParseUrl("jian", "http://jian@spark.apache.org/path?query=1", "USERINFO")
+    checkParseUrlWithKey("1", "http://spark.apache.org/path?query=1", "QUERY", "query")
+
+    // Null checking
+    checkParseUrl(null, null, "HOST")
+    checkParseUrl(null, "http://spark.apache.org/path?query=1", null)
+    checkParseUrl(null, null, null)
+    checkParseUrl(null, "test", "HOST")
+    checkParseUrl(null, "http://spark.apache.org/path?query=1", "NO")
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "HOST", "query")
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "QUERY", "quer")
+    checkParseUrlWithKey(null, "http://spark.apache.org/path?query=1", "QUERY", null)
--- End diff --

Try this one.
```
SELECT parse_url('http://spark/?','QUERY', '???')
```
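
What makes this call different: with three arguments, the key is spliced into a regular expression before matching, and `'???'` is not a valid pattern. A small sketch of that failure mode; the `"(&|^)"` prefix and `"=([^&]*)"` suffix are an assumption about the implementation, not quoted from it:

```scala
import java.util.regex.{Pattern, PatternSyntaxException}

object InvalidKeyDemo {
  // The key is embedded in a regex of roughly this shape to pull
  // "key=value" pairs out of the query string.
  def keyPattern(key: String): Pattern =
    Pattern.compile("(&|^)" + key + "=([^&]*)")

  def main(args: Array[String]): Unit = {
    keyPattern("query")    // fine: matches "query=1"
    try keyPattern("???")  // leaves a dangling '?' metacharacter
    catch {
      case e: PatternSyntaxException => println("throws: " + e.getDescription)
    }
  }
}
```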




