[jira] [Commented] (SPARK-28085) Spark Scala API documentation URLs not working properly in Chrome

2019-08-08 Thread Andrew Leverentz (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-28085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903377#comment-16903377
 ] 

Andrew Leverentz commented on SPARK-28085:
--

In Chrome 76, this issue appears to be resolved.  Thanks to anyone out there 
who submitted bug reports :)

> Spark Scala API documentation URLs not working properly in Chrome
> -
>
> Key: SPARK-28085
> URL: https://issues.apache.org/jira/browse/SPARK-28085
> Project: Spark
>  Issue Type: Documentation
>  Components: Documentation
>Affects Versions: 2.4.3
>Reporter: Andrew Leverentz
>Priority: Minor
>
> In Chrome version 75, URLs in the Scala API documentation are not working 
> properly, which makes them difficult to bookmark.
> For example, URLs like the following get redirected to a generic "root" 
> package page:
> [https://spark.apache.org/docs/latest/api/scala/org/apache/spark/sql/Dataset.html]
> [https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.sql.Dataset]
> Here's the URL that I get redirected to:
> [https://spark.apache.org/docs/latest/api/scala/index.html#package]
> This issue seems to have appeared between versions 74 and 75 of Chrome, but 
> the documentation URLs still work in Safari.  I suspect that this has 
> something to do with security-related changes to how Chrome 75 handles frames 
> and/or redirects.  I've reported this issue to the Chrome team via the 
> in-browser help menu, but I don't have any visibility into their response, so 
> it's not clear whether they'll consider this a bug or "working as intended".



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-28085) Spark Scala API documentation URLs not working properly in Chrome

2019-07-22 Thread Andrew Leverentz (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-28085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16890314#comment-16890314
 ] 

Andrew Leverentz commented on SPARK-28085:
--

This issue persists, more than a month after the Chrome update that caused 
it.  It's not clear whether Google considers it a bug that needs fixing.  I've 
reported the issue to Google, as mentioned above, but if anyone else has a 
better way of contacting the Chrome team, I'd appreciate it if you could get in 
touch with them to find out whether they are aware of this bug and plan to fix 
it.







[jira] [Resolved] (SPARK-28225) Unexpected behavior for Window functions

2019-07-22 Thread Andrew Leverentz (JIRA)


 [ 
https://issues.apache.org/jira/browse/SPARK-28225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Leverentz resolved SPARK-28225.
--
Resolution: Not A Problem

> Unexpected behavior for Window functions
> 
>
> Key: SPARK-28225
> URL: https://issues.apache.org/jira/browse/SPARK-28225
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.4.0
>Reporter: Andrew Leverentz
>Priority: Major
>
> I've noticed some odd behavior when combining the "first" aggregate function 
> with an ordered Window.
> In particular, I'm working with columns created using the syntax
> {code}
> first($"y", ignoreNulls = true).over(Window.orderBy($"x"))
> {code}
> Below, I'm including some code which reproduces this issue in a Databricks 
> notebook.
> *Code:*
> {code:java}
> import org.apache.spark.sql.functions.first
> import org.apache.spark.sql.expressions.Window
> import org.apache.spark.sql.Row
> import org.apache.spark.sql.types.{StructType,StructField,IntegerType}
> val schema = StructType(Seq(
>   StructField("x", IntegerType, false),
>   StructField("y", IntegerType, true),
>   StructField("z", IntegerType, true)
> ))
> val input =
>   spark.createDataFrame(sc.parallelize(Seq(
> Row(101, null, 11),
> Row(102, null, 12),
> Row(103, null, 13),
> Row(203, 24, null),
> Row(201, 26, null),
> Row(202, 25, null)
>   )), schema = schema)
> input.show
> val output = input
>   .withColumn("u1", first($"y", ignoreNulls = true).over(Window.orderBy($"x".asc_nulls_last)))
>   .withColumn("u2", first($"y", ignoreNulls = true).over(Window.orderBy($"x".asc)))
>   .withColumn("u3", first($"y", ignoreNulls = true).over(Window.orderBy($"x".desc_nulls_last)))
>   .withColumn("u4", first($"y", ignoreNulls = true).over(Window.orderBy($"x".desc)))
>   .withColumn("u5", first($"z", ignoreNulls = true).over(Window.orderBy($"x".asc_nulls_last)))
>   .withColumn("u6", first($"z", ignoreNulls = true).over(Window.orderBy($"x".asc)))
>   .withColumn("u7", first($"z", ignoreNulls = true).over(Window.orderBy($"x".desc_nulls_last)))
>   .withColumn("u8", first($"z", ignoreNulls = true).over(Window.orderBy($"x".desc)))
> output.show
> {code}
> *Expectation:*
> Based on my understanding of how ordered-Window and aggregate functions work, 
> the results I expected to see were:
>  * u1 = u2 = constant value of 26
>  * u3 = u4 = constant value of 24
>  * u5 = u6 = constant value of 11
>  * u7 = u8 = constant value of 13
> However, columns u1, u2, u7, and u8 contain some unexpected nulls. 
> *Results:*
> {code:java}
> +---+----+----+----+----+---+---+---+---+----+----+
> |  x|   y|   z|  u1|  u2| u3| u4| u5| u6|  u7|  u8|
> +---+----+----+----+----+---+---+---+---+----+----+
> |203|  24|null|  26|  26| 24| 24| 11| 11|null|null|
> |202|  25|null|  26|  26| 24| 24| 11| 11|null|null|
> |201|  26|null|  26|  26| 24| 24| 11| 11|null|null|
> |103|null|  13|null|null| 24| 24| 11| 11|  13|  13|
> |102|null|  12|null|null| 24| 24| 11| 11|  13|  13|
> |101|null|  11|null|null| 24| 24| 11| 11|  13|  13|
> +---+----+----+----+----+---+---+---+---+----+----+
> {code}






[jira] [Commented] (SPARK-28225) Unexpected behavior for Window functions

2019-07-22 Thread Andrew Leverentz (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-28225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16890310#comment-16890310
 ] 

Andrew Leverentz commented on SPARK-28225:
--

Marco, thanks for the explanation.  In this case, the workaround in Scala is to 
use

{{Window.orderBy($"x").rowsBetween(Window.unboundedPreceding, 
Window.unboundedFollowing)}}

This issue can be marked resolved.
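To illustrate the difference the frame clause makes, here is a small pure-Python sketch (an editorial illustration, not Spark code) of first($"y", ignoreNulls = true) over a window ordered by x, comparing the standard SQL default frame (UNBOUNDED PRECEDING to CURRENT ROW when an ORDER BY is present) with the explicit unbounded frame from the workaround above:

```python
def first_ignore_nulls(values):
    # First non-None value in iteration order, or None if there is none.
    return next((v for v in values if v is not None), None)

def windowed_first(pairs, unbounded_frame):
    # Simulate first(y, ignoreNulls = true).over(Window.orderBy(x)).
    # Default SQL frame: UNBOUNDED PRECEDING .. CURRENT ROW (row i sees
    # rows 0..i); the rowsBetween workaround pins it to the whole partition.
    ordered = sorted(pairs)  # order by x ascending
    return [
        first_ignore_nulls(
            y for _, y in (ordered if unbounded_frame else ordered[: i + 1])
        )
        for i in range(len(ordered))
    ]

# (x, y) pairs from the issue's example data.
pairs = [(101, None), (102, None), (103, None),
         (201, 26), (202, 25), (203, 24)]

# Default frame: the first three rows see only nulls in their frame.
assert windowed_first(pairs, unbounded_frame=False)[:3] == [None] * 3
# Unbounded frame (the rowsBetween workaround): constant 26 everywhere.
assert windowed_first(pairs, unbounded_frame=True) == [26] * 6
```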







[jira] [Comment Edited] (SPARK-28225) Unexpected behavior for Window functions

2019-07-22 Thread Andrew Leverentz (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-28225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16890310#comment-16890310
 ] 

Andrew Leverentz edited comment on SPARK-28225 at 7/22/19 4:54 PM:
---

Marco, thanks for the explanation.  In this case, the solution in Scala is to 
use

{{Window.orderBy($"x").rowsBetween(Window.unboundedPreceding, 
Window.unboundedFollowing)}}

This issue can be marked resolved.


was (Author: alev_etx):
Marco, thanks for the explanation.  In this case, the workaround in Scala is to 
use

{{Window.orderBy($"x").rowsBetween(Window.unboundedPreceding, 
Window.unboundedFollowing)}}

This issue can be marked resolved.







[jira] [Created] (SPARK-28225) Unexpected behavior for Window functions

2019-07-01 Thread Andrew Leverentz (JIRA)
Andrew Leverentz created SPARK-28225:


 Summary: Unexpected behavior for Window functions
 Key: SPARK-28225
 URL: https://issues.apache.org/jira/browse/SPARK-28225
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 2.4.0
Reporter: Andrew Leverentz


I've noticed some odd behavior when combining the "first" aggregate function 
with an ordered Window.

In particular, I'm working with columns created using the syntax
{code}
first($"y", ignoreNulls = true).over(Window.orderBy($"x"))
{code}
Below, I'm including some code which reproduces this issue in a Databricks 
notebook.

*Code:*
{code:java}
import org.apache.spark.sql.functions.first
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{StructType,StructField,IntegerType}

val schema = StructType(Seq(
  StructField("x", IntegerType, false),
  StructField("y", IntegerType, true),
  StructField("z", IntegerType, true)
))

val input =
  spark.createDataFrame(sc.parallelize(Seq(
Row(101, null, 11),
Row(102, null, 12),
Row(103, null, 13),
Row(203, 24, null),
Row(201, 26, null),
Row(202, 25, null)
  )), schema = schema)

input.show

val output = input
  .withColumn("u1", first($"y", ignoreNulls = true).over(Window.orderBy($"x".asc_nulls_last)))
  .withColumn("u2", first($"y", ignoreNulls = true).over(Window.orderBy($"x".asc)))
  .withColumn("u3", first($"y", ignoreNulls = true).over(Window.orderBy($"x".desc_nulls_last)))
  .withColumn("u4", first($"y", ignoreNulls = true).over(Window.orderBy($"x".desc)))
  .withColumn("u5", first($"z", ignoreNulls = true).over(Window.orderBy($"x".asc_nulls_last)))
  .withColumn("u6", first($"z", ignoreNulls = true).over(Window.orderBy($"x".asc)))
  .withColumn("u7", first($"z", ignoreNulls = true).over(Window.orderBy($"x".desc_nulls_last)))
  .withColumn("u8", first($"z", ignoreNulls = true).over(Window.orderBy($"x".desc)))

output.show
{code}
*Expectation:*

Based on my understanding of how ordered-Window and aggregate functions work, 
the results I expected to see were:
 * u1 = u2 = constant value of 26
 * u3 = u4 = constant value of 24
 * u5 = u6 = constant value of 11
 * u7 = u8 = constant value of 13

However, columns u1, u2, u7, and u8 contain some unexpected nulls. 

*Results:*
{code:java}
+---+----+----+----+----+---+---+---+---+----+----+
|  x|   y|   z|  u1|  u2| u3| u4| u5| u6|  u7|  u8|
+---+----+----+----+----+---+---+---+---+----+----+
|203|  24|null|  26|  26| 24| 24| 11| 11|null|null|
|202|  25|null|  26|  26| 24| 24| 11| 11|null|null|
|201|  26|null|  26|  26| 24| 24| 11| 11|null|null|
|103|null|  13|null|null| 24| 24| 11| 11|  13|  13|
|102|null|  12|null|null| 24| 24| 11| 11|  13|  13|
|101|null|  11|null|null| 24| 24| 11| 11|  13|  13|
+---+----+----+----+----+---+---+---+---+----+----+
{code}
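[Editorial note, not part of the original report: the nulls above are consistent with the SQL default window frame. With an ORDER BY and no explicit frame, each row's frame runs from the start of the partition to the current row, so first with ignoreNulls returns null until the column's first non-null value enters the frame. A minimal pure-Python simulation of the u1 column, using assumed helper names:]

```python
def first_ignore_nulls(values):
    # First non-None value, or None if every value in the frame is None.
    return next((v for v in values if v is not None), None)

def default_frame_first(rows, order_key, value_key):
    # first(value, ignoreNulls = true) OVER (ORDER BY order_key) with the
    # default frame UNBOUNDED PRECEDING .. CURRENT ROW.
    ordered = sorted(rows, key=lambda r: r[order_key])
    return {
        r[order_key]: first_ignore_nulls(
            f[value_key] for f in ordered[: i + 1]
        )
        for i, r in enumerate(ordered)
    }

rows = [
    {"x": 101, "y": None}, {"x": 102, "y": None}, {"x": 103, "y": None},
    {"x": 201, "y": 26},   {"x": 202, "y": 25},   {"x": 203, "y": 24},
]

u1 = default_frame_first(rows, "x", "y")
# Rows x = 101..103 precede every non-null y, so u1 is null there,
# matching the results table; from x = 201 onward, u1 is 26.
assert [u1[x] for x in (101, 102, 103)] == [None] * 3
assert [u1[x] for x in (201, 202, 203)] == [26] * 3
```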






[jira] [Updated] (SPARK-28085) Spark Scala API documentation URLs not working properly in Chrome

2019-06-18 Thread Andrew Leverentz (JIRA)


 [ 
https://issues.apache.org/jira/browse/SPARK-28085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Leverentz updated SPARK-28085:
-
Description: 
In Chrome version 75, URLs in the Scala API documentation are not working 
properly, which makes them difficult to bookmark.

For example, URLs like the following get redirected to a generic "root" package 
page:

[https://spark.apache.org/docs/latest/api/scala/org/apache/spark/sql/Dataset.html]

[https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.sql.Dataset]

Here's the URL that I get redirected to:

[https://spark.apache.org/docs/latest/api/scala/index.html#package]

This issue seems to have appeared between versions 74 and 75 of Chrome, but the 
documentation URLs still work in Safari.  I suspect that this has something to 
do with security-related changes to how Chrome 75 handles frames and/or 
redirects.  I've reported this issue to the Chrome team via the in-browser help 
menu, but I don't have any visibility into their response, so it's not clear 
whether they'll consider this a bug or "working as intended".

  was:
In Chrome version 75, URLs in the Scala API documentation are not working 
properly, which makes them difficult to bookmark.

For example, URLs like the following get redirected to a generic "root" package 
page:

[https://spark.apache.org/docs/latest/api/scala/org/apache/spark/sql/Dataset.html]

[https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.sql.Dataset]

Here's the URL that I get :

[https://spark.apache.org/docs/latest/api/scala/index.html#package]

This issue seems to have appeared between versions 74 and 75 of Chrome, but the 
documentation URLs still work in Safari.  I suspect that this has something to 
do with security-related changes to how Chrome 75 handles frames and/or 
redirects.  I've reported this issue to the Chrome team via the in-browser help 
menu, but I don't have any visibility into their response, so it's not clear 
whether they'll consider this a bug or "working as intended".








[jira] [Created] (SPARK-28085) Spark Scala API documentation URLs not working properly in Chrome

2019-06-17 Thread Andrew Leverentz (JIRA)
Andrew Leverentz created SPARK-28085:


 Summary: Spark Scala API documentation URLs not working properly 
in Chrome
 Key: SPARK-28085
 URL: https://issues.apache.org/jira/browse/SPARK-28085
 Project: Spark
  Issue Type: Documentation
  Components: Documentation
Affects Versions: 2.4.3
Reporter: Andrew Leverentz


In Chrome version 75, URLs in the Scala API documentation are not working 
properly, which makes them difficult to bookmark.

For example, URLs like the following get redirected to a generic "root" package 
page:

[https://spark.apache.org/docs/latest/api/scala/org/apache/spark/sql/Dataset.html]

[https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.sql.Dataset]

Here's the URL that I get redirected to:

[https://spark.apache.org/docs/latest/api/scala/index.html#package]

This issue seems to have appeared between versions 74 and 75 of Chrome, but the 
documentation URLs still work in Safari.  I suspect that this has something to 
do with security-related changes to how Chrome 75 handles frames and/or 
redirects.  I've reported this issue to the Chrome team via the in-browser help 
menu, but I don't have any visibility into their response, so it's not clear 
whether they'll consider this a bug or "working as intended".


