[jira] [Commented] (SPARK-13317) SPARK_LOCAL_IP does not bind on Slaves

2016-02-14 Thread DOAN DuyHai (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-13317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15146693#comment-15146693
 ]

DOAN DuyHai commented on SPARK-13317:
-------------------------------------

To complement this JIRA, I would say that the underlying question is:

*how do you configure Spark to use a public IP address for slaves on a machine with multiple network interfaces?*
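
For reference, a minimal sketch of the per-slave configuration involved (the addresses are hypothetical; SPARK_LOCAL_IP and SPARK_PUBLIC_DNS are the documented knobs, but as the report below shows they do not fully cover the multi-interface case):

{code}
# conf/spark-env.sh on each slave (hypothetical addresses)
# Interface Spark binds its services to:
export SPARK_LOCAL_IP=10.0.1.12
# Hostname/IP advertised to drivers and shown in the web UI:
export SPARK_PUBLIC_DNS=ec2-54-0-0-12.compute-1.amazonaws.com
{code}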

> SPARK_LOCAL_IP does not bind on Slaves
> --------------------------------------
>
> Key: SPARK-13317
> URL: https://issues.apache.org/jira/browse/SPARK-13317
> Project: Spark
>  Issue Type: Bug
> Environment: Linux EC2, different VPC 
>Reporter: Christopher Bourez
>
> SPARK_LOCAL_IP does not bind to the provided IP on slaves.
> When launching a job or a spark-shell from a second network, the IP 
> returned for the slave is still the first IP of the slave. 
> The job therefore fails with the message: 
> Initial job has not accepted any resources; check your cluster UI to ensure 
> that workers are registered and have sufficient resources
> It is not actually a question of resources: the driver simply cannot connect 
> to the slave because it is given the wrong IP.




[jira] [Commented] (SPARK-9435) Java UDFs don't work with GROUP BY expressions

2015-11-12 Thread DOAN DuyHai (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-9435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15001866#comment-15001866
 ]

DOAN DuyHai commented on SPARK-9435:
------------------------------------

Same error for me:

{code:java}
// Register computeDecade() as a SparkSQL function
sqlContext.udf().register("computeDecade",
    (Integer year) -> computeDecade(year), DataTypes.StringType);

final List<Album> albums = Arrays.asList(
    new Album(2000, "1"), new Album(2000, "2"), new Album(2000, "3"));

final JavaRDD<Album> rdd = javaSc.parallelize(albums);
final DataFrame df = sqlContext.createDataFrame(rdd, Album.class);
df.registerTempTable("albums");

final DataFrame dataFrame = sqlContext.sql("SELECT computeDecade(year), count(title) " +
    "FROM albums " +
    "GROUP BY computeDecade(year)");
{code}
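
Possibly relevant for comparison: the same query expressed through the DataFrame API rather than an SQL string. This is only a sketch (it assumes Spark 1.5+, where _functions.callUDF(String, Column...)_ is available); I have not verified whether it hits the same analyzer error:

{code:java}
import static org.apache.spark.sql.functions.*;

// Group by the UDF result using Column expressions instead of SQL text
final DataFrame byDecade = df
    .groupBy(callUDF("computeDecade", col("year")))
    .agg(count(col("title")));
{code}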

> Java UDFs don't work with GROUP BY expressions
> ----------------------------------------------
>
> Key: SPARK-9435
> URL: https://issues.apache.org/jira/browse/SPARK-9435
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 1.4.1
> Environment: All
>Reporter: James Aley
> Attachments: IncMain.java, points.txt
>
>
> If you define a UDF in Java, for example by implementing the UDF1 interface, 
> and then try to use that UDF on a column in both the SELECT and GROUP BY 
> clauses of a query, you'll get an error like this:
> {code}
> "SELECT inc(y),COUNT(DISTINCT x) FROM test_table GROUP BY inc(y)"
> org.apache.spark.sql.AnalysisException: expression 'y' is neither present in 
> the group by, nor is it an aggregate function. Add to group by or wrap in 
> first() if you don't care which value you get.
> {code}
> We put together a minimal reproduction in the attached Java file, which makes 
> use of the data in the attached text file.
> I'm guessing there's some kind of issue with the expressions' equality 
> implementation, so Spark can't tell that those two expressions are the same? 
> If you do the same thing from Scala, it works fine.
> Note for context: we ran into this issue while working around SPARK-9338.
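
For anyone skimming, a minimal sketch of the UDF1-style definition the description above refers to (the name _inc_ and the Long types are assumptions; the actual reproduction is in the attached IncMain.java):

{code:java}
// Hypothetical Java UDF registered via the UDF1 interface
sqlContext.udf().register("inc", new UDF1<Long, Long>() {
    @Override
    public Long call(final Long y) {
        return y + 1;
    }
}, DataTypes.LongType);
{code}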






[jira] [Comment Edited] (SPARK-9435) Java UDFs don't work with GROUP BY expressions

2015-11-12 Thread DOAN DuyHai (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-9435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15002126#comment-15002126
 ]

DOAN DuyHai edited comment on SPARK-9435 at 11/12/15 1:58 PM:
--------------------------------------------------------------

Work-around: *define the UDF using the Scala API instead*

{code:java}
public static final class ComputeDecadeFn extends AbstractFunction1<Integer, String>
        implements Serializable {
    @Override
    public String apply(Integer year) {
        return computeDecade(year);
    }
}

sqlContext.udf().register("computeDecade", new ComputeDecadeFn(),
    JavaApiHelper.getTypeTag(String.class),
    JavaApiHelper.getTypeTag(Integer.class));
{code}

You cannot use a lambda expression here: _AbstractFunction1_ is an abstract class rather than a functional interface, so it cannot be the target of a lambda, and in any case the UDF instance must be serializable.

The _JavaApiHelper.getTypeTag()_ method comes from 
*com.datastax.spark.connector.util.JavaApiHelper*: 
https://github.com/datastax/spark-cassandra-connector/blob/master/spark-cassandra-connector/src/main/scala/com/datastax/spark/connector/util/JavaApiHelper.scala#L35
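
With the UDF registered through the Scala API, the GROUP BY query from my earlier comment then runs without the AnalysisException (same _albums_ temp table as above):

{code:java}
final DataFrame byDecade = sqlContext.sql("SELECT computeDecade(year), count(title) " +
    "FROM albums " +
    "GROUP BY computeDecade(year)");
byDecade.show();
{code}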



> Java UDFs don't work with GROUP BY expressions
> ----------------------------------------------
>
> Key: SPARK-9435
> URL: https://issues.apache.org/jira/browse/SPARK-9435
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 1.4.1
> Environment: All
>Reporter: James Aley
> Attachments: IncMain.java, points.txt
>
>
> If you define a UDF in Java, for example by implementing the UDF1 interface, 
> and then try to use that UDF on a column in both the SELECT and GROUP BY 
> clauses of a query, you'll get an error like this:
> {code}
> "SELECT inc(y),COUNT(DISTINCT x) FROM test_table GROUP BY inc(y)"
> org.apache.spark.sql.AnalysisException: expression 'y' is neither present in 
> the group by, nor is it an aggregate function. Add to group by or wrap in 
> first() if you don't care which value you get.
> {code}
> We put together a minimal reproduction in the attached Java file, which makes 
> use of the data in the attached text file.
> I'm guessing there's some kind of issue with the expressions' equality 
> implementation, so Spark can't tell that those two expressions are the same? 
> If you do the same thing from Scala, it works fine.
> Note for context: we ran into this issue while working around SPARK-9338.


