[jira] [Updated] (HUDI-1551) Support Partition with BigDecimal/Integer field

2021-04-07 Thread Chanh Le (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-1551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chanh Le updated HUDI-1551:
---
Description: 
In my data the time indicator field is a BigDecimal/Integer; because the data is 
trading-related, the records need more precision than usual.

I would like to add support for partitioning on this field type in 
TimestampBasedKeyGenerator.

 

  was:
In my data the time indicator field is a BigDecimal; because the data is 
trading-related, the records need more precision than usual.

I would like to add support for partitioning on this field type in 
TimestampBasedKeyGenerator.

 


> Support Partition with BigDecimal/Integer field
> ---
>
> Key: HUDI-1551
> URL: https://issues.apache.org/jira/browse/HUDI-1551
> Project: Apache Hudi
>  Issue Type: New Feature
>  Components: newbie
>Reporter: Chanh Le
>Priority: Trivial
>
> In my data the time indicator field is a BigDecimal/Integer; because the data 
> is trading-related, the records need more precision than usual.
> I would like to add support for partitioning on this field type in 
> TimestampBasedKeyGenerator.
>  
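
To make the request concrete, here is a minimal sketch (in Scala, not Hudi's 
actual implementation) of the conversion TimestampBasedKeyGenerator would need 
to perform: the numeric time field is normalized to epoch milliseconds and 
formatted into a partition path. Interpreting the BigDecimal as fractional 
epoch seconds is an assumption for illustration.

{code}
import java.time.{Instant, ZoneOffset}
import java.time.format.DateTimeFormatter

val partitionFormat = DateTimeFormatter.ofPattern("yyyy/MM/dd").withZone(ZoneOffset.UTC)

// Assumption: the BigDecimal field holds fractional epoch seconds
// (the extra digits carry the trading-data precision).
def partitionPath(ts: java.math.BigDecimal): String = {
  val epochMillis = ts.multiply(new java.math.BigDecimal(1000)).longValue()
  partitionFormat.format(Instant.ofEpochMilli(epochMillis))
}

partitionPath(new java.math.BigDecimal("1611561600.123456"))  // "2021/01/25"
{code}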



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HUDI-1551) Support Partition with BigDecimal/Integer field

2021-04-07 Thread Chanh Le (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-1551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chanh Le updated HUDI-1551:
---
Fix Version/s: (was: 0.7.0)

> Support Partition with BigDecimal/Integer field
> ---
>
> Key: HUDI-1551
> URL: https://issues.apache.org/jira/browse/HUDI-1551
> Project: Apache Hudi
>  Issue Type: New Feature
>  Components: newbie
>Reporter: Chanh Le
>Priority: Trivial
>
> In my data the time indicator field is a BigDecimal; because the data is 
> trading-related, the records need more precision than usual.
> I would like to add support for partitioning on this field type in 
> TimestampBasedKeyGenerator.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HUDI-1551) Support Partition with BigDecimal/Integer field

2021-04-07 Thread Chanh Le (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-1551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chanh Le updated HUDI-1551:
---
Summary: Support Partition with BigDecimal/Integer field  (was: Support 
Partition with BigDecimal field)

> Support Partition with BigDecimal/Integer field
> ---
>
> Key: HUDI-1551
> URL: https://issues.apache.org/jira/browse/HUDI-1551
> Project: Apache Hudi
>  Issue Type: New Feature
>  Components: newbie
>Reporter: Chanh Le
>Priority: Trivial
> Fix For: 0.7.0
>
>
> In my data the time indicator field is a BigDecimal; because the data is 
> trading-related, the records need more precision than usual.
> I would like to add support for partitioning on this field type in 
> TimestampBasedKeyGenerator.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HUDI-1551) Support Partition with BigDecimal field

2021-01-25 Thread Chanh Le (Jira)
Chanh Le created HUDI-1551:
--

 Summary: Support Partition with BigDecimal field
 Key: HUDI-1551
 URL: https://issues.apache.org/jira/browse/HUDI-1551
 Project: Apache Hudi
  Issue Type: New Feature
  Components: newbie
Reporter: Chanh Le
 Fix For: 0.7.0


In my data the time indicator field is a BigDecimal; because the data is 
trading-related, the records need more precision than usual.

I would like to add support for partitioning on this field type in 
TimestampBasedKeyGenerator.

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (ZEPPELIN-1723) Math formula support library path error

2016-11-28 Thread Chanh Le (JIRA)
Chanh Le created ZEPPELIN-1723:
--

 Summary: Math formula support library path error
 Key: ZEPPELIN-1723
 URL: https://issues.apache.org/jira/browse/ZEPPELIN-1723
 Project: Zeppelin
  Issue Type: Bug
  Components: front-end
Affects Versions: 0.7.0
Reporter: Chanh Le


I set ZEPPELIN_SERVER_CONTEXT_PATH to /zeppelin/

and this is what happens after I do that:
!https://camo.githubusercontent.com/586205cd96d380676754968157d0fe78fafdc78b/687474703a2f2f692e696d6775722e636f6d2f444531556769782e6a7067!

It works correctly without ZEPPELIN_SERVER_CONTEXT_PATH set.
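
For reference, a minimal sketch of how this context path is set, assuming the 
standard conf/zeppelin-env.sh:

{noformat}
# conf/zeppelin-env.sh
export ZEPPELIN_SERVER_CONTEXT_PATH=/zeppelin/
{noformat}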






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (SPARK-16518) Schema Compatibility of Parquet Data Source

2016-07-30 Thread Chanh Le (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-16518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15400755#comment-15400755
 ] 

Chanh Le commented on SPARK-16518:
--

Is there a patch for this yet?
I am hitting this error too.


> Schema Compatibility of Parquet Data Source
> ---
>
> Key: SPARK-16518
> URL: https://issues.apache.org/jira/browse/SPARK-16518
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.0.0
>Reporter: Xiao Li
>
> Currently, we are not checking the schema compatibility. Different file 
> formats behave differently. This JIRA just summarizes what I observed for 
> parquet data source tables.
> *Scenario 1 Data type mismatch*:
> The existing schema is {{(col1 int, col2 string)}}
> The schema of appending dataset is {{(col1 int, col2 int)}}
> *Case 1*: _when {{spark.sql.parquet.mergeSchema}} is {{false}}_, the error we 
> got:
> {noformat}
> Job aborted due to stage failure: Task 0 in stage 4.0 failed 1 times, most 
> recent failure:
>  Lost task 0.0 in stage 4.0 (TID 4, localhost): java.lang.NullPointerException
>   at 
> org.apache.spark.sql.execution.vectorized.OnHeapColumnVector.getInt(OnHeapColumnVector.java:231)
>   at 
> org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(generated.java:62)
> {noformat}
> *Case 2*: _when {{spark.sql.parquet.mergeSchema}} is {{true}}_, the error we 
> got:
> {noformat}
> Job aborted due to stage failure: Task 0 in stage 3.0 failed 1 times, most 
> recent failure: Lost task 0.0 in stage 3.0 (TID 3, localhost): 
> org.apache.spark.SparkException:
>  Failed merging schema of file 
> file:/private/var/folders/4b/sgmfldk15js406vk7lw5llzwgn/T/spark-4c2f0b69-ee05-4be1-91f0-0e54f89f2308/part-r-0-6b76638c-a624-444c-9479-3c8e894cb65e.snappy.parquet:
> root
>  |-- a: integer (nullable = false)
>  |-- b: string (nullable = true)
> {noformat}
> *Scenario 2 More columns in append dataset*:
> The existing schema is {{(col1 int, col2 string)}}
> The schema of appending dataset is {{(col1 int, col2 string, col3 int)}}
> *Case 1*: _when {{spark.sql.parquet.mergeSchema}} is {{false}}_, the schema 
> of the resultset is {{(col1 int, col2 string)}}.
> *Case 2*: _when {{spark.sql.parquet.mergeSchema}} is {{true}}_, the schema of 
> the resultset is {{(col1 int, col2 string, col3 int)}}.
> *Scenario 3 Less columns in append dataset*:
> The existing schema is {{(col1 int, col2 string)}}
> The schema of appending dataset is {{(col1 int)}}
>*Case 1*: _when {{spark.sql.parquet.mergeSchema}} is {{false}}_, the 
> schema of the resultset is {{(col1 int, col2 string)}}.
>*Case 2*: _when {{spark.sql.parquet.mergeSchema}} is {{true}}_, the schema 
> of the resultset is {{(col1 int)}}.
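
For reference, a minimal sketch that reproduces Scenario 1, assuming a local 
SparkSession and a hypothetical scratch path; the failure surfaces when reading 
the mixed files, not when appending:

{code}
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("parquet-merge-repro").getOrCreate()
import spark.implicits._

val path = "/tmp/parquet-compat-demo"  // hypothetical scratch directory

// Existing data: (col1 int, col2 string)
Seq((1, "a"), (2, "b")).toDF("col1", "col2").write.mode("overwrite").parquet(path)
// Appended data with a type mismatch: (col1 int, col2 int)
Seq((3, 3), (4, 4)).toDF("col1", "col2").write.mode("append").parquet(path)

// The append itself succeeds; scanning the mixed files triggers the errors above.
spark.read.option("mergeSchema", "true").parquet(path).show()
{code}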



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (MESOS-5868) Task is running but not show in UI

2016-07-19 Thread Chanh Le (JIRA)
Chanh Le created MESOS-5868:
---

 Summary: Task is running but not show in UI
 Key: MESOS-5868
 URL: https://issues.apache.org/jira/browse/MESOS-5868
 Project: Mesos
  Issue Type: Bug
  Components: webui
Affects Versions: 0.28.1
 Environment: Centos 6.7
Reporter: Chanh Le


This happens when I restart the master nodes without taking any slaves down.
As you can see, 6 tasks are running, yet the Active Tasks list shows nothing.
!http://imgur.com/a/jmmak| Tasks are running!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MESOS-5868) Task is running but not show in UI

2016-07-19 Thread Chanh Le (JIRA)

 [ 
https://issues.apache.org/jira/browse/MESOS-5868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chanh Le updated MESOS-5868:

Description: 
This happens when I restart the master nodes without taking any slaves down.
As you can see, 6 tasks are running, yet the Active Tasks list shows nothing.
!http://i.imgur.com/UaYqDN1.png| Tasks are running!

  was:
This happens when I restart the master nodes without taking any slaves down.
As you can see, 6 tasks are running, yet the Active Tasks list shows nothing.
!http://imgur.com/a/jmmak| Tasks are running!


> Task is running but not show in UI
> --
>
> Key: MESOS-5868
> URL: https://issues.apache.org/jira/browse/MESOS-5868
> Project: Mesos
>  Issue Type: Bug
>  Components: webui
>Affects Versions: 0.28.1
> Environment: Centos 6.7
>Reporter: Chanh Le
>  Labels: easyfix
>
> This happens when I restart the master nodes without taking any slaves down.
> As you can see, 6 tasks are running, yet the Active Tasks list shows nothing.
> !http://i.imgur.com/UaYqDN1.png| Tasks are running!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (SPARK-7703) Task failure caused by block fetch failure in BlockManager.doGetRemote() when using TorrentBroadcast

2016-05-27 Thread Chanh Le (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-7703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chanh Le updated SPARK-7703:

Comment: was deleted

(was: Any update on that? 
I have the same error too.
java.io.IOException: org.apache.spark.storage.BlockFetchException: Failed to 
fetch block from 1 locations. Most recent failure cause:
https://gist.github.com/giaosudau/3f7087707dcabc53c3b3bf54b0503720)

> Task failure caused by block fetch failure in BlockManager.doGetRemote() when 
> using TorrentBroadcast
> 
>
> Key: SPARK-7703
> URL: https://issues.apache.org/jira/browse/SPARK-7703
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 1.2.1, 1.3.1
> Environment: Red Hat Enterprise Linux Server release 7.0 (Maipo)
> Spark 1.3.1 Release
>Reporter: Hailong Wen
>
> I am from the IBM Platform Symphony team, and we are working to integrate Spark 
> with our EGO to provide a fine-grained dynamic-allocation resource manager. 
> We found a defect in the current implementation of BlockManager.doGetRemote():
> {noformat}
>   private def doGetRemote(blockId: BlockId, asBlockResult: Boolean): 
> Option[Any] = {
> require(blockId != null, "BlockId is null")
> val locations = Random.shuffle(master.getLocations(blockId)) 
> <--- Issue2: locations may be out of date
> for (loc <- locations) {
>   logDebug(s"Getting remote block $blockId from $loc")
>   val data = blockTransferService.fetchBlockSync(
> loc.host, loc.port, loc.executorId, blockId.toString).nioByteBuffer() 
>  <--- Issue1: This statement is not in try/catch
>   if (data != null) {
> if (asBlockResult) {
>   return Some(new BlockResult(
> dataDeserialize(blockId, data),
> DataReadMethod.Network,
> data.limit()))
> } else {
>   return Some(data)
> }
>   }
>   logDebug(s"The value of block $blockId is null")
> }
> logDebug(s"Block $blockId not found")
> None
>   }
> {noformat}
> * Issue 1: Although the block fetch uses "for" to try all available 
> locations, the fetch call is not guarded by a "Try" block. When an exception 
> occurs, this method throws the error directly instead of trying the other 
> block locations, and the uncaught exception causes task failure.
> * Issue 2: The "locations" list is acquired once before fetching; in a 
> dynamic allocation environment, the block locations may change in the meantime.
> We hit both issues in our use case, where executors exit after all their 
> assigned tasks are done. We *occasionally* get the following error (issue 1):
> {noformat}
> 15/05/13 10:28:35 INFO Executor: Running task 27.0 in stage 0.0 (TID 27)
> 15/05/13 10:28:35 DEBUG Executor: Task 26's epoch is 0
> 15/05/13 10:28:35 DEBUG Executor: Task 28's epoch is 0
> 15/05/13 10:28:35 DEBUG Executor: Task 27's epoch is 0
> 15/05/13 10:28:35 DEBUG BlockManager: Getting local block broadcast_0
> 15/05/13 10:28:35 DEBUG BlockManager: Block broadcast_0 not registered locally
> 15/05/13 10:28:35 INFO TorrentBroadcast: Started reading broadcast variable 0
> 15/05/13 10:28:35 DEBUG TorrentBroadcast: Reading piece broadcast_0_piece0 of 
> broadcast_0
> 15/05/13 10:28:35 DEBUG BlockManager: Getting local block broadcast_0_piece0 
> as bytes
> 15/05/13 10:28:35 DEBUG BlockManager: Block broadcast_0_piece0 not registered 
> locally
> 15/05/13 10:28:35 DEBUG BlockManager: Getting remote block broadcast_0_piece0 
> as bytes
> 15/05/13 10:28:35 DEBUG BlockManager: Getting remote block broadcast_0_piece0 
> from BlockManagerId(c390c311-bd97-4a99-bcb9-b32fd3dede17, sparkbj01, 37599)
> 15/05/13 10:28:35 TRACE NettyBlockTransferService: Fetch blocks from 
> sparkbj01:37599 (executor id c390c311-bd97-4a99-bcb9-b32fd3dede17)
> 15/05/13 10:28:35 DEBUG TransportClientFactory: Creating new connection to 
> sparkbj01/9.111.254.195:37599
> 15/05/13 10:28:35 ERROR RetryingBlockFetcher: Exception while beginning fetch 
> of 1 outstanding blocks 
> java.io.IOException: Failed to connect to sparkbj01/9.111.254.195:37599
>   at 
> org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:191)
>   at 
> org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:156)
>   at 
> org.apache.spark.network.netty.NettyBlockTransferService$$anon$1.createAndStart(NettyBlockTransferService.scala:78)
>   at 
> org.apache.spark.network.shuffle.RetryingBlockFetcher.fetchAllOutstanding(RetryingBlockFetcher.java:140)
>   at 
> org.apache.spark.network.shuffle.RetryingBlockFetcher.start(RetryingBlockFetcher.java:120)
>   at 
> 
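
A self-contained sketch of the retry pattern Issue 1 calls for (in the spirit 
of a fix, not the actual Spark patch): a per-location exception means "move on 
to the next location" rather than failing the whole task. The names here are 
illustrative, not Spark's.

{code}
import scala.util.{Success, Try}

// Try each candidate location in turn; a fetch that throws (e.g. because the
// executor already exited) simply falls through to the next location.
def fetchFromAny[L, B](locations: Seq[L])(fetch: L => B): Option[B] =
  locations.iterator
    .map(loc => Try(fetch(loc)))
    .collectFirst { case Success(data) if data != null => data }
{code}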

[jira] [Commented] (SPARK-7703) Task failure caused by block fetch failure in BlockManager.doGetRemote() when using TorrentBroadcast

2016-05-27 Thread Chanh Le (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15303655#comment-15303655
 ] 

Chanh Le commented on SPARK-7703:
-

Any update on that? 
I have the same error.
java.io.IOException: org.apache.spark.storage.BlockFetchException: Failed to 
fetch block from 1 locations. Most recent failure cause:
https://gist.github.com/giaosudau/3f7087707dcabc53c3b3bf54b0503720

> Task failure caused by block fetch failure in BlockManager.doGetRemote() when 
> using TorrentBroadcast
> 
>
> Key: SPARK-7703
> URL: https://issues.apache.org/jira/browse/SPARK-7703
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 1.2.1, 1.3.1
> Environment: Red Hat Enterprise Linux Server release 7.0 (Maipo)
> Spark 1.3.1 Release
>Reporter: Hailong Wen
>
> I am from the IBM Platform Symphony team, and we are working to integrate Spark 
> with our EGO to provide a fine-grained dynamic-allocation resource manager. 
> We found a defect in the current implementation of BlockManager.doGetRemote():
> {noformat}
>   private def doGetRemote(blockId: BlockId, asBlockResult: Boolean): 
> Option[Any] = {
> require(blockId != null, "BlockId is null")
> val locations = Random.shuffle(master.getLocations(blockId)) 
> <--- Issue2: locations may be out of date
> for (loc <- locations) {
>   logDebug(s"Getting remote block $blockId from $loc")
>   val data = blockTransferService.fetchBlockSync(
> loc.host, loc.port, loc.executorId, blockId.toString).nioByteBuffer() 
>  <--- Issue1: This statement is not in try/catch
>   if (data != null) {
> if (asBlockResult) {
>   return Some(new BlockResult(
> dataDeserialize(blockId, data),
> DataReadMethod.Network,
> data.limit()))
> } else {
>   return Some(data)
> }
>   }
>   logDebug(s"The value of block $blockId is null")
> }
> logDebug(s"Block $blockId not found")
> None
>   }
> {noformat}
> * Issue 1: Although the block fetch uses "for" to try all available 
> locations, the fetch call is not guarded by a "Try" block. When an exception 
> occurs, this method throws the error directly instead of trying the other 
> block locations, and the uncaught exception causes task failure.
> * Issue 2: The "locations" list is acquired once before fetching; in a 
> dynamic allocation environment, the block locations may change in the meantime.
> We hit both issues in our use case, where executors exit after all their 
> assigned tasks are done. We *occasionally* get the following error (issue 1):
> {noformat}
> 15/05/13 10:28:35 INFO Executor: Running task 27.0 in stage 0.0 (TID 27)
> 15/05/13 10:28:35 DEBUG Executor: Task 26's epoch is 0
> 15/05/13 10:28:35 DEBUG Executor: Task 28's epoch is 0
> 15/05/13 10:28:35 DEBUG Executor: Task 27's epoch is 0
> 15/05/13 10:28:35 DEBUG BlockManager: Getting local block broadcast_0
> 15/05/13 10:28:35 DEBUG BlockManager: Block broadcast_0 not registered locally
> 15/05/13 10:28:35 INFO TorrentBroadcast: Started reading broadcast variable 0
> 15/05/13 10:28:35 DEBUG TorrentBroadcast: Reading piece broadcast_0_piece0 of 
> broadcast_0
> 15/05/13 10:28:35 DEBUG BlockManager: Getting local block broadcast_0_piece0 
> as bytes
> 15/05/13 10:28:35 DEBUG BlockManager: Block broadcast_0_piece0 not registered 
> locally
> 15/05/13 10:28:35 DEBUG BlockManager: Getting remote block broadcast_0_piece0 
> as bytes
> 15/05/13 10:28:35 DEBUG BlockManager: Getting remote block broadcast_0_piece0 
> from BlockManagerId(c390c311-bd97-4a99-bcb9-b32fd3dede17, sparkbj01, 37599)
> 15/05/13 10:28:35 TRACE NettyBlockTransferService: Fetch blocks from 
> sparkbj01:37599 (executor id c390c311-bd97-4a99-bcb9-b32fd3dede17)
> 15/05/13 10:28:35 DEBUG TransportClientFactory: Creating new connection to 
> sparkbj01/9.111.254.195:37599
> 15/05/13 10:28:35 ERROR RetryingBlockFetcher: Exception while beginning fetch 
> of 1 outstanding blocks 
> java.io.IOException: Failed to connect to sparkbj01/9.111.254.195:37599
>   at 
> org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:191)
>   at 
> org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:156)
>   at 
> org.apache.spark.network.netty.NettyBlockTransferService$$anon$1.createAndStart(NettyBlockTransferService.scala:78)
>   at 
> org.apache.spark.network.shuffle.RetryingBlockFetcher.fetchAllOutstanding(RetryingBlockFetcher.java:140)
>   at 
> org.apache.spark.network.shuffle.RetryingBlockFetcher.start(RetryingBlockFetcher.java:120)
>   at 
> 

[jira] [Comment Edited] (SPARK-7703) Task failure caused by block fetch failure in BlockManager.doGetRemote() when using TorrentBroadcast

2016-05-27 Thread Chanh Le (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15303655#comment-15303655
 ] 

Chanh Le edited comment on SPARK-7703 at 5/27/16 6:52 AM:
--

Any update on that? 
I have the same error too.
java.io.IOException: org.apache.spark.storage.BlockFetchException: Failed to 
fetch block from 1 locations. Most recent failure cause:
https://gist.github.com/giaosudau/3f7087707dcabc53c3b3bf54b0503720


was (Author: giaosuddau):
Any update on that? 
I have the same error.
java.io.IOException: org.apache.spark.storage.BlockFetchException: Failed to 
fetch block from 1 locations. Most recent failure cause:
https://gist.github.com/giaosudau/3f7087707dcabc53c3b3bf54b0503720

> Task failure caused by block fetch failure in BlockManager.doGetRemote() when 
> using TorrentBroadcast
> 
>
> Key: SPARK-7703
> URL: https://issues.apache.org/jira/browse/SPARK-7703
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 1.2.1, 1.3.1
> Environment: Red Hat Enterprise Linux Server release 7.0 (Maipo)
> Spark 1.3.1 Release
>Reporter: Hailong Wen
>
> I am from the IBM Platform Symphony team, and we are working to integrate Spark 
> with our EGO to provide a fine-grained dynamic-allocation resource manager. 
> We found a defect in the current implementation of BlockManager.doGetRemote():
> {noformat}
>   private def doGetRemote(blockId: BlockId, asBlockResult: Boolean): 
> Option[Any] = {
> require(blockId != null, "BlockId is null")
> val locations = Random.shuffle(master.getLocations(blockId)) 
> <--- Issue2: locations may be out of date
> for (loc <- locations) {
>   logDebug(s"Getting remote block $blockId from $loc")
>   val data = blockTransferService.fetchBlockSync(
> loc.host, loc.port, loc.executorId, blockId.toString).nioByteBuffer() 
>  <--- Issue1: This statement is not in try/catch
>   if (data != null) {
> if (asBlockResult) {
>   return Some(new BlockResult(
> dataDeserialize(blockId, data),
> DataReadMethod.Network,
> data.limit()))
> } else {
>   return Some(data)
> }
>   }
>   logDebug(s"The value of block $blockId is null")
> }
> logDebug(s"Block $blockId not found")
> None
>   }
> {noformat}
> * Issue 1: Although the block fetch uses "for" to try all available 
> locations, the fetch call is not guarded by a "Try" block. When an exception 
> occurs, this method throws the error directly instead of trying the other 
> block locations, and the uncaught exception causes task failure.
> * Issue 2: The "locations" list is acquired once before fetching; in a 
> dynamic allocation environment, the block locations may change in the meantime.
> We hit both issues in our use case, where executors exit after all their 
> assigned tasks are done. We *occasionally* get the following error (issue 1):
> {noformat}
> 15/05/13 10:28:35 INFO Executor: Running task 27.0 in stage 0.0 (TID 27)
> 15/05/13 10:28:35 DEBUG Executor: Task 26's epoch is 0
> 15/05/13 10:28:35 DEBUG Executor: Task 28's epoch is 0
> 15/05/13 10:28:35 DEBUG Executor: Task 27's epoch is 0
> 15/05/13 10:28:35 DEBUG BlockManager: Getting local block broadcast_0
> 15/05/13 10:28:35 DEBUG BlockManager: Block broadcast_0 not registered locally
> 15/05/13 10:28:35 INFO TorrentBroadcast: Started reading broadcast variable 0
> 15/05/13 10:28:35 DEBUG TorrentBroadcast: Reading piece broadcast_0_piece0 of 
> broadcast_0
> 15/05/13 10:28:35 DEBUG BlockManager: Getting local block broadcast_0_piece0 
> as bytes
> 15/05/13 10:28:35 DEBUG BlockManager: Block broadcast_0_piece0 not registered 
> locally
> 15/05/13 10:28:35 DEBUG BlockManager: Getting remote block broadcast_0_piece0 
> as bytes
> 15/05/13 10:28:35 DEBUG BlockManager: Getting remote block broadcast_0_piece0 
> from BlockManagerId(c390c311-bd97-4a99-bcb9-b32fd3dede17, sparkbj01, 37599)
> 15/05/13 10:28:35 TRACE NettyBlockTransferService: Fetch blocks from 
> sparkbj01:37599 (executor id c390c311-bd97-4a99-bcb9-b32fd3dede17)
> 15/05/13 10:28:35 DEBUG TransportClientFactory: Creating new connection to 
> sparkbj01/9.111.254.195:37599
> 15/05/13 10:28:35 ERROR RetryingBlockFetcher: Exception while beginning fetch 
> of 1 outstanding blocks 
> java.io.IOException: Failed to connect to sparkbj01/9.111.254.195:37599
>   at 
> org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:191)
>   at 
> org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:156)
>   at 
> 

[jira] [Commented] (MESOS-4565) slave recovers and attempt to destroy executor's child containers, then begins rejecting task status updates

2016-05-23 Thread Chanh Le (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-4565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297666#comment-15297666
 ] 

Chanh Le commented on MESOS-4565:
-

Any update on this?
I am still hitting this issue.

> slave recovers and attempt to destroy executor's child containers, then 
> begins rejecting task status updates
> 
>
> Key: MESOS-4565
> URL: https://issues.apache.org/jira/browse/MESOS-4565
> Project: Mesos
>  Issue Type: Bug
>  Components: docker
>Affects Versions: 0.26.0
>Reporter: James DeFelice
>  Labels: mesosphere
>
> AFAICT the slave is doing this:
> 1) recovering from some kind of failure
> 2) checking the containers that it pulled from its state store
> 3) complaining about cgroup children hanging off of executor containers
> 4) rejecting task status updates related to the executor container, the first 
> of which in the logs is:
> {code}
> E0130 02:22:21.979852 12683 slave.cpp:2963] Failed to update resources for 
> container 1d965a20-849c-40d8-9446-27cb723220a9 of executor 
> 'd701ab48a0c0f13_k8sm-executor' running task 
> pod.f2dc2c43-c6f7-11e5-ad28-0ad18c5e6c7f on status update for terminal task, 
> destroying container: Container '1d965a20-849c-40d8-9446-27cb723220a9' not 
> found
> {code}
> To be fair, I don't believe that my custom executor is re-registering 
> properly with the slave prior to attempting to send these (failing) status 
> updates. But the slave doesn't complain about that... it complains that it 
> can't find the **container**.
> slave log here:
> https://gist.github.com/jdef/265663461156b7a7ed4e



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10661) Integrate SASI to Cassandra

2016-04-27 Thread Chanh Le (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15261541#comment-15261541
 ] 

Chanh Le commented on CASSANDRA-10661:
--

[~xedin] Thanks, man. You made my day.

> Integrate SASI to Cassandra
> ---
>
> Key: CASSANDRA-10661
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10661
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Pavel Yaskevich
>Assignee: Pavel Yaskevich
>  Labels: sasi
> Fix For: 3.4
>
>
> We have recently released a new secondary index engine 
> (https://github.com/xedin/sasi) built using the SecondaryIndex API; there are 
> still a couple of things to work out regarding 3.x, since it currently 
> targets the 2.0 release. I want to make this an umbrella issue for all of the 
> things related to the integration of SASI, which are also tracked in 
> [sasi_issues|https://github.com/xedin/sasi/issues], into the mainline Cassandra 
> 3.x release.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10661) Integrate SASI to Cassandra

2016-04-27 Thread Chanh Le (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15261528#comment-15261528
 ] 

Chanh Le commented on CASSANDRA-10661:
--

Hi, I am using Cassandra 3.5 and I hit a problem when creating an index:
CREATE CUSTOM INDEX ON bar (fname) USING 
'org.apache.cassandra.db.index.SSTableAttachedSecondaryIndex'
WITH OPTIONS = {
  'analyzer_class':
  'org.apache.cassandra.db.index.sasi.analyzer.NonTokenizingAnalyzer',
  'case_sensitive': 'false'
};

It throws: unable to find custom indexer class 
'org.apache.cassandra.db.index.SSTableAttachedSecondaryIndex'
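
For anyone else hitting this: in Cassandra 3.x the SASI engine ships in-tree 
under a different class name, so the statement above most likely needs to be 
(a sketch assuming the same bar table):

{code}
-- In 3.x the index class is org.apache.cassandra.index.sasi.SASIIndex,
-- and the analyzer package moved in-tree as well.
CREATE CUSTOM INDEX ON bar (fname) USING 'org.apache.cassandra.index.sasi.SASIIndex'
WITH OPTIONS = {
  'analyzer_class': 'org.apache.cassandra.index.sasi.analyzer.NonTokenizingAnalyzer',
  'case_sensitive': 'false'
};
{code}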



> Integrate SASI to Cassandra
> ---
>
> Key: CASSANDRA-10661
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10661
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local Write-Read Paths
>Reporter: Pavel Yaskevich
>Assignee: Pavel Yaskevich
>  Labels: sasi
> Fix For: 3.4
>
>
> We have recently released a new secondary index engine 
> (https://github.com/xedin/sasi) built using the SecondaryIndex API; there are 
> still a couple of things to work out regarding 3.x, since it currently 
> targets the 2.0 release. I want to make this an umbrella issue for all of the 
> things related to the integration of SASI, which are also tracked in 
> [sasi_issues|https://github.com/xedin/sasi/issues], into the mainline Cassandra 
> 3.x release.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)