Re: Binary Serialization of Enums By Name

2019-02-07 Thread Stuart Macdonald
on BinaryConfiguration to specify serialization of enums by name rather than ordinal. Any other ideas would be appreciated. Stuart. On Mon, 4 Feb 2019 at 16:35, Stuart Macdonald wrote: > Hi Mike, > > Thanks for the response. I can’t see how that’s possible with the current > B
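For readers following along, a minimal sketch of where such a switch would live; the by-name flag shown here is hypothetical and does not exist in the current BinaryConfiguration API, which is the point of the question:

    import org.apache.ignite.configuration.{BinaryConfiguration, IgniteConfiguration}

    // Existing API: binary serialization behaviour is configured here.
    val binaryCfg = new BinaryConfiguration()

    // Hypothetical only -- the thread is asking for something along these lines:
    // binaryCfg.setSerializeEnumsByName(true)

    val igniteCfg = new IgniteConfiguration().setBinaryConfiguration(binaryCfg)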

Re: Table Names in Spark Catalog

2018-09-03 Thread Stuart Macdonald
es that requires the creation of table we > > should disallow usage of table outside of `SQL_PUBLIC` > > or usage of `OPTION_SCHEMA`. We should throw proper exception for > > this case. > > > > 2. Create a ticket to support `CREATE TABLE` with custom schem

Re: Table Names in Spark Catalog

2018-08-26 Thread Stuart Macdonald
> > > > > > > When I develop Ignite integration with Spark Data Frame I use > following > > > > abstraction described by Vladimir Ozerov: > > > > > > > > "1) Let's consider Ignite cluster as a single database ("catalog" in > &

Re: Table Names in Spark Catalog

2018-08-22 Thread Stuart Macdonald
roach > having multiple databases would be a very rare case. I believe we should > get rid of this logic and use Ignite schema name as database name in > Spark's catalog. > > Nikolay, what do you think? > > -Val > > On Tue, Aug 21, 2018 at 8:17 AM Stuart Macdonald &

Re: Table Names in Spark Catalog

2018-08-21 Thread Stuart Macdonald
his into account somehow. > > > > -Val > > > > On Mon, Aug 20, 2018 at 6:12 AM Nikolay Izhikov > wrote: > > > Hello, Stuart. > > > > > > Personally, I think we should change the current table naming and return > tables in the form `schema.table`.
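For context, a sketch of what the proposed naming would mean on the Spark side, assuming an IgniteSparkSession (or a SparkSession backed by the Ignite catalog) bound to the variable spark; table and schema names are illustrative, and the second form is the proposal, not current behaviour:

    // Current behaviour: tables are registered in the catalog by bare name.
    spark.sql("SELECT id, name FROM person").show()

    // Proposed behaviour: the Ignite schema acts as the Spark database,
    // so tables are addressed as schema.table.
    spark.sql("SELECT id, name FROM PUBLIC.person").show()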

Table Names in Spark Catalog

2018-08-20 Thread Stuart Macdonald
Igniters, While reviewing the changes for IGNITE-9228 [1,2], Nikolay and I are discussing whether to introduce a change which may impact backwards compatibility; Nikolay suggested we take the discussion to this list. Ignite implements a custom Spark catalog which provides an API by which Spark
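To make the compatibility question concrete, a minimal sketch of how the Ignite-backed catalog is consulted from Spark (the config path is illustrative, and the exact table naming is what the thread is deciding):

    import org.apache.ignite.spark.IgniteSparkSession

    val igniteSession = IgniteSparkSession.builder()
      .appName("ignite-catalog-example")
      .master("local")
      .igniteConfig("config/example-ignite.xml")
      .getOrCreate()

    // Both of these calls are answered by the custom Ignite catalog,
    // so any change to table naming is visible here.
    igniteSession.catalog.listTables().show()
    println(igniteSession.catalog.tableExists("person"))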

[jira] [Created] (IGNITE-9317) Table Names With Special Characters Don't Work in Spark SQL Optimisations

2018-08-19 Thread Stuart Macdonald (JIRA)
Stuart Macdonald created IGNITE-9317: Summary: Table Names With Special Characters Don't Work in Spark SQL Optimisations Key: IGNITE-9317 URL: https://issues.apache.org/jira/browse/IGNITE-9317

Re: Spark SQL Table Name Resolution

2018-08-17 Thread Stuart Macdonald
Hi Dmitriy, thanks - that’s done now, Stuart. On 16 Aug 2018, at 22:23, Dmitriy Setrakyan wrote: Stuart, can you please move the ticket into PATCH_AVAILABLE state? You need to click "Submit Patch" button in Jira. D. On Wed, Aug 15, 2018 at 10:22 AM, Stuart Macdonald wrote:

Re: Spark SQL Table Name Resolution

2018-08-15 Thread Stuart Macdonald
on a call if this isn't clear. https://github.com/apache/ignite/pull/4551 On Thu, Aug 9, 2018 at 2:32 PM, Stuart Macdonald wrote: > Hi Nikolay, yes would be happy to - will likely be early next week. I’ll > go with the approach of adding a new optional field to the Spark data > source provid

Re: Spark SQL Table Name Resolution

2018-08-09 Thread Stuart Macdonald
t to work on this ticket? > > On Tue, 07/08/2018 at 11:13 -0700, Stuart Macdonald wrote: >> Thanks Val, here’s the ticket: >> >> https://issues.apache.org/jira/projects/IGNITE/issues/IGNITE-9228 >> <https://issues.apache.org/jira/projects/IGNITE/issues/IGNITE-9228?filter=allo

Re: Spark SQL Table Name Resolution

2018-08-07 Thread Stuart Macdonald
either a separate SCHEMA_NAME parameter, or something similar to what you suggested in option 3 but with the schema name instead of the cache name. Please feel free to create a ticket. -Val On Tue, Aug 7, 2018 at 9:32 AM Stuart Macdonald wrote: Hello Igniters, The Ignite Spark SQL interface currently takes just

[jira] [Created] (IGNITE-9228) Spark SQL Table Schema Specification

2018-08-07 Thread Stuart Macdonald (JIRA)
Stuart Macdonald created IGNITE-9228: Summary: Spark SQL Table Schema Specification Key: IGNITE-9228 URL: https://issues.apache.org/jira/browse/IGNITE-9228 Project: Ignite Issue Type

Spark SQL Table Name Resolution

2018-08-07 Thread Stuart Macdonald
Hello Igniters, The Ignite Spark SQL interface currently takes just “table name” as a parameter which it uses to supply a Spark dataset with data from the underlying Ignite SQL table with that name. To do this it loops through each cache and finds the first one with the given table name [1].
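For reference, a minimal sketch of the read path being discussed, using the option names from org.apache.ignite.spark.IgniteDataFrameSettings (config path and table name are illustrative, and `spark` is assumed to be an existing SparkSession):

    import org.apache.ignite.spark.IgniteDataFrameSettings._

    // Only the table name is supplied today; the data source then searches the
    // caches for the first SQL table called "person", whatever schema it is in.
    val persons = spark.read
      .format(FORMAT_IGNITE)
      .option(OPTION_CONFIG_FILE, "config/example-ignite.xml")
      .option(OPTION_TABLE, "person")
      .load()

    persons.printSchema()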

IgniteSparkSession Should Copy State on cloneSession()

2018-08-03 Thread Stuart Macdonald
Hello Igniters, The IgniteSparkSession class extends SparkSession and overrides the cloneSession() method. The contract for cloneSession() explicitly states that it should clone all state (i.e. the sharedState and sessionState fields); however, the IgniteSparkSession implementation doesn't clone
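Because SparkSession's clone machinery is not public API, here is only a self-contained toy sketch of the contract being described, not the actual Spark or Ignite code:

    // Toy illustration of the cloneSession() contract: the clone must carry
    // over both shared and per-session state rather than starting from defaults.
    class ToySession(val sharedState: Map[String, String],
                     val sessionState: Map[String, String]) {
      def cloneSession(): ToySession = new ToySession(sharedState, sessionState)
    }

    val original = new ToySession(
      sharedState  = Map("externalCatalog" -> "ignite"),
      sessionState = Map("currentDatabase" -> "PUBLIC"))
    val cloned = original.cloneSession()
    assert(cloned.sessionState == original.sessionState) // fails if state is dropped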

[jira] [Created] (IGNITE-9180) IgniteSparkSession Should Copy State on cloneSession()

2018-08-03 Thread Stuart Macdonald (JIRA)
Stuart Macdonald created IGNITE-9180: Summary: IgniteSparkSession Should Copy State on cloneSession() Key: IGNITE-9180 URL: https://issues.apache.org/jira/browse/IGNITE-9180 Project: Ignite

Re: Spark DataFrames With Cache Key and Value Objects

2018-08-01 Thread Stuart Macdonald
work in exactly the same way with DataFrames and Datasets; we just need to provide proper support for the latter. -Val On Wed, Aug 1, 2018 at 11:52 AM Stuart Macdonald wrote: > Val, > > Happy to clarify my thoughts. Let’s take an example, say we have an Ignite > cache of Person object

Re: Spark DataFrames With Cache Key and Value Objects

2018-08-01 Thread Stuart Macdonald
Val On Wed, Aug 1, 2018 at 12:05 AM Stuart Macdonald wrote: > I believe suggested approach will not work with the Spark SQL > relational optimisations which perform predicate pushdown from Spark > to Ignite. For that to work we need both the key/val and the > relational fields in a d

Re: Spark DataFrames With Cache Key and Value Objects

2018-08-01 Thread Stuart Macdonald
ain/scala/org/apache/ignite/scalar/examples/ScalarCachePopularNumbersExample.scala#L124 >>> >>> On Fri, 27/07/2018 at 15:22 -0700, Valentin Kulichenko wrote: >>>> Stuart, >>>> >>>> _key and _val fields are quite a dirty hack that was added years ago

Re: Spark DataFrames With Cache Key and Value Objects

2018-07-27 Thread Stuart Macdonald
Stuart Macdonald wrote: Val, Yes you can already get access to the cache objects as an RDD or Dataset but you can’t use the Ignite-optimised DataFrames with these mechanisms. Optimised DataFrames have to be passed through Spark SQL’s Catalyst engine to allow for predicate pushdown to Ignite
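To illustrate the distinction, a hedged sketch: a filter on an Ignite-format DataFrame goes through Catalyst and can be pushed down to Ignite, which can be checked in the query plan (`spark` is assumed to be an existing SparkSession; paths and names are illustrative):

    import org.apache.ignite.spark.IgniteDataFrameSettings._
    import org.apache.spark.sql.functions.col

    val persons = spark.read.format(FORMAT_IGNITE)
      .option(OPTION_CONFIG_FILE, "config/example-ignite.xml")
      .option(OPTION_TABLE, "person")
      .load()

    // If pushdown applies, the filter appears in the Ignite relation's scan
    // rather than as a Spark-side Filter node.
    persons.filter(col("age") > 30).explain(true)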

Re: Spark DataFrames With Cache Key and Value Objects

2018-07-27 Thread Stuart Macdonald
nd verified, and there might be certain > pieces missing to fully support the use case. But generally I like these > approaches much more. > > https://spark.apache.org/docs/2.3.1/sql-programming-guide.html#creating-datasets > > -Val > >> On Fri, Jul 27, 20

Re: Spark DataFrames With Cache Key and Value Objects

2018-07-27 Thread Stuart Macdonald
Here’s the ticket: https://issues.apache.org/jira/browse/IGNITE-9108 Stuart. On Friday, 27 July 2018 at 14:19, Nikolay Izhikov wrote: > Sure. > > Please send the ticket number in this thread. > > Fri, 27 Jul 2018, 16:16 Stuart Macdonald (mailto:stu...@stuwee.org)&g

[jira] [Created] (IGNITE-9108) Spark DataFrames With Cache Key and Value Objects

2018-07-27 Thread Stuart Macdonald (JIRA)
Stuart Macdonald created IGNITE-9108: Summary: Spark DataFrames With Cache Key and Value Objects Key: IGNITE-9108 URL: https://issues.apache.org/jira/browse/IGNITE-9108 Project: Ignite

Re: Spark DataFrames With Cache Key and Value Objects

2018-07-27 Thread Stuart Macdonald
approach to the regular key, value caches. Feel free to create a ticket. On Fri, 27/07/2018 at 09:37 +0100, Stuart Macdonald wrote: Ignite Dev Community, Within Ignite-supplied Spark DataFrames, I’d like to propose adding support for _key and _val columns which represent the cache key and value

Spark DataFrames With Cache Key and Value Objects

2018-07-27 Thread Stuart Macdonald
Ignite Dev Community, Within Ignite-supplied Spark DataFrames, I’d like to propose adding support for _key and _val columns which represent the cache key and value objects similar to the current _key/_val column semantics in Ignite SQL. If the cache key or value objects are standard SQL types
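A sketch of the proposed usage only; the _key and _val columns shown here do not exist in the current DataFrame integration, which is exactly what this proposal would add (`spark`, paths, and names as in the earlier sketches):

    import org.apache.ignite.spark.IgniteDataFrameSettings._

    val persons = spark.read.format(FORMAT_IGNITE)
      .option(OPTION_CONFIG_FILE, "config/example-ignite.xml")
      .option(OPTION_TABLE, "person")
      .load()

    // Proposed: expose the cache key and value objects alongside the SQL fields,
    // mirroring the _key/_val columns available in Ignite SQL.
    persons.select("_key", "_val", "name").show()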

Re: Spark DataFrame Partition Ordering Issue

2018-07-24 Thread Stuart Macdonald
to provide a fix? > > On Fri, 20/07/2018 at 19:37 +0300, Nikolay Izhikov wrote: >> Hello, Stuart. >> >> I will investigate this issue and get back to you in a couple of days. >> >> Fri, 20 Jul 2018, 17:59 Stuart Macdonald : >>> Ignite Dev Community, >>

Spark DataFrame Partition Ordering Issue

2018-07-20 Thread Stuart Macdonald
Ignite Dev Community, I’m working with the Ignite 2.4+ Spark SQL DataFrame functionality and have run into what I believe to be a bug where Spark partition information is incorrect for non-trivial Ignite cluster sizes. The partition array returned to Spark via
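A minimal way to observe the partition metadata Spark receives from the Ignite relation (`spark` is assumed to be an existing SparkSession; on a single local node the counts will look fine, since the report concerns multi-node clusters):

    import org.apache.ignite.spark.IgniteDataFrameSettings._

    val df = spark.read.format(FORMAT_IGNITE)
      .option(OPTION_CONFIG_FILE, "config/example-ignite.xml")
      .option(OPTION_TABLE, "person")
      .load()

    // The number and indices of these partitions come from the Ignite data
    // source; this is where the mismatch described above shows up.
    println(df.rdd.getNumPartitions)
    df.rdd.partitions.foreach(p => println(p.index))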