on BinaryConfiguration to specify serialization of enums by name
rather than ordinal.
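The ordinal-vs-name distinction under discussion can be illustrated with a small standalone sketch (hypothetical names, not the Ignite BinaryConfiguration API itself): serializing an enum by ordinal silently breaks as soon as the declaration order changes, while serializing by name survives reordering.

```java
// Hypothetical illustration (not the Ignite API): why serializing an
// enum by ordinal is fragile compared to serializing by name.
import java.util.List;

public class EnumSerDemo {
    // Version 1 of the enum's constant order.
    static final List<String> V1 = List.of("SMALL", "MEDIUM", "LARGE");
    // Version 2 inserts a new constant, shifting every ordinal.
    static final List<String> V2 = List.of("TINY", "SMALL", "MEDIUM", "LARGE");

    // Reading back by ordinal resolves to the wrong constant once the
    // declaration order changes.
    static String byOrdinal(int ordinal, List<String> constants) {
        return constants.get(ordinal);
    }

    // Reading back by name is stable across reordering.
    static String byName(String name, List<String> constants) {
        if (!constants.contains(name))
            throw new IllegalArgumentException("Unknown constant: " + name);
        return name;
    }

    public static void main(String[] args) {
        // "MEDIUM" written under V1 as ordinal 1 reads back as "SMALL" under V2.
        System.out.println(byOrdinal(1, V1)); // MEDIUM
        System.out.println(byOrdinal(1, V2)); // SMALL
        System.out.println(byName("MEDIUM", V2)); // MEDIUM
    }
}
```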
Any other ideas would be appreciated.
Stuart.
On Mon, 4 Feb 2019 at 16:35, Stuart Macdonald wrote:
> Hi Mike,
>
> Thanks for the response. I can’t see how that’s possible with the current
> B
> > 1. For all write modes that require the creation of a table we
> > should disallow usage of table outside of `SQL_PUBLIC`
> > or usage of `OPTION_SCHEMA`. We should throw proper exception for
> > this case.
> >
> > 2. Create a ticket t
in Spark's catalog.
> > > >
> > > > When I develop Ignite integration with Spark Data Frame I use
> following
> > > > abstraction described by Vladimir Ozerov:
> > > >
> > > > "1) Let's consider Ignite cluster as a single data
to mention that with this approach
> having multiple databases would be a very rare case. I believe we should
> get rid of this logic and use Ignite schema name as database name in
> Spark's catalog.
>
> Nikolay, what do you think?
>
> -Val
>
> On Tue, Aug 21, 2018 at
sure Spark integrations take this into account somehow.
> >
> > -Val
> >
> > On Mon, Aug 20, 2018 at 6:12 AM Nikolay Izhikov
> wrote:
> > > Hello, Stuart.
> > >
> > > Personally, I think we should change the current table naming and return
> tab
Igniters,
While reviewing the changes for IGNITE-9228 [1,2], Nikolay and I are
discussing whether to introduce a change which may impact backwards
compatibility; Nikolay suggested we take the discussion to this list.
Ignite implements a custom Spark catalog which provides an API by which
Spark us
Stuart Macdonald created IGNITE-9317:
Summary: Table Names With Special Characters Don't Work in Spark
SQL Optimisations
Key: IGNITE-9317
URL: https://issues.apache.org/jira/browse/IGNITE
Hi Dmitriy, thanks - that’s done now,
Stuart.
On 16 Aug 2018, at 22:23, Dmitriy Setrakyan wrote:
Stuart, can you please move the ticket into PATCH_AVAILABLE state? You need
to click "Submit Patch" button in Jira.
D.
On Wed, Aug 15, 2018 at 10:22 AM, Stuart Macdonald
wrote:
on a call if this isn't clear.
https://github.com/apache/ignite/pull/4551
On Thu, Aug 9, 2018 at 2:32 PM, Stuart Macdonald wrote:
> Hi Nikolay, yes would be happy to - will likely be early next week. I’ll
> go with the approach of adding a new optional field to the Spark data
> sou
k on this ticket?
>
> On Tue, 07/08/2018 at 11:13 -0700, Stuart Macdonald wrote:
>> Thanks Val, here’s the ticket:
>>
>> https://issues.apache.org/jira/projects/IGNITE/issues/IGNITE-9228
>> <https://issues.apache.org/jira/projects/IGNITE/issues/IGNITE-9228?filter=allo
either separate SCHEMA_NAME parameter, or
similar to what you suggested in option 3 but with schema name instead of
cache name.
Please feel free to create a ticket.
-Val
On Tue, Aug 7, 2018 at 9:32 AM Stuart Macdonald wrote:
Hello Igniters,
The Ignite Spark SQL interface currently takes just
Stuart Macdonald created IGNITE-9228:
Summary: Spark SQL Table Schema Specification
Key: IGNITE-9228
URL: https://issues.apache.org/jira/browse/IGNITE-9228
Project: Ignite
Issue Type
Hello Igniters,
The Ignite Spark SQL interface currently takes just “table name” as a
parameter which it uses to supply a Spark dataset with data from the
underlying Ignite SQL table with that name.
To do this it loops through each cache and finds the first one with the
given table name [1]. This
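The first-match lookup described above can be sketched as follows (names are hypothetical, not the actual Ignite internals); it shows why a table name alone is ambiguous once two schemas hold a table with the same name, and how an explicit schema parameter resolves the ambiguity.

```java
// A minimal sketch (hypothetical names) of why resolving a table by name
// alone is ambiguous: the first cache whose table matches wins, so two
// schemas holding a table with the same name cannot be told apart.
import java.util.List;
import java.util.Optional;

public class TableLookupDemo {
    record CacheInfo(String schema, String table) {}

    static final List<CacheInfo> CACHES = List.of(
        new CacheInfo("SQL_PUBLIC", "PERSON"),
        new CacheInfo("MY_SCHEMA", "PERSON") // same table, different schema
    );

    // First-match lookup by table name only, mirroring the behaviour described.
    static Optional<CacheInfo> findByTable(String table) {
        return CACHES.stream()
            .filter(c -> c.table().equalsIgnoreCase(table))
            .findFirst();
    }

    // Disambiguated lookup with an explicit schema parameter.
    static Optional<CacheInfo> findByTable(String schema, String table) {
        return CACHES.stream()
            .filter(c -> c.schema().equalsIgnoreCase(schema)
                      && c.table().equalsIgnoreCase(table))
            .findFirst();
    }
}
```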
Hello Igniters,
The IgniteSparkSession class extends SparkSession and overrides the
cloneSession() method. The contract for cloneSession() explicitly states
that it should clone all state (i.e. the sharedState and sessionState
fields), however the IgniteSparkSession implementation doesn't clone its
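The contract can be illustrated with a small standalone sketch (hypothetical classes, not the actual Spark API): a correct clone carries the parent's state over into the new session, while a broken clone of the kind described starts from empty defaults.

```java
// Hypothetical sketch of the cloneSession() contract being discussed: a
// clone must carry over the existing session state, not rebuild defaults.
import java.util.HashMap;
import java.util.Map;

public class CloneDemo {
    static class Session {
        final Map<String, String> state;

        Session(Map<String, String> state) {
            this.state = new HashMap<>(state);
        }

        // Correct clone: the new session sees the parent's state.
        Session cloneSession() {
            return new Session(state);
        }

        // Buggy clone (what the report describes): state starts empty.
        Session brokenClone() {
            return new Session(new HashMap<>());
        }
    }
}
```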
Stuart Macdonald created IGNITE-9180:
Summary: IgniteSparkSession Should Copy State on cloneSession()
Key: IGNITE-9180
URL: https://issues.apache.org/jira/browse/IGNITE-9180
Project: Ignite
I'm
sure that would work in exactly the same way with DataFrames and Datasets, we
just need to provide proper support for the latter.
-Val
On Wed, Aug 1, 2018 at 11:52 AM Stuart Macdonald wrote:
> Val,
>
> Happy to clarify my thoughts. Let’s take an example, say we have an Igni
-Val
On Wed, Aug 1, 2018 at 12:05 AM Stuart Macdonald wrote:
> I believe the suggested approach will not work with the Spark SQL
> relational optimisations which perform predicate pushdown from Spark
> to Ignite. For that to work we need both the key/val and the
> relational fields i
ithub.com/apache/ignite/blob/master/examples/src/main/scala/org/apache/ignite/scalar/examples/ScalarCachePopularNumbersExample.scala#L124
>>>
>>> On Fri, 27/07/2018 at 15:22 -0700, Valentin Kulichenko wrote:
>>>> Stuart,
>>>>
>>>> _key and _val fie
Stuart Macdonald wrote:
Val,
Yes you can already get access to the cache objects as an RDD or
Dataset but you can’t use the Ignite-optimised DataFrames with these
mechanisms. Optimised DataFrames have to be passed through Spark SQL’s
Catalyst engine to allow for predicate pushdown to Ignite
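The difference predicate pushdown makes can be sketched in miniature (hypothetical names, not the actual Catalyst or Ignite APIs): without pushdown every row crosses the wire and is filtered on the Spark side; with pushdown the source evaluates the predicate first, so only matching rows are transferred.

```java
// Hypothetical sketch of predicate pushdown: instead of scanning every row
// and filtering on the Spark side, the filter is handed to the data
// source, so only matching rows move across the wire.
import java.util.List;
import java.util.function.Predicate;

public class PushdownDemo {
    record Person(int id, int age) {}

    static final List<Person> SOURCE =
        List.of(new Person(1, 20), new Person(2, 35), new Person(3, 50));

    // Without pushdown: every row is transferred, then filtered locally.
    static int rowsTransferredWithoutPushdown(Predicate<Person> pred) {
        return SOURCE.size(); // full scan regardless of the predicate
    }

    // With pushdown: the source applies the predicate before transfer.
    static int rowsTransferredWithPushdown(Predicate<Person> pred) {
        return (int) SOURCE.stream().filter(pred).count();
    }
}
```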
> Of course, this needs to be tested and verified, and there might be certain
> pieces missing to fully support the use case. But generally I like these
> approaches much more.
>
> https://spark.apache.org/docs/2.3.1/sql-programming-guide.html#creating-datasets
>
Here’s the ticket:
https://issues.apache.org/jira/browse/IGNITE-9108
Stuart.
On Friday, 27 July 2018 at 14:19, Nikolay Izhikov wrote:
> Sure.
>
> Please, send ticket number in this thread.
>
> Fri, 27 July 2018, 16:16 Stuart Macdonald (mailto:stu...@stuwee.org)
Stuart Macdonald created IGNITE-9108:
Summary: Spark DataFrames With Cache Key and Value Objects
Key: IGNITE-9108
URL: https://issues.apache.org/jira/browse/IGNITE-9108
Project: Ignite
same approach to the regular key, value
caches.
Feel free to create a ticket.
On Fri, 27/07/2018 at 09:37 +0100, Stuart Macdonald wrote:
Ignite Dev Community,
Within Ignite-supplied Spark DataFrames, I’d like to propose adding support
for _key and _val columns which represent the cache key and
Ignite Dev Community,
Within Ignite-supplied Spark DataFrames, I’d like to propose adding support
for _key and _val columns which represent the cache key and value objects
similar to the current _key/_val column semantics in Ignite SQL.
If the cache key or value objects are standard SQL types (eg
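A minimal sketch of the column semantics being proposed (hypothetical names, not an actual implementation): the ordinary SQL-visible fields are kept, and the raw key and value objects appear alongside them under the reserved _key/_val column names.

```java
// Hypothetical sketch of the proposal: expose the cache key and value
// objects as extra _key/_val columns next to the ordinary SQL columns,
// mirroring the _key/_val semantics Ignite SQL already has.
import java.util.HashMap;
import java.util.Map;

public class KeyValColumnsDemo {
    // Builds a row containing the SQL-visible fields plus the raw key and
    // value objects under the reserved column names.
    static Map<String, Object> toRow(Object key, Object value,
                                     Map<String, ?> sqlFields) {
        Map<String, Object> row = new HashMap<>(sqlFields);
        row.put("_key", key);
        row.put("_val", value);
        return row;
    }
}
```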
you want to provide a fix?
>
> On Fri, 20/07/2018 at 19:37 +0300, Nikolay Izhikov wrote:
>> Hello, Stuart.
>>
>> I will investigate this issue and return to you in a couple days.
>>
>> Fri, 20 July 2018, 17:59 Stuart Macdonald:
>>> Ignite Dev Community
Ignite Dev Community,
I’m working with the Ignite 2.4+ Spark SQL DataFrame functionality and have run
into what I believe to be a bug where Spark partition information is incorrect
for non-trivial sizes of Ignite clusters.
The partition array returned to Spark via
org.apache.ignite.spark.i