If we adopt a table-per-cache policy, then the table name should be equal to
the cache name, especially when the table is created via SQL.
For complex types, the type name should also be equal to the table name. If the
value type is primitive, then you can still use the table name in SQL and
use the table name as ca…
Dima,
Value type name doesn't necessarily map to the table name. For instance, what
if I have two tables like this? They both have "java.lang.Long" as type
name.
CREATE TABLE t1 (
    pk_id BIGINT PRIMARY KEY,
    val BIGINT
);

CREATE TABLE t2 (
    pk_id BIGINT PRIMARY KEY,
    val BIGINT
);
Vladimir, I am not sure I understand your point. The value type name should
be the table name, no?
On Thu, Feb 16, 2017 at 12:13 AM, Vladimir Ozerov
wrote:
Dima,
At this point we require the following additional data which is outside of
standard SQL:
- Key type
- Value type
- Set of key columns
I do not know yet how we will define these values. At the very least we can
calculate them automatically in some cases. For "keyFieldName" and
"valFieldName"
On Wed, Feb 15, 2017 at 2:41 PM, Alexander Paschenko <
alexander.a.pasche...@gmail.com> wrote:
Folks,
Regarding INSERT semantics in JDBC DML streaming mode - I've left only
INSERT support, as we'd agreed before.
However, the current architecture of the streaming-related internals does not
give any clear way to intercept key duplicates and inform the user -
say, I can't just throw an exception from…
On Wed, Feb 15, 2017 at 4:28 AM, Vladimir Ozerov
wrote:
Vladimir,
Looks good to me.
Pavel,
No worries, it will work exactly as you described: the hidden _key and _val
fields will always be accessible.
Sergi
2017-02-15 15:56 GMT+03:00 Pavel Tupitsyn :
I have no particular opinion on how we should handle _key/_val,
but we certainly need a way to select entire key and value objects via
SqlFieldsQuery,
and this should work without any additional configuration.
We can rename these, turn them into system functions, whatever.
Ignite.NET LINQ provide
Ok, let's put aside current fields configuration, I'll create separate
thread for it. As far as _KEY and _VAL, proposed change is exactly about
mappings:
class QueryEntity {
    ...
    String keyFieldName;
    String valFieldName;
    ...
}
The key thing is that we will not require users to be a
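A self-contained sketch of how such a mapping could behave (a simplified stand-in, not the actual Ignite class): a configured key/value field name resolves to the hidden _KEY/_VAL system columns, so users never have to write those aliases themselves.

```java
public class QueryEntitySketch {
    // Simplified stand-in for the proposed QueryEntity extension.
    static class QueryEntity {
        String keyFieldName;
        String valFieldName;
    }

    /** Resolves a user-visible column name to a system alias where configured. */
    static String resolveColumn(QueryEntity entity, String column) {
        if (column.equalsIgnoreCase(entity.keyFieldName))
            return "_KEY";
        if (column.equalsIgnoreCase(entity.valFieldName))
            return "_VAL";
        return column; // a regular field, no mapping needed
    }

    public static void main(String[] args) {
        QueryEntity e = new QueryEntity();
        e.keyFieldName = "id";      // assumed example names
        e.valFieldName = "person";
        System.out.println(resolveColumn(e, "id"));   // _KEY
        System.out.println(resolveColumn(e, "name")); // name
    }
}
```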
I don't see any improvement here. Usability will only suffer from this
change.
I'd suggest just adding mappings for the system columns like _key, _val, _ver.
Sergi
2017-02-15 13:18 GMT+03:00 Vladimir Ozerov :
I think the whole QueryEntity class requires rework to allow for this
change. I would start with creating a QueryField class which will encapsulate
all the field properties that are currently set through different setters:
class QueryField {
    String name;
    String type;
    String alias;
    boolean …
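A sketch of what such a consolidated field descriptor might look like. The message above is cut off at a boolean flag, so the `notNull` property and the `sqlName()` helper here are assumptions for illustration, not the actual proposal.

```java
public class QueryFieldSketch {
    // Hypothetical consolidated field descriptor replacing the separate setters.
    static class QueryField {
        String name;
        String type;
        String alias;
        boolean notNull; // assumed flag; the original message truncates here

        QueryField(String name, String type, String alias, boolean notNull) {
            this.name = name;
            this.type = type;
            this.alias = alias;
            this.notNull = notNull;
        }

        /** The name the field is exposed under in SQL: the alias if set, else the field name. */
        String sqlName() {
            return alias != null ? alias : name;
        }
    }

    public static void main(String[] args) {
        QueryField f = new QueryField("firstName", "java.lang.String", "FIRST_NAME", false);
        System.out.println(f.sqlName()); // FIRST_NAME
    }
}
```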
Vova,
Agree about the primitive types. However, it is not clear to me how the
mapping from a primitive type to a column name will be supported. Do you
have a design in mind?
D.
On Tue, Feb 14, 2017 at 6:16 AM, Vladimir Ozerov
wrote:
Dima,
This will not work for primitive keys and values as currently the only way
to address them is to use "_KEY" and "_VAL" aliases respectively. For this
reason I would rather postpone UPDATE/DELETE implementation until "_KEY"
and "_VAL" are hidden from public API and some kind of mapping is
int
On Fri, Feb 10, 2017 at 3:36 AM, Vladimir Ozerov
wrote:
On Fri, Feb 10, 2017 at 12:55 AM, Alexander Paschenko <
alexander.a.pasche...@gmail.com> wrote:
On Fri, Feb 10, 2017 at 12:49 AM, Alexander Paschenko <
alexander.a.pasche...@gmail.com> wrote:
In general, the data streamer approach should mostly be used for data-loading
scenarios. The data is usually loaded with INSERTs, which means that the
scenario is already supported and we're free to merge the changes to 1.9.
If you UPDATE or DELETE data in streaming mode, then you are required…
I propose to ship streaming with INSERT support only for now. This is
enough for a multitude of cases and will add value to Ignite 1.9 immediately. We
can think about a correct streaming UPDATE/DELETE architecture separately. It
is a much more difficult thing; we cannot support it in a clean way right now…
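The INSERT-only restriction amounts to a simple statement gate on a streamed connection. A minimal sketch of the idea (a hypothetical helper, not the actual JDBC driver code):

```java
public class StreamingStatementFilter {
    /** Returns true if the statement may run on a streamed connection (INSERT only). */
    static boolean allowedInStreaming(String sql) {
        // Case-insensitive check of the leading keyword.
        return sql.trim().regionMatches(true, 0, "INSERT", 0, 6);
    }

    public static void main(String[] args) {
        System.out.println(allowedInStreaming("INSERT INTO Person VALUES (1, 'Ann')")); // true
        System.out.println(allowedInStreaming("UPDATE Person SET name = 'Bob'"));       // false
    }
}
```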
And to avoid further confusion: UPDATE and DELETE are simply
impossible in streaming mode when the key is not completely defined as
long as data streamer operates with key-value pairs and not just
tuples of named values. That's why we can't do DELETE from Person
WHERE id1 = 5 from prev example with
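The point about partial keys can be shown with a standalone sketch (plain Java, assumed composite key with fields id1/id2, a HashMap standing in for the cache): a streamer-style removal needs a complete key, whereas a predicate like id1 = 5 would first have to be resolved to full keys by a scan.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

public class PartialKeyDelete {
    // Composite key: both parts are needed to address an entry.
    static final class PersonKey {
        final int id1, id2;
        PersonKey(int id1, int id2) { this.id1 = id1; this.id2 = id2; }
        @Override public boolean equals(Object o) {
            return o instanceof PersonKey && ((PersonKey) o).id1 == id1 && ((PersonKey) o).id2 == id2;
        }
        @Override public int hashCode() { return Objects.hash(id1, id2); }
    }

    public static void main(String[] args) {
        Map<PersonKey, String> cache = new HashMap<>();
        cache.put(new PersonKey(5, 1), "Ann");
        cache.put(new PersonKey(5, 2), "Bob");

        // A streamer-style removal works only with a complete key:
        cache.remove(new PersonKey(5, 1)); // removes exactly one entry

        // "DELETE ... WHERE id1 = 5" names no complete key; satisfying it
        // requires scanning for all matching full keys first.
        long matching = cache.keySet().stream().filter(k -> k.id1 == 5).count();
        System.out.println(matching); // 1: an entry with id1 = 5 still remains
    }
}
```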
Dima,
>
> There are several ways to handle it. I would check how other databases
> handle it, maybe we can borrow something. To the least, we should log such
> errors in the log for now.
>
Logging errors would mean introducing some kind of stream receiver to
do that and thus that would be really t
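In spirit, such an interceptor would be a receiver-style hook that records key duplicates instead of throwing. The following is a standalone sketch of that idea only (a plain class with a HashMap, not Ignite's actual StreamReceiver API):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DuplicateLoggingReceiver {
    final Map<Integer, String> cache = new HashMap<>();
    final List<Integer> duplicates = new ArrayList<>();

    /** Applies one streamed entry; records (rather than throws on) PK duplicates. */
    void receive(int key, String value) {
        if (cache.putIfAbsent(key, value) != null)
            duplicates.add(key); // would be a log line in a real receiver
    }

    public static void main(String[] args) {
        DuplicateLoggingReceiver r = new DuplicateLoggingReceiver();
        r.receive(1, "Ann");
        r.receive(2, "Bob");
        r.receive(1, "Eve"); // duplicate primary key: recorded, first value kept
        System.out.println(r.duplicates); // [1]
    }
}
```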
On Thu, Feb 9, 2017 at 1:53 AM, Alexander Paschenko <
alexander.a.pasche...@gmail.com> wrote:
Sergey,
Streaming does not make sense for INSERT FROM SELECT as this pattern does
not match primary use case for streaming (bulk data load to Ignite).
Dima,
No, I suggest that data streamer mode supports full semantic sense of
INSERT (throw an ex if there's a duplicate of PK) optionally and depe
Alexander,
Are you suggesting that currently to execute a simple INSERT for 1 row we
invoke a data streamer on Ignite API? How about an update by a primary key?
Why not execute a simple cache put in either case?
I think we had a separate thread where we agreed that the streamer should
only be tur
Hi Alexander.
What about supporting the statement *INSERT INTO ... SELECT FROM* for
streams? Does it make sense?
On Wed, Feb 8, 2017 at 6:44 PM, Alexander Paschenko <
alexander.a.pasche...@gmail.com> wrote:
Also, it's currently possible to run SELECTs on "streamed"
connections; this is probably odd and should not be released either -
what do you think?
- Alex
2017-02-08 18:00 GMT+03:00 Alexander Paschenko
:
> Hello Igniters,
>
> I'd like to raise few questions regarding data streaming via DML statem