Brane, I agree that ideally no direct mapping should exist between types with different sizes or precision. And of course we can introduce a new type which will have the same characteristics on all platforms. The downside is that users will have to learn new types and use them instead of the standard ones (like DateTime in .Net), which is not always possible. Moreover, implicit mappings from .Net to Java types are important for some Ignite internals, like indexing. This is why I think it is better to map .Net DateTime to Java Timestamp. The only situation where data loss is possible is when a timestamp is created in Java but read in .Net. In this corner case we can advise users to use their own types to avoid precision loss.
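To make that corner case concrete, here is a rough Java sketch (illustrative only, not actual Ignite code; it only assumes the 100-nanosecond tick size of .Net DateTime) of how much precision survives when a nanosecond-resolution java.sql.Timestamp is narrowed to .Net DateTime ticks:

import java.sql.Timestamp;

// Not Ignite code: demonstrates the sub-tick precision lost when a Java
// Timestamp (nanosecond resolution) round-trips through .Net DateTime
// (100-nanosecond "ticks").
public class TimestampPrecisionDemo {
    public static void main(String[] args) {
        Timestamp ts = new Timestamp(System.currentTimeMillis());
        ts.setNanos(123_456_789); // full nanosecond precision on the Java side

        int nanos = ts.getNanos();
        int survivingNanos = (nanos / 100) * 100; // what a 100 ns tick can hold
        int lostNanos = nanos % 100;              // silently dropped on a round trip

        System.out.println("Original nanos:   " + nanos);
        System.out.println("After round trip: " + survivingNanos);
        System.out.println("Lost:             " + lostNanos + " ns");
    }
}

At most 99 nanoseconds per value are dropped in that direction, which is why I think the Timestamp-to-DateTime mapping is acceptable as long as the warning is documented.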
Vladimir.

On Tue, Oct 6, 2015 at 3:57 PM, Branko Čibej <[email protected]> wrote:

> On 06.10.2015 12:26, Vladimir Ozerov wrote:
> > Yakov, this could work in .Net where you have real generics. But it
> > will not work in Java in the general case due to type erasure - you
> > simply cannot infer the type.
> >
> > Let's look closely at this:
> > Date Java: 10^-3
> > Timestamp Java: 10^-9
> > DateTime .Net: 10^-7
> >
> > What we see here is that mapping Java Date to .Net DateTime is almost
> > certainly a bad thing because we lose too much data. But interoperating
> > between Timestamp and DateTime is more or less sensible; we lose only
> > tenths of a microsecond.
> >
> > I would suggest the following solution:
> > 1) Fully decouple Date and Timestamp in Java. These are completely
> > different types from the Java perspective, the H2 perspective (see
> > GridH2Date, GridH2Timestamp), any database perspective, etc.
> > 2) Map .Net DateTime to Java Timestamp with a warning about possible
> > precision loss.
>
> From the peanut gallery ... it seems like a really bad idea to design a
> marshalling format based on what some language standard library happens
> to provide. IMO, the way to do this is to define your own max precision
> timestamp type, marshal it at full precision, and provide conversions to
> standard types. That way your users can choose to use your type which
> provides full precision on all platforms, or decide to use the standard
> types with the potential loss of precision that entails; but it becomes
> *their* decision, not a limitation set by the library.
>
> -- Brane
>
> > On Tue, Oct 6, 2015 at 1:08 PM, Yakov Zhdanov <[email protected]> wrote:
> >
> >> 2015-10-06 12:45 GMT+03:00 Dmitriy Setrakyan <[email protected]>:
> >>
> >>> On Tue, Oct 6, 2015 at 2:42 AM, Vladimir Ozerov <[email protected]> wrote:
> >>>
> >>>> This doesn't answer the question. First, Java Timestamp has greater
> >>>> precision than .Net DateTime, so silent data loss could happen in
> >>>> this case as well. Second, "use timestamp" is defined at the class
> >>>> level. It means we cannot handle a class which has both Date and
> >>>> Timestamp fields.
> >>>>
> >>>> This looks like a bug and/or invalid design to me.
> >>>>
> >>> Agree, the current design is not ideal. Vladimir, do you have other
> >>> suggestions?
> >>
> >> How about writing at the maximum precision possible (+ a proper type
> >> ID) and interpreting the binary data on read depending on (a) the
> >> portable reader method call or (b) the actual field type?
> >>
> >> --Yakov
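P.S. For completeness, a hypothetical sketch of the kind of library-owned type Brane describes: marshalled at full precision, with explicit conversions so any loss becomes the caller's decision. The class and method names below are made up for illustration and are not an existing Ignite API.

import java.sql.Timestamp;

// Hypothetical value type: full precision is always kept on the wire;
// narrowing conversions are explicit and chosen by the user.
public final class PreciseTimestamp {
    private final long epochSeconds; // seconds since the Unix epoch
    private final int nanos;         // 0..999_999_999, kept in full when marshalled

    public PreciseTimestamp(long epochSeconds, int nanos) {
        this.epochSeconds = epochSeconds;
        this.nanos = nanos;
    }

    // Lossless on the Java side: java.sql.Timestamp can hold all nanoseconds.
    public Timestamp toSqlTimestamp() {
        Timestamp ts = new Timestamp(epochSeconds * 1000L);
        ts.setNanos(nanos);
        return ts;
    }

    // Lossy by design: the caller explicitly accepts the sub-tick loss when
    // converting to .Net-style 100 ns ticks within the current second.
    public long toDotNetTicksWithinSecond() {
        return nanos / 100;
    }

    public long epochSeconds() { return epochSeconds; }
    public int nanos() { return nanos; }
}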
