Hi again!

// 3. Fetching into the TableRecord
// DOES NOT WORK! VALUES ARE NULL
TestRecord fetchOneInto_TableRecord = 
ctx.select(TEST).from(TEST).where(TEST.ID.eq(id)).fetchOneInto(TestRecord.class);


This is record-to-record mapping, which is governed by field equivalence. 
The nested Record1<TestRecord> record doesn't have any fields in common 
with TestRecord.


I do get that. But if the DefaultRecordMapper treated TestRecord just as a 
custom type and then used the same algorithm as above for the POJO (or any 
other custom type), would it then work? That's just not what happens, 
because the DefaultRecordMapper always uses field equivalence for 
Record-to-Record mapping, right?


But it doesn't treat TestRecord as a custom type; otherwise, it couldn't 
transfer record state (such as changed flags, the fetched flag, original 
values, etc.).


I see, that makes sense! Now I also found the relevant code sections in the 
jOOQ source code. I still wonder if it would somehow be possible for the 
DefaultRecordMapper to make the behaviour similar to the other case. E.g. 
would it be possible to determine that (in the discussed case) the usual 
record mapping doesn't fit for mapping Record1<TestRecord> to 
TestRecord.class, and instead try to unnest Record1 and map the result to 
the target? I.e. in simplified pseudo code for the general case:

// "Parameters":
Record<?> row;
Class<E> type;

if (AbstractRecord.class.isAssignableFrom(type)) {
    if (row instanceof Record1 && "no sensible mapping from row to E" possible) { // <-- Is it possible to determine this?
        Object v = row.get(0);
        // Try to map v to E, by (recursively) using the DefaultRecordMapper again
        map(v, type);
    } else {
        // Use default RecordToRecordMapper
    }
}

But I guess records don't have enough type information to determine if they 
"match" or not?! Even if they did, I just made this up and have no clue 
about the consequences for all other cases which are not like my example.


It would be a fun exercise to specify fully and formally what it means for 
there to be "no sensible mapping" (including any potential future "sensible 
mapping" that we may still want to add). Think about things like: 
https://github.com/jOOQ/jOOQ/issues/11148. Though, I'd rather spend my time 
on more pressing features, currently. I believe that since jOOQ 3.15's 
various changes to add more ad-hoc conversion and type safety to mapping 
(including nested collections), the reflective DefaultRecordMapper might 
become less popular.


Ah, so the order (implemented in the DefaultRecordMapper) in which the 
mapper is chosen was, in fact, the other way around before! And that caused 
#11148. Now I see where the order of the conditions in the JavaDoc comes 
from and why it's not up to date anymore :)

When I came up with this idea above, it was more of a "would this 
theoretically be possible" question than a desperate wish for an 
implementation. I do agree that there are more important features to spend 
time on.

As to the new type-safe mappings: I'm always happy about more type safety 
and will definitely use them when appropriate. To me, the reflective 
mapping still has an advantage, though, because the type-safe mapper can 
sometimes be too verbose in comparison. Also, when the target object (e.g. 
a Java 16 record) has 7 String fields, using a constructor method 
reference can easily lead to interchanged values just by mistakenly 
confusing the order of columns in the query's SELECT. Mapping values into 
the wrong fields like this cannot happen with the reflective mapping (or, 
at least not so easily, with explicit custom mapping using old-school, 
very verbose transformation via setters and getters). Because of that, and 
because of its simple syntax, I'll probably continue using the reflective 
mapping in many situations even though I'd prefer a type-safe, 
non-reflective way.
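To illustrate that pitfall with a plain-Java sketch (the BookDto record, the row model, and both mapping helpers are hypothetical stand-ins, not actual jOOQ API): positional, constructor-reference style mapping puts values into constructor parameters in projection order, so swapping two same-typed columns compiles fine but silently swaps the values, while name-based mapping is immune to the column order:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class Main {
    // Hypothetical target type with two fields of the same type
    record BookDto(String title, String author) {}

    // A fetched "row", modelled as column name -> value; note that the
    // query listed AUTHOR before TITLE
    static Map<String, Object> row() {
        Map<String, Object> row = new LinkedHashMap<>();
        row.put("author", "J. R. R. Tolkien");
        row.put("title", "The Hobbit");
        return row;
    }

    // Positional mapping (constructor-reference style): values land in the
    // constructor parameters in projection order -- silently swapped here
    static BookDto mapPositional() {
        Object[] values = row().values().toArray();
        return new BookDto((String) values[0], (String) values[1]);
    }

    // Name-based mapping (reflective style): values are matched by
    // column/field name, so the projection order doesn't matter
    static BookDto mapByName() {
        Map<String, Object> row = row();
        return new BookDto((String) row.get("title"), (String) row.get("author"));
    }

    public static void main(String[] args) {
        System.out.println("positional: " + mapPositional());
        System.out.println("by name:    " + mapByName());
    }
}
```

Both versions type-check; only the name-based one survives a reordering of the SELECT list.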

I am especially confused by the difference between 2 and 3. In 2 the 
mapping works and the POJO will contain the correct values. In 3 however, 
the TableRecord's values will be null.


So, 2 really just is a special case. It was designed for queries like:

  select(SOME_STRING_COLUMN).from(T).fetchInto(SomeStringWrapper.class);

This may or may not have been a good idea.
 

I do understand that this kind of select query produces a 
Record1<TestRecord> and not a plain TestRecord itself. According to the 
JavaDoc of into(Table<R>), the mapping algorithm compares the Table<R> 
field names with the field names of Record1<TestRecord>, so I guess that's 
the reason why 4 doesn't work (because Record1 doesn't have the same fields 
as the table TEST). But 2 and 3 both use into(Class<E>), and although the 
JavaDoc states that different mapping algorithms are used for different 
constellations, I didn't see which of these constellations would match 2 
and make it work, but not make 3 work.


I don't think any explanation is missing in the Javadoc? 
https://www.jooq.org/javadoc/latest/org.jooq/org/jooq/impl/DefaultRecordMapper.html
What's missing, in your opinion?

 
I reckon that in "2." this constellation applies:


"If <E> is a field "value type" and <R extends Record1<?>>, i.e. it has 
exactly one column:

Any Java type available from SQLDataType qualifies as a well-known "value 
type" that can be converted from a single-field Record1. [...]"


But I'm not sure why it applies, because there is this SQLDataType 
restriction. So my question is:

<E> is my POJO (or custom data type), right? Which SQLDataType does it 
match then? Or is the record nested in Record1 the type that needs to be an 
SQLDataType? I.e. which type needs to match an SQLDataType and why does it 
match in my example case (and which SQLDataType does it match)?


There's SQLDataType.RECORD for your case.


I'm confused, because E (my POJO or custom type) is not a Record in this 
case.


You're right, that doesn't explain it here. The actual explanation is the 
one I provided earlier. If the ConverterProvider provides a Converter 
between <T1> (from Record1<T1>) and <E>, then that is used. The 
DefaultConverterProvider can convert from any Record to any POJO simply by 
using the Record's attached RecordMapperProvider again. So, we recurse.
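That recursion can be sketched with a toy model in plain Java (Row1 and map are hypothetical stand-ins for Record1 and the mapper; this is not jOOQ's actual implementation): if the source isn't directly assignable to the target type but is a single-column row, unwrap it and apply the same mapping again:

```java
public class Main {
    // Toy stand-in for a single-column Record1
    record Row1(Object value) {}

    // Recursive mapping sketch: unwrap single-column rows until the value
    // is assignable to the requested type
    static <E> E map(Object source, Class<E> type) {
        if (type.isInstance(source))
            return type.cast(source);
        if (source instanceof Row1 r)
            return map(r.value(), type);  // unnest and recurse
        throw new IllegalArgumentException("No sensible mapping to " + type);
    }

    public static void main(String[] args) {
        // Even a Record1 nested inside another Record1 maps to its value
        System.out.println(map(new Row1(new Row1("hello")), String.class));
    }
}
```

In the real library, the unwrapping is delegated to the ConverterProvider rather than hard-coded like this, but the terminating recursion is the same idea.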


Great, thanks for clarifying :)
 

 

I think I'm misunderstanding the JavaDoc. I think I should rephrase my 
question more explicitly:

  a) What does the term «field "value type"» mean that's referring to <E> 
at the beginning? Which properties must <E> have to make this condition 
true?
  b) The second sentence about the SQLDataType qualifying as a "value type" 
then refers to the type that is nested in Record1<?>, and not to <E>, 
correct?


I do have a suggestion for improvement of the JavaDoc of the 
DefaultRecordMapper. Right now it reads:

    The mapping algorithm is this:    
    If <E> is an array type:
    [...]
    If <E> is a field "value type" and <R extends Record1<?>>, i.e. it has 
exactly one column:
    [...]
    If <E> is a TableRecord type (e.g. from a generated record), then its 
meta data are used:
    [...]

In the confusing example case 3 above, the second condition is fulfilled 
(I think), but so is the third one. And the third condition is actually 
checked before the second one in the source code, so the 
RecordToRecordMapper is applied and not the mapping algorithm of the second 
block. So I'd suggest switching the order of the 2nd and 3rd conditions in 
the JavaDoc to make this clearer.


Yes, that order should be reflected correctly, as well as what I said 
earlier about ConverterProvider being the deciding factor, not whether the 
<E> type is a "built-in" type (all built-in types are recognised by the 
DefaultConverterProvider, but it does more than that). I've created:
https://github.com/jOOQ/jOOQ/issues/15598

Thanks for being thorough here. Fresh eyes often help see flaws in ancient 
documentation that is outdated only in subtle ways. This Javadoc used to be 
correct, but then changes happened.


Thanks for being appreciative about it! I wasn't sure if I just didn't 
understand the JavaDoc correctly. It's good to receive confirmation that 
daring to ask and expressing my confusion (instead of letting it go) 
actually helped. :)
 

I don't really like to engage in a discussion about what is the "best" way. 
jOOQ is made of tons of operators. They all work in well defined ways. 
Which ones you find "best" is very subjective.

Any time you nest something, you will have the effects of nesting (pros and 
cons). It's a tradeoff.

Where would such a discussion even stop? Some folks prefer creating one 
view (or table-valued function) per query. When you do that, every query 
will have its own MY_VIEW class and MyViewRecord type, and you will be able 
(in a type-safe way) to project the entire record, without projecting too 
many columns. If you're careful to craft updatable views (and tell jOOQ 
about underlying primary keys using synthetic keys in the code gen), then 
you can even store data back to the underlying table via your updatable 
view! Different style, same result. Not really "better" or "worse". I've 
done this in the past, and it was wonderful for that system. A library of 
highly reusable views that build upon views, and automatic DTO generation 
for each one of them.


Thanks for clarifying! I'm amazed by the flexibility jOOQ offers. There is 
a lot for me to discover that simplifies previously tedious work, that's 
super great!

I've never (explicitly) used custom views in SQL. How do I create one to 
use it with jOOQ?


CREATE VIEW v AS SELECT ... in every SQL dialect
 

And do I understand correctly that it's possible to let jOOQ automatically 
generate a MY_VIEW class and a MyViewRecord for every such view? If you 
have a link to documentation about that in the manual or on your blog, I'd 
be very happy to check it out there :)


Why would this need additional documentation, in your opinion? Views have 
been a core concept of the relational model since Codd's 12 rules:
https://en.wikipedia.org/wiki/Codd%27s_12_rules

For ages, jOOQ didn't even know something was a view or a table (or a 
materialized view) if the RDBMS listed them at the same place in the 
dictionary views (INFORMATION_SCHEMA, etc.).


Thanks for clarifying – I think now I understand it. Views are new to me 
and I didn't know how to work with them while using SQL with jOOQ.
Now that I know what you mean, I can see that the option of creating one 
view per query and generating the corresponding MY_VIEW and MyViewRecord 
classes for each of them definitely does not fit my use case ^^

If that doesn't work, I could of course write an ad-hoc 
create().update(...) statement, but I would rather use the easily 
understandable POJO or TableRecord APIs to store and update records.

So what is "the recommended way" to do this without selecting all columns?


I would probably run an explicit update. From your example, it isn't clear 
why you're running 2 round trips in the first place (SELECT, then UPDATE). 
But other than that, there's no "recommended way".


Yeah, that's true. I do have some use cases where I need to fetch data, do 
some logical checks in Java, and then update, but in most cases an update 
would be sufficient.

I do now see the pros of a plain update. It didn't feel natural to me, 
because I'm just so used to having an ORM and working directly on the BOs 
instead of using SQL. But I do like the direct approach via jOOQ and SQL 
and will use it more often now. :)

Both have their place. SQL fails at making repetitive CRUD simple (it gets 
too verbose). But a lot of CRUD isn't "just" CRUD. DML statements can be 
very complex. A lot of times, moving more logic into a single SQL statement 
will drastically accelerate *and* simplify your system.

I guess this talk I made some years ago will give you a bit more insight 
into the wonderful world of doing more with just SQL:
https://www.youtube.com/watch?v=wTPGW1PNy_Y


Yeah, I saw it two weeks ago and it was pretty mind-blowing for me too. The 
domain that I'm working in doesn't have a lot of use cases for more complex 
SQL queries, but I guess that even in our code I will stumble upon more 
possible applications for this kind of optimisation than I can think of 
right now. And for the cases that do come to my mind, I really wanna go 
there and change them right now. :D Unfortunately, new features have higher 
priority at the moment ...
  

Btw I made a mistake when I mentioned "POJOs or TableRecords API". I meant 
"DAO or UpdatableRecord API". While the UpdatableRecord only updates 
changed values (which is very cool), that doesn't apply to DAOs and POJOs, 
right? So the following

Book book = 
create().select(BOOK.ID).from(BOOK).where(BOOK.ID.eq(5)).fetchOneInto(Book.class); 
// Book.class = POJO generated by jOOQ
book.setTitle("New Title");
bookDao.update(book);

would update all columns, because the POJO cannot track which fields were 
changed (and neither can the DAO, I guess)?! ... Ok, I just saw the 
documentation: 
https://www.jooq.org/doc/3.18/manual/sql-execution/fetching/pojos/#interaction-with-daos.
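The effect can be sketched in plain Java (a toy model of how a DAO-style update must treat the POJO; renderUpdate and the column set are hypothetical, not jOOQ's actual SQL rendering): since the POJO carries no changed-flags, every column ends up in the SET clause, including ones that were never fetched:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class Main {
    // Toy POJO: plain values, no dirty flags (like a generated jOOQ POJO)
    record Book(Integer id, String title, String language) {}

    // A DAO-style update has to SET every column, because the POJO cannot
    // tell which ones actually changed (hypothetical SQL rendering)
    static String renderUpdate(Book b) {
        Map<String, Object> columns = new LinkedHashMap<>();
        columns.put("title", b.title());
        columns.put("language", b.language());
        String set = columns.entrySet().stream()
            .map(e -> e.getKey() + " = " + e.getValue())
            .collect(Collectors.joining(", "));
        return "UPDATE book SET " + set + " WHERE id = " + b.id();
    }

    public static void main(String[] args) {
        // Only ID and TITLE were ever populated; LANGUAGE was never fetched
        Book book = new Book(5, "New Title", null);
        System.out.println(renderUpdate(book));
    }
}
```

An UpdatableRecord avoids this because it remembers per-field changed flags and only renders the touched columns.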
 
I guess the above is just not the way to work with the POJOs. Looking at 
the example in the manual, it looks like I should rather use the DAO for 
querying too, if I really want to update the table using the whole POJO:

Book book = bookDao.findById(5); // Book.class = POJO generated by jOOQ
book.setTitle("New Title");
bookDao.update(book);


Yeah, sigh.

The DAO is my biggest regret. It was so easy to implement that I just went 
ahead and did it, hoping folks would find it useful. And they did. And 
filed 500 feature requests for the DAO, which they could have simply 
implemented with SQL, but the "Spring Repository mindset" made them prefer 
the DAO as the single source of database interaction. But of course, the 
DAO (and POJOs) can't easily model dirty flags, as little as they can model 
the simplest of joins or nested collections, or whatever.

I said there's no "right way" of doing things, and jOOQ usually isn't very 
opinionated. But the DAO is opinionated in a narrow way, and opinions only 
lead to regrets. I completely underestimated the user's desire for The One 
True Way™ to do things, and users who discover the DAO (and who don't 
reject it) will now want to do *everything* with the DAO. I was going to 
deprecate the DAO but was overruled. Look at its history here, if you have 
time:
https://github.com/jOOQ/jOOQ/issues/5984

I personally recommend not using it. Look at the jOOQ manual: 99.9% of all 
pages are *not* about the DAO. The DAO is just a simple utility for the 
most boring database interactions. It's not even really good at that. It 
only covers the most trivial queries, e.g. the ones you usually see in some 
content-marketer-authored "beginner's guide tutorial" or Spring Pet Clinic 
style example projects.

SQL is vast and powerful and really good at what it does. DAOs (or ORMs in 
general) try to hide this for no really good reason other than "someone 
else also has this, so we must, too".


Thanks for the advice and the background! I did read through the linked 
issue and it helped me get your point about DAOs and gain a better 
understanding of jOOQ's rationale towards this aspect.

I do agree (with other people in the thread) that some 3ish lines of 
explicit comment in the manual (on the DAO and POJO pages), explaining that 
DAOs are not the recommended way to use jOOQ, could really prevent people 
from using them, or at least make them aware of the drawbacks. It would be 
good to also give a reason so that people understand why they shouldn't use 
jOOQ's DAOs – you could e.g. put a link (to one of the comprehensive 
explanatory comments in the issue above) in this 3-line paragraph, if the 
reasoning would be too long for the manual page.

For me this definitely would have been helpful. I took over a project from 
a colleague who started building it and used jOOQ, so I was thrown into 
jOOQ without knowing it before. He decided to set up jOOQ to generate POJOs 
and DAOs, and he also used them in some cases (I haven't seen all the code 
yet, so I don't know how much they are being used). Hence, for me this 
looked like an appropriate (or even desired) way to store (insert, update) 
database entries.
I even looked up the sections about POJOs and DAOs in the manual to see how 
to use them in the right way. I agree that they have very little space in 
the manual, but I wasn't consciously aware of that – I just didn't notice, 
because I (as a noobie) was not analyzing the manual (or its table of 
contents) as a whole when searching for explanations in there. I just 
wanted to know how to use DAOs and POJOs. Maybe later on I would have 
noticed, but so far I hadn't. An explicit remark within the sections about 
POJOs and DAOs would have helped me realize this earlier.
[I'm sharing this point of view here because I could imagine that it might 
help you to gain some insight from this "fresh eyes" jOOQ-noobie 
perspective of mine. ;)]
