Hey Lukas,

thanks a lot for taking the time to answer! It helps me a lot to develop a 
better understanding of jOOQ (and, indirectly, of SQL too).
 

// 3. Fetching into the TableRecord
// DOES NOT WORK! VALUES ARE NULL
TestRecord fetchOneInto_TableRecord =
    ctx.select(TEST).from(TEST).where(TEST.ID.eq(id)).fetchOneInto(TestRecord.class);


 This is record-to-record mapping, which is governed by field equivalence. 
The nested Record1<TestRecord> record doesn't have any fields corresponding 
to those of TestRecord.


I do get that. But if the DefaultRecordMapper treated TestRecord just as a 
custom type and then used the same algorithm as above for the POJO (or any 
other custom type), would it work then? That's just not what happens, 
because the DefaultRecordMapper always uses field equivalence for 
Record-to-Record mapping, right?


But it doesn't treat TestRecord as a custom type; otherwise, it couldn't 
transfer record state (such as changed flags, the fetched flag, original 
values, etc.).


I see, that makes sense! I also found the relevant sections in the jOOQ 
source code now. I still wonder whether it would somehow be possible for the 
DefaultRecordMapper to behave similarly to the other case. E.g. could it 
determine that (in the discussed case) the usual record mapping doesn't fit 
for mapping Record1<TestRecord> to TestRecord.class, and instead unnest the 
Record1 and map the result to the target? I.e. in simplified pseudo code for 
the general case:

// "Parameters":
Record<?> row;
Class<E> type;

if (AbstractRecord.class.isAssignableFrom(type)) {
    if (row is Record1 && "no sensible mapping from row to E" possible) {
        // <-- Is it possible to determine this?
        Object v = row.get(0);
        // Try to map v to E, by (recursively) using the DefaultRecordMapper again
        map(v, type);
    } else {
        // Use the default RecordToRecordMapper
    }
}

But I guess records don't have enough type information to determine whether 
they "match" or not?! Even if they did, I just made this up and have no clue 
about the consequences for all the other cases that are not like my example.
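To make the idea a bit more concrete, here's a tiny self-contained Java sketch of that unwrap-then-map fallback. Row and SketchMapper are made-up stand-ins for illustration only, not jOOQ's actual classes:

```java
// A simplified stand-in for a Record1-style single-column wrapper
// (hypothetical, not jOOQ's actual Record type).
class Row {
    private final Object[] values;
    Row(Object... values) { this.values = values; }
    int size() { return values.length; }
    Object get(int i) { return values[i]; }
}

class SketchMapper {
    // The proposed fallback: if no direct mapping exists but the row has
    // exactly one column, unwrap that column and (recursively) try again.
    static <E> E map(Object value, Class<E> type) {
        if (type.isInstance(value))
            return type.cast(value);          // "sensible mapping" found
        if (value instanceof Row row && row.size() == 1)
            return map(row.get(0), type);     // unnest the Record1-like wrapper and retry
        throw new IllegalArgumentException("No sensible mapping to " + type);
    }
}
```

With this, mapping a nested single-column wrapper like `new Row(new Row(42))` to `Integer.class` would unwrap twice and succeed, which is the behaviour the pseudo code above is asking for.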
 

 

  

I am especially confused by the difference between 2 and 3. In 2, the 
mapping works and the POJO will contain the correct values. In 3, however, 
the TableRecord's values will be null.


So, 2 really just is a special case. It was designed for queries like:

  select(SOME_STRING_COLUMN).from(T).fetchInto(SomeStringWrapper.class);

This may or may not have been a good idea.
 

I do understand that this kind of select query produces a 
Record1<TestRecord> and not a plain TestRecord itself. According to the 
JavaDoc of into(Table<R>), the mapping algorithm compares the Table<R> 
field names with the field names of Record1<TestRecord>, so I guess that's 
the reason why 4 doesn't work (because Record1 doesn't have the same fields 
as the table TEST). But 2 and 3 both use into(Class<E>), and although the 
JavaDoc states that different mapping algorithms are used for different 
constellations, I couldn't see which of these constellations would match 2 
and make it work, while not making 3 work.


I don't think any explanation is missing in the Javadoc? 
https://www.jooq.org/javadoc/latest/org.jooq/org/jooq/impl/DefaultRecordMapper.html
What's missing, in your opinion?

 
I reckon that in "2." this constellation applies:


"If <E> is a field "value type" and <R extends Record1<?>>, i.e. it has 
exactly one column:

Any Java type available from SQLDataType qualifies as a well-known "value 
type" that can be converted from a single-field Record1. [...]"


But I'm not sure why it applies, because of this SQLDataType restriction. 
So my question is:

<E> is my POJO (or custom data type), right? Which SQLDataType does it 
match then? Or is the record nested in Record1 the type that needs to be an 
SQLDataType? I.e. which type needs to match an SQLDataType and why does it 
match in my example case (and which SQLDataType does it match)?


There's SQLDataType.RECORD for your case.


I'm confused, because E (my POJO or custom type) is not a Record in this 
case. I'm probably misunderstanding the JavaDoc, so let me rephrase my 
question more explicitly:

  a) What does the term «field "value type"» mean that's referring to <E> 
at the beginning? Which properties must <E> have to make this condition 
true?
  b) The second sentence about the SQLDataType qualifying as a "value type" 
then refers to the type that is nested in Record1<?>, and not to <E>, 
correct?


I do have a suggestion for improvement of the JavaDoc of the 
DefaultRecordMapper. Right now it reads:

    The mapping algorithm is this:    
    If <E> is an array type:
    [...]
    If <E> is a field "value type" and <R extends Record1<?>>, i.e. it has 
exactly one column:
    [...]
    If <E> is a TableRecord type (e.g. from a generated record), then its 
meta data are used:
    [...]

In the confusing example case 3 above, the second condition is fulfilled 
(I think), but so is the third one. And the third condition is actually 
checked before the second one in the source code, so the 
RecordToRecordMapper is applied instead of the mapping algorithm of the 
second block. So I'd suggest switching the order of the 2nd and 3rd 
conditions in the JavaDoc to make this clearer.

I don't really like to engage in a discussion about what is the "best" way. 
jOOQ is made of tons of operators. They all work in well defined ways. 
Which ones you find "best" is very subjective.

Any time you nest something, you will have the effects of nesting (pros and 
cons). It's a tradeoff.

Where would such a discussion even stop? Some folks prefer creating one 
view (or table-valued function) per query. When you do that, every query will
have its own MY_VIEW class and MyViewRecord type, and you will be able (in 
a type safe way) to project the entire record, without projecting too many 
columns. If you're careful to craft updatable views (and tell jOOQ about 
underlying primary keys using synthetic keys in the code gen), then you can 
even store data back to the underlying table via your updatable view! 
Different style, same result. Not really "better" or "worse". I've done 
this in the past, and it was wonderful for that system. A library of highly 
reusable views that build upon views, and automatic DTO generation for each 
one of them.
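To make the view idea concrete, such a view is ordinary SQL DDL; a minimal sketch (table and column names invented):

```sql
-- One view per query; the code generator then produces a MY_VIEW class and a
-- MyViewRecord type for it, just like it does for a table.
CREATE VIEW book_summary AS
SELECT b.id, b.title, a.last_name AS author_name
FROM book b
JOIN author a ON a.id = b.author_id;
```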


Thanks for clarifying! I'm amazed by the flexibility jOOQ offers. There is 
a lot for me to discover that simplifies previously tedious work, and 
that's super great!

I've never (explicitly) used custom views in SQL. How do I create one to 
use it with jooq? And do I understand correctly that it's possible to let 
jooq automatically generate a MY_VIEW class and MyViewRecord for every such 
view? If you have a link to documentation about that in the manual or on 
your blog, I'd be very happy to check it out there :)


To make matters even more interesting, you're using the reflective 
DefaultRecordMapper. But you could also just pass lambdas (or DTO 
constructor references) around. Some cool examples here:
https://blog.jooq.org/jooq-3-15s-new-multiset-operator-will-change-how-you-think-about-sql/
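The constructor-reference idea can be sketched without any jOOQ types at all; BookDto and the Object[] row shape below are invented for illustration and are not jOOQ's API:

```java
import java.util.List;
import java.util.function.BiFunction;

// A hypothetical per-query DTO, instantiated via a constructor reference.
record BookDto(long id, String title) {}

class LambdaMapping {
    // Each row is handed to a constructor reference instead of being mapped
    // reflectively -- fully type safe, no field-name matching involved.
    static <E> List<E> map(List<Object[]> rows, BiFunction<Long, String, E> ctor) {
        return rows.stream()
                   .map(r -> ctor.apply((Long) r[0], (String) r[1]))
                   .toList();
    }
}
```

A call site would then read `LambdaMapping.map(rows, BookDto::new)`, which is the same shape as passing `BookDto::new` to a jOOQ fetch method.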


Yeah, it's cool to have this flexibility to decide on using a custom mapper 
or the reflective approach. In general, I really like the very extensive 
(and hence agnostic) APIs in jooq!

And wow, the multiset possibilities in this blog post are just awesome! A 
whole new world is opening up for me :)

A) When I want to update a record with a new value, I assume that I need to 
load the POJO or the record from the database, change the affected values 
and then store the POJO or record to the database again. Is that correct? 
Or can I just do the following?

BookRecord bookRecord = create().select(BOOK.ID).from(BOOK)
    .where(BOOK.ID.eq(id)).fetchOneInto(BookRecord.class);
bookRecord.setTitle("New Title");
bookRecord.update(); // or .store();

I wonder if this only updates the title or if it would overwrite all the 
other columns (except id) with null values, because they are not present in 
the Record.

 

UpdatableRecord operations act only on columns whose changed() flag is set. 
Not only will you not want "accidental" explicit NULL values in your INSERT 
/ UPDATE statements, but you also don't want "accidental" explicit DEFAULT 
values in those statements, because trigger and locking behaviour might 
depend on the presence of a column in a statement.
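The principle can be illustrated with a small self-contained sketch. This is not jOOQ's actual implementation, just the changed-flag idea, with made-up column and table names:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

// Minimal change-tracking sketch: only columns whose changed flag is set
// end up in the generated UPDATE, so untouched columns are never overwritten.
class TrackedRecord {
    private final Map<String, Object> values = new LinkedHashMap<>();
    private final Map<String, Boolean> changed = new LinkedHashMap<>();

    void fetched(String column, Object value) {  // populated from the DB: not changed
        values.put(column, value);
        changed.put(column, false);
    }

    void set(String column, Object value) {      // user modification: mark as changed
        values.put(column, value);
        changed.put(column, true);
    }

    String updateSql(String table) {             // render only the changed columns
        String setClause = changed.entrySet().stream()
            .filter(Map.Entry::getValue)
            .map(e -> e.getKey() + " = ?")
            .collect(Collectors.joining(", "));
        return "UPDATE " + table + " SET " + setClause;
    }

    // A canned scenario: fetch id and title, then change only the title.
    static String demo() {
        TrackedRecord r = new TrackedRecord();
        r.fetched("id", 5);
        r.fetched("title", "Old Title");
        r.set("title", "New Title");
        return r.updateSql("book");              // id is untouched, so it is omitted
    }
}
```

In the scenario above, only `title = ?` appears in the rendered SET clause, which is exactly why neither accidental NULLs nor accidental DEFAULTs leak into the statement.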


That's great, thanks for clarifying! 
 

If that doesn't work, I could of course write an ad-hoc 
create().update(...) statement, but I would rather use the easily 
understandable POJOs or TableRecords API to store and update records.

So what is "the recommended way" to do this without selecting all columns? 


I would probably run an explicit update. From your example, it isn't clear 
why you're running 2 round trips in the first place (SELECT, then UPDATE). 
But other than that, there's no "recommended way".


Yeah, that's true. I do have some use cases where I need to fetch data, do 
some logical checks in Java, and then update, but in most cases an update 
would be sufficient.

I do now see the pros of a plain update. It didn't feel natural to me, 
because I'm just so used to having an ORM and working directly on the BOs 
instead of using SQL. But I do like the direct approach via jooq and SQL 
and will use that more often now. :)

Btw, I made a mistake when I mentioned the "POJOs or TableRecords API". I 
meant the "DAO or UpdatableRecord API". While the UpdatableRecord only 
updates changed values (which is very cool), that doesn't apply to DAOs and 
POJOs, right? So the following

Book book = create().select(BOOK.ID).from(BOOK)
    .where(BOOK.ID.eq(5)).fetchOneInto(Book.class); // Book.class = POJO generated by jOOQ
book.setTitle("New Title");
bookDao.update(book);

would update all columns, because the POJO cannot track which fields were 
changed (and neither can the DAO, I guess)?! ..... Ok, I just saw the 
documentation: 
https://www.jooq.org/doc/3.18/manual/sql-execution/fetching/pojos/#interaction-with-daos.
 
I guess, the above is just not the way to work with the POJOs. Looking at 
the example in the manual, it looks like I should rather use the DAO for 
querying too, if I really want to update the table by using the whole POJO:

Book book = bookDao.findById(5); // Book.class = POJO generated by jOOQ
book.setTitle("New Title");
bookDao.update(book);
  

B) In my program, I often have to query tables and send their data to the 
frontend, for example. Or I query a table and need it for complicated 
business logic. If I write select(C1, C2, C3, ...) queries retrieving only 
the necessary columns, I'm facing one of the following issues:

  B.1) defining, for each query, a specific DTO (custom type) which 
contains exactly the fields that were queried. That produces a lot of DTOs 
and therefore might make my project structure messy. I can try to use 
encapsulation, but it will still produce a DTO for each (different) query.
  B.2) exposing the fetched record(s) to the caller method, which can then 
retrieve values from it. This doesn't feel great, because I'd rather have 
the select and the mapping within the same method or class (or at least 
package). Also, at the calling point in my code, I don't (immediately) know 
which fields are contained in the record and which are not. I'd have to go 
and take a look at the jOOQ query to find out. This problem does not occur 
with B.1.
  B.3) mapping into a TableRecord, POJO or similar (to avoid B.1) and 
exposing this. That yields the problem that the caller method cannot know 
which fields in the POJO were actually retrieved from the DB table, and 
whether a null value means the column wasn't selected or is actually null 
in the database. This problem does not occur with B.1 or B.2.

If, on the other hand, I fetch all columns into the POJO or TableRecord, 
these problems all disappear. It does produce the overhead you mentioned, 
though, which I don't like either.

To me, all these options have caveats. Actually, after writing them down, I 
now prefer B.1 or B.2 (or both, depending on the use case) more strongly 
than before.
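For B.1, Java 16+ records at least keep the per-query DTOs cheap to write; a quick sketch (all names invented, not tied to any real schema):

```java
// One tiny record per query documents exactly which columns were fetched,
// so callers never have to guess (addressing the concerns from B.2/B.3).
record BookTitleRow(long id, String title) {}
record BookStockRow(long id, int copiesInStock) {}

class PerQueryDtos {
    // Callers can rely on every field being present: the DTO's shape *is* the
    // contract of the query that produced it.
    static String describe(BookTitleRow row) {
        return row.id() + ": " + row.title();
    }
}
```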

It does depend on the use case, of course, but do you (from your 
experience) have any advice on how to deal with this in a nice and 
manageable way? What turned out to be good practice for that, or could you 
recommend something to me?


Well, jOOQ can only offer you the perfect tool for *each* of these 
approaches, and it will never judge you for choosing one of them :) The 
burden of making the choice is yours. Is a JOIN or a correlated subquery 
the better way to express something in SQL, or even a set operation, like 
UNION ALL? Luckily, SQL isn't opinionated and offers you multiple tools, so 
you can freely express yourself. Is a for loop better than a while loop? 
Choose your own! Is it better to write String.indexOf(x) != -1 or 
String.contains(x)?


Yeah, I agree. Unfortunately, I don't know more than (very) basic SQL 
(yet), but by using jOOQ I'm already learning a lot about it. And all the 
explanations and examples in the jOOQ manual and on the blog are super 
helpful. Thanks for putting so much care into this; it really helps with 
using jOOQ and simplifying my queries and code!
 

I can give you another example from previous work I did. An API was 
completely defined in WSDL (those were the days!). We generated Java 
classes from those WSDL files using Axis (I think). So we did have a set of 
DTOs for each service. The DTOs had shared nested data structures (which 
were also shared nested XSD structures in the WSDL). Everything was crystal 
clear, well documented, and type safe. Now, SQL queries could just populate 
the generated DTOs for each service. Some coworkers would have preferred 
writing Java code first and generating the WSDL. That wasn't *my* style, 
though. In my opinion, the hand-written WSDL formed a better API contract 
than the generated one, and I didn't care about hand-writing Java classes 
all that much. Which approach was better? None. I had to run this project 
so I did it "my way". I don't know what "your way" is, here.


Thanks for sharing! I like your agnostic attitude! :) And I can relate, 
because in a different project, I am also actually (still) working in a 
very similar context: An API that we use is defined in XSD, so we (in an 
automatic way) create WSDL files and then Java classes out of that and use 
these to build our code. It works pretty well and I'm happy I don't have to 
write all these classes by hand :D
 

But since you're "sending data to the frontend quite often," why not just 
design a formal API of some sort? Either with an API language (swagger? 
I'm not up to date) and generate the Java classes, or with Java classes, 
and generate the API spec. Or, you don't do that, and embrace the lack of 
type safety, in which case you don't need any DTOs or Records. The 
question of what you're doing with jOOQ arises automatically from this 
decision of how you want to design your API.


Honestly, I'm not experienced enough to know how to do any of that, or even 
to completely understand what your ideas mean exactly. So that would be a 
task for a more senior colleague ;)
Thanks for the ideas, though. I might come back to them once I've gathered 
enough knowledge to enter that path.
 

I hope this helps


Yes, it does, a lot! Thanks :)

-- 
You received this message because you are subscribed to the Google Groups "jOOQ 
User Group" group.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jooq-user/ed05e5f7-03ef-4dca-be01-1500877f1c3dn%40googlegroups.com.
