On Wed, Sep 13, 2023 at 2:50 PM Yafl Wabei <yafl.wa...@gmail.com> wrote:

> Hi, thanks for your reply! And btw thanks for creating jooq! I'm new to it
> and still learning, but I really like it a lot and appreciate how much
> thought you put into developing it!
>
> // 1. Fetching the plain result
> // WORKS - VALUES ARE PRESENT
> Record1<TestRecord> fetchOne =
> ctx.select(TEST).from(TEST).where(TEST.ID.eq(id)).fetchOne();
>
>
> This probably needs no explanation?
>
>
> Correct :)
>
>
>
> // 2. Fetching into a custom type or a POJO
> // WORKS!
> Test fetchOneInto_POJO =
> ctx.select(TEST).from(TEST).where(TEST.ID.eq(id)).fetchOneInto(Test.class);
>
>
> This is due to a feature from the DefaultRecordMapper where single record
> results can be mapped to classes directly using the
> DefaultConverterProvider:
> https://github.com/jOOQ/jOOQ/issues/10071
>
> Note that neither the fact that a POJO class is called Test (like the
> table), nor the fact that it's generated from the table, means there's a
> link between the two things.
>
>
> Do you mean to say (with the last sentence) that it would also just work
> with a random custom type (like a custom DTO)?
>
> And thanks for the reference! If I see it correctly (see question below,
> too), then this special feature makes my call equivalent to fetchOne(r ->
> r.get(0, Test.class)) and that's then similar to case "5." from above. Now
> that I got that, I still wonder how r.get(0, Test.class) actually converts
> the record to a Test instance. Is it correct that this is equivalent to
> r.get(0).into(Test.class) and hence to testRecordInstance.into(Test.class)?
> And the mapping algorithm for that is then (one of) the basic mapping
> algorithms of jOOQ, i.e. the naming convention algorithm mentioned in the
> DefaultRecordMapper JavaDoc which compares field names, right? I guess I
> just gathered the understanding while thinking "out loud" here. ;) So these
> questions are more rhetorical (or just asking for confirmation). If (parts
> of) my explanation is wrong, please let me know.
>

The conversion of values to other values happens via the ConverterProvider
SPI. The conversion of records to custom types happens via the
RecordMapperProvider SPI. The DefaultRecordMapper has a special case for
Record1<T> to Class<E> conversion, which it delegates to the
ConverterProvider.
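
To make the delegation chain concrete, here's a sketch of the equivalences discussed in this thread (it assumes the generated TEST table and Test POJO from the original question, and a DSLContext ctx; untested, for illustration only):

```java
// 2. goes through the DefaultRecordMapper's special case for
//    Record1<TestRecord> to Class<Test> ...
Test a = ctx.select(TEST).from(TEST).where(TEST.ID.eq(id))
            .fetchOneInto(Test.class);

// ... which is effectively the same as converting the single column
//     via the ConverterProvider SPI:
Test b = ctx.select(TEST).from(TEST).where(TEST.ID.eq(id))
            .fetchOne(r -> r.get(0, Test.class));

// ... which in turn maps the nested TestRecord to the POJO using the
//     DefaultRecordMapper's name-based algorithm:
Test c = ctx.select(TEST).from(TEST).where(TEST.ID.eq(id))
            .fetchOne().value1().into(Test.class);
```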


>
>
> // 3. Fetching into the TableRecord
> // WORKS NOT! VALUES ARE NULL
> TestRecord fetchOneInto_TableRecord =
> ctx.select(TEST).from(TEST).where(TEST.ID.eq(id)).fetchOneInto(TestRecord.class);
>
>
>  This is record-to-record mapping, which is governed by field equivalence.
> The nested Record1<TestRecord> record doesn't have any corresponding fields
> with TestRecord
>
>
> I do get that. But if DefaultRecordMapper would treat TestRecord just as a
> custom type and then use the same algorithm as above for the POJO (or any
> other custom type), would it then work? That's just not what happens,
> because the DefaultRecordMapper always uses field equivalence for
> Record-to-Record mapping, right?
>

But it doesn't treat TestRecord as a custom type, otherwise, it couldn't
transfer record state (such as changed flags, the fetched flag, original
values, etc.).
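
As a sketch of what this means in practice (assuming the TEST table from this thread; untested): if the goal is the TestRecord itself, with its internal state intact, unwrap the nested record instead of re-mapping it, or avoid the nesting in the first place:

```java
// Unwrap the Record1<TestRecord> directly:
TestRecord r1 = ctx.select(TEST).from(TEST).where(TEST.ID.eq(id))
                   .fetchOne().value1();

// Or don't nest at all, and project the TestRecord type directly:
TestRecord r2 = ctx.selectFrom(TEST).where(TEST.ID.eq(id)).fetchOne();
```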


>
>
> I am especially confused by the difference between 2 and 3. In 2 the
> mapping works and the POJO will contain the correct values. In 3 however,
> the TableRecord's values will be null.
>
>
> So, 2 really just is a special case. It was designed for queries like:
>
>   select(SOME_STRING_COLUMN).from(T).fetchInto(SomeStringWrapper.class);
>
> This may or may not have been a good idea.
>
>
> I do understand that this kind of select query produces a
> Record1<TestRecord> and not a plain TestRecord itself. According to the
> JavaDoc of into(Table<R>), the mapping algorithm compares the Table<R>
> field names with the field names of Record1<TestRecord>, so I guess that's
> the reason why 4 doesn't work (because Record1 doesn't have the same fields
> as the table TEST). But 2 and 3 both use into(Class<E>), and although the
> JavaDoc states that different mapping algorithms are used for different
> constellations, I didn't see which of these constellations would match
> 2 and make it work but not 3.
>
>
> I don't think any explanation is missing in the Javadoc?
> https://www.jooq.org/javadoc/latest/org.jooq/org/jooq/impl/DefaultRecordMapper.html
> What's missing, in your opinion?
>
>
> I reckon that in "2." this constellation applies:
>
>
> "If <E> is a field "value type" and <R extends Record1<?>>, i.e. it has
> exactly one column:
>
> Any Java type available from SQLDataType qualifies as a well-known "value
> type" that can be converted from a single-field Record1. [...]"
>
>
> But I'm not sure why it applies, because there is this SQLDataType
> restriction. So my question is:
>
> <E> is my POJO (or custom data type), right? Which SQLDataType does it
> match then? Or is the record nested in Record1 the type that needs to be an
> SQLDataType? I.e. which type needs to match an SQLDataType and why does it
> match in my example case (and which SQLDataType does it match)?
>

There's SQLDataType.RECORD for your case.


>
> So my questions are:
>
> a) What makes 2 work, i.e. which kind of mapping algorithm is applied
> there?
>
> b) Why does that not apply to 3?
>
> c) The difference between 2 and 3 seems to make the use of select(Table)
> error prone. Should I avoid using this? Or should I just always use get(0)
> (and get(1), get(2) etc. when using more select arguments)? What's the best
> practice for that?
>
>
> There are no best practices. Map things the way you need them to be
> mapped. You already said you understood that you're getting a
> Record1<TestRecord>, but obviously, you never wanted that. And now, you're
> trying to unnest it again. So, the solution is just to not nest it at all!
> Nesting is useful when you have more complex queries.
>
>
>
> d) If select(Table) should be avoided, what's the best (safest and
> readable) way to get all columns of a Table? (E.g. if I need to fetch the
> POJO to make persistent changes on it.)
>
>
> Maybe the selectFrom() method, which projects the desired TestRecord type.
>
>
> Thanks for the advice! This question originates from my wanting to avoid
> asterisks (SELECT *), but we (currently) still have code where we fetch all
> columns. When that is necessary, I want to do it as safely as possible, so
> I considered using variants where jOOQ (or Java) knows about the record
> type at compile time. This applies to selectFrom(BOOK) and
> select(BOOK).from(BOOK), but not to select().from(BOOK). When I
> experimented with the variants, I stumbled upon the confusing behaviour
> above. But you are right, if I want all columns with jooq knowing the type,
> I should just go for selectFrom(BOOK) instead of select(BOOK).from(BOOK) if
> I don't need joins. However, when I need joins, I'd need to fall back to
> select(BOOK, AUTHOR.NAME).from(BOOK).join(AUTHOR)...
>
> Or I just go with select().from(BOOK).join(AUTHOR), but that will fetch too
> many columns from AUTHOR, which I didn't want to fetch. Is select(BOOK,
> AUTHOR.NAME).from(BOOK).join(AUTHOR)... the best solution to avoid this?
> Or maybe select(BOOK.fields()).select(AUTHOR.NAME
> ).from(BOOK).join(AUTHOR)...?
>

I don't really like to engage in a discussion about what is the "best" way.
jOOQ is made of tons of operators. They all work in well defined ways.
Which ones you find "best" is very subjective.

Any time you nest something, you will have the effects of nesting (pros and
cons). It's a tradeoff.

Where would such a discussion even stop? Some folks prefer creating one view
(or table-valued function) per query. When you do that, every query will
have its own MY_VIEW class and MyViewRecord type, and you will be able (in
a type safe way) to project the entire record, without projecting too many
columns. If you're careful to craft updatable views (and tell jOOQ about
underlying primary keys using synthetic keys in the code gen), then you can
even store data back to the underlying table via your updatable view!
Different style, same result. Not really "better" or "worse". I've done
this in the past, and it was wonderful for that system. A library of highly
reusable views that build upon views, and automatic DTO generation for each
one of them.

To make matters even more interesting, you're using the reflective
DefaultRecordMapper. But you could also just pass lambdas (or DTO
constructor references) around. Some cool examples here:
https://blog.jooq.org/jooq-3-15s-new-multiset-operator-will-change-how-you-think-about-sql/
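
For instance, here's a sketch of such ad-hoc mapping with a constructor reference instead of reflection (BookAndAuthor is a hypothetical DTO, and the BOOK/AUTHOR tables are the usual jOOQ example schema, not from this thread; untested):

```java
// One dedicated, type safe DTO for this particular query:
record BookAndAuthor(int id, String title, String authorName) {}

List<BookAndAuthor> books = ctx
    .select(BOOK.ID, BOOK.TITLE, AUTHOR.NAME)
    .from(BOOK)
    .join(AUTHOR).on(AUTHOR.ID.eq(BOOK.AUTHOR_ID))
    // Records.mapping() adapts the constructor reference to a RecordMapper:
    .fetch(Records.mapping(BookAndAuthor::new));
```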

None of the approaches is really better. Obviously, though, reflection
might be more surprising, occasionally, because it isn't type safe. But if type
safety isn't necessary because ultimately, the consumer is a UI written in
JavaScript, then you could even produce JSON directly from within the
database!
https://blog.jooq.org/stop-mapping-stuff-in-your-middleware-use-sqls-xml-or-json-operators-instead/

jOOQ won't judge you.


> As I write this, I wonder: Is it, in the case of a query without a join,
> also "better" (for whatever reason) to use select(BOOK.fields()).from(BOOK)
> instead of selectFrom(BOOK)? Or is that mere preference?
>
>
> At this point, it's imperative to mention that projecting all columns is
> usually a mistake:
>
> https://www.jooq.org/doc/latest/manual/reference/dont-do-this/dont-do-this-sql-select-all/
>
> Yes, it's convenient, but very often, this leads to avoidable, systematic
> overhead. But anyway, you can just omit any projection at all, i.e. write
> select(), and then you get a flat, untyped Record with all columns from the
> FROM clause in it.
>
>
> Thanks for this reminder. I do want to avoid select(), especially in
> joins, even when I need all columns from table1, but not all from table2. I
> do also question retrieving all columns in general since you first gave me
> the hint about it, but I don't understand yet how to achieve the following
> use cases without it:
>
> A) When I want to update a record with a new value, I assume that I need
> to load the POJO or the record from the database, change the affected
> values and then store the POJO or record to the database again. Is that
> correct? Or can I just do the following?
>
> BookRecord bookRecord = create().select(BOOK.ID)
> .from(BOOK).fetchOneInto(BookRecord.class);
> bookRecord.setTitle("New Title");
> bookRecord.update(); // or .store();
>
> I wonder if this only updates the title or if it would override all the
> other columns (except id) with null values, because they are not present in
> the Record.
>

UpdatableRecord operations act only on columns whose changed() flag is set.
Not only will you not want "accidental" explicit NULL values in your INSERT
/ UPDATE statements, but you also don't want "accidental" explicit DEFAULT
values in those statements, because trigger and locking behaviour might
depend on the presence of a column in a statement.
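
A sketch of how the changed() flags drive the generated statement (assuming the BOOK example from this thread; untested):

```java
// Fetch the full UpdatableRecord; nothing is flagged as changed yet:
BookRecord book = ctx.fetchOne(BOOK, BOOK.ID.eq(id));

book.setTitle("New Title");   // sets the changed() flag for TITLE only

// Only the changed column ends up in the statement, roughly:
//   UPDATE book SET title = ? WHERE id = ?
book.update();
```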


> If that doesn't work, I could of course write an ad-hoc
> create().update(...) statement, but I would rather use the easily
> understandable POJOs or TableRecords API to store and update records.
>
> So what is "the recommended way" to do this without selecting all columns?
>

I would probably run an explicit update. From your example, it isn't clear
why you're running 2 round trips in the first place (SELECT, then UPDATE).
But other than that, there's no "recommended way".
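
The single-round-trip version of the earlier example would look roughly like this (a sketch, assuming the BOOK example schema):

```java
ctx.update(BOOK)
   .set(BOOK.TITLE, "New Title")
   .where(BOOK.ID.eq(id))
   .execute();
```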


> B) In my program, I have to e.g. query tables and send their data to the
> frontend quite often. Or I query a table and need it for complicated
> business logic. If I write select(C1, C2, C3, ...) queries retrieving only
> the necessary columns, I'm facing the issue of either:
>
>   B.1) defining, for each query, a specific Dto (custom type) which
> contains exactly the fields that were queried. That produces a lot of DTOs
> and therefore might make my project structure messy. I can try to use
> encapsulation, but it will still produce a Dto for each (different) query.
>   B.2) exposing the fetched record(s) to the caller method, which can then
> retrieve values from it. This doesn't feel great, because I'd rather want
> to have the select and mapping within the same method or class (or at least
> package). Also at the calling point in my code, I don't (immediately) know
> which fields are contained in the record and which not. I'd have to go
> and take a look at the jOOQ query to find out. This problem does not
> occur with B.1.
>   B.3) or mapping into a TableRecord, POJO or similar (to avoid B.1) and
> exposing this. That yields the problem that the caller method cannot know
> which fields in the POJO were actually retrieved from the DB table and
> whether a null value is null, because it wasn't selected or because it's
> actually null in the database. This problem does not occur with B.1 or B.2.
>
> If, on the other hand, I fetch all columns into the POJO or TableRecord,
> these problems all disappear. It does produce the overhead, as you
> mentioned, though, which I don't like either.
>
> To me, all these options have caveats. Actually, after writing it down, I
> now do prefer B.1 or B.2 (or using both, depending on the use case) more
> strongly than before.
>
> It does depend on the use case of course, but do you (from your
> experience) have any advice on how to deal with that in a nice and
> manageable way? What has turned out to be good practice, or could you
> recommend something to me?
>

Well, jOOQ can only offer you the perfect tool for *each* of these
approaches, and it will never judge you for choosing one of them :) The
burden of making the choice is yours. Is a JOIN or a correlated subquery
the better way to express something in SQL, or even a set operation, like
UNION ALL? Luckily, SQL isn't opinionated and offers you multiple tools, so
you can freely express yourself. Is a for loop better than a while loop?
Choose your own! Is it better to write String.indexOf(x) != -1 or
String.contains(x)?

I can give you another example from previous work I did. An API was
completely defined in WSDL (those were the days!). We generated Java
classes from those WSDL files using Axis (I think). So we did have a set of
DTOs for each service. The DTOs had shared nested data structures (which
were also shared nested XSD structures in the WSDL). Everything was crystal
clear, well documented, and type safe. Now, SQL queries could just populate
the generated DTOs for each service. Some coworkers would have preferred
writing Java code first and generating the WSDL. That wasn't *my* style,
though. In my opinion, the hand-written WSDL formed a better API contract
than the generated one, and I didn't care about hand-writing Java classes
all that much. Which approach was better? None. I had to run this project
so I did it "my way". I don't know what "your way" is, here.

Personally, I don't understand why B.1) is perceived as a problem by folks.
You either want type safety (then B.1 is the way to go), or you don't (then
List<Map<String, Object>> is good enough). And then, there are the middle
grounds of B.2) and B.3) and other approaches, which can work, too.
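
The two ends of that spectrum, sketched (assuming the usual BOOK example schema; untested):

```java
// B.1: one dedicated, type safe DTO per query
record BookTitle(int id, String title) {}
List<BookTitle> typed = ctx.select(BOOK.ID, BOOK.TITLE).from(BOOK)
                           .fetch(Records.mapping(BookTitle::new));

// No type safety: generic maps are good enough if the consumer doesn't care
List<Map<String, Object>> untyped = ctx.select(BOOK.ID, BOOK.TITLE)
                                       .from(BOOK).fetchMaps();
```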

But since you're "sending data to the frontend quite often," why not just
design a formal API of some sort? Either with an API language (swagger?
I'm not up to date) and generate the Java classes, or with Java classes,
and generate the API spec. Or, you don't do that, and embrace the lack of
type safety, in case of which you don't need any DTOs or Records. The
question of what you're doing with jOOQ arises automatically from this
decision of how you want to design your API.

I hope this helps
Lukas

-- 
You received this message because you are subscribed to the Google Groups "jOOQ 
User Group" group.