Hi:
  I'm new to Griffin. If a verification checks every field of the source
table against the target table for accuracy, won't the efficiency be very
low?
  Our source table grows by tens of millions of rows every day and has
hundreds of fields. If every field is verified, the number of reads from
the source database will be very large, and the load placed on the database
will also be very high. I don't quite understand the underlying principles:
if rows are read from the database one by one, then ten million reads are
needed, which is very inefficient; if they are read in one batch, the
amount read at once will be very large, which is also a challenge for the
database.
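
  To make my question concrete, here is a rough sketch of what I imagine a
single-batch comparison would look like on Spark. This is only my own
illustration, not Griffin's actual code; the Hive source, table names, and
key column are all assumptions:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.col

    // Sketch only -- not Griffin's internals. Assumes both tables are
    // available to Spark (e.g. via Hive) and share a key column "id"
    // (hypothetical), so the whole check is one distributed batch job,
    // not one database lookup per row.
    val spark = SparkSession.builder()
      .appName("accuracy-sketch")
      .enableHiveSupport()
      .getOrCreate()

    val source = spark.table("dw.source_table").alias("s") // hypothetical
    val target = spark.table("dw.target_table").alias("t") // hypothetical

    val key    = "id"
    val fields = source.columns.filter(_ != key)

    // One left join on the key; every field compared with null-safe
    // equality, so all hundreds of columns are checked in a single pass.
    val joined = source.join(target,
      col(s"s.$key") === col(s"t.$key"), "left_outer")
    val allEq  = fields.map(f => col(s"s.$f") <=> col(s"t.$f"))
      .reduce(_ && _)

    val total   = source.count()
    val matched = joined.filter(allEq).count()

    println(f"accuracy = ${matched.toDouble / total}%.4f")

  Is this batch-join style roughly what Griffin does underneath, or does it
read record by record?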

Best
Lec Ssmi
