I have found the problem.
To keep the installation "clean", I use the following directory structure:

/opt/nifi/nifi-1.23.2
/opt/nifi/current (symbolic link to the current version)
/opt/nifi/driver
/opt/nifi/extensions

I defined the ../extensions and ../driver directories separately so that when I upgrade NiFi, I only have to adjust the paths in the config.

Then in the DBCPConnectionPool I point "Database Driver Location(s)" at the driver: "/opt/nifi/driver/postgresql-42.6.0.jar".
This works fine in general, but PGvector doesn't like it.

If I copy the driver to /opt/nifi/current/lib and leave "Database Driver Location(s)" empty, everything works fine.
In that case, addVectorType(con) isn't needed either.
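My guess, and it is only a guess: jars listed under "Database Driver Location(s)" seem to be loaded through their own URLClassLoader, so the driver's org.postgresql.PGConnection ends up being a different Class object than the one my processor's PGvector dependency was linked against, and unwrap() can never match. The underlying JVM behavior is easy to demonstrate (sketch only; it assumes the jar exists at the path from my setup above):

```java
import java.net.URL;
import java.net.URLClassLoader;

public class IsolationDemo {
    public static void main(String[] args) throws Exception {
        URL jar = new URL("file:/opt/nifi/driver/postgresql-42.6.0.jar");
        // parent = null: two fully isolated loaders, roughly like an
        // isolated driver classloader vs. the processor's classloader
        try (URLClassLoader a = new URLClassLoader(new URL[]{jar}, null);
             URLClassLoader b = new URLClassLoader(new URL[]{jar}, null)) {
            Class<?> c1 = a.loadClass("org.postgresql.PGConnection");
            Class<?> c2 = b.loadClass("org.postgresql.PGConnection");
            // Same fully-qualified name, but NOT the same class:
            System.out.println(c1 == c2);                          // false
            System.out.println(c1.getName().equals(c2.getName())); // true
        }
    }
}
```

If that's what is happening, putting the jar in lib/ means everything sees one and the same driver class, which would explain the difference.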

Interesting. I don't really understand why, but I accept that some things remain a mystery to me :)

Maybe my explanation will help if other developers have similar problems.
And maybe the documentation should have a "don't do" chapter pointing out that JDBC drivers belong in the ../lib directory. If someone can tell me why it behaves this way, I'd be happy to learn something.

Thanks again for your help.

Regards,
Uwe

On 01.09.23 05:59, Matt Burgess wrote:
Maybe this [1]? Perhaps you have to call unwrap() yourself in this
case. IIRC you don't have access to the DataSource but you can check
it directly on the connection.

Regards,
Matt

[1] 
https://stackoverflow.com/questions/36986653/cast-java-sql-connection-to-pgconnection
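Something like this, perhaps (untested sketch; it assumes the driver classes are visible to your processor's classloader, and "con" is the Connection you get from the DBCPConnectionPool service):

```java
import java.sql.Connection;
import java.sql.SQLException;
import org.postgresql.PGConnection;
import com.pgvector.PGvector;

public class VectorTypeHelper {
    // Untested sketch: unwrap the pooled connection yourself and register
    // the vector type, instead of relying on PGvector.addVectorType().
    static void registerVectorType(Connection con) throws SQLException {
        if (con.isWrapperFor(PGConnection.class)) {
            PGConnection pg = con.unwrap(PGConnection.class);
            // Roughly what addVectorType() does internally:
            pg.addDataType("vector", PGvector.class);
        } else {
            throw new SQLException("Cannot unwrap to org.postgresql.PGConnection");
        }
    }
}
```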

On Thu, Aug 31, 2023 at 8:15 PM u...@moosheimer.com <u...@moosheimer.com> wrote:
Mark & Matt,

Thanks for the quick help. I really appreciate it.

PGvector.addVectorType(con) returns the following:
*java.sql.SQLException: Cannot unwrap to org.postgresql.PGConnection*

Could this be a connection pool issue?

Interestingly, I didn't call addVectorType() at all in my test Java code,
and it still works?!
I'll have to check again ... maybe I'm not seeing it correctly anymore.
It is already 2:05 a.m. here.


Regards,
Uwe


On 31.08.23 18:53, Matt Burgess wrote:
This means the JDBC driver you're using does not support the use of
the two-argument setObject() call when the object is a PGVector. Did
you register the Vector type by calling:

PGvector.addVectorType(conn);

The documentation [1] says that the two-argument setObject() should
work if you have registered the Vector type.

Regards,
Matt

[1]https://github.com/pgvector/pgvector-java
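For reference, the pattern from [1] looks roughly like this (sketch only; the table and column names are invented, and "con" is assumed to be an open connection to Postgres with the pgvector extension installed):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import com.pgvector.PGvector;

public class VectorInsertExample {
    static void insertExample(Connection con) throws Exception {
        // Register the vector type once per connection
        PGvector.addVectorType(con);
        try (PreparedStatement ps = con.prepareStatement(
                "INSERT INTO items (embedding) VALUES (?)")) {
            // Two-argument setObject() should now work
            ps.setObject(1, new PGvector(new float[] {1f, 2f, 3f}));
            ps.executeUpdate();
        }
    }
}
```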

On Thu, Aug 31, 2023 at 12:01 PM Mark Payne <marka...@hotmail.com> wrote:
Hey Uwe,

The DBCPConnectionPool returns a java.sql.Connection. From that you’d create a 
Statement. So I’m a little confused when you say that you’ve got it working in 
Pure JDBC but not with NiFi, as the class returned IS pure JDBC. Perhaps you 
can share a code snippet of what you’re doing in the “Pure JDBC” route that is 
working versus what you’re doing in the NiFi processor that’s not working?

Thanks
-Mark


On Aug 31, 2023, at 10:58 AM, u...@moosheimer.com wrote:

Hi,

I am currently writing a processor to write OpenAI embeddings to Postgres.
I am using DBCPConnectionPool for this.
I use Maven to integrate PGVector (https://github.com/pgvector/pgvector).

With pure JDBC this works fine. With the database classes from NiFi I get the 
error:
*Cannot infer the SQL type to use for an instance of com.pgvector.PGvector. Use 
setObject() with an explicit Types value to specify the type to use.*

I use setObject(5, new PGvector(embeddingArray)).
embeddingArray is defined as: float[] embeddingArray

Of course I know why I get the error from NiFi and not from the JDBC driver, 
but unfortunately this knowledge does not help me.

Can anyone tell me what SQLType I need to specify for this?
I have searched the internet and the NiFi sources on GitHub for several hours 
now and have found nothing.
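To make the failing call concrete, here is a trimmed, self-contained sketch of what I'm doing (table and column names are made up; the explicit-Types variant in the comment is what the error message asks for, but I don't know which Types constant is correct):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import com.pgvector.PGvector;

public class EmbeddingWriter {
    // Trimmed sketch of the failing code path. "con" comes from the
    // DBCPConnectionPool service; table/column names are invented.
    static void insertEmbedding(Connection con, float[] embeddingArray)
            throws Exception {
        try (PreparedStatement ps = con.prepareStatement(
                "INSERT INTO embeddings (v) VALUES (?)")) {
            // This is the call that fails under NiFi:
            ps.setObject(1, new PGvector(embeddingArray));
            // The error suggests an explicit Types value instead, e.g.:
            // ps.setObject(1, new PGvector(embeddingArray), java.sql.Types.OTHER);
            ps.executeUpdate();
        }
    }
}
```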

One option would be to use native JDBC and bypass the ConnectionPool, but that 
would be very bad style in my opinion.
Perhaps there is a better solution?

Any help, especially from Matt B., is appreciated as I'm at a loss.
Thanks guys.
