Re: Using Postgres connection functions

2018-01-20 Thread Joe via Digitalmars-d-learn

On Saturday, 20 January 2018 at 04:54:47 UTC, Adam D. Ruppe wrote:

Same as above. The general pattern is:

C_Type[] name = new C_Type[](requested_size);
// pass as `name.ptr`. This becomes a C_Type*


Thanks, Adam. Perhaps something like this ought to make its way 
into the "D for C Programmers" page.


Re: Using Postgres connection functions

2018-01-19 Thread Adam D. Ruppe via Digitalmars-d-learn

On Saturday, 20 January 2018 at 04:09:01 UTC, Joe wrote:

   extern(C) char * [2] pvs;
   foreach (i, val; paramValues)
       pvs[i] = cast(char *)toStringz(val);

And then use "cast(const char **)pvs" for the paramValues 
argument.


A slight improvement here that removes the need for any casts 
(and the extern(C) is useless here btw) is to just define it as 
const(char)*:


const(char)*[2] pvs;
foreach (i, val; paramValues)
    pvs[i] = toStringz(val);

Then you can call the argument with plain `pvs.ptr` instead of 
casting it. Though, make sure you remember the null at the end 
too!
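Putting that together, a minimal self-contained sketch (the 
values are just the ones from earlier in the thread), with the 
terminating null included:

```d
import std.string : toStringz;

string[] paramValues = ["1", "abcde"];

// One extra slot for the C-style terminating null.
const(char)*[3] pvs;
foreach (i, val; paramValues)
    pvs[i] = toStringz(val);
pvs[$ - 1] = null; // static arrays are null-initialized anyway,
                   // but being explicit documents the intent

// Pass `pvs.ptr` (a const(char)**) where libpq expects paramValues.
```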


1. Is malloc() the only way to allocate the arrays, either of 
Oid's, int's or char *'s, for passing to the libpq functions?  
IOW, isn't there a way to take advantage of D's 'new' (and thus 
the GC)?


const(char)*[] pvs = new const(char)*[](size_here);
// could also have been `auto pvs = ...;` btw

// then still pass it to pqFunc(pvs.ptr)
// the .ptr gets the char** out of it.


Just double-check the documentation of any C function you pass 
it to, to make sure it doesn't explicitly say you must malloc 
it. Some C functions will try to free() the pointer you pass, 
or may hold on to it. D's GC can't see pointers that C 
functions keep, so it might free that array any time after the 
local variable `pvs` goes out of scope on the D side.
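A sketch of that caveat (`pqFunc` and the size here are 
hypothetical): keep a D-visible reference to the array for as 
long as the C side may use it.

```d
import std.string : toStringz;

// GC-allocated; the GC scans `pvs` itself while it is reachable.
const(char)*[] pvs = new const(char)*[](3);
pvs[0] = toStringz("1");
pvs[1] = toStringz("abcde");
pvs[2] = null; // terminator, if the C function expects one

// pqFunc(pvs.ptr);  // hypothetical C call taking const(char)**
// If the C side stores the pointer, keep `pvs` alive (e.g. as a
// struct field) until C is done; the GC cannot see C-held pointers.
```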



2. How to convert a slice of Oid's or int's to an array of 
pointers suitable for processing by the libpq C function?


Same as above. The general pattern is:

C_Type[] name = new C_Type[](requested_size);
// pass as `name.ptr`. This becomes a C_Type*


Just again, remember requested_size needs to include the null 
terminator when C requires it, so a +1 might be helpful.


And also, no need for extern(C). That's more for function 
declarations/pointers or maybe global variables C needs to be 
able to see.
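The same pattern works with a non-string element type, e.g. the 
Oid array from PQexecParams (the example OIDs are the ones from 
the C snippet earlier in the thread):

```d
alias Oid = uint; // libpq declares Oid as an unsigned int

Oid[] paramTypes = new Oid[](2);
paramTypes[0] = 23; // int4
paramTypes[1] = 25; // text

// Pass as `paramTypes.ptr`; it matches the `const Oid*` parameter.
// No terminator needed here -- PQexecParams takes nParams explicitly.
```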


But I'm not sure if it's really working (when I mistakenly had 
a * in the pts declaration, at one point it also seemed to 
work).


Oh yikes, that would have been undefined behavior there. So [x] 
on a pointer will just access memory past it - just like in C - 
and might sometimes work, might sometimes overwrite other memory 
(in your case, it probably overwrote some other local variable on 
the stack), or, best case scenario really, might cause a 
segmentation fault or access violation.
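For illustration, the difference Adam describes: indexing a D 
array is bounds-checked, while indexing a raw pointer is not.

```d
int[2] arr = [10, 20];
int* p = arr.ptr;

int x = arr[1]; // fine; arr[2] would be rejected at compile time
                // (constant index) or throw a RangeError at runtime
int y = p[1];   // fine, but p[2] would compile and silently read
                // past the array -- the C-style behavior described above
```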


Re: Using Postgres connection functions

2018-01-19 Thread Joe via Digitalmars-d-learn

On Saturday, 13 January 2018 at 05:28:17 UTC, Joe wrote:
Going beyond the connection, there are various other libpq 
functions that use a similar pattern of values passed using 
multiple parallel C arrays, e.g.,


   PGresult *PQexecParams(PGconn *conn,
   const char *command,
   int nParams,
   const Oid *paramTypes,
   const char * const *paramValues,
   const int *paramLengths,
   const int *paramFormats,
   int resultFormat);

Each of the `paramXxxxs` arguments is an array (Oid is an alias 
for uint).

[...]


Focusing on the above function, suppose the first two parameter 
arrays are defined in a C program as:


Oid paramTypes[] = {23, 25};
char *paramValues[] = {"1", "abcde"};

which could be expressed in D as:

Oid [] paramTypes = [23, 25];
string [] paramValues = ["1", "abcde"];

I know the paramTypes could be passed as null, letting PG deduce 
the data types but suppose we have some reason to accumulate 
types, etc., in D slices.  I know the paramValues can be 
manipulated in a manner similar to the one shown in my first 
post, namely something like this:


   extern(C) char * [2] pvs;
   foreach (i, val; paramValues)
       pvs[i] = cast(char *)toStringz(val);

And then use "cast(const char **)pvs" for the paramValues 
argument. The only feedback I received that was specific to 
this approach came in response to my initial post, in which 
Adam D. Ruppe said that what I did was "decent".


So I have two lingering doubts:

1. Is malloc() the only way to allocate the arrays, either of 
Oid's, int's or char *'s, for passing to the libpq functions?  
IOW, isn't there a way to take advantage of D's 'new' (and thus 
the GC)?


2. How to convert a slice of Oid's or int's to an array of 
pointers suitable for processing by the libpq C function?  A 
technique similar to the previous one seems to work, e.g.,


extern(C) Oid [2] pts;
foreach (i, typ; paramTypes)
    pts[i] = typ;

But I'm not sure if it's really working (when I mistakenly had a 
* in the pts declaration, at one point it also seemed to work).


Re: Using Postgres connection functions

2018-01-16 Thread Boris-Barboris via Digitalmars-d-learn

On Saturday, 13 January 2018 at 17:58:14 UTC, Joe wrote:
...ddb. The latter perhaps has the distinction that it doesn't 
use libpq, but rather implements the Postgres FE/BE protocol. 
That's a bit *too* native for my taste. It means the library 
maintainer has to keep up with changes to the internal 
protocol, which, although published, the Postgres group 
doesn't have to keep compatible from version to version.


Not that it matters, but the client-server protocol is actually 
the most stable part: it hasn't changed since Postgres 7.4 
(release date: 2003-11-17). It's the language-level abstractions 
like libpq that keep being changed/updated with almost every 
release.





Re: Using Postgres connection functions

2018-01-15 Thread Joe via Digitalmars-d-learn

On Monday, 15 January 2018 at 02:28:29 UTC, Matthias Klumpp wrote:
In any case, please don't start another Postgres library and 
consider contributing to one of the existing ones, so that we 
maybe have one really awesome, 100% complete library at some 
point.


If, on the other hand, your goal is to learn about the 
low-level Postgres interface and not just to have a Postgres 
interface for an application you develop, by all means, play 
with it :-)


At this point, I am indeed learning about low-level Postgres 
interfaces (but not so low-level as the client-server protocol) 
as a way to understand the challenges of interfacing D to C.


However, as part of the Pyrseas open source project, which I 
maintain, I had started to create a Postgres interface in Python 
inspired by The Third Manifesto, as opposed to ORMs like 
SQLAlchemy (see 
https://pyrseas.wordpress.com/2013/03/07/a-pythonic-ttm-inspired-interface-to-postgresql-requirements/). I got criticized for "reinventing the wheel" but I believe TTM, if properly done, is quite different from an ORM.


I understand your concern about not starting another PG library. 
From the cursory investigation of the existing libraries, I think 
they span a spectrum, with ddb at one end (low-level protocol), 
then derelict-pq (low-level binding over libpq), ddbc at the 
opposite end (multi-DBMS support) and several others in between. 
So I guess the real problem is with the proliferation in the 
latter group.


Re: Using Postgres connection functions

2018-01-14 Thread Matthias Klumpp via Digitalmars-d-learn

On Saturday, 13 January 2018 at 17:58:14 UTC, Joe wrote:
On Saturday, 13 January 2018 at 10:10:41 UTC, Jacob Carlborg 
wrote:
There's a native D library, ddb [1], for connecting to 
Postgres. Then you don't have to worry about null-terminated 
strings.


There are several D libraries that I would consider "native": 
derelict-pq, dpq, dpq2 and ddb. The latter perhaps has the 
distinction that it doesn't use libpq, but rather implements 
the Postgres FE/BE protocol. That's a bit *too* native for my 
taste. It means the library maintainer has to keep up with 
changes to the internal protocol, which, although published, 
the Postgres group doesn't have to keep compatible from 
version to version. For example, they haven't dropped the 
PQsetdbLogin function even though the PQconnectdb and 
PQconnectdbParams functions are obviously preferred. OTOH, 
there used to be an AsciiRow message format in the protocol, 
that was dropped, unceremoniously (not even mentioned in the 
release notes).


If you are after a good way to use Postgres in a real-world 
application, I highly recommend ddbc[1] (which also supports 
other backends).
There are a lot of D Postgres bindings out there, and all of them 
are about 70% completed, but nobody really bothered to make one 
finished and really good (and well maintained) binding. DDBC is 
really close to being complete, and contains a few convenience 
features that make it nice to use in an application. It also is 
used by Hibernated[2] in case you want an ORM for your app at 
some point.
Neither library is up to par with tools like SQLAlchemy & Co. 
from other programming languages, but they are decent.

For simple cases, dpq2 & Co. might also work well enough.
In any case, please don't start another Postgres library and 
consider contributing to one of the existing ones, so that we 
maybe have one really awesome, 100% complete library at some 
point.


If, on the other hand, your goal is to learn about the low-level 
Postgres interface and not just to have a Postgres interface for 
an application you develop, by all means, play with it :-)


Cheers,
Matthias

[1]: https://github.com/buggins/ddbc
[2]: https://github.com/buggins/hibernated


Re: Using Postgres connection functions

2018-01-13 Thread Joe via Digitalmars-d-learn
On Saturday, 13 January 2018 at 10:10:41 UTC, Jacob Carlborg 
wrote:
There's a native D library, ddb [1], for connecting to 
Postgres. Then you don't have to worry about null-terminated 
strings.


There are several D libraries that I would consider "native": 
derelict-pq, dpq, dpq2 and ddb. The latter perhaps has the 
distinction that it doesn't use libpq, but rather implements the 
Postgres FE/BE protocol. That's a bit *too* native for my taste. 
It means the library maintainer has to keep up with changes to 
the internal protocol, which, although published, the Postgres 
group doesn't have to keep compatible from version to version. 
For example, they haven't dropped the PQsetdbLogin function even 
though the PQconnectdb and PQconnectdbParams functions are 
obviously preferred. OTOH, there used to be an AsciiRow message 
format in the protocol that was dropped unceremoniously (not 
even mentioned in the release notes).




Re: Using Postgres connection functions

2018-01-13 Thread Jacob Carlborg via Digitalmars-d-learn

On 2018-01-13 05:17, Joe wrote:
I'm trying to learn how to use D to connect (and send queries) to 
Postgres, i.e., libpq in C.


So my question is: is there an easier or better way of passing two 
arrays of C null-terminated strings to an extern(C) function?


There's a native D library, ddb [1], for connecting to Postgres. Then 
you don't have to worry about null-terminated strings.


[1] http://code.dlang.org/packages/ddb

--
/Jacob Carlborg


Re: Using Postgres connection functions

2018-01-12 Thread Joe via Digitalmars-d-learn
Going beyond the connection, there are various other libpq 
functions that use a similar pattern of values passed using 
multiple parallel C arrays, e.g.,


   PGresult *PQexecParams(PGconn *conn,
   const char *command,
   int nParams,
   const Oid *paramTypes,
   const char * const *paramValues,
   const int *paramLengths,
   const int *paramFormats,
   int resultFormat);

Each of the `paramXxxxs` arguments is an array (Oid is an alias 
for uint).


   PGresult *PQprepare(PGconn *conn,
const char *stmtName,
const char *query,
int nParams,
const Oid *paramTypes);

   PGresult *PQexecPrepared(PGconn *conn,
 const char *stmtName,
 int nParams,
 const char * const *paramValues,
 const int *paramLengths,
 const int *paramFormats,
 int resultFormat);

My point is that there seems to be a need for a generic 
mechanism for passing these argument arrays from D to C.
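One hypothetical shape such a mechanism could take (a sketch, 
not code from the thread): a small helper that turns a D string 
slice into a null-terminated array of C string pointers.

```d
import std.string : toStringz;

/// Hypothetical helper: build a null-terminated array of C string
/// pointers from a D string slice, suitable for parameters declared
/// as `const char * const *`, such as paramValues above.
const(char)*[] toCStringArray(const string[] values)
{
    auto result = new const(char)*[](values.length + 1);
    foreach (i, v; values)
        result[i] = toStringz(v);
    result[$ - 1] = null; // C-style terminator
    return result;
}

// Usage sketch: keep the returned slice alive while libpq may read it.
// auto pvs = toCStringArray(paramValues);
// PQexecParams(conn, cmd, nParams, paramTypes.ptr, pvs.ptr, null, null, 0);
```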


Re: Using Postgres connection functions

2018-01-12 Thread Joe via Digitalmars-d-learn

On Saturday, 13 January 2018 at 04:26:06 UTC, Adam D. Ruppe wrote:

If and only if the values are known at compile time, you can do:

const char** keywords = ["hostaddr".ptr, "port".ptr, 
"dbname".ptr, null].ptr;


or even do it inline:


PQconnectdbParams(["hostaddr".ptr, "port".ptr, "dbname".ptr, 
null].ptr, ditto_for_values, 1);


The keywords are (or could be) known at compile time, but almost 
by definition, the associated values are only known at runtime.




Re: Using Postgres connection functions

2018-01-12 Thread Adam D. Ruppe via Digitalmars-d-learn

On Saturday, 13 January 2018 at 04:17:02 UTC, Joe wrote:
It only compiled after I removed the second 'const' in the 
first and second arguments.


Yeah, D's const applies down the chain automatically, so you 
don't write it twice there.
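A small illustration of that transitivity (not from the thread): 
one D const covers what C spells with two.

```d
// C:  const char * const *   (pointer to const pointer to const char)
// D's const is transitive, so a single const covers the whole chain:
const(char*)* pp;

static assert(is(typeof(*pp)  == const(char*)));
static assert(is(typeof(**pp) == const(char)));
```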



string[] keywords = ["hostaddr", "port", "dbname"];
string[] values = ["127.0.0.1", "5432", "testdb"];



conn = PQconnectdbParams(cast(const char **)kws,
 cast(const char **)vals, 1);

So my question is: is there an easier or better way of passing 
two arrays of C null-terminated strings to an extern(C) 
function?


If and only if the values are known at compile time, you can do:

const char** keywords = ["hostaddr".ptr, "port".ptr, 
"dbname".ptr, null].ptr;


or even do it inline:


PQconnectdbParams(["hostaddr".ptr, "port".ptr, "dbname".ptr, 
null].ptr, ditto_for_values, 1);




Otherwise, what you did there is decent... being a C-style 
array of arrays, it will need to be coded in a C style, with 
things like malloc, and toStringz to convert D strings to C 
strings.
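That runtime case can be sketched like this, reusing the 
keywords and values from earlier in the thread; with 
const(char)* elements there is no need for any casts:

```d
import std.string : toStringz;

string[] keywords = ["hostaddr", "port", "dbname"];
string[] values   = ["127.0.0.1", "5432", "testdb"];

// Build null-terminated pointer arrays at runtime.
auto kws  = new const(char)*[](keywords.length + 1);
auto vals = new const(char)*[](values.length + 1);
foreach (i, k; keywords) kws[i]  = toStringz(k);
foreach (i, v; values)   vals[i] = toStringz(v);
kws[$ - 1]  = null;
vals[$ - 1] = null;

// conn = PQconnectdbParams(kws.ptr, vals.ptr, 1); // the libpq call
```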