I wrote:

In this case, the data itself is either small or NULL for the test
cases.  And I believe I've loaded other columns (e.g., varchar(2000))
with more than 255 characters.  My thinking is that maybe it's
something about that particular type -- much like the issues you
sometimes see with image, text, and Oracle CLOB types in different
areas.

I'll post back once I run tests ... it may not even be that.  Here's
the error I get back from the driver:

  DBD::Sybase::db commit failed: Server message number=4815 severity=17
  state=1 line=1 server=NEXDB\DEV text=Received an invalid column length
  from the bcp client for colid 6.
  OpenClient message: LAYER = (2) ORIGIN = (5) SEVERITY = (1) NUMBER = (140)

More info:

I can eventually get the problem set to load by playing around
with NULL (undef) column values.  The difference from my initial
working sets seems to be that those all loaded tables with only
varchar columns.  This set has some numeric columns, and those
appear to be the issue -- if I set those columns to real values
instead of passing undef, the data loads.  The columns are all
defined as nullable.
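For context, here's a minimal sketch of the kind of bulk-load code
involved (the connection string, table, and column values are made up;
the bulkLogin/syb_bcp_attribs usage follows the DBD::Sybase docs):

```perl
use strict;
use warnings;
use DBI;

my ($user, $password) = ('user', 'secret');   # illustrative only

# Connect with bulk-load enabled (bulkLogin=1) and AutoCommit off, so
# $dbh->commit sends the batch -- which is where my errors surface.
my $dbh = DBI->connect(
    'dbi:Sybase:server=NEXDB\DEV;bulkLogin=1;database=testdb',
    $user, $password,
    { AutoCommit => 0, RaiseError => 1 },
);

# Hypothetical table: a mix of nullable varchar and numeric columns.
my $sth = $dbh->prepare(
    'insert test_table values (?, ?, ?)',
    { syb_bcp_attribs => { identity_flag => 0, identity_column => 0 } },
);

# Passing undef for the numeric column is what seems to trigger the
# "invalid column length" (4815) error; substituting a real value
# lets the row load.
$sth->execute('abc', 'def', undef);   # fails on commit for me
$sth->execute('abc', 'def', 0);       # loads fine

$dbh->commit;
```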

Even with that, trying to load to a table with a varchar(max) column
results in a memory fault/core dump.  To isolate that, I created a
new test table with varchar(max) replaced by varchar(2000).  The
varchar(2000) version doesn't core dump, but both tables fail if
non-varchar columns are given undef to load.

I read the docs again and they mention conversion errors and how to
catch them.  I set that up, but the errors all appear to be bypassing
that handler ... they all appear to be happening on commit.  I'm
still playing around with error catching, so I may have missed
something there.
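For reference, this is roughly how I wired up the error callback (via
the syb_err_handler connection attribute described in the DBD::Sybase
docs; the handler body is illustrative):

```perl
# Callback for server and client messages.  Per the docs, returning 0
# tells DBD::Sybase to ignore the message; returning 1 lets normal
# error handling proceed.
my $err_handler = sub {
    my ($err, $sev, $state, $line, $server, $proc, $msg) = @_;
    warn "server msg $err (severity $sev, state $state): $msg\n";
    return 1;   # proceed with normal error handling
};

my $dbh = DBI->connect(
    'dbi:Sybase:server=NEXDB\DEV;bulkLogin=1',
    $user, $password,
    { AutoCommit => 0, syb_err_handler => $err_handler },
);
```

The conversion errors I expected to land here don't seem to -- they
only show up when commit fires.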

If anything above triggers any thoughts or ideas, let me know.

--
Steve Sapovits  [EMAIL PROTECTED]
