This is what I get from the trace log (the entry is repeated twice for each insert):
INFO [SharedPool-Worker-1] 2017-05-26 23:10:12,587 SLF4JAuditWriter.java:96 -
host:/*.*.*.*|source:/*.*.*.*|user:*|timestamp:1495840212587|category:DML|type:CQL_UPDATE|ks:myks|cf:mytab|operation:INSERT
INTO myks.mytab (id, day, month, email, e_at, fn, ln, last_upd_dt, pc, pn,
ptype, rs, sm)
VALUES
('TSTUSERID', 4, 6, 'tstemail_lt...@crmqa.com', 1495840211904,
'tstFirst_LVMF4', 'tstLast_PVAKZ', 1495840212584, null, null, [('5208096456',
'5208096456', 'Registered'), ('11230103039', '11230103039', 'Registered'),
('4649861872074', 'Garbage Phone number', 'Unknown')], 'Web', 'US') USING
TIMESTAMP 1495840211904000;|consistency level:LOCAL_QUORUM
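One way to confirm the duplication reported in this thread is to read the row back after each insert. This is only a sketch: it assumes, from the column list in the logged INSERT, that id is the partition key and that the tuples live in the ptype column, neither of which the thread confirms:

```cql
-- Hypothetical check, assuming 'id' is the partition key and 'ptype'
-- holds the list of tuples shown in the logged INSERT.
SELECT id, ptype
FROM myks.mytab
WHERE id = 'TSTUSERID';
```

If the list-of-tuples column grows by one set of elements per insert, each repeated INSERT is appending to the collection rather than overwriting it.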
On Saturday, May 27, 2017, 5:58:21 PM PDT, Jeff Jirsa wrote:
Can you upload the trace somewhere?
--
Jeff Jirsa
> On May 27, 2017, at 5:34 PM, Subroto Barua wrote:
>
> I have a table where one of the columns is a Tuple datatype.
>
> When I insert rows with "USING TIMESTAMP", I see multiple inserts in the
> trace log, and I see multiple data sets per partition key.
>
> For example: for each partition key I get an extra tuple with each insert,
> instead of one row per key, and the data keeps growing with every new insert
> for the same key.
>
> This issue happens when I insert via the API using the Phantom driver; it
> works fine via cqlsh.
> I am not sure whether there is a limitation with the "USING TIMESTAMP"
> clause or a bug in the Phantom driver.
> I would appreciate any advice/comments on Tuple or "USING TIMESTAMP".
>
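The logged INSERT above suggests a schema along these lines. A minimal sketch for reproducing the report follows; the column types and primary key here are assumptions rather than taken from the thread, and only the relevant columns are shown:

```cql
-- Hypothetical schema inferred from the logged INSERT; all types
-- and the primary key are guesses.
CREATE TABLE myks.mytab (
    id text PRIMARY KEY,
    ptype list<frozen<tuple<text, text, text>>>
    -- ... remaining columns omitted ...
);

-- Repeating an insert for the same key with an explicit client-side
-- timestamp, as in the trace log:
INSERT INTO myks.mytab (id, ptype)
VALUES ('TSTUSERID', [('5208096456', '5208096456', 'Registered')])
USING TIMESTAMP 1495840211904000;
```

Running this twice from cqlsh and then twice through the Phantom driver should show whether the accumulating tuples come from the driver or from the interaction of the list column with "USING TIMESTAMP".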
-
To unsubscribe, e-mail: dev-unsubscr...@cassandra.apache.org
For additional commands, e-mail: dev-h...@cassandra.apache.org