No, there are no parallel or concurrent runs; it is a simple serial, linear 
import... and yes, it seems like a write lock, because when the lock 
happens, the next file that corresponds to the same collection is not read 
and fails with the same error. I mean something like this:

first set of data
header -> imported
products -> partially imported, timeout lock
others -> imported

second set of data
header -> imported
products -> timeout lock, not imported
others -> imported

third set of data
header -> imported
products -> imported
others -> imported

fourth set of data
header -> imported
products -> partially imported, timeout lock
others -> imported

and so on...

Is there a way to avoid this, or to increase that lock wait timeout? Each 
file takes way too much time to load, not seconds but minutes...
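
Following the chunk-size suggestion quoted below, one thing I could try is 
shrinking the batch size so that each request holds the collection write 
lock for a shorter span. A rough sketch (the endpoint, collection, and file 
names are placeholders, I'm assuming `--batch-size` is the relevant option 
in my arangoimp version, and 65536 is just an illustrative value, not a 
recommendation):

    arangoimp \
      --server.endpoint tcp://127.0.0.1:8529 \
      --collection products \
      --file products.json \
      --type json \
      --batch-size 65536   # bytes per request; smaller chunks hold the lock for less time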

On Monday, March 13, 2017 at 4:36:35 AM UTC-5, [email protected] wrote:
>
> Are you running multiple `arangoimp` jobs hitting the same collection 
> concurrently? The trouble with this is that currently we have collection 
> level locks, which serialise such requests. Maybe the chunk sizes we are 
> using with `arangoimp` are too large, such that one `arangoimp` job has to 
> wait for more than 15s on the write lock held by another one and therefore 
> times out. Could this be the case?
>
