Ok.

We launched our experiment on *myria.criann.fr* from one OpenMOLE server
instance running in a VM at the university: *motril.univ-rouen.fr*.

To fix it:
a) we removed the *.openmole* folder on the *myria.criann.fr* CRIANN
cluster,
b) we removed the db folder: *rm -rf
~/.openmole/motril.univ-rouen.fr/database/*
c) we relaunched the OpenMOLE server on *motril.univ-rouen.fr*
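For reference, steps a) and b) amount to the shell commands below. This is only a sketch, rehearsed against a throwaway directory so it is safe to run anywhere; on the real machines $FAKE_HOME stands for ~, step a) runs on the cluster and step b) on the submission machine.

```shell
# Rehearsal of the cleanup on a throwaway tree (safe to run anywhere).
# FAKE_HOME stands in for ~ on each machine; the layout is hypothetical.
FAKE_HOME=$(mktemp -d)
mkdir -p "$FAKE_HOME/.openmole/motril.univ-rouen.fr/database"
touch "$FAKE_HOME/.openmole/preferences"   # stands in for the preference/userid file

# b) submission machine: remove only the replica database, keeping the
#    rest of .openmole (and therefore the user id):
rm -rf "$FAKE_HOME/.openmole/motril.univ-rouen.fr/database"

# a) cluster side: the whole .openmole folder is removed instead:
rm -rf "$FAKE_HOME/.openmole"
```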

Now the jobs are submitted, thanks.


2017-05-02 10:52 GMT+02:00 Romain Reuillon <[email protected]>:

> It is not due to the DoE; it is due to the fact that you erased your
> preference file locally, so OpenMOLE created a new user id. Remove all your
> files on the cluster, but don't delete your .openmole folder on the
> submission machine, and it should be fine.
>
>
> On 02/05/2017 at 10:46, reyman wrote:
>
> Ok, after some investigation: the GAMA model folder with the shapefiles is
> about 300 MB, the GamaTask in the tmp folder about 600 MB, and config about
> 343 MB:
>
> 512K    fileFromArchivef430d46b-04ef-49c0-9afa-844b19fe8cb9.bin
> 19M     dirReplicabf5f0001-175c-4111-bdff-434fc1749cc4
> 28M     dirReplica164ec788-3b87-48ea-b834-7c35751fcb9e
> 253M    dirReplica7c887dcf-2aa1-43c8-8e21-157a2e317244
> 300M    runtime0af71660-fa5d-447c-b111-bf2e42b2c33f
>
> ---
>
> rmissl01@Myria-3:~/.openmole/.tmp/ssh/openmole-fbaa4689-b4db-4dbc-bff8-1b14c403409c/tmp/1493394134605/fbccd018-486c-4e86-8074-94eb57a8bf00/7fb6a6c0-881e-4880-b730-0e080f3b5cd3:$ du -sch .[!.]* * | sort -h
> 0       persistent
> 343M    config
> 601M    .tmp
> 943M    total
>
> A small design of experiments completely blows through the 59 GB quota, I suppose ...
>
>
> 944M    2dc12a4b-19b7-423b-a91f-c609188dec80
> 944M    310e4c6e-85b7-4761-81b8-82d68e8953e7
> 944M    36106233-0702-4a6a-8902-329f82908159
> 944M    3a4ce0e6-7d4f-430d-8f62-ea948d90e3f9
> 944M    4142d042-2c3e-4512-96ff-57564dd76a45
> 944M    48eefc45-1c72-4f21-abe7-e39da4c6a57f
> 944M    4d948e34-f1b0-4532-b246-c5b6dc280edf
> 944M    4e8d785e-c68c-409d-b245-97267d78d0ce
> 944M    4eccec0a-9c3f-4945-b1d8-22a9627bada9
> 944M    5701402e-dc9f-4743-8ea0-b8a79c3b2e28
> 944M    5ae85937-fda6-432f-b0e8-fa3c56fca4d0
> 944M    5e36f518-b829-45cf-883d-f267abdc75eb
> 944M    64466e5a-e217-4925-8bf7-cbdd8bdce5cd
> 944M    6bbaf7da-c2ce-4b57-85cd-5217ef314160
> 944M    741a65e6-3658-4a4e-a677-4a551c3c92d3
> 944M    77fe6df3-3239-4c2c-977e-ee2be9d50566
> 944M    79b268c3-638c-4548-a0b2-4db27ad9fd2a
> 944M    7cc690fe-6ce6-489f-94e5-bc861959730c
> 944M    7e1af0ac-a1fd-49f5-8789-2d3457734ecc
> 944M    8054d86f-c926-4dc7-bba0-efed7681e408
> 944M    8676d22c-0dd7-48e5-adf3-3003d6421e64
> 944M    8dbdbfb8-4f14-4713-bb50-c9b10da6e061
> 944M    95e97c42-377c-4fab-a63a-fb90f0c7812c
> 944M    a7a5de29-8490-4551-a110-d0d020a593f5
> 944M    ac412200-c4df-4ed4-b5a9-54880dffbc6e
> 944M    ac894f5b-9a3e-4d8c-9c82-9b94cea970b4
> 944M    ac8ede8e-6e4d-44d3-9140-bb8f471ee7e1
> 944M    add35e51-dd86-42a3-a9d0-5edf2cc4500e
> 944M    b5d9b9f1-4a40-4989-a5cc-0af1f6b48a6b
> 944M    b6bb807e-6423-42c7-ba1f-a04e2e3e1b04
> 944M    baa37482-42cd-41de-9121-4476e683e6c6
> 944M    bc7b6d04-7f21-4db4-b9ff-f76c76661dca
> 944M    bff61f11-69ab-49c0-9249-ffb9355f6134
> 944M    c6658725-f2c5-4364-a68e-46345e428833
> 944M    c7a9fcf8-35cc-4a19-8a5b-60991c444528
> 944M    c7e79d31-308c-44ad-9f37-85fec01ddd0b
> 944M    cc9a94c8-570d-4c47-bf0a-9151b778919f
> 944M    cfe7f689-a496-4054-b33b-8f8749c4386f
> 944M    d24e3661-2414-4053-a6df-fb791f6bb5af
> 944M    d2ae5a24-8a0e-420d-8931-cbc85cbeb2be
> 944M    d79ebf03-77ed-4cb5-b42c-08a4a87637d9
> 944M    e3e16739-b126-4d7e-b340-240372d0240d
> 944M    ebe59ca0-31d2-43fd-827a-b4a11b0609fe
> 944M    ecc0be1c-b34b-4d16-8989-fe54e989a5b0
> 944M    f00201e2-a7a5-46b7-8358-e95a9028fa08
> 944M    fbccd018-486c-4e86-8074-94eb57a8bf00
> 58G     total
>
> Any idea how to reduce that with SLURM?
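Not a SLURM option as such, but assuming the space is held by finished executions, clearing the per-execution directories under ~/.openmole/.tmp on the cluster reclaims the quota. A hedged sketch (the directory names here are hypothetical and it rehearses on a throwaway tree; on the cluster, only run the rm when no execution is still using those directories):

```shell
# Sketch: measure, then remove, per-execution work directories under the
# OpenMOLE tmp area. ROOT is a throwaway stand-in for ~/.openmole/.tmp/ssh.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/openmole-aaaa/tmp/1493394134605/exec-1" \
         "$ROOT/openmole-aaaa/tmp/1493394134605/exec-2"

# biggest consumers last, same idea as the du -sch ... | sort -h above:
du -sh "$ROOT"/*/tmp/*/* | sort -h

# reclaim the space once the corresponding jobs are finished:
rm -rf "$ROOT"/*/tmp/*/*
```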
>
> 2017-05-02 10:33 GMT+02:00 reyman <[email protected]>:
>
>> Ouch, we have exceeded the disk quota. Seems weird, the .openmole/.tmp
>> folder takes up 59 GB ...
>>
>> 512K    .bash_history
>> 512K    .java
>> 59G     .openmole
>> 59G     total
>>
>>
>> rmissl01@Myria-3:~:$ mkdir /gpfs1/home/2017016/rmissl01/.openmole/.tmp/ssh/openmole-fbaa4689-b4db-4dbc-bff8-1b14c403409c/tmp/1493709912791/9052b12d-8806-457a-a1cb-79a7ca00f4ff
>> mkdir: cannot create directory ‘/gpfs1/home/2017016/rmissl01/.openmole/.tmp/ssh/openmole-fbaa4689-b4db-4dbc-bff8-1b14c403409c/tmp/1493709912791/9052b12d-8806-457a-a1cb-79a7ca00f4ff’: Disk quota exceeded
>>
>> 2017-05-02 10:14 GMT+02:00 Romain Reuillon <[email protected]>:
>>
>>> Just as a test, can you mkdir this directory on the head node of the
>>> cluster, to check whether there is an error?
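The suggested test can be rehearsed as below with a safe temporary path; on the head node you would mkdir the exact /gpfs1/... path from the error message instead.

```shell
# Rehearsal of the mkdir test against a temporary base directory; on the
# head node, substitute the failing /gpfs1/... path from the error message.
BASE=$(mktemp -d)
if mkdir -p "$BASE/tmp/run-1/exec"; then
  echo "mkdir ok"
else
  echo "mkdir failed"   # a quota problem would surface as "Disk quota exceeded"
fi
```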
>>>
>>>
>>> On 02/05/2017 at 09:53, reyman wrote:
>>>
>>> Hi OpenMOLE team,
>>>
>>> Using the dev version with the SLURM environment on the CRIANN cluster,
>>> we get a write error and no jobs are submitted at all. Any idea?
>>>
>>> java.io.IOException: Error make dir /gpfs1/home/2017016/rmissl01/.openmole/.tmp/ssh/openmole-fbaa4689-b4db-4dbc-bff8-1b14c403409c/tmp/1493709912791/9052b12d-8806-457a-a1cb-79a7ca00f4ff on org.openmole.plugin.environment.ssh.SSHStorageService$$anon$1@4afd1205
>>>   at fr.iscpif.gridscale.storage.Storage$class.errorWrapping(Storage.scala:72)
>>>   at org.openmole.plugin.environment.ssh.SSHStorageService$$anon$1.fr$iscpif$gridscale$ssh$SSHStorage$$super$errorWrapping(SSHStorageService.scala:33)
>>>   at fr.iscpif.gridscale.ssh.SSHStorage$class.errorWrapping(SSHStorage.scala:79)
>>>   at org.openmole.plugin.environment.ssh.SSHStorageService$$anon$1.errorWrapping(SSHStorageService.scala:33)
>>>   at fr.iscpif.gridscale.storage.Storage$class.wrapException(Storage.scala:77)
>>>   at org.openmole.plugin.environment.ssh.SSHStorageService$$anon$1.wrapException(SSHStorageService.scala:33)
>>>   at fr.iscpif.gridscale.storage.Storage$class.makeDir(Storage.scala:50)
>>>   at org.openmole.plugin.environment.ssh.SSHStorageService$$anon$1.makeDir(SSHStorageService.scala:33)
>>>   at org.openmole.plugin.environment.gridscale.GridScaleStorage$class._makeDir(GridScaleStorage.scala:33)
>>>   at org.openmole.plugin.environment.ssh.SSHPersistentStorage$$anon$2._makeDir(SSHPersistentStorage.scala:64)
>>>   at org.openmole.plugin.environment.batch.storage.StorageService$$anonfun$makeDir$1.apply$mcV$sp(StorageService.scala:193)
>>>   at org.openmole.plugin.environment.batch.storage.StorageService$$anonfun$makeDir$1.apply(StorageService.scala:193)
>>>   at org.openmole.plugin.environment.batch.storage.StorageService$$anonfun$makeDir$1.apply(StorageService.scala:193)
>>>   at org.openmole.plugin.environment.batch.control.LimitedAccess$LimitedAccessToken.access(LimitedAccess.scala:37)
>>>   at org.openmole.plugin.environment.batch.storage.StorageService$class.makeDir(StorageService.scala:193)
>>>   at org.openmole.plugin.environment.ssh.SSHPersistentStorage$$anon$2.makeDir(SSHPersistentStorage.scala:64)
>>>   at org.openmole.plugin.environment.batch.refresh.UploadActor$$anonfun$initCommunication$1.apply(UploadActor.scala:81)
>>>   at org.openmole.plugin.environment.batch.refresh.UploadActor$$anonfun$initCommunication$1.apply(UploadActor.scala:72)
>>>   at org.openmole.core.workspace.NewFile.withTmpFile(NewFile.scala:21)
>>>   at org.openmole.plugin.environment.batch.refresh.UploadActor$.initCommunication(UploadActor.scala:72)
>>>   at org.openmole.plugin.environment.batch.refresh.UploadActor$.receive(UploadActor.scala:52)
>>>   at org.openmole.plugin.environment.batch.refresh.JobManager$DispatcherActor$.receive(JobManager.scala:47)
>>>   at org.openmole.plugin.environment.batch.refresh.JobManager$$anonfun$dispatch$1.apply$mcV$sp(JobManager.scala:57)
>>>   at org.openmole.core.threadprovider.ThreadProvider$RunClosure.run(ThreadProvider.scala:21)
>>>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>>>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>>   at java.lang.Thread.run(Thread.java:745)
>>> Caused by: net.schmizz.sshj.sftp.SFTPException: Failure
>>>   at net.schmizz.sshj.sftp.Response.error(Response.java:113)
>>>   at net.schmizz.sshj.sftp.Response.ensureStatusIs(Response.java:106)
>>>   at net.schmizz.sshj.sftp.Response.ensureStatusPacketIsOK(Response.java:99)
>>>   at net.schmizz.sshj.sftp.SFTPEngine.makeDir(SFTPEngine.java:178)
>>>   at net.schmizz.sshj.sftp.SFTPEngine.makeDir(SFTPEngine.java:183)
>>>   at net.schmizz.sshj.sftp.SFTPClient.mkdir(SFTPClient.java:87)
>>>   at fr.iscpif.gridscale.ssh.impl.SSHJSFTPClient$.mkdir(SSHJSFTPClient.scala:57)
>>>   at fr.iscpif.gridscale.ssh.SSHClient$$anon$2.mkdir(SSHClient.scala:84)
>>>   at fr.iscpif.gridscale.ssh.SSHStorage$$anonfun$_makeDir$1.apply(SSHStorage.scala:91)
>>>   at fr.iscpif.gridscale.ssh.SSHStorage$$anonfun$_makeDir$1.apply(SSHStorage.scala:91)
>>>   at fr.iscpif.gridscale.ssh.SSHHost$$anonfun$withSftpClient$1.apply(SSHHost.scala:51)
>>>   at fr.iscpif.gridscale.ssh.SSHHost$$anonfun$withSftpClient$1.apply(SSHHost.scala:49)
>>>   at fr.iscpif.gridscale.ssh.SSHConnectionCache$class.withConnection(SSHConnectionCache.scala:27)
>>>   at org.openmole.plugin.environment.ssh.SSHStorageService$$anon$1.withConnection(SSHStorageService.scala:33)
>>>   at fr.iscpif.gridscale.ssh.SSHHost$class.withSftpClient(SSHHost.scala:48)
>>>   at org.openmole.plugin.environment.ssh.SSHStorageService$$anon$1.withSftpClient(SSHStorageService.scala:33)
>>>   at fr.iscpif.gridscale.ssh.SSHStorage$class._makeDir(SSHStorage.scala:90)
>>>   at org.openmole.plugin.environment.ssh.SSHStorageService$$anon$1._makeDir(SSHStorageService.scala:33)
>>>   at fr.iscpif.gridscale.storage.Storage$$anonfun$makeDir$1.apply$mcV$sp(Storage.scala:50)
>>>   at fr.iscpif.gridscale.storage.Storage$$anonfun$makeDir$1.apply(Storage.scala:50)
>>>   at fr.iscpif.gridscale.storage.Storage$$anonfun$makeDir$1.apply(Storage.scala:50)
>>>   at fr.iscpif.gridscale.storage.Storage$class.wrapException(Storage.scala:75)
>>>   ... 24 more
>>>
>>>
>>>
>>> _______________________________________________
>>> OpenMOLE-users mailing list
>>> [email protected]
>>> http://fedex.iscpif.fr/mailman/listinfo/openmole-users
>>>
>>
>>
>
>
>


-- 
<http://stackoverflow.com/users/385881/reyman64>
