OK - I see your question in a different light....

This may be a tangent to your question, but it might be an important 
distinction: I am using Docker Desktop 
<https://www.docker.com/products/docker-desktop>, not the legacy Toolbox 
<https://docs.docker.com/toolbox/toolbox_install_windows/> solution.  From 
what I understand, the Desktop app is very different: it uses Hyper-V to 
run a Linux guest that hosts the containers and the built-in Kubernetes 
workers.  Having started with the Toolbox app myself, I consider the 
Desktop app *far* superior and would highly recommend it, but I realize 
that you may not be able to use it for one or more reasons (no Hyper-V, 
incompatibility with another VM solution such as VirtualBox, etc.).

But as for your question: I do not believe you can map a Windows 
folder/volume into a Linux container, at least not without a *LOT* of 
trouble.  You can map volumes Windows-to-Windows, but not across 
platforms/modes.  The messages about "NFS shares" are likely a product of 
how Toolbox maps volumes (through NFS services rather than direct I/O 
mapping), but the mapping isn't possible on Desktop either; it gives me an 
"invalid mode" error when I try.

This is why I suggested using a persistent container (read this 
<https://www.altaro.com/msp-dojo/persistent-docker-containers/>, this 
<https://thenewstack.io/methods-dealing-container-storage/>, or this 
<https://stackoverflow.com/questions/18496940/how-to-deal-with-persistent-storage-e-g-databases-in-docker> 
for more info).  This way, both the app and its datastore are on the same 
platform (Linux vs. Windows), so there should be no translation going on 
(the volumes are *mapped*, not translated).  The persistent container won't 
be running, but as long as you don't delete it, the data won't go away.  
Again, you don't need to use the ArangoDB 
<https://hub.docker.com/r/arangodb/arangodb> container for this - just 
about any Linux container will do, as long as you define the volumes.  
Also, using a persistent container allows you to run tools like commit 
<https://docs.docker.com/engine/reference/commandline/commit/> and save 
<https://docs.docker.com/engine/reference/commandline/save/>, allowing you 
to take "snapshots" of the container and/or export it (e.g. back up to a 
.tar file).

Hopefully this better addresses your question.


On Tuesday, May 19, 2020 at 10:49:50 PM UTC-7, Martin Krc wrote:
>
> Hello and thanks for the detailed explanation.
> I just wonder -- is this really supposed to work with Docker Toolbox? I 
> understood that the key problem in my case is that Docker Toolbox for 
> Windows is based on a virtual machine (Oracle VirtualBox). The virtual 
> machine is capable of connecting a local machine folder, but it seems 
> to be doing so via NFS or something (and indeed the local machine and virtual 
> machine even have different IPs). So then when I run Docker in the virtual 
> machine and mount a folder which is in turn mapped to my local folder, 
> ArangoDB won't start and is complaining about the file system type. I am 
> not sure if your answer is supposed to address this problem, is it?
>
> Martin
>
> On Tuesday, May 19, 2020 at 22:03:26 UTC+2, Kerry Hormann wrote:
>>
>> Hi Martin,
>>
>> The key is to use a "persistent container" to store the data.  There are 
>> several ways of doing this (most people use a very small image like 
>> BusyBox), but I use the ArangoDB container so that I have access to all the 
>> tools if necessary.  Here's what I use to create the persistent data 
>> container:
>>
>>
>> docker create \
>>   --name arangodb-persist \
>>   --entrypoint /bin/sh \
>>   -v /var/log/arangodb3 \
>>   -v /var/lib/arangodb3 \
>>   -v /var/lib/arangodb3-apps \
>>   -v /etc/arangodb3 \
>>   arangodb/arangodb:3.6.3 true
>>
>>
>> The key for this is changing the "entrypoint" to /bin/sh and issuing the 
>> "true" command at the end.  This uses the shell to evaluate the "true' 
>> statement and exits without starting the DB services.  Also, I specify the 
>> volumes I want to export (with the -v flag).
>>
>> Then, you can map/use those exported volumes in the application service 
>> like this:
>>
>>
>> docker run \
>>   --name arangodb \
>>   --volumes-from arangodb-persist \
>>   -e ARANGO_ROOT_PASSWORD=somepassword \
>>   -e ARANGO_STORAGE_ENGINE=rocksdb \
>>   -v /etc/arangodb3/arangod.db.conf:/etc/arangodb3/arangod.conf \
>>   arangodb/arangodb:3.6.3 --configuration /etc/arangodb3/arangod.conf
>>
>>
>> Here, I'm also specifying the arangod.conf file, which I maintain outside 
>> the container.
>>
>> Now, you can start/stop/destroy the "arangodb" application container 
>> without losing any data.  Also, you can upgrade the app without upgrading 
>> the data container - the built-in tools will be out-of-date, but you would 
>> only use those in a dire emergency anyway.
>>
>> Persistent containers have a lot of other advantages, just browse the 
>> interwebs 
>> <https://duckduckgo.com/?q=docker+using+persistent+container+for+data+storage&atb=v192-1&ia=web>
>>  
>> and you'll find a bunch of great info.
>>
>> Cheers,
>> Kerry
>>
>>
>> On Tuesday, May 19, 2020 at 12:37:03 PM UTC-7, Martin Krc wrote:
>>>
>>> Hi,
>>>
>>> I am using Docker ToolBox on my Windows (Home) machine. I tried to 
>>> deploy ArangoDB there today, but did not manage to do it in a way that 
>>> keeps data between restarts of the Docker virtual machine. The problem is 
>>> that when I connect a folder within the virtual machine to a real folder, 
>>> ArangoDB will complain about the file system upon startup (mentioning NFS). My 
>>> question: Is there any way how to get around this issue?
>>>
>>> Martin
>>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"ArangoDB" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/arangodb/2a74cd63-a689-4c97-af5d-b8c3efc1049f%40googlegroups.com.
