Hello,

Are you saying you want to back up some existing blobs to another
directory (which happens to be a mount of a remote share)?

If yes, there are 3 (or 4) ways you could do that:

1) Let your "source" perkeepd be the destination of the sync as well,
by setting up a sync or a replica handler for the destination. Look at
the "/sync-r1/" or "/repl/" handlers in config/dev-server-config.json
for examples.
2) Set up another perkeepd which uses the shared directory as its
blob storage (though iirc there are some issues with that). Then you
can simply use 'pk sync' to sync your blobs from your source
blobserver to your destination blobserver.
3) Use 'pk sync' with -dest set to your shared directory, e.g. 'pk
sync -src=source -dest=/your/mounted/shared/directory', with source
being defined properly in your client config.
4) Use a third-party tool, like rsync?
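
For option 1, the relevant part of the low-level server config might
look roughly like the sketch below. The prefix names ("/share-blobs/",
"/sync-to-share/") and the exact paths are illustrative assumptions on
my part, not something to copy verbatim; compare with the "/sync-r1/"
handler in config/dev-server-config.json for the real shape:

```json
"/share-blobs/": {
    "handler": "storage-filesystem",
    "handlerArgs": {
        "path": "/your/mounted/shared/directory"
    }
},

"/sync-to-share/": {
    "handler": "sync",
    "handlerArgs": {
        "from": "/bs/",
        "to": "/share-blobs/"
    }
}
```

The idea is that "/share-blobs/" is just a filesystem blobstore
pointing at the mounted share, and the sync handler mirrors new blobs
from your main blobstore ("/bs/") into it.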

hth,
Mathieu


On 3 August 2018 at 13:37, simon.bohlin <[email protected]> wrote:
> Hi, I tried searching the group archives for perkeep for a few phrases:
> "share" "samba" "smb" "cifs" and also the camlistore archive for more or
> less the same, but am still confused about the simplest way to get my
> laptop perkeepd to offload all its (new) blobs to a network attached
> "windows share".
>
> Probably I should use the sync command line tool for this?
>
> Maybe someone even has a (home-grown) systemd .service that does something
> similar enough? <-- [req]uest for example :)
>
> Thank you for your time and have a nice day

-- 
You received this message because you are subscribed to the Google Groups 
"Perkeep" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/d/optout.