The ZFS best practices talk at LinuxFest last weekend was packed. I recall 
many tips being shared for improving ZFS throughput. Jim Salter (@jrssnet) 
also admins the ZFS subreddit. 
https://jrs-s.net/presentations/2019-LFNW-ZFS-Best-Practices/img0.html

On Friday, May 3, 2019 at 11:13:44 AM UTC-7, ian wrote:
>
> Hey All, 
>
> I have about 2TB of files that I'm looking at importing into perkeep. I 
> have a couple questions. 
>
> First, do others have experience they can share re: how perkeep performs 
> holding this much data? From what I've read it sounds like 
> architecturally it should be manageable, but I'd like to know if anyone 
> can say how that's worked out in practice for them. 
>
> Assuming this is realistic, I have some logistical questions about 
> getting the data in there in the first place. 
>
> I left a pk-put going on a large sub-tree last night, and came back to 
> it today. It had spent about 12 hours copying things before finally 
> running into a hiccup uploading a particular file (I don't have the 
> error message recorded, but it was something along the lines of "server 
> did not receive blob"). Trying to upload that file again worked fine, so 
> I assume it was something transient. 
>
> During the transfer, usage on the drives holding the blobs grew by about 
> 80 GiB. The data was moving between two hard drives connected to the 
> same machine via USB 3.0. Questions: 
>
> 1. Is that kind of throughput normal for pk-put? 80 GiB over ~12 hours 
>    works out to roughly 1.9 MiB/s. 
> 2. Is there currently any way to do a "resumable" version of pk-put, 
>    where it can quickly pick up where it left off? 
>
> If the answer to (2) is no, I might be interested in contributing such a 
> feature, and would appreciate pointers on where to start. 
>
> Thanks. 
>
> -Ian 
>
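On the resumable question (2): if I remember right, pk-put already keeps a 
local stat/have cache, so re-running it should mostly skip blobs the server 
already has, though it still re-walks and re-stats the whole tree. Below is a 
minimal sketch of the complementary client-side idea: an append-only log of 
completed files keyed by path/size/mtime, so a re-run skips straight past 
everything that already finished. This is not perkeep's actual code or API; 
uploadFile is a hypothetical stand-in for the real per-file upload.

// Hypothetical sketch only: not perkeep's actual code or API.
package main

import (
	"bufio"
	"fmt"
	"io/fs"
	"log"
	"os"
	"path/filepath"
)

// stateKey identifies a file version cheaply, stat-cache style: if
// path, size, and mtime all match a logged entry, skip the re-upload.
func stateKey(path string, info fs.FileInfo) string {
	return fmt.Sprintf("%s|%d|%d", path, info.Size(), info.ModTime().UnixNano())
}

// loadDone reads the append-only log of completed uploads.
func loadDone(logPath string) (map[string]bool, error) {
	done := make(map[string]bool)
	f, err := os.Open(logPath)
	if os.IsNotExist(err) {
		return done, nil // first run: nothing logged yet
	}
	if err != nil {
		return nil, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		done[sc.Text()] = true
	}
	return done, sc.Err()
}

// uploadFile is a placeholder for the real per-file work (in perkeep's
// case, chunking the file into blobs and sending them to the server).
func uploadFile(path string) error {
	log.Printf("uploading %s", path)
	return nil
}

func main() {
	if len(os.Args) < 2 {
		log.Fatal("usage: resumable-put <dir>")
	}
	root, logPath := os.Args[1], "upload-state.log"
	done, err := loadDone(logPath)
	if err != nil {
		log.Fatal(err)
	}
	state, err := os.OpenFile(logPath, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
	if err != nil {
		log.Fatal(err)
	}
	defer state.Close()

	err = filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		info, err := d.Info()
		if err != nil {
			return err
		}
		key := stateKey(path, info)
		if done[key] {
			return nil // finished on a previous run; skip
		}
		if err := uploadFile(path); err != nil {
			return err // abort; the next run resumes past the logged files
		}
		// Log success only after the upload completes, so a crash
		// mid-file just means that one file is retried next run.
		_, err = fmt.Fprintln(state, key)
		return err
	})
	if err != nil {
		log.Fatal(err)
	}
}

If you do want to contribute something like this upstream, the place to start 
reading is probably the pk-put command source in the perkeep tree.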
