Re: [9fans] A potentially useful venti client

2017-12-13 Thread Ole-Hjalmar Kristensen
Here is a pointer to a discussion on comp.os.plan9, but I did not really
get a clear understanding of whether it was possible or not. It seems to me
that it was possible at some time, but based on my own findings, changes to
the format may have made vac and fossil incompatible.

On Wed, Dec 13, 2017 at 12:22 PM, Richard Miller <9f...@hamnavoe.com> wrote:

> > The difficulty is how to convince fossil to install a score into its
> > hierarchy as though it's one that it created.
>
> Wouldn't that cause a problem with the two origin file systems
> having overlapping Qid spaces?  I think you would need to walk
> and rebuild the directory tree of the vac being inserted, to
> assign new Qid.path values.
>
>
>


Re: [9fans] A potentially useful venti client

2017-12-13 Thread Ole-Hjalmar Kristensen
I don't know either, but when I tried flfmt with a vac score as an
experiment, I got this:

ole@ole-TECRA-R940 ~/Desktop/plan9 $ bin/fossil/flfmt -h 192.168.0.101 -v
f648dbae0075eb73bc394ad6cd4c059e655e127c fossil.dat
fs header block already exists; are you sure? [y/n]: y
fs file is mounted via devmnt (is not a kernel device); are you sure?
[y/n]: y
0xfb1e734c
0x1d1feaf1
c85978546e4048fce83120d3992cfc2f57ff2f8c
bin/fossil/flfmt: bad root: no qidSpace

/*
 * Maximum qid is recorded in root's msource, entry #2 (conveniently in e).
 */
ventiRead(e.score, VtDataType);
if(!mbUnpack(&mb, buf, bsize))
sysfatal("bad root: mbUnpack");
meUnpack(&me, &mb, 0);
if(!deUnpack(&de, &me))
sysfatal("bad root: dirUnpack");
if(!de.qidSpace)
sysfatal("bad root: no qidSpace");
qid = de.qidMax;

It seems that the vac archive does not contain the max qid that
flfmt needs. This seems strange to me, as vac -a should need this info just
as much as fossil needs it. Maybe it's tucked away somewhere else. Guess I
need to look some more at the code.

Digging further, I found the comment in file.c, but did not pursue the matter:

/*
 * Fossil generates slightly different vac files, due to a now
 * impossible-to-change bug, which contain a VtEntry
 * for just one venti file, that itself contains the expected
 * three directory entries.  Sigh.
 */
VacFile*
_vacfileroot(VacFs *fs, VtFile *r)

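For anyone who wants to poke at this themselves, here is a minimal sketch (not
from the original thread) that reads a root score and prints its top-level
VtEntries, so you can see whether a score carries vac's single wrapping entry
or fossil's three directory entries. It assumes the plan9port libventi API
(vtdial, vtread, vtrootunpack, vtentryunpack) and a server named by $venti via
vtdial(nil); the program name is made up.

/* rootents.c - illustrative sketch only */
#include <u.h>
#include <libc.h>
#include <venti.h>
#include <thread.h>

void
threadmain(int argc, char *argv[])
{
    char *prefix;
    uchar score[VtScoreSize], rbuf[VtRootSize], *dbuf;
    VtRoot root;
    VtEntry e;
    VtConn *z;
    int i, n;

    if(argc != 2)
        sysfatal("usage: rootents vac:score");
    fmtinstall('V', vtscorefmt);
    if(vtparsescore(argv[1], &prefix, score) < 0)
        sysfatal("bad score: %r");

    z = vtdial(nil);    /* nil: use $venti */
    if(z == nil || vtconnect(z) < 0)
        sysfatal("venti: %r");

    /* the root block gives the name, type, block size and top directory score */
    if(vtread(z, score, VtRootType, rbuf, VtRootSize) < 0)
        sysfatal("read root: %r");
    if(vtrootunpack(&root, rbuf) < 0)
        sysfatal("unpack root: %r");
    print("name=%s type=%s blocksize=%d\n", root.name, root.type, root.blocksize);

    /* the top directory block holds the VtEntries: vac writes one, fossil three */
    dbuf = vtmallocz(root.blocksize);
    n = vtread(z, root.score, VtDirType, dbuf, root.blocksize);
    if(n < 0)
        sysfatal("read dir block: %r");
    for(i = 0; i < n/VtEntrySize; i++){
        if(vtentryunpack(&e, dbuf, i) < 0)
            break;
        if(!(e.flags & VtEntryActive))
            continue;
        print("entry %d: type=%d size=%llud score=%V\n", i, e.type, e.size, e.score);
    }
    threadexitsall(nil);
}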

On Wed, Dec 13, 2017 at 11:00 AM, Steve Simon  wrote:

> I don't think there is any difference between vac and what fossil uses,
> just where it appears in the hierarchy (though maybe I am wrong).
>
> Fossil adds a fixed upper layer of hierarchy
>
> active
> dump
>   <yyyy>
>     <mmdd>
> snap
>   <yyyy>
>     <mmdd>
>       <hhmm>
>
> The difficulty is how to convince fossil to install a score into its
> hierarchy as though it's one that it created.
>
> I am pretty sure this is doable; it just needs a rather deep understanding
> of how fossil works, and when I tried to do it I discovered fossil is
> really rather complex.
>
> -Steve
>
>


Re: [9fans] A potentially useful venti client

2017-12-13 Thread Richard Miller
> The difficulty is how to convince fossil to install a score into its
> hierarchy as though it's one that it created.

Wouldn't that cause a problem with the two origin file systems
having overlapping Qid spaces?  I think you would need to walk
and rebuild the directory tree of the vac being inserted, to
assign new Qid.path values.




Re: [9fans] A potentially useful venti client

2017-12-13 Thread Steve Simon
I don't think there is any difference between vac and what fossil uses,
just where it appears in the hierarchy (though maybe I am wrong).

Fossil adds a fixed upper layer of hierarchy

active
dump
  <yyyy>
    <mmdd>
snap
  <yyyy>
    <mmdd>
      <hhmm>

The difficulty is how to convince fossil to install a score into its hierarchy
as though it's one that it created.

I am pretty sure this is doable; it just needs a rather deep understanding of
how fossil works, and when I tried to do it I discovered fossil is really
rather complex.

-Steve



Re: [9fans] A potentially useful venti client

2017-12-13 Thread Bakul Shah
On Dec 12, 2017, at 3:36 PM, Skip Tavakkolian  
wrote:
> 
> i think it's a matter of it not being taken advantage of, rather than a lack of ability:
> 
> https://github.com/0intro/plan9/blob/7524062cfa4689019a4ed6fc22500ec209522ef0/sys/src/cmd/fcp.c
>  
> 
> 
> 
> On Tue, Dec 12, 2017 at 11:38 AM Steven Stallion  wrote:
> I suspect the main
> culprit is the fact that 9p doesn't support multiple outstanding.

{fossil,vac}<==>venti uses the venti networking protocol, not 9P.
You can have up to 256 outstanding requests, but I don't think
libventi exploits this. It seems to do strict RPC.
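One way to get more than one request in flight without touching libventi's
internals is to fan reads out over several connections, one per proc. The
sketch below is illustrative only (not from the thread): it assumes plan9port's
libventi and libthread, takes scores as vac:... or bare hex on the command
line, and pretends every block is VtDataType, which a real tool would have to
track properly. Using one connection per proc also sidesteps the question of
whether a single VtConn can safely be shared between procs.

#include <u.h>
#include <libc.h>
#include <venti.h>
#include <thread.h>

enum {
    Nconn = 4,          /* number of parallel connections; illustrative */
    MaxLump = 56*1024,  /* generous upper bound on a venti block */
};

typedef struct Req Req;
struct Req
{
    uchar score[VtScoreSize];
    int type;
};

Channel *reqc;  /* of Req* */

/* each proc holds its own connection and services requests from the channel */
static void
readproc(void *v)
{
    Channel *donec;
    VtConn *z;
    Req *r;
    uchar *buf;

    donec = v;
    z = vtdial(nil);    /* nil: use $venti */
    if(z == nil || vtconnect(z) < 0)
        sysfatal("venti: %r");
    buf = vtmalloc(MaxLump);
    while((r = recvp(reqc)) != nil){
        if(vtread(z, r->score, r->type, buf, MaxLump) < 0)
            fprint(2, "read %V: %r\n", r->score);
        vtfree(r);
    }
    vthangup(z);
    sendp(donec, nil);
}

void
threadmain(int argc, char *argv[])
{
    Channel *donec;
    char *prefix;
    Req *r;
    int i;

    fmtinstall('V', vtscorefmt);
    reqc = chancreate(sizeof(void*), Nconn);
    donec = chancreate(sizeof(void*), 0);
    for(i = 0; i < Nconn; i++)
        proccreate(readproc, donec, 256*1024);

    /* for simplicity every block is assumed to be a data block */
    for(i = 1; i < argc; i++){
        r = vtmallocz(sizeof(Req));
        r->type = VtDataType;
        if(vtparsescore(argv[i], &prefix, r->score) < 0){
            fprint(2, "bad score %s: %r\n", argv[i]);
            vtfree(r);
            continue;
        }
        sendp(reqc, r);
    }
    for(i = 0; i < Nconn; i++)
        sendp(reqc, nil);   /* tell each proc to finish */
    for(i = 0; i < Nconn; i++)
        recvp(donec);
    threadexitsall(nil);
}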



Re: [9fans] A potentially useful venti client

2017-12-13 Thread hiro
thanks for backing me skip.



Re: [9fans] A potentially useful venti client

2017-12-12 Thread Ole-Hjalmar Kristensen
No need to be sorry. I've been looking at the code now and then, but
haven't really got the hang of the difference between the vac and venti
formats.

On Wed, Dec 13, 2017 at 1:03 AM, Steve Simon  wrote:

> grief, sorry.
>
> what can i say, too old, too many kids. important stuff gets pushed out of
> my brain (against my will) to make room for the lyrics of “Let it go”.
>
>
> On 12 Dec 2017, at 21:40, Ole-Hjalmar Kristensen <
> ole.hjalmar.kristen...@gmail.com> wrote:
>
> Yes, I know. I was thinking along the same lines a while ago, we even
> discussed this here on this mailing list. I did some digging, and I found
> this interesting comment in vac/file.c:
>
> /*
>  
>  *
>  * Fossil generates slightly different vac files, due to a now
>  * impossible-to-change bug, which contain a VtEntry
>  * for just one venti file, that itself contains the expected
>  * three directory entries.  Sigh.
>  */
> VacFile*
> _vacfileroot(VacFs *fs, VtFile *r)
>
> Ole-Hj
>
> On Tue, Dec 12, 2017 at 9:38 PM, Steve Simon  wrote:
>
>> The best solution (imho) for what you want to do is the feature I never
>> added.
>>
>> It would be great if you could vac up your linux fs and then just cut and
>> paste the
>> vac score into fossil's console with a command like this:
>>
>> main import -v 7478923893289ef928932a9888c98b2333 /active/usr/ole/linux
>>
>> the alternative is a 1.6Tb fossil.
>>
>> -Steve
>>
>>
>


Re: [9fans] A potentially useful venti client

2017-12-12 Thread Steve Simon
grief, sorry. 

what can i say, too old, too many kids. important stuff gets pushed out of my 
brain (against my will) to make room for the lyrics of “Let it go”.


> On 12 Dec 2017, at 21:40, Ole-Hjalmar Kristensen 
>  wrote:
> 
> Yes, I know. I was thinking along the same lines a while ago, we even 
> discussed this here on this mailing list. I did some digging, and I found 
> this interesting comment in vac/file.c:
> 
> /* 
>  
>  *
>  * Fossil generates slightly different vac files, due to a now
>  * impossible-to-change bug, which contain a VtEntry
>  * for just one venti file, that itself contains the expected
>  * three directory entries.  Sigh.
>  */
> VacFile*
> _vacfileroot(VacFs *fs, VtFile *r)
> 
> Ole-Hj
> 
>> On Tue, Dec 12, 2017 at 9:38 PM, Steve Simon  wrote:
>> The best solution (imho) for what you want to do is the feature I never 
>> added.
>> 
>> It would be great if you could vac up your linux fs and then just cut and 
>> paste the
>> vac score into fossil's console with a command like this:
>> 
>> main import -v 7478923893289ef928932a9888c98b2333 /active/usr/ole/linux
>> 
>> the alternative is a 1.6Tb fossil.
>> 
>> -Steve
>> 
> 


Re: [9fans] A potentially useful venti client

2017-12-12 Thread Skip Tavakkolian
i think it's a matter of it not being taken advantage of, rather than a lack of ability:

https://github.com/0intro/plan9/blob/7524062cfa4689019a4ed6fc22500ec209522ef0/sys/src/cmd/fcp.c


On Tue, Dec 12, 2017 at 11:38 AM Steven Stallion 
wrote:

> I suspect the main
> culprit is the fact that 9p doesn't support multiple outstanding.
>


Re: [9fans] A potentially useful venti client

2017-12-12 Thread Ole-Hjalmar Kristensen
Yes, you'd better have high-endurance SSDs. I put the venti index at work on
an ordinary SSD, and it lasted six months. The log itself was fine, of course,
so I only had to rebuild the index to recover. This was plan9port on Solaris,
btw.
Now this venti runs on an ordinary disk; the speed is lower, but not by much,
since I moved it to another machine with about 1G allocated to venti buffer
caches.
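For reference, those caches are the ones set in venti's configuration file.
A minimal sketch of the relevant part of a venti.conf, with made-up paths,
sizes and addresses (mem is the lump cache, bcmem the disk-block cache, icmem
the index-entry cache; see venti(8) for the exact words):

index main
isect /path/to/isect
arenas /path/to/arenas
mem 64m
bcmem 256m
icmem 512m
addr tcp!*!17034
httpaddr tcp!*!8000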

On Tue, Dec 12, 2017 at 10:02 PM, Steven Stallion 
wrote:

> I have a similar setup. On my file server I have a mirrored pair of
> high-endurance SSDs tied together via devfs with two fossil file
> systems: main and other. main is a 32GB write cache which is dumped
> each night at midnight (this is similar to the labs configuration for
> sources). other is the remaining 96GB for data that doesn't need to
> survive if both SSDs happen to fail at the same time.
>
> My venti store is run on a large Linux machine (~6TB of RAID6 storage)
> and is served via plan9port. Another highly recommended setup: if you
> happen to have a Coraid EtherDrive (I'm biased towards the SRX line),
> these make fantastic stores via the magic of AoE. Unfortunately I don't
> have the rack space, otherwise I'd be using one of those instead.
>
> If you're curious about the venti-on-linux setup, I have some scripts
> and a README posted on sources:
> https://9p.io/magic/webls?dir=/sources/contrib/stallion/venti
>
> Somewhat more recently, I wrote a collectd client for plan9 and I also
> monitor my file server using nagios. If there's any interest, I'd be
> happy to post those sources as well.
>
> Cheers,
> Steve
>
> On Tue, Dec 12, 2017 at 2:15 PM, Steve Simon  wrote:
> > Re: fossil
> >
> > Fossil must not fill up; however, I would say that the real problem was
> > the lack of clear documentation stating this.
> >
> > Fossil has two modes of operation.
> >
> > As a stand-alone filesystem, not really intended (I believe) as a
> > production system, more as a replacement for kfs - for laptops or
> > installation systems.
> >
> > A full fossil system is when it is combined with a local venti (venti on
> > the same machine or on a fast, low-latency network connection). Here most
> > files are pulled from venti (in the limit, fossil contains only a single
> > score, which redirects the root of the filesystem to venti). However, as
> > you change files the new versions are stored on fossil.
> >
> > Every night at 4 or 5 am (by convention) fossil does a snap and bumps its
> > epoch, which marks all the changed files as read-only; further changes
> > create new files. The read-only files are then written to venti in the
> > background and their space in fossil reclaimed.
> >
> > This means the fossil only needs to be big enough to contain all the
> > changes you are likely to make in a day - in reality 10GB of fossil will
> > never fill up unless you decide to archive your entire DVD collection on
> > the same day.
> > I have been running fossil and venti since 2004. Fossil did have problems
> > doing ephemeral dumps (short-lived dumps every 15 minutes which live for
> > a few days). This bug used to cause occasional fossil crashes, but venti
> > never lost a byte.
> >
> > The bug was fixed before the labs froze and fossil has been solid since.
> >
> > I used an SSD for venti, which helps its performance, though even with
> > this it will never match Linux filesystem performance (cwfs may well do
> > better), but I know it and it's fast enough for me for now.
> >
> > -Steve
> >
>
>


Re: [9fans] A potentially useful venti client

2017-12-12 Thread Ole-Hjalmar Kristensen
Yes, I know. I was thinking along the same lines a while ago, we even
discussed this here on this mailing list. I did some digging, and I found
this interesting comment in vac/file.c:

/*
 
 *
 * Fossil generates slightly different vac files, due to a now
 * impossible-to-change bug, which contain a VtEntry
 * for just one venti file, that itself contains the expected
 * three directory entries.  Sigh.
 */
VacFile*
_vacfileroot(VacFs *fs, VtFile *r)

Ole-Hj

On Tue, Dec 12, 2017 at 9:38 PM, Steve Simon  wrote:

> The best solution (imho) for what you want to do is the feature I never
> added.
>
> It would be great if you could vac up your linux fs and then just cut and
> paste the
> vac score into fossil's console with a command like this:
>
> main import -v 7478923893289ef928932a9888c98b2333 /active/usr/ole/linux
>
> the alternative is a 1.6Tb fossil.
>
> -Steve
>
>


Re: [9fans] A potentially useful venti client

2017-12-12 Thread Steven Stallion
I have a similar setup. On my file server I have a mirrored pair of
high-endurance SSDs tied together via devfs with two fossil file
systems: main and other. main is a 32GB write cache which is dumped
each night at midnight (this is similar to the labs configuration for
sources). other is the remaining 96GB for data that doesn't need to
survive if both SSDs happen to fail at the same time.

My venti store is run on a large Linux machine (~6TB of RAID6 storage)
and is served via plan9port. Another highly recommended setup: if you happen
to have a Coraid EtherDrive (I'm biased towards the SRX line), these make
fantastic stores via the magic of AoE. Unfortunately I don't have the rack
space, otherwise I'd be using one of those instead.

If you're curious about the venti-on-linux setup, I have some scripts
and a README posted on sources:
https://9p.io/magic/webls?dir=/sources/contrib/stallion/venti

Somewhat more recently, I wrote a collectd client for plan9 and I also
monitor my file server using nagios. If there's any interest, I'd be
happy to post those sources as well.

Cheers,
Steve

On Tue, Dec 12, 2017 at 2:15 PM, Steve Simon  wrote:
> Re: fossil
>
> Fossil must not fill up; however, I would say that the real problem was the
> lack of clear documentation stating this.
>
> Fossil has two modes of operation.
>
> As a stand-alone filesystem, not really intended (I believe) as a production
> system, more as a replacement for kfs - for laptops or installation systems.
>
> A full fossil system is when it is combined with a local venti (venti on the
> same machine or on a fast, low-latency network connection). Here most files
> are pulled from venti (in the limit, fossil contains only a single score,
> which redirects the root of the filesystem to venti). However, as you change
> files the new versions are stored on fossil.
>
> Every night at 4 or 5 am (by convention) fossil does a snap and bumps its
> epoch, which marks all the changed files as read-only; further changes
> create new files. The read-only files are then written to venti in the
> background and their space in fossil reclaimed.
>
> This means the fossil only needs to be big enough to contain all the changes
> you are likely to make in a day - in reality 10GB of fossil will never fill
> up unless you decide to archive your entire DVD collection on the same day.
> I have been running fossil and venti since 2004. Fossil did have problems
> doing ephemeral dumps (short-lived dumps every 15 minutes which live for a
> few days). This bug used to cause occasional fossil crashes, but venti never
> lost a byte.
>
> The bug was fixed before the labs froze and fossil has been solid since.
>
> I used an SSD for venti, which helps its performance, though even with this
> it will never match Linux filesystem performance (cwfs may well do better),
> but I know it and it's fast enough for me for now.
>
> -Steve
>



Re: [9fans] A potentially useful venti client

2017-12-12 Thread Ole-Hjalmar Kristensen
I can understand that it must not fill up. What I do not understand is why
there are no safeguards in place to ensure that it doesn't. (And my inner
geek wants to know.)
As you say, in reality it will not fill up unless you dump huge amounts of
data on it at once. Unfortunately, that is just what I intended to do: dump
a 1.5 TB Linux file system on it. :-)

On Tue, Dec 12, 2017 at 9:15 PM, Steve Simon  wrote:

> Re: fossil
>
> Fossil must not fill up; however, I would say that the real problem was the
> lack of clear documentation stating this.
>
> Fossil has two modes of operation.
>
> As a stand-alone filesystem, not really intended (I believe) as a production
> system, more as a replacement for kfs - for laptops or installation systems.
>
> A full fossil system is when it is combined with a local venti (venti on the
> same machine or on a fast, low-latency network connection). Here most files
> are pulled from venti (in the limit, fossil contains only a single score,
> which redirects the root of the filesystem to venti). However, as you change
> files the new versions are stored on fossil.
>
> Every night at 4 or 5 am (by convention) fossil does a snap and bumps its
> epoch, which marks all the changed files as read-only; further changes
> create new files. The read-only files are then written to venti in the
> background and their space in fossil reclaimed.
>
> This means the fossil only needs to be big enough to contain all the changes
> you are likely to make in a day - in reality 10GB of fossil will never fill
> up unless you decide to archive your entire DVD collection on the same day.
> I have been running fossil and venti since 2004. Fossil did have problems
> doing ephemeral dumps (short-lived dumps every 15 minutes which live for a
> few days). This bug used to cause occasional fossil crashes, but venti never
> lost a byte.
>
> The bug was fixed before the labs froze and fossil has been solid since.
>
> I used an SSD for venti, which helps its performance, though even with this
> it will never match Linux filesystem performance (cwfs may well do better),
> but I know it and it's fast enough for me for now.
>
> -Steve
>
>


Re: [9fans] A potentially useful venti client

2017-12-12 Thread Steve Simon
The best solution (imho) for what you want to do is the feature I never added.

It would be great if you could vac up your linux fs and then just cut and
paste the
vac score into fossil's console with a command like this:

main import -v 7478923893289ef928932a9888c98b2333 /active/usr/ole/linux

the alternative is a 1.6Tb fossil.

-Steve



Re: [9fans] A potentially useful venti client

2017-12-12 Thread hiro
"the fact that 9p doesn't support multiple outstanding"

that's not a sentence, but i'm not sure it's thus a joke.



Re: [9fans] A potentially useful venti client

2017-12-12 Thread Steve Simon
Re: fossil

Fossil must not fill up; however, I would say that the real problem was the
lack of clear documentation stating this.

Fossil has two modes of operation.

As a stand-alone filesystem, not really intended (I believe) as a production
system, more as a replacement for kfs - for laptops or installation systems.

A full fossil system is when it is combined with a local venti (venti on the
same machine or on a fast, low-latency network connection). Here most files
are pulled from venti (in the limit, fossil contains only a single score,
which redirects the root of the filesystem to venti). However, as you change
files the new versions are stored on fossil.

Every night at 4 or 5 am (by convention) fossil does a snap and bumps its
epoch, which marks all the changed files as read-only; further changes create
new files. The read-only files are then written to venti in the background
and their space in fossil reclaimed.

This means the fossil only needs to be big enough to contain all the changes
you are likely to make in a day - in reality 10GB of fossil will never fill
up unless you decide to archive your entire DVD collection on the same day.
I have been running fossil and venti since 2004. Fossil did have problems
doing ephemeral dumps (short-lived dumps every 15 minutes which live for a
few days). This bug used to cause occasional fossil crashes, but venti never
lost a byte.

The bug was fixed before the labs froze and fossil has been solid since.
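For the record, the snap schedule and the lifetime of the ephemeral snapshots
are set from the fossil console with the snaptime command. The line below is
only an illustration (the numbers are made up and the exact flags should be
checked against fossilcons(8)): an archival snap at 05:00, an ephemeral snap
every 15 minutes, each kept for two days (2880 minutes).

fsys main snaptime -a 0500 -s 15 -t 2880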

I used an SSD for venti, which helps its performance, though even with this
it will never match Linux filesystem performance (cwfs may well do better),
but I know it and it's fast enough for me for now.

-Steve



Re: [9fans] A potentially useful venti client

2017-12-12 Thread Ole-Hjalmar Kristensen
Same place as I found another useful script, dumpvacroots:

#!/bin/rc
# dumpvacroots - dumps all the vac scores ever stored to the venti server
# if nothing else, this illustrates that you have to control access
# to the physical disks storing the archive!

ventihttp=`{
echo $venti | sed 's/^[a-z]+!([0-9\.]+)![a-z0-9]+$/\1/
s/^[a-z]+!([0-9\.]+)/\1/; s/$/:8000/'
}

hget http://$ventihttp/index |
awk '
 /^index=/ { blockSize = 0 + substr($3, 11) }
 /^arena=/ { arena = substr($1, 7) }
 /^arena=/ {
start = (0 + substr($5, 2)) - blockSize
printf("venti/printarena -o %.0f %s\n", start, $3 "")
}
' |
rc |
awk '$3 == 16 { printf("vac:%s\n", $2 "") }'

This definitely looks like it could be hacked to support an incremental
dump of scores.

No printarenas there on my (9front) system, though. I'll have to see on a
proper plan9 system, maybe.

On Tue, Dec 12, 2017 at 8:53 PM, Steve Simon  wrote:

> /sys/src/cmd/venti/words/printarenas
>
> no idea why it lived there though.
>
> -Steve
>
>
> On 12 Dec 2017, at 18:33, Ole-Hjalmar Kristensen <
> ole.hjalmar.kristen...@gmail.com> wrote:
>
> Hmm. On both my plan9port and on a 9front system I find printarenas.c, but
> no script. Maybe you are thinking of the script for backup of individual
> arenas to file? Yes, that could be a starting point.
>
> Anyway, printarenas.c doesn't look too scary, basically a loop checking
> all (or matching) arenas. It seems possible to modify the logic to start at
> a specific offset.
>
> Not running fossil at the moment, btw., my main file server is a Linux
> box, but I use vac for backup, both at home and at work. Fossil is
> definitely on my todo list, although the reported behavior when running out
> of space is a bit scary. Do you know why it does not simply block further
> requests while checkpointing to venti, or even better, starts a snapshot
> before it runs out of space?
>
> On Tue, Dec 12, 2017 at 3:07 PM, Steve Simon  wrote:
>
>> printarenas is a script - it walks through all your arenas at each offset.
>>
>> You could craft another script that remembers the last arena and offset
>> you successfully
>> transferred and only send those after that.
>>
>> I think there is a pattern where you can save the last arena,offset in
>> the local
>> fossil. Then you could mount the remote venti to check that last
>> arena,offset
>> that actually arrived and stuck to the disk on the remote site.
>>
> On a similar subject I have 10 years of backups from a decommissioned work
>> server
>> that I need to merge into my home venti one of these days...
>>
>> -Steve
>>
>>
>


Re: [9fans] A potentially useful venti client

2017-12-12 Thread Steve Simon

sorry I meant /sys/src/cmd/venti/words/dumpvacroots of course.



Re: [9fans] A potentially useful venti client

2017-12-12 Thread Steve Simon
/sys/src/cmd/venti/words/printarenas

no idea why it lived there though.

-Steve


> On 12 Dec 2017, at 18:33, Ole-Hjalmar Kristensen 
>  wrote:
> 
> Hmm. On both my plan9port and on a 9front system I find printarenas.c, but no 
> script. Maybe you are thinking of the script for backup of individual arenas 
> to file? Yes, that could be a starting point.
> 
> Anyway, printarenas.c doesn't look too scary, basically a loop checking all 
> (or matching) arenas. It seems possible to modify the logic to start at a 
> specific offset.
> 
> Not running fossil at the moment, btw., my main file server is a Linux box, 
> but I use vac for backup, both at home and at work. Fossil is definitely on 
> my todo list, although the reported behavior when running out of space is a 
> bit scary. Do you know why it does not simply block further requests while 
> checkpointing to venti, or even better, starts a snapshot before it runs out 
> of space?
> 
>> On Tue, Dec 12, 2017 at 3:07 PM, Steve Simon  wrote:
>> printarenas is a script - it walks through all your arenas at each offset.
>> 
>> You could craft another script that remembers the last arena and offset you 
>> successfully
>> transferred and only send those after that.
>> 
>> I think there is a pattern where you can save the last arena,offset in the 
>> local
>> fossil. Then you could mount the remote venti to check that last arena,offset
>> that actually arrived and stuck to the disk on the remote site.
>> 
>> On a similar subject I have 10 years of backups from a decommissioned work 
>> server
>> that I need to merge into my home venti one of these days...
>> 
>> -Steve
>> 
> 


Re: [9fans] A potentially useful venti client

2017-12-12 Thread Steven Stallion
I ran back through my old notes. Turns out I inflated the numbers a
bit - it was about a week rather than a month. I suspect the main
culprit is the fact that 9p doesn't support multiple outstanding. I
wasn't in much of a hurry at the time, so I'm sure there are more
efficient ways than simply firing up cwfs and using vac -s.

On Tue, Dec 12, 2017 at 12:42 PM, Ole-Hjalmar Kristensen
 wrote:
> Thanks for the tip about mounting with 9fs. I have used vacfs on Linux,
> though.
> But why so slow? Did you import a root with lots of backup versions? That
> is partly why I made this client, which can import venti blocks without
> needing to traverse a file tree over and over again.
>
> On Tue, Dec 12, 2017 at 4:45 PM, Steven Stallion 
> wrote:
>>
>> Get ready to wait! It took almost a month for me to import about 30GB
>> from a decommissioned file server. It was well worth the wait though -
>> if you place the resulting .vac file under /lib/vac (or
>> $home/lib/vac) you can just use 9fs to mount with zero fuss.
>>
>> On a related note, once sources started having issues with
>> availability, I started running nightly snaps of my contrib directory
>> via cron:
>>
>> contrib=/n/sources/contrib/$user
>> 9fs sources
>> @{cd $contrib && vac -a $home/lib/vac/contrib.vac .} >[2]/dev/null
>>
>> Now I have a dump-like history of changes I've made to my contrib
>> directory without the need to connect to sources:
>>
>> % 9fs contrib.vac
>> % lc /n/contrib
>> 2015  2016  2017
>>
>> Cheers,
>> Steve
>>
>> On Tue, Dec 12, 2017 at 8:07 AM, Steve Simon  wrote:
>> > printarenas is a script - it walks through all your arenas at each
>> > offset.
>> >
>> > You could craft another script that remembers the last arena and offset
>> > you successfully
>> > transferred and only send those after that.
>> >
>> > I think there is a pattern where you can save the last arena,offset in
>> > the local
>> > fossil. Then you could mount the remote venti to check that last
>> > arena,offset
>> > that actually arrived and stuck to the disk on the remote site.
>> >
>> > On a similar subject I have 10 years of backups from a decommissioned
>> > work server
>> > that I need to merge into my home venti one of these days...
>> >
>> > -Steve
>> >
>>
>



Re: [9fans] A potentially useful venti client

2017-12-12 Thread Ole-Hjalmar Kristensen
Thanks for the tip about mounting with 9fs. I have used vacfs on Linux,
though.
But why so slow? Did you import a root with lots of backup versions? That is
partly why I made this client, which can import venti blocks without needing
to traverse a file tree over and over again.

On Tue, Dec 12, 2017 at 4:45 PM, Steven Stallion 
wrote:

> Get ready to wait! It took almost a month for me to import about 30GB
> from a decommissioned file server. It was well worth the wait though -
> if you place the resulting .vac file under /lib/vac (or
> $home/lib/vac) you can just use 9fs to mount with zero fuss.
>
> On a related note, once sources started having issues with
> availability, I started running nightly snaps of my contrib directory
> via cron:
>
> contrib=/n/sources/contrib/$user
> 9fs sources
> @{cd $contrib && vac -a $home/lib/vac/contrib.vac .} >[2]/dev/null
>
> Now I have a dump-like history of changes I've made to my contrib
> directory without the need to connect to sources:
>
> % 9fs contrib.vac
> % lc /n/contrib
> 2015  2016  2017
>
> Cheers,
> Steve
>
> On Tue, Dec 12, 2017 at 8:07 AM, Steve Simon  wrote:
> > printarenas is a script - it walks through all your arenas at each
> offset.
> >
> > You could craft another script that remembers the last arena and offset
> you successfully
> > transferred and only send those after that.
> >
> > I think there is a pattern where you can save the last arena,offset in
> the local
> > fossil. Then you could mount the remote venti to check that last
> arena,offset
> > that actually arrived and stuck to the disk on the remote site.
> >
> > On a similar subject I have 10 years of backups from a decommissioned
> work server
> > that I need to merge into my home venti one of these days...
> >
> > -Steve
> >
>
>


Re: [9fans] A potentially useful venti client

2017-12-12 Thread Ole-Hjalmar Kristensen
Hmm. On both my plan9port and on a 9front system I find printarenas.c, but
no script. Maybe you are thinking of the script for backup of individual
arenas to file? Yes, that could be a starting point.

Anyway, printarenas.c doesn't look too scary, basically a loop checking all
(or matching) arenas. It seems possible to modify the logic to start at a
specific offset.

Not running fossil at the moment, btw., my main file server is a Linux box,
but I use vac for backup, both at home and at work. Fossil is definitely on
my todo list, although the reported behavior when running out of space is a
bit scary. Do you know why it does not simply block further requests while
checkpointing to venti, or even better, starts a snapshot before it runs
out of space?

On Tue, Dec 12, 2017 at 3:07 PM, Steve Simon  wrote:

> printarenas is a script - it walks through all your arenas at each offset.
>
> You could craft another script that remembers the last arena and offset
> you successfully
> transferred and only send those after that.
>
> I think there is a pattern where you can save the last arena,offset in the
> local
> fossil. Then you could mount the remote venti to check that last
> arena,offset
> that actually arrived and stuck to the disk on the remote site.
>
> On a similar subject I have 10 years of backups from a decommissioned work
> server
> that I need to merge into my home venti one of these days...
>
> -Steve
>
>


Re: [9fans] A potentially useful venti client

2017-12-12 Thread Steven Stallion
It depends - the 30GB I was mentioning before was from an older Ken's
fs that I imported with a modified cwfs. Rather than deal with all of
the history, I just took a snap with vac -s of the latest state of the
file system. I keep the original dump along with the cwfs binary in
case I ever need to dig into the dump (I haven't needed to in years).

The last time I needed to move a venti store around, I was able to just use
rdarena/wrarena to reconstitute the fs on new hardware.

I think it comes down to what you want to preserve - life gets easier
if you don't need to worry about the dump. I don't think it would be
too tough to script the dump though. You probably would just need to
walk through each successive vac and archive it using vac -a. Probably
easier said than done though :-)

Steve

On Tue, Dec 12, 2017 at 10:11 AM, Steve Simon  wrote:
> Interesting.
>
> how did you do the import? did you use vac -q and vac -d previous-score for 
> each
> imported day to try and speed things up?
>
> Previously I imported stuff into venti by copying it into fossil first
> and then taking a snap. I always wanted a better solution, like being able
> to use vac and then installing the score into my main filesystem through
> a special fscons command. Sadly I never got around to it.
>
> -Steve
>



Re: [9fans] A potentially useful venti client

2017-12-12 Thread Steve Simon
Interesting.

how did you do the import? did you use vac -q and vac -d previous-score for each
imported day to try and speed things up?

Previously I imported stuff into venti by copying it into fossil first
and then taking a snap. I always wanted a better solution, like being able
to use vac and then installing the score into my main filesystem through
a special fscons command. Sadly I never got around to it.

-Steve



Re: [9fans] A potentially useful venti client

2017-12-12 Thread Steven Stallion
Get ready to wait! It took almost a month for me to import about 30GB
from a decommissioned file server. It was well worth the wait though -
if you place the resulting .vac file under /lib/vac (or
$home/lib/vac) you can just use 9fs to mount with zero fuss.

On a related note, once sources started having issues with
availability, I started running nightly snaps of my contrib directory
via cron:

contrib=/n/sources/contrib/$user
9fs sources
@{cd $contrib && vac -a $home/lib/vac/contrib.vac .} >[2]/dev/null

Now I have a dump-like history of changes I've made to my contrib
directory without the need to connect to sources:

% 9fs contrib.vac
% lc /n/contrib
2015  2016  2017

Cheers,
Steve

On Tue, Dec 12, 2017 at 8:07 AM, Steve Simon  wrote:
> printarenas is a script - it walks through all your arenas at each offset.
>
> You could craft another script that remembers the last arena and offset you 
> successfully
> transferred and only send those after that.
>
> I think there is a pattern where you can save the last arena,offset in the 
> local
> fossil. Then you could mount the remote venti to check that last arena,offset
> that actually arrived and stuck to the disk on the remote site.
>
> On a similar subject I have 10 years of backups from a decommissioned work 
> server
> that I need to merge into my home venti one of these days...
>
> -Steve
>



Re: [9fans] A potentially useful venti client

2017-12-12 Thread Steve Simon
printarenas is a script - it walks through all your arenas at each offset.

You could craft another script that remembers the last arena and offset you 
successfully
transferred and only send those after that.

I think there is a pattern where you can save the last arena,offset in the local
fossil. Then you could mount the remote venti to check that last arena,offset
that actually arrived and stuck to the disk on the remote site.

On a similar subject I have 10 years of backups from a decommissioned work server
that I need to merge into my home venti one of these days...

-Steve



[9fans] A potentially useful venti client

2017-12-12 Thread Ole-Hjalmar Kristensen
Based on copy.c and readlist.c, I have cobbled together a venti client to
copy a list of venti blocks from one venti server to another. I am thinking
of using it to incrementally replicate the contents of one site to another.
It could even be used for two-way replication, since the CAS and
deduplicating properties of venti ensure that you will never have write
conflicts at the block level.

I have tried it out by feeding it with the output from printarenas, and it
seems to work reasonably well. Does anyone have any good ideas about how to
incrementally extract the set of scores that have been added to a venti
server? You could extract the whole set of scores and do a diff with an old
set, of course, but that's rather inefficient.

Ole-Hj.


#include <u.h>
#include <libc.h>
#include <bio.h>
#include <thread.h>
#include <venti.h>

enum
{
// XXX What to do here?
VtMaxLumpSize = 65535,
};

char *srchost;
char *dsthost;
Biobuf b;
VtConn *zsrc;
VtConn *zdst;
uchar *buf;
void run(Biobuf*);
int nn;

void
usage(void)
{
fprint(2, "usage: copylist srchost dsthost list\n");
threadexitsall("usage");
}

int
parsescore(uchar *score, char *buf, int n)
{
int i, c;

memset(score, 0, VtScoreSize);

if(n != VtScoreSize*2){
werrstr("score wrong length %d", n);
return -1;
}
for(i=0; i<n; i++){
if(buf[i] >= '0' && buf[i] <= '9')
c = buf[i] - '0';
else if(buf[i] >= 'a' && buf[i] <= 'f')
c = buf[i] - 'a' + 10;
else if(buf[i] >= 'A' && buf[i] <= 'F')
c = buf[i] - 'A' + 10;
else {
c = buf[i];
werrstr("bad score char %d '%c'", c, c);
return -1;
}

if((i & 1) == 0)
c <<= 4;

score[i>>1] |= c;
}
return 0;
}

void
threadmain(int argc, char *argv[])
{
int fd, i;

ARGBEGIN{
default:
usage();
break;
}ARGEND

if(argc < 2)
usage();

fmtinstall('V', vtscorefmt);
buf = vtmallocz(VtMaxLumpSize);

srchost = argv[0];
zsrc = vtdial(srchost);
if(zsrc == nil)
sysfatal("could not dial src server: %r");
if(vtconnect(zsrc) < 0)
sysfatal("vtconnect src: %r");

dsthost = argv[1];
zdst = vtdial(dsthost);
if(zdst == nil)
sysfatal("could not dial dst server: %r");
if(vtconnect(zdst) < 0)
sysfatal("vtconnect dst: %r");

if(argc == 2){
Binit(&b, 0, OREAD);
run(&b);
}else{
for(i=2; i<argc; i++){
fd = open(argv[i], OREAD);
if(fd < 0)
sysfatal("open %s: %r", argv[i]);
Binit(&b, fd, OREAD);
run(&b);
Bterm(&b);
close(fd);
}
}
threadexitsall(nil);
}
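The body of run() is cut off in the archive above. For completeness, here is
a guess at its shape, a minimal sketch only: it assumes each input line
carries a hex score followed by a numeric block type (the field positions for
real printarenas output may differ), reads each block from the source server
and writes it to the destination, using the program's own parsescore and
globals.

void
run(Biobuf *bin)
{
    char *line, *flds[4];
    uchar score[VtScoreSize];
    int n, type;

    while((line = Brdline(bin, '\n')) != nil){
        line[Blinelen(bin)-1] = 0;
        if(tokenize(line, flds, nelem(flds)) < 2)
            continue;
        if(parsescore(score, flds[0], strlen(flds[0])) < 0){
            fprint(2, "bad score: %s\n", flds[0]);
            continue;
        }
        type = atoi(flds[1]);
        n = vtread(zsrc, score, type, buf, VtMaxLumpSize);
        if(n < 0){
            fprint(2, "read %V: %r\n", score);
            continue;
        }
        if(vtwrite(zdst, score, type, buf, n) < 0)
            fprint(2, "write %V: %r\n", score);
        nn++;
    }
}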