Here is a pointer to a discussion on comp.os.plan9, but I did not really
get a clear understanding of whether it was possible or not. It seems to me
that it was possible at some time, but based on my own findings, changes to
the format may have made vac and fossil incompatible.
On Wed, Dec 13,
I don't know either, but when I tried flfmt with a vac score as an
experiment, I got this:
ole@ole-TECRA-R940 ~/Desktop/plan9 $ bin/fossil/flfmt -h 192.168.0.101 -v
f648dbae0075eb73bc394ad6cd4c059e655e127c fossil.dat
fs header block already exists; are you sure? [y/n]: y
fs file is mounted via
> The difficulty is how to convince fossil to install a score into its
> hierarchy as though it's one that it created.
Wouldn't that cause a problem with the two origin file systems
having overlapping Qid spaces? I think you would need to walk
and rebuild the directory tree of the vac being
I don't think there is any difference between vac and what fossil uses,
just where it appears in the hierarchy (though maybe I am wrong).
Fossil adds a fixed upper layer of hierarchy
active
dump
snap
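For concreteness, the fixed layer fossil adds looks something like this (a sketch based on fossil(4); the exact mount point, names, and dates are examples):

```
/n/fossil/active                    the current, writable tree
/n/fossil/snapshot/2017/1213/0500   temporary snaps kept on fossil's own disk
/n/fossil/archive/2017/1213         archival snaps written through to venti
```

So a vac root imported wholesale would have to be grafted in somewhere under active, below this fixed layer.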
On Dec 12, 2017, at 3:36 PM, Skip Tavakkolian
wrote:
>
> i think it's not being taken advantage of, rather than ability:
>
> https://github.com/0intro/plan9/blob/7524062cfa4689019a4ed6fc22500ec209522ef0/sys/src/cmd/fcp.c
>
>
thanks for backing me up, skip.
No need to be sorry. I've been looking at the code now and then, but
haven't really got the hang of the difference between the vac and venti
formats.
On Wed, Dec 13, 2017 at 1:03 AM, Steve Simon wrote:
> grief, sorry.
>
> what can i say, too old, too many kids. important
grief, sorry.
what can i say, too old, too many kids. important stuff gets pushed out of my
brain (against my will) to make room for the lyrics of “Let it go”.
> On 12 Dec 2017, at 21:40, Ole-Hjalmar Kristensen
> wrote:
>
> Yes, I know. I was thinking
i think it's not being taken advantage of, rather than ability:
https://github.com/0intro/plan9/blob/7524062cfa4689019a4ed6fc22500ec209522ef0/sys/src/cmd/fcp.c
On Tue, Dec 12, 2017 at 11:38 AM Steven Stallion
wrote:
> I suspect the main
> culprit is the fact that 9p
Yes, you had better have high-endurance SSDs. At work I put the venti index on
an ordinary SSD, and it lasted six months. The log itself was fine, of
course, so I only had to rebuild the index to recover. This was plan9port
on Solaris, btw.
Now this venti runs on an ordinary disk, the speed is less,
Yes, I know. I was thinking along the same lines a while ago, we even
discussed this here on this mailing list. I did some digging, and I found
this interesting comment in vac/file.c:
/*
*
* Fossil generates slightly different vac files, due to a now
* impossible-to-change bug, which contain
I have a similar setup. On my file server I have a mirrored pair of
high-endurance SSDs tied together via devfs with two fossil file
systems: main and other. main is a 32GB write cache which is dumped
each night at midnight (this is similar to the labs configuration for
sources). other is the
I can understand that it cannot fill up. What I do not understand is why
there are no safeguards in place to ensure that it doesn't. (And my inner
geek wants to know)
As you say, in reality it will not fill up unless you dump huge amounts of
data on it at once. Unfortunately, this is just what I
The best solution (imho) for what you want to do is the feature I never added.
It would be great if you could vac up your linux fs and then just cut and paste
the vac score into fossil's console with a command like this:
main import -v 7478923893289ef928932a9888c98b2333 /active/usr/ole/linux
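In the absence of such an import command, the workaround I know of is to mount the score with vacfs and copy the tree in through the normal file interface (the host, vac file, and paths here are just examples):

```
% vacfs -h 192.168.0.101 /lib/vac/linux.vac     # serve the vac tree on /n/vac
% dircp /n/vac /n/fossil/active/usr/ole/linux   # copy it into fossil
```

The obvious cost is that every block gets re-read and re-written through fossil instead of being shared directly in venti, which is exactly what the wished-for import would avoid.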
"the fact that 9p doesn't support multiple outstanding"
that's not a sentence, but i'm not sure it's thus a joke.
Re: fossil
Fossil must not fill up; however, I would say the real shortcoming was the lack
of clear documentation stating this.
Fossil has two modes of operation.
As a stand-alone filesystem, not really intended (I believe) as a production
system, more as a replacement for kfs - for laptops or
Same place as I found another useful script, dumpvacroots:
#!/bin/rc
# dumpvacroots - dumps all the vac scores ever stored to the venti server
# if nothing else, this illustrates that you have to control access
# to the physical disks storing the archive!
ventihttp=`{
echo $venti | sed
r
sorry I meant /sys/src/cmd/venti/words/dumpvacroots of course.
/sys/src/cmd/venti/words/printarenas
no idea why it lived there though.
-Steve
> On 12 Dec 2017, at 18:33, Ole-Hjalmar Kristensen
> wrote:
>
> Hmm. On both my plan9port and on a 9front system I find printarenas.c, but no
> script. Maybe you are thinking of
I ran back through my old notes. Turns out I inflated the numbers a
bit - it was about a week rather than a month. I suspect the main
culprit is the fact that 9p doesn't support multiple outstanding. I
wasn't in much of a hurry at the time, so I'm sure there are more
efficient ways than simply
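That bottleneck is easy to estimate: with a single outstanding request, each Tread/Rread round trip moves at most msize bytes, so latency rather than bandwidth sets the ceiling. A back-of-the-envelope sketch (numbers are illustrative):

```python
# Rough ceiling for a 9p read loop with one outstanding request:
# each Tread/Rread pair moves at most msize bytes per network round trip.
def serial_9p_throughput(msize_bytes, rtt_seconds):
    """Upper bound on bytes per second with a single outstanding request."""
    return msize_bytes / rtt_seconds

# 8 KiB msize over a 1 ms round trip caps out near 8 MiB/s,
# no matter how fast the link is.
mib_per_s = serial_9p_throughput(8192, 0.001) / (1024 * 1024)
print(round(mib_per_s, 2))  # 7.81
```

fcp (linked earlier in the thread) gets around the serial limit by issuing transfers from several procs at once, which is the unused ability skip was referring to.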
Thanks for the tip about mounting with 9fs. I have used vacfs on Linux,
though.
But why so slow? Did you import a root with lots of backup versions? It was
partly because of this that I made my client, which can import venti blocks
without needing to traverse a file tree over and over again.
On Tue,
Hmm. On both my plan9port and on a 9front system I find printarenas.c, but
no script. Maybe you are thinking of the script for backup of individual
arenas to file? Yes, that could be a starting point.
Anyway, printarenas.c doesn't look too scary, basically a loop checking all
(or matching)
It depends - the 30GB I was mentioning before was from an older Ken's
fs that I imported with a modified cwfs. Rather than deal with all of
the history, I just took a snap with vac -s of the latest state of the
file system. I keep the original dump along with the cwfs binary in
case I ever need to
Interesting.
how did you do the import? did you use vac -q and vac -d previous-score for each
imported day to try and speed things up?
Previously I imported stuff into venti by copying it into fossil first
and then taking a snap. I always wanted a better solution, like being able
to use vac and
Get ready to wait! It took almost a month for me to import about 30GB
from a decommissioned file server. It was well worth the wait though -
if you place the resulting .vac file under /lib/vac (or
$home/lib/vac) you can just use 9fs to mount with zero fuss.
On a related note, once sources
printarenas is a script - it walks through all your arenas at each offset.
You could craft another script that remembers the last arena and offset you
successfully
transferred and only send those after that.
I think there is a pattern where you can save the last arena,offset in the local
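A minimal sketch of that save-the-checkpoint pattern (everything here is hypothetical: the checkpoint file name, send_block, and the block listing are stand-ins for illustration, not the real venti tools):

```python
import json
import os

CHECKPOINT = "venti-sync.ckpt"   # hypothetical local checkpoint file

def load_checkpoint():
    """Return the last (arena, offset) transferred, or a marker before all blocks."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            c = json.load(f)
        return (c["arena"], c["offset"])
    return (-1, -1)

def save_checkpoint(arena, offset):
    with open(CHECKPOINT, "w") as f:
        json.dump({"arena": arena, "offset": offset}, f)

def sync(blocks, send_block):
    """Send only the blocks past the checkpoint.
    blocks: iterable of (arena, offset, data), in arena/offset order."""
    last = load_checkpoint()
    sent = 0
    for arena, offset, data in blocks:
        if (arena, offset) <= last:
            continue             # already transferred in an earlier run
        send_block(data)
        save_checkpoint(arena, offset)
        sent += 1
    return sent
```

On the next run only blocks past the saved (arena, offset) are sent, so an interrupted transfer simply resumes where it left off.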
Based on copy.c and readlist.c, I have cobbled together a venti client to
copy a list of venti blocks from one venti server to another. I am thinking
of using it to incrementally replicate the contents of one site to
another. It could even be used for two-way replication, since the CAS and
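The reason two-way replication can work at all is that venti is content-addressed: a block's score is the SHA-1 of its contents, so identical data gets the same address on both servers and duplicate writes are harmless. A toy model of that property (not the real venti protocol):

```python
import hashlib

class ToyVenti:
    """Toy content-addressed store: the score is the SHA-1 of the block."""
    def __init__(self):
        self.blocks = {}

    def write(self, data):
        score = hashlib.sha1(data).hexdigest()
        self.blocks[score] = data    # rewriting is a no-op: same score, same data
        return score

    def read(self, score):
        return self.blocks[score]

def replicate(src, dst, scores):
    """Copy the listed blocks, skipping any the destination already holds."""
    for s in scores:
        if s not in dst.blocks:
            dst.write(src.read(s))

a, b = ToyVenti(), ToyVenti()
s1 = a.write(b"hello")
s2 = b.write(b"hello")            # identical contents -> identical score
replicate(a, b, [s1])
replicate(b, a, [s2])             # two-way sync converges; no conflicts possible
print(s1 == s2, len(a.blocks), len(b.blocks))  # True 1 1
```

Because addresses are derived from contents, replicating in both directions can never produce conflicting versions of a block, only convergence on the same set of blocks.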