> 
> Many thanks.
> 
> I think I had an additional problem in that the Linux system had been
> updated since I last compiled plan9port.  Redoing ./INSTALL and running
> venti with no options works now.  Unfortunately, at the time of day I
> asked the question, my mind was in no condition to look at the source code.
> 

did you get vbackup working?  i think i have venti running on a server:

% hget http://lepton5/storage
index=main
total arenas=19 active=1
total space=9,999,253,504 used=13,628,050
clumps=3,466 compressed clumps=2,802 data=23,379,423 compressed data=13,409,692
% 

but when i try to use vbackup from my laptop, i run into problems:

% echo $venti
tcp!lepton5!venti
% vbackup /dev/hda5
offset of magic: 1372
ffs magic 0xbe800ea0
offset of magic: 1372
ffs magic 0x0
offset of magic: 1372
ffs magic 0x0
*** glibc detected *** double free or corruption (fasttop): 0x08083978 ***
11425: signal: sys: abort
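
those "offset of magic" lines look to me like vbackup probing the
standard ffs superblock locations and printing the fs_magic field,
which sits 1372 bytes into struct fs.  only the first probe turns up
anything nonzero, and 0xbe800ea0 isn't a valid ffs magic either
(ufs1 is 0x011954, ufs2 is 0x19540119).  here's a rough standalone
sketch of that kind of probe; the offsets and magic numbers are from
the BSD ffs headers, but the name ffsprobe and everything else is
invented and unchecked against the libdiskfs source:

#include <sys/types.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* sketch only: probe the standard ffs superblock offsets and
 * print the fs_magic field, the way vbackup's output suggests.
 * constants are from the BSD ffs headers; names are made up. */

enum {
	MAGICOFF  = 1372,	/* offsetof(struct fs, fs_magic) */
	SBSIZE    = 8192,	/* superblock read size */
	UFS1MAGIC = 0x011954,
	UFS2MAGIC = 0x19540119,
};

static const off_t sbtry[] = { 8192, 65536, 262144 };	/* ufs1, ufs2, "piggy" */

int
main(int argc, char **argv)
{
	unsigned char sb[SBSIZE];
	uint32_t magic;
	int fd;
	unsigned i;

	if(argc != 2 || (fd = open(argv[1], O_RDONLY)) < 0){
		fprintf(stderr, "usage: ffsprobe /dev/xxx\n");
		return 1;
	}
	for(i = 0; i < sizeof sbtry/sizeof sbtry[0]; i++){
		if(pread(fd, sb, SBSIZE, sbtry[i]) != SBSIZE)
			continue;
		memcpy(&magic, sb+MAGICOFF, sizeof magic);
		printf("offset of magic: %d\n", MAGICOFF);
		printf("ffs magic 0x%x\n", (unsigned)magic);
	}
	close(fd);
	return 0;
}
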
% core
stack ./core
        # Thu Sep 29 11:14:31 EDT 2005
        # vbackup /dev/hda5 
% stack ./core
gsignal() called from abort+0xeb
abort() called from 0xb7e88365
0xb7e88365 called from 0xb7e8ea07
0xb7e8ea07 called from __libc_free+0x82
__libc_free() called from p9free+0x21
p9free(v=0x8083978) /usr/local/plan9/src/lib9/malloc.c:32 called from ffssync+0xbc
ffssync(fsys=0x80838e0) /usr/local/plan9/src/libdiskfs/ffs.c:175 called from fsysopenffs+0x7b
fsysopenffs(disk=0x8078328) /usr/local/plan9/src/libdiskfs/ffs.c:51 called from fsysopen+0x16
fsysopen(disk=0x8078328) /usr/local/plan9/src/libdiskfs/fsys.c:25 called from threadmain+0xf0
threadmain(argc=0x2, argv=0xbf8c0b64) /usr/local/plan9/src/cmd/vbackup/vbackup.c:150 called from threadmainstart+0x24
threadmainstart(v=0x0) /usr/local/plan9/src/libthread/thread.c:582 called from threadstart+0x16
threadstart(y=0xb7de5008, x=0x0) /usr/local/plan9/src/libthread/thread.c:91 called from makecontext+0x44
% 

i'm sure i'm doing something obviously wrong, but i'm having trouble
seein' it.  can anyone else?
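
fwiw, my reading of the trace: the failed magic check sends ffssync
down an error path that frees the superblock buffer, and then
fsysopenffs's error path frees the same pointer again on the way out.
i haven't dug into libdiskfs to confirm, so this is just a guess at
the shape of the bug; all the names below are invented:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* hypothetical illustration, not the libdiskfs source: a sync
 * routine frees its buffer when the magic check fails, and the
 * caller's error path frees the same pointer a second time. */

enum { BLOCKSIZE = 8192, UFS1MAGIC = 0x011954 };

typedef struct Fsys Fsys;
struct Fsys {
	unsigned char *super;	/* superblock buffer */
};

static int
fsyssync(Fsys *fsys)
{
	unsigned int magic;

	memcpy(&magic, fsys->super, sizeof magic);	/* pretend the magic lives at offset 0 */
	if(magic != UFS1MAGIC){
		free(fsys->super);	/* first free */
		return -1;
	}
	return 0;
}

static int
fsysopen(Fsys *fsys)
{
	fsys->super = calloc(1, BLOCKSIZE);	/* all zero, so the magic check fails */
	if(fsys->super == NULL)
		return -1;
	if(fsyssync(fsys) < 0){
		free(fsys->super);	/* second free: glibc aborts, as above */
		return -1;
	}
	return 0;
}

int
main(void)
{
	Fsys fsys;

	if(fsysopen(&fsys) < 0)
		fprintf(stderr, "fsysopen failed\n");
	return 0;
}

if that's what's going on, freeing in only one place (or setting the
pointer to NULL after the first free) would stop the abort, though the
underlying open failure would remain.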

john cummings
[EMAIL PROTECTED]
