Re: [9fans] Question about fossil

2014-06-08 Thread Bakul Shah
On Sat, 07 Jun 2014 21:39:29 BST Riddler riddler...@gmail.com wrote:
 
 Onto my question: What if I shrunk that fossil partition to, say, 1GB
 and then wrote either more than 1GB in small files or a single 2GB
 file.

Why would you want to make the fossil partition that small?

I would keep it at least twice as large as the largest file
I'd ever want to create.

 Will fossil error on the write that pushes it over the edge?
 Perhaps 'spill' the extra data over to venti?
 Something else entirely?

I haven't studied fossil but AFAIK it won't spill data to
venti when it runs low on disk. Rather, you set it up to take
daily snapshots so the partition should be large enough to
hold all the new data you may generate in a day.
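
For reference, the snapshot schedule is set from the fossil console
(see fossilcons(8)); a sketch only, with illustrative times:

	fsys main snaptime -a 0500 -s 60 -t 1440

i.e. an archival snapshot at 05:00, an ephemeral snapshot every 60
minutes, each kept for 1440 minutes.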

Other things to note:

- make sure you mirror or backup the venti disk or else you
  may lose the only copy of a block!
- try this out on a small scale before you commit to it, as I
  suspect you'll run into various limits and may be bugs. Do
  report what you discover.
- performance will likely be poor. For better performance you
  may want to keep venti index on a separate (flash) disk.
- it would be nice if you can come up with a workable setup
  for a venti server!



Re: [9fans] Question about fossil

2014-06-08 Thread erik quanstrom
 - try this out on a small scale before you commit to it, as I
   suspect you'll run into various limits and may be bugs. Do
   report what you discover.
 - performance will likely be poor. For better performance you
   may want to keep venti index on a separate (flash) disk.
 - it would be nice if you can come up with a workable setup
   for a venti server!

usb performance is ~4-7MB/s.  this is the best you can hope for
from the disk.  venti will only slow this down by multiplying
disk accesses and being a bit seeky.  keep in mind if you're
using this for a venti server, that usb b/w needs to be shared
with tcp.

- erik



Re: [9fans] Question about fossil

2014-06-08 Thread erik quanstrom
 I wasn't thinking I would need a big venti, more that I only need a small
 fossil. My train of thought was: because the fossil size is used to store
 the unarchived files, after which they can be gotten from venti, it
 might be practical to only have the fossil be big enough to store the
 maximal size of files that will change per day (snapshot interval).
 
 I would struggle to change 16GB a day unless I'm backing up a VM, so 64GB
 seemed like it should accommodate any changes and still leave room for lots
 of often-used files to be kept there (if fossil thinks like that).

why bother optimizing this?  fossil is going to be 1% of the disk even
if you make it silly huge.

- erik



Re: [9fans] installs which hang

2014-06-08 Thread erik quanstrom
On Sat Jun  7 19:22:41 EDT 2014, st...@quintile.net wrote:
  - timesync.  i saw this issue once in 2008, so i don't remember much
  about it.
 
 I think this was a bug in cron. When the time leapt forward as timesync
 corrected the time at boot, cron would try to run all the intervening
 events and hang the machine.
 
 cron now ignores time changes if they are big.
 
 Having said this my memory is a little hazy too...

it can also lock up the machine by updating too fast, without help
from cron.  i just don't remember the details of why this resulted
in a hang.

- erik



Re: [9fans] Question about fossil

2014-06-08 Thread Steve Simon
 I wasn't thinking I would need a big venti, more that I only need a small
 fossil. My train of thought was: because the fossil size is used to store
 the unarchived files, after which they can be gotten from venti, it
 might be practical to only have the fossil be big enough to store the
 maximal size of files that will change per day (snapshot interval).
 

Quite right, 16GB is fine.

 I would struggle to change 16GB a day unless I'm backing up a VM, so 64GB
 seemed like it should accommodate any changes and still leave room for lots
 of often-used files to be kept there (if fossil thinks like that).

No, fossil does not do this. After the snapshot, fossil is emptied completely
and a pointer is placed at the root directory of fossil pointing to venti,
so all accesses now go to venti. From then on fossil runs as copy-on-write,
so it stores only the changes since the last snapshot.
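
Concretely - dates here are just an example - the archived snapshots
appear under the dump:

	9fs dump
	ls /n/dump/2014/0608

while the live fossil holds only what has changed since the last
archival snapshot.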

 I realise there would be a delay as fossil realises it needs to fall back
 on venti but I was thinking as they're the same PC and disk the delay would
 be negligible.

There should be no delay in fossil as such.

I will admit venti is not as fast as a more traditional fs, or even cwfs/ken's
fs on plan9, but it has its own advantages, which outweigh this for me.

BTW: I have a mirror of 2 disks for fossil and venti and plan to add an SSD
to this mirror in the hope that this will improve venti performance.
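
For the curious, the mirror can be done with the fs(3) device; a rough
sketch only, with invented device names:

	echo mirror fossil /dev/sdC0/fossil /dev/sdD0/fossil >/dev/fs/ctl

and then fossil is started on /dev/fs/fossil.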

It is all a change of thinking - for example, never truncate logfiles,
as truncating them actually uses more space in venti than just letting
them grow.

Never worry about cloning large directories; it's (almost) free.

If you have big files you don't want to keep in venti, use chmod +t on them
to stop archiving (the file is kept in fossil only). This means they are not
backed up in venti, but I find it helpful for things like downloaded ISO
files, which can be easily regenerated.
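
For example (path invented):

	chmod +t /usr/glenda/tmp/somedistro.iso

marks the file temporary, and the archival snapshot to venti skips it.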

-Steve



Re: [9fans] Question about fossil

2014-06-08 Thread erik quanstrom
 It is all a change of thinking - for example, never truncate logfiles,
 as truncating them actually uses more space in venti than just letting
 them grow.
 
 Never worry about cloning large directories; it's (almost) free.

in my mind these are not related to the content-addressed storage feature
of venti, but rather the idea that storage is practically infinite.

 If you have big files you don't want to keep in venti, use chmod +t on them
 to stop archiving (the file is kept in fossil only). This means they are not
 backed up in venti, but I find it helpful for things like downloaded ISO
 files, which can be easily regenerated.

keeping the fact that storage is infinite in mind, i don't understand +t.
it breaks the model without a big benefit.  the /n/other model announces
itself as non-archived storage, and won't be lost if fossil is recovered
from archive.

- erik



Re: [9fans] 9pi on qemu failure

2014-06-08 Thread erik quanstrom
On Sat Jun  7 20:58:54 EDT 2014, p...@fb.com wrote:

 Hi,
 
 I've tried to run 9pi from richard miller on qemu but failed
 http://plan9.bell-labs.com/sources/contrib/miller/
 
 qemu-system-arm -cpu arm1176 -m 512 -M versatilepb -kernel 9pi
 qemu: fatal: Trying to execute code outside RAM or ROM at 0x80010028

does anyone have the s9pi kernel image that corresponds to this?
i don't so i can't connect this failure to the source.

- erik



[9fans] duppage

2014-06-08 Thread erik quanstrom
i was experimenting a bit with cinap's version of dropping duppage, and for
the lame build the kernel tests there's quite a bit more i/o

        duppage     no duppage
read    45976291    53366962
rpc     73674       75718

you can see below that the no-duppage kernel ends up reading 6909416 bytes
from 6c for 136 executions.  6c is only 264450 bytes of text+data,
so that's 26 unnecessary reads (6c had already been cached).

the original fares better, reading only 1816296, but that's
still way too much.

this needs a better algorithm.

- erik

---
without duppage
Opens  Reads  (bytes)  Writes  (bytes)  File
    4     43   301801       0        0  /bin/ape/sh
    3     12    61137       0        0  /bin/ls
  155    476  1642711       0        0  /bin/rc
    5     84   602660       0        0  /bin/awk
    9     79   514294       0        0  /bin/6a
  136   1060  6909416       0        0  /bin/6c
    5     50   329397       0        0  /bin/6l
    8     20    66901       0        0  /bin/echo
    3     11    51238       0        0  /bin/xd
    3     13    72700       0        0  /bin/sed
    4     13    54357       0        0  /bin/cp
    4     33   201057       0        0  /bin/file
    4     21   125962       0        0  /bin/strip
    4     12    51024       0        0  /bin/aux/data2s
    2      9    42686       0        0  /bin/rm
 
with duppage
Opens  Reads  (bytes)  Writes  (bytes)  File
    4     31   216169       0        0  /bin/ape/sh
    3     10    50833       0        0  /bin/ls
  155    190   210423       0        0  /bin/rc
    5     57   398636       0        0  /bin/awk
    9     57   357441       0        0  /bin/6a
  136    370  1816296       0        0  /bin/6c
    5     29   178269       0        0  /bin/6l
    8     13    35485       0        0  /bin/echo
    3     11    59756       0        0  /bin/sed
    3      9    40742       0        0  /bin/xd
    4     10    39909       0        0  /bin/cp
    4     18   100617       0        0  /bin/file
    4     12    60634       0        0  /bin/strip
    4      9    37104       0        0  /bin/aux/data2s
    2      8    38150       0        0  /bin/rm



Re: [9fans] Fwd: 9pi on qemu failure

2014-06-08 Thread Richard Miller
 This works though with a linux kernel compiled for the raspberry, e.g.
 from http://xecdesign.com/qemu-emulating-raspberry-pi-the-easy-way/ 
 wget http://xecdesign.com/downloads/linux-qemu/kernel-qemu

I would bet that linux kernel isn't actually configured for the
raspberry pi -- it will be for a generic arm1176.  The pi's processor isn't
exactly an arm1176; it's a Broadcom bcm2835 videocore gpu with an arm
core grafted on.  It's highly unlikely that qemu knows how to emulate
this well enough for a native bcm kernel like 9pi to run successfully.
"Emulating raspberry pi the easy way" is not really emulating the
pi, just using a generic arm kernel to run linux software from a
raspberry pi linux distribution image.

If you want to run 9pi, I recommend buying a raspberry pi.  They aren't
expensive, and native Plan 9 is a much more rewarding experience.




Re: [9fans] Fwd: 9pi on qemu failure

2014-06-08 Thread Richard Miller
 I've tried to run 9pi from richard miller on qemu but failed
 ...
 does anyone have the s9pi kernel image that corresponds to this?
 i don't so i can't connect this failure to the source.

Looking at file dates, that was an old kernel which came from
9pi.img-old.gz .  I've now replaced 9pi (and added a corresponding
s9pi) with the kernel from the newer 9pi.img.gz, to reduce confusion.




Re: [9fans] Question about fossil

2014-06-08 Thread Bakul Shah
On Sun, 08 Jun 2014 03:56:24 EDT erik quanstrom quans...@quanstro.net wrote:
  - try this out on a small scale before you commit to it, as I
suspect you'll run into various limits and may be bugs. Do
report what you discover.
  - performance will likely be poor. For better performance you
may want to keep venti index on a separate (flash) disk.
  - it would be nice if you can come up with a workable setup
for a venti server!
 
 usb performance is ~4-7MB/s.  this is the best you can hope for
 from the disk.  venti will only slow this down by multiplying
 disk accesses and being a bit seeky.  keep in mind if you're
 using this for a venti server, that usb b/w needs to be shared
 with tcp.

The last time I measured this (Aug 2012), raw disk write was
10MB/s and file writes were 2MB/s. On the same h/w & disk Linux
got 25MB/s (don't recall file throughput). And Linux gets
11.3MB/s ethernet throughput compared to 3.7MB/s on 9pi (both
with ttcp). Linux tcp throughput is close to line speed.

Might almost be worth using p9p venti under Linux!  And keep
isects on the sdcard and arenas on an external disk.
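
Something like this in venti.conf - paths purely illustrative:

	index main
	isect /dev/mmcblk0p3
	arenas /dev/sda1

i.e. the index section on the sdcard and the arenas on the external disk.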



Re: [9fans] Question about fossil

2014-06-08 Thread erik quanstrom
 The last time I measured this (Aug 2012) raw disk write was
 10MB/s, file writes were 2MB/s. On the same h/w  disk linux
 got 25MB/s (don't recall file throughput). And Linux gets
 11.3MB/s ethernet throughput compared 3.7MB/s on 9pi (both
 with ttcp). Linux tcp throughput is close to linespeed.

i can push the usb hard enough to trigger bugs.  :-)
i usually get about 6MB/s.

there's only one trick: use qmalloc instead of pool.
pool astounds with its slowness.  the code is in 9atom.

- erik



Re: [9fans] duppage

2014-06-08 Thread cinap_lenrek
i get consistent results with iostats for building pc64.
(on amd64)

Opens  Reads  (bytes)  Writes  (bytes)  File
  166    192   104916       0        0  /bin/rc
    4     90   343308       0        0  /bin/awk
   37     51    51280       0        0  /bin/echo
   17     43   103786       0        0  /bin/sed
    3     17    51567       0        0  /bin/ls
    3     12    34624       0        0  /bin/grep
   13     24    40114       0        0  /bin/cp
    1      6    19112       0        0  /bin/pwd
    4     15    40658       0        0  /bin/xd
  128    192   263785       0        0  /bin/6c
    5     34   113149       0        0  /bin/6l
    4     27    89551       0        0  /bin/6a
    1     10    33360       0        0  /bin/mkdir
    2      9    22016       0        0  /bin/dd
    4     18    55085       0        0  /bin/strip
    1     17    62301       0        0  /bin/mkpaqfs
    4     14    33797       0        0  /bin/rm
   23      1       21       0        0  /bin/membername
    2     15    46432       0        0  /bin/tr
    1     19    72544       0        0  /bin/ar
    1      4    10992       0        0  /bin/cat
    2     24    86580       0        0  /bin/hoc
    4     26    86433       0        0  /bin/file
    4     13    33439       0        0  /bin/aux/data2s
    1      7    23488       0        0  /bin/date
    1     14    49502       0        0  /bin/size
    1     25    94969       0        0  /bin/mk

i made a small test running echo 1, 2, 3 and 4 times,
and i get exactly one additional read per exec (which
is the read of the file header); all the other pages
are cached.

with MCACHE mount, it is exactly the same number of
reads no matter how often i run it. :)

but that's not loaded. is your machine starved of memory?

my guess would be that the cached pages are getting uncached
and reused in your case. i remember fixing some bugs
in imagereclaim that could potentially cause this.

but that's all speculation...

there's a statistics struct there that you can peek at
from acid -k and see how often imagereclaim runs between
your test passes.

--
cinap



Re: [9fans] duppage

2014-06-08 Thread Charles Forsyth
On 8 June 2014 15:53, erik quanstrom quans...@quanstro.net wrote:

 i was experimenting a bit with cinap's version of dropping duppage, and for
 the lame build the kernel tests there's quite a bit more i/o


that doesn't make any sense. duppage copied the page the wrong way round
(used the image page and put another copy in).
eliminating duppage simply copies the page from the image cache instead of
using that page. there isn't any i/o in either case.


Re: [9fans] duppage

2014-06-08 Thread erik quanstrom
 that doesn't make any sense.  duppage copied the page the wrong way
 round (used the image page and put another copy in).  eliminating
 duppage simply copies the page from the image cache instead of using
 that page.  there isn't any i/o in either case.

well, those are the measurements.  do you think they are misleading?  perhaps
with the pio happening in another context?  i haven't hunted this down.

- erik



Re: [9fans] duppage

2014-06-08 Thread cinap_lenrek
duppage() causes the freelist to be shuffled differently. without
stuffing cached pages at the freelist tail, the tail accumulates
an uncached stopper page, which breaks the invariant of imagereclaim,
which just scans from the tail backwards as long as the pages are
cached.

imagereclaim does not move the pages to the head after uncaching them!
so by default imagereclaim prevents the cached pages before the ones
it reclaimed from ever being reclaimed.

before image reclaim: H CC T
after image reclaim:  H UU T
                        ^- as far as imagereclaim went

with duppage, there are always new cached pages added at the tail.

i suspect once you run out of images, imagereclaim will run constantly
and blow away the little useful image cache you still have, causing
additional reads to page them back in.

--
cinap



Re: [9fans] duppage

2014-06-08 Thread Charles Forsyth
On 8 June 2014 18:34, erik quanstrom quans...@quanstro.net wrote:

 well, those are the measurements.  do you think they are misleading?
  perhaps
 with the pio happening in another context?  i haven't hunted this down.


the difference is only how fault makes the copy (easy or hard), there
shouldn't be any call to pio either way.


Re: [9fans] duppage

2014-06-08 Thread erik quanstrom
On Sun Jun  8 13:51:18 EDT 2014, charles.fors...@gmail.com wrote:

 On 8 June 2014 18:34, erik quanstrom quans...@quanstro.net wrote:
 
  well, those are the measurements.  do you think they are misleading?
   perhaps
  with the pio happening in another context?  i haven't hunted this down.
 
 
 the difference is only how fault makes the copy (easy or hard), there
 shouldn't be any call to pio either way.

unless the image is not cached, or doesn't have 1 reference.

- erik



Re: [9fans] duppage

2014-06-08 Thread cinap_lenrek
right. the question is, how did it vanish from the image cache.

--
cinap



Re: [9fans] duppage

2014-06-08 Thread erik quanstrom
On Sun Jun  8 13:55:52 EDT 2014, cinap_len...@felloff.net wrote:
 right. the question is, how did it vanish from the image cache.

i think it is in the image cache, but .ref > 1.

- erik



Re: [9fans] duppage

2014-06-08 Thread Charles Forsyth
On 8 June 2014 19:15, erik quanstrom quans...@quanstro.net wrote:

 i think it is in the image cache, but .ref 1.


but in that case it will still not pio, but make a local writable copy.


[9fans] 9fans RSS Feed?

2014-06-08 Thread Brian Vito
Is there a working RSS feed that corresponds to the 9fans mailing list? The
gmane feeds seem to be at least a few weeks behind, and the
groups.google.com feeds, to the extent they exist/work at all, are even
more out of date (on a side note, is groups no longer being synchronized
with the mailing list?). Any suggestions would be appreciated. Thanks very
much.