[9fans] Dual dialing/forking sessions to increase 9P throughput

2020-12-29 Thread cigar562hfsp952fans
It's well-known that 9P has trouble transferring large files (high
volume/high bandwidth) over high-latency networks, such as the Internet.
While sleeping, one night last week, I got an idea which might improve
throughput:

#define PSEUDOCODE ("this is pseudocode, not actual C")

#define OJUMBO (1<<8)	/* hypothetical flag value */

long
write(int fd, void *buf, long len)
{
	/* send large writes down the bulk connection, if we have one */
	srv = (len > IOUNIT) ? chan.bulksrv : chan.srv;
	return write_9p(srv.conn, buf, len);
}

main() {
	fd = open(filename, ORD|OWR|OJUMBO);
	read(fd, buf, n);
	write(fd, buf, n);
	celebrate_newfound_speed();
}

The idea, basically, is to use an open flag (OJUMBO) to signal that two
connections to the same server should be attempted.  If a second
connection can be established, it is used for normal 9P transactions,
while the first connection is used for large ("jumbo") writes.  Of
course, this approach will only work if the server forks and accepts
multiple connections.  If the second connection cannot be established,
open() falls back to its customary behavior (with, perhaps, a larger
iosize, depending on the setting of OJUMBO).

There is, however, a very simple reason why this approach won't really
work: the fids for a file opened on one connection won't be recognized
by the server on the other connection.  I guess those are the kinds of
details you miss when you engineer software in your sleep.  :) The qids
would probably still be the same, assuming the same server answers both
connections.  But even that can't be guaranteed.

If it were possible to fork a 9P session off, onto another connection,
something like this could work.  But that introduces the possibility
that messages could arrive at the server or client out-of-order, which
would violate the 9P protocol.

--
9fans: 9fans
Permalink: https://9fans.topicbox.com/groups/9fans/Te69bb0fce0f0ffaf-M7768691deb99f4bd060052c8
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


Re: [9fans] 9Front / cwfs64x and hjfs storage

2020-12-29 Thread Ethan Gardener
On Tue, Dec 29, 2020, at 1:06 PM, Alex Musolino wrote:
> > While it is not yet a concern, I am trying to figure something out
> > that does not seem to be well documented in the man pages or the fqa
> > about the file systems.
> 
> Parts of fs(4), fs(8), and fsconfig(8) can be applied to cwfs.  The
> syntax that Ethan talked about for concatenating WORM devices is
> described in fsconfig(8).

Only approximately. That page spends several paragraphs describing syntax 
specific to Ken Thompson's standalone fileserver, which we no longer have. 
CWFS is a port of it, so the syntax is similar, but the device 
specifications are different. Looking at a real config, along with some 
guesswork, will help make the transition. There was something about adding 
option letters to the beginning of paths...
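
For concreteness, here's a guess at what a concatenated-WORM config might look like, extrapolated from fsconfig(8)'s cat syntax and a stock single-disk config; the device paths and the exact bracketing are guesses, so check them against a real config before trusting them:

```
filsys main c(/dev/sdE0/fscache)((/dev/sdE0/fsworm)(/dev/sdF0/fsworm))
filsys dump o
filsys other (/dev/sdE0/other)
```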

I'm *really* not happy with the state of cwfs documentation and I'm not 
qualified to fix it.

> > I am currently running a plan9front instance with cwfs64x (the whole
> > "hjfs is experimental, you could lose your files" seemed to be a
> > bit dangerous when I started everything) and I understand that it is
> > a WORM file system.  My question is for the end game.  If the
> > storage gets full with all of the diffs, is there a way for the
> > oldest ones to roll off, or do you need to expand the storage or
> > export them or ?  I come from the linux world where this is not a
> > feature file system wise and worst case I would have lvm's that I
> > could just grow or with repos I could cull the older diffs, if
> > needed.
> 
> Cwfs doesn't know anything about diffs as such, it just keeps track of
> dirty blocks and writes these out to the WORM partition when a dump is
> requested.  The plan 9 approach to storage is to just keep adding
> capacity since the price of storage falls faster than you can use it
> up.
> 
> I recently upgraded my home file server from an 80GB HDD to a 240GB
> SSD and documented the process [1].  The WORM partition contained 25GB
> and dates back to 2016-04-12.  Now, maybe you'll generate much more
> data than me over less time, but in this day and age of cheap
> multi-terabyte HDDs and hundred-gigabyte SSDs I think it's still
> perfectly reasonable to just keep adding capacity as you need it.
> [1] http://docs.a-b.xyz/migrating-cwfs.html

Good link.

Incidentally, I bought two 720GB drives in January 2007 and partitioned them as 
500GB/remainder. (I always keep spare partitions.) I never even filled one of 
those 500GB partitions in 10 years. ;) In my case, it would be well worth 
migrating rather than extending the space.

> Another thing to consider is how much data you really need to be
> sending to the WORM in the first place.  Multimedia, for example,
> might be better stored on a more conventional file system since the lack
> of content-based deduping in cwfs might result in these files being
> dumped multiple times as they are moved around or have their metadata
> edited.  Even venti won't dedup in the latter case as it doesn't do
> content-defined chunking.

Yes. 9P has no real move operation (wstat can rename a file within its 
directory, but there's no cross-directory move), so moving is done by copying, 
and you'll end up with multiple copies of these large unchanging files in the 
dump for no good reason. I'm not so sure about metadata because the dump is 
block-based, not file-based, but if the metadata is variable-length, changing 
it will shift the data and pollute any block-based WORM, dedup'd or not.

It might seem more reasonable to keep such large, unchanging files on cwfs 
"other", except "other" reacts badly to being accidentally filled. (Other is 
really just the cwfs cache partition code, which also can't handle being 
filled, due to some cache-specific practical issue.) It's better to use 
another filesystem entirely. I haven't heard of many problems with hjfs, but 
filesystems in general are the worst for unpleasant surprises. There is also 
dossrv (very heavily used and tested), ext2srv (possibly not so much?), and 
kfs (if we still have it). Kfs is a bit lightweight, but I had no problems 
using it for root in a 9vx setup years ago. It may have smaller limits. 
Dossrv supports FAT32, so the per-file size limit is just under 4GB.

> Plan 9 is really good at combining multiple filesystems from multiple
> machines (running different operating systems!) together into a single
> namespace.  My music collection lives on an ext4 filesystem mirrored
> across 2 drives (and backed up elsewhere) but can be easily accessed
> from 9front using sshfs(4).  I just run `9fs music` and the entire
> collection appears under /n/music.

Yes indeed! I haven't used sshfs myself; I used u9fs. (The needed updates to 
ssh only happened in recent years.) We have so many options for networked 
filesystems. :) Anyway, sshfs is what's recommended these days, so there's 
probably no point to u9fs, but I dug up the links anyway for nostalgic reasons.

source:
http://9p.io/sources/plan9/sys/src/cmd/unix/u9fs/
man page:
http://9p.io/magic/man2html/4/u9fs


Re: [9fans] 9Front / cwfs64x and hjfs storage

2020-12-29 Thread Alex Musolino
> While it is not yet a concern, I am trying to figure something out
> that does not seem to be well documented in the man pages or the fqa
> about the file systems.

Parts of fs(4), fs(8), and fsconfig(8) can be applied to cwfs.  The
syntax that Ethan talked about for concatenating WORM devices is
described in fsconfig(8).

> I am currently running a plan9front instance with cwfs64x (the whole
> "hjfs is experimental, you could lose your files" seemed to be a
> bit dangerous when I started everything) and I understand that it is
> a WORM file system.  My question is for the end game.  If the
> storage gets full with all of the diffs, is there a way for the
> oldest ones to roll off, or do you need to expand the storage or
> export them or ?  I come from the linux world where this is not a
> feature file system wise and worst case I would have lvm's that I
> could just grow or with repos I could cull the older diffs, if
> needed.

Cwfs doesn't know anything about diffs as such, it just keeps track of
dirty blocks and writes these out to the WORM partition when a dump is
requested.  The plan 9 approach to storage is to just keep adding
capacity since the price of storage falls faster than you can use it
up.

I recently upgraded my home file server from an 80GB HDD to a 240GB
SSD and documented the process [1].  The WORM partition contained 25GB
and dates back to 2016-04-12.  Now, maybe you'll generate much more
data than me over less time, but in this day and age of cheap
multi-terabyte HDDs and hundred-gigabyte SSDs I think it's still
perfectly reasonable to just keep adding capacity as you need it.

Another thing to consider is how much data you really need to be
sending to the WORM in the first place.  Multimedia, for example,
might be better stored on a more conventional file system since the lack
of content-based deduping in cwfs might result in these files being
dumped multiple times as they are moved around or have their metadata
edited.  Even venti won't dedup in the latter case as it doesn't do
content-defined chunking.

Plan 9 is really good at combining multiple filesystems from multiple
machines (running different operating systems!) together into a single
namespace.  My music collection lives on an ext4 filesystem mirrored
across 2 drives (and backed up elsewhere) but can be easily accessed
from 9front using sshfs(4).  I just run `9fs music` and the entire
collection appears under /n/music.

[1] http://docs.a-b.xyz/migrating-cwfs.html

--
Permalink: https://9fans.topicbox.com/groups/9fans/Tc951a224dde6dde5-Md9c09155880fcdeb3dc85cfc


Re: [9fans] 9Front / cwfs64x and hjfs storage

2020-12-29 Thread sirjofri

29.12.2020 10:15:29 Kurt H Maier :

> On Tue, Dec 29, 2020 at 08:53:55AM +, sirjofri wrote:
> > for ori's new filesystem, maybe?
>
> If he implements this and the resulting filesystem is not called
> Oriborous I will be extraordinarily, possibly fatally, disappointed.

Absolutely

sirjofri

--
Permalink: https://9fans.topicbox.com/groups/9fans/Tc951a224dde6dde5-Me30a5774c03d500dc7781c2f


Re: [9fans] 9Front / cwfs64x and hjfs storage

2020-12-29 Thread Kurt H Maier
On Tue, Dec 29, 2020 at 08:53:55AM +, sirjofri wrote:
> 
> Then removing WORM1, storing it as backup or reformat it as a new WORM4:
> 
...
> Is something like that possible? If not, it still could be an inspiration 
> for ori's new filesystem, maybe?

If he implements this and the resulting filesystem is not called
Oriborous I will be extraordinarily, possibly fatally, disappointed.

khm

--
Permalink: https://9fans.topicbox.com/groups/9fans/Tc951a224dde6dde5-M7f2afbac7f1adf7ebd1019ee


Re: [9fans] 9Front / cwfs64x and hjfs storage

2020-12-29 Thread sirjofri

Hello,

29.12.2020 03:27:19 Ethan Gardener :

> You can add disks. CWFS config allows multiple devices/partitions to
> form the WORM. It's like a simple form of LVM. I forget the exact syntax
> and I don't think there's a man page documenting cwfs's particular
> variant syntax, but I think it's something like (/dev/sdE0/worm
> /dev/sdF0/worm) in place of just /dev/sdE0/worm

Is it then also possible to remove older disks at some point
(physically)? Something like this:

- WORM1 (full)
- WORM2 (full)
- WORM3 (not full)
- cache

Then removing WORM1, storing it as backup or reformat it as a new WORM4:

- WORM2 (full)
- WORM3 (not full)
- WORM4 (new, empty)
- cache

Is something like that possible? If not, it still could be an inspiration 
for ori's new filesystem, maybe?


sirjofri

--
Permalink: https://9fans.topicbox.com/groups/9fans/Tc951a224dde6dde5-Md728027c4f6ed59af86e6aaf