[9fans] Re: Dual dialing/forking sessions to increase 9P throughput

2020-12-30 Thread joey
Do you know if there has ever been a comprehensive evaluation of tuning 
parameters for 9P?  I am sure it is obvious from my previous post that I am on 
the newer side of Plan 9 users.

I feel like part of it could be a configuration issue, that is to say, 
specifying gbe rather than ether as the medium and setting -m to a 9k message 
size.  Additionally, would it violate the 9P protocol if you chunked the data 
first (i.e. took a file with 256 kbytes and ran it over 4 connections with 64 
kbytes each)?  There is an overhead that would consume the gains at some 
point, but from a theoretical standpoint is it possible?  Or, more accurately, 
would such an approach violate 9P?

> celebrate_newfound_speed();
This is honestly phenomenal :)

> switch (srv.proto) {
> case TCP:
> 	iosize = max(chan.rsize, chan.wsize);
> 	init_9p_tcp(srv.addr, ver9p, iosize);
Again, maybe this is ignorance, but my understanding was that while Plan 9 can 
support a lot of things running over TCP (for the rest of the world), it 
supports and prefers IL for such a connection.  TCP's retransmission throttling 
is universally bad on high-bandwidth links, so this might be more a function of 
TCP than of the Plan 9 server.  I once had a dedicated 100G link between Dallas 
and Denver, and pre-tuned it initially had only about 4G of bandwidth (yes, 
this is not a typo).  Some simple tuning (both Linux devices) got that up to 
50G almost immediately.  But TCP was the transport of choice and we never got 
to the 100G level; there were just too many variables, and getting close would 
knock the connection bandwidth way back.  We only had the link for a short 
time, so possibly this could have been worked out, but my point is that really 
anything over 1G copper is non-trivial when TCP is involved.
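For reference, the "simple tuning" on a long fat pipe like that is usually socket buffer and congestion-control sysctls on the Linux side.  The values below are illustrative assumptions sized for a high bandwidth-delay product, not the exact settings from that link, and the interface name is a placeholder:

```shell
# Illustrative Linux tuning for a high-BDP path (values are assumptions).
# Raise the socket buffer ceilings (bytes).
sysctl -w net.core.rmem_max=268435456
sysctl -w net.core.wmem_max=268435456
# TCP autotuning range: min / default / max (bytes).
sysctl -w net.ipv4.tcp_rmem="4096 87380 268435456"
sysctl -w net.ipv4.tcp_wmem="4096 65536 268435456"
# BBR tends to tolerate loss on long fat pipes better than CUBIC.
sysctl -w net.ipv4.tcp_congestion_control=bbr
# Jumbo frames cut per-packet overhead (must be supported end to end;
# "eth0" is a placeholder interface name).
ip link set dev eth0 mtu 9000
```

Without a large enough window, throughput is capped at roughly window size divided by round-trip time regardless of link speed, which is why an untuned 100G link can sit at a few gigabits.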

~Joey
--
9fans: 9fans
Permalink: 
https://9fans.topicbox.com/groups/9fans/Te69bb0fce0f0ffaf-M06a2dd85933dbb4fe106607c
Delivery options: https://9fans.topicbox.com/groups/9fans/subscription


Re: [9fans] 9Front / cwfs64x and hjfs storage

2020-12-30 Thread joey
Thank you everyone for all of your knowledge!

I have a much better understanding of the WORM file systems for Plan 9, and I 
had never thought of using external storage as a solution or of tiering the 
storage based on what is stored.

Thanks again,
~Joey
--
Permalink: 
https://9fans.topicbox.com/groups/9fans/Tc951a224dde6dde5-M7f625973d98a0edcfaa8b37d


[9fans] 9Front / cwfs64x and hjfs storage

2020-12-28 Thread joey
Hello,

While it is not yet a concern, I am trying to figure something out about the 
file systems that does not seem to be well documented in the man pages or the 
FQA.

I am currently running a plan9front instance with cwfs64x (the whole "hjfs is 
experimental, you could lose your files" seemed a bit dangerous when I started 
everything), and I understand that it is a WORM file system.  My question is 
about the end game.  If the storage gets full with all of the diffs, is there 
a way for the oldest ones to roll off, or do you need to expand the storage, 
export them, or something else?  I come from the Linux world, where this is 
not a file-system-level feature; worst case I would have LVMs that I could 
just grow, or with repos I could cull the older diffs if needed.

If there are additional features for this in hjfs, that would be nice to know 
too.  I am just really trying to understand the limits of the technology and 
what expectations to have.  Otherwise, I love the Plan 9 environment, and 
knowing what options I have for when I inevitably get to that point would put 
me more at ease trusting more operations to Plan 9 systems.

Best and thank you!
~Joey
--
Permalink: 
https://9fans.topicbox.com/groups/9fans/Tc951a224dde6dde5-M58a27bc5d68fd6f7b293a922