Re: free collaborative tools for low BW and losy connections

2020-03-31 Thread Paul Nash
> Exactly. And there's no disconnect: usenet doesn't scale because each object 
> is copied to all core nodes rather than referenced, or copied-as-needed, or 
> other.  This design of distributed messaging platform will eventually break 
> as it grows.  

Usenet scales far more gracefully than the current web.

Each node sends content to a few downstream nodes.  This makes it easy to 
scale; there is no central mega-node that gets overwhelmed, connectivity is to 
a nearby upstream where there is a reasonable amount of bandwidth. Last time I 
ran a server, the sender could filter based on newsgroup or message size, so 
avoid swamping links.  Content was mostly text.
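Paul's description above (each node feeding a few downstream nodes, with per-peer filtering on newsgroup and message size) can be sketched as a toy flood fill. This is an illustrative model only, not any real news server's code; all names are invented.

```python
# Toy model of Usenet-style flood fill with per-node filtering.
# Duplicates are suppressed by tracking message-IDs already seen,
# which is what stops the flood from looping forever.

from dataclasses import dataclass, field

@dataclass
class Article:
    msgid: str
    group: str
    size: int

@dataclass
class Node:
    name: str
    peers: list = field(default_factory=list)   # downstream nodes
    groups: set = field(default_factory=set)    # newsgroups this node accepts
    max_size: int = 64_000                      # per-article size cap (bytes)
    seen: set = field(default_factory=set)      # message-IDs already held

    def wants(self, art: Article) -> bool:
        return (art.msgid not in self.seen
                and art.group in self.groups
                and art.size <= self.max_size)

    def receive(self, art: Article):
        if not self.wants(art):
            return
        self.seen.add(art.msgid)
        for peer in self.peers:     # flood onward; 'seen' stops duplicates
            peer.receive(art)

# Two-hop chain: upstream -> regional -> leaf, like Rhodes/alt.* feeding on.
leaf = Node("leaf", groups={"alt.test"})
regional = Node("regional", peers=[leaf], groups={"alt.test", "comp.misc"})
upstream = Node("upstream", peers=[regional], groups={"alt.test", "comp.misc"})

upstream.receive(Article("<1@example>", "alt.test", 1_000))
print(sorted(n.name for n in (upstream, regional, leaf) if "<1@example>" in n.seen))
```

Note that the filtering happens at each hop independently, which is what let a small site take only the groups (and sizes) its link could afford.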

It is possible to use offline transmission — certain groups dumped onto mag 
tape and mailed, get pulled in at the destination.  BTDT.

More demand = more client nodes which in turn distribute to other nodes, so 
each node does not need to talk to a large number of others.

We did this about 30 years ago in South Africa; Rhodes University brought in 
most groups, I brought in alt.*.  We each distributed to a select number of 
nodes, who distributed again.  Lather, rinse, repeat.  Usenet for the entire 
sub-continent (along with email) over 9600 bps dial-up circuits.

paul

Re: free collaborative tools for low BW and losy connections

2020-03-31 Thread Grant Taylor via NANOG

On 3/31/20 10:06 AM, Nick Hilliard wrote:
Not pretty, but at least it could fit 4 xterms on-screen.  In that 
sense, it was almost as functional as my ragingly fast desktop is 
these days.

Link - Terminal forever <3
 - http://www.commitstrip.com/en/2016/12/22/terminal-forever/



--
Grant. . . .
unix || die





Re: free collaborative tools for low BW and losy connections

2020-03-31 Thread Nick Hilliard

Joe Greco wrote on 31/03/2020 15:55:

There's a strange disconnect here.  The concept behind Usenet is to have
a distributed messaging platform.  It isn't clear how this would work
without ... well, distribution.  The choice is between flood fill and
perhaps something a little smarter, for which options were proposed and
designed and never really caught on.

Without the distribution mechanism (flooding), you don't have Usenet,
you have something else entirely.


Exactly. And there's no disconnect: usenet doesn't scale because each 
object is copied to all core nodes rather than referenced, or 
copied-as-needed, or other.  This design of distributed messaging 
platform will eventually break as it grows.  It's ok to acknowledge this 
explicitly: message buses are useful, but they have their limits.



Kinda like how there's a problem with the technology of the Internet
because if I wanna be a massive network or a tier 1 or whatever, I
gotta have a massive investment in routers and 100G circuits and all
that?  Why can't we just build an Internet out of 10 megabit ethernet
and T1's?  Isn't this just another example of your "problem with the
technology at the design level?"


No, not even slightly.  Is an NSP expected to carry all traffic for the 
entire DFZ?  Because that's your proposed analogue here.



Usage grows.  I used to run Usenet on a 24MB Sun 3/60 with a pile of
disks and Telebits.  Now I'm blowing gigabits through massive machines.
This isn't a poorly designed technology.  It's scaled well past what
anyone would have expected.


yeah, I don't miss those days.  I ran news on a decsystem 5100 with a 
couple of megs of RAM and a single disk. My desktop was a sun 3/60.  Not 
pretty, but at least it could fit 4 xterms on-screen.  In that sense, it 
was almost as functional as my ragingly fast desktop is these days.


Nick


Re: free collaborative tools for low BW and losy connections

2020-03-31 Thread Joe Greco
On Tue, Mar 31, 2020 at 01:46:09PM +0100, Nick Hilliard wrote:
> Joe Greco wrote on 29/03/2020 23:14:
> Flood often works fine until you attempt to scale it.  Then it breaks,
> just like Bjørn admitted. Flooding is inherently problematic at scale.
> >>>
> >>>For... what, exactly?  General Usenet?
> >>
> >>yes, this is what we're talking about.  It couldn't scale to general
> >>usenet levels.
> >
> >The scale issue wasn't flooding, it was bandwidth and storage.
> 
> the bandwidth and storage problems happened because of flooding.  Short 
> of cutting off content, there's no way to restrict bandwidth usage, but 
> cutting off content restricts the functionality of the ecosystem.  You 
> can work around this using remote readers and manually distributing, 
> but there's still a fundamental scaling issue going on here, namely that 
> the model of flooding all posts in all groups to all nodes has terrible 
> scaling design characteristics.  It's terrible because it requires all 
> core nodes to linearly scale their individual resourcing requirements 
> according to the overall load of the entire system.  You can manually 
> configure load splitting to work around some of these limitations, but 
> it's not possible to ignore the design problems here.

There's a strange disconnect here.  The concept behind Usenet is to have
a distributed messaging platform.  It isn't clear how this would work
without ... well, distribution.  The choice is between flood fill and
perhaps something a little smarter, for which options were proposed and
designed and never really caught on.  

Without the distribution mechanism (flooding), you don't have Usenet,
you have something else entirely.

> [...]
> >The Usenet "backbone" with binaries isn't going to be viable without a
> >real large capex investment and significant ongoing opex.  This isn't a
> >failure in the technology.
> 
> We may need to agree to disagree on this then.  Reasonable engineering 
> entails being able to build workable solutions within a feasible budget. 
>  If you can't do this, then there's a problem with the technology at 
> the design level.

Kinda like how there's a problem with the technology of the Internet 
because if I wanna be a massive network or a tier 1 or whatever, I 
gotta have a massive investment in routers and 100G circuits and all 
that?  Why can't we just build an Internet out of 10 megabit ethernet 
and T1's?  Isn't this just another example of your "problem with the 
technology at the design level?"

See, here's the thing.  Twenty six years ago, one of the local NSP's here
spent some modest thousands of dollars on a few routers, a switch, and a
circuit down to Chicago and set up shop on a folding table (really).  
This was not an unreasonable outlay of cash to get bootstrapped back in 
those days.  However, within just a few years, the amount of cash that
you'd need to invest to get started as an NSP had exploded dramatically.

Usage grows.  I used to run Usenet on a 24MB Sun 3/60 with a pile of 
disks and Telebits.  Now I'm blowing gigabits through massive machines.  
This isn't a poorly designed technology.  It's scaled well past what
anyone would have expected.

> >Usenet is a great technology for doing collaboration on low bandwidth and
> >lossy connections.
> 
> For small, constrained quantities of traffic, it works fine.

It seems like that was the point of this thread...

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"The strain of anti-intellectualism has been a constant thread winding its way
through our political and cultural life, nurtured by the false notion that
democracy means that 'my ignorance is just as good as your knowledge.'"-Asimov


Re: free collaborative tools for low BW and losy connections

2020-03-31 Thread Nick Hilliard

Joe Greco wrote on 29/03/2020 23:14:

Flood often works fine until you attempt to scale it.  Then it breaks,
just like Bjørn admitted. Flooding is inherently problematic at scale.


For... what, exactly?  General Usenet?


yes, this is what we're talking about.  It couldn't scale to general
usenet levels.


The scale issue wasn't flooding, it was bandwidth and storage.


the bandwidth and storage problems happened because of flooding.  Short 
of cutting off content, there's no way to restrict bandwidth usage, but 
cutting off content restricts the functionality of the ecosystem.  You 
can work around this using remote readers and manually distributing, 
but there's still a fundamental scaling issue going on here, namely that 
the model of flooding all posts in all groups to all nodes has terrible 
scaling design characteristics.  It's terrible because it requires all 
core nodes to linearly scale their individual resourcing requirements 
according to the overall load of the entire system.  You can manually 
configure load splitting to work around some of these limitations, but 
it's not possible to ignore the design problems here.
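Nick's linear-scaling point can be made concrete with back-of-envelope arithmetic: under full flooding every core node must ingest the entire feed, so per-node bandwidth tracks total system volume no matter how many nodes share the work. The volumes below are illustrative round numbers, not measured figures.

```python
# Why flooding scales per-node with *total* system load: a full-feed node
# has to receive every article once, so its ingest bandwidth is simply
# total daily volume spread over 86,400 seconds.

def per_node_feed_gbps(total_daily_tb: float) -> float:
    """Gbps one full-feed core node needs just to ingest the flood."""
    bits = total_daily_tb * 1e12 * 8        # TB/day -> bits/day
    return bits / 86_400 / 1e9              # -> Gbps, sustained

# Text-era vs binary-era volumes (illustrative):
for daily_tb in (1, 10, 100):
    print(f"{daily_tb:>4} TB/day -> {per_node_feed_gbps(daily_tb):5.2f} Gbps per core node")
```

Adding more core nodes does not reduce any node's burden here; only filtering content out does, which is Nick's point about cutting off content versus cutting the ecosystem.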


[...]

The Usenet "backbone" with binaries isn't going to be viable without a
real large capex investment and significant ongoing opex.  This isn't a
failure in the technology.


We may need to agree to disagree on this then.  Reasonable engineering 
entails being able to build workable solutions within a feasible budget. 
 If you can't do this, then there's a problem with the technology at 
the design level.



Usenet is a great technology for doing collaboration on low bandwidth and
lossy connections.


For small, constrained quantities of traffic, it works fine.

Nick


Re: free collaborative tools for low BW and losy connections

2020-03-30 Thread Michael Thomas



On 3/30/20 11:18 AM, Keith Medcalf wrote:

On Monday, 30 March, 2020 11:19, Michael Thomas  wrote:


On 3/30/20 5:52 AM, Rich Kulawiec wrote:

On Mon, Mar 30, 2020 at 06:30:16AM -0500, Joe Greco wrote:

Actual text traffic has been slowly dying off for years as webforums
have matured and become a better choice of technology for nontechnical
end users on high speed Internet connections.

My view is that the move to web forums is a huge downgrade.  Mailing
lists are vastly superior.

[]
The thing that mailing lists lack is a central directory of their
existence. The discovery problem is a pretty big one.

Where is this to be found for webforums?  I have never seen one.  Or do you 
think Google is such a master index?  Can you please pose your Google query 
that you think results in a comprehensive index of *all* webforums?

Or is your comment nothing more than you noticing that NEITHER e-mail lists NOR 
webforums have a master index, which is a rather useless observation that would 
indicate that webforums have zero advantage over mailing lists in this regard, 
so what is the point of the whataboutism?



I didn't expect to get an apology, and wasn't disappointed.

Mike



Re: free collaborative tools for low BW and losy connections

2020-03-30 Thread Jay Farrell via NANOG
On Mon, Mar 30, 2020 at 8:56 AM Rich Kulawiec  wrote:

> On Mon, Mar 30, 2020 at 06:30:16AM -0500, Joe Greco wrote:
> > Actual text traffic has been slowly dying off for years as webforums
> > have matured and become a better choice of technology for nontechnical
> > end users on high speed Internet connections.
>
> My view is that the move to web forums is a huge downgrade.  Mailing lists
> are vastly superior.
>

Are web forums even still much of a thing in recent years? My own
experience in several non-networking realms, where I was active in a number
of web-based forums, is that over the past 4 or 5 years facebook groups,
both public and private, have siphoned off the bulk of the former
discussion traffic of once-thriving web forums, with few exceptions. Talk
about a huge downgrade. While facebook's groups allow for virtually
unlimited image uploads, they are extremely lacking in features such as
threaded discussion and searching. I grudgingly live in a number of
facebook groups, including one I admin, but only because all the people I
know from usenet, and then later in forums, have migrated to facebook
groups. One popular city-based discussion group withered away from hundreds
of posts and comments daily to sometimes several days with NO comments at
all. Network effect is in full effect.


Re: free collaborative tools for low BW and losy connections

2020-03-30 Thread Michael Thomas



On 3/30/20 11:28 AM, Joe Greco wrote:

On Mon, Mar 30, 2020 at 12:18:37PM -0600, Keith Medcalf wrote:

The thing that mailing lists lack is a central directory of their
existence. The discovery problem is a pretty big one.

Where is this to be found for webforums?  I have never seen one.  Or do
you think Google is such a master index?  Can you please pose your
Google query that you think results in a comprehensive index of *all*
webforums?

Or is your comment nothing more than you noticing that NEITHER e-mail
lists NOR webforums have a master index, which is a rather useless
observation that would indicate that webforums have zero advantage
over mailing lists in this regard, so what is the point of the
whataboutism?

In the context of the discussion, I expect the point is that Usenet had
such functionality.

This is a significant feature advantage.  As are a bunch of other things
pointed out earlier today.



Even a steaming piece of crap software like Reddit has search 
facilities. They may be buggy, but they're there. Nothing I know of 
that's even remotely similar for email lists.


Mike



Re: free collaborative tools for low BW and losy connections

2020-03-30 Thread Joe Greco
On Mon, Mar 30, 2020 at 12:18:37PM -0600, Keith Medcalf wrote:
> >The thing that mailing lists lack is a central directory of their
> >existence. The discovery problem is a pretty big one.
> 
> Where is this to be found for webforums?  I have never seen one.  Or do 
> you think Google is such a master index?  Can you please pose your 
> Google query that you think results in a comprehensive index of *all* 
> webforums?
> 
> Or is your comment nothing more than you noticing that NEITHER e-mail 
> lists NOR webforums have a master index, which is a rather useless 
> observation that would indicate that webforums have zero advantage 
> over mailing lists in this regard, so what is the point of the 
> whataboutism?

In the context of the discussion, I expect the point is that Usenet had
such functionality.

This is a significant feature advantage.  As are a bunch of other things
pointed out earlier today.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"The strain of anti-intellectualism has been a constant thread winding its way
through our political and cultural life, nurtured by the false notion that
democracy means that 'my ignorance is just as good as your knowledge.'"-Asimov


Re: free collaborative tools for low BW and losy connections

2020-03-30 Thread Michael Thomas



On 3/30/20 11:18 AM, Keith Medcalf wrote:

On Monday, 30 March, 2020 11:19, Michael Thomas  wrote:


On 3/30/20 5:52 AM, Rich Kulawiec wrote:

On Mon, Mar 30, 2020 at 06:30:16AM -0500, Joe Greco wrote:

Actual text traffic has been slowly dying off for years as webforums
have matured and become a better choice of technology for nontechnical
end users on high speed Internet connections.

My view is that the move to web forums is a huge downgrade.  Mailing
lists are vastly superior.

[]
The thing that mailing lists lack is a central directory of their
existence. The discovery problem is a pretty big one.

Where is this to be found for webforums?  I have never seen one.  Or do you 
think Google is such a master index?  Can you please pose your Google query 
that you think results in a comprehensive index of *all* webforums?

Or is your comment nothing more than you noticing that NEITHER e-mail lists NOR 
webforums have a master index, which is a rather useless observation that would 
indicate that webforums have zero advantage over mailing lists in this regard, 
so what is the point of the whataboutism?


Usenet had them.

Mike



RE: free collaborative tools for low BW and losy connections

2020-03-30 Thread Keith Medcalf


On Monday, 30 March, 2020 11:19, Michael Thomas  wrote:

>On 3/30/20 5:52 AM, Rich Kulawiec wrote:
>> On Mon, Mar 30, 2020 at 06:30:16AM -0500, Joe Greco wrote:

>>> Actual text traffic has been slowly dying off for years as webforums
>>> have matured and become a better choice of technology for nontechnical
>>> end users on high speed Internet connections.

>> My view is that the move to web forums is a huge downgrade.  Mailing
>> lists are vastly superior.

>[]

>The thing that mailing lists lack is a central directory of their
>existence. The discovery problem is a pretty big one.

Where is this to be found for webforums?  I have never seen one.  Or do you 
think Google is such a master index?  Can you please pose your Google query 
that you think results in a comprehensive index of *all* webforums?

Or is your comment nothing more than you noticing that NEITHER e-mail lists NOR 
webforums have a master index, which is a rather useless observation that would 
indicate that webforums have zero advantage over mailing lists in this regard, 
so what is the point of the whataboutism?

--
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.





Re: free collaborative tools for low BW and losy connections

2020-03-30 Thread Michael Thomas



On 3/30/20 5:52 AM, Rich Kulawiec wrote:

On Mon, Mar 30, 2020 at 06:30:16AM -0500, Joe Greco wrote:

Actual text traffic has been slowly dying off for years as webforums
have matured and become a better choice of technology for nontechnical
end users on high speed Internet connections.

My view is that the move to web forums is a huge downgrade.  Mailing lists
are vastly superior.


[]

The thing that mailing lists lack is a central directory of their 
existence. The discovery problem is a pretty big one.


Mike



Re: free collaborative tools for low BW and losy connections

2020-03-30 Thread Rich Kulawiec
On Mon, Mar 30, 2020 at 06:30:16AM -0500, Joe Greco wrote:
> Actual text traffic has been slowly dying off for years as webforums
> have matured and become a better choice of technology for nontechnical
> end users on high speed Internet connections.

My view is that the move to web forums is a huge downgrade.  Mailing lists
are vastly superior.

Here's a (large) snip from http://www.firemountain.net/why-mailing-lists.html
(which I wrote) with a comment from me shoved in the middle.

[...]

1. Mailing lists require no special software: anyone with a sensible mail
client can participate. Thus they allow you to use *your* software with the
user interface of *your* choosing rather than being compelled to learn 687
different web forums with 687 different user interfaces, all of which range
from "merely bad" to "hideously bad".

2. Mailing lists are simple: learn a few basic rules of netiquette and a couple
of Internet-wide conventions, and one's good to go. Web forums are complicated
because all of them are different. In other words, participating in 20
different mailing lists is just about as easy as participating in one; but
participating in 20 different web forums is onerous.

3. They impose minimal security risk.

4. They impose minimal privacy risk.

Points 3 and 4 stand in stark contrast to the security and privacy risks
imposed on users of web forums and "social" media, especially the latter.

5. Mailing lists are bandwidth-friendly -- an increasing concern for people on
mobile devices and thus on expensive data plans. Web forums are
bandwidth-hungry.

6. Mailing lists interoperate. I can easily forward a message from one list to
another one. Or to a person. I can send a message to multiple lists. I can
forward a message from a person to this list. And so on. Try doing this with
web forum software A on host B with destinations web forum software X and Y on
hosts X1 and Y1. Good luck with that.

7. They're asynchronous: you don't have to interact in real time. You can
download messages when connected to the Internet, then read them and compose
responses when offline.

8. As a result of 7, they work reasonably well even in the presence of multiple
outages and severe congestion. Messages may be delayed, but once everything's
up again, they'll go through. Web-based forums simply don't work.

9. They're push, not pull, so new content just shows up. Web forums require
that you go fishing for it.

10. They scale beautifully.

11. Mailing lists -- when properly run -- are highly resistant to abuse.
Web forums, because of their complexity, are highly vulnerable to software
security issues as well as spam/phishing and other attacks.

[ I'm going to interject a comment here that's not on the web
page I'm quoting myself from.  There are, of course, counter examples
to this.  There is a very busy very well-known mailing list that
is an absolute cesspool of trivially-blockable spam.  Hence
the phrase "when properly run", because when that's done spam incidents
should be at the 1 per year level or less.  It's not that hard.]

12. They handle threading well. And provided users take a few seconds to edit
properly, they handle quoting well.

13. They're portable: lists can be rehosted (different domain, different host)
rather easily.

14. They can be freely interconverted -- that is, you can move a list hosted by
A using software B on operating system C to host X using software Y on
operating system Z.

15. They can be written to media and read from it. This is a very non-trivial
task with web forums.

16. The computing resources required to support them are minimal -- CPU, memory,
disk, bandwidth, etc.

17. Mailing lists can be uni- or bidirectionally gatewayed to Usenet. (The main
Python language mailing list is an example of this.) This can be highly useful.

18. They're easily archivable in a format that is simple and likely to be
readable long into the future. Mail archives from 10, 20, even 30 or more years
ago are still completely usable. And they take up very little space. (Numerous
tools exist for handling Unix "mbox" format: for example, "grepmail" is a
highly useful basic search tool. Most search engines include parsers for email,
and the task of ingesting mail archives into search engines is very well
understood.)

19. You can archive them locally...

20. ...which means you can search them locally with the software of *your*
choice. Including when you're offline. And provided you make backups, you'll
always have an archive -- even if the original goes away. Web forums don't
facilitate this. (Those of us who've been around for a while have seen a lot of
web-based discussions vanish forever because a host crashed or a domain expired
or a company went under or a company was acquired or someone made a mistake or
there was a security breach or a government confiscated it.)
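Points 18 through 20 can be demonstrated with nothing but the Python standard library: an mbox archive is a plain file, parseable and searchable locally with no server involved. The messages and query below are invented for illustration.

```python
# Local search over an mbox archive (the grepmail-style workflow from
# points 18-20), using only the stdlib 'mailbox' module.

import mailbox, os, tempfile

# Build a tiny two-message mbox file to search (contents are made up).
raw = (b"From rsk@example Mon Mar 30 06:30:16 2020\n"
       b"From: Rich <rsk@example>\n"
       b"Subject: mailing lists\n\n"
       b"Mailing lists are vastly superior.\n\n"
       b"From joe@example Mon Mar 30 06:31:00 2020\n"
       b"From: Joe <joe@example>\n"
       b"Subject: webforums\n\n"
       b"Webforums have matured.\n")

path = os.path.join(tempfile.mkdtemp(), "archive.mbox")
with open(path, "wb") as f:
    f.write(raw)

# Search bodies locally, offline, with whatever logic you like.
hits = [msg["Subject"] for msg in mailbox.mbox(path)
        if "superior" in msg.get_payload()]
print(hits)
```

The same file can be grepped, backed up, burned to media, or ingested into any indexer, which is exactly the portability argument being made.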

[...]

---rsk


Re: free collaborative tools for low BW and losy connections

2020-03-30 Thread colin johnston
> 
> Actual text traffic has been slowly dying off for years as webforums
> have matured and become a better choice of technology for nontechnical
> end users on high speed Internet connections.
> 

The Solaris groups worked great for info.
PSINet UK liked using the Sun kit for NNTP, even NNTP over UUCP as well.


Col



Re: free collaborative tools for low BW and losy connections

2020-03-30 Thread Joe Greco
On Sun, Mar 29, 2020 at 04:18:51PM -0700, Michael Thomas wrote:
> 
> On 3/29/20 1:46 PM, Joe Greco wrote:
> >On Sun, Mar 29, 2020 at 07:46:28PM +0100, Nick Hilliard wrote:
> >>Joe Greco wrote on 29/03/2020 15:56:
> >>
> >>The concept of flooding isn't problematic by itself.
> >>Flood often works fine until you attempt to scale it.  Then it breaks,
> >>just like Bjørn admitted. Flooding is inherently problematic at scale.
> >For... what, exactly?  General Usenet?  Perhaps, but mainly because you
> >do not have a mutual agreement on traffic levels and a bunch of other
> >factors.  Flooding works just fine within private hierarchies, and since
> >I thought this was a discussion of "free collaborative tools" rather than
> >"random newbie trying to masochistically keep up with a full backbone
> >Usenet feed", it definitely should work fine for a private hierarchy and
> >collaborative use.
> 
> AFAIK, Usenet didn't die because it wasn't scalable. It died because 
> people figured out how to make it a business model.

Not at all.  I can see why you say that, but it isn't the reality, any
more than commercial uses killed the Internet when it was opened up to
people who made it a business model.

The introduction of the DMCA ratcheted up the potential for enforcement
and penalties against end users doing Napster, bittorrent, illicit web
and FTP, or whatever your other favorite form of digital piracy might
happen to have been back in the '90's.

The CDA 230 protection for providers allowed Usenet to be served without
significant concern.  

So pirates had a safe model where they could distribute pirated traffic,
posting it somewhere "safe" and then it would be available everywhere,
and it was HARD to get it taken down.

This, along with legitimate binaries traffic increases, caused an 
explosion in traffic, which made Usenet increasingly impractical for 
ISP's to self-host.  The problem is that it scaled far too well as a 
binary traffic distribution system.  As this happened, most ISP's 
outsourced to Usenet service providers, and end users often picked up 
"premium" Usenet services from such providers directly as well.

I do not see the people who made it a business model as responsible
for the state of affairs.  Had commercial USP's not stepped up, Usenet
probably would have died off in the late 90's-early 2000's as ISP's
dropped support.  They (and I have to include myself as I run a Usenet
company) are arguably the ones who kept it going.  Demand was there.

The users who are dumping binaries on Usenet are helping to kill it.

Actual text traffic has been slowly dying off for years as webforums
have matured and become a better choice of technology for nontechnical
end users on high speed Internet connections.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"The strain of anti-intellectualism has been a constant thread winding its way
through our political and cultural life, nurtured by the false notion that
democracy means that 'my ignorance is just as good as your knowledge.'"-Asimov


Re: free collaborative tools for low BW and losy connections

2020-03-29 Thread Michael Thomas



On 3/29/20 1:46 PM, Joe Greco wrote:

On Sun, Mar 29, 2020 at 07:46:28PM +0100, Nick Hilliard wrote:

Joe Greco wrote on 29/03/2020 15:56:

The concept of flooding isn't problematic by itself.
Flood often works fine until you attempt to scale it.  Then it breaks,
just like Bjørn admitted. Flooding is inherently problematic at scale.

For... what, exactly?  General Usenet?  Perhaps, but mainly because you
do not have a mutual agreement on traffic levels and a bunch of other
factors.  Flooding works just fine within private hierarchies, and since
I thought this was a discussion of "free collaborative tools" rather than
"random newbie trying to masochistically keep up with a full backbone
Usenet feed", it definitely should work fine for a private hierarchy and
collaborative use.


AFAIK, Usenet didn't die because it wasn't scalable. It died because 
people figured out how to make it a business model.


Mike



Re: free collaborative tools for low BW and losy connections

2020-03-29 Thread Joe Greco
On Sun, Mar 29, 2020 at 07:46:28PM +0100, Nick Hilliard wrote:
> Joe Greco wrote on 29/03/2020 15:56:
> >On Sun, Mar 29, 2020 at 03:01:04PM +0100, Nick Hilliard wrote:
> >>because it uses flooding and can't guarantee reliable message
> >>distribution, particularly at higher traffic levels.
> >
> >That's so hideously wrong.  It's like claiming web forums don't
> >work because IP packet delivery isn't reliable.
> 
> Really, it's nothing like that.

Sure it is.  At a certain point you can get web forums to stop working
by DDoS.  You can't guarantee reliable interaction with a web site if
that happens.

> >Usenet message delivery at higher levels works just fine, except that
> >on the public backbone, it is generally implemented as "best effort"
> >rather than a concerted effort to deliver reliably.
> 
> If you can explain the bit of the protocol that guarantees that all 
> nodes have received all postings, then let's discuss it.

There isn't, just like there isn't a bit of the protocol that guarantees
that an IP packet is received by its intended recipient.  No magic.

It's perfectly possible to make sure that you are not backlogging to a
peer and to contact them to remediate if there is a problem.  When done 
at scale, this does actually work.  And unlike IP packet delivery, news
will happily backlog and recover from a server being down or whatever.

> >The concept of flooding isn't problematic by itself.
> 
> Flood often works fine until you attempt to scale it.  Then it breaks, 
> just like Bjørn admitted. Flooding is inherently problematic at scale.

For... what, exactly?  General Usenet?  Perhaps, but mainly because you
do not have a mutual agreement on traffic levels and a bunch of other
factors.  Flooding works just fine within private hierarchies, and since
I thought this was a discussion of "free collaborative tools" rather than
"random newbie trying to masochistically keep up with a full backbone 
Usenet feed", it definitely should work fine for a private hierarchy and
collaborative use.

> > If you wanted to
> >implement a collaborative system, you could easily run a private
> >hierarchy and run a separate feed for it, which you could then monitor
> >for backlogs or issues.  You do not need to dump your local traffic on
> >the public Usenet.  This can happily coexist alongside public traffic
> >on your server.  It is easy to make it 100% reliable if that is a goal.
> 
> For sure, you can operate mostly reliable self-contained systems with 
> limited distribution.  We're all in agreement about this.

Okay, good. 

> >>The fact that it ended up having to implement TAKETHIS is only one
> >>indication of what a truly awful protocol it is.
> >
> >No, the fact that it ended up having to implement TAKETHIS is a nod to
> >the problem of RTT.
> 
> TAKETHIS was necessary to keep things running because of the dual 
> problem of RTT and lack of pipelining.  Taken together, these two 
> problems made it impossible to optimise incoming feeds, because of ... 
> well, flooding, which meant that even if you attempted an IHAVE, by the 
> time you delivered the article, some other feed might already have 
> delivered it.  TAKETHIS managed to sweep these problems under the 
> carpet, but it's a horrible, awful protocol hack.

It's basically cheap pipelining.  If you want to call pipelining in
general a horrible, awful protocol hack, then that's probably got
some validity.
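A rough timing model shows why streamed TAKETHIS (RFC 4644) beats stop-and-wait IHAVE on a long-RTT link, which is the "cheap pipelining" point above. The parameters are illustrative, and real transfers overlap in more complicated ways than this.

```python
# Stop-and-wait IHAVE: offer article, await 335, send it, await 235,
# so roughly two round trips per article before the next offer.
# Streamed TAKETHIS: offers and articles flow back-to-back, with
# responses returning asynchronously, so roughly one RTT total.

def ihave_time(n: int, rtt: float, xfer: float) -> float:
    """Seconds to move n articles: ~2 RTTs plus transfer time each."""
    return n * (2 * rtt + xfer)

def takethis_time(n: int, rtt: float, xfer: float) -> float:
    """Seconds with pipelined TAKETHIS: ~1 RTT plus back-to-back transfers."""
    return rtt + n * xfer

n, rtt, xfer = 1000, 0.1, 0.01   # 1000 articles, 100 ms RTT, 10 ms to send each
print(f"IHAVE:    {ihave_time(n, rtt, xfer):7.1f} s")
print(f"TAKETHIS: {takethis_time(n, rtt, xfer):7.1f} s")
```

On this toy link the stop-and-wait feed takes over twenty times longer, and the gap widens as RTT grows, which is why flooding peers over long-haul circuits pushed everyone toward streaming.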

> >It did and has.  The large scale binaries sites are still doing a
> >great job of propagating binaries with very close to 100% reliability.
> 
> which is mostly because there are so few large binary sites these days, 
> i.e. limited distribution model.

No, there are so few large binary sites these days because of consolidation
and buyouts.

> >I was there.
> 
> So was I, and probably so were lots of other people on nanog-l.  We all 
> played our part trying to keep the thing hanging together.

I'd say most of the folks here were out of this fifteen to twenty years
ago, well before the explosion of binaries in the early 2000's.

> >I'm the maintainer of Diablo.  It's fair to say I had a
> >large influence on this issue as it was Diablo's distributed backend
> >capability that really instigated retention competition, and a number
> >of optimizations that I made helped make it practical.
> 
> Diablo was great - I used it for years after INN-related head-bleeding. 
> Afterwards, Typhoon improved things even more.
> 
> >The problem for smaller sites is simply the immense traffic volume.
> >If you want to carry binaries, you need double digits Gbps.  If you
> >filter them out, the load is actually quite trivial.
> 
> Right, so you've put your finger on the other major problem relating to 
> flooding which isn't the distribution synchronisation / optimisation 
> problem: all sites get all posts for all groups which they're configured 
> for.  This is a profound waste of resources + it doesn't scale in any 
> meaningful way.

So if you don't like that everyone gets everything they are configured to
get, you are suggesting that they... what, exactly?  Shouldn't get everything
they want?

Re: free collaborative tools for low BW and losy connections

2020-03-29 Thread Joe Greco
On Sun, Mar 29, 2020 at 03:01:04PM +0100, Nick Hilliard wrote:
> Bjørn Mork wrote on 29/03/2020 13:44:
> >How is nntp non-scalable?
> 
> because it uses flooding and can't guarantee reliable message 
> distribution, particularly at higher traffic levels.

That's so hideously wrong.  It's like claiming web forums don't
work because IP packet delivery isn't reliable.

Usenet message delivery at higher levels works just fine, except that
on the public backbone, it is generally implemented as "best effort"
rather than a concerted effort to deliver reliably.

The concept of flooding isn't problematic by itself.  If you wanted to
implement a collaborative system, you could easily run a private
hierarchy and run a separate feed for it, which you could then monitor
for backlogs or issues.  You do not need to dump your local traffic on
the public Usenet.  This can happily coexist alongside public traffic
on your server.  It is easy to make it 100% reliable if that is a goal.
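As a toy illustration of how such a private hierarchy stays consistent, here is a minimal flood-fill sketch with a per-node history set doing the duplicate suppression; the four-node topology and the article are invented for the example.

```python
# Toy flood-fill sketch of Usenet-style propagation inside a small private
# hierarchy: each node keeps a history of Message-IDs it has accepted,
# takes an article at most once, and re-offers it to every configured peer.
# The four-node topology and the article are invented for the example.

class Node:
    def __init__(self, name):
        self.name = name
        self.peers = []
        self.history = set()   # Message-IDs already accepted (dedup)
        self.spool = []        # articles stored locally

    def offer(self, msg_id, body):
        if msg_id in self.history:   # "already have it" -- refuse the offer
            return
        self.history.add(msg_id)
        self.spool.append(body)
        for peer in self.peers:      # flood onward
            peer.offer(msg_id, body)

a, b, c, d = (Node(x) for x in "abcd")
a.peers = [b, c]; b.peers = [a, d]; c.peers = [a, d]; d.peers = [b, c]

a.offer("<post1@example>", "hello from node a")
print([len(n.spool) for n in (a, b, c, d)])   # every node stored it exactly once
```

Note that the history check is what keeps flooding from looping even though every node re-offers to all of its peers.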

> The fact that it ended up having to implement TAKETHIS is only one 
> indication of what a truly awful protocol it is.

No, the fact that it ended up having to implement TAKETHIS is a nod to
the problem of RTT.

> Once again in simpler terms:
> 
> > How is nntp non-scalable?
> [...]
> > Binaries broke USENET.  That has little to do with nntp.
> 
> If it had been scalable, it could have scaled to handling the binary groups.

It did and has.  The large scale binaries sites are still doing a 
great job of propagating binaries with very close to 100% reliability.

I was there.  I'm the maintainer of Diablo.  It's fair to say I had a 
large influence on this issue as it was Diablo's distributed backend
capability that really instigated retention competition, and a number
of optimizations that I made helped make it practical.

The problem for smaller sites is simply the immense traffic volume. 
If you want to carry binaries, you need double digits Gbps.  If you
filter them out, the load is actually quite trivial.
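The "filter them out" step is typically just newsgroup pattern matching in the feed configuration. Below is a rough sketch using fnmatch as a stand-in for wildmat, with hypothetical patterns and group names; real feed configs differ in syntax but follow the same last-match-wins idea.

```python
# Sketch of feed filtering by newsgroup pattern, the mechanism that lets a
# small site skip the binary firehose.  Uses fnmatch as a stand-in for
# wildmat; patterns follow the traditional last-match-wins convention and
# the patterns/group names are just examples.
from fnmatch import fnmatch

FEED_PATTERNS = [
    ("*", True),                 # accept everything by default...
    ("alt.binaries.*", False),   # ...then refuse binary hierarchies
    ("*.binaries.*", False),
]

def wanted(group):
    accept = False
    for pattern, keep in FEED_PATTERNS:   # last matching pattern wins
        if fnmatch(group, pattern):
            accept = keep
    return accept

for g in ("comp.lang.c", "alt.binaries.pictures", "rec.radio.amateur"):
    print(g, "->", "carry" if wanted(g) else "drop")
```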

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"The strain of anti-intellectualism has been a constant thread winding its way
through our political and cultural life, nurtured by the false notion that
democracy means that 'my ignorance is just as good as your knowledge.'"-Asimov


Re: free collaborative tools for low BW and losy connections

2020-03-29 Thread Joe Greco
On Sun, Mar 29, 2020 at 10:31:50PM +0100, Nick Hilliard wrote:
> Joe Greco wrote on 29/03/2020 21:46:
> >On Sun, Mar 29, 2020 at 07:46:28PM +0100, Nick Hilliard wrote:
> >>>That's so hideously wrong.  It's like claiming web forums don't
> >>>work because IP packet delivery isn't reliable.
> >>
> >>Really, it's nothing like that.
> >
> >Sure it is.  At a certain point you can get web forums to stop working
> >by DDoS.  You can't guarantee reliable interaction with a web site if
> >that happens.
> 
> this is failure caused by external agency, not failure caused by 
> inherent protocol limitations.

Yet we're discussing "low BW and losy(sic) connections".  Which would be
failure of IP to be magically available with zero packet loss and at high
speeds.  There are lots of people for whom low speed DSL, dialup, WISP,
4G, GPRS, satellite, or actually nothing at all are available as the 
Internet options.

> >>>Usenet message delivery at higher levels works just fine, except that
> >>>on the public backbone, it is generally implemented as "best effort"
> >>>rather than a concerted effort to deliver reliably.
> >>
> >>If you can explain the bit of the protocol that guarantees that all
> >>nodes have received all postings, then let's discuss it.
> >
> >There isn't, just like there isn't a bit of the protocol that guarantees
> >that an IP packet is received by its intended recipient.  No magic.
> 
> tcp vs udp.

IP vs ... what exactly?

> >>Flood often works fine until you attempt to scale it.  Then it breaks,
> >>just like Bjørn admitted. Flooding is inherently problematic at scale.
> >
> >For... what, exactly?  General Usenet?
> 
> yes, this is what we're talking about.  It couldn't scale to general 
> usenet levels.

The scale issue wasn't flooding, it was bandwidth and storage.  It's 
actually not problematic to do history lookups (the key mechanism in 
what you're calling "flooding") because even at a hundred thousand per 
second, that's well within the speed of CPU and RAM.  Oh, well, yes, 
if you're trying to do it on HDD, that won't work anymore, and quite 
possibly SSD will reach limits.  But that's a design issue, not a scale
problem.

Most of Usenet's so-called "scale" problems had to do with disk I/O and
network speeds, not flood fill.
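The claim about history lookups is easy to sanity-check with a toy benchmark against a plain in-memory set; the sizes are arbitrary and the measured rate is machine-dependent, but it lands orders of magnitude above any realistic feed volume.

```python
# Toy sanity check that message-ID history lookups are a CPU/RAM problem,
# not a bottleneck: time set-membership tests against an in-memory history.
# Sizes are arbitrary and results are machine-dependent; the point is only
# that the lookup rate dwarfs any realistic feed volume.
import time

history = {f"<{i}@news.example>" for i in range(200_000)}  # seen Message-IDs

N = 100_000
start = time.perf_counter()
hits = sum(f"<{i}@news.example>" in history for i in range(N))
elapsed = time.perf_counter() - start

print(f"{N} lookups in {elapsed*1000:.1f} ms (~{N/elapsed:,.0f}/s), {hits} hits")
```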

> >Perhaps, but mainly because you
> >do not have a mutual agreement on traffic levels and a bunch of other
> >factors.  Flooding works just fine within private hierarchies and since
> >I thought this was a discussion of "free collaborative tools" rather than
> >"random newbie trying to masochistically keep up with a full backbone
> >Usenet feed", it definitely should work fine for a private hierarchy and
> >collaborative use.
> 
> Then we're in violent agreement on this point.  Great!

Okay, fine, but it's kinda the same thing as "last week some noob got a
1990's era book on setting up a webhost, bought a T1, and was flummoxed
at why his service sucked."

The Usenet "backbone" with binaries isn't going to be viable without a
real large capex investment and significant ongoing opex.  This isn't a
failure in the technology.

> >>delivered it.  TAKETHIS managed to sweep these problems under the
> >>carpet, but it's a horrible, awful protocol hack.
> >
> >It's basically cheap pipelining.
> 
> no, TAKETHIS is unrestrained flooding, not cheap pipelining.

It is definitely not unrestrained.  Sorry, been there inside the code.
There's a limited window out of necessity, because you get interesting
behaviours if a peer is held off too long.

> >If you want to call pipelining in
> >general a horrible, awful protocol hack, then that's probably got
> >some validity.
> 
> you could characterise pipelining as a necessary reaction to the fact 
> that the speed of light is so damned slow.

Sure.

> >>which is mostly because there are so few large binary sites these days,
> >>i.e. limited distribution model.
> >
> >No, there are so few large binary sites these days because of consolidation
> >and buyouts.
> 
> 20 years ago, lots of places hosted binaries.  They stopped because it 
> was pointless and wasteful, not because of consolidation.

I thought they stopped it because some of us offered them a better model 
that reduced their expenses and eliminated the need to have someone who was 
an expert in an esoteric '80s-era service, while also investing in all the
capex/opex. 

Lots of companies sold wholesale Usenet, usually just by offering access to 
a remote service.  As the amount of Usenet content exploded, the increasing 
cost of storage for a feature that a declining number of users cared about
stopped making sense.

One of my companies specialized in shipping dreader boxes to ISP's and 
letting them backend off remote spools, usually for a fairly modest cost 
(high three, low four figures?).  This let them have control over the service
that was unlike anything other service providers were doing.  Custom groups,
custom integration with their auth/billing, etc., required about a megabit
of 

Re: free collaborative tools for low BW and losy connections

2020-03-29 Thread Nick Hilliard

Joe Greco wrote on 29/03/2020 21:46:

On Sun, Mar 29, 2020 at 07:46:28PM +0100, Nick Hilliard wrote:

That's so hideously wrong.  It's like claiming web forums don't
work because IP packet delivery isn't reliable.


Really, it's nothing like that.


Sure it is.  At a certain point you can get web forums to stop working
by DDoS.  You can't guarantee reliable interaction with a web site if
that happens.


this is failure caused by external agency, not failure caused by 
inherent protocol limitations.



Usenet message delivery at higher levels works just fine, except that
on the public backbone, it is generally implemented as "best effort"
rather than a concerted effort to deliver reliably.


If you can explain the bit of the protocol that guarantees that all
nodes have received all postings, then let's discuss it.


There isn't, just like there isn't a bit of the protocol that guarantees
that an IP packet is received by its intended recipient.  No magic.


tcp vs udp.


Flood often works fine until you attempt to scale it.  Then it breaks,
just like Bjørn admitted. Flooding is inherently problematic at scale.


For... what, exactly?  General Usenet?


yes, this is what we're talking about.  It couldn't scale to general 
usenet levels.



Perhaps, but mainly because you
do not have a mutual agreement on traffic levels and a bunch of other
factors.  Flooding works just fine within private hierarchies and since
I thought this was a discussion of "free collaborative tools" rather than
"random newbie trying to masochistically keep up with a full backbone
Usenet feed", it definitely should work fine for a private hierarchy and
collaborative use.


Then we're in violent agreement on this point.  Great!


delivered it.  TAKETHIS managed to sweep these problems under the
carpet, but it's a horrible, awful protocol hack.


It's basically cheap pipelining.


no, TAKETHIS is unrestrained flooding, not cheap pipelining.


If you want to call pipelining in
general a horrible, awful protocol hack, then that's probably got
some validity.


you could characterise pipelining as a necessary reaction to the fact 
that the speed of light is so damned slow.



which is mostly because there are so few large binary sites these days,
i.e. limited distribution model.


No, there are so few large binary sites these days because of consolidation
and buyouts.


20 years ago, lots of places hosted binaries.  They stopped because it 
was pointless and wasteful, not because of consolidation.



Right, so you've put your finger on the other major problem relating to
flooding which isn't the distribution synchronisation / optimisation
problem: all sites get all posts for all groups which they're configured
for.  This is a profound waste of resources + it doesn't scale in any
meaningful way.


So if you don't like that everyone gets everything they are configured to
get, you are suggesting that they... what, exactly?  Shouldn't get everything
they want?


The default distribution model of the 1990s was *.  These days, only a 
tiny handful of sites handle everything, because the overheads of 
flooding are so awful.  To make it clear, this awfulness is resource 
related, and the knock-on effect is that the resource cost is untenable.


Usenet, like other systems, can be reduced to an engineering / economics 
management problem.  If the economics of making it operate correctly
don't work out, then it's non-viable.



None of this changes that it's a robust, mature protocol that was originally
designed for handling non-binaries and is actually pretty good in that role.
Having the content delivered to each site means that there is no dependence
on long-distance interactive IP connections and that each participating site
can keep the content for however long they deem useful.  Usenet keeps hummin'
along under conditions that would break more modern things like web forums.


It's a complete crock of a protocol with robust and mature 
implementations.  Diablo is one and for that, we have people like Matt 
and you to thank.


Nick


Re: free collaborative tools for low BW and losy connections

2020-03-29 Thread Nick Hilliard

Joe Greco wrote on 29/03/2020 15:56:

On Sun, Mar 29, 2020 at 03:01:04PM +0100, Nick Hilliard wrote:

because it uses flooding and can't guarantee reliable message
distribution, particularly at higher traffic levels.


That's so hideously wrong.  It's like claiming web forums don't
work because IP packet delivery isn't reliable.


Really, it's nothing like that.


Usenet message delivery at higher levels works just fine, except that
on the public backbone, it is generally implemented as "best effort"
rather than a concerted effort to deliver reliably.


If you can explain the bit of the protocol that guarantees that all 
nodes have received all postings, then let's discuss it.



The concept of flooding isn't problematic by itself.


Flood often works fine until you attempt to scale it.  Then it breaks, 
just like Bjørn admitted. Flooding is inherently problematic at scale.



 If you wanted to
implement a collaborative system, you could easily run a private
hierarchy and run a separate feed for it, which you could then monitor
for backlogs or issues.  You do not need to dump your local traffic on
the public Usenet.  This can happily coexist alongside public traffic
on your server.  It is easy to make it 100% reliable if that is a goal.


For sure, you can operate mostly reliable self-contained systems with 
limited distribution.  We're all in agreement about this.



The fact that it ended up having to implement TAKETHIS is only one
indication of what a truly awful protocol it is.


No, the fact that it ended up having to implement TAKETHIS is a nod to
the problem of RTT.


TAKETHIS was necessary to keep things running because of the dual 
problem of RTT and lack of pipelining.  Taken together, these two 
problems made it impossible to optimise incoming feeds, because of ... 
well, flooding, which meant that even if you attempted an IHAVE, by the 
time you delivered the article, some other feed might already have 
delivered it.  TAKETHIS managed to sweep these problems under the 
carpet, but it's a horrible, awful protocol hack.



It did and has.  The large scale binaries sites are still doing a
great job of propagating binaries with very close to 100% reliability.


which is mostly because there are so few large binary sites these days, 
i.e. limited distribution model.



I was there.


So was I, and probably so were lots of other people on nanog-l.  We all 
played our part trying to keep the thing hanging together.



I'm the maintainer of Diablo.  It's fair to say I had a
large influence on this issue as it was Diablo's distributed backend
capability that really instigated retention competition, and a number
of optimizations that I made helped make it practical.


Diablo was great - I used it for years after INN-related head-bleeding. 
Afterwards, Typhoon improved things even more.



The problem for smaller sites is simply the immense traffic volume.
If you want to carry binaries, you need double digits Gbps.  If you
filter them out, the load is actually quite trivial.


Right, so you've put your finger on the other major problem relating to 
flooding which isn't the distribution synchronisation / optimisation 
problem: all sites get all posts for all groups which they're configured 
for.  This is a profound waste of resources + it doesn't scale in any 
meaningful way.


Nick


Re: free collaborative tools for low BW and losy connections

2020-03-29 Thread Nick Hilliard

Bjørn Mork wrote on 29/03/2020 13:44:

How is nntp non-scalable?


because it uses flooding and can't guarantee reliable message 
distribution, particularly at higher traffic levels.


The fact that it ended up having to implement TAKETHIS is only one 
indication of what a truly awful protocol it is.


Once again in simpler terms:

> How is nntp non-scalable?
[...]
> Binaries broke USENET.  That has little to do with nntp.

If it had been scalable, it could have scaled to handling the binary groups.

Nick


Re: free collaborative tools for low BW and losy connections

2020-03-29 Thread Bjørn Mork
Nick Hilliard  writes:

> nntp is a non-scalable protocol which broke under its own
> weight.

How is nntp non-scalable?  It allows an infinite number of servers
connected in a tiered network, where you only have to connect to a few
other peers and carry whatever part of the traffic you want.

Binaries broke USENET.  That has little to do with nntp.

nntp is still working just fine and still carrying a few discussion
groups here and there.  And you have a really nice mailling list gateway
at news.gmane.io (which recently replaced gmane.org - see
https://lars.ingebrigtsen.no/2020/01/15/news-gmane-org-is-now-news-gmane-io/
for full story)


Bjørn


Re: free collaborative tools for low BW and losy connections

2020-03-29 Thread Rich Kulawiec
On Wed, Mar 25, 2020 at 05:27:41PM +, Nick Hilliard wrote:
> nntp is a non-scalable protocol which broke under its own weight. Threaded
> news-readers are a great way of catching up with large mailing lists if
> you're prepared to put in the effort to create a bidirectional gateway.  But
> that's really a statement that mail readers are usually terrible at handling
> large threads rather than a statement about nntp as a useful media delivery
> protocol.

Some mail readers are terrible at that: mutt isn't.

And one of the nice things about trn (and I believe slrn, although
that's an educated guess, I haven't checked) is that it can save
Usenet news articles in Unix mbox format, which means that you can
read them with mutt as well.  I have trn set up to run via a cron job
that executes a script that grabs the appropriate set of newsgroups,
spam-filters them, saves what's left to a per-newsgroup mbox file that
I can read just like I read this list.

Similarly, rss2email saves RSS feeds in Unix mbox format.  And one of
the *very* nice things about coercing everything into mbox format is
that myriad tools exist for sorting, searching, indexing, etc.
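The mbox-coercion trick is easy to sketch with Python's standard mailbox module; the path and message contents below are made up for the example.

```python
# Minimal sketch of the "coerce everything into mbox" idea using Python's
# standard mailbox module: append a message, then read it back with the
# same tooling any mbox-aware reader (mutt included) could use.  The path
# and message contents are made up for the example.
import mailbox, os, tempfile
from email.message import EmailMessage

path = os.path.join(tempfile.mkdtemp(), "comp.lang.c.mbox")

msg = EmailMessage()
msg["From"] = "poster@example.org"
msg["Subject"] = "example article"
msg.set_content("body text")

box = mailbox.mbox(path)   # creates the file if it doesn't exist
box.add(msg)               # same call whether the source was NNTP, RSS, or a list
box.flush()

print([m["Subject"] for m in mailbox.mbox(path)])
```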

---rsk


Re: free collaborative tools for low BW and losy connections

2020-03-25 Thread Grant Taylor via NANOG

On 3/25/20 11:27 AM, Nick Hilliard wrote:

nntp is a non-scalable protocol which broke under its own weight.


That statement surprises me.  But I'm WAY late to the NNTP / Usenet game.

Threaded news-readers are a great way of catching up with large mailing 
lists if you're prepared to put in the effort to create a bidirectional 
gateway.  But that's really a statement that mail readers are usually 
terrible at handling large threads rather than a statement about nntp as 
a useful media delivery protocol.


Especially when most of the news readers that I use or hear others talk 
about using are primarily email clients that also happen to be news 
clients.  As such, it's the same threading code.




--
Grant. . . .
unix || die





Re: free collaborative tools for low BW and losy connections

2020-03-25 Thread Grant Taylor via NANOG


On 3/25/20 3:47 PM, Randy Bush wrote:

some of us still do uucp, over tcp and over pots.


My preference is to do UUCP over SSH (STDIO) over TCP/IP.  IMHO the SSH 
adds security (encryption and more friendly authentication (keys / certs 
/ Kerberos)) and reduces the number of ports that need to be exposed to 
the world / allowed through the network.



archaic, but still the right tool for some tasks.

Yep.  Though I think they are few and far between.




--
Grant. . . .
unix || die





Re: free collaborative tools for low BW and losy connections

2020-03-25 Thread Randy Bush
some of us still do uucp, over tcp and over pots.  archaic, but still
the right tool for some tasks.

randy


Re: free collaborative tools for low BW and losy connections

2020-03-25 Thread Rich Kulawiec
On Wed, Mar 25, 2020 at 09:59:53AM -0600, Grant Taylor via NANOG wrote:
> Something that might make you groan even more than NNTP is UUCP.  UUCP
> doesn't even have the system-to-system (real time) requirement that NNTP
> has.  It's quite possible to copy UUCP "Bag" files to removable media and
> use sneaker net to transfer things. 

I was remiss not to mention this as well.  *Absolutely* UUCP still has
its use cases, sneakernetting data among them.  It's been a long time
since "Never underestimate the bandwidth of a station wagon full of tapes"
(Dr. Warren Jackson, Director, UTCS) but it still holds true for certain
values of (transport container, storage medium).

---rsk


Re: free collaborative tools for low BW and losy connections

2020-03-25 Thread Scott Weeks




Thanks, my facepalm moment of the day (so far; it's 
only 7:30am here) is...

Use tools from the past when the connections everywhere
were lossy and slow.  They already mentioned RT.  I'll
mention that and NNTP/UUCP/etc.

scott



Re: free collaborative tools for low BW and losy connections

2020-03-25 Thread Nick Hilliard

Paul Ebersman wrote on 25/03/2020 16:59:
And scary as it sounds, UUCP over SLIP/PPP worked remarkably 
robustly.

uucp is a batch-oriented protocol so it's pretty decent for situations 
where there's no permanent connectivity, but uncompelling otherwise.


nntp is a non-scalable protocol which broke under its own weight. 
Threaded news-readers are a great way of catching up with large mailing 
lists if you're prepared to put in the effort to create a bidirectional 
gateway.  But that's really a statement that mail readers are usually 
terrible at handling large threads rather than a statement about nntp as 
a useful media delivery protocol.


Nick


Re: free collaborative tools for low BW and losy connections

2020-03-25 Thread Paul Ebersman
woody> UUCP kicks ass.

And scary as it sounds, UUCP over SLIP/PPP worked remarkably
robustly. When system/network resources are skinny or scarce, you get
really good at keeping things working.

:)


Re: free collaborative tools for low BW and losy connections

2020-03-25 Thread Bill Woodcock


> On Mar 25, 2020, at 4:59 PM, Grant Taylor via NANOG  wrote:
> UUCP doesn't even have the system-to-system (real time) requirement that NNTP 
> has.

Brian Buhrow and I replaced a completely failing 
database-synchronization-over-Microsoft-Exchange system with UUCP across 
American President Lines and Neptune Orient Lines fleets, back in the mid-90s.  
UUCP worked perfectly (Exchange connections were failing ~90% of the time), was 
much faster (average sync time on each change reduced from about three minutes 
to a few seconds), and saved them several million dollars a year in satellite 
bandwidth costs.

UUCP kicks ass.

-Bill





Re: free collaborative tools for low BW and losy connections

2020-03-25 Thread John Levine
In article <9f22cde2-d0a2-1ea1-89e9-ae65c4d47...@tnetconsulting.net> you write:
>I hadn't considered having a per system NNTP server.  I sort of like the 
>idea.  I think it could emulate the functionality that I used to get out 
>of Lotus Notes & Domino with local database replication.  I rarely 
>needed the offline functionality, but having it was nice.  I also found 
>that the local database made searches a lot faster than waiting on them 
>to traverse the network.
>
>> Also note that bi- or unidirectional NNTP/SMTP gateways are useful.

I've been reading nanog and many other lists on my own NNTP server via
a straightforward mail gateway for about a decade.  Works great.  I'm
sending this message as a mail reply to a news article.

-- 
Regards,
John Levine, jo...@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly


Re: free collaborative tools for low BW and losy connections

2020-03-25 Thread Grant Taylor via NANOG
On 3/25/20 5:39 AM, Rich Kulawiec wrote:

One of the tools that we've had for a very long time but which is
often overlooked is NNTP. It's an excellent way to move information 
around under exactly these circumstances: low bandwidth, lossy 
connections -- and intermittent connectivity, limited resources, 
etc.


I largely agree.  Though NNTP does depend on system-to-system TCP/IP 
connectivity.  I say system-to-system instead of end-to-end because 
there can be intermediate systems between the end systems.  NNTP's
store-and-forward networking is quite capable.


Something that might make you groan even more than NNTP is UUCP.  UUCP 
doesn't even have the system-to-system (real time) requirement that NNTP 
has.  It's quite possible to copy UUCP "Bag" files to removable media 
and use sneaker net to transfer things.  I've heard tell of people 
configuring UUCP on systems at the office, their notebook that they take 
with them, and systems at home.  The notebook (push or poll) connects to 
the systems that it can currently communicate with and transfers files.
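The "bag" idea can be sketched in a few lines: batch queued jobs into one file that rides on removable media and gets replayed offline at the destination. The format below is invented for illustration; real UUCP spool/batch formats differ.

```python
# Toy "bag file" sketch of the UUCP batching idea: serialize queued jobs
# into a single file that can travel by sneaker net, then replay them at
# the destination with no live connection.  The JSON-lines format here is
# invented for illustration; real UUCP spool/batch formats differ.
import json, os, tempfile

def pack_bag(jobs, path):
    """Serialize queued jobs, one JSON record per line."""
    with open(path, "w") as f:
        for job in jobs:
            f.write(json.dumps(job) + "\n")

def unpack_bag(path):
    """Replay a bag at the destination -- no live connection needed."""
    with open(path) as f:
        return [json.loads(line) for line in f]

bag = os.path.join(tempfile.mkdtemp(), "outbound.bag")
jobs = [
    {"kind": "mail", "to": "paul@example.org", "data": "hello"},
    {"kind": "news", "group": "za.test", "data": "article body"},
]
pack_bag(jobs, bag)              # copy this file to tape/USB/removable media...
print(unpack_bag(bag) == jobs)   # ...and unpack it at the far end
```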


UUCP can also be used to transfer files, news (NNTP: public (Usenet) and 
/ or private), email, and remote command execution.


Nearly any laptop/desktop has enough computing capacity to run an 
NNTP server


Agreed.  I dare say that anything that has a TCP/IP stack is probably 
capable of running an NNTP server (and / or UUCP).


depending on the quantity of information being moved 
around, it's not at all out of the question to do exactly that, so 
that every laptop/desktop (and thus every person) has their own copy 
right there, thus enabling them to continue using it in the absence 
of any connectivity.


I hadn't considered having a per system NNTP server.  I sort of like the 
idea.  I think it could emulate the functionality that I used to get out 
of Lotus Notes & Domino with local database replication.  I rarely 
needed the offline functionality, but having it was nice.  I also found 
that the local database made searches a lot faster than waiting on them 
to traverse the network.



Also note that bi- or unidirectional NNTP/SMTP gateways are useful.


Not only that, but given the inherent one-to-many nature of NNTP, you 
can probably get away with transmitting that message once instead of 
(potentially) once per recipient.  (Yes, I know that SMTP is supposed to 
optimize this, but I've seen times when it doesn't work properly.)
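The one-to-many saving is simple arithmetic; the article size and reader count below are assumed round numbers, and the SMTP figure is the worst case where recipient batching fails entirely.

```python
# Back-of-envelope arithmetic for NNTP's one-to-many advantage: one copy
# crosses the feed link regardless of local reader count, whereas a mail
# exploder that fails to batch recipients can send one copy each.  The
# article size and reader count are assumed round numbers.
ARTICLE_KB = 20
READERS = 500

nntp_kb = ARTICLE_KB                  # single transfer over the feed link
smtp_worst_kb = ARTICLE_KB * READERS  # one copy per recipient, worst case

print(f"NNTP feed link: {nntp_kb} kB; SMTP worst case: {smtp_worst_kb} kB")
```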


It's not fancy, but anybody who demands fancy at a time like this is 
an idiot.  It *works*, it gets the basics done, and thanks to decades 
of development/experience, it holds up well under duress.


I completely agree with your statement about NNTP.  I do think that UUCP 
probably holds up even better.  UUCP bag files make it easy to bridge 
communications across TCP/IP gaps.  You could probably even get NNTP and 
/ or UUCP to work across packet radio.  }:-)




--
Grant. . . .
unix || die





Re: free collaborative tools for low BW and losy connections

2020-03-25 Thread Rich Kulawiec


One of the tools that we've had for a very long time but which is
often overlooked is NNTP. It's an excellent way to move information
around under exactly these circumstances: low bandwidth, lossy
connections -- and intermittent connectivity, limited resources, etc.

Nearly any laptop/desktop has enough computing capacity to run an
NNTP server and depending on the quantity of information being moved
around, it's not at all out of the question to do exactly that, so that
every laptop/desktop (and thus every person) has their own copy right
there, thus enabling them to continue using it in the absence of any
connectivity.

Also note that bi- or unidirectional NNTP/SMTP gateways are useful.

It's not fancy, but anybody who demands fancy at a time like this
is an idiot.  It *works*, it gets the basics done, and thanks to
decades of development/experience, it holds up well under duress.

---rsk


Re: free collaborative tools for low BW and losy connections

2020-03-24 Thread Miles Fidelman
It would be a lot MORE relevant if there were some actual tools listed & 
discussed!


Miles Fidelman

On 3/24/20 1:48 PM, Scott Weeks wrote:



Hello,

I was watching SDNOG and saw the below conversation recently.
Here is the relevant part:

"I think this is the concern of all of us, how to work from home and to
keep same productivity level,, we need collaborative tools to engaging
the team.  I am still searching for tool and apps that are free, 
tolerance

the poor internet speed."

I know of some free tools and all, but am not aware of the tolerance
they may have to slow speed and (likely) poor internet connections.

I was wondering if anyone here has experience with tools that'd work,
so I could suggest something to them.

I don't know if everyone's aware of what they have been going through
in Sudan (both of them), but it has been a rough life there recently.

Thanks!
scott





 Original Message 
Subject: [sdnog] How to work from home

Hi all
Hope you are all safe wherever you are,

Regarding the current situation around the world, and as we are all
advised/forced to start working from home, which is not common here
in our community (and I know some bosses are not convinced unless
they see you at your desk :D),

my question is: for simple offices with no great infrastructure,
just an internet connection to their edge, how can they work
from home?  Are there any free tools/ways they can use?  What are
the options, taking the security concerns into account?

What is your advice for achieving that in a proper way?  And for
those who managed to work from home, how did you do it?
Please share your experience ^_^


And how can we as the "sdnog community" help with that, for the old
fashioned bosses :D?

--

From: "aseromeru...@hotmail.com" 

I think this is the concern of all of us: how to work from home
and keep the same productivity level.  We need collaborative tools
to keep the team engaged.
I am still searching for tools and apps that are free and tolerant
of the poor internet speed.

Any suggestion


--
In theory, there is no difference between theory and practice.
In practice, there is.   Yogi Berra

Theory is when you know everything but nothing works.
Practice is when everything works but no one knows why.
In our lab, theory and practice are combined:
nothing works and no one knows why.  ... unknown