>     On 22/12/2020 21:56, Mark E. Jeftovic via mailop <mailop@mailop.org> 
> wrote:
> 
> 
>     There are clear, existing, unambiguous laws against this, and nobody 
> questions any entity who not only declines to facilitate this but actively 
> purges it from their platforms.
> 
>     Where I have a problem is the idea that infrastructure vendors should 
> take action against all content John Levine finds irresponsible. Alternative 
> views on COVID, and the vaccine, if we must get specific, can not in any 
> reasonable stretch be put into the same bucket as child pornography and 
> beheading videos. Trying to make that argument is what is really 
> irresponsible.
> 
This thread seems to be reinventing the policy wheel: the long-standing 
discussion about illegal vs. harmful content. Illegal content is illegal, and 
no one is allowed to post or redistribute it. Harmful content is not illegal, 
yet it can seriously harm people; anti-vax propaganda, and fake news in 
general, fall into this category. The problem with harmful content is that, 
not being addressed by law, it is generally ill-defined, both in terms of what 
it is and of what you are supposed to do with it. There is also an additional 
dimension to this discussion: content that is illegal in one country can be 
merely harmful in another, or even perfectly legitimate in a third.

In general, it is up to each person and company to decide what to do with 
harmful content, and what to think of the policies that other people and 
companies adopt about it.

However, there is growing agreement that media and content distribution 
platforms with dominant roles should not define and handle "harmful content" 
entirely on their own terms. On one side, this can lead to censorship of 
legitimate views; on the other, to mass, uncontrolled circulation of harmful 
content, since these platforms make money from content distribution and thus 
have an economic incentive not to remove anything.

This is leading many countries to enact laws that turn certain kinds of 
harmful content into illegal content, and/or that define how platforms must 
deal with it, often including procedures to judge the content and to review 
that judgement, as well as timelines for removal.

This has also given the Internet content industry as a whole a bit of a bad 
image, as "people who make a lot of money by spreading hate and harmful 
things, and who hide behind free-expression excuses to keep doing so". This 
also explains why part of the industry is now (over?)reacting.

For some reason, mass mailing services have always flown under the radar, 
partly because mass mailing of harmful content is almost always spam 
regardless of the content itself, but also because social networks are a much 
more common way of spreading harmful content at scale. At this point in time, 
I don't think there are any requirements for email service providers to act 
against harmful content, though there definitely are expectations about it in 
some political spheres. In the end, there is no "right" policy, and we have to 
accept that different people will have different approaches.

--

Vittorio Bertola | Head of Policy & Innovation, Open-Xchange
vittorio.bert...@open-xchange.com
Office @ Via Treviso 12, 10144 Torino, Italy
_______________________________________________
mailop mailing list
mailop@mailop.org
https://list.mailop.org/listinfo/mailop
