Interesting concept. That would mean you'd have to add an additional element to each message that would update the spam content by ID.
Here's the problem: if you get enough people together, you can flame/spam-report messages and make them go away. Let's say you hard-code a number like 100 reported spams. Easy: if Kelly has 200 followers and 100 of them are active enough to mark a message as spam, it self-deletes. The problem is there's a serious lag time in getting 100 real people to do it, while AplusK's hater fans could easily muster 100 bots to censor him. So let's assume we do it by percentage instead, based on number of followers: 10% of members have to flag the message as spam. (That's a relatively hard number to fudge... you couldn't bot your way to 80k users to silence a Mashable post.)

But there has to be one other element to make it fail-safe: a user rating algorithm calculated over the last 50 tweets, resetting every hour on the epoch time limit, based on the number of unique posts or the similarity of said posts. With regex, strip @names, strip URLs, strip hashtags, then test for message uniqueness (70% similar to another message, or 80% common words after stripping out and/of/the/RT strings). Using the URLs, the unique URL/hashtag counts, etc., come up with an algorithm that builds a weighted index from the number of keywords found inside each message. Now you have a threshold for 'spam accounts'. But you can also use that to tune limits: if an account's threshold never goes above 40%, there's some give on limits; if it's above 70%, impose strict limits on the account. So community member vs. community spammer becomes very easy to track. Existing rate-shaping limits (follow/IP/etc.) will prevent people from spinning up numerous accounts to get around bad scores.

So, in short: to auto-destroy, if 10% of users say 'yeah, spam' and the account has a very high index level, that would make sense. At a low index level, the account could get a DM from spam cop after 10% of people mark it as spam, with a link that says "people are marking you as spam," etc. Anything fully automatic would be too easy to exploit.
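To make the idea above concrete, here's a minimal sketch in Python. It's not anyone's actual implementation: I'm assuming a simple word-overlap (Jaccard-style) score stands in for the 70%/80% similarity tests, and all function names (`spam_index`, `handle_reports`) and the exact thresholds wiring are hypothetical.

```python
import re
from itertools import combinations

# Filler strings the original proposal says to strip before comparing.
STOPWORDS = {"and", "of", "the", "rt"}

def normalize(tweet: str) -> set:
    """Strip @names, URLs, and hashtags via regex, then drop filler words."""
    text = re.sub(r"@\w+|https?://\S+|#\w+", "", tweet.lower())
    return {w for w in re.findall(r"[a-z']+", text) if w not in STOPWORDS}

def similarity(a: set, b: set) -> float:
    """Fraction of words two stripped tweets share."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def spam_index(tweets: list, dupe_threshold: float = 0.8) -> float:
    """Share of tweet pairs in the last-50 window that look like near-duplicates.

    A hypothetical stand-in for the 'weighted index' described above.
    """
    sets = [normalize(t) for t in tweets]
    pairs = list(combinations(sets, 2))
    if not pairs:
        return 0.0
    dupes = sum(1 for a, b in pairs if similarity(a, b) >= dupe_threshold)
    return dupes / len(pairs)

def handle_reports(reports: int, followers: int, index: float) -> str:
    """Combine the 10%-of-followers rule with the index level.

    High index + enough reports -> auto-delete; low index -> soft warning DM.
    """
    if reports < 0.10 * followers:
        return "none"
    return "delete" if index >= 0.70 else "warn_dm"
```

So a bot farm blasting near-identical tweets scores a high index and gets auto-deleted once 10% of followers report it, while a regular member who trips the report threshold only gets the warning DM.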
~Doc

On Jun 10, 3:05 pm, Dewald Pretorius <[email protected]> wrote:
> The block takes care of the account level. It does not take care of the
> individual tweet level.
>
> And with block you don't have the aggregation of reported spam tweets
> that automatically results in an account suspension.
>
> Plus, to block you have to specifically visit the user's profile to
> find the block link. With tweet spam reporting the button would be
> right there in your own timeline. Far more people will participate in
> that action, because it requires no additional navigation.
>
> On Jun 10, 4:58 pm, Jesse Stay <[email protected]> wrote:
> > How is that different than block, other than terminology?
> >
> > Jesse
> >
> > On Wed, Jun 10, 2009 at 1:37 PM, Dewald Pretorius <[email protected]> wrote:
> > > Twitter already has a few million Dels, namely us, the users.
> > >
> > > All they need to do is to add a report spam button to the tweet, much
> > > like the favorite button.
> > >
> > > X number of strikes against a tweet, and it is automatically deleted.
> > >
> > > X number of strikes against an account, and it is automatically
> > > suspended.
