I'm also confused. How is Twitter supposed to know whether the Tweet was made by the OP or came from a compromised account? When something is egregious, what should Twitter do if it doesn't have the resources to investigate whether the tweet was "intentional", and action must happen quickly to avoid problems? And isn't "wasn't me" an easy claim to make whether or not it's true?

I'm likewise confused. He says "What really pisses me off is that this is fairly obviously not my fault", but I can't see why it's so obvious. Am I missing something? To an outsider it looks like a threat, with no indication that it wasn't made intentionally.
> How is Twitter supposed to know that the Tweet was made by the OP or was a compromised Tweet?
For starters: it would be interesting to see what IP address the tweet was made from, and whether it was preceded by a password change or by contact with Twitter support to turn the account over to someone else.
> When something is egregious what should Twitter do if it doesn't have resources to investigate whether the tweet was "intentional" or not, if action must happen quickly to avoid problems?
Good question. That makes me wonder if they are able to operate safely at scale at all.
> And isn't a claim of "wasn't me" an easy claim whether or not it's true?
Yes. And yet: it wasn't me.
> He says "What really pisses me off is that this is fairly obviously not my fault " But I can't find out where it is so obvious.

Obvious to whom?
To me; presumably to those who know me; and presumably to Twitter employees, who have access to a whole lot more data than I do.
>> How is Twitter supposed to know that the Tweet was made by the OP or was a compromised Tweet?
> For starters: it would be interesting to see what IP address the tweet was made from, whether or not it was preceded by a password change or contact with Twitter support to turn the account over to someone else.
>> When something is egregious what should Twitter do if it doesn't have resources to investigate whether the tweet was "intentional" or not, if action must happen quickly to avoid problems?
> Good question. That makes me wonder if they are able to operate safely at scale at all.
i'll hazard a guess that this isn't a factor at all, since the vast majority of cases where such differences exist could be explained by the user traveling and logging in from a different location. your abuse team would declare open riot if every "posted abusive tweet, but from a coffee shop wifi instead of their home" case had to be evaluated as a possible hijack.
like most large services, Twitter has self-service hijack protections: you should receive email notifications when Twitter sees a login from an unknown location (i sure do), with the usual CTA about changing your password and such if you do not recognize it. that does appear to be what you should do here, insofar as they state you can cancel the appeal, delete the tweet, and regain access. asking users to delete tweets made by a compromised account sounds normal enough, given that it's what would happen anyway.
i'll grant that Twitter's account blocks and support system can be _bad_, in that they often have conflicting or outdated instructions, but that's only a problem when the recovery process fails, not when you don't attempt it. this seems more a complaint that they don't offer concierge service but, eh, not much surprise there.
Yeah, the Twitter policy here is 1) ask the user to delete the bad tweet 2) tell the user to change their password if they think they've been hijacked 3) internally investigate any credible claims of a security issue on their side. They have zero interest in allowing users to participate in any such investigation.
A side question for folks who work on these sorts of social media / UGC sites. Wouldn't a shadowban / deletion of messages be a lot less antagonistic as a way of dealing with problematic posts than an instant and total account ban? I mean, if Reddit can shadowban so that your account still works, even if the posts go into the "ether", why can't Twitter do this? Instaban seems a bit... harsh, even if merited? Couldn't you combine shadowban with account ban if there's a persistent set of violating posts? Is there a practical reason why shadowbanning on Twitter doesn't work?
I mean, if we're going to use fairly simple approaches (keywords on ban lists or user-based flagging / reporting), then shouldn't step one simply be not allowing the post at all, instead of a retroactive instaban on the account after the post has been shared? To me, the simple way is to delete the post, or put it in a hold queue (warning the user), or at the very least not actually share the post on a timeline. Warn the user, don't share the post, and/or delete the post. Then you can still have the desired effect of keeping the platform "safe". Am I missing something? I'm confused all around about social media practices, honestly.
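To make the "don't publish, then warn" idea concrete, here's a minimal sketch in Python. Everything in it is hypothetical: the `BANNED_KEYWORDS` set, the `Post` type, and the `hold_queue` are illustrative stand-ins, not anything Twitter actually does.

```python
from dataclasses import dataclass

# Hypothetical ban list; a real system would use far richer classifiers.
BANNED_KEYWORDS = {"kill", "bomb"}

@dataclass
class Post:
    author: str
    text: str

def moderate(post: Post, hold_queue: list) -> str:
    """Decide the post's fate: 'published' or 'held' for review."""
    words = set(post.text.lower().split())
    if words & BANNED_KEYWORDS:
        # Step one from the comment above: never share the post on a
        # timeline; queue it for review and warn the user instead.
        hold_queue.append(post)
        return "held"
    return "published"

queue: list = []
print(moderate(Post("alice", "nice weather today"), queue))   # published
print(moderate(Post("bob", "i will bomb this place"), queue)) # held
```

The point of the sketch is ordering: the check happens before publication, so the account-level question (warn? ban?) can be handled separately and less destructively.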
A shadowban is IMO worse than an instaban: you still aren't able to post anything, but it will look like you can. It also feels morally wrong to shadowban people, and there's no process to appeal; it's not even straightforward to discover you've been shadowbanned, you need a bot or to search for your posts in a private window[0].
Agree on the "simple way" of warning the user and putting the post on a queue though.
> A shadowban is IMO worse than an instaban: you still aren't able to post anything, but it will look like you can. It also feels morally wrong to shadowban people, and there's no process to appeal; it's not even straightforward to discover you've been shadowbanned, you need a bot or to search for your posts in a private window[0].
I think that's true on a first offense. But if the user has demonstrated they have no intent to behave or their offense is egregious or demonstrates they know what they're doing, then I don't think there's any problem with a shadowban.
>Good question. That makes me wonder if they are able to operate safely at scale at all.
With those standards, I don't think many of the tech giants can operate at scale at all. Not that I think they shouldn't be held to those standards, but that incompetence just doesn't surprise me at all.
There are several things they could check and factor into the score before banning someone. Client source (if they use the website 100% of the time and suddenly this tweet came from the Android client, for example), IP address (do they tweet from the U.S. exclusively and then suddenly they're tweeting from Moscow?), VPN affiliation (did this tweet originate from a known VPN egress?), and so on. These things _should_ be factored into the "omg ban this account" score IMO, but I have no idea if they are.
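A rough sketch of what combining those signals into a score could look like. The weights, threshold, and function name are all invented for illustration; nothing here reflects Twitter's real scoring.

```python
# Hypothetical hijack-risk score from the signals mentioned above.
# Weights and the 0.5 threshold are made up for the example.
def hijack_risk(client: str, usual_client: str,
                country: str, usual_country: str,
                from_known_vpn: bool) -> float:
    score = 0.0
    if client != usual_client:    # always the website, suddenly Android
        score += 0.3
    if country != usual_country:  # U.S. history, tweet from Moscow
        score += 0.4
    if from_known_vpn:            # tweet came through a known VPN egress
        score += 0.3
    return score

# High risk could mean "route to manual review" rather than instaban.
risk = hijack_risk("android", "web", "RU", "US", True)
print("manual review" if risk >= 0.5 else "auto-action ok")
```

The design point is that these signals gate the *ban*, not the tweet: a suspicious context argues for a lighter touch (review, password reset) instead of an immediate permanent action.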