Elon Musk and Twitter chief Parag Agrawal are butting heads over how the social media giant handles automated accounts known as bots, stoking speculation that Musk may try to lower the price of, or even walk away from, his $44 billion offer for the company. Musk told a tech conference in Miami that fake users make up at least 20% of all Twitter accounts, and possibly as many as 90%. Twitter disagrees: it reports that spam accounts make up fewer than 5% of total users, and Agrawal posted a long thread laying out the company's methodology. Musk replied by tweeting that "this deal cannot move forward" unless Twitter provides proof of its claims.
1. What are Twitter bots and what are they used for?
On Twitter, bots are automated accounts that can do the same things as real human beings: send out tweets, follow other users and like and retweet postings by others. Spam bots use these abilities to engage in potentially deceptive, harmful or annoying activity. Spam bots programmed with a commercial motivation might tweet incessantly in an attempt to drive traffic to a website for a product or service. They can be used to spread misinformation and promote political messages. In the 2016 presidential election, there were concerns that Russian bots helped influence the race in favor of the winner, Donald Trump. Spam bots can also disseminate links to fake giveaways and other financial scams. After announcing his plans to acquire Twitter, Musk said one of his priorities is cracking down on spam bots that promote scams involving cryptocurrencies.
2. Are bots and fake accounts allowed on Twitter?
Bots are allowed on Twitter, though company policy requires such accounts to indicate that they're automated. The platform has even launched a label for "good" bots, such as @tinycarebot, an account that tweets self-care reminders. Spam bots, however, aren't permitted and the company has policies meant to combat them. Users are encouraged to report policy violations. The company locks accounts with suspicious activity. To get back in, users may have to provide additional information such as a phone number or solve a reCAPTCHA challenge, which entails completing a puzzle or typing in a phrase seen in an image to confirm they're human. Twitter also can permanently suspend spam accounts. The company estimated that fake accounts and spam accounted for fewer than 5% of its daily active users in the fourth quarter of 2021.
3. Can Elon Musk crack down on bots?
Musk certainly seems to think so. On April 25 he said he wanted to improve Twitter by, among other things, "defeating the spam bots, and authenticating all humans." Making greater use of security methods like reCAPTCHA could help crack down on spam bots. Twitter could increase deployment of multifactor authentication, a type of identity verification where users have to confirm who they are and that they're human by using another channel such as phone or email. The company could also boost usage of machine-learning algorithms that could help identify spam bots based on their Twitter activity. Musk also tweeted another suggestion -- asking why Twitter doesn't just call users to verify their identity -- and then posted a poop emoji.
4. What's at stake for Twitter?
Twitter could lose users who are frustrated, concerned or even harmed by spam bots and fraudulent activity. Persistent security issues could also draw more attention from regulators who want to rein in Twitter and the broader tech industry. On the flip side, a tougher crackdown on spam bots could hurt Twitter's total user count by cleaning out fake accounts. More immediately, Musk, chief executive officer of Tesla Inc. and SpaceX, said on May 13 that his bid to buy Twitter was "temporarily on hold" pending details about how many spam and fake accounts are on the platform. Four days later he declared he wouldn't proceed unless Twitter could prove bots make up fewer than 5% of its users.
5. Why is security such a challenge for Twitter?
Mobile apps are often more vulnerable than websites accessed through an internet browser on a desktop or laptop computer. Web browsers like Google Chrome update and make security improvements in the background without the user realizing it. With a mobile app, users often have to install the update themselves to ensure that a new security patch is in place. More established tech companies like Google and Microsoft also have large dedicated security teams, putting them ahead of social media companies when it comes to security.