Behind The Screen Digital Power
Active Facebook Usage Linked to 'Toxic' Narcissism (Study)
A new paper links such things as a large number of "friends" and frequent status updates with negative forms of the personality trait.
If you update your status frequently and have hundreds of “friends” on Facebook, you may think this article is about you.
According to a new study of Facebook users, a high amount of activity on the social media site is linked not just to narcissism but to the "toxic" version of the personality trait. The Guardian reports that a study published in the journal Personality and Individual Differences compared Facebook usage to scores on the Narcissistic Personality Inventory questionnaire.
Its findings show that participants who scored highest on the Narcissistic Personality Inventory were more likely to have more "friends," tag themselves more often, and update their status more frequently.
It also showed that participants labeled as narcissistic were more likely to respond aggressively to derogatory comments about them and to change their profile photos more often.
While there are certainly healthy levels of narcissism, high levels of the trait, which include excessive selfishness, arrogance, a lack of empathy, and a sense of entitlement, can become socially disruptive for individuals and their relationships.
Of course, other studies have associated narcissistic behavior with social media sites, but this is one of the first to directly link something like the number of Facebook friends to the "toxic" elements of narcissism.
The link between high Facebook use and negative forms of narcissism may be especially troubling in light of an exclusive social media poll by market research firm Penn Schoen Berland for The Hollywood Reporter.
The poll of 750 social network users ages 13 to 49 showed that 88 percent of respondents considered social media a form of entertainment and 79 percent actively use Facebook while watching television. Many of the poll's millennial respondents also want more opportunities to use social media alongside entertainment, such as while watching movies.
“Did We Create This Monster?” How Twitter Turned 'Toxic'
For years, the company’s zeal for free speech blinded it to safety concerns. Now it’s scrambling to make up for lost time.
Yair Rosenberg wanted to troll the trolls.
Rosenberg, a senior writer for Jewish-focused news-and-culture website Tablet Magazine, had become a leading target of anti-Semitic Twitter users during his reporting on the 2016 U.S. presidential campaign. Despite being pelted with slurs, he wasn’t overly fixated on the Nazis who had embraced the service. “For the most part I found them rather laughable and easily ignored,” he says.
But one particular type of Twitter troll did gnaw at him: the ones who posed as minorities–using stolen photos of real people–and then infiltrated high-profile conversations to spew venom. “Unsuspecting readers would see this guy who looks like an Orthodox Jew or a Muslim woman saying something basically offensive,” he explains. “So they think, Oh, Muslims are religious. Jews are religious. And they are horrifically offensive people.”
Rosenberg decided to fight back. Working with Neal Chandra, a San Francisco–based developer he’d never met, he created an automated Twitter bot called Imposter Buster. Starting in December 2016, it inserted itself into the same Twitter threads as the hoax accounts and politely exposed the trolls’ masquerade (“FYI, this account is a racist impersonating a Jew to defame Jews”).
Imposter Buster soon came under attack itself–by racists who reported it to Twitter for harassment. Unexpectedly, the company sided with the trolls: It suspended the bot for spammy behavior the following April. With assistance from the Anti-Defamation League, Rosenberg and Chandra got that decision reversed three days later. But their targets continued to file harassment reports, and last December Twitter once again blacklisted Imposter Buster, this time for good.
Rosenberg, who considers his effort good citizenship rather than vigilantism, still isn’t sure why Twitter found it unacceptable; he never received an explanation directly from the company. But the ruling gave racists a win by technical knockout.
For all the ways in which the Imposter Buster saga is unique, it’s also symptomatic of larger issues that have long bedeviled Twitter: abuse, the weaponizing of anonymity, bot wars, and slow-motion decision making by the people running a real-time platform. These problems have only intensified since Donald Trump became president and chose Twitter as his primary mouthpiece. The platform is now the world’s principal venue for politics and outrage, culture and conversation–the home for both #MAGA and #MeToo.
This status has helped improve the company’s fortunes. Daily usage is up a healthy 12% year over year, and Twitter reported its first-ever quarterly profit in February, capping a 12-month period during which its stock doubled. Although the company still seems unlikely ever to match Facebook’s scale and profitability, it’s not in danger of failing. The occasional cries from financial analysts for CEO Jack Dorsey to sell Twitter or from critics for him to shut it down look more and more out of step.
Despite Twitter’s more comfortable standing, Dorsey has been increasingly vocal about his service’s problems. “We are committed to making Twitter safer,” the company pledged in its February shareholder letter. On the accompanying investor call, Dorsey outlined an “information quality” initiative to improve content and accounts on the service. Monthly active users have stalled at 330 million–a fact that the company attributes in part to its ongoing pruning of spammers. Twitter’s cleanup efforts are an admission, albeit an implicit one, that the array of troublemakers who still roam the platform–the hate-mongers, fake-news purveyors, and armies of shady bots designed to influence public opinion–are impeding its ability to grow. (Twitter did not make Dorsey, or any other executive, available to be interviewed for this story. Most of the more than 60 sources we spoke to, including 44 former Twitter employees, requested anonymity.)
Though the company has taken significant steps in recent years to remove bad actors, it hasn’t shaken the lingering impression that it isn’t trying hard enough to make the service a safer space. Twitter’s response to negative incidents is often unsatisfying to its users and more than a trifle mysterious–its punishment of Rosenberg, instead of his tormentors, being a prime example. “Please can someone smart make a new website where there’s only 140 characters and no Nazis?” one user tweeted shortly after Twitter introduced 280-character tweets in November.
Twitter is not alone in wrestling with the fact that its product is being corrupted for malevolence: Facebook and Google have come under heightened scrutiny since the presidential election, as more information comes to light revealing how their platforms have been used to manipulate citizens, from Cambridge Analytica to conspiracy videos. The companies' responses have been timid, reactive, or worse. "All of them are guilty of waiting too long to address the current problem, and all of them have a long way to go," says Jonathon Morgan, founder of Data for Democracy, a team of technologists and data experts who tackle governmental social-impact projects.
The stakes are particularly high for Twitter, given that enabling breaking news and global discourse is key to both its user appeal and business model. Its challenges, increasingly, are the world’s.
How did Twitter get into this mess? Why is it only now addressing the malfeasance that has dogged the platform for years? “Safety got away from Twitter,” says a former VP at the company. “It was Pandora’s box. Once it’s opened, how do you put it all back in again?”
In Twitter's early days, as the microblogging platform's founders were figuring out its purpose, its users showed them Twitter's power for good. Galvanized by the dissidents, activists, and whistle-blowers who embraced Twitter during global social movements, the startup made free expression its guiding principle. "Let the tweets flow," said Alex Macgillivray, Twitter's first general counsel, who later served as deputy CTO in the Obama administration. Internally, Twitter thought of itself as "the free-speech wing of the free-speech party."
This ideology proved naive. “Twitter became so convinced of the virtue of its commitment to free speech that the leadership utterly misunderstood how it was being hijacked and weaponized,” says a former executive.
The first sign of trouble was spam. Child pornography, phishing attacks, and bots flooded the tweetstream. Twitter, at the time, seemed to be distracted by other challenges. When the company appointed Dick Costolo as CEO in October 2010, he was trying to fix Twitter’s underlying infrastructure–the company had become synonymous with its “fail whale” server-error page, which exemplified its weak engineering foundation. Though Twitter was rocketing toward 100 million users during 2011, its antispam team included just four dedicated engineers. “Spam was incredibly embarrassing, and they built these stupidly bare-minimum tools to [fight it],” says a former senior engineer, who remembers “goddamn bot wars erupting” as fake accounts fought each other for clicks.
Twitter's trust and safety group, responsible for safeguarding users, was run by Del Harvey, Twitter employee No. 25. She had an atypical résumé for Silicon Valley: Harvey had previously worked with Perverted Justice, a controversial volunteer group that used web chat rooms to ferret out apparent sexual predators, and partnered with NBC's To Catch a Predator, posing as a minor to lure in pedophiles for arrest on TV. Her lack of traditional technical and policy experience made her a polarizing figure within the organization, although allies have found her passion about safety issues inspiring. In the early days, "she personally responded to individual [affected] users–Del worked tirelessly," says Macgillivray. "[She] took on some of the most complex issues that Twitter faced. We didn't get everything right, but Del's leadership was very often a factor when we did."
Harvey's view, championed by Macgillivray and other executives, was that bad speech could ultimately be defeated with more speech, a belief that echoed Supreme Court Justice Louis Brandeis's landmark 1927 First Amendment opinion arguing that this remedy is always preferable to "enforced silence." Harvey occasionally used as an example the phrase "Yo bitch," which bad actors intend as invective, but others perceive as a sassy hello. Who was Twitter to decide? The marketplace of ideas would figure it out.
By 2012, spam was mutating into destructive trolling and hate speech. The few engineers in Harvey’s group had built some internal tools to enable her team to more quickly remove illegal content such as child pornography, but they weren’t prepared for the proliferation of harassment on Twitter. “Every time you build a wall, someone is going to build a higher ladder, and there are always more people outside trying to fuck you over than there are inside trying to stop them,” says a former platform engineer. That year, Australian TV personality Charlotte Dawson was subjected to a rash of vicious tweets–e.g., “go hang yourself”–after she spoke out against online abuse. Dawson attempted suicide and was hospitalized. The following summer, in the U.K., after activist Caroline Criado-Perez campaigned to get a woman’s image featured on the 10-pound note, her Twitter feed was deluged with trolls sending her 50 rape threats per hour.
The company responded by creating a dedicated button for reporting abuse within tweets, yet trolls only grew stronger on the platform. Internally, Costolo complained that the “abuse economics” were “backward.” It took just seconds to create an account to harass someone, but reporting that abuse required filling out a time-consuming form. Harvey’s team, earnest about reviewing the context of each reported tweet but lacking a large enough support staff, moved slowly. Multiple sources say it wasn’t uncommon for her group to take months to respond to backlogged abuse tickets. Because they lacked the necessary language support, team members had to rely on Google Translate for answering many non-English complaints. User support agents, who manually evaluated flagged tweets, were so overwhelmed by tickets that if banned users appealed a suspension, they would sometimes simply release the offenders back onto the platform. “They were drowning,” says a source who worked closely with Harvey. “To this day, it’s shocking to me how bad Twitter was at safety.”
Twitter’s leadership, meanwhile, was focused on preparing for the company’s November 2013 IPO, and as a result it devoted the bulk of its engineering resources to the team overseeing user growth, which was key to Twitter’s pitch to Wall Street. Harvey didn’t have the technical support she needed to build scalable solutions to Twitter’s woes.
Toxicity on the platform intensified during this time, especially in international markets. Trolls organized to spread misogynist messages in India and anti-Semitic ones in Europe. In Latin America, bots began infecting elections. Hundreds of automated accounts spread propaganda during Brazil's 2014 presidential race, leading a company executive to meet with government officials; according to a source, "pretty much every member of the Brazilian house and senate asked, 'What are you doing about bots?'" (Around this time, Russia reportedly began testing bots of its own to sway public opinion through disinformation. Twitter largely tolerated automated accounts on the platform; a knowledgeable source recalls the company once sending a cease-and-desist letter to a bot farmer, which was disregarded, a symbol of its anemic response to the issue.) Twitter's leadership seemed deaf to cries from overseas offices. "It was such a Bay Area company," says a former international employee, echoing a common grievance that Twitter fell victim to Silicon Valley myopia. "Whenever [an incident] happened in the U.S., it was a company-wide tragedy. We would be like, 'But this happens to us every day!'"
It wasn’t until mid-2014, around the time that trolls forced comedian Robin Williams’s daughter, Zelda, off the service in the wake of her father’s suicide–she later returned–that Costolo had finally had enough. Costolo, who had been the victim of abuse in his own feed, lost faith in Harvey, multiple sources say. He put a different department in charge of responding to user-submitted abuse tickets, though he left Harvey in charge of setting the company’s trust and safety guidelines.
Hashtag Wars
Discourse on Twitter often devolves into face-offs between opposing worldviews.
#NeverAgain: The Parkland, Florida, shooting survivors have proven adept at using Twitter to combat the disinformation and harassment campaigns waged against them.
Soon, the threats morphed again: ISIS began to leverage Twitter to radicalize followers. Steeped in free-speech values, company executives struggled to respond. Once beheading videos started circulating, “there were brutal arguments with Dick,” recalls a former top executive. “He’d say, ‘You can’t show people getting killed on the platform! We should just erase it!’ And [others would argue], ‘But what about a PhD student posting a picture of the Kennedy assassination?’ ” They decided to allow imagery of beheadings, but only until the knife touches the neck, and, according to two sources, the company assigned support agents to search for and report beheading content–so the same team could then remove it. “It was the stupidest thing in the world,” says the source who worked closely with Harvey. “[Executives] already made the policy decision to take down the content, but they didn’t want to build the tools to [proactively] enforce the policy.” (Twitter has since purged hundreds of thousands of ISIS-related accounts, a muscular approach that has won the platform praise.)
Costolo, frustrated with the company's meager efforts to tackle these problems, sent a company-wide memo in February 2015 complaining that he was "ashamed" of how much Twitter "sucked" at dealing with abuse. "If I could rewind the clock, I'd get more aggressive earlier," Costolo tells Fast Company, stressing that the "blame" lies with nobody "other than the CEO at the time: me."
“I often hear people in Silicon Valley talking about fake news and disinformation as problems we can engineer our way out of,” says Brendan Nyhan, codirector of Bright Line Watch, a group that monitors threats to democratic processes. “That’s wrong. People are looking for a solution that doesn’t exist.”
The Valley may be coming around to this understanding. Last year, Facebook and YouTube announced initiatives to expand their content-policing teams to 20,000 and 10,000 workers, respectively. Twitter, meanwhile, had just 3,317 employees across the entire company at the end of 2017, a fraction of whom are dedicated to improving “information quality.”
Putting mass quantities of human beings on the job, though, isn’t a panacea either. It introduces new issues, from personal biases to having to make complicated calls on content in a matter of seconds. “These reviewers use detailed rules designed to direct them to make consistent decisions,” says Susan Benesch, faculty associate at Harvard’s Berkman Klein Center for Internet and Society and director of the Dangerous Speech Project. “That’s a hard thing to do, especially at scale.”
Humans are often to blame for overly broad purges that sweep up benign content, such as when YouTube did a sweep for extremist and gun-related videos after the Parkland shooting, deleting clips and even entire channels that shouldn't have been removed. A YouTube spokesperson admitted to Bloomberg, "Newer members may misapply some of our policies resulting in mistaken removals."
The sheer scale of this quality-control conundrum helps explain why Twitter frequently fails, at least initially, to remove tweets that users report for harassment–some including allusions to death or rape–even though they would appear to violate its community standards. The company also catches flak for taking action against tweets that do violate these rules but have an extraordinary context, as when it temporarily suspended actress Rose McGowan for including a private phone number in a flurry of tweets excoriating Hollywood notables in the wake of the Harvey Weinstein sexual harassment scandal. "You end up going down a slippery slope on a lot of these things," says a former C-level Twitter executive. "'Oh, the simple solution is X!' That's why you hear now, 'Why don't you just get rid of bots?!' Well, lots of [legitimate media] use automated [accounts] to post headlines. Lots of these easy solutions are a lot more complex."
Five months after Costolo's February 2015 lament, he resigned from Twitter. Cofounder Jack Dorsey, who had run the company until he was fired in 2008, replaced Costolo as CEO (while retaining the same job at his payments company, Square, headquartered one block away in San Francisco). Dorsey, an English major in a land of computer scientists, had deep thoughts about Twitter's future, but he couldn't always articulate them in a way that translated to engineers. "I'd be shocked if you found somebody [to whom] Jack gave an extremely clear articulation of his thesis for Twitter," says the former top executive, noting that Dorsey has described the service by using such metaphors as the Golden Gate Bridge and an electrical outlet for a toaster. Once, he gathered the San Francisco office for a meeting where he told employees he wanted to define Twitter's mission–and proceeded to play the Beatles' "Blackbird" as attendees listened in confused silence.
There was no doubt, though, that he believed in Twitter’s defining ethos. “Twitter stands for freedom of expression. We stand for speaking truth to power,” Dorsey tweeted on his first official day back as Twitter’s CEO, in October 2015.
By the time Dorsey’s tenure got under way, Twitter had gotten a better handle on some of the verbal pollution plaguing the service. The company’s anti-abuse operations had been taken over by Tina Bhatnagar, a no-nonsense veteran of Salesforce who had little patience for free-speech hand-wringing. Bhatnagar dramatically increased the number of outsourced support agents working for the company and was able to reduce the average response time on abuse-report tickets to just hours, though some felt the process became too much of a numbers game. “She was more like, ‘Just fucking suspend them,'” says a source who worked closely with her. If much of the company was guided by Justice Brandeis’s words, Bhatnagar represented Justice Potter Stewart’s famous quote about obscenity: “I know it when I see it.”
This ideological split was reflected in the company’s organizational hierarchy, which kept Harvey and Bhatnagar in separate parts of the company–legal and engineering, respectively–with separate managers. “They often worked on the exact same things but with very different approaches–it was just bonkers,” says a former high-level employee who felt ricocheted between the two factions. Even those seemingly on the same team didn’t always see eye to eye: According to three sources, Colin Crowell, Twitter’s VP of public policy, at one point refused to report to Harvey’s boss, general counsel Vijaya Gadde (Macgillivray’s successor), due in part to disagreements about how best to approach free-speech issues.
Contentiousness grew common: Bhatnagar’s team would want to suspend users it found abusive, only to be overruled by Gadde and Harvey. “That drove Tina crazy,” says a source familiar with the dynamic. “She’d go looking for Jack, but Jack would be at Square, so the next day he’d listen and take notes on his phone and say, ‘Let me think about it.’ Jack couldn’t make a decision without either upsetting the free-speech people or the online-safety people, so things were never resolved.”
Dorsey’s supporters argue that he wasn’t necessarily indecisive–there were simply no easy answers. Disputes that bubbled up to Dorsey were often bizarre edge cases, which meant that any decision he made would be hard to generalize to a wide range of instances. “You can have a perfectly written rule, but if it’s impossible to apply to 330 million users, it’s as good as having nothing,” says a source familiar with the company’s challenges.
Dorsey had other business demands to attend to at the time. When he returned as CEO, user growth had stalled, the stock had declined nearly 70% since its high following the IPO, the company was on track to lose more than $500 million in 2015 alone, and a number of highly regarded employees were about to leave. Although Twitter made some progress in releasing new products, including Moments and its live-video features, it struggled to refresh its core experience. In January 2016, Dorsey teased users with hints at an expansion of Twitter’s long-standing 140-character limit, but it took another 22 months to launch 280-character tweets. “Twitter was a hot mess,” says Leslie Miley, who managed the engineering group responsible for safety features until he was laid off in late 2015. “When you switch product VPs every year, it’s hard to keep a strategy in place.”
Then the U.S. presidential election arrived. All of Twitter's warts were about to be magnified on the world stage. Twitter's support agents, the ones reviewing flagged content and wading through the darkest muck of social media, witnessed the earliest warning signs as Donald Trump started sweeping the primaries. "We saw this radical shift," says one of those agents. Discrimination seemed more flagrant, the propaganda and bots more aggressive. Says another: "You'd remove it and it'd come back within minutes, supporting Nazis, hating Jews, [memes featuring] ovens, and oh, the frog…the green frog!" (That would be Pepe, a crudely drawn cartoon that white supremacists co-opted.)
A July 2016 troll attack on SNL and Ghostbusters star Leslie Jones–incited by alt-right provocateur Milo Yiannopoulos–proved to be a seminal moment for Twitter's anti-harassment efforts. After Jones was bombarded with racist and sexist tweets, Dorsey met with her personally to apologize and declared an "abuse emergency" internally. The company banned Yiannopoulos. It also enhanced its muting and blocking features and introduced an opt-in tool that allows users to filter out what Twitter has determined to be "lower-quality content." The idea was that Twitter wouldn't be suppressing free speech–it would merely not be shoving unwanted tweets into its users' faces.
But these efforts weren’t enough to shield users from the noxiousness of the Clinton–Trump election cycle. During the Jones attack, screenshots of fake, Photoshopped tweets purporting to show divisive things Jones had shared spread virally across the platform. This type of disinformation gambit would become a hallmark of the 2016 election and beyond, and Twitter did not appreciate the strength of this new front in the information wars.
Of the two presidential campaigns, Trump’s better knew how to take advantage of the service to amplify its candidate’s voice. When Twitter landed massive ad deals from the Republican nominee, left-leaning employees complained to the sales team that it should stop accepting Trump’s “bullshit money.”
The ongoing, unresolved disputes over what Twitter should allow on its platform continued to flare into the fall. In October, the company reneged on a $5 million deal with the Trump campaign for a custom #CrookedHillary emoji. “There was vicious [internal] debate and back-channeling to Jack,” says a source involved. “Jack was conflicted. At the eleventh hour, he pulled the plug.” Trump allies later blasted Twitter for its perceived political bias.
On November 8, employees were shocked as the election returns poured in, and the morning after Trump’s victory, Twitter’s headquarters were a ghost town. Employees had finally begun to take stock of the role their platform had played not only in Trump’s rise but in the polarization and radicalization of discourse.
“We all had this ‘holy shit’ moment,” says a product team leader at the time, adding that everyone was asking the same question: “Did we create this monster?”
In the months following Trump’s win, employees widely expected Dorsey to address Twitter’s role in the election head-on, but about a dozen sources indicate that the CEO remained mostly silent on the matter internally. “You can’t take credit for the Arab Spring without taking responsibility for Donald Trump,” says Leslie Miley, the former safety manager.
Over time, though, Dorsey’s thinking evolved, and he seems to be less ambivalent about what he’ll allow on the platform. Sources cite Trump’s controversial immigration ban and continued alt-right manipulation as influences. At the same time, Twitter began to draw greater scrutiny from the public, and the U.S. Congress, for its role in spreading disinformation.
Dorsey empowered engineering leaders Ed Ho and David Gasca to go after Twitter’s problems full bore, and in February 2017, as part of what some internally called an “abuse sprint,” the company rolled out more aggressive measures to permanently bar bad actors on the platform and better filter out potentially abusive or low-quality content. “Jack became a little bit obsessed,” says a source. “Engineering in every department was asked to stop working on whatever they were doing and focus on safety.”
Twitter’s safety operations, previously siloed, became more integrated with the consumer-product side of the company. The results have been positive. In May 2017, for example, after learning how much abuse users were being subjected to via Twitter’s direct messages feature, the team overseeing the product came up with the idea of introducing a secondary inbox to capture bad content, akin to a spam folder. “They’re starting to get things right,” says a former manager at the company, “addressing these problems as a combination of product and policy.”
During a live video Q&A Dorsey hosted in March, he was asked why trust and safety didn’t work with engineering much earlier. The CEO laughed, then admitted, “We had a lot of historical divisions within the company where we weren’t as collaborative as we could be. We’ve been recognizing where that lack of collaboration has hurt us.”
Even previous victims of Twitter abuse have recognized that the company's new safety measures have helped. "I think Twitter is doing a better job than they get public credit for," says Brianna Wu, the developer who became a principal target of Gamergate, the loose-knit collective of trolls whose 2014 attacks on prominent women in the gaming industry were a canary in the Twitter-harassment coal mine. "Most of the death threats I get these days are either sent to me on Facebook or through email, because Twitter has been so effective at intercepting them before I can even see them," she adds, sounding surprisingly cheery. (Wu's encounters with the dark side of social networking helped inspire her current campaign for a U.S. House seat in the Boston area, with online safety as one of her principal issues.)
Twitter has also been more proactive since the election in banning accounts and removing verifications, particularly of white nationalists and alt-right leaders such as Richard Spencer. (The blue check mark signifying a verified user was originally designed to confirm identity but has come to be interpreted as an endorsement.) According to three sources, Dorsey himself has personally directed some of these decisions.
Twitter began rolling out a series of policy and feature changes last October that prioritized civility and truthfulness over free-speech absolutism. For instance, while threatening murder has always been unacceptable, now even speaking of it approvingly in any context will earn users a suspension. The company has also made it more difficult to bulk-tweet misinformation.
Such crackdowns haven’t yet eliminated the service’s festering problems: After February’s Parkland mass shooting, some surviving students became targets of harassment, and Russia-linked bots reportedly spread pro-gun sentiments and disinformation. Nobody, though, can accuse Twitter of not confronting its worst elements. The pressure on Dorsey to keep this momentum going is coming from Wall Street, too: On a recent earnings call, a Goldman Sachs analyst pressed Dorsey about the company’s progress toward eliminating bots and enforcing safety policies. “Information quality,” Dorsey responded, is now Twitter’s “core job.”
This past Valentine’s Day, Senator Mark Warner entered his stately corner suite in Washington, D.C.’s Hart Senate Office Building, poured himself a Vitaminwater, and rushed into an explanation of why Silicon Valley needs to be held accountable for its role in the 2016 election. As the Democratic vice chairman of the Senate Intelligence Committee, Warner is swamped with high-profile hearings and classified briefings, but the topic is also personal for the self-described “tech guy” who made a fortune in the 1980s investing in telecoms.
Warner is coleading the committee’s investigation into Russian election interference, which has increasingly centered on the growing, unfettered power of technology giants, whom he believes need to get over their “arrogance” and fix their platforms. “One of the things that really offended me was the initial reaction from the tech companies to blow us off,” he began, leaning forward in his leather chair. “ ’Oh no! There’s nothing here! Don’t look!’ Only with relentless pressure did they start to come clean.”
He saved his harshest words for Twitter, which he said has dragged its feet far more than Facebook or Google. “All of Twitter’s actions were in the wake of Facebook’s,” Warner complained in his gravelly voice, his face reddening. “They’re drafting!” The company was the only one to miss the January 8 deadline for providing answers to the Intelligence Committee’s inquiries, and, making matters worse, Twitter disclosed weeks later that Kremlin-linked bots managed to generate more than 450 million impressions, substantially higher than the company previously reported. “There’s been this [excuse of], ‘Oh, well, that’s just Twitter.’ That’s not a long-term viable answer.”
Warner stated that he has had offline conversations directly with Mark Zuckerberg, but never Dorsey. Throwing shade, Warner smiled as he suggested that the company may not be able to commit as many resources as Facebook and Google can because it has a "more complicated, less lucrative business model."
The big question now is what government intervention might look like. Warner suggested several broad policy prescriptions, including antitrust and data privacy regulations, but the one with the greatest potential effect on Twitter and its rivals would be to make them liable for the content on their platforms. When asked if the European Union, which has been more forceful in its regulation of the technology industry, could serve as a model, the senator replied, “[I’m] glad the EU is acting. I think they’re bolder than we are.”
If the U.S. government does start taking a more activist role in overseeing social networks, it will unleash some of the same nettlesome issues that Europe is already working through. On January 1, for instance, Germany began enforcing a law known as (deep breath) Netzwerkdurchsetzungsgesetz, or NetzDG for short. Rather than establish new restrictions on hate speech, it mandates that large social networks remove material that violates the country's existing speech laws–which are far more stringent than their U.S. equivalents–within 24 hours of being notified of its existence. "Decisions that would take months in a regular court are now [made] by social media companies in just minutes," says Mirko Hohmann, a Berlin-based project manager for the Global Public Policy Institute.
As evidence of how this approach can create unintended outcomes, he points to an instance in which Twitter temporarily shut down the account of a German humor magazine after it tweeted satirically in the voice of Beatrix von Storch, a leader of a far-right party. “No court would have judged these tweets illegal, but a Twitter employee under pressure did,” Hohmann says. (The company apparently even deleted an old tweet by one of NetzDG’s architects, Heiko Maas, in which he called another politician an idiot.)
In the U.S., rather than wait for federal action or international guidance, state lawmakers in Maryland, New York, and Washington are already working to regulate political ads on social networks. As Warner said, the era of Silicon Valley self-policing is over.
Whether or not the federal government steps in, hardening the big social networks against abuse will involve implementing solutions which haven’t even been invented yet. “If there was a magical wand that they could wave to solve this challenge, with the substantial resources and expertise that they have, then they absolutely would,” says Graham Brookie, deputy director of the Atlantic Council’s Digital Forensic Research Lab.
Still, there are many things Twitter can do to protect its platform. Using technology to identify nefarious bots is a thorny matter, but Twitter could label all automated accounts as such, which wouldn’t hobble legitimate feeds but would make it tougher for Russian bots to pose as heartland Trump supporters.
"The issue here is not that there is automation on Twitter," says Renée DiResta, head of policy for Data for Democracy and a founding advisor for the Center for Humane Technology. "The issue is that there are automated accounts that are trying to be treated as real people, that are acting like real people, that are manipulating people."
Twitter could also do more to discourage people from creating objectionable content in the first place by making its rules more visible and digestible. Susan Benesch, whose Dangerous Speech Project is a member of Twitter’s Trust and Safety Council, says she’s implored executives to raise the visibility of the “Twitter Rules” policies, which outline what you can’t say on the service. “They say, ‘Nobody reads the rules,'” she recounts. “And I say ‘That’s right. And nobody reads the Constitution, but that doesn’t mean we shouldn’t have civics classes and try to get people to read it.'”
The company could also build trust by embracing transparency as more than a buzzword, sharing more with users and marketers about how exactly Twitter works and collaborating with outside researchers. Compared to other social-media behemoths, its business model is far less reliant on using secretive algorithms to monetize its users’ data and behaviors, giving it an opportunity to be open in ways that the rest seldom are. “The way that people use Twitter, it becomes a little bit easier to see things and understand things,” says Jason Kint, CEO of publisher trade group Digital Content Next. “Whereas it’s incredibly difficult with YouTube and I’d say with Facebook it’s fairly difficult.”
Toward this more collaborative end, and inspired by research conducted by nonprofit Cortico and MIT’s Laboratory for Social Machines, Twitter announced in March that it will attempt to measure its own “conversational health.” It invited other organizations to participate in this process, and Twitter says it will reveal its first partners in July.
The effort is intriguing, but the crowdsourced initiative also sounds eerily similar to Twitter’s Trust and Safety Council, whose mission since it was convened in February 2016 has been for advocates, academics, and grassroots organizations to provide input on the company’s safety approach.
Many people who worked for Twitter want not a metric but a mea culpa. According to one source who has discussed these issues with the company’s leadership, “Their response to everything was basically, ‘Look, we hear you, but you can’t blame Twitter for what happened. If it wasn’t us, it would’ve been another medium.’ The executives didn’t own up to the fact that we are responsible, and that was one of the reasons why I quit.”
Even Senator Warner believes that before his colleagues consider legislation, the tech companies’ CEOs ought to testify before Congress. “I want them all, not just Dorsey. I want Mark and I want [Google cofounders] Sergey [Brin] and Larry [Page],” he said. “Don’t send your lawyers, don’t send the policy guys. They owe the American public an explanation.”
When Twitter debuted its new health metrics initiative, the American public seemed to finally get one, after Dorsey tweeted about Twitter, “We didn’t fully predict or understand the real-world negative consequences. We acknowledge that now.” He continued: “We aren’t proud of how people have taken advantage of our service, or our inability to address it fast enough. . . . We’ve focused most of our efforts on removing content against our terms, instead of building a systemic framework to help encourage more healthy debate, conversations, and critical thinking. This is the approach we now need.”
One week later, Dorsey continued to acknowledge past missteps during a 47-minute live video broadcast on Twitter. “We will make mistakes–I will certainly make mistakes,” he said. “I have done so in the past around this entire topic of safety, abuse, misinformation, [and] manipulation on the platform.”
The point of the live stream was to talk more about measuring discourse, and Dorsey tried to answer user-submitted questions. But the hundreds of real-time comments scrolling by on the screen illustrated the immense challenge ahead. As the video continued, his feed filled with anti-Semitic and homophobic insults, caustic complaints from users who fear Twitter is silencing their beliefs, and plaintive cries for the company to stop racism. Stroking his beard, Dorsey squinted at his phone, watching the bad speech flow as he searched for the good.
The Aftermath: Acting Out, a Sample Example
YouTube shooting: Nasim Aghdam shoots three before killing herself at San Bruno, California, headquarters
Terrified employees fled as gunfire rang out at YouTube's sprawling headquarters in San Bruno, California, on Tuesday, prompting a massive police response and evacuation as victims were transported to nearby hospitals. San Bruno police identified the suspect late Tuesday as Nasim Najafi Aghdam, 39, who was found dead from what authorities believe is a self-inflicted gunshot wound.
San Bruno Police Chief Ed Barberini said three people were transported to local hospitals with gunshot wounds.
His department said it is working to identify a motive for the shooting. Earlier reports indicated the suspect may have known one of the victims, but police said late Tuesday that "at this time there is no evidence that the shooter knew the victims of this shooting or that individuals were specifically targeted."
Barberini said police arrived on the scene at 12:48 p.m. local time and encountered frantic employees fleeing the building. "It was very chaotic as you can imagine," he said.
Responding officers encountered one victim with a gunshot wound toward the front of the building before finding the deceased suspect, Barberini said. Several minutes later, police located two additional victims at an adjacent building.
Barberini later said the suspect used a handgun and there was no further threat to the community.
San Bruno police investigate motive
Police said they are investigating the motive for the shooting, but Aghdam's videos and website are filled with criticism of YouTube. Sources said she initially asked for one of the male victims by name, and that she used a 9mm handgun during the shooting.
Jaclyn Schildkraut, an expert on mass shootings research and assistant professor at the State University of New York (SUNY), told CBS News it was "very uncommon" to see a female suspect carry out this type of shooting.
Women made up only four percent of mass shooting suspects in the U.S. between 1966 and 2016, Schildkraut said. However, Tuesday's shooting might not fit the definition of a mass shooting. The Gun Violence Archive defines it as four or more people shot or killed -- excluding the shooter.
Suspect criticized YouTube
The suspect's father, Ismail Aghdam, told CBS Los Angeles that his daughter had been missing for several days and that he called police because he was concerned about her recent anger at YouTube. He said police eventually found his daughter in a car in Mountain View, about 28 miles south of San Bruno.
When the family realized she was close to YouTube's headquarters, they told police she said the company was "ruining her life." He said police told the family they would keep an eye on her. The family believes she did not know anyone at YouTube personally.
A Mountain View Police spokesperson confirmed to CBS News that they located a woman by the same name asleep inside a car early Tuesday. They confirmed that this was a missing person from Southern California and had notified her family.
Ismail Aghdam told Mercury News his daughter was angry because the company stopped paying for content she posted online.
Her website accuses "new closed-minded" YouTube employees of reducing her view count, suppressing her and discouraging her from creating content on the video platform. In a now-deleted video, she complained that YouTube began filtering her page and adding age restrictions to her videos.
Hospital update on victims
A spokesman for San Francisco General Hospital told CBS News it has received three patients: a 36-year-old man in critical condition, a 32-year-old woman in serious condition and a 27-year-old woman in fair condition.
Heavily armed police surrounded the facility, with armored SWAT vehicles stationed outside. Police officers could be seen patting down employees evacuating the campus to a nearby parking lot, where they were surrounded by police cars.
White House press secretary Sarah Sanders said President Trump has been briefed on the shooting and they are "monitoring the ongoing situation."
Mr. Trump tweeted his "thoughts and prayers" to everyone involved.
Was just briefed on the shooting at YouTube’s HQ in San Bruno, California. Our thoughts and prayers are with everybody involved. Thank you to our phenomenal Law Enforcement Officers and First Responders that are currently on the scene.
— Donald J. Trump (@realDonaldTrump) April 3, 2018
The FBI and the San Francisco Field Division of the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) said they were responding to the scene.
Several employees tweeted they heard gunfire Tuesday afternoon. Vadim Lavrusik said he barricaded himself and others inside a room before they were able to escape safely.
Active shooter at YouTube HQ. Heard shots and saw people running while at my desk. Now barricaded inside a room with coworkers.
— Vadim Lavrusik (@Lavrusik) April 3, 2018
Todd Sherman, a product manager at the company, tweeted that he "saw blood drips on the floor and stairs."
I looked down and saw blood drips on the floor and stairs. Peaked around for threats and then we headed downstairs and out the front.
— Todd Sherman (@tdd) April 3, 2018
Google, YouTube's parent company, said in a statement that they are "coordinating with authorities and will provide official information here from Google and YouTube as it becomes available."
Google CEO Sundar Pichai said the company is doing everything it can to support the victims and their families.
"I know a lot of you are in shock right now. Over the coming days, we will continue to provide support to help everyone in our Google family heal from this unimaginable tragedy," Pichai said in a statement.
Where in the world is ... San Bruno?
YouTube's headquarters is about 12 miles south of downtown San Francisco, close to San Francisco International Airport. It encompasses about 200,000 square feet, and YouTube leases the building from Gap, Inc., according to a 2017 article in the San Francisco Business Times.
Google says there are more than 1,100 employees at the office and that YouTube is San Bruno's largest employer, with a variety of people dedicated to engineering and sales. About 43,000 residents live in the city.
The owner of a nearby restaurant told CBS News he was outside smoking a cigarette when he heard several pops. Denny, who didn't want to give his last name, said there was a brief pause in the gunfire before it continued. He said he heard a total of about 15 to 17 shots.
"It went on for awhile, those shots, it wasn't like emptying the clip like 'boom- boom - boom,' it was more of a slower pace," Denny said.