Fake News Still Has a Home on Facebook

Stuart Thompson collected and analyzed data on thousands of Facebook posts for this article.

On the morning of Jan. 6, 2021, Christopher Blair’s fake news empire was humming along.

Mr. Blair had been earning as much as $15,000 in some months by posting false stories about Democrats and the election to Facebook, reaching millions of people each month.

But after a mob of Trump supporters attacked the U.S. Capitol, his growing enterprise came to an abrupt halt. Facebook seemed to recognize its own role in fomenting an insurrection and tweaked its algorithm to limit the spread of political content, fake and otherwise. Mr. Blair watched his engagement flatline.

“It just kind of crashed — anything political crashed for about six months,” he said.

Today, though, Mr. Blair has fully recovered, and then some. His false posts — which he insists are satire intended to mock conservatives — are receiving more interactions on Facebook than ever, surging to 7.2 million interactions already this year compared with one million in all of 2021.

Mr. Blair has survived Facebook’s tweaks by pivoting away from politicians and toward culture war topics like Hollywood elites and social justice issues.

When Robert De Niro appeared outside a Manhattan courthouse last month to criticize former President Donald J. Trump, for example, Mr. Blair dashed off a false post claiming that a conservative actor had called him “horrible” and “ungodly.” It received nearly 20,000 shares.

Many writers like him — who publish falsehoods to fringe websites and social media accounts in a bid for clicks that can translate into profitable ad revenue — have also leaned into culture war topics. So far this year, only a quarter of the Facebook content that was rated “false” by PolitiFact, a fact-checking website, focused on politics or politicians, with nearly half focusing on issues like transgender athletes, liberal celebrities or health alternatives.

The success of those posts underscores a persistent reality on Facebook and similar platforms: Fake news is still finding an audience online.

The pivot has been so successful that Mr. Blair has seen an array of competitors spring up, many also calling their posts “satire.” They have copied his content and used artificial intelligence tools to supercharge their work.

“After what happened on Jan. 6, there was some progress, and then almost immediately that progress was rolled back,” said Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights, who studies online disinformation. “I think we’re actually more vulnerable to this today than we were in spring of 2021.”

A spokeswoman for Meta, which owns Facebook, responded by highlighting the company’s misinformation policy and its efforts to combat falsehoods by limiting the spread of certain low-quality content.

Mr. Blair, a 52-year-old former construction foreman, is an avowed liberal.

He doesn’t see his work as fake news. He has long defended himself, including in profiles in The Washington Post and The Boston Globe, as a comedian who trolls conservative Facebook users into believing news that they should clearly question. He compares his work to that of Sacha Baron Cohen, the British comic who frequently dupes conservative Americans in an attempt to ridicule them. Mr. Blair uses a small “satire” label on each image he posts to Facebook.

But his headlines are often indistinguishable from many of the falsehoods that are posted to the social network.

Facebook allows satirical pages, whether or not they use a “satire” label. But the term has also become a popular defense for fake news operators, who typically disclose that their content is satire only in an obscure section of their Facebook pages, or omit the disclosure entirely.

“It’s a cat-and-mouse game,” said David Lazer, a professor at Northeastern University who has studied disinformation. “Wherever there’s a loophole in enforcement, it’s going to be a place that activity will go.”

Facebook’s attempts to limit the spread of political content left Mr. Blair and his contributors searching for a new approach.

“We used to kill Hillary Clinton every Saturday in the most ridiculous ways,” said Joe LaForm, a 48-year-old truck driver who identifies as a liberal and has contributed to Mr. Blair’s Facebook page. “You know, she’d get run over by a monster truck at a monster truck rally.”

“We stopped doing that,” he added, because of those restrictions on political content.

Mr. Blair now posts dozens of false stories to the social network each week on his main account, which has more than 320,000 followers and more than 225,000 likes. He populates his posts with a colorful cast of celebrities: actors like Tim Allen and Whoopi Goldberg or musicians like Jason Aldean and Kid Rock. He often stages them in dramatic but entirely fictitious feuds over culture war topics. A post from April, claiming that Beyoncé was criticized for “playing dress-up” by releasing country music, received more than 50,000 shares and 28,000 comments.

“If it’s somebody on the right, I reward them. If it’s somebody on the left, I punish them,” Mr. Blair said in a phone interview. “It’s my method.”

This was not the only pivot Mr. Blair had to make. After Facebook started down-ranking posts that linked to low-quality websites, Mr. Blair started posting only images and memes. Now, when a post seems to be a hit, he will add the link as the pinned comment.

“I know exactly what happened, in every situation, and why,” Mr. Blair said of the ups and downs of publishing on Facebook. “I’m constantly adjusting.”

Those pivots have rippled through the industry, with similar falsehoods appearing on Facebook pages with even larger audiences, like “Donald Trump Is My President,” which has more than 1.8 million followers. Some posts are shared directly to groups filled with conservatives, like fan pages for Tucker Carlson and Jesse Watters, two right-leaning anchors.

Many of the accounts have described themselves as news outlets. NewsGuard, a company that tracks online disinformation, identified 15 such accounts, with names like “Daily News” or “Breaking News USA,” that shared falsehoods about companies like Disney, Paramount, Nike and Tyson Foods.

“There are just tons and tons and tons of headlines being churned out every single day,” said Coalter Palmer, an analyst at NewsGuard who conducted the research. “It’s a lot of cultural war stuff.”

Today, Mr. Blair is facing stiffer competition from pages that use A.I. tools to write fake stories about the celebrities and culture war issues he has highlighted. NewsGuard has identified nearly 1,000 websites that use A.I. tools to write unreliable news articles, up from 138 one year ago.

That competition includes SpaceXMania, a network of Facebook pages with at least 890,000 followers.

“My material, my cast of characters, my keywords, my hot buttons — they take everything,” Mr. Blair said of the recent plagiarism. “They put it into an A.I. program, and it just spits out headlines. There’s nothing original about any of it.”

When Mr. Blair wrote a false story recently about Harrison Butker, a National Football League player who garnered national attention for his conservative views on women, SpaceXMania quickly followed suit with stories of its own about Mr. Butker — earning hundreds of thousands more comments than Mr. Blair.

The operator behind SpaceXMania is based in Pakistan and identifies himself by the name Shabayer, according to Facebook messages with Mr. Blair that he shared with The New York Times. He has cited Mr. Blair as a “role model” for his start-up, according to the messages.

“I’m a liberal troll social justice warrior serving satirical nonsense with a mission,” Mr. Blair said. “He’s selling fake news to American conservatives from Pakistan for profit.”

A representative for SpaceXMania initially responded to an email, but stopped responding after a reporter sent questions.

Many of SpaceXMania’s articles were written entirely by artificial intelligence tools like ChatGPT, according to a Times analysis that used software to detect A.I.-written text.

“He’s probably the most effective at using my stuff,” Mr. Blair said. “He’s trying to get away from the A.I., but he never will.”




How Mark Zuckerberg’s Meta Failed Children on Safety, States Say

In April 2019, David Ginsberg, a Meta executive, emailed his boss, Mark Zuckerberg, with a proposal to research and reduce loneliness and compulsive use on Instagram and Facebook.

In the email, Mr. Ginsberg noted that the company faced scrutiny for its products’ impacts “especially around areas of problematic use/addiction and teens.” He asked Mr. Zuckerberg for 24 engineers, researchers and other staff, saying Instagram had a “deficit” on such issues.

A week later, Susan Li, now the company’s chief financial officer, informed Mr. Ginsberg that the project was “not funded” because of staffing constraints. Adam Mosseri, Instagram’s head, ultimately declined to finance the project, too.

The email exchanges are just one slice of the evidence cited in more than a dozen lawsuits filed since last year by the attorneys general of 45 states and the District of Columbia. The states accuse Meta of unfairly ensnaring teenagers and children on Instagram and Facebook while deceiving the public about the hazards. Using a coordinated legal approach reminiscent of the government’s pursuit of Big Tobacco in the 1990s, the attorneys general seek to compel Meta to bolster protections for minors.

A New York Times analysis of the states’ court filings — including roughly 1,400 pages of company documents and correspondence filed as evidence by the State of Tennessee — shows how Mr. Zuckerberg and other Meta leaders repeatedly promoted the safety of the company’s platforms, playing down risks to young people, even as they rejected employee pleas to bolster youth guardrails and hire additional staff.

In interviews, the attorneys general of several states suing Meta said Mr. Zuckerberg had led his company to drive user engagement at the expense of child welfare.

“A lot of these decisions ultimately landed on Mr. Zuckerberg’s desk,” said Raúl Torrez, the attorney general of New Mexico. “He needs to be asked explicitly, and held to account explicitly, for the decisions that he’s made.”

The state lawsuits against Meta reflect mounting concerns that teenagers and children on social media can be sexually solicited, harassed, bullied, body-shamed and algorithmically induced into compulsive online use. Last Monday, Dr. Vivek H. Murthy, the United States surgeon general, called for warning labels to be placed on social networks, saying the platforms present a public health risk to young people.

His warning could boost momentum in Congress to pass the Kids Online Safety Act, a bill that would require social media companies to turn off features for minors, like bombarding them with phone notifications, that could lead to “addiction-like” behaviors. (Critics say the bill could hinder minors’ access to important information. The News/Media Alliance, a trade group that includes The Times, helped win an exemption in the bill for news sites and apps that produce news videos.)

In May, New Mexico authorities arrested three men who, Mr. Torrez said, were accused of targeting children for sex after soliciting state investigators posing as children on Instagram and Facebook. Mr. Torrez, a former child sex crimes prosecutor, said Meta’s algorithms enabled adult predators to identify children they would not have found on their own.

Meta disputed the states’ claims and has filed motions to dismiss their lawsuits.

In a statement, Liza Crenshaw, a spokeswoman for Meta, said the company was committed to youth well-being and had many teams and specialists devoted to youth experiences. She added that Meta had developed more than 50 youth safety tools and features, including limiting age-inappropriate content and restricting teenagers under 16 from receiving direct messages from people they didn’t follow.

“We want to reassure every parent that we have their interests at heart in the work we’re doing to help provide teens with safe experiences online,” Ms. Crenshaw said. The states’ legal complaints, she added, “mischaracterize our work using selective quotes and cherry-picked documents.”

But parents who say their children died as a result of online harms challenged Meta’s safety assurances.

“They preach that they have safety protections, but not the right ones,” said Mary Rodee, an elementary school teacher in Canton, N.Y., whose 15-year-old son, Riley Basford, was sexually extorted on Facebook in 2021 by a stranger posing as a teenage girl. Riley died by suicide several hours later.

Ms. Rodee, who sued the company in March, said Meta had never responded to the reports about her son’s death that she submitted through the site’s automated channels.

“It’s pretty unfathomable,” she said.

Meta has long wrestled with how to attract and retain teenagers, who are a core part of the company’s growth strategy, internal company documents show.

Teenagers became a major focus for Mr. Zuckerberg as early as 2016, according to the Tennessee complaint, when the company was still known as Facebook and owned apps including Instagram and WhatsApp. That spring, an annual survey of young people by the investment bank Piper Jaffray reported that Snapchat, a disappearing-message app, had surpassed Instagram in popularity.

Later that year, Instagram introduced a similar disappearing photo- and video-sharing feature, Instagram Stories. Mr. Zuckerberg directed executives to focus on getting teenagers to spend more time on the company’s platforms, according to the Tennessee complaint.

The “overall company goal is total teen time spent,” wrote one employee, whose name is redacted, in an email to executives in November 2016, according to internal correspondence among the exhibits in the Tennessee case. Participating teams should increase the number of employees dedicated to projects for teenagers by at least 50 percent, the email added, noting that Meta already had more than a dozen researchers analyzing the youth market.

In April 2017, Kevin Systrom, Instagram’s chief executive, emailed Mr. Zuckerberg asking for more staff to work on mitigating harms to users, according to the New Mexico complaint.

Mr. Zuckerberg replied that he would include Instagram in a plan to hire more staff, but he said Facebook faced “more extreme issues.” At the time, legislators were criticizing the company for having failed to hinder disinformation during the 2016 U.S. presidential campaign.

Mr. Systrom asked colleagues for examples to show the urgent need for more safeguards. He soon emailed Mr. Zuckerberg again, saying Instagram users were posting videos involving “imminent danger,” including a boy who shot himself on Instagram Live, the complaint said.

Two months later, the company announced that the Instagram Stories feature had hit 250 million daily users, dwarfing Snapchat. Mr. Systrom, who left the company in 2018, didn’t respond to a request for comment.

Meta said an Instagram team developed and introduced safety measures and experiences for young users. The company didn’t respond to a question about whether Mr. Zuckerberg had provided the additional staff.

In January 2018, Mr. Zuckerberg received a report estimating that four million children under the age of 13 were on Instagram, according to a lawsuit filed in federal court by 33 states.

Facebook’s and Instagram’s terms of use prohibit users under 13. But the company’s sign-up process for new accounts enabled children to easily lie about their age, according to the complaint. Meta’s practices violated a federal children’s online privacy law requiring certain online services to obtain parental consent before collecting personal data, like contact information, from children under 13, the states allege.

In March 2018, The Times reported that Cambridge Analytica, a voter profiling firm, had covertly harvested the personal data of millions of Facebook users. That set off more scrutiny of the company’s privacy practices, including those involving minors.

Mr. Zuckerberg testified the next month at a Senate hearing, “We don’t allow people under the age of 13 to use Facebook.”

Attorneys general from dozens of states disagree.

In late 2021, Frances Haugen, a former Facebook employee, disclosed thousands of pages of internal documents that she said showed the company valued “profit above safety.” Lawmakers held a hearing, grilling her on why so many children had accounts.

Meanwhile, company executives knew that Instagram use by children under 13 was “the status quo,” according to the joint federal complaint filed by the states. In an internal chat in November 2021, Mr. Mosseri acknowledged those underage users and said the company’s plan to “cater the experience to their age” was on hold, the complaint said.

In its statement, Meta said Instagram had measures in place to remove underage accounts when the company identified them. Meta has said it has regularly removed hundreds of thousands of accounts that could not prove they met the company’s age requirements.

A company debate over beauty filters on Instagram encapsulated the internal tensions over teenage mental health — and ultimately the desire to engage more young people prevailed.

It began in 2017 after Instagram introduced camera effects that enabled users to alter their facial features to make them look funny or “cute/pretty,” according to internal emails and documents filed as evidence in the Tennessee case. The move was made to boost engagement among young people. Snapchat already had popular face filters, the emails said.

But a backlash ensued in the fall of 2019 after Instagram introduced an appearance-altering filter, Fix Me, which mimicked the nip/tuck lines that cosmetic surgeons draw on patients’ faces. Some mental health experts warned that the surgery-like camera effects could normalize unrealistic beauty standards for young women, exacerbating body-image disorders.

As a result, Instagram in October 2019 temporarily disallowed camera effects that made dramatic, surgical-looking facial alterations — while still permitting obviously fantastical filters, like goofy animal faces. The next month, concerned executives proposed a permanent ban, according to Tennessee court filings.

Other executives argued that a ban would hurt the company’s ability to compete. One senior executive sent an email saying Mr. Zuckerberg had questioned whether the data showed real harm.

In early 2020, ahead of an April meeting with Mr. Zuckerberg to discuss the issue, employees prepared a briefing document on the ban, according to the Tennessee court filings. One internal email noted that employees had spoken with 18 mental health experts, each of whom raised concerns that cosmetic surgery filters could “cause lasting harm, especially to young people.”

But the meeting with Mr. Zuckerberg was canceled. Instead, the chief executive told company leaders that he was in favor of lifting the ban on beauty filters, according to an email he sent that was included in the court filings.

Several weeks later, Margaret Gould Stewart, then Facebook’s vice president for product design and responsible innovation, reached out to Mr. Zuckerberg, according to an email included among the exhibits. In the email, she noted that as a mother of teenage daughters, she knew social media put “intense” pressure on girls “with respect to body image.”

Ms. Stewart, who subsequently left Meta, did not respond to an email seeking comment.

In the end, Meta said it barred filters “that directly promote cosmetic surgery, changes in skin color or extreme weight loss” and clearly indicated when one was being used.

In 2021, Meta began planning a new social app aimed specifically at children, to be called Instagram Kids. In response, 44 attorneys general wrote a letter that May urging Mr. Zuckerberg to “abandon these plans.”

“Facebook has historically failed to protect the welfare of children on its platforms,” the letter said.

Meta subsequently paused plans for an Instagram Kids app.

By August, the company’s work to protect users’ well-being had become “increasingly urgent” for Meta, according to another email to Mr. Zuckerberg filed as an exhibit in the Tennessee case. Nick Clegg, now Meta’s head of global affairs, warned his boss of mounting concerns from regulators about the company’s impact on teenage mental health, including “potential legal action from state A.G.s.”

Describing Meta’s youth well-being efforts as “understaffed and fragmented,” Mr. Clegg requested funding for 45 employees, including 20 engineers.

In September 2021, The Wall Street Journal published an article saying Instagram knew it was “toxic for teen girls,” escalating public concerns.

An article in The Times that same month mentioned a video that Mr. Zuckerberg had posted of himself riding across a lake on an “electric surfboard.” Internally, Mr. Zuckerberg objected to that description, saying he was actually riding a hydrofoil he pumped with his legs and wanted to post a correction on Facebook, according to employee messages filed in court.

Mr. Clegg found the idea of a hydrofoil post “pretty tone deaf given the gravity” of recent accusations that Meta’s products caused teenage mental health harms, he said in a text message with communications executives included in court filings.

Mr. Zuckerberg went ahead with the correction.

In November 2021, Mr. Clegg, who had not heard back from Mr. Zuckerberg about his request for more staff, sent a follow-up email with a scaled-down proposal, according to Tennessee court filings. He asked for 32 employees, none of them engineers.

Ms. Li, the finance executive, responded a few days later, saying she would defer to Mr. Zuckerberg and suggested that the funding was unlikely, according to an internal email filed in the Tennessee case. Meta didn’t respond to a question about whether the request had been granted.

A few months later, Meta said that although its revenue for 2021 had increased 37 percent to nearly $118 billion from a year earlier, fourth-quarter profit plummeted because of a $10 billion investment in developing virtual reality products for immersive realms, known as the metaverse.

Last fall, the Match Group, which owns dating apps like Tinder and OkCupid, found that ads the company had placed on Meta’s platforms were running adjacent to “highly disturbing” violent and sexualized content, some of it involving children, according to the New Mexico complaint. Meta removed some of the posts flagged by Match, telling the dating giant that “violating content may not get caught a small percentage of the time,” the complaint said.

Dissatisfied with Meta’s response, Bernard Kim, the chief executive of the Match Group, reached out to Mr. Zuckerberg by email with a warning, saying his company could not “turn a blind eye,” the complaint said.

Mr. Zuckerberg didn’t respond to Mr. Kim, according to the complaint.

Meta said the company had spent years building technology to combat child exploitation.

Last month, a judge denied Meta’s motion to dismiss the New Mexico lawsuit. But the court granted a request to drop Mr. Zuckerberg, who had been named as a defendant, from the case.






These Grieving Parents Want Congress to Protect Children Online

Deb Schmill has become a fixture on Capitol Hill. Last week alone, she visited the offices of 13 lawmakers, one of more than a dozen trips she has made from her home near Boston over the past two years.

In each meeting, Ms. Schmill talks about her daughter Becca, who died in 2020 at age 18. Ms. Schmill said Becca had died after taking fentanyl-laced drugs bought on Facebook. Before that, she said, her daughter was raped by a boy she had met online, then was cyberbullied on Snapchat.

“I have to do what I can to help pass legislation to protect other children and to prevent what happened to Becca from happening to them,” Ms. Schmill, 60, said. “It’s my coping mechanism.”

Ms. Schmill is among dozens of parents who are lobbying for the Kids Online Safety Act, or KOSA, a bill that would require social media, gaming and messaging apps to limit features that could heighten depression or bullying or lead to sexual exploitation. The bill, which has the greatest momentum of any broad tech industry legislation in years, would also require the tech services to turn on the highest privacy and safety settings by default for users under 17 and let youths opt out of some features that can lead to compulsive use.

Modeling themselves in part on Mothers Against Drunk Driving, which pushed for the 1984 federal law mandating a minimum drinking age of 21, about 20 of the parents have formed a group called ParentsSOS. Like members of MADD, the parents carry photos of their children who they say lost their lives because of social media, and explain their personal tragedies to legislators.

Dozens more parents have created organizations to fight social media addiction, eating disorders and fentanyl poisoning. All are pushing KOSA, swarming Capitol Hill to share how they say their children were harmed.

The bill, introduced in 2022, has bipartisan support in the Senate and is poised for a vote. It recently passed a key House subcommittee vote. President Biden has also supported the bill.

Dr. Vivek Murthy, the U.S. surgeon general, said this week that social media had contributed to an “emergency” mental health crisis among youths, adding more momentum.

But KOSA still faces steep obstacles. Tech lobbyists and the American Civil Liberties Union are fighting it, saying it could undermine free speech. Others worry that limiting children’s access to social media may further isolate vulnerable youths, including those in the L.G.B.T.Q. community.

To amp up the pressure as Congress’s August recess approached, ParentsSOS launched a Father’s Day ad campaign in Times Square in New York and a commercial campaign on streaming TV. (Fairplay, a child advocacy nonprofit, and the Eating Disorders Coalition provided funding.)

“I’ve had friends say, ‘Just let go and move on because it’s so painful,’ but I could not be quiet about what I’ve learned, which is that social media companies don’t have any accountability,” said Kristin Bride, 57, who lives in Oregon. Her son Carson died by suicide in 2020 at the age of 16 after what she said had been relentless bullying via an anonymous messaging app connected to Snapchat.

Snap, X and Microsoft have said they support KOSA.

“The safety of young people is an urgent priority, and we call on Congress to pass the Kids Online Safety Act,” Snapchat’s parent company, Snap, said in a statement. Snap no longer allows anonymous messaging apps to connect to its platform.

YouTube and Meta, which owns Facebook and Instagram, declined to comment. TikTok did not respond to a request for comment.

The parents’ push aligns with a global movement to regulate youth safety online. The European Union’s Digital Services Act of 2022 requires social media sites to block harmful content and restricts the use of features that can lead to addictive use by youths. Last year, Britain adopted a similar online safety law for children.

Domestically, 45 state attorneys general have sued Meta over allegations that it harms young users. Last year, 23 state legislatures adopted child safety laws, and this week New York adopted a law that restricts social media platforms from using recommendation feeds that could lead to compulsive consumption by users under 18.

Many of the parents turned lobbyists cited “The Social Dilemma,” a 2020 documentary about social media harms, as a call to action. They said they were also enraged by revelations in 2021 by the whistle-blower Frances Haugen, a former Facebook employee who testified in Congress that the company knew the dangers for young people on its apps.

“For the first time, I understood that it was the design, it was the companies,” said Christine McComas, 59, who lives in Maryland. She said her daughter Grace died at 15 by suicide in 2012 after being bullied on Twitter.

Many of the parents said the Center for Humane Technology, a nonprofit that advocates social media regulations and was part of the documentary, had connected them after they reached out.

Maurine Molak’s son David died by suicide in 2016 at age 16 after what she said had been cyberbullying on Instagram and messaging apps. Another of her sons found an online memorial page for Grace McComas and encouraged his mother to get in touch with Ms. McComas via email.

The two mothers began having phone calls and connected with other parents, too. Ms. Molak had set up a foundation to educate the public about online bullying and to push for anti-bullying state legislation.

By early 2022, some of the parents had begun working with Fairplay to push for state child safety laws. That February, Senators Richard Blumenthal, Democrat of Connecticut, and Marsha Blackburn, Republican of Tennessee, introduced KOSA.

It had early but modest support, moving out of a Senate committee before stalling for months. Growing impatient, several parents showed up on Capitol Hill that November. Ms. Bride and other parents said they had entered the office of Senator Maria Cantwell, chair of the Commerce Committee and Democrat of Washington, and demanded a meeting. She met with them the next day.

Ms. Cantwell was visibly moved and rubbed the backs of several parents as they talked about their children, Ms. Bride said.

“Having to look at us and to know that our children are no longer with us hits them, and it has gotten people on board,” Ms. Bride said. Ms. Cantwell’s office declined to comment.

Ms. Cantwell became a vocal supporter of the bill, then tried to attach it to a year-end spending package, an effort that failed.

For much of last year, the bill languished, in part over concerns that its language requiring companies to design sites to protect children was too vague. Some legislators were also concerned that the bill would give attorneys general too much power to police certain content, a potential political weapon.

Discouraged, the parents called one another to stay motivated. In September, Ms. Schmill rented a short-term apartment a 10-minute walk from the Capitol. She changed in and out of sneakers carried in a canvas bag as she visited the offices of nearly all 100 senators to tell them about Becca.

“As I thought about facing another year of her birth date and death date, for me to cope with having to live through another anniversary, I had to feel like I had to be doing something productive in her memory,” Ms. Schmill said.

Late last year, around the time the Senate Judiciary Committee announced a January hearing on child safety with tech chief executives, the parents decided to form ParentsSOS. The initiative, intended to help them gain more support for KOSA, was funded by Fairplay and Ms. Molak’s foundation focused on cyberbullying.

The parents — communicating in emails and texts and over Zoom — decided to go to the child safety hearing to confront the executives from Discord, Meta, Snap, TikTok and X with photos of their children.

At the hearing, Senator Josh Hawley, Republican of Missouri, tried to force Mark Zuckerberg, Meta’s chief executive, to apologize to the parents. Mr. Zuckerberg turned to the parents and said he was “sorry for everything you’ve all gone through.”

Todd Minor, a member of ParentsSOS who was in attendance, said the apology rang hollow. His 12-year-old son, Matthew, died in 2019 after taking part, Mr. Minor said, in a “blackout challenge” on TikTok, in which people choke themselves.

“We need KOSA. It’s that simple,” Mr. Minor, 48, said.

The parents then met with the Senate leader, Chuck Schumer, Democrat of New York, who promised to bring KOSA to a floor vote by June 20, according to Ms. Schmill and others in the meetings.

In April, the House introduced a companion bill.

Ms. Molak, 61, a San Antonio resident, met with Representative Randy Weber, Republican of Texas, last month to talk about her son David.

“Why am I not on this bill? Let’s get on this!” Mr. Weber, a member of the House Energy and Commerce Committee, said to his staff during the meeting, according to Ms. Molak. Mr. Weber’s office did not respond to a request for comment.

But progress in that committee stalled this month. The Senate version of the bill still faces opposition.

Ms. Schmill and three of the other parents trekked back to the Capitol again last week.

“I need to keep busy, to keep trying,” Ms. Schmill said.


If you are having thoughts of suicide, call or text 988 to reach the 988 Suicide and Crisis Lifeline or go to SpeakingOfSuicide.com/resources for a list of additional resources.




Cyberattack on CDK Global Disrupts Car Sales in U.S. and Canada

Thousands of auto dealers across the United States and Canada are suffering disruptions to their operations as a result of cyberattacks on a provider of critical software and data services used in auto retailing.

The provider, CDK Global, said it was targeted in two attacks on Wednesday, prompting the company to shut down its systems to prevent the loss of customer data and to allow testing and other measures to restore its services.

“We are assessing the impact and providing regular updates to our customers,” CDK Global said in a statement. “We remain vigilant in our efforts to reinstate our services and get our dealers back to business as usual as quickly as possible.”

CDK provides services to more than 15,000 retail locations. Its dealer management systems store customer records and automate much of the paperwork and data involved in selling and servicing cars and trucks.

Dealers said the outage had slowed sales and forced them to find alternative methods to produce the titles, contracts, leases, registration cards and other forms that must be delivered to customers, banks and state motor vehicle authorities.

“It is definitely annoying, no doubt,” said Brian Benstock, general manager and vice president of Paragon Honda in the New York borough of Queens. “But we’re still open for business. We’re still selling cars.”

He said his franchise had other systems to retrieve customer data. “We can produce contracts,” he said. “For customers, it’s pretty seamless.”

The disruption has come at a critical time for dealers as they head into the final two weekends of the month, typically a busy time for new-car sales. Many are also preparing for Fourth of July sales and other summer promotions.

Dealers said that in some cases they were reverting to writing contracts by hand, or asking customers to wait a few days to take delivery of their vehicles.

They have less leeway in servicing and repairing vehicles, since customers often expect their cars back within a few hours, but in most cases the lack of access to customer data won’t prevent technicians from performing repairs.


