First Came ‘Spam.’ Now, With A.I., We’ve Got ‘Slop’

You may not know exactly what “slop” means in relation to artificial intelligence. But on some level you probably do.

Slop, at least in the fast-moving world of online message boards, is a broad term that has gained some traction as a way to describe shoddy or unwanted A.I. content in social media, art, books and, increasingly, search results.

Google suggesting that you could add nontoxic glue to make cheese stick to a pizza? That’s slop. So is a low-price digital book that seems like the one you were looking for, but not quite. And those posts in your Facebook feed that seemingly came from nowhere? They’re slop as well.

The term became more prevalent last month when Google incorporated its Gemini A.I. model into its U.S.-based search results. Rather than pointing users toward links, the service attempts to solve a query directly with an “A.I. Overview” — a chunk of text at the top of a results page that uses Gemini to form its best guess at what the user is looking for.

The change was a reaction to Microsoft having incorporated A.I. into its search results on Bing, and it had some immediate missteps, leading Google to declare that it would roll back some of its A.I. features until the problems could be ironed out.

But with the dominant search engines having made A.I. a priority, it appears that vast quantities of information generated by machines, rather than largely curated by humans, will be served up as a daily part of life on the internet for the foreseeable future.

Hence the term slop, which conjures images of heaps of unappetizing food being shoveled into troughs for livestock. Like that type of slop, A.I.-assisted search comes together quickly, but not necessarily in a way that critical thinkers can stomach.

Kristian Hammond, the director of Northwestern University’s Center for Advancing Safety of Machine Intelligence, noted a problem in the current model: the information from A.I. Overview is being presented as a definitive answer, rather than as a place to start an internet user’s research into a given subject.

“You search for something and you get back what you need in order to think — and it actually encourages you to think,” Mr. Hammond said. “What it’s becoming, in this integration with language models, is something that does not encourage you to think. It encourages you to accept. And that, I think, is dangerous.”

Giving a problem a name can help in targeting it. And while slop is one option, it remains an open question whether it will catch on with a mainstream audience or end up in the slang dustbin with cheugy, bae and skibidi.

Adam Aleksic, a linguist and content creator who uses the handle etymologynerd on social media, believes that slop — which he said has yet to cross over to a broader audience — shows promise.

“I think this is a great example of an unobtrusive word right now, because it is a word we’re all familiar with,” Mr. Aleksic said. “It’s a word that feels like it’s naturally applicable to this situation. Therefore, it’s less in your face.”

The use of slop as a descriptor for low-grade A.I. material seemingly came about in reaction to the release of A.I. art generators in 2022. Some have identified Simon Willison, a developer, as an early adopter of the term — but Mr. Willison, who has pushed for the phrase’s adoption, said it was in use long before he found it.

“I think I might actually have been quite late to the party!” he said in an email.

The term has sprung up on 4chan and Hacker News and in YouTube comments, where anonymous posters sometimes signal their proficiency in complex subject matter by using in-group language.

“What we always see with any slang is that it starts in a niche community and then spreads from there,” Mr. Aleksic said. “Usually, coolness is a factor that helps it spread, but not necessarily. Like, we’ve had a lot of words spread from a bunch of coding nerds, right? Look at the word ‘spam.’ Usually, the word is created because there is a particular group with shared interests, with a shared need to invent words.”

In the short term, the effect of A.I. on search engines and the internet in general may be less extreme than some fear.

News organizations have worried about shrinking online audiences as people rely more on A.I.-generated answers. Data from Chartbeat, a company that researches internet traffic, indicates that there was an immediate drop in referrals from Google Discover to websites in the first days of A.I. Overviews. But that dip has since recovered, and in the first three weeks of the overviews, overall search traffic to more than 2,000 major websites in the U.S. actually went up, according to Chartbeat.

Mr. Willison, who identified himself as an optimist for A.I. when it is used correctly, thought that slop could become the go-to term for junky machine-generated content.

“Society needs concise ways to talk about modern A.I. — both the positives and the negatives,” he said. “‘Ignore that email, it’s spam,’ and ‘Ignore that article, it’s slop,’ are both useful lessons.”



How Netflix’s Corporate Culture Has Changed

Netflix has long been a company known for its secrets: no Nielsen ratings, little feedback on why shows are canceled, no box office numbers for the rare movies that are actually released in theaters.

Yet for a place defined by its opaque approach to the outside world, the streaming giant has long been aggressively transparent internally. The company’s philosophy was immortalized in 2009 when Reed Hastings, the company’s co-founder and then its chief executive, first laid out the corporate ethos in a 125-slide presentation that introduced buzzy new phrases like “stunning colleagues,” “the keeper test” and “honesty always.”

The presentation, with its insistence on constant and unfiltered candor, felt both brutal and refreshingly antithetical to Hollywood’s normal way of doing business. To the frustration of former employees and current competitors, it may just be the blueprint that has enabled Netflix to have so much success while its rivals have stumbled.

Three more culture memos have followed over the years. Before being released, they are pored over and analyzed for months by top executives. At the same time, any employee can pop into the Google Doc where the memo is being assembled to leave a thought or a comment.

The latest iteration of the document, which was released internally on May 8 and will soon be made public, underwent eight months of vetting and received 1,500 comments from employees, according to Sergio Ezama, Netflix’s chief talent officer. It is five pages long (half the length of Mr. Hastings’s final memo in 2022), and some core tenets have changed, however slightly.

When Mr. Hastings titled his 2009 presentation “Netflix Culture,” he gave it the subhead “Freedom and Responsibility.” The idea was that Netflix trusted its employees to act in the best interest of the company. If you want a vacation, take a vacation. If you have a baby and need to go on leave, go on leave. Documents were shared widely throughout the company without any fear of leaks.

While those principles remain in practice, the new memo highlights Netflix’s philosophy of “People Over Process” first: “We hire unusually responsible people who thrive on this openness and freedom.”

The keeper test — defined as “If X wanted to leave, would I fight to keep them?” — now includes this disclaimer: “The keeper test can sound scary. In reality, we encourage everyone to speak to their managers about what’s going well and what’s not on a regular basis.”

There is a sentence in the latest memo that reads, “Not all opinions are created equal” because as the organization has grown to more than 13,000 employees, it is no longer feasible for everyone to weigh in on every decision. “It does not scale,” said Elizabeth Stone, the company’s chief technology officer.

The company is never one to shy away from reorganizing itself — a feature that critics say happens too frequently and leaves many employees worrying that they could be fired any day. Mr. Hastings has moved on to the executive chairman role. Ted Sarandos and Greg Peters are the co-chief executives, and change is always afoot. Still, the latest culture memo feels much more about how the streamer expects its employees to behave rather than a treatise for what it wants to become.

“The key about the Netflix culture is we really try to systematically think what generates long-term excellence,” Mr. Hastings said in a video interview from his home in Santa Cruz, Calif. “Certainly a lot of creativity, a lot of freedom, a lot of focus on innovation, and trying to attract and develop people who are self-responsible.”

Talk to the employees who work at Netflix and the sense is that the cultural tenets have infiltrated their lives in ways they weren’t expecting. Many came in skeptical, assuming the memo itself was a public relations effort to make the company stand out. Yet some of those people now describe it as being 80 to 90 percent accurate.

Ms. Stone, who married months after joining Netflix in 2020, said that she and her husband “use certain language now like, ‘Do you have any feedback for me?’ He would be the first to say at a cocktail party that he’s very good at receiving feedback, and he’s still working on giving feedback.”

The document is made to read as aspirational, and there is always room for improvement.

“Are we always totally direct with each other? No. Are we completely devoid of politics? No,” said Spencer Wang, the vice president of finance and investor relations, who has been with Netflix for nine and a half years. The company is not “perfect across all these dimensions, but I would say it is a remarkably accurate description of what we aspire to be and how we generally operate,” he said.

Reflecting on the initial presentation, Mr. Hastings admitted that “leading with freedom was attractive,” adding, “It was good bait.”

But as the company grew, the concept of freedom and responsibility, which many reduced to “FNR,” became weaponized by some employees as justification for doing whatever they desired. One year an assistant expensed $30,000, according to a company official, because there was no rule saying that it wasn’t allowed.

“We care about freedom when it generates excellence, not for its own sake,” Mr. Hastings said. “In hindsight, this is the draft I wish we had 15 years ago.”

From the beginning, Netflix was never going to be a place where most people stayed for their entire careers. Employment contracts don’t exist, and an employee, no matter the rank, can be let go at any time.

While few leave of their own accord (voluntary resignation ranged from 2.1 to 3.1 percent in the last two years), about 9 percent are asked to leave annually. That may be a relief to those who describe the pace as all-consuming and find the company’s key tenet of being “uncomfortably exciting” untenable. The company warns in the memo that the concept may cause “many people” to choose other places “that are more stable or take fewer risks.”

While some employees, including the two co-chief executives, have been with Netflix for over 15 years, many consider sticking it out for five to be a significant achievement.

Still, some find the pressure invigorating. Brandon Riegg, the vice president of nonfiction and sports, said he had often felt stifled when working at the traditional entertainment studios. He calls the culture at Netflix “a life preserver” that has allowed him to make an impact that wouldn’t have been possible at a traditional studio. Five years ago, he persuaded his bosses to release episodes of the reality show “Rhythm + Flow” in batches for the first time. That practice has been repeated with other reality programs like “Love Is Blind” and scripted programming like “Bridgerton” and “Stranger Things.”

He said that while the strategy ran counter to what Netflix had done in the past, executives were willing to try it.

Their approach, Mr. Riegg said, was that “we hired you, and if you think this is the best thing, and you’ve farmed for dissent, and you’ve taken in all the feedback, and this is where you landed, let’s give it a shot.”

Mr. Hastings looked relaxed during the video interview, and that may be because he’s rid of the jet lag and “insane” schedule that used to wear him down as chief executive. (His new life of philanthropy and owning a ski mountain may also be helping.)

Or maybe it’s because he’s no longer subject to the constant feedback the company is known for — something many employees find jarring when entering the Netflix vortex, especially those coming from outside Silicon Valley.

Mr. Wang said that receiving candid feedback was fine but that as an Asian American, he had initially found it hard to provide it because “it rubbed against my cultural background.” More recently, he said, he was told that he’s “too direct,” so he’s now working on being more sensitive.

Ms. Stone, the chief technology officer, recently recounted being at a happy hour event in New York City where an engineer introduced himself and proceeded to say, “I’m the engineer who wrote the bug in the code that brought the service down two weeks ago.”

“He knew introducing himself that way to me would spark a good conversation about what’s the culture around improvement,” she said. “It wasn’t like: ‘Why is this person still here? This person should be fired.’”

As for Mr. Hastings, he may not have to take any more feedback, but he can still dole it out. He said he appreciated that Mr. Sarandos and Mr. Peters waited a year after his departure to reformulate the culture memo as their own.

“It’s 10 percent better,” he said. “It’s not radically better, but it’s as good as any improvement I ever made on it. So that’s a compliment.”



What the Arrival of A.I. Phones and Computers Means for Our Data

Apple, Microsoft and Google are heralding a new era of what they describe as artificially intelligent smartphones and computers. The devices, they say, will automate tasks like editing photos and wishing a friend a happy birthday.

But to make that work, these companies need something from you: more data.

In this new paradigm, your Windows computer will take a screenshot of everything you do every few seconds. An iPhone will stitch together information across many apps you use. And an Android phone can listen to a call in real time to alert you to a scam.

Is this information you are willing to share?

This change has significant implications for our privacy. To provide the new bespoke services, the companies and their devices need more persistent, intimate access to our data than before. In the past, the way we used apps and pulled up files and photos on phones and computers was relatively siloed. A.I. needs an overview to connect the dots between what we do across apps, websites and communications, security experts say.

“Do I feel safe giving this information to this company?” Cliff Steinhauer, a director at the National Cybersecurity Alliance, a nonprofit focusing on cybersecurity, said about the companies’ A.I. strategies.

All of this is happening because OpenAI’s ChatGPT upended the tech industry nearly two years ago. Apple, Google, Microsoft and others have since overhauled their product strategies, investing billions in new services under the umbrella term of A.I. They are convinced this new type of computing interface — one that is constantly studying what you are doing to offer assistance — will become indispensable.

The biggest potential security risk with this change stems from a subtle shift happening in the way our new devices work, experts say. Because A.I. can automate complex actions — like scrubbing unwanted objects from a photo — it sometimes requires more computational power than our phones can handle. That means more of our personal data may have to leave our phones to be dealt with elsewhere.

The information is being transmitted to the so-called cloud, a network of servers that are processing the requests. Once information reaches the cloud, it could be seen by others, including company employees, bad actors and government agencies. And while some of our data has always been stored in the cloud, our most deeply personal, intimate data that was once for our eyes only — photos, messages and emails — now may be connected and analyzed by a company on its servers.

The tech companies say they have gone to great lengths to secure people’s data.

For now, it’s important to understand what will happen to our information when we use A.I. tools, so I got more information from the companies on their data practices and interviewed security experts. I plan to wait and see whether the technologies work well enough before deciding whether it’s worth it to share my data.

Here’s what to know.

Apple recently announced Apple Intelligence, a suite of A.I. services and its first major entry into the A.I. race.

The new A.I. services will be built into its fastest iPhones, iPads and Macs starting this fall. People will be able to use it to automatically remove unwanted objects from photos, create summaries of web articles and write responses to text messages and emails. Apple is also overhauling its voice assistant, Siri, to make it more conversational and give it access to data across apps.

During Apple’s conference this month when it introduced Apple Intelligence, the company’s senior vice president of software engineering, Craig Federighi, shared how it could work: Mr. Federighi pulled up an email from a colleague asking him to push back a meeting, but he was supposed to see a play starring his daughter that night. His phone then pulled up his calendar, a document containing details about the play and a maps app to predict whether he would be late to the play if he agreed to a later meeting.

Apple said it was striving to process most of the A.I. data directly on its phones and computers, which would prevent others, including Apple, from having access to the information. But for tasks that have to be pushed to servers, Apple said, it has developed safeguards, including scrambling the data through encryption and immediately deleting it.

Apple has also put measures in place so that its employees do not have access to the data, the company said. Apple also said it would allow security researchers to audit its technology to make sure it was living up to its promises.

Apple’s commitment to purging user data from its servers sets it apart from other companies that hold on to data. But Apple has been unclear about which new Siri requests could be sent to the company’s servers, said Matthew Green, a security researcher and an associate professor of computer science at Johns Hopkins University, who was briefed by Apple on its new technology. Anything that leaves your device is inherently less secure, he said.

Apple said that when Apple Intelligence is released, users would be able to see a report of what requests are leaving the device to be processed in the cloud.

Microsoft is bringing A.I. to the old-fashioned laptop.

Last week, it began rolling out Windows computers called Copilot+ PCs, which start at $1,000. The computers contain a new type of chip and other gear that Microsoft says will keep your data private and secure. The PCs can generate images and rewrite documents, among other new A.I.-powered features.

The company also introduced Recall, a new system to help users quickly find documents and files they have worked on, emails they have read or websites they have browsed. Microsoft compares Recall to having a photographic memory built into your PC.

To use it, you can type casual phrases, such as “I’m thinking of a video call I had with Joe recently when he was holding an ‘I Love New York’ coffee mug.” The computer will then retrieve the recording of the video call containing those details.

To accomplish this, Recall takes screenshots every five seconds of what the user is doing on the machine and compiles those images into a searchable database. The snapshots are stored and analyzed directly on the PC, so the data is not reviewed by Microsoft or used to improve its A.I., the company said.

Still, security researchers warned about potential risks, explaining that if the database were hacked, it could easily expose everything you have ever typed or viewed. In response, Microsoft, which had intended to roll out Recall last week, postponed its release indefinitely.

The PCs come outfitted with Microsoft’s new Windows 11 operating system. It has multiple layers of security, said David Weston, a company executive overseeing security.

Google last month also announced a suite of A.I. services.

One of its biggest reveals was a new A.I.-powered scam detector for phone calls. The tool listens to phone calls in real time, and if the caller sounds like a potential scammer (for instance, if the caller asks for a banking PIN), the company notifies you. Google said people would have to activate the scam detector, which runs entirely on the phone, meaning Google will not listen to the calls.

Google announced another feature, Ask Photos, that does require sending information to the company’s servers. Users can ask questions like “When did my daughter learn to swim?” to surface the first images of their child swimming.

Google said its workers could, in rare cases, review the Ask Photos conversations and photo data to address abuse or harm, and the information might also be used to help improve its photos app. To put it another way, your question and the photo of your child swimming could be used to help other parents find images of their children swimming.

Google said its cloud was locked down with security technologies like encryption and protocols to limit employee access to data.

“Our privacy-protecting approach applies to our A.I. features, no matter if they are powered on-device or in the cloud,” Suzanne Frey, a Google executive overseeing trust and privacy, said in a statement.

But Mr. Green, the security researcher, said Google’s approach to A.I. privacy felt relatively opaque.

“I don’t like the idea that my very personal photos and very personal searches are going out to a cloud that isn’t under my control,” he said.



How Pet Care Became a Big Business

Heather Massey brought Ladybird to the veterinarian when the 9-year-old mutt began having seizures. A scan from an M.R.I. machine revealed bad news: brain cancer.

With the prognosis grim, Ms. Massey decided against further treatment at the animal hospital near her home in Athens, Ga., and Ladybird died four months later. The M.R.I. scan and related care had cost nearly $2,000, which Ms. Massey put on a specialty credit card she had learned about at a previous vet visit.

That was in 2018. She is still paying off the debt, with more than 30 percent interest.

“Could I afford to do that? Not really,” said Ms. Massey, 52, who is disabled and does not work. “Was it worth it to me? Yes.”

Ms. Massey’s experience illustrates the expensive new realities of owning a pet. For decades, veterinarians typically operated their own clinics, shepherding generations of pets from birth to death. They neutered, vaccinated and pulled thorns from paws and noses. When animals became seriously ill, vets often had little to offer beyond condolences and a humane death.

But in recent years, as people have grown more attached to their pets — and more willing to spend money on them — animal medicine has transformed into a big business that looks a lot like its human counterpart. Many veterinary offices have been replaced by hospitals outfitted with expensive M.R.I. machines, sophisticated lab equipment and round-the-clock intensive care units. Dogs and cats often see highly trained specialists in neurology, cardiology and oncology.

This high-tech care has spurred a booming market. Veterinary prices have soared more than 60 percent over the past decade, according to federal statistics. Private equity firms and large corporations have bought hundreds of facilities around the country, an acquisition spree reminiscent of the corporate roll-ups of doctors’ offices.

Veterinarians from around the country told The New York Times that their corporate managers were pushing clinics to become more efficient profit centers. Vets were often paid based on how much money they brought in, creating an incentive to see more pets, order more tests and upsell wellness plans and food.

The result is an increasingly unsustainable situation for animal owners, most of whom don’t have pet insurance.

The Times asked readers to share their stories about expensive vet bills, and hundreds responded. Sophia McElroy of Denver said she donated blood plasma and took extra freelance work to pay for her dog’s ongoing expenses.

Nancy Partridge of Waynesville, N.C., said that months after her cat was diagnosed with an inoperable tumor, she was still chipping away at the $1,500 bill. “We have a dead cat, and we’re still paying,” she said.

In 2015, Claire Kirsch was earning less than $10 an hour as a veterinary technician in Georgia when her own dog, Roscoe, and her horse, Gambit, each had medical emergencies, resulting in bills that totaled more than $13,000. Ms. Kirsch said her animals would have died had she not opted for additional care.

“I knew I would never be able to forgive myself if we didn’t try,” she said.

Ms. Kirsch maxed out a credit card, tapped into her husband’s retirement account and took out a personal loan. Roscoe lived another three years, and Gambit is still alive.

In interviews, veterinarians said pet owners who complain about the cost of care don’t appreciate the difficulties of running a clinic. Veterinarians make far less money than human doctors and are often in debt from years of education. Their prices have gone up partly because of the rising cost of drugs, vaccines and other supplies, as well as the expense of paying workers in a tight labor market.

And because of more advanced medical offerings, pets today can survive serious illnesses, like cancer, in ways that would once have been unthinkable. They have access to surgeries and drugs that can vastly improve their lives.

“We live in the most technologically advanced time in human history, and how wonderful is that?” said Dr. Tracy Dewhirst, a veterinarian in Corryton, Tenn. “But it comes at a cost.”

Even run-of-the-mill visits can rack up big bills. Dr. David Roos, an 86-year-old veterinarian in Los Altos, Calif., said he decided to retire one day in 2014, when he checked on a dog whose owners were longtime clients. The animal had been admitted for vomiting. Dr. Roos said he normally would have told the owner to take the dog home and to give it sips of water. Instead, another vet had ordered X-rays, blood tests, intravenous fluids and a hospital stay. Dr. Roos knew the owners could not afford the bill.

“I realized at that stage that veterinary medicine had changed to the point where I no longer wanted to be a part of it,” Dr. Roos said.

With a growth in pet ownership and surveys showing that Americans are willing to go into debt to pay for their animals’ care, vet clinics have become increasingly attractive to investors. About one-quarter of primary care clinics and three-quarters of specialty clinics are now owned by corporations, according to Brakke Consulting, which focuses on the animal health industry.

In 2015, one major player, Mars — known for selling candy and pet food — acquired a specialty veterinary hospital chain, BluePearl, for an undisclosed sum. In 2017, it nabbed another hospital, VCA, for $9.1 billion. The trend peaked in 2021, with more than 200 private equity deals, according to Pitchbook.

Several veterinarians who have worked in corporate practices said that they were pressured to drive more business. One vet from California said she quit her job after she was told her “cost per client” was too low. Another, from Virginia, said she was told she needed to see 21 animals per day. A third, from Colorado, said she was taken aback when she overheard a manager saying some of the vets at her office needed coaching on “getting the client to a yes.” These vets asked that their names be withheld because they worried that speaking out could jeopardize future job prospects with private equity-owned practices.

Other vets said that corporate ownership had no influence on the care they provided. Still, Dr. Andrew Federer, the medical director of a clinic in Mentor, Ohio, that is owned by a chain called National Veterinary Associates, said that when someone’s pay is tied to how many procedures and tests they perform, the incentives could be difficult to ignore, especially for vets who were just starting out.

“The more they bring into the hospital above their current salary, the more of a production bonus they will receive,” he said.

Only about 4 percent of pet owners have insurance, and even for them, the options are limited. Pet insurance often excludes pre-existing conditions and costs more for older pets who are more likely to get sick.

Companies can also change the terms. This spring, the insurance company Nationwide notified thousands of pet owners that it was discontinuing their coverage, leaving them scrambling to enroll in new plans that excluded the pets’ pre-existing conditions. About 100,000 plans are being discontinued, said Kevin Kemper, a Nationwide spokesman.

Stephanie Boerger of Royal Oak, Mich., said that Nationwide had been covering her cat’s chemotherapy, but told her it would not renew her plan when it expired in August. The treatment, which costs about $1,000 every other month, will not be covered under any available plan.

“Now I feel like I have to choose between paying for my cat’s chemo or letting her die,” said Ms. Boerger, who was able to find new coverage through a competing company.

In a statement, the Nationwide spokesman cited the rising cost of veterinary care. “We are making these tough decisions now so that we can continue to be here for even more pets in the future,” he said.

Many veterinarians offer specialty credit cards sold by outside companies, such as the CareCredit card that was used by Ms. Kirsch and Ms. Massey. Last year, the Biden administration warned that these medical credit cards — which were also promoted by doctors and dentists — drove many consumers into debilitating debt. A spokeswoman for CareCredit said that about 80 percent of cardholders paid off their debt before the no-interest introductory period expired.

Some groups, including the American Society for the Prevention of Cruelty to Animals, are researching how vets can perform common procedures more cheaply. And many veterinarians say they try to offer a “spectrum of care,” a nonjudgmental way of discussing less expensive options.

For many people, a pet’s companionship is priceless.

After Ladybird died, Ms. Massey adopted Lunabear, a Lab mix that she jokes is “allergic to the very air we breathe.” Lunabear needs prescription food that costs $6 a can and takes a $3 allergy pill three times a day. Last year, she had leg surgery.

These costs have totaled nearly $4,000, much of which has been charged to the high-interest credit card. But Ms. Massey, who has major depression and lives alone, said her dogs took top priority. “I pay my bills, and then I buy food,” she said.

Ben Casselman contributed reporting.


