Sci-Tech

From Baby Talk to Baby A.I.
We ask a lot of ourselves as babies. Somehow we must grow from sensory blobs into mobile, rational, attentive communicators in just a few years. Here you are, a baby without a vocabulary, in a room cluttered with toys and stuffed animals. You pick up a Lincoln Log and your caretaker tells you, “This is a ‘log.’” Eventually you come to understand that “log” does not refer strictly to this particular brown plastic cylinder or to brown plastic cylinders in general, but to brown plastic cylinders that embody the characteristics of felled, denuded tree parts, which are also, of course, “logs.”

There has been much research and heated debate around how babies accomplish this. Some scientists have argued that most of our language acquisition can be explained by associative learning, as we relate sounds to sensibilia, much like dogs associate the sound of a bell with food. Others claim that there are features built into the human mind that have shaped the forms of all language, and are crucial to our learning. Still others contend that toddlers build their understanding of new words on top of their understanding of other words.

This discourse advanced on a recent Sunday morning, as Tammy Kwan and Brenden Lake delivered blackberries from a bowl into the mouth of their twenty-one-month-old daughter, Luna. Luna was dressed in pink leggings and a pink tutu, with a silicone bib around her neck and a soft pink hat on her head. A lightweight GoPro-type camera was attached to the front.

“Babooga,” she said, pointing a round finger at the berries. Dr. Kwan gave her the rest, and Dr. Lake looked at the empty bowl, amused. “That’s like $10,” he said. A light on the camera blinked.

For an hour each week over the past 11 months, Dr. Lake, a psychologist at New York University whose research focuses on human and artificial intelligence, has been attaching a camera to Luna and recording things from her point of view as she plays. His goal is to use the videos to train a language model using the same sensory input that a toddler is exposed to — a LunaBot, so to speak. By doing so, he hopes to create better tools for understanding both A.I. and ourselves. “We see this research as finally making that link, between those two areas of study,” Dr. Lake said. “You can finally put them in dialogue with each other.”

There are many roadblocks to using A.I. models to understand the human mind. The two are starkly different, after all. Modern language and multimodal models — like OpenAI’s GPT-4 and Google’s Gemini — are assembled on neural networks with little built-in structure, and have improved mostly as a result of increased computing power and larger training data sets. Meta’s most recent large language model, Llama 3, is trained on more than 10 trillion words; an average five-year-old is exposed to more like 300,000.

Such models can analyze pixels in images but are unable to taste cheese or berries or feel hunger, important kinds of learning experiences for children. Researchers can try their best to turn a child’s full sensory stream into code, but crucial aspects of their phenomenology will inevitably be missed. “What we’re seeing is only the residue of an active learner,” said Michael Frank, a psychologist at Stanford who for years has been trying to capture the human experience on camera. His lab is currently working with over 25 children around the country, including Luna, to record their experiences at home and in social settings.

Humans are also not mere data receptacles, as neural nets are, but intentional animals. Everything we see, every object we touch, every word we hear couples with the beliefs and desires we have in the moment. “There is a deep relationship between what you’re trying to learn and the data that come in,” said Linda Smith, a psychologist at Indiana University. “These models just predict. They take whatever is put into them and make the next best step.” While you might be able to emulate human intentionality by structuring training data — something Dr. Smith’s lab has been attempting to do recently — the most competent A.I. models, and the companies that make them, have long been geared toward efficiently processing more data, not making more sense out of less.

There is, additionally, a more conceptual issue, which stems from the fact that the abilities of A.I. systems can seem quite human, even though they arise in nonhuman ways. Recently, dubious claims of consciousness, general intelligence and sentience have emerged from industry labs at Google and Microsoft following the release of new models. In March, Claude 3, the newest model from the A.I. research startup Anthropic, stirred up debate when, after analyzing a random sentence about pizza toppings hidden in a long list of unrelated documents, it expressed the suspicion that it was being tested. Such reports often smell like marketing ploys rather than objective scientific projects, but they highlight our eagerness to attribute scientific meaning to A.I.

But human minds are converging with virtual ones in other ways. Tom Griffiths, a cognitive scientist at Princeton, has suggested that, in describing the limitations of human intelligence, and building models that have similar limitations, we could end up with a better comprehension of ourselves and more interpretable, efficient A.I. “A better understanding of human intelligence helps us better understand and model computers, and we can use these models to understand human intelligence,” Dr. Griffiths said. “All of this is very new. We’re exploring the space of possibilities.”

In February, Dr. Lake and his collaborators created the first-ever A.I. model trained on the experiences of a child, using videos captured in Dr. Frank’s lab more than a decade ago. The model, described in the journal Science, was trained on 60 hours of footage and learned to match words with different moments. Type in “sand” and the model will recall the moment, 11 years ago, when the boy whose experiences the model was trained on visited the beach with his mother. Type in “car” and the model brings up a first-person video of the boy sitting in his booster seat.

The training videos are old and grainy, and the data are fairly sparse, but the model’s ability to form some kind of conceptual mapping of the world suggests that it might be possible for language to be picked up mostly through association. “We had one reviewer on the paper who said, ‘Before I read this I would’ve thought this was impossible,’” said Wai Keen Vong, a researcher at N.Y.U. who helped lead the work.
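The Science paper describes a model that learns by linking words to the headcam frames they co-occur with, then retrieves moments by similarity. As a rough illustration of that associative idea only — not the authors’ actual architecture; the toy embeddings, names and data below are all hypothetical — a CLIP-style sketch in Python:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # toy embedding size

# Stand-ins for learned embeddings: in a real model, a vision encoder
# maps headcam frames and a language encoder maps heard words into a
# shared vector space. Here we just draw random word vectors.
words = ["sand", "car", "ball"]
word_vecs = {w: rng.normal(size=DIM) for w in words}

# Pretend training pulled each stored frame close to the word heard
# alongside it (the contrastive objective), up to a little noise.
frames = [(w, word_vecs[w] + 0.1 * rng.normal(size=DIM))
          for w in words for _ in range(5)]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query_word):
    """Return the label of the stored frame closest to the query word."""
    q = word_vecs[query_word]
    label, _ = max(frames, key=lambda f: cosine(q, f[1]))
    return label

# Typing "sand" surfaces a frame that was recorded alongside "sand".
print(retrieve("sand"))
```

The point of the sketch is only that retrieval falls out of association: once words and frames share a space, “recalling” a moment is just a nearest-neighbor lookup, with no grammar or built-in structure required.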

For Dr. Lake, and for other investigators like him, these interlocking questions — How humanlike can we make A.I.? What makes us human? — present the most exciting research on the horizon. To pursue the former question piece by piece, by modeling social interactions, intentions and biases, by collecting comprehensive video footage from a headcam mounted on a one-year-old, is to move closer to answering the latter.

“If the field can get to the place where models are trained on nothing but the data that a single child saw, and they do well on a huge set of tasks, that would be a huge scientific achievement,” Dr. Lake said.

In their apartment, Dr. Lake and Dr. Kwan were gathering Luna and her older brother, Logan, for a birthday party. The children, crowding into the doorway, pulled on their socks and shoes. Dr. Lake stopped the recording on Luna’s camera and handed her a pair of fuzzy white mittens with sheep faces on them. “What are those, Luna?” he asked.

“Baa baa,” Luna said.

Dr. Kwan said, “There was a time when she didn’t know the word ‘no,’ and it was just ‘yes’ to everything.” She addressed Luna: “Kisses, do you want kisses?”

“No,” Luna said.

“Oh,” Dr. Lake said, laughing. “I do miss the ‘yes’ phase.”

Audio produced by Sarah Diamond.






Inside OpenAI’s Library – The New York Times


The two-story library has Oriental rugs, shaded lamps dotting its desks and rows of hardbacks lining its walls. It is the architectural centerpiece of the offices of OpenAI, the start-up whose online chatbot, ChatGPT, showed the world that machines can instantly generate their own poetry and prose.

The building, which was once a mayonnaise factory, looks like a typical tech office, with its communal work spaces, well-stocked micro-kitchens and private nap rooms spread across three floors in San Francisco’s Mission District.

But then there is that library, with the ambience of a Victorian-era reading room. Its shelves offer everything from Homer’s “The Iliad” to David Deutsch’s “The Beginning of Infinity,” a favorite of Sam Altman, OpenAI’s chief executive.

Built at Mr. Altman’s request and stocked with titles suggested by his staff, the OpenAI library is an apt metaphor for the world’s hottest tech company, whose success was fueled by language — lots and lots of language. OpenAI’s chatbot was not built like the average internet app. ChatGPT learned its skills by analyzing huge amounts of text that was written, edited and curated by humans, including encyclopedia articles, news stories, poetry and, yes, books.

The library also represents the paradox at the heart of OpenAI’s technology. Authors and publishers, including The New York Times, are suing OpenAI, claiming the company illegally used their copyrighted content to build its A.I. systems. Many authors worry that the technology will ultimately take away their livelihood.

Many OpenAI employees, on the other hand, believe the company is using human creativity to fuel more human creativity. They believe their use of copyrighted works is “fair use” under the law, because they are transforming these works into something new.

“To say that this is a public debate right now is an understatement,” said Shannon Gaffney, co-founder and managing partner of SkB Architects, the architectural firm that renovated OpenAI’s headquarters and designed its library. “Though things might look like they are going in different directions, the library serves as a constant reminder of human creativity.”

When OpenAI hired Ms. Gaffney’s firm to renovate the building in 2019, Mr. Altman said he wanted a library with an academic aura.

He wanted it to be a reminder of the Green Library, a Romanesque library at Stanford University, where he was a student for two years before dropping out to build a social media app; the Rose Reading Room, a Beaux-Arts study hall on the top floor of the New York Public Library in Midtown Manhattan; and the library-like bar inside the now defunct Nomad Hotel, 15 blocks south of the Rose.

“My dining room and living room at home is inside a library — floor-to-ceiling books all the way around,” Mr. Altman said in an interview. “There is something about sitting in the middle of knowledge on the shelves at vast scale that I find interesting.”

Many titles, like “English Masterpieces, 700-1900” and “Ideas and Images in World Art,” seem like the weighty hardbacks that professional decorators place strategically inside hotel lobbies because they look the part. Still, the library is a reflection of the organization that built it.

On a recent afternoon, two paperbacks sat beside each other at eye-level: “Birds of Lake Merritt” (a field guide to the birds found in a wildlife refuge in Oakland, Calif.) and “Fake Birds of Lake Merritt” (a parody written by GPT-3, an early version of the technology that drives ChatGPT).

Some employees see the library as a quieter place to work. Long Ouyang, an A.I. researcher, keeps a rolling desk against the wall. Others see it as an unusually elegant break room. On weekends, Ryan Greene, another researcher, pumps his digital music through the audio speakers tucked among the hardbacks.

It is, other employees said, a far more inspiring place to work than a cubicle. “This is why so many people choose to work in the library,” Ms. Staudacher said.

Recently, Mr. Greene began feeding lists of his favorite books into ChatGPT and asking for new recommendations. At one point, the chatbot recommended “The Book of Disquiet,” a posthumously published autobiography from the Portuguese writer Fernando Pessoa. A friend, who knew his tastes well, had recommended that he read the same book.

“Given the trends and patterns in things that have happened in the past, the technology can suggest things for the future,” Mr. Greene said.

Ms. Gaffney, from OpenAI’s architectural firm, argued that this blend of the human and the machine will continue. Then she paused, before adding: “That, at least, is what I hope and feel.”




Why TikTok Users Are Blocking Celebrities


As protests over the war in Gaza unfolded blocks away, last week’s Met Gala was largely devoid of political statements on the red carpet. That the organizers of fashion’s most powerful annual spectacle (one for which tickets cost $75,000 this year) managed this surprised many observers. Less than two weeks later, though, a fast-growing online protest movement is taking shape. At least, it is on TikTok, the social media platform that was a sponsor of the Met event.

Blockout 2024, also referred to as Operation Blockout or Celebrity Block Party, targets high-profile figures who participants feel are not using their profiles and platforms to speak out about the Israel-Hamas war and wider humanitarian crises. Here’s what has happened so far, what supporters hope to achieve and why it all began.

The criticism began on May 6, when Haley Kalil (@haleyybaylee on social media), an influencer who was a host on E! News before the event, posted a TikTok video of herself wearing a lavish 18th-century-style floral gown and headdress with audio from Sofia Coppola’s 2006 film “Marie Antoinette,” in which Kirsten Dunst proclaims, “Let them eat cake!”

The clip (for which Ms. Kalil later apologized and which was deleted) was viewed widely. Given the current global conflicts and humanitarian crises, critics described it as “tone deaf.” Then posts emerged comparing ostentatious costumes worn by celebrities on the Met red carpet to scenes from “The Hunger Games,” in which affluent citizens in opulent outfits wine and dine while watching the suffering of the impoverished districts for sport.

Images of Zendaya, a Met Gala co-chair, spliced with photographs of Palestinian children, incited the online masses. A rallying cry soon came from @ladyfromtheoutside, a TikTok creator who found inspiration in Ms. Kalil’s parroting of Marie Antoinette.

“It’s time for the people to conduct what I want to call a digital guillotine — a ‘digitine,’ if you will,” she said in a May 8 video post with two million views. “It’s time to block all the celebrities, influencers and wealthy socialites who are not using their resources to help those in dire need. We gave them their platforms. It’s time to take it back, take our views away, our likes, our comments, our money.”

“Block lists” naming the celebrities deemed deserving of a block were published and widely shared online.

The movement is made up of pro-Palestinian supporters who have been assessing the actions and words of A-listers in order to decide if they have adequately responded to the conflict. If they have said nothing or not enough, the movement calls for those supporting Gaza to block that celebrity on social media. What constitutes sufficient action by the famous person — be it calls for a cease-fire, donations to aid charities or statements — appears unclear and can vary from celebrity to celebrity.

“Blockout” supporters argue that blocking is important because brands look at data on the followers and engagement of influencers and celebrities on social media before choosing whether to work with them to promote a product. Blocking someone on social media means you no longer see any posts from the person’s accounts, and it gives the blocker more control over who has access to their own updates and personal information. It can have more impact than unfollowing a celebrity account because many product deals thrive on targeted ads and views that can accumulate even if a user simply sees a post, without liking or sharing it.

If enough people block a content creator, it could reduce the creator’s ability to make money. Also, adherents of this thinking say, why follow someone whose values don’t align with yours?

Attendees with huge followings, like Zendaya, Kim Kardashian and Kylie Jenner, have been at the top of the chopping block. But so have celebrities who didn’t attend the gala this year, including Justin Bieber, Taylor Swift and Selena Gomez.

Vogue, which according to Puck News published 570 Met Gala stories on its platforms and recorded more than a billion video views of content from the night, has also been targeted because of its ties to the event.

“The Met Gala is by far and away Vogue’s biggest cash cow,” Elaina Bell, a former Vogue employee, said in a TikTok post with 850,000 views. She explained that the event sold sponsorships “based on the data of past events,” adding, “How the Met Gala is seen is so important to the bottom line of Vogue specifically but also to Condé Nast.”

The gala’s theme itself raised some eyebrows. The dress code was “The Garden of Time,” inspired by the J.G. Ballard short story of the same name. It’s an allegorical tale about an aristocratic couple isolated in their estate of fading beauty, harassed by an enormous crowd preparing to overrun and destroy the space. Rather on the nose.

The movement has its critics, too. Some posts say the blockout is a negative example of “cancel culture.” Others suggest that, like other social media-led movements, it is digital posturing that generates little meaningful change.

Some argue that celebrities do not have a duty (or the awareness) to speak out on complicated geopolitical issues, and they question why it matters what famous people think about those issues, anyway. Others feel the movement has blurred parameters, given that some A-listers, like Jennifer Lopez and Billie Eilish, have previously shown support for a cease-fire in Gaza but are being punished for not speaking up now.

Several stars on the widely circulated block lists, including Lizzo and the influencer Chris Olsen, posted their first public videos asking followers to donate in support of aid organizations serving Palestinians. Blockout supporters have also worked to “boost” celebrities who have recently spoken about the conflict, like Macklemore, Dua Lipa and The Weeknd.

According to metrics from the analytics company Social Blade, many names on block lists have lost tens or hundreds of thousands of followers per day since the “digitine” began. But murky claims that stars like Kim Kardashian have lost millions of followers are unsubstantiated.

Will more A-listers start speaking out on the red carpet as a result of the lists? It is too soon to tell. But for frequent users of TikTok, the brand aura of the Met Gala is being profoundly altered. And while social-media-led boycotts are by no means unprecedented, this latest movement is a clear example of the growing power of creators to redistribute or even weaponize ​platforms that are cornerstones of a modern celebrity-centric — and capitalist — system.






Grand Theft Auto maker firms up GTA 6 release date



The latest instalment of the hugely popular series will be released in autumn 2025, its publisher says.


