Sci-Tech

U.S. Investigating Tesla Recall of Autopilot


The federal government’s main auto safety agency said on Friday that it was investigating Tesla’s recall of its Autopilot driver-assistance system because regulators were concerned that the company had not done enough to ensure that drivers remained attentive while using the technology.

The National Highway Traffic Safety Administration said in documents posted on its website that it was looking into Tesla’s recall in December of two million vehicles, which covered nearly all of the cars the company had manufactured in the United States since 2012. The safety agency said it had concerns about crashes that took place after the recall and results from preliminary tests of recalled vehicles.

The agency also published an analysis that found that there had been at least 29 fatal accidents involving Autopilot and a more advanced system that Tesla calls Full Self-Driving from January 2018 to August 2023. In 13 of those fatal accidents, the fronts of Teslas hit objects or people in their path.

The investigation of Tesla’s recall and the new data about crashes add to a list of headaches for Tesla, the dominant electric-vehicle maker in the United States. The company’s sales in the first three months of the year fell more than 8 percent from a year earlier, the first such drop since the early days of the coronavirus pandemic.

Tesla announced in December that it would recall its Autopilot software after an investigation by the auto safety agency found that the carmaker had not put adequate safeguards in place to ensure that the system, which can accelerate, brake and control cars in other ways, was used safely by drivers, who are supposed to be ready at any moment to retake control of their cars.

In its analysis of Tesla crash data, the safety agency found that when the company’s cameras, sensors and software did not spot obstacles in the car’s path and drivers did not compensate for that failure quickly enough, the consequences were often catastrophic.

In one case, a child who had just gotten off a school bus in March 2023 in North Carolina was hit by a Tesla Model Y traveling at highway speed. He was seriously injured. “Based on publicly available information, both the bus and the pedestrian would have been visible to an attentive driver and allowed the driver to avoid or minimize the severity of this crash,” the safety agency said.

It is not clear how often Tesla’s cars are involved in accidents while Autopilot and Full Self-Driving are in use, the agency said, because the company is not aware of every crash involving its cars. The safety agency added that Tesla was an outlier in the auto industry because its design can discourage drivers from staying engaged with a driver-assistance system that is not equipped to handle many situations.

Tesla is facing several lawsuits from individuals who claim that the system is defective, and that its design contributed to or is responsible for serious injuries and deaths.

The December recall, which entails a wireless software update, includes more prominent visual alerts and checks when drivers are using Autopilot to remind them to keep their hands on the wheel and pay attention to the road. The recall covers all five of Tesla’s passenger models — the 3, S, X, Y and Cybertruck.

Tesla did not respond to a request for comment.

The auto safety agency also said Friday that it took issue with Tesla’s decision to allow customers to opt in to the recall and let them undo the changes. Tesla also appeared to include other updates that addressed issues related to the recall that the company and the safety agency had not agreed on in advance.

Tesla and its chief executive, Elon Musk, have long chafed at criticism of Autopilot and Full Self-Driving. They have argued that the systems, neither of which allows cars to drive themselves, make the company’s cars safer, and have blamed drivers for any crashes or problems.

The carmaker has been under the scrutiny of safety regulators for other issues, too.

Last week, the auto safety agency said Tesla had agreed to recall nearly 4,000 Cybertruck pickups. The agency said the way soap had been used as a lubricant during the assembly of the truck could lead to the accelerator pedal’s becoming stuck. The carmaker said it was not aware of any injuries or accidents linked to that defect.

In February, Tesla recalled more than two million vehicles because the font size on a warning lights panel was too small.

The company is struggling to hold on to its dominance in the electric-vehicle market as newer and more established automakers introduce new models around the world. Tesla’s share of the U.S. electric-vehicle market fell to 51 percent in the first quarter, down from 62 percent a year earlier.

Mr. Musk told employees this month that Tesla would cut more than 10 percent of its work force. Two senior executives also announced that they were leaving the company.





Ros Atkins on… How different countries protect children online



This week, the UK’s media regulator, Ofcom, set out new rules for social media companies – aimed at protecting children from harmful content online.

More than 40 measures have been set out – including making firms change their algorithms and perform more rigorous age checks.

Around the world, governments are considering — or have already passed — similar legislation. Analysis editor Ros Atkins looks at what other countries are doing to try to protect children online.





Test-at-home kit for cancer patients approved for use




Patients say the device allows them to reduce the number of hospital visits involved in cancer care.





Apple Will Revamp Siri to Catch Up to Its Chatbot Competitors



Apple’s top software executives decided early last year that Siri, the company’s virtual assistant, needed a brain transplant.

The decision came after the executives Craig Federighi and John Giannandrea spent weeks testing OpenAI’s new chatbot, ChatGPT. The product’s use of generative artificial intelligence, which can write poetry, create computer code and answer complex questions, made Siri look antiquated, said two people familiar with the company’s work, who didn’t have permission to speak publicly.

Introduced in 2011 as the original virtual assistant in every iPhone, Siri had been limited for years to individual requests and had never been able to follow a conversation. It often misunderstood questions. ChatGPT, on the other hand, knew that if someone asked for the weather in San Francisco and then said, “What about New York?” that user wanted another forecast.
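The follow-up behavior described above comes from the chatbot receiving the whole conversation as context, not just the latest message. A minimal sketch of that idea in Python (the class and method names here are illustrative, not Apple’s or OpenAI’s actual code):

```python
from dataclasses import dataclass, field


@dataclass
class Conversation:
    """Accumulates every turn so each new request carries prior context."""

    messages: list = field(default_factory=list)

    def ask(self, text: str) -> list:
        # Record the user's turn, then return the full history that
        # would be sent to the language model as context.
        self.messages.append({"role": "user", "content": text})
        return list(self.messages)


chat = Conversation()
chat.ask("What's the weather in San Francisco?")
context = chat.ask("What about New York?")
# `context` now holds both turns, so a model can infer that the
# follow-up is another weather request.
```

Because Siri historically handled each request in isolation, it had no equivalent of this accumulated history to draw on.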

The realization that new technology had leapfrogged Siri set in motion the tech giant’s most significant reorganization in more than a decade. Determined to catch up in the tech industry’s A.I. race, Apple has made generative A.I. a tent pole project — the company’s special, internal label that it uses to organize employees around once-in-a-decade initiatives.

Apple is expected to show off its A.I. work at its annual developers conference on June 10 when it releases an improved Siri that is more conversational and versatile, according to three people familiar with the company’s work, who didn’t have permission to speak publicly. Siri’s underlying technology will include a new generative A.I. system that will allow it to chat rather than respond to questions one at a time.

The update to Siri is at the forefront of a broader effort to embrace generative A.I. across Apple’s business. The company is also increasing the memory in this year’s iPhones to support its new Siri capabilities. And it has discussed licensing complementary A.I. models that power chatbots from several companies, including Google, Cohere and OpenAI.

An Apple spokeswoman declined to comment.

Apple executives worry that new A.I. technology threatens the company’s dominance of the global smartphone market because it has the potential to become the primary operating system, displacing the iPhone’s iOS software, said two people familiar with the thinking of Apple’s leadership, who didn’t have permission to speak publicly. This new technology could also create an ecosystem of A.I. apps, known as agents, that can order Ubers or make calendar appointments, undermining Apple’s App Store, which generates about $24 billion in annual sales.

Apple also fears that if it fails to develop its own A.I. system, the iPhone could become a “dumb brick” compared with other technology. While it is unclear how many people regularly use Siri, the iPhone currently takes 85 percent of global smartphone profits and generates more than $200 billion in sales.

That sense of urgency contributed to Apple’s decision to cancel its other big bet — a $10 billion project to develop a self-driving car — and reassign hundreds of engineers to work on A.I.

Apple has also explored creating servers that are powered by its iPhone and Mac processors, two of these people said. Doing so could help Apple save money and create consistency between the tools used for processes in the cloud and on its devices.

Rather than compete directly with ChatGPT by releasing a chatbot that does things like write poetry, the three people familiar with its work said, Apple has focused on making Siri better at handling tasks that it already does, including setting timers, creating calendar appointments and adding items to a grocery list. It also would be able to summarize text messages.

Apple plans to bill the improved Siri as more private than rival A.I. services because it will process requests on iPhones rather than remotely in data centers. The strategy will also save money. OpenAI spends about 12 cents for about 1,000 words that ChatGPT generates because of cloud computing costs.
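That per-word cloud cost compounds quickly at scale. A back-of-the-envelope sketch (the usage figures below are hypothetical; only the 12-cents-per-1,000-words rate comes from the reporting above):

```python
def cloud_cost_usd(words_per_user: int, users: int,
                   usd_per_1000_words: float = 0.12) -> float:
    """Estimated daily cloud-inference bill for generated text."""
    return words_per_user * users / 1000 * usd_per_1000_words


# Hypothetical load: 500 generated words a day for 100 million users
# works out to roughly $6 million a day in cloud-computing costs,
# which on-device processing would avoid.
daily = cloud_cost_usd(500, 100_000_000)
```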

(The New York Times sued OpenAI and its partner, Microsoft, in December for copyright infringement of news content related to A.I. systems.)

But Apple faces risks by relying on a smaller A.I. system housed on iPhones rather than a larger one stored in a data center. Research has found that smaller A.I. systems could be more likely to make errors, known as hallucinations, than larger ones.

“It’s always been the Siri vision to have a conversational interface that understands language and context, but it’s a hard problem,” said Tom Gruber, a co-founder of Siri who worked at Apple until 2018. “Now that the technology has changed, it should be possible to do a much better job of that. So long as it’s not a one-size-fits-all effort to answer anything, then they should be able to avoid trouble.”

Apple has several advantages in the A.I. race, including more than two billion devices in use around the world where it can distribute A.I. products. It also has a leading semiconductor team that has been making sophisticated chips capable of powering A.I. tasks like facial recognition.

But for the past decade, Apple has struggled to develop a comprehensive A.I. strategy, and Siri has not had major improvements since its introduction. The assistant’s struggles blunted the appeal of the company’s HomePod smart speaker because it couldn’t consistently perform simple tasks like fulfilling a song request.

The Siri team has failed to get the kind of attention and resources that went to other groups inside Apple, said John Burkey, who worked on Siri for two years before founding a generative A.I. platform, Brighten.ai. The company’s divisions, such as software and hardware, operate independently of one another and share limited information. But A.I. needs to be threaded through products to succeed.

“It’s not in Apple’s DNA,” Mr. Burkey said. “It’s a blind spot.”

Apple has also struggled to recruit and retain leading A.I. researchers. Over the years, it has acquired A.I. companies led by prominent figures in the field, but those leaders all left within a few years.

The reasons for their departures vary, but one factor is Apple’s secrecy. The company publishes fewer papers on its A.I. work than Google, Meta and Microsoft, and it doesn’t participate in conferences in the same way that its rivals do.

“Research scientists say: ‘What are my other options? Can I go back into academia? Can I go to a research institute, some place where I can work a bit more in the open?’” said Ruslan Salakhutdinov, a leading A.I. researcher, who left Apple in 2020 to return to Carnegie Mellon University.

In recent months, Apple has increased the number of A.I. papers it has published. But prominent A.I. researchers have questioned the value of the papers, saying they are more about creating the impression of meaningful work than providing examples of what Apple may bring to market.

Tsu-Jui Fu, an Apple intern and A.I. doctoral student at the University of California, Santa Barbara, wrote one of Apple’s recent A.I. papers. He spent last summer developing a system for editing photos with written commands rather than Photoshop tools. He said that Apple supported the project by providing him with the necessary G.P.U.s to train the system, but that he had no interaction with the A.I. team working on Apple products.

Though he said he had interviewed for full-time jobs at Adobe and Nvidia, he plans to return to Apple after he graduates because he thinks he can make a bigger difference there.

“A.I. product and research is emerging in Apple, but most companies are very mature,” Mr. Fu said in an interview with The Times. “At Apple, I can have more room to lead a project instead of just being a member of a team doing something.”




Copyright © 2024 World Daily Info. Powered by Columba Ventures Co. Ltd.