Sorry, you need to enable JavaScript to visit this website.

Feed aggregator

Are AI-Generated Search Results Still Protected by Section 230?

Slashdot - 1 hour 4 min ago
Starting this week millions will see AI-generated answers in Google's search results by default. But the announcement Tuesday at Google's annual developer conference suggests a future that's "not without its risks, both to users and to Google itself," argues the Washington Post: For years, Google has been shielded for liability for linking users to bad, harmful or illegal information by Section 230 of the Communications Decency Act. But legal experts say that shield probably won't apply when its AI answers search questions directly. "As we all know, generative AIs hallucinate," said James Grimmelmann, professor of digital and information law at Cornell Law School and Cornell Tech. "So when Google uses a generative AI to summarize what webpages say, and the AI gets it wrong, Google is now the source of the harmful information," rather than just the distributor of it... Adam Thierer, senior fellow at the nonprofit free-market think tank R Street, worries that innovation could be throttled if Congress doesn't extend Section 230 to cover AI tools. "As AI is integrated into more consumer-facing products, the ambiguity about liability will haunt developers and investors," he predicted. "It is particularly problematic for small AI firms and open-source AI developers, who could be decimated as frivolous legal claims accumulate." But John Bergmayer, legal director for the digital rights nonprofit Public Knowledge, said there are real concerns that AI answers could spell doom for many of the publishers and creators that rely on search traffic to survive — and which AI, in turn, relies on for credible information. From that standpoint, he said, a liability regime that incentivizes search engines to continue sending users to third-party websites might be "a really good outcome." Meanwhile, some lawmakers are looking to ditch Section 230 altogether. [Last] Sunday, the top Democrat and Republican on the House Energy and Commerce Committee released a draft of a bill that would sunset the statute within 18 months, giving Congress time to craft a new liability framework in its place. In a Wall Street Journal op-ed, Reps. Cathy McMorris Rodgers (R-Wash.) and Frank Pallone Jr. (D-N.J.) argued that the law, which helped pave the way for social media and the modern internet, has "outlived its usefulness." The tech industry trade group NetChoice [which includes Google, Meta, X, and Amazon] fired back on Monday that scrapping Section 230 would "decimate small tech" and "discourage free speech online." The digital law professor points out Google has traditionally escaped legal liability by attributing its answers to specific sources — but it's not just Google that has to worry about the issue. The article notes that Microsoft's Bing search engine also supplies AI-generated answers (from Microsoft's Copilot). "And Meta recently replaced the search bar in Facebook, Instagram and WhatsApp with its own AI chatbot." The article also note sthat several U.S. Congressional committees are considering "a bevy" of AI bills...

Read more of this story at Slashdot.

How an 'Unprecedented' Google Cloud Event Wiped Out a Major Customer's Account

Slashdot - 2 hours 4 min ago
Ars Technica looks at what happened after Google's answer to Amazon's cloud service "accidentally deleted a giant customer account for no reason..." "[A]ccording to UniSuper's incident log, downtime started May 2, and a full restoration of services didn't happen until May 15." UniSuper, an Australian pension fund that manages $135 billion worth of funds and has 647,000 members, had its entire account wiped out at Google Cloud, including all its backups that were stored on the service... UniSuper's website is now full of must-read admin nightmare fuel about how this all happened. First is a wild page posted on May 8 titled "A joint statement from UniSuper CEO Peter Chun, and Google Cloud CEO, Thomas Kurian...." Google Cloud is supposed to have safeguards that don't allow account deletion, but none of them worked apparently, and the only option was a restore from a separate cloud provider (shoutout to the hero at UniSuper who chose a multi-cloud solution)... The many stakeholders in the service meant service restoration wasn't just about restoring backups but also processing all the requests and payments that still needed to happen during the two weeks of downtime. The second must-read document in this whole saga is the outage update page, which contains 12 statements as the cloud devs worked through this catastrophe. The first update is May 2 with the ominous statement, "You may be aware of a service disruption affecting UniSuper's systems...." Seven days after the outage, on May 9, we saw the first signs of life again for UniSuper. Logins started working for "online UniSuper accounts" (I think that only means the website), but the outage page noted that "account balances shown may not reflect transactions which have not yet been processed due to the outage...." May 13 is the first mention of the mobile app beginning to work again. This update noted that balances still weren't up to date and that "We are processing transactions as quickly as we can." The last update, on May 15, states, "UniSuper can confirm that all member-facing services have been fully restored, with our retirement calculators now available again." The joint statement and the outage updates are still not a technical post-mortem of what happened, and it's unclear if we'll get one. Google PR confirmed in multiple places it signed off on the statement, but a great breakdown from software developer Daniel Compton points out that the statement is not just vague, it's also full of terminology that doesn't align with Google Cloud products. The imprecise language makes it seem like the statement was written entirely by UniSuper. Thanks to long-time Slashdot reader swm for sharing the news.

Read more of this story at Slashdot.

Eight Automakers Grilled by US Lawmakers Over Sharing of Connected Car Data With Police

Slashdot - 3 hours 4 min ago
An anonymous reader shared this report from Automotive News: Automotive News recently reported that eight automakers sent vehicle location data to police without a court order or warrant. The eight companies told senators that they provide police with data when subpoenaed, getting a rise from several officials. BMW, Kia, Mazda, Mercedes-Benz, Nissan, Subaru, Toyota, and Volkswagen presented their responses to lawmakers. Senators Ron Wyden from Oregon and Ed Markey from Massachusetts penned a letter to the Federal Trade Commission, urging investigative action. "Automakers have not only kept consumers in the dark regarding their actual practices, but multiple companies misled consumers for over a decade by failing to honor the industry's own voluntary privacy principles," they wrote. Ten years ago, all of those companies agreed to the Consumer Privacy Protection Principles, a voluntary code that said automakers would only provide data with a warrant or order issued by a court. Subpoenas, on the other hand, only require approval from law enforcement. Though it wasn't part of the eight automakers' response, General Motors has a class-action suit on its hands, claiming that it shared data with LexisNexis Risk Solutions, a company that provides insurers with information to set rates. The article notes that the lawmakers praised Honda, Ford, GM, Tesla, and Stellantis for requiring warrants, "except in the case of emergencies or with customer consent."

Read more of this story at Slashdot.

SUSE's YaST Team Drops Cockpit With New Installer Code

Phoronix - 3 hours 48 min ago
SUSE/openSUSE has been busy crafting a next-gen Linux installer that is a web-based installer and originally known as D-Installer but now going by the name Agama...

Study Confirms Einstein Prediction: Black Holes Have a 'Plunging Region'

Slashdot - 4 hours 4 min ago
"Albert Einstein was right," reports CNN. "There is an area at the edge of black holes where matter can no longer stay in orbit and instead falls in, as predicted by his theory of gravity." The proof came by combining NASA's earth-orbiting NuSTAR telescope with the NICER telescope on the International Space Station to detect X-rays: A team of astronomers has for the first time observed this area — called the "plunging region" — in a black hole about 10,000 light-years from Earth. "We've been ignoring this region, because we didn't have the data," said research scientist Andrew Mummery, lead author of the study published Thursday in the journal Monthly Notices of the Royal Astronomical Society. "But now that we do, we couldn't explain it any other way." Mummery — also a Fellow in Oxford's physics department — told CNN, "We went out searching for this one specifically — that was always the plan. We've argued about whether we'd ever be able to find it for a really long time. People said it would be impossible, so confirming it's there is really exciting." Mummery described the plunging region as "like the edge of a waterfall." Unlike the event horizon, which is closer to the center of the black hole and doesn't let anything escape, including light and radiation, in the "plunging region" light can still escape, but matter is doomed by the powerful gravitational pull, Mummery explained. The study's findings could help astronomers better understand the formation and evolution of black holes. "We can really learn about them by studying this region, because it's right at the edge, so it gives us the most information," Mummery said... According to Christopher Reynolds, a professor of astronomy at the University of Maryland, College Park, finding actual evidence for the "plunging region" is an important step that will let scientists significantly refine models for how matter behaves around a black hole. "For example, it can be used to measure the rotation rate of the black hole," said Reynolds, who was not involved in the study.

Read more of this story at Slashdot.

'Google Domains' Starts Migrating to Squarespace

Slashdot - 5 hours 4 min ago
"We're migrating domains in batches..." announced web-hosting company Squarespace earlier this month. "Squarespace has entered into an agreement to become the new home for Google Domains customers. When your domain transitions from Google to Squarespace, you'll become a Squarespace customer and manage your domain through an account with us." Slashdot reader shortyadamk shares an email sent today to a Google Domains customer: "Today your domain, xyz.com, migrated from Google Domains to Squarespace Domains. "Your WHOIS contact details and billing information (if applicable) were migrated to Squarespace. Your DNS configuration remains unchanged. "Your migrated domain will continue to work with Google Services such as Google Search Console. To support this, your account now has a domain verification record — one corresponding to each Google account that currently has access to the domain."

Read more of this story at Slashdot.

Is America's Defense Department 'Rushing to Expand' Its Space War Capabilities?

Slashdot - 6 hours 4 min ago
America's Defense Department "is rushing to expand its capacity to wage war in space," reports the New York Times, "convinced that rapid advances by China and Russia in space-based operations pose a growing threat to U.S. troops and other military assets on the ground and U.S. satellites in orbit." [T]he Defense Department is looking to acquire a new generation of ground- and space-based tools that will allow it to defend its satellite network from attack and, if necessary, to disrupt or disable enemy spacecraft in orbit, Pentagon officials have said in a series of interviews, speeches and recent statements... [T]he move to enhance warfighting capacity in space is driven mostly by China's expanding fleet of military tools in space... [U.S. officials are] moving ahead with an effort they are calling "responsible counterspace campaigning," an intentionally ambiguous term that avoids directly confirming that the United States intends to put its own weapons in space. But it also is meant to reflect this commitment by the United States to pursue its interest in space without creating massive debris fields that would result if an explosive device or missile were used to blow up an enemy satellite. That is what happened in 2007, when China used a missile to blow up a satellite in orbit. The United States, China, India and Russia all have tested such missiles. But the United States vowed in 2022 not to do any such antisatellite tests again. The United States has also long had ground-based systems that allow it to jam radio signals, disrupting the ability of an enemy to communicate with its satellites, and is taking steps to modernize these systems. But under its new approach, the Pentagon is moving to take on an even more ambitious task: broadly suppress enemy threats in orbit in a fashion similar to what the Navy does in the oceans and the Air Force in the skies. The article notes a recent report drafted by a former Space Force colonel cited three ways to disable enemy satellite networks: cyberattacks, ground or space-based lasers, and high-powered microwaves. "John Shaw, a recently retired Space Force lieutenant general who helped run the Space Command, agreed that directed-energy devices based on the ground or in space would probably be a part of any future system. 'It does minimize debris; it works at the speed of light,' he said. 'Those are probably going to be the tools of choice to achieve our objective." The Pentagon is separately working to launch a new generation of military satellites that can maneuver, be refueled while in space or have robotic arms that could reach out and grab — and potentially disrupt — an enemy satellite. Another early focus is on protecting missile defense satellites. The Defense Department recently started to require that a new generation of these space-based monitoring systems have built-in tools to evade or respond to possible attack. "Resiliency feature to protect against directed energy attack mechanisms" is how one recent missile defense contract described it. Last month the Pentagon also awarded contracts to two companies — Rocket Lab and True Anomaly — to launch two spacecraft by late next year, one acting as a mock enemy and the other equipped with cameras, to pull up close and observe the threat. The intercept satellite will not have any weapons, but it has a cargo hold that could carry them. The article notes that Space Force's chief of space operations has told Senate appropriators that about $2.4 billion of the $29.4 billion in Space Force's proposed 2025 budget was set aside for "space domain awareness." And it adds that the Pentagon "is working to coordinate its so-called counterspace efforts with major allies, including Britain, Canada and Australia, through a multinational operation called Operation Olympic Defender. France has been particularly aggressive, announcing its intent to build and launch by 2030 a satellite equipped with a high-powered laser." [W]hat is clear is that a certain threshold has now been passed: Space has effectively become part of the military fighting domain, current and former Pentagon officials said. "By no means do we want to see war extend into space," Lt. Gen. DeAnna Burt, deputy chief of space operations, said at a Mitchell Institute event this year. "But if it does, we have to be prepared to fight and win."

Read more of this story at Slashdot.

An attorney says she saw her library reading habits reflected in mobile ads. That's not supposed to happen

El Reg - 6 hours 34 min ago
Follow us down this deep rabbit hole of privacy policy after privacy policy

Feature  In April, attorney Christine Dudley was listening to a book on her iPhone while playing a game on her Android tablet when she started to see in-game ads that reflected the audiobooks she recently checked out of the San Francisco Public Library.…

Cruise Reached an $8M+ Settlement With the Person Dragged Under Its Robotaxi

Slashdot - 7 hours 4 min ago
Bloomberg reports that self-driving car company Cruise "reached an $8 million to $12 million settlement with a pedestrian who was dragged by one of its self-driving vehicles in San Francisco, according to a person familiar with the situation." The settlement was struck earlier this year and the woman is out of the hospital, said the person, who declined to be identified discussing a private matter. In the October incident, the pedestrian crossing the road was struck by another vehicle before landing in front of one of GM's Cruise vehicles. The robotaxi braked hard but ran over the person. It then pulled over for safety, driving 20 feet at a speed of up to seven miles per hour with the pedestrian still under the car. The incident "contributed to the company being blocked from operating in San Francisco and halting its operations around the country for months," reports the Washington Post: The company initially told reporters that the car had stopped just after rolling over the pedestrian, but the California Public Utilities Commission, which regulates permits for self-driving cars, later said Cruise had covered up the truth that its car actually kept going and dragged the woman. The crash and the questions about what Cruise knew and disclosed to investigators led to a firestorm of scrutiny on the company. Cruise pulled its vehicles off roads countrywide, laid off a quarter of its staff and in November its CEO Kyle Vogt stepped down. The Department of Justice and the Securities and Exchange Commission are investigating the company, adding to a probe from the National Highway Traffic Safety Administration. In Cruise's absence, Google's Waymo self-driving cars have become the only robotaxis operating in San Francisco. in June, the company's president and chief technology officer Mohamed Elshenawy is slated to speak at a conference on artificial-intelligence quality in San Francisco. Dow Jones news services published this quote from a Cruise spokesperson. "The hearts of all Cruise employees continue to be with the pedestrian, and we hope for her continued recovery."

Read more of this story at Slashdot.

Bruce Schneier Reminds LLM Engineers About the Risks of Prompt Injection Vulnerabilities

Slashdot - 8 hours 4 min ago
Security professional Bruce Schneier argues that large language models have the same vulnerability as phones in the 1970s exploited by John Draper. "Data and control used the same channel," Schneier writes in Communications of the ACM. "That is, the commands that told the phone switch what to do were sent along the same path as voices." Other forms of prompt injection involve the LLM receiving malicious instructions in its training data. Another example hides secret commands in Web pages. Any LLM application that processes emails or Web pages is vulnerable. Attackers can embed malicious commands in images and videos, so any system that processes those is vulnerable. Any LLM application that interacts with untrusted users — think of a chatbot embedded in a website — will be vulnerable to attack. It's hard to think of an LLM application that isn't vulnerable in some way. Individual attacks are easy to prevent once discovered and publicized, but there are an infinite number of them and no way to block them as a class. The real problem here is the same one that plagued the pre-SS7 phone network: the commingling of data and commands. As long as the data — whether it be training data, text prompts, or other input into the LLM — is mixed up with the commands that tell the LLM what to do, the system will be vulnerable. But unlike the phone system, we can't separate an LLM's data from its commands. One of the enormously powerful features of an LLM is that the data affects the code. We want the system to modify its operation when it gets new training data. We want it to change the way it works based on the commands we give it. The fact that LLMs self-modify based on their input data is a feature, not a bug. And it's the very thing that enables prompt injection. Like the old phone system, defenses are likely to be piecemeal. We're getting better at creating LLMs that are resistant to these attacks. We're building systems that clean up inputs, both by recognizing known prompt-injection attacks and training other LLMs to try to recognize what those attacks look like. (Although now you have to secure that other LLM from prompt-injection attacks.) In some cases, we can use access-control mechanisms and other Internet security systems to limit who can access the LLM and what the LLM can do. This will limit how much we can trust them. Can you ever trust an LLM email assistant if it can be tricked into doing something it shouldn't do? Can you ever trust a generative-AI traffic-detection video system if someone can hold up a carefully worded sign and convince it to not notice a particular license plate — and then forget that it ever saw the sign...? Someday, some AI researcher will figure out how to separate the data and control paths. Until then, though, we're going to have to think carefully about using LLMs in potentially adversarial situations...like, say, on the Internet. Schneier urges engineers to balance the risks of generative AI with the powers it brings. "Using them for everything is easier than taking the time to figure out what sort of specialized AI is optimized for the task. "But generative AI comes with a lot of security baggage — in the form of prompt-injection attacks and other security risks. We need to take a more nuanced view of AI systems, their uses, their own particular risks, and their costs vs. benefits."

Read more of this story at Slashdot.

Facing Angry Users, Sonos Promises to Fix Flaws and Restore Removed Features

Slashdot - 9 hours 4 min ago
A blind worker for the National Federation of the Blind said Sonos had a reputation for making products usable for people with disabilities, but that "Overnight they broke that trust," according to the Washington Post. They're not the only angry customers about the latest update to Sonos's wireless speaker system. The newspaper notes that nonprofit worker Charles Knight is "among the Sonos die-hards who are furious at the new app that crippled their options to stream music, listen to an album all the way through or set a morning alarm clock." After Sonos updated its app last week, Knight could no longer set or change his wake-up music alarm. Timers to turn off music were also missing. "Something as basic as an alarm is part of the feature set that users have had for 15 years," said Knight, who has spent thousands of dollars on six Sonos speakers for his bedroom, home office and kitchen. "It was just really badly thought out from start to finish." Some people who are blind also complained that the app omitted voice-control features they need. What's happening to Sonos speaker owners is a cautionary tale. As more of your possessions rely on software — including your car, phone, TV, home thermostat or tractor — the manufacturer can ruin them with one shoddy update... Sonos now says it's fixing problems and adding back missing features within days or weeks. Sonos CEO Patrick Spence acknowledged the company made some mistakes and said Sonos plans to earn back people's trust. "There are clearly people who are having an experience that is subpar," Spence said. "I would ask them to give us a chance to deliver the actions to address the concerns they've raised." Spence said that for years, customers' top complaint was the Sonos app was clunky and slow to connect to their speakers. Spence said the new app is zippier and easier for Sonos to update. (Some customers disputed that the new app is faster.) He said some problems like Knight's missing alarms were flaws that Sonos found only once the app was about to roll out. (Sonos updated the alarm feature this week.) Sonos did remove but planned to add back some lesser-used features. Spence said the company should have told people upfront about the planned timeline to return any missing functions. In a blog post Sonos thanked customers for "valuable feedback," saying they're "working to address them as quickly as possible" and promising to reintroduce features, fix bugs, and address performance issues. ("Adding and editing alarms" is available now, as well as VoiceOver fixes for the home screen on iOS.) The Washington Post adds that Sonos "said it initially missed some software flaws and will restore more voice-reader functions next week."

Read more of this story at Slashdot.

Linux 6.10 NFSD Brings Optimizations & Preps For New nfsdctl Utility

Phoronix - 9 hours 45 min ago
The Linux Network File System (NFS) server code (NFSD) is seeing a new Netlink protocol introduced in Linux 6.10 as part of laying the groundwork for the new "nfsdctl" utility...

Niri 0.1.6 Wayland Compositor Adds Interactive Window Resizing & Mouse View Scrolling

Phoronix - 10 hours 29 min ago
Niri is a scrollable-tiling Wayland compositor inspired by PaperWM and heavy on the animations/effects. Out this morning is Niri 0.1.6 as the newest feature release for this Wayland compositor...

'Openwashing'

Slashdot - 10 hours 38 min ago
An anonymous reader quotes a report from The New York Times: There's a big debate in the tech world over whether artificial intelligence models should be "open source." Elon Musk, who helped found OpenAI in 2015, sued the startup and its chief executive, Sam Altman, on claims that the company had diverged from its mission of openness. The Biden administration is investigating the risks and benefits of open source models. Proponents of open source A.I. models say they're more equitable and safer for society, while detractors say they are more likely to be abused for malicious intent. One big hiccup in the debate? There's no agreed-upon definition of what open source A.I. actually means. And some are accusing A.I. companies of "openwashing" -- using the "open source" term disingenuously to make themselves look good. (Accusations of openwashing have previously been aimed at coding projects that used the open source label too loosely.) In a blog post on Open Future, a European think tank supporting open sourcing, Alek Tarkowski wrote, "As the rules get written, one challenge is building sufficient guardrails against corporations' attempts at 'openwashing.'" Last month the Linux Foundation, a nonprofit that supports open-source software projects, cautioned that "this 'openwashing' trend threatens to undermine the very premise of openness -- the free sharing of knowledge to enable inspection, replication and collective advancement." Organizations that apply the label to their models may be taking very different approaches to openness. [...] The main reason is that while open source software allows anyone to replicate or modify it, building an A.I. model requires much more than code. Only a handful of companies can fund the computing power and data curation required. That's why some experts say labeling any A.I. as "open source" is at best misleading and at worst a marketing tool. "Even maximally open A.I. systems do not allow open access to the resources necessary to 'democratize' access to A.I., or enable full scrutiny," said David Gray Widder, a postdoctoral fellow at Cornell Tech who has studied use of the "open source" label by A.I. companies.

Read more of this story at Slashdot.

Gawd, after that week, we wonder what's next for China and the Western world

El Reg - 11 hours 3 min ago
For starters: Crypto, import tariffs, and Microsoft shipping out staff

Kettle  It's been a fairly troubling week in terms of the relationship between China and the Western world.…

GNOME OS Working On A New Installer & Other Enhancements To Make It More Practical

Phoronix - 12 hours 41 min ago
Germany's Sovereign Tech Fund continues providing the resources for various new GNOME desktop development initiatives. There are various efforts underway for new features and refinements with GNOME 47 in September and a renewed emphasis around GNOME OS...

Linux 6.10 x86 Instruction Decoder Prepares For APX & Other New Intel Instructions

Phoronix - 12 hours 51 min ago
The performance events updates were submitted today for the ongoing Linux 6.10 kernel merge window. This pull adds support for Advanced Performance Extensions (APX) and other new Intel CPU instructions to the x86 instruction decoder...

Linux 6.10 Preps For "When Things Go Seriously Wrong" On Bigger Servers

Phoronix - 13 hours 4 min ago
While machine check exception (MCE) events tend to be uncommon, a change made by Intel engineers is accommodating the ability in the Linux kernel to store more machine check records for "when things go seriously wrong" on increasingly high core count servers...

KDE Apps Improving Experience When Running Outside Of Plasma

Phoronix - 13 hours 15 min ago
KDE development remains very busy ahead of next month's Plasma 6.1 desktop release...

Gentoo and NetBSD ban 'AI' code, but Debian doesn't – yet

El Reg - 15 hours 4 min ago
The problem isn't just that LLM-bot generated code is bad – it's where it came from

Comment  The Debian project has decided against joining Gentoo Linux and NetBSD in rejecting program code generated with the assistance of LLM tools, such as Github's Copilot.…