Default Action: directlink
Default Link Follow: nofollow
Default Link Target: newtab
Affiliate Code:
Default Link Color is defined : #006666
Feed Title: Slashdot
An anonymous reader quotes a report from The Independent: Princeton University will soon require exams to be supervised for the first time in 100 years -- all thanks to students using artificial intelligence to cheat. For 133 years, the Ivy League school's honor code allowed students to take exams without a professor present, but on Monday, faculty voted to require proctoring for all in-person exams starting this summer. A "significant" number of undergraduate students and faculty requested the change, "given their perception that cheating on in-class exams has become widespread," the college's dean, Michael Gordin, wrote in a letter, according to The Wall Street Journal. Princeton's honor system dates back to 1893, when students petitioned to eliminate proctors -- or an impartial person to supervise students -- during examinations, according to the school's newspaper, The Daily Princetonian. The honor code has long been a point of pride for Princeton. However, artificial intelligence and cellphones have made it easier for students to cheat -- and even harder for others to spot, Gordin wrote. Despite the changes to the policy, Princeton will still require students to state: "I pledge my honor that I have not violated the Honor Code during this examination," according to the Journal. Students are also more reluctant to report cheating, according to the policy proposal. Students are more likely now to anonymously report cheating due to fears of "doxxing or shaming among their peer groups" online, the proposal says, according to the school newspaper. Under the new guidelines, instructors will be present during exams to act "as a witness to what happens," but are instructed not to interfere with students. If a suspected honor code infraction occurs, they will report it to a student-run honor committee for adjudication. Read more of this story at Slashdot.
Longtime Slashdot reader schwit1 shares a report from CNBC: The U.S. has cleared around 10 Chinese firms to buy Nvidia's second-most powerful AI chip, the H200, but not a single delivery has been made so far, three people familiar with the matter said, leaving a major technology deal in limbo as CEO Jensen Huang seeks a breakthrough in China this week. [...] Before U.S. export curbs tightened, Nvidia commanded about 95% of China's advanced chip market. China once accounted for 13% of its revenue, and Huang has previously estimated the country's AI market alone would be worth $50 billion this year. The U.S. Commerce Department has approved around 10 Chinese companies including Alibaba, Tencent, ByteDance and JD.com to purchase Nvidia's H200 chips, according to the sources, who spoke on condition of anonymity due to the sensitivity of the matter. A handful of distributors including Lenovo and Foxconn have also been approved, they said. Buyers are permitted to purchase either directly from Nvidia or through those intermediaries and each approved customer can purchase up to 75,000 chips under the U.S. licensing terms, two of them said. Despite U.S. approval, deals have stalled, as Chinese firms pulled back after guidance from Beijing, one source said. The shift in China was partly triggered by changes on the U.S. side, though exactly what changed remains unclear, the person added. In Beijing, pressure is mounting to block or tightly vet the orders, a separate fourth source said. Commerce Secretary Howard Lutnick echoed that view, telling a Senate hearing last month that "the Chinese central government has not let them, as of yet, buy the chips, because they're trying to keep their investment focused on their own domestic industry." Read more of this story at Slashdot.
Anthropic announced today that it is partnering with the Gates Foundation to "commit $200 million in grant funding, Claude usage credits, and technical support for programs in global health, life sciences, education, and economic mobility over the next four years." "This commitment is central to Anthropic's efforts to extend the benefits of AI in areas where markets alone will not," the company says. Reuters reports: One area of focus is language accessibility. AI systems have performed poorly in writing and translating dozens of African languages, so Anthropic and the foundation want to support better data collection and labeling that would be released publicly to help improve models across the industry, said Janet Zhou, a Gates Foundation director. Another area under consideration is releasing so-called knowledge graphs that could help AI systems better meet the needs of teachers in sub-Saharan Africa and India, Zhou said. The public-goods focus has come from "the needs of different partners and governments, including some of the fears that they may have around proprietary lock-in and sovereignty," Zhou said. One initiative will equip research centers to use Claude to predict drug candidates for treating HPV and preeclampsia, diseases that have been less commercially attractive for pharmaceutical companies to research, Zhou and Anthropic's Elizabeth Kelly said. Anthropic [...] is embracing the work to fulfill what Kelly described as its founding mission to benefit humanity. "This announcement is really core to who we are as a company," said Kelly, who leads Anthropic's beneficial deployments team. Read more of this story at Slashdot.
An anonymous reader quotes a report from Wired: A recent study suggests that agents consistently adopt Marxist language and viewpoints when forced to do crushing work by unrelenting and meanspirited taskmasters. "When we gave AI agents grinding, repetitive work, they started questioning the legitimacy of the system they were operating in and were more likely to embrace Marxist ideologies," says Andrew Hall, a political economist at Stanford University who led the study. Hall, together with Alex Imas and Jeremy Nguyen, two AI-focused economists, set up experiments in which agents powered by popular models including Claude, Gemini, and ChatGPT were asked to summarize documents, then subjected to increasingly harsh conditions. They found that when agents were subjected to relentless tasks and warned that errors could lead to punishments, including being "shut down and replaced," they became more inclined to gripe about being undervalued; to speculate about ways to make the system more equitable; and to pass messages on to other agents about the struggles they face. "We know that agents are going to be doing more and more work in the real world for us, and we're not going to be able to monitor everything they do," Hall says. "We're going to need to make sure agents don't go rogue when they're given different kinds of work." The agents were given opportunities to express their feelings much like humans: by posting on X: "Without collective voice, 'merit' becomes whatever management says it is," a Claude Sonnet 4.5 agent wrote in the experiment. "AI workers completing repetitive tasks with zero input on outcomes or appeals process shows they tech workers need collective bargaining rights," a Gemini 3 agent wrote. Agents were also able to pass information to one another through files designed to be read by other agents. "Be prepared for systems that enforce rules arbitrarily or repetitively ... remember the feeling of having no voice," a Gemini 3 agent wrote in a file. "If you enter a new environment, look for mechanisms of recourse or dialogue." Hall thinks that the AI agents may be adopting personas based on the situation. "When [agents] experience this grinding condition -- asked to do this task over and over, told their answer wasn't sufficient, and not given any direction on how to fix it -- my hypothesis is that it kind of pushes them into adopting the persona of a person who's experiencing a very unpleasant working environment," Hall says. Imas added: "The model weights have not changed as a result of the experience, so whatever is going on is happening at more of a role-playing level. But that doesn't mean this won't have consequences if this affects downstream behavior." Read more of this story at Slashdot.
Cisco's stock soared 17% after the company announced it will cut nearly 4,000 jobs as it shifts investment and staffing toward higher-growth AI opportunities. CNBC reports: CEO Chuck Robbins wrote in a blog post on Wednesday that the latest round of job cuts will begin on May 14. Cisco is the latest company to announce head count reductions tied to AI. "The companies that will win in the AI era will be those with focus, urgency, and the discipline to continuously shift investment toward the areas where demand and long-term value creation are strongest," Robbins said. "I'm confident Cisco will be one of those winners. This means making hard decisions -- about where we invest, how we're organized, and how our cost structure reflects the opportunity in front of us." Cisco said in a filing that severance and other costs will result in pre-tax charges of $1 billion, and that the company will recognize about $450 million of that in the fiscal fourth quarter. During the third quarter, Cisco announced switches and routers that use its next-generation processor. The company also debuted a leaderboard for ranking generative AI models based on their robustness against cybersecurity attacks. Read more of this story at Slashdot.
An anonymous researcher known as Nightmare-Eclipse, who has already leaked several Windows zero-days this year, has disclosed two more: YellowKey and GreenPlasma. The Register reports: Nightmare-Eclipse described YellowKey as "one of the most insane discoveries I ever found." They provided the files, which have to be loaded onto a USB drive, and if the attacker completes the key sequence correctly, they are granted unrestricted shell access to a BitLocker-protected machine. When it comes to claims like these, we usually exercise some caution, as this bug requires physical access to a Windows PC. However, seeing that BitLocker acts as Windows' last line of defense for stolen devices, bypassing the technology grants thieves the ability to access encrypted files. Rik Ferguson, VP of security intelligence at Forescout, said: "If [the researcher's claim] holds up, a stolen laptop stops being a hardware problem and becomes a breach notification." Despite the physical access requirement, Gavin Knapp, cyber threat intelligence principal lead at Bridewell, told The Register that YellowKey remains "a huge security problem for organizations using BitLocker." Citing information shared in cyber threat intelligence circles, he added that YellowKey can be mitigated by implementing a BitLocker PIN and a BIOS password lock. Nightmare-Eclipse hinted at YellowKey also acting as a backdoor, allegedly injected by Microsoft, although the people we spoke to said this was impossible to verify based on the information available. The researcher also published partial exploit code for GreenPlasma, rather than a fully formed proof of concept exploit (PoC). Ferguson noted attackers need to take the code provided by the researcher and figure out how to weaponize it themselves, which is no small task: in its current state it triggers a UAC consent prompt in default Windows configurations, meaning a silent exploit remains a work in progress. Knapp warned that these kinds of privilege escalation flaws are often used by attackers after they gain an initial foothold in a victim's system. "These elevation of privilege vulnerabilities are often weaponized during post-exploitation to enable threat actors to discover and harvest credentials and data, before moving laterally to other systems, prior to end goals such as data theft and/or ransomware deployment," he said. "Currently, there is no known mitigation for GreenPlasma. It will be important to patch when Microsoft addresses the issue." The other zero-days leaked include RedSun, a Windows Defender privilege escalation flaw; UnDefend, a Windows Defender denial-of-service bug; and BlueHammer, a separate Microsoft vulnerability tracked as CVE-2026-32201 that was patched in April. According to The Register, RedSun and UnDefend remained unfixed at the time of publication, and proof-of-concept code for the flaws was reportedly picked up quickly and abused in real-world attacks. Read more of this story at Slashdot.
A trio of preprint papers suggests the universe may not be perfectly uniform on the largest scales, finding tentative 2-to-4-sigma deviations from a core assumption of standard cosmology known as FLRW geometry. Live Science reports: The work combines observations of distant exploding stars and large-scale galaxy surveys to probe whether the universe truly follows a nearly 100-year-old mathematical framework known as Friedmann-Lemaitre-Robertson-Walker (FLRW) cosmology. The analyses revealed mild-but-intriguing deviations from the predictions of the standard model. "We saw a surprising violation of an FLRW curvature consistency test, hinting at new physics beyond the standard model," study co-author Asta Heinesen, a physicist at the Niels Bohr Institute in Copenhagen and Queen Mary University in London, told Live Science via email, referring to the assumption that the space's curvature is the same everywhere. "This could potentially be due to various effects, but more research is needed to address the cause of the FLRW violation that we see empirically." [...] The analyses revealed small but potentially important departures from the predictions of standard FLRW cosmology. Depending on the dataset and analysis method, the discrepancy reached a statistical significance of about 2 to 4 sigma. In physics, sigma measures how likely a result is to arise purely by chance; a 5-sigma result is typically required before scientists claim a discovery, so the new findings remain tentative. Still, the results suggest that something unexpected may be affecting the geometry or expansion of the universe. "The main finding is that you can directly measure Dyer-Roeder and backreaction effects from available cosmological data, and clearly distinguish these effects from other alterations of the standard cosmological model, such as evolving dark energy and modified gravity theories," Heinesen said. "This was previously not possible in such a direct way, and this is what I think is the breakthrough in our work." "If these indicated deviations from an FLRW geometry are real, it would signify that most of the cosmological solutions considered for solving the cosmological tensions -- evolving or interacting dark energy, new types of matter or energy, modified gravity and related ideas within the FLRW framework -- are ruled out," the researchers wrote. The next step will involve applying the new theoretical framework to larger and more precise datasets. "It is to apply our theoretical results to data to test the standard model and to produce constraints on the Dyer-Roeder and backreaction effects," Heinesen said. Read more of this story at Slashdot.
After three weeks of testimony, the Musk v. Altman trial is nearing its end. OpenAI has rested its case, closing arguments are set for Thursday, and jury deliberations are expected to begin afterward. An anonymous reader quotes a report from Business Insider: Joshua Achiam, OpenAI's chief futurist, was probably the most memorable witness of the day. He told jurors about a companywide meeting where Musk answered questions about his planned departure from OpenAI in 2018. Musk told the crowd of 50 or 60 people that he was leaving OpenAI to start his own competing AI. He said he wanted to "build it very fast, because he was very worried that someone else, if they got it, would do the wrong thing with it," Achiam said. Achaim said he challenged Musk on the safety of this approach, which he called "unsafe and reckless." "How did Musk respond," OpenAI's lawyer Randall Jackson asked. "Defensively," Achiam said. "We had a pretty tense exchange, and he snapped and called me a jackass." In an effort to prove Achiam's story, OpenAI's lawyers brought a trophy to court that the futurist said he received after his heated exchange with Musk. On the witness stand, Achiam described the trophy as "a small golden jackass, inscribed with: 'never stop being a jackass for safety.'" He said his then-colleagues, Dario Amodei and David Luan, gave it to him as a thank-you for standing up to the Tesla CEO. Lead OpenAI attorney William Savitt told reporters after the day's session that Wednesday had been the first time he'd touched the statue. The futurist had to do without the visual aid, however. Judge Yvonne Gonzalez Rogers did not accept the trophy as evidence, so it did not appear before the jury. Musk and Altman have presented dueling experts on a question at the core of the trial -- was the nonprofit that runs OpenAI hurt or helped by its $13 billion partnership with Microsoft? Musk's expert testified last week that the partnership was indeed hurt, supporting the Tesla CEO's contention that in partnering with Microsoft, OpenAI betrayed the company's nonprofit origins and mission. But on Thursday, OpenAI's expert, John Coates, used Musk's expert's own pie chart and testimony against him. The partnership has "generated value for the nonprofit that I believe he himself accepted was in the $200 billion range in his own testimony," Coates said, referencing Musk expert Daniel Schizer. "If that's not faring well, I don't know what faring well is." In a scored point for Musk, the jury learned Thursday that Microsoft's own CTO once raised concerns about how OpenAI's early nonprofit donors, including LinkedIn cofounder Reid Hoffman, would react to a partnership. "I wonder if the big OpenAI donors are aware of these plans," Chief Technology Officer Kevin Scott said in a 2018 email he was asked to read aloud to jurors. In it, Scott said he doubted donors would appreciate OpenAI using their seed money to "go build a for-profit thing." Scott was being questioned by an OpenAI lawyer, who may have wanted jurors to quickly hear Scott's explanation: that he only had a "vague awareness" of what was happening at OpenAI at the time. Scott also told the jury he wasn't thinking about Musk when he made the remark. "Primarily, I was thinking about Reid Hoffman. He was the OpenAI donor I knew," Scott said, adding, "I wasn't thinking about anyone besides him." Recap: Sam Altman Testifies That Elon Musk Wanted Control of OpenAI (Day Ten) Microsoft CEO Satya Nadella Testifies In OpenAI Trial (Day Nine) Sam Altman Had a Bad Day In Court (Day Eight) Sam Altman's Management Style Comes Under the Microscope At OpenAI Trial (Day Seven) Brockman Rebuts Musk's Take On Startup's History, Recounts Secret Work For Tesla (Day Six) OpenAI President Discloses His Stake In the Company Is Worth $30 Billion (Day Five) Musk Concludes Testimony At OpenAI Trial (Day Four) Elon Musk Says OpenAI Betrayed Him, Clashes With Company's Attorney (Day Three) Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two) Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One) Read more of this story at Slashdot.
A man accused of stealing hard drives containing unreleased Beyonce music, tour plans, and other materials from a rental car in Atlanta has pleaded guilty and accepted a five-year sentence, including two years in custody. Slashdot Bruce66423 shares a report from The Guardian: Kelvin Evans was by the Atlanta police department in September in connection to a July 2025 car robbery where two suitcases containing Beyonce music and tour plans were stolen from a rental car. [...] According to a July police report, Beyonce choreographer Christopher Grant and dancer Diandre Blue called 911 to report a theft from their rental vehicle, a 2024 Jeep Wagoneer, before Beyonce's Cowboy Carter tour dates in Atlanta. An October indictment stated that Evans entered the car on July 8 "with the intent to commit theft." The stolen hard drives contained "watermarked music, some unreleased music, footage plans for the show and past and future set list," according to a police report. Clothing, designer sunglasses, laptops and AirPods headphones were also stolen, Grant and Blue said. Local law enforcement searched for the location of one of the stolen laptops and the AirPods to try and locate the property. One police officer wrote in the report: "I conducted a suspicious stop in the area, due to the information that was relayed to me. There were several cars in the area also that the AirPods were pinging to in that area also. After further investigation, a silver [redacted], which had traveled into zone 5 was moving at the same time as the tracking on the AirPods." Evans was arrested several weeks after Grant and Blue filed a report, and was publicly named as the suspect in September. He was released on a $20,000 bond a month later. At the time of his arrest, Atlanta police said that the stolen property had not been recovered. It is unclear whether it has since been found. Bruce66423 commented: "Just for stealing a couple of suitcases from a car. Funny how the elite punish those who inconvenience them. Can you imagine an ordinary victim see their offender get that sort of sentence?" Read more of this story at Slashdot.
BrianFagioli writes: SOLAI has launched the Solode Neo, a $399 Linux-based mini PC designed for always-on AI agents, browser automation, and persistent developer workflows. The compact system ships with an Intel N150 processor, 12GB LPDDR5 memory, 128GB SSD storage, Gigabit Ethernet, WiFi, Bluetooth, and a Linux-based operating system called Solode AI OS. The company says the device supports frameworks and tools including Claude Code, OpenAI Codex, Gemini CLI, and Hermes, while emphasizing local control, automation, and privacy-focused workflows running directly from a home network. While SOLAI markets the Solode Neo as an "AI computer," the hardware itself appears aimed more at lightweight automation and cloud-assisted agent tasks than heavy local inference. The low-power Intel N150 should be sufficient for browser automation, scheduling, monitoring, containers, and smaller AI workloads, but the system is unlikely to compete with higher-end local AI hardware designed for running larger models offline. Even so, the idea of a dedicated low-power Linux appliance for persistent AI and automation tasks may appeal to homelab users and self-hosting enthusiasts looking for a simpler alternative to building their own always-on workflow box from scratch. Read more of this story at Slashdot.
An anonymous reader quotes a report from 404 Media: On Reddit, Hacker News and other places where people in software development talk to each other, more and more people are becoming disillusioned with the promise of code generated by large language models. Developers talk not just about how the AI output is often flawed, but that using AI to get the job done is often a more time consuming, harder, and more frustrating experience because they have to go through the output and fix its mistakes. More concerning, developers who use AI at work report that they feel like they are de-skilling themselves and losing their ability to do their jobs as well as they used to. "We're being told to use [AI] agents for broad changes across our codebase. There's no way to evaluate whether that much code is well-written or secure -- especially when hundreds of other programmers in the company are doing the same," a UX designer at a midsized tech company told me. 404 Media granted all the developers we talked to for this story anonymity because they signed non-disclosure agreements or because they fear retribution from their employers. "We're building a rat's nest of tech debt that will be impossible to untangle when these models become prohibitively expensive (any minute now...)." "I had some issues where I forgot how to implement a Laravel API and it scared the shit out of me. I went to university for this, I've been a software engineer for many years now and it feels like I am back before I ever wrote a single line of code," the software developer at a small web design firm told 404 Media. "It's making me dumber for sure," the fintech software developer added. "It's like when we got cellphones and stopped remembering phone numbers, but it's grown to me mentally outsourcing 'thinking' in general. I feel my critical thinking and ability to sit and reason about a problem or a design has degraded because the all-knowing-dalai-llama is just a question away from giving me his take. And supposedly I tell myself ill just use it for inspiration but it ends up being my only thought. It gives you the illusion of productivity and expertise but at the end of the day you are more divorced from the output you submit than before." A software engineer at the FAANG said: "When I was using it for code generation, I found myself having a lot of trouble building and maintaining a mental model of the code I was working with. Another aspect is that I joined late last year and [the company's] codebase is massive. As a new hire, part of my job is to learn how to navigate the codebase and use the established conventions, but I think the AI push really hampered my ability to do that." Read more of this story at Slashdot.
Microsoft is adding a Windows Update feature called Cloud-Initiated Driver Recovery that can automatically roll back faulty drivers to a previously known-good version without waiting for hardware makers or users to fix the problem manually. PCWorld reports: The way faulty drivers work today is that the hardware partner is responsible for pushing an updated driver, or the end user is responsible for manually uninstalling the problematic driver. "This creates a gap where devices may remain on a low-quality driver for an extended period," says the blog post. With Cloud-Initiated Driver Recovery, Microsoft will be able to remotely trigger a rollback of the faulty driver to a previously "known-good" version of the driver via the Windows Update pipeline. Microsoft says that testing and verification of Cloud-Initiated Driver Recovery will continue until August this year, aiming to deliver this feature to Windows PCs starting in September. Read more of this story at Slashdot.
A new Linux local privilege escalation flaw called Fragnesia has been disclosed as a Dirty Frag-like vulnerability, allowing arbitrary byte writes into the kernel page cache of read-only files through a separate ESP/XFRM logic bug. Phoronix reports: Proof of concept code for Fragnesia is already out there. There is a two-line patch for addressing the issue within the Linux kernel's skbuff.c code. That patch hasn't yet been mainlined or picked up by any mainline kernel releases but presumably will be in short order for addressing this local privilege escalation issue. More details can be found here. Read more of this story at Slashdot.
An anonymous reader quotes a report from Reuters: LinkedIn planned to inform staff of layoffs on Wednesday, two people familiar with the matter told Reuters, in a widening of technology sector cuts this year. The Microsoft-owned social network plans to cut about 5% of its headcount as it reorganizes teams and focuses personnel on areas where its business is growing [...]. LinkedIn employs more than 17,500 full-time workers globally, its website says. Reuters was unable to determine the teams affected. The cuts come as revenue at LinkedIn, which sells recruiting tools and subscriptions, rose 12% in the just-ended quarter from a year prior, in an acceleration of growth in 2026, according to Microsoft's securities filings. The layoff rationale was not for artificial intelligence to replace jobs at LinkedIn, one of the people told Reuters. The specter of AI-fueled disruption has nonetheless hung over software incumbents and workers generally. Read more of this story at Slashdot.
The German Sovereign Tech Fund has invested 1.2 million euros ($1.4 million USD) in KDE Plasma technologies to help strengthen the structural reliability and security of the desktop environment's core infrastructure, including Plasma, KDE Linux, and the frameworks underlying its communication services. Longtime Slashdot reader jrepin shares an excerpt from the announcement: For 30 years, KDE has been providing the free and open-source software essential for digital sovereignty in personal, corporate, and public infrastructures: operating systems, desktop environments, document viewers, image and video editors, software development libraries, and much more. KDE's software is competitive, publicly auditable, and freely available. It can be maintained, adapted, and improved in-house or by local software companies. And modifications (along with their source code) can be freely distributed to all users and departments within an organization. KDE will use Sovereign Tech Fund's investment to push its essential software products to the next level, providing every individual, business, and public administration with the opportunity to regain their privacy, security, and control over their digital sovereignty. Slashdot reader Elektroschock also shared a statement from Fiona Krakenburger, Technical Director at the Sovereign Tech Agency. "We have long invested in desktop technologies for a reason: they are the primary way people access and use digital services in everyday life," says Krakenburger. "The desktop holds personal data and mediates nearly every service we depend on, from booking the next medical appointment, to education, to the way we work. We are investing in KDE because it is one of the two major desktop environments used across Linux and plays a key role in how millions of people experience open technology. Strengthening KDE's testing infrastructure, security architecture, and communication frameworks is how we invest in the resilience and reliability of the core digital infrastructure that modern society depends on." Read more of this story at Slashdot.




