Feed aggregator

Chuck Schumer is dropping the ball on regulating AI

Engadget - Wed, 05/15/2024 - 09:49

AI's capabilities are growing at tremendous speed, and while that apparently warrants a ton of the United States' money for development, it doesn't seem to translate to a very obvious action: regulation. A bipartisan group of four senators, led by Majority Leader Chuck Schumer, has announced a legislative plan for AI that includes putting $32 billion toward research and development. But it passes off the responsibility of devising regulatory measures around areas such as job elimination, discrimination and copyright infringement to Senate committees. 

“It’s very hard to do regulations because AI is changing too quickly,” Schumer said in an interview published by The New York Times. Yet, in March, the European Parliament approved wide-ranging legislation for regulating AI that manages the obligations of AI applications based on what risks and effects they could bring. The European Union said it hopes to "protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the field." 

Schumer seems uninterested in striking that balance, instead stating in the interview that investment in AI research and development "is sort of the American way — we are more entrepreneurial." 

For absolutely no reason at all, and certainly not to speculate on why he might be avoiding regulation: in case you didn't know, one of Schumer's daughters works as a senior policy manager for Amazon, and the other has worked for Meta (it's unclear if she still does). Furthermore, in May 2022, the New York Post reported that over 80 of Schumer's former employees held jobs in Big Tech at companies such as Google and Apple.

This article originally appeared on Engadget at https://www.engadget.com/chuck-schumer-is-dropping-the-ball-on-regulating-ai-144957345.html?src=rss
Categories: Technology

Canon confirms its long-rumored flagship EOS R1 is coming later this year

Engadget - Wed, 05/15/2024 - 09:28

After years of rumors, Canon has confirmed that a flagship EOS R1 camera is in the works for its EOS line. The full-frame mirrorless camera is slated to arrive later this year and, while Canon hasn't revealed all the details just yet, it teased just enough to whet your appetite. There's no indication of how much the EOS R1 will cost either, but you may need to dig deep into your wallet for this one.

The company says that the professional-grade camera will have an RF mount and offer improved video and still performance compared with the EOS R3. It will boast an upgraded image processing system that combines a fresh CMOS sensor, a new image processor called Digic Accelerator and the existing Digic X processor.

Canon says the system will be able to process a large volume of data at high speed and deliver advancements in autofocus and other areas. The company claims it's been able to combine the capabilities of the image processing system with its deep-learning tech to achieve "high-speed and high-accuracy subject recognition."

This powers a feature called Action Priority, which can, for instance, detect a player carrying out a certain action in a sports game (like shooting a ball) and identify them as the main subject for a shot. The system can then instantly shift the autofocus frame in that person's direction to help make sure the photographer doesn't miss key moments from a game.

Canon claims the EOS R1 can track athletes during sporting events even if they're momentarily out of sight. The focus on sports in the initial announcement suggests that the camera could be put to the test at this summer's Olympic Games in Paris.

In addition, Canon says it's bringing the image noise reduction feature that was initially built for PC software directly into the camera. It suggests this further improves image quality and can help users fulfill their creative goals.

This article originally appeared on Engadget at https://www.engadget.com/canon-confirms-its-long-rumored-flagship-eos-r1-is-coming-later-this-year-142838188.html?src=rss
Categories: Technology

Apple brings eye-tracking to recent iPhones and iPads

Engadget - Wed, 05/15/2024 - 09:00

Ahead of Global Accessibility Awareness Day this week, Apple is issuing its typical annual set of announcements around its assistive features. Many of these are useful for people with disabilities but also have broader applications. For instance, Personal Voice, which was released last year, helps preserve someone's speaking voice. It can be helpful to those who are at risk of losing their voice, or who have other reasons for wanting to retain their own vocal signature for loved ones in their absence. Today, Apple is bringing eye-tracking support to recent models of iPhones and iPads, as well as customizable vocal shortcuts, music haptics, vehicle motion cues and more. 

Built-in eye-tracking for iPhones and iPads

The most intriguing feature of the set is the ability to use the front-facing camera on iPhones or iPads (at least those with the A12 chip or later) to navigate the software without additional hardware or accessories. With this enabled, people can look at their screen to move through elements like apps and menus, then linger on an item to select it. 

That pause to select is something Apple calls Dwell Control, which has already been available elsewhere in the company's ecosystem, such as in the Mac's accessibility settings. The setup and calibration process should take only a few seconds, and on-device AI is at work to understand your gaze. It'll also work with third-party apps at launch, since it's a layer in the OS like Assistive Touch. Since Apple already supported eye-tracking in iOS and iPadOS with eye-detection devices connected, the news today is the ability to do so without extra hardware.
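Apple hasn't detailed how Dwell Control works under the hood, but the core mechanic is simple enough to sketch. In this purely illustrative Python snippet (the class name, radius and timing values are invented, not Apple's), a selection fires once the gaze point has stayed near one spot for long enough:

```python
import math
import time

class DwellSelector:
    """Illustrative dwell-to-select logic: fire a selection once the
    gaze stays within a small radius for a set duration. All values
    here are made-up defaults, not Apple's."""

    def __init__(self, radius_px=40.0, dwell_seconds=0.8):
        self.radius_px = radius_px
        self.dwell_seconds = dwell_seconds
        self._anchor = None   # (x, y) where the current dwell started
        self._start = None    # timestamp of the dwell's first sample

    def update(self, x, y, now=None):
        """Feed one gaze sample; return the (x, y) to select, or None."""
        now = time.monotonic() if now is None else now
        if self._anchor is None or math.dist(self._anchor, (x, y)) > self.radius_px:
            # Gaze moved too far: restart the dwell timer at the new point.
            self._anchor = (x, y)
            self._start = now
            return None
        if now - self._start >= self.dwell_seconds:
            anchor = self._anchor
            self._anchor = self._start = None  # reset after firing
            return anchor
        return None
```

Feeding it a stream of gaze samples, nothing fires while the eye wanders, and the selection triggers only after the gaze lingers past the dwell threshold.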

Vocal shortcuts for easier hands-free control

Apple is also working on improving the accessibility of its voice-based controls on iPhones and iPads. It again uses on-device AI to create personalized models for each person setting up a new vocal shortcut. You can set up a command for a single word or phrase, or even an utterance (like "Oy!" perhaps). Siri will understand these and perform your designated shortcut or task. You can have these launch apps or run a series of actions that you define in the Shortcuts app, and once set up, you won't have to first ask Siri to be ready. 

Another improvement coming to vocal interactions is "Listen for Atypical Speech," which has iPhones and iPads use on-device machine learning to recognize speech patterns and customize their voice recognition around your unique way of vocalizing. This sounds similar to Google's Project Relate, which is also designed to help technology better understand those with speech impairments or atypical speech.

To build these tools, Apple worked with the Speech Accessibility Project at the Beckman Institute for Advanced Science and Technology at the University of Illinois Urbana-Champaign. The institute is also collaborating with other tech giants like Google and Amazon to further development in this space across their products.

Music haptics in Apple Music and other apps

For those who are deaf or hard of hearing, Apple is bringing haptics to music players on iPhone, starting with millions of songs on its own Music app. When enabled, music haptics will play taps, textures and specialized vibrations in tandem with the audio to bring a new layer of sensation. It'll be available as an API so developers can bring greater accessibility to their apps, too. 

Help in cars — motion sickness and CarPlay

Drivers with disabilities need better systems in their cars, and Apple is addressing some of the issues with its updates to CarPlay. Voice control and color filters are coming to the interface for vehicles, making it easier to control apps by talking and for those with visual impairments to see menus or alerts. To that end, CarPlay is also getting bold and large text support, as well as sound recognition for noises like sirens or honks. When the system identifies such a sound, it will display an alert at the bottom of the screen to let you know what it heard. This works similarly to Apple's existing sound recognition feature in other devices like the iPhone.

Apple

For those who get motion sickness while using their iPhones or iPads in moving vehicles, a new feature called Vehicle Motion Cues might alleviate some of that discomfort. Since motion sickness is based on a sensory conflict from looking at stationary content while being in a moving vehicle, the new feature is meant to better align the conflicting senses through onscreen dots. When enabled, these dots will line the four edges of your screen and sway in response to the motion it detects. If the car moves forward or accelerates, the dots will sway backwards as if in reaction to the increase in speed in that direction.
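As a rough illustration of the behavior described above (the function name, gain and clamp values below are assumptions, not Apple's), the dots' sway is essentially the detected acceleration mirrored and scaled:

```python
def dot_offset(accel_forward, gain=12.0, max_px=30.0):
    """Illustrative sway (in pixels) for the on-screen edge dots.

    accel_forward: vehicle acceleration in m/s^2 (positive = speeding up).
    The dots shift in the opposite direction, as if lagging behind the
    motion, clamped so they never stray far from the screen edge.
    """
    offset = -gain * accel_forward
    return max(-max_px, min(max_px, offset))
```

So when the car accelerates forward, the offset goes negative (the dots drift backward), and hard acceleration simply pins them at the clamp limit.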

Other Apple Accessibility updates

There are plenty more features coming to the company's suite of products, including Live Captions in VisionOS, a new Reader mode in Magnifier, support for multi-line braille and a virtual trackpad for those who use Assistive Touch. It's not yet clear when all of these announced updates will roll out, though Apple has historically made these features available in upcoming versions of iOS. With its developer conference WWDC just a few weeks away, it's likely that many of today's tools will be officially released with the next iOS.

This article originally appeared on Engadget at https://www.engadget.com/apple-brings-eye-tracking-to-recent-iphones-and-ipads-140012990.html?src=rss
Categories: Technology

Threads gets its own fact-checking program

Engadget - Wed, 05/15/2024 - 08:00

This might come as a shock to you, but the things people put on social media aren't always truthful — really blew your mind there, right? Because of this, it can be challenging for people to know what's real without context or expertise in a specific area. That's part of why many platforms use a fact-checking team to keep an eye (or, often, just to look like they're keeping an eye) on what's getting shared. Now, Threads is getting its own fact-checking program, as Adam Mosseri, head of Instagram and de facto person in charge at Threads, announced. He first shared the company's plans to do so in December. 

Mosseri stated that Threads "recently" made it so that Meta's third-party fact-checkers could review and rate any inaccurate content on the platform. Before the shift, Meta was having fact-checks conducted on Facebook and Instagram and then matching "near-identical false content" that users shared on Threads. However, there's no indication of exactly when the program started or if it's global.

Then there's the matter of seeing how effective it really can be. Facebook and Instagram already had these dedicated fact-checkers, yet misinformation has run rampant across the platforms. Ahead of the 2024 Presidential election — and as ongoing elections and conflicts happen worldwide — is it too much to ask for some hardcore fact-checking from social media companies?

This article originally appeared on Engadget at https://www.engadget.com/threads-gets-its-own-fact-checking-program-130013115.html?src=rss
Categories: Technology

Google's Project Gameface hands-free 'mouse' launches on Android

Engadget - Wed, 05/15/2024 - 07:30

At last year's Google I/O developer conference, the company introduced Project Gameface, a hands-free gaming "mouse" that allows users to control a computer's cursor with movements of their head and facial gestures. This year, Google has announced that it has open-sourced more code for Project Gameface, allowing developers to build Android applications that can use the technology. 

The tool relies on the phone's front camera to track facial expressions and head movements, which can be used to control a virtual cursor. A user could smile to "select" items onscreen, for instance, or raise their left eyebrow to go back to the home screen on an Android phone. In addition, users can set thresholds or gesture sizes for each expression, so that they can control how prominent their expressions should be to trigger a specific mouse action. 
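Google's open-source code is the authoritative reference here; as a hypothetical sketch of the threshold idea (the expression names, scores and action labels below are invented for illustration, not Gameface's actual API), each facial expression yields a confidence score, and a mouse action fires only when that score crosses the user's chosen threshold:

```python
# Hypothetical per-gesture thresholds and action mapping; the names
# and values are illustrative, not taken from Project Gameface.
THRESHOLDS = {
    "smile": 0.6,              # how strongly the user must smile
    "raise_left_eyebrow": 0.4,
}
ACTIONS = {
    "smile": "select",
    "raise_left_eyebrow": "go_home",
}

def triggered_actions(scores):
    """Map expression confidence scores (0.0-1.0) to the mouse
    actions whose user-set threshold has been crossed."""
    return [ACTIONS[name] for name, score in scores.items()
            if score >= THRESHOLDS.get(name, 1.0)]
```

Raising a threshold makes the corresponding gesture harder to trigger accidentally, which is what letting users tune "how prominent their expressions should be" amounts to.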

The company developed Project Gameface with gaming streamer Lance Carr, who has muscular dystrophy, a condition that weakens the muscles. Carr used a head-tracking mouse to game before a fire destroyed his home along with his expensive equipment. The early version of Project Gameface was focused on gaming and used a webcam to detect facial expressions, though Google knew from the start that the technology had a lot of other potential uses. 

For the tool's Android launch, Google teamed up with an Indian organization called Incluzza that supports people with disabilities. The partnership gave the company the chance to learn how Project Gameface can help people with disabilities further their studies, communicate with friends and family more easily and find jobs online. Google has released the project's open source code on GitHub and is hoping that more developers decide to "leverage it to build new experiences."

Catch up on all the news from Google I/O 2024 right here!

This article originally appeared on Engadget at https://www.engadget.com/googles-project-gameface-hands-free-mouse-launches-on-android-123029158.html?src=rss
Categories: Technology

The best cheap kitchen gadgets in 2024

Engadget - Wed, 05/15/2024 - 07:01

You can sink a lot of money into your kitchen without even realizing it. There’s no doubt that some of the best kitchen gadgets are on the pricey side, but there are plenty of budget-friendly tools that can make your time meal prepping, cooking for a party and reheating leftovers much easier. All the recommendations on this list are either products I use currently, or more affordable versions of something I decided to splurge on after years of food prep. You may not consider every single item an essential for your kitchen, but all of them can save you time when you need to get dinner on the table quickly.

This article originally appeared on Engadget at https://www.engadget.com/best-cheap-kitchen-gadgets-130049897.html?src=rss
Categories: Technology

The Morning After: The biggest news from Google's I/O keynote

Engadget - Wed, 05/15/2024 - 06:15

Google boss Sundar Pichai wrapped up the company’s I/O developer conference by noting that its almost-two-hour presentation had mentioned AI 121 times. It was everywhere.

Google’s newest AI model, Gemini 1.5 Flash, is built for speed and efficiency. The company said it created Flash because developers wanted a lighter, less expensive model than Gemini Pro to build AI-powered apps and services.

Google says it’ll double Gemini’s context window to two million tokens, enough to process two hours of video, 22 hours of audio, more than 60,000 lines of code or 1.4 million-plus words at the same time.

But the bigger news is how the company is sewing AI into all the things you’re already using. With search, it’ll be able to answer your complex questions (a la Copilot in Bing), but for now, you’ll have to sign up for the company’s Search Labs to try that out. AI-generated answers will also appear alongside typical search results, just in case the AI knows better.

Google Photos was already pretty smart at searching for specific images or videos, but with AI, Google is taking things to the next level. If you’re a Google One subscriber in the US, you will be able to ask Google Photos a complex question, like “show me the best photo from each national park I’ve visited.” You can also ask Google Photos to generate captions for you.

And if you have an Android device, Gemini is being integrated directly into it. Gemini will know which app, image or video you’re running, and you’ll be able to pull it up as an overlay and ask it context-specific questions, like how to change settings or maybe even who’s displayed on screen. 

While these were the bigger beats, there was an awful lot to chew over. Check out all the headlines right here.

— Mat Smith

The biggest stories you might have missed

Google wants you to relax and have a natural chat with Gemini Live

Google Pixel 8a review

Google unveils Veo and Imagen 3, its latest AI media creation models

You can get these reports delivered daily direct to your inbox. Subscribe right here!

Google reveals its visual AI assistant, Project Astra
Full of potential.

One of Google’s bigger projects is its visual multimodal AI assistant, currently called Project Astra. It taps into your smartphone (or smart glasses) camera and can contextually analyze and answer questions on the things it sees. Project Astra can offer silly wordplay suggestions, as well as identify and define the things it sees. A video demo shows Project Astra identifying the tweeter part of a speaker. It’s equal parts impressive and, well, familiar. We tested it out, right here.

Continue reading.

X now treats the term cisgender as a slur
Elon Musk continues to add policy after baffling policy.

The increasingly unhinged world of X (Twitter) now considers the term ‘cisgender’ a slur. Owner Elon Musk posted last June, to the delight of his unhingiest users, that “‘cis’ or ‘cisgender’ are considered slurs on this platform.” On Tuesday, X reportedly began posting an official warning. A quick reminder: It’s not a slur.

Continue reading.

OpenAI co-founder Ilya Sutskever is leaving the company
He’s moving to a new project.

Ilya Sutskever announced on X, formerly Twitter, he’s leaving OpenAI almost a decade after he co-founded the company. He’s confident OpenAI “will build [artificial general intelligence] that is both safe and beneficial” under the leadership of CEO Sam Altman, President Greg Brockman and CTO Mira Murati. While Sutskever and Altman praised each other in their farewell messages, the two were embroiled in the company’s biggest scandal last year. Sutskever, who was a board member at the time, was involved in the dismissals of both Altman and Brockman.

Continue reading.

This article originally appeared on Engadget at https://www.engadget.com/the-morning-after-the-biggest-news-from-googles-io-keynote-111531702.html?src=rss
Categories: Technology

Virtual Boy: The bizarre rise and quick fall of Nintendo’s enigmatic red console

Ars Technica - Wed, 05/15/2024 - 06:00

(Image credit: Benj Edwards)

Ars Technica AI Reporter and tech historian Benj Edwards has co-written a book on the Virtual Boy with Dr. Jose Zagal. In this exclusive excerpt, Benj and Jose take you back to Nintendo of the early '90s, where a unique 3D display technology captured the imagination of legendary designer Gunpei Yokoi and set the stage for a daring, if ultimately ill-fated, foray into the world of stereoscopic gaming.

Seeing Red: Nintendo's Virtual Boy is now available for purchase in print and ebook formats.

A full list of references can be found in the book.

Nearly 30 years after the launch of the Virtual Boy, not much is publicly known about how, exactly, Nintendo came to be interested in developing what would ultimately become its ill-fated console. Was Nintendo committed to VR as a future for video games and looking for technological solutions that made business sense? Or was the Virtual Boy primarily the result of Nintendo going “off script” and seizing a unique, and possibly risky, opportunity that presented itself? The answer is probably a little bit of both.

As it turns out, the Virtual Boy was not an anomaly in Nintendo’s history with video game platforms. Rather, it was the result of a deliberate strategy that was consistent with Nintendo’s way of doing things and informed by its lead creator Gunpei Yokoi’s design philosophy.

Categories: Technology

Kansas City has a problem with illegal street racing. Solving it is complicated.

KCUR - Wed, 05/15/2024 - 04:00
City ordinances outlaw watching or participating in street racing or “sideshows” where drivers engage in illegal stunts. But the issue persists.
Categories: News

OpenAI co-founder and Chief Scientist Ilya Sutskever is leaving the company

Engadget - Wed, 05/15/2024 - 00:46

Ilya Sutskever has announced on X, formerly known as Twitter, that he's leaving OpenAI almost a decade after he co-founded the company. He said he's confident that OpenAI "will build [artificial general intelligence] that is both safe and beneficial" under the leadership of CEO Sam Altman, President Greg Brockman and CTO Mira Murati. In his own post about Sutskever's departure, Altman called him "one of the greatest minds of our generation" and credited him for his work with the company. Jakub Pachocki, OpenAI's former Director of Research who headed the development of GPT-4 and OpenAI Five, has taken over Sutskever's role as Chief Scientist. 

After almost a decade, I have made the decision to leave OpenAI. The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI that is both safe and beneficial under the leadership of @sama, @gdb, @miramurati and now, under the…

— Ilya Sutskever (@ilyasut) May 14, 2024

While Sutskever and Altman praised each other in their farewell messages, the two were embroiled in the company's biggest scandal last year. In November, OpenAI's board of directors suddenly fired Altman and company President Greg Brockman. "[T]he board no longer has confidence in [Altman's] ability to continue leading OpenAI," the ChatGPT maker announced back then. Sutskever, who was a board member, was involved in their dismissal and was the one who called Altman and Brockman to separate meetings where they were informed that they were being fired. According to reports that came out at the time, Altman and Sutskever had been butting heads over how quickly OpenAI was developing and commercializing its generative AI technology. 

Both Altman and Brockman were reinstated just five days after they were fired, and the original board was disbanded and replaced with a new one. Shortly before that happened, Sutskever posted on X that he "deeply regre[tted his] participation in the board's actions" and that he would do everything he could "to reunite the company." He then stepped down from his role as a board member, and while he remained Chief Scientist, The New York Times says he never really returned to work. 

Sutskever shared that he's moving on to a new project that's "very personally meaningful" to him, though he has yet to share details about it. As for OpenAI, it recently unveiled GPT-4o, which it claims can recognize emotion and can process and generate output in text, audio and images.

Ilya and OpenAI are going to part ways. This is very sad to me; Ilya is easily one of the greatest minds of our generation, a guiding light of our field, and a dear friend. His brilliance and vision are well known; his warmth and compassion are less well known but no less…

— Sam Altman (@sama) May 14, 2024

This article originally appeared on Engadget at https://www.engadget.com/openai-co-founder-and-chief-scientist-ilya-sutskever-is-leaving-the-company-054650964.html?src=rss
Categories: Technology

Chief Scientist Ilya Sutskever leaves OpenAI six months after Altman ouster

Ars Technica - Tue, 05/14/2024 - 22:05

An image Ilya Sutskever tweeted with his OpenAI resignation announcement. From left to right: new OpenAI Chief Scientist Jakub Pachocki, President Greg Brockman, Sutskever, CEO Sam Altman and CTO Mira Murati. (credit: Ilya Sutskever / X)

On Tuesday evening, OpenAI chief scientist Ilya Sutskever announced that he is leaving the company he co-founded, six months after he participated in the coup that temporarily ousted OpenAI CEO Sam Altman. Jan Leike, a fellow member of Sutskever's Superalignment team, is reportedly resigning with him.

"After almost a decade, I have made the decision to leave OpenAI," Sutskever tweeted. "The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI that is both safe and beneficial under the leadership of @sama, @gdb, @miramurati and now, under the excellent research leadership of @merettm. It was an honor and a privilege to have worked together, and I will miss everyone dearly."

Sutskever has been with the company since its founding in 2015 and is widely seen as one of the key engineers behind some of OpenAI's biggest technical breakthroughs. As a former OpenAI board member, he played a key role in the removal of Sam Altman as CEO in the shock firing last November. While it later emerged that Altman's firing primarily stemmed from a power struggle with former board member Helen Toner, Sutskever sided with Toner and personally delivered the news to Altman that he was being fired on behalf of the board.

Categories: Technology

Google Project Astra hands-on: Full of potential, but it’s going to be a while

Engadget - Tue, 05/14/2024 - 18:56

At I/O 2024, Google’s teaser for Project Astra gave us a glimpse at where AI assistants are going in the future. It’s a multi-modal feature that combines the smarts of Gemini with the kind of image recognition abilities you get in Google Lens, as well as powerful natural language responses. However, while the promo video was slick, after getting to try it out in person, it's clear there’s a long way to go before something like Astra lands on your phone. So here are three takeaways from our first experience with Google’s next-gen AI.

Sam’s take:

Currently, most people interact with digital assistants using their voice, so right away Astra’s multimodality (i.e., using sight and sound in addition to text and speech to communicate with an AI) is relatively novel. In theory, it allows computer-based entities to work and behave more like a real assistant or agent – which was one of Google’s big buzzwords for the show – instead of something more robotic that simply responds to spoken commands.

Photo by Sam Rutherford/Engadget

In our demo, we had the option of asking Astra to tell a story based on some objects we placed in front of the camera, after which it told us a lovely tale about a dinosaur and its trusty baguette trying to escape an ominous red light. The tale was cute, and the AI worked about as well as you would expect. But at the same time, it was far from the seemingly all-knowing assistant we saw in Google's teaser. And aside from maybe entertaining a child with an original bedtime story, it didn’t feel like Astra was doing as much with the info as you might want.

Then my colleague Karissa drew a bucolic scene on a touchscreen, at which point Astra correctly identified the flower and sun she painted. But the most engaging demo was when we circled back for a second go with Astra running on a Pixel 8 Pro. This allowed us to point its cameras at a collection of objects while it tracked and remembered each one’s location. It was even smart enough to recognize my clothing and where I had stashed my sunglasses even though these objects were not originally part of the demo.

In some ways, our experience highlighted the potential highs and lows of AI. Just the ability for a digital assistant to tell you where you might have left your keys or how many apples were in your fruit bowl before you left for the grocery store could help you save some real time. But after talking to some of the researchers behind Astra, there are still a lot of hurdles to overcome.

Photo by Sam Rutherford/Engadget

Unlike a lot of Google’s recent AI features, Astra (which is described by Google as a “research preview”) still needs help from the cloud instead of being able to run on-device. And while it does support some level of object permanence, those “memories” only last for a single session, which currently only spans a few minutes. And even if Astra could remember things for longer, there are things like storage and latency to consider, because for every object Astra recalls, you risk slowing down the AI, resulting in a more stilted experience. So while it’s clear Astra has a lot of potential, my excitement was weighed down by the knowledge that it will be some time before we can get more fully featured functionality.

Karissa’s take:

Of all the generative AI advancements, multimodal AI has been the one I’m most intrigued by. As powerful as the latest models are, I have a hard time getting excited for iterative updates to text-based chatbots. But the idea of AI that can recognize and respond to queries about your surroundings in real-time feels like something out of a sci-fi movie. It also gives a much clearer sense of how the latest wave of AI advancements will find their way into new devices like smart glasses.

Google offered a hint of that with Project Astra, which may one day have a glasses component, but for now is mostly experimental (the glasses shown in the demo video during the I/O keynote were apparently a “research prototype”). In person, though, Project Astra didn’t exactly feel like something out of a sci-fi flick.

Photo by Sam Rutherford/Engadget

It was able to accurately recognize objects that had been placed around the room and respond to nuanced questions about them, like “which of these toys should a 2-year-old play with.” It could recognize what was in my doodle and make up stories about different toys we showed it.

But most of Astra’s capabilities seemed on-par with what Meta has already made available with its smart glasses. Meta’s multimodal AI can also recognize your surroundings and do a bit of creative writing on your behalf. And while Meta also bills the features as experimental, they are at least broadly available.

The Astra feature that may set Google’s approach apart is the fact that it has a built-in “memory.” After scanning a bunch of objects, it could still “remember” where specific items were placed. For now, it seems Astra’s memory is limited to a relatively short window of time, but members of the research team told us that it could theoretically be expanded. That would obviously open up even more possibilities for the tech, making Astra seem more like an actual assistant. I don’t need to know where I left my glasses 30 seconds ago, but if it could remember where I left them last night, that would actually feel like sci-fi come to life.

But, like so much of generative AI, the most exciting possibilities are the ones that haven’t quite happened yet. Astra might get there eventually, but right now it feels like Google still has a lot of work to do first.

Catch up on all the news from Google I/O 2024 right here!

This article originally appeared on Engadget at https://www.engadget.com/google-project-astra-hands-on-full-of-potential-but-its-going-to-be-a-while-235607743.html?src=rss
Categories: Technology

Weight loss from Wegovy sustained for up to four years, trial shows

Ars Technica - Tue, 05/14/2024 - 18:04

Wegovy is an injectable prescription weight-loss medicine that has helped people with obesity. (credit: Getty | Michael Siluk)

A large, long-term trial of the weight-loss medication Wegovy (semaglutide) found that people tended to lose weight over the first 65 weeks on the drug—about one year and three months—but then hit a plateau or "set point." But that early weight loss was generally maintained for up to four years while people continued taking the weekly injections.

The findings, published Monday in Nature Medicine, come from a fresh analysis of data from the SELECT trial, which was designed to look at the drug's effects on cardiovascular health. The trial—a multicenter, double-blind, randomized, placebo-controlled trial—specifically enrolled people with existing cardiovascular disease who were also overweight or had obesity, but did not have diabetes. In all, the trial included 17,604 people from 41 countries. Seventy-two percent of them were male, 84 percent were white, and the average age was about 62 years old.

Last year, researchers published the trial's primary results, which showed that semaglutide reduced participants' risk of heart attack, stroke, and cardiovascular-related death by 20 percent over a little more than three years.

Read 5 remaining paragraphs | Comments

Categories: Technology

“Fire and blood” come to Westeros in new House of the Dragon S2 trailer

Ars Technica - Tue, 05/14/2024 - 17:43

House of the Dragon returns to HBO Max for an action-packed second season next month.

The second season of House of the Dragon premieres in about a month, and we've got one final action-packed trailer to boost anticipation. While the first season felt smaller and quieter—in a good way, more focused on character relationships and political maneuvering—the show seems to be pulling out all the stops in S2 as all-out war breaks out in the legendary "Dance of Dragons."

As previously reported, the series is set nearly 200 years before the events of Game of Thrones and chronicles the beginning of the end of House Targaryen's reign. The primary source material is Fire and Blood, a fictional history of the Targaryen kings written by George R.R. Martin. As book readers know, those events culminated in a civil war and the extinction of the dragons—at least until Daenerys Targaryen came along.

(Spoilers for S1 below.)

Read 6 remaining paragraphs | Comments

Categories: Technology

Engadget Podcast: The good, the bad and the AI of Google I/O 2024

Engadget - Tue, 05/14/2024 - 17:17

We just wrapped up coverage of Google's I/O 2024 keynote, and we're just so tired of hearing about AI. In this bonus episode, Cherlynn and Devindra dive into the biggest I/O news: Google's intriguing Project Astra AI assistant; new models for creating video and images; and some improvements to Gemini AI. While some of the announcements seem potentially useful, it's still tough to tell if the move toward AI will actually help consumers, or if Google is just fighting to stay ahead of OpenAI.

Listen below or subscribe on your podcast app of choice. If you've got suggestions or topics you'd like covered on the show, be sure to email us or drop a note in the comments! And be sure to check out our other podcast, Engadget News!

Credits

Hosts: Cherlynn Low and Devindra Hardawar
Music: Dale North

This article originally appeared on Engadget at https://www.engadget.com/engadget-podcast-the-good-the-bad-and-the-ai-of-google-io-2024-221741082.html?src=rss
Categories: Technology

Sauron’s dark rise is front and center in The Rings of Power S2 teaser

Ars Technica - Tue, 05/14/2024 - 17:09

Charlie Vickers' Sauron is front and center in the teaser for S2 of Lord of the Rings: The Rings of Power.

Amazon's Prime Video made a major investment in The Rings of Power when it acquired the rights to the source material from the Tolkien estate, even committing to multiple seasons upfront. The casting was strong and the visuals were quite spectacular (including the opening credits). But while the first season had its moments, personally I found it a bit plodding, often more concerned with establishing this rich fictional world and the characters within it than moving the story forward.

Showrunners J. D. Payne and Patrick McKay have said that this was deliberate. They wanted to avoid a "villain-centric" story in S1 but promised they would be delving more deeply into "the lore and the stories people have been waiting to hear." That would be the rise of Sauron (Charlie Vickers), the forging of the titular rings of power, and the last alliance between elves and men to defeat Sauron's evil machinations. Judging by the teaser that dropped today, we'll be getting lots more action in S2, with the shape-shifting Sauron now handily disguised as an elf. Bonus: There's an accompanying behind-the-scenes preview of the second season.

(Spoilers for the S1 finale below.)

Read 5 remaining paragraphs | Comments

Categories: Technology

Downranking won’t stop Google’s deepfake porn problem, victims say

Ars Technica - Tue, 05/14/2024 - 17:00

(credit: imaginima | E+)

After backlash over Google's search engine becoming the primary traffic source for deepfake porn websites, Google has started burying these links in search results, Bloomberg reported.

Over the past year, Google has driven millions of visitors to controversial sites distributing AI-generated pornography depicting real people in fake sex videos created without their consent, Similarweb found. While anyone can be targeted—police are already bogged down dealing with a flood of fake AI child sex images—female celebrities are the most common victims. And their fake non-consensual intimate imagery is more easily discoverable on Google by searching just about any famous name with the keyword "deepfake," Bloomberg noted.

Google refers to this content as "involuntary fake" or "synthetic pornography." The search engine provides a path for victims to report that content whenever it appears in search results. And when processing these requests, Google also removes duplicates of any flagged deepfakes.

Read 20 remaining paragraphs | Comments

Categories: Technology

Some Chiefs parade shooting survivors could live with bullets in their bodies forever

KCUR - Tue, 05/14/2024 - 16:28
In a series of features from KCUR and KFF titled “The Injured,” reporters Peggy Lowe and Bram Sable-Smith are telling the stories of survivors of the shooting at the Kansas City Chiefs championship parade. In the most recent installment, they spoke with three survivors who still have bullets inside of them.
Categories: News

Meet the Press NOW — May 14

Meet the Press RSS - Tue, 05/14/2024 - 16:16

Defense attorneys begin their cross-examination of Michael Cohen in Donald Trump’s hush-money trial. NBC News Congressional Correspondent Julie Tsirkin explains what’s at stake in Maryland’s Democratic Senate primary contest. The Nevada Independent CEO and editor Jon Ralston discusses the state of the 2024 race in that battleground state. Akayla Gardner, Faiz Shakir and Stephen Hayes join the Meet the Press NOW roundtable.

Categories: Government, politics

X now treats the term cisgender as a slur

Engadget - Tue, 05/14/2024 - 16:11

The increasingly discriminatory X (Twitter) now considers the term “cisgender” a slur. Owner Elon Musk posted last June, to the delight of his bigoted brigade of blue-check sycophants, that “‘cis’ or ‘cisgender’ are considered slurs on this platform.” On Tuesday, X made good on the regressive provocateur’s stance and reportedly began posting an official warning that the LGBTQ-inclusive terms could result in a ban from the platform. Not that you’d miss much.

TechCrunch reported on Tuesday that trying to publish a post using the terms “cisgender” or “cis” in the X mobile app will pop up a full-screen warning reading, “This post contains language that may be considered a slur by X and could be used in a harmful manner in violation of our rules.” It then gives you the choice of continuing to publish the post or conforming to the backward views of the worst of us and deleting it.

Of course, neither form of the term cisgender is a slur.

As the historically marginalized transgender community finally began finding at least a sliver of widespread and long overdue social acceptance in the 21st century, the term became more commonly used in the mainstream lexicon to describe people whose gender identity matches their sex assigned at birth. Organizations including the American Psychological Association, the World Health Organization, the American Medical Association and the American Psychiatric Association recognize the term.

But some people have a hard time accepting and respecting that some humans are different from others. Those insisting, against all evidence and scientific consensus, that the heteronormative ideals they grew up with are absolute gospel sometimes take great offense at being asked to adjust their vocabulary. They're being asked to communicate respect for a community that has spent centuries forced either to live in the shadows or to risk its safety, thanks to the widespread pathologization of its identities.

Musk seems to consider those the good ol’ days.

This isn’t the billionaire’s first ride on the Transphobe Train. After his backward tweet last June (on the first day of Pride Month, no less), the edgelord’s platform ran a timeline takeover ad from a right-wing nonprofit, plugging a transphobic propaganda film. In case you’re wondering if the group may have anything of value to say, TechCrunch notes that the same organization also doubts climate change and downplays the dehumanizing atrocities of slavery.

X also reversed course on a policy, implemented long before Musk’s takeover, that banned the deadnaming or misgendering of transgender people.

This article originally appeared on Engadget at https://www.engadget.com/x-now-treats-the-term-cisgender-as-a-slur-211117779.html?src=rss
Categories: Technology
