Technology

Animal Well speedrunners are already beating the game in under five minutes

Engadget - Tue, 05/14/2024 - 14:53

Animal Well is one of the hottest games around. It quickly shot to the top of Steam's top-seller chart after it was released to glowing reviews last Thursday. 

While most players complete the main story in four to six hours, it hasn't taken long for speedrunners to figure out how to blaze through solo developer Billy Basso's eerie labyrinth. YouTubers are already posting runs of under five minutes and the any% record (i.e. the best recorded time without any restrictions) is being smashed over and over. 

Within a couple of hours of Hubert0987 claiming the world record with a 4:44 run on Thursday, The DemonSlayer6669 appeared to snag bragging rights with one that was 18 seconds faster and perhaps the first recorded sub-4:30 time. (Don't watch the video just yet if you haven't beaten the game and would like to avoid spoilers.)

Animal Well hasn't even been out for a week, so you can expect records to keep tumbling as runners optimize routes to the game's final plunger. It's cool to already see a speedrunning community form around a new game as skilled players duke it out, perhaps for the chance to show off their skills at the next big Games Done Quick event.

This article originally appeared on Engadget at https://www.engadget.com/animal-well-speedrunners-are-already-beating-the-game-in-under-five-minutes-195259598.html?src=rss

Boeing is troubleshooting a small helium leak on the Starliner spacecraft

Ars Technica - Tue, 05/14/2024 - 14:47

A view looking down at Boeing's Starliner spacecraft and United Launch Alliance's Atlas V rocket inside the Vertical Integration Facility at Cape Canaveral Space Force Station, Florida. (credit: United Launch Alliance)

Boeing is taking a few extra days to resolve a small helium leak on the Starliner spacecraft slated to ferry two NASA astronauts on a test flight to the International Space Station, officials said Tuesday.

This means the first crew launch of Boeing's Starliner spacecraft, running years behind schedule and more than $1.4 billion over budget, won't happen before next Tuesday, May 21, at 4:43 pm EDT (20:43 UTC). Meeting this schedule assumes engineers can get comfortable with the helium leak. Officials from Boeing and NASA, which manages Boeing's multibillion-dollar Starliner commercial crew contract, previously targeted Friday, May 17, for the spacecraft's first launch with astronauts onboard.

Boeing's ground team traced the leak to a flange on a single reaction control system thruster on the spacecraft's service module.


AT&T paid bribes to get two major pieces of legislation passed, US gov’t says

Ars Technica - Tue, 05/14/2024 - 14:31


The US government has provided more detail on how a former AT&T executive allegedly bribed a powerful state lawmaker's ally in order to obtain legislation favorable to AT&T's business.

Former AT&T Illinois President Paul La Schiazza is set to go on trial in September 2024 after being indicted on charges of conspiracy to unlawfully influence then-Illinois House Speaker Michael Madigan. AT&T itself agreed to pay a $23 million fine in October 2022 in connection with the alleged illegal influence campaign and said it was "committed to ensuring that this never happens again."

US government prosecutors offered a preview of their case against La Schiazza in a filing on Friday in US District Court for the Northern District of Illinois. A contract lobbyist hired by AT&T "is expected to testify that AT&T successfully passed two major pieces of legislation after the company started making payments to Individual FR-1."


Google strikes back at OpenAI with “Project Astra” AI agent prototype

Ars Technica - Tue, 05/14/2024 - 14:11

A video still of the Project Astra demo at the Google I/O conference keynote in Mountain View on May 14, 2024. (credit: Google)

Just one day after OpenAI revealed GPT-4o, which it bills as being able to understand what's taking place in a video feed and converse about it, Google announced Project Astra, a research prototype that features similar video comprehension capabilities. It was announced by Google DeepMind CEO Demis Hassabis on Tuesday at the Google I/O conference keynote in Mountain View, California.

Hassabis called Astra "a universal agent helpful in everyday life." During a demonstration, the research model showcased its capabilities by identifying sound-producing objects, providing creative alliterations, explaining code on a monitor, and locating misplaced items. The AI assistant also exhibited its potential in wearable devices, such as smart glasses, where it could analyze diagrams, suggest improvements, and generate witty responses to visual prompts.

Google says that Astra uses the camera and microphone on a user's device to provide assistance in everyday life. By continuously processing and encoding video frames and speech input, Astra creates a timeline of events and caches the information for quick recall. The company says that this enables the AI to identify objects, answer questions, and remember things it has seen that are no longer in the camera's frame.
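
Google hasn't published Astra's internals, but the description above suggests a simple shape: encode each frame or utterance as it arrives, append it to a timeline, and search that timeline at question time. Here is a minimal Python sketch of that ingest-and-recall loop; every name in it (TimelineEvent, AstraTimeline, the plain-text summaries standing in for learned encodings) is hypothetical.

```python
# Hypothetical sketch of continuous ingest plus cached recall, based only on
# the description above; not Google's actual Astra code.
import time
from dataclasses import dataclass, field

@dataclass
class TimelineEvent:
    timestamp: float
    modality: str   # "video" or "speech"
    summary: str    # plain-text stand-in for a real learned encoding

@dataclass
class AstraTimeline:
    events: list[TimelineEvent] = field(default_factory=list)

    def ingest(self, modality: str, summary: str) -> None:
        # Continuously append encoded frames and utterances as they arrive.
        self.events.append(TimelineEvent(time.time(), modality, summary))

    def recall(self, keyword: str) -> TimelineEvent | None:
        # Scan newest-first, so "where did you last see my glasses?" returns
        # the most recent sighting even if the object is off-camera now.
        for event in reversed(self.events):
            if keyword in event.summary:
                return event
        return None

# timeline = AstraTimeline()
# timeline.ingest("video", "glasses on a desk near a red apple")
# timeline.recall("glasses")  # returns the cached event after the frame is gone
```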


Google expands digital watermarks to AI-made video and text

Engadget - Tue, 05/14/2024 - 13:55

As Google starts to make its latest video-generation tools available, the company says it has a plan to ensure transparency around the origins of its increasingly realistic AI-generated clips. All video made by the company’s new Veo model in the VideoFX app will have digital watermarks thanks to Google’s SynthID system. Furthermore, SynthID will be able to watermark AI-generated text that comes from Gemini.

SynthID is Google’s digital watermarking system that started rolling out to AI-generated images last year. The tech embeds imperceptible watermarks into AI-made content so that AI detection tools can recognize that the content was generated by AI. Considering that Veo, the company’s latest video generation model previewed onstage at I/O, can create longer and higher-res clips than what was previously possible, tracking the source of such content will be increasingly important.
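
Google hasn't spelled out how SynthID embeds those watermarks, but the general idea behind statistical text watermarks can be shown with a toy "green list" scheme in the style of published academic work (Kirchenbauer et al.), not Google's implementation: generation secretly favors a keyed subset of tokens, and a detector later checks whether a text contains suspiciously many of them.

```python
# Illustrative toy only: NOT SynthID. A keyed hash splits the vocabulary into
# "green" and "red" halves per context; a watermarking sampler would bias its
# choices toward green tokens, and the detector below measures the excess.
import hashlib

def is_green(prev_token: str, token: str, key: str = "secret") -> bool:
    # Deterministically assign each (context, token) pair to the green list.
    digest = hashlib.sha256(f"{key}|{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0   # roughly half of all pairs are "green"

def green_fraction(tokens: list[str], key: str = "secret") -> float:
    # Unwatermarked text scores near 0.5; text produced by a green-biased
    # sampler scores well above it, which is the detectable signal.
    hits = sum(is_green(prev, tok, key) for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)
```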

As generative AI models advance, more companies have turned to watermarking amid fears that AI could fuel a new wave of misinformation. Watermarking systems would give platforms like Google a framework for detecting AI-generated content that may otherwise be impossible to distinguish. TikTok and Meta have also recently announced plans to support similar detection tools on their platforms and label more AI content in their apps.

Of course, there are still significant questions about whether digital watermarks on their own offer sufficient protection against deceptive AI content. Researchers have shown that watermarks can be easy to evade. But making AI-made content detectable in some way is an important first step toward transparency.


This article originally appeared on Engadget at https://www.engadget.com/google-expands-digital-watermarks-to-ai-made-video-175232320.html?src=rss

Gemini will be accessible in the side panel on Google apps like Gmail and Docs

Engadget - Tue, 05/14/2024 - 13:54

Google is adding Gemini-powered AI automation to more tasks in Workspace. In its Tuesday Google I/O keynote, the company said its advanced Gemini 1.5 Pro will soon be available in the Workspace side panel as “the connective tissue across multiple applications with AI-powered workflows,” as AI grows more intelligent, learns more about you and automates more of your workflow.

Gemini’s job in Workspace is to save you the time and effort of digging through files, emails and other data from multiple apps. “Workspace in the Gemini era will continue to unlock new ways of getting things done,” Google Workspace VP Aparna Pappu said at the event.

The refreshed Workspace side panel, coming first to Gmail, Docs, Sheets, Slides and Drive, will let you chat with Gemini about your content. Its longer context window (essentially, its memory) allows it to organize, understand and contextualize your data from different apps without leaving the one you’re in. This includes things like comparing receipt attachments, summarizing (and answering back-and-forth questions about) long email threads, or highlighting key points from meeting recordings.


Another example Google provided was planning a family reunion when your grandmother asks for hotel information. With the Workspace side panel, you can ask Gemini to find the Google Doc with the booking information by using the prompt, “What is the hotel name and sales manager email listed in @Family Reunion 2024?” Google says it will find the document and give you a quick answer, allowing you to insert it into your reply as you save time by faking human authenticity for poor Grandma.
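
Google didn't say how that @-mention lookup works under the hood. Purely as a guess at its shape, here's a tiny retrieval-then-extraction sketch in which the document store, the field patterns and all the names are invented for illustration.

```python
# Hypothetical sketch: resolve an @-mentioned document, then extract the
# fields the user asked about. None of this is Google's actual API.
import re

docs = {
    "Family Reunion 2024": "Hotel: Seaside Inn. Sales manager: ana@seasideinn.example",
}

def answer(prompt: str) -> str:
    mention = re.search(r"@([\w ]+)", prompt)   # e.g. "@Family Reunion 2024"
    if not mention:
        return "No document mentioned"
    doc = docs.get(mention.group(1).strip(), "")
    hotel = re.search(r"Hotel: ([^.]+)", doc)
    email = re.search(r"[\w.]+@[\w.]+", doc)
    if not (hotel and email):
        return "Not found"
    return f"{hotel.group(1)}, contact: {email.group(0)}"

# answer("What is the hotel name and sales manager email listed in @Family Reunion 2024?")
# -> "Seaside Inn, contact: ana@seasideinn.example"
```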

The email-based changes are coming to the Gmail mobile app, too. “Gemini will soon be able to analyze email threads and provide a summarized view with the key highlights directly in the Gmail app, just as you can in the side panel,” the company said.

Summarizing in the Gmail app is coming to Workspace Labs this month. Meanwhile, the upgraded Workspace side panel will arrive starting Tuesday for Workspace Labs and Gemini for Workspace Alpha users. Google says all the features will arrive for the rest of Workspace customers and Google One AI Premium users next month.


This article originally appeared on Engadget at https://www.engadget.com/gemini-will-be-accessible-in-the-side-panel-on-google-apps-like-gmail-and-docs-185406695.html?src=rss

Google is “reimagining” search in “the Gemini era” with AI Overviews

Ars Technica - Tue, 05/14/2024 - 13:33

"Google will do the Googling for you," says firm's search chief. (credit: Google)

Search is still important to Google, but soon it will change. At its AI-focused Google I/O event on Tuesday, the company introduced a host of AI-enabled features coming to Google Search at various points in the near future, which will "do more for you than you ever imagined."

"Google will do the Googling for you," said Liz Reid, Google's head of Search.

It's not AI in every search, but it will seemingly be hard to avoid a lot of offers to help you find, plan, and brainstorm things. "AI Overviews," the successor to the Search Generative Experience, will provide summary answers to questions, along with links to sources. You can also soon submit a video as a search query, perhaps to identify objects or provide your own prompts by voice.


Google Gemini can power a virtual AI teammate with its own Workspace account

Engadget - Tue, 05/14/2024 - 13:28

Google's Gemini AI systems can do a lot, judging by today's I/O keynote. That includes the option to set up a virtual teammate with its own Workspace account. You can configure the teammate to carry out specific tasks, such as monitoring and tracking projects, organizing information, providing context, pinpointing trends in data and playing a role in team collaboration.

In Google Chat, the teammate can join all relevant rooms and you can ask it questions based on all the conversation histories, Gmail threads and anything else it has access to. It can tell team members whether their projects are approved or if there might be an issue based on conflicting messages. 

It seems like the virtual teammate is just a tech demo for now, however. Aparna Pappu, vice president and GM of Workspace, said Google has "a lot of work to do to figure out how to bring these agentive experiences, like virtual teammates, into Workspace." That includes finding ways to let third parties make their own versions.

While it doesn't seem like this virtual teammate will be available soon, it could eventually prove to be a serious timesaver, as long as you trust it to get everything right the first time around.


This article originally appeared on Engadget at https://www.engadget.com/google-gemini-can-power-a-virtual-ai-teammate-with-its-own-workspace-account-182809274.html?src=rss

Google announces new scam detection tools that provide real-time alerts during phone calls

Engadget - Tue, 05/14/2024 - 13:14

Google just announced scam detection tools coming to Android phones later this year, which is a good thing as these scammers keep getting better and better at parting people from their money. The toolset, revealed at Google I/O 2024, is still in the testing stages but uses AI to suss out fraudsters in the middle of a conversation.

You read that right. The AI will be constantly on the hunt for conversation patterns commonly associated with scams. Once detected, you’ll receive a real-time alert on the phone, putting to bed any worries that the person on the other end is actually heading over to deliver a court summons or whatever.

Google gives the example of a "bank representative" asking for personal information, like PINs and passwords. Since banks don't typically make such requests, the AI would flag them and issue an alert. Everything happens on the device, so the conversation stays private. This feature isn't coming to Android 15 right away, and the company says it'll share more details later in the year. We do know that people will have to opt in to use the tool.
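
Google hasn't shared how the underlying model classifies a call, so here's a deliberately crude keyword version just to show the shape of the feature: scan the live transcript on the device and raise an alert when risky patterns co-occur. The patterns and names below are made up.

```python
# Toy on-device scam flagging: NOT Google's classifier, just an illustration
# of transcript-pattern matching with no network round trip.
SCAM_PATTERNS = [
    ("bank", "pin"),
    ("bank", "password"),
    ("gift card", "payment"),
    ("wire", "urgent"),
]

def scan_transcript(transcript: str) -> list[str]:
    text = transcript.lower()
    return [
        f"Possible scam: '{a}' mentioned together with '{b}'"
        for a, b in SCAM_PATTERNS
        if a in text and b in text
    ]

# scan_transcript("This is your bank. Please confirm your PIN to proceed.")
# -> ["Possible scam: 'bank' mentioned together with 'pin'"]
```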

Google made a big move with Android 15, bringing its Gemini chatbot to actual devices instead of requiring a connection to the cloud. In addition to this scam detection tech, the addition of onboard AI will allow for many more features, like contextual awareness when using apps.


This article originally appeared on Engadget at https://www.engadget.com/google-announces-new-scam-detection-tools-that-provide-real-time-alerts-during-phone-calls-181442091.html?src=rss

With Gemini Live, Google wants you to relax and have a natural chat with AI

Engadget - Tue, 05/14/2024 - 13:13

While Google and OpenAI have been racing to win the AI crown over the past year, we've seemingly drifted away from the idea of speaking to virtual assistants. Generative AI products have typically launched with text-only inputs, only later adding the ability to search images and handle basic voice commands. At Google I/O today, the company showed off Gemini Live, a new mobile experience for natural conversations with its AI.

Google offered up a few potential use cases: you could have a conversation with Gemini Live to help prepare for a job interview, where it could ask you relevant questions about the position. It could also give you public speaking tips if you want to rehearse a speech. What makes Gemini Live unique is that you'll be able to speak at your own pace, or even interrupt its responses if you'd like. Ideally, it should be more like having a conversation with a person than voicing smart assistant commands or generative AI queries.

At I/O, Google also showed off Project Astra, a next-generation virtual assistant that takes the concept of Gemini Live even further. Astra is able to view your camera feed and answer questions in real-time. It's unclear how long that'll take to arrive, but Google says some of Astra's live video features will come to Gemini Live later this year. Gemini Live will be available for Gemini Advanced subscribers in the next few months.


This article originally appeared on Engadget at https://www.engadget.com/with-gemini-live-google-wants-you-to-relax-and-have-a-natural-chat-with-ai-181329788.html?src=rss

Google's Gemini Nano brings better image-description smarts to its TalkBack vision tool

Engadget - Tue, 05/14/2024 - 13:07

The Google I/O event is here, and the company is announcing lots of great updates for your Android device. As we heard earlier, Gemini Nano is getting multimodal support, meaning your Android will still process text but with a better understanding of other factors like sights, sounds and spoken language. Now Google has shared that the new tool is also coming to its TalkBack feature.

TalkBack is an existing tool that reads aloud a description of an image, whether it's one you captured or from the internet. Gemini Nano's multimodal support should provide a more detailed understanding of the image. According to Google, TalkBack users encounter about 90 images each day that don't have a label. Gemini Nano should be able to provide missing information, such as what an item of clothing looks like or the details of a new photo sent by a friend. 

Gemini Nano works directly on a person's device, meaning it should still function properly without any network connection. While we don't yet have an exact date for when it will arrive, Google says TalkBack will get Gemini Nano's updated features later this year.


This article originally appeared on Engadget at https://www.engadget.com/googles-gemini-nano-brings-better-image-description-smarts-to-its-talkback-vision-tool-180759598.html?src=rss

Google builds Gemini right into Android, adding contextual awareness within apps

Engadget - Tue, 05/14/2024 - 13:04

Google just announced some nifty improvements to its Gemini AI chatbot for Android devices as part of the company’s I/O 2024 event. The AI is now part of the Android operating system, allowing it to integrate in a more comprehensive way.

The coolest new feature wouldn’t be possible without that integration with the underlying OS. Gemini is now much better at understanding context as you control apps on the smartphone. What does this mean exactly? Once the tool officially launches as part of Android 15, you’ll be able to bring up a Gemini overlay that rests on top of the app you’re using. This will allow for context-specific actions and queries.

Google gives the example of quickly dropping generated images into Gmail and Google Messages, though you may want to steer clear of historical images for now. The company also teased a feature called “Ask This Video” that lets users pose questions about a particular YouTube video, which the chatbot should be able to answer. Google says this should work with "billions" of videos. There's a similar tool coming for PDFs. 


It's easy to see where this tech is going. Once Gemini has access to the lion's share of your app library, it should be able to actually deliver on some of those lofty promises made by rival AI companies like Humane and Rabbit. Google says it's "just getting started with how on-device AI can change what your phone can do" so we imagine future integration with apps like Uber and DoorDash, at the very least.

Circle to Search is also getting a boost thanks to on-board AI. Users will be able to circle just about anything on their phone and receive relevant information. Google says people will be able to do this without having to switch apps. This even extends to math and physics problems: just circle one for the answer, which is likely to please students and frustrate teachers.


This article originally appeared on Engadget at https://www.engadget.com/google-builds-gemini-right-into-android-adding-contextual-awareness-within-apps-180413356.html?src=rss

Android's Circle to Search can now help students solve math and physics homework

Engadget - Tue, 05/14/2024 - 13:02

Google has introduced another capability for its Circle to Search feature at the company's annual I/O developer conference, and it's something that could help students better understand potentially difficult class topics. The feature will now be able to show them step-by-step instructions for a "range of physics and math word problems." They just have to activate the feature by long-pressing the home button or navigation bar and then circling the problem that's got them stumped, though some math problems will require users to be signed up for Google's experimental Search Labs feature.

The company says Circle to Search's new capability was made possible by its new family of AI models called LearnLM, which was specifically created and fine-tuned for learning. It's also planning to make adjustments to this particular capability and to roll out an upgraded version later this year that could solve even more complex problems "involving symbolic formulas, diagrams, graphs and more." Google launched Circle to Search earlier this year at a Samsung Unpacked event, where the feature debuted on the Galaxy S24, as well as on Pixel 8 devices. It's now also out for the Galaxy S23, Galaxy S22, Z Fold, Z Flip, Pixel 6 and Pixel 7 devices, and it'll likely make its way to more hardware in the future.

In addition to the new Circle to Search capability, Google has also revealed that devices that can support the Gemini for Android chatbot assistant will now be able to bring it up as an overlay on top of the application that's currently open. Users can then drag and drop images straight from the overlay into apps like Gmail, for instance, or use the overlay to look up information without having to swipe away from whatever they're doing. They can tap "Ask this video" to find specific information within a YouTube video that's open, and if they have access to Gemini Advanced, they can use the "Ask this PDF" option to find information from within lengthy documents. 

Google is also rolling out multimodal capabilities to Nano, the smallest model in the Gemini family that can process information on-device. The updated Gemini Nano, which will be able to process sights, sounds and spoken language, is coming to Google's TalkBack screen reader later this year. Gemini Nano will enable TalkBack to describe images onscreen more quickly and even without an internet connection. Finally, Google is currently testing a Gemini Nano feature that can alert users while a call is ongoing if it detects common conversation patterns associated with scams. Users will be alerted, for instance, if they're talking to someone asking them for their PINs or passwords or to someone asking them to buy gift cards. 


This article originally appeared on Engadget at https://www.engadget.com/androids-circle-to-search-can-now-help-students-solve-math-and-physics-homework-180223229.html?src=rss

Google's Gemini will search your videos to help you solve problems

Engadget - Tue, 05/14/2024 - 12:52

As part of its push toward adding generative AI to search, Google has introduced a new twist: video. Gemini will let you upload video that demonstrates an issue you're trying to resolve, then scour user forums and other areas of the internet to find a solution. 

As an example, Google's Rose Yao talked onstage at I/O 2024 about a used turntable she bought and how she couldn't get the needle to sit on the record. Yao uploaded a video showing the issue, then Gemini quickly found an explainer describing how to balance the arm on that particular make and model. 


"Search is so much more than just words in a text box. Often the questions you have are about the things you see around you, including objects in motion," Google wrote. "Searching with video saves you the time and trouble of finding the right words to describe this issue, and you’ll get an AI Overview with steps and resources to troubleshoot."

If the video alone doesn't make it clear what you're trying to figure out, you can add text or draw arrows that point to the issue in question. 

OpenAI just introduced GPT-4o with the ability to interpret live video in real time, then describe a scene or even sing a song about it. Google, however, is taking a different tack with video by focusing on its Search product for now. Searching with video is coming to Search Labs US users in English to start with, but will expand to more regions over time, the company said.


This article originally appeared on Engadget at https://www.engadget.com/googles-gemini-will-search-your-videos-to-help-you-solve-problems-175235105.html?src=rss

Google Search will now show AI-generated answers to millions by default

Engadget - Tue, 05/14/2024 - 12:45

Google is shaking up Search. On Tuesday, the company announced big new AI-powered changes to the world’s dominant search engine at I/O, Google’s annual conference for developers. With the new features, Google is positioning Search as more than a way to simply find websites. Instead, the company wants people to use its search engine to directly get answers and help them with planning events and brainstorming ideas.

“[With] generative AI, Search can do more than you ever imagined,” wrote Liz Reid, vice president and head of Google Search, in a blog post. “So you can ask whatever’s on your mind or whatever you need to get done — from researching to planning to brainstorming — and Google will take care of the legwork.”

Google's changes to Search, the primary way that the company makes money, are a response to the explosion of generative AI ever since OpenAI's ChatGPT was released at the end of 2022. Since then, a handful of AI-powered apps and services, including ChatGPT, Anthropic's Claude, Perplexity, and Microsoft's Bing, which is powered by OpenAI's GPT-4, have challenged Google's flagship service by directly providing answers to questions instead of simply presenting people with a list of links. This is the gap that Google is racing to bridge with its new features in Search.

Starting today, Google will show complete AI-generated answers in response to most search queries at the top of the results page in the US. Google first unveiled the feature a year ago at Google I/O in 2023, but so far, anyone who wanted to use it had to sign up for the company's Search Labs platform, which lets people try out upcoming features ahead of their general release. Google is now making AI Overviews available to hundreds of millions of Americans, and says it expects the feature to reach more than a billion people in additional countries by the end of the year. Reid wrote that people who opted to try the feature through Search Labs have used it "billions of times" so far, and said that any links included as part of the AI-generated answers get more clicks than if the page had appeared as a traditional web listing, something that publishers have been concerned about. "As we expand this experience, we'll continue to focus on sending valuable traffic to publishers and creators," Reid wrote.

In addition to AI Overviews, certain queries around dining and recipes (and later movies, music, books, hotels, shopping and more) in English in the US will show a new search page where results are organized using AI. "[When] you're looking for ideas, Search will use generative AI to brainstorm with you and create an AI-organized results page that makes it easy to explore," Reid said in the blog post.


If you opt in to Search Labs, you'll be able to access even more features powered by generative AI in Google Search. You'll be able to get AI Overviews to simplify the language or break down a complex topic in more detail, with one example being a query asking Google to explain the connection between lightning and thunder.


Search Labs testers will also be able to ask Google really complex questions in a single query to get answers on a single page instead of having to do multiple searches. The example that Google’s blog post gives: “Find the best yoga or pilates studios in Boston and show details on their intro offers and walking time from Beacon Hill.” In response, Google shows the highest-rated yoga and pilates studios near Boston’s Beacon Hill neighborhood and even puts them on a map for easy navigation.
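
Google credits this kind of compound answer to multi-step reasoning layered on its existing search stack (see the model note at the end of this article). One guess at the shape of that pipeline is a planner that splits the question into sub-searches and fans them out, sketched below with a hard-coded plan standing in for the model.

```python
# Hypothetical sketch of compound-query handling; the real planner would be
# the Gemini model, not this hard-coded list.
def plan(query: str) -> list[str]:
    # Stand-in decomposition of the yoga/pilates example above.
    return [
        "top-rated yoga or pilates studios near Beacon Hill, Boston",
        "intro offers at those studios",
        "walking time from Beacon Hill to each studio",
    ]

def answer_compound(query: str, search) -> dict[str, list[str]]:
    # Fan out one classic search per planned step, keyed by the step text.
    return {step: search(step) for step in plan(query)}
```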


Google also wants to become a meal and vacation planner, letting people who sign up for Search Labs ask queries like "create a 3 day meal plan for a group that's easy to prepare" and then swap out individual results in the AI-generated plan for something else (a meat-based dish for a vegetarian one, for instance).


Finally, Google will eventually let anyone who signs up for Search Labs use a video as a search query instead of text or images. "Maybe you bought a record player at a thrift shop, but it's not working when you turn it on and the metal piece with the needle is drifting unexpectedly," wrote Reid in Google's blog post. "Searching with video saves you the time and trouble of finding the right words to describe this issue, and you'll get an AI Overview with steps and resources to troubleshoot."

Google said that all these new capabilities are powered by a brand new Gemini model customized for Search that combines Gemini’s advanced multi-step reasoning and multimodal abilities with Google’s traditional search systems.


This article originally appeared on Engadget at https://www.engadget.com/google-search-will-now-show-ai-generated-answers-to-millions-by-default-174512845.html?src=rss

AI in Gmail will sift through emails, provide search summaries, send emails

Ars Technica - Tue, 05/14/2024 - 12:44

Google's Gemini AI often just feels like a chatbot built into a text-input field, but you can really start to do special things when you give it access to a ton of data. Gemini in Gmail will soon be able to search through your entire backlog of emails and show a summary in a sidebar.

That's simple to describe but solves a huge problem with email: even searching brings up a list of email subjects, and you have to click through to each one just to read it. Having an AI sift through a bunch of emails and provide a summary sounds like a huge time saver and something you can't do with any other interface.

Google's one-minute demo of this feature showed a big blue Gemini button at the top right of the Gmail web app. Tapping it opens the normal chatbot sidebar you can type in. Asking for a summary of emails from a certain contact will get you a bullet-point list of what has been happening, with a list of "sources" at the bottom that will jump you right to a certain email. In the last second of the demo, the user types, "Reply saying I want to volunteer for the parent's group event," hits "enter," and then the chatbot instantly, without confirmation, sends an email.
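
One design point in that demo worth making explicit: each bullet keeps a pointer back to the email it came from, which is what makes the "sources" list clickable. A minimal sketch, with a one-line stand-in for the actual Gemini call and entirely hypothetical names:

```python
# Hypothetical summarize-with-sources flow; the truncated body below stands in
# for the model-written bullet, and msg_id is what a "source" link would use.
from dataclasses import dataclass

@dataclass
class Email:
    msg_id: str
    sender: str
    body: str

def summarize_from(inbox: list[Email], contact: str) -> list[tuple[str, str]]:
    relevant = [e for e in inbox if e.sender == contact]
    # Stand-in for the model: first sentence of each email as its bullet.
    return [(e.body.split(".")[0], e.msg_id) for e in relevant]

# inbox = [Email("m1", "school@example.com", "The parents' event is Friday. RSVP soon.")]
# summarize_from(inbox, "school@example.com") -> [("The parents' event is Friday", "m1")]
```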


Google unveils Veo and Imagen 3, its latest AI media creation models

Engadget - Tue, 05/14/2024 - 12:36

It's all AI all the time at Google I/O! Today, Google announced its new AI media creation engines: Veo, which can produce "high-quality" 1080p videos; and Imagen 3, its latest text-to-image framework. Neither sounds particularly revolutionary, but they're a way for Google to keep up the fight against OpenAI's Sora video model and DALL-E 3, a tool that has practically become synonymous with AI-generated images.

Google claims Veo has "an advanced understanding of natural language and visual semantics" to create whatever video you have in mind. The AI-generated videos can last "beyond a minute." Veo is also capable of understanding cinematic and visual techniques, like the concept of a timelapse. But really, that should be table stakes for an AI video generation model, right?

To prove that Veo isn't out to steal artists' jobs, Google has also partnered with Donald Glover and Gilga, his creative studio, to show off the model's capabilities. In a very brief promotional video, we see Glover and crew using text to create video of a convertible arriving at a European home, and a sailboat gliding through the ocean. According to Google, Veo can simulate real-world physics better than its previous models, and it's also improved how it renders high-definition footage.

"Everybody's going to become a director, and everybody should be a director," Glover says in the video, absolutely earning his Google paycheck. "At the heart of all of this is just storytelling. The closer we are to be able to tell each other our stories, the more we'll understand each other."

It remains to be seen if anyone will actually want to watch AI-generated video, outside of the morbid curiosity of seeing a machine attempt to algorithmically recreate the work of human artists. But that's not stopping Google or OpenAI from promoting these tools and hoping they'll be useful (or at least, make a bunch of money). Veo will be available inside of Google's VideoFX tool today for some creators, and the company says it'll also be coming to YouTube Shorts and other products. If Veo does end up becoming a built-in part of YouTube Shorts, that's at least one feature Google can lord over TikTok.


As for Imagen 3, Google is making the usual promises: It's said to be the company's "highest quality" text-to-image model, with an "incredible level of detail" for "photorealistic, lifelike images" and fewer artifacts. The real test, of course, will be to see how it handles prompts compared to DALL-E 3. Imagen 3 handles text better than before, Google says, and it's also smarter about handling details from long prompts.

Google is also working with recording artists like Wyclef Jean and Bjorn to test out its Music AI Sandbox, a set of tools that can help with song and beat creation. We only saw a brief glimpse of this, but it's led to a few intriguing demos.

The sun rises and sets. We're all slowly dying. And AI is getting smarter by the day. That seems to be the big takeaway from Google's latest media creation tools. Of course they're getting better! Google is pouring billions into making the dream of AI a reality, all in a bid to own the next great leap for computing. Will any of this actually make our lives better? Will they ever be able to generate art with genuine soul? Check back at Google I/O every year until AGI actually appears, or our civilization collapses.


This article originally appeared on Engadget at https://www.engadget.com/google-unveils-veo-and-imagen-3-its-latest-ai-media-creation-models-173617373.html?src=rss

5,471-piece Lego Barad-Dûr set will turn its watchful Eye to us in June

Ars Technica - Tue, 05/14/2024 - 12:33

Here's something for any Lord of the Rings fan with a tall, narrow space available on their tchotchkes shelf: Lego has announced a $460, 5,471-piece rendition of Barad-Dûr, which viewers of the films will recognize as "that giant black tower with the flaming eye on top of it."

Sauron, Base Master of Treachery, will keep his Eye on you from atop the tower, which will actually glow thanks to a built-in light brick. The tower includes a minifig of Sauron himself, plus the Mouth of Sauron, Gollum, and a handful of Orcs.

The Lego Barad-Dûr set will launch on June 1 for Lego Insiders and June 4 for everybody else. If you buy it between June 1 and June 7, you'll also get the "Fell Beast" bonus set, with pose-able wings and a Nazgûl minifig. It doesn't seem as though this bonus set will be sold separately, making it much harder to buy the nine Nazgûl you would need to make your collection story-accurate.


Google just snuck a pair of AR glasses into a Project Astra demo at I/O

Engadget - Tue, 05/14/2024 - 12:28

In a video showcasing the prowess of Google's new Project Astra experience at I/O 2024, an unnamed person demonstrating asked Gemini "do you remember where you saw my glasses?" The AI impressively responded "Yes, I do. Your glasses were on a desk near a red apple," despite said object not actually being in view when the question was asked. But these glasses weren't your bog-standard assistive vision aid; these had a camera onboard and some sort of visual interface!

The tester picked up their glasses and put them on, and proceeded to ask the AI more questions about things they were looking at. Clearly, there is a camera on the device that's helping it take in the surroundings, and we were shown some sort of interface where a waveform moved to indicate it was listening. Onscreen captions appeared to reflect the answer that was being read aloud to the wearer, as well. So if we're keeping track, that's at least a microphone and speaker onboard too, along with some kind of processor and battery to power the whole thing. 

We only caught a brief glimpse of the wearable, but from the sneaky seconds it was in view, a few things were evident. The glasses had a simple black frame and didn't look at all like Google Glass. They didn't appear very bulky, either. 

In all likelihood, Google is not ready to actually launch a pair of glasses at I/O. It breezed right past the wearable's appearance and barely mentioned it, saying only that Project Astra and the company's vision of "universal agents" could come to devices like our phones or glasses. We don't know much else at the moment, but if you've been mourning Google Glass or the company's other failed wearable products, this might instill some hope yet.


This article originally appeared on Engadget at https://www.engadget.com/google-just-snuck-a-pair-of-ar-glasses-into-a-project-astra-demo-at-io-172824539.html?src=rss

Google's Project Astra uses your phone's camera and AI to find noise makers, misplaced items and more.

Engadget - Tue, 05/14/2024 - 12:28

When Google first showcased its Duplex voice assistant technology at its developer conference in 2018, it was both impressive and concerning. Today, at I/O 2024, the company may be bringing up those same reactions again, this time by showing off another application of its AI smarts with something called Project Astra. 

The company couldn't even wait till its keynote today to tease Project Astra, posting a video of a camera-based AI app to its social media yesterday. At its keynote today, though, Google DeepMind CEO Demis Hassabis shared that his team has "always wanted to develop universal AI agents that can be helpful in everyday life." Project Astra is the result of progress on that front.

What is Project Astra?

According to a video that Google showed during a media briefing yesterday, Project Astra appeared to be an app with a viewfinder as its main interface. A person holding up a phone pointed its camera at various parts of an office and verbally said "Tell me when you see something that makes sound." When a speaker next to a monitor came into view, Gemini responded "I see a speaker, which makes sound."

The person behind the phone stopped and drew an onscreen arrow to the top circle on the speaker and said, "What is that part of the speaker called?" Gemini promptly responded "That is the tweeter. It produces high-frequency sounds."

Then, in the video that Google said was recorded in a single take, the tester moved over to a cup of crayons further down the table and asked "Give me a creative alliteration about these," to which Gemini said "Creative crayons color cheerfully. They certainly craft colorful creations."

Wait, were those Project Astra glasses? Is Google Glass back?

The rest of the video goes on to show Gemini in Project Astra identifying and explaining parts of code on a monitor, telling the user what neighborhood they were in based on the view out the window. Most impressively, Astra was able to answer "Do you remember where you saw my glasses?" even though said glasses were completely out of frame and were not previously pointed out. "Yes, I do," Gemini said, adding "Your glasses were on a desk near a red apple."

After Astra located those glasses, the tester put them on and the video shifted to the perspective of what you'd see on the wearable. Using a camera onboard, the glasses scanned the wearer's surroundings to see things like a diagram on a whiteboard. The person in the video then asked "What can I add here to make this system faster?" As they spoke, an onscreen waveform moved to indicate it was listening, and as it responded, text captions appeared in tandem. Astra said "Adding a cache between the server and database could improve speed."
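
That whiteboard answer is itself a classic pattern, and for readers wondering what Astra meant, here it is in miniature: a read-through cache that only pays the database cost on a miss. The names are illustrative.

```python
# Minimal read-through cache, illustrating the "cache between the server and
# database" suggestion from the demo; any object with a .query(key) method
# works as the database here.
class ReadThroughCache:
    def __init__(self, database):
        self.database = database
        self.store = {}

    def get(self, key):
        if key not in self.store:            # miss: pay the slow DB cost once
            self.store[key] = self.database.query(key)
        return self.store[key]               # later reads come from memory
```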

The tester then looked over to a pair of cats doodled on the board and asked "What does this remind you of?" Astra said "Schrodinger's cat." Finally, they picked up a plush tiger toy, put it next to a cute golden retriever and asked for "a band name for this duo." Astra dutifully replied "Golden stripes."

How does Project Astra work?

This means that not only was Astra processing visual data in real time, it was also remembering what it saw and working with an impressive backlog of stored information. This was achieved, according to Hassabis, because these "agents" were "designed to process information faster by continuously encoding video frames, combining the video and speech input into a timeline of events, and caching this information for efficient recall."

It's also worth noting that, at least in the video, Astra responded quickly. Hassabis noted in a blog post that "While we've made incredible progress developing AI systems that can understand multimodal information, getting response time down to something conversational is a difficult engineering challenge."

Google has also been working on giving its AI more range of vocal expression, using its speech models to enhance "how they sound, giving the agents a wider range of intonations." This sort of mimicry of human expressiveness in responses is reminiscent of Duplex's pauses and utterances that led people to think Google's AI might be a candidate for the Turing test.

When will Project Astra be available?

While Astra remains an early feature with no discernible plans for launch, Hassabis wrote that in the future, these assistants could be available "through your phone or glasses." No word yet on whether those glasses are actually a product or the successor to Google Glass, but Hassabis did write that "some of these capabilities are coming to Google products, like the Gemini app, later this year."


This article originally appeared on Engadget at https://www.engadget.com/googles-project-astra-uses-your-phones-camera-and-ai-to-find-noise-makers-misplaced-items-and-more-172642329.html?src=rss
