Feed aggregator

Everything announced at Google I/O 2024 including Gemini AI, Project Astra, Android 15 and more

Engadget - Tue, 05/14/2024 - 16:04

At the end of I/O, Google’s annual developer conference at the Shoreline Amphitheater in Mountain View, Google CEO Sundar Pichai revealed that the company had said “AI” 121 times. That, essentially, was the crux of Google’s two-hour keynote — stuffing AI into every Google app and service used by more than two billion people around the world. Here are all the major updates that Google announced at the event.

Gemini 1.5 Flash and updates to Gemini 1.5 Pro

Google announced a brand new AI model called Gemini 1.5 Flash, which it says is optimized for speed and efficiency. Flash sits between Gemini 1.5 Pro and Gemini 1.5 Nano, the company’s smallest model, which runs locally on device. Google said it created Flash because developers wanted a lighter and less expensive model than Gemini Pro for building AI-powered apps and services, while keeping features like the one-million-token context window that differentiates Gemini Pro from competing models. Later this year, Google will double Gemini’s context window to two million tokens, which means it will be able to process two hours of video, 22 hours of audio, more than 60,000 lines of code or more than 1.4 million words at the same time.
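Those capacity figures imply rough per-unit token costs. As a back-of-envelope sketch, the rates below are derived purely by dividing the stated two-million-token window by each claimed capacity; they are not Google's published tokenizer numbers:

```python
# Implied token costs from the stated 2M-token context window.
# These rates are inferred from the article's figures, not official.

CONTEXT_TOKENS = 2_000_000

tokens_per_word = CONTEXT_TOKENS / 1_400_000    # ~1.43 tokens per word
tokens_per_code_line = CONTEXT_TOKENS / 60_000  # ~33 tokens per line of code
tokens_per_audio_hour = CONTEXT_TOKENS / 22     # ~91,000 tokens per hour of audio
tokens_per_video_hour = CONTEXT_TOKENS / 2      # 1,000,000 tokens per hour of video

print(f"{tokens_per_word:.2f} tokens/word")
print(f"{tokens_per_code_line:.0f} tokens/code line")
print(f"{tokens_per_audio_hour:,.0f} tokens/audio hour")
```

The ~1.4 tokens-per-word figure is in line with common English tokenization rates, which lends the word-count claim some plausibility; video is by far the most token-hungry medium.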

Project Astra

Google showed off Project Astra, an early version of a universal assistant powered by AI that Google’s DeepMind CEO Demis Hassabis said was Google’s version of an AI agent “that can be helpful in everyday life.”

In a video that Google says was shot in a single take, an Astra user moves around Google’s London office holding up their phone and pointing the camera at various things — a speaker, some code on a whiteboard, and out a window — and has a natural conversation with the app about what it sees. In one of the video’s most impressive moments, Astra correctly tells the user where she left her glasses, without the user ever having brought them up.

The video ends with a twist — when the user finds and wears the missing glasses, we learn that they have an onboard camera system and can use Project Astra to seamlessly carry on a conversation with the user, perhaps indicating that Google is working on a competitor to Meta’s Ray-Ban smart glasses.

Ask Google Photos

Google Photos was already intelligent when it came to searching for specific images or videos, but with AI, Google is taking things to the next level. If you’re a Google One subscriber in the US, you will be able to ask Google Photos a complex question like “show me the best photo from each national park I’ve visited” when the feature rolls out over the next few months. Google Photos will use GPS information as well as its own judgment of what is “best” to present you with options. You can also ask Google Photos to generate captions for posting the photos to social media.

Veo and Imagen 3

Google’s new AI-powered media creation engines are called Veo and Imagen 3. Veo is Google’s answer to OpenAI’s Sora. It can produce “high-quality” 1080p videos that can last “beyond a minute”, Google said, and can understand cinematic concepts like a timelapse.

Imagen 3, meanwhile, is a text-to-image generator that Google claims handles text better than its previous version, Imagen 2. The result is the company’s “highest quality” text-to-image model with an “incredible level of detail” for “photorealistic, lifelike images” and fewer artifacts — essentially pitting it against OpenAI’s DALL-E 3.

Big updates to Google Search


Google is making big changes to how Search fundamentally works. Most of the updates announced today, like the ability to ask really complex questions (“Find the best yoga or pilates studios in Boston and show details on their intro offers and walking time from Beacon Hill.”) and using Search to plan meals and vacations, won’t be available unless you opt in to Search Labs, the company’s platform that lets people try out experimental features.

But a big new feature Google is calling AI Overviews, which the company has been testing for a year now, is finally rolling out to millions of people in the US. Google Search will now present AI-generated answers on top of the results by default, and the company says it will bring the feature to more than a billion users around the world by the end of the year.

Gemini on Android


Google is integrating Gemini directly into Android. When Android 15 is released later this year, Gemini will be aware of the app, image or video that you’re running, and you’ll be able to pull it up as an overlay and ask it context-specific questions. Where does that leave Google Assistant, which already does this? Who knows! Google didn’t bring it up at all during today’s keynote.

There were a bunch of other updates too. Google said it would add digital watermarks to AI-generated video and text, make Gemini accessible in the side panel in Gmail and Docs, power a virtual AI teammate in Workspace, listen in on phone calls and detect if you’re being scammed in real time, and a lot more.


Catch up on all the news from Google I/O 2024 right here!

This article originally appeared on Engadget at https://www.engadget.com/everything-announced-at-google-io-2024-including-gemini-ai-project-astra-android-15-and-more-210414580.html?src=rss
Categories: Technology

Patch Tuesday, May 2024 Edition

Krebs on Security - Tue, 05/14/2024 - 15:19

Microsoft today released updates to fix more than 60 security holes in Windows computers and supported software, including two “zero-day” vulnerabilities in Windows that are already being exploited in active attacks. There are also important security patches available for macOS and Adobe users, and for the Chrome Web browser, which just patched its own zero-day flaw.

First, the zero-days. CVE-2024-30051 is an “elevation of privilege” bug in a core Windows library. Satnam Narang at Tenable said this flaw is being used as part of post-compromise activity to elevate privileges as a local attacker.

CVE-2024-30040 is a security feature bypass in MSHTML, a component that is deeply tied to the default Web browser on Windows systems. Microsoft’s advisory on this flaw is fairly sparse, but Kevin Breen from Immersive Labs said this vulnerability also affects Office 365 and Microsoft Office applications.

“CVE-2024-30040 is used to gain initial access into a target environment and requires the use of social engineering tactics via email, social media or instant messaging to convince a target to open a specially crafted document file,” Narang said. “Once exploited, the attacker can bypass OLE mitigations in Microsoft 365 and Microsoft Office, which are security features designed to protect end users from malicious files.”

“Very little information is provided and the short description is painfully obtuse,” Breen said of Microsoft’s advisory on CVE-2024-30040.

Meanwhile, Kaspersky Lab, one of two companies credited with reporting exploitation of CVE-2024-30051 to Microsoft, has published a fascinating writeup on how it discovered the exploit in a file shared with VirusTotal.com.

Kaspersky said it has since seen the exploit used together with QakBot and other malware. Emerging in 2007 as a banking trojan, QakBot (a.k.a. Qbot and Pinkslipbot) has morphed into an advanced malware strain now used by multiple cybercriminal groups to prepare newly compromised networks for ransomware infestations.

The only vulnerability fixed this month that earned Microsoft’s most-dire “critical” rating is CVE-2024-30044, a flaw in SharePoint that Microsoft said is likely to be exploited. Tenable’s Narang notes that exploiting this bug requires an attacker to already be authenticated to a vulnerable SharePoint Server with Site Owner permissions (or higher) and to take additional steps, which makes it less likely to be widely exploited, as most attackers follow the path of least resistance.

Five days ago, Google released a security update for Chrome that fixes a zero-day in the popular browser. Chrome usually auto-downloads any available updates, but it still may require a complete restart of the browser to install them. If you use Chrome and see a “Relaunch to update” message in the upper right corner of the browser, it’s time to restart.

Apple has just shipped the macOS Sonoma 14.5 update, which includes nearly two dozen security patches. To ensure your Mac is up to date, go to System Settings, select the General tab, then Software Update, and follow any prompts.

Finally, Adobe has critical security patches available for a range of products, including Acrobat, Reader, Illustrator, Adobe Substance 3D Painter, Adobe Aero, Adobe Animate and Adobe Framemaker.

Regardless of whether you use a Mac or Windows system (or something else), it’s always a good idea to back up your data and/or system before applying any security updates. For a closer look at the individual fixes released by Microsoft today, check out the complete list over at the SANS Internet Storm Center. Anyone in charge of maintaining Windows systems in an enterprise environment should keep an eye on askwoody.com, which usually has the scoop on any wonky Windows patches.

Categories: Technology

2023 temperatures were warmest we’ve seen for at least 2,000 years

Ars Technica - Tue, 05/14/2024 - 15:17

Top: a look through the past 2,000 years of summertime temperatures, showing that 2023 is considerably warmer than anything earlier. Bottom: a bell curve of the typical temperatures, showing that the hot outliers are all recent years. (credit: Esper, Torbenson, and Büntgen)

Starting in June of last year, global temperatures went from very hot to extreme. Every single month since June, the globe has experienced the hottest temperatures for that month on record—that's 11 months in a row now, enough to ensure that 2023 was the hottest year on record, and 2024 will likely be similarly extreme.

There's been nothing like this in the temperature record, and it acts as an unmistakable indication of human-driven warming. But how unusual is that warming compared to what nature has thrown at us in the past? While it's not possible to provide a comprehensive answer to that question, three European researchers (Jan Esper, Max Torbenson, and Ulf Büntgen) have provided a partial answer: the Northern Hemisphere hasn't seen anything like this in over 2,000 years.

Tracking past temperatures

Current temperature records are based on a global network of data-gathering hardware. But, as you move back in time, gaps in that network go from rare to ever more common. Moving backwards from 1900, the network shrinks to just a few dozen land-based thermometers, almost all of them in Europe.


Categories: Technology

Android’s AI era includes eavesdropping on phone calls, warning you about scams

Ars Technica - Tue, 05/14/2024 - 14:57

Google's "code red" demands that AI be part of every single Google product, and that includes Android. At Google I/O, the company announced a "multi-year journey to reimagine Android with AI at the core" but only demoed a few minor AI enhancements.

Gemini can soon be brought up via the power button as an overlay panel, where it will have access to whatever's on your screen. The demo involved opening a PDF in Android's PDF reader, summarizing it, and answering questions based on the content. You can do something similar with a YouTube video. The demo also showed generating images based on a text prompt and then sending those images in a text message. Another demo involved Gemini understanding a chat log and suggesting future actions.

Talkback, Android's system for low-vision users, will soon be able to use AI to describe images that lack descriptive text.


Categories: Technology

Animal Well speedrunners are already beating the game in under five minutes

Engadget - Tue, 05/14/2024 - 14:53

Animal Well is one of the hottest games around. It quickly shot to the top of Steam's top-seller chart after it was released to glowing reviews last Thursday. 

While most players complete the main story in four to six hours, it hasn't taken long for speedrunners to figure out how to blaze through solo developer Billy Basso's eerie labyrinth. YouTubers are already posting runs of under five minutes and the any% record (i.e. the best recorded time without any restrictions) is being smashed over and over. 

Within a couple of hours of Hubert0987 claiming the world record with a 4:44 run on Thursday, The DemonSlayer6669 appeared to snag bragging rights with one that was 18 seconds faster and perhaps the first recorded sub-4:30 time. (Don't watch the video just yet if you haven't beaten the game and would like to avoid spoilers.)

Animal Well hasn't even been out for a week, so you can expect records to keep tumbling as runners optimize routes to the game's final plunger. It's cool to already see a speedrunning community form around a new game as skilled players duke it out, perhaps for the chance to show off their skills at the next big Games Done Quick event.

This article originally appeared on Engadget at https://www.engadget.com/animal-well-speedrunners-are-already-beating-the-game-in-under-five-minutes-195259598.html?src=rss
Categories: Technology

Boeing is troubleshooting a small helium leak on the Starliner spacecraft

Ars Technica - Tue, 05/14/2024 - 14:47

A view looking down at Boeing's Starliner spacecraft and United Launch Alliance's Atlas V rocket inside the Vertical Integration Facility at Cape Canaveral Space Force Station, Florida. (credit: United Launch Alliance)

Boeing is taking a few extra days to resolve a small helium leak on the Starliner spacecraft slated to ferry two NASA astronauts on a test flight to the International Space Station, officials said Tuesday.

This means the first crew launch of Boeing's Starliner spacecraft, running years behind schedule and more than $1.4 billion over budget, won't happen before next Tuesday, May 21, at 4:43 pm EDT (20:43 UTC). Meeting this schedule assumes engineers can get comfortable with the helium leak. Officials from Boeing and NASA, which manages Boeing's multibillion-dollar Starliner commercial crew contract, previously targeted Friday, May 17, for the spacecraft's first launch with astronauts onboard.

Boeing's ground team traced the leak to a flange on a single reaction control system thruster on the spacecraft's service module.


Categories: Technology

AT&T paid bribes to get two major pieces of legislation passed, US gov’t says

Ars Technica - Tue, 05/14/2024 - 14:31

(credit: Getty Images | AaronP/Bauer-Griffin)

The US government has provided more detail on how a former AT&T executive allegedly bribed a powerful state lawmaker's ally in order to obtain legislation favorable to AT&T's business.

Former AT&T Illinois President Paul La Schiazza is set to go on trial in September 2024 after being indicted on charges of conspiracy to unlawfully influence then-Illinois House Speaker Michael Madigan. AT&T itself agreed to pay a $23 million fine in October 2022 in connection with the alleged illegal influence campaign and said it was "committed to ensuring that this never happens again."

US government prosecutors offered a preview of their case against La Schiazza in a filing on Friday in US District Court for the Northern District of Illinois. A contract lobbyist hired by AT&T "is expected to testify that AT&T successfully passed two major pieces of legislation after the company started making payments to Individual FR-1."


Categories: Technology

Google strikes back at OpenAI with “Project Astra” AI agent prototype

Ars Technica - Tue, 05/14/2024 - 14:11

A video still of the Project Astra demo at the Google I/O conference keynote in Mountain View on May 14, 2024. (credit: Google)

Just one day after OpenAI revealed GPT-4o, which it bills as being able to understand what's taking place in a video feed and converse about it, Google announced Project Astra, a research prototype that features similar video comprehension capabilities. It was announced by Google DeepMind CEO Demis Hassabis on Tuesday at the Google I/O conference keynote in Mountain View, California.

Hassabis called Astra "a universal agent helpful in everyday life." During a demonstration, the research model showcased its capabilities by identifying sound-producing objects, providing creative alliterations, explaining code on a monitor, and locating misplaced items. The AI assistant also exhibited its potential in wearable devices, such as smart glasses, where it could analyze diagrams, suggest improvements, and generate witty responses to visual prompts.

Google says that Astra uses the camera and microphone on a user's device to provide assistance in everyday life. By continuously processing and encoding video frames and speech input, Astra creates a timeline of events and caches the information for quick recall. The company says that this enables the AI to identify objects, answer questions, and remember things it has seen that are no longer in the camera's frame.


Categories: Technology

Google expands digital watermarks to AI-made video and text

Engadget - Tue, 05/14/2024 - 13:55

As Google starts to make its latest video-generation tools available, the company says it has a plan to ensure transparency around the origins of its increasingly realistic AI-generated clips. All video made by the company’s new Veo model in the VideoFX app will have digital watermarks thanks to Google’s SynthID system. Furthermore, SynthID will be able to watermark AI-generated text that comes from Gemini.

SynthID is Google’s digital watermarking system that started rolling out to AI-generated images last year. The tech embeds imperceptible watermarks into AI-made content so that AI detection tools can recognize that the content was generated by AI. Considering that Veo, the company’s latest video generation model previewed onstage at I/O, can create longer and higher-res clips than what was previously possible, tracking the source of such content will be increasingly important.

As generative AI models advance, more companies have turned to watermarking amid fears that AI could fuel a new wave of misinformation. Watermarking systems would give platforms like Google a framework for detecting AI-generated content that may otherwise be impossible to distinguish. TikTok and Meta have also recently announced plans to support similar detection tools on their platforms and label more AI content in their apps.

Of course, there are still significant questions about whether digital watermarks on their own offer sufficient protection against deceptive AI content. Researchers have shown that watermarks can be easy to evade. But making AI-made content detectable in some way is an important first step toward transparency.
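SynthID's actual embedding scheme is proprietary and operates on model outputs at generation time. As a purely illustrative stand-in, a toy text watermark using invisible Unicode characters shows both the detect-by-marker idea and why naive schemes are trivial to strip:

```python
# Toy watermark: append an invisible zero-width signature to generated text.
# This is NOT how SynthID works; it's a minimal sketch of the concept.
SIGNATURE = "\u200b\u200c\u200b"  # zero-width space / non-joiner / space


def watermark(text: str) -> str:
    """Mark text as AI-generated by appending the invisible signature."""
    return text + SIGNATURE


def is_watermarked(text: str) -> bool:
    """Detector: check whether the signature is present."""
    return text.endswith(SIGNATURE)
```

Note that simply stripping trailing zero-width characters defeats this detector entirely, which is a toy version of the evasion problem researchers have demonstrated against real watermarking systems.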

Catch up on all the news from Google I/O 2024 right here!

This article originally appeared on Engadget at https://www.engadget.com/google-expands-digital-watermarks-to-ai-made-video-175232320.html?src=rss
Categories: Technology

Gemini will be accessible in the side panel on Google apps like Gmail and Docs

Engadget - Tue, 05/14/2024 - 13:54

Google is adding Gemini-powered AI automation to more tasks in Workspace. In its Tuesday Google I/O keynote, the company said its advanced Gemini 1.5 Pro will soon be available in the Workspace side panel as “the connective tissue across multiple applications with AI-powered workflows,” as AI grows more intelligent, learns more about you and automates more of your workflow.

Gemini’s job in Workspace is to save you the time and effort of digging through files, emails and other data from multiple apps. “Workspace in the Gemini era will continue to unlock new ways of getting things done,” Google Workspace VP Aparna Pappu said at the event.

The refreshed Workspace side panel, coming first to Gmail, Docs, Sheets, Slides and Drive, will let you chat with Gemini about your content. Its longer context window (essentially, its memory) allows it to organize, understand and contextualize your data from different apps without leaving the one you’re in. This includes things like comparing receipt attachments, summarizing (and answering back-and-forth questions about) long email threads, or highlighting key points from meeting recordings.


Another example Google provided was planning a family reunion when your grandmother asks for hotel information. With the Workspace side panel, you can ask Gemini to find the Google Doc with the booking information by using the prompt, “What is the hotel name and sales manager email listed in @Family Reunion 2024?” Google says it will find the document and give you a quick answer, allowing you to insert it into your reply as you save time by faking human authenticity for poor Grandma.

The email-based changes are coming to the Gmail mobile app, too. “Gemini will soon be able to analyze email threads and provide a summarized view with the key highlights directly in the Gmail app, just as you can in the side panel,” the company said.

Summarizing in the Gmail app is coming to Workspace Labs this month. Meanwhile, the upgraded Workspace side panel will arrive starting Tuesday for Workspace Labs and Gemini for Workspace Alpha users. Google says all the features will arrive for the rest of Workspace customers and Google One AI Premium users next month.

Catch up on all the news from Google I/O 2024 right here!

This article originally appeared on Engadget at https://www.engadget.com/gemini-will-be-accessible-in-the-side-panel-on-google-apps-like-gmail-and-docs-185406695.html?src=rss
Categories: Technology

Google is “reimagining” search in “the Gemini era” with AI Overviews

Ars Technica - Tue, 05/14/2024 - 13:33

"Google will do the Googling for you," says firm's search chief. (credit: Google)

Search is still important to Google, but soon it will change. At its all-in-one AI Google I/O event Tuesday, the company introduced a host of AI-enabled features coming to Google Search at various points in the near future, which will "do more for you than you ever imagined."

"Google will do the Googling for you," said Liz Reid, Google's head of Search.

It's not AI in every search, but it will seemingly be hard to avoid a lot of offers to help you find, plan, and brainstorm things. "AI Overviews," the successor to the Search Generative Experience, will provide summary answers to questions, along with links to sources. You can also soon submit a video as a search query, perhaps to identify objects or provide your own prompts by voice.


Categories: Technology

Google Gemini can power a virtual AI teammate with its own Workspace account

Engadget - Tue, 05/14/2024 - 13:28

Google's Gemini AI systems can do a lot, judging by today's I/O keynote. That includes the option to set up a virtual teammate with its own Workspace account. You can configure the teammate to carry out specific tasks, such as to monitor and track projects, organize information, provide context, pinpoint trends after analyzing data and to play a role in team collaboration.

In Google Chat, the teammate can join all relevant rooms and you can ask it questions based on all the conversation histories, Gmail threads and anything else it has access to. It can tell team members whether their projects are approved or if there might be an issue based on conflicting messages. 

It seems like the virtual teammate was just a tech demo for now, however. Aparna Pappu, vice president and GM of Workspace, said Google has "a lot of work to do to figure out how to bring these agentive experiences, like virtual teammates, into Workspace." That includes finding ways to let third parties make their own versions. 

While it doesn't seem like this virtual teammate will be available soon, it could eventually prove to be a serious timesaver — as long as you trust it to get everything right first time around.

Catch up on all the news from Google I/O 2024 right here!

This article originally appeared on Engadget at https://www.engadget.com/google-gemini-can-power-a-virtual-ai-teammate-with-its-own-workspace-account-182809274.html?src=rss
Categories: Technology

Google announces new scam detection tools that provide real-time alerts during phone calls

Engadget - Tue, 05/14/2024 - 13:14

Google just announced forthcoming scam detection tools coming to Android phones later this year, which is a good thing as these scammers keep getting better and better at parting people from their money. The toolset, revealed at Google I/O 2024, is still in the testing stages but uses AI to suss out fraudsters in the middle of a conversation.

You read that right. The AI will be constantly on the hunt for conversation patterns commonly associated with scams. Once detected, you’ll receive a real-time alert on the phone, putting to bed any worries that the person on the other end is actually heading over to deliver a court summons or whatever.

Google gives the example of a “bank representative” asking for personal information, like PINs and passwords. These are uncommon requests from a bank, so the AI would flag them and issue an alert. Everything happens on the device, so it stays private. This feature isn’t coming to Android 15 right away, and the company says it’ll share more details later in the year. We do know that people will have to opt in to use the tool.
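Google hasn't said how the on-device model actually scores a conversation, but the flag-and-alert flow it describes can be sketched as a crude keyword-weight heuristic. The patterns and threshold below are entirely hypothetical; a real system would use a learned model, not a regex list:

```python
import re

# Hypothetical scam heuristics: conversation pattern -> risk weight.
SCAM_PATTERNS = {
    r"\b(pin|password|one[- ]time code)\b": 3,  # requests for credentials
    r"\bwire (the )?money\b": 2,
    r"\bgift cards?\b": 2,
    r"\burgent(ly)?\b": 1,                      # pressure tactics
}


def scam_score(transcript: str) -> int:
    """Sum the weights of every risk pattern found in the transcript."""
    text = transcript.lower()
    return sum(weight for pattern, weight in SCAM_PATTERNS.items()
               if re.search(pattern, text))


def should_alert(transcript: str, threshold: int = 3) -> bool:
    """Fire the real-time alert once the running score crosses a threshold."""
    return scam_score(transcript) >= threshold
```

The key property, which a production system would preserve, is that scoring runs entirely on the transcript held on-device, so no audio or text needs to leave the phone.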

Google made a big move with Android 15, bringing its Gemini chatbot to actual devices instead of requiring a connection to the cloud. In addition to this scam detection tech, the addition of onboard AI will allow for many more features, like contextual awareness when using apps.

Catch up on all the news from Google I/O 2024 right here!

This article originally appeared on Engadget at https://www.engadget.com/google-announces-new-scam-detection-tools-that-provide-real-time-alerts-during-phone-calls-181442091.html?src=rss
Categories: Technology

With Gemini Live, Google wants you to relax and have a natural chat with AI

Engadget - Tue, 05/14/2024 - 13:13

While Google and OpenAI have been racing to win the AI crown over the past year, we've seemingly moved away from the idea of speaking to virtual assistants. Generative AI products have typically launched with text-only inputs, only later adding the ability to search images and handle basic voice commands. At Google I/O today, the company showed off Gemini Live, a new mobile experience for natural conversations with its AI. 

Google offered up a few potential use cases. You could have a conversation with Gemini Live to help prepare for a job interview, where it could ask you relevant questions about the position. It could also give you public speaking tips if you're preparing a speech. What makes Gemini Live unique is that you'll be able to speak at your own pace, or even interrupt its responses if you'd like. Ideally, it should be more like having a conversation with a person, instead of just voicing smart assistant commands or generative AI queries.

At I/O, Google also showed off Project Astra, a next-generation virtual assistant that takes the concept of Gemini Live even further. Astra is able to view your camera feed and answer questions in real-time. It's unclear how long that'll take to arrive, but Google says some of Astra's live video features will come to Gemini Live later this year. Gemini Live will be available for Gemini Advanced subscribers in the next few months.

Catch up on all the news from Google I/O 2024 right here!

This article originally appeared on Engadget at https://www.engadget.com/with-gemini-live-google-wants-you-to-relax-and-have-a-natural-chat-with-ai-181329788.html?src=rss
Categories: Technology

Google's Gemini Nano brings better image-description smarts to its TalkBack vision tool

Engadget - Tue, 05/14/2024 - 13:07

The Google I/O event is here, and the company is announcing lots of great updates for your Android device. As we heard earlier, Gemini Nano is getting multimodal support, meaning your Android will still process text but with a better understanding of other factors like sights, sounds and spoken language. Now Google has shared that the new tool is also coming to its TalkBack feature.

TalkBack is an existing tool that reads aloud a description of an image, whether it's one you captured or from the internet. Gemini Nano's multimodal support should provide a more detailed understanding of the image. According to Google, TalkBack users encounter about 90 images each day that don't have a label. Gemini Nano should be able to provide missing information, such as what an item of clothing looks like or the details of a new photo sent by a friend. 

Gemini Nano works directly on a person's device, meaning it should still function properly without any network connection. While we don't yet have an exact date for when it will arrive, Google says TalkBack will get Gemini Nano's updated features later this year.

Catch up on all the news from Google I/O 2024 right here!

This article originally appeared on Engadget at https://www.engadget.com/googles-gemini-nano-brings-better-image-description-smarts-to-its-talkback-vision-tool-180759598.html?src=rss
Categories: Technology

Google builds Gemini right into Android, adding contextual awareness within apps

Engadget - Tue, 05/14/2024 - 13:04

Google just announced some nifty improvements to its Gemini AI chatbot for Android devices as part of the company’s I/O 2024 event. The AI is now part of the Android operating system, allowing it to integrate in a more comprehensive way.

The coolest new feature wouldn’t be possible without that integration with the underlying OS. Gemini is now much better at understanding context as you control apps on the smartphone. What does this mean exactly? Once the tool officially launches as part of Android 15, you’ll be able to bring up a Gemini overlay that rests on top of the app you’re using. This will allow for context-specific actions and queries.

Google gives the example of quickly dropping generated images into Gmail and Google Messages, though you may want to steer clear of historical images for now. The company also teased a feature called “Ask This Video” that lets users pose questions about a particular YouTube video, which the chatbot should be able to answer. Google says this should work with "billions" of videos. There's a similar tool coming for PDFs. 


It’s easy to see where this tech is going. Once Gemini has access to the lion’s share of your app library, it should be able to actually deliver on some of those lofty promises made by rival AI companies like Humane and Rabbit. Google says it's “just getting started with how on-device AI can change what your phone can do,” so we imagine future integration with apps like Uber and DoorDash, at the very least.

Circle to Search is also getting a boost thanks to on-board AI. Users will be able to circle just about anything on their phone and receive relevant information. Google says people will be able to do this without having to switch apps. This even extends to math and physics problems — just circle for the answer — which is likely to please students and frustrate teachers.

Catch up on all the news from Google I/O 2024 right here!

This article originally appeared on Engadget at https://www.engadget.com/google-builds-gemini-right-into-android-adding-contextual-awareness-within-apps-180413356.html?src=rss
Categories: Technology

Android's Circle to Search can now help students solve math and physics homework

Engadget - Tue, 05/14/2024 - 13:02

Google has introduced another capability for its Circle to Search feature at the company's annual I/O developer conference, and it's something that could help students better understand potentially difficult class topics. The feature will now be able to show them step-by-step instructions for a "range of physics and math word problems." They just have to activate the feature by long-pressing the home button or navigation bar and then circling the problem that's got them stumped, though some math problems will require users to be signed up for Google's experimental Search Labs feature.
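As a rough illustration of the kind of step-by-step output described, here's a toy solver that shows its work for a one-variable linear equation. This is not Google's actual method (LearnLM is a fine-tuned language model); the sketch just shows the shape of a worked solution:

```python
from fractions import Fraction

def solve_linear(a, b, c):
    """Solve a*x + b = c, returning (steps, x) — a worked-solution sketch."""
    steps = [f"Start with {a}x + {b} = {c}"]
    rhs = Fraction(c - b)
    steps.append(f"Subtract {b} from both sides: {a}x = {rhs}")
    x = rhs / a
    steps.append(f"Divide both sides by {a}: x = {x}")
    return steps, x

steps, x = solve_linear(3, 4, 19)  # 3x + 4 = 19
```

A real word-problem solver would first have to translate prose into an equation, which is where the language model does the heavy lifting.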

The company says Circle to Search's new capability was made possible by its new family of AI models called LearnLM, which was specifically created and fine-tuned for learning. It's also planning to make adjustments to this particular capability and to roll out an upgraded version later this year that could solve even more complex problems "involving symbolic formulas, diagrams, graphs and more." Google launched Circle to Search earlier this year at a Samsung Unpacked event, where the feature debuted on the Galaxy S24 series, as well as on Pixel 8 devices. It's now also out for the Galaxy S23, Galaxy S22, Z Fold, Z Flip, Pixel 6 and Pixel 7 devices, and it'll likely make its way to more hardware in the future.

In addition to the new Circle to Search capability, Google has also revealed that devices that can support the Gemini for Android chatbot assistant will now be able to bring it up as an overlay on top of the application that's currently open. Users can then drag and drop images straight from the overlay into apps like Gmail, for instance, or use the overlay to look up information without having to swipe away from whatever they're doing. They can tap "Ask this video" to find specific information within a YouTube video that's open, and if they have access to Gemini Advanced, they can use the "Ask this PDF" option to find information from within lengthy documents. 

Google is also rolling out multimodal capabilities to Nano, the smallest model in the Gemini family that can process information on-device. The updated Gemini Nano, which will be able to process sights, sounds and spoken language, is coming to Google's TalkBack screen reader later this year. Gemini Nano will enable TalkBack to describe images onscreen more quickly and even without an internet connection. Finally, Google is currently testing a Gemini Nano feature that can alert users during a call if it detects common conversation patterns associated with scams. Users will be alerted, for instance, if the person on the line asks for their PINs or passwords, or pushes them to buy gift cards.
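Google hasn't detailed how the on-device model flags scams, but the examples it gives (requests for PINs, passwords or gift cards) amount to pattern detection over the conversation. A deliberately naive keyword sketch, purely to illustrate the kind of signal involved:

```python
# Naive substring check; the real feature runs Gemini Nano on-device,
# not a keyword list. Purely illustrative.
SCAM_PATTERNS = ("pin", "password", "gift card")

def flag_scam(transcript: str) -> bool:
    """Return True if the transcript contains a common scam pattern."""
    text = transcript.lower()
    return any(pattern in text for pattern in SCAM_PATTERNS)
```

An on-device model can catch paraphrases and context that a keyword list like this would miss, which is presumably why Google is using one.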

This article originally appeared on Engadget at https://www.engadget.com/androids-circle-to-search-can-now-help-students-solve-math-and-physics-homework-180223229.html?src=rss

Google's Gemini will search your videos to help you solve problems

Engadget - Tue, 05/14/2024 - 12:52

As part of its push toward adding generative AI to search, Google has introduced a new twist: video. Gemini will let you upload video that demonstrates an issue you're trying to resolve, then scour user forums and other areas of the internet to find a solution. 

As an example, Google's Rose Yao talked onstage at I/O 2024 about a used turntable she bought and how she couldn't get the needle to sit on the record. Yao uploaded a video showing the issue, then Gemini quickly found an explainer describing how to balance the arm on that particular make and model. 

"Search is so much more than just words in a text box. Often the questions you have are about the things you see around you, including objects in motion," Google wrote. "Searching with video saves you the time and trouble of finding the right words to describe this issue, and you’ll get an AI Overview with steps and resources to troubleshoot."

If the video alone doesn't make it clear what you're trying to figure out, you can add text or draw arrows that point to the issue in question. 

OpenAI just introduced GPT-4o with the ability to interpret live video in real time, then describe a scene or even sing a song about it. Google, however, is taking a different tack with video by focusing on its Search product for now. Searching with video is coming to Search Labs US users in English to start with, but will expand to more regions over time, the company said.

This article originally appeared on Engadget at https://www.engadget.com/googles-gemini-will-search-your-videos-to-help-you-solve-problems-175235105.html?src=rss

Google Search will now show AI-generated answers to millions by default

Engadget - Tue, 05/14/2024 - 12:45

Google is shaking up Search. On Tuesday, the company announced big new AI-powered changes to the world’s dominant search engine at I/O, Google’s annual conference for developers. With the new features, Google is positioning Search as more than a way to simply find websites. Instead, the company wants people to use its search engine to directly get answers and help them with planning events and brainstorming ideas.

“[With] generative AI, Search can do more than you ever imagined,” wrote Liz Reid, vice president and head of Google Search, in a blog post. “So you can ask whatever’s on your mind or whatever you need to get done — from researching to planning to brainstorming — and Google will take care of the legwork.”

Google’s changes to Search, the primary way the company makes money, are a response to the explosion of generative AI since OpenAI’s ChatGPT was released at the end of 2022. Since then, a handful of AI-powered apps and services including ChatGPT, Anthropic’s Claude, Perplexity, and Microsoft’s Bing, which is powered by OpenAI’s GPT-4, have challenged Google’s flagship service by directly providing answers to questions instead of simply presenting people with a list of links. This is the gap that Google is racing to bridge with its new features in Search.

Starting today, Google will show complete AI-generated answers in response to most search queries at the top of the results page in the US. Google first unveiled the feature a year ago at I/O 2023, but until now, anyone who wanted to use it had to sign up through the company’s Search Labs platform, which lets people try out upcoming features ahead of their general release. Google is now making AI Overviews available to hundreds of millions of Americans, and expects to bring the feature to more countries, reaching over a billion people, by the end of the year. Reid wrote that people who opted to try the feature through Search Labs have used it “billions of times” so far, and said that any links included as part of the AI-generated answers get more clicks than if the page had appeared as a traditional web listing, something that publishers have been concerned about. “As we expand this experience, we’ll continue to focus on sending valuable traffic to publishers and creators,” Reid wrote.

In addition to AI Overviews, searching for certain queries around dining and recipes, and later movies, music, books, hotels, shopping and more in English in the US will show a new search page where results are organized using AI. “[When] you’re looking for ideas, Search will use generative AI to brainstorm with you and create an AI-organized results page that makes it easy to explore,” Reid said in the blog post.

If you opt in to Search Labs, you’ll be able to access even more features powered by generative AI in Google Search. You’ll be able to ask AI Overviews to simplify the language of an answer or break down a complex topic in more detail. One example: a query asking Google to explain the connection between lightning and thunder.

Search Labs testers will also be able to ask Google really complex questions in a single query to get answers on a single page instead of having to do multiple searches. The example that Google’s blog post gives: “Find the best yoga or pilates studios in Boston and show details on their intro offers and walking time from Beacon Hill.” In response, Google shows the highest-rated yoga and pilates studios near Boston’s Beacon Hill neighborhood and even puts them on a map for easy navigation.

Google also wants to become a meal and vacation planner by letting people who sign up for Search Labs ask queries like “create a 3 day meal plan for a group that’s easy to prepare” and letting you swap out individual results in its AI-generated plan with something else (swapping a meat-based dish in a meal plan for a vegetarian one, for instance).

Finally, Google will eventually let anyone who signs up for Search Labs use a video as a search query instead of text or images. “Maybe you bought a record player at a thrift shop, but it’s not working when you turn it on and the metal piece with the needle is drifting unexpectedly,” wrote Reid in Google’s blog post. “Searching with video saves you the time and trouble of finding the right words to describe this issue, and you’ll get an AI Overview with steps and resources to troubleshoot.”

Google said that all these new capabilities are powered by a brand new Gemini model customized for Search that combines Gemini’s advanced multi-step reasoning and multimodal abilities with Google’s traditional search systems.

This article originally appeared on Engadget at https://www.engadget.com/google-search-will-now-show-ai-generated-answers-to-millions-by-default-174512845.html?src=rss

AI in Gmail will sift through emails, provide search summaries, send emails

Ars Technica - Tue, 05/14/2024 - 12:44

Google's Gemini AI often just feels like a chatbot built into a text-input field, but you can really start to do special things when you give it access to a ton of data. Gemini in Gmail will soon be able to search through your entire backlog of emails and show a summary in a sidebar.

That's simple to describe, but it solves a huge problem with email: even search results are just a list of email subjects, and you have to click through to each one to actually read it. Having an AI sift through a bunch of emails and provide a summary sounds like a huge time saver, and something you can't do with any other interface.

Google's one-minute demo of this feature showed a big blue Gemini button at the top right of the Gmail web app. Tapping it opens the normal chatbot sidebar you can type in. Asking for a summary of emails from a certain contact will get you a bullet-point list of what has been happening, with a list of "sources" at the bottom that will jump you right to a certain email. In the last second of the demo, the user types, "Reply saying I want to volunteer for the parent's group event," hits "enter," and then the chatbot instantly, without confirmation, sends an email.
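The demo's flow — filter the inbox to one contact, condense the matches into bullets, and keep per-email "sources" — can be sketched as below. The model call is stubbed out with a plain function, since Google hasn't published an API for Gemini in Gmail; every name here is illustrative:

```python
from dataclasses import dataclass

@dataclass
class Email:
    sender: str
    subject: str
    body: str

def summarize_from(inbox, contact, summarize=None):
    """Filter the inbox to one sender, then summarize the matches.

    `summarize` stands in for the model call; the default just echoes
    each subject as a bullet point.
    """
    matches = [e for e in inbox if e.sender == contact]
    if summarize is None:
        summarize = lambda emails: [f"- {e.subject}" for e in emails]
    bullets = summarize(matches)
    sources = [e.subject for e in matches]  # the "sources" links in the demo
    return bullets, sources

inbox = [
    Email("school@example.com", "Parents' group event", "Volunteers needed..."),
    Email("school@example.com", "Field trip forms", "Please sign and return..."),
    Email("news@example.com", "Weekly digest", "This week in tech..."),
]
bullets, sources = summarize_from(inbox, "school@example.com")
```

The interesting (and risky) part of the real feature is the last step of the demo: acting on the summary by sending mail, which this sketch deliberately leaves out.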

