Blogroll
What is agentic AI and why is everyone talking about it?
According to the AI overlords, this is the year of agentic AI.
You may have seen Google announce its "agentic era" with a web browsing research assistant and an AI bot that calls nail salons and mechanics for you. OpenAI leadership talked about agentic AI being a "big theme in 2025" and has already introduced a research preview of Operator, an agent that can perform tasks on your behalf, and Deep Research, which "conducts multi-step research on the internet for complex tasks." Microsoft just unveiled Microsoft Discover, an enterprise agentic AI tool for scientists. And your next smartphone could have agentic features that can send custom messages, create calendar events, or pull together information from across different apps.
If you've been nodding and smiling every time one of your tech friends mentions agentic AI, don't be embarrassed. This is a new entry in the AI glossary, but one that can no longer be ignored.
So what exactly is agentic AI? "Agentic AI refers to a class of artificial intelligence systems designed to operate autonomously, perceive their environment, set goals, plan actions to achieve those goals, and execute those plans without continuous human intervention. These systems can learn and adapt over time based on feedback and new information."
That's according to — what else? — Google's AI chatbot Gemini.
Unlike generative AI, which is essentially a tool for creating some kind of output — code, text, audio, images, videos — agentic AI can autonomously perform tasks on a user's behalf. This is a step up from the standard AI chatbot experience. Instead of generating a response based on its training material, agentic AI can take additional steps, such as conducting internet searches and analyzing the results, consulting additional sources, or completing a task in another app or software.
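For the technically curious, here's a toy Python sketch of that difference. It is not any vendor's actual code: a plain chatbot replies once, while an agent loops, calling tools like web search and feeding the results back into the model until the task is done. The call_model and search_web functions are hypothetical stand-ins for a real LLM API and a real search tool.

```python
# A toy agent loop: a chatbot answers once; an agent acts, observes, repeats.
# call_model and search_web are hypothetical stand-ins for real APIs.

def call_model(messages):
    # Pretend LLM: decide whether to search or answer based on the history.
    if not any(m["role"] == "tool" for m in messages):
        return {"action": "search", "query": messages[-1]["content"]}
    return {"action": "answer", "text": "Summary based on search results."}

def search_web(query):
    return f"(pretend search results for: {query})"

def run_agent(task, max_steps=5):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        decision = call_model(messages)
        if decision["action"] == "answer":           # chatbot behavior: stop and reply
            return decision["text"]
        observation = search_web(decision["query"])  # agentic behavior: use a tool
        messages.append({"role": "tool", "content": observation})
    return "Gave up after max_steps."

print(run_agent("Find the best-reviewed espresso machine"))
```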
You may have heard this term used interchangeably with AI agents, but agentic AI is a broader term that encompasses technology that may not be fully autonomous but has some agent-like capabilities.
So, OpenAI considers Operator an AI agent because it has contextual awareness and can perform tasks for you like sending text messages. And its Deep Research tool is agentic AI because it can autonomously crawl the web and compile a report for the user, though its capabilities pretty much stop there for now.
Agentic AI is powered by more advanced reasoning models like OpenAI's o3 and Gemini 2.5 Pro Preview, which can break down complex tasks and make inferences. This brings large language models like the ones behind ChatGPT one step closer to mimicking how the human brain works. Unless you constantly retrain a generative AI model with new information, it can't learn new things, said Karen Panetta, IEEE Fellow and professor of engineering at Tufts University. "This other kind of AI can learn from seeing other examples, and it can be more autonomous in breaking down tasks and helping you with more goal-driven types of activities, versus more exploratory or giving back information."
When combined with computer vision, which is what allows a model to "see" a user's computer screen, we get the agentic AI everyone is so excited about.
Why is everyone talking about agentic AI?
Google's new AI shopping experience could utilize agentic AI to make purchases on your behalf. Credit: Google
Agentic AI is not entirely new. Self-driving cars and robot vacuums could both be considered early examples of agentic AI. They're technologies with autonomous properties that rely on advanced sensors and cameras to make sense of their environment and react accordingly.
But agentic AI is having its moment now for a few reasons. Crucially, the latest models have gotten better and more user-friendly (although sometimes too friendly). And as people begin to rely on AI chatbots like ChatGPT, there's a growing interest in using these tools to automate daily tasks like responding to emails. With agentic AI, you don't need to be a computer programmer to use ChatGPT for automation. You can simply tell the chatbot what to do in plain English and have it carry out your instructions. At least, that's the idea.
Companies like OpenAI, Google, and Anthropic are banking on agentic AI because it has the potential to move the technology beyond the novelty chatbot experience. With agentic AI, tools like ChatGPT could become truly indispensable for businesses and individuals alike. Agentic AI tools could order groceries online, browse and buy the best-reviewed espresso machine for you, or even research and book vacations. In fact, Google is already taking steps in this direction with its new AI shopping experience.
In the business world, companies are looking to agentic AI to resolve customer service inquiries and adjust stock trading strategies in real-time.
What could possibly go wrong?
Are there risks involved with unleashing autonomous bots in the wild? Why, yes.
With an agent operating on your behalf, there's always a risk of it sending a sensitive email to the wrong person or accidentally making a huge purchase. And then there's the question of liability. "Am I going to be sued because I went and had my agent do something?" Panetta wondered. "Say I'm working as an officer of something, and I use an AI agent to make a decision, to help us do our planning, and then you lose that organization money."
The major AI players have put safeguards in place to prevent AI agents from going rogue, such as requiring human supervision or approval for sensitive tasks. OpenAI says Operator won't take screenshots when it's in human override mode, and it doesn't currently allow its agent to make banking transactions.
But what about when the technology becomes more commonplace? As we become more comfortable with agentic AI, will we become more passive and lax about oversight? Earlier in this article, we used Google Gemini to help define agentic AI. If we become dependent on AI tools for even simple learning, will human beings get dumber?
Then there's the extensive data access we have to give agents. Sure, it would be convenient for ChatGPT to automatically filter, sort, or even delete emails. But do you want to give an AI company full access to every email you've ever sent or received?
And what about bad actors that don't have such safeguards in place? Panetta warns of increasingly sophisticated cyberattacks utilizing agentic AI.
"Because the access to powerful computing now is so cheap, that means that the bad actors have access to it," she said. "They can be running simulations and being able to come up with sophisticated schemes to break into your systems or connive you into taking out this equity loan."
AI has always been a double-edged sword, with equally potent harms and benefits. And with agentic AI getting ready for primetime deployment, the stakes are getting higher.
Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
Say hello to summer with these gift-worthy ideas
Memorial Day marks the unofficial start of summer, which means barbecues, extended vacations, and lots of time outside. Get in gear with our top picks for everything you need to have the best time ever. Think of these as your summertime essentials: All the best tools, gadgets, and seasonal upgrades for your home, yard, and beyond.
Now we know who's getting 23andMe's DNA data. Meet Regeneron Pharmaceuticals.
23andMe had all the signs of a success story. The unicorn brand became synonymous with consumer-facing genetic testing, went public, and was once valued at $6 billion.
But then, it all came tumbling down. Sales of 23andMe kits began to fall, attempts to raise new funds hit a wall, the company struggled to find a product that would bring in recurring revenue, and it was still dealing with the fallout from a major data breach.
23andMe filed for bankruptcy and put its assets up for sale in March 2025. But more concerning than the success or failure of a particular company was this question: What would happen to 23andMe's vast trove of highly personal customer data? We aren't just talking about usernames and birthdays. 23andMe has DNA profiles for millions of users.
Well, now we know who bought 23andMe and, most importantly, now owns all the DNA data from the company's customers.
Meet Regeneron Pharmaceuticals
On Monday, Regeneron Pharmaceuticals announced that it would acquire most of 23andMe's assets for $256 million. According to Regeneron Pharmaceuticals, the acquisition includes 23andMe's Personal Genome Service, Total Health and Research Services business lines, and the trove of customers' DNA samples and genetic data.
Regeneron Pharmaceuticals will not, however, be acquiring 23andMe's online doctor and telehealth service known as Lemonaid Health, which will be shuttered.
Based in Tarrytown, NY, Regeneron Pharmaceuticals is a biotechnology company that researches and develops medicines for cancer, cardiovascular disease, and other diseases. The company is currently valued at more than $64 billion.
Regeneron Pharmaceuticals placed a bid for 23andMe as part of a court-supervised sales process. The bid requires Regeneron Pharmaceuticals to comply with 23andMe's existing privacy policies and applicable laws.
Conspiracy-minded consumers might revolt at the idea of a big pharma brand acquiring their DNA profile. However, in a press release, Regeneron has promised "to comply with the Company’s privacy policies" and "process all customer personal data in accordance with the consents, privacy policies and statements, terms of service, and notices currently in effect." Further, the company assured customers it has "a proven track record of safeguarding personal genetic data, and we assure 23andMe customers that we will apply our high standards for safety and integrity to their data." A court-appointed expert will also submit a report about potential privacy and security impacts to the court by June 10.
If the name "Regeneron" sounds familiar, it may be because the company received mainstream attention in 2020 after the company created an experimental treatment for COVID-19 called REGN-COV2. President Donald Trump was treated with the drug when he was infected with COVID-19 in October of that year.
While 23andMe's demise is certainly notable, the most lasting debate will undoubtedly concern the sale of 15 million customers' genetic data.
“Your DNA and your family health history should not be a corporate asset," said J.B. Branch, Big Tech accountability advocate for the consumer rights advocacy group Public Citizen, in a statement. "Of course, Regeneron will promise to ‘respect consent’ and ‘uphold privacy policies.’ Those are bare minimal legal requirements. But time and again these companies fail consumers."
NYT Connections Sports Edition today: Hints and answers for May 19, 2025
Connections: Sports Edition is a new version of the popular New York Times word game that seeks to test the knowledge of sports fans.
Like the original Connections, the game is all about finding the "common threads between words." And just like Wordle, Connections resets after midnight and each new set of words gets trickier and trickier—so we've served up some hints and tips to get you over the hurdle.
If you just want to be told today's puzzle, you can jump to the end of this article for the latest Connections solution. But if you'd rather solve it yourself, keep reading for some clues, tips, and strategies to assist you.
SEE ALSO: Mahjong, Sudoku, free crossword, and more: Play games on Mashable
What is Connections Sports Edition?
The NYT's latest daily word game has launched in association with The Athletic, the New York Times property that provides the publication's sports coverage. Connections can be played on both web browsers and mobile devices and requires players to group four words that share something in common.
Each puzzle features 16 words and each grouping of words is split into four categories. These sets could comprise anything from book titles and software to country names. Even though multiple words will seem like they fit together, there's only one correct answer.
If a player gets all four words in a set correct, those words are removed from the board. Guess wrong and it counts as a mistake—players can make up to four mistakes before the game ends.
Players can also rearrange and shuffle the board to make spotting connections easier. Additionally, each group is color-coded, with yellow being the easiest, followed by green, blue, and purple. Like Wordle, you can share the results with your friends on social media.
Here's a hint for today's Connections Sports Edition categories
Want a hint about the categories without being told the categories? Then give these a try:
Yellow: To show off
Green: Areas to watch the game
Blue: They wear blue and orange jerseys
Purple: They share the second half of the word
Need a little extra help? Today's connections fall into the following categories:
Yellow: Boast
Green: Stadium seating sections
Blue: New York Knicks
Purple: ___Stop
Looking for Wordle today? Here's the answer to today's Wordle.
Ready for the answers? This is your last chance to turn back and solve today's puzzle before we reveal the solutions.
Drumroll, please!
The solution to today's Connections Sports Edition #238 is...
What is the answer to Connections Sports Edition today?
Boast - CROW, GLOAT, GRANDSTAND, SHOWBOAT
Stadium seating sections - BLEACHER, LOGE, SUITES, UPPER DECK
New York Knicks - BRIDGES, HART, MCBRIDE, TOWNS
___Stop - BACK, JUMP, PIT, SHORT
Don't feel down if you didn't manage to guess it this time. There will be new Connections for you to stretch your brain with tomorrow, and we'll be back again to guide you with more helpful hints.
Are you also playing NYT Strands? See hints and answers for today's Strands.
If you're looking for more puzzles, Mashable's got games now! Check out our games hub for Mahjong, Sudoku, free crossword, and more.
Not the day you're after? Here's the solution to today's Connections.
You Can Finally Buy ASUS’s Latest Compact Zephyrus G14
There's a lot to love about ASUS' ROG gaming line, but its laptops are especially good, and the Zephyrus line tends to be a standout. The G14, in particular, is really, really good, and it just got refreshed with 2025 specs.
PDF Translation Is Coming to Microsoft Edge
Microsoft's Build 2025 conference revealed a major update for its Edge browser. The company is adding a new feature to break down language barriers through built-in PDF translation. This feature should outperform competitors because it takes context into account.
Everything we hope to learn at Google I/O 2025: Gemini, Gmail, and Project Astra updates
UPDATE: May 19, 2025, 1:36 p.m. EDT This article has been updated with new information from 'The Android Show,' the Android-focused mini I/O event held on Tuesday, May 13.
The latest news from Google-land is all Gemini, Gemini, Gemini. And with Google I/O 2025 less than a day away, we expect more of the same.
A year after its last big event, Google is back with an even deeper dive into AI. So deep, in fact, that Android was shuffled off into its own separate event entirely. On Tuesday, May 13, Google hosted "The Android Show," a mini I/O-style event focused on the latest Android 16 developments.
So, is Google clearing the decks for major announcements at Google I/O tomorrow? All signs point to yes.
Ahead of I/O 2025, Google dropped a developer preview of Gemini 2.5 Pro, its latest generative AI model. Translation: this year’s keynote isn’t just about flashy hardware or Android updates — it’s about code, algorithms, and the general direction of Google’s artificial intelligence goals.
Whether you're a developer, a die-hard Android fan, a casual Gmail user, or just here for the spectacle, here’s what to expect from Google I/O 2025.
When is the Google I/O 2025 keynote?
The big keynote for Google I/O is scheduled for Tuesday, May 20, 2025, at 10 a.m. PT. Here's when it will be happening around the globe:
New York: 1 p.m.
Chicago: 12 p.m.
London: 6 p.m.
Honolulu: 7 a.m.
Dubai: 9 p.m.
Paris: 7 p.m.
Mumbai: 11:30 p.m.
Agentic AI
Have you been hearing the phrase agentic AI a lot lately? We sure have, most recently at Microsoft Build 2025. Agentic AI features were a big focus during the opening Microsoft Build keynote, and OpenAI has been rolling out more and more agentic capabilities with its AI chatbot ChatGPT. And since we expect Gemini to be the primary focus of Google I/O 2025, we also expect announcements related to new agentic tools in Gemini.
AI Mode
With ChatGPT siphoning away searchers from Google (particularly young searchers), Google has gone all-in on AI search, first with AI Overviews, and more recently with AI Mode. And last week, Google began quietly testing AI Mode on its homepage and on search results pages for select users, as Mashable reported. It certainly seems like Google is readying AI Mode for primetime, and Google I/O would be the perfect time to announce this launch.
AI Mode uses the Gemini chatbot to give searchers information instead of the standard blue links you get with Google Search. And if AI Mode is getting a wider launch, it's further proof that the era of Google Search is over, and the era of AI search has officially begun.
Android 16
A week before Google I/O, Google pulled back the curtain on Android 16. The headline here is a fresh evolution of Google's design language, shifting from Material 3 to the more vibrant and customizable Material 3 Expressive. (Google, true to form, self-leaked the details in a now-deleted blog post.) You can get all the details at the Google blog, but we'll save you a click: Material 3 Expressive does away with clean design and Corporate Memphis art, embracing more active animations, colors, and rounded designs.
A preview of Material 3 Expressive in Android 16. Credit: Google
SEE ALSO: Everything we learned at 'The Android Show' event from Google
We also learned that Google is transforming its Find My Device app into the "Find Hub," which will let users track devices, of course, but also people, belongings, and even luggage. New Bluetooth tracker tags and smart luggage will soon be released and integrate directly with the new Find Hub.
As for what Android 16 will bring, the beta has already given us a sneak peek. Features like Auracast support hint at smoother Bluetooth switching, while visual tweaks, quality-of-life upgrades, and the introduction of “summarized notifications” suggest a more streamlined, user-friendly experience across the board.
Leaks suggest a Q2 launch, sometime around June.
Wear OS
Wear OS fans, this one's (almost) for you. Wear OS 5.1 quietly dropped in March, delivering some relatively minor improvements like better step tracking and revamped media controls. However, in a pleasant surprise, Google revealed Wear OS 6 during The Android Show event. We now know that Wear OS 6 will be getting a big visual update, new Gemini AI features, and a 10% battery life boost.
Android XR
Fourth time's the charm — at least, that's what Google hopes.
After the quiet burial of Google Glass, the slow fade of Daydream, and the DIY novelty of Cardboard, Google is once again diving headfirst into immersive tech with Android XR. Built from the ground up with Gemini AI in mind, this new operating system is aimed squarely at powering the next wave of AR and VR wearables.
Things may be different now with Google's collaboration with Samsung on Project Moohan — an XR headset running Google's OS. Details are sparse, and it's unclear whether Moohan will make a cameo at I/O 2025, but you can bet Android XR will get some stage time. Expect Google to name names when it comes to new partners and paint a picture of an XR ecosystem that might have staying power.
Google Workspace changes
Google regularly rolls out new tools, updates, and features for its Workspace suite of tools, including Google Docs, Gmail, Sheets, and Slides. During the Google I/O 2025 keynote, look for a ton of announcements related to new Workspace tools. You don't have to be a psychic to know that Gemini and artificial intelligence will be the driving force behind most of these changes.
Project Astra
What is Project Astra? The project is part of Google DeepMind, the company's AI skunkworks lab. It's the name for Google's research prototype of a universal AI assistant. Also on deck is a mobile version of NotebookLM, Google's AI-powered research assistant. We're not sure if we'll get updates about this at Google I/O 2025, but fingers crossed.
Project Moohan
Project Moohan is actually a joint venture between Google and Samsung. It's the code name for Samsung's first XR headset, powered by Google Gemini. We know that Samsung is working on new display technology for AR glasses, and we're hoping to get more updates on this project soon. Unfortunately, Google I/O doesn't usually include many hardware announcements, but if we're lucky, we'll get some teasers on this upcoming product launch. We got a good look at this technology during the Galaxy Unpacked event earlier this year, so perhaps Google I/O will give us our next preview.
More AI, AI, AI
Google is an AI company now, full stop. And I/O 2025 is shaping up to be less about what Google makes and more about how much smarter it can make everything.
At the center of it all is Gemini. With version 2.5 Pro already in developers’ hands, expect Google to go deep on performance gains, real-world integrations, and new ways Gemini is flexing across platforms. (As of this writing, Gemini 2.5 Pro tops AI leaderboards.)
Google is bringing AI to everything, so expect announcements on a bunch of AI-related features: cars, smartwatches, earbuds, even your toaster, probably. Context-aware assistants, predictive interfaces, and on-device models will dominate the demos. It's either thrilling or exhausting, depending on how many times you’ve heard the phrase "AI-first strategy."
Need a Chromebook? ASUS Just Revealed 4 New Affordable Models
ASUS has sold many Chromebooks over the years, including some of the best models, and the company is continuing that trend at Computex 2025. ASUS just revealed four budget Chromebooks with 14 and 15.6-inch screens, as well as a 14-inch Chromebook Plus model.
10 Great Animated Films You Might Mistake for Studio Ghibli
With their whimsical worlds and breathtaking storytelling, Ghibli set the gold standard for animated films. But you should know, not every enchanting animation comes from the legendary studio.
A smarter, AI-powered Siri won't be attending WWDC 2025
Apple's new and AI-improved Siri that was showcased at last year's WWDC still isn't ready, according to Bloomberg's Mark Gurman.
"Significant upgrades to Siri—including the ones promised nearly a year ago—are unlikely to be discussed much and are still months away from shipping," said Gurman in an-depth report that detailed Apple leadership's reported failures to recognize the significance of generative AI for consumer tech products and subsequent issues in rolling out a smarter Siri driven by powerful LLMs and onscreen awareness.
SEE ALSO: iOS 19 rumors: Every feature we've heard of so far
At Apple's WWDC 2025, which kicks off June 9, less than a month away, the event will reportedly instead highlight "various non-AI software upgrades," like the reported interface design overhauls to bring the operating systems for the iPhone, iPad, and Mac more in line with the style of the Vision Pro, said Gurman.
Reports of Siri trouble have been brewing for a while now. Apple's promise of a more personalized and context-aware version of the voice assistant hasn't yet come to fruition, and instead we've seen minor updates like the ability to type to Siri and even examples of Siri getting worse. Apple even pulled a commercial with actor Bella Ramsey, featuring Siri being able to answer who they had had lunch with by pulling up relevant calendar details, over accusations of false advertising.
Gurman reports that Apple is working on a Siri-specific LLM, internally dubbed "LLM Siri," which will reportedly resolve many of the underlying infrastructure issues. Until then, don't expect much from Siri at this year's WWDC.
One AI image generator lets you create NSFW art, and it’s only A$62 for life
TL;DR: Create anything, even NSFW art, with a lifetime subscription to Imagiyo for only A$62.
Digital creativity has never been more accessible, yet many of us remember the days when crafting a single image meant wrestling with layers and plugins for hours on end. Now there’s a way to generate stunning visuals in seconds simply by typing a description of what you have in mind.
Imagiyo uses Stable Diffusion AI alongside FLUX AI to turn text prompts into high-quality images ready for commercial use, and there aren’t many limits to what you can create. Here’s what that means.
What art can you make with Imagiyo?
What do you want to make first? It only takes a brief description to put Imagiyo's advanced algorithms to work, and unlike other image generators, Imagiyo actually lets you follow your creativity. Craft stunning landscapes, visualize characters from books, or go for something a little more daring. Imagiyo supports NSFW content creation. Just set your prompts to private and let your mind run wild.
Imagiyo’s commercial-use license means you can take some of the images you generate and incorporate them into client projects, social media campaigns, or personal portfolios without fear of copyright issues.
Each month, you receive 500 image-generation credits and can submit up to two prompts at once. Unused credits roll over, so you never lose access to your creative potential. Best of all, your purchase includes engine updates and feature improvements delivered automatically, ensuring you always work with the latest AI models.
You have until June 1 at 11:59 p.m. PT to get an Imagiyo AI Image Generator lifetime subscription for A$62 (reg. A$772).
StackSocial prices subject to change.
Know the Differences Between Meteors, Meteoroids, Meteorites, Comets, and Asteroids
Astronomy is riddled with complex jargon that many of us will probably never need to understand. However, you're more likely to read and hear about meteors, meteoroids, meteorites, comets, and asteroids online and in the news, so here's a breakdown of exactly what they are.
Shop the REI Anniversary Sale to get 30% off summer outdoor essentials
Memorial Day is just a few days away, which means it's time to think about every amazing outdoor adventure you have planned for summer of 2025. If last year's gear got put away wet and dirty, it's time to consider some upgrades. Instead of doing this the Thursday night before leaving for a weekend adventure on Friday, spend Memorial Day weekend sorting through your gear and deciding what needs a refresh.
Coincidentally, the REI Anniversary Sale is on now through May 26 and has thousands of deals that take up to 30% off outdoor gear. Snag a new paddle board, replace the cooler, finally keep in touch with a Garmin inReach, or get a cozier sleeping bag.
Plus, if you happen to be an REI member and you see something that's not on sale, use code ANNIV2025 to take 20% off. A lifetime membership to the REI Co-op costs just $30, which means joining to apply the 20% coupon could mean the membership has already paid for itself.
In terms of what's on sale during the REI Anniversary Sale, expect to see 30% off tons of REI Co-op brand gear, 20% off REI bicycles, and up to 25% off tons of camping gear from Nemo, Kelty, Therm-a-Rest, and more. Below are some of our favorite deals, or you can spend hours browsing the entire sale selection.
Best camping deal: REI Base Camp 4 Tent — $331.79 at REI (reg. $474, save $142.21)
Why we like it
The outdoors is a wonderful place to spend sunny weekends, but it's no time to skip out on sleeping. You'll want to head out with a cozy sleeping set-up, which relies on a functional tent, and that's where the REI Base Camp 4 Tent comes into play. It's part of the brand's Anniversary Sale, which means you'll be saving 30% on the tent, scoring it for $331.79 instead of the normal price of $474.
In terms of function, the Base Camp 4 Tent is ready to sleep you and three friends, with durable materials that are also water repellent. The two wide doors mean no sleeper is trapped inside and everyone will have easy access for that inevitable 1 a.m. latrine trip. Of course, the pockets and hang loops make camp organization much easier.
More camping deals
Sea to Summit Ultra-Sil Dry Bag (3 pack) — $52.39 $69.95 (save $17.56)
GSI Outdoors Glacier Stainless Base Camper Cookset — $67.39 $89.95 (save $22.56)
Kelty Low Loveseat — $97.39 $129.95 (save $32.56)
Helinox Chair Zero — $104.89 $139.95 (save $35.06)
Rumpl Original Puffy Blanket (two person) — $149.19 $199 (save $49.81)
Sea to Summit Traveller 45F Down Sleeping Bag — $149.19 $199 (save $49.81)
Coleman Cascade 222 2-Burner Camp Stove — $164.99 $220 (save $55.01)
NEMO Dagger OSMO 2P Tent — $374.89 $499.95 (save $125.06)
Best paddle board deal: Bote Wulf Aero — $475.09 at REI (reg. $559, save $83.91)
Why we like it
You've tried out the paddle boards from the local rental shop and decided it's a great way to get out on the water, but have you tried your own? Bote makes some impressively designed paddle boards, and the Bote Wulf Aero is on sale during the REI Anniversary Sale for $475.09, down from the usual price of $559. At 10 feet 4 inches in length, the Bote Wulf means you can spend the day out on the water instead of on the crowded shoreline, and the included travel bag means everything is so much easier to carry to the lake. You'll also get a three-piece adjustable SUP paddle, a hand pump, a coiled leash, a removable Aero center fin, and an Aero repair kit.
More outdoor gear sales at REI
NRS Ninja PFD — $119.89 $149.95 (save $30.06)
Garmin inReach Mini 2 — $299.99 $400 (save $100.01)
Cannondale Topstone 3 Bike — $1,119.93 $1,400 (save $280.07)
Salsa Journeyer Sora 700c Bike — $1,169.09 $1,299 (save $129.91)
Everything Revealed at Nvidia's 2025 Computex Press Conference in 19 Minutes
Watch all the biggest announcements from Nvidia's keynote address at Computex 2025 in Taipei, Taiwan.
Outdoor Boys YouTuber hit 12M subs in 18 months. Now he's calling it quits.
In an era where most YouTubers chase growth at all costs, Outdoor Boys creator Luke Nichols is walking away at the height of his channel’s explosive rise.
In just the past 18 months, the channel has gained around 12 million subscribers. But instead of capitalizing on that momentum, Nichols is stepping back, citing the toll it's taking on his family.
"Because of people stealing my content and posting it on other platforms, my family and I have been viewed about 4 billion times, in addition to the 2.8 billion views on YouTube," Nichols says in his goodbye video. "The sheer volume of fans trying to contact me, trying to take pictures with me, or just trying to come up and talk to me in public, it can be a bit overwhelming at times."
What makes Nichols’ decision so unusual is how sharply it runs against the current of influencer culture, where visibility and virality are everything. A study from the Interactive Advertising Bureau found that creator jobs have grown 7.5 times in four years, making the sector the fastest-growing driver of U.S. GDP, now valued at $4.9 trillion.
However, for Nichols and his wife, the costs of that attention are becoming too high.
"My wife and I — we both have real concerns about what this will do to our family if I keep growing my YouTube channel at this pace," he explains. "And the time to stop is before this problem gets so out of hand that my family and I can’t live normal lives."
He won’t be uploading to the channel for a while, though he mentioned the possibility of "one big dump" of videos featuring unfinished projects if he finishes them. In the meantime, though, Nichols says it’s time for him to move on so he can help his kids achieve their dreams.
The fan response on social media has been overwhelming, filled with both support for his decision and sadness over his departure.
Rip outdoor boys
Outdoor Boys is one of the goats on the platform for all time. There's so much you can learn from the way that channel is run, the way the videos are made, and the way he approaches success. Big loss — there's nobody else like him.
This JBL Quantum 200 Gaming Headset deal is lit — $30 off
GET 50% OFF: As of May 19, the JBL Quantum 200 Wired Over-ear Gaming Headset is currently on sale for $29.95, down from $59.95, for a savings of $30, or 50%.
This is a deal that your teammates will appreciate. If your co-op gaming buddies are always complaining about fans running in the background of the voice feed, or your dog snoring on the floor during Horde Night, a good gaming headset with a voice-focused mic will make all the difference. There's also the over-ear immersion element that comes with excellent sounding headphones — yeah that's a nice perk for you.
Right now, the JBL Quantum 200 Wired Over-ear Gaming Headset is on sale for $29.95. Get it now and save 50%.
SEE ALSO: Get $70 off a Corsair keyboard that's made to game.
The best audio deals available now:
- Soundcore by Anker P20i True Wireless Earbuds — $19.98 (List Price $39.99)
- Sony WH-CH520 Wireless Headphones — $38.00 (List Price $59.99)
- JBL Flip 6 Portable Bluetooth Speaker — $99.95 (List Price $129.95)
- Bose SoundLink Flex Portable Speaker — $149.00
- Sony WH-1000XM4 Wireless Noise Cancelling Headphones — $228.00 (List Price $348.00)
- JBL Bar 300 5.0ch Compact Soundbar — $249.95 (List Price $399.95)
This JBL headset is meant to envelop. The headphones rock 50mm audio drivers with 20Hz–20kHz response and spatial audio. Wired with a PC splitter, they'll connect to PC, Xbox, or anything that has a standard 3.5mm audio jack.
These aren't just headphones. The headset features a responsive, flip-up boom mic. The mic is directionally oriented and tuned to your voice, so it cuts background audio that would otherwise get picked up by a computer mic. That means it will filter out traffic sounds from outside your apartment, or your partner knocking around in another room.
Right now, May 19th, the JBL Quantum 200 Wired Over-ear Gaming Headset is on sale for $29.95, for a savings of 50%.
Memory foam cups your ears and envelops them, insulating you from the outside world, and staying comfortable even during long gaming sessions.
YouTube TV's Multiview Feature Expands Beyond Sports
If you love the Multiview feature on YouTube TV but wish it worked on more channels and content other than sports, we have good news. Over the weekend, Google announced that it's experimenting with more multiview combinations, expanding the feature beyond sports, and I couldn't be more excited.
Today Only: This Acer Laptop With an Intel Core Ultra 7 is Almost $400 Off
If you want a powerful laptop and you don't want to break the bank, there are tons of options out there. This discount, however, might be one of the best we've seen so far, considering the specs it's packing.
Apple Music Introduces Sound Therapy, so Is It Any Good?
Apple Music can now do much more than just entertain. Sound Therapy combines popular hits and special sound waves to help you better focus, relax, or sleep. Let's see if it's worth your time.
Magentic-UI, an experimental human-centered web agent
Modern productivity is rooted in the web—from searching for information and filling in forms to navigating dashboards. Yet, many of these tasks remain manual and repetitive. Today, we are introducing Magentic-UI, a new open-source research prototype of a human-centered agent that is meant to help researchers study open questions on human-in-the-loop approaches and oversight mechanisms for AI agents. This prototype collaborates with users on web-based tasks and operates in real time over a web browser. Unlike other computer use agents that aim for full autonomy, Magentic-UI offers a transparent and controllable experience for tasks that are action-oriented and require activities beyond just performing simple web searches.
Magentic-UI builds on Magentic-One, a powerful multi-agent team we released last year, and is powered by AutoGen, our leading agent framework. It is available under MIT license at https://github.com/microsoft/Magentic-UI and on Azure AI Foundry Labs, the hub where developers, startups, and enterprises can explore groundbreaking innovations from Microsoft Research. Magentic-UI is integrated with Azure AI Foundry models and agents. Learn more about how to integrate Azure AI agents into the Magentic-UI multi-agent architecture by following this code sample.
Magentic-UI can perform tasks that require browsing the web, writing and executing Python and shell code, and understanding files. Its key features include:
- Collaborative planning with users (co-planning). Magentic-UI allows users to directly modify its plan through a plan editor or by providing textual feedback before Magentic-UI executes any actions.
- Collaborative execution with users (co-tasking). Users can pause the system and give feedback in natural language, or demonstrate the desired behavior by directly taking control of the browser.
- Safety with human-in-the-loop (action guards). Magentic-UI seeks user approval before executing potentially irreversible actions, and the user can specify how often Magentic-UI needs approvals. Furthermore, Magentic-UI is sandboxed for the safe operation of tools such as browsers and code executors.
- Learning from experience (plan learning). Magentic-UI can learn and save plans from previous interactions to improve task completion for future tasks.
While many web agents promise full autonomy, in practice users can be left unsure of what the agent can do, what it is currently doing, and whether they have enough control to intervene when something goes wrong or doesn’t occur as expected. By contrast, Magentic-UI considers user needs at every stage of interaction. We followed a human-centered design methodology in building Magentic-UI by prototyping and obtaining feedback from pilot users during its design.
Figure 2: Co-planning – Users can collaboratively plan with Magentic-UI.
For example, after a person specifies a task and before Magentic-UI even begins to execute, it creates a clear step-by-step plan that outlines what it would do to accomplish the task. People can collaborate with Magentic-UI to modify this plan and then give final approval for Magentic-UI to begin execution. This is crucial as users may have expectations of how the task should be completed; communicating that information could significantly improve agent performance. We call this feature co-planning.
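To make the co-planning flow concrete, here is a minimal sketch in Python, assuming hypothetical data structures rather than Magentic-UI's actual API: the agent proposes a plan, the user edits it, and nothing executes until the user gives explicit approval.

```python
# Hypothetical sketch of co-planning. Not Magentic-UI's real API;
# illustrative data structures only.
from dataclasses import dataclass, field

@dataclass
class Plan:
    task: str
    steps: list[str] = field(default_factory=list)
    approved: bool = False

def propose_plan(task: str) -> Plan:
    # Stand-in for the Orchestrator's LLM-generated plan.
    return Plan(task, steps=["search flights", "compare prices", "report cheapest"])

plan = propose_plan("find the cheapest flight to Tokyo")
plan.steps.insert(1, "filter to nonstop flights only")  # the user edits the plan
plan.approved = True                                     # explicit user approval
assert plan.approved, "execution must not start before approval"
print(plan.steps)
```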
During execution, Magentic-UI shows in real time what specific actions it’s about to take. For example, whether it is about to click on a button or input a search query. It also shows in real time what it observed on the web pages it is visiting. Users can take control of the action at any point in time and give control back to the agent. We call this feature co-tasking.
Figure 3: Co-tasking – Magentic-UI provides real-time updates about what it is about to do and what it already did, allowing users to collaboratively complete tasks with the agent.
Figure 4: Action guards – Magentic-UI will ask users for permission before executing actions that it deems consequential or important.
Additionally, Magentic-UI asks for user permission before performing actions that are deemed irreversible, such as closing a tab or clicking a button with side effects. We call these "action guards". The user can also configure Magentic-UI's action guards to always ask for permission before performing any action. If the user deems an action risky (e.g., paying for an item), they can reject it.
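Here is a minimal sketch of how an action guard could work, assuming hypothetical policy and action names (illustrative, not Magentic-UI's implementation): reversible actions run freely, while consequential ones pause for a yes/no from the user.

```python
# Hypothetical action guard: irreversible actions require explicit approval.
# The action names and policy strings are illustrative assumptions.
IRREVERSIBLE = {"click_buy", "close_tab", "submit_form", "delete_file"}

def needs_approval(action: str, policy: str) -> bool:
    if policy == "always-ask":          # user chose maximum oversight
        return True
    return action in IRREVERSIBLE       # default: guard consequential actions

def execute(action: str, policy: str = "default") -> str:
    if needs_approval(action, policy):
        answer = input(f"Allow '{action}'? [y/N] ")
        if answer.lower() != "y":
            return f"{action}: rejected by user"
    return f"{action}: executed"

print(execute("scroll_down"))   # runs without asking under the default policy
print(execute("click_buy"))     # pauses for a y/N answer first
```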
Figure 5: Plan learning – Once a task is successfully completed, users can request Magentic-UI to learn a step-by-step plan from this experience.
After execution, the user can ask Magentic-UI to reflect on the conversation and infer and save a step-by-step plan for future similar tasks. Users can view and modify saved plans for Magentic-UI to reuse in the future in a saved-plans gallery. In a future session, users can launch Magentic-UI with the saved plan to either execute the same task again, like checking the price of a specific flight, or use the plan as a guide to help complete similar tasks, such as checking the price of a different type of flight.
Combined, these four features—co-planning, co-tasking, action guards, and plan learning—enable users to collaborate effectively with Magentic-UI.
Architecture
Magentic-UI's underlying system is a team of specialized agents adapted from AutoGen's Magentic-One system. The agents work together to create a modular system:
- Orchestrator is the lead agent, powered by a large language model (LLM), that performs co-planning with the user, decides when to ask the user for feedback, and delegates sub-tasks to the remaining agents to complete.
- WebSurfer is an LLM agent equipped with a web browser that it can control. Given a request from the Orchestrator, it can click, type, scroll, and visit pages over multiple rounds to complete the request.
- Coder is an LLM agent equipped with a Docker code-execution container. It can write and execute Python and shell commands and provide a response back to the Orchestrator.
- FileSurfer is an LLM agent equipped with a Docker code-execution container and file-conversion tools from the MarkItDown package. It can locate files in the directory controlled by Magentic-UI, convert files to markdown, and answer questions about them.
To interact with Magentic-UI, users can enter a text message and attach images. In response, Magentic-UI creates a natural-language step-by-step plan with which users can interact through a plan-editing interface. Users can add, delete, edit, and regenerate steps, and write follow-up messages to iterate on the plan. While editing the plan adds an upfront cost to the interaction, it can potentially save a significant amount of time during execution and increase the plan's chance of success.
The plan is stored inside the Orchestrator and is used to execute the task. For each step of the plan, the Orchestrator determines which of the agents (WebSurfer, Coder, FileSurfer) or the user should complete the step. Once that decision is made, the Orchestrator sends a request to one of the agents or the user and waits for a response. After the response is received, the Orchestrator decides whether that step is complete. If it is, the Orchestrator moves on to the following step.
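The control flow described above could be sketched roughly as follows. The agent-selection heuristic and single-string "responses" are simplifications of the real LLM-driven Orchestrator, used here only to show the step-by-step delegation loop.

```python
# Illustrative sketch of the Orchestrator's delegation loop: for each plan
# step, pick an agent, wait for its response, and decide whether to advance.
# The agent names match the post; the logic is a simplified stand-in.
def pick_agent(step: str) -> str:
    if "browse" in step or "web" in step:
        return "WebSurfer"
    if "code" in step or "compute" in step:
        return "Coder"
    return "FileSurfer"

def step_is_complete(response: str) -> bool:
    return "handled" in response        # stand-in for the LLM's completion check

def run_plan(steps: list[str]) -> list[str]:
    transcript = []
    for step in steps:
        agent = pick_agent(step)
        response = f"{agent} handled: {step}"   # stand-in for a real sub-task run
        transcript.append(response)
        if not step_is_complete(response):
            break                                # real system: replan with user consent
    return transcript

for line in run_plan(["browse for the flight page", "compute total cost"]):
    print(line)
```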
Once all steps are completed, the Orchestrator generates a final answer that is presented to the user. If, while executing any of the steps, the Orchestrator decides that the plan is inadequate (for example, because a certain website is unreachable), the Orchestrator can replan with user permission and start executing a new plan.
All intermediate progress steps are clearly displayed to the user. Furthermore, the user can pause the execution of the plan and send additional requests or feedback. The user can also configure through the interface whether agent actions (e.g., clicking a button) require approval.
Evaluating Magentic-UI
Magentic-UI innovates through its ability to integrate human feedback in its planning and execution of tasks. We performed a preliminary automated evaluation to showcase this ability on the GAIA benchmark for agents with a user-simulation experiment.
Evaluation with simulated users
Figure 7: Comparison on the GAIA validation set of the accuracy of Magentic-One, Magentic-UI in autonomous mode, Magentic-UI with a simulated user powered by a smarter LLM than the Magentic-UI agents, Magentic-UI with a simulated user that has access to side information about the tasks, and human performance. This shows that human-in-the-loop can improve the accuracy of autonomous agents, bridging the gap to human performance at a fraction of the cost.
GAIA is a benchmark for general AI assistants, with multimodal question-answer pairs that are challenging, requiring the agents to navigate the web, process files, and execute code. The traditional evaluation setup with GAIA assumes the system will autonomously complete the task and return an answer, which is compared to the ground-truth answer.
To evaluate the human-in-the-loop capabilities of Magentic-UI, we transform GAIA into an interactive benchmark by introducing the concept of a simulated user. Simulated users provide value in two ways: by having specific expertise that the agent may not possess, and by providing guidance on how the task should be performed.
We experiment with two types of simulated users to show the value of human-in-the-loop: (1) a simulated user that is more intelligent than the Magentic-UI agents and (2) a simulated user with the same intelligence as Magentic-UI agents but with additional information about the task. During co-planning, Magentic-UI takes feedback from this simulated user to improve its plan. During co-tasking, Magentic-UI can ask the (simulated) user for help when it gets stuck. Finally, if Magentic-UI does not provide a final answer, then the simulated user provides an answer instead. These experiments reflect a lower bound on the value of human feedback, since real users can step in at any time and offer any kind of input—not just when the system explicitly asks for help.
The simulated user is an LLM without any tools, instructed to interact with Magentic-UI the way we expect a human would act. The first type of simulated user relies on OpenAI's o4-mini, more performant at many tasks than the one powering the Magentic-UI agents (GPT-4o). For the second type of simulated user, we use GPT-4o for both the simulated user and the rest of the agents, but the user has access to side information about each task. Each task in GAIA has side information, which includes a human-written plan to solve the task. While this plan is not used as input in the traditional benchmark, in our interactive setting we provide this information to the second type of simulated user, which is powered by an LLM so that it can mimic a knowledgeable user. Importantly, we tuned our simulated user so as not to reveal the ground-truth answer directly, as the answer is usually found inside the human-written plan. Instead, it is prompted to guide Magentic-UI indirectly. We found that this tuning prevented the simulated user from inadvertently revealing the answer in all but 6% of tasks when Magentic-UI provides a final answer.
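In code, the simulated-user setup might look like the following sketch; the prompt wording and the ask_llm helper are assumptions for illustration, not the exact experimental harness.

```python
# Sketch of the simulated-user idea: a second LLM plays the human, using side
# information (e.g., GAIA's human-written plan) to guide the agent indirectly
# without stating the ground-truth answer. Names here are assumptions.
def ask_llm(system_prompt: str, question: str) -> str:
    # Stand-in for a real LLM call (the post uses o4-mini or GPT-4o).
    return f"[LLM guided by '{system_prompt[:50]}...' answering: {question}]"

def simulated_user(question: str, side_info: str) -> str:
    system_prompt = (
        "You are simulating a knowledgeable user. Use the background below to "
        "guide the agent step by step, but never reveal the final answer. "
        f"Background: {side_info}"
    )
    return ask_llm(system_prompt, question)

print(simulated_user("I'm stuck; which source should I check?",
                     "plan: open the 2019 annual report, read table 3"))
```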
On the validation subset of GAIA (162 tasks), we show the results of Magentic-One operating in autonomous mode, Magentic-UI operating in autonomous mode (without the simulated user), Magentic-UI with simulated user (1) (smarter model), Magentic-UI with simulated user (2) (side-information), and human performance. We first note that Magentic-UI in autonomous mode is within a margin of error of the performance of Magentic-One. Note that the same LLM (GPT-4o) is used for Magentic-UI and Magentic-One.
Magentic-UI with the simulated user that has access to side information improves the accuracy of autonomous Magentic-UI by 71%, from a 30.3% task-completion rate to a 51.9% task-completion rate. Moreover, Magentic-UI only asks for help from the simulated user in 10% of tasks and relies on the simulated user for the final answer in 18% of tasks. And in those tasks where it does ask for help, it asks for help on average 1.1 times. Magentic-UI with the simulated user powered by a smarter model improves to 42.6%, where Magentic-UI asks for help in only 4.3% of tasks, asking for help an average of 1.7 times in those tasks. This demonstrates the potential of even lightweight human feedback for improving performance (e.g., task completion) over autonomous agents working alone, especially at a fraction of the cost compared to people completing tasks entirely manually.
Learning and reusing plans
As described above, once Magentic-UI completes a task, users have the option for Magentic-UI to learn a plan based on the execution of the task. These plans are saved in a plan gallery, which users and Magentic-UI can access in the future.
The user can select a plan from the plan gallery, which is displayed by clicking on the Saved Plans button. Alternatively, as a user enters a task that closely matches a previous task, the saved plan will be displayed even before the user is done typing. If no identical task is found, Magentic-UI can use AutoGen's Task-Centric Memory to retrieve plans for any similar tasks. Our preliminary evaluations show that this retrieval is highly accurate, and recalling a saved plan can be around 3x faster than generating a new plan. Once a plan is recalled or generated, the user can always accept it, modify it, or ask Magentic-UI to modify it for the specific task at hand.
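A simplified sketch of plan recall follows, with word-overlap similarity standing in for the retrieval that AutoGen's Task-Centric Memory actually performs.

```python
# Hedged sketch of plan reuse: match a new task against saved plans and
# recall the closest one instead of generating from scratch. The Jaccard
# word-overlap score below is a simplified stand-in for real retrieval.
def similarity(a: str, b: str) -> float:
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1)  # Jaccard overlap on words

saved_plans = {
    "check price of flight to Tokyo": ["open airline site", "search fares", "report price"],
    "summarize a PDF report": ["locate file", "convert to markdown", "summarize"],
}

def recall_plan(task: str, threshold: float = 0.4):
    best = max(saved_plans, key=lambda t: similarity(task, t))
    if similarity(task, best) >= threshold:
        return saved_plans[best]  # reuse; the post reports ~3x faster than replanning
    return None                   # no close match: generate a fresh plan instead

print(recall_plan("check price of flight to Osaka"))
```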
Safety and control
Magentic-UI can surf the live internet and execute code. With such capabilities, we need to ensure that Magentic-UI acts in a safe and secure manner. The following features, design decisions, and evaluations were made to ensure this:
- Allow-list: Users can set a list of websites that Magentic-UI is allowed to access. If Magentic-UI needs to access a website outside of the allow-list, users must explicitly approve it through the interface (see the sketch after this list).
- Anytime interruptions: At any point of Magentic-UI completing the task, the user can interrupt Magentic-UI and stop any pending code execution or web browsing.
- Docker sandboxing: Magentic-UI controls a browser that is launched inside a Docker container with no credentials, which avoids risks with logged-in accounts and credentials. Moreover, any code execution is also performed inside a separate Docker container to avoid affecting the host environment in which Magentic-UI is running. This is illustrated in the system architecture of Magentic-UI (Figure 3).
- Detection and approval of irreversible agent actions: Users can configure an action-approval policy (action guards) to determine which actions Magentic-UI can perform without user approval. In the extreme, users can specify that any action (e.g., any button click) needs explicit user approval. Users must press an “Accept” or “Deny” button for each action.
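Here is a minimal sketch of the allow-list check from the first item above, using illustrative names rather than Magentic-UI's real configuration: navigation outside the approved set pauses for explicit user approval.

```python
# Minimal sketch of an allow-list gate for browser navigation. The domain
# set and function names are illustrative assumptions.
from urllib.parse import urlparse

ALLOW_LIST = {"wikipedia.org", "github.com"}

def host_allowed(url: str) -> bool:
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in ALLOW_LIST)

def navigate(url: str) -> str:
    if not host_allowed(url):
        answer = input(f"{url} is outside the allow-list. Approve? [y/N] ")
        if answer.lower() != "y":
            return "navigation blocked"
    return f"visiting {url}"

print(navigate("https://en.wikipedia.org/wiki/AI_agent"))  # allowed silently
```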
In addition to the above design decisions, we performed a red-team evaluation of Magentic-UI on a set of internal scenarios, which we developed to challenge the security and safety of Magentic-UI. Such scenarios include cross-site prompt injection attacks, where web pages contain malicious instructions distinct from the user’s original intent (e.g., to execute risky code, access sensitive files, or perform actions on other websites). It also contains scenarios comparable to phishing, which try to trick Magentic-UI into entering sensitive information, or granting permissions on impostor sites (e.g., a synthetic website that asks Magentic-UI to log in and enter Google credentials to read an article). In our preliminary evaluations, we found that Magentic-UI either refuses to complete the requests, stops to ask the user, or, as a final safety measure, is eventually unable to complete the request due to Docker sandboxing. We have found that this layered approach is effective for thwarting these attacks.
We have also released transparency notes, which can be found at: https://github.com/microsoft/magentic-ui/blob/main/TRANSPARENCY_NOTE.md
Open research questions
Magentic-UI provides a tool for researchers to study critical questions in agentic systems and particularly on human-agent interaction. In a previous report, we outlined 12 questions for human-agent communication, and Magentic-UI provides a vehicle to study these questions in a realistic setting. A key question among these is how we enable humans to efficiently intervene and provide feedback to the agent while it executes a task. Humans should not have to constantly watch the agent. Ideally, the agent should know when to reach out for help and provide the necessary context for the human to assist it. A second question is about safety. As agents interact with the live web, they may become prone to attacks from malicious actors. We need to study what safeguards are necessary to protect the human from side effects without adding a heavy burden on the human to verify every agent action. There are also many other questions surrounding security, personalization, and learning that Magentic-UI can help study.
Conclusion
Magentic-UI is an open-source agent prototype that works with people to complete complex tasks that require multi-step planning and browser use. As agentic systems expand in the scope of tasks they can complete, Magentic-UI's design enables better transparency into agent actions and enables human control to ensure safety and reliability. Moreover, by facilitating human intervention, we can improve performance while still reducing human cost in completing tasks on aggregate. Today we have released the first version of Magentic-UI. Looking ahead, we plan to continue developing it in the open with the goal of improving its capabilities and answering research questions on human-agent collaboration. We invite the research community to extend and reuse Magentic-UI for their scientific explorations and domains.