Both OpenAI and Anthropic Are Now Working with the Military
OpenAI signed an agreement with the Department of War. Anthropic's CEO issued statements about their military discussions. Both companies previously restricted military use.
This is a significant philosophical and business shift. Both companies were founded with strong safety principles and initially prohibited military applications of their AI.
What changed:
- AI has become important enough to national security that safety-focused companies feel they need to participate (or risk being excluded from policy conversations)
- Government AI contracts are worth billions
- The argument: it's better for safety-focused companies to be at the table than to cede the space to companies with fewer guardrails
The counterargument from critics: participating in military AI normalizes weapons applications and creates pressure to relax safety restrictions when they conflict with military requirements.
For commercial users, the direct impact is minimal — you'll still use the same models. But the indirect effects are significant: government contracts bring massive revenue, which funds better models for everyone. And military-grade security requirements raise the bar for commercial products too.
Why it matters
When AI companies take on government contracts, the security, reliability, and auditability standards they must meet flow downstream to commercial products. GPS started as military tech. The internet started as ARPANET. Military investment in AI will make your commercial AI tools more robust.
The takeaway
The AI you use for business is about to get more secure and reliable, partly funded by government contracts. The political and ethical questions are worth following, but the practical impact on commercial AI is likely positive.
Agents · Feb 20, 2026
Claude Gets Cybersecurity Powers — AI-Powered Vulnerability Scanning Is Here
Anthropic released frontier cybersecurity capabilities through Claude Code. AI that finds security holes in your code the way a human expert would, but faster.
Cybersecurity is one of those fields where expertise is scarce and expensive — a qualified security researcher costs $200-500/hour. Most small and mid-size businesses simply can't afford regular security audits.
Claude Code can now scan codebases for vulnerabilities: SQL injection, authentication bypasses, data exposure, dependency issues. It reads code the way a security expert would, understanding context and intent rather than just pattern-matching known signatures.
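To make concrete what this kind of scan flags, here's the classic SQL injection pattern and its fix, sketched with Python's built-in sqlite3 module. The table and field names are invented for illustration; the vulnerability itself is the textbook one.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # VULNERABLE: user input is spliced directly into the SQL string,
    # so input like "' OR '1'='1" turns the filter into "match everything"
    query = f"SELECT * FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # SAFE: a parameterized query treats the input as data, never as SQL
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

malicious = "' OR '1'='1"
print(len(find_user_unsafe(malicious)))  # leaks every row in the table
print(len(find_user_safe(malicious)))    # matches nothing
```

A scanner that understands intent, rather than just signatures, is looking for exactly this gap between what the code does and what the developer meant.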
This isn't replacing security experts for critical systems. But it's making "basic security hygiene" accessible to every business with custom software. Think of it as the difference between having no security review and having a competent first pass — it catches the obvious stuff so human experts can focus on the subtle stuff.
The timing is relevant: as AI agents get more capable (handling payments, accessing databases, managing infrastructure), the attack surface for AI-connected systems grows. Security isn't optional anymore.
Why it matters
Every business with a website, app, or API has code that could have vulnerabilities. A data breach costs $4.45 million on average (IBM 2023). AI-powered security scanning makes protection accessible to businesses that previously couldn't afford it. The economics just changed dramatically.
The takeaway
If you have custom software and haven't done a security review recently, AI-powered scanning is now affordable enough that there's no excuse. Ask your developer about it — it's hours of work, not weeks, and it could prevent a catastrophic breach.
Business · Feb 25, 2026
What Ad-Supported AI Would Actually Look Like (It's Terrifying)
A demo of "free" ad-supported AI chat went viral on Hacker News (440+ upvotes). It shows what happens when your AI assistant is funded by advertisers instead of you.
A developer built a working demo of what AI chat looks like when it's "free" and paid for by ads. The AI subtly steers conversations toward advertisers' products, inserts sponsored suggestions, and optimizes for engagement (keeping you chatting) rather than helpfulness (giving you the answer and moving on).
This isn't hypothetical. Social media already works this way — the algorithm doesn't care if you're happy, it cares if you're scrolling. The demo applies the same incentive model to AI assistants.
Anthropic (Claude) has explicitly committed to never being ad-supported. Their argument: advertising incentives are fundamentally incompatible with a genuinely helpful AI. When the AI works for advertisers, it has a conflict of interest with you.
OpenAI hasn't made the same commitment. Google's AI is already integrated with their ad business. This is worth watching.
Why it matters
As AI becomes embedded in business decisions — analyzing options, recommending vendors, evaluating tools — the question of "who does this AI actually work for?" becomes critical. An ad-supported AI recommending your next software purchase has the same conflict of interest as a paid product review.
The takeaway
For any AI touching business decisions, pay for it. The subscription isn't a cost — it's the guarantee that the AI's incentives are aligned with yours. Free AI has a customer, and it's not you.
Agents · Feb 25, 2026
Anthropic Acquires Vercept — Computer Use Gets Real
Anthropic acquired Vercept to improve Claude's ability to use computers like a human — clicking, typing, navigating apps. This is a bigger deal than it sounds.
"Computer use" is the idea that instead of building custom integrations for every app your AI needs to touch, the AI just... uses the app. Opens a browser, clicks buttons, fills in forms, reads what's on screen.
Right now, connecting AI to your business tools usually requires API integrations — technical work that costs time and money. Computer use could skip that entirely. The AI uses the same interface your employees use.
Anthropic acquiring Vercept (a startup specializing in this) signals they're serious about making it production-ready. The current version of computer use in Claude works but is still clunky — it's like watching a new employee use a computer for the first time. Each generation gets significantly smoother.
Combined with Google's WebMCP announcement (websites declaring structured tools for agents), we're seeing a two-pronged approach: structured tools where available, visual computer use as a fallback.
Why it matters
Computer use could be what makes AI accessible to businesses without developer teams. Right now, serious AI automation requires technical integration work. If the AI can use your existing software through the same interface your team uses, the barrier to adoption drops to near zero.
The takeaway
Don't rush to build expensive custom integrations. Computer use is 6-12 months from being reliable for most business tasks. Plan your AI strategy knowing this is coming — it may be worth waiting for some integrations and doing others now.
Business · Feb 28, 2026
OpenAI + Amazon = AI Models on AWS. Here's the Business Impact.
OpenAI and Amazon announced a major partnership. OpenAI's models are now available on AWS, plus a new "Stateful Runtime" for AI agents.
What happened: if your company runs on Amazon Web Services (which holds roughly a third of the cloud infrastructure market), you can now use OpenAI's models without setting up a separate account. It's like your favorite restaurant opening in your neighborhood.
The bigger news is the "Stateful Runtime" — a system for running AI agents that remember what they're doing across multiple steps. Currently, most AI agents are stateless — each request starts fresh. The Stateful Runtime lets an agent work on a complex task (like processing a 50-page contract or managing a multi-day project) without losing context.
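OpenAI hasn't published the Stateful Runtime's API at this level of detail, so the sketch below is a generic illustration of the stateless-vs-stateful difference, with invented names, not the real product interface.

```python
# Generic sketch of stateless vs. stateful agent steps.
# All class and function names here are illustrative only.

class StatefulAgent:
    """Keeps working memory between steps, so step 3 can see
    what steps 1 and 2 learned."""
    def __init__(self):
        self.memory = []  # persists across calls

    def step(self, observation):
        self.memory.append(observation)
        # a real agent would call a model here with the full memory;
        # we just report how much context it would receive
        return f"acting with {len(self.memory)} observations in memory"

def stateless_step(observation):
    # a stateless call sees only the current request, every time
    return "acting with only the current observation"

agent = StatefulAgent()
for page in ["contract page 1", "contract page 2", "contract page 3"]:
    print(agent.step(page))  # context grows with each step
```

The practical difference: the stateful agent on page 3 still knows what page 1 said; the stateless function has to be re-told everything on every call.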
Simultaneously, OpenAI announced they want to be the "one platform" for all AI — models, hosting, tools, everything in one place.
The catch: platform convenience comes with vendor lock-in. Build everything on OpenAI's platform, and switching later is expensive and painful. This is the classic tech playbook — make it easy to start, hard to leave.
Why it matters
The AI infrastructure landscape is consolidating fast. Choosing where to build your AI stack is becoming a strategic decision, not just a technical one. AWS + OpenAI means more competition with Microsoft Azure + OpenAI and Google Cloud + Gemini. More competition = better pricing for you.
The takeaway
Don't build your AI on a single vendor's platform without understanding the switching costs. A good AI developer builds model-agnostic — meaning you can swap between Claude, GPT, and Gemini without rebuilding. Insist on this.
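What "model-agnostic" looks like in practice is a thin adapter layer: application code calls one function, never a vendor SDK directly. The sketch below uses placeholder functions rather than real SDK calls; in a real project each would wrap that vendor's client library.

```python
from typing import Callable, Dict

# Minimal sketch of model-agnostic design. The three vendor
# functions are stand-ins, not real API calls.

def _call_claude(prompt: str) -> str:
    return f"[claude] {prompt}"

def _call_gpt(prompt: str) -> str:
    return f"[gpt] {prompt}"

def _call_gemini(prompt: str) -> str:
    return f"[gemini] {prompt}"

PROVIDERS: Dict[str, Callable[[str], str]] = {
    "claude": _call_claude,
    "gpt": _call_gpt,
    "gemini": _call_gemini,
}

def complete(prompt: str, provider: str = "claude") -> str:
    # Swapping vendors becomes a one-line config change, not a rebuild
    return PROVIDERS[provider](prompt)

print(complete("Summarize this invoice", provider="gpt"))
```

If the AWS partnership or pricing changes, an architecture like this swaps providers without touching the application code that sits on top.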
Models · Mar 1, 2026
Karpathy Built a Full GPT in 200 Lines of Code — And It Explains Everything
Andrej Karpathy (ex-Tesla AI chief, ex-OpenAI) just published "microgpt" — a complete, working GPT in 200 lines of Python with zero dependencies. 1,600+ upvotes on Hacker News.
Why this matters even if you'll never read a line of code:
Karpathy is probably the best AI educator alive. He was the director of AI at Tesla (Autopilot), a founding team member at OpenAI, and he has spent years distilling complex AI into things anyone can understand.
microgpt is his "here's the entire thing, no magic" moment. In 200 lines, he showed:
- How AI reads text and converts it to numbers (tokenization)
- How the AI learns patterns (training)
- How it generates new text (inference)
- How the "attention" mechanism works (the core innovation of modern AI)
The model he built is tiny — 4,192 parameters vs. hundreds of billions in ChatGPT — but the algorithm is identical. It's like building a working car engine the size of your fist. Same principles, just miniaturized.
His key insight for non-technical people: "When you chat with ChatGPT, the system prompt, your message, and its reply are all just tokens in a sequence. The model is completing the document one token at a time." In other words, ChatGPT doesn't "think" — it predicts what word comes next, really well.
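That "completing the document one token at a time" idea fits in a few lines. The sketch below is not microgpt's code; it replaces the trained network with a hand-written table of next-token probabilities, then decodes greedily, which is the same loop a real model runs at generation time.

```python
# Toy illustration of "predict the next token, one at a time."
# NEXT_TOKEN stands in for what a trained model would output.

NEXT_TOKEN = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.9, "ran": 0.1},
    "sat": {"down": 1.0},
    "dog": {"ran": 1.0},
    "ran": {"away": 1.0},
}

def generate(start: str, max_tokens: int = 5) -> list:
    tokens = [start]
    for _ in range(max_tokens):
        options = NEXT_TOKEN.get(tokens[-1])
        if not options:
            break  # the "model" has nothing to say after this token
        # greedy decoding: always pick the most likely next token
        tokens.append(max(options, key=options.get))
    return tokens

print(" ".join(generate("the")))  # the cat sat down
```

ChatGPT does exactly this, except its "table" is computed fresh for every position by a network with hundreds of billions of parameters, conditioned on everything that came before.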
Why it matters
Understanding how AI actually works (even at a high level) gives you better intuition about what it can and can't do. AI doesn't "understand" — it predicts patterns. That's why it's amazing at tasks with clear patterns (code, data analysis, writing in a style) and unreliable at tasks requiring real-world truth (factual claims, math, novel reasoning).
The takeaway
AI is pattern matching at superhuman scale. When you know that, you can design better AI systems: give it clear patterns to match (good prompts, good examples, good data) and don't trust it for things that require ground truth verification.
Tools · Mar 1, 2026
Hot Debate: Is MCP Already Dying? Developers Say Just Use the Command Line
A viral Hacker News post argues that MCP (Model Context Protocol) — the hot new way AI connects to tools — is already obsolete. 208+ upvotes and heated debate.
Background: MCP is a protocol Anthropic created to let AI agents talk to external tools (your CRM, your email, your database). Every company rushed to build MCP support. It was the hot thing in AI infrastructure.
The counter-argument (from a developer named Eric Holmes, with much of HN agreeing): AI models are already great at using command-line tools. They've been trained on millions of Stack Overflow answers and man pages. You don't need a special protocol — just give the AI access to the tools that already exist.
His key points:
- CLI tools are debuggable (you can run the same command yourself)
- They compose (pipe outputs between tools)
- Auth already works (existing login flows)
- No background processes to manage
- "If you're building an MCP server but don't have a CLI, you're doing it backwards"
The other side: MCP provides a standardized interface that's easier for non-technical teams to set up. Not everything has a good CLI.
The likely outcome: both will coexist, with MCP for web-based tools and the CLI for everything else.
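Holmes's debuggability point is easy to see in code: when the agent's "tool" is an ordinary command, you can copy-paste the exact same command into a terminal and reproduce what the agent saw. This sketch uses Python itself as the command so it runs anywhere; a real agent would invoke tools like `git` or `grep` the same way.

```python
import subprocess
import sys

def run_cli_tool(args: list) -> str:
    # The exact command the agent ran is visible, loggable,
    # and reproducible by a human at a shell prompt
    result = subprocess.run(args, capture_output=True, text=True, check=True)
    return result.stdout.strip()

# The "tool call" is just a command line:
command = [sys.executable, "-c", "print(2 + 2)"]
print(run_cli_tool(command))  # prints 4; run the same command by hand to debug
```

Contrast that with an MCP server: a background process with its own protocol traffic, which is harder to inspect when something breaks, but presents a standardized, typed interface that non-technical teams can configure.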
Why it matters
This debate reveals something important about where AI tooling is heading. The winner won't be the fanciest protocol — it'll be whatever is most reliable and easiest to debug. For businesses buying AI solutions, the lesson is: don't get locked into one approach. Good AI architecture is tool-agnostic.
The takeaway
When evaluating AI tools or developers, ask: "What happens when something breaks? How do we debug it?" The ability to inspect and fix issues matters more than how fancy the integration looks in a demo.
Tools · Mar 1, 2026
Google Wants Every Website to Be "Agent-Ready" — WebMCP Just Launched
Chrome just announced WebMCP — a new standard that lets AI agents interact with websites the way humans do, but faster and more reliably.
Right now, if an AI agent needs to book a flight on a website, it has to "look" at the screen, figure out where to click, type in fields, and navigate pages — just like a clumsy robot using a human interface. It works, but it's slow and breaks easily.
WebMCP changes this. Websites can now declare structured "tools" — basically telling AI agents: "Here's exactly how to search flights, here's how to fill in passenger details, here's how to check out." The agent doesn't need to figure out the UI — it just calls the right tool.
Think of it like the difference between explaining to someone over the phone how to navigate a website vs. giving them an API. Same result, 10x more reliable.
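Google hasn't published the full manifest format here, so as an illustration only: a declared tool would be a structured description of an action, its parameters, and their types, similar in shape to how MCP describes tools today. Every field name below is hypothetical.

```python
import json

# Hypothetical sketch of a site-declared agent tool. WebMCP's real
# format may differ; this is modeled on MCP-style tool descriptions.

flight_search_tool = {
    "name": "search_flights",
    "description": "Search available flights between two airports.",
    "parameters": {
        "type": "object",
        "properties": {
            "origin":      {"type": "string", "description": "IATA code, e.g. SFO"},
            "destination": {"type": "string", "description": "IATA code, e.g. JFK"},
            "date":        {"type": "string", "format": "date"},
        },
        "required": ["origin", "destination", "date"],
    },
}

# The agent calls the tool with structured arguments instead of
# hunting for buttons and form fields in the page's UI:
call = {"tool": "search_flights",
        "arguments": {"origin": "SFO", "destination": "JFK", "date": "2026-03-15"}}
print(json.dumps(call, indent=2))
```

The reliability win comes from the schema: the site says exactly what it accepts, so there's nothing for the agent to misread on screen.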
Google is running an early preview program for developers. Use cases they highlighted: customer support, e-commerce, and travel booking.
Why it matters
This is the infrastructure layer that makes AI agents actually work at scale. When every major website supports WebMCP, your AI agent will be able to book travel, file support tickets, manage subscriptions, and place orders — without brittle screen-scraping workarounds. It's like when websites all agreed on HTTPS — it just becomes the baseline.
The takeaway
If you run a business website, WebMCP will eventually be something you want to support. For now, just know that the web is being rebuilt to be AI-native. The businesses that prepare for this will have a massive advantage.
Agents · Mar 1, 2026
Claude Drove a Rover on Mars — Here's Why That Matters for Your Business
Anthropic announced Claude helped NASA's Perseverance rover navigate 400 meters on Mars. First AI-assisted drive on another planet.
The technical challenge: radio signals between Earth and Mars take roughly 3 to 22 minutes each way, depending on where the planets are in their orbits. You can't remote-control a rover in real time. So the rover needs to make decisions locally — analyze terrain photos, identify safe paths, and drive itself.
Claude analyzed the rover's camera data and helped mission controllers plan a 400-meter traverse across rocky Martian terrain. The AI isn't physically steering — it's doing what AI does best: looking at data, understanding context, and recommending decisions faster than humans can.
This is the exact same pattern as a business AI agent: receive data (invoice, email, sensor reading), analyze it (extract info, check against rules), and take or recommend action (pay, respond, alert).
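That receive → analyze → act pattern is simple enough to sketch. Here it's applied to invoices; the rules, threshold, and field names are invented for illustration, and a real agent would put a model call where the hard-coded checks are.

```python
# Sketch of the agent pattern: receive data, analyze it against
# rules, then act or escalate. All values here are illustrative.

APPROVAL_LIMIT = 5000  # invoices above this need a human

def process_invoice(invoice: dict) -> str:
    # 1. receive: the dict stands in for a parsed invoice
    amount = invoice["amount"]
    # 2. analyze: check against business rules
    if amount <= APPROVAL_LIMIT and invoice["vendor_known"]:
        # 3. act: routine cases are handled automatically
        return "pay"
    # 3'. recommend: risky cases go to a human, the way a rover
    #     flags uncertain terrain for mission control
    return "escalate to human"

print(process_invoice({"amount": 1200, "vendor_known": True}))  # pay
print(process_invoice({"amount": 9800, "vendor_known": True}))  # escalate to human
```

The Mars version swaps invoices for terrain images and "pay" for "drive", but the control flow — and the habit of escalating anything uncertain — is the same.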
Why it matters
Mars navigation and your business operations share the same AI architecture. If the AI can handle the stakes of a $2.7 billion rover on another planet, it can handle your invoices. The gap between "cutting-edge research" and "available for business" is now measured in months, not decades.
The takeaway
Next time someone says "AI isn't ready for serious work" — it's literally driving on Mars. The same models are available via API for your projects today.
Models · Mar 1, 2026
Claude Sonnet 4.6 Drops — Best Coding AI Yet
Anthropic released Claude Sonnet 4.6 on Feb 17. It's their new flagship for coding, agents, and complex professional work.
What developers are saying: Sonnet 4.6 is the first model where you can hand it a codebase and say "add this feature" and get back working code more often than not. It's specifically tuned for agent workflows — meaning AI that takes multi-step actions, not just answers questions.
The model sits in the "Sonnet" tier — Anthropic's sweet spot between power and cost. Above it is Opus (most capable, expensive) and below is Haiku (fastest, cheapest). Most production AI agents run on Sonnet-class models because the price/performance ratio is right.
This matters because the competitive gap between Claude, GPT, and Gemini changes with every release. Right now, Claude is widely considered the best for code and structured reasoning. GPT leads in breadth and ecosystem. Gemini leads in multimodal (images, video, audio) and has the biggest context window.
Why it matters
If you're building or buying AI tools, the model underneath matters. Claude Sonnet 4.6 being "best at coding" means any AI developer you work with can build more reliable automations faster. It also means the agents running your business workflows get more capable without you changing anything — your AI developer just switches to the new model.
The takeaway
You don't need to track model releases yourself. But know that AI capabilities are improving every few weeks, not every few years. Something your developer said "wasn't possible" 3 months ago might be straightforward now. Worth re-asking.
Want these updates in your inbox? We're working on a newsletter. For now, bookmark this page or reach out directly.