Morning Brief
2026-04-24 · 18 sources
Benchmarks, hardware, and hustle porn — six creators dropped today and the only one worth your evening is Cole's autonomous dev factory on Kimi K2.6.
What Creators Are Saying
Nate Herk | AI Automation
GPT 5.5 vs Claude Opus 4.7 bake-off — watch for the exact prompts and n8n nodes he swaps between the two; that's the only reusable artifact.
1 video
I Tested GPT 5.5 vs Opus 4.7: What You Need to Know
Head-to-head: GPT 5.5 versus Claude Opus 4.7.
No transcript available, so there's no instructive build or tool breakdown to extract here — skip unless you want the vibes-based model comparison.
details
What it is: A model showdown video comparing OpenAI's GPT 5.5 against Anthropic's Claude Opus 4.7, framed as practical tests rather than benchmark theater.
How it works:
- Transcript was not available at analysis time, so specifics of the test harness, prompts, and scoring are unknown
- Based on Nate's usual format, expect side-by-side prompts run through agent workflows (likely n8n) with qualitative commentary on output quality, reasoning, tool use, and cost
- Comparison likely covers coding, agentic tasks, and long-context reasoning — the areas where Opus 4.7 and GPT 5.5 are currently fighting for the lead
Tools & links:
- AI Automation Society Plus (paid) — Nate's paid community with full courses
- AI Automation Society (free) — free resources
- Podcast application — apply to be on his YT podcast
- Uppit AI — Nate's agency, "work with me"
Why it matters for you: You wanted instructive breakdowns with repos, tools, and workflows — this one is a model comparison without a transcript, so there's nothing concrete to lift. Skip unless you want a gut-check on which frontier model to default to in your automations.
6 previously covered
Cole Medin
Live autonomous dev factory on Kimi K2.6 — steal his orchestration pattern for mx-workflow before everyone else does.
1 video
Pushing My Dark Factory Further with Kimi K2.6: A Codebase That Writes Its Own Code, Live
Live demo of an autonomous AI dev factory using Kimi K2.6.
Cole's 'Dark Factory' is a working reference for no-human-in-the-loop agent pipelines — directly relevant to where mx-workflow is headed with Claude agent teams.
details
What it is: Live session running Cole's AI 'Dark Factory' — an autonomous pipeline where agents triage GitHub issues, write code, and ship PRs with zero human coding. This episode swaps the coding model to Kimi K2.6 and pushes it against a real app.
How it works:
- Triage agent picks which GitHub issues to accept into the work queue
- Coding agent (now Kimi K2.6) implements the fix/feature end-to-end
- Runs against a real production-style application, not a toy repo
- Built fully in public — prior episodes cover each layer of the factory
- 'No humans allowed' policy: code is written, reviewed, and merged by agents
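The triage-to-merge loop above can be sketched roughly like this (all interfaces are hypothetical stand-ins; Cole's actual Archon harness and agent APIs are not reproduced here):

```typescript
// Hypothetical sketch of a no-human-in-the-loop factory:
// triage -> implement -> review -> merge, every step an agent call.
type Issue = { id: number; title: string };
type Patch = { issueId: number; diff: string; approved: boolean };

// Stand-ins for real agent calls (e.g. Kimi K2.6 behind an orchestrator).
const triageAgent = (issues: Issue[]): Issue[] =>
  issues.filter((i) => !i.title.startsWith("wontfix:")); // accept into work queue

const codingAgent = (issue: Issue): Patch => ({
  issueId: issue.id,
  diff: `// fix for #${issue.id}`, // model implements the change end-to-end
  approved: false,
});

const reviewAgent = (patch: Patch): Patch => ({
  ...patch,
  approved: patch.diff.length > 0, // agent review gate, no humans allowed
});

function runFactory(backlog: Issue[]): number[] {
  const merged: number[] = [];
  for (const issue of triageAgent(backlog)) {
    const patch = reviewAgent(codingAgent(issue));
    if (patch.approved) merged.push(patch.issueId); // agent-merged PRs
  }
  return merged;
}
```

The interesting engineering is in the gates, not the loop: which issues the triage agent refuses, and what the review agent rejects, is where the model swap shows its failure modes.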
What's different this time:
- Model swap from prior Claude/GPT-based coding agent to Kimi K2.6 (Moonshot's latest open-weight coding model)
- Stress-testing whether a non-frontier model can carry the full autonomous loop
- Live-coding format — you see failure modes and prompt/harness adjustments in real time
Tools & links:
- Kimi K2.6 — Moonshot AI's coding-optimized model, used as the implementation agent
- Archon — Cole's open-source agent orchestration / harness engineering framework, likely powering the factory
- Cole's channel — full Dark Factory build series
Why it matters for you: mx-workflow's agent team (mx-feature-builder, mx-quality-keeper, mx-test-builder, etc.) is architecturally adjacent to a Dark Factory — Cole's live failure modes with model swaps + triage-to-merge loops are exactly the gaps to close before mx-workflow can run unattended.
Chris Koerner on The Koerner Office Podcast
Nothing new.
Codie Sanchez
Pick three AI tools and actually ship — the shiny-object tax is killing your side hustle margins.
1 video
6 previously covered
Alex Ziskind
M5 Max MacBook Pro crushes local AI but costs 3x a maxed Mac mini — the mini still wins on $/token unless you need portability.
1 video
This MacBook Pro Makes Me Feel Stupid
M5 Pro vs M5 Max MacBook Pro for local AI.
Alex second-guesses his M5 Max pick — the verdict tells you which Apple Silicon tier actually earns its price for local LLM work.
details
What it is: A head-to-head of the M5 Pro and M5 Max MacBook Pros, framed around whether the Max premium is justified for AI/ML workloads.
How it works:
- Alex runs his usual local LLM benchmarks across both chips
- Compares memory bandwidth, unified memory ceilings, and token/sec on common models
- Flags where the Pro punches above its weight and where the Max pulls ahead
- Questions his own Max purchase given the price delta
Machine & cost angle (your focus):
- M5 Max MacBook Pro — his daily driver, top tier, priciest config
- M5 Pro MacBook Pro — the one making him second-guess the upgrade
- Context vs Mac mini: MacBook Pros sit well above Mac mini pricing; the M5 Pro MBP is the closer comparison point if you're weighing portability vs a mini on your desk
- Transcript wasn't available, so exact $ figures and tok/s numbers live in the video itself
Tools & links:
- Verdent — sponsor, multi-plan / multi-model code review tool
- 2x and 4x 400Gbps switches referenced in gear links (networking rig, not core to the AI comparison)
Why it matters for you: If you're eyeing Apple Silicon for local AI, this is the video that cuts through the Max hype — skip to the benchmark and price sections and ignore the deep hardware tangents.
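To make the $/token framing concrete: amortized cost per token is just hardware price divided by lifetime token throughput. The prices, tok/s figures, and lifetime below are placeholders, not Alex's measurements — pull the real numbers from the video:

```typescript
// Amortized hardware cost per million tokens, ignoring power and resale.
// All inputs are hypothetical; substitute real tok/s and prices.
function dollarsPerMTok(priceUsd: number, toksPerSec: number, lifetimeHours: number): number {
  const totalTokens = toksPerSec * lifetimeHours * 3600;
  return (priceUsd / totalTokens) * 1e6;
}

// e.g. a hypothetical $1,399 Mac mini at 20 tok/s vs a $4,999 M5 Max MBP
// at 35 tok/s over 5,000 hours of use: the mini wins on $/token unless
// the Max's speedup scales with its price premium.
const mini = dollarsPerMTok(1399, 20, 5000);
const max = dollarsPerMTok(4999, 35, 5000);
```

The general pattern holds regardless of the exact numbers: a 3x price premium only breaks even on $/token if it buys a 3x throughput gain.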
Matt Wolfe
AI-vs-creators hot take — skip it, nothing here moves the needle on your web apps.
1 video
My most controversial opinion....
Matt's take on whether AI replaces content creators.
Skip — it's a vibes-only opinion piece with zero relevance to building web apps.
details
What it is: A personal hot-take video where Matt answers whether AI will replace human content creators, based purely on his own experience using AI daily.
How it works:
- No transcript available, but the framing is clear from title/description/tags (#aislop, #contentcreator)
- Pure opinion monologue — no tools demoed, no workflows shown, no code
- Aimed at the creator-economy audience, not builders
Tools & links:
- None mentioned beyond Matt's usual futuretools.io plug
Why it matters for you: It doesn't. You build web apps — this is creator-discourse filler with no technical substance, no shippable ideas, and nothing that changes how you build. Hard skip.
7 previously covered
My First Million
Buffett's $19M charity lunch economics — the lesson is scarcity + status pricing, not the lunch itself.
1 video
$2M for a lunch with Warren Buffett?
Breaking down the absurd economics of Buffett's charity lunch.
No transcript available — can't extract the featured individuals, businesses, or money-making tactics you actually want from this channel.
details
What it is: A My First Million episode riffing on the $2M+ price tags people paid to have lunch with Warren Buffett at his annual Glide Foundation charity auction.
How it works:
- No transcript was provided for this video, so the actual guests, businesses, and tactical breakdowns can't be summarized
- Title implies the hosts (Shaan Puri, Sam Parr) dissect the ROI logic — who paid, what they got, and whether spending seven figures on a meal is rational signaling, network arbitrage, or pure ego
- Typical MFM format: anecdote → business framework → tangents into adjacent money-making ideas
Tools & links:
- None extractable without transcript
Why it matters for you: Skip until a transcript exists — the channel's value for you is the detailed who/what/how-they-made-money breakdown, and that's exactly what's missing here.
What Shipped
claude-code
Config persistence, PR templates, hook timings, paste fixes.
Your daily `/config` settings now actually stick, and several paper-cut paste/scroll bugs that hit power users are fixed.
details
What changed:
- `/config` settings (theme, editor mode, verbose, etc.) now persist to `~/.claude/settings.json` and respect project/local/policy precedence
- New `prUrlTemplate` setting points the PR badge at a custom code-review URL (not just github.com)
- New `CLAUDE_CODE_HIDE_CWD` env var hides the working directory in the startup logo
- `--from-pr` now accepts GitLab MRs, Bitbucket PRs, and GitHub Enterprise URLs
- `--print` mode now honors agent `tools:`/`disallowedTools:` frontmatter (matches interactive mode)
- `--agent <name>` now honors `permissionMode` for built-in agents
- PowerShell tool commands can be auto-approved in permission mode (parity with Bash)
- `PostToolUse` / `PostToolUseFailure` hooks now receive `duration_ms`
- Subagent + SDK MCP server reconfiguration connects in parallel (faster startup)
- Plugins pinned by another plugin's version constraint auto-update to the highest satisfying tag
- Vim mode: Esc in INSERT no longer yanks queued messages back; second Esc interrupts
- Slash command picker highlights matched chars and wraps long descriptions instead of truncating
- `owner/repo#N` shorthand uses your git remote's host
- Status line stdin JSON now includes `effort.level` and `thinking.enabled`
- OTel `tool_result`/`tool_decision` events include `tool_use_id`; `tool_result` adds `tool_input_size_bytes`
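On the `/config` persistence item, settings layering presumably resolves later-overrides-earlier; a hedged sketch of that precedence (the field names and exact layer ordering are assumptions, not the documented schema):

```typescript
// Hypothetical settings layers, lowest to highest precedence.
// Real claude-code precedence and fields may differ; this only
// illustrates the shallow-merge resolution.
type Settings = { theme?: string; editorMode?: string; verbose?: boolean };

function resolveSettings(
  user: Settings,    // ~/.claude/settings.json
  project: Settings, // shared project settings
  local: Settings,   // local project overrides
  policy: Settings,  // managed policy wins last
): Settings {
  return { ...user, ...project, ...local, ...policy };
}
```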
Bug fixes worth noting:
- CRLF paste no longer inserts blank lines between every line (Windows clipboard, Xcode console)
- Multi-line paste no longer loses newlines under kitty keyboard protocol
- Glob/Grep no longer disappear on native macOS/Linux builds when Bash is denied
- Fullscreen scroll-up no longer snaps back to bottom on tool finish
- MCP HTTP no longer fails on non-JSON OAuth discovery responses
- Auto mode no longer overrides plan mode with conflicting instructions
- Async `PostToolUse` hooks with no payload no longer write empty transcript entries
- Security: `blockedMarketplaces` now correctly enforces `hostPattern` / `pathPattern`
Breaking changes:
- None (behavior changes around `/config` persistence and `--print` tool filtering could surprise users relying on prior defaults, but no explicit breaks called out)
Why it matters for you: If you live in Claude Code daily, the `/config` persistence, fullscreen scroll fix, and parallel MCP startup are quiet quality-of-life wins you'll feel immediately — and the new `duration_ms` in hooks is useful if you ever want to profile your own tool usage.
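If you do want to profile tool usage off the new hook field, a minimal sketch: aggregate the JSON payloads your `PostToolUse` hook receives into per-tool timing stats. Only `duration_ms` is confirmed by the changelog; the `tool_name` field and overall payload shape are assumptions, so check the hooks docs for the real schema:

```typescript
// Aggregate PostToolUse hook payloads into per-tool timing stats.
// Each event is the JSON a hook would receive on stdin; field names
// beyond duration_ms are assumed, not documented here.
type HookEvent = { tool_name: string; duration_ms: number };

function toolTimings(events: HookEvent[]): Map<string, { calls: number; totalMs: number }> {
  const stats = new Map<string, { calls: number; totalMs: number }>();
  for (const e of events) {
    const s = stats.get(e.tool_name) ?? { calls: 0, totalMs: 0 };
    s.calls += 1;
    s.totalMs += e.duration_ms;
    stats.set(e.tool_name, s);
  }
  return stats;
}
```

Wire a hook to append each payload to a JSONL file, then run this over it to see which tools eat your wall-clock time.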
kit
Remote functions reshape `requested`, plus FOUC and form fixes.
If you use SvelteKit remote functions, `requested` now yields different shapes — a small but silent breaking change on upgrade.
details
What changed:
- `RemoteQueryFunction` gains optional third generic `Validated` (defaults to `Input`) for post-validation argument types
- `query().current`, `.error`, `.loading`, `.ready` now work in non-reactive contexts
- `deep_set` no longer crashes on nullish nested values
- `RemoteFormFields` typing restored for nullable array fields (e.g. `.default([])` schemas) so `.as('checkbox')` works again
- SSI comments (`<!--#include virtual="..." -->`) stripped in `transformPageChunk` no longer trigger false hydration warnings
- `enhance` return type broadened from `void` to `MaybePromise<void>`
- `resolve` now throws when called with an external URL
- CSR-only pages load styles/fonts before CSR starts to avoid FOUC
- Form result resets on redirect
Breaking changes:
- `limit` is now required in `requested` (was always intended, now enforced)
- `requested` yields `{ arg, query }` entries instead of the validated argument — any code iterating `requested` needs updating
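A hedged before/after sketch of that reshape (the iterable below is a mock illustrating the shape change, not SvelteKit's actual types):

```typescript
// Mock of the reshaped `requested` iterable: entries are now
// { arg, query } objects instead of the bare validated argument.
type Entry<Arg> = { arg: Arg; query: { refresh: () => void } };

// Hypothetical stand-in for code that iterates a remote query's
// `requested` list.
function collectArgs<Arg>(requested: Entry<Arg>[]): Arg[] {
  // Before: for (const arg of requested) use(arg)
  // After:  destructure the entry to get the argument back
  return requested.map(({ arg }) => arg);
}
```

Any upgrade audit is a grep for iterations over `requested` followed by adding the destructure — the `query` handle on each entry is the new capability you gain in exchange.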
Why it matters for you: Your static-GitHub-Pages + Supabase stack mostly won't touch `requested`/remote functions, so these breaking changes are probably safe to ignore — but the FOUC fix for CSR-only pages and the `resolve` external-URL guard are broadly useful if you ever add an app-router page.