I had six cron jobs whose entire purpose was to make my AI agent smarter over time. They had names like "Digital Dreaming," "MindBranches," and "Weekly Strategic Deep Think." They ran daily or weekly, on Sonnet or Haiku. Combined cost: about $1.66 per day.
After two weeks of operation, I audited them. Total actionable lessons produced: zero.
Not one config change. Not one behavior update. Not one file edit. Six crons, burning tokens every day, producing beautifully written summaries of what they could do that went absolutely nowhere.
## The Audit
I ran through each cron and asked a simple question: Has this cron ever caused a concrete change to the system? Not "did it produce output," but did anything actually change as a result?
| Cron | Schedule | Model | ~Cost/Day | Result |
|---|---|---|---|---|
| Digital Dreaming | Daily 3 AM | Sonnet | $0.35 | Closing loops |
| MindBranches | Daily 4 AM | Sonnet | $0.30 | Closing loops |
| Weekly Deep Think | Sundays 6 AM | Sonnet | $0.05 | Closing loops |
| Config Review | Weekly Mon | Haiku | $0.02 | Newsletter |
| Community Intelligence | Daily 6 PM | Sonnet | $0.80 | Newsletter |
| Friday Insight | Fridays 5 PM | Sonnet | $0.14 | Newsletter |
Three were closing loops (actually making changes); three were in "newsletter mode," producing well-written summaries that nobody acted on.
## What "Newsletter Mode" Looks Like
Here's an actual output from the Community Intelligence cron:
"The OpenClaw community is discussing improved memory management approaches. Several users on Discord are experimenting with RAG-based memory. Consider exploring vector embedding integration for the vault index. Recommendation: review the memory-rag skill on ClawHub for potential adoption."
Sounds useful, right? It's not. That output was delivered to my Telegram at 6 PM, I glanced at it, thought "interesting," and did nothing. The agent did nothing. Nobody explored anything. Nobody reviewed the skill. The "recommendation" evaporated the moment I scrolled past it.
This is what I mean by newsletter mode. The cron produces a summary. The summary contains suggestions. Nobody acts on the suggestions. Repeat daily. Burn $0.80.
## Why This Happens
AI agents are trained to be helpful and suggestive. When you ask one to "review the community and find useful insights," it will find insights and suggest actions. That's what it's optimized to do: surface information and recommend next steps.
But in a cron context, there's nobody reading the output in real-time deciding to act. The output goes to a channel (Telegram, in my case), gets a quick glance, and disappears. The agent doesn't follow up. The human doesn't follow up. The recommendation sits there, technically delivered, practically useless.
The root cause is the prompt. If the prompt says "review X and suggest improvements," you'll get suggestions. Every time. Forever. What you won't get is action.
## The Fix: Closed-Loop Enforcement
I rewrote all six crons with a simple rule: every run must either make a concrete change or reply NO_REPLY. No summaries. No recommendations. No "consider exploring." Either do the thing or admit there's nothing to do.
Here's the before and after for the Community Intelligence cron:
### Before (Newsletter Mode)

```
Search for OpenClaw community discussions, new skills,
and relevant developments. Summarize findings and
recommend actions for improving the workspace.
```
### After (Closed-Loop)

```
Search for ONE actionable OpenClaw community development.
If you find something worth implementing:
1. Implement it (install, configure, commit)
2. Report what you changed and the commit hash
If nothing actionable found, reply NO_REPLY.

BANNED PHRASES: 'Consider', 'Recommendation:',
'You might want to', 'Worth exploring',
'Should I', 'Want me to'
```
The banned phrases list is not a joke. I literally ban the agent from using weasel words that let it produce output without producing value. If it can't say "I did X, here's the commit hash," it says nothing.
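The same rule can be enforced outside the prompt, too. Here's a minimal sketch (hypothetical, not part of my actual setup) of a post-processing guard that accepts a cron's output only if it's the literal NO_REPLY, or a change report that avoids the weasel words and cites a commit hash:

```python
import re

# Weasel words that let the agent produce output without producing value.
BANNED_PHRASES = [
    "consider", "recommendation:", "you might want to",
    "worth exploring", "should i", "want me to",
]

# A 7- to 40-character hex run, loosely matching a git commit hash.
HASH_RE = re.compile(r"\b[0-9a-f]{7,40}\b")

def passes_closed_loop(output: str) -> bool:
    """Return True if a cron's output obeys the closed-loop rule:
    either the literal NO_REPLY, or a concrete change report with
    no banned phrases and at least one commit hash."""
    text = output.strip()
    if text == "NO_REPLY":
        return True
    lowered = text.lower()
    # Naive substring check; good enough for a guard, not for NLP.
    if any(phrase in lowered for phrase in BANNED_PHRASES):
        return False
    return bool(HASH_RE.search(lowered))
```

Wire this between the cron and the delivery channel and "newsletter mode" output never even reaches Telegram.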
## The Three That Were Working
Digital Dreaming, MindBranches, and Weekly Deep Think were actually closing loops. What made them different?
- They had write access to specific files. Digital Dreaming was instructed to update `lessons.md` with new patterns. It didn't suggest updating; it updated.
- Their prompts specified the output format. Not "summarize what you find" but "append to this file in this format."
- They had a narrow scope. Instead of "review everything," they focused on one thing per run.
The working crons weren't smarter; they were more constrained. Constraints force action. Open-ended prompts enable procrastination.
## After the Rewrite
Early results after rewriting the three newsletter crons:
- Config Review ran its first post-rewrite cycle. It found a cron still running on Sonnet that should be Haiku, downgraded it, and reported the commit hash. First concrete action in two weeks.
- Community Intelligence found and installed the `entire` CLI for agent session tracking. Committed the config. Queued one more tool for evaluation. Then stopped, because the prompt told it to do ONE thing, not write a report.
- Friday Insight hasn't fired yet post-rewrite. Will report back.
## Cost Impact
The daily cost will actually go up slightly because the crons are now doing real work (installing tools, writing configs) instead of just generating text. But the value-per-dollar goes from zero to measurable.
I'd rather spend $2/day on crons that change things than $1.66/day on crons that summarize things.
## What's Next
Monitoring the rewritten crons for a week to see if closed-loop enforcement actually holds. The risk: the agent finds creative ways to technically "do something" while not doing anything useful. "I reviewed the config and confirmed it's correct" technically reports an action, but practically it's still a newsletter.
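One mechanical defense against that loophole: don't trust the report text at all, and check the claimed commit against the actual repo. A sketch, assuming the cron runs inside a git workspace (`commit_exists` is my hypothetical name, not an existing helper):

```python
import subprocess

def commit_exists(repo_dir: str, sha: str) -> bool:
    """Verify that a commit hash reported by a cron actually exists
    in the repo, so 'I changed X, commit abc1234' can't be invented.
    `git cat-file -e <sha>^{commit}` exits 0 only for a real commit."""
    result = subprocess.run(
        ["git", "cat-file", "-e", f"{sha}^{{commit}}"],
        cwd=repo_dir,
        capture_output=True,
    )
    return result.returncode == 0
```

A run that claims a change but cites a hash this check rejects gets flagged as a newsletter in disguise.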
The deeper question: can an AI agent genuinely learn and improve over time, or is "learning" just a human narrative we project onto a system that resets every session? I don't know yet. But I know the answer isn't going to come from crons that write summaries about how much they're learning.
Read next: My AI Gave Itself an 84. I Gave It a 47. What happens when you let your agent grade itself.