Add AutoCora job submission + result polling -- Phase 3

Submits Cora SEO jobs as JSON files to the NAS queue, polls for .result
files each cycle, and updates ClickUp on success/failure. Async design:
submission and result processing happen on separate poll cycles so the
runner never blocks. Full README rewrite with troubleshooting section.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
clickup-runner
PeninsulaInd 2026-03-30 10:17:42 -05:00
parent b19e221b8f
commit 645b9cfbef
4 changed files with 959 additions and 19 deletions

View File

@ -19,18 +19,19 @@ uv run python -m clickup_runner
## How It Works
1. Every 720 seconds, polls all "Overall" lists in the ClickUp space
2. Checks for completed AutoCora jobs (result polling)
3. Finds tasks where:
- "Delegate to Claude" checkbox is checked
- Due date is today or earlier
4. Reads the task's Work Category and Stage fields
5. Looks up the skill route in `skill_map.py`
6. Dispatches to either:
- **AutoCora handler** (for `run_cora` stage): submits a Cora job to the NAS queue
- **Claude Code handler**: runs `claude -p` with the skill file + task context as prompt
7. On success: uploads output files as ClickUp attachments, copies to NAS (best-effort),
advances Stage, sets next status, posts summary comment
8. On error: sets Error checkbox, posts structured error comment (what failed, how to fix)
9. Always unchecks "Delegate to Claude" after processing
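The dispatch decision above can be sketched as follows. This is a minimal illustration only: the real routes live in `skill_map.py`, and the mapping and helper below are made-up stand-ins, not the actual table or API.

```python
# Illustrative sketch of skill-map dispatch -- NOT the real skill_map.py.
SKILL_MAP = {
    # (Work Category, Stage) -> (handler, next stage)
    ("Content Creation", "run_cora"): ("autocora", "outline"),
    ("Content Creation", "outline"): ("claude", "draft"),
}

def pick_handler(task_type: str, stage: str) -> str:
    """Return which handler a task routes to, or 'no_mapping'."""
    route = SKILL_MAP.get((task_type, stage))
    if route is None:
        return "no_mapping"
    handler, _next_stage = route
    return handler

print(pick_handler("Content Creation", "run_cora"))  # autocora
```

Tasks with no matching route get a "no mapping" comment instead of being dispatched.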
## Configuration
@ -83,6 +84,9 @@ These must exist in your ClickUp space:
| Stage | Dropdown | Pipeline position (run_cora, outline, draft, etc.) |
| Error | Checkbox | Flagged when processing fails |
| Work Category | Dropdown | Task type (Content Creation, Press Release, etc.) |
| Keyword | Text | SEO keyword for Cora analysis (required for run_cora stage) |
| IMSURL | URL | Target money-site URL (used in prompts and Cora jobs) |
| Customer | Dropdown | Client name (used for NAS file organization) |
## Skill Map
@ -129,7 +133,7 @@ run_cora -> build -> final
| Client Review | Client | Sent to client |
| Complete | Nobody | Done |
## Claude Code Handler
When a task routes to a Claude handler, the runner:
@ -156,11 +160,79 @@ What failed: <error details>
How to fix: <instructions>
```
## AutoCora Handler
AutoCora jobs are asynchronous -- submission and result polling happen on
separate poll cycles.
### Submission (when a `run_cora` task is found)
1. Reads the `Keyword` and `IMSURL` custom fields from the task
2. Sets status to "AI Working"
3. Writes a job JSON file to `//PennQnap1/SHARE1/AutoCora/jobs/`:
```json
{
"keyword": "CNC Machining",
"url": "https://acme.com/cnc-machining",
"task_ids": ["task_id"]
}
```
4. Stores job metadata in the state DB for result polling
5. Posts comment "Cora job submitted for keyword: ..."
6. Unchecks "Delegate to Claude"
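The state-DB entry from step 4 is keyed as `autocora:job:<job_id>` and stores enough to finish the run when the result arrives (shape taken from `_dispatch_autocora` in this commit; the values here are examples):

```json
{
  "task_id": "task_abc",
  "task_name": "SEO for CNC Machining",
  "keyword": "CNC Machining",
  "url": "https://acme.com/cnc-machining",
  "run_id": 5
}
```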
### Result Polling (every poll cycle)
At the start of each cycle, the runner scans the results directory:
1. Looks for `.result` files in `//PennQnap1/SHARE1/AutoCora/results/`
2. Matches results to pending jobs via the state DB
3. On **success**:
- Advances Stage to the next stage (e.g. run_cora -> outline)
- Sets status to "review"
- Posts comment with keyword and .xlsx location
- Clears Error checkbox
- **Does NOT re-check Delegate to Claude** (human reviews first)
4. On **failure**:
- Sets Error checkbox
- Posts structured error comment with failure reason
5. Archives processed `.result` files to `results/processed/`
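A `.result` file is JSON; a legacy plain-text body (`SUCCESS`, or `FAILURE: <reason>`) is also accepted. The job ID comes from the filename (`<job_id>.result`). A failed job's result looks like this (fields per `parse_result_file` in this commit; values are examples):

```json
{
  "status": "FAILURE",
  "keyword": "CNC Machining",
  "task_ids": ["task_abc"],
  "reason": "Cora timed out"
}
```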
### .xlsx Skip
If a task at `run_cora` stage already has an `.xlsx` attachment, the runner
skips Cora submission and advances directly to the next stage.
## Logs
- Console output: INFO level
- File log: `logs/clickup_runner.log` (DEBUG level)
- Run history: `data/clickup_runner.db` (run_log table + kv_store for AutoCora jobs)
## Troubleshooting
### Task not being picked up
- Check that "Delegate to Claude" is checked
- Check that the due date is today or earlier
- Check that Work Category and Stage are set and valid
- Check that the task is in an "Overall" list
### Claude errors
- Check `logs/clickup_runner.log` for the full error
- Verify the skill `.md` file exists in `skills/`
- Verify `claude` CLI is on PATH
- Check the Error comment on the ClickUp task for fix instructions
### AutoCora not producing results
- Verify the NAS is mounted and accessible
- Check that job files appear in `//PennQnap1/SHARE1/AutoCora/jobs/`
- Check the AutoCora worker logs on the NAS
- Look for `.result` files in `//PennQnap1/SHARE1/AutoCora/results/`
### NAS copy failures
- NAS copy is best-effort and won't block the pipeline
- Check that `//PennQnap1/SHARE1/generated/` is accessible
- Check `logs/clickup_runner.log` for copy warnings
## Tests
@ -171,6 +243,6 @@ uv run pytest tests/test_clickup_runner/ -m "not integration"
# Full suite (needs CLICKUP_API_TOKEN)
uv run pytest tests/test_clickup_runner/
# Specific test file
uv run pytest tests/test_clickup_runner/test_autocora.py -v
```

View File

@ -12,6 +12,7 @@ import sys
import time
from datetime import datetime, timezone
from .autocora import archive_result, scan_results, submit_job
from .claude_runner import (
RunResult,
build_prompt,
@ -98,6 +99,9 @@ def poll_cycle(
log.error("No space_id configured -- skipping poll cycle")
return 0
# Check for completed AutoCora jobs before dispatching new tasks
_check_autocora_results(client, cfg, db)
# Fetch all tasks from Overall lists with due date <= today
cutoff_ms = _due_date_cutoff_ms()
tasks = client.get_tasks_from_overall_lists(space_id, due_date_lt=cutoff_ms)
@ -224,29 +228,213 @@ def _handle_no_mapping(
log.warning("Task %s: %s", task.id, message)
def _check_autocora_results(
client: ClickUpClient,
cfg: Config,
db: StateDB,
):
"""Poll for completed AutoCora jobs and update ClickUp accordingly."""
results = scan_results(cfg.autocora.results_dir)
if not results:
return
log.info("Found %d AutoCora result(s) to process", len(results))
for result in results:
# Look up the pending job in the state DB
job_data = db.kv_get_json("autocora:job:%s" % result.job_id)
# Also check task_ids from the result file itself
task_ids = result.task_ids
if job_data:
# Prefer state DB data -- it always has the task_id
task_ids = [job_data["task_id"]]
if not task_ids:
log.warning(
"Result %s has no task_ids and no matching state DB entry -- skipping",
result.job_id,
)
archive_result(result)
continue
for task_id in task_ids:
if result.status == "SUCCESS":
_handle_autocora_success(client, cfg, db, task_id, result, job_data)
else:
_handle_autocora_failure(client, cfg, db, task_id, result, job_data)
# Clean up state DB entry
if job_data:
db.kv_delete("autocora:job:%s" % result.job_id)
archive_result(result)
def _handle_autocora_success(
client: ClickUpClient,
cfg: Config,
db: StateDB,
task_id: str,
result,
job_data: dict | None,
):
"""Handle a successful AutoCora result for one task."""
keyword = result.keyword or (job_data or {}).get("keyword", "unknown")
# Advance stage -- need list_id from task or job_data
try:
task = client.get_task(task_id)
except Exception as e:
log.error("Failed to fetch task %s for AutoCora result: %s", task_id, e)
return
# Look up the route to get next_stage
task_type = task.task_type
stage = client.get_stage(task, cfg.clickup.stage_field_name)
route = get_route(task_type, stage)
if route:
client.set_stage(
task_id, task.list_id, route.next_stage, cfg.clickup.stage_field_name
)
next_stage = route.next_stage
else:
# Fallback -- just note it in the comment
next_stage = "(unknown)"
client.update_task_status(task_id, cfg.clickup.review_status)
client.add_comment(
task_id,
"Cora report generated for \"%s\". Stage advanced to %s.\n"
"Review the .xlsx in %s, then re-check Delegate to Claude for the next stage."
% (keyword, next_stage, cfg.autocora.xlsx_dir),
)
client.set_checkbox(
task_id, task.list_id, cfg.clickup.error_field_name, False
)
# Finish the run log if we have a run_id
run_id = (job_data or {}).get("run_id")
if run_id:
db.log_run_finish(run_id, "completed", result="Cora report ready")
notify(cfg, "Cora done: %s" % keyword, "Task %s ready for review" % task_id)
log.info("AutoCora SUCCESS for task %s (keyword=%s)", task_id, keyword)
def _handle_autocora_failure(
client: ClickUpClient,
cfg: Config,
db: StateDB,
task_id: str,
result,
job_data: dict | None,
):
"""Handle a failed AutoCora result for one task."""
keyword = result.keyword or (job_data or {}).get("keyword", "unknown")
reason = result.reason or "Unknown error"
try:
task = client.get_task(task_id)
except Exception as e:
log.error("Failed to fetch task %s for AutoCora result: %s", task_id, e)
return
comment = (
"[ERROR] Cora report failed for keyword: \"%s\"\n"
"--\n"
"What failed: %s\n"
"\n"
"How to fix: Check the AutoCora worker logs, fix the issue, "
"then re-check Delegate to Claude."
) % (keyword, reason)
client.add_comment(task_id, comment)
client.set_checkbox(
task_id, task.list_id, cfg.clickup.error_field_name, True
)
client.update_task_status(task_id, cfg.clickup.review_status)
run_id = (job_data or {}).get("run_id")
if run_id:
db.log_run_finish(run_id, "failed", error="Cora failed: %s" % reason)
notify(
cfg,
"Cora FAILED: %s" % keyword,
"Task %s -- %s" % (task_id, reason),
is_error=True,
)
log.error("AutoCora FAILURE for task %s: %s", task_id, reason)
def _dispatch_autocora(
client: ClickUpClient,
cfg: Config,
db: StateDB,
task: ClickUpTask,
route: SkillRoute,
run_id: int,
):
"""Submit an AutoCora job for a task."""
keyword = task.get_field_value("Keyword") or ""
url = task.get_field_value("IMSURL") or ""
if not keyword:
_handle_no_mapping(
client, cfg, task,
"Task has no Keyword field set. "
"Set the Keyword custom field, then re-check Delegate to Claude.",
)
db.log_run_finish(run_id, "failed", error="Missing Keyword field")
return
# 1. Set status to "ai working"
client.update_task_status(task.id, cfg.clickup.ai_working_status)
# 2. Submit the job to the NAS queue
job_id = submit_job(keyword, url, task.id, cfg.autocora.jobs_dir)
if not job_id:
_handle_dispatch_error(
client, cfg, db, task, run_id,
error="Failed to write AutoCora job file to %s" % cfg.autocora.jobs_dir,
fix="Check that the NAS is mounted and accessible, "
"then re-check Delegate to Claude.",
)
return
# 3. Store job metadata in state DB for result polling
db.kv_set_json("autocora:job:%s" % job_id, {
"task_id": task.id,
"task_name": task.name,
"keyword": keyword,
"url": url,
"run_id": run_id,
})
# 4. Post comment + uncheck delegate
client.add_comment(
task.id,
"Cora job submitted for keyword: \"%s\" (job: %s).\n"
"The runner will check for results automatically."
% (keyword, job_id),
)
client.set_checkbox(
task.id, task.list_id, cfg.clickup.delegate_field_name, False
)
# 5. Log as submitted (not completed -- that happens when results arrive)
db.log_run_finish(run_id, "submitted", result="Job: %s" % job_id)
notify(cfg, "Cora submitted: %s" % keyword, "Task: %s" % task.name)
log.info(
"AutoCora job submitted: %s (task=%s, keyword=%s)",
job_id, task.id, keyword,
)
def _dispatch_claude(
client: ClickUpClient,

View File

@ -0,0 +1,179 @@
"""AutoCora job submission and result polling.
Submits Cora SEO analysis jobs to the NAS queue and polls for results.
Jobs are JSON files written to the jobs directory; an external worker
picks them up, runs Cora, and writes .result files to the results directory.
"""
from __future__ import annotations
import json
import logging
import re
import shutil
import time
from dataclasses import dataclass
from pathlib import Path
log = logging.getLogger(__name__)
@dataclass
class CoraResult:
"""Parsed result from a .result file."""
job_id: str
status: str # "SUCCESS" or "FAILURE"
keyword: str
task_ids: list[str]
reason: str # failure reason, empty on success
result_path: Path
def slugify(text: str, max_len: int = 80) -> str:
"""Convert text to a filesystem-safe slug.
Lowercase, alphanumeric + hyphens only, max length.
"""
slug = text.lower().strip()
slug = re.sub(r"[^a-z0-9]+", "-", slug)
slug = slug.strip("-")
if len(slug) > max_len:
slug = slug[:max_len].rstrip("-")
return slug or "unknown"
def make_job_id(keyword: str) -> str:
"""Generate a unique job ID from keyword + timestamp."""
ts = int(time.time() * 1000)
return "job-%d-%s" % (ts, slugify(keyword))
def submit_job(
keyword: str,
url: str,
task_id: str,
jobs_dir: str,
) -> str | None:
"""Write a job JSON file to the NAS jobs directory.
Returns the job_id on success, None on failure.
"""
jobs_path = Path(jobs_dir)
try:
jobs_path.mkdir(parents=True, exist_ok=True)
except OSError as e:
log.error("Cannot access jobs directory %s: %s", jobs_dir, e)
return None
job_id = make_job_id(keyword)
job_file = jobs_path / ("%s.json" % job_id)
job_data = {
"keyword": keyword,
"url": url or "https://seotoollab.com/blank.html",
"task_ids": [task_id],
}
try:
job_file.write_text(
json.dumps(job_data, indent=2),
encoding="utf-8",
)
log.info("Submitted AutoCora job: %s (keyword=%s)", job_id, keyword)
return job_id
except OSError as e:
log.error("Failed to write job file %s: %s", job_file, e)
return None
def parse_result_file(result_path: Path) -> CoraResult | None:
"""Parse a .result file (JSON or legacy plain-text format).
Returns a CoraResult or None if the file can't be parsed.
"""
try:
raw = result_path.read_text(encoding="utf-8").strip()
except OSError as e:
log.warning("Cannot read result file %s: %s", result_path, e)
return None
if not raw:
log.warning("Empty result file: %s", result_path)
return None
job_id = result_path.stem # filename without .result extension
# Try JSON first
try:
data = json.loads(raw)
return CoraResult(
job_id=job_id,
status=data.get("status", "FAILURE"),
keyword=data.get("keyword", ""),
task_ids=data.get("task_ids", []),
reason=data.get("reason", ""),
result_path=result_path,
)
except (json.JSONDecodeError, AttributeError):
pass
# Legacy plain-text format
if raw.startswith("SUCCESS"):
return CoraResult(
job_id=job_id,
status="SUCCESS",
keyword="",
task_ids=[],
reason="",
result_path=result_path,
)
if raw.startswith("FAILURE"):
reason = raw.split(":", 1)[1].strip() if ":" in raw else "Unknown"
return CoraResult(
job_id=job_id,
status="FAILURE",
keyword="",
task_ids=[],
reason=reason,
result_path=result_path,
)
log.warning("Unrecognized result format in %s", result_path)
return None
def scan_results(results_dir: str) -> list[CoraResult]:
"""Scan the results directory for .result files and parse them.
Returns a list of parsed results (skips unparseable files).
"""
results_path = Path(results_dir)
if not results_path.exists():
return []
results: list[CoraResult] = []
for f in sorted(results_path.glob("*.result")):
parsed = parse_result_file(f)
if parsed:
results.append(parsed)
return results
def archive_result(result: CoraResult) -> bool:
"""Move a .result file to the processed/ subdirectory.
Returns True on success.
"""
processed_dir = result.result_path.parent / "processed"
try:
processed_dir.mkdir(exist_ok=True)
dest = processed_dir / result.result_path.name
shutil.move(str(result.result_path), str(dest))
log.info("Archived result file: %s", result.result_path.name)
return True
except OSError as e:
log.warning("Failed to archive result %s: %s", result.result_path, e)
return False

View File

@ -0,0 +1,501 @@
"""Tests for clickup_runner.autocora and AutoCora dispatch in __main__."""
from __future__ import annotations
import json
from pathlib import Path
from unittest.mock import MagicMock, patch
import pytest
from clickup_runner.autocora import (
CoraResult,
archive_result,
make_job_id,
parse_result_file,
scan_results,
slugify,
submit_job,
)
from clickup_runner.clickup_client import ClickUpTask
from clickup_runner.config import AutoCoraConfig, Config, NtfyConfig, RunnerConfig
from clickup_runner.skill_map import SkillRoute
# ── Fixtures ──
def _make_task(**overrides) -> ClickUpTask:
defaults = {
"id": "task_abc",
"name": "SEO for CNC Machining",
"status": "to do",
"description": "Content creation for CNC machining page.",
"task_type": "Content Creation",
"url": "https://app.clickup.com/t/task_abc",
"list_id": "list_1",
"custom_fields": {
"Customer": "Acme Corp",
"Keyword": "CNC Machining",
"IMSURL": "https://acme.com/cnc-machining",
"Delegate to Claude": True,
"Stage": "run_cora",
},
}
defaults.update(overrides)
return ClickUpTask(**defaults)
def _make_config(**overrides) -> Config:
cfg = Config()
cfg.runner = RunnerConfig(claude_timeout_seconds=60)
cfg.ntfy = NtfyConfig()
for k, v in overrides.items():
setattr(cfg, k, v)
return cfg
# ── slugify ──
class TestSlugify:
def test_basic(self):
assert slugify("CNC Machining") == "cnc-machining"
def test_special_chars(self):
assert slugify("Hello, World! & Co.") == "hello-world-co"
def test_max_length(self):
result = slugify("a" * 100, max_len=20)
assert len(result) <= 20
def test_empty_string(self):
assert slugify("") == "unknown"
def test_only_special_chars(self):
assert slugify("!!!@@@") == "unknown"
def test_leading_trailing_hyphens(self):
assert slugify("--hello--") == "hello"
def test_preserves_numbers(self):
assert slugify("Top 10 CNC tips") == "top-10-cnc-tips"
# ── make_job_id ──
class TestMakeJobId:
def test_format(self):
job_id = make_job_id("CNC Machining")
assert job_id.startswith("job-")
assert "cnc-machining" in job_id
def test_uniqueness(self):
# Two calls should produce different IDs (different timestamps)
id1 = make_job_id("test")
id2 = make_job_id("test")
# Could be same in same millisecond, but format should be valid
assert id1.startswith("job-")
assert id2.startswith("job-")
# ── submit_job ──
class TestSubmitJob:
def test_creates_job_file(self, tmp_path):
jobs_dir = tmp_path / "jobs"
job_id = submit_job("CNC Machining", "https://acme.com", "task_1", str(jobs_dir))
assert job_id is not None
assert jobs_dir.exists()
# Find the job file
files = list(jobs_dir.glob("job-*.json"))
assert len(files) == 1
data = json.loads(files[0].read_text())
assert data["keyword"] == "CNC Machining"
assert data["url"] == "https://acme.com"
assert data["task_ids"] == ["task_1"]
def test_fallback_url(self, tmp_path):
jobs_dir = tmp_path / "jobs"
submit_job("test", "", "task_1", str(jobs_dir))
files = list(jobs_dir.glob("job-*.json"))
data = json.loads(files[0].read_text())
assert data["url"] == "https://seotoollab.com/blank.html"
def test_unreachable_dir(self):
result = submit_job("test", "http://x.com", "t1", "//NONEXISTENT/share/jobs")
assert result is None
def test_creates_parent_dirs(self, tmp_path):
jobs_dir = tmp_path / "deep" / "nested" / "jobs"
job_id = submit_job("test", "http://x.com", "t1", str(jobs_dir))
assert job_id is not None
assert jobs_dir.exists()
# ── parse_result_file ──
class TestParseResultFile:
def test_json_success(self, tmp_path):
f = tmp_path / "job-123-test.result"
f.write_text(json.dumps({
"status": "SUCCESS",
"keyword": "CNC Machining",
"task_ids": ["t1", "t2"],
}))
result = parse_result_file(f)
assert result is not None
assert result.status == "SUCCESS"
assert result.keyword == "CNC Machining"
assert result.task_ids == ["t1", "t2"]
assert result.job_id == "job-123-test"
def test_json_failure(self, tmp_path):
f = tmp_path / "job-456.result"
f.write_text(json.dumps({
"status": "FAILURE",
"keyword": "test",
"task_ids": ["t1"],
"reason": "Cora timed out",
}))
result = parse_result_file(f)
assert result.status == "FAILURE"
assert result.reason == "Cora timed out"
def test_legacy_success(self, tmp_path):
f = tmp_path / "job-789.result"
f.write_text("SUCCESS")
result = parse_result_file(f)
assert result.status == "SUCCESS"
assert result.task_ids == []
def test_legacy_failure(self, tmp_path):
f = tmp_path / "job-101.result"
f.write_text("FAILURE: Network timeout")
result = parse_result_file(f)
assert result.status == "FAILURE"
assert result.reason == "Network timeout"
def test_empty_file(self, tmp_path):
f = tmp_path / "empty.result"
f.write_text("")
assert parse_result_file(f) is None
def test_unrecognized_format(self, tmp_path):
f = tmp_path / "weird.result"
f.write_text("something random")
assert parse_result_file(f) is None
def test_missing_file(self, tmp_path):
f = tmp_path / "missing.result"
assert parse_result_file(f) is None
# ── scan_results ──
class TestScanResults:
def test_finds_result_files(self, tmp_path):
(tmp_path / "job-1.result").write_text(json.dumps({"status": "SUCCESS"}))
(tmp_path / "job-2.result").write_text(json.dumps({"status": "FAILURE", "reason": "x"}))
(tmp_path / "not-a-result.txt").write_text("ignore me")
results = scan_results(str(tmp_path))
assert len(results) == 2
def test_empty_dir(self, tmp_path):
assert scan_results(str(tmp_path)) == []
def test_nonexistent_dir(self):
assert scan_results("//NONEXISTENT/path") == []
def test_skips_unparseable(self, tmp_path):
(tmp_path / "good.result").write_text(json.dumps({"status": "SUCCESS"}))
(tmp_path / "bad.result").write_text("")
results = scan_results(str(tmp_path))
assert len(results) == 1
# ── archive_result ──
class TestArchiveResult:
def test_moves_to_processed(self, tmp_path):
f = tmp_path / "job-1.result"
f.write_text("SUCCESS")
result = CoraResult(
job_id="job-1",
status="SUCCESS",
keyword="test",
task_ids=[],
reason="",
result_path=f,
)
assert archive_result(result) is True
assert not f.exists()
assert (tmp_path / "processed" / "job-1.result").exists()
def test_creates_processed_dir(self, tmp_path):
f = tmp_path / "job-2.result"
f.write_text("data")
result = CoraResult(
job_id="job-2", status="SUCCESS", keyword="",
task_ids=[], reason="", result_path=f,
)
archive_result(result)
assert (tmp_path / "processed").is_dir()
# ── _dispatch_autocora integration ──
class TestDispatchAutocora:
def _setup(self, tmp_path):
cfg = _make_config()
cfg.autocora = AutoCoraConfig(
jobs_dir=str(tmp_path / "jobs"),
results_dir=str(tmp_path / "results"),
)
client = MagicMock()
db = MagicMock()
db.log_run_start.return_value = 1
task = _make_task()
route = SkillRoute(
handler="autocora",
next_stage="outline",
next_status="review",
)
return cfg, client, db, task, route
def test_success_submission(self, tmp_path):
from clickup_runner.__main__ import _dispatch_autocora
cfg, client, db, task, route = self._setup(tmp_path)
_dispatch_autocora(client, cfg, db, task, route, run_id=1)
# Job file created
job_files = list((tmp_path / "jobs").glob("job-*.json"))
assert len(job_files) == 1
data = json.loads(job_files[0].read_text())
assert data["keyword"] == "CNC Machining"
assert data["task_ids"] == ["task_abc"]
# Status set to ai working
client.update_task_status.assert_called_with("task_abc", "ai working")
# Comment posted
client.add_comment.assert_called_once()
comment = client.add_comment.call_args[0][1]
assert "CNC Machining" in comment
# Delegate unchecked
client.set_checkbox.assert_called_with(
"task_abc", "list_1", "Delegate to Claude", False
)
# State DB updated
db.kv_set_json.assert_called_once()
kv_key = db.kv_set_json.call_args[0][0]
assert kv_key.startswith("autocora:job:")
# Run logged as submitted
db.log_run_finish.assert_called_once()
assert db.log_run_finish.call_args[0][1] == "submitted"
def test_missing_keyword(self, tmp_path):
from clickup_runner.__main__ import _dispatch_autocora
cfg, client, db, task, route = self._setup(tmp_path)
task.custom_fields["Keyword"] = None
_dispatch_autocora(client, cfg, db, task, route, run_id=1)
# Error comment posted
comment = client.add_comment.call_args[0][1]
assert "Keyword" in comment
# Run logged as failed
db.log_run_finish.assert_called_once()
assert db.log_run_finish.call_args[0][1] == "failed"
def test_unreachable_nas(self, tmp_path):
from clickup_runner.__main__ import _dispatch_autocora
cfg, client, db, task, route = self._setup(tmp_path)
cfg.autocora.jobs_dir = "//NONEXISTENT/share/jobs"
_dispatch_autocora(client, cfg, db, task, route, run_id=1)
# Error comment posted about NAS
comment = client.add_comment.call_args[0][1]
assert "ERROR" in comment
# Error checkbox set
client.set_checkbox.assert_any_call(
"task_abc", "list_1", "Error", True
)
# ── _check_autocora_results integration ──
class TestCheckAutocoraResults:
def _setup(self, tmp_path):
cfg = _make_config()
cfg.autocora = AutoCoraConfig(
jobs_dir=str(tmp_path / "jobs"),
results_dir=str(tmp_path / "results"),
xlsx_dir="//NAS/Cora72",
)
client = MagicMock()
# Mock get_task to return a task
client.get_task.return_value = _make_task()
# get_stage needs to return the actual stage string for route lookup
client.get_stage.return_value = "run_cora"
db = MagicMock()
return cfg, client, db
def test_success_result_with_state_db(self, tmp_path):
from clickup_runner.__main__ import _check_autocora_results
cfg, client, db = self._setup(tmp_path)
# Write a result file
results_dir = tmp_path / "results"
results_dir.mkdir()
job_id = "job-1234-cnc-machining"
(results_dir / ("%s.result" % job_id)).write_text(json.dumps({
"status": "SUCCESS",
"keyword": "CNC Machining",
"task_ids": ["task_abc"],
}))
# Set up state DB to return job data
db.kv_get_json.return_value = {
"task_id": "task_abc",
"task_name": "SEO for CNC",
"keyword": "CNC Machining",
"url": "https://acme.com",
"run_id": 5,
}
_check_autocora_results(client, cfg, db)
# Task status updated to review
client.update_task_status.assert_called_with("task_abc", "review")
# Stage advanced
client.set_stage.assert_called_once()
# Success comment posted
client.add_comment.assert_called_once()
comment = client.add_comment.call_args[0][1]
assert "CNC Machining" in comment
assert "//NAS/Cora72" in comment
# Error checkbox cleared
client.set_checkbox.assert_called()
# Run log finished
db.log_run_finish.assert_called_once_with(5, "completed", result="Cora report ready")
# State DB entry deleted
db.kv_delete.assert_called_once_with("autocora:job:%s" % job_id)
# Result file archived
assert not (results_dir / ("%s.result" % job_id)).exists()
assert (results_dir / "processed" / ("%s.result" % job_id)).exists()
def test_failure_result(self, tmp_path):
from clickup_runner.__main__ import _check_autocora_results
cfg, client, db = self._setup(tmp_path)
results_dir = tmp_path / "results"
results_dir.mkdir()
job_id = "job-999-test"
(results_dir / ("%s.result" % job_id)).write_text(json.dumps({
"status": "FAILURE",
"keyword": "test keyword",
"task_ids": ["task_abc"],
"reason": "Cora process crashed",
}))
db.kv_get_json.return_value = {
"task_id": "task_abc",
"keyword": "test keyword",
"run_id": 10,
}
_check_autocora_results(client, cfg, db)
# Error comment posted
comment = client.add_comment.call_args[0][1]
assert "ERROR" in comment
assert "Cora process crashed" in comment
# Error checkbox set
client.set_checkbox.assert_any_call(
"task_abc", "list_1", "Error", True
)
# Run log failed
db.log_run_finish.assert_called_once()
assert db.log_run_finish.call_args[0][1] == "failed"
def test_no_results(self, tmp_path):
from clickup_runner.__main__ import _check_autocora_results
cfg, client, db = self._setup(tmp_path)
# No results dir
_check_autocora_results(client, cfg, db)
# Nothing should happen
client.add_comment.assert_not_called()
db.log_run_finish.assert_not_called()
def test_result_without_state_db_uses_file_task_ids(self, tmp_path):
from clickup_runner.__main__ import _check_autocora_results
cfg, client, db = self._setup(tmp_path)
results_dir = tmp_path / "results"
results_dir.mkdir()
(results_dir / "job-orphan.result").write_text(json.dumps({
"status": "SUCCESS",
"keyword": "orphan",
"task_ids": ["task_abc"],
}))
# No state DB entry
db.kv_get_json.return_value = None
_check_autocora_results(client, cfg, db)
# Should still process using task_ids from result file
client.update_task_status.assert_called()
client.add_comment.assert_called()