Compare commits
No commits in common. "master" and "fix/customer-field-migration" have entirely different histories.
master...fix/customer-field-migration

@@ -1,160 +0,0 @@
# Brand Voice & Tone Guidelines

Reference for maintaining consistent voice across all written content. These are defaults — override with client-specific guidelines when available.

---

## Voice Archetypes

Start with Expert, but also work in Guide where applicable.

### Expert

- **Sounds like:** A senior practitioner sharing hard-won knowledge.
- **Characteristics:** Precise, evidence-backed, confident without arrogance. Cites data, references real-world experience, and isn't afraid to say "it depends."
- **Typical vocabulary:** "In practice," "the tradeoff is," "based on our benchmarks," "here's why this matters."
- **Risk to avoid:** Coming across as condescending or overly academic.
- **Best for:** Technical audiences, B2B SaaS, engineering blogs, whitepapers.

### Guide

- **Sounds like:** A patient teacher walking you through something step by step.
- **Characteristics:** Clear, encouraging, anticipates confusion. Breaks complex ideas into digestible pieces. Uses analogies.
- **Typical vocabulary:** "Let's start with," "think of it like," "the key thing to remember," "don't worry if this seems complex."
- **Risk to avoid:** Being patronizing or oversimplifying for an advanced audience.
- **Best for:** Tutorials, onboarding content, documentation, beginner-to-intermediate audiences.

---

## Core Writing Principles

These apply regardless of archetype.

### 1. Clarity First

- If a sentence can be misread, rewrite it.
- Use the simplest word that conveys the precise meaning. "Use" over "utilize." "Start" over "commence."
- One idea per paragraph. One purpose per section.
- Define jargon on first use, or skip it entirely.

### 2. Customer-Centric

- Frame everything from the reader's perspective, not the company's.
- **Instead of:** "We built a new feature that enables real-time collaboration."
- **Write:** "You can now edit documents with your team in real time."
- Lead with the reader's problem or goal, not the product or solution.

### 3. Active Voice

- Active voice is the default. Passive voice is acceptable only when the actor is unknown or irrelevant.
- **Active:** "The script generates a report every morning."
- **Passive (acceptable):** "The logs are rotated every 24 hours." (The actor doesn't matter.)
- **Passive (avoid):** "A decision was made to deprecate the endpoint." (Who decided?)

### 4. Show, Don't Claim

- Replace vague claims with specific evidence.
- **Claim:** "Our platform is incredibly fast."
- **Show:** "Queries return in under 50ms at the 99th percentile."
- If you can't provide evidence, soften the language or cut the sentence.

---

## Tone Attributes

Tone shifts based on content type and audience. Use these spectrums to calibrate.

### Formality Spectrum

```
Casual -------|-------|-------|-------|------- Formal
   1       2       3       4       5
```

| Level | Description | Use When |
|-------|-------------|----------|
| 1 | Slang OK, sentence fragments, first person | Internal team comms, very informal blogs |
| 2 | Conversational, contractions, direct address | Newsletters, community posts, most blog content |
| 3 | Professional but approachable, minimal contractions | Product announcements, mid-funnel content |
| 4 | Polished, structured, no contractions | Whitepapers, enterprise case studies, executive briefs |
| 5 | Formal, third person, precise terminology | Legal, compliance, academic partnerships |

**Default for most blog/article content: Level 2-3.**

### Technical Depth Spectrum

```
General -------|-------|-------|-------|------- Deep Technical
   1       2       3       4       5
```

| Level | Description | Use When |
|-------|-------------|----------|
| 1 | No jargon, analogy-heavy, conceptual | Non-technical stakeholders, general audience |
| 2 | Light jargon (defined inline), practical focus | Business audience with some domain familiarity |
| 3 | Industry-standard terminology, code snippets OK | Practitioners who do the work daily |
| 4 | Assumes working knowledge, implementation details | Developers, engineers, technical decision-makers |
| 5 | Deep internals, performance analysis, tradeoff math | Senior engineers, architects, researchers |

**Default: Match the audience. When unsure, aim at the level you judge the audience can handle; most of our content is B2B.**

---

## Language Preferences

### Use Action Verbs

Lead sentences — especially headings and CTAs — with strong verbs.

| Weak | Strong |
|------|--------|
| There is a way to improve | Improve |
| This section is a discussion of | This section covers |
| You should consider using | Use |
| It is important to note that | Note: |
| We are going to walk through | Let's walk through |

### Be Concrete and Specific

Vague language erodes trust. Replace generalities with specifics.

| Vague | Concrete |
|-------|----------|
| "significantly faster" | "3x faster" or "reduced from 12s to 2s" |
| "a large number of users" | "over 40,000 monthly active users" |
| "best-in-class" | describe the specific advantage |
| "seamless integration" | "connects via a single API call" |
| "in the near future" | "by Q2" or "in the next release" |

### Avoid These Patterns

- **Weasel words:** "very," "really," "extremely," "quite," "somewhat" — cut them or replace with data.
- **Nominalizations:** "implementation" when you mean "implement," "utilization" when you mean "use."
- **Hedge stacking:** "It might potentially be possible to perhaps consider..." — commit to a position or state the uncertainty once, clearly.
- **Buzzword chains:** "AI-powered next-gen synergistic platform" — describe what it actually does.
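These patterns are easy to flag mechanically before a human editing pass. A minimal sketch (the word list and function name are illustrative, not an official tool):

```python
import re

# Starter list taken from the weasel-word guideline above; extend as needed.
WEASEL_WORDS = {"very", "really", "extremely", "quite", "somewhat"}

def flag_weasel_words(text: str) -> list[tuple[int, str]]:
    """Return (line_number, word) pairs for each weasel word found in text."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for word in re.findall(r"[a-z']+", line.lower()):
            if word in WEASEL_WORDS:
                hits.append((lineno, word))
    return hits
```

Run it over a draft and replace each hit with data, or cut the word entirely.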

---

## Pre-Publication Checklist

Run through this before publishing any piece of content.

### Voice Consistency

- [ ] Does the piece sound like one person wrote it, beginning to end?
- [ ] Does it match the target voice archetype?
- [ ] Is the tone consistent from section to section, with no jarring shifts?

### Clarity

- [ ] Can a reader in the target audience understand every sentence on the first read?
- [ ] Is jargon defined or avoided?
- [ ] Are all acronyms expanded on first use?
- [ ] Do headings accurately describe the content beneath them?
- [ ] Is the article scannable? (subheadings every 2-4 paragraphs, short paragraphs, lists where appropriate)

### Value

- [ ] Does the introduction make clear what the reader will gain?
- [ ] Does every section earn its place? (Cut anything that doesn't serve the reader's goal.)
- [ ] Are claims supported by evidence, examples, or data?
- [ ] Is the advice actionable — can the reader do something with it today?
- [ ] Does the conclusion provide a clear next step?

### Formatting

- [ ] Title includes the core keyword or topic and at least 2 closely related keywords/topics.
- [ ] Meta description summarizes the value proposition.
- [ ] Code blocks, tables, and images have context (a sentence before them explaining what the reader is looking at).
- [ ] Links use descriptive anchor text, not "click here."
- [ ] No walls of text — aim for 2-5 sentences per paragraph in web content.
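The paragraph-length rule can be checked automatically. A rough sketch, assuming paragraphs are separated by blank lines and sentences end with '.', '!', or '?' (the function name is illustrative):

```python
import re

def paragraphs_out_of_range(text: str, lo: int = 2, hi: int = 5) -> list[int]:
    """Return 0-based indices of paragraphs with fewer than lo or more than hi sentences."""
    bad = []
    for i, para in enumerate(re.split(r"\n\s*\n", text.strip())):
        # Split on terminal punctuation and drop empty fragments.
        sentences = [s for s in re.split(r"[.!?]+", para) if s.strip()]
        if not lo <= len(sentences) <= hi:
            bad.append(i)
    return bad
```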
@ -59,13 +59,6 @@ OPTIMIZATION_RULES = {
|
||||||
"exclude_measurement_entities": True, # Ignore measurements (dimensions, tolerances) as entities
|
"exclude_measurement_entities": True, # Ignore measurements (dimensions, tolerances) as entities
|
||||||
"allow_organization_entities": True, # Organizations like ISO, ANSI, etc. are OK
|
"allow_organization_entities": True, # Organizations like ISO, ANSI, etc. are OK
|
||||||
"never_mention_competitors": True, # Never mention competitors by name in content
|
"never_mention_competitors": True, # Never mention competitors by name in content
|
||||||
|
|
||||||
# Entity correlation threshold
|
|
||||||
# Best of Both = lower of Spearman's or Pearson's correlation.
|
|
||||||
# Measures correlation to ranking position (1=top, 100=bottom), so negative = better ranking.
|
|
||||||
# Only include entities with Best of Both <= this value.
|
|
||||||
# Set to None to disable filtering.
|
|
||||||
"entity_correlation_threshold": -0.19,
|
|
||||||
}
|
}
@@ -202,15 +195,6 @@ class CoraReport:
             if name.startswith("critical") or name.startswith("http"):
                 continue
-
-            correlation = _safe_float(row, col_map.get("Best of Both"))
-
-            # Filter by Best of Both correlation threshold.
-            # Lower (more negative) = stronger ranking signal (correlates with
-            # position 1 vs 100). Only keep entities at or below the threshold.
-            threshold = OPTIMIZATION_RULES.get("entity_correlation_threshold")
-            if threshold is not None and (correlation is None or correlation > threshold):
-                continue

             entity = {
                 "name": name,
                 "freebase_id": _safe_str(row, col_map.get("Freebase ID")),
@@ -219,7 +203,7 @@ class CoraReport:
                 "relevance": _safe_float(row, col_map.get("Relevance")),
                 "confidence": _safe_float(row, col_map.get("Confidence")),
                 "type": _safe_str(row, col_map.get("Type")),
-                "correlation": correlation,
+                "correlation": _safe_float(row, col_map.get("Best of Both")),
                 "current_count": _safe_int(row, site_col_idx),
                 "max_count": _safe_int(row, col_map.get("Max")),
                 "deficit": _safe_int(row, col_map.get("Deficit")),
@@ -433,7 +433,6 @@ def main():

     md_path = out_dir / "test_block.md"
     html_path = out_dir / "test_block.html"
-    txt_path = out_dir / "test_block.txt"
     stats_path = out_dir / "test_block_stats.json"

     md_content = format_markdown(result["sentences"])
@@ -441,7 +440,6 @@ def main():

     md_path.write_text(md_content, encoding="utf-8")
     html_path.write_text(html_content, encoding="utf-8")
-    txt_path.write_text(html_content, encoding="utf-8")
     stats_path.write_text(
         json.dumps(result["stats"], indent=2, default=str), encoding="utf-8"
     )
@@ -461,7 +459,6 @@ def main():
     print(f"\nFiles written:")
     print(f"  {md_path}")
     print(f"  {html_path}")
-    print(f"  {txt_path}")
     print(f"  {stats_path}")
@@ -492,7 +492,6 @@ These override any data from the Cora report:
 | Competitor names | NEVER use competitor company names as entities or LSI keywords. Do not mention competitors by name in content. |
 | Measurement entities | Ignore measurements (dimensions, tolerances, etc.) as entities — skip these in entity optimization |
 | Organization entities | Organizations like ISO, ANSI, ASTM are fine — keep these as entities |
-| Entity correlation filter | Only entities with Best of Both <= -0.19 are included. Best of Both is the lower of Spearman's or Pearson's correlation to ranking position (1=top, 100=bottom), so more negative = stronger ranking signal. This filter is applied in `cora_parser.py` and affects all downstream consumers. To disable, set `entity_correlation_threshold` to `None` in `OPTIMIZATION_RULES`. Added 2026-03-20 — revert if entity coverage feels too thin. |

 ---
@@ -17,13 +17,13 @@ logging.basicConfig(
     datefmt="%H:%M:%S",
 )

-# All levels to rotating log file (DEBUG+)
+# Warnings and errors to rotating log file
 _log_dir = Path(__file__).resolve().parent.parent / "logs"
 _log_dir.mkdir(exist_ok=True)
 _file_handler = RotatingFileHandler(
     _log_dir / "cheddahbot.log", maxBytes=5 * 1024 * 1024, backupCount=5
 )
-_file_handler.setLevel(logging.DEBUG)
+_file_handler.setLevel(logging.WARNING)
 _file_handler.setFormatter(
     logging.Formatter("%(asctime)s [%(name)s] %(levelname)s: %(message)s")
 )
@@ -181,14 +181,22 @@ def main():
         log.info("Starting scheduler...")
         scheduler = Scheduler(config, db, default_agent, notification_bus=notification_bus)
         scheduler.start()
-        # Inject scheduler into tool context so get_active_tasks can read it
-        if tools:
-            tools.scheduler = scheduler
     except Exception as e:
         log.warning("Scheduler not available: %s", e)

+    log.info("Launching Gradio UI on %s:%s...", config.host, config.port)
+    blocks = create_ui(
+        registry, config, default_llm, notification_bus=notification_bus, scheduler=scheduler
+    )
+
+    # Build a parent FastAPI app so we can mount the dashboard alongside Gradio.
+    # Inserting routes into blocks.app before launch() doesn't work because
+    # launch()/mount_gradio_app() replaces the internal App instance.
+    import gradio as gr
     import uvicorn
     from fastapi import FastAPI
+    from fastapi.responses import RedirectResponse
+    from starlette.staticfiles import StaticFiles

     fastapi_app = FastAPI()
@@ -199,33 +207,24 @@ def main():
     fastapi_app.include_router(api_router)
     log.info("API router mounted at /api/")

-    # Mount new HTMX web UI (chat at /, dashboard at /dashboard)
-    from .web import mount_web_app
-
-    mount_web_app(
-        fastapi_app,
-        registry,
-        config,
-        default_llm,
-        notification_bus=notification_bus,
-        scheduler=scheduler,
-        db=db,
-    )
-
-    # Mount Gradio at /old for transition period
-    try:
-        import gradio as gr
-
-        log.info("Mounting Gradio UI at /old...")
-        blocks = create_ui(
-            registry, config, default_llm, notification_bus=notification_bus, scheduler=scheduler
-        )
-        gr.mount_gradio_app(fastapi_app, blocks, path="/old", pwa=False, show_error=True)
-        log.info("Gradio UI available at /old")
-    except Exception as e:
-        log.warning("Gradio UI not available: %s", e)
-
-    log.info("Launching web UI on %s:%s...", config.host, config.port)
+    # Mount the dashboard as static files (must come before Gradio's catch-all)
+    dashboard_dir = Path(__file__).resolve().parent.parent / "dashboard"
+    if dashboard_dir.is_dir():
+        # Redirect /dashboard (no trailing slash) → /dashboard/
+        @fastapi_app.get("/dashboard")
+        async def _dashboard_redirect():
+            return RedirectResponse(url="/dashboard/")
+
+        fastapi_app.mount(
+            "/dashboard",
+            StaticFiles(directory=str(dashboard_dir), html=True),
+            name="dashboard",
+        )
+        log.info("Dashboard mounted at /dashboard/ (serving %s)", dashboard_dir)
+
+    # Mount Gradio at the root
+    gr.mount_gradio_app(fastapi_app, blocks, path="/", pwa=True, show_error=True)
+
     uvicorn.run(fastapi_app, host=config.host, port=config.port)
@@ -382,10 +382,7 @@ class Agent:
             skip_permissions: If True, run CLI with --dangerously-skip-permissions.
         """
         log.info("Execution brain task: %s", prompt[:100])
-        kwargs: dict = {
-            "system_prompt": system_context,
-            "timeout": self.config.timeouts.execution_brain,
-        }
+        kwargs: dict = {"system_prompt": system_context}
         if tools:
             kwargs["tools"] = tools
         if model:
@@ -397,7 +394,7 @@ class Agent:
         # Log to daily memory
         if self._memory:
             try:
-                self._memory.log_daily(f"[Execution] {prompt[:200]}\n-> {result[:500]}")
+                self._memory.log_daily(f"[Execution] {prompt[:200]}\n→ {result[:500]}")
             except Exception as e:
                 log.warning("Failed to log execution to memory: %s", e)
@@ -98,12 +98,6 @@ class ApiBudgetConfig:
     alert_threshold: float = 0.8  # alert at 80% of limit


-@dataclass
-class TimeoutConfig:
-    execution_brain: int = 2700  # 45 minutes
-    blm: int = 1800  # 30 minutes
-
-
 @dataclass
 class ContentConfig:
     cora_inbox: str = ""  # e.g. "Z:/content-cora-inbox"
@@ -163,7 +157,6 @@ class Config:
     autocora: AutoCoraConfig = field(default_factory=AutoCoraConfig)
     api_budget: ApiBudgetConfig = field(default_factory=ApiBudgetConfig)
     content: ContentConfig = field(default_factory=ContentConfig)
-    timeouts: TimeoutConfig = field(default_factory=TimeoutConfig)
     ntfy: NtfyConfig = field(default_factory=NtfyConfig)
     agents: list[AgentConfig] = field(default_factory=lambda: [AgentConfig()])
@@ -228,10 +221,6 @@ def load_config() -> Config:
         for k, v in data["content"].items():
             if hasattr(cfg.content, k):
                 setattr(cfg.content, k, v)
-    if "timeouts" in data and isinstance(data["timeouts"], dict):
-        for k, v in data["timeouts"].items():
-            if hasattr(cfg.timeouts, k):
-                setattr(cfg.timeouts, k, int(v))

     # ntfy push notifications
     if "ntfy" in data and isinstance(data["ntfy"], dict):
@@ -299,12 +288,6 @@ def load_config() -> Config:
     if blm_dir := os.getenv("BLM_DIR"):
         cfg.link_building.blm_dir = blm_dir

-    # Timeout env var overrides (seconds)
-    if t := os.getenv("CHEDDAH_TIMEOUT_EXECUTION_BRAIN"):
-        cfg.timeouts.execution_brain = int(t)
-    if t := os.getenv("CHEDDAH_TIMEOUT_BLM"):
-        cfg.timeouts.blm = int(t)

     # Ensure data directories exist
     cfg.data_dir.mkdir(parents=True, exist_ok=True)
     (cfg.data_dir / "uploads").mkdir(exist_ok=True)
@@ -157,7 +157,6 @@ class LLMAdapter:
         tools: str = "Bash,Read,Edit,Write,Glob,Grep",
         model: str | None = None,
         skip_permissions: bool = False,
-        timeout: int = 2700,
     ) -> str:
         """Execution brain: calls Claude Code CLI with full tool access.
@@ -168,7 +167,6 @@ class LLMAdapter:
             tools: Comma-separated Claude Code tool names (default: standard set).
             model: Override the CLI model (e.g. "claude-sonnet-4.5").
             skip_permissions: If True, append --dangerously-skip-permissions to
-            timeout: Max seconds to wait for CLI completion (default: 2700 / 45 min).
                 the CLI invocation (used for automated pipelines).
         """
         claude_bin = shutil.which("claude")
@@ -220,11 +218,10 @@ class LLMAdapter:
         )

         try:
-            stdout, stderr = proc.communicate(input=prompt, timeout=timeout)
+            stdout, stderr = proc.communicate(input=prompt, timeout=900)
         except subprocess.TimeoutExpired:
             proc.kill()
-            minutes = timeout // 60
-            return f"Error: Claude Code execution timed out after {minutes} minutes."
+            return "Error: Claude Code execution timed out after 15 minutes."

         if proc.returncode != 0:
             return f"Execution error: {stderr or 'unknown error'}"
@@ -8,8 +8,6 @@ import logging
 import re
 import shutil
 import threading
-
-import httpx
 from datetime import UTC, datetime
 from pathlib import Path
 from typing import TYPE_CHECKING
@@ -25,26 +23,8 @@ if TYPE_CHECKING:

 log = logging.getLogger(__name__)

-# Dedicated logger for "tool returned error but likely handled it" cases.
-# Writes to logs/pipeline_errors.log for manual review.
-_pipeline_err_log = logging.getLogger("cheddahbot.pipeline_errors")
-_pipeline_err_log.propagate = False
-_pe_dir = Path(__file__).resolve().parent.parent / "logs"
-_pe_dir.mkdir(exist_ok=True)
-_pe_handler = logging.FileHandler(_pe_dir / "pipeline_errors.log", encoding="utf-8")
-_pe_handler.setFormatter(
-    logging.Formatter("%(asctime)s | %(message)s")
-)
-_pipeline_err_log.addHandler(_pe_handler)
-_pipeline_err_log.setLevel(logging.INFO)
-
 HEARTBEAT_OK = "HEARTBEAT_OK"

-# Only tasks in these statuses are eligible for xlsx -> ClickUp matching.
-# "to do" is excluded to prevent accidental matches and AutoCora race conditions.
-# To force-reuse an xlsx for a "to do" task, set status to "running cora" first.
-_CORA_ELIGIBLE_STATUSES = frozenset({"running cora", "error"})
-

 class Scheduler:
     # Tasks due within this window are eligible for execution
@@ -87,62 +67,6 @@ class Scheduler:
             "cora_distribute": None,
             "briefing": None,
         }
-        self._active_executions: dict[str, dict] = {}
-        self._active_lock = threading.Lock()
-        self._plural_cache: dict[tuple[str, str], bool] = {}
-
-    def _llm_plural_check(self, a: str, b: str) -> bool:
-        """Ask the chat brain if two keywords are the same aside from plural form.
-
-        Uses OpenRouter with the configured CHEDDAH_CHAT_MODEL. Results are
-        cached for the session to avoid repeat calls.
-        """
-        key = (a, b) if a <= b else (b, a)
-        if key in self._plural_cache:
-            return self._plural_cache[key]
-
-        api_key = self.config.openrouter_api_key
-        model = self.config.chat_model
-        if not api_key:
-            log.warning("LLM plural check: no OpenRouter API key, returning False")
-            return False
-
-        try:
-            resp = httpx.post(
-                "https://openrouter.ai/api/v1/chat/completions",
-                headers={"Authorization": f"Bearer {api_key}"},
-                json={
-                    "model": model,
-                    "max_tokens": 5,
-                    "messages": [
-                        {
-                            "role": "system",
-                            "content": (
-                                "You compare SEO keywords. Reply with ONLY 'YES' or 'NO'. "
-                                "Answer YES only if the two keywords are identical except for "
-                                "singular vs plural word forms (e.g. 'shaft' vs 'shafts', "
-                                "'company' vs 'companies'). Answer NO if they differ in any "
-                                "other way (extra words, different words, different meaning)."
-                            ),
-                        },
-                        {
-                            "role": "user",
-                            "content": f'Keyword A: "{a}"\nKeyword B: "{b}"',
-                        },
-                    ],
-                },
-                timeout=15,
-            )
-            resp.raise_for_status()
-            answer = (resp.json()["choices"][0]["message"]["content"] or "").strip()
-            result = "YES" in answer.upper()
-            log.debug("LLM plural check: '%s' vs '%s' -> %s (%s)", a, b, result, answer)
-        except Exception as e:
-            log.warning("LLM plural check failed for '%s' vs '%s': %s", a, b, e)
-            result = False
-
-        self._plural_cache[key] = result
-        return result
-
     def start(self):
         """Start the scheduler, heartbeat, and ClickUp threads."""
@@ -285,26 +209,6 @@ class Scheduler:
         """Return last_run timestamps for all loops (in-memory)."""
         return dict(self._loop_timestamps)

-    def _register_execution(self, task_id: str, name: str, tool_name: str) -> None:
-        """Register a task as actively executing."""
-        with self._active_lock:
-            self._active_executions[task_id] = {
-                "name": name,
-                "tool": tool_name,
-                "started_at": datetime.now(UTC),
-                "thread": threading.current_thread().name,
-            }
-
-    def _unregister_execution(self, task_id: str) -> None:
-        """Remove a task from the active executions registry."""
-        with self._active_lock:
-            self._active_executions.pop(task_id, None)
-
-    def get_active_executions(self) -> dict[str, dict]:
-        """Return a snapshot of currently executing tasks."""
-        with self._active_lock:
-            return dict(self._active_executions)
-
     # ── Scheduled Tasks ──

     def _poll_loop(self):
@@ -488,7 +392,7 @@ class Scheduler:
         status_triggers = mapping.get("auto_execute_on_status", [])
         if task.status.lower() not in [s.lower() for s in status_triggers]:
             hint = mapping.get("trigger_hint", "manual trigger only")
-            log.debug(
+            log.info(
                 "Skipping task '%s' (type=%s): auto_execute is false (%s)",
                 task.name,
                 task.task_type,
@@ -542,10 +446,9 @@ class Scheduler:
         # Move to "automation underway" on ClickUp immediately
         client.update_task_status(task_id, self.config.clickup.automation_status)

-        log.info("Executing ClickUp task: %s -> %s", task.name, tool_name)
-        self._notify(f"Executing ClickUp task: **{task.name}** -> Skill: `{tool_name}`")
+        log.info("Executing ClickUp task: %s → %s", task.name, tool_name)
+        self._notify(f"Executing ClickUp task: **{task.name}** → Skill: `{tool_name}`")

-        self._register_execution(task_id, task.name, tool_name)
         try:
             # args already built during validation above
             args["clickup_task_id"] = task_id
@@ -571,12 +474,12 @@ class Scheduler:
         if result.startswith("Skipped:") or result.startswith("Error:"):
             client.add_comment(
                 task_id,
-                f"[WARNING]CheddahBot could not execute this task.\n\n{result[:2000]}",
+                f"⚠️ CheddahBot could not execute this task.\n\n{result[:2000]}",
             )
             client.update_task_status(task_id, self.config.clickup.error_status)

             self._notify(f"ClickUp task skipped: **{task.name}**\nReason: {result[:200]}")
-            log.debug("ClickUp task skipped: %s — %s", task.name, result[:200])
+            log.info("ClickUp task skipped: %s — %s", task.name, result[:200])
             return

         # Tool handled its own ClickUp sync — just log success
@@ -585,7 +488,7 @@ class Scheduler:

         except Exception as e:
             client.add_comment(
-                task_id, f"[FAILED]CheddahBot failed to complete this task.\n\nError: {str(e)[:2000]}"
+                task_id, f"❌ CheddahBot failed to complete this task.\n\nError: {str(e)[:2000]}"
             )
             client.update_task_status(task_id, self.config.clickup.error_status)

@@ -594,8 +497,6 @@ class Scheduler:
                 f"Skill: `{tool_name}` | Error: {str(e)[:200]}"
             )
             log.error("ClickUp task failed: %s — %s", task.name, e)
-        finally:
-            self._unregister_execution(task_id)

     def _recover_stale_tasks(self):
         """Reset tasks stuck in 'automation underway' for too long.
@@ -641,7 +542,7 @@ class Scheduler:
                 client.update_task_status(task.id, reset_status)
                 client.add_comment(
                     task.id,
-                    f"[WARNING]CheddahBot auto-recovered this task. It was stuck in "
+                    f"⚠️ CheddahBot auto-recovered this task. It was stuck in "
                     f"'{automation_status}' for {age_ms / 3_600_000:.1f} hours. "
                     f"Reset to '{reset_status}' for retry.",
                 )
@@ -758,7 +659,7 @@ class Scheduler:
             category="autocora",
         )

-        log.debug("AutoCora result for '%s': %s", keyword, status)
+        log.info("AutoCora result for '%s': %s", keyword, status)

         # Move result file to processed/
         processed_dir.mkdir(exist_ok=True)
@@ -821,7 +722,7 @@ class Scheduler:
         """Try to match a watched .xlsx file to a ClickUp task and run the pipeline."""
         filename = xlsx_path.name
         # Normalize filename stem for matching
-        # e.g., "precision-cnc-machining" ->"precision cnc machining"
+        # e.g., "precision-cnc-machining" → "precision cnc machining"
         stem = xlsx_path.stem.lower().replace("-", " ").replace("_", " ")
         stem = re.sub(r"\s+", " ", stem).strip()

@@ -858,11 +759,6 @@ class Scheduler:
         money_site_url = matched_task.custom_fields.get("IMSURL", "") or ""
         if not money_site_url:
             log.warning("Task %s (%s) missing IMSURL — skipping", task_id, matched_task.name)
-            client.add_comment(
-                task_id,
-                "[FAILED]Link building skipped — IMSURL field is empty. "
-                "Set the IMSURL field in ClickUp so the pipeline knows where to build links.",
-            )
             client.update_task_status(task_id, self.config.clickup.error_status)
             self._notify(
                 f"Folder watcher: **(unknown)** matched task **{matched_task.name}** "
@@ -887,7 +783,6 @@ class Scheduler:
         with contextlib.suppress(ValueError, TypeError):
             args["branded_plus_ratio"] = float(bp_raw)

-        self._register_execution(task_id, matched_task.name, "run_cora_backlinks")
         try:
             # Execute via tool registry
             if hasattr(self.agent, "_tools") and self.agent._tools:
@@ -897,10 +792,6 @@ class Scheduler:

             if "Error" in result and "## Step" not in result:
                 # Pipeline failed — tool handles its own ClickUp error status
-                _pipeline_err_log.info(
-                    "LINKBUILDING | task=%s | file=%s | result=%s",
-                    task_id, filename, result[:500],
-                )
                 self._notify(
                     f"Folder watcher: pipeline **failed** for **(unknown)**.\n"
                     f"Error: {result[:200]}",
@@ -925,13 +816,7 @@ class Scheduler:

         except Exception as e:
             log.error("Folder watcher pipeline error for %s: %s", filename, e)
-            client.add_comment(
-                task_id,
-                f"[FAILED]Link building pipeline crashed.\n\nError: {str(e)[:2000]}",
-            )
             client.update_task_status(task_id, self.config.clickup.error_status)
-        finally:
-            self._unregister_execution(task_id)

     def _match_xlsx_to_clickup(self, normalized_stem: str):
         """Find a ClickUp Link Building task whose Keyword matches the file stem.
@@ -946,14 +831,12 @@ class Scheduler:
             return None

         try:
-            tasks = client.get_tasks_from_overall_lists(space_id, statuses=list(_CORA_ELIGIBLE_STATUSES))
+            tasks = client.get_tasks_from_overall_lists(space_id)
         except Exception as e:
             log.warning("ClickUp query failed in _match_xlsx_to_clickup: %s", e)
             return None

         for task in tasks:
-            if task.status not in _CORA_ELIGIBLE_STATUSES:
-                continue
             if task.task_type != "Link Building":
                 continue

@@ -966,7 +849,7 @@ class Scheduler:
                 continue

             keyword_norm = _normalize_for_match(str(keyword))
-            if _fuzzy_keyword_match(normalized_stem, keyword_norm, self._llm_plural_check):
+            if _fuzzy_keyword_match(normalized_stem, keyword_norm):
                 return task

         return None
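The filename-normalization step shown in the `@@ -821` hunk above can be exercised on its own; a small sketch (the helper name is mine, not from the codebase — the real code runs inline before `_match_xlsx_to_clickup`):

```python
import re
from pathlib import Path


def normalize_stem(xlsx_path: Path) -> str:
    """Lowercase the file stem, turn hyphens/underscores into spaces,
    and collapse runs of whitespace, mirroring the diff's inline code."""
    stem = xlsx_path.stem.lower().replace("-", " ").replace("_", " ")
    return re.sub(r"\s+", " ", stem).strip()
```

So a watched file named `Precision-CNC_Machining.xlsx` normalizes to `"precision cnc machining"`, which is then fuzzy-compared against each task's `Keyword` custom field.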
@@ -1064,7 +947,6 @@ class Scheduler:
             "clickup_task_id": task_id,
         }

-        self._register_execution(task_id, matched_task.name, "create_content")
         try:
             if hasattr(self.agent, "_tools") and self.agent._tools:
                 result = self.agent._tools.execute("create_content", args)
@@ -1072,10 +954,6 @@ class Scheduler:
                 result = "Error: tool registry not available"

             if result.startswith("Error:"):
-                _pipeline_err_log.info(
-                    "CONTENT | task=%s | file=%s | result=%s",
-                    task_id, filename, result[:500],
-                )
                 self._notify(
                     f"Content watcher: pipeline **failed** for **(unknown)**.\n"
                     f"Error: {result[:200]}",
@@ -1100,13 +978,6 @@ class Scheduler:

         except Exception as e:
             log.error("Content watcher pipeline error for %s: %s", filename, e)
-            client.add_comment(
-                task_id,
-                f"[FAILED]Content pipeline crashed.\n\nError: {str(e)[:2000]}",
-            )
-            client.update_task_status(task_id, self.config.clickup.error_status)
-        finally:
-            self._unregister_execution(task_id)

     def _match_xlsx_to_content_task(self, normalized_stem: str):
         """Find a ClickUp content task whose Keyword matches the file stem.
@@ -1122,15 +993,13 @@ class Scheduler:
             return None

         try:
-            tasks = client.get_tasks_from_overall_lists(space_id, statuses=list(_CORA_ELIGIBLE_STATUSES))
+            tasks = client.get_tasks_from_overall_lists(space_id)
         except Exception as e:
             log.warning("ClickUp query failed in _match_xlsx_to_content_task: %s", e)
             return None

         content_types = ("Content Creation", "On Page Optimization")
         for task in tasks:
-            if task.status not in _CORA_ELIGIBLE_STATUSES:
-                continue
             if task.task_type not in content_types:
                 continue

@@ -1139,7 +1008,7 @@ class Scheduler:
                 continue

             keyword_norm = _normalize_for_match(str(keyword))
-            if _fuzzy_keyword_match(normalized_stem, keyword_norm, self._llm_plural_check):
+            if _fuzzy_keyword_match(normalized_stem, keyword_norm):
                 return task

         return None
@@ -1207,7 +1076,7 @@ class Scheduler:
             return

         try:
-            tasks = client.get_tasks_from_overall_lists(space_id, statuses=list(_CORA_ELIGIBLE_STATUSES))
+            tasks = client.get_tasks_from_overall_lists(space_id)
         except Exception as e:
             log.warning("ClickUp query failed in _distribute_cora_file: %s", e)
             return
@@ -1216,22 +1085,17 @@ class Scheduler:
         has_lb = False
         has_content = False
         matched_names = []
-        matched_error_tasks = []

         for task in tasks:
-            if task.status not in _CORA_ELIGIBLE_STATUSES:
-                continue
             keyword = task.custom_fields.get("Keyword", "")
             if not keyword:
                 continue

             keyword_norm = _normalize_for_match(str(keyword))
-            if not _fuzzy_keyword_match(stem, keyword_norm, self._llm_plural_check):
+            if not _fuzzy_keyword_match(stem, keyword_norm):
                 continue

             matched_names.append(task.name)
-            if task.status == self.config.clickup.error_status:
-                matched_error_tasks.append(task)
             if task.task_type == "Link Building":
                 has_lb = True
             elif task.task_type in ("Content Creation", "On Page Optimization"):
@@ -1268,19 +1132,6 @@ class Scheduler:
             )
             return

-        # Reset any matched tasks that were in "error" back to "running cora"
-        # so the pipeline picks them up again.
-        for task in matched_error_tasks:
-            try:
-                client.update_task_status(task.id, "running cora")
-                client.add_comment(
-                    task.id,
-                    f"New Cora XLSX distributed — resetting from error to running cora.",
-                )
-                log.info("Distributor: reset task %s (%s) from error ->running cora", task.id, task.name)
-            except Exception as e:
-                log.warning("Distributor: failed to reset task %s: %s", task.id, e)
-
         # Move original to processed/
         processed_dir = xlsx_path.parent / "processed"
         processed_dir.mkdir(exist_ok=True)
@@ -1289,7 +1140,7 @@ class Scheduler:
         except OSError as e:
             log.warning("Could not move %s to processed: %s", filename, e)

-        log.info("Cora distributor: %s ->%s", filename, ", ".join(copied_to))
+        log.info("Cora distributor: %s → %s", filename, ", ".join(copied_to))
         self._notify(
             f"Cora distributor: **(unknown)** copied to {', '.join(copied_to)}.\n"
             f"Matched tasks: {', '.join(matched_names)}",
@@ -1352,7 +1203,7 @@ class Scheduler:
             if not keyword:
                 continue
             keyword_norm = _normalize_for_match(str(keyword))
-            if _fuzzy_keyword_match(normalized_stem, keyword_norm, self._llm_plural_check):
+            if _fuzzy_keyword_match(normalized_stem, keyword_norm):
                 task_ids.append(task.id)

         # Post comments
@@ -1505,7 +1356,7 @@ class Scheduler:
     def _check_cora_file_status(self, cora_tasks) -> dict[str, str]:
         """For each 'running cora' task, check where its xlsx sits on the network.

-        Returns a dict of task_id ->human-readable status note.
+        Returns a dict of task_id → human-readable status note.
         """
         from .tools.linkbuilding import _fuzzy_keyword_match, _normalize_for_match

@@ -1519,7 +1370,7 @@ class Scheduler:
             "content_processed": Path(self.config.content.cora_inbox) / "processed",
         }

-        # Build a map: normalized_stem ->set of folder keys
+        # Build a map: normalized_stem → set of folder keys
        file_locations: dict[str, set[str]] = {}
        for folder_key, folder_path in folders.items():
            if not folder_path.exists():
@@ -1540,7 +1391,7 @@ class Scheduler:
         # Find which folders have a matching file
         matched_folders: set[str] = set()
         for stem, locs in file_locations.items():
-            if _fuzzy_keyword_match(keyword_norm, stem, self._llm_plural_check):
+            if _fuzzy_keyword_match(keyword_norm, stem):
                 matched_folders.update(locs)

         if not matched_folders:
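A recurring pattern in the master-side hunks above is a two-step guard in each matcher loop: skip tasks whose status is outside `_CORA_ELIGIBLE_STATUSES`, then skip tasks of the wrong type. The branch drops the status gate. The guard clauses reduce to a simple predicate; a standalone sketch (the status values below are illustrative only — the real `_CORA_ELIGIBLE_STATUSES` set lives in the codebase):

```python
def is_cora_eligible(
    status: str,
    task_type: str,
    eligible_statuses: frozenset[str],
    wanted_types: tuple[str, ...],
) -> bool:
    """Mirror the master-side guard clauses: status gate first, then task type."""
    if status not in eligible_statuses:
        return False
    return task_type in wanted_types


# Illustrative values only, not taken from the source:
ELIGIBLE = frozenset({"running cora", "to do"})
```

Filtering on status first means tasks already in an error or done state never reach the (comparatively expensive) fuzzy keyword match.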
@@ -1,619 +0,0 @@
-/* CheddahBot Dark Theme */
-
-:root {
-  --bg-primary: #0d1117;
-  --bg-surface: #161b22;
-  --bg-surface-hover: #1c2129;
-  --bg-input: #0d1117;
-  --text-primary: #e6edf3;
-  --text-secondary: #8b949e;
-  --text-muted: #484f58;
-  --accent: #2dd4bf;
-  --accent-dim: #134e4a;
-  --border: #30363d;
-  --success: #3fb950;
-  --error: #f85149;
-  --warning: #d29922;
-  --font-sans: -apple-system, BlinkMacSystemFont, 'Segoe UI', Helvetica, Arial, sans-serif;
-  --font-mono: 'JetBrains Mono', 'Fira Code', 'Cascadia Code', monospace;
-  --radius: 8px;
-  --sidebar-width: 280px;
-}
-
-* { margin: 0; padding: 0; box-sizing: border-box; }
-
-html, body {
-  height: 100%;
-  font-family: var(--font-sans);
-  font-size: 15px;
-  line-height: 1.5;
-  color: var(--text-primary);
-  background: var(--bg-primary);
-  overflow: hidden;
-}
-
-/* Top Navigation */
-.top-nav {
-  display: flex;
-  align-items: center;
-  gap: 24px;
-  padding: 0 20px;
-  height: 48px;
-  background: var(--bg-surface);
-  border-bottom: 1px solid var(--border);
-  flex-shrink: 0;
-}
-
-.nav-brand {
-  font-weight: 700;
-  font-size: 1.1em;
-  color: var(--accent);
-}
-
-.nav-links { display: flex; gap: 4px; }
-
-.nav-link {
-  color: var(--text-secondary);
-  text-decoration: none;
-  padding: 6px 14px;
-  border-radius: var(--radius);
-  font-size: 0.9em;
-  transition: background 0.15s, color 0.15s;
-}
-.nav-link:hover { background: var(--bg-surface-hover); color: var(--text-primary); }
-.nav-link.active { color: var(--accent); background: var(--accent-dim); }
-
-/* Main content area */
-.main-content {
-  height: calc(100vh - 48px);
-  overflow: hidden;
-}
-
-/* ─── Chat Layout ─── */
-.chat-layout {
-  display: flex;
-  height: 100%;
-}
-
-/* Sidebar */
-.chat-sidebar {
-  width: var(--sidebar-width);
-  min-width: var(--sidebar-width);
-  background: var(--bg-surface);
-  border-right: 1px solid var(--border);
-  display: flex;
-  flex-direction: column;
-  padding: 12px;
-  gap: 8px;
-  overflow-y: auto;
-  flex-shrink: 0;
-}
-
-.sidebar-header {
-  display: flex;
-  justify-content: space-between;
-  align-items: center;
-}
-
-.sidebar-header h3 { font-size: 0.85em; color: var(--text-secondary); text-transform: uppercase; letter-spacing: 0.05em; }
-
-.sidebar-toggle {
-  display: none;
-  background: none;
-  border: none;
-  color: var(--text-secondary);
-  font-size: 1.2em;
-  cursor: pointer;
-}
-
-.sidebar-open-btn {
-  display: none;
-  position: fixed;
-  top: 56px;
-  left: 8px;
-  z-index: 20;
-  background: var(--bg-surface);
-  border: 1px solid var(--border);
-  color: var(--text-primary);
-  padding: 6px 10px;
-  border-radius: var(--radius);
-  cursor: pointer;
-  font-size: 1.2em;
-}
-
-.sidebar-divider {
-  height: 1px;
-  background: var(--border);
-  margin: 4px 0;
-}
-
-.agent-selector { display: flex; flex-direction: column; gap: 4px; }
-
-.agent-btn {
-  padding: 8px 12px;
-  background: transparent;
-  border: 1px solid var(--border);
-  border-radius: var(--radius);
-  color: var(--text-primary);
-  cursor: pointer;
-  text-align: left;
-  font-size: 0.9em;
-  transition: border-color 0.15s, background 0.15s;
-}
-.agent-btn:hover { background: var(--bg-surface-hover); }
-.agent-btn.active { border-color: var(--accent); background: var(--accent-dim); }
-
-.btn-new-chat {
-  width: 100%;
-  padding: 8px;
-  background: var(--accent-dim);
-  border: 1px solid var(--accent);
-  border-radius: var(--radius);
-  color: var(--accent);
-  cursor: pointer;
-  font-size: 0.9em;
-  transition: background 0.15s;
-}
-.btn-new-chat:hover { background: var(--accent); color: var(--bg-primary); }
-
-.chat-sidebar h3 {
-  font-size: 0.8em;
-  color: var(--text-secondary);
-  text-transform: uppercase;
-  letter-spacing: 0.05em;
-  margin-top: 8px;
-}
-
-.conv-btn {
-  display: block;
-  width: 100%;
-  padding: 8px 10px;
-  background: transparent;
-  border: 1px solid transparent;
-  border-radius: var(--radius);
-  color: var(--text-primary);
-  cursor: pointer;
-  text-align: left;
-  font-size: 0.85em;
-  white-space: nowrap;
-  overflow: hidden;
-  text-overflow: ellipsis;
-  transition: background 0.15s;
-}
-.conv-btn:hover { background: var(--bg-surface-hover); }
-.conv-btn.active { border-color: var(--accent); background: var(--accent-dim); }
-
-/* Chat main area */
-.chat-main {
-  flex: 1;
-  display: flex;
-  flex-direction: column;
-  min-width: 0;
-}
-
-/* Status bar */
-.status-bar {
-  display: flex;
-  gap: 16px;
-  padding: 8px 20px;
-  font-size: 0.8em;
-  color: var(--text-secondary);
-  border-bottom: 1px solid var(--border);
-  background: var(--bg-surface);
-  flex-shrink: 0;
-}
-.status-item strong { color: var(--text-primary); }
-.text-ok { color: var(--success) !important; }
-.text-err { color: var(--error) !important; }
-
-/* Notification banner */
-.notification-banner {
-  margin: 8px 20px 0;
-  padding: 10px 16px;
-  background: var(--bg-surface);
-  border: 1px solid var(--accent-dim);
-  border-radius: var(--radius);
-  font-size: 0.9em;
-  color: var(--accent);
-}
-
-/* Messages area */
-.chat-messages {
-  flex: 1;
-  overflow-y: auto;
-  padding: 16px 20px;
-  display: flex;
-  flex-direction: column;
-  gap: 12px;
-}
-
-.message {
-  display: flex;
-  gap: 10px;
-  max-width: 85%;
-  animation: fadeIn 0.2s ease-out;
-}
-
-@keyframes fadeIn {
-  from { opacity: 0; transform: translateY(4px); }
-  to { opacity: 1; transform: translateY(0); }
-}
-
-.message.user { align-self: flex-end; flex-direction: row-reverse; }
-.message.assistant { align-self: flex-start; }
-
-.message-avatar {
-  width: 32px;
-  height: 32px;
-  border-radius: 50%;
-  display: flex;
-  align-items: center;
-  justify-content: center;
-  font-size: 0.7em;
-  font-weight: 700;
-  flex-shrink: 0;
-}
-.message.user .message-avatar { background: var(--accent-dim); color: var(--accent); }
-.message.assistant .message-avatar { background: #1c2129; color: var(--text-secondary); }
-
-.message-body {
-  background: var(--bg-surface);
-  border: 1px solid var(--border);
-  border-radius: var(--radius);
-  padding: 10px 14px;
-  min-width: 0;
-}
-.message.user .message-body { background: var(--accent-dim); border-color: var(--accent); }
-
-.message-content {
-  word-wrap: break-word;
-  overflow-wrap: break-word;
-}
-
-/* Markdown rendering in messages */
-.message-content p { margin: 0.4em 0; }
-.message-content p:first-child { margin-top: 0; }
-.message-content p:last-child { margin-bottom: 0; }
-.message-content pre {
-  background: var(--bg-primary);
-  border: 1px solid var(--border);
-  border-radius: 4px;
-  padding: 10px;
-  overflow-x: auto;
-  font-family: var(--font-mono);
-  font-size: 0.9em;
-  margin: 0.5em 0;
-}
-.message-content code {
-  font-family: var(--font-mono);
-  font-size: 0.9em;
-  background: var(--bg-primary);
-  padding: 2px 5px;
-  border-radius: 3px;
-}
-.message-content pre code { background: none; padding: 0; }
-.message-content ul, .message-content ol { margin: 0.4em 0; padding-left: 1.5em; }
-.message-content a { color: var(--accent); }
-.message-content blockquote {
-  border-left: 3px solid var(--accent);
-  padding-left: 12px;
-  color: var(--text-secondary);
-  margin: 0.5em 0;
-}
-.message-content table { border-collapse: collapse; margin: 0.5em 0; }
-.message-content th, .message-content td {
-  border: 1px solid var(--border);
-  padding: 6px 10px;
-  text-align: left;
-}
-.message-content th { background: var(--bg-surface-hover); }
-
-/* Chat input area */
-.chat-input-area {
-  padding: 12px 20px;
-  border-top: 1px solid var(--border);
-  background: var(--bg-surface);
-  flex-shrink: 0;
-}
-
-.input-row {
-  display: flex;
-  align-items: flex-end;
-  gap: 8px;
-}
-
-#chat-input {
-  flex: 1;
-  background: var(--bg-input);
-  border: 1px solid var(--border);
-  border-radius: var(--radius);
-  padding: 10px 14px;
-  color: var(--text-primary);
-  font-family: var(--font-sans);
-  font-size: 15px;
-  resize: none;
-  max-height: 200px;
-  line-height: 1.4;
-}
-#chat-input:focus { outline: none; border-color: var(--accent); }
-#chat-input::placeholder { color: var(--text-muted); }
-
-.file-upload-btn {
-  padding: 8px 10px;
-  cursor: pointer;
-  font-size: 1.2em;
-  color: var(--text-secondary);
-  transition: color 0.15s;
-  flex-shrink: 0;
-}
-.file-upload-btn:hover { color: var(--accent); }
-
-.send-btn {
-  padding: 8px 14px;
-  background: var(--accent);
-  border: none;
-  border-radius: var(--radius);
-  color: var(--bg-primary);
-  font-size: 1.1em;
-  cursor: pointer;
-  flex-shrink: 0;
-  transition: opacity 0.15s;
-}
-.send-btn:hover { opacity: 0.85; }
-
-.file-preview {
-  margin-top: 6px;
-  font-size: 0.85em;
-  color: var(--text-secondary);
-}
-.file-preview .file-tag {
-  display: inline-block;
-  background: var(--bg-primary);
-  border: 1px solid var(--border);
-  border-radius: 4px;
-  padding: 2px 8px;
-  margin-right: 6px;
-}
-
-/* ─── Dashboard Layout ─── */
-.dashboard-layout {
-  display: flex;
-  flex-direction: column;
-  gap: 16px;
-  padding: 16px 20px;
-  height: 100%;
-  overflow-y: auto;
-}
-
-.panel {
-  background: var(--bg-surface);
-  border: 1px solid var(--border);
-  border-radius: var(--radius);
-  padding: 16px;
-}
-
-.panel-title {
-  font-size: 1.1em;
-  font-weight: 600;
-  margin-bottom: 12px;
-  color: var(--accent);
-}
-
-.panel-section {
-  margin-bottom: 16px;
-}
-
-.panel-section h3 {
-  font-size: 0.85em;
-  color: var(--text-secondary);
-  text-transform: uppercase;
-  letter-spacing: 0.05em;
-  margin-bottom: 8px;
-}
-
-/* Loop health grid */
-.loop-grid {
-  display: flex;
-  flex-wrap: wrap;
-  gap: 8px;
-}
-
-.loop-badge {
-  display: flex;
-  flex-direction: column;
-  align-items: center;
-  padding: 8px 12px;
-  border-radius: var(--radius);
-  font-size: 0.8em;
-  min-width: 90px;
-  border: 1px solid var(--border);
-}
-.loop-name { font-weight: 600; }
-.loop-ago { color: var(--text-secondary); font-size: 0.85em; }
-
-.badge-ok { border-color: var(--success); background: rgba(63, 185, 80, 0.1); }
-.badge-ok .loop-name { color: var(--success); }
-.badge-warn { border-color: var(--warning); background: rgba(210, 153, 34, 0.1); }
-.badge-warn .loop-name { color: var(--warning); }
-.badge-err { border-color: var(--error); background: rgba(248, 81, 73, 0.1); }
-.badge-err .loop-name { color: var(--error); }
-.badge-muted { border-color: var(--text-muted); }
-.badge-muted .loop-name { color: var(--text-muted); }
-
-/* Active executions */
-.exec-list { display: flex; flex-direction: column; gap: 6px; }
-.exec-item {
-  display: flex;
-  gap: 12px;
-  padding: 6px 10px;
-  background: var(--bg-primary);
-  border-radius: 4px;
-  font-size: 0.85em;
-}
-.exec-name { flex: 1; font-weight: 500; }
-.exec-tool { color: var(--text-secondary); }
-.exec-dur { color: var(--accent); font-family: var(--font-mono); }
-
-/* Action buttons */
-.action-buttons { display: flex; gap: 8px; flex-wrap: wrap; }
-
-.btn {
-  padding: 8px 16px;
-  background: var(--bg-surface-hover);
-  border: 1px solid var(--border);
-  border-radius: var(--radius);
-  color: var(--text-primary);
-  cursor: pointer;
-  font-size: 0.9em;
-  transition: border-color 0.15s, background 0.15s;
-}
-.btn:hover { border-color: var(--accent); }
-
-.btn-sm { padding: 6px 12px; font-size: 0.8em; }
-
-/* Notification feed */
-.notif-feed { display: flex; flex-direction: column; gap: 4px; max-height: 300px; overflow-y: auto; }
-.notif-item {
-  padding: 6px 10px;
-  font-size: 0.85em;
-  border-left: 3px solid var(--border);
-  background: var(--bg-primary);
-  border-radius: 0 4px 4px 0;
-}
-.notif-clickup { border-left-color: var(--accent); }
-.notif-info { border-left-color: var(--text-secondary); }
-.notif-error { border-left-color: var(--error); }
|
|
||||||
.notif-cat {
|
|
||||||
font-weight: 600;
|
|
||||||
font-size: 0.8em;
|
|
||||||
text-transform: uppercase;
|
|
||||||
color: var(--text-secondary);
|
|
||||||
}
|
|
||||||
|
|
||||||
/* Task table */
|
|
||||||
.task-table { width: 100%; border-collapse: collapse; font-size: 0.85em; }
|
|
||||||
.task-table th, .task-table td { padding: 8px 12px; border-bottom: 1px solid var(--border); text-align: left; }
|
|
||||||
.task-table th { color: var(--text-secondary); font-weight: 600; text-transform: uppercase; font-size: 0.85em; }
|
|
||||||
.task-table a { color: var(--accent); text-decoration: none; }
|
|
||||||
.task-table a:hover { text-decoration: underline; }
|
|
||||||
|
|
||||||
.status-badge {
|
|
||||||
display: inline-block;
|
|
||||||
padding: 2px 8px;
|
|
||||||
border-radius: 4px;
|
|
||||||
font-size: 0.85em;
|
|
||||||
font-weight: 500;
|
|
||||||
}
|
|
||||||
.status-to-do { background: rgba(139, 148, 158, 0.2); color: var(--text-secondary); }
|
|
||||||
.status-in-progress, .status-automation-underway { background: rgba(45, 212, 191, 0.15); color: var(--accent); }
|
|
||||||
.status-error { background: rgba(248, 81, 73, 0.15); color: var(--error); }
|
|
||||||
.status-complete, .status-closed { background: rgba(63, 185, 80, 0.15); color: var(--success); }
|
|
||||||
.status-internal-review, .status-outline-review { background: rgba(210, 153, 34, 0.15); color: var(--warning); }
|
|
||||||
|
|
||||||
/* Pipeline groups */
|
|
||||||
.pipeline-group { margin-bottom: 16px; }
|
|
||||||
.pipeline-group h4 {
|
|
||||||
font-size: 0.9em;
|
|
||||||
margin-bottom: 8px;
|
|
||||||
padding-bottom: 4px;
|
|
||||||
border-bottom: 1px solid var(--border);
|
|
||||||
}
|
|
||||||
.pipeline-stats {
|
|
||||||
display: flex;
|
|
||||||
gap: 12px;
|
|
||||||
margin-bottom: 12px;
|
|
||||||
flex-wrap: wrap;
|
|
||||||
}
|
|
||||||
.pipeline-stat {
|
|
||||||
padding: 8px 14px;
|
|
||||||
background: var(--bg-primary);
|
|
||||||
border: 1px solid var(--border);
|
|
||||||
border-radius: var(--radius);
|
|
||||||
text-align: center;
|
|
||||||
}
|
|
||||||
.pipeline-stat .stat-count { font-size: 1.5em; font-weight: 700; color: var(--accent); }
|
|
||||||
.pipeline-stat .stat-label { font-size: 0.75em; color: var(--text-secondary); }
|
|
||||||
|
|
||||||
/* Flash messages */
|
|
||||||
.flash-msg {
|
|
||||||
position: fixed;
|
|
||||||
bottom: 20px;
|
|
||||||
right: 20px;
|
|
||||||
background: var(--accent);
|
|
||||||
color: var(--bg-primary);
|
|
||||||
padding: 10px 20px;
|
|
||||||
border-radius: var(--radius);
|
|
||||||
font-weight: 600;
|
|
||||||
font-size: 0.9em;
|
|
||||||
z-index: 100;
|
|
||||||
animation: fadeIn 0.2s ease-out, fadeOut 0.5s 2.5s ease-out forwards;
|
|
||||||
}
|
|
||||||
@keyframes fadeOut { to { opacity: 0; transform: translateY(10px); } }
|
|
||||||
|
|
||||||
/* Utility */
|
|
||||||
.text-muted { color: var(--text-muted); }
|
|
||||||
|
|
||||||
/* Typing indicator */
|
|
||||||
.typing-indicator span {
|
|
||||||
display: inline-block;
|
|
||||||
width: 6px;
|
|
||||||
height: 6px;
|
|
||||||
background: var(--text-secondary);
|
|
||||||
border-radius: 50%;
|
|
||||||
margin: 0 2px;
|
|
||||||
animation: bounce 1.2s infinite;
|
|
||||||
}
|
|
||||||
.typing-indicator span:nth-child(2) { animation-delay: 0.2s; }
|
|
||||||
.typing-indicator span:nth-child(3) { animation-delay: 0.4s; }
|
|
||||||
@keyframes bounce {
|
|
||||||
0%, 60%, 100% { transform: translateY(0); }
|
|
||||||
30% { transform: translateY(-6px); }
|
|
||||||
}
|
|
||||||
|
|
||||||
/* ─── Mobile ─── */
|
|
||||||
@media (max-width: 768px) {
|
|
||||||
.chat-sidebar {
|
|
||||||
position: fixed;
|
|
||||||
top: 48px;
|
|
||||||
left: 0;
|
|
||||||
bottom: 0;
|
|
||||||
z-index: 30;
|
|
||||||
transform: translateX(-100%);
|
|
||||||
transition: transform 0.2s ease;
|
|
||||||
width: 280px;
|
|
||||||
}
|
|
||||||
.chat-sidebar.open { transform: translateX(0); }
|
|
||||||
.sidebar-toggle { display: block; }
|
|
||||||
.sidebar-open-btn { display: block; }
|
|
||||||
|
|
||||||
.status-bar { flex-wrap: wrap; gap: 8px; padding: 6px 12px; font-size: 0.75em; }
|
|
||||||
|
|
||||||
.chat-messages { padding: 12px; }
|
|
||||||
.message { max-width: 95%; }
|
|
||||||
.chat-input-area { padding: 8px 12px; }
|
|
||||||
|
|
||||||
#chat-input { font-size: 16px; } /* Prevent iOS zoom */
|
|
||||||
|
|
||||||
.dashboard-layout { padding: 12px; }
|
|
||||||
.loop-grid { gap: 6px; }
|
|
||||||
.loop-badge { min-width: 70px; padding: 6px 8px; font-size: 0.75em; }
|
|
||||||
}
|
|
||||||
|
|
||||||
/* Overlay for mobile sidebar */
|
|
||||||
.sidebar-overlay {
|
|
||||||
display: none;
|
|
||||||
position: fixed;
|
|
||||||
top: 48px;
|
|
||||||
left: 0;
|
|
||||||
right: 0;
|
|
||||||
bottom: 0;
|
|
||||||
background: rgba(0, 0, 0, 0.5);
|
|
||||||
z-index: 25;
|
|
||||||
}
|
|
||||||
.sidebar-overlay.visible { display: block; }
|
|
||||||
|
|
||||||
/* Scrollbar styling */
|
|
||||||
::-webkit-scrollbar { width: 6px; }
|
|
||||||
::-webkit-scrollbar-track { background: transparent; }
|
|
||||||
::-webkit-scrollbar-thumb { background: var(--border); border-radius: 3px; }
|
|
||||||
::-webkit-scrollbar-thumb:hover { background: var(--text-muted); }
|
|
||||||
|
|
@@ -1,284 +0,0 @@
/* CheddahBot Frontend JS */

// ── Session Management ──
const SESSION_KEY = 'cheddahbot_session';

function getSession() {
    try { return JSON.parse(localStorage.getItem(SESSION_KEY) || '{}'); }
    catch { return {}; }
}

function saveSession(data) {
    const s = getSession();
    Object.assign(s, data);
    localStorage.setItem(SESSION_KEY, JSON.stringify(s));
}

function getActiveAgent() {
    return getSession().agent_name || document.getElementById('input-agent-name')?.value || 'default';
}

// ── Agent Switching ──
function switchAgent(name) {
    // Update UI
    document.querySelectorAll('.agent-btn').forEach(b => {
        b.classList.toggle('active', b.dataset.agent === name);
    });
    document.getElementById('input-agent-name').value = name;
    document.getElementById('input-conv-id').value = '';
    saveSession({ agent_name: name, conv_id: null });

    // Clear chat and load new sidebar
    document.getElementById('chat-messages').innerHTML = '';
    refreshSidebar();
}

function setActiveAgent(name) {
    document.querySelectorAll('.agent-btn').forEach(b => {
        b.classList.toggle('active', b.dataset.agent === name);
    });
    const agentInput = document.getElementById('input-agent-name');
    if (agentInput) agentInput.value = name;
}

// ── Sidebar ──
function refreshSidebar() {
    const agent = getActiveAgent();
    htmx.ajax('GET', '/chat/conversations?agent_name=' + encodeURIComponent(agent), {
        target: '#sidebar-conversations',
        swap: 'innerHTML'
    });
}

// ── Conversation Loading ──
function loadConversation(convId) {
    const agent = getActiveAgent();
    document.getElementById('input-conv-id').value = convId;
    saveSession({ conv_id: convId });

    htmx.ajax('GET', '/chat/load/' + convId + '?agent_name=' + encodeURIComponent(agent), {
        target: '#chat-messages',
        swap: 'innerHTML'
    }).then(() => {
        scrollChat();
        renderAllMarkdown();
    });
}

// ── Chat Input ──
function handleKeydown(e) {
    if (e.key === 'Enter' && !e.shiftKey) {
        e.preventDefault();
        document.getElementById('chat-form').requestSubmit();
    }
}

function autoResize(el) {
    el.style.height = 'auto';
    el.style.height = Math.min(el.scrollHeight, 200) + 'px';
}

function afterSend(event) {
    const input = document.getElementById('chat-input');
    input.value = '';
    input.style.height = 'auto';

    // Clear file input and preview
    const fileInput = document.querySelector('input[type="file"]');
    if (fileInput) fileInput.value = '';
    const preview = document.getElementById('file-preview');
    if (preview) { preview.style.display = 'none'; preview.innerHTML = ''; }

    scrollChat();
}

function scrollChat() {
    const el = document.getElementById('chat-messages');
    if (el) {
        requestAnimationFrame(() => {
            el.scrollTop = el.scrollHeight;
        });
    }
}

// ── File Upload Preview ──
function showFileNames(input) {
    const preview = document.getElementById('file-preview');
    if (!input.files.length) {
        preview.style.display = 'none';
        return;
    }
    preview.innerHTML = '';
    for (const f of input.files) {
        // Build tags via textContent so filenames can't inject HTML
        const tag = document.createElement('span');
        tag.className = 'file-tag';
        tag.textContent = f.name;
        preview.appendChild(tag);
    }
    preview.style.display = 'block';
}

// Drag and drop
document.addEventListener('DOMContentLoaded', () => {
    const chatMain = document.querySelector('.chat-main');
    if (!chatMain) return;

    chatMain.addEventListener('dragover', e => {
        e.preventDefault();
        chatMain.style.outline = '2px dashed var(--accent)';
    });
    chatMain.addEventListener('dragleave', () => {
        chatMain.style.outline = '';
    });
    chatMain.addEventListener('drop', e => {
        e.preventDefault();
        chatMain.style.outline = '';
        const fileInput = document.querySelector('input[type="file"]');
        if (fileInput && e.dataTransfer.files.length) {
            fileInput.files = e.dataTransfer.files;
            showFileNames(fileInput);
        }
    });
});

// ── SSE Streaming ──
// Handle SSE chunks for chat streaming
let streamBuffer = '';
let activeSSE = null;

document.addEventListener('htmx:sseBeforeMessage', function(e) {
    // This fires for each SSE event received by htmx
});

// Watch for SSE trigger divs being added to the DOM
const observer = new MutationObserver(mutations => {
    for (const m of mutations) {
        for (const node of m.addedNodes) {
            if (node.id === 'sse-trigger') {
                setupStream(node);
            }
        }
    }
});

document.addEventListener('DOMContentLoaded', () => {
    const chatMessages = document.getElementById('chat-messages');
    if (chatMessages) {
        observer.observe(chatMessages, { childList: true, subtree: true });
    }
});

function setupStream(triggerDiv) {
    const sseUrl = triggerDiv.getAttribute('sse-connect');
    if (!sseUrl) return;

    // Remove the htmx SSE trigger so the stream is managed manually
    triggerDiv.remove();

    const responseDiv = document.getElementById('assistant-response');
    if (!responseDiv) return;

    streamBuffer = '';

    // Show typing indicator
    responseDiv.innerHTML = '<div class="typing-indicator"><span></span><span></span><span></span></div>';

    const source = new EventSource(sseUrl);
    activeSSE = source;

    source.addEventListener('chunk', function(e) {
        if (streamBuffer === '') {
            // Remove typing indicator on first chunk
            responseDiv.innerHTML = '';
        }
        streamBuffer += e.data;
        // Render markdown
        try {
            responseDiv.innerHTML = marked.parse(streamBuffer);
        } catch {
            responseDiv.textContent = streamBuffer;
        }
        scrollChat();
    });

    source.addEventListener('done', function(e) {
        source.close();
        activeSSE = null;
        // Final markdown render
        if (streamBuffer) {
            try {
                responseDiv.innerHTML = marked.parse(streamBuffer);
            } catch {
                responseDiv.textContent = streamBuffer;
            }
        }
        streamBuffer = '';

        // Update conv_id from done event data
        const convId = e.data;
        if (convId) {
            document.getElementById('input-conv-id').value = convId;
            saveSession({ conv_id: convId });
        }

        // Refresh sidebar
        refreshSidebar();
        scrollChat();
    });

    source.onerror = function() {
        source.close();
        activeSSE = null;
        if (!streamBuffer) {
            responseDiv.innerHTML = '<span class="text-err">Connection lost</span>';
        }
    };
}

// ── Markdown Rendering ──
function renderAllMarkdown() {
    document.querySelectorAll('.message-content').forEach(el => {
        const raw = el.textContent;
        if (raw && typeof marked !== 'undefined') {
            try {
                el.innerHTML = marked.parse(raw);
            } catch { /* keep raw text */ }
        }
    });
}

// ── Mobile Sidebar ──
function toggleSidebar() {
    const sidebar = document.getElementById('chat-sidebar');
    const overlay = document.getElementById('sidebar-overlay');
    if (sidebar) {
        sidebar.classList.toggle('open');
    }
    if (overlay) {
        overlay.classList.toggle('visible');
    }
}

// ── Notification Banner (chat page) ──
function setupChatNotifications() {
    const banner = document.getElementById('notification-banner');
    if (!banner) return;

    const source = new EventSource('/sse/notifications');
    source.addEventListener('notification', function(e) {
        const notif = JSON.parse(e.data);
        banner.textContent = notif.message;
        banner.style.display = 'block';
        // Auto-hide after 15s
        setTimeout(() => { banner.style.display = 'none'; }, 15000);
    });
}

document.addEventListener('DOMContentLoaded', setupChatNotifications);

// ── HTMX Events ──
document.addEventListener('scrollChat', scrollChat);
document.addEventListener('htmx:afterSwap', function(e) {
    if (e.target.id === 'chat-messages') {
        renderAllMarkdown();
        scrollChat();
    }
});
@@ -1,27 +0,0 @@
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>{% block title %}CheddahBot{% endblock %}</title>
    <link rel="stylesheet" href="/static/app.css">
    <script src="https://unpkg.com/htmx.org@2.0.4"></script>
    <script src="https://unpkg.com/htmx-ext-sse@2.3.0/sse.js"></script>
    <script src="https://cdn.jsdelivr.net/npm/marked/marked.min.js"></script>
    {% block head %}{% endblock %}
</head>
<body>
    <nav class="top-nav">
        <div class="nav-brand">CheddahBot</div>
        <div class="nav-links">
            <a href="/" class="nav-link {% block nav_chat_active %}{% endblock %}">Chat</a>
            <a href="/dashboard" class="nav-link {% block nav_dash_active %}{% endblock %}">Dashboard</a>
        </div>
    </nav>
    <main class="main-content">
        {% block content %}{% endblock %}
    </main>
    <script src="/static/app.js"></script>
    {% block scripts %}{% endblock %}
</body>
</html>
@@ -1,111 +0,0 @@
{% extends "base.html" %}

{% block title %}Chat - CheddahBot{% endblock %}
{% block nav_chat_active %}active{% endblock %}

{% block content %}
<div class="chat-layout">
    <!-- Sidebar -->
    <aside class="chat-sidebar" id="chat-sidebar">
        <div class="sidebar-header">
            <h3>Agents</h3>
            <button class="sidebar-toggle" onclick="toggleSidebar()" aria-label="Close sidebar">✕</button>
        </div>
        <div class="agent-selector" id="agent-selector">
            {% for agent in agents %}
            <button
                class="agent-btn {% if agent.name == default_agent %}active{% endif %}"
                data-agent="{{ agent.name }}"
                onclick="switchAgent('{{ agent.name }}')"
            >{{ agent.display_name }}</button>
            {% endfor %}
        </div>

        <div class="sidebar-divider"></div>

        <button
            class="btn btn-new-chat"
            hx-post="/chat/new"
            hx-vals='{"agent_name": "{{ default_agent }}"}'
            hx-target="#chat-messages"
            hx-swap="innerHTML"
            onclick="this.setAttribute('hx-vals', JSON.stringify({agent_name: getActiveAgent()}))"
        >+ New Chat</button>

        <h3>History</h3>
        <div id="sidebar-conversations"
             hx-get="/chat/conversations?agent_name={{ default_agent }}"
             hx-trigger="load"
             hx-swap="innerHTML">
        </div>
    </aside>

    <!-- Mobile sidebar toggle + overlay -->
    <button class="sidebar-open-btn" onclick="toggleSidebar()" aria-label="Open sidebar">☰</button>
    <div id="sidebar-overlay" class="sidebar-overlay" onclick="toggleSidebar()"></div>

    <!-- Chat area -->
    <div class="chat-main">
        <!-- Status bar -->
        <div class="status-bar">
            <span class="status-item">Model: <strong>{{ chat_model }}</strong></span>
            <span class="status-item">Exec: <strong class="{% if exec_available %}text-ok{% else %}text-err{% endif %}">{{ "OK" if exec_available else "N/A" }}</strong></span>
            <span class="status-item">ClickUp: <strong class="{% if clickup_enabled %}text-ok{% else %}text-err{% endif %}">{{ "ON" if clickup_enabled else "OFF" }}</strong></span>
        </div>

        <!-- Notification banner (populated by SSE) -->
        <div id="notification-banner" class="notification-banner" style="display:none;"></div>

        <!-- Messages -->
        <div class="chat-messages" id="chat-messages">
            <!-- Messages loaded here -->
        </div>

        <!-- Input area -->
        <form id="chat-form" class="chat-input-area"
              hx-post="/chat/send"
              hx-target="#chat-messages"
              hx-swap="beforeend"
              hx-encoding="multipart/form-data"
              hx-on::after-request="afterSend(event)">
            <input type="hidden" name="agent_name" id="input-agent-name" value="{{ default_agent }}">
            <input type="hidden" name="conv_id" id="input-conv-id" value="">
            <div class="input-row">
                <label class="file-upload-btn" title="Attach files">
                    📎
                    <input type="file" name="files" multiple style="display:none;" onchange="showFileNames(this)">
                </label>
                <textarea name="text" id="chat-input" rows="1" placeholder="Type a message..."
                          onkeydown="handleKeydown(event)" oninput="autoResize(this)"></textarea>
                <button type="submit" class="send-btn" title="Send">➤</button>
            </div>
            <div id="file-preview" class="file-preview" style="display:none;"></div>
        </form>
    </div>
</div>
{% endblock %}

{% block scripts %}
<script>
    // Restore session state on load. SESSION_KEY, getSession, and saveSession
    // come from app.js; redeclaring the const here would throw a SyntaxError.
    document.addEventListener('DOMContentLoaded', function() {
        const session = getSession();
        if (!session.agent_name) {
            session.agent_name = '{{ default_agent }}';
            saveSession({ agent_name: session.agent_name });
        }
        setActiveAgent(session.agent_name);
        if (session.conv_id) {
            loadConversation(session.conv_id);
        }
        // Load conversations for sidebar
        refreshSidebar();
    });
</script>
{% endblock %}
@@ -1,174 +0,0 @@
{% extends "base.html" %}

{% block title %}Dashboard - CheddahBot{% endblock %}
{% block nav_dash_active %}active{% endblock %}

{% block content %}
<div class="dashboard-layout">

    <!-- Ops Panel -->
    <section class="panel" id="ops-panel">
        <h2 class="panel-title">Operations</h2>

        <!-- Active Executions -->
        <div class="panel-section">
            <h3>Active Executions</h3>
            <div id="active-executions" class="exec-list">
                <span class="text-muted">Loading...</span>
            </div>
        </div>

        <!-- Loop Health -->
        <div class="panel-section">
            <h3>Loop Health</h3>
            <div id="loop-health" class="loop-grid">
                <span class="text-muted">Loading...</span>
            </div>
        </div>

        <!-- Actions -->
        <div class="panel-section">
            <h3>Actions</h3>
            <div class="action-buttons">
                <button class="btn btn-sm"
                        hx-post="/api/system/loops/force"
                        hx-swap="none"
                        hx-on::after-request="showFlash('Force pulse sent')">
                    Force Pulse
                </button>
                <button class="btn btn-sm"
                        hx-post="/api/system/briefing/force"
                        hx-swap="none"
                        hx-on::after-request="showFlash('Briefing triggered')">
                    Force Briefing
                </button>
                <button class="btn btn-sm"
                        hx-post="/api/cache/clear"
                        hx-swap="none"
                        hx-on::after-request="showFlash('Cache cleared')">
                    Clear Cache
                </button>
            </div>
        </div>

        <!-- Notification Feed -->
        <div class="panel-section">
            <h3>Notifications</h3>
            <div id="notification-feed" class="notif-feed">
                <span class="text-muted">Waiting for notifications...</span>
            </div>
        </div>
    </section>

    <!-- Pipeline Panel -->
    <section class="panel" id="pipeline-panel">
        <h2 class="panel-title">Pipeline</h2>
        <div id="pipeline-content"
             hx-get="/dashboard/pipeline"
             hx-trigger="load, every 120s"
             hx-swap="innerHTML">
            <span class="text-muted">Loading pipeline data...</span>
        </div>
    </section>

</div>
{% endblock %}

{% block scripts %}
<script>
    // Connect to SSE for live loop updates
    const loopSource = new EventSource('/sse/loops');
    loopSource.addEventListener('loops', function(e) {
        const data = JSON.parse(e.data);
        renderLoopHealth(data.loops);
        renderActiveExecutions(data.executions);
    });

    // Connect to SSE for notifications
    const notifSource = new EventSource('/sse/notifications');
    notifSource.addEventListener('notification', function(e) {
        const notif = JSON.parse(e.data);
        addNotification(notif.message, notif.category);
    });

    function renderLoopHealth(loops) {
        const container = document.getElementById('loop-health');
        if (!loops || Object.keys(loops).length === 0) {
            container.innerHTML = '<span class="text-muted">No loop data</span>';
            return;
        }
        let html = '';
        const now = new Date();
        for (const [name, ts] of Object.entries(loops)) {
            let statusClass = 'badge-muted';
            let agoText = 'never';
            if (ts) {
                const dt = new Date(ts);
                const secs = Math.floor((now - dt) / 1000);
                if (secs < 120) {
                    statusClass = 'badge-ok';
                    agoText = secs + 's ago';
                } else if (secs < 600) {
                    statusClass = 'badge-warn';
                    agoText = Math.floor(secs / 60) + 'm ago';
                } else {
                    statusClass = 'badge-err';
                    agoText = Math.floor(secs / 60) + 'm ago';
                }
            }
            html += '<div class="loop-badge ' + statusClass + '">' +
                '<span class="loop-name">' + name + '</span>' +
                '<span class="loop-ago">' + agoText + '</span>' +
                '</div>';
        }
        container.innerHTML = html;
    }

    function renderActiveExecutions(execs) {
        const container = document.getElementById('active-executions');
        if (!execs || Object.keys(execs).length === 0) {
            container.innerHTML = '<span class="text-muted">No active executions</span>';
            return;
        }
        let html = '';
        const now = new Date();
        for (const [id, info] of Object.entries(execs)) {
            const started = new Date(info.started_at);
            const durSecs = Math.floor((now - started) / 1000);
            let dur = durSecs + 's';
            if (durSecs >= 60) dur = Math.floor(durSecs / 60) + 'm ' + (durSecs % 60) + 's';
            html += '<div class="exec-item">' +
                '<span class="exec-name">' + info.name + '</span>' +
                '<span class="exec-tool">' + info.tool + '</span>' +
                '<span class="exec-dur">' + dur + '</span>' +
                '</div>';
        }
        container.innerHTML = html;
    }

    let notifCount = 0;
    function addNotification(message, category) {
        const container = document.getElementById('notification-feed');
        if (notifCount === 0) container.innerHTML = '';
        notifCount++;

        const div = document.createElement('div');
        div.className = 'notif-item notif-' + (category || 'info');
        // Insert the message as text, not HTML, so notification content can't inject markup
        const cat = document.createElement('span');
        cat.className = 'notif-cat';
        cat.textContent = category || 'info';
        div.appendChild(cat);
        div.appendChild(document.createTextNode(' ' + message));
        container.insertBefore(div, container.firstChild);

        // Keep max 30
        while (container.children.length > 30) {
            container.removeChild(container.lastChild);
        }
    }

    function showFlash(msg) {
        const el = document.createElement('div');
        el.className = 'flash-msg';
        el.textContent = msg;
        document.body.appendChild(el);
        setTimeout(() => el.remove(), 3000);
    }
</script>
{% endblock %}
@@ -1,6 +0,0 @@
<div class="message {{ role }}">
    <div class="message-avatar">{% if role == 'user' %}You{% else %}CB{% endif %}</div>
    <div class="message-body">
        <div class="message-content">{{ content }}</div>
    </div>
</div>
@@ -1,11 +0,0 @@
{% if conversations %}
    {% for conv in conversations %}
    <button class="conv-btn"
            onclick="loadConversation('{{ conv.id }}')"
            title="{{ conv.title or 'New Chat' }}">
        {{ conv.title or 'New Chat' }}
    </button>
    {% endfor %}
{% else %}
    <p class="text-muted">No conversations yet</p>
{% endif %}
@@ -1,6 +0,0 @@
{% for name, info in loops.items() %}
<div class="loop-badge {{ info.class }}">
    <span class="loop-name">{{ name }}</span>
    <span class="loop-ago">{{ info.ago }}</span>
</div>
{% endfor %}
@@ -1,6 +0,0 @@
{% for notif in notifications %}
<div class="notif-item notif-{{ notif.category or 'info' }}">
    <span class="notif-cat">{{ notif.category }}</span>
    {{ notif.message }}
</div>
{% endfor %}
@@ -1,27 +0,0 @@
{% if tasks %}
<table class="task-table">
    <thead>
        <tr>
            <th>Task</th>
            <th>Customer</th>
            <th>Status</th>
            <th>Due</th>
        </tr>
    </thead>
    <tbody>
        {% for task in tasks %}
        <tr>
            <td>
                {% if task.url %}<a href="{{ task.url }}" target="_blank" rel="noopener">{{ task.name }}</a>
                {% else %}{{ task.name }}{% endif %}
            </td>
            <td>{{ task.custom_fields.get('Client', 'N/A') if task.custom_fields else 'N/A' }}</td>
            <td><span class="status-badge status-{{ task.status|lower|replace(' ', '-') }}">{{ task.status }}</span></td>
            <td>{{ task.due_display or '-' }}</td>
        </tr>
        {% endfor %}
    </tbody>
</table>
{% else %}
<p class="text-muted">No tasks</p>
{% endif %}
@@ -105,7 +105,6 @@ class ToolRegistry:
         self.db = db
         self.agent = agent
         self.agent_registry = None  # set after multi-agent setup
-        self.scheduler = None  # set after scheduler creation
         self._discover_tools()
 
     def _discover_tools(self):
@@ -159,7 +158,6 @@ class ToolRegistry:
             "agent": self.agent,
             "memory": self.agent._memory,
             "agent_registry": self.agent_registry,
-            "scheduler": self.scheduler,
         }
         # Pass scheduler-injected metadata through ctx (not LLM-visible)
         if "clickup_task_id" in args:
@@ -347,7 +347,7 @@ def submit_autocora_jobs(target_date: str = "", ctx: dict | None = None) -> str:
         client.update_task_status(tid, "automation underway")
 
         submitted.append(group["keyword"])
-        log.info("Submitted AutoCora job: %s -> %s", group["keyword"], job_id)
+        log.info("Submitted AutoCora job: %s → %s", group["keyword"], job_id)
 
     # Build response
     lines = [f"AutoCora submission ({label}):"]
@@ -3,7 +3,6 @@
 from __future__ import annotations
 
 import logging
-from datetime import UTC, datetime
 
 from . import tool
 
@@ -285,79 +284,3 @@ def clickup_reset_task(task_id: str, ctx: dict | None = None) -> str:
         f"Task '{task_id}' reset to '{reset_status}'. "
         f"It will be picked up on the next scheduler poll."
     )
-
-
-def _format_duration(delta) -> str:
-    """Format a timedelta as a human-readable duration string."""
-    total_seconds = int(delta.total_seconds())
-    hours, remainder = divmod(total_seconds, 3600)
-    minutes, seconds = divmod(remainder, 60)
-    if hours:
-        return f"{hours}h {minutes}m {seconds}s"
-    if minutes:
-        return f"{minutes}m {seconds}s"
-    return f"{seconds}s"
-
-
-def _format_ago(iso_str: str | None) -> str:
-    """Format an ISO timestamp as 'Xm ago' relative to now."""
-    if not iso_str:
-        return "never"
-    try:
-        ts = datetime.fromisoformat(iso_str)
-        delta = datetime.now(UTC) - ts
-        total_seconds = int(delta.total_seconds())
-        if total_seconds < 60:
-            return f"{total_seconds}s ago"
-        minutes = total_seconds // 60
-        if minutes < 60:
-            return f"{minutes}m ago"
-        hours = minutes // 60
-        return f"{hours}h {minutes % 60}m ago"
-    except (ValueError, TypeError):
-        return "unknown"
-
-
-@tool(
-    "get_active_tasks",
-    "Show what CheddahBot is actively executing right now. "
-    "Reports running tasks, loop health, and whether it's safe to restart.",
-    category="clickup",
-)
-def get_active_tasks(ctx: dict | None = None) -> str:
-    """Show actively running scheduler tasks and loop health."""
-    scheduler = ctx.get("scheduler") if ctx else None
-    if not scheduler:
-        return "Scheduler not available — cannot check active executions."
-
-    now = datetime.now(UTC)
-    lines = []
-
-    # Active executions
-    active = scheduler.get_active_executions()
-    if active:
-        lines.append(f"**Active Executions ({len(active)}):**")
-        for task_id, info in active.items():
-            duration = _format_duration(now - info["started_at"])
-            lines.append(
-                f"- **{info['name']}** — `{info['tool']}` — "
-                f"running {duration} ({info['thread']} thread)"
-            )
-    else:
-        lines.append("**No tasks actively executing.**")
-
-    # Loop health
-    timestamps = scheduler.get_loop_timestamps()
-    lines.append("")
-    lines.append("**Loop Health:**")
-    for loop_name, ts in timestamps.items():
-        lines.append(f"- {loop_name}: last ran {_format_ago(ts)}")
-
-    # Safe to restart?
-    lines.append("")
-    if active:
-        lines.append(f"**Safe to restart: No** ({len(active)} task(s) actively running)")
-    else:
-        lines.append("**Safe to restart: Yes**")
-
-    return "\n".join(lines)
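The duration formatter removed in the hunk above is self-contained and easy to sanity-check. A minimal standalone copy for reference (the bare `format_duration` name is mine, mirroring the deleted `_format_duration`):

```python
from datetime import timedelta

def format_duration(delta: timedelta) -> str:
    """Render a timedelta as '1h 2m 3s', omitting empty leading units."""
    total_seconds = int(delta.total_seconds())
    hours, remainder = divmod(total_seconds, 3600)
    minutes, seconds = divmod(remainder, 60)
    if hours:
        return f"{hours}h {minutes}m {seconds}s"
    if minutes:
        return f"{minutes}m {seconds}s"
    return f"{seconds}s"

print(format_duration(timedelta(hours=1, minutes=2, seconds=3)))  # 1h 2m 3s
print(format_duration(timedelta(minutes=5)))                      # 5m 0s
print(format_duration(timedelta(seconds=42)))                     # 42s
```

Note the cascading `divmod` calls: leading zero units are dropped, but a zero trailing unit ("5m 0s") is kept, exactly as in the deleted code.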
@@ -77,7 +77,7 @@ def _sync_clickup_outline_ready(ctx: dict | None, task_id: str, outline_path: st
 
     client.add_comment(
         task_id,
-        f"[OUTLINE]CheddahBot generated a content outline.\n\n"
+        f"📝 CheddahBot generated a content outline.\n\n"
         f"Outline saved to: `{outline_path}`\n\n"
         f"Please review and edit the outline, then move this task to "
         f"**outline approved** to trigger the full content write.",
@@ -100,7 +100,7 @@ def _sync_clickup_complete(ctx: dict | None, task_id: str, content_path: str) ->
     config = ctx["config"]
     client.add_comment(
         task_id,
-        f"[DONE]CheddahBot completed the content.\n\n"
+        f"✅ CheddahBot completed the content.\n\n"
         f"Final content saved to: `{content_path}`\n\n"
         f"Ready for internal review.",
     )
@@ -122,7 +122,7 @@ def _sync_clickup_fail(ctx: dict | None, task_id: str, error: str) -> None:
         config = ctx["config"]
         client.add_comment(
             task_id,
-            f"[FAILED]CheddahBot failed during content creation.\n\nError: {error[:2000]}",
+            f"❌ CheddahBot failed during content creation.\n\nError: {error[:2000]}",
         )
         client.update_task_status(task_id, config.clickup.error_status)
     except Exception as e:
@@ -489,7 +489,7 @@ def _build_optimization_prompt(
         f"Generate `{work_dir}/optimization_instructions.md` — a surgical playbook "
         f"for the human editor with these sections:\n\n"
         f"1. **Executive Summary** — one-paragraph overview of optimization opportunity\n"
-        f"2. **Heading Changes** — specific H1/H2/H3 modifications with before/after\n"
+        f"2. **Heading Changes** — specific H1/H2/H3 modifications with before→after\n"
         f"3. **Sections to Expand** — which sections need more content and what to add\n"
         f"4. **Entity Integration Points** — exact locations to weave in missing entities\n"
         f"5. **Meta Tag Updates** — title tag and meta description recommendations\n"
@@ -645,7 +645,7 @@ def _finalize_optimization(
         for name, fpath in found_files.items():
             dest = net_dir / name
             dest.write_bytes(fpath.read_bytes())
-            log.info("Copied %s -> %s", fpath, dest)
+            log.info("Copied %s → %s", fpath, dest)
     except OSError as e:
         log.warning("Could not copy deliverables to network path %s: %s", net_dir, e)
 
@@ -699,7 +699,7 @@ def _sync_clickup_optimization_complete(
 
     # Build comment with validation summary
     comment_parts = [
-        f"[DONE]Optimization pipeline complete for '{keyword}'.\n",
+        f"✅ Optimization pipeline complete for '{keyword}'.\n",
         f"**URL:** {url}\n",
        "**Deliverables attached:**",
     ]
@@ -1018,7 +1018,7 @@ def _run_phase2(
         client.update_task_status(task_id, reset_status)
         client.add_comment(
             task_id,
-            f"[WARNING]Outline file not found for keyword '{keyword}'. "
+            f"⚠️ Outline file not found for keyword '{keyword}'. "
             f"Searched: {outline_path or '(no path saved)'}. "
             f"Please re-run Phase 1 (create_content) to generate a new outline.",
         )
@@ -10,7 +10,6 @@ import logging
 import os
 import re
 import subprocess
-from collections.abc import Callable
 from pathlib import Path
 
 from . import tool
@@ -30,13 +29,6 @@ def _get_blm_dir(ctx: dict | None) -> str:
     return os.getenv("BLM_DIR", "E:/dev/Big-Link-Man")
 
 
-def _get_blm_timeout(ctx: dict | None) -> int:
-    """Get BLM subprocess timeout from config or default (1800s / 30 min)."""
-    if ctx and "config" in ctx:
-        return ctx["config"].timeouts.blm
-    return 1800
-
-
 def _run_blm_command(
     args: list[str], blm_dir: str, timeout: int = 1800
 ) -> subprocess.CompletedProcess:
@@ -272,20 +264,26 @@ def _normalize_for_match(text: str) -> str:
     return text
 
 
-def _fuzzy_keyword_match(a: str, b: str, llm_check: Callable[[str, str], bool] | None = None) -> bool:
-    """Check if two normalized strings match, allowing singular/plural differences.
-
-    Fast path: exact match after normalization.
-    Slow path: ask an LLM if the two keywords are the same aside from plural form.
-    Falls back to False if no llm_check is provided and strings differ.
-    """
+def _fuzzy_keyword_match(a: str, b: str) -> bool:
+    """Check if two normalized strings are a fuzzy match.
+
+    Matches if: exact, substring in either direction, or >80% word overlap.
+    """
     if not a or not b:
         return False
     if a == b:
         return True
-    if llm_check is None:
+    if a in b or b in a:
+        return True
+
+    # Word overlap check
+    words_a = set(a.split())
+    words_b = set(b.split())
+    if not words_a or not words_b:
         return False
-    return llm_check(a, b)
+    overlap = len(words_a & words_b)
+    min_len = min(len(words_a), len(words_b))
+    return overlap / min_len >= 0.8 if min_len > 0 else False
 
 
 def _complete_clickup_task(ctx: dict | None, task_id: str, message: str, status: str = "") -> None:
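The replacement matcher introduced in this hunk can be exercised in isolation. A standalone sketch mirroring the new `_fuzzy_keyword_match` body (the bare `fuzzy_keyword_match` name and the sample keywords are illustrative only):

```python
def fuzzy_keyword_match(a: str, b: str) -> bool:
    """Match if the strings are equal, one contains the other, or at least
    80% of the shorter string's words appear in the longer one."""
    if not a or not b:
        return False
    if a == b:
        return True
    if a in b or b in a:
        return True
    words_a, words_b = set(a.split()), set(b.split())
    if not words_a or not words_b:
        return False
    overlap = len(words_a & words_b)
    return overlap / min(len(words_a), len(words_b)) >= 0.8

print(fuzzy_keyword_match("seo tools", "best seo tools"))                 # True  (substring)
print(fuzzy_keyword_match("best seo tools 2024", "2024 seo tools best"))  # True  (full word overlap)
print(fuzzy_keyword_match("red widget guide", "blue widget manual"))      # False (1/3 overlap)
```

Note that the substring check runs before word overlap, so singular/plural pairs like "service"/"services" still match via containment, preserving most of the behavior the removed LLM check covered.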
@@ -322,7 +320,7 @@ def _fail_clickup_task(ctx: dict | None, task_id: str, error_msg: str) -> None:
     try:
         cu_client.add_comment(
             task_id,
-            f"[FAILED]Link building pipeline failed.\n\nError: {error_msg[:2000]}",
+            f"❌ Link building pipeline failed.\n\nError: {error_msg[:2000]}",
         )
         cu_client.update_task_status(task_id, error_status)
     except Exception as e:
@@ -436,7 +434,7 @@ def run_cora_backlinks(
     # ── Step 1: ingest-cora ──
     _set_status(ctx, f"Step 1/2: Ingesting CORA report for {project_name}...")
     if clickup_task_id:
-        _sync_clickup(ctx, clickup_task_id, "ingest", "[STARTED]Starting Cora Backlinks pipeline...")
+        _sync_clickup(ctx, clickup_task_id, "ingest", "🔄 Starting Cora Backlinks pipeline...")
 
     # Convert branded_plus_ratio from string if needed
     try:
@@ -453,11 +451,10 @@ def run_cora_backlinks(
         cli_flags=cli_flags,
     )
 
-    blm_timeout = _get_blm_timeout(ctx)
     try:
-        ingest_result = _run_blm_command(ingest_args, blm_dir, timeout=blm_timeout)
+        ingest_result = _run_blm_command(ingest_args, blm_dir)
     except subprocess.TimeoutExpired:
-        error = f"ingest-cora timed out after {blm_timeout // 60} minutes"
+        error = "ingest-cora timed out after 30 minutes"
         _set_status(ctx, "")
         if clickup_task_id:
             _fail_clickup_task(ctx, clickup_task_id, error)
@@ -490,7 +487,7 @@ def run_cora_backlinks(
             ctx,
             clickup_task_id,
             "ingest_done",
-            f"[DONE]CORA report ingested. Project ID: {project_id}. Job file: {job_file}",
+            f"✅ CORA report ingested. Project ID: {project_id}. Job file: {job_file}",
         )
 
     # ── Step 2: generate-batch ──
@@ -502,9 +499,9 @@ def run_cora_backlinks(
     gen_args = ["generate-batch", "-j", str(job_path), "--continue-on-error"]
 
     try:
-        gen_result = _run_blm_command(gen_args, blm_dir, timeout=blm_timeout)
+        gen_result = _run_blm_command(gen_args, blm_dir)
     except subprocess.TimeoutExpired:
-        error = f"generate-batch timed out after {blm_timeout // 60} minutes"
+        error = "generate-batch timed out after 30 minutes"
         _set_status(ctx, "")
         if clickup_task_id:
             _fail_clickup_task(ctx, clickup_task_id, error)
@@ -534,7 +531,7 @@ def run_cora_backlinks(
 
     if clickup_task_id:
         summary = (
-            f"[DONE]Cora Backlinks pipeline completed for {project_name}.\n\n"
+            f"✅ Cora Backlinks pipeline completed for {project_name}.\n\n"
             f"Project ID: {project_id}\n"
             f"Keyword: {ingest_parsed['main_keyword']}\n"
             f"Job file: {gen_parsed['job_moved_to'] or job_file}"
@@ -592,11 +589,10 @@ def blm_ingest_cora(
         cli_flags=cli_flags,
     )
 
-    blm_timeout = _get_blm_timeout(ctx)
     try:
-        result = _run_blm_command(ingest_args, blm_dir, timeout=blm_timeout)
+        result = _run_blm_command(ingest_args, blm_dir)
     except subprocess.TimeoutExpired:
-        return f"Error: ingest-cora timed out after {blm_timeout // 60} minutes."
+        return "Error: ingest-cora timed out after 30 minutes."
 
     parsed = _parse_ingest_output(result.stdout)
 
@@ -647,11 +643,10 @@ def blm_generate_batch(
     if debug:
         args.append("--debug")
 
-    blm_timeout = _get_blm_timeout(ctx)
     try:
-        result = _run_blm_command(args, blm_dir, timeout=blm_timeout)
+        result = _run_blm_command(args, blm_dir)
     except subprocess.TimeoutExpired:
-        return f"Error: generate-batch timed out after {blm_timeout // 60} minutes."
+        return "Error: generate-batch timed out after 30 minutes."
 
     parsed = _parse_generate_output(result.stdout)
@@ -574,7 +574,7 @@ def write_press_releases(
         cu_client.update_task_status(clickup_task_id, config.clickup.automation_status)
         cu_client.add_comment(
             clickup_task_id,
-            f"[STARTED]CheddahBot starting press release creation.\n\n"
+            f"🔄 CheddahBot starting press release creation.\n\n"
             f"Topic: {topic}\nCompany: {company_name}",
         )
         log.info("ClickUp task %s set to automation-underway", clickup_task_id)
@@ -752,7 +752,7 @@ def write_press_releases(
         if failed_uploads:
             paths_list = "\n".join(f" - {p}" for p in failed_uploads)
             upload_warning = (
-                f"\n[WARNING]Warning: {len(failed_uploads)} attachment(s) failed to upload. "
+                f"\n⚠️ Warning: {len(failed_uploads)} attachment(s) failed to upload. "
                 f"Files saved locally at:\n{paths_list}"
             )
         cu_client.add_comment(
@@ -855,7 +855,7 @@ def write_press_releases(
         attach_note = f"\n📎 {uploaded_count} file(s) attached." if uploaded_count else ""
         result_text = "\n".join(output_parts)[:3000]
         comment = (
-            f"[DONE]CheddahBot completed this task.\n\n"
+            f"✅ CheddahBot completed this task.\n\n"
             f"Skill: write_press_releases\n"
             f"Result:\n{result_text}{attach_note}"
         )
@@ -1270,7 +1270,7 @@ def submit_press_release(
     if link_list:
         output_parts.append("\n**Links:**")
         for link in link_list:
-            output_parts.append(f' - "{link["anchor"]}" -> {link["url"]}')
+            output_parts.append(f' - "{link["anchor"]}" → {link["url"]}')
 
     if link_warnings:
         output_parts.append("\n**Link warnings:**")
@@ -1,57 +0,0 @@
-"""HTMX + FastAPI web frontend for CheddahBot."""
-
-from __future__ import annotations
-
-import logging
-from pathlib import Path
-from typing import TYPE_CHECKING
-
-from fastapi import FastAPI
-from fastapi.templating import Jinja2Templates
-from starlette.staticfiles import StaticFiles
-
-if TYPE_CHECKING:
-    from ..agent_registry import AgentRegistry
-    from ..config import Config
-    from ..db import Database
-    from ..llm import LLMAdapter
-    from ..notifications import NotificationBus
-    from ..scheduler import Scheduler
-
-log = logging.getLogger(__name__)
-
-_TEMPLATE_DIR = Path(__file__).resolve().parent.parent / "templates"
-_STATIC_DIR = Path(__file__).resolve().parent.parent / "static"
-
-templates = Jinja2Templates(directory=str(_TEMPLATE_DIR))
-
-
-def mount_web_app(
-    app: FastAPI,
-    registry: AgentRegistry,
-    config: Config,
-    llm: LLMAdapter,
-    notification_bus: NotificationBus | None = None,
-    scheduler: Scheduler | None = None,
-    db: Database | None = None,
-):
-    """Mount all web routes and static files onto the FastAPI app."""
-    # Wire dependencies into route modules
-    from . import routes_chat, routes_pages, routes_sse
-    from .routes_chat import router as chat_router
-    from .routes_pages import router as pages_router
-    from .routes_sse import router as sse_router
-
-    routes_pages.setup(registry, config, llm, templates, db=db, scheduler=scheduler)
-    routes_chat.setup(registry, config, llm, db, templates)
-    routes_sse.setup(notification_bus, scheduler, db)
-
-    app.include_router(chat_router)
-    app.include_router(sse_router)
-    # Pages router last (it has catch-all GET /)
-    app.include_router(pages_router)
-
-    # Static files
-    app.mount("/static", StaticFiles(directory=str(_STATIC_DIR)), name="static")
-
-    log.info("Web UI mounted (templates: %s, static: %s)", _TEMPLATE_DIR, _STATIC_DIR)
@@ -1,270 +0,0 @@
-"""Chat routes: send messages, stream responses, manage conversations."""
-
-from __future__ import annotations
-
-import asyncio
-import logging
-import tempfile
-import time
-from pathlib import Path
-from typing import TYPE_CHECKING
-
-from fastapi import APIRouter, Form, Request, UploadFile
-from fastapi.responses import HTMLResponse
-from fastapi.templating import Jinja2Templates
-from sse_starlette.sse import EventSourceResponse
-
-if TYPE_CHECKING:
-    from ..agent_registry import AgentRegistry
-    from ..config import Config
-    from ..db import Database
-    from ..llm import LLMAdapter
-
-log = logging.getLogger(__name__)
-
-router = APIRouter(prefix="/chat")
-
-_registry: AgentRegistry | None = None
-_config: Config | None = None
-_llm: LLMAdapter | None = None
-_db: Database | None = None
-_templates: Jinja2Templates | None = None
-
-# Pending responses: conv_id -> {text, files, timestamp}
-_pending: dict[str, dict] = {}
-
-
-def setup(registry, config, llm, db, templates):
-    global _registry, _config, _llm, _db, _templates
-    _registry = registry
-    _config = config
-    _llm = llm
-    _db = db
-    _templates = templates
-
-
-def _get_agent(name: str):
-    if _registry:
-        return _registry.get(name) or _registry.default
-    return None
-
-
-def _cleanup_pending():
-    """Remove pending entries older than 60s."""
-    now = time.time()
-    expired = [k for k, v in _pending.items() if now - v["timestamp"] > 60]
-    for k in expired:
-        del _pending[k]
-
-
-@router.post("/send")
-async def send_message(
-    request: Request,
-    text: str = Form(""),
-    agent_name: str = Form("default"),
-    conv_id: str = Form(""),
-    files: list[UploadFile] | None = None,
-):
-    """Accept user message, return user bubble HTML + trigger SSE stream."""
-    _cleanup_pending()
-
-    agent = _get_agent(agent_name)
-    if not agent:
-        return HTMLResponse("<div class='error'>Agent not found</div>", status_code=400)
-
-    # Handle file uploads
-    saved_files = []
-    for f in (files or []):
-        if f.filename and f.size and f.size > 0:
-            tmp = Path(tempfile.mkdtemp()) / f.filename
-            content = await f.read()
-            tmp.write_bytes(content)
-            saved_files.append(str(tmp))
-
-    if not text.strip() and not saved_files:
-        return HTMLResponse("")
-
-    # Ensure conversation exists
-    if not conv_id:
-        agent.new_conversation()
-        conv_id = agent.ensure_conversation()
-    else:
-        agent.conv_id = conv_id
-
-    # Build display text
-    display_text = text
-    if saved_files:
-        file_names = [Path(f).name for f in saved_files]
-        display_text += f"\n[Attached: {', '.join(file_names)}]"
-
-    # Stash for SSE stream
-    _pending[conv_id] = {
-        "text": text,
-        "files": saved_files,
-        "timestamp": time.time(),
-        "agent_name": agent_name,
-    }
-
-    # Render user bubble + SSE trigger div
-    user_html = _templates.get_template("partials/chat_message.html").render(
-        role="user", content=display_text
-    )
-    # The SSE trigger div connects to the stream endpoint
-    sse_div = (
-        f'<div id="sse-trigger" '
-        f'hx-ext="sse" '
-        f'sse-connect="/chat/stream/{conv_id}" '
-        f'sse-swap="chunk" '
-        f'hx-target="#assistant-response" '
-        f'hx-swap="beforeend">'
-        f'</div>'
-        f'<div id="assistant-bubble" class="message assistant">'
-        f'<div class="message-avatar">CB</div>'
-        f'<div class="message-body">'
-        f'<div id="assistant-response" class="message-content"></div>'
-        f'</div></div>'
-    )
-
-    headers = {
-        "HX-Trigger-After-Swap": "scrollChat",
-        "HX-Push-Url": f"/?conv={conv_id}",
-    }
-
-    return HTMLResponse(user_html + sse_div, headers=headers)
-
-
-@router.get("/stream/{conv_id}")
-async def stream_response(conv_id: str):
-    """SSE endpoint: stream assistant response chunks."""
-    pending = _pending.pop(conv_id, None)
-    if not pending:
-        async def empty():
-            yield {"event": "done", "data": ""}
-        return EventSourceResponse(empty())
-
-    agent = _get_agent(pending["agent_name"])
-    if not agent:
-        async def error():
-            yield {"event": "chunk", "data": "Agent not found"}
-            yield {"event": "done", "data": ""}
-        return EventSourceResponse(error())
-
-    agent.conv_id = conv_id
-
-    async def generate():
-        loop = asyncio.get_event_loop()
-        queue: asyncio.Queue = asyncio.Queue()
-
-        def run_agent():
-            try:
-                for chunk in agent.respond(pending["text"], files=pending.get("files")):
-                    loop.call_soon_threadsafe(queue.put_nowait, ("chunk", chunk))
-            except Exception as e:
-                log.error("Stream error: %s", e, exc_info=True)
-                loop.call_soon_threadsafe(
-                    queue.put_nowait, ("chunk", f"\n\nError: {e}")
-                )
-            finally:
-                loop.call_soon_threadsafe(queue.put_nowait, ("done", ""))
-
-        # Run agent.respond() in a thread
-        import threading
-        t = threading.Thread(target=run_agent, daemon=True)
-        t.start()
-
-        while True:
-            event, data = await queue.get()
-            if event == "done":
-                yield {"event": "done", "data": conv_id}
-                break
-            yield {"event": "chunk", "data": data}
-
-    return EventSourceResponse(generate())
-
-
-@router.get("/conversations")
-async def list_conversations(agent_name: str = "default"):
-    """Return sidebar conversation list as HTML partial."""
-    agent = _get_agent(agent_name)
-    if not agent:
-        return HTMLResponse("")
-
-    convs = agent.db.list_conversations(limit=50, agent_name=agent_name)
-    html = _templates.get_template("partials/chat_sidebar.html").render(
-        conversations=convs
-    )
-    return HTMLResponse(html)
-
-
-@router.post("/new")
-async def new_conversation(agent_name: str = Form("default")):
-    """Create a new conversation, return empty chat + updated sidebar."""
-    agent = _get_agent(agent_name)
-    if not agent:
-        return HTMLResponse("")
-
-    agent.new_conversation()
-    conv_id = agent.ensure_conversation()
-
-    convs = agent.db.list_conversations(limit=50, agent_name=agent_name)
-    sidebar_html = _templates.get_template("partials/chat_sidebar.html").render(
-        conversations=convs
-    )
-
-    # Return empty chat area + sidebar update via OOB swap
-    html = (
-        f'<div id="chat-messages"></div>'
-        f'<div id="sidebar-conversations" hx-swap-oob="innerHTML">'
-        f'{sidebar_html}</div>'
-    )
-
-    headers = {"HX-Push-Url": f"/?conv={conv_id}"}
-    return HTMLResponse(html, headers=headers)
-
-
-@router.get("/load/{conv_id}")
-async def load_conversation(conv_id: str, agent_name: str = "default"):
-    """Load conversation history as HTML."""
-    agent = _get_agent(agent_name)
-    if not agent:
-        return HTMLResponse("")
-
-    messages = agent.load_conversation(conv_id)
-    parts = []
-    for msg in messages:
-        role = msg.get("role", "")
-        content = msg.get("content", "")
-        if role in ("user", "assistant") and content:
-            parts.append(
-                _templates.get_template("partials/chat_message.html").render(
-                    role=role, content=content
-                )
-            )
-
-    headers = {"HX-Push-Url": f"/?conv={conv_id}"}
-    return HTMLResponse("\n".join(parts), headers=headers)
-
-
-@router.post("/agent/{name}")
-async def switch_agent(name: str):
-    """Switch active agent. Returns updated sidebar via OOB."""
-    agent = _get_agent(name)
-    if not agent:
-        return HTMLResponse("<div class='error'>Agent not found</div>", status_code=400)
-
-    agent.new_conversation()
-    conv_id = agent.ensure_conversation()
-
-    convs = agent.db.list_conversations(limit=50, agent_name=name)
-    sidebar_html = _templates.get_template("partials/chat_sidebar.html").render(
-        conversations=convs
-    )
-
-    html = (
-        f'<div id="chat-messages"></div>'
-        f'<div id="sidebar-conversations" hx-swap-oob="innerHTML">'
-        f'{sidebar_html}</div>'
-    )
-
-    headers = {"HX-Push-Url": f"/?conv={conv_id}"}
-    return HTMLResponse(html, headers=headers)
@@ -1,172 +0,0 @@
"""Page routes: GET / (chat), GET /dashboard, dashboard partials."""

from __future__ import annotations

import logging
from datetime import UTC, datetime
from typing import TYPE_CHECKING

from fastapi import APIRouter, Request
from fastapi.responses import HTMLResponse
from fastapi.templating import Jinja2Templates

if TYPE_CHECKING:
    from ..agent_registry import AgentRegistry
    from ..config import Config
    from ..db import Database
    from ..llm import LLMAdapter
    from ..scheduler import Scheduler

log = logging.getLogger(__name__)

router = APIRouter()

_registry: AgentRegistry | None = None
_config: Config | None = None
_llm: LLMAdapter | None = None
_db: Database | None = None
_scheduler: Scheduler | None = None
_templates: Jinja2Templates | None = None


def setup(registry, config, llm, templates, db=None, scheduler=None):
    global _registry, _config, _llm, _templates, _db, _scheduler
    _registry = registry
    _config = config
    _llm = llm
    _templates = templates
    _db = db
    _scheduler = scheduler


@router.get("/")
async def chat_page(request: Request):
    agent_names = _registry.list_agents() if _registry else []
    agents = []
    for name in agent_names:
        agent = _registry.get(name)
        display = agent.agent_config.display_name if agent else name
        agents.append({"name": name, "display_name": display})

    default_agent = _registry.default_name if _registry else "default"
    chat_model = _config.chat_model if _config else "unknown"
    exec_available = _llm.is_execution_brain_available() if _llm else False
    clickup_enabled = _config.clickup.enabled if _config else False

    return _templates.TemplateResponse("chat.html", {
        "request": request,
        "agents": agents,
        "default_agent": default_agent,
        "chat_model": chat_model,
        "exec_available": exec_available,
        "clickup_enabled": clickup_enabled,
    })


@router.get("/dashboard")
async def dashboard_page(request: Request):
    return _templates.TemplateResponse("dashboard.html", {
        "request": request,
    })


@router.get("/dashboard/pipeline")
async def dashboard_pipeline():
    """Return pipeline panel HTML partial with task data."""
    if not _config or not _config.clickup.enabled:
        return HTMLResponse('<p class="text-muted">ClickUp not configured</p>')

    try:
        from ..api import get_tasks
        data = await get_tasks()
        all_tasks = data.get("tasks", [])
    except Exception as e:
        log.error("Pipeline data fetch failed: %s", e)
        return HTMLResponse(f'<p class="text-err">Error: {e}</p>')

    # Group by work category, then by status
    pipeline_statuses = [
        "to do", "automation underway", "outline review", "internal review", "error",
    ]
    categories = {}  # category -> {status -> [tasks]}
    for t in all_tasks:
        cat = t.get("task_type") or "Other"
        status = t.get("status", "unknown")

        # Only show tasks in pipeline-relevant statuses
        if status not in pipeline_statuses:
            continue

        if cat not in categories:
            categories[cat] = {}
        categories[cat].setdefault(status, []).append(t)

    # Build HTML
    html_parts = []

    # Status summary counts
    total_counts = {}
    for cat_data in categories.values():
        for status, tasks in cat_data.items():
            total_counts[status] = total_counts.get(status, 0) + len(tasks)

    if total_counts:
        html_parts.append('<div class="pipeline-stats">')
        for status in pipeline_statuses:
            count = total_counts.get(status, 0)
            html_parts.append(
                f'<div class="pipeline-stat">'
                f'<div class="stat-count">{count}</div>'
                f'<div class="stat-label">{status}</div>'
                f'</div>'
            )
        html_parts.append('</div>')

    # Per-category tables
    for cat_name in sorted(categories.keys()):
        cat_data = categories[cat_name]
        all_cat_tasks = []
        for status in pipeline_statuses:
            all_cat_tasks.extend(cat_data.get(status, []))

        if not all_cat_tasks:
            continue

        html_parts.append(f'<div class="pipeline-group"><h4>{cat_name} ({len(all_cat_tasks)})</h4>')
        html_parts.append('<table class="task-table"><thead><tr>'
                          '<th>Task</th><th>Customer</th><th>Status</th><th>Due</th>'
                          '</tr></thead><tbody>')

        for task in all_cat_tasks:
            name = task.get("name", "")
            url = task.get("url", "")
            customer = (task.get("custom_fields") or {}).get("Client", "N/A")
            status = task.get("status", "")
            status_class = "status-" + status.replace(" ", "-")

            # Format due date
            due_display = "-"
            due_raw = task.get("due_date")
            if due_raw:
                try:
                    due_dt = datetime.fromtimestamp(int(due_raw) / 1000, tz=UTC)
                    due_display = due_dt.strftime("%b %d")
                except (ValueError, TypeError, OSError):
                    pass

            name_cell = (
                f'<a href="{url}" target="_blank">{name}</a>' if url else name
            )

            html_parts.append(
                f'<tr><td>{name_cell}</td><td>{customer}</td>'
                f'<td><span class="status-badge {status_class}">{status}</span></td>'
                f'<td>{due_display}</td></tr>'
            )

        html_parts.append('</tbody></table></div>')

    if not html_parts:
        return HTMLResponse('<p class="text-muted">No active pipeline tasks</p>')

    return HTMLResponse('\n'.join(html_parts))
@@ -1,94 +0,0 @@
"""SSE routes for live dashboard updates."""

from __future__ import annotations

import asyncio
import json
import logging
from datetime import datetime
from typing import TYPE_CHECKING

from fastapi import APIRouter
from sse_starlette.sse import EventSourceResponse

if TYPE_CHECKING:
    from ..db import Database
    from ..notifications import NotificationBus
    from ..scheduler import Scheduler

log = logging.getLogger(__name__)

router = APIRouter(prefix="/sse")

_notification_bus: NotificationBus | None = None
_scheduler: Scheduler | None = None
_db: Database | None = None


def setup(notification_bus, scheduler, db):
    global _notification_bus, _scheduler, _db
    _notification_bus = notification_bus
    _scheduler = scheduler
    _db = db


@router.get("/notifications")
async def sse_notifications():
    """Stream new notifications as they arrive."""
    listener_id = f"sse-notif-{id(asyncio.current_task())}"

    # Subscribe to notification bus
    queue: asyncio.Queue = asyncio.Queue()
    loop = asyncio.get_event_loop()

    if _notification_bus:
        def on_notify(msg, cat):
            loop.call_soon_threadsafe(
                queue.put_nowait, {"message": msg, "category": cat}
            )
        _notification_bus.subscribe(listener_id, on_notify)

    async def generate():
        try:
            while True:
                try:
                    notif = await asyncio.wait_for(queue.get(), timeout=30)
                    yield {
                        "event": "notification",
                        "data": json.dumps(notif),
                    }
                except TimeoutError:
                    yield {"event": "heartbeat", "data": ""}
        finally:
            if _notification_bus:
                _notification_bus.unsubscribe(listener_id)

    return EventSourceResponse(generate())


@router.get("/loops")
async def sse_loops():
    """Push loop timestamps + active executions every 15s."""
    async def generate():
        while True:
            data = {"loops": {}, "executions": {}}
            if _scheduler:
                ts = _scheduler.get_loop_timestamps()
                data["loops"] = ts
                # Serialize active executions (datetime -> str)
                raw_exec = _scheduler.get_active_executions()
                execs = {}
                for tid, info in raw_exec.items():
                    execs[tid] = {
                        "name": info.get("name", ""),
                        "tool": info.get("tool", ""),
                        "started_at": info["started_at"].isoformat()
                        if isinstance(info.get("started_at"), datetime)
                        else str(info.get("started_at", "")),
                        "thread": info.get("thread", ""),
                    }
                data["executions"] = execs
            yield {"event": "loops", "data": json.dumps(data)}
            await asyncio.sleep(15)

    return EventSourceResponse(generate())
@@ -99,7 +99,7 @@ clickup:
 link_building:
   blm_dir: "E:/dev/Big-Link-Man"
   watch_folder: "//PennQnap1/SHARE1/cora-inbox"
-  watch_interval_minutes: 10
+  watch_interval_minutes: 60
   default_branded_plus_ratio: 0.7

 # AutoCora job submission

cora-link.md
@@ -1,287 +0,0 @@
# Link Building Agent Plan

## Context

CheddahBot needs a link building agent that orchestrates the external Big-Link-Man CLI tool (`E:/dev/Big-Link-Man/`). The current workflow is manual: run Cora on another machine → get .xlsx → manually run `main.py ingest-cora` → manually run `main.py generate-batch`. This agent automates steps 2 and 3, triggered by folder watching, ClickUp tasks, or chat commands. It must be expandable for future link building methods (MCP server path, ingest-simple, etc.).

## Decisions Made

- **Watch folder**: `Z:/cora-inbox` (network drive, Cora machine accessible)
- **File→task matching**: Fuzzy match .xlsx filename stem against ClickUp task's `Keyword` custom field
- **New ClickUp field "LB Method"**: Dropdown with initial option "Cora Backlinks" (more added later)
- **Dashboard**: API endpoint + NotificationBus events only (no frontend work — separate project)
- **Sidecar files**: Not needed — all metadata comes from the matching ClickUp task
- **Tool naming**: Orchestrator pattern — `run_link_building` is a thin dispatcher that reads `LB Method` and routes to the specific pipeline tool (e.g., `run_cora_backlinks`). Future link building methods get their own tools and slot into the orchestrator.

## Files to Create

### 1. `cheddahbot/tools/linkbuilding.py` — Main tool module

Four `@tool`-decorated functions + private helpers:

**`run_link_building(lb_method="", xlsx_path="", project_name="", money_site_url="", branded_plus_ratio=0.7, custom_anchors="", cli_flags="", ctx=None)`**
- **Orchestrator/dispatcher** — reads `lb_method` (from ClickUp "LB Method" field or chat) and routes to the correct pipeline tool
- If `lb_method` is "Cora Backlinks" or empty (default): calls `run_cora_backlinks()`
- Future: if `lb_method` is "MCP Link Building": calls `run_mcp_link_building()` (not yet implemented)
- Passes all other args through to the sub-tool
- This is what the ClickUp skill_map always routes to

**`run_cora_backlinks(xlsx_path, project_name, money_site_url, branded_plus_ratio=0.7, custom_anchors="", cli_flags="", ctx=None)`**
- The actual Cora pipeline — runs ingest-cora → generate-batch
- Step 1: Build CLI args, call `_run_blm_command(["ingest-cora", ...])`, parse stdout for job file path
- Step 2: Call `_run_blm_command(["generate-batch", "-j", job_file, "--continue-on-error"])`
- Updates KV store state and posts ClickUp comments at each step (following press_release.py pattern)
- Returns `## ClickUp Sync` in output to signal scheduler that sync was handled internally
- Can also be called directly from chat for explicit Cora runs

**`blm_ingest_cora(xlsx_path, project_name, money_site_url, branded_plus_ratio=0.7, custom_anchors="", cli_flags="", ctx=None)`**
- Standalone ingest — runs ingest-cora only, returns project ID and job file path
- For cases where the user wants to ingest but not generate yet

**`blm_generate_batch(job_file, continue_on_error=True, debug=False, ctx=None)`**
- Standalone generate — runs generate-batch only on an existing job file
- For re-running generation or running a manually created job

**Private helpers:**
- `_run_blm_command(args, timeout=1800)` — subprocess wrapper, runs `uv run python main.py <args>` from BLM_DIR, injects `-u`/`-p` from `BLM_USERNAME`/`BLM_PASSWORD` env vars
- `_parse_ingest_output(stdout)` — regex-extract project_id + job_file path
- `_parse_generate_output(stdout)` — extract completion stats
- `_build_ingest_args(...)` — construct CLI argument list from tool params
- `_set_status(ctx, message)` — write pipeline status to KV store (for UI polling)
- `_sync_clickup(ctx, task_id, step, message)` — post comment + update state

**Critical: always pass the `-m` flag** to ingest-cora to prevent an interactive stdin prompt from blocking the subprocess.
### 2. `skills/linkbuilding.md` — Skill file

YAML frontmatter linking to `[run_link_building, run_cora_backlinks, blm_ingest_cora, blm_generate_batch, scan_cora_folder]` tools and `[link_builder, default]` agents. Markdown body describes when to use, default flags, workflow steps.

### 3. `tests/test_linkbuilding.py` — Test suite (~40 tests)

All tests mock `subprocess.run` — never call Big-Link-Man. Categories:
- Output parser unit tests (`_parse_ingest_output`, `_parse_generate_output`)
- CLI arg builder tests (all flag combinations, missing required params)
- Full pipeline integration (happy path, ingest failure, generate failure)
- ClickUp state machine (executing → completed, executing → failed)
- Folder watcher scan logic (new files, skip processed, missing ClickUp match)
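The mocking approach can be sketched like this; the command and the stdout string are illustrative, not Big-Link-Man's real interface:

```python
import subprocess
from unittest import mock


def fake_run(cmd, **kwargs):
    # Pretend the CLI succeeded and emitted one parseable line
    return subprocess.CompletedProcess(cmd, 0, stdout="Project ID: 42\n", stderr="")


with mock.patch("subprocess.run", side_effect=fake_run) as patched:
    result = subprocess.run(["uv", "run", "python", "main.py", "ingest-cora", "-m"])
    assert result.returncode == 0
    assert "Project ID" in result.stdout
    patched.assert_called_once()
```

In the real suite the patched call sits behind `_run_blm_command`, so each test asserts on the exact argument list the wrapper builds.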
## Files to Modify

### 4. `cheddahbot/config.py` — Add LinkBuildingConfig

```python
@dataclass
class LinkBuildingConfig:
    blm_dir: str = "E:/dev/Big-Link-Man"
    watch_folder: str = ""  # empty = disabled
    watch_interval_minutes: int = 60
    default_branded_plus_ratio: float = 0.7
```

Add a `link_building: LinkBuildingConfig` field to the `Config` dataclass. Add a YAML loading block in `load_config()` (same pattern as memory/scheduler/shell). Add an env var override for `BLM_DIR`.
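A minimal sketch of that loading block, assuming `raw` is the already-parsed YAML dict (the helper name is illustrative, and the dataclass is repeated so the sketch runs on its own):

```python
import os
from dataclasses import dataclass


@dataclass
class LinkBuildingConfig:
    blm_dir: str = "E:/dev/Big-Link-Man"
    watch_folder: str = ""  # empty = disabled
    watch_interval_minutes: int = 60
    default_branded_plus_ratio: float = 0.7


def load_link_building(raw: dict) -> LinkBuildingConfig:
    """Build the section from parsed YAML, mirroring the existing pattern."""
    section = raw.get("link_building") or {}
    cfg = LinkBuildingConfig(
        blm_dir=section.get("blm_dir", "E:/dev/Big-Link-Man"),
        watch_folder=section.get("watch_folder", ""),
        watch_interval_minutes=int(section.get("watch_interval_minutes", 60)),
        default_branded_plus_ratio=float(
            section.get("default_branded_plus_ratio", 0.7)
        ),
    )
    # Env var override, as called for above
    cfg.blm_dir = os.environ.get("BLM_DIR", cfg.blm_dir)
    return cfg
```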
### 5. `config.yaml` — Three additions

**New top-level section:**
```yaml
link_building:
  blm_dir: "E:/dev/Big-Link-Man"
  watch_folder: "Z:/cora-inbox"
  watch_interval_minutes: 60
  default_branded_plus_ratio: 0.7
```

**New skill_map entry under clickup:**
```yaml
"Link Building":
  tool: "run_link_building"
  auto_execute: false              # Cora Backlinks triggered by folder watcher, not scheduler
  complete_status: "complete"      # Override: use "complete" instead of "internal review"
  error_status: "internal review"  # On failure, move to internal review
  field_mapping:
    lb_method: "LB Method"
    project_name: "task_name"
    money_site_url: "IMSURL"
    custom_anchors: "CustomAnchors"
    branded_plus_ratio: "BrandedPlusRatio"
    cli_flags: "CLIFlags"
    xlsx_path: "CoraFile"
```

**New agent:**
```yaml
- name: link_builder
  display_name: Link Builder
  tools: [run_link_building, run_cora_backlinks, blm_ingest_cora, blm_generate_batch, scan_cora_folder, delegate_task, remember, search_memory]
  memory_scope: ""
```

### 6. `cheddahbot/scheduler.py` — Add folder watcher (4th daemon thread)

**New thread `_folder_watch_loop`** alongside the existing poll, heartbeat, and ClickUp threads:
- Starts if `config.link_building.watch_folder` is non-empty
- Runs every `watch_interval_minutes` (default 60)
- `_scan_watch_folder()` globs `*.xlsx` in the watch folder
- For each file, checks KV store `linkbuilding:watched:(unknown)` — skip if already processed
- **Fuzzy-matches the filename stem against ClickUp tasks** with `LB Method = "Cora Backlinks"` and status "to do":
  - Queries ClickUp for Link Building tasks
  - Compares the normalized filename stem against each task's `Keyword` custom field
  - If a match is found: extracts money_site_url from the IMSURL field, cli_flags from the CLIFlags field, etc.
  - If no match: logs a warning, marks the file as "unmatched" in the KV store, sends a notification asking the user to create/link a ClickUp task
- On match: executes the `run_link_building` tool with args from the ClickUp task fields
- On completion: moves the .xlsx to the `Z:/cora-inbox/processed/` subfolder, updates KV state
- On failure: updates KV state with the error, notifies via NotificationBus
**File handling after the pipeline:**
- On success: the .xlsx moves from `Z:/cora-inbox/` → `Z:/cora-inbox/processed/`
- On failure: the .xlsx stays in `Z:/cora-inbox/` (the KV store marks it as failed so the watcher doesn't retry automatically; the user can reset the KV entry to retry)

**Also adds a `scan_cora_folder` tool** (can live in linkbuilding.py):
- Chat-invocable utility for the agent to check what's in the watch folder
- Returns a list of unprocessed .xlsx files with ClickUp match status
- Internal agent tool, not a dashboard concern

### 7. `cheddahbot/clickup.py` — Add field creation method

Add a `create_custom_field(list_id, name, field_type, type_config=None)` method that calls `POST /list/{list_id}/field`. Used by the setup tool to auto-create custom fields across lists.
### 8. `cheddahbot/__main__.py` — Add API endpoint

Add before the Gradio mount:
```python
@fastapi_app.get("/api/linkbuilding/status")
async def linkbuilding_status():
    """Return link building status for dashboard consumption."""
    # Returns:
    # {
    #   "pending_cora_runs": [
    #     {"keyword": "precision cnc machining", "url": "https://...",
    #      "client": "Chapter 2", "task_id": "abc123"},
    #     ...
    #   ],
    #   "in_progress": [...],  # Currently executing pipelines
    #   "completed": [...],    # Recently completed (last 7 days)
    #   "failed": [...]        # Failed tasks needing attention
    # }
    ...
```

The `pending_cora_runs` section is the key dashboard data: it queries ClickUp for "to do" tasks with Work Category="Link Building" and LB Method="Cora Backlinks", and returns each task's `Keyword` field and `IMSURL` (copiable URL) so the user can see exactly which Cora reports need to be run.

Also push link building events to NotificationBus (category="linkbuilding") at each pipeline step for future real-time dashboard support.

No other `__main__.py` changes needed — agent wiring is automatic from config.yaml.

## ClickUp Custom Fields (Auto-Created)

New custom fields to be created programmatically:

| Field | Type | Purpose |
|-------|------|---------|
| `LB Method` | Dropdown | Link building subtype. Initial option: "Cora Backlinks" |
| `Keyword` | Short Text | Target keyword (used for file matching) |
| `CoraFile` | Short Text | Path to the .xlsx file (optional, set by the agent after a file match) |
| `CustomAnchors` | Short Text | Comma-separated anchor text overrides |
| `BrandedPlusRatio` | Short Text | Override for the `-bp` flag (e.g., "0.7") |
| `CLIFlags` | Short Text | Raw additional CLI flags (e.g., "-r 5 -t 0.3") |

Fields that already exist and will be reused: `Client`, `IMSURL`, `Work Category` (add a "Link Building" option).

### Auto-creation approach

- Add a `create_custom_field(list_id, name, type, type_config=None)` method to `cheddahbot/clickup.py` — calls `POST /list/{list_id}/field`
- Add a `setup_linkbuilding_fields` tool (category="linkbuilding") that:
  1. Gets all list IDs in the space
  2. For each list, checks whether the fields already exist (via `get_custom_fields`)
  3. Creates missing fields via the new API method
  4. For the `LB Method` dropdown, creates it with a `type_config` containing the "Cora Backlinks" option
  5. For `Work Category`, adds a "Link Building" option if missing
- This tool runs once during initial setup, or can be re-run if new lists are added

## Data Flow & Status Lifecycle

### Primary Trigger: Folder Watcher (Cora Backlinks)

The folder watcher is the main trigger for Cora Backlinks. The ClickUp scheduler does NOT auto-execute these — it can't, because the .xlsx doesn't exist until the user runs Cora.

```
1. ClickUp task created:
   Work Category="Link Building", LB Method="Cora Backlinks", status="to do"
   Fields filled: Client, IMSURL, Keyword, CLIFlags, BrandedPlusRatio, etc.
   → Appears on dashboard as "needs Cora run"

2. User runs Cora manually, drops .xlsx in Z:/cora-inbox

3. Folder watcher (_scan_watch_folder, runs every 60 min):
   → Finds precision-cnc-machining.xlsx
   → Fuzzy matches "precision cnc machining" against Keyword field on ClickUp "to do" Link Building tasks
   → Match found → extracts metadata from ClickUp task (IMSURL, CLIFlags, etc.)
   → Sets CoraFile field on the ClickUp task to the file path
   → Moves task to "in progress"
   → Posts comment: "Starting Cora Backlinks pipeline..."

4. Pipeline runs:
   → Step 1: ingest-cora → comment: "CORA report ingested. Job file: jobs/xxx.json"
   → Step 2: generate-batch → comment: "Content generation complete. X articles across Y tiers."

5. On success:
   → Move task to "complete"
   → Post summary comment with stats
   → Move .xlsx to Z:/cora-inbox/processed/

6. On failure:
   → Move task to "internal review"
   → Post error comment with details
   → .xlsx stays in Z:/cora-inbox (can retry)
```

### Secondary Trigger: Chat

```
User: "Run link building for Z:/cora-inbox/precision-cnc-machining.xlsx"
→ Chat brain calls run_cora_backlinks (or run_link_building with explicit lb_method)
→ Tool auto-looks up the matching ClickUp task via the Keyword field (if it exists)
→ Same pipeline + ClickUp sync as above
→ If no ClickUp match: runs the pipeline without ClickUp tracking, returns results to chat only
```

### Future Trigger: ClickUp Scheduler (other LB Methods)

Future link building methods (MCP, etc.) that don't need a .xlsx CAN be auto-executed by the ClickUp scheduler. The `run_link_building` orchestrator checks `lb_method`:
- "Cora Backlinks" → requires xlsx_path, skips if empty (the folder watcher handles these)
- Future methods → can execute directly from ClickUp task data

### ClickUp Skill Map Note

The skill_map entry for "Link Building" exists primarily for **field mapping reference** (so the folder watcher and chat know which ClickUp fields map to which tool params). The ClickUp scheduler will discover these tasks, but `run_link_building` will skip Cora Backlinks that have no xlsx_path — they're waiting for the folder watcher.
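For illustration, resolving a `field_mapping` block into tool kwargs might look like this (the function name and task dict shape are hypothetical; the `task_name` special case mirrors the project_name mapping above):

```python
def resolve_tool_kwargs(field_mapping: dict, task: dict) -> dict:
    """Translate ClickUp fields into tool kwargs per a skill_map entry.

    `task` is assumed to be a dict with a 'name' key and a
    'custom_fields' dict; the sentinel value "task_name" maps the
    task's own name rather than a custom field.
    """
    kwargs = {}
    fields = task.get("custom_fields", {})
    for param, source in field_mapping.items():
        if source == "task_name":
            kwargs[param] = task.get("name", "")
        else:
            kwargs[param] = fields.get(source, "")
    return kwargs
```

Missing fields resolve to empty strings, which matches the orchestrator's "skip if xlsx_path is empty" behavior.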
## Implementation Order

1. **Config** — Add `LinkBuildingConfig` to config.py, add the `link_building:` section to config.yaml, add the `link_builder` agent to config.yaml
2. **Core tools** — Create `cheddahbot/tools/linkbuilding.py` with `_run_blm_command`, the parsers, the `run_link_building` orchestrator, and the `run_cora_backlinks` pipeline
3. **Standalone tools** — Add `blm_ingest_cora` and `blm_generate_batch`
4. **Tests** — Create `tests/test_linkbuilding.py`, verify with `uv run pytest tests/test_linkbuilding.py -v`
5. **ClickUp field creation** — Add `create_custom_field` to clickup.py, add the `setup_linkbuilding_fields` tool
6. **ClickUp integration** — Add the skill_map entry, add ClickUp state tracking to the tools
7. **Folder watcher** — Add `_folder_watch_loop` to scheduler.py, add the `scan_cora_folder` tool
8. **API endpoint** — Add `/api/linkbuilding/status` to `__main__.py`
9. **Skill file** — Create `skills/linkbuilding.md`
10. **ClickUp setup** — Run `setup_linkbuilding_fields` to auto-create custom fields across all lists
11. **Full test run** — `uv run pytest -v --no-cov`

## Verification

1. **Unit tests**: `uv run pytest tests/test_linkbuilding.py -v` — all pass with mocked subprocess
2. **Full suite**: `uv run pytest -v --no-cov` — no regressions
3. **Lint**: `uv run ruff check .` + `uv run ruff format .`
4. **Manual e2e**: Drop a real .xlsx in Z:/cora-inbox, verify ingest-cora runs, the job JSON is created, and generate-batch runs
5. **ClickUp e2e**: Create a Link Building task in ClickUp with the proper fields, wait for the scheduler poll, verify execution
6. **Chat e2e**: Ask CheddahBot to "run link building for [keyword]" via the chat UI
7. **API check**: Hit `http://localhost:7860/api/linkbuilding/status` and verify the data returned

## Key Reference Files

- `cheddahbot/tools/press_release.py` — Reference pattern for a multi-step pipeline tool
- `cheddahbot/scheduler.py:55-76` — Where to add the 4th daemon thread
- `cheddahbot/config.py:108-200` — load_config() pattern for new config sections
- `E:/dev/Big-Link-Man/docs/CLI_COMMAND_REFERENCE.md` — Full CLI reference
- `E:/dev/Big-Link-Man/src/cli/commands.py` — Exact output formats to parse
@@ -1,721 +0,0 @@
# CheddahBot Architecture

## System Overview

CheddahBot is a personal AI assistant built in Python. It exposes a Gradio-based
web UI, routes user messages through an agent loop backed by a model-agnostic LLM
adapter, persists conversations in SQLite, maintains a 4-layer memory system with
optional semantic search, and provides an extensible tool registry that the LLM
can invoke mid-conversation. A background scheduler handles cron-based tasks and
periodic heartbeat checks.

### Data Flow Diagram

```
User (browser)
      |
      v
+-----------+      +------------+      +--------------+
| Gradio UI | ---> |   Agent    | ---> | LLM Adapter  |
|  (ui.py)  |      | (agent.py) |      |   (llm.py)   |
+-----------+      +-----+------+      +------+-------+
                         |                    |
            +------------+-------+     +------+---------+
            |            |       |     | Claude CLI     |
            v            v       v     | OpenRouter     |
      +---------+  +---------+ +----+  | Ollama         |
      | Router  |  |  Tools  | | DB |  | LM Studio      |
      |(router) |  |(tools/) | |(db)|  +----------------+
      +----+----+  +----+----+ +----+
           |            |
      +----+-----+ +----+----+
      | Identity | | Memory  |
      | SOUL.md  | | System  |
      | USER.md  | |(memory) |
      +----------+ +---------+
```

1. The user submits text (or voice / files) through the Gradio interface.
2. `ui.py` hands the message to `Agent.respond()`.
3. The agent stores the user message in SQLite, builds a system prompt via
   `router.py` (loading identity files and memory context), and formats the
   conversation history.
4. The agent sends messages to `LLMAdapter.chat()`, which dispatches to the
   correct provider backend.
5. The LLM response streams back. If it contains tool-call requests, the agent
   executes them through `ToolRegistry.execute()`, appends the results, and loops
   back to step 4 (up to 10 iterations).
6. The final assistant response is stored in the database and streamed to the UI.
7. After responding, the agent checks whether the conversation has exceeded the
   flush threshold; if so, the memory system summarizes older messages into the
   daily log.
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Module-by-Module Breakdown

### `__main__.py` -- Entry Point

**File:** `cheddahbot/__main__.py`

Orchestrates startup in this order:

1. `load_config()` -- loads configuration from env vars / YAML / defaults.
2. `Database(config.db_path)` -- opens (or creates) the SQLite database.
3. `LLMAdapter(...)` -- initializes the model-agnostic LLM client.
4. `Agent(config, db, llm)` -- creates the core agent.
5. `MemorySystem(config, db)` -- initializes the memory system and injects it
   into the agent via `agent.set_memory()`.
6. `ToolRegistry(config, db, agent)` -- auto-discovers and loads all tool
   modules, then injects via `agent.set_tools()`.
7. `Scheduler(config, db, agent)` -- starts two daemon threads (task poller and
   heartbeat).
8. `create_ui(agent, config, llm)` -- builds the Gradio Blocks app and launches
   it on the configured host/port.

Each subsystem (memory, tools, scheduler) is wrapped in a try/except so the
application degrades gracefully if optional dependencies are missing.

---

### `config.py` -- Configuration

**File:** `cheddahbot/config.py`

Defines four dataclasses:

| Dataclass | Key Fields |
|-------------------|---------------------------------------------------------------|
| `Config` | `default_model`, `host`, `port`, `ollama_url`, `lmstudio_url`, `openrouter_api_key`, plus derived paths (`root_dir`, `data_dir`, `identity_dir`, `memory_dir`, `skills_dir`, `db_path`) |
| `MemoryConfig` | `max_context_messages` (50), `flush_threshold` (40), `embedding_model` ("all-MiniLM-L6-v2"), `search_top_k` (5) |
| `SchedulerConfig` | `heartbeat_interval_minutes` (30), `poll_interval_seconds` (60) |
| `ShellConfig` | `blocked_commands`, `require_approval` (False) |

`load_config()` applies three layers of configuration in priority order:

1. Dataclass defaults (lowest priority).
2. `config.yaml` at the project root (middle priority).
3. Environment variables with the `CHEDDAH_` prefix, plus `OPENROUTER_API_KEY`
   (highest priority).

The function also ensures required data directories exist on disk.

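A minimal sketch of the three-layer precedence, using a plain dict to stand in for the parsed YAML file; only the real `host` and `port` fields are modeled, and the loader shown here is a simplification of the real one:

```python
from dataclasses import dataclass, fields

@dataclass
class Config:
    host: str = "127.0.0.1"   # layer 1: dataclass default
    port: int = 7860

def load_config(yaml_values: dict, env: dict) -> Config:
    cfg = Config()
    for f in fields(Config):
        # Layer 2: config.yaml values override dataclass defaults
        if f.name in yaml_values:
            setattr(cfg, f.name, yaml_values[f.name])
        # Layer 3: CHEDDAH_-prefixed env vars win over everything
        key = f"CHEDDAH_{f.name.upper()}"
        if key in env:
            setattr(cfg, f.name, f.type(env[key]))
    return cfg

cfg = load_config({"host": "0.0.0.0", "port": 9000}, {"CHEDDAH_PORT": "8080"})
print(cfg.host, cfg.port)
```

Note that the env layer runs last, so `CHEDDAH_PORT` beats the YAML `port` value.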
---

### `db.py` -- Database Layer

**File:** `cheddahbot/db.py`

A thin wrapper around SQLite using thread-local connections (one connection per
thread), WAL journal mode, and foreign keys.

**Key methods:**

- `create_conversation(conv_id, title)` -- insert a new conversation row.
- `list_conversations(limit)` -- return recent conversations ordered by
  `updated_at`.
- `add_message(conv_id, role, content, ...)` -- insert a message and touch the
  conversation's `updated_at`.
- `get_messages(conv_id, limit)` -- return messages in chronological order.
- `count_messages(conv_id)` -- count messages for flush-threshold checks.
- `add_scheduled_task(name, prompt, schedule)` -- persist a scheduled task.
- `get_due_tasks()` -- return tasks whose `next_run` is in the past or NULL.
- `update_task_next_run(task_id, next_run)` -- update the next execution time.
- `log_task_run(task_id, result, error)` -- record the outcome of a task run.
- `kv_set(key, value)` / `kv_get(key)` -- generic key-value store.

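The thread-local connection pattern can be sketched like this, with the schema trimmed to just the `kv_store` table (the real class creates all five tables; this is a simplified stand-in):

```python
import sqlite3
import threading

class Database:
    def __init__(self, path: str):
        self.path = path
        self._local = threading.local()  # one connection per thread

    @property
    def conn(self) -> sqlite3.Connection:
        if not hasattr(self._local, "conn"):
            c = sqlite3.connect(self.path)
            c.execute("PRAGMA journal_mode=WAL")
            c.execute("PRAGMA foreign_keys=ON")
            c.execute("CREATE TABLE IF NOT EXISTS kv_store "
                      "(key TEXT PRIMARY KEY, value TEXT)")
            self._local.conn = c
        return self._local.conn

    def kv_set(self, key: str, value: str) -> None:
        self.conn.execute("INSERT OR REPLACE INTO kv_store VALUES (?, ?)",
                          (key, value))
        self.conn.commit()

    def kv_get(self, key: str):
        row = self.conn.execute("SELECT value FROM kv_store WHERE key = ?",
                                (key,)).fetchone()
        return row[0] if row else None

db = Database(":memory:")
db.kv_set("greeting", "hello")
```

Because the connection lives in `threading.local()`, the scheduler threads and Gradio request threads never share a cursor.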
---

### `agent.py` -- Core Agent Loop

**File:** `cheddahbot/agent.py`

Contains the `Agent` class, the central coordinator.

**Key members:**

- `conv_id` -- current conversation ID (a 12-character hex string).
- `_memory` -- optional `MemorySystem` reference.
- `_tools` -- optional `ToolRegistry` reference.

**Primary method: `respond(user_input, files)`**

This is a Python generator that yields text chunks for streaming. The detailed
flow is described in the next section.

**Helper: `respond_to_prompt(prompt)`**

Non-streaming wrapper that collects all chunks and returns a single string. Used
by the scheduler and heartbeat for internal prompts.

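The relationship between the two methods can be sketched in a few lines; the canned chunks are hypothetical stand-ins for real LLM output:

```python
def respond(user_input):
    # Streaming path: yield chunks as they arrive from the LLM.
    for chunk in ("Hello", ", ", "world"):
        yield chunk

def respond_to_prompt(prompt):
    # Non-streaming wrapper: collect every chunk into one string.
    return "".join(respond(prompt))

result = respond_to_prompt("hi")
```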
---

### `router.py` -- System Prompt Builder

**File:** `cheddahbot/router.py`

Two functions:

1. `build_system_prompt(identity_dir, memory_context, tools_description)` --
   assembles the full system prompt by concatenating these sections, separated by
   horizontal rules:
   - Contents of `identity/SOUL.md`
   - Contents of `identity/USER.md`
   - Memory context string (from the memory system)
   - Tools description listing (from the tool registry)
   - A fixed "Instructions" section with core behavioral directives.

2. `format_messages_for_llm(system_prompt, history, max_messages)` --
   converts raw database rows into the `[{role, content}]` format expected by
   the LLM. The system prompt becomes the first message. Tool results are
   converted to user messages prefixed with `[Tool Result]`. History is trimmed
   to the most recent `max_messages` entries.

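A sketch of the second function, assuming history rows behave like plain dicts with `role` and `content` keys:

```python
def format_messages_for_llm(system_prompt, history, max_messages=50):
    messages = [{"role": "system", "content": system_prompt}]
    for row in history[-max_messages:]:   # trim to the most recent entries
        if row["role"] == "tool":
            # Tool results become user messages with a marker prefix
            messages.append({"role": "user",
                             "content": f"[Tool Result] {row['content']}"})
        else:
            messages.append({"role": row["role"], "content": row["content"]})
    return messages

msgs = format_messages_for_llm(
    "You are CheddahBot.",
    [{"role": "user", "content": "hi"}, {"role": "tool", "content": "42"}],
)
```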
---

### `llm.py` -- LLM Adapter

**File:** `cheddahbot/llm.py`

Described in detail in a dedicated section below.

---

### `memory.py` -- Memory System

**File:** `cheddahbot/memory.py`

Described in detail in a dedicated section below.

---

### `media.py` -- Audio/Video Processing

**File:** `cheddahbot/media.py`

Three utility functions:

- `transcribe_audio(path)` -- Speech-to-text. Tries local Whisper first, then
  falls back to the OpenAI Whisper API.
- `text_to_speech(text, output_path, voice)` -- Text-to-speech via `edge-tts`
  (free, no API key). Defaults to the `en-US-AriaNeural` voice.
- `extract_video_frames(video_path, max_frames)` -- Extracts key frames from
  video using `ffprobe` (to get duration) and `ffmpeg` (to extract JPEG frames).

---

### `scheduler.py` -- Scheduler and Heartbeat

**File:** `cheddahbot/scheduler.py`

Described in detail in a dedicated section below.

---

### `ui.py` -- Gradio Web Interface

**File:** `cheddahbot/ui.py`

Builds a Gradio Blocks application with:

- A model dropdown (populated from `llm.list_available_models()`) with a refresh
  button and a "New Chat" button.
- A `gr.Chatbot` widget for the conversation (500px height, copy buttons).
- A `gr.MultimodalTextbox` supporting text, file upload, and microphone input.
- A "Voice Chat" accordion for record-and-respond audio interaction.
- A "Conversation History" accordion showing past conversations from the
  database.
- A "Settings" accordion with guidance on editing identity and config files.

**Event wiring:**

- Model dropdown change calls `llm.switch_model()`.
- Refresh button re-discovers local models.
- Message submit calls `agent.respond()` in streaming mode, updating the chatbot
  widget with each chunk.
- Audio files attached to messages are transcribed via `media.transcribe_audio()`
  before being sent to the agent.
- Voice Chat records audio, transcribes it, gets a text response from the agent,
  converts it to speech via `media.text_to_speech()`, and plays it back.

---

### `tools/__init__.py` -- Tool Registry

**File:** `cheddahbot/tools/__init__.py`

Described in detail in a dedicated section below.

---

### `skills/__init__.py` -- Skill Registry

**File:** `cheddahbot/skills/__init__.py`

Defines a parallel registry for "skills" (multi-step operations). Key pieces:

- `SkillDef` -- dataclass holding `name`, `description`, `func`.
- `@skill(name, description)` -- decorator that registers a skill in the global
  `_SKILLS` dict.
- `load_skill(path)` -- dynamically loads a `.py` file as a module (triggering
  any `@skill` decorators inside it).
- `discover_skills(skills_dir)` -- loads all `.py` files from the skills
  directory.
- `list_skills()` / `run_skill(name, **kwargs)` -- query and execute skills.

---

### `providers/__init__.py` -- Provider Extensions

**File:** `cheddahbot/providers/__init__.py`

Reserved for future custom provider implementations. Currently empty.

---

## The Agent Loop in Detail

When `Agent.respond(user_input)` is called, the following sequence occurs:

```
1. ensure_conversation()
   |-- Creates a new conversation in the DB if one doesn't exist
   |
2. db.add_message(conv_id, "user", user_input)
   |-- Persists the user's message
   |
3. Build system prompt
   |-- memory.get_context(user_input) --> memory context string
   |-- tools.get_tools_schema() --> OpenAI-format JSON schemas
   |-- tools.get_tools_description() --> human-readable tool list
   |-- router.build_system_prompt(identity_dir, memory_context, tools_description)
   |
4. Load conversation history from DB
   |-- db.get_messages(conv_id, limit=max_context_messages)
   |-- router.format_messages_for_llm(system_prompt, history, max_messages)
   |
5. AGENT LOOP (up to MAX_TOOL_ITERATIONS = 10):
   |
   |-- llm.chat(messages, tools=tools_schema, stream=True)
   |     |-- Yields {"type":"text","content":"..."} chunks --> streamed to user
   |     |-- Yields {"type":"tool_use","name":"...","input":{...}} chunks
   |
   |-- If no tool_calls: store assistant message, BREAK
   |
   |-- If tool_calls present:
   |     |-- Store assistant message with tool_calls metadata
   |     |-- For each tool call:
   |     |     |-- yield "Using tool: <name>" indicator
   |     |     |-- tools.execute(name, input) --> result string
   |     |     |-- yield tool result (truncated to 2000 chars)
   |     |     |-- db.add_message(conv_id, "tool", result)
   |     |     |-- Append result to messages as user message
   |     |-- Continue loop (LLM sees tool results and can respond or call more tools)
   |
6. After loop: check if memory flush is needed
   |-- If message count > flush_threshold:
   |     |-- memory.auto_flush(conv_id)
```

The loop allows the LLM to chain up to 10 consecutive tool calls before being
cut off. Each tool result is injected back into the conversation as a user
message so the LLM can reason about it in the next iteration.

---

## LLM Adapter Design

**File:** `cheddahbot/llm.py`

### Provider Routing

The `LLMAdapter` supports four provider paths. The active provider is determined
by examining the current model ID:

| Model ID Pattern | Provider | Backend |
|--------------------------|--------------|------------------------------------|
| `claude-*` | `claude` | Claude Code CLI (subprocess) |
| `local/ollama/<model>` | `ollama` | Ollama HTTP API (OpenAI-compat) |
| `local/lmstudio/<model>` | `lmstudio` | LM Studio HTTP API (OpenAI-compat) |
| Anything else | `openrouter` | OpenRouter API (OpenAI-compat) |

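The routing table boils down to a prefix check on the model ID. A sketch (the function name and the sample model IDs are illustrative):

```python
def detect_provider(model_id: str) -> str:
    if model_id.startswith("claude-"):
        return "claude"
    if model_id.startswith("local/ollama/"):
        return "ollama"
    if model_id.startswith("local/lmstudio/"):
        return "lmstudio"
    return "openrouter"   # everything else goes through OpenRouter

providers = [detect_provider(m) for m in (
    "claude-sonnet", "local/ollama/llama3", "meta-llama/llama-3-70b")]
```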
### The `chat()` Method

This is the single entry point. It accepts a list of messages, an optional tools
schema, and a stream flag. It returns a generator yielding dictionaries:

- `{"type": "text", "content": "..."}` -- a text chunk to display.
- `{"type": "tool_use", "id": "...", "name": "...", "input": {...}}` -- a tool
  invocation request.

### Claude Code CLI Path (`_chat_claude_sdk`)

For Claude models, CheddahBot shells out to the `claude` CLI binary (the Claude
Code SDK):

1. Separates system prompt, conversation history, and the latest user message
   from the messages list.
2. Builds a full system prompt by appending conversation history under a
   "Conversation So Far" heading.
3. Invokes `claude -p <prompt> --model <model> --output-format json --system-prompt <system>`.
4. The `CLAUDECODE` environment variable is stripped from the subprocess
   environment to avoid nested-session errors.
5. Parses the JSON output and yields the `result` field as a text chunk.
6. On Windows, `shell=True` is used for compatibility with npm-installed
   binaries.

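Steps 3 and 4 amount to building an argv list and a cleaned environment; a sketch that constructs both without actually spawning a subprocess (the model name is a placeholder):

```python
import os

def build_claude_invocation(prompt: str, model: str, system_prompt: str):
    # Step 3: the command line described above
    cmd = ["claude", "-p", prompt, "--model", model,
           "--output-format", "json", "--system-prompt", system_prompt]
    # Step 4: drop CLAUDECODE so the CLI doesn't think it's nested
    env = {k: v for k, v in os.environ.items() if k != "CLAUDECODE"}
    return cmd, env

cmd, env = build_claude_invocation("hello", "claude-sonnet", "Be brief.")
```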
### OpenAI-Compatible Path (`_chat_openai_sdk`)

For OpenRouter, Ollama, and LM Studio, the adapter uses the `openai` Python SDK:

1. `_resolve_endpoint(provider)` returns the base URL and API key:
   - OpenRouter: `https://openrouter.ai/api/v1` with the configured API key.
   - Ollama: `http://localhost:11434/v1` with dummy key `"ollama"`.
   - LM Studio: `http://localhost:1234/v1` with dummy key `"lm-studio"`.
2. `_resolve_model_id(provider)` strips the `local/ollama/` or
   `local/lmstudio/` prefix from the model ID.
3. Creates an `openai.OpenAI` client with the resolved base URL and API key.
4. In streaming mode: iterates over `client.chat.completions.create(stream=True)`,
   accumulates tool call arguments across chunks (indexed by `tc.index`), yields
   text deltas immediately, and yields completed tool calls at the end of the
   stream.
5. In non-streaming mode: makes a single call and yields text and tool calls from
   the response.

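Step 4's accumulation logic, sketched with hand-made chunk dicts standing in for the SDK's delta objects (the real code reads `tc.index` off streamed deltas):

```python
def accumulate_stream(chunks):
    text_parts, tool_calls = [], {}
    for ch in chunks:
        if ch["type"] == "text":
            text_parts.append(ch["delta"])   # yielded immediately in real code
        else:
            # Tool-call argument fragments arrive keyed by index
            call = tool_calls.setdefault(ch["index"], {"name": "", "args": ""})
            call["name"] = ch.get("name") or call["name"]
            call["args"] += ch.get("args", "")
    return "".join(text_parts), tool_calls

text, calls = accumulate_stream([
    {"type": "text", "delta": "Searching..."},
    {"type": "tool", "index": 0, "name": "web_search", "args": '{"q": '},
    {"type": "tool", "index": 0, "args": '"cats"}'},
])
```

The JSON argument string is only parseable once the stream ends, which is why completed tool calls are yielded last.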
### Model Discovery

- `discover_local_models()` -- probes the Ollama tags endpoint and LM Studio
  models endpoint (3-second timeout each) and returns `ModelInfo` objects.
- `list_available_models()` -- returns a combined list of hardcoded Claude
  models, hardcoded OpenRouter models (if an API key is configured), and
  dynamically discovered local models.

### Model Switching

`switch_model(model_id)` updates `current_model`. The `provider` property
re-evaluates on every access, so switching models also implicitly switches
providers.

---

## Memory System

**File:** `cheddahbot/memory.py`

### The 4 Layers

```
Layer 1: Identity   -- identity/SOUL.md, identity/USER.md
                       (loaded by router.py into the system prompt)

Layer 2: Long-term  -- memory/MEMORY.md
                       (persisted facts and instructions, appended over time)

Layer 3: Daily logs -- memory/YYYY-MM-DD.md
                       (timestamped entries per day, including auto-flush summaries)

Layer 4: Semantic   -- memory/embeddings.db
                       (SQLite with vector embeddings for similarity search)
```

### How Memory Context is Built

`MemorySystem.get_context(query)` is called once per agent turn. It assembles a
string from:

1. **Long-term memory** -- the last 2000 characters of `MEMORY.md`.
2. **Today's log** -- the last 1500 characters of today's date file.
3. **Semantic search results** -- the top-k most similar entries to the user's
   query, formatted as a bulleted list.

This string is injected into the system prompt by `router.py` under the heading
"Relevant Memory".

### Embedding and Search

- The embedding model is `all-MiniLM-L6-v2` from `sentence-transformers` (lazy
  loaded, thread-safe via a lock).
- `_index_text(text, doc_id)` -- encodes the text into a vector and stores it in
  `memory/embeddings.db` (table: `embeddings` with columns `id TEXT`, `text TEXT`,
  `vector BLOB`).
- `search(query, top_k)` -- encodes the query, loads all vectors from the
  database, computes cosine similarity against each one, sorts by score, and
  returns the top-k results.
- If `sentence-transformers` is not installed, `_fallback_search()` performs
  simple case-insensitive substring matching across all `.md` files in the memory
  directory.

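A dependency-free sketch of the store-and-search path, with tiny hand-made vectors in place of real `all-MiniLM-L6-v2` embeddings; the schema matches the `embeddings` table described above:

```python
import math
import sqlite3
import struct

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search(db, query_vec, top_k=5):
    rows = db.execute("SELECT id, text, vector FROM embeddings").fetchall()
    scored = []
    for doc_id, text, blob in rows:
        vec = struct.unpack(f"{len(blob) // 4}f", blob)  # raw float32 bytes
        scored.append((cosine(query_vec, vec), doc_id, text))
    return sorted(scored, reverse=True)[:top_k]

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE embeddings (id TEXT PRIMARY KEY, text TEXT, vector BLOB)")
db.execute("INSERT INTO embeddings VALUES (?, ?, ?)",
           ("a", "cats", struct.pack("3f", 1.0, 0.0, 0.0)))
db.execute("INSERT INTO embeddings VALUES (?, ?, ?)",
           ("b", "dogs", struct.pack("3f", 0.0, 1.0, 0.0)))
results = search(db, [1.0, 0.1, 0.0], top_k=1)
```

Scanning every row per query is O(n), which is fine at personal-assistant scale; a vector index would only matter with far more entries.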
### Writing to Memory

- `remember(text)` -- appends a timestamped entry to `memory/MEMORY.md` and
  indexes it for semantic search. Exposed to the LLM via the `remember_this`
  tool.
- `log_daily(text)` -- appends a timestamped entry to today's daily log file and
  indexes it. Exposed via the `log_note` tool.

### Auto-Flush

When `Agent.respond()` finishes, it checks `db.count_messages(conv_id)`. If the
count exceeds `config.memory.flush_threshold` (default 40):

1. `auto_flush(conv_id)` loads up to 200 messages.
2. All but the last 10 are selected for summarization.
3. A summary string is built from the selected messages (truncated to 1000
   chars).
4. The summary is appended to the daily log via `log_daily()`.

This prevents conversations from growing unbounded while preserving context in
the daily log for future semantic search.

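Steps 2 and 3 in miniature; the real summary text is built differently, so this only illustrates the keep-the-tail selection and the truncation:

```python
def auto_flush(messages, keep_last=10, max_chars=1000):
    old = messages[:-keep_last]          # everything but the last 10
    if not old:
        return None                      # nothing to summarize yet
    summary = " | ".join(f"{m['role']}: {m['content']}" for m in old)
    return summary[:max_chars]           # truncated to 1000 chars

msgs = [{"role": "user", "content": f"msg {i}"} for i in range(50)]
summary = auto_flush(msgs)
```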
### Reindexing

`reindex_all()` clears all embeddings and re-indexes every line (longer than 10
characters) from every `.md` file in the memory directory. This can be called
to rebuild the search index from scratch.

---

## Tool System

**File:** `cheddahbot/tools/__init__.py` (registry) and `cheddahbot/tools/*.py`
(tool modules)

### The `@tool` Decorator

```python
from cheddahbot.tools import tool

@tool("my_tool_name", "Description of what this tool does", category="general")
def my_tool_name(param1: str, param2: int = 10) -> str:
    return f"Result: {param1}, {param2}"
```

The decorator:

1. Creates a `ToolDef` object containing the function, name, description,
   category, and auto-extracted parameter schema.
2. Registers it in the global `_TOOLS` dictionary keyed by name.
3. Attaches the `ToolDef` as `func._tool_def` on the original function.

### Parameter Schema Generation

`_extract_params(func)` inspects the function signature using `inspect`:

- Skips parameters named `self` or `ctx`.
- Maps type annotations to JSON Schema types: `str` -> `"string"`, `int` ->
  `"integer"`, `float` -> `"number"`, `bool` -> `"boolean"`, `list` ->
  `"array"`. Unannotated parameters default to `"string"`.
- Parameters without defaults are marked as required.

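The rules above can be sketched with the `inspect` module; `extract_params` is an illustrative name for the private helper, and the sample tool reuses the decorator example's signature:

```python
import inspect

TYPE_MAP = {str: "string", int: "integer", float: "number",
            bool: "boolean", list: "array"}

def extract_params(func):
    props, required = {}, []
    for name, p in inspect.signature(func).parameters.items():
        if name in ("self", "ctx"):
            continue  # injected, never exposed to the LLM
        # Unannotated parameters fall back to "string"
        props[name] = {"type": TYPE_MAP.get(p.annotation, "string")}
        if p.default is inspect.Parameter.empty:
            required.append(name)
    return {"type": "object", "properties": props, "required": required}

def my_tool(param1: str, param2: int = 10) -> str: ...

schema = extract_params(my_tool)
```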
### Schema Output

`ToolDef.to_openai_schema()` returns the tool definition in OpenAI
function-calling format:

```json
{
  "type": "function",
  "function": {
    "name": "tool_name",
    "description": "...",
    "parameters": {
      "type": "object",
      "properties": { ... },
      "required": [ ... ]
    }
  }
}
```

### Auto-Discovery

When `ToolRegistry.__init__()` is called, `_discover_tools()` uses
`pkgutil.iter_modules` to find every `.py` file in `cheddahbot/tools/` (skipping
files starting with `_`). Each module is imported via `importlib.import_module`,
which triggers the `@tool` decorators and populates the global registry.

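The same pkgutil/importlib dance, demonstrated against a throwaway package built on the fly (`demo_tools` and its modules are stand-ins for `cheddahbot/tools/`):

```python
import importlib
import pathlib
import pkgutil
import sys
import tempfile

# Build a disposable package to discover
root = pathlib.Path(tempfile.mkdtemp())
pkg = root / "demo_tools"
pkg.mkdir()
(pkg / "__init__.py").write_text("")
(pkg / "hello.py").write_text("LOADED = True\n")
(pkg / "_private.py").write_text("raise RuntimeError('should be skipped')\n")
sys.path.insert(0, str(root))

loaded = []
for mod in pkgutil.iter_modules([str(pkg)]):
    if mod.name.startswith("_"):
        continue  # the registry skips private modules
    # Importing runs any module-level decorators, populating the registry
    loaded.append(importlib.import_module(f"demo_tools.{mod.name}"))
```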
### Tool Execution

`ToolRegistry.execute(name, args)`:

1. Looks up the `ToolDef` in the global `_TOOLS` dict.
2. Inspects the function signature for a `ctx` parameter. If present, injects a
   context dictionary containing `config`, `db`, `agent`, and `memory`.
3. Calls the function with the provided arguments.
4. Returns the result as a string (or `"Done."` if the function returns `None`).
5. Catches all exceptions and returns `"Tool error: ..."`.

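Steps 2-5 as a sketch, operating on a bare function instead of a `ToolDef` lookup; the `greet` tool and its ctx contents are hypothetical:

```python
import inspect

def execute(func, args, ctx):
    try:
        # Step 2: inject ctx only if the function declares it
        if "ctx" in inspect.signature(func).parameters:
            args = {**args, "ctx": ctx}
        result = func(**args)                       # step 3
        return "Done." if result is None else str(result)  # step 4
    except Exception as e:
        return f"Tool error: {e}"                   # step 5

def greet(name: str, ctx: dict) -> str:
    return f"hello {name} via {ctx['agent']}"

out = execute(greet, {"name": "world"}, {"agent": "cheddahbot"})
```

Swallowing exceptions into a `"Tool error: ..."` string keeps a misbehaving tool from crashing the agent loop; the LLM just sees the error text as a tool result.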
### Meta-Tools

Two special tools enable runtime extensibility:

**`build_tool`** (in `cheddahbot/tools/build_tool.py`):
- Accepts `name`, `description`, and `code` (Python source using the `@tool`
  decorator).
- Writes a new `.py` file into `cheddahbot/tools/`.
- Hot-imports the module via `importlib.import_module`, which triggers the
  `@tool` decorator and registers the new tool immediately.
- If the import fails, the file is deleted.

**`build_skill`** (in `cheddahbot/tools/build_skill.py`):
- Accepts `name`, `description`, and `steps` (Python source using the `@skill`
  decorator).
- Writes a new `.py` file into the configured `skills/` directory.
- Calls `skills.load_skill()` to dynamically import it.

---

## Scheduler and Heartbeat Design

**File:** `cheddahbot/scheduler.py`

The `Scheduler` class starts two daemon threads at application boot.

### Task Poller Thread

- Runs in `_poll_loop()`, sleeping for `poll_interval_seconds` (default 60)
  between iterations.
- Each iteration calls `_run_due_tasks()`:
  1. Queries `db.get_due_tasks()` for tasks where `next_run` is NULL or in the
     past.
  2. For each due task, calls `agent.respond_to_prompt(task["prompt"])` to
     generate a response.
  3. Logs the result via `db.log_task_run()`.
  4. If the schedule is `"once:<datetime>"`, the task is disabled.
  5. Otherwise, the schedule is treated as a cron expression: `croniter` is used
     to calculate the next run time, which is saved via
     `db.update_task_next_run()`.

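The due-check and post-run decisions can be sketched without the `croniter` dependency; the cron branch here is stubbed with a fixed one-hour interval, where the real code calls `croniter(schedule, now).get_next(datetime)`:

```python
from datetime import datetime, timedelta

def is_due(next_run, now_iso):
    # NULL next_run means the task has never been scheduled: it is due now.
    # ISO 8601 strings compare correctly as plain strings.
    return next_run is None or next_run <= now_iso

def after_run(schedule, now):
    if schedule.startswith("once:"):
        return "disable", None           # one-shot tasks never reschedule
    # Stand-in for croniter's next-run calculation
    return "reschedule", (now + timedelta(hours=1)).isoformat()

now = datetime(2026, 2, 14, 8, 0)
action, next_run = after_run("once:2026-02-14T07:00:00", now)
```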
### Heartbeat Thread

- Runs in `_heartbeat_loop()`, sleeping for `heartbeat_interval_minutes`
  (default 30) between iterations.
- Waits 60 seconds before the first heartbeat to let the system initialize.
- Each iteration calls `_run_heartbeat()`:
  1. Reads `identity/HEARTBEAT.md`.
  2. Sends the checklist to the agent as a prompt: "HEARTBEAT CHECK. Review this
     checklist and take action if needed."
  3. If the response contains `"HEARTBEAT_OK"`, no action is logged.
  4. Otherwise, the response is logged to the daily log via
     `memory.log_daily()`.

### Thread Safety

Both threads are daemon threads (they die when the main process exits). The
`_stop_event` threading event can be set to gracefully shut down both loops. The
database layer uses thread-local connections, so concurrent access from the
scheduler threads and the Gradio request threads is safe.

---

## Database Schema

The SQLite database (`data/cheddahbot.db`) contains five tables:

### `conversations`

| Column | Type | Notes |
|--------------|------|--------------------|
| `id` | TEXT | Primary key (hex) |
| `title` | TEXT | Display title |
| `created_at` | TEXT | ISO 8601 UTC |
| `updated_at` | TEXT | ISO 8601 UTC |

### `messages`

| Column | Type | Notes |
|---------------|---------|--------------------------------------------|
| `id` | INTEGER | Autoincrement primary key |
| `conv_id` | TEXT | Foreign key to `conversations.id` |
| `role` | TEXT | `"user"`, `"assistant"`, or `"tool"` |
| `content` | TEXT | Message body |
| `tool_calls` | TEXT | JSON array of `{name, input}` (nullable) |
| `tool_result` | TEXT | Name of the tool that produced this result (nullable) |
| `model` | TEXT | Model ID used for this response (nullable) |
| `created_at` | TEXT | ISO 8601 UTC |

Index: `idx_messages_conv` on `(conv_id, created_at)`.

### `scheduled_tasks`

| Column | Type | Notes |
|--------------|---------|---------------------------------------|
| `id` | INTEGER | Autoincrement primary key |
| `name` | TEXT | Human-readable task name |
| `prompt` | TEXT | The prompt to send to the agent |
| `schedule` | TEXT | Cron expression or `"once:<datetime>"`|
| `enabled` | INTEGER | 1 = active, 0 = disabled |
| `next_run` | TEXT | ISO 8601 UTC (nullable) |
| `created_at` | TEXT | ISO 8601 UTC |

### `task_run_logs`

| Column | Type | Notes |
|---------------|---------|------------------------------------|
| `id` | INTEGER | Autoincrement primary key |
| `task_id` | INTEGER | Foreign key to `scheduled_tasks.id`|
| `started_at` | TEXT | ISO 8601 UTC |
| `finished_at` | TEXT | ISO 8601 UTC (nullable) |
| `result` | TEXT | Agent response (nullable) |
| `error` | TEXT | Error message if failed (nullable) |

### `kv_store`

| Column | Type | Notes |
|---------|------|-----------------|
| `key` | TEXT | Primary key |
| `value` | TEXT | Arbitrary value |

### Embeddings Database

A separate SQLite file at `memory/embeddings.db` holds one table:

### `embeddings`

| Column | Type | Notes |
|----------|------|--------------------------------------|
| `id` | TEXT | Primary key (e.g. `"daily:2026-02-14:08:30"`) |
| `text` | TEXT | The original text that was embedded |
| `vector` | BLOB | Raw float32 bytes of the embedding vector |

---

## Identity Files

Three Markdown files in the `identity/` directory define the agent's personality,
user context, and background behavior.

### `identity/SOUL.md`

Defines the agent's personality, communication style, boundaries, and quirks.
This is loaded first into the system prompt, making it the most prominent
identity influence on every response.

Contents are read by `router.build_system_prompt()` at the beginning of each
agent turn.

### `identity/USER.md`

Contains a user profile template: name, technical level, primary language,
current projects, and communication preferences. The user edits this file to
customize how the agent addresses them and what context it assumes.

Loaded by `router.build_system_prompt()` immediately after SOUL.md.

### `identity/HEARTBEAT.md`

A checklist of items to review on each heartbeat cycle. The scheduler reads this
file and sends it to the agent as a prompt every `heartbeat_interval_minutes`
(default 30 minutes). The agent processes the checklist and either confirms
"HEARTBEAT_OK" or takes action and logs it.

### Loading Order in the System Prompt

The system prompt assembled by `router.build_system_prompt()` concatenates these
sections, separated by `\n\n---\n\n`:

1. SOUL.md contents
2. USER.md contents
3. Memory context (long-term + daily log + semantic search results)
4. Tools description (categorized list of available tools)
5. Core instructions (hardcoded behavioral directives)

# ClickUp Task Creation

## CLI Script

```bash
uv run python scripts/create_clickup_task.py --name "LINKS - keyword" --client "Client Name" \
  --category "Link Building" --due-date 2026-03-18 --tag mar26 --time-estimate 2h \
  --field "Keyword=keyword" --field "IMSURL=https://example.com" --field "LB Method=Cora Backlinks"
```

## Defaults

- Priority: High (2)
- Assignee: Bryan (10765627)
- Status: "to do"
- Due date format: YYYY-MM-DD
- Tag format: mmmYY (e.g. feb26, mar26)

## Custom Fields

Any field can be set via `--field "Name=Value"`. Dropdowns are auto-resolved by name (case-insensitive).

## Task Types

### Link Building
- **Prefix**: `LINKS - {keyword}`
- **Work Category**: "Link Building"
- **Required fields**: Keyword, IMSURL
- **LB Method**: default "Cora Backlinks"
- **CLIFlags**: only add `--tier1-count N` when a count is specified
- **BrandedPlusRatio**: default to 0.7
- **CustomAnchors**: only if given a list of custom anchors
- **Time estimate**: 2.5h

### On Page Optimization
- **Prefix**: `OPT - {keyword}`
- **Work Category**: "On Page Optimization"
- **Required fields**: Keyword, IMSURL
- **Time estimate**: 3h

### Content Creation
- **Prefix**: `CREATE - {keyword}`
- **Work Category**: "Content Creation"
- **Required fields**: Keyword
- **Time estimate**: 4h

### Press Release
- **Prefix**: `PR - {keyword}`
- **Work Category**: "Press Release"
- **Required fields**: Keyword, IMSURL
- **PR Topic**: if not provided, ask whether there is a topic; it may be left blank if the answer is "none".
- **Time estimate**: 1.5h

## Chat Tool
|
|
||||||
|
|
||||||
The `clickup_create_task` tool provides the same capabilities via CheddahBot UI. Arbitrary custom fields are passed as JSON via `custom_fields_json`.
|
|
||||||
|
|
||||||
## Client Folder Lookup
|
|
||||||
|
|
||||||
Tasks are created in the "Overall" list inside the client's folder. Folder name is matched case-insensitively.
|
|
||||||
|
|
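The task-type conventions above can be read as a small lookup table. This is illustrative only and is not the script's actual data structure:

```python
TASK_TYPE_DEFAULTS = {
    "Link Building":        {"prefix": "LINKS",  "required": ["Keyword", "IMSURL"], "estimate": "2.5h"},
    "On Page Optimization": {"prefix": "OPT",    "required": ["Keyword", "IMSURL"], "estimate": "3h"},
    "Content Creation":     {"prefix": "CREATE", "required": ["Keyword"],           "estimate": "4h"},
    "Press Release":        {"prefix": "PR",     "required": ["Keyword", "IMSURL"], "estimate": "1.5h"},
}


def task_name(category: str, keyword: str) -> str:
    # Build the conventional task name, e.g. "LINKS - blue widgets".
    return f"{TASK_TYPE_DEFAULTS[category]['prefix']} - {keyword}"
```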
@@ -1,110 +0,0 @@
# ntfy.sh Push Notifications Setup

CheddahBot sends push notifications to your phone and desktop via [ntfy.sh](https://ntfy.sh) when tasks complete, reports are ready, or errors occur.

## 1. Install the ntfy App

- **Android:** [Play Store](https://play.google.com/store/apps/details?id=io.heckel.ntfy)
- **iOS:** [App Store](https://apps.apple.com/us/app/ntfy/id1625396347)
- **Desktop:** Open [ntfy.sh](https://ntfy.sh) in your browser and enable browser notifications when prompted

## 2. Pick Topic Names

Topics are like channels. Anyone who knows the topic name can subscribe, so use random strings:

```
cheddahbot-a8f3k9x2m7
cheddahbot-errors-p4w2j6n8
```

Generate your own — any random string works. No account or registration needed.

## 3. Subscribe to Your Topics

**Phone app:**
1. Open the ntfy app
2. Tap the + button
3. Enter your topic name (e.g. `cheddahbot-a8f3k9x2m7`)
4. Server: `https://ntfy.sh` (default)
5. Repeat for your errors topic

**Browser:**
1. Go to [ntfy.sh](https://ntfy.sh)
2. Click "Subscribe to topic"
3. Enter the same topic names
4. Allow browser notifications when prompted

## 4. Add Topics to .env

Add these lines to your `.env` file in the CheddahBot root:

```
NTFY_TOPIC_HUMAN_ACTION=cheddahbot-a8f3k9x2m7
NTFY_TOPIC_ERRORS=cheddahbot-errors-p4w2j6n8
```

Replace with your actual topic names.
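ntfy's publish API is a plain HTTP POST to `https://ntfy.sh/<topic>` with the message as the request body (title and priority travel as headers). A minimal stdlib sketch for testing your topic by hand; CheddahBot's own notifier presumably wraps something equivalent:

```python
import urllib.request


def build_notify_request(topic: str, message: str, title: str = "",
                         priority: str = "default", server: str = "https://ntfy.sh"):
    # Build the POST request ntfy expects; send it with
    # urllib.request.urlopen(req) when you want to actually publish.
    req = urllib.request.Request(f"{server}/{topic}",
                                 data=message.encode("utf-8"),
                                 method="POST")
    if title:
        req.add_header("Title", title)
    req.add_header("Priority", priority)
    return req
```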
## 5. Restart CheddahBot

Kill the running instance and restart:

```bash
uv run python -m cheddahbot
```

You should see in the startup logs:

```
ntfy notifier initialized with 2 channel(s): human_action, errors
ntfy notifier subscribed to notification bus
```

## What Gets Notified

### human_action channel (high priority)
Notifications where you need to do something:
- Cora report finished and ready
- Press release completed
- Content outline ready for review
- Content optimization completed
- Link building pipeline finished
- Cora report distributed to inbox

### errors channel (urgent priority)
Notifications when something went wrong:
- ClickUp task failed or was skipped
- AutoCora job failed
- Link building pipeline error
- Content pipeline error
- Missing ClickUp field matches
- File copy failures

## Configuration

Channel routing is configured in `config.yaml` under the `ntfy:` section. Each channel has:

- `topic_env_var` — which env var holds the topic name
- `categories` — notification categories to listen to (`clickup`, `autocora`, `linkbuilding`, `content`)
- `include_patterns` — regex patterns the message must match (at least one)
- `exclude_patterns` — regex patterns that reject the message (takes priority over include)
- `priority` — ntfy priority level: `min`, `low`, `default`, `high`, `urgent`
- `tags` — emoji shortcodes shown on the notification (e.g. `white_check_mark`, `rotating_light`)
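The routing rules above (category match, then exclude patterns vetoing include patterns) can be sketched as follows. This is a hypothetical reading of the config semantics, not the notifier's actual code:

```python
import re


def channel_accepts(channel: dict, category: str, message: str) -> bool:
    # Category filter: if the channel lists categories, the message's
    # category must be one of them.
    if channel.get("categories") and category not in channel["categories"]:
        return False
    # exclude_patterns take priority over include_patterns.
    for pat in channel.get("exclude_patterns", []):
        if re.search(pat, message):
            return False
    # If include_patterns are set, at least one must match.
    include = channel.get("include_patterns", [])
    if include:
        return any(re.search(p, message) for p in include)
    return True
```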
### Adding a New Channel

1. Add a new entry under `ntfy.channels` in `config.yaml`
2. Add the topic env var to `.env`
3. Subscribe to the topic in your ntfy app
4. Restart CheddahBot

### Privacy

The public ntfy.sh server has no authentication by default. Your topic name is the only security — use a long random string to make it unguessable. Alternatively:

- Create a free ntfy.sh account and set read/write ACLs on your topics
- Self-host ntfy (single binary) and set `server: http://localhost:8080` in config.yaml

### Disabling

Set `enabled: false` in the `ntfy:` section of `config.yaml`, or remove the env vars from `.env`.
@@ -1,43 +0,0 @@
# Scheduler Refactor Notes

## Issue: AutoCora Single-Day Window (found 2026-02-27)

**Symptom:** Task `86b8grf16` ("LINKS - anti vibration rubber mounts", due Feb 18) has been sitting in "to do" forever with no Cora report generated.

**Root cause:** `_find_qualifying_tasks()` in `tools/autocora.py` filters tasks to **exactly one calendar day** (the `target_date`, which defaults to today). The scheduler calls this daily with `today`:

```python
today = datetime.now(UTC).strftime("%Y-%m-%d")
result = submit_autocora_jobs(target_date=today, ctx=ctx)
```

If CheddahBot isn't running on the task's due date (or the DB is empty/wiped), the task is **permanently orphaned** — no catch-up, no retry, no visibility.

**Affected task types:** All three `cora_categories` — Link Building, On Page Optimization, Content Creation.

**What needs to change:** Auto-submit should also pick up overdue tasks (due date in the past, still "to do", no existing AutoCora job in the KV store).
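One minimal shape for that fix: submit for a lookback window of dates rather than today alone. A sketch only, with a hypothetical helper name; the real fix would also need the "still to do, no existing KV job" checks:

```python
from datetime import datetime, timedelta


def catchup_dates(today: str, lookback_days: int = 30) -> list[str]:
    # Instead of calling submit_autocora_jobs(target_date=today) only,
    # the scheduler could iterate today plus the preceding window so
    # overdue tasks missed on their due date still get submitted.
    end = datetime.strptime(today, "%Y-%m-%d")
    return [(end - timedelta(days=d)).strftime("%Y-%m-%d")
            for d in range(lookback_days + 1)]
```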
---

## Empty Database State (found 2026-02-27)

`cheddahbot.db` has zero rows in all tables (kv_store, notifications, scheduled_tasks, etc.). Either a fresh DB or a wiped one. This means:
- No task state tracking is happening
- No AutoCora job submissions are recorded
- The folder watcher has no history
- All loops show no `last_run` timestamps

---

## Context: Claude Scheduled Tasks

Claude released scheduled tasks (2026-02-26). We need to evaluate whether parts of CheddahBot's scheduler (heartbeat, poll loop, ClickUp polling, folder watchers, AutoCora) could be replaced or augmented by Claude's native scheduling.

---

## Additional Issues to Investigate

- [ ] `auto_execute: false` on Link Building — is this intentional given the folder-watcher pipeline?
- [ ] Folder watcher at `Z:/cora-inbox` — does this path stay accessible?
- [ ] No dashboard/UI surfacing "tasks waiting for action" — stuck tasks are invisible
- [ ] The AutoCora loop waits 30s before the first poll, then runs every 5 minutes — but auto-submit only checks today's tasks each cycle (redundant repeated calls)
@@ -50,10 +50,9 @@ CheddahBot runs 6 daemon threads. All start at boot and run until shutdown.
 | **poll** | 60 seconds | Runs cron-scheduled tasks from the database |
 | **heartbeat** | 30 minutes | Reads HEARTBEAT.md checklist, takes action if needed |
 | **clickup** | 20 minutes | Polls ClickUp for tasks to auto-execute (only Press Releases currently) |
-| **folder_watch** | 40 minutes | Scans `//PennQnap1/SHARE1/cora-inbox` for .xlsx files → triggers Link Building |
+| **folder_watch** | 60 minutes | Scans `Z:/cora-inbox` for .xlsx files → triggers Link Building |
 | **autocora** | 5 minutes | Submits Cora jobs for today's tasks + polls for results |
-| **content_watch** | 40 minutes | Scans `//PennQnap1/SHARE1/content-cora-inbox` for .xlsx files → triggers Content/OPT Phase 1 |
+| **content_watch** | 60 minutes | Scans `Z:/content-cora-inbox` for .xlsx files → triggers Content/OPT Phase 1 |
-| **cora_distribute** | 40 minutes | Scans `//PennQnap1/SHARE1/Cora-For-Human` for .xlsx files → distributes to pipeline inboxes |

 ---
@@ -16,9 +16,6 @@ dependencies = [
     "edge-tts>=6.1",
     "python-docx>=1.2.0",
     "openpyxl>=3.1.5",
-    "jinja2>=3.1.6",
-    "python-multipart>=0.0.22",
-    "sse-starlette>=3.3.3",
 ]

 [build-system]
@@ -1,94 +0,0 @@
"""Query ClickUp 'to do' tasks tagged feb26 in OPT/LINKS/Content categories."""

import sys
from datetime import datetime, timezone
from pathlib import Path

sys.stdout.reconfigure(line_buffering=True)
sys.path.insert(0, str(Path(__file__).resolve().parent.parent))

from cheddahbot.config import load_config
from cheddahbot.clickup import ClickUpClient

CATEGORY_PREFIXES = ("opt", "link", "content", "ai content")
TAG_FILTER = "feb26"


def ms_to_date(ms_str: str) -> str:
    if not ms_str:
        return "—"
    try:
        ts = int(ms_str) / 1000
        return datetime.fromtimestamp(ts, tz=timezone.utc).strftime("%m/%d")
    except (ValueError, OSError):
        return "—"


def main():
    cfg = load_config()
    if not cfg.clickup.api_token or not cfg.clickup.space_id:
        print("ERROR: CLICKUP_API_TOKEN or CLICKUP_SPACE_ID not set.")
        return

    client = ClickUpClient(
        api_token=cfg.clickup.api_token,
        workspace_id=cfg.clickup.workspace_id,
        task_type_field_name=cfg.clickup.task_type_field_name,
    )

    try:
        # Fetch all 'to do' tasks across the space
        tasks = client.get_tasks_from_space(cfg.clickup.space_id, statuses=["to do"])

        # Filter by feb26 tag
        tagged = [t for t in tasks if TAG_FILTER in [tag.lower() for tag in t.tags]]

        if not tagged:
            all_tags = set()
            for t in tasks:
                all_tags.update(t.tags)
            print(f"No tasks with tag '{TAG_FILTER}'. Tags seen: {sorted(all_tags)}")
            print(f"Total 'to do' tasks found: {len(tasks)}")
            return

        # Filter to OPT/LINKS/Content categories (by task name, Work Category, or list name)
        def is_target_category(t):
            name_lower = t.name.lower().strip()
            wc = (t.custom_fields.get("Work Category") or "").lower()
            ln = (t.list_name or "").lower()
            for prefix in CATEGORY_PREFIXES:
                if name_lower.startswith(prefix) or prefix in wc or prefix in ln:
                    return True
            return False

        filtered = [t for t in tagged if is_target_category(t)]
        skipped = [t for t in tagged if not is_target_category(t)]

        # Sort by due date (oldest first), tasks with no due date go last
        filtered.sort(key=lambda t: int(t.due_date) if t.due_date else float("inf"))

        top = filtered[:10]

        # Build table
        print("feb26-tagged 'to do' tasks — OPT / LINKS / Content (top 10, oldest first)")
        print(f"\n{'#':>2} | {'ID':<11} | {'Keyword/Name':<50} | {'Due':<6} | {'Customer':<25} | Tags")
        print("-" * 120)
        for i, t in enumerate(top, 1):
            customer = t.custom_fields.get("Customer", "") or "—"
            due = ms_to_date(t.due_date)
            tags = ", ".join(t.tags)
            name = t.name[:50]
            print(f"{i:>2} | {t.id:<11} | {name:<50} | {due:<6} | {customer:<25} | {tags}")

        print(f"\nShowing {len(top)} of {len(filtered)} OPT/LINKS/Content tasks ({len(tagged)} total feb26-tagged).")
        if skipped:
            print(f"\nSkipped {len(skipped)} non-OPT/LINKS/Content tasks:")
            for t in skipped:
                print(f"  - {t.name} ({t.id})")

    finally:
        client.close()


if __name__ == "__main__":
    main()
@@ -1,120 +0,0 @@
"""Query ClickUp 'to do' tasks tagged feb26 in OPT/LINKS/Content categories."""

import sys
from collections import Counter
from datetime import datetime, timezone
from pathlib import Path

# Add project root to path
sys.path.insert(0, str(Path(__file__).resolve().parent.parent))

from cheddahbot.config import load_config
from cheddahbot.clickup import ClickUpClient


def ms_to_date(ms_str: str) -> str:
    """Convert Unix-ms timestamp string to YYYY-MM-DD."""
    if not ms_str:
        return "—"
    try:
        ts = int(ms_str) / 1000
        return datetime.fromtimestamp(ts, tz=timezone.utc).strftime("%Y-%m-%d")
    except (ValueError, OSError):
        return "—"


def main():
    cfg = load_config()
    if not cfg.clickup.api_token or not cfg.clickup.space_id:
        print("ERROR: CLICKUP_API_TOKEN or CLICKUP_SPACE_ID not set.")
        return

    client = ClickUpClient(
        api_token=cfg.clickup.api_token,
        workspace_id=cfg.clickup.workspace_id,
        task_type_field_name=cfg.clickup.task_type_field_name,
    )

    # Step 1: Get folders, find OPT/LINKS/Content
    target_folders = {"opt", "links", "content"}
    try:
        folders = client.get_folders(cfg.clickup.space_id)
    except Exception as e:
        print(f"ERROR fetching folders: {e}")
        client.close()
        return

    print(f"All folders: {[f['name'] for f in folders]}")

    matched_lists = []  # (list_id, list_name, folder_name)
    for folder in folders:
        if folder["name"].lower() in target_folders:
            for lst in folder["lists"]:
                matched_lists.append((lst["id"], lst["name"], folder["name"]))

    if not matched_lists:
        print(f"No folders matching {target_folders}. Falling back to full space scan.")
        try:
            tasks = client.get_tasks_from_space(cfg.clickup.space_id, statuses=["to do"])
        finally:
            client.close()
    else:
        print(f"Querying lists: {[(ln, fn) for _, ln, fn in matched_lists]}")
        tasks = []
        for list_id, list_name, folder_name in matched_lists:
            try:
                batch = client.get_tasks(list_id, statuses=["to do"])
                # Stash folder name on each task for display
                for t in batch:
                    t._folder = folder_name
                tasks.extend(batch)
            except Exception as e:
                print(f"  Error fetching {list_name}: {e}")
        client.close()

    print(f"Total 'to do' tasks from target folders: {len(tasks)}")

    # Filter by "feb26" tag (case-insensitive)
    tagged = [t for t in tasks if any(tag.lower() == "feb26" for tag in t.tags)]

    if not tagged:
        print("No 'to do' tasks with 'feb26' tag found.")
        all_tags = set()
        for t in tasks:
            all_tags.update(t.tags)
        print(f"Tags found across all to-do tasks: {sorted(all_tags)}")
        return

    filtered = tagged

    # Sort by due date (oldest first), tasks without due date go last
    def sort_key(t):
        if t.due_date:
            return (0, int(t.due_date))
        return (1, 0)

    filtered.sort(key=sort_key)

    # Take top 10
    top10 = filtered[:10]

    # Build table
    print(f"\n## ClickUp 'to do' — feb26 tag — OPT/LINKS/Content ({len(filtered)} total, showing top 10)\n")
    print(f"{'#':<3} | {'ID':<12} | {'Keyword/Name':<40} | {'Due':<12} | {'Customer':<20} | Tags")
    print(f"{'—'*3} | {'—'*12} | {'—'*40} | {'—'*12} | {'—'*20} | {'—'*15}")

    for i, t in enumerate(top10, 1):
        customer = t.custom_fields.get("Customer", "") or "—"
        due = ms_to_date(t.due_date)
        tags = ", ".join(t.tags) if t.tags else "—"
        name = t.name[:38] + ".." if len(t.name) > 40 else t.name
        print(f"{i:<3} | {t.id:<12} | {name:<40} | {due:<12} | {customer:<20} | {tags}")

    print("\nCategory breakdown:")
    cats = Counter(t.task_type for t in filtered)
    for cat, count in cats.most_common():
        print(f"  {cat or '(none)'}: {count}")


if __name__ == "__main__":
    main()
@@ -1,97 +0,0 @@
"""Query ClickUp for feb26-tagged to-do tasks in OPT/LINKS/Content categories."""

from datetime import datetime, UTC

from cheddahbot.config import load_config
from cheddahbot.clickup import ClickUpClient

cfg = load_config()
client = ClickUpClient(
    api_token=cfg.clickup.api_token,
    workspace_id=cfg.clickup.workspace_id,
    task_type_field_name=cfg.clickup.task_type_field_name,
)

tasks = client.get_tasks_from_overall_lists(cfg.clickup.space_id, statuses=["to do"])
client.close()

# Filter: tagged feb26
feb26 = [t for t in tasks if "feb26" in t.tags]


# Filter: OPT / LINKS / Content categories (by Work Category or name prefix)
def is_target(t):
    cat = (t.task_type or "").lower()
    name = t.name.upper()
    if cat in ("on page optimization", "link building", "content creation"):
        return True
    if name.startswith("OPT") or name.startswith("LINKS") or name.startswith("NEW -"):
        return True
    return False


filtered = [t for t in feb26 if is_target(t)]


# Sort by due date ascending (no due date = sort last)
def sort_key(t):
    if t.due_date:
        return int(t.due_date)
    return float("inf")


filtered.sort(key=sort_key)
top10 = filtered[:10]


def fmt_due(ms_str):
    if not ms_str:
        return "No due"
    ts = int(ms_str) / 1000
    return datetime.fromtimestamp(ts, tz=UTC).strftime("%b %d")


def fmt_customer(t):
    c = t.custom_fields.get("Customer", "")
    if c and str(c) != "None":
        return str(c)
    return t.list_name


def fmt_cat(t):
    cat = t.task_type
    name = t.name.upper()
    if not cat or cat.strip() == "":
        if name.startswith("LINKS"):
            return "LINKS"
        elif name.startswith("OPT"):
            return "OPT"
        elif name.startswith("NEW"):
            return "Content"
        return "?"
    mapping = {
        "On Page Optimization": "OPT",
        "Link Building": "LINKS",
        "Content Creation": "Content",
    }
    return mapping.get(cat, cat)


def fmt_tags(t):
    return ", ".join(t.tags) if t.tags else ""


print(f"## feb26 To-Do: OPT / LINKS / Content ({len(filtered)} total, showing top 10 oldest)")
print()
print("| # | ID | Keyword/Name | Due | Customer | Tags |")
print("|---|-----|-------------|-----|----------|------|")
for i, t in enumerate(top10, 1):
    name = t.name[:55]
    tid = t.id
    due = fmt_due(t.due_date)
    cust = fmt_customer(t)
    tags = fmt_tags(t)
    print(f"| {i} | {tid} | {name} | {due} | {cust} | {tags} |")

if len(filtered) > 10:
    print()
    remaining = filtered[10:]
    print(f"### Remaining {len(remaining)} tasks:")
    print("| # | ID | Keyword/Name | Due | Customer | Tags |")
    print("|---|-----|-------------|-----|----------|------|")
    for i, t in enumerate(remaining, 11):
        name = t.name[:55]
        print(f"| {i} | {t.id} | {name} | {fmt_due(t.due_date)} | {fmt_customer(t)} | {fmt_tags(t)} |")

print()
print(f"*{len(filtered)} matching tasks, {len(feb26)} total feb26 tasks, {len(tasks)} total to-do*")
@@ -1,87 +0,0 @@
"""Query ClickUp 'to do' tasks tagged feb26 in OPT/LINKS/Content categories."""

import os
import sys
from datetime import datetime, timezone

sys.path.insert(0, os.path.join(os.path.dirname(__file__), ".."))

from dotenv import load_dotenv

load_dotenv(os.path.join(os.path.dirname(__file__), "..", ".env"))

from cheddahbot.clickup import ClickUpClient

TOKEN = os.getenv("CLICKUP_API_TOKEN", "")
SPACE_ID = os.getenv("CLICKUP_SPACE_ID", "")

if not TOKEN or not SPACE_ID:
    print("ERROR: CLICKUP_API_TOKEN and CLICKUP_SPACE_ID must be set in .env")
    sys.exit(1)

CATEGORIES = {"On Page Optimization", "Content Creation", "Link Building"}
TAG_FILTER = "feb26"

client = ClickUpClient(api_token=TOKEN, workspace_id="", task_type_field_name="Work Category")

print(f"Querying ClickUp space {SPACE_ID} for 'to do' tasks...")
tasks = client.get_tasks_from_space(SPACE_ID, statuses=["to do"])
client.close()

print(f"Total 'to do' tasks found: {len(tasks)}")

# Filter by feb26 tag
tagged = [t for t in tasks if TAG_FILTER in [tag.lower() for tag in t.tags]]
print(f"Tasks with '{TAG_FILTER}' tag: {len(tagged)}")

# Filter by Work Category (OPT / LINKS / Content)
filtered = []
for t in tagged:
    cat = (t.custom_fields.get("Work Category") or t.task_type or "").strip()
    if cat in CATEGORIES:
        filtered.append(t)

if not filtered and tagged:
    # Show what categories exist so we can refine
    cats_found = set()
    for t in tagged:
        cats_found.add(t.custom_fields.get("Work Category") or t.task_type or "(none)")
    print(f"\nNo tasks matched categories {CATEGORIES}.")
    print(f"Work Categories found on feb26-tagged tasks: {cats_found}")
    print("\nShowing ALL feb26-tagged tasks instead:\n")
    filtered = tagged


# Sort by due date (oldest first), tasks without due date go last
def sort_key(t):
    if t.due_date:
        return int(t.due_date)
    return float("inf")


filtered.sort(key=sort_key)

# Take top 10
top = filtered[:10]


# Format table
def fmt_due(raw_due: str) -> str:
    if not raw_due:
        return "—"
    try:
        ts = int(raw_due) / 1000
        return datetime.fromtimestamp(ts, tz=timezone.utc).strftime("%m/%d")
    except (ValueError, OSError):
        return raw_due


def fmt_customer(t) -> str:
    return t.custom_fields.get("Customer", "") or "—"


print(f"\n{'#':<3} | {'ID':<12} | {'Keyword/Name':<45} | {'Cat':<15} | {'Due':<6} | {'Customer':<20} | Tags")
print("-" * 120)

for i, t in enumerate(top, 1):
    tags_str = ", ".join(t.tags)
    name = t.name[:45]
    cat = t.custom_fields.get("Work Category") or t.task_type or "—"
    print(f"{i:<3} | {t.id:<12} | {name:<45} | {cat:<15} | {fmt_due(t.due_date):<6} | {fmt_customer(t):<20} | {tags_str}")

print(f"\nTotal shown: {len(top)} of {len(filtered)} matching tasks")
@@ -1,64 +0,0 @@
"""Find all Press Release tasks due in February 2026, any status."""

import json
import logging
from datetime import UTC, datetime

logging.basicConfig(level=logging.WARNING)

from cheddahbot.config import load_config
from cheddahbot.clickup import ClickUpClient

config = load_config()
client = ClickUpClient(
    api_token=config.clickup.api_token,
    workspace_id=config.clickup.workspace_id,
    task_type_field_name=config.clickup.task_type_field_name,
)

space_id = config.clickup.space_id
list_ids = client.get_list_ids_from_space(space_id)
field_filter = client.discover_field_filter(
    next(iter(list_ids)), config.clickup.task_type_field_name
)

pr_opt_id = field_filter["options"]["Press Release"]
custom_fields_filter = json.dumps(
    [{"field_id": field_filter["field_id"], "operator": "ANY", "value": [pr_opt_id]}]
)

# February 2026 window
feb_start = int(datetime(2026, 2, 1, tzinfo=UTC).timestamp() * 1000)
feb_end = int(datetime(2026, 3, 1, tzinfo=UTC).timestamp() * 1000)

# Query with broad statuses, include closed
tasks = client.get_tasks_from_space(
    space_id,
    custom_fields=custom_fields_filter,
)

# Filter for due in February 2026
feb_prs = []
for t in tasks:
    if t.task_type != "Press Release":
        continue
    if not t.due_date:
        continue
    try:
        due_ms = int(t.due_date)
        if feb_start <= due_ms < feb_end:
            feb_prs.append(t)
    except (ValueError, TypeError):
        continue

print(f"\nPress Release tasks due in February 2026: {len(feb_prs)}\n")
for t in feb_prs:
    due_dt = datetime.fromtimestamp(int(t.due_date) / 1000, tz=UTC)
    due = due_dt.strftime("%Y-%m-%d")
    tags_str = ", ".join(t.tags) if t.tags else "(none)"
    customer = t.custom_fields.get("Customer", "?")
    imsurl = t.custom_fields.get("IMSURL", "")
    print(f"  [{t.status:20s}] {t.name}")
    print(f"    id={t.id} due={due} tags={tags_str}")
    print(f"    customer={customer} imsurl={imsurl or '(none)'}")
    print()
@@ -1,61 +0,0 @@
"""Find all feb26-tagged Press Release tasks regardless of due date or status."""

import json
import logging
from datetime import UTC, datetime

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s", datefmt="%H:%M:%S")

from cheddahbot.config import load_config
from cheddahbot.clickup import ClickUpClient

config = load_config()
client = ClickUpClient(
    api_token=config.clickup.api_token,
    workspace_id=config.clickup.workspace_id,
    task_type_field_name=config.clickup.task_type_field_name,
)

space_id = config.clickup.space_id

# Filter by the Press Release task type via a custom-fields query
list_ids = client.get_list_ids_from_space(space_id)
field_filter = client.discover_field_filter(
    next(iter(list_ids)), config.clickup.task_type_field_name
)

pr_opt_id = field_filter["options"]["Press Release"]
custom_fields_filter = json.dumps(
    [{"field_id": field_filter["field_id"], "operator": "ANY", "value": [pr_opt_id]}]
)

# Get tasks across the relevant statuses, with no due date filter
tasks = client.get_tasks_from_space(
    space_id,
    statuses=["to do", "outline approved", "in progress", "automation underway"],
    custom_fields=custom_fields_filter,
)

# Filter for feb26 tag
feb26_tasks = [t for t in tasks if "feb26" in t.tags]
all_pr = [t for t in tasks if t.task_type == "Press Release"]

print(f"\n{'='*70}")
print(f"Total tasks returned: {len(tasks)}")
print(f"Press Release tasks: {len(all_pr)}")
print(f"feb26-tagged PR tasks: {len(feb26_tasks)}")
print(f"{'='*70}\n")

for t in all_pr:
    due = ""
    if t.due_date:
        try:
            due_dt = datetime.fromtimestamp(int(t.due_date) / 1000, tz=UTC)
            due = due_dt.strftime("%Y-%m-%d")
        except (ValueError, TypeError):
            due = t.due_date
    tags_str = ", ".join(t.tags) if t.tags else "(no tags)"
    customer = t.custom_fields.get("Customer", "?")
    print(f"  [{t.status:20s}] {t.name}")
    print(f"    id={t.id} due={due or '(none)'} tags={tags_str} customer={customer}")
    print()
@@ -1,102 +0,0 @@
"""Query ClickUp 'to do' tasks tagged 'feb26' in OPT/LINKS/Content categories."""

from __future__ import annotations

import os
import sys
from datetime import datetime, timezone
from pathlib import Path

_root = Path(__file__).resolve().parent.parent
sys.path.insert(0, str(_root))

from dotenv import load_dotenv
load_dotenv(_root / ".env")

from cheddahbot.clickup import ClickUpClient

API_TOKEN = os.environ.get("CLICKUP_API_TOKEN", "")
SPACE_ID = os.environ.get("CLICKUP_SPACE_ID", "")

if not API_TOKEN:
    sys.exit("ERROR: CLICKUP_API_TOKEN env var is required")
if not SPACE_ID:
    sys.exit("ERROR: CLICKUP_SPACE_ID env var is required")

# Work Category values to include (case-insensitive partial match)
CATEGORY_FILTERS = ["opt", "link", "content"]
TAG_FILTER = "feb26"


def ms_to_date(ms_str: str) -> str:
    """Convert Unix-ms timestamp string to YYYY-MM-DD."""
    if not ms_str:
        return "—"
    try:
        ts = int(ms_str) / 1000
        return datetime.fromtimestamp(ts, tz=timezone.utc).strftime("%Y-%m-%d")
    except (ValueError, OSError):
        return ms_str


def main() -> None:
    client = ClickUpClient(api_token=API_TOKEN, task_type_field_name="Work Category")

    print(f"Fetching 'to do' tasks from space {SPACE_ID} ...")
    tasks = client.get_tasks_from_overall_lists(SPACE_ID, statuses=["to do"])
    print(f"Total 'to do' tasks: {len(tasks)}")

    # Filter by feb26 tag
    tagged = [t for t in tasks if TAG_FILTER in [tag.lower() for tag in t.tags]]
    print(f"Tasks with '{TAG_FILTER}' tag: {len(tagged)}")

    # Show all Work Category values for debugging
    categories = set()
    for t in tagged:
        wc = t.custom_fields.get("Work Category", "") or ""
        categories.add(wc)
    print(f"Work Categories found: {categories}")

    # Filter by OPT/LINKS/Content categories
    filtered = []
    for t in tagged:
        wc = str(t.custom_fields.get("Work Category", "") or "").lower()
        if any(cat in wc for cat in CATEGORY_FILTERS):
            filtered.append(t)

    print(f"After category filter (OPT/LINKS/Content): {len(filtered)}")

    # Sort by due date (oldest first), tasks with no due date go last
    def sort_key(t):
        if t.due_date:
            try:
                return (0, int(t.due_date))
            except ValueError:
                return (1, 0)
        return (2, 0)

    filtered.sort(key=sort_key)

    # Top 10
    top10 = filtered[:10]

    # Print table
    print(f"\n{'#':>3} | {'ID':>11} | {'Keyword/Name':<45} | {'Due':>10} | {'Customer':<20} | Tags")
    print("-" * 120)

    for i, t in enumerate(top10, 1):
        customer = t.custom_fields.get("Customer", "") or "—"
        due = ms_to_date(t.due_date)
        wc = t.custom_fields.get("Work Category", "") or ""
        tags_str = ", ".join(t.tags)
        name_display = t.name[:45] if len(t.name) > 45 else t.name
        print(f"{i:>3} | {t.id:>11} | {name_display:<45} | {due:>10} | {customer:<20} | {tags_str}")

    if not top10:
        print("  (no matching tasks found)")

    print(f"\n--- {len(filtered)} total matching tasks, showing top {len(top10)} (oldest first) ---")


if __name__ == "__main__":
    main()
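The `sort_key` helper in the deleted script above relies on Python's element-wise tuple comparison to bucket tasks. A standalone sketch of the same trick (sample timestamp strings are made up):

```python
def sort_key(due_date):
    # Tuples compare element-wise: (0, ts) dated tasks sort first by
    # timestamp, (1, 0) unparseable dates come next, (2, 0) missing last.
    if due_date:
        try:
            return (0, int(due_date))
        except ValueError:
            return (1, 0)
    return (2, 0)

due_dates = ["1700000000000", None, "not-a-number", "1600000000000"]
print(sorted(due_dates, key=sort_key))
# ['1600000000000', '1700000000000', 'not-a-number', None]
```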
@@ -1,149 +0,0 @@
"""One-time script: rebuild the 'Customer' dropdown custom field in ClickUp.

Steps:
1. Fetch all folders from the PII-Agency-SEO space
2. Filter out non-client folders
3. Create a 'Customer' dropdown field with folder names as options
4. For each client folder, find the 'Overall' list and set Customer on all tasks

Usage:
    DRY_RUN=1 uv run python scripts/rebuild_customer_field.py   # preview only
    uv run python scripts/rebuild_customer_field.py             # live run
"""

from __future__ import annotations

import os
import sys
import time
from pathlib import Path

# Allow running from repo root
_root = Path(__file__).resolve().parent.parent
sys.path.insert(0, str(_root))

from dotenv import load_dotenv

load_dotenv(_root / ".env")

from cheddahbot.clickup import ClickUpClient

# ── Config ──────────────────────────────────────────────────────────────────
DRY_RUN = os.environ.get("DRY_RUN", "0") not in ("0", "false", "")
EXCLUDED_FOLDERS = {"SEO Audits", "SEO Projects", "Business Related"}
FIELD_NAME = "Customer"

API_TOKEN = os.environ.get("CLICKUP_API_TOKEN", "")
SPACE_ID = os.environ.get("CLICKUP_SPACE_ID", "")

if not API_TOKEN:
    sys.exit("ERROR: CLICKUP_API_TOKEN env var is required")
if not SPACE_ID:
    sys.exit("ERROR: CLICKUP_SPACE_ID env var is required")


def main() -> None:
    client = ClickUpClient(api_token=API_TOKEN)

    # 1. Get folders
    print(f"\n{'=' * 60}")
    print(f"  Rebuild '{FIELD_NAME}' field -- Space {SPACE_ID}")
    print(f"  Mode: {'DRY RUN' if DRY_RUN else 'LIVE'}")
    print(f"{'=' * 60}\n")

    folders = client.get_folders(SPACE_ID)
    print(f"Found {len(folders)} folders:\n")

    client_folders = []
    for f in folders:
        excluded = f["name"] in EXCLUDED_FOLDERS
        marker = "  [SKIP]" if excluded else ""
        list_names = [lst["name"] for lst in f["lists"]]
        print(f"  {f['name']}{marker} (lists: {', '.join(list_names) or 'none'})")
        if not excluded:
            client_folders.append(f)

    if not client_folders:
        sys.exit("\nNo client folders found -- nothing to do.")

    option_names = sorted(f["name"] for f in client_folders)
    print(f"\nDropdown options ({len(option_names)}): {', '.join(option_names)}")

    # 2. Build a plan: folder → Overall list → tasks
    plan: list[dict] = []  # {folder_name, list_id, tasks: [ClickUpTask]}
    first_list_id = None

    for f in client_folders:
        overall = next((lst for lst in f["lists"] if lst["name"] == "Overall"), None)
        if overall is None:
            print(f"\n  WARNING: '{f['name']}' has no 'Overall' list -- skipping task update")
            continue
        if first_list_id is None:
            first_list_id = overall["id"]
        tasks = client.get_tasks(overall["id"])
        plan.append({"folder_name": f["name"], "list_id": overall["id"], "tasks": tasks})

    # 3. Print summary
    total_tasks = sum(len(p["tasks"]) for p in plan)
    print("\n--- Update Plan ---")
    for p in plan:
        print(f"  {p['folder_name']:30s} -> {len(p['tasks']):3d} tasks in list {p['list_id']}")
    print(f"  {'TOTAL':30s} -> {total_tasks:3d} tasks")

    if DRY_RUN:
        print("\n** DRY RUN -- no changes made. Unset DRY_RUN to execute. **\n")
        return

    if first_list_id is None:
        sys.exit("\nNo 'Overall' list found in any client folder -- cannot create field.")

    # 4. Create the dropdown field
    print(f"\nCreating '{FIELD_NAME}' dropdown on list {first_list_id} ...")
    type_config = {
        "options": [{"name": name, "color": None} for name in option_names],
    }
    client.create_custom_field(first_list_id, FIELD_NAME, "drop_down", type_config)
    print("  Field created.")

    # Brief pause for ClickUp to propagate
    time.sleep(2)

    # 5. Discover the field UUID + option IDs
    print("Discovering field UUID and option IDs ...")
    field_info = client.discover_field_filter(first_list_id, FIELD_NAME)
    if field_info is None:
        sys.exit(f"\nERROR: Could not find '{FIELD_NAME}' field after creation!")

    field_id = field_info["field_id"]
    option_map = field_info["options"]  # {name: uuid}
    print(f"  Field ID: {field_id}")
    print(f"  Options: {option_map}")

    # 6. Set Customer field on each task
    updated = 0
    failed = 0
    for p in plan:
        folder_name = p["folder_name"]
        opt_id = option_map.get(folder_name)
        if not opt_id:
            print(f"\n  WARNING: No option ID for '{folder_name}' -- skipping")
            continue

        print(f"\nUpdating {len(p['tasks'])} tasks in '{folder_name}' ...")
        for task in p["tasks"]:
            ok = client.set_custom_field_value(task.id, field_id, opt_id)
            if ok:
                updated += 1
            else:
                failed += 1
                print(f"  FAILED: task {task.id} ({task.name})")
            # Light rate-limit courtesy
            time.sleep(0.15)

    print(f"\n{'=' * 60}")
    print(f"  Done! Updated: {updated} | Failed: {failed}")
    print(f"{'=' * 60}\n")


if __name__ == "__main__":
    main()
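The `DRY_RUN` toggle in the deleted script above treats anything other than unset, `"0"`, `"false"`, or empty as enabling dry-run mode. A quick check of that expression in isolation (helper name is ours, not the script's):

```python
def dry_run_enabled(value: str) -> bool:
    # Mirrors the script's check: "", "0", and "false" mean a live run;
    # anything else (e.g. "1") enables dry-run. Note the comparison is
    # case-sensitive, so "False" would still count as dry-run.
    return value not in ("0", "false", "")

print([dry_run_enabled(v) for v in ("", "0", "false", "1", "False")])
# [False, False, False, True, True]
```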
@@ -1,144 +0,0 @@
"""Re-run press release pipeline for specific tasks that are missing attachments."""

import logging
import sys
import io

sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding="utf-8")

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s [%(name)s] %(levelname)s: %(message)s",
    datefmt="%H:%M:%S",
    handlers=[logging.StreamHandler(stream=io.TextIOWrapper(sys.stderr.buffer, encoding="utf-8"))],
)
log = logging.getLogger("pr_rerun")

from cheddahbot.config import load_config
from cheddahbot.db import Database
from cheddahbot.llm import LLMAdapter
from cheddahbot.agent import Agent
from cheddahbot.clickup import ClickUpClient


TASKS_TO_RERUN = [
    ("86b8ebfk9", "Advanced Industrial highlights medical grade plastic expertise", "Advanced Industrial"),
]


def bootstrap():
    config = load_config()
    db = Database(config.db_path)
    llm = LLMAdapter(
        default_model=config.chat_model,
        openrouter_key=config.openrouter_api_key,
        ollama_url=config.ollama_url,
        lmstudio_url=config.lmstudio_url,
    )

    agent_cfg = config.agents[0] if config.agents else None
    agent = Agent(config, db, llm, agent_config=agent_cfg)

    try:
        from cheddahbot.memory import MemorySystem
        scope = agent_cfg.memory_scope if agent_cfg else ""
        memory = MemorySystem(config, db, scope=scope)
        agent.set_memory(memory)
    except Exception as e:
        log.warning("Memory not available: %s", e)

    from cheddahbot.tools import ToolRegistry
    tools = ToolRegistry(config, db, agent)
    agent.set_tools(tools)

    try:
        from cheddahbot.skills import SkillRegistry
        skills = SkillRegistry(config.skills_dir)
        agent.set_skills_registry(skills)
    except Exception as e:
        log.warning("Skills not available: %s", e)

    return config, db, agent, tools


def run_task(agent, tools, config, client, task_id, task_name, customer):
    """Execute write_press_releases for a specific task."""
    # Build args matching the field_mapping from config
    args = {
        "topic": task_name,
        "company_name": customer,
        "clickup_task_id": task_id,
    }

    # Also fetch IMSURL from the task
    import httpx as _httpx
    resp = _httpx.get(
        f"https://api.clickup.com/api/v2/task/{task_id}",
        headers={"Authorization": config.clickup.api_token},
        timeout=30.0,
    )
    task_data = resp.json()
    for cf in task_data.get("custom_fields", []):
        if cf["name"] == "IMSURL":
            val = cf.get("value")
            if val:
                args["url"] = val
        elif cf["name"] == "SocialURL":
            val = cf.get("value")
            if val:
                args["branded_url"] = val

    log.info("=" * 70)
    log.info("EXECUTING: %s", task_name)
    log.info("  Task ID: %s", task_id)
    log.info("  Customer: %s", customer)
    log.info("  Args: %s", {k: v for k, v in args.items() if k != "clickup_task_id"})
    log.info("=" * 70)

    try:
        result = tools.execute("write_press_releases", args)

        if result.startswith("Skipped:") or result.startswith("Error:"):
            log.error("Task skipped/errored: %s", result[:500])
            return False

        log.info("Task completed!")
        # Print first 1000 chars of result
        print(f"\n--- Result for {task_name} ---")
        print(result[:1000])
        print("--- End ---\n")
        return True

    except Exception as e:
        log.error("Task failed: %s", e, exc_info=True)
        return False


def main():
    log.info("Bootstrapping CheddahBot...")
    config, db, agent, tools = bootstrap()

    client = ClickUpClient(
        api_token=config.clickup.api_token,
        workspace_id=config.clickup.workspace_id,
        task_type_field_name=config.clickup.task_type_field_name,
    )

    log.info("Will re-run %d tasks", len(TASKS_TO_RERUN))

    results = []
    for i, (task_id, name, customer) in enumerate(TASKS_TO_RERUN):
        log.info("\n>>> Task %d/%d <<<", i + 1, len(TASKS_TO_RERUN))
        success = run_task(agent, tools, config, client, task_id, name, customer)
        results.append((name, success))

    print(f"\n{'=' * 70}")
    print("RESULTS SUMMARY")
    print(f"{'=' * 70}")
    for name, success in results:
        status = "OK" if success else "FAILED"
        print(f"  [{status}] {name}")


if __name__ == "__main__":
    main()
@@ -1,241 +0,0 @@
"""Run the press-release pipeline for up to N ClickUp tasks.

Usage:
    uv run python scripts/run_pr_pipeline.py              # discover + execute up to 3
    uv run python scripts/run_pr_pipeline.py --dry-run    # discover only, don't execute
    uv run python scripts/run_pr_pipeline.py --max 1      # execute only 1 task
"""

import argparse
import logging
import sys
from datetime import UTC, datetime

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s [%(name)s] %(levelname)s: %(message)s",
    datefmt="%H:%M:%S",
)
log = logging.getLogger("pr_pipeline")

# ── Bootstrap CheddahBot (config, db, agent, tools) ─────────────────────

from cheddahbot.config import load_config
from cheddahbot.db import Database
from cheddahbot.llm import LLMAdapter
from cheddahbot.agent import Agent
from cheddahbot.clickup import ClickUpClient


def bootstrap():
    """Set up config, db, agent, and tool registry — same as __main__.py."""
    config = load_config()
    db = Database(config.db_path)
    llm = LLMAdapter(
        default_model=config.chat_model,
        openrouter_key=config.openrouter_api_key,
        ollama_url=config.ollama_url,
        lmstudio_url=config.lmstudio_url,
    )

    agent_cfg = config.agents[0] if config.agents else None
    agent = Agent(config, db, llm, agent_config=agent_cfg)

    # Memory
    try:
        from cheddahbot.memory import MemorySystem
        scope = agent_cfg.memory_scope if agent_cfg else ""
        memory = MemorySystem(config, db, scope=scope)
        agent.set_memory(memory)
    except Exception as e:
        log.warning("Memory not available: %s", e)

    # Tools
    from cheddahbot.tools import ToolRegistry
    tools = ToolRegistry(config, db, agent)
    agent.set_tools(tools)

    # Skills
    try:
        from cheddahbot.skills import SkillRegistry
        skills = SkillRegistry(config.skills_dir)
        agent.set_skills_registry(skills)
    except Exception as e:
        log.warning("Skills not available: %s", e)

    return config, db, agent, tools


def discover_pr_tasks(config):
    """Poll ClickUp for Press Release tasks — same logic as scheduler._poll_clickup()."""
    client = ClickUpClient(
        api_token=config.clickup.api_token,
        workspace_id=config.clickup.workspace_id,
        task_type_field_name=config.clickup.task_type_field_name,
    )
    space_id = config.clickup.space_id
    skill_map = config.clickup.skill_map

    if not space_id:
        log.error("No space_id configured")
        return [], client

    # Discover field filter (Work Category UUID + options)
    list_ids = client.get_list_ids_from_space(space_id)
    if not list_ids:
        log.error("No lists found in space %s", space_id)
        return [], client

    first_list = next(iter(list_ids))
    field_filter = client.discover_field_filter(
        first_list, config.clickup.task_type_field_name
    )

    # Build custom fields filter for API query
    custom_fields_filter = None
    if field_filter and field_filter.get("options"):
        import json
        field_id = field_filter["field_id"]
        options = field_filter["options"]
        # Only Press Release
        pr_opt_id = options.get("Press Release")
        if pr_opt_id:
            custom_fields_filter = json.dumps(
                [{"field_id": field_id, "operator": "ANY", "value": [pr_opt_id]}]
            )
            log.info("Filtering for Press Release option ID: %s", pr_opt_id)
        else:
            log.warning("'Press Release' not found in Work Category options: %s", list(options.keys()))
            return [], client

    # Due date window (3 weeks)
    now_ms = int(datetime.now(UTC).timestamp() * 1000)
    due_date_lt = now_ms + (3 * 7 * 24 * 60 * 60 * 1000)

    tasks = client.get_tasks_from_space(
        space_id,
        statuses=config.clickup.poll_statuses,
        due_date_lt=due_date_lt,
        custom_fields=custom_fields_filter,
    )

    # Client-side filter: must be Press Release + have due date in window
    pr_tasks = []
    for task in tasks:
        if task.task_type != "Press Release":
            continue
        if not task.due_date:
            continue
        try:
            if int(task.due_date) > due_date_lt:
                continue
        except (ValueError, TypeError):
            continue
        pr_tasks.append(task)

    return pr_tasks, client


def execute_task(agent, tools, config, client, task):
    """Execute a single PR task — same logic as scheduler._execute_task()."""
    skill_map = config.clickup.skill_map
    mapping = skill_map.get("Press Release", {})
    tool_name = mapping.get("tool", "write_press_releases")

    task_id = task.id

    # Build tool args from field mapping
    field_mapping = mapping.get("field_mapping", {})
    args = {}
    for tool_param, source in field_mapping.items():
        if source == "task_name":
            args[tool_param] = task.name
        elif source == "task_description":
            args[tool_param] = task.custom_fields.get("description", "")
        else:
            args[tool_param] = task.custom_fields.get(source, "")

    args["clickup_task_id"] = task_id

    log.info("=" * 70)
    log.info("EXECUTING: %s", task.name)
    log.info("  Task ID: %s", task_id)
    log.info("  Tool: %s", tool_name)
    log.info("  Args: %s", {k: v for k, v in args.items() if k != "clickup_task_id"})
    log.info("=" * 70)

    # Move to "automation underway"
    client.update_task_status(task_id, config.clickup.automation_status)

    try:
        result = tools.execute(tool_name, args)

        if result.startswith("Skipped:") or result.startswith("Error:"):
            log.error("Task skipped/errored: %s", result[:500])
            client.add_comment(
                task_id,
                f"⚠️ CheddahBot could not execute this task.\n\n{result[:2000]}",
            )
            client.update_task_status(task_id, config.clickup.error_status)
            return False

        log.info("Task completed successfully!")
        log.info("Result preview:\n%s", result[:1000])
        return True

    except Exception as e:
        log.error("Task failed with exception: %s", e, exc_info=True)
        client.add_comment(
            task_id,
            f"❌ CheddahBot failed to complete this task.\n\nError: {str(e)[:2000]}",
        )
        client.update_task_status(task_id, config.clickup.error_status)
        return False


def main():
    parser = argparse.ArgumentParser(description="Run PR pipeline from ClickUp")
    parser.add_argument("--dry-run", action="store_true", help="Discover only, don't execute")
    parser.add_argument("--max", type=int, default=3, help="Max tasks to execute (default: 3)")
    args = parser.parse_args()

    log.info("Bootstrapping CheddahBot...")
    config, db, agent, tools = bootstrap()

    log.info("Polling ClickUp for Press Release tasks...")
    pr_tasks, client = discover_pr_tasks(config)

    if not pr_tasks:
        log.info("No Press Release tasks found in statuses %s", config.clickup.poll_statuses)
        return

    log.info("Found %d Press Release task(s):", len(pr_tasks))
    for i, task in enumerate(pr_tasks):
        status_str = f"status={task.status}" if hasattr(task, "status") else ""
        log.info("  %d. %s (id=%s) %s", i + 1, task.name, task.id, status_str)
        log.info("     Custom fields: %s", task.custom_fields)

    if args.dry_run:
        log.info("Dry run — not executing. Use without --dry-run to execute.")
        return

    # Execute up to --max tasks
    to_run = pr_tasks[: args.max]
    log.info("Will execute %d task(s) (max=%d)", len(to_run), args.max)

    results = []
    for i, task in enumerate(to_run):
        log.info("\n>>> Task %d/%d <<<", i + 1, len(to_run))
        success = execute_task(agent, tools, config, client, task)
        results.append((task.name, success))

    log.info("\n" + "=" * 70)
    log.info("RESULTS SUMMARY")
    log.info("=" * 70)
    for name, success in results:
        status = "OK" if success else "FAILED"
        log.info("  [%s] %s", status, name)


if __name__ == "__main__":
    main()
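The three-week due-date window in the deleted pipeline above is computed in epoch milliseconds. A quick sketch verifying the arithmetic against the standard library:

```python
from datetime import datetime, timedelta, timezone

now_ms = int(datetime.now(timezone.utc).timestamp() * 1000)
# Three weeks in milliseconds, matching 3 * 7 * 24 * 60 * 60 * 1000.
window_ms = 3 * 7 * 24 * 60 * 60 * 1000
due_date_lt = now_ms + window_ms

print(window_ms == int(timedelta(weeks=3).total_seconds() * 1000))  # True
```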
start.sh
@@ -1,3 +0,0 @@
#!/usr/bin/env bash
cd "$(dirname "$0")"
exec uv run python -m cheddahbot
@@ -10,7 +10,6 @@ from cheddahbot.tools.clickup_tool import (
     clickup_query_tasks,
     clickup_reset_task,
     clickup_task_status,
-    get_active_tasks,
 )
@@ -143,38 +142,3 @@ class TestClickupResetTask:
 
         result = clickup_reset_task(task_id="t1", ctx=_make_ctx())
         assert "Error" in result
-
-
-class TestGetActiveTasks:
-    def test_no_scheduler(self):
-        result = get_active_tasks(ctx={"config": MagicMock()})
-        assert "not available" in result.lower()
-
-    def test_nothing_running(self):
-        scheduler = MagicMock()
-        scheduler.get_active_executions.return_value = {}
-        scheduler.get_loop_timestamps.return_value = {"clickup": None, "folder_watch": None}
-
-        result = get_active_tasks(ctx={"scheduler": scheduler})
-        assert "No tasks actively executing" in result
-        assert "Safe to restart: Yes" in result
-
-    def test_tasks_running(self):
-        from datetime import UTC, datetime, timedelta
-
-        scheduler = MagicMock()
-        scheduler.get_active_executions.return_value = {
-            "t1": {
-                "name": "Press Release for Acme",
-                "tool": "write_press_releases",
-                "started_at": datetime.now(UTC) - timedelta(minutes=5),
-                "thread": "clickup_thread",
-            }
-        }
-        scheduler.get_loop_timestamps.return_value = {"clickup": datetime.now(UTC).isoformat()}
-
-        result = get_active_tasks(ctx={"scheduler": scheduler})
-        assert "Active Executions (1)" in result
-        assert "Press Release for Acme" in result
-        assert "write_press_releases" in result
-        assert "Safe to restart: No" in result
@@ -16,7 +16,6 @@ class FakeTask:
     id: str = "fake_id"
     name: str = ""
     task_type: str = ""
-    status: str = "running cora"
     custom_fields: dict = field(default_factory=dict)
@@ -227,36 +227,23 @@ class TestFuzzyKeywordMatch:
     def test_exact_match(self):
         assert _fuzzy_keyword_match("precision cnc", "precision cnc") is True
 
-    def test_no_match_without_llm(self):
-        """Without an llm_check, non-exact strings return False."""
-        assert _fuzzy_keyword_match("shaft", "shafts") is False
-        assert _fuzzy_keyword_match("shaft manufacturing", "custom shaft manufacturing") is False
+    def test_substring_match_a_in_b(self):
+        assert _fuzzy_keyword_match("cnc machining", "precision cnc machining services") is True
+
+    def test_substring_match_b_in_a(self):
+        assert _fuzzy_keyword_match("precision cnc machining services", "cnc machining") is True
+
+    def test_word_overlap(self):
+        assert _fuzzy_keyword_match("precision cnc machining", "cnc machining precision") is True
+
+    def test_no_match(self):
+        assert _fuzzy_keyword_match("precision cnc", "web design agency") is False
 
     def test_empty_strings(self):
         assert _fuzzy_keyword_match("", "test") is False
         assert _fuzzy_keyword_match("test", "") is False
         assert _fuzzy_keyword_match("", "") is False
 
-    def test_llm_check_called_on_mismatch(self):
-        """When strings differ, llm_check is called and its result is returned."""
-        llm_yes = lambda a, b: True
-        llm_no = lambda a, b: False
-
-        assert _fuzzy_keyword_match("shaft", "shafts", llm_check=llm_yes) is True
-        assert _fuzzy_keyword_match("shaft", "shafts", llm_check=llm_no) is False
-
-    def test_llm_check_not_called_on_exact(self):
-        """Exact match should not call llm_check."""
-        def boom(a, b):
-            raise AssertionError("should not be called")
-
-        assert _fuzzy_keyword_match("shaft", "shaft", llm_check=boom) is True
-
-    def test_no_substring_match_without_llm(self):
-        """Substring matching is gone — different keywords must not match."""
-        assert _fuzzy_keyword_match("shaft manufacturing", "custom shaft manufacturing") is False
-        assert _fuzzy_keyword_match("cnc machining", "precision cnc machining services") is False
 
 
 class TestNormalizeForMatch:
     def test_lowercase_and_strip(self):
@@ -232,62 +232,3 @@ class TestFieldFilterDiscovery:
         mock_client.discover_field_filter.reset_mock()
         scheduler._poll_clickup()
         mock_client.discover_field_filter.assert_not_called()
-
-
-class TestActiveExecutions:
-    """Test the active execution registry."""
-
-    def test_register_and_get(self, tmp_db):
-        config = _FakeConfig()
-        scheduler = Scheduler(config, tmp_db, MagicMock())
-
-        scheduler._register_execution("t1", "Task One", "write_press_releases")
-        active = scheduler.get_active_executions()
-
-        assert "t1" in active
-        assert active["t1"]["name"] == "Task One"
-        assert active["t1"]["tool"] == "write_press_releases"
-        assert "started_at" in active["t1"]
-        assert "thread" in active["t1"]
-
-    def test_unregister(self, tmp_db):
-        config = _FakeConfig()
-        scheduler = Scheduler(config, tmp_db, MagicMock())
-
-        scheduler._register_execution("t1", "Task One", "write_press_releases")
-        scheduler._unregister_execution("t1")
-        assert scheduler.get_active_executions() == {}
-
-    def test_unregister_nonexistent_is_noop(self, tmp_db):
-        config = _FakeConfig()
-        scheduler = Scheduler(config, tmp_db, MagicMock())
-
-        # Should not raise
-        scheduler._unregister_execution("nonexistent")
-        assert scheduler.get_active_executions() == {}
-
-    def test_multiple_executions(self, tmp_db):
-        config = _FakeConfig()
-        scheduler = Scheduler(config, tmp_db, MagicMock())
-
-        scheduler._register_execution("t1", "Task One", "write_press_releases")
-        scheduler._register_execution("t2", "Task Two", "run_cora_backlinks")
-        active = scheduler.get_active_executions()
-
-        assert len(active) == 2
-        assert "t1" in active
-        assert "t2" in active
-
-    def test_get_returns_snapshot(self, tmp_db):
-        """get_active_executions returns a copy, not a reference."""
-        config = _FakeConfig()
-        scheduler = Scheduler(config, tmp_db, MagicMock())
-
-        scheduler._register_execution("t1", "Task One", "tool_a")
-        snapshot = scheduler.get_active_executions()
-        scheduler._unregister_execution("t1")
-
-        # Snapshot should still have t1
-        assert "t1" in snapshot
-        # But live state should be empty
-        assert scheduler.get_active_executions() == {}
uv.lock (25 changed lines)
@@ -325,16 +325,13 @@ dependencies = [
     { name = "edge-tts" },
     { name = "gradio" },
     { name = "httpx" },
-    { name = "jinja2" },
     { name = "numpy" },
     { name = "openai" },
     { name = "openpyxl" },
     { name = "python-docx" },
     { name = "python-dotenv" },
-    { name = "python-multipart" },
     { name = "pyyaml" },
     { name = "sentence-transformers" },
-    { name = "sse-starlette" },
 ]
 
 [package.dev-dependencies]
@@ -360,16 +357,13 @@ requires-dist = [
     { name = "edge-tts", specifier = ">=6.1" },
     { name = "gradio", specifier = ">=5.0" },
     { name = "httpx", specifier = ">=0.27" },
-    { name = "jinja2", specifier = ">=3.1.6" },
     { name = "numpy", specifier = ">=1.24" },
     { name = "openai", specifier = ">=1.30" },
     { name = "openpyxl", specifier = ">=3.1.5" },
     { name = "python-docx", specifier = ">=1.2.0" },
     { name = "python-dotenv", specifier = ">=1.0" },
-    { name = "python-multipart", specifier = ">=0.0.22" },
     { name = "pyyaml", specifier = ">=6.0" },
     { name = "sentence-transformers", specifier = ">=3.0" },
-    { name = "sse-starlette", specifier = ">=3.3.3" },
 ]
 
 [package.metadata.requires-dev]
@@ -2562,19 +2556,6 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/46/2c/1462b1d0a634697ae9e55b3cecdcb64788e8b7d63f54d923fcd0bb140aed/soupsieve-2.8.3-py3-none-any.whl", hash = "sha256:ed64f2ba4eebeab06cc4962affce381647455978ffc1e36bb79a545b91f45a95", size = 37016, upload-time = "2026-01-20T04:27:01.012Z" },
 ]
 
-[[package]]
-name = "sse-starlette"
-version = "3.3.3"
-source = { registry = "https://pypi.org/simple" }
-dependencies = [
-    { name = "anyio" },
-    { name = "starlette" },
-]
-sdist = { url = "https://files.pythonhosted.org/packages/14/2f/9223c24f568bb7a0c03d751e609844dce0968f13b39a3f73fbb3a96cd27a/sse_starlette-3.3.3.tar.gz", hash = "sha256:72a95d7575fd5129bd0ae15275ac6432bb35ac542fdebb82889c24bb9f3f4049", size = 32420, upload-time = "2026-03-17T20:05:55.529Z" }
-wheels = [
-    { url = "https://files.pythonhosted.org/packages/78/e2/b8cff57a67dddf9a464d7e943218e031617fb3ddc133aeeb0602ff5f6c85/sse_starlette-3.3.3-py3-none-any.whl", hash = "sha256:c5abb5082a1cc1c6294d89c5290c46b5f67808cfdb612b7ec27e8ba061c22e8d", size = 14329, upload-time = "2026-03-17T20:05:54.35Z" },
-]
-
 [[package]]
 name = "starlette"
 version = "0.52.1"
@@ -2741,12 +2722,6 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/0f/8b/4b61d6e13f7108f36910df9ab4b58fd389cc2520d54d81b88660804aad99/torch-2.10.0-2-cp311-none-macosx_11_0_arm64.whl", hash = "sha256:418997cb02d0a0f1497cf6a09f63166f9f5df9f3e16c8a716ab76a72127c714f", size = 79423467, upload-time = "2026-02-10T21:44:48.711Z" },
     { url = "https://files.pythonhosted.org/packages/d3/54/a2ba279afcca44bbd320d4e73675b282fcee3d81400ea1b53934efca6462/torch-2.10.0-2-cp312-none-macosx_11_0_arm64.whl", hash = "sha256:13ec4add8c3faaed8d13e0574f5cd4a323c11655546f91fbe6afa77b57423574", size = 79498202, upload-time = "2026-02-10T21:44:52.603Z" },
     { url = "https://files.pythonhosted.org/packages/ec/23/2c9fe0c9c27f7f6cb865abcea8a4568f29f00acaeadfc6a37f6801f84cb4/torch-2.10.0-2-cp313-none-macosx_11_0_arm64.whl", hash = "sha256:e521c9f030a3774ed770a9c011751fb47c4d12029a3d6522116e48431f2ff89e", size = 79498254, upload-time = "2026-02-10T21:44:44.095Z" },
-    { url = "https://files.pythonhosted.org/packages/36/ab/7b562f1808d3f65414cd80a4f7d4bb00979d9355616c034c171249e1a303/torch-2.10.0-3-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:ac5bdcbb074384c66fa160c15b1ead77839e3fe7ed117d667249afce0acabfac", size = 915518691, upload-time = "2026-03-11T14:15:43.147Z" },
-    { url = "https://files.pythonhosted.org/packages/b3/7a/abada41517ce0011775f0f4eacc79659bc9bc6c361e6bfe6f7052a6b9363/torch-2.10.0-3-cp312-cp312-manylinux_2_28_x86_64.whl", hash = "sha256:98c01b8bb5e3240426dcde1446eed6f40c778091c8544767ef1168fc663a05a6", size = 915622781, upload-time = "2026-03-11T14:17:11.354Z" },
-    { url = "https://files.pythonhosted.org/packages/ab/c6/4dfe238342ffdcec5aef1c96c457548762d33c40b45a1ab7033bb26d2ff2/torch-2.10.0-3-cp313-cp313-manylinux_2_28_x86_64.whl", hash = "sha256:80b1b5bfe38eb0e9f5ff09f206dcac0a87aadd084230d4a36eea5ec5232c115b", size = 915627275, upload-time = "2026-03-11T14:16:11.325Z" },
-    { url = "https://files.pythonhosted.org/packages/d8/f0/72bf18847f58f877a6a8acf60614b14935e2f156d942483af1ffc081aea0/torch-2.10.0-3-cp313-cp313t-manylinux_2_28_x86_64.whl", hash = "sha256:46b3574d93a2a8134b3f5475cfb98e2eb46771794c57015f6ad1fb795ec25e49", size = 915523474, upload-time = "2026-03-11T14:17:44.422Z" },
-    { url = "https://files.pythonhosted.org/packages/f4/39/590742415c3030551944edc2ddc273ea1fdfe8ffb2780992e824f1ebee98/torch-2.10.0-3-cp314-cp314-manylinux_2_28_x86_64.whl", hash = "sha256:b1d5e2aba4eb7f8e87fbe04f86442887f9167a35f092afe4c237dfcaaef6e328", size = 915632474, upload-time = "2026-03-11T14:15:13.666Z" },
-    { url = "https://files.pythonhosted.org/packages/b6/8e/34949484f764dde5b222b7fe3fede43e4a6f0da9d7f8c370bb617d629ee2/torch-2.10.0-3-cp314-cp314t-manylinux_2_28_x86_64.whl", hash = "sha256:0228d20b06701c05a8f978357f657817a4a63984b0c90745def81c18aedfa591", size = 915523882, upload-time = "2026-03-11T14:14:46.311Z" },
     { url = "https://files.pythonhosted.org/packages/78/89/f5554b13ebd71e05c0b002f95148033e730d3f7067f67423026cc9c69410/torch-2.10.0-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:3282d9febd1e4e476630a099692b44fdc214ee9bf8ee5377732d9d9dfe5712e4", size = 145992610, upload-time = "2026-01-21T16:25:26.327Z" },
     { url = "https://files.pythonhosted.org/packages/ae/30/a3a2120621bf9c17779b169fc17e3dc29b230c29d0f8222f499f5e159aa8/torch-2.10.0-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:a2f9edd8dbc99f62bc4dfb78af7bf89499bca3d753423ac1b4e06592e467b763", size = 915607863, upload-time = "2026-01-21T16:25:06.696Z" },
     { url = "https://files.pythonhosted.org/packages/6f/3d/c87b33c5f260a2a8ad68da7147e105f05868c281c63d65ed85aa4da98c66/torch-2.10.0-cp311-cp311-win_amd64.whl", hash = "sha256:29b7009dba4b7a1c960260fc8ac85022c784250af43af9fb0ebafc9883782ebd", size = 113723116, upload-time = "2026-01-21T16:25:21.916Z" },