Supporting Reference 2: User Guide

This guide covers safe day-to-day operations: running batches, reviewing outputs, and publishing with control. Use it after environment setup is complete.

R2.1 Before each run

  1. Activate the correct Python environment (usually .venv313).
  2. For EXE runs, set AMIR_PYTHON to the Python 3.13 interpreter before launch.
  3. Confirm Ollama is reachable with ollama list.
  4. At app startup, verify the runtime line reports expected processor mode: [INFO] Ollama startup check: ... processor=GPU/CPU ....
  5. Confirm input folders are available and that the writable paths (data/, logs/, data/ollama_tmp/) exist and are healthy.
  6. If publishing is planned, verify FTP/MySQL credentials and endpoint availability.
Example preflight commands (PowerShell):

  Set-Location "\path\to\amir2000_image_automation"
  .\.venv313\Scripts\Activate.ps1
  ollama list
  python .\main_set.py
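The environment checks in the list above can also be scripted. The sketch below is a minimal Python preflight, not part of the shipped tooling; the helper names (`check_writable`, `ollama_reachable`) are assumptions for illustration.

```python
import os
import shutil
import subprocess

def check_writable(paths):
    """Return the subset of paths that are missing or not writable."""
    return [p for p in paths if not (os.path.isdir(p) and os.access(p, os.W_OK))]

def ollama_reachable():
    """True if the `ollama` CLI is on PATH and `ollama list` exits cleanly."""
    if shutil.which("ollama") is None:
        return False
    return subprocess.run(["ollama", "list"], capture_output=True).returncode == 0

if __name__ == "__main__":
    bad = check_writable(["data", "logs", os.path.join("data", "ollama_tmp")])
    print("writable OK" if not bad else f"fix paths: {bad}")
    print("ollama OK" if ollama_reachable() else "ollama unreachable")
```

Running this before each batch catches missing folders and a stopped Ollama server in one step instead of discovering them mid-run.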

R2.2 Start a batch

  1. Open the Multi-Set UI from main_set.py.
  2. Add one or more sets from local folders.
  3. Review subject/location/folder mapping fields before starting. The Add set action is intentionally separated to reduce mis-clicks.
  4. Subject input now auto-applies Title Case while typing (for example, "foggy bike path" becomes "Foggy Bike Path"), and spellcheck runs with a safe debounce to avoid per-keystroke UI instability.
  5. Subject normalization preserves natural trailing joiners like in, of, and the when they are part of a valid phrase.
  6. Use the resizable queue table to validate large runs. The table expands with the window and supports scrollbars for long set lists.
  7. Start the batch and monitor stage progress + ETA in UI/console.
[Screenshot] Multi-set view: set intake, subject suggestion, queue inspection, and batch start.
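Steps 4 and 5 above imply a normalization rule along the following lines. This is a hedged sketch only: the function name `normalize_subject` and the exact joiner list are assumptions, and the real rules may differ.

```python
# Short joiner words kept lowercase mid-phrase (assumed list, not the real one).
JOINERS = {"in", "of", "the", "and", "at", "on"}

def normalize_subject(text: str) -> str:
    """Title-case a subject phrase, keeping short joiner words lowercase
    except when they start the phrase."""
    words = text.strip().split()
    out = []
    for i, w in enumerate(words):
        lw = w.lower()
        if i > 0 and lw in JOINERS:
            out.append(lw)          # preserve natural joiners mid-phrase
        else:
            out.append(lw.capitalize())
    return " ".join(out)

# normalize_subject("foggy bike path")     -> "Foggy Bike Path"
# normalize_subject("sunset in the valley") -> "Sunset in the Valley"
```

A rule like this matches both behaviors described: full Title Case for ordinary words, with joiners left in their natural lowercase form inside a valid phrase.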

R2.2A Crash-safe continue flow

  1. Reopen the Multi-Set app after an interruption or crash.
  2. Click Recover crash session (next to Clear all).
  3. Confirm the recovered set/pending counts shown in the dialog.
  4. Continue normally with Start Batch.

Recovery source: data/multiset_session.json (or latest backup snapshot if present). Restore checks now validate files in both incoming and staged paths.

Add-set stability hardening: background AI subject generation now runs single-flight per selection (prevents overlapping workers while building many sets), and unexpected add-set callback failures are written to data/crash_runtime.log.
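The recovery source described above suggests a loader that prefers the session file and falls back to the newest valid backup. The sketch below assumes a `*.bak` naming pattern and the function name `load_session`; only the main path comes from the text.

```python
import glob
import json
import os

def load_session(path="data/multiset_session.json",
                 backup_glob="data/multiset_session*.bak"):
    """Load the crash session file, falling back to the newest backup
    snapshot when the main file is missing or corrupt."""
    candidates = [path] + sorted(glob.glob(backup_glob),
                                 key=os.path.getmtime, reverse=True)
    for p in candidates:
        try:
            with open(p, encoding="utf-8") as f:
                return json.load(f)
        except (OSError, json.JSONDecodeError):
            continue  # missing or unparseable: try the next snapshot
    return None
```

Trying snapshots newest-first means a partially written session file from the crash never blocks recovery as long as one earlier snapshot parses cleanly.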

R2.3 What stages run automatically

The pipeline runs these stages automatically, in this order:

  1. Validate sets
  2. Prepare DB and copy to incoming
  3. Extract EXIF and initial metadata
  4. Insert or refresh review rows
  5. AI quality scoring
  6. Resize images for Ollama
  7. Caption and keywords prefill
  8. Open review editor

See Step 2: Workflow for full technical behavior.
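The stage order above can be expressed as a simple ordered pipeline. The stage labels and the `run_pipeline` helper below are illustrative only, not the application's real function names.

```python
# Illustrative stage labels mirroring the documented order (not real names).
STAGES = [
    "validate_sets",
    "prepare_db_and_copy_incoming",
    "extract_exif_metadata",
    "insert_or_refresh_review_rows",
    "ai_quality_scoring",
    "resize_for_ollama",
    "caption_keyword_prefill",
    "open_review_editor",
]

def run_pipeline(handlers, log):
    """Run each stage in order; stop at the first failure so later
    stages never see partial state. Missing handlers default to success."""
    for name in STAGES:
        log.append(name)
        if not handlers.get(name, lambda: True)():
            return False  # abort on first failing stage
    return True
```

Keeping the order in one list makes the "run in this order" guarantee explicit: a failed stage halts the run rather than letting a later stage consume incomplete data.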

R2.4 Review editor workflow

  1. Use the Image n/x indicator to track position in long review queues.
  2. In Multi-Set, confirm spellcheck health before batch build. Status now shows Spellcheck: ON/OFF next to the Add set controls.
  3. Prioritize rows with weak quality or questionable caption/keywords.
  4. Review and edit File_Name, Caption, alt_text, Keywords, Subject, and Location.
  5. Use Generate for row-level metadata retry; it regenerates Caption, alt_text, and Keywords for the current row and persists results to DB.
  6. Generate retry is duplicate-aware for pending rows, so exact caption collisions are rejected before save.
  7. The review editor uses compact multi-line fields for Caption, alt_text, and Keywords to reduce scrolling during review edits.
  8. Primary actions are isolated (Generate, Approve) with secondary decisions grouped below (Back, Reject, Pending, Publish).
  9. Caption, alt_text, and Keywords now use the same dictionary-backed spellcheck system as Subject (shared suggestions + exceptions).
  10. Use right-click in text fields to replace a flagged word or keep/add it to local spellcheck exceptions when the term is valid.
  11. Validate quality fields (QR, QC_Status) and adjust when needed.
  12. Set each row decision explicitly: approved, pending, or rejected based on final quality.
[Screenshot] Generate flow: reruns caption, alt text, and keywords for the active row only.

[Screenshot] Post-generate review: verify regenerated text before using Approve/Reject/Pending decisions.
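The duplicate-aware retry in step 6 amounts to a guard like the sketch below. The function name is hypothetical, and the real check may normalize more aggressively (case, punctuation) than this exact-match version.

```python
def accept_regenerated_caption(new_caption, pending_captions):
    """Reject an exact caption collision against other pending rows;
    return True only when the regenerated caption is safe to persist."""
    existing = {c.strip() for c in pending_captions}
    return new_caption.strip() not in existing
```

Running this check before the save keeps two pending rows from ever reaching publish with byte-identical captions.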

R2.5 Decision rules for operators

Quality-first rule: speed should never override naming/metadata accuracy.

R2.6 Publish approved rows

  1. Trigger publish from review editor after final row decisions are complete.
  2. Uploader sends image + thumbnail assets to FTP target paths.
  3. Metadata upsert is applied to MySQL by File_Name.
  4. Local mirror DB is synchronized with canonical MySQL IDs.
  5. On success, processed queue rows are cleared from review_queue.
  6. At completion, one final publish dialog is shown; clicking OK closes the review window.
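Step 3's metadata upsert keyed on File_Name can be sketched as below. To keep the sketch runnable it uses sqlite3's `ON CONFLICT ... DO UPDATE`; against MySQL the equivalent is `INSERT ... ON DUPLICATE KEY UPDATE`. The column names are assumptions; only the table name photos_info_revamp appears in this guide.

```python
import sqlite3

def upsert_metadata(conn, file_name, caption, keywords):
    """Insert a row keyed on file_name, or update caption/keywords in place.
    (MySQL form: INSERT ... ON DUPLICATE KEY UPDATE caption = VALUES(caption), ...)"""
    conn.execute(
        """
        INSERT INTO photos_info_revamp (file_name, caption, keywords)
        VALUES (?, ?, ?)
        ON CONFLICT(file_name) DO UPDATE SET
            caption = excluded.caption,
            keywords = excluded.keywords
        """,
        (file_name, caption, keywords),
    )

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE photos_info_revamp "
             "(file_name TEXT PRIMARY KEY, caption TEXT, keywords TEXT)")
upsert_metadata(conn, "img_001.jpg", "Foggy path", "fog,path")
upsert_metadata(conn, "img_001.jpg", "Foggy bike path", "fog,bike,path")
row = conn.execute("SELECT caption FROM photos_info_revamp").fetchone()
```

Upserting by the primary-key file name is what makes republishing idempotent: a second publish of the same row updates metadata instead of creating a duplicate record.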

R2.7 Post-run validation checklist

  1. Check dist/logs/latest_run.log when running the EXE, or logs/latest_run.log when running Python directly.
  2. Confirm startup line shows expected Ollama processor mode (GPU when configured).
  3. If OLLAMA_CLOSE_ON_RUN_END=1, confirm app-started Ollama runtime closes after run completion.
  4. Check logs/db_uploader.log for upload/upsert failures.
  5. Check data/prefill_qc_last.json for duplicate/suspicious prefill scan output.
  6. Spot-check published image and thumbnail URLs.
  7. Verify expected records in MySQL photos_info_revamp.
  8. Confirm queue statuses and mirror DB state are consistent.
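Checks 1 and 2 above can be automated with a small log scan. The regex is shaped after the startup line quoted in R2.1; treat the exact log format, and the sample line below, as assumptions.

```python
import re

# Matches the startup line documented in R2.1:
#   [INFO] Ollama startup check: ... processor=GPU/CPU ...
PROC_RE = re.compile(r"Ollama startup check:.*processor=(GPU|CPU)")

def processor_mode(log_text):
    """Return 'GPU' or 'CPU' from the startup check line, or None if absent."""
    m = PROC_RE.search(log_text)
    return m.group(1) if m else None

sample = "[INFO] Ollama startup check: ready processor=GPU"
```

Pointing this at logs/latest_run.log (or dist/logs/latest_run.log for EXE runs) turns the "expected processor mode" check into a one-line pass/fail.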

R2.8 Fast recovery pointers

Detailed incident procedures are in Step 3: Runbook and Supporting Reference 3: Troubleshooting.

R2.9 Operator quality tips

R2.10 Continuation path

Continue to troubleshooting to map issue patterns to targeted fixes.

© 2026 Amir Darzi