
QuickAdd scripts

info

This page covers the QuickAdd scripts that import data into Obsidian, including book search, show tracking, health metrics, and fitness data.

Personal Data Hub series

  1. Personal Data Hub
  2. Foundations
  3. Data pipelines
  4. QuickAdd scripts - You are here
  5. Chrome extensions
  6. Plugins

Why I built my own

I tried the existing Obsidian plugins for books and media tracking. They all fell short in ways that mattered to me.

The popular book plugin required adding books through its own modal and stored tracking data in a separate database file. If the plugin disappeared, so would my reading history. That defeated the entire point of plain markdown files I could read without any tool.

Another plugin came close but could not handle multiple formats of the same book. I own several titles as both Kindle and audiobook. I needed separate purchase records while linking to the same book note.

So I wrote my own scripts. QuickAdd runs JavaScript inside Obsidian and that was all I needed.

Overview

Script             Purpose
add-book.js        Add individual books with API search
add-book-batch.js  Batch import from CSV
shows.js           Import watch history with TMDB metadata
health.js          Import daily health metrics from CSV
fitness.js         Import workouts with type linking

All scripts are iOS-compatible and track their progress, so batch imports can resume where they left off.

1. Books script

I wanted to search for a book, see the covers, pick one, and have everything created automatically. Author notes, genre notes, cover images, and the book note itself.

The books script provides:

  • Google Books API as the primary search source
  • Apple Books fallback when Google has no results
  • Visual picker with cover thumbnails and metadata
  • Duplicate detection by volume ID, ISBN, or title/author
  • Multi-format support with separate linked notes per format
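The duplicate check boils down to computing one stable key per book and comparing against existing notes. A minimal sketch of that idea; the field names (`volumeId`, `isbn13`) and key format here are illustrative assumptions, not the script's actual internals:

```javascript
// Sketch: derive a dedupe key in priority order — volume ID, then ISBN,
// then a normalised title/author pair. Field names are illustrative.
function dedupeKey(book) {
  if (book.volumeId) return `vol:${book.volumeId}`;
  if (book.isbn13) return `isbn:${book.isbn13}`;
  // Normalise: lowercase, collapse punctuation/whitespace.
  const norm = (s) => s.toLowerCase().replace(/[^a-z0-9]+/g, " ").trim();
  return `ta:${norm(book.title)}|${norm(book.author)}`;
}

function isDuplicate(existing, candidate) {
  return existing.some((b) => dedupeKey(b) === dedupeKey(candidate));
}
```

Normalising the title/author fallback is what lets "Dune!" and "dune" resolve to the same book when no ID is available.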

Usage

  1. Run add-book.js via QuickAdd
  2. Search for the book
  3. Select from visual results
  4. Fill in purchase details (store, date, format)
  5. Book note created with cover and author links

Features

Author linking - Creates author notes in People/ and links them via wikilinks. If the author already exists, it just links. No duplicates.

Genre linking - Creates genre notes in Genres/ for each genre.

AI summaries - Generates summaries via Ollama models running on my Kubernetes cluster. No cloud API, no data leaving my network. This is private data about what I read and I want to keep it that way. This was a late addition after I realised how useful it would be to have a summary without opening the book:

const OLLAMA_CONFIG = {
  url: "http://localhost:11434/api/generate",
  model: "llama3.1:latest",
  timeout: 60000,
};
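The summary call itself is just an HTTP POST to that endpoint. A sketch of building the request body; the prompt wording is illustrative, not the prompt the script actually uses:

```javascript
// Sketch: build the JSON body for Ollama's /api/generate endpoint.
// stream: false asks Ollama for one complete response instead of chunks.
function buildSummaryRequest(config, book) {
  return {
    model: config.model,
    stream: false,
    prompt: `Summarise "${book.title}" by ${book.author} in a short paragraph.`,
  };
}
```

With `stream: false`, Ollama returns the whole completion in a single JSON response, which keeps the script logic simple.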

Frontmatter schema

---
categories:
  - "[[Books]]"
created: "2026-01-15"
title: "Book Title"
author:
  - "[[Author Name]]"
genre:
  - "[[Fiction]]"
publisher: "Publisher Name"
publishDate: "2024-03-15"
totalPage: 320
isbn13: "9781234567890"
localCoverImage: "books/covers/Title - Author.jpg"
purchasedStore: "[[Amazon]]"
purchasedDate: "2026-01-01"
format: "[[kindle]]"
rating: 8
ai_summary: |
  AI-generated summary...
---

2. Shows script

The shows script was the most complex to build. Streaming services have their own naming conventions. An episode might be called "S1E5" on Prime, "Season 1, Episode 5" on Netflix, and something completely different on TMDB. Sometimes three episodes are combined into one. Sometimes they split a special into multiple parts.

The script handles watch history from multiple sources:

  • TMDB integration for movie and series metadata
  • Episode tracking with season and episode numbers
  • Watch count tracking for rewatches
  • Artwork downloads for series and season posters

How it works

Episode matching

Streaming services sometimes number episodes differently than TMDB. The script:

  1. Compares CSV episode titles with TMDB titles
  2. Uses fuzzy matching with confidence scoring
  3. Logs mismatches for manual review
  4. Supports episode mapping overrides

I spent more time on fuzzy matching than any other feature. The first version was too strict and missed obvious matches. The second was too loose and created duplicates. The current version uses a confidence threshold that catches most edge cases while flagging uncertain matches for review.
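As a rough illustration of what confidence scoring can look like, here is a token-overlap (Sørensen–Dice) score with a two-tier threshold; the script's actual metric and cutoff values may well differ:

```javascript
// Token-overlap confidence between two episode titles (Sørensen–Dice).
// Thresholds (0.8 match, 0.5 review) are illustrative, not the script's.
function titleConfidence(a, b) {
  const tokens = (s) =>
    new Set(s.toLowerCase().replace(/[^a-z0-9\s]/g, "").split(/\s+/).filter(Boolean));
  const ta = tokens(a), tb = tokens(b);
  let shared = 0;
  for (const t of ta) if (tb.has(t)) shared++;
  return (2 * shared) / (ta.size + tb.size);
}

function classifyMatch(a, b, threshold = 0.8) {
  const score = titleConfidence(a, b);
  if (score >= threshold) return "match";
  if (score >= 0.5) return "review"; // uncertain: flag for manual review
  return "no-match";
}
```

The middle "review" band is the key design choice: it trades a little manual work for not silently creating duplicates or dropping episodes.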

Frontmatter schema (series)

---
categories:
  - "[[Series]]"
title: "Series Name"
tmdbId: 12345
status: "watching"
rating: 8
totalSeasons: 4
totalEpisodes: 75
localCoverImage: "shows/covers/series/Series Name/series.jpg"
---

Frontmatter schema (watch log)

---
categories:
  - "[[Watched]]"
date: "2026-01-15"
type: "episode"
show: "[[Series Name/_series|Series Name]]"
episode: "[[Series Name/S01E05 - Episode Title]]"
season: 1
episodeNum: 5
source: "[[Prime]]"
---

3. Health script

The health script imports daily metrics from CSV files exported by the health pipeline. This was the simplest script to write because the data is already clean.

Features

  • Daily aggregation - Combines metrics, sleep, weight, mindfulness into one note per day
  • Incremental import - Only imports new data, skips existing
  • Unit handling - Parses values and stores units separately for calculations
  • Progress tracking - Resumable imports with progress files
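Unit handling can be sketched as splitting each raw CSV cell into a number and a unit string, mirroring the value/unit frontmatter pairs below. The `"<number> <unit>"` input shape is an assumption about the exported CSVs:

```javascript
// Sketch: split "7.5 hours" into { value: 7.5, unit: "hours" } so the
// value can be used in calculations and the unit stored separately.
function parseMeasure(raw) {
  const m = String(raw).trim().match(/^(-?\d+(?:\.\d+)?)\s*([a-zA-Z/%]*)$/);
  if (!m) return null; // not a recognisable measurement
  return { value: parseFloat(m[1]), unit: m[2] || null };
}
```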

Expected CSV format

Place CSVs in csv-imports/health/Health/:

  • metrics.csv - Steps, HRV, energy
  • sleep.csv - Sleep duration, quality
  • weight.csv - Weight measurements
  • mindfulness.csv - Mindfulness sessions

Frontmatter schema

---
categories:
  - "[[Health]]"
date: "2026-01-15"
steps: 8542
stepsUnit: "steps"
activeEnergy: 425
activeEnergyUnit: "kcal"
restingHeartRate: 58
restingHeartRateUnit: "bpm"
hrv: 45
hrvUnit: "ms"
sleepDuration: 7.5
sleepDurationUnit: "hours"
weight: 82.5
weightUnit: "kg"
---

4. Fitness script

The fitness script imports workout data with type linking.

Features

  • Workout type linking - Creates notes for each workout type (Running, Cycling, etc.)
  • Duration and calories - Tracks workout metrics
  • Distance tracking - For applicable workout types
  • Incremental import - Progress tracking for batch imports
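Workout type linking is mostly a matter of normalising the raw type into a note name and wrapping it in a wikilink. A sketch, assuming a title-case naming convention like the "[[Outdoor Running]]" link in the schema below:

```javascript
// Sketch: normalise a raw workout type (e.g. "outdoor_running") into
// a title-case note name and return it as a wikilink.
function workoutTypeLink(raw) {
  const name = raw
    .toLowerCase()
    .split(/[\s_]+/)
    .map((w) => w.charAt(0).toUpperCase() + w.slice(1))
    .join(" ");
  return `[[${name}]]`;
}
```

Because the same function produces the link every time, repeated imports of the same workout type always point at one note rather than creating near-duplicates.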

Frontmatter schema

---
categories:
  - "[[Workouts]]"
date: "2026-01-15"
workoutType: "[[Outdoor Running]]"
duration: 45
durationUnit: "minutes"
calories: 420
caloriesUnit: "kcal"
distance: 7.2
distanceUnit: "km"
---

5. Progress tracking

Batch imports can take time. I did not want to start over if something failed halfway through.

All scripts track progress to enable resumable imports:

{
  "processedIds": ["id1", "id2"],
  "lastProcessedIndex": 42,
  "timestamp": "2026-01-15T10:30:00Z"
}

Progress files are stored in .obsidian/ and checked before processing each item.
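The resume logic itself is small. A sketch of the check-and-update step; reading and writing the JSON file (via Obsidian's vault adapter) is omitted, and the field names match the progress file shown above:

```javascript
// Sketch: skip items whose ID is already in the progress file,
// and record each newly processed item.
function shouldProcess(progress, id) {
  return !progress.processedIds.includes(id);
}

function markProcessed(progress, id, index) {
  return {
    processedIds: [...progress.processedIds, id],
    lastProcessedIndex: index,
    timestamp: new Date().toISOString(),
  };
}
```

Returning a new object from `markProcessed` (rather than mutating) makes it easy to write the progress file atomically after each item.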

Reset progress: when all items are processed, you will be prompted to reset, or you can delete the progress file manually.

Troubleshooting

"No results from Google Books"

  • Try broader search terms
  • Check internet connection
  • Try Apple Books fallback

API rate limiting

  • Add a Google Books API key to quickadd-secrets.json
  • The key is free and has generous limits

Script not working on iOS

  • Ensure the script uses obsidian.requestUrl() rather than fetch()
  • Check for console errors
  • Verify secrets file path is vault-relative

Next: Chrome extensions covers the browser extensions for capturing watch history directly.