
notebooklm-research


Description

Full-autopilot AI research agent powered by Google NotebookLM (notebooklm-py v0.3.4). Ingests sources (URL, text, PDF, DOCX, YouTube, Google Drive), runs deep web research, asks cited questions, and generates 10 native artifact types (audio podcast, video, cinematic video, slide deck, report, quiz, flashcards, mind map, infographic, data table, study guide). Produces original content drafts via Claude, with optional publishing to social platforms via threads-viral-agent integration. Use this skill when the user mentions: NotebookLM, research with sources, create notebook, generate podcast from articles, turn research into content, trending topic research, research pipeline, source-based analysis, cited research answers, generate slides, generate quiz, make flashcards, deep web research, create infographic, compare sources, research report, study guide, source analysis, or knowledge synthesis.

SKILL.md

NotebookLM Research Agent

A fully autonomous AI research agent that ingests sources into Google NotebookLM, runs deep web research, synthesizes knowledge through cited Q&A and 9 downloadable artifact types, creates polished content drafts, and optionally publishes to social platforms.

Zero-cost research engine -- NotebookLM is free. No API keys. No per-query charges.

Authentication

NotebookLM uses RPC/HTTP calls after a one-time browser cookie auth. No browser automation per operation -- the session is stored and reused.

Session file: ~/.notebooklm/storage_state.json

Login once via the built-in CLI:

notebooklm login              # One-time browser auth, saves session
notebooklm login --check      # Verify stored session is still valid

The session persists until Google expires it (typically weeks). All scripts and the MCP server auto-load the stored session. No API keys or environment variables needed.

Architecture Overview

Core Principle: NotebookLM provides cited research, Claude creates content.

NotebookLM handles source ingestion, indexing, deep web research, cited answers, and native artifact generation (9 downloadable types). Claude uses that research output to write original articles, social posts, and reports. The pipeline is zero-cost and produces citation-backed content.

| Component | Role |
|---|---|
| notebooklm-py (v0.3.4) | Python client for NotebookLM (8 sub-APIs, 50+ methods, built-in CLI) |
| notebooklm CLI | Built-in CLI: notebooklm login, notebook, source, chat, generate, download, research, share |
| MCP Server (mcp_server/) | FastMCP server exposing 13 tools for Claude Code / Cursor / Gemini CLI |
| Wrapper CLI (scripts/) | Our higher-level wrappers: notebooklm_client.py, pipeline.py |
| LLM (Claude) | Content creator (writes original text using NotebookLM research) |
| trend-pulse (optional) | Trending topic discovery for research-to-content pipelines |
| threads-viral-agent (optional) | Social publishing for content distribution |
┌──────────────────────────────────────────────────────────────────────────────────┐
│                          NOTEBOOKLM RESEARCH AGENT                              │
├──────────────┬──────────────┬─────────────────┬─────────────────────────────────┤
│  Phase 1     │  Phase 2     │   Phase 3       │    Phase 4                      │
│  INGEST      │  SYNTHESIZE  │   CREATE        │    PUBLISH (optional)           │
│              │              │                 │                                 │
│ Sources:     │ Chat:        │ Claude writes:  │ threads-viral-agent:            │
│  URL         │  ask()       │  Articles ────→ │  → Threads                      │
│  Text        │  → cited     │  Social posts → │  → Instagram                    │
│  PDF/DOCX    │    answers   │  Newsletters  → │  → Facebook                     │
│  YouTube     │  → follow-up │  Reports ─────→ │                                 │
│  Google Drive│  → citations │                 │ Direct output:                  │
│  File upload │              │ trend-pulse     │  → Markdown file                │
│              │ Artifacts    │  → topic ideas  │  → JSON data                    │
│ Research:    │ (9 types):   │                 │  → Newsletter draft             │
│  web (fast)  │  audio       │ NotebookLM      │  → Podcast MP4                  │
│  web (deep)  │  video       │ artifacts used  │  → Video MP4                    │
│  drive       │  cinematic*  │ directly:       │  → Slide deck PDF               │
│              │  slide_deck  │  → Podcast      │  → Quiz / Flashcards            │
│ Auto-import  │  report      │  → Report       │                                 │
│ discovered   │  quiz        │  → Data table   │                                 │
│ sources      │  flashcards  │  → Infographic   │                                 │
│              │  mind_map    │                 │ * cinematic = Veo 3,            │
│              │  infographic │                 │   AI Ultra only                 │
│              │  data_table  │                 │                                 │
│              │  study_guide │                 │                                 │
└──────────────┴──────────────┴─────────────────┴─────────────────────────────────┘

8 Sub-APIs (notebooklm-py v0.3.4)

| Sub-API | Accessor | Description |
|---|---|---|
| Notebooks | client.notebooks | Create, list, get, delete, rename, describe, share |
| Sources | client.sources | Add URL/text/file/Drive, list, delete, rename, refresh, guide, fulltext, wait |
| Artifacts | client.artifacts | Generate 9 downloadable types, poll status, download, list, delete, rename, revise slides |
| Chat | client.chat | Ask with citations, follow-up, conversation history, configure persona |
| Research | client.research | Web/Drive research, poll results, import discovered sources |
| Notes | client.notes | Create, list, update, delete text notes and mind maps |
| Settings | client.settings | User settings (output language) |
| Sharing | client.sharing | Public links, user permissions, view levels |

Phase 1: INGEST -- Source Collection

Create a notebook and populate it with sources. NotebookLM accepts a wide range of source types: URLs, pasted text, PDF, DOCX, Markdown, CSV, images, YouTube videos, and Google Drive documents.

Create Notebook and Add Sources

Built-in CLI (notebooklm-py):

# Create a notebook
notebooklm notebook create "AI Agents Research"

# Add sources
notebooklm source add NOTEBOOK_ID --url "https://arxiv.org/abs/2401.12345"
notebooklm source add NOTEBOOK_ID --url "https://youtube.com/watch?v=VIDEO_ID"
notebooklm source add NOTEBOOK_ID --text "Custom Notes" --content "Full text here..."
notebooklm source add NOTEBOOK_ID --file /path/to/document.pdf

Our wrapper CLI (global command or scripts/notebooklm_client.py):

# After pip install ., use global commands:
# notebooklm-skill create --title "AI Agents Research" --sources url1 url2

# Or use scripts directly:
python3 scripts/notebooklm_client.py create \
  --title "AI Agents Research" \
  --sources \
    "https://arxiv.org/abs/2401.12345" \
    "https://blog.example.com/ai-agents-2026"

# Add more sources to existing notebook
python3 scripts/notebooklm_client.py add-source \
  --notebook NOTEBOOK_ID \
  --url "https://another-source.com/article"

# Add text source (pasted content)
python3 scripts/notebooklm_client.py add-source \
  --notebook NOTEBOOK_ID \
  --text "Full text content here..." \
  --text-title "Title of Source"

# Add file (PDF, Markdown, DOCX, CSV)
python3 scripts/notebooklm_client.py add-source \
  --notebook NOTEBOOK_ID \
  --file "/path/to/document.pdf"

# Add YouTube video (auto-extracts transcript)
python3 scripts/notebooklm_client.py add-source \
  --notebook NOTEBOOK_ID \
  --url "https://youtube.com/watch?v=VIDEO_ID"

# Add Google Drive document
python3 scripts/notebooklm_client.py add-source \
  --notebook NOTEBOOK_ID \
  --drive-id "DRIVE_FILE_ID" \
  --drive-title "Document Title"

Deep Web Research (auto-discover sources)

NotebookLM can search the web or Google Drive and auto-import relevant sources. This is one of the most powerful features -- it finds sources you did not know existed.

Built-in CLI:

notebooklm research start NOTEBOOK_ID "latest advances in AI agents"
notebooklm research poll NOTEBOOK_ID

Our wrapper CLI:

# Fast web research (quick scan, returns URLs)
python3 scripts/notebooklm_client.py research \
  --notebook NOTEBOOK_ID \
  --query "latest advances in AI agents" \
  --source web \
  --mode fast

# Deep web research (thorough analysis, returns report + URLs)
python3 scripts/notebooklm_client.py research \
  --notebook NOTEBOOK_ID \
  --query "comparison of agent frameworks" \
  --source web \
  --mode deep

# Google Drive research
python3 scripts/notebooklm_client.py research \
  --notebook NOTEBOOK_ID \
  --query "project notes on agent design" \
  --source drive

# Poll results and auto-import top discovered sources
python3 scripts/notebooklm_client.py research-poll \
  --notebook NOTEBOOK_ID \
  --import-top 5

Research modes:

| Mode | Speed | Output | Best For |
|---|---|---|---|
| fast | 10-30 sec | URL list + brief summary | Quick source discovery |
| deep | 1-5 min | Full research report (Markdown) + URLs | Thorough analysis, complex topics |

Deep research returns a comprehensive Markdown report synthesizing findings across all discovered sources -- usable as-is or as input for Claude.

Source Types Reference

| Type | Method | CLI Flag | Notes |
|---|---|---|---|
| Web URL | add_url(url) | --url | Any web page, auto-indexes content |
| YouTube | add_url(youtube_url) | --url | Auto-detects YouTube, extracts transcript |
| PDF | add_file(path) | --file | Resumable upload, large files OK |
| DOCX | add_file(path) | --file | Word documents |
| Markdown | add_file(path) | --file | .md files |
| CSV | add_file(path) | --file | Spreadsheet data |
| Text | add_text(title, content) | --text --content | Pasted/copied content |
| Google Docs | add_drive(file_id, title) | --drive-id --drive-title | Requires Drive access |
| Google Slides | add_drive(file_id, title, mime) | --drive-id --drive-title | Presentation content |
| Google Sheets | add_drive(file_id, title, mime) | --drive-id --drive-title | Spreadsheet data |
| Image | add_file(path) | --file | Image content (OCR) |

Source Limits and Wait Behavior

  • Max 50 sources per notebook
  • Sources require processing time (5-60 seconds depending on size/type)
  • Use --wait flag to block until source is ready
  • Use wait_for_sources() for batch operations
  • Source statuses: 1=processing, 2=ready, 3=error, 4=preparing
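The wait behavior above can be sketched as a small polling loop. This is illustrative only -- the real client already provides the --wait flag and wait_for_sources() -- and check_status is a hypothetical stand-in for an actual status lookup:

```python
import time

# Status codes from the list above: 1=processing, 2=ready, 3=error, 4=preparing
PROCESSING, READY, ERROR, PREPARING = 1, 2, 3, 4

def wait_until_ready(check_status, timeout=120, interval=5, sleep=time.sleep):
    """Poll check_status() until the source is ready, errors, or times out.

    check_status is any zero-argument callable returning one of the status
    codes above; sleep is injectable so tests can skip real waiting.
    """
    waited = 0
    while True:
        status = check_status()
        if status == READY:
            return True
        if status == ERROR:
            raise RuntimeError("source failed to process (status 3)")
        if waited >= timeout:
            raise TimeoutError(f"source not ready after {timeout}s")
        sleep(interval)
        waited += interval
```

The same shape works for artifact polling, where generation can take minutes rather than seconds.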

Python API (for custom scripts)

from notebooklm import NotebookLMClient

async with await NotebookLMClient.from_storage() as client:
    # Create notebook
    nb = await client.notebooks.create("AI Research")

    # Add sources
    src1 = await client.sources.add_url(nb.id, "https://example.com", wait=True)
    src2 = await client.sources.add_text(nb.id, "Notes", "Content...", wait=True)
    src3 = await client.sources.add_file(nb.id, "/path/to/doc.pdf", wait=True)

    # Deep web research
    task = await client.research.start(nb.id, "AI agents 2026", mode="deep")
    results = await client.research.poll(nb.id)  # Poll until complete
    imported = await client.research.import_sources(nb.id, task["task_id"], results["sources"][:5])

Phase 2: SYNTHESIZE -- Research & Analysis

Once sources are ingested, use NotebookLM to extract knowledge through cited Q&A and generate 9 types of downloadable native artifacts.

Ask Questions (Cited Answers)

Every answer includes source citations with exact passage references.

Built-in CLI:

notebooklm chat NOTEBOOK_ID "What are the key differences between ReAct and Reflexion?"
notebooklm chat NOTEBOOK_ID "Can you elaborate on point 3?" --conversation CONV_ID

Our wrapper CLI:

# Ask a question -- answer includes source citations
python3 scripts/notebooklm_client.py ask \
  --notebook NOTEBOOK_ID \
  --query "What are the key differences between ReAct and Reflexion agents?"

# Ask with specific sources only
python3 scripts/notebooklm_client.py ask \
  --notebook NOTEBOOK_ID \
  --query "Summarize the main findings" \
  --sources SOURCE_ID_1 SOURCE_ID_2

# Follow-up question (maintains conversation context)
python3 scripts/notebooklm_client.py ask \
  --notebook NOTEBOOK_ID \
  --query "Can you elaborate on point 3?" \
  --conversation CONVERSATION_ID

Chat Configuration

NotebookLM's chat can be configured for different interaction styles:

| Mode | Description | Use Case |
|---|---|---|
| default | Balanced answers | General research |
| learning_guide | Socratic, asks follow-up questions | Study, learning |
| concise | Short, direct answers | Quick lookups |
| detailed | Thorough, comprehensive answers | Deep analysis |

Python API:

from notebooklm.models import ChatMode, ChatResponseLength

await client.chat.set_mode(nb.id, ChatMode.LEARNING_GUIDE)
await client.chat.configure(nb.id, response_length=ChatResponseLength.LONGER)

Generate Artifacts (10 Types)

NotebookLM natively generates these artifacts from ingested sources -- 9 distinct downloadable types, plus variants such as cinematic video (a video format) and study guide (a report format). They are generated server-side by Google -- no LLM cost on our end.

Warning: infographic generation works but download is unreliable (fragile API structure parsing). Use slides instead for downloadable visual content.

Built-in CLI:

notebooklm generate audio NOTEBOOK_ID
notebooklm generate video NOTEBOOK_ID
notebooklm generate report NOTEBOOK_ID --format briefing_doc
notebooklm generate quiz NOTEBOOK_ID
notebooklm generate flashcards NOTEBOOK_ID
# notebooklm generate infographic NOTEBOOK_ID  # ⚠️ download unreliable
notebooklm generate slide-deck NOTEBOOK_ID
notebooklm generate data-table NOTEBOOK_ID
notebooklm generate mind-map NOTEBOOK_ID

Our wrapper CLI:

# 1. Audio Overview (podcast-style discussion)
python3 scripts/notebooklm_client.py generate audio \
  --notebook NOTEBOOK_ID \
  --language en \
  --format deep_dive \
  --length default \
  --instructions "Focus on practical implications"

# 2. Video Overview
python3 scripts/notebooklm_client.py generate video \
  --notebook NOTEBOOK_ID \
  --format explainer \
  --style whiteboard

# 3. Cinematic Video (Veo 3, requires AI Ultra subscription)
python3 scripts/notebooklm_client.py generate cinematic-video \
  --notebook NOTEBOOK_ID \
  --instructions "Dramatic visual storytelling"

# 4. Slide Deck
python3 scripts/notebooklm_client.py generate slide-deck \
  --notebook NOTEBOOK_ID \
  --format detailed_deck

# 5. Report (Briefing Doc / Study Guide / Blog Post / Custom)
python3 scripts/notebooklm_client.py generate report \
  --notebook NOTEBOOK_ID \
  --format briefing_doc

# 6. Study Guide (convenience shortcut for report format=study_guide)
python3 scripts/notebooklm_client.py generate report \
  --notebook NOTEBOOK_ID \
  --format study_guide

# 7. Quiz
python3 scripts/notebooklm_client.py generate quiz \
  --notebook NOTEBOOK_ID \
  --quantity standard \
  --difficulty medium

# 8. Flashcards
python3 scripts/notebooklm_client.py generate flashcards \
  --notebook NOTEBOOK_ID

# 9. Mind Map
python3 scripts/notebooklm_client.py generate mind-map \
  --notebook NOTEBOOK_ID

# 10. Infographic — ⚠️ download unreliable, use slides instead
# python3 scripts/notebooklm_client.py generate infographic \
#   --notebook NOTEBOOK_ID \
#   --orientation landscape \
#   --detail standard

# 11. Data Table
python3 scripts/notebooklm_client.py generate data-table \
  --notebook NOTEBOOK_ID \
  --instructions "Compare frameworks by features, performance, and community size"

Artifact Generation Options

Audio formats:

| Format | Duration | Style | Best For |
|---|---|---|---|
| deep_dive | 15-30 min | Thorough exploration | Complex topics |
| brief | 3-5 min | Quick overview | News updates |
| critique | 10-20 min | Critical analysis | Reviews, evaluations |
| debate | 10-20 min | Two opposing views | Controversial topics |

Audio lengths: short (~5 min), default (~10-15 min), long (~20-30 min)

Video formats: explainer, brief, cinematic (AI Ultra only)

Video styles: auto_select, classic, whiteboard, conversational, dynamic

Report formats: briefing_doc, study_guide, blog_post, custom (with --prompt)

Quiz options: quantity (fewer, standard, more), difficulty (easy, medium, hard)

Infographic options: orientation (landscape, portrait, square), detail (concise, standard, detailed)

Slide deck formats: detailed_deck, presenter_slides

Download Artifacts

Built-in CLI:

notebooklm download audio NOTEBOOK_ID output.m4a
notebooklm download video NOTEBOOK_ID output.mp4

Our wrapper CLI:

# Download audio (M4A)
python3 scripts/notebooklm_client.py download audio \
  --notebook NOTEBOOK_ID \
  --output podcast.m4a

# Download video (MP4)
python3 scripts/notebooklm_client.py download video \
  --notebook NOTEBOOK_ID \
  --output video.mp4

# Download slide deck (PDF)
python3 scripts/notebooklm_client.py download slide-deck \
  --notebook NOTEBOOK_ID \
  --output slides.pdf

# Get report content (Markdown)
python3 scripts/notebooklm_client.py download report \
  --notebook NOTEBOOK_ID \
  --output report.md

# Export quiz as JSON
python3 scripts/notebooklm_client.py download quiz \
  --notebook NOTEBOOK_ID \
  --format json \
  --output quiz.json

# Export flashcards
python3 scripts/notebooklm_client.py download flashcards \
  --notebook NOTEBOOK_ID \
  --output flashcards.json

# Export mind map as JSON
python3 scripts/notebooklm_client.py download mind-map \
  --notebook NOTEBOOK_ID \
  --output mindmap.json

# Export data table as CSV
python3 scripts/notebooklm_client.py download data-table \
  --notebook NOTEBOOK_ID \
  --output comparison.csv
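Exported quiz JSON is easy to post-process, for example into flashcard-style pairs. The schema below ({"questions": [{"question": ..., "answer": ...}]}) is an assumption for illustration -- check references/output_formats.md for the actual export structure:

```python
import json

def quiz_to_flashcards(quiz_json: str):
    """Turn an exported quiz into simple front/back flashcard pairs.

    Assumes each quiz item carries 'question' and 'answer' fields
    (hypothetical schema -- see references/output_formats.md).
    """
    data = json.loads(quiz_json)
    return [{"front": q["question"], "back": q["answer"]}
            for q in data.get("questions", [])]

# Example with inline sample data
sample = json.dumps({"questions": [
    {"question": "What does ReAct interleave?",
     "answer": "Reasoning steps and tool-using actions"},
]})
cards = quiz_to_flashcards(sample)
```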

Notebook Management

# List all notebooks
python3 scripts/notebooklm_client.py list
notebooklm notebook list

# Get notebook summary and suggested topics
python3 scripts/notebooklm_client.py describe --notebook NOTEBOOK_ID

# List sources in a notebook
python3 scripts/notebooklm_client.py sources --notebook NOTEBOOK_ID

# Get source guide (AI summary + keywords for a specific source)
python3 scripts/notebooklm_client.py source-guide --notebook NOTEBOOK_ID --source SOURCE_ID

# Get full indexed text of a source
python3 scripts/notebooklm_client.py fulltext --notebook NOTEBOOK_ID --source SOURCE_ID

# Rename notebook
python3 scripts/notebooklm_client.py rename --notebook NOTEBOOK_ID --title "New Title"

# Delete notebook
python3 scripts/notebooklm_client.py delete --notebook NOTEBOOK_ID

# Share notebook (public link)
python3 scripts/notebooklm_client.py share --notebook NOTEBOOK_ID --public
notebooklm share NOTEBOOK_ID --public
notebooklm share NOTEBOOK_ID --add user@example.com

Phase 3: CREATE -- Content Generation

Claude uses research output from Phase 2 to write original content. NotebookLM artifacts can also be used directly (reports, podcasts, slide decks).

Research-to-Article Pipeline

# Full pipeline: create notebook -> ask questions -> write article
python3 scripts/pipeline.py research-to-article \
  --sources "https://url1.com" "https://url2.com" \
  --title "AI Agent Frameworks in 2026" \
  --output article.md

# From existing notebook
python3 scripts/pipeline.py research-to-article \
  --notebook NOTEBOOK_ID \
  --topic "AI Agent Frameworks" \
  --output article.md

Research-to-Social Pipeline

# Research -> social posts for Threads/IG/FB
python3 scripts/pipeline.py research-to-social \
  --sources "https://url1.com" "https://url2.com" \
  --platform threads \
  --output posts.json

Trend-to-Content Pipeline (requires trend-pulse MCP)

# Discover trending topic -> research it -> create content
python3 scripts/pipeline.py trend-to-content \
  --geo TW \
  --count 3 \
  --platform threads \
  --output content.json

The trend-to-content pipeline:

  1. Calls trend-pulse get_trending(geo="TW", count=20) to discover hot topics
  2. Claude picks the best topic for the target niche
  3. Creates a NotebookLM notebook with relevant URLs (from trend sources)
  4. Asks research questions to build understanding
  5. Claude writes platform-specific content using cited research
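The five steps above can be sketched as a small orchestrator with every external call injected. Apart from get_trending (a trend-pulse tool named earlier), the callables below are hypothetical stand-ins for the real integrations (notebooklm_client.py, Claude):

```python
def trend_to_content(get_trending, pick_topic, create_notebook, ask, write_post,
                     geo="TW", count=20):
    """Wire the trend-to-content steps together with injected callables."""
    topics = get_trending(geo=geo, count=count)                   # 1. discover hot topics
    topic = pick_topic(topics)                                    # 2. pick the best fit
    notebook_id = create_notebook(topic["title"], topic["urls"])  # 3. ingest trend sources
    answers = [ask(notebook_id, q) for q in (                     # 4. build understanding
        f"What is driving interest in {topic['title']}?",
        f"What are the key facts about {topic['title']}?",
    )]
    return write_post(topic, answers)                             # 5. platform content
```

Because every dependency is injected, the flow can be exercised with stubs before wiring in the real MCP tools.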

Batch Digest Pipeline

# RSS feed -> notebook -> digest summary
python3 scripts/pipeline.py batch-digest \
  --rss "https://example.com/feed.xml" \
  --title "Weekly AI Digest" \
  --max-entries 15

Integration with trend-pulse MCP

When trend-pulse MCP is available, use its tools directly:

get_trending(sources="hackernews,reddit", geo="TW", count=20)
-> Pick relevant topics
-> Feed URLs into NotebookLM notebook
-> Research and create content

Research-and-Write Workflow (Manual)

User: "Research AI agent frameworks and write a blog post"

1. Create notebook with relevant URLs (from search or user-provided)
2. Run deep web research to discover additional sources
3. Import top discovered sources into the notebook
4. Ask 3-5 research questions covering key angles
5. Generate a briefing doc report for structured overview
6. Generate a data table for feature comparison
7. Claude writes article using:
   - Cited answers from step 4
   - Report summary from step 5
   - Data table from step 6
   - Original analysis and opinion
8. Output polished markdown article with source citations

Artifacts as Direct Content

Some NotebookLM artifacts are usable directly without Claude rewriting:

| Artifact | Direct Use | Claude Enhancement |
|---|---|---|
| Audio (podcast) | Distribute as-is | Generate show notes, write companion article |
| Video | Distribute as-is | Write video description, social posts |
| Report (briefing doc) | Publish as blog post | Edit tone, add opinion, localize |
| Slide deck | Present as-is (PDF) | Add speaker notes, create handout |
| Quiz | Use for training/education | Adapt for social engagement (polls) |
| Flashcards | Use for study | Convert to Threads carousel |
| Mind map | Visual overview | Narrate as article outline |
| Infographic | Share on social media | Write accompanying caption |
| Data table | Embed in articles | Narrate findings, add analysis |
| Study guide | Distribute for learning | Condense into social-sized tips |

Phase 4: PUBLISH -- Distribution (Optional)

Integration with threads-viral-agent

If the threads-viral-agent skill is available, pipe content directly:

# Research -> social post -> publish to Threads
python3 scripts/pipeline.py research-to-social \
  --notebook NOTEBOOK_ID \
  --topic "Topic" \
  --publish \
  --account cw

Direct Output Formats

Without social integration, output as files:

| Format | Use Case | Output |
|---|---|---|
| Markdown article | Blog post, website | .md file |
| Social post JSON | Manual posting | .json with platform-specific text |
| Newsletter draft | Email campaign | .md with sections |
| Report (briefing doc) | Internal use, blog | Markdown from NotebookLM |
| Podcast audio | Distribution | .m4a from NotebookLM audio artifact |
| Video | Social media, YouTube | .mp4 from NotebookLM video artifact |
| Slide deck | Presentations | .pdf from NotebookLM slide deck |
| Quiz / Flashcards | Education, training | .json structured data |
| Infographic | Social media, reports | Image from NotebookLM |
| Data table | Analysis, spreadsheets | .csv export |

Full Auto-Pilot Mode

When the user says anything like "research this topic", "create a notebook about X", "turn these articles into a post", "research pipeline", "generate a podcast from these sources", "make a quiz", execute the complete flow.

Single Run

  1. Collect source URLs from user or trend-pulse
  2. Create notebook: notebooklm_client.py create --title "Topic" --sources url1 url2
  3. Optionally run deep web research to discover more sources
  4. Wait for source processing
  5. Ask research questions: notebooklm_client.py ask --query "Q1"
  6. Generate requested artifacts (audio, video, report, quiz, slides, etc.)
  7. Claude writes content using research answers (with citations)
  8. Output article/posts/report + downloadable artifacts

Deep Research Flow

User: "Deep dive into AI coding assistants"

1. Create notebook with user-provided or searched URLs
2. Run deep web research: research --mode deep --query "AI coding assistants 2026"
3. Poll results, import top 5 discovered sources
4. Ask 3-5 probing questions
5. Generate podcast (deep_dive format, long length)
6. Generate briefing doc report
7. Generate data table (feature comparison)
8. Download all artifacts
9. Claude writes companion article using cited research
10. Output: article.md + podcast.m4a + report.md + comparison.csv

Artifact Generation Flow

User: "Generate a quiz and flashcards from my notebook"

1. Find notebook by name or ID
2. Generate quiz: generate quiz --quantity standard --difficulty medium
3. Generate flashcards: generate flashcards
4. Wait for both to complete (poll_status / wait_for_completion)
5. Download quiz: download quiz --output quiz.json
6. Download flashcards: download flashcards --output flashcards.json
7. Output both files

MCP Server

The mcp_server/ directory contains a FastMCP server that exposes NotebookLM operations as MCP tools. Works with Claude Code, Cursor, Gemini CLI, and any MCP-compatible client.

Configuration

After pip install .:

{
  "mcpServers": {
    "notebooklm": {
      "command": "notebooklm-mcp"
    }
  }
}

Or using script path:

{
  "mcpServers": {
    "notebooklm": {
      "command": "python3",
      "args": ["/path/to/notebooklm-skill/mcp_server/server.py"]
    }
  }
}

HTTP mode (for remote / multi-client access):

notebooklm-mcp --http --port 8765
{
  "mcpServers": {
    "notebooklm": {
      "url": "http://localhost:8765/mcp"
    }
  }
}

MCP Tools (13 tools)

Core notebook operations (7):

| Tool | Parameters | Description |
|---|---|---|
| nlm_create_notebook(title, sources[], text_sources?) | title, URL list, optional text list | Create notebook and add sources |
| nlm_list() | -- | List all notebooks |
| nlm_delete(notebook) | notebook ID or title | Delete a notebook (irreversible) |
| nlm_add_source(notebook, url?, text?, file_path?) | notebook + source | Add a source to existing notebook |
| nlm_ask(notebook, query) | notebook ID/title, question | Ask question, get cited answer |
| nlm_summarize(notebook) | notebook ID or title | Get comprehensive summary |
| nlm_list_sources(notebook) | notebook ID or title | List all sources in notebook |

Artifact operations (3):

| Tool | Parameters | Description |
|---|---|---|
| nlm_generate(notebook, type, lang?, instructions?) | notebook, artifact type | Generate any of 9 artifact types (infographic excluded) |
| nlm_download(notebook, type, output_path) | notebook, artifact type, output | Download artifact to file |
| nlm_list_artifacts(notebook, type?) | notebook ID, optional type filter | List artifacts in notebook |

Research operations (1):

| Tool | Parameters | Description |
|---|---|---|
| nlm_research(notebook, query, mode?) | notebook, search query, mode | Run web research (fast or deep) |

Pipeline operations (2):

| Tool | Parameters | Description |
|---|---|---|
| nlm_research_pipeline(sources[], questions[], output_format?) | URLs, questions, format | Full research-to-content pipeline |
| nlm_trend_research(geo?, count?, platform?) | region, count, platform | Trending topics to researched content |

Built-in CLI Reference (notebooklm-py)

The notebooklm CLI is installed with pip install notebooklm and mirrors the Python API directly.

Authentication

notebooklm login                              # One-time browser auth
notebooklm login --check                      # Verify stored session

Notebooks

notebooklm notebook list                      # List all notebooks
notebooklm notebook create "Title"            # Create notebook
notebooklm notebook get NOTEBOOK_ID           # Get notebook details
notebooklm notebook delete NOTEBOOK_ID        # Delete notebook
notebooklm notebook rename NOTEBOOK_ID "New"  # Rename notebook

Sources

notebooklm source list NOTEBOOK_ID                         # List sources
notebooklm source add NOTEBOOK_ID --url "https://..."      # Add URL
notebooklm source add NOTEBOOK_ID --text "T" --content "." # Add text
notebooklm source add NOTEBOOK_ID --file /path/to/doc.pdf  # Add file
notebooklm source delete NOTEBOOK_ID SOURCE_ID             # Delete source
notebooklm source guide NOTEBOOK_ID SOURCE_ID              # AI summary + keywords
notebooklm source fulltext NOTEBOOK_ID SOURCE_ID           # Full indexed text

Chat

notebooklm chat NOTEBOOK_ID "Question?"                    # Ask question
notebooklm chat NOTEBOOK_ID "Follow up" --conversation ID  # Follow-up

Artifact Generation

notebooklm generate audio NOTEBOOK_ID                      # Podcast
notebooklm generate video NOTEBOOK_ID                      # Video
notebooklm generate report NOTEBOOK_ID --format briefing_doc
notebooklm generate quiz NOTEBOOK_ID
notebooklm generate flashcards NOTEBOOK_ID
# notebooklm generate infographic NOTEBOOK_ID  # ⚠️ download unreliable
notebooklm generate slide-deck NOTEBOOK_ID
notebooklm generate data-table NOTEBOOK_ID
notebooklm generate mind-map NOTEBOOK_ID

Download

notebooklm download audio NOTEBOOK_ID output.m4a           # Download podcast
notebooklm download video NOTEBOOK_ID output.mp4           # Download video
notebooklm download slide-deck NOTEBOOK_ID output.pdf      # Download slides

Research

notebooklm research start NOTEBOOK_ID "query"              # Start research
notebooklm research poll NOTEBOOK_ID                       # Poll results

Sharing

notebooklm share NOTEBOOK_ID --public                      # Enable public link
notebooklm share NOTEBOOK_ID --add user@example.com        # Share with user

Our Wrapper CLI Reference (scripts/)

notebooklm_client.py -- Core Operations

| Subcommand | Description | Key Flags |
|---|---|---|
| create | Create notebook with sources | --title, --sources, --text-sources |
| ask | Ask question, get cited answer | --notebook, --query, --sources, --conversation |
| summarize | Summarize notebook content | --notebook |
| podcast | Generate audio overview | --notebook, --lang |
| qa | Generate Q&A pairs | --notebook, --count |
| list | List all notebooks | -- |
| delete | Delete a notebook | --notebook |
| add-source | Add source to notebook | --notebook, --url/--text/--file/--drive-id |
| describe | Get AI summary + topics | --notebook |
| sources | List sources in notebook | --notebook |
| source-guide | Get AI summary of source | --notebook, --source |
| fulltext | Get full source text | --notebook, --source |
| rename | Rename notebook | --notebook, --title |
| share | Share notebook | --notebook, --public |
| generate | Generate any artifact type | --notebook, --type, --format, --language |
| download | Download artifact | --notebook, --type, --output |
| research | Start web/drive research | --notebook, --query, --source, --mode |
| research-poll | Poll research results | --notebook, --import-top |

pipeline.py -- Higher-Level Workflows

| Workflow | Description | Key Flags |
|---|---|---|
| research-to-article | Sources -> research -> article | --sources, --title, --output |
| research-to-social | Sources -> summarize -> social post | --sources, --platform, --output |
| trend-to-content | Trends -> research -> content | --geo, --count, --platform |
| batch-digest | RSS feed -> digest summary | --rss, --title, --max-entries |

Rate Limits

These are estimated safe limits. Actual limits are undocumented and may vary. If you receive rate limit errors, wait 60 seconds and retry.

| Operation | Limit | Notes |
|---|---|---|
| Notebook creation | ~10/hour | Suggested safe rate |
| Source addition | ~20/hour | Per notebook |
| Chat questions | ~30/hour | Across all notebooks |
| Audio generation | ~5/hour | Resource-intensive, 3-10 min processing |
| Video generation | ~3/hour | Very resource-intensive, 5-15 min processing |
| Cinematic video | ~2/hour | Veo 3 rendering, AI Ultra only |
| Report generation | ~10/hour | Moderate, 10-60 sec processing |
| Quiz/Flashcards | ~10/hour | Moderate |
| Slide deck | ~5/hour | Moderate-heavy |
| Infographic | ~5/hour | Moderate-heavy |
| Data table | ~10/hour | Moderate |
| Mind map | ~10/hour | Lightweight |
| Web research (fast) | ~10/hour | Google search backend |
| Web research (deep) | ~5/hour | Extended processing |

Rate limit detection: The API returns is_rate_limited: true in GenerationStatus. The error code is "USER_DISPLAYABLE_ERROR". Wait 60 seconds and retry.
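The wait-and-retry guidance can be wrapped in a small helper. This is a sketch, not library code: `call` and `is_rate_limited` are injected stand-ins (the latter would inspect something like the is_rate_limited flag in GenerationStatus):

```python
import time

def call_with_rate_limit_retry(call, is_rate_limited, retries=3, wait=60,
                               sleep=time.sleep):
    """Retry `call` while the API reports a rate limit.

    is_rate_limited inspects the result of each attempt; the 60-second
    default wait follows the guidance above.
    """
    for attempt in range(retries + 1):
        result = call()
        if not is_rate_limited(result):
            return result
        if attempt < retries:
            sleep(wait)  # back off before the next attempt
    raise RuntimeError(f"still rate-limited after {retries} retries")
```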

Error Handling

The API provides a structured error hierarchy:

NotebookLMError (base)
+-- AuthError              # Session expired -> run `notebooklm login`
+-- RPCError               # Google RPC failures
|   +-- RPCTimeoutError    # Increase timeout
+-- SourceError
|   +-- SourceAddError     # Bad URL or file format
|   +-- SourceTimeoutError # Source took too long to process
+-- ArtifactError
|   +-- ArtifactNotReadyError  # Poll again or wait
+-- RateLimitError         # Wait 60s and retry

Common fixes:

  • AuthError: Run notebooklm login to refresh the session
  • SourceTimeoutError: Increase wait_timeout or check source URL
  • RateLimitError: Wait 60 seconds, then retry
  • ArtifactNotReadyError: Use wait_for_completion() instead of immediate download
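The fixes above map naturally onto an isinstance dispatch. The classes below are stubs mirroring the documented hierarchy for illustration only -- real scripts should import the actual exceptions from the notebooklm package:

```python
# Stub classes mirroring the documented hierarchy (illustrative only).
class NotebookLMError(Exception): pass
class AuthError(NotebookLMError): pass
class RateLimitError(NotebookLMError): pass
class ArtifactNotReadyError(NotebookLMError): pass

# Most specific advice first; checked in order.
REMEDIES = [
    (AuthError, "run `notebooklm login` to refresh the session"),
    (RateLimitError, "wait 60 seconds, then retry"),
    (ArtifactNotReadyError, "poll with wait_for_completion() before downloading"),
]

def remedy_for(exc: Exception) -> str:
    """Map a caught error to the fix suggested in the list above."""
    for exc_type, advice in REMEDIES:
        if isinstance(exc, exc_type):
            return advice
    return "unexpected error: " + repr(exc)
```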

Quick Reference: All Components

| Component | Path | Purpose |
|---|---|---|
| scripts/notebooklm_client.py | scripts/ | Core CLI (also: notebooklm-skill after pip install) |
| scripts/pipeline.py | scripts/ | Higher-level pipelines (also: notebooklm-pipeline after pip install) |
| mcp_server/server.py | mcp_server/ | FastMCP server (also: notebooklm-mcp after pip install) |
| mcp_server/tools.py | mcp_server/ | MCP tool implementations |
| scripts/auth_helper.py | scripts/ | Authentication helper |
| references/api_surface.md | references/ | Full notebooklm-py v0.3.4 API documentation (8 sub-APIs, all methods) |
| references/output_formats.md | references/ | JSON output format specifications for all API responses |
| references/pipeline_recipes.md | references/ | 7 common pipeline recipes with full command sequences |
| docs/SETUP.md | docs/ | Installation and setup guide |

Quick Reference: 10 Artifact Types

| Type | Generate | Download | Output Format | Processing Time |
|---|---|---|---|---|
| Audio (podcast) | generate audio | download audio | M4A | 3-10 min |
| Video | generate video | download video | MP4 | 5-15 min |
| Cinematic Video | generate cinematic-video | download video | MP4 | 10-20 min |
| Slide Deck | generate slide-deck | download slide-deck | PDF | 30-120 sec |
| Report | generate report | download report | Markdown | 10-60 sec |
| Study Guide | generate report --format study_guide | download report | Markdown | 10-60 sec |
| Quiz | generate quiz | download quiz | JSON (structured) | 10-30 sec |
| Flashcards | generate flashcards | download flashcards | JSON (structured) | 10-30 sec |
| Mind Map | generate mind-map | download mind-map | JSON (tree) | 5-15 sec |
| Infographic | generate infographic | download infographic | Image | ⚠️ download unreliable -- use slides |
| Data Table | generate data-table | download data-table | CSV/JSON | 10-30 sec |

Trigger Patterns

English

  • "Research X using NotebookLM"
  • "Create a notebook about X"
  • "Turn these articles into a blog post"
  • "Generate a podcast from these sources"
  • "Generate a video overview of X"
  • "Make a slide deck from this research"
  • "Create a quiz from this material"
  • "Generate flashcards for studying X"
  • "Create an infographic about X" (⚠️ use slides instead — infographic download unreliable)
  • "Build a mind map of X"
  • "Generate a data table comparing X and Y"
  • "Write a report on X"
  • "Deep research on X"
  • "Find sources about X"
  • "What does the research say about X?"
  • "Research and write about X"
  • "Summarize these sources"
  • "Research pipeline for X"
  • "Compare these sources"
  • "Turn this into a study guide"
  • "Research trending topics and write content"
  • "Create a weekly digest from these feeds"

ZH-TW

  • "用 NotebookLM 研究 X"
  • "建立一個關於 X 的筆記本"
  • "把這些文章變成部落格文章"
  • "從這些來源生成 Podcast"
  • "幫我做一個影片摘要"
  • "做一份簡報 / 投影片"
  • "從這些資料出題 / 出考卷"
  • "幫我做閃卡 / 字卡"
  • "做一張資訊圖表"
  • "畫一個心智圖"
  • "做一個比較表格"
  • "寫一份報告"
  • "深入研究 X"
  • "幫我找 X 的資料"
  • "研究 X 並寫一篇文章"
  • "幫我整理這些資料"
  • "研究流水線"
  • "比較這些來源"
  • "做一份讀書指南"
  • "研究熱門趨勢並寫內容"
  • "做每週摘要"

License: MIT (reproduced in full here, as the license is permissive) · Original repository

Details

Author: claude-world
Repository: claude-world/notebooklm-skill
License: MIT
Last updated: 2026/4/14

Source: https://github.com/claude-world/notebooklm-skill / License: MIT
