# [ZEPPELIN-6411] Semantic search for Zeppelin #5218
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
base: master
Changes from all commits: `425d7c3`, `232d3d0`, `5b3e18b`, `0562c99`, `641daa4`, `e5cdf16`, `786ac9a`, `6c63b5f`
**New file: `bin/install-search-model.sh`** (+49 lines)

```bash
#!/usr/bin/env bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Downloads the sentence-transformer model required for semantic search.
# Run this once before starting Zeppelin with zeppelin.search.semantic.enable=true.
#
# Usage: bin/install-search-model.sh [INDEX_PATH]
#   INDEX_PATH defaults to /tmp/zeppelin-index (matches zeppelin.search.index.path)

set -euo pipefail

MODEL_NAME="all-MiniLM-L6-v2"
MODEL_REVISION="c9745ed1d9f207416be6d2e6f8de32d1f16199bf"
BASE_URL="https://huggingface.co/sentence-transformers/${MODEL_NAME}/resolve/${MODEL_REVISION}"

INDEX_PATH="${1:-/tmp/zeppelin-index}"
MODEL_DIR="${INDEX_PATH}/models/${MODEL_NAME}"

mkdir -p "${MODEL_DIR}"

download() {
  local url="$1" dest="$2"
  if [ -f "${dest}" ]; then
    echo "Already exists: ${dest}"
    return
  fi
  echo "Downloading ${url} ..."
  curl -fSL --connect-timeout 30 --max-time 300 -o "${dest}.tmp" "${url}"
  mv "${dest}.tmp" "${dest}"
  echo "Saved: ${dest}"
}

download "${BASE_URL}/onnx/model.onnx" "${MODEL_DIR}/model.onnx"
download "${BASE_URL}/tokenizer.json" "${MODEL_DIR}/tokenizer.json"

echo "Model installed to ${MODEL_DIR}"
```
**New file: `docs/embedding-search.md`** (+205 lines)

<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# ZEPPELIN-6411: Semantic Search for Notebooks using Sentence Embeddings

## Summary

Add `EmbeddingSearch` — a new `SearchService` implementation that enables natural language
search across Zeppelin notebooks using ONNX-based sentence embeddings. This is a drop-in
replacement for `LuceneSearch` that understands meaning, not just keywords.

**Example**: Searching "yesterday's spending" finds paragraphs containing
`SELECT sum(cost) FROM analytics.daily_sales WHERE date = current_date - interval '1' day`,
something keyword search cannot do (`LuceneSearch` returns 0 results).

## Motivation

Zeppelin's current search (`LuceneSearch`) uses keyword-based full-text search with
Lucene's `StandardAnalyzer`. This has several limitations for notebook search:

1. **No semantic understanding** — "yesterday's spend" won't find `current_date - 1`
2. **Poor SQL tokenization** — `StandardAnalyzer` breaks on underscores and dots in
   table names like `analytics_db.daily_sales`
3. **No output indexing** — query results (table data, text output) are not searchable
4. **Exact match only** — users must guess the exact terms used in notebooks

For teams with hundreds or thousands of notebooks (common in data/analytics teams),
finding the right query becomes a significant productivity bottleneck.

## Architecture

```
SearchService (abstract)
├── LuceneSearch     (existing, keyword-based)
├── EmbeddingSearch  (new, semantic)
└── NoSearchService  (existing, no-op)

┌──────────────────────────────────────────────────────────┐
│ EmbeddingSearch                                          │
│                                                          │
│ ┌──────────────┐  ┌──────────────┐  ┌──────────────────┐ │
│ │ HuggingFace  │  │ ONNX Runtime │  │ In-Memory Index  │ │
│ │ Tokenizer    │→ │ Inference    │→ │ float[][] + meta │ │
│ │ (DJL)        │  │ (CPU)        │  │ ConcurrentHashMap│ │
│ └──────────────┘  └──────────────┘  └────────┬─────────┘ │
│                                              │           │
│ Two-phase query:                             │           │
│  1. Embed query → cosine sim → find tables   │           │
│  2. Re-rank with table boost → top-20        │           │
│                                              ▼           │
│ Index: text + title + output + tables  embedding_index.bin
│                        (persisted to disk, versioned)    │
└──────────────────────────────────────────────────────────┘
```

### Model

- **all-MiniLM-L6-v2**: 384-dimensional sentence embeddings
- 86MB ONNX model (quantized version available at 22MB)
- Downloaded on first use to `zeppelin.search.index.path/models/`
- Runs on CPU via ONNX Runtime (~5ms per paragraph)
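For context, sentence-transformers models such as all-MiniLM-L6-v2 derive the sentence vector by mean-pooling the per-token embeddings produced by the transformer and L2-normalizing the result. A minimal Java sketch of that pooling step (class and method names here are illustrative, not the PR's actual API):

```java
// Mean-pool per-token embeddings into one sentence vector, then normalize
// to unit length so cosine similarity reduces to a plain dot product.
public final class Pooling {

    static float[] meanPoolAndNormalize(float[][] tokenEmbeddings) {
        int dim = tokenEmbeddings[0].length;
        float[] out = new float[dim];
        for (float[] token : tokenEmbeddings) {
            for (int i = 0; i < dim; i++) {
                out[i] += token[i];
            }
        }
        double norm = 0.0;
        for (int i = 0; i < dim; i++) {
            out[i] /= tokenEmbeddings.length; // mean over tokens
            norm += out[i] * out[i];
        }
        norm = Math.sqrt(norm);
        for (int i = 0; i < dim; i++) {
            out[i] /= norm;                   // unit length
        }
        return out;
    }
}
```

The 384 floats stored per paragraph (see the Index section) are the output of this step.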
### Index

- In-memory `ConcurrentHashMap<String, IndexEntry>` with `ReadWriteLock`
- Each entry stores: embedding (384 floats), notebook name, paragraph text,
  title, extracted SQL table names, and paragraph output
- 10K paragraphs ≈ 15MB RAM, 50K paragraphs ≈ 75MB RAM
- Persisted as a versioned binary file (`embedding_index.bin`, currently v3)
- Brute-force cosine similarity: < 50ms for 50K paragraphs
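Because the stored vectors are unit length, the brute-force scan is one dot-product pass over the map values. An illustrative reimplementation (not the PR's actual code; the map shape is an assumption):

```java
import java.util.*;
import java.util.stream.*;

public final class BruteForceSearch {

    // For L2-normalized vectors, the dot product equals cosine similarity.
    static float cosine(float[] a, float[] b) {
        float dot = 0f;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
        }
        return dot;
    }

    // Score every indexed embedding against the query vector and
    // return the ids of the k best matches, best first.
    static List<String> topK(Map<String, float[]> index, float[] query, int k) {
        return index.entrySet().stream()
                .sorted((x, y) -> Float.compare(cosine(query, y.getValue()),
                                                cosine(query, x.getValue())))
                .limit(k)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }
}
```

A linear pass like this over 50K vectors of 384 floats is only a few million multiply-adds per query, which is why exact search stays well under the 50ms budget quoted above.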
### What gets indexed (vs. LuceneSearch)

| Content | LuceneSearch | EmbeddingSearch |
|---------|:---:|:---:|
| Paragraph text | ✓ | ✓ |
| Paragraph title | ✓ | ✓ |
| Notebook name | ✓ | ✓ (in embedding context) |
| Paragraph output (TABLE, TEXT) | ✗ | ✓ |
| SQL table names (FROM/JOIN) | ✗ | ✓ (extracted + boosted) |
| Interpreter prefix stripped | ✗ | ✓ |
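The FROM/JOIN extraction in the last two rows is not spelled out in this document; a simplified regex-based sketch of the idea (real SQL would additionally require handling subqueries, quoted identifiers, and comments):

```java
import java.util.*;
import java.util.regex.*;

public final class TableExtractor {

    // Match an identifier (optionally schema-qualified) following FROM or JOIN.
    private static final Pattern TABLE_REF = Pattern.compile(
        "\\b(?:FROM|JOIN)\\s+([A-Za-z_][A-Za-z0-9_]*(?:\\.[A-Za-z_][A-Za-z0-9_]*)?)",
        Pattern.CASE_INSENSITIVE);

    // Collect distinct lowercased table references in order of appearance.
    static Set<String> extractTables(String sql) {
        Set<String> tables = new LinkedHashSet<>();
        Matcher m = TABLE_REF.matcher(sql);
        while (m.find()) {
            tables.add(m.group(1).toLowerCase());
        }
        return tables;
    }
}
```

Extracting table names as plain strings also sidesteps the `StandardAnalyzer` problem from the Motivation section: `analytics_db.daily_sales` stays intact instead of being split on the dot and underscore.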
### Two-Phase Search

1. **Phase 1 — Table Discovery**: Run cosine similarity, collect SQL table names
   from the top-20 results, weighted by rank
2. **Phase 2 — Table Boost**: Re-score results, boosting paragraphs that reference
   the discovered tables (+0.05 per matching table)

This helps queries like "click funnel analysis" surface all paragraphs that query
the same tables, even if their SQL text is very different.
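The two phases can be sketched as below. The +0.05 constant comes from the description above; the `Hit` shape is illustrative, and phase 1 here collects tables unweighted, whereas the PR weights them by rank:

```java
import java.util.*;

public final class TwoPhaseRanker {

    static final class Hit {
        final String id;
        final float score;
        final Set<String> tables;
        Hit(String id, float score, Set<String> tables) {
            this.id = id; this.score = score; this.tables = tables;
        }
    }

    // Phase 1: collect table names referenced by the current top-20 hits.
    // Phase 2: add +0.05 per discovered table a hit references, then re-sort
    // so paragraphs sharing those tables move up together.
    static List<Hit> rerank(List<Hit> hits) {
        List<Hit> byScore = new ArrayList<>(hits);
        byScore.sort((a, b) -> Float.compare(b.score, a.score));

        Set<String> discovered = new HashSet<>();
        for (Hit h : byScore.subList(0, Math.min(20, byScore.size()))) {
            discovered.addAll(h.tables);
        }

        List<Hit> boosted = new ArrayList<>();
        for (Hit h : hits) {
            int matches = 0;
            for (String t : h.tables) {
                if (discovered.contains(t)) matches++;
            }
            boosted.add(new Hit(h.id, h.score + 0.05f * matches, h.tables));
        }
        boosted.sort((a, b) -> Float.compare(b.score, a.score));
        return boosted;
    }
}
```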
## Configuration

Disabled by default. Enable with a single property:

```xml
<!-- In zeppelin-site.xml -->
<property>
  <name>zeppelin.search.semantic.enable</name>
  <value>true</value>
</property>
```

Requires `zeppelin.search.enable = true` (already the default).

### Configuration matrix

| `search.enable` | `search.semantic.enable` | Result |
|:---:|:---:|---|
| true | false (default) | LuceneSearch (existing behavior) |
| true | true | EmbeddingSearch (semantic) |
| false | any | NoSearchService |
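The matrix maps directly onto the service selection at startup. A hedged sketch of that logic (class names are returned as strings purely for illustration; the real wiring in `ZeppelinServer` goes through Zeppelin's configuration and dependency-injection machinery):

```java
public final class SearchServiceSelector {

    // search.enable gates search entirely; search.semantic.enable
    // then picks between the keyword and embedding implementations.
    static String select(boolean searchEnable, boolean semanticEnable) {
        if (!searchEnable) {
            return "NoSearchService";
        }
        return semanticEnable ? "EmbeddingSearch" : "LuceneSearch";
    }
}
```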
## Changes

### New files

- `zeppelin-zengine/.../search/EmbeddingSearch.java` — Core implementation (~700 lines)
- `zeppelin-zengine/.../search/EmbeddingSearchTest.java` — 11 tests including semantic validation
- `docs/embedding-search.md` — This document

### Modified files — Backend

- `zeppelin-zengine/pom.xml` — Add `onnxruntime` and `djl-tokenizers` dependencies
- `zeppelin-zengine/.../conf/ZeppelinConfiguration.java` — Add `ZEPPELIN_SEARCH_SEMANTIC_ENABLE`
- `zeppelin-server/.../server/ZeppelinServer.java` — Wire `EmbeddingSearch` based on config
- `NOTICE` — Attribution for ONNX Runtime and DJL

### Modified files — Frontend

- `zeppelin-web-angular/.../result-item/` — Render search results with separate
  code block, output block, and table-name display (replaces the Monaco editor)
- `zeppelin-web/src/app/search/` — The same improvements for the Classic UI
- Various TypeScript build fixes (`tsconfig`, type annotations)

### Dependencies added

- `com.microsoft.onnxruntime:onnxruntime:1.18.0` (~50MB, Apache 2.0 compatible)
- `ai.djl.huggingface:tokenizers:0.28.0` (~2MB, Apache 2.0; JNA excluded to
  avoid a version conflict with Zeppelin's existing JNA 4.1.0)

## Search Result Display

Both the Angular and Classic UIs now render search results with:

- **Code block**: SQL/Python code with syntax-appropriate styling
- **Output block**: paragraph execution results (table data, text output)
- **Table names**: extracted SQL table names highlighted with a 📊 icon
- **Language badge**: `sql`, `python`, `md`, etc.

## Design Decisions

### Why ONNX Runtime instead of a Java ML library?

ONNX Runtime is the standard inference engine for transformer models. It supports
the exact same model files used by Python (HuggingFace, ChromaDB, etc.), ensuring
embedding compatibility.

### Why brute-force instead of HNSW/ANN?

At Zeppelin's scale (typically < 50K paragraphs), brute-force cosine similarity
on normalized vectors is fast enough (< 50ms), exact (no approximation error),
and adds zero complexity.

### Why download the model on first use instead of bundling it?

The ONNX model is 86MB. Bundling it would bloat the Zeppelin distribution.
Downloading on first use keeps the distribution lean and lets users swap models.

### Why not use Lucene's vector search (available since 9.0)?

Zeppelin uses Lucene 8.7.0. Upgrading to 9.x is a separate, larger effort.

## Testing

```bash
# Run embedding search tests (requires model download, ~86MB first time)
ZEPPELIN_EMBEDDING_TEST=true mvn test -pl zeppelin-zengine \
  -Dtest=EmbeddingSearchTest

# Run existing Lucene tests (should still pass, no changes)
mvn test -pl zeppelin-zengine -Dtest=LuceneSearchTest
```

### Key tests

- `semanticSearchFindsRelatedConcepts` — validates that "yesterday's spending"
  ranks a SQL spend query above an unrelated user-count query
- `newParagraphIsLiveIndexed` — validates that newly added paragraphs are
  immediately searchable without a restart

## Future Work

- [ ] Quantized model support (22MB INT8 vs 86MB FP32)
- [ ] Hybrid search: combine embedding similarity with keyword matching
- [ ] Configurable model URL for air-gapped environments
- [ ] Batch embedding during the initial index rebuild
- [ ] Similarity score display in search results
**Modified file** (diff excerpt):

```diff
@@ -146,7 +146,7 @@ export class CredentialComponent {
     this.credentialService.getCredentials().subscribe(data => {
       const controls = [...Object.entries(data.userCredentials)].map(e => {
         const entity = e[0];
-        const { username, password } = e[1];
+        const { username, password } = e[1] as any;
         return this.fb.group({
           entity: [entity, [Validators.required]],
           username: [username, [Validators.required]],
```

> **Contributor** left a suggested change on the `as any` line (suggestion content not shown).
**Modified file** (search result item template, diff excerpt):

```diff
@@ -12,11 +12,18 @@
 <nz-card [nzTitle]="titleTemplateRef">
   <ng-template #titleTemplateRef>
-    <a [routerLink]="routerLink" [queryParams]="queryParams">{{ displayName }}</a>
+    <div class="result-header">
+      <a [routerLink]="routerLink" [queryParams]="queryParams">{{ displayName }}</a>
+      <span *ngIf="interpreter" class="badge" [ngClass]="interpreter">{{ interpreter }}</span>
+    </div>
   </ng-template>
-  <zeppelin-code-editor
-    [style.height.px]="height"
-    [nzEditorOption]="editorOption"
-    (nzEditorInitialized)="initializedEditor($event)"
-  ></zeppelin-code-editor>
+  <div *ngIf="codeText" class="code-block">
+    <pre>{{ codeText }}</pre>
+  </div>
+  <div *ngIf="outputText" class="output-block">
+    <pre>{{ outputText }}</pre>
+  </div>
+  <div *ngIf="tablesText" class="tables-block">
+    📊 {{ tablesText }}
+  </div>
 </nz-card>
```

> **Contributor**, on lines +26 to +28: The CI (run-playwright-e2e-tests) is failing due to lint issues, so this part needs to be fixed. A suggested change was attached (content not shown).
> **Contributor**, on `bin/install-search-model.sh`:
>
> 🔒 Verify the SHA256 of downloaded files. The script pins a HuggingFace commit SHA, which protects against repository content drift, but it does not verify the bytes received. ORT 1.18.x has had RCE/DoS CVEs around model deserialization, so the following scenarios remain exploitable:
>
> Suggest hardcoding the expected SHA256 of `model.onnx` and `tokenizer.json` and verifying after download:
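A sketch of the suggested check (the digest values would be hardcoded next to `MODEL_REVISION`; no real hashes are shown here):

```shell
# verify_sha256 FILE EXPECTED_DIGEST
# Compares the file's SHA256 against a hardcoded value and deletes the
# file on mismatch, so a later run re-downloads it instead of using it.
verify_sha256() {
  local file="$1" expected="$2" actual
  actual="$(sha256sum "${file}" | awk '{print $1}')"
  if [ "${actual}" != "${expected}" ]; then
    echo "Checksum mismatch for ${file}: expected ${expected}, got ${actual}" >&2
    rm -f "${file}"
    return 1
  fi
  echo "Checksum OK: ${file}"
}
```

In the install script this would run immediately after each `download` call, e.g. `verify_sha256 "${MODEL_DIR}/model.onnx" "${EXPECTED_MODEL_SHA256}"`, where `EXPECTED_MODEL_SHA256` is a placeholder name for the hardcoded digest.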