chore(release): release v1.10.0 (#126)
Conversation
…ty and performance
…ction for improved clarity and maintainability
…n row_generator service
Cherry-pick features from main: TMDB language-aware image fetching for posters/logos/backgrounds, and translation fix that preserves item titles in catalog names instead of translating them word-by-word. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
```js
e.preventDefault();
e.stopPropagation();
const url = document.getElementById('addonUrl').textContent;
window.location.href = `stremio://${url.replace(/^https?:\/\//, '')}`;
```
Check failure: Code scanning / CodeQL
DOM text reinterpreted as HTML (High)

Copilot Autofix (AI, 27 days ago)
General fix: When reading data from the DOM and reusing it in another sensitive context (here, as a URL for navigation), validate and normalize it rather than trusting the raw string. Ensure that only expected schemes, hostnames, and path characters are allowed, and avoid interpreting arbitrary text as a protocol URL.
Best specific fix here: validate the url read from addonUrl before using it to build the stremio:// URL. A robust approach is:
- Trim the string and ensure it is non-empty.
- Normalize it as an HTTP(S) URL first via the `URL` constructor to ensure it has the expected scheme and structure.
- Extract the host and path from the normalized URL, and reconstruct the `stremio://` URL from those parts, rather than doing a naive string `replace`.
- If parsing fails or the scheme is not `http` or `https`, abort (or show an error) instead of navigating.
This preserves functionality (it still converts an HTTP(S) addon URL into a stremio:// URL) but prevents arbitrary text from becoming an unchecked custom-protocol URL. All changes are confined to app/static/js/modules/form-success.js, specifically around lines 25–27. No new imports are needed.
Concretely, we will replace:

```js
const url = document.getElementById('addonUrl').textContent;
window.location.href = `stremio://${url.replace(/^https?:\/\//, '')}`;
```

with logic that:
- Reads and trims the text;
- Uses `new URL(...)` to parse it (falling back to `https://` if no scheme is provided);
- Verifies the protocol is `http:` or `https:`;
- Builds the `stremio://` URL using `urlObj.host + urlObj.pathname + urlObj.search + urlObj.hash`;
- Handles parsing errors by not redirecting (optionally using `showError` if we want minimal behavior change; to avoid new behavior we can just silently return).
```diff
@@ -22,8 +22,31 @@
     installDesktopBtn.addEventListener('click', (e) => {
         e.preventDefault();
         e.stopPropagation();
-        const url = document.getElementById('addonUrl').textContent;
-        window.location.href = `stremio://${url.replace(/^https?:\/\//, '')}`;
+        const rawText = document.getElementById('addonUrl').textContent || '';
+        const trimmed = rawText.trim();
+        if (!trimmed) {
+            return;
+        }
+
+        let normalizedUrl;
+        try {
+            // Ensure we have an absolute HTTP(S) URL before converting to the stremio:// protocol
+            if (/^https?:\/\//i.test(trimmed)) {
+                normalizedUrl = new URL(trimmed);
+            } else {
+                normalizedUrl = new URL(`https://${trimmed}`);
+            }
+        } catch (_) {
+            // If the URL is not valid, do not attempt to navigate
+            return;
+        }
+
+        if (normalizedUrl.protocol !== 'http:' && normalizedUrl.protocol !== 'https:') {
+            return;
+        }
+
+        const addonTarget = `${normalizedUrl.host}${normalizedUrl.pathname}${normalizedUrl.search}${normalizedUrl.hash}`;
+        window.location.href = `stremio://${addonTarget}`;
     });
 }
```
```python
    username = user_info.get("user", {}).get("username") or user_info.get("username", "Unknown")
except Exception as e:
    logger.error(f"Trakt OAuth callback failed: {e}")
    return HTMLResponse(_oauth_error_page("Trakt", str(e)))
```
Check warning: Code scanning / CodeQL
Information exposure through an exception (Medium)

Copilot Autofix (AI, about 11 hours ago)
General fix: never return raw exception text to the client. Log full diagnostic details on the server, and return a generic, non-sensitive message to users.
Best fix here (without changing functionality flow): in app/api/endpoints/oauth.py, update the except block in trakt_callback so _oauth_error_page receives a static user-safe message instead of str(e). Keep server-side logging for troubleshooting; ideally log with traceback (logger.exception) to preserve debugging value.
Concretely, replace:

```python
logger.error(f"Trakt OAuth callback failed: {e}")
return HTMLResponse(_oauth_error_page("Trakt", str(e)))
```

with:

```python
logger.exception("Trakt OAuth callback failed")
return HTMLResponse(_oauth_error_page("Trakt", "An internal error occurred. Please try again."))
```
No new imports or dependencies are required.
```diff
@@ -85,9 +85,9 @@
         # Fetch username for display
         user_info = await trakt_service.get_user_info(access_token)
         username = user_info.get("user", {}).get("username") or user_info.get("username", "Unknown")
-    except Exception as e:
-        logger.error(f"Trakt OAuth callback failed: {e}")
-        return HTMLResponse(_oauth_error_page("Trakt", str(e)))
+    except Exception:
+        logger.exception("Trakt OAuth callback failed")
+        return HTMLResponse(_oauth_error_page("Trakt", "An internal error occurred. Please try again."))

     return HTMLResponse(
         _oauth_success_page(
```
```python
    username = user_info.get("user", {}).get("name") or user_info.get("account", {}).get("id", "Unknown")
except Exception as e:
    logger.error(f"Simkl OAuth callback failed: {e}")
    return HTMLResponse(_oauth_error_page("Simkl", str(e)))
```
Check warning: Code scanning / CodeQL
Information exposure through an exception (Medium)

Copilot Autofix (AI, about 11 hours ago)
To fix this safely, keep detailed exception information in server-side logs, but return a generic error message to the browser. This preserves existing functionality (OAuth failure page still shown) while preventing exception content disclosure.
Best targeted change in app/api/endpoints/oauth.py:
- In the `simkl_callback` exception handler (around lines 152–154), replace `logger.error(...)` with `logger.exception(...)` (or equivalent) so diagnostics remain available server-side with traceback.
- Replace `_oauth_error_page("Simkl", str(e))` with a static, non-sensitive message such as `"An internal error occurred during authentication. Please try again."`.
No new imports or dependencies are required.
```diff
@@ -149,9 +149,11 @@

         user_info = await simkl_service.get_user_settings(access_token, settings.SIMKL_CLIENT_ID)
         username = user_info.get("user", {}).get("name") or user_info.get("account", {}).get("id", "Unknown")
-    except Exception as e:
-        logger.error(f"Simkl OAuth callback failed: {e}")
-        return HTMLResponse(_oauth_error_page("Simkl", str(e)))
+    except Exception:
+        logger.exception("Simkl OAuth callback failed")
+        return HTMLResponse(
+            _oauth_error_page("Simkl", "An internal error occurred during authentication. Please try again.")
+        )

     return HTMLResponse(
         _oauth_success_page(
```
Code Review
This pull request introduces significant architectural changes, including the addition of OAuth support for Trakt and Simkl, a refactored token management system, and a new unified library/history model. While these changes expand functionality, several critical issues were identified: an AttributeError in the profile service due to incorrect attribute access on Pydantic models, a security vulnerability in the OAuth callback using a wildcard origin for postMessage, a missing import for the caching decorator in the TMDB service, and a breaking change to the health check endpoint's response format.
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
postMessage was using window.location.origin, which broadcasts tokens to whatever origin the popup is on at script time. Pin the target to settings.HOST_NAME so the parent window only receives the message when it lives at the trusted origin. Also drop the stray ';;'.
DISCOVERY_SETTINGS.get(pop_pref, {}) followed by 'if not params: continue' meant any popularity preference without a mapping silently filtered out every candidate. Hoist the lookup out of the per-item loop and treat empty params as 'no popularity constraint'.
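The hoisting fix described above can be sketched as follows. This is a minimal sketch: the contents of `DISCOVERY_SETTINGS`, the `filter_candidates` helper, and the `votes`/`min_votes` fields are illustrative stand-ins, not the project's actual schema.

```python
# Illustrative mapping: one preference has filter params, one does not.
DISCOVERY_SETTINGS = {
    "blockbuster": {"min_votes": 5000},
    # "hidden_gems" deliberately has no entry here
}

def filter_candidates(items, pop_pref):
    # Look the params up ONCE, outside the per-item loop.
    params = DISCOVERY_SETTINGS.get(pop_pref, {})
    if not params:
        # No mapping means "no popularity constraint": keep every candidate
        # instead of silently filtering them all out.
        return list(items)
    min_votes = params.get("min_votes", 0)
    return [item for item in items if item.get("votes", 0) >= min_votes]
```

With the old per-item `if not params: continue`, an unmapped preference dropped every item; here it degrades to a no-op filter instead.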
asyncio.create_task without holding the handle leaves the task GC-eligible. Keep handles in a set, attach a done callback that logs unhandled exceptions and discards the task on completion.
The intermediate 'filtered' local was used only as the threshold for fetching more pages and then discarded. Inline the threshold check and add a comment that the caller re-filters. Behavior unchanged.
Google Translate failures returned the original text inside the cached function, so a transient API blip would persist (untranslated) text in the LRU for 7 days. Move the cache to an inner method that raises on failure; the outer wrapper catches and falls back without caching.
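The key property this relies on is that `functools.lru_cache` never stores a call that raised. A minimal sketch of the inner/outer split (all names and the `UPSTREAM` failure toggle are illustrative, not the project's API):

```python
from functools import lru_cache

class TranslateError(Exception):
    pass

# Toggle simulating a transient upstream failure (illustrative only).
UPSTREAM = {"down": True}

@lru_cache(maxsize=1024)
def _translate_cached(text: str, lang: str) -> str:
    # Inner cached function RAISES on failure; lru_cache does not cache
    # raising calls, so a transient blip is never pinned in the cache.
    if UPSTREAM["down"]:
        raise TranslateError("upstream unavailable")
    return f"[{lang}] {text}"

def translate(text: str, lang: str) -> str:
    # Outer wrapper catches and falls back to the original text WITHOUT caching,
    # so the next call retries the real translation.
    try:
        return _translate_cached(text, lang)
    except TranslateError:
        return text
```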
Caught Exception masked all failure modes; now distinguishes HTTP status errors (with code in the log) from network/parse errors so operators can tell 'API down' from 'no items'. Behavior on the happy path is unchanged.
Generic logger.exception masked transient outages as item-not-found noise. Now logs 404 at WARNING (expected for unknown items) and 5xx at ERROR with the status code, while keeping unexpected exceptions at full traceback. Behavior on the happy path is unchanged.
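The logging policy in the two notes above can be sketched like this. The `HTTPStatusError` class here is a stand-in mirroring httpx's exception of the same name, and `classify_fetch_error` is a hypothetical helper, not the project's code:

```python
import logging

logger = logging.getLogger(__name__)

class HTTPStatusError(Exception):
    """Stand-in for httpx.HTTPStatusError, carrying the response status."""
    def __init__(self, status_code: int):
        super().__init__(f"HTTP {status_code}")
        self.status_code = status_code

def classify_fetch_error(exc: Exception) -> str:
    # 404 is expected (unknown item): WARNING.
    # 5xx is an upstream outage: ERROR with the status code.
    # Anything else is unexpected: full traceback.
    if isinstance(exc, HTTPStatusError):
        if exc.status_code == 404:
            logger.warning("Item not found (404)")
            return "warning"
        if exc.status_code >= 500:
            logger.error("Upstream error (HTTP %s)", exc.status_code)
            return "error"
    logger.error("Unexpected fetch failure", exc_info=exc)
    return "exception"
```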
Cinemeta, RPDB, and TopPosters created a new httpx.AsyncClient per request (with its own connection pool). Replaced with a lazy singleton client per service and added a close() hook so connection reuse works and the pool can be torn down on shutdown.
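The lazy-singleton-plus-close pattern can be sketched as follows. `LazyClientService` and `client_factory` are illustrative names; in the real services the factory would be `httpx.AsyncClient`:

```python
class LazyClientService:
    """Lazy singleton HTTP client per service, with a close() hook
    (a sketch of the pattern, not the project's implementation)."""

    def __init__(self, client_factory):
        self._client_factory = client_factory
        self._client = None

    @property
    def client(self):
        # Created once on first use; every request after that reuses the
        # same client and therefore the same connection pool.
        if self._client is None:
            self._client = self._client_factory()
        return self._client

    async def close(self) -> None:
        # Called on application shutdown to tear down the pool.
        if self._client is not None:
            await self._client.aclose()
            self._client = None
```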
Every Simkl history item was being recorded with watch_count=1, discarding the play-count signal the profile builder uses to weight favorites. Now extract total_plays_count for movies and watched_episodes_count for shows, falling back to 1 when missing.
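The extraction rule above is small enough to sketch directly. `extract_watch_count` is a hypothetical helper name; the two field names come from the commit message:

```python
def extract_watch_count(item: dict, media_type: str) -> int:
    # Simkl reports plays differently per media type; fall back to 1
    # (the old hard-coded value) when the count is missing or zero.
    if media_type == "movie":
        return item.get("total_plays_count") or 1
    return item.get("watched_episodes_count") or 1
```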
The error message was concatenating the exception string into the HTTP body, exposing internal traces. Server-side log keeps the full context; client gets a generic message.
Both endpoints raised 500 when an external API call failed, while the sibling validation endpoints (gemini/tmdb/trakt/simkl-sync) returned BaseValidationResponse(valid=False). Aligning so the front-end can handle 'invalid' uniformly instead of seeing a 500 for upstream hiccups.
Shallow .copy() on catalog dicts shared nested 'extra' lists and option dicts across users. Translation and sort_catalogs mutate those, so user A's manifest could leak modifications into user B's copy. Switch to copy.deepcopy().
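The leak is easy to reproduce; a minimal sketch (the catalog shape is illustrative):

```python
import copy

base_catalog = {"id": "top", "extra": [{"name": "genre", "options": ["action"]}]}

shallow = base_catalog.copy()        # dict.copy(): nested lists are SHARED
deep = copy.deepcopy(base_catalog)   # deepcopy: fully independent tree

# Mutating the shallow copy's nested list leaks into the original...
shallow["extra"][0]["options"].append("leaked")
# ...while the same mutation on the deep copy stays isolated.
deep["extra"][0]["options"].append("isolated")
```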
…pages

The success and error pages embedded provider-supplied username and exception text directly into the HTML. html.escape() the values so quirky/hostile characters can't break out of the surrounding tags.
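A minimal sketch of the escaping fix; `render_error` is a hypothetical stand-in for the project's page builders:

```python
import html

def render_error(provider: str, message: str) -> str:
    # Escape provider-supplied values before embedding them in HTML so
    # hostile characters cannot break out of the surrounding tags.
    return f"<h1>{html.escape(provider)} error</h1><p>{html.escape(message)}</p>"
```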
Returning a raw string serialized to '"Settings deleted successfully"', which clients expecting an object would break on. Now returns {status, message}.
int(runtime_str.split(' ')[0]) crashed on values like 'NA min' or a non-string. Wrap in try/except and fall back to 0 (which is then treated as 'no runtime' below).
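The runtime fix can be sketched as follows (`parse_runtime` is a hypothetical name for the helper described above):

```python
def parse_runtime(runtime_str) -> int:
    # 'NA min' raises ValueError; a non-string (e.g. None) raises
    # AttributeError on .split(). Either way, fall back to 0, which the
    # caller treats as "no runtime".
    try:
        return int(runtime_str.split(" ")[0])
    except (ValueError, AttributeError, TypeError):
        return 0
```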
When the exclude set drained the pool dry, _pick rebuilt the pool without exclusions, silently returning an item the caller had asked to exclude. Drop the fallback so callers get None and can decide what to do.
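The drained-pool behavior after dropping the fallback looks like this (a sketch; `pick` stands in for the `_pick` described above):

```python
import random

def pick(pool, exclude):
    # If the exclusions drain the pool dry, return None instead of the
    # removed fallback (rebuilding the pool without exclusions, which
    # handed back an item the caller had explicitly excluded).
    candidates = [item for item in pool if item not in exclude]
    if not candidates:
        return None
    return random.choice(candidates)
```

Callers now see `None` and can decide whether to relax the exclusions themselves.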
- L1: /health returns {status: healthy} object instead of raw string
- L4: drop dead bytes/str branch (redis_service has decode_responses=True)
- L5: catch json.JSONDecodeError separately in Gemini for better logs
- L7: remove duplicate logo URL assignment
- L2: replace len-only token check with [A-Za-z0-9]{1,32} regex
- L8: log APP_ENV/reload/port at startup for visibility
- L3: deferred (needs schema migration; explained in BUGS.md)
- L6: closed as intentional (Stremio addons need open CORS)
Both services were using raw httpx.AsyncClient with no retry on 429/5xx and inconsistent error handling. Switched both to BaseClient (which handles retry, exponential backoff, and the new safe-json wrapper). Behavior on the happy path is unchanged; transient network/upstream errors now retry instead of bubbling up immediately.
…ld on mismatch

Cached profiles carried no signal of which source they were built from, so switching watch_history_source in the configure page kept serving the old (wrong) profile until the cache happened to be invalidated for unrelated reasons. Stremio profiles were sticking around for users who connected Trakt or Simkl.
- Add 'source' field to TasteProfile (default 'stremio' for back-compat).
- Set source on every build path: Stremio, Trakt, Simkl.
- Compare cached.source vs requested watch_history_source in both catalog_service and build_and_cache_profile; invalidate on mismatch.
- Drop cached profile/watched_sets/catalog data when a user saves a new watch_history_source via /tokens.
- Promote the silent external-history-fetch fallback from warning to error, and split out a dedicated 'token missing' branch so the log clearly says which case fired.
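The mismatch check itself is small; a sketch under the assumption that profiles are attribute objects (`profile_is_stale` is a hypothetical helper, not the project's function):

```python
from types import SimpleNamespace

def profile_is_stale(cached_profile, requested_source: str) -> bool:
    # Defaulting to 'stremio' keeps cache entries built before the field
    # existed valid for users who never switched sources (back-compat).
    cached_source = getattr(cached_profile, "source", "stremio") or "stremio"
    return cached_source != requested_source
```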
The ProfileIntegration class was never imported anywhere in the codebase and would have raised ImportError on first import: it pulls in GENRE_WHITELIST_LIMIT and SmartSampler which don't exist (sampling.py exposes a free function, not a class), and imports ScoringService from the wrong path. A diverged parallel implementation of ProfileService that had drifted out of use. Removing it eliminates a maintenance trap.
Concurrent users hitting the same 429 (TMDB / Trakt rate limit) all backed off in lockstep with the previous deterministic schedule, which amplified the rate-limit hit and made things worse on the second try. Add up to 250ms of random jitter to each backoff window.
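A minimal sketch of backoff with jitter; the exponential base schedule is an assumption (the commit only specifies the up-to-250 ms jitter):

```python
import random

def backoff_delay(attempt: int, base: float = 1.0, max_jitter: float = 0.25) -> float:
    # Deterministic exponential schedule plus up to 250 ms of random jitter,
    # so concurrent clients that hit the same 429 do not retry in lockstep.
    return base * (2 ** attempt) + random.uniform(0.0, max_jitter)
```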
Library items, profiles, watched sets, library hash, and last-profile-build timestamp were all stored without a TTL — only the catalog cache had one. A user who installs once and never returns left a permanent footprint in Redis. The main token key is intentionally left untouched (TOKEN_TTL_SECONDS governs that and defaults to 'never expire').
- Add USER_CACHE_TTL_SECONDS = 90 days constant.
- Pass it as the TTL on every set() call in user_cache.
- Add redis_service.expire() helper and call it on every successful read so active users' caches stay warm; only stale installs decay.

Also drops the dead bytes-vs-str branch in get_last_profile_build_time (decode_responses=True means it's always str — same fix BUGS.md L4 already applied at line 245).
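The sliding-TTL pattern can be sketched without a real Redis. `FakeRedis` below is an in-memory stand-in for the redis service (keyed with expiry timestamps), and `get_profile` shows the refresh-on-read step:

```python
import time

USER_CACHE_TTL_SECONDS = 90 * 24 * 3600  # 90 days

class FakeRedis:
    """In-memory stand-in for the redis service, enough to show the pattern."""

    def __init__(self):
        self._data = {}  # key -> (value, expires_at)

    def set(self, key, value, ttl):
        self._data[key] = (value, time.monotonic() + ttl)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None or time.monotonic() >= entry[1]:
            self._data.pop(key, None)
            return None
        return entry[0]

    def expire(self, key, ttl):
        entry = self._data.get(key)
        if entry is not None:
            self._data[key] = (entry[0], time.monotonic() + ttl)

def get_profile(cache, user_id):
    value = cache.get(f"profile:{user_id}")
    if value is not None:
        # Sliding expiry: every successful read bumps the TTL, so active
        # users stay warm while abandoned installs decay after 90 days.
        cache.expire(f"profile:{user_id}", USER_CACHE_TTL_SECONDS)
    return value
```

With a real Redis client the same shape holds: `set(..., ex=ttl)` on write, `EXPIRE` on each successful read.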
The Simkl OAuth callback was the last spot still doing raw httpx.AsyncClient work after the H9 BaseClient migration. Two inline 'async with AsyncClient' blocks duplicated the timeout/retry behavior that simkl_service.client already provides.
- Add SimklService.exchange_code and get_user_settings, mirroring the Trakt equivalents.
- Replace the inline AsyncClient calls in simkl_callback with those methods. Drops ~30 lines.
- Remove the now-unused SIMKL_TOKEN_URL constant.
When the user's Trakt access token expired (or Simkl access was revoked), we silently fell back to the Stremio library and kept retrying with the dead token on every catalog request. The user had no way to know their connection was broken — recommendations just got noticeably worse.
- Re-raise 401/403 from Simkl get_trending/get_item_details/get_history so callers can distinguish 'auth dead' from 'item not found'.
- In _build_from_external_source, separate HTTPStatusError handling from generic exceptions; on 401/403 wipe the bad access (and refresh) tokens from stored credentials so the configure page shows the provider as disconnected and the user can reconnect.
- Logs now identify the failure mode explicitly: 'token rejected', 'token missing', or generic fetch failure with the exception class.
Trakt access tokens expire ~3 months after issue. The codebase had TraktService.refresh_token defined but never called, so once a user's token expired the catalog silently fell back to Stremio with no recovery path — the user had to notice degraded recommendations and manually reconnect. Long-term users were the ones most affected.
- Capture expires_in/created_at from the Trakt token-exchange response in the OAuth callback; compute absolute trakt_token_expires_at and pass through the postMessage payload.
- New trakt_token_expires_at field on TokenRequest and UserSettings; round-tripped via /tokens/stremio-identity and the configure form.
- Proactive refresh: if the access token is within 7 days of expiry, refresh before calling get_history. Persist the new tokens via token_store.update_user_data.
- Reactive refresh: on a 401 from get_history, attempt one refresh + retry; if the refresh itself fails, fall through to the existing revoked-token cleanup so the configure page shows Trakt as disconnected on the user's next visit.
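The expiry arithmetic behind the proactive refresh is worth pinning down; a sketch with hypothetical helper names (`token_expires_at`, `needs_refresh`):

```python
import time

REFRESH_WINDOW_SECONDS = 7 * 24 * 3600  # refresh when within 7 days of expiry

def token_expires_at(created_at: float, expires_in: float) -> float:
    # Absolute expiry computed from the token-exchange response fields
    # (Trakt returns created_at and expires_in as epoch seconds / seconds).
    return created_at + expires_in

def needs_refresh(expires_at, now=None) -> bool:
    if expires_at is None:
        # No expiry on record (e.g. tokens stored before this change):
        # skip proactive refresh; the reactive 401 path still covers us.
        return False
    now = time.time() if now is None else now
    return (expires_at - now) <= REFRESH_WINDOW_SECONDS
```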