
chore: bump bulletin-deploy to 0.7.14#150

Merged
UtkarshBhardwaj007 merged 1 commit into main from chore/bump-bulletin-deploy-0.7.14 on May 9, 2026

Conversation

UtkarshBhardwaj007 (Member) commented on May 9, 2026

Summary

  • Bumps bulletin-deploy from 0.7.13 to 0.7.14. Public surface (deploy, DeployContent, DeployOptions, DeployResult) is unchanged; verified via a diff of upstream src/index.ts.
  • 0.7.14 hardens the chunked-storage path against WS-halt allocation storms (issues paritytech/bulletin-deploy#142, #216, #287):
    • Per-deploy retry-budget circuit breaker. Bails with Retry budget exhausted: … if more than 5 recovery events land within 30s. Tunable via BULLETIN_RETRY_BUDGET_MAX / BULLETIN_RETRY_BUDGET_WINDOW_MS.
    • Recovery batch-size drop: 2-in-flight → 1-in-flight after the first reconnect, halving peak in-flight bytes during the recovery path.
    • Synchronous onStatusChanged(CLOSE|ERROR) hook that destroys the PAPI client before its activeBroadcasts.forEach loop can mutate-while-iterating into OOM.
    • isConnectionError extended to match ChainHead disjointed (PAPI's chainHead-state-inconsistent shape after our destroy mid-subscription) so retries route through doReconnect.
    • deploy.status="ok" set on the success path (we already get error / killed from the 0.7.13 series).
    • Spurious new Uint8Array(fs.readFileSync(...)) double-wrap removed (perf only).
  • Lockfile diff is minimal: bulletin-deploy version + integrity, plus a transitive @scure/base 2.0.0 → 2.2.0. @polkadot-apps/* count remains 0.
  • CLAUDE.md invariant for the explicit bulletin-deploy pin updated to reflect 0.7.14.
  • Changeset added (patch).
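For reviewers, the retry-budget circuit breaker described above can be sketched roughly as follows. This is a minimal illustration, not the upstream bulletin-deploy code: the env var names (BULLETIN_RETRY_BUDGET_MAX / BULLETIN_RETRY_BUDGET_WINDOW_MS) and the error prefix come from this PR, while the RetryBudget class and recordRecovery method are hypothetical names.

```typescript
// Hypothetical sketch of a per-deploy retry-budget circuit breaker.
// Env var names come from the PR description; the class itself is
// illustrative, not the upstream implementation.
const MAX = Number(process.env.BULLETIN_RETRY_BUDGET_MAX ?? 5);
const WINDOW_MS = Number(process.env.BULLETIN_RETRY_BUDGET_WINDOW_MS ?? 30_000);

class RetryBudget {
  private events: number[] = [];

  // Record one recovery event; bail once more than MAX events land
  // inside the sliding WINDOW_MS window.
  recordRecovery(now = Date.now()): void {
    // Drop events that have aged out of the window.
    this.events = this.events.filter((t) => now - t < WINDOW_MS);
    this.events.push(now);
    if (this.events.length > MAX) {
      throw new Error(
        `Retry budget exhausted: ${this.events.length} recovery events within ${WINDOW_MS}ms`,
      );
    }
  }
}
```

One budget instance is scoped to a single deploy, so a long-running process with many sequential deploys never trips on accumulated history.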

Test plan

  • pnpm install — clean resolve, no peer-dep regressions
  • pnpm exec tsc --noEmit — clean
  • pnpm test — 486/486 pass
  • grep '@polkadot-apps/' pnpm-lock.yaml — 0 hits
  • CI green (lint/format, unit, E2E PR matrix)

Internal hardening of the chunked-storage path against WS-halt allocation
storms (issues #142/#216/#287): per-deploy retry-budget circuit breaker
(5 events / 30s, tunable via BULLETIN_RETRY_BUDGET_{MAX,WINDOW_MS}),
recovery batch-size drop (2->1 in flight after first reconnect), and a
synchronous onStatusChanged(CLOSE|ERROR) hook that destroys the PAPI
client before its activeBroadcasts.forEach loop can mutate-while-iterating
into OOM. Public surface (deploy / DeployContent / DeployOptions /
DeployResult) is unchanged.
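For illustration, the synchronous close/error hook mentioned in the summary might be wired up as below. A sketch under assumptions: onStatusChanged and destroy mirror the names used in this PR, but PapiLikeClient and wireStatusHook are hypothetical helpers, not PAPI's actual API surface.

```typescript
// Hypothetical sketch of the synchronous CLOSE/ERROR hook. Destroying
// the client inside the status callback (rather than in a queued
// microtask) means no broadcast loop gets a chance to start iterating
// and then mutate its collection mid-loop.
type Status = "OPEN" | "CLOSE" | "ERROR";

interface PapiLikeClient {
  destroy(): void;
}

function wireStatusHook(
  client: PapiLikeClient,
  onStatusChanged: (cb: (s: Status) => void) => void,
): void {
  onStatusChanged((status) => {
    if (status === "CLOSE" || status === "ERROR") {
      // Synchronous teardown: runs before any pending broadcast work.
      client.destroy();
    }
  });
}
```
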
@socket-security

Review the following changes in direct dependencies. Learn more about Socket for GitHub.

| Diff | Package | Supply Chain Security | Vulnerability | Quality | Maintenance | License |
| --- | --- | --- | --- | --- | --- | --- |
| Updated | npm/bulletin-deploy@0.7.13 ⏵ 0.7.14 | 78 +1 | 100 | 100 +1 | 96 +1 | 100 |

View full report


github-actions Bot commented May 9, 2026

Dev build ready — try this branch:

curl -fsSL https://raw.githubusercontent.com/paritytech/playground-cli/main/install.sh | VERSION=dev/chore/bump-bulletin-deploy-0.7.14 bash

UtkarshBhardwaj007 merged commit 6f43339 into main on May 9, 2026
17 of 18 checks passed
UtkarshBhardwaj007 deleted the chore/bump-bulletin-deploy-0.7.14 branch on May 9, 2026 at 11:51

github-actions Bot commented May 9, 2026

E2E Test Pass · ❌ FAIL

Tag: e2e-ci-pr · Branch: chore/bump-bulletin-deploy-0.7.14 · Commit: 38326d7 · Run logs

| Cell | Result | Time |
| --- | --- | --- |
| pr-init-session | ✅ PASS | 1m49s |
| pr-mod | ❌ FAIL | 2m57s |
| pr-preflight | ✅ PASS | 2m05s |
| pr-deploy-frontend | ✅ PASS | 6m39s |
| pr-deploy-foundry | ✅ PASS | 1m43s |
| pr-deploy-cdm | ✅ PASS | 3m09s |
| pr-install | ✅ PASS | 0m47s |
| ${{ matrix.cell }} | ⏭️ SKIP | 0m00s |
| ${{ matrix.cell }} | ⏭️ SKIP | 0m00s |

Sentry traces: view spans for this run

