refactor(config): replace manual parsing with ConfigBeanFactory binding#6615
Conversation
HOCON null values cannot be bound by ConfigBeanFactory to String fields. Use empty string instead, which has the same effect (triggers the automatic IP detection fallback in java-tron). Related: tronprotocol/java-tron#6615
Replace ~850 lines of manual if/hasPath/getXxx blocks in Args.applyConfigParams() with ConfigBeanFactory.create() automatic binding. Each config.conf domain now maps to a typed Java bean class (VmConfig, BlockConfig, CommitteeConfig, MetricsConfig, NodeConfig, EventConfig, StorageConfig, etc.).
- Delete ConfigKey.java (~100 string constants), config.conf is sole source of truth
- Migrate Storage.java static getters to read from StorageConfig bean
- Add unit tests for all config bean classes
- Migrate DynamicArgs to use bean binding
Move all default values from scattered bean field initializers into reference.conf, making it the single source of truth for config defaults. Expose config beans as static singletons for convenient access.
- Add comprehensive reference.conf with defaults for all config domains
- Auto-bind discovery, PBFT, and list fields in NodeConfig
- Expose config beans as static singletons (NodeConfig.getInstance() etc.)
- Move postProcess logic into bean classes
- Fix test configs (external.ip=null -> empty string)
- Document manual-read keys with reasons in reference.conf
…Config
Move default/defaultM/defaultL LevelDB option reading into StorageConfig, so Storage no longer touches Config directly.
- Add DbOptionOverride with nullable boxed types for partial overrides
- Fix cacheSize type from int to long to match LevelDB Options API
- Remove dead externalIp(Config) bridge method
- Remove setIfNeeded and Config field from Storage
- Replace null values (discovery.external.ip, trustNode) with empty string before ConfigBeanFactory binding for external config compat (system-test uses "external.ip = null" which ConfigBeanFactory cannot bind to String fields; recommend updating system-test to use "" instead)
- Fix floating point comparison with Double.compare (java:S1244)
- Extract duplicated string literals into constants/variables (java:S1192)
| @@ -79,7 +80,11 @@ private void updateActiveNodes(Config config) { |
| } |
DynamicArgs: NodeConfig constructed twice per reload — wrong abstraction boundary
private void updateActiveNodes(Config config) {
NodeConfig nodeConfig = NodeConfig.fromConfig(config); // parse #1
...
}
private void updateTrustNodes(Config config) {
NodeConfig nodeConfig = NodeConfig.fromConfig(config); // parse #2 — same Config object
...
}
Beyond the redundant ConfigBeanFactory reflection cost, there is a deeper engineering issue:
Broken single source of truth. Both methods are steps of one atomic reload, yet they produce two separate NodeConfig instances from the same input. The code gives no guarantee that they reflect the same state — any future side effect or environment-sensitive substitution in fromConfig() could cause them to silently diverge. The root cause: Config is the wrong parameter type here. Both methods' actual contract is NodeConfig.
Re: DynamicArgs NodeConfig parsed twice — Fixed. NodeConfig is now parsed once in reload() and passed to both methods.
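A hedged sketch of the fixed shape described in this reply. Config and NodeConfig are minimal stand-ins (the real types live in java-tron); the point is the invariant: one fromConfig() call per reload, with both update steps receiving the same parsed snapshot.

```java
public class ReloadOnce {
  // Stand-ins for com.typesafe.config.Config and java-tron's NodeConfig bean.
  static class Config { }
  static class NodeConfig {
    static int parseCount = 0;
    static NodeConfig fromConfig(Config c) { parseCount++; return new NodeConfig(); }
  }

  // Both update steps now take the already-parsed bean, so they cannot
  // observe diverging state within one reload.
  static void updateActiveNodes(NodeConfig nc) { /* uses nc in the real code */ }
  static void updateTrustNodes(NodeConfig nc) { /* uses nc in the real code */ }

  static void reload(Config config) {
    NodeConfig nc = NodeConfig.fromConfig(config); // parse exactly once
    updateActiveNodes(nc);
    updateTrustNodes(nc);
  }

  public static void main(String[] args) {
    reload(new Config());
    if (NodeConfig.parseCount != 1) {
      throw new AssertionError("expected one parse per reload");
    }
    System.out.println("parseCount=" + NodeConfig.parseCount);
  }
}
```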
There are unused files related to configuration.
| * All getXxxFromConfig methods now read from StorageConfig bean instead of | ||
| * manual string constants. Signatures preserved for backward compatibility. | ||
| */ | ||
Including getDbEngineFromConfig, a total of 12 methods are no longer used and can be removed.
Re: Storage 12 unused methods — Fixed. All removed, confirmed zero callers.
| private static final String PBFT_EXPIRE_NUM_KEY = "pBFTExpireNum"; | ||
| private static final String ALLOW_PBFT_KEY = "allowPBFT"; | ||
| public static CommitteeConfig fromConfig(Config config) { |
Suggest to replace class-level @Getter/@Setter with per-field annotations in CommitteeConfig.
The previous approach used class-level annotations but silently suppressed them on allowPBFT and pBFTExpireNum via AccessLevel.NONE. This is misleading — a reader assumes all fields are covered until they scan the entire file to find the exceptions. The code has poor readability.
Per-field annotations make each field's contract explicit. The two non-standard fields carry no Lombok annotations at all, so their manual accessors immediately signal that special handling is required.
There is no need to modify the configuration parameter name solely due to case differences.
Lombok best practice: use class-level annotations only when the policy is uniform across all fields; switch to per-field when exceptions exist.
Suggest to optimize like this concisely:
//no Getter Setter
@Slf4j
public class CommitteeConfig {
@Getter @Setter private long allowCreationOfContracts = 0;
...
@Getter @Setter private long changedDelegation = 0;
// These two fields, which violate the JavaBean naming convention, are excluded from
// auto-binding and handled manually in fromConfig().
@Getter private long allowPBFT = 0;
@Getter private long pBFTExpireNum = 20;
@Getter @Setter private long allowTvmFreeze = 0;
...
@Getter @Setter private long dynamicEnergyMaxFactor = 0;
// proposalExpireTime is NOT a committee field — it's in block.* and handled by BlockConfig
// Defaults come from reference.conf (loaded globally via Configuration.java)
private static final String PBFT_EXPIRE_NUM_KEY = "pBFTExpireNum";
private static final String ALLOW_PBFT_KEY = "allowPBFT";
public static CommitteeConfig fromConfig(Config config) {
Config section = config.getConfig("committee");
CommitteeConfig cc = ConfigBeanFactory.create(section, CommitteeConfig.class);
// Ensure the manually-named fields get the right values from the original keys
cc.allowPBFT = section.hasPath(ALLOW_PBFT_KEY) ? section.getLong(ALLOW_PBFT_KEY) : 0;
cc.pBFTExpireNum = section.hasPath(PBFT_EXPIRE_NUM_KEY)
? section.getLong(PBFT_EXPIRE_NUM_KEY) : 20;
cc.postProcess();
return cc;
}
private void postProcess() {
...
Re: CommitteeConfig per-field @Getter/@Setter — Good suggestion for readability. However, the root cause is that allowPBFT and pBFTExpireNum violate JavaBean naming convention (consecutive uppercase "PBFT"). Switching annotation style is a cosmetic fix.
Since PBFT is not yet active, I'd suggest addressing this in a future version by renaming the config keys to standard camelCase (allowPbft, pbftExpireNum). That eliminates the need for manual binding entirely, and the annotation style issue goes away with it. The same approach should apply to other beans (NodeConfig, StorageConfig, EventConfig) that have similar AccessLevel.NONE workarounds.
| + "committee.allowNewRewardAlgorithm = 1" | ||
| + " or committee.allowNewReward = 1" | ||
| + " or committee.allowTvmVote = 1."); | ||
| } |
It should throw TronError instead of IllegalArgumentException here; the IllegalArgumentException appears to be an oversight and may be silently swallowed by an upstream catch block, causing the node to start with an invalid committee configuration instead of exiting immediately.
Agree that TronError is the better choice semantically. However, this PR is a pure refactor — the original code in Args.java used IllegalArgumentException here, so I kept it unchanged to avoid behavioral changes.
Also, in the current call stack (FullNode.main → Args.setParam → applyConfigParams → CommitteeConfig.fromConfig → postProcess), there is no catch(Exception) that would swallow it. The IllegalArgumentException propagates to the UncaughtExceptionHandler and exits the node correctly.
That said, unifying all config validation errors to TronError would be a good follow-up.
| influxdb { | ||
| ip = "" | ||
| port = 8086 | ||
| database = "" |
The default value "metrics" is missing here; it should match the default in code.
Fixed. Changed to database = "metrics" to match MetricsConfig.InfluxDbConfig default.
Good catch. Both files are indeed unused:
Both are out of scope for this PR (config binding refactor). I'd suggest a separate cleanup PR to remove them along with any other dead config artifacts. |
- DynamicArgs: parse NodeConfig once in reload(), pass to both methods
- Storage: remove 12 unused getXxxFromConfig static methods
- reference.conf: fix influxdb database default to "metrics"
| enable = true | ||
| persist = true | ||
| external.ip = null | ||
| external.ip = "" |
I noticed the test configuration files changed external.ip from null to "":
- external.ip = null
+ external.ip = ""
Could this affect the external IP detection logic? In the original code, null might trigger auto-detection fallback, while empty string "" could be treated as a valid (but empty) value. Want to confirm this is intentional and won't break IP auto-discovery in tests.
Good question, but verified safe. Both values trigger the same fallback path:
// Args.java:1157
private static void externalIp(NodeConfig nodeConfig) {
String externalIp = nodeConfig.getDiscoveryExternalIp();
if (StringUtils.isEmpty(externalIp)) { // null AND "" are both empty
if (PARAMETER.nodeExternalIp == null) {
PARAMETER.nodeExternalIp = PARAMETER.p2pConfig.getIp(); // auto-detect
...
StringUtils.isEmpty() treats both null and "" as empty, so the auto-detection fallback triggers in both cases. No behavioral difference.
The reason for changing null to "" is that ConfigBeanFactory cannot bind a null value to a String field — it throws ConfigException$Null. Empty string is the only way to express "unset" for a String field under bean binding.
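A minimal HOCON illustration of the distinction discussed here (paths and values taken from this thread):

```hocon
# Explicit HOCON null: hasPath() returns false, hasPathOrNull() returns true.
# ConfigBeanFactory.create() throws ConfigException$Null when binding this
# to a String bean field.
external.ip = null

# Empty string binds fine, and StringUtils.isEmpty("") is still true,
# so the auto-detection fallback triggers either way.
external.ip = ""
```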
To safely avoid a bug that previously occurred when external network access was unavailable, it is recommended to change if (PARAMETER.nodeExternalIp == null) { to if (StringUtils.isEmpty(PARAMETER.nodeExternalIp)) {.
Thanks — done. The original == null check is semantically safe in the current architecture (after the bean refactor, PARAMETER.nodeExternalIp at this point can only be null — no CLI flag or other code writes it before externalIp() runs), so it wasn't actually a bug. Switching to StringUtils.isEmpty(...) is purely a readability/consistency improvement (aligns with the isEmpty checks on lines 1145 and 1149).
| private long allowAccountAssetOptimization = 0; | ||
| private long allowAssetOptimization = 0; | ||
| private long allowNewReward = 0; | ||
| private long memoFee = 0; |
[Behavioral regression] memoFee clamping is lost.
The original Args.java clamped memoFee to [0, 1_000_000_000]:
// Args.java (develop branch)
if (config.hasPath(ConfigKey.MEMO_FEE)) {
PARAMETER.memoFee = config.getLong(ConfigKey.MEMO_FEE);
if (PARAMETER.memoFee > 1_000_000_000) {
PARAMETER.memoFee = 1_000_000_000;
}
if (PARAMETER.memoFee < 0) {
PARAMETER.memoFee = 0;
}
}
After this PR, postProcess() does not clamp memoFee, so out-of-range values are passed through unchanged. For example:
- committee.memoFee = 5000000000 → develop clamps to 1_000_000_000, this PR keeps 5_000_000_000
- committee.memoFee = -100 → develop clamps to 0, this PR keeps -100
Suggested fix in postProcess():
if (memoFee < 0) { memoFee = 0; }
if (memoFee > 1_000_000_000L) { memoFee = 1_000_000_000L; }
Verified by a regression test against this PR branch — 2 of 5 memoFee boundary tests fail without this clamp.
Great catch — fixed in c38f388. Both clamps restored in CommitteeConfig.postProcess():
if (allowNewReward < 0) { allowNewReward = 0; }
if (allowNewReward > 1) { allowNewReward = 1; }
if (memoFee < 0) { memoFee = 0; }
if (memoFee > 1_000_000_000L) { memoFee = 1_000_000_000L; }
Also added 31 boundary test cases across CommitteeConfigTest, NodeConfigTest, VmConfigTest, and ArgsTest to pin every clamp (below/above/in-range) so this kind of regression cannot happen silently again. Specifically pinned the allowNewReward = 2 + allowOldRewardOpt = 1 case you flagged — verifies the clamp runs before the cross-field check.
Thanks for the rigorous review — the original Args.java had no test coverage for these clamps, so writing the regression test was the only way this could have been caught.
| private long allowReceiptsMerkleRoot = 0; | ||
| private long allowAccountAssetOptimization = 0; | ||
| private long allowAssetOptimization = 0; | ||
| private long allowNewReward = 0; |
[Behavioral regression] allowNewReward clamping is lost — and this changes the cross-field check semantics below.
The original Args.java clamped allowNewReward to [0, 1]:
// Args.java (develop branch)
if (config.hasPath(ConfigKey.ALLOW_NEW_REWARD)) {
PARAMETER.allowNewReward = config.getLong(ConfigKey.ALLOW_NEW_REWARD);
if (PARAMETER.allowNewReward > 1) { PARAMETER.allowNewReward = 1; }
if (PARAMETER.allowNewReward < 0) { PARAMETER.allowNewReward = 0; }
}This is especially critical because the cross-field validation in postProcess() (line 152) uses allowNewReward != 1:
if (allowOldRewardOpt == 1 && allowNewRewardAlgorithm != 1
&& allowNewReward != 1 && allowTvmVote != 1) {
throw new IllegalArgumentException(...);
}Without clamping, the semantics diverge from develop:
| Config | develop behavior | this PR behavior |
|---|---|---|
| allowNewReward = 2 + allowOldRewardOpt = 1 | Clamped to 1, check passes | Stays 2, check fails, throws exception |
| allowNewReward = 99 | Clamped to 1 | Stays 99 |
| allowNewReward = -5 | Clamped to 0 | Stays -5 |
A user who configured allowNewReward = 2 (intending to enable it) would have their node start fine on develop, but fail to start after this PR.
Suggested fix in postProcess(), before the cross-field check at line 152:
if (allowNewReward < 0) { allowNewReward = 0; }
if (allowNewReward > 1) { allowNewReward = 1; }
Verified by a regression test against this PR branch — the cross-field semantic test fails without this clamp.
Same fix in c38f388 — see the reply on the allowNewReward thread above. Thanks again.
…ary tests
Add missing clamps in CommitteeConfig.postProcess():
- memoFee clamped to [0, 1_000_000_000] (regression from manual parsing)
- allowNewReward clamped to [0, 1] (must run before cross-field check)
Add boundary test coverage for every clamp in CommitteeConfig, NodeConfig, VmConfig, and Args bridge code (fetchBlockTimeout). 31 new test cases pin each clamp's below/above/in-range behavior to prevent silent regression in future refactors.
Reported by reviewer kuny0707.
Suggestion: Replace ConfigBeanFactory with custom annotations to eliminate manual fallback code
The current approach uses
These scattered manual fallbacks are easy to miss during maintenance — the memoFee/allowNewReward clamp regression is a case in point. Approach: Custom
| Dimension | ConfigBeanFactory (current) | Annotation approach |
|---|---|---|
| PBFT naming issues | Manual aliasing + manual assignment | @ConfigValue("pBFTExpireNum") — solved |
| "is" prefix / PascalCase | Manual reads | Annotation specifies key directly |
| Java reserved words (native) | withoutPath + manual binding | Annotation specifies key directly |
| fromConfig() boilerplate | One factory method per bean | Unified ConfigBinder.bind() |
| Clamp validation | postProcess() per bean | Declarative @Clamp |
| Cost of adding new config | Add field + possibly modify fromConfig | Add one annotated field |
| Implementation cost | Zero (built-in Typesafe Config API) | ~100-150 lines of binder code |
| External dependencies | None | None |
The key advantage is centralizing key-to-field mappings from scattered fromConfig() code into field-level declarations. Adding a new config parameter becomes a single annotated field — no parsing logic, no risk of missing a clamp. Worth considering as a follow-up iteration.
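To make the proposal concrete, here is a hypothetical sketch of the annotation approach. @ConfigValue, @Clamp, the binder, and the Committee bean below are all illustrative names, not java-tron or Typesafe Config APIs; a Map<String, Long> stands in for the parsed HOCON section, and a real binder would also handle non-long types.

```java
import java.lang.annotation.*;
import java.lang.reflect.Field;
import java.util.Map;

public class BinderSketch {
  @Retention(RetentionPolicy.RUNTIME) @Target(ElementType.FIELD)
  @interface ConfigValue { String value(); }

  @Retention(RetentionPolicy.RUNTIME) @Target(ElementType.FIELD)
  @interface Clamp { long min(); long max(); }

  static class Committee {
    // Non-JavaBean name is no longer a problem: the key is declared explicitly.
    @ConfigValue("pBFTExpireNum") long pBFTExpireNum = 20;
    // Declarative clamp replaces hand-written postProcess() logic.
    @ConfigValue("memoFee") @Clamp(min = 0, max = 1_000_000_000L) long memoFee = 0;
  }

  static <T> T bind(T bean, Map<String, Long> section) throws Exception {
    for (Field f : bean.getClass().getDeclaredFields()) {
      ConfigValue cv = f.getAnnotation(ConfigValue.class);
      if (cv == null) continue;
      f.setAccessible(true);
      Long v = section.get(cv.value());
      long val = (v != null) ? v : f.getLong(bean); // fall back to field default
      Clamp c = f.getAnnotation(Clamp.class);
      if (c != null) {                              // declarative clamp
        if (val < c.min()) val = c.min();
        if (val > c.max()) val = c.max();
      }
      f.setLong(bean, val);
    }
    return bean;
  }

  public static void main(String[] args) throws Exception {
    Committee cc = bind(new Committee(), Map.of("memoFee", 5_000_000_000L));
    if (cc.memoFee != 1_000_000_000L) throw new AssertionError("clamp failed");
    if (cc.pBFTExpireNum != 20) throw new AssertionError("default lost");
    System.out.println("memoFee=" + cc.memoFee);
  }
}
```

Under this shape, adding a new parameter is one annotated field, and a missed clamp is structurally impossible for any field that declares @Clamp.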
bladehan1 left a comment
Review: default value behavioral changes in reference.conf
| # Maximum percentage of producing block interval (provides time for broadcast etc.) | ||
| blockProducedTimeOut = 75 | ||
Default value behavioral change — blockProducedTimeOut: 50 → 75
The old Args.java hardcoded fallback was BLOCK_PRODUCE_TIMEOUT_PERCENT = 50 (used when the key was absent from config). In the old shipped config.conf this key was commented out, so the effective default for all nodes not explicitly setting it was 50.
reference.conf now sets it to 75, which changes block production timing for any node that omits this key in their config.conf.
Suggest changing to 50 to preserve behavioral equivalence, or documenting this as an intentional change in the PR description.
Thanks — fixing both in the same commit. The reference.conf should reflect the code-level default (the value the runtime uses when the key is absent), not whatever was in the shipped sample config.conf. I'll change blockProducedTimeOut back to 50 and txCache.initOptimization back to false.
| txCache.estimatedTransactions = 1000 | ||
| # If true, transaction cache initialization will be faster. | ||
| txCache.initOptimization = true | ||
Default value behavioral change — txCache.initOptimization: false → true
The old Storage.java field defaulted to false, and the hasPath && getBoolean pattern also yielded false when the key was absent. The old shipped config.conf had this set to true, so mainnet nodes are unaffected.
However, custom/test configurations that omit this key will now get true instead of false, changing tx cache initialization behavior.
Suggest either:
- Changing to false to match the old code-level default, or
- Documenting this as an intentional improvement (since true is the recommended production setting)
See reply on the blockProducedTimeOut thread — fixing both in the same commit. Thanks for catching these.
…acks
Address review feedback from 317787106 (2026-04-16) and lxcmyf (2026-04-17) covering silent default-value drift introduced by the new reference.conf. Verified each drift against develop Args.initRocksDbSettings / applyConfigParams runtime fallbacks.
reference.conf:
- storage.dbSettings.compactThreads: 32 -> 0 (0 = auto: max(availableProcessors, 1))
- storage.dbSettings.blocksize: 64 -> 16
- storage.dbSettings.level0FileNumCompactionTrigger: 4 -> 2
- storage.dbSettings.targetFileSizeBase: 256 -> 64
- node.trustNode: "127.0.0.1:50051" -> "" (Args bridge converts empty -> null)
- node.maxActiveNodesWithSameIp: line removed. Shipping the legacy alias in reference.conf caused HOCON merges to always mask user-supplied maxConnectionsWithSameIp via the alias-fallback branch.
- node.validContractProto.threads: 2 -> 0 (0 = auto: availableProcessors)
StorageConfig.java:
- DbSettingsConfig field defaults mirror reference.conf
- DbSettingsConfig.postProcess() expands compactThreads == 0 to max(availableProcessors, 1), matching develop Args.java:1609-1611
NodeConfig.java:
- ValidContractProtoConfig.threads default: 2 -> 0
- postProcess() expands validContractProto.threads == 0 to availableProcessors(), matching develop Args.java:743-746
- Removed unused field maxActiveNodesWithSameIp. The alias read at fromConfig() uses section.hasPath() directly, and removing the bean field lets ConfigBeanFactory stop requiring a reference.conf default for it (which is what caused the alias pollution to begin with).
Tests:
- StorageConfigTest.testDbSettingsDefaults asserts the new fallbacks
- StorageConfigTest: testCompactThreadsAutoExpand / testCompactThreadsExplicitPreserved
- NodeConfigTest: testValidContractProtoThreadsDefaultAutoExpands / testValidContractProtoThreadsExplicitPreserved
- NodeConfigTest: testTrustNodeNotDefaultedByReferenceConf
- NodeConfigTest: testMaxConnectionsWithSameIpNotOverriddenByReferenceConfAlias
- NodeConfigTest: testMaxActiveNodesWithSameIpLegacyAliasStillWorks
- NodeConfigTest: testLegacyAliasTakesPriorityOverModernKey (matches develop Args.java:392-399)
check-math CI job flagged uses of java.lang.Math introduced in 8a76db8 (compactThreads auto-expand logic in StorageConfig.postProcess and the corresponding test assertions). Swap both to StrictMathWrapper.max, which is the project-wide convention enforced by the check-math scanner.
Address PR review from lvs0075 (2026-04-22, reference.conf:218).
reference.conf ships `allowShieldedTransactionApi = true`, so after the HOCON
`withFallback` merge, `section.hasPath("allowShieldedTransactionApi")` is
always true regardless of what the user wrote. That made the legacy-key
compatibility branch in NodeConfig.fromConfig() dead code: a user who wrote
only `node.fullNodeAllowShieldedTransaction = false` in their config.conf
silently got `true`, unintentionally enabling the shielded transaction API.
Fix: replace the unreachable else-if chain with a direct override. The legacy
key is intentionally not defaulted in reference.conf, so `hasPath` on it
reliably means "user supplied it". Same pattern as maxActiveNodesWithSameIp.
Also restores the deprecation warning develop's Args.java emits when a user
still uses the legacy key; this PR dropped it along with the rewrite.
Regression tests:
- testShieldedApiDefaultsToTrueWhenNeitherKeySet
- testShieldedApiModernKeyRespected
- testShieldedApiLegacyKeyRespected (guards the reported bug)
- testShieldedApiLegacyKeyTakesPriorityOverModern (mirrors
testLegacyAliasTakesPriorityOverModernKey)
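A minimal sketch of the "legacy key overrides modern key" pattern this commit describes. A Map stands in for the merged HOCON section; the method and key names mirror the thread, but the harness itself is illustrative.

```java
import java.util.HashMap;
import java.util.Map;

public class LegacyKeySketch {
  static boolean resolveShieldedApi(Map<String, Boolean> section) {
    // The modern key is always present after the reference.conf fallback merge,
    // so its mere presence carries no information about what the user wrote.
    boolean value = section.getOrDefault("allowShieldedTransactionApi", true);
    // The legacy key is deliberately not defaulted in reference.conf, so its
    // presence reliably means "user supplied it" — let it win.
    if (section.containsKey("fullNodeAllowShieldedTransaction")) {
      value = section.get("fullNodeAllowShieldedTransaction");
    }
    return value;
  }

  public static void main(String[] args) {
    Map<String, Boolean> merged = new HashMap<>();
    merged.put("allowShieldedTransactionApi", true);       // from reference.conf
    merged.put("fullNodeAllowShieldedTransaction", false); // user wrote only this
    boolean api = resolveShieldedApi(merged);
    if (api) throw new AssertionError("legacy key must override the merged default");
    System.out.println("shieldedApi=" + api);
  }
}
```

This is exactly the bug the commit guards against: with the dead else-if chain, the user's legacy-key false was silently ignored.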
| // which ConfigBeanFactory cannot bind to a String field. | ||
| // Note: hasPath() returns false for null values, use hasPathOrNull() instead. | ||
| section = replaceNullWithEmpty(section, "discovery.external.ip"); | ||
| section = replaceNullWithEmpty(section, "trustNode"); |
[SHOULD] The null-value sanitization here only hardcodes two paths (discovery.external.ip and trustNode). Defaulting trustNode = "" in reference.conf reduces exposure, but any user who writes foo = null for another String-typed key (external configs like system-test do this) will still crash ConfigBeanFactory.create. Consider a generic sweep over the section that replaces every hasPathOrNull && !hasPath key with an empty string, so we don't have to add a new line here every time a String field is added.
Thanks, but I'd prefer to keep this explicit rather than generalize:
- Writing = null in a config is an anti-pattern — null is a Java/HOCON keyword, not a value any typed bean field can consume. External configs shouldn't be doing this.
- With auto-binding, any existing null we didn't sanitize fails fast at startup with a clear ConfigBeanFactory.ValidationFailed naming the path. Nothing slips through silently.
- Same guarantee covers the future: if a new String field ever gets a null from some external config, auto-binding will surface it immediately — no silent breakage possible. So a preemptive generic sweep isn't needed; we'll just add the explicit line when (and if) a second case actually shows up.
On reflection — the existing replaceNullWithEmpty was itself a temporary workaround added because system-test writes = null, which ConfigBeanFactory cannot bind to a String field. Rather than generalizing the workaround, I'll just remove it and go back to pure auto-binding. Any = null in external configs will then fail fast with a clear ConfigBeanFactory.ValidationFailed naming the path, which is the correct long-term signal that the external config needs to be fixed.
| } | ||
| if (maxFlush > 500) { | ||
| throw new IllegalArgumentException("MaxFlushCount value must not exceed 500!"); | ||
| } |
[SHOULD] The estimatedTransactions clamp (238-245) and maxFlushCount validation (249-255) still live in the bridge, while NodeConfig / VmConfig / CommitteeConfig all put clamps and validation inside their own bean's postProcess(). Consider moving these into StorageConfig.postProcess() (attached to SnapshotConfig and TxCacheConfig) for architectural consistency — this also lets StorageConfig enforce its invariants when used standalone (e.g. in tests) without relying on the Args bridge.
Agreed — done. Moved the maxFlushCount validation into SnapshotConfig.postProcess() and the estimatedTransactions clamp into TxCacheConfig.postProcess(), called from StorageConfig.fromConfig() alongside the existing dbSettings.postProcess(). Args.applyStorageConfig now just reads already-validated values. Covered by new tests in StorageConfigTest.
Address PR review from lxcmyf (2026-04-22, Args.java:255) — architectural consistency with NodeConfig / VmConfig / CommitteeConfig, where clamps and validation live in the bean's own postProcess().
Before: Args.applyStorageConfig held an inline clamp for estimatedTransactions ([100, 10000]) and an inline range check for maxFlushCount ((0, 500]). This was carried over from develop's static Storage helpers during the bean-binding refactor.
Now:
- SnapshotConfig.postProcess() enforces maxFlushCount in (0, 500], throwing IllegalArgumentException on violation (same messages as develop).
- TxCacheConfig.postProcess() clamps estimatedTransactions to [100, 10000].
- StorageConfig.fromConfig() calls both alongside the existing dbSettings.postProcess().
- Args.applyStorageConfig now simply reads already-validated values.
This also lets StorageConfig enforce its invariants when loaded standalone (e.g. in tests) without depending on the Args bridge.
Tests (StorageConfigTest):
- testSnapshotMaxFlushCountZeroRejected
- testSnapshotMaxFlushCountNegativeRejected
- testSnapshotMaxFlushCountOver500Rejected
- testTxCacheEstimatedClampedBelowMin
- testTxCacheEstimatedClampedAboveMax
- testTxCacheEstimatedWithinRangePreserved
Address PR review from lxcmyf (2026-04-22, NodeConfig.java:370). The `replaceNullWithEmpty` helper was originally a compatibility shim for system-test configs that write `discovery.external.ip = null`. Rather than generalizing the shim over all paths (the reviewer's concern), remove it entirely and let ConfigBeanFactory fail fast on HOCON null values. The auto-binding itself is the right safety net: any String-typed field that receives `= null` from an external config surfaces at startup as `ConfigBeanFactory.ValidationFailed` naming the exact path. That's the correct long-term signal that the external config needs to use `""` or remove the key, not to be papered over. No in-repo configs write `= null`; external configs (system-test) should update their own defaults.
| * HTTP/RPC rate limiter lists still use getRateLimiterFromConfig() for | ||
| * conversion to RateLimiterInitialization business objects. | ||
| */ | ||
| private static void applyRateLimiterConfig(RateLimiterConfig rl, Config config) { |
[SHOULD] The config parameter of applyRateLimiterConfig is not used and can be removed.
| if (section.hasPath("maxActiveNodes")) { | ||
| nc.maxConnections = section.getInt("maxActiveNodes"); | ||
| if (section.hasPath("connectFactor")) { | ||
| nc.minConnections = (int) (nc.maxConnections * section.getDouble("connectFactor")); | ||
| } | ||
| if (section.hasPath("activeConnectFactor")) { | ||
| nc.minActiveConnections = (int) (nc.maxConnections | ||
| * section.getDouble("activeConnectFactor")); | ||
| } | ||
| } | ||
| if (section.hasPath("maxActiveNodesWithSameIp")) { | ||
| nc.maxConnectionsWithSameIp = section.getInt("maxActiveNodesWithSameIp"); | ||
| } |
These parameters are no longer used:
- node.activeConnectFactor
- node.connectFactor
- node.disconnectNumberFactor
- node.tcpNettyWorkThreadNum
- node.udpNettyWorkThreadNum
- node.maxActiveNodes
- node.channel.read.timeout
- node.maxActiveNodesWithSameIp
Should they still be present in reference.conf? If they are configured, it would be better to log a warning to alert users that these settings are deprecated and will be ignored later.
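A small sketch of the deprecation warning the reviewer proposes. It is not in the PR; the key list is drawn from this thread (keys later confirmed unused in the reply), and the Set-of-paths harness stands in for Config.hasPath().

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DeprecatedKeyWarning {
  // Candidate keys from this thread; the final list would follow the per-key review.
  static final List<String> DEPRECATED = Arrays.asList(
      "node.disconnectNumberFactor",
      "node.tcpNettyWorkThreadNum",
      "node.udpNettyWorkThreadNum",
      "node.channel.read.timeout");

  // Warn once per deprecated key the user actually configured.
  static List<String> warnDeprecated(Set<String> configuredPaths) {
    List<String> hits = new ArrayList<>();
    for (String key : DEPRECATED) {
      if (configuredPaths.contains(key)) {
        hits.add(key);
        System.out.println("WARN: config key '" + key + "' is deprecated and ignored");
      }
    }
    return hits;
  }

  public static void main(String[] args) {
    List<String> hits = warnDeprecated(new HashSet<>(Arrays.asList(
        "node.tcpNettyWorkThreadNum", "node.maxConnections")));
    if (hits.size() != 1) throw new AssertionError("expected exactly one deprecated hit");
  }
}
```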
Thanks — I checked each key:
- node.disconnectNumberFactor is indeed dead (never read in this repo or in develop). I'll remove it from reference.conf and from the bean.
- The other 7 are not actually unused — keeping them as-is:
  - tcpNettyWorkThreadNum / udpNettyWorkThreadNum → still applied in Args.java:628-629.
  - maxActiveNodes / maxActiveNodesWithSameIp → active legacy aliases for maxConnections / maxConnectionsWithSameIp, used for backward compat with old user configs.
  - activeConnectFactor / connectFactor → only effective inside the maxActiveNodes legacy branch; removing them would change the legacy path behavior.
  - channel.read.timeout → propagates to CommonParameter.nodeChannelReadTimeout, a public API field consumed via the Lombok getter (also in develop).
- develop doesn't emit deprecation warnings for any of these either, so I'd prefer not to introduce a new warning scheme in this refactor PR. Happy to do it in a follow-up PR once we agree on the deprecation policy.
Thank you for your reply. PARAMETER.nodeChannelReadTimeout, PARAMETER.tcpNettyWorkThreadNum, and PARAMETER.udpNettyWorkThreadNum were originally used for libp2p. Since libp2p was decoupled from java-tron, these three parameters are still read from the config file but never used. However, this is not a major issue and can be handled in other PRs as well.
Address three follow-up review comments after the initial approvals: - xxo1shine (Args.java:345): drop the unused `config` parameter from `applyRateLimiterConfig`; it was a leftover from the pre-refactor path that used `getRateLimiterFromConfig(config)`. - 317787106 (NodeConfig.java:387): remove the `disconnectNumberFactor` bean field and its reference.conf default — no code in this repo or in develop reads it; purely synthetic dead weight introduced by this PR. The other 7 keys listed by the reviewer are kept (actively used, legacy aliases, or public API surface); see review reply for per-key rationale. - 317787106 (Args.java:1146): switch `== null` to `StringUtils.isEmpty(...)` on the `externalIp()` fallback guard. Not a behaviour change in the current architecture (no CLI flag or other code writes `PARAMETER.nodeExternalIp` before this point, so it can only be null here), but makes the intent consistent with the `isEmpty` checks already used on lines 1145 and 1149.
What
Replace ~850 lines of manual if (config.hasPath(KEY)) / getXxx blocks in Args.applyConfigParams() with ConfigBeanFactory.create() automatic binding.
Each config.conf domain now maps to a typed Java bean class. Delete ConfigKey.java (~100 string constants) — config.conf becomes the sole source of truth for key names.
Why
Adding a new config parameter previously required editing 3 files (config.conf, ConfigKey.java, Args.java). The manual parsing was error-prone, hard to review, and duplicated every key name as a Java string constant. With bean binding, adding a parameter only requires adding a field to the bean class and a line in reference.conf.
Introducing reference.conf
Previously, default values were scattered across three places: bean field initializers, ternary expressions in Args.java, and comments in ConfigKey.java. Defaults could be inconsistent, and there was no single place to see all config parameters and their default values at a glance.
reference.conf is the official recommended practice of the Typesafe Config library (see official docs): libraries and applications declare all config parameters and their defaults in src/main/resources/reference.conf, and users only need to override the values they want to change — Typesafe Config merges them automatically.
Akka, Play Framework, and Spark all follow this convention.
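A minimal sketch of the merge mechanism, using keys and defaults discussed in this PR's review threads:

```hocon
# reference.conf — shipped in the jar, declares every key with its default
node {
  trustNode = ""              # empty = unset; the Args bridge converts it to null
  blockProducedTimeOut = 50   # preserves the develop code-level default
}

# config.conf — the user overrides only what they need
node.blockProducedTimeOut = 60

# Typesafe Config resolves the user config with reference.conf as fallback,
# conceptually: userConfig.withFallback(referenceConfig).resolve()
```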
Benefits:
- A single source of truth for defaults eliminates scatter and inconsistency
- reference.conf shows the complete list of config parameters, defaults, and comments — no need to dig through Java code
- Each bean field maps directly to its corresponding config key
- reference.conf ensures binding never fails due to missing keys, even when users don't configure a value
- It follows the standard Typesafe Config convention — contributors expect reference.conf to exist, lowering the learning curve for new contributors
Changes
Commit 1: Core refactor — ConfigBeanFactory binding
- Each config.conf domain maps to a typed bean class: VmConfig, BlockConfig, CommitteeConfig, MetricsConfig, NodeConfig, EventConfig, StorageConfig, GenesisConfig, RateLimiterConfig, MiscConfig, LocalWitnessConfig
- Reduce Args.applyConfigParams() from ~850 lines to ~50 lines of domain binding calls
- Delete ConfigKey.java (~100 string constants)
- Migrate Storage.java static getters to read from StorageConfig bean
Commit 2: reference.conf as single source of defaults
- Add common/src/main/resources/reference.conf with defaults for all config domains
- Move default values from bean field initializers into reference.conf
- Expose config beans as static singletons (NodeConfig.getInstance() etc.)
- Auto-bind discovery, PBFT, and list fields in NodeConfig
Commit 3: Storage cleanup
- Move default/defaultM/defaultL LevelDB option reading from Storage into StorageConfig, so Storage no longer touches Config directly
- Add DbOptionOverride with nullable boxed types for partial override semantics
- Fix cacheSize type from int to long to match LevelDB Options API
Scope
- Changed: how config.conf values are parsed into Java objects
- Unchanged: CommonParameter (future Phase 2)
- Unchanged: config.conf format or CLI parameter handling
- Unchanged: CommonParameter.getInstance() call sites
This refactor is Phase 1 — it only replaces the config reading layer. After bean binding, values are still copied one by one to CommonParameter's flat fields, because 847 call sites across the codebase depend on CommonParameter.getInstance().
The goal of Phase 2 is to remove the CommonParameter intermediary:
- Parameters are currently written by both config.conf and CLI arguments (JCommander), converging on CommonParameter. The CLI override logic needs to be unified into Typesafe Config's override mechanism (ConfigFactory.systemProperties() or custom overrides), eliminating CLI's direct write dependency on CommonParameter
- Replace CommonParameter.getInstance().getXxx() calls with direct domain bean access — NodeConfig.getInstance().getXxx(), VmConfig.getInstance().getXxx(), etc.
- Remove CommonParameter: once all call sites are migrated and CLI arguments no longer write directly, remove CommonParameter and the bridge-copy code in Args
End state:
config.conf → reference.conf fallback → ConfigBeanFactory → domain bean singletons. Fully type-safe, no intermediary layer, no string constants.
Test