
HBASE-30110 Fix for flapper in TestZooKeeper by AI #8116

Open
haridsv wants to merge 1 commit into apache:master from haridsv:HBASE-30110
Conversation

@haridsv
Contributor

@haridsv haridsv commented Apr 23, 2026


Ran the test 20 times in a loop to verify.

AI tool: Claude Code
AI Model: Sonnet 4.5 (reverified using Opus 4.6 (1M context))
Prompt: You need to diagnose the HBase test TestZooKeeper that is flapping. It had multiple failures and errors in the PR validation build, but none when I ran locally. The source code is at @../../src/apache/hbase/ and the logs for good run is at @Good-Run and bad run is at @bad-run. Can you compare the logs and try to understand why the run failed and look at the code for a potential fix?

Here is the AI summary for the fix:

**Problem Identified:**
- Line 78 had a duplicate call to `TEST_UTIL.startMiniDFSCluster(2)`
- This violated the "can only create one" constraint and caused HDFS namespace conflicts when tests ran in parallel

**Why it was flaky:**
- ✅ **Local/isolated runs**: Worked because no leftover HDFS state
- ❌ **CI/parallel runs**: Failed when other ZooKeeper tests ran first, leaving behind HDFS data with incompatible `namespaceID`

**Fix Applied:**
Removed the duplicate call on line 78. The test now properly:
1. Starts Mini DFS Cluster (line 71)
2. Starts Mini ZK Cluster (line 72)
3. Configures settings (lines 73-77)
4. Done! (no duplicate cluster start)

This matches the pattern used in other similar tests like `TestClusterBootOrder` and aligns with the comment's intent: "Test we can first start the ZK cluster by itself."

The test should now be stable across both local and CI environments. You can verify by running the test suite again to ensure `TestZooKeeper` no longer fails with `InconsistentFSStateException`.
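The "can only create one" constraint described above can be illustrated with a minimal, self-contained analogue (this is not HBase code; class and method names here are made up for illustration, standing in for `HBaseTestingUtility.startMiniDFSCluster`):

```java
// Minimal analogue (not HBase code) of a resource that may only be
// started once per test JVM, like the mini DFS cluster in the test above.
public class MiniClusterAnalogue {
    private boolean started = false;

    public void start() {
        if (started) {
            // Mirrors the "can only create one" constraint: a second start
            // attempt fails instead of silently clobbering cluster state.
            throw new IllegalStateException("cluster already started");
        }
        started = true;
    }

    /** Returns true when the duplicate start attempt is rejected. */
    public static boolean secondStartFails() {
        MiniClusterAnalogue cluster = new MiniClusterAnalogue();
        cluster.start();          // the legitimate start (test line 71)
        try {
            cluster.start();      // the duplicate call that was removed (line 78)
            return false;
        } catch (IllegalStateException expected) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(secondStartFails());
    }
}
```

In the real test the duplicate call did not always fail fast like this; depending on leftover HDFS state it could instead surface later as the `namespaceID` mismatch, which is why the bug only showed up in CI.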
@Apache9
Contributor

Apache9 commented Apr 24, 2026

Ah, TestZooKeeper still failed in the pre-commit check...

@Apache9
Contributor

Apache9 commented Apr 24, 2026

TEST-org.apache.hadoop.hbase.TestZooKeeper.xml

Checked the XML: the inconsistent state happened in the rerun; the failure in the first run was a timeout.

@Apache9
Contributor

Apache9 commented Apr 24, 2026

This is the root cause: when reassigning the meta region after a region server crash, the thread is blocked...

"RS_OPEN_META-regionserver/2cd189b8f196:0-0" daemon prio=5 tid=470 blocked
java.lang.Thread.State: BLOCKED
        at app//org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:223)
        at app//org.apache.hadoop.hbase.metrics.BaseSourceImpl.<init>(BaseSourceImpl.java:115)
        at app//org.apache.hadoop.hbase.io.MetricsIOSourceImpl.<init>(MetricsIOSourceImpl.java:44)
        at app//org.apache.hadoop.hbase.io.MetricsIOSourceImpl.<init>(MetricsIOSourceImpl.java:39)
        at app//org.apache.hadoop.hbase.regionserver.MetricsRegionServerSourceFactoryImpl.createIO(MetricsRegionServerSourceFactoryImpl.java:99)
        at app//org.apache.hadoop.hbase.io.MetricsIO.<init>(MetricsIO.java:36)
        at app//org.apache.hadoop.hbase.io.MetricsIO.getInstance(MetricsIO.java:52)
        at app//org.apache.hadoop.hbase.io.hfile.HFile.updateWriteLatency(HFile.java:205)
        at app//org.apache.hadoop.hbase.io.hfile.HFileBlock$Writer.finishBlockAndWriteHeaderAndData(HFileBlock.java:1051)
        at app//org.apache.hadoop.hbase.io.hfile.HFileBlock$Writer.writeHeaderAndData(HFileBlock.java:1036)
        at app//org.apache.hadoop.hbase.io.hfile.HFileWriterImpl.finishBlock(HFileWriterImpl.java:384)
        at app//org.apache.hadoop.hbase.io.hfile.HFileWriterImpl.close(HFileWriterImpl.java:653)
        at app//org.apache.hadoop.hbase.regionserver.StoreFileWriter$SingleStoreFileWriter.close(StoreFileWriter.java:781)
        at app//org.apache.hadoop.hbase.regionserver.StoreFileWriter.close(StoreFileWriter.java:301)
        at app//org.apache.hadoop.hbase.regionserver.StoreFlusher.finalizeWriter(StoreFlusher.java:70)
        at app//org.apache.hadoop.hbase.regionserver.DefaultStoreFlusher.flushSnapshot(DefaultStoreFlusher.java:74)
        at app//org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:836)
        at app//org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.flushCache(HStore.java:1987)
        at app//org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:3158)
        at app//org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2866)
        at app//org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:5623)
        at app//org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:1099)
        at app//org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:1033)
        at app//org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:8038)
        at app//org.apache.hadoop.hbase.regionserver.HRegion.openHRegionFromTableDir(HRegion.java:7992)
        at app//org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7964)
        at app//org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7912)
        at app//org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:7843)
        at app//org.apache.hadoop.hbase.regionserver.handler.AssignRegionHandler.process(AssignRegionHandler.java:143)
        at app//org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104)
        at java.base@17.0.11/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
        at java.base@17.0.11/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
        at java.base@17.0.11/java.lang.Thread.run(Thread.java:840)

@Apache9
Contributor

Apache9 commented Apr 24, 2026

"HBase-Metrics2-1" daemon prio=5 tid=199 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
        at java.base@17.0.11/jdk.internal.misc.Unsafe.park(Native Method)
        at java.base@17.0.11/java.util.concurrent.locks.LockSupport.park(LockSupport.java:211)
        at java.base@17.0.11/java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1864)
        at java.base@17.0.11/java.util.concurrent.ForkJoinPool.unmanagedBlock(ForkJoinPool.java:3465)
        at java.base@17.0.11/java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3436)
        at java.base@17.0.11/java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1898)
        at java.base@17.0.11/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2072)
        at app//org.apache.hadoop.hbase.util.FutureUtils.get(FutureUtils.java:182)
        at app//org.apache.hadoop.hbase.client.TableOverAsyncTable.get(TableOverAsyncTable.java:188)
        at app//org.apache.hadoop.hbase.MetaTableAccessor.getTableState(MetaTableAccessor.java:601)
        at app//org.apache.hadoop.hbase.master.TableStateManager.readMetaState(TableStateManager.java:177)
        at app//org.apache.hadoop.hbase.master.TableStateManager.isTablePresent(TableStateManager.java:107)
        at app//org.apache.hadoop.hbase.master.HMaster.getTableDescriptors(HMaster.java:3856)
        at app//org.apache.hadoop.hbase.master.HMaster.listTableDescriptors(HMaster.java:3806)
        at app//org.apache.hadoop.hbase.master.MetricsMasterWrapperImpl.getRegionCounts(MetricsMasterWrapperImpl.java:227)
        at app//org.apache.hadoop.hbase.master.MetricsMasterSourceImpl.getMetrics(MetricsMasterSourceImpl.java:95)
        at app//org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:200)
        at app//org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:183)
        at app//org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:156)
        at java.management@17.0.11/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getClassName(DefaultMBeanServerInterceptor.java:1766)
        at java.management@17.0.11/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.safeGetClassName(DefaultMBeanServerInterceptor.java:1575)
        at java.management@17.0.11/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.checkMBeanPermission(DefaultMBeanServerInterceptor.java:1776)
        at java.management@17.0.11/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:426)
        at java.management@17.0.11/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:411)
        at java.management@17.0.11/com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:547)
        at app//org.apache.hadoop.metrics2.util.MBeans.unregister(MBeans.java:144)
        at app//org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.stopMBeans(MetricsSourceAdapter.java:228)
        at app//org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.stop(MetricsSourceAdapter.java:213)
        at app//org.apache.hadoop.metrics2.impl.MetricsSystemImpl.stopSources(MetricsSystemImpl.java:464)
        at app//org.apache.hadoop.metrics2.impl.MetricsSystemImpl.stop(MetricsSystemImpl.java:212)
        at app//org.apache.hadoop.metrics2.impl.JmxCacheBuster$JmxCacheBusterRunnable.run(JmxCacheBuster.java:98)
        at java.base@17.0.11/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
        at java.base@17.0.11/java.util.concurrent.FutureTask.run(FutureTask.java:264)
        at java.base@17.0.11/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
        at java.base@17.0.11/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
        at java.base@17.0.11/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
        at java.base@17.0.11/java.lang.Thread.run(Thread.java:840)
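The two stack traces form a cycle: the metrics thread holds the `MetricsSystemImpl` monitor while it synchronously waits on a meta-table read, and the region server thread that must open meta is blocked trying to enter that same monitor in `register`. A minimal, self-contained analogue of that cycle (not HBase code; all names here are invented, with a lock standing in for the metrics monitor and a future standing in for the pending meta read, plus timeouts so the demo terminates instead of hanging):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Minimal analogue (not HBase code) of the cycle in the stack traces above.
public class MetricsDeadlockDemo {
    static final ReentrantLock metricsLock = new ReentrantLock();
    static final CompletableFuture<Void> metaOpened = new CompletableFuture<>();

    /** Returns true when the "region open" side cannot acquire the lock. */
    public static boolean demonstratesCycle() throws Exception {
        CountDownLatch lockHeld = new CountDownLatch(1);

        // "HBase-Metrics2" analogue: holds the metrics lock, then blocks on
        // a future that only the region-open side can complete. The real
        // thread would wait forever; here we time out so the demo ends.
        Thread metrics = new Thread(() -> {
            metricsLock.lock();
            try {
                lockHeld.countDown();
                metaOpened.get(2, TimeUnit.SECONDS);
            } catch (Exception ignored) {
                // timeout expected: the cycle prevents completion
            } finally {
                metricsLock.unlock();
            }
        });
        metrics.start();
        lockHeld.await();

        // "RS_OPEN_META" analogue: needs the metrics lock (register) before
        // it can finish opening meta and complete the future.
        boolean acquired = metricsLock.tryLock(500, TimeUnit.MILLISECONDS);
        if (acquired) {
            metaOpened.complete(null);
            metricsLock.unlock();
        }
        metrics.join();
        return !acquired; // true: lock unavailable while its holder waits on us
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demonstratesCycle());
    }
}
```

The usual fix for this pattern is to avoid doing blocking I/O (here, the meta-table read inside `getMetrics`) while holding the metrics-system lock.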
