Diagnosed failure

AdminCliTest.TestRebuildTables: /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/tools/kudu-admin-test.cc:3914: Failure
Failed
Bad status: Not found: not all replicas of tablets comprising table TestTable are registered yet
I20250905 08:24:32.646919   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 4457
I20250905 08:24:32.673401   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 4611
I20250905 08:24:32.698489   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 4761
I20250905 08:24:32.721885   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 4387
2025-09-05T08:24:32Z chronyd exiting
I20250905 08:24:32.767566   426 test_util.cc:183] -----------------------------------------------
I20250905 08:24:32.767733   426 test_util.cc:184] Had failures, leaving test files at /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0
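The assertion above fails because the test's precondition check (waiting for every replica of every tablet of TestTable to be registered with the master) did not succeed before its deadline, so the last "Not found" status was surfaced as a test failure. Below is a minimal, standalone sketch of that deadline-bounded polling pattern; it is not Kudu code, and CheckAllReplicasRegistered() is a hypothetical stand-in for whatever the test actually polls (in the real test this would be a query of the master's tablet locations compared against the expected replica count).

// Standalone sketch, assuming a deadline-bounded retry of a readiness probe.
// None of these names come from the Kudu sources; they only illustrate the pattern.
#include <chrono>
#include <iostream>
#include <string>
#include <thread>

struct Status {
  bool ok;
  std::string message;
  static Status OK() { return {true, ""}; }
  static Status NotFound(std::string m) { return {false, "Not found: " + std::move(m)}; }
};

// Hypothetical readiness probe: pretend the replicas only finish registering
// after a few polls, the way a freshly created table's replicas trickle in.
Status CheckAllReplicasRegistered() {
  static int attempts = 0;
  if (++attempts < 5) {
    return Status::NotFound(
        "not all replicas of tablets comprising table TestTable are registered yet");
  }
  return Status::OK();
}

// Poll the probe until it passes or the deadline expires; return the last
// error so the caller can surface it, much like the "Bad status" line above.
Status WaitForAllReplicas(std::chrono::milliseconds timeout) {
  const auto deadline = std::chrono::steady_clock::now() + timeout;
  Status last = Status::NotFound("check never ran");
  while (std::chrono::steady_clock::now() < deadline) {
    last = CheckAllReplicasRegistered();
    if (last.ok) return last;
    std::this_thread::sleep_for(std::chrono::milliseconds(200));
  }
  return last;
}

int main() {
  Status s = WaitForAllReplicas(std::chrono::seconds(10));
  std::cout << (s.ok ? std::string("all replicas registered")
                     : "Bad status: " + s.message) << std::endl;
  return s.ok ? 0 : 1;
}

If the probe keeps returning "Not found" until the deadline, the wait helper hands that status back, which is what the test then reports; on a slow TSAN build this kind of timeout is a common source of flakiness.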

Full log

Note: This is test shard 6 of 8.
[==========] Running 9 tests from 5 test suites.
[----------] Global test environment set-up.
[----------] 5 tests from AdminCliTest
[ RUN      ] AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes
WARNING: Logging before InitGoogleLogging() is written to STDERR
I20250905 08:22:28.064627   426 test_util.cc:276] Using random seed: -1982794496
W20250905 08:22:29.198966   426 thread.cc:641] rpc reactor (reactor) Time spent creating pthread: real 1.094s	user 0.433s	sys 0.658s
W20250905 08:22:29.199416   426 thread.cc:608] rpc reactor (reactor) Time spent starting thread: real 1.094s	user 0.433s	sys 0.658s
I20250905 08:22:29.201397   426 ts_itest-base.cc:115] Starting cluster with:
I20250905 08:22:29.201601   426 ts_itest-base.cc:116] --------------
I20250905 08:22:29.201769   426 ts_itest-base.cc:117] 4 tablet servers
I20250905 08:22:29.201943   426 ts_itest-base.cc:118] 3 replicas per TS
I20250905 08:22:29.202098   426 ts_itest-base.cc:119] --------------
2025-09-05T08:22:29Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-09-05T08:22:29Z Disabled control of system clock
I20250905 08:22:29.241508   426 external_mini_cluster.cc:1366] Running /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
/tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.106.190:37529
--webserver_interface=127.0.106.190
--webserver_port=0
--builtin_ntp_servers=127.0.106.148:33323
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.0.106.190:37529 with env {}
W20250905 08:22:29.521198   440 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:22:29.521704   440 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250905 08:22:29.522118   440 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:22:29.549477   440 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250905 08:22:29.549798   440 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250905 08:22:29.550030   440 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250905 08:22:29.550251   440 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250905 08:22:29.582212   440 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:33323
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.0.106.190:37529
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.106.190:37529
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.0.106.190
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true

Master server version:
kudu 1.19.0-SNAPSHOT
revision e2e116a9cb215058aad161061214d11c617cedac
build type FASTDEBUG
built by None at 05 Sep 2025 08:06:49 UTC on 24a791456cd2
build id 7909
TSAN enabled
I20250905 08:22:29.583434   440 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250905 08:22:29.584990   440 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250905 08:22:29.597126   446 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:22:29.600023   447 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:22:29.601509   449 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:22:30.743388   448 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1142 milliseconds
I20250905 08:22:30.743505   440 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250905 08:22:30.744663   440 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250905 08:22:30.747359   440 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250905 08:22:30.748752   440 hybrid_clock.cc:648] HybridClock initialized: now 1757060550748703 us; error 66 us; skew 500 ppm
I20250905 08:22:30.749531   440 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250905 08:22:30.756767   440 webserver.cc:480] Webserver started at http://127.0.106.190:39601/ using document root <none> and password file <none>
I20250905 08:22:30.757603   440 fs_manager.cc:362] Metadata directory not provided
I20250905 08:22:30.757793   440 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250905 08:22:30.758155   440 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250905 08:22:30.766302   440 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "60956f90bd4a4b45a2f1b8fdf0b5b9b3"
format_stamp: "Formatted at 2025-09-05 08:22:30 on dist-test-slave-0x95"
I20250905 08:22:30.767297   440 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "60956f90bd4a4b45a2f1b8fdf0b5b9b3"
format_stamp: "Formatted at 2025-09-05 08:22:30 on dist-test-slave-0x95"
I20250905 08:22:30.773862   440 fs_manager.cc:696] Time spent creating directory manager: real 0.006s	user 0.009s	sys 0.000s
I20250905 08:22:30.779309   456 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:22:30.780264   440 fs_manager.cc:730] Time spent opening block manager: real 0.004s	user 0.003s	sys 0.000s
I20250905 08:22:30.780565   440 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
uuid: "60956f90bd4a4b45a2f1b8fdf0b5b9b3"
format_stamp: "Formatted at 2025-09-05 08:22:30 on dist-test-slave-0x95"
I20250905 08:22:30.780859   440 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250905 08:22:30.828824   440 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250905 08:22:30.830180   440 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250905 08:22:30.830595   440 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250905 08:22:30.902761   440 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.190:37529
I20250905 08:22:30.902815   507 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.190:37529 every 8 connection(s)
I20250905 08:22:30.905169   440 server_base.cc:1179] Dumped server information to /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250905 08:22:30.908212   426 external_mini_cluster.cc:1428] Started /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu as pid 440
I20250905 08:22:30.908655   426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250905 08:22:30.911351   508 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250905 08:22:30.928239   508 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 60956f90bd4a4b45a2f1b8fdf0b5b9b3: Bootstrap starting.
I20250905 08:22:30.935087   508 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 60956f90bd4a4b45a2f1b8fdf0b5b9b3: Neither blocks nor log segments found. Creating new log.
I20250905 08:22:30.936931   508 log.cc:826] T 00000000000000000000000000000000 P 60956f90bd4a4b45a2f1b8fdf0b5b9b3: Log is configured to *not* fsync() on all Append() calls
I20250905 08:22:30.941874   508 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 60956f90bd4a4b45a2f1b8fdf0b5b9b3: No bootstrap required, opened a new log
I20250905 08:22:30.959201   508 raft_consensus.cc:357] T 00000000000000000000000000000000 P 60956f90bd4a4b45a2f1b8fdf0b5b9b3 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "60956f90bd4a4b45a2f1b8fdf0b5b9b3" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 37529 } }
I20250905 08:22:30.959757   508 raft_consensus.cc:383] T 00000000000000000000000000000000 P 60956f90bd4a4b45a2f1b8fdf0b5b9b3 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250905 08:22:30.960036   508 raft_consensus.cc:738] T 00000000000000000000000000000000 P 60956f90bd4a4b45a2f1b8fdf0b5b9b3 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 60956f90bd4a4b45a2f1b8fdf0b5b9b3, State: Initialized, Role: FOLLOWER
I20250905 08:22:30.960734   508 consensus_queue.cc:260] T 00000000000000000000000000000000 P 60956f90bd4a4b45a2f1b8fdf0b5b9b3 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "60956f90bd4a4b45a2f1b8fdf0b5b9b3" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 37529 } }
I20250905 08:22:30.961138   508 raft_consensus.cc:397] T 00000000000000000000000000000000 P 60956f90bd4a4b45a2f1b8fdf0b5b9b3 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250905 08:22:30.961367   508 raft_consensus.cc:491] T 00000000000000000000000000000000 P 60956f90bd4a4b45a2f1b8fdf0b5b9b3 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250905 08:22:30.961609   508 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 60956f90bd4a4b45a2f1b8fdf0b5b9b3 [term 0 FOLLOWER]: Advancing to term 1
I20250905 08:22:30.964844   508 raft_consensus.cc:513] T 00000000000000000000000000000000 P 60956f90bd4a4b45a2f1b8fdf0b5b9b3 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "60956f90bd4a4b45a2f1b8fdf0b5b9b3" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 37529 } }
I20250905 08:22:30.965350   508 leader_election.cc:304] T 00000000000000000000000000000000 P 60956f90bd4a4b45a2f1b8fdf0b5b9b3 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 60956f90bd4a4b45a2f1b8fdf0b5b9b3; no voters: 
I20250905 08:22:30.967229   508 leader_election.cc:290] T 00000000000000000000000000000000 P 60956f90bd4a4b45a2f1b8fdf0b5b9b3 [CANDIDATE]: Term 1 election: Requested vote from peers 
I20250905 08:22:30.968312   513 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 60956f90bd4a4b45a2f1b8fdf0b5b9b3 [term 1 FOLLOWER]: Leader election won for term 1
I20250905 08:22:30.970675   513 raft_consensus.cc:695] T 00000000000000000000000000000000 P 60956f90bd4a4b45a2f1b8fdf0b5b9b3 [term 1 LEADER]: Becoming Leader. State: Replica: 60956f90bd4a4b45a2f1b8fdf0b5b9b3, State: Running, Role: LEADER
I20250905 08:22:30.971385   508 sys_catalog.cc:564] T 00000000000000000000000000000000 P 60956f90bd4a4b45a2f1b8fdf0b5b9b3 [sys.catalog]: configured and running, proceeding with master startup.
I20250905 08:22:30.971205   513 consensus_queue.cc:237] T 00000000000000000000000000000000 P 60956f90bd4a4b45a2f1b8fdf0b5b9b3 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "60956f90bd4a4b45a2f1b8fdf0b5b9b3" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 37529 } }
I20250905 08:22:30.984175   514 sys_catalog.cc:455] T 00000000000000000000000000000000 P 60956f90bd4a4b45a2f1b8fdf0b5b9b3 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "60956f90bd4a4b45a2f1b8fdf0b5b9b3" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "60956f90bd4a4b45a2f1b8fdf0b5b9b3" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 37529 } } }
I20250905 08:22:30.984856   514 sys_catalog.cc:458] T 00000000000000000000000000000000 P 60956f90bd4a4b45a2f1b8fdf0b5b9b3 [sys.catalog]: This master's current role is: LEADER
I20250905 08:22:30.987167   515 sys_catalog.cc:455] T 00000000000000000000000000000000 P 60956f90bd4a4b45a2f1b8fdf0b5b9b3 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 60956f90bd4a4b45a2f1b8fdf0b5b9b3. Latest consensus state: current_term: 1 leader_uuid: "60956f90bd4a4b45a2f1b8fdf0b5b9b3" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "60956f90bd4a4b45a2f1b8fdf0b5b9b3" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 37529 } } }
I20250905 08:22:30.987800   515 sys_catalog.cc:458] T 00000000000000000000000000000000 P 60956f90bd4a4b45a2f1b8fdf0b5b9b3 [sys.catalog]: This master's current role is: LEADER
I20250905 08:22:30.990262   523 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250905 08:22:31.003443   523 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250905 08:22:31.020334   523 catalog_manager.cc:1349] Generated new cluster ID: 59d24ef0b4d64bc9ac0383362765a584
I20250905 08:22:31.020524   523 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250905 08:22:31.047928   523 catalog_manager.cc:1372] Generated new certificate authority record
I20250905 08:22:31.049214   523 catalog_manager.cc:1506] Loading token signing keys...
I20250905 08:22:31.062274   523 catalog_manager.cc:5955] T 00000000000000000000000000000000 P 60956f90bd4a4b45a2f1b8fdf0b5b9b3: Generated new TSK 0
I20250905 08:22:31.063143   523 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250905 08:22:31.084048   426 external_mini_cluster.cc:1366] Running /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
/tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.129:0
--local_ip_for_outbound_sockets=127.0.106.129
--webserver_interface=127.0.106.129
--webserver_port=0
--tserver_master_addrs=127.0.106.190:37529
--builtin_ntp_servers=127.0.106.148:33323
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
W20250905 08:22:31.366222   532 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:22:31.366709   532 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250905 08:22:31.367197   532 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:22:31.397117   532 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250905 08:22:31.397881   532 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.129
I20250905 08:22:31.429872   532 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:33323
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.129:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.0.106.129
--webserver_port=0
--tserver_master_addrs=127.0.106.190:37529
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.129
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision e2e116a9cb215058aad161061214d11c617cedac
build type FASTDEBUG
built by None at 05 Sep 2025 08:06:49 UTC on 24a791456cd2
build id 7909
TSAN enabled
I20250905 08:22:31.431098   532 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250905 08:22:31.432523   532 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250905 08:22:31.444759   538 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:22:32.847553   537 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 532
W20250905 08:22:31.447223   539 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:22:33.169672   540 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection timed out after 1723 milliseconds
W20250905 08:22:33.170279   541 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:22:33.168026   532 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.723s	user 0.490s	sys 1.148s
W20250905 08:22:33.170889   532 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.726s	user 0.490s	sys 1.148s
I20250905 08:22:33.171154   532 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250905 08:22:33.174384   532 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250905 08:22:33.177335   532 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250905 08:22:33.178779   532 hybrid_clock.cc:648] HybridClock initialized: now 1757060553178741 us; error 48 us; skew 500 ppm
I20250905 08:22:33.179821   532 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250905 08:22:33.187072   532 webserver.cc:480] Webserver started at http://127.0.106.129:39861/ using document root <none> and password file <none>
I20250905 08:22:33.188200   532 fs_manager.cc:362] Metadata directory not provided
I20250905 08:22:33.188462   532 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250905 08:22:33.188946   532 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250905 08:22:33.195225   532 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "8ac0c5dcff144305b212c5f105d3cab7"
format_stamp: "Formatted at 2025-09-05 08:22:33 on dist-test-slave-0x95"
I20250905 08:22:33.196657   532 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "8ac0c5dcff144305b212c5f105d3cab7"
format_stamp: "Formatted at 2025-09-05 08:22:33 on dist-test-slave-0x95"
I20250905 08:22:33.205441   532 fs_manager.cc:696] Time spent creating directory manager: real 0.008s	user 0.006s	sys 0.002s
I20250905 08:22:33.211982   548 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:22:33.212932   532 fs_manager.cc:730] Time spent opening block manager: real 0.004s	user 0.001s	sys 0.003s
I20250905 08:22:33.213263   532 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "8ac0c5dcff144305b212c5f105d3cab7"
format_stamp: "Formatted at 2025-09-05 08:22:33 on dist-test-slave-0x95"
I20250905 08:22:33.213634   532 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250905 08:22:33.271464   532 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250905 08:22:33.272994   532 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250905 08:22:33.273399   532 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250905 08:22:33.276285   532 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250905 08:22:33.280006   532 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250905 08:22:33.280226   532 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:22:33.280454   532 ts_tablet_manager.cc:610] Registered 0 tablets
I20250905 08:22:33.280606   532 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:22:33.453975   532 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.129:45995
I20250905 08:22:33.454164   660 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.129:45995 every 8 connection(s)
I20250905 08:22:33.456904   532 server_base.cc:1179] Dumped server information to /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250905 08:22:33.464776   426 external_mini_cluster.cc:1428] Started /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu as pid 532
I20250905 08:22:33.465212   426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250905 08:22:33.471244   426 external_mini_cluster.cc:1366] Running /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
/tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.130:0
--local_ip_for_outbound_sockets=127.0.106.130
--webserver_interface=127.0.106.130
--webserver_port=0
--tserver_master_addrs=127.0.106.190:37529
--builtin_ntp_servers=127.0.106.148:33323
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250905 08:22:33.482728   661 heartbeater.cc:344] Connected to a master server at 127.0.106.190:37529
I20250905 08:22:33.483208   661 heartbeater.cc:461] Registering TS with master...
I20250905 08:22:33.484483   661 heartbeater.cc:507] Master 127.0.106.190:37529 requested a full tablet report, sending...
I20250905 08:22:33.487383   473 ts_manager.cc:194] Registered new tserver with Master: 8ac0c5dcff144305b212c5f105d3cab7 (127.0.106.129:45995)
I20250905 08:22:33.489923   473 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.129:47117
W20250905 08:22:33.769984   665 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:22:33.770496   665 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250905 08:22:33.770989   665 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:22:33.801538   665 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250905 08:22:33.802316   665 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.130
I20250905 08:22:33.834772   665 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:33323
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.130:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.0.106.130
--webserver_port=0
--tserver_master_addrs=127.0.106.190:37529
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.130
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision e2e116a9cb215058aad161061214d11c617cedac
build type FASTDEBUG
built by None at 05 Sep 2025 08:06:49 UTC on 24a791456cd2
build id 7909
TSAN enabled
I20250905 08:22:33.836037   665 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250905 08:22:33.837491   665 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250905 08:22:33.848598   671 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250905 08:22:34.494105   661 heartbeater.cc:499] Master 127.0.106.190:37529 was elected leader, sending a full tablet report...
W20250905 08:22:33.849697   672 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:22:35.021728   674 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:22:35.023777   673 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1168 milliseconds
W20250905 08:22:35.025054   665 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.176s	user 0.303s	sys 0.859s
W20250905 08:22:35.026353   665 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.177s	user 0.303s	sys 0.859s
I20250905 08:22:35.026656   665 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250905 08:22:35.028101   665 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250905 08:22:35.030766   665 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250905 08:22:35.032290   665 hybrid_clock.cc:648] HybridClock initialized: now 1757060555032219 us; error 57 us; skew 500 ppm
I20250905 08:22:35.033458   665 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250905 08:22:35.043315   665 webserver.cc:480] Webserver started at http://127.0.106.130:39827/ using document root <none> and password file <none>
I20250905 08:22:35.044450   665 fs_manager.cc:362] Metadata directory not provided
I20250905 08:22:35.044618   665 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250905 08:22:35.044974   665 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250905 08:22:35.048759   665 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data/instance:
uuid: "710a990db8704a02a561d580a36eeaa6"
format_stamp: "Formatted at 2025-09-05 08:22:35 on dist-test-slave-0x95"
I20250905 08:22:35.049772   665 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal/instance:
uuid: "710a990db8704a02a561d580a36eeaa6"
format_stamp: "Formatted at 2025-09-05 08:22:35 on dist-test-slave-0x95"
I20250905 08:22:35.057602   665 fs_manager.cc:696] Time spent creating directory manager: real 0.007s	user 0.004s	sys 0.004s
I20250905 08:22:35.063603   681 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:22:35.064549   665 fs_manager.cc:730] Time spent opening block manager: real 0.004s	user 0.002s	sys 0.001s
I20250905 08:22:35.064805   665 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "710a990db8704a02a561d580a36eeaa6"
format_stamp: "Formatted at 2025-09-05 08:22:35 on dist-test-slave-0x95"
I20250905 08:22:35.065068   665 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250905 08:22:35.128513   665 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250905 08:22:35.129735   665 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250905 08:22:35.130136   665 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250905 08:22:35.132263   665 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250905 08:22:35.136188   665 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250905 08:22:35.136373   665 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:22:35.136575   665 ts_tablet_manager.cc:610] Registered 0 tablets
I20250905 08:22:35.136709   665 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:22:35.269073   665 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.130:36913
I20250905 08:22:35.269651   793 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.130:36913 every 8 connection(s)
I20250905 08:22:35.271499   665 server_base.cc:1179] Dumped server information to /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250905 08:22:35.280692   426 external_mini_cluster.cc:1428] Started /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu as pid 665
I20250905 08:22:35.281059   426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal/instance
I20250905 08:22:35.286322   426 external_mini_cluster.cc:1366] Running /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
/tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.131:0
--local_ip_for_outbound_sockets=127.0.106.131
--webserver_interface=127.0.106.131
--webserver_port=0
--tserver_master_addrs=127.0.106.190:37529
--builtin_ntp_servers=127.0.106.148:33323
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250905 08:22:35.291890   794 heartbeater.cc:344] Connected to a master server at 127.0.106.190:37529
I20250905 08:22:35.292263   794 heartbeater.cc:461] Registering TS with master...
I20250905 08:22:35.293311   794 heartbeater.cc:507] Master 127.0.106.190:37529 requested a full tablet report, sending...
I20250905 08:22:35.295409   473 ts_manager.cc:194] Registered new tserver with Master: 710a990db8704a02a561d580a36eeaa6 (127.0.106.130:36913)
I20250905 08:22:35.297201   473 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.130:33047
W20250905 08:22:35.564396   798 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:22:35.564778   798 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250905 08:22:35.565198   798 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:22:35.593004   798 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250905 08:22:35.593706   798 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.131
I20250905 08:22:35.623430   798 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:33323
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.131:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.0.106.131
--webserver_port=0
--tserver_master_addrs=127.0.106.190:37529
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.131
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision e2e116a9cb215058aad161061214d11c617cedac
build type FASTDEBUG
built by None at 05 Sep 2025 08:06:49 UTC on 24a791456cd2
build id 7909
TSAN enabled
I20250905 08:22:35.624455   798 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250905 08:22:35.625721   798 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250905 08:22:35.637135   804 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250905 08:22:36.300405   794 heartbeater.cc:499] Master 127.0.106.190:37529 was elected leader, sending a full tablet report...
W20250905 08:22:37.039955   803 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 798
W20250905 08:22:35.638424   805 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:22:37.290282   798 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.653s	user 0.650s	sys 1.002s
W20250905 08:22:37.292826   798 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.656s	user 0.650s	sys 1.002s
W20250905 08:22:37.292861   807 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:22:37.292228   806 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1651 milliseconds
I20250905 08:22:37.293155   798 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250905 08:22:37.296382   798 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250905 08:22:37.298100   798 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250905 08:22:37.299402   798 hybrid_clock.cc:648] HybridClock initialized: now 1757060557299365 us; error 39 us; skew 500 ppm
I20250905 08:22:37.300209   798 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250905 08:22:37.305902   798 webserver.cc:480] Webserver started at http://127.0.106.131:42159/ using document root <none> and password file <none>
I20250905 08:22:37.307009   798 fs_manager.cc:362] Metadata directory not provided
I20250905 08:22:37.307212   798 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250905 08:22:37.307603   798 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250905 08:22:37.311786   798 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data/instance:
uuid: "b1d30ac4b1674028bb4876321e5818cc"
format_stamp: "Formatted at 2025-09-05 08:22:37 on dist-test-slave-0x95"
I20250905 08:22:37.312789   798 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal/instance:
uuid: "b1d30ac4b1674028bb4876321e5818cc"
format_stamp: "Formatted at 2025-09-05 08:22:37 on dist-test-slave-0x95"
I20250905 08:22:37.319017   798 fs_manager.cc:696] Time spent creating directory manager: real 0.006s	user 0.006s	sys 0.000s
I20250905 08:22:37.324175   814 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:22:37.325004   798 fs_manager.cc:730] Time spent opening block manager: real 0.003s	user 0.004s	sys 0.000s
I20250905 08:22:37.325264   798 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "b1d30ac4b1674028bb4876321e5818cc"
format_stamp: "Formatted at 2025-09-05 08:22:37 on dist-test-slave-0x95"
I20250905 08:22:37.325515   798 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250905 08:22:37.385753   798 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250905 08:22:37.387037   798 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250905 08:22:37.387413   798 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250905 08:22:37.390022   798 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250905 08:22:37.394224   798 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250905 08:22:37.394450   798 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:22:37.394706   798 ts_tablet_manager.cc:610] Registered 0 tablets
I20250905 08:22:37.394850   798 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:22:37.540966   798 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.131:37437
I20250905 08:22:37.541076   926 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.131:37437 every 8 connection(s)
I20250905 08:22:37.543453   798 server_base.cc:1179] Dumped server information to /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250905 08:22:37.545971   426 external_mini_cluster.cc:1428] Started /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu as pid 798
I20250905 08:22:37.546468   426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal/instance
I20250905 08:22:37.552696   426 external_mini_cluster.cc:1366] Running /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
/tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-3/wal
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-3/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-3/logs
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-3/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.132:0
--local_ip_for_outbound_sockets=127.0.106.132
--webserver_interface=127.0.106.132
--webserver_port=0
--tserver_master_addrs=127.0.106.190:37529
--builtin_ntp_servers=127.0.106.148:33323
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250905 08:22:37.567991   927 heartbeater.cc:344] Connected to a master server at 127.0.106.190:37529
I20250905 08:22:37.568367   927 heartbeater.cc:461] Registering TS with master...
I20250905 08:22:37.569321   927 heartbeater.cc:507] Master 127.0.106.190:37529 requested a full tablet report, sending...
I20250905 08:22:37.571452   473 ts_manager.cc:194] Registered new tserver with Master: b1d30ac4b1674028bb4876321e5818cc (127.0.106.131:37437)
I20250905 08:22:37.573159   473 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.131:51047
W20250905 08:22:37.837309   931 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:22:37.837725   931 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250905 08:22:37.838150   931 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:22:37.867569   931 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250905 08:22:37.868330   931 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.132
I20250905 08:22:37.900776   931 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:33323
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-3/data
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-3/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.132:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-3/data/info.pb
--webserver_interface=127.0.106.132
--webserver_port=0
--tserver_master_addrs=127.0.106.190:37529
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.132
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-3/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision e2e116a9cb215058aad161061214d11c617cedac
build type FASTDEBUG
built by None at 05 Sep 2025 08:06:49 UTC on 24a791456cd2
build id 7909
TSAN enabled
I20250905 08:22:37.901984   931 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250905 08:22:37.903371   931 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250905 08:22:37.916520   937 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250905 08:22:38.576740   927 heartbeater.cc:499] Master 127.0.106.190:37529 was elected leader, sending a full tablet report...
W20250905 08:22:37.918529   938 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:22:39.089916   931 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.173s	user 0.370s	sys 0.790s
W20250905 08:22:39.090344   931 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.174s	user 0.371s	sys 0.792s
W20250905 08:22:39.091746   939 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection timed out after 1173 milliseconds
I20250905 08:22:39.092168   931 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
W20250905 08:22:39.092197   940 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250905 08:22:39.093392   931 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250905 08:22:39.095453   931 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250905 08:22:39.098932   931 hybrid_clock.cc:648] HybridClock initialized: now 1757060559098871 us; error 57 us; skew 500 ppm
I20250905 08:22:39.099812   931 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250905 08:22:39.108103   931 webserver.cc:480] Webserver started at http://127.0.106.132:43921/ using document root <none> and password file <none>
I20250905 08:22:39.109011   931 fs_manager.cc:362] Metadata directory not provided
I20250905 08:22:39.109242   931 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250905 08:22:39.109654   931 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250905 08:22:39.113972   931 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-3/data/instance:
uuid: "4a699fa7b9c14abcb5f48d8825e6b46a"
format_stamp: "Formatted at 2025-09-05 08:22:39 on dist-test-slave-0x95"
I20250905 08:22:39.115087   931 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-3/wal/instance:
uuid: "4a699fa7b9c14abcb5f48d8825e6b46a"
format_stamp: "Formatted at 2025-09-05 08:22:39 on dist-test-slave-0x95"
I20250905 08:22:39.122809   931 fs_manager.cc:696] Time spent creating directory manager: real 0.007s	user 0.009s	sys 0.001s
I20250905 08:22:39.127858   948 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:22:39.128847   931 fs_manager.cc:730] Time spent opening block manager: real 0.003s	user 0.003s	sys 0.001s
I20250905 08:22:39.129120   931 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-3/data,/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-3/wal
uuid: "4a699fa7b9c14abcb5f48d8825e6b46a"
format_stamp: "Formatted at 2025-09-05 08:22:39 on dist-test-slave-0x95"
I20250905 08:22:39.129391   931 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-3/wal
metadata directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-3/wal
1 data directories: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-3/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250905 08:22:39.201258   931 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250905 08:22:39.202565   931 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250905 08:22:39.202911   931 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250905 08:22:39.205253   931 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250905 08:22:39.209077   931 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250905 08:22:39.209280   931 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:22:39.209518   931 ts_tablet_manager.cc:610] Registered 0 tablets
I20250905 08:22:39.209681   931 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:22:39.356730   931 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.132:33907
I20250905 08:22:39.356835  1060 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.132:33907 every 8 connection(s)
I20250905 08:22:39.359275   931 server_base.cc:1179] Dumped server information to /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-3/data/info.pb
I20250905 08:22:39.364439   426 external_mini_cluster.cc:1428] Started /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu as pid 931
I20250905 08:22:39.365087   426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/ts-3/wal/instance
I20250905 08:22:39.380199  1061 heartbeater.cc:344] Connected to a master server at 127.0.106.190:37529
I20250905 08:22:39.380597  1061 heartbeater.cc:461] Registering TS with master...
I20250905 08:22:39.381491  1061 heartbeater.cc:507] Master 127.0.106.190:37529 requested a full tablet report, sending...
I20250905 08:22:39.383489   473 ts_manager.cc:194] Registered new tserver with Master: 4a699fa7b9c14abcb5f48d8825e6b46a (127.0.106.132:33907)
I20250905 08:22:39.384675   473 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.132:59581
I20250905 08:22:39.384989   426 external_mini_cluster.cc:949] 4 TS(s) registered with all masters
I20250905 08:22:39.421597   473 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:36980:
name: "TestTable"
schema {
  columns {
    name: "key"
    type: INT32
    is_key: true
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "int_val"
    type: INT32
    is_key: false
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "string_val"
    type: STRING
    is_key: false
    is_nullable: true
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
  range_schema {
    columns {
      name: "key"
    }
  }
}
owner: "alice"
I20250905 08:22:39.489745   862 tablet_service.cc:1468] Processing CreateTablet for tablet 326058ad48ec4612bb7cc7bcb83ab34a (DEFAULT_TABLE table=TestTable [id=053730c8b3f146859c021c4179575413]), partition=RANGE (key) PARTITION UNBOUNDED
I20250905 08:22:39.491000   862 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 326058ad48ec4612bb7cc7bcb83ab34a. 1 dirs total, 0 dirs full, 0 dirs failed
I20250905 08:22:39.492795   729 tablet_service.cc:1468] Processing CreateTablet for tablet 326058ad48ec4612bb7cc7bcb83ab34a (DEFAULT_TABLE table=TestTable [id=053730c8b3f146859c021c4179575413]), partition=RANGE (key) PARTITION UNBOUNDED
I20250905 08:22:39.492794   996 tablet_service.cc:1468] Processing CreateTablet for tablet 326058ad48ec4612bb7cc7bcb83ab34a (DEFAULT_TABLE table=TestTable [id=053730c8b3f146859c021c4179575413]), partition=RANGE (key) PARTITION UNBOUNDED
I20250905 08:22:39.494678   996 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 326058ad48ec4612bb7cc7bcb83ab34a. 1 dirs total, 0 dirs full, 0 dirs failed
I20250905 08:22:39.494717   729 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 326058ad48ec4612bb7cc7bcb83ab34a. 1 dirs total, 0 dirs full, 0 dirs failed
I20250905 08:22:39.512180  1080 tablet_bootstrap.cc:492] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc: Bootstrap starting.
I20250905 08:22:39.519817  1080 tablet_bootstrap.cc:654] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc: Neither blocks nor log segments found. Creating new log.
I20250905 08:22:39.522187  1080 log.cc:826] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc: Log is configured to *not* fsync() on all Append() calls
I20250905 08:22:39.531023  1080 tablet_bootstrap.cc:492] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc: No bootstrap required, opened a new log
I20250905 08:22:39.531734  1080 ts_tablet_manager.cc:1397] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc: Time spent bootstrapping tablet: real 0.020s	user 0.008s	sys 0.010s
I20250905 08:22:39.538543  1083 tablet_bootstrap.cc:492] T 326058ad48ec4612bb7cc7bcb83ab34a P 710a990db8704a02a561d580a36eeaa6: Bootstrap starting.
I20250905 08:22:39.542510  1082 tablet_bootstrap.cc:492] T 326058ad48ec4612bb7cc7bcb83ab34a P 4a699fa7b9c14abcb5f48d8825e6b46a: Bootstrap starting.
I20250905 08:22:39.545444  1083 tablet_bootstrap.cc:654] T 326058ad48ec4612bb7cc7bcb83ab34a P 710a990db8704a02a561d580a36eeaa6: Neither blocks nor log segments found. Creating new log.
I20250905 08:22:39.548427  1083 log.cc:826] T 326058ad48ec4612bb7cc7bcb83ab34a P 710a990db8704a02a561d580a36eeaa6: Log is configured to *not* fsync() on all Append() calls
I20250905 08:22:39.552753  1082 tablet_bootstrap.cc:654] T 326058ad48ec4612bb7cc7bcb83ab34a P 4a699fa7b9c14abcb5f48d8825e6b46a: Neither blocks nor log segments found. Creating new log.
I20250905 08:22:39.555095  1082 log.cc:826] T 326058ad48ec4612bb7cc7bcb83ab34a P 4a699fa7b9c14abcb5f48d8825e6b46a: Log is configured to *not* fsync() on all Append() calls
I20250905 08:22:39.561386  1082 tablet_bootstrap.cc:492] T 326058ad48ec4612bb7cc7bcb83ab34a P 4a699fa7b9c14abcb5f48d8825e6b46a: No bootstrap required, opened a new log
I20250905 08:22:39.561905  1082 ts_tablet_manager.cc:1397] T 326058ad48ec4612bb7cc7bcb83ab34a P 4a699fa7b9c14abcb5f48d8825e6b46a: Time spent bootstrapping tablet: real 0.020s	user 0.010s	sys 0.004s
I20250905 08:22:39.563304  1083 tablet_bootstrap.cc:492] T 326058ad48ec4612bb7cc7bcb83ab34a P 710a990db8704a02a561d580a36eeaa6: No bootstrap required, opened a new log
I20250905 08:22:39.562875  1080 raft_consensus.cc:357] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b1d30ac4b1674028bb4876321e5818cc" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 37437 } } peers { permanent_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 33907 } } peers { permanent_uuid: "710a990db8704a02a561d580a36eeaa6" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36913 } }
I20250905 08:22:39.563701  1080 raft_consensus.cc:383] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250905 08:22:39.563787  1083 ts_tablet_manager.cc:1397] T 326058ad48ec4612bb7cc7bcb83ab34a P 710a990db8704a02a561d580a36eeaa6: Time spent bootstrapping tablet: real 0.026s	user 0.005s	sys 0.017s
I20250905 08:22:39.564020  1080 raft_consensus.cc:738] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: b1d30ac4b1674028bb4876321e5818cc, State: Initialized, Role: FOLLOWER
I20250905 08:22:39.564972  1080 consensus_queue.cc:260] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b1d30ac4b1674028bb4876321e5818cc" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 37437 } } peers { permanent_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 33907 } } peers { permanent_uuid: "710a990db8704a02a561d580a36eeaa6" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36913 } }
I20250905 08:22:39.590083  1080 ts_tablet_manager.cc:1428] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc: Time spent starting tablet: real 0.058s	user 0.032s	sys 0.021s
I20250905 08:22:39.591364  1083 raft_consensus.cc:357] T 326058ad48ec4612bb7cc7bcb83ab34a P 710a990db8704a02a561d580a36eeaa6 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b1d30ac4b1674028bb4876321e5818cc" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 37437 } } peers { permanent_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 33907 } } peers { permanent_uuid: "710a990db8704a02a561d580a36eeaa6" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36913 } }
I20250905 08:22:39.592365  1083 raft_consensus.cc:383] T 326058ad48ec4612bb7cc7bcb83ab34a P 710a990db8704a02a561d580a36eeaa6 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250905 08:22:39.592700  1083 raft_consensus.cc:738] T 326058ad48ec4612bb7cc7bcb83ab34a P 710a990db8704a02a561d580a36eeaa6 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 710a990db8704a02a561d580a36eeaa6, State: Initialized, Role: FOLLOWER
I20250905 08:22:39.593506  1082 raft_consensus.cc:357] T 326058ad48ec4612bb7cc7bcb83ab34a P 4a699fa7b9c14abcb5f48d8825e6b46a [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b1d30ac4b1674028bb4876321e5818cc" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 37437 } } peers { permanent_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 33907 } } peers { permanent_uuid: "710a990db8704a02a561d580a36eeaa6" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36913 } }
I20250905 08:22:39.593715  1083 consensus_queue.cc:260] T 326058ad48ec4612bb7cc7bcb83ab34a P 710a990db8704a02a561d580a36eeaa6 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b1d30ac4b1674028bb4876321e5818cc" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 37437 } } peers { permanent_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 33907 } } peers { permanent_uuid: "710a990db8704a02a561d580a36eeaa6" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36913 } }
I20250905 08:22:39.594271  1082 raft_consensus.cc:383] T 326058ad48ec4612bb7cc7bcb83ab34a P 4a699fa7b9c14abcb5f48d8825e6b46a [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250905 08:22:39.594626  1082 raft_consensus.cc:738] T 326058ad48ec4612bb7cc7bcb83ab34a P 4a699fa7b9c14abcb5f48d8825e6b46a [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 4a699fa7b9c14abcb5f48d8825e6b46a, State: Initialized, Role: FOLLOWER
I20250905 08:22:39.595585  1082 consensus_queue.cc:260] T 326058ad48ec4612bb7cc7bcb83ab34a P 4a699fa7b9c14abcb5f48d8825e6b46a [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b1d30ac4b1674028bb4876321e5818cc" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 37437 } } peers { permanent_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 33907 } } peers { permanent_uuid: "710a990db8704a02a561d580a36eeaa6" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36913 } }
I20250905 08:22:39.599419  1061 heartbeater.cc:499] Master 127.0.106.190:37529 was elected leader, sending a full tablet report...
I20250905 08:22:39.601123  1083 ts_tablet_manager.cc:1428] T 326058ad48ec4612bb7cc7bcb83ab34a P 710a990db8704a02a561d580a36eeaa6: Time spent starting tablet: real 0.037s	user 0.028s	sys 0.002s
I20250905 08:22:39.601653  1082 ts_tablet_manager.cc:1428] T 326058ad48ec4612bb7cc7bcb83ab34a P 4a699fa7b9c14abcb5f48d8825e6b46a: Time spent starting tablet: real 0.039s	user 0.036s	sys 0.000s
W20250905 08:22:39.613994  1062 tablet.cc:2378] T 326058ad48ec4612bb7cc7bcb83ab34a P 4a699fa7b9c14abcb5f48d8825e6b46a: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250905 08:22:39.780364   795 tablet.cc:2378] T 326058ad48ec4612bb7cc7bcb83ab34a P 710a990db8704a02a561d580a36eeaa6: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250905 08:22:39.799578   928 tablet.cc:2378] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250905 08:22:39.856256  1088 raft_consensus.cc:491] T 326058ad48ec4612bb7cc7bcb83ab34a P 710a990db8704a02a561d580a36eeaa6 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250905 08:22:39.856631  1088 raft_consensus.cc:513] T 326058ad48ec4612bb7cc7bcb83ab34a P 710a990db8704a02a561d580a36eeaa6 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b1d30ac4b1674028bb4876321e5818cc" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 37437 } } peers { permanent_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 33907 } } peers { permanent_uuid: "710a990db8704a02a561d580a36eeaa6" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36913 } }
I20250905 08:22:39.858681  1088 leader_election.cc:290] T 326058ad48ec4612bb7cc7bcb83ab34a P 710a990db8704a02a561d580a36eeaa6 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers b1d30ac4b1674028bb4876321e5818cc (127.0.106.131:37437), 4a699fa7b9c14abcb5f48d8825e6b46a (127.0.106.132:33907)
I20250905 08:22:39.869495   882 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "326058ad48ec4612bb7cc7bcb83ab34a" candidate_uuid: "710a990db8704a02a561d580a36eeaa6" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "b1d30ac4b1674028bb4876321e5818cc" is_pre_election: true
I20250905 08:22:39.869899  1016 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "326058ad48ec4612bb7cc7bcb83ab34a" candidate_uuid: "710a990db8704a02a561d580a36eeaa6" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a" is_pre_election: true
I20250905 08:22:39.870258   882 raft_consensus.cc:2466] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 710a990db8704a02a561d580a36eeaa6 in term 0.
I20250905 08:22:39.870621  1016 raft_consensus.cc:2466] T 326058ad48ec4612bb7cc7bcb83ab34a P 4a699fa7b9c14abcb5f48d8825e6b46a [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 710a990db8704a02a561d580a36eeaa6 in term 0.
I20250905 08:22:39.871512   685 leader_election.cc:304] T 326058ad48ec4612bb7cc7bcb83ab34a P 710a990db8704a02a561d580a36eeaa6 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 710a990db8704a02a561d580a36eeaa6, b1d30ac4b1674028bb4876321e5818cc; no voters: 
I20250905 08:22:39.872205  1088 raft_consensus.cc:2802] T 326058ad48ec4612bb7cc7bcb83ab34a P 710a990db8704a02a561d580a36eeaa6 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250905 08:22:39.872470  1088 raft_consensus.cc:491] T 326058ad48ec4612bb7cc7bcb83ab34a P 710a990db8704a02a561d580a36eeaa6 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250905 08:22:39.872687  1088 raft_consensus.cc:3058] T 326058ad48ec4612bb7cc7bcb83ab34a P 710a990db8704a02a561d580a36eeaa6 [term 0 FOLLOWER]: Advancing to term 1
I20250905 08:22:39.876730  1088 raft_consensus.cc:513] T 326058ad48ec4612bb7cc7bcb83ab34a P 710a990db8704a02a561d580a36eeaa6 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b1d30ac4b1674028bb4876321e5818cc" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 37437 } } peers { permanent_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 33907 } } peers { permanent_uuid: "710a990db8704a02a561d580a36eeaa6" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36913 } }
I20250905 08:22:39.877918  1088 leader_election.cc:290] T 326058ad48ec4612bb7cc7bcb83ab34a P 710a990db8704a02a561d580a36eeaa6 [CANDIDATE]: Term 1 election: Requested vote from peers b1d30ac4b1674028bb4876321e5818cc (127.0.106.131:37437), 4a699fa7b9c14abcb5f48d8825e6b46a (127.0.106.132:33907)
I20250905 08:22:39.878437   882 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "326058ad48ec4612bb7cc7bcb83ab34a" candidate_uuid: "710a990db8704a02a561d580a36eeaa6" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "b1d30ac4b1674028bb4876321e5818cc"
I20250905 08:22:39.878680  1016 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "326058ad48ec4612bb7cc7bcb83ab34a" candidate_uuid: "710a990db8704a02a561d580a36eeaa6" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a"
I20250905 08:22:39.878813   882 raft_consensus.cc:3058] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [term 0 FOLLOWER]: Advancing to term 1
I20250905 08:22:39.879046  1016 raft_consensus.cc:3058] T 326058ad48ec4612bb7cc7bcb83ab34a P 4a699fa7b9c14abcb5f48d8825e6b46a [term 0 FOLLOWER]: Advancing to term 1
I20250905 08:22:39.883002   882 raft_consensus.cc:2466] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 710a990db8704a02a561d580a36eeaa6 in term 1.
I20250905 08:22:39.883021  1016 raft_consensus.cc:2466] T 326058ad48ec4612bb7cc7bcb83ab34a P 4a699fa7b9c14abcb5f48d8825e6b46a [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 710a990db8704a02a561d580a36eeaa6 in term 1.
I20250905 08:22:39.883770   685 leader_election.cc:304] T 326058ad48ec4612bb7cc7bcb83ab34a P 710a990db8704a02a561d580a36eeaa6 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 710a990db8704a02a561d580a36eeaa6, b1d30ac4b1674028bb4876321e5818cc; no voters: 
I20250905 08:22:39.884331  1088 raft_consensus.cc:2802] T 326058ad48ec4612bb7cc7bcb83ab34a P 710a990db8704a02a561d580a36eeaa6 [term 1 FOLLOWER]: Leader election won for term 1
I20250905 08:22:39.885887  1088 raft_consensus.cc:695] T 326058ad48ec4612bb7cc7bcb83ab34a P 710a990db8704a02a561d580a36eeaa6 [term 1 LEADER]: Becoming Leader. State: Replica: 710a990db8704a02a561d580a36eeaa6, State: Running, Role: LEADER
I20250905 08:22:39.886643  1088 consensus_queue.cc:237] T 326058ad48ec4612bb7cc7bcb83ab34a P 710a990db8704a02a561d580a36eeaa6 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b1d30ac4b1674028bb4876321e5818cc" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 37437 } } peers { permanent_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 33907 } } peers { permanent_uuid: "710a990db8704a02a561d580a36eeaa6" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36913 } }
I20250905 08:22:39.894665   472 catalog_manager.cc:5582] T 326058ad48ec4612bb7cc7bcb83ab34a P 710a990db8704a02a561d580a36eeaa6 reported cstate change: term changed from 0 to 1, leader changed from <none> to 710a990db8704a02a561d580a36eeaa6 (127.0.106.130). New cstate: current_term: 1 leader_uuid: "710a990db8704a02a561d580a36eeaa6" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b1d30ac4b1674028bb4876321e5818cc" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 37437 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 33907 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "710a990db8704a02a561d580a36eeaa6" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36913 } health_report { overall_health: HEALTHY } } }
I20250905 08:22:39.968147   426 external_mini_cluster.cc:949] 4 TS(s) registered with all masters
I20250905 08:22:39.971230   426 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 710a990db8704a02a561d580a36eeaa6 to finish bootstrapping
I20250905 08:22:39.983189   426 ts_itest-base.cc:246] Waiting for 1 tablets on tserver b1d30ac4b1674028bb4876321e5818cc to finish bootstrapping
I20250905 08:22:39.992565   426 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 4a699fa7b9c14abcb5f48d8825e6b46a to finish bootstrapping
I20250905 08:22:40.002707   426 kudu-admin-test.cc:709] Waiting for Master to see the current replicas...
I20250905 08:22:40.005328   426 kudu-admin-test.cc:716] Tablet locations:
tablet_locations {
  tablet_id: "326058ad48ec4612bb7cc7bcb83ab34a"
  DEPRECATED_stale: false
  partition {
    partition_key_start: ""
    partition_key_end: ""
  }
  interned_replicas {
    ts_info_idx: 0
    role: FOLLOWER
  }
  interned_replicas {
    ts_info_idx: 1
    role: FOLLOWER
  }
  interned_replicas {
    ts_info_idx: 2
    role: LEADER
  }
}
ts_infos {
  permanent_uuid: "b1d30ac4b1674028bb4876321e5818cc"
  rpc_addresses {
    host: "127.0.106.131"
    port: 37437
  }
}
ts_infos {
  permanent_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a"
  rpc_addresses {
    host: "127.0.106.132"
    port: 33907
  }
}
ts_infos {
  permanent_uuid: "710a990db8704a02a561d580a36eeaa6"
  rpc_addresses {
    host: "127.0.106.130"
    port: 36913
  }
}
I20250905 08:22:40.436857  1096 consensus_queue.cc:1035] T 326058ad48ec4612bb7cc7bcb83ab34a P 710a990db8704a02a561d580a36eeaa6 [LEADER]: Connected to new peer: Peer: permanent_uuid: "b1d30ac4b1674028bb4876321e5818cc" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 37437 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250905 08:22:40.453557   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 665
W20250905 08:22:40.481962   950 connection.cc:537] server connection from 127.0.106.130:58727 recv error: Network error: recv error from unknown peer: Transport endpoint is not connected (error 107)
W20250905 08:22:40.482095   460 connection.cc:537] server connection from 127.0.106.130:33047 recv error: Network error: recv error from unknown peer: Transport endpoint is not connected (error 107)
W20250905 08:22:40.482115   818 connection.cc:537] server connection from 127.0.106.130:48371 recv error: Network error: recv error from unknown peer: Transport endpoint is not connected (error 107)
I20250905 08:22:40.483045   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 440
I20250905 08:22:40.515326   426 external_mini_cluster.cc:1366] Running /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
/tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.106.190:37529
--webserver_interface=127.0.106.190
--webserver_port=39601
--builtin_ntp_servers=127.0.106.148:33323
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.0.106.190:37529 with env {}
W20250905 08:22:40.543255   661 heartbeater.cc:646] Failed to heartbeat to 127.0.106.190:37529 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.0.106.190:37529: connect: Connection refused (error 111)
W20250905 08:22:40.613116  1061 heartbeater.cc:646] Failed to heartbeat to 127.0.106.190:37529 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.0.106.190:37529: connect: Connection refused (error 111)
W20250905 08:22:40.801813  1103 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:22:40.802357  1103 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250905 08:22:40.802775  1103 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:22:40.830878  1103 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250905 08:22:40.831178  1103 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250905 08:22:40.831449  1103 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250905 08:22:40.831684  1103 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250905 08:22:40.864048  1103 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:33323
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.0.106.190:37529
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.106.190:37529
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.0.106.190
--webserver_port=39601
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true

Master server version:
kudu 1.19.0-SNAPSHOT
revision e2e116a9cb215058aad161061214d11c617cedac
build type FASTDEBUG
built by None at 05 Sep 2025 08:06:49 UTC on 24a791456cd2
build id 7909
TSAN enabled
I20250905 08:22:40.865234  1103 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250905 08:22:40.866701  1103 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250905 08:22:40.876942  1111 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:22:41.478289   927 heartbeater.cc:646] Failed to heartbeat to 127.0.106.190:37529 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.0.106.190:37529: connect: Connection refused (error 111)
I20250905 08:22:41.794018  1121 raft_consensus.cc:491] T 326058ad48ec4612bb7cc7bcb83ab34a P 4a699fa7b9c14abcb5f48d8825e6b46a [term 1 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250905 08:22:41.794991  1121 raft_consensus.cc:513] T 326058ad48ec4612bb7cc7bcb83ab34a P 4a699fa7b9c14abcb5f48d8825e6b46a [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b1d30ac4b1674028bb4876321e5818cc" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 37437 } } peers { permanent_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 33907 } } peers { permanent_uuid: "710a990db8704a02a561d580a36eeaa6" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36913 } }
I20250905 08:22:41.815315  1121 leader_election.cc:290] T 326058ad48ec4612bb7cc7bcb83ab34a P 4a699fa7b9c14abcb5f48d8825e6b46a [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers b1d30ac4b1674028bb4876321e5818cc (127.0.106.131:37437), 710a990db8704a02a561d580a36eeaa6 (127.0.106.130:36913)
W20250905 08:22:41.823177   952 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.0.106.130:36913: connect: Connection refused (error 111)
W20250905 08:22:41.834002   952 leader_election.cc:336] T 326058ad48ec4612bb7cc7bcb83ab34a P 4a699fa7b9c14abcb5f48d8825e6b46a [CANDIDATE]: Term 2 pre-election: RPC error from VoteRequest() call to peer 710a990db8704a02a561d580a36eeaa6 (127.0.106.130:36913): Network error: Client connection negotiation failed: client connection to 127.0.106.130:36913: connect: Connection refused (error 111)
I20250905 08:22:41.836274   882 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "326058ad48ec4612bb7cc7bcb83ab34a" candidate_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a" candidate_term: 2 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "b1d30ac4b1674028bb4876321e5818cc" is_pre_election: true
I20250905 08:22:41.838152   952 leader_election.cc:304] T 326058ad48ec4612bb7cc7bcb83ab34a P 4a699fa7b9c14abcb5f48d8825e6b46a [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: 4a699fa7b9c14abcb5f48d8825e6b46a; no voters: 710a990db8704a02a561d580a36eeaa6, b1d30ac4b1674028bb4876321e5818cc
I20250905 08:22:41.839146  1121 raft_consensus.cc:2747] T 326058ad48ec4612bb7cc7bcb83ab34a P 4a699fa7b9c14abcb5f48d8825e6b46a [term 1 FOLLOWER]: Leader pre-election lost for term 2. Reason: could not achieve majority
I20250905 08:22:41.965106  1125 raft_consensus.cc:491] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [term 1 FOLLOWER]: Starting pre-election (detected failure of leader 710a990db8704a02a561d580a36eeaa6)
I20250905 08:22:41.966015  1125 raft_consensus.cc:513] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b1d30ac4b1674028bb4876321e5818cc" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 37437 } } peers { permanent_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 33907 } } peers { permanent_uuid: "710a990db8704a02a561d580a36eeaa6" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36913 } }
I20250905 08:22:41.975400  1125 leader_election.cc:290] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers 4a699fa7b9c14abcb5f48d8825e6b46a (127.0.106.132:33907), 710a990db8704a02a561d580a36eeaa6 (127.0.106.130:36913)
W20250905 08:22:41.982887   818 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.0.106.130:36913: connect: Connection refused (error 111)
W20250905 08:22:41.995967   818 leader_election.cc:336] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [CANDIDATE]: Term 2 pre-election: RPC error from VoteRequest() call to peer 710a990db8704a02a561d580a36eeaa6 (127.0.106.130:36913): Network error: Client connection negotiation failed: client connection to 127.0.106.130:36913: connect: Connection refused (error 111)
I20250905 08:22:42.004253  1016 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "326058ad48ec4612bb7cc7bcb83ab34a" candidate_uuid: "b1d30ac4b1674028bb4876321e5818cc" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: false dest_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a" is_pre_election: true
I20250905 08:22:42.004922  1016 raft_consensus.cc:2466] T 326058ad48ec4612bb7cc7bcb83ab34a P 4a699fa7b9c14abcb5f48d8825e6b46a [term 1 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate b1d30ac4b1674028bb4876321e5818cc in term 1.
I20250905 08:22:42.006227   817 leader_election.cc:304] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 4a699fa7b9c14abcb5f48d8825e6b46a, b1d30ac4b1674028bb4876321e5818cc; no voters: 710a990db8704a02a561d580a36eeaa6
I20250905 08:22:42.007488  1125 raft_consensus.cc:2802] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [term 1 FOLLOWER]: Leader pre-election won for term 2
I20250905 08:22:42.007869  1125 raft_consensus.cc:491] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [term 1 FOLLOWER]: Starting leader election (detected failure of leader 710a990db8704a02a561d580a36eeaa6)
I20250905 08:22:42.008234  1125 raft_consensus.cc:3058] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [term 1 FOLLOWER]: Advancing to term 2
I20250905 08:22:42.016988  1125 raft_consensus.cc:513] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b1d30ac4b1674028bb4876321e5818cc" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 37437 } } peers { permanent_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 33907 } } peers { permanent_uuid: "710a990db8704a02a561d580a36eeaa6" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36913 } }
I20250905 08:22:42.019583  1016 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "326058ad48ec4612bb7cc7bcb83ab34a" candidate_uuid: "b1d30ac4b1674028bb4876321e5818cc" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: false dest_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a"
I20250905 08:22:42.020097  1016 raft_consensus.cc:3058] T 326058ad48ec4612bb7cc7bcb83ab34a P 4a699fa7b9c14abcb5f48d8825e6b46a [term 1 FOLLOWER]: Advancing to term 2
I20250905 08:22:42.022397  1125 leader_election.cc:290] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [CANDIDATE]: Term 2 election: Requested vote from peers 4a699fa7b9c14abcb5f48d8825e6b46a (127.0.106.132:33907), 710a990db8704a02a561d580a36eeaa6 (127.0.106.130:36913)
W20250905 08:22:42.024256   818 leader_election.cc:336] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [CANDIDATE]: Term 2 election: RPC error from VoteRequest() call to peer 710a990db8704a02a561d580a36eeaa6 (127.0.106.130:36913): Network error: Client connection negotiation failed: client connection to 127.0.106.130:36913: connect: Connection refused (error 111)
I20250905 08:22:42.026598  1016 raft_consensus.cc:2466] T 326058ad48ec4612bb7cc7bcb83ab34a P 4a699fa7b9c14abcb5f48d8825e6b46a [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate b1d30ac4b1674028bb4876321e5818cc in term 2.
I20250905 08:22:42.027784   817 leader_election.cc:304] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 4a699fa7b9c14abcb5f48d8825e6b46a, b1d30ac4b1674028bb4876321e5818cc; no voters: 710a990db8704a02a561d580a36eeaa6
I20250905 08:22:42.028990  1125 raft_consensus.cc:2802] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [term 2 FOLLOWER]: Leader election won for term 2
I20250905 08:22:42.040393  1125 raft_consensus.cc:695] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [term 2 LEADER]: Becoming Leader. State: Replica: b1d30ac4b1674028bb4876321e5818cc, State: Running, Role: LEADER
I20250905 08:22:42.041608  1125 consensus_queue.cc:237] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 1, Committed index: 1, Last appended: 1.1, Last appended by leader: 1, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b1d30ac4b1674028bb4876321e5818cc" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 37437 } } peers { permanent_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 33907 } } peers { permanent_uuid: "710a990db8704a02a561d580a36eeaa6" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36913 } }
W20250905 08:22:40.877885  1112 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:22:42.474185  1103 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.597s	user 0.557s	sys 0.998s
W20250905 08:22:42.474573  1103 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.598s	user 0.557s	sys 0.999s
W20250905 08:22:42.280558  1110 debug-util.cc:398] Leaking SignalData structure 0x7b0800037cc0 after lost signal to thread 1103
W20250905 08:22:42.476454  1114 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:22:42.479425  1113 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1601 milliseconds
I20250905 08:22:42.479441  1103 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250905 08:22:42.480439  1103 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250905 08:22:42.482582  1103 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250905 08:22:42.483891  1103 hybrid_clock.cc:648] HybridClock initialized: now 1757060562483855 us; error 39 us; skew 500 ppm
I20250905 08:22:42.484630  1103 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250905 08:22:42.489872  1103 webserver.cc:480] Webserver started at http://127.0.106.190:39601/ using document root <none> and password file <none>
I20250905 08:22:42.490655  1103 fs_manager.cc:362] Metadata directory not provided
I20250905 08:22:42.490870  1103 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250905 08:22:42.500440  1103 fs_manager.cc:714] Time spent opening directory manager: real 0.006s	user 0.007s	sys 0.000s
I20250905 08:22:42.504390  1135 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:22:42.505266  1103 fs_manager.cc:730] Time spent opening block manager: real 0.003s	user 0.002s	sys 0.002s
I20250905 08:22:42.505534  1103 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
uuid: "60956f90bd4a4b45a2f1b8fdf0b5b9b3"
format_stamp: "Formatted at 2025-09-05 08:22:30 on dist-test-slave-0x95"
I20250905 08:22:42.507122  1103 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
W20250905 08:22:42.512341   818 consensus_peers.cc:489] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc -> Peer 710a990db8704a02a561d580a36eeaa6 (127.0.106.130:36913): Couldn't send request to peer 710a990db8704a02a561d580a36eeaa6. Status: Network error: Client connection negotiation failed: client connection to 127.0.106.130:36913: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20250905 08:22:42.558672  1103 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250905 08:22:42.560242  1103 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250905 08:22:42.560771  1103 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250905 08:22:42.626226  1103 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.190:37529
I20250905 08:22:42.626307  1187 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.190:37529 every 8 connection(s)
I20250905 08:22:42.628785  1103 server_base.cc:1179] Dumped server information to /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250905 08:22:42.638929   426 external_mini_cluster.cc:1428] Started /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu as pid 1103
I20250905 08:22:42.639484   426 kudu-admin-test.cc:735] Forcing unsafe config change on tserver b1d30ac4b1674028bb4876321e5818cc
I20250905 08:22:42.639966  1016 raft_consensus.cc:1273] T 326058ad48ec4612bb7cc7bcb83ab34a P 4a699fa7b9c14abcb5f48d8825e6b46a [term 2 FOLLOWER]: Refusing update from remote peer b1d30ac4b1674028bb4876321e5818cc: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 2 index: 2. (index mismatch)
I20250905 08:22:42.640187  1188 sys_catalog.cc:263] Verifying existing consensus state
I20250905 08:22:42.641451  1125 consensus_queue.cc:1035] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [LEADER]: Connected to new peer: Peer: permanent_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 33907 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 0, Time since last communication: 0.000s
I20250905 08:22:42.647627  1188 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 60956f90bd4a4b45a2f1b8fdf0b5b9b3: Bootstrap starting.
I20250905 08:22:42.663744  1061 heartbeater.cc:344] Connected to a master server at 127.0.106.190:37529
I20250905 08:22:42.691241   927 heartbeater.cc:344] Connected to a master server at 127.0.106.190:37529
I20250905 08:22:42.700004  1188 log.cc:826] T 00000000000000000000000000000000 P 60956f90bd4a4b45a2f1b8fdf0b5b9b3: Log is configured to *not* fsync() on all Append() calls
I20250905 08:22:42.720160  1188 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 60956f90bd4a4b45a2f1b8fdf0b5b9b3: Bootstrap replayed 1/1 log segments. Stats: ops{read=7 overwritten=0 applied=7 ignored=0} inserts{seen=5 ignored=0} mutations{seen=2 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250905 08:22:42.720865  1188 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 60956f90bd4a4b45a2f1b8fdf0b5b9b3: Bootstrap complete.
I20250905 08:22:42.738852  1188 raft_consensus.cc:357] T 00000000000000000000000000000000 P 60956f90bd4a4b45a2f1b8fdf0b5b9b3 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "60956f90bd4a4b45a2f1b8fdf0b5b9b3" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 37529 } }
I20250905 08:22:42.740907  1188 raft_consensus.cc:738] T 00000000000000000000000000000000 P 60956f90bd4a4b45a2f1b8fdf0b5b9b3 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 60956f90bd4a4b45a2f1b8fdf0b5b9b3, State: Initialized, Role: FOLLOWER
I20250905 08:22:42.741838  1188 consensus_queue.cc:260] T 00000000000000000000000000000000 P 60956f90bd4a4b45a2f1b8fdf0b5b9b3 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 7, Last appended: 1.7, Last appended by leader: 7, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "60956f90bd4a4b45a2f1b8fdf0b5b9b3" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 37529 } }
I20250905 08:22:42.742502  1188 raft_consensus.cc:397] T 00000000000000000000000000000000 P 60956f90bd4a4b45a2f1b8fdf0b5b9b3 [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250905 08:22:42.742848  1188 raft_consensus.cc:491] T 00000000000000000000000000000000 P 60956f90bd4a4b45a2f1b8fdf0b5b9b3 [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250905 08:22:42.743286  1188 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 60956f90bd4a4b45a2f1b8fdf0b5b9b3 [term 1 FOLLOWER]: Advancing to term 2
I20250905 08:22:42.748715  1188 raft_consensus.cc:513] T 00000000000000000000000000000000 P 60956f90bd4a4b45a2f1b8fdf0b5b9b3 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "60956f90bd4a4b45a2f1b8fdf0b5b9b3" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 37529 } }
I20250905 08:22:42.749271  1188 leader_election.cc:304] T 00000000000000000000000000000000 P 60956f90bd4a4b45a2f1b8fdf0b5b9b3 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 60956f90bd4a4b45a2f1b8fdf0b5b9b3; no voters: 
I20250905 08:22:42.751389  1188 leader_election.cc:290] T 00000000000000000000000000000000 P 60956f90bd4a4b45a2f1b8fdf0b5b9b3 [CANDIDATE]: Term 2 election: Requested vote from peers 
I20250905 08:22:42.751782  1199 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 60956f90bd4a4b45a2f1b8fdf0b5b9b3 [term 2 FOLLOWER]: Leader election won for term 2
I20250905 08:22:42.755048  1199 raft_consensus.cc:695] T 00000000000000000000000000000000 P 60956f90bd4a4b45a2f1b8fdf0b5b9b3 [term 2 LEADER]: Becoming Leader. State: Replica: 60956f90bd4a4b45a2f1b8fdf0b5b9b3, State: Running, Role: LEADER
I20250905 08:22:42.756078  1199 consensus_queue.cc:237] T 00000000000000000000000000000000 P 60956f90bd4a4b45a2f1b8fdf0b5b9b3 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 7, Committed index: 7, Last appended: 1.7, Last appended by leader: 7, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "60956f90bd4a4b45a2f1b8fdf0b5b9b3" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 37529 } }
I20250905 08:22:42.757169  1188 sys_catalog.cc:564] T 00000000000000000000000000000000 P 60956f90bd4a4b45a2f1b8fdf0b5b9b3 [sys.catalog]: configured and running, proceeding with master startup.
I20250905 08:22:42.764487  1201 sys_catalog.cc:455] T 00000000000000000000000000000000 P 60956f90bd4a4b45a2f1b8fdf0b5b9b3 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 60956f90bd4a4b45a2f1b8fdf0b5b9b3. Latest consensus state: current_term: 2 leader_uuid: "60956f90bd4a4b45a2f1b8fdf0b5b9b3" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "60956f90bd4a4b45a2f1b8fdf0b5b9b3" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 37529 } } }
I20250905 08:22:42.765149  1201 sys_catalog.cc:458] T 00000000000000000000000000000000 P 60956f90bd4a4b45a2f1b8fdf0b5b9b3 [sys.catalog]: This master's current role is: LEADER
I20250905 08:22:42.764469  1200 sys_catalog.cc:455] T 00000000000000000000000000000000 P 60956f90bd4a4b45a2f1b8fdf0b5b9b3 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 2 leader_uuid: "60956f90bd4a4b45a2f1b8fdf0b5b9b3" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "60956f90bd4a4b45a2f1b8fdf0b5b9b3" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 37529 } } }
I20250905 08:22:42.766963  1200 sys_catalog.cc:458] T 00000000000000000000000000000000 P 60956f90bd4a4b45a2f1b8fdf0b5b9b3 [sys.catalog]: This master's current role is: LEADER
I20250905 08:22:42.777400  1206 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250905 08:22:42.788303  1206 catalog_manager.cc:671] Loaded metadata for table TestTable [id=053730c8b3f146859c021c4179575413]
I20250905 08:22:42.795282  1206 tablet_loader.cc:96] loaded metadata for tablet 326058ad48ec4612bb7cc7bcb83ab34a (table TestTable [id=053730c8b3f146859c021c4179575413])
I20250905 08:22:42.796748  1206 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250905 08:22:42.801047  1206 catalog_manager.cc:1261] Loaded cluster ID: 59d24ef0b4d64bc9ac0383362765a584
I20250905 08:22:42.801333  1206 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250905 08:22:42.808048  1206 catalog_manager.cc:1506] Loading token signing keys...
I20250905 08:22:42.812599  1206 catalog_manager.cc:5966] T 00000000000000000000000000000000 P 60956f90bd4a4b45a2f1b8fdf0b5b9b3: Loaded TSK: 0
I20250905 08:22:42.813889  1206 catalog_manager.cc:1516] Initializing in-progress tserver states...
W20250905 08:22:42.980928  1193 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:22:42.981465  1193 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:22:43.010967  1193 flags.cc:425] Enabled experimental flag: --enable_multi_tenancy=false
I20250905 08:22:43.600250   661 heartbeater.cc:344] Connected to a master server at 127.0.106.190:37529
I20250905 08:22:43.606866  1153 master_service.cc:432] Got heartbeat from unknown tserver (permanent_uuid: "8ac0c5dcff144305b212c5f105d3cab7" instance_seqno: 1757060553414498) as {username='slave'} at 127.0.106.129:37095; Asking this server to re-register.
I20250905 08:22:43.608906   661 heartbeater.cc:461] Registering TS with master...
I20250905 08:22:43.609772   661 heartbeater.cc:507] Master 127.0.106.190:37529 requested a full tablet report, sending...
I20250905 08:22:43.613440  1152 ts_manager.cc:194] Registered new tserver with Master: 8ac0c5dcff144305b212c5f105d3cab7 (127.0.106.129:45995)
I20250905 08:22:43.680146  1152 master_service.cc:432] Got heartbeat from unknown tserver (permanent_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a" instance_seqno: 1757060559324957) as {username='slave'} at 127.0.106.132:60423; Asking this server to re-register.
I20250905 08:22:43.682034  1061 heartbeater.cc:461] Registering TS with master...
I20250905 08:22:43.682662  1061 heartbeater.cc:507] Master 127.0.106.190:37529 requested a full tablet report, sending...
I20250905 08:22:43.685732  1152 ts_manager.cc:194] Registered new tserver with Master: 4a699fa7b9c14abcb5f48d8825e6b46a (127.0.106.132:33907)
I20250905 08:22:43.691316  1152 catalog_manager.cc:5582] T 326058ad48ec4612bb7cc7bcb83ab34a P 4a699fa7b9c14abcb5f48d8825e6b46a reported cstate change: term changed from 1 to 2, leader changed from 710a990db8704a02a561d580a36eeaa6 (127.0.106.130) to b1d30ac4b1674028bb4876321e5818cc (127.0.106.131). New cstate: current_term: 2 leader_uuid: "b1d30ac4b1674028bb4876321e5818cc" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b1d30ac4b1674028bb4876321e5818cc" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 37437 } } peers { permanent_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 33907 } } peers { permanent_uuid: "710a990db8704a02a561d580a36eeaa6" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36913 } } }
I20250905 08:22:43.701874  1153 master_service.cc:432] Got heartbeat from unknown tserver (permanent_uuid: "b1d30ac4b1674028bb4876321e5818cc" instance_seqno: 1757060557505771) as {username='slave'} at 127.0.106.131:51677; Asking this server to re-register.
I20250905 08:22:43.703512   927 heartbeater.cc:461] Registering TS with master...
I20250905 08:22:43.704167   927 heartbeater.cc:507] Master 127.0.106.190:37529 requested a full tablet report, sending...
I20250905 08:22:43.707118  1153 ts_manager.cc:194] Registered new tserver with Master: b1d30ac4b1674028bb4876321e5818cc (127.0.106.131:37437)
W20250905 08:22:44.296249   923 debug-util.cc:398] Leaking SignalData structure 0x7b08000ac520 after lost signal to thread 799
W20250905 08:22:44.297518   923 debug-util.cc:398] Leaking SignalData structure 0x7b0800083cc0 after lost signal to thread 926
W20250905 08:22:44.391383  1193 thread.cc:641] rpc reactor (reactor) Time spent creating pthread: real 1.343s	user 0.496s	sys 0.798s
W20250905 08:22:44.391816  1193 thread.cc:608] rpc reactor (reactor) Time spent starting thread: real 1.344s	user 0.498s	sys 0.800s
I20250905 08:22:44.518026   882 tablet_service.cc:1905] Received UnsafeChangeConfig RPC: dest_uuid: "b1d30ac4b1674028bb4876321e5818cc"
tablet_id: "326058ad48ec4612bb7cc7bcb83ab34a"
caller_id: "kudu-tools"
new_config {
  peers {
    permanent_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a"
  }
  peers {
    permanent_uuid: "b1d30ac4b1674028bb4876321e5818cc"
  }
}
 from {username='slave'} at 127.0.0.1:45762
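(Annotation: the UnsafeChangeConfig RPC dumped above is what the kudu CLI sends when an operator forces a new Raft config onto a single surviving replica. A minimal sketch of the corresponding invocation, assuming the documented "kudu remote_replica unsafe_change_config" subcommand and substituting the tserver address and peer UUIDs visible in this run:

kudu remote_replica unsafe_change_config 127.0.106.131:37437 326058ad48ec4612bb7cc7bcb83ab34a 4a699fa7b9c14abcb5f48d8825e6b46a b1d30ac4b1674028bb4876321e5818cc

The peers named on the command line become the new config; the omitted peer 710a990db8704a02a561d580a36eeaa6 is evicted, which is exactly the config change the leader commits further down in this log.)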
W20250905 08:22:44.519443   882 raft_consensus.cc:2216] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [term 2 LEADER]: PROCEEDING WITH UNSAFE CONFIG CHANGE ON THIS SERVER, COMMITTED CONFIG: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b1d30ac4b1674028bb4876321e5818cc" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 37437 } } peers { permanent_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 33907 } } peers { permanent_uuid: "710a990db8704a02a561d580a36eeaa6" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36913 } }NEW CONFIG: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "b1d30ac4b1674028bb4876321e5818cc" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 37437 } } peers { permanent_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 33907 } } unsafe_config_change: true
I20250905 08:22:44.520844   882 raft_consensus.cc:3053] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [term 2 LEADER]: Stepping down as leader of term 2
I20250905 08:22:44.521159   882 raft_consensus.cc:738] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [term 2 LEADER]: Becoming Follower/Learner. State: Replica: b1d30ac4b1674028bb4876321e5818cc, State: Running, Role: LEADER
I20250905 08:22:44.522130   882 consensus_queue.cc:260] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 2, Committed index: 2, Last appended: 2.2, Last appended by leader: 2, Current term: 2, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b1d30ac4b1674028bb4876321e5818cc" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 37437 } } peers { permanent_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 33907 } } peers { permanent_uuid: "710a990db8704a02a561d580a36eeaa6" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36913 } }
I20250905 08:22:44.523584   882 raft_consensus.cc:3058] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [term 2 FOLLOWER]: Advancing to term 3
I20250905 08:22:45.735700  1247 raft_consensus.cc:491] T 326058ad48ec4612bb7cc7bcb83ab34a P 4a699fa7b9c14abcb5f48d8825e6b46a [term 2 FOLLOWER]: Starting pre-election (detected failure of leader b1d30ac4b1674028bb4876321e5818cc)
I20250905 08:22:45.736120  1247 raft_consensus.cc:513] T 326058ad48ec4612bb7cc7bcb83ab34a P 4a699fa7b9c14abcb5f48d8825e6b46a [term 2 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b1d30ac4b1674028bb4876321e5818cc" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 37437 } } peers { permanent_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 33907 } } peers { permanent_uuid: "710a990db8704a02a561d580a36eeaa6" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36913 } }
I20250905 08:22:45.737529  1247 leader_election.cc:290] T 326058ad48ec4612bb7cc7bcb83ab34a P 4a699fa7b9c14abcb5f48d8825e6b46a [CANDIDATE]: Term 3 pre-election: Requested pre-vote from peers b1d30ac4b1674028bb4876321e5818cc (127.0.106.131:37437), 710a990db8704a02a561d580a36eeaa6 (127.0.106.130:36913)
I20250905 08:22:45.738893   882 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "326058ad48ec4612bb7cc7bcb83ab34a" candidate_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a" candidate_term: 3 candidate_status { last_received { term: 2 index: 2 } } ignore_live_leader: false dest_uuid: "b1d30ac4b1674028bb4876321e5818cc" is_pre_election: true
W20250905 08:22:45.742045   952 leader_election.cc:336] T 326058ad48ec4612bb7cc7bcb83ab34a P 4a699fa7b9c14abcb5f48d8825e6b46a [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer 710a990db8704a02a561d580a36eeaa6 (127.0.106.130:36913): Network error: Client connection negotiation failed: client connection to 127.0.106.130:36913: connect: Connection refused (error 111)
I20250905 08:22:45.742481   952 leader_election.cc:304] T 326058ad48ec4612bb7cc7bcb83ab34a P 4a699fa7b9c14abcb5f48d8825e6b46a [CANDIDATE]: Term 3 pre-election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: 4a699fa7b9c14abcb5f48d8825e6b46a; no voters: 710a990db8704a02a561d580a36eeaa6, b1d30ac4b1674028bb4876321e5818cc
I20250905 08:22:45.743181  1247 raft_consensus.cc:2747] T 326058ad48ec4612bb7cc7bcb83ab34a P 4a699fa7b9c14abcb5f48d8825e6b46a [term 2 FOLLOWER]: Leader pre-election lost for term 3. Reason: could not achieve majority
I20250905 08:22:46.034058  1250 raft_consensus.cc:491] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [term 3 FOLLOWER]: Starting pre-election (detected failure of leader kudu-tools)
I20250905 08:22:46.034624  1250 raft_consensus.cc:513] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [term 3 FOLLOWER]: Starting pre-election with config: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "b1d30ac4b1674028bb4876321e5818cc" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 37437 } } peers { permanent_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 33907 } } unsafe_config_change: true
I20250905 08:22:46.035882  1250 leader_election.cc:290] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [CANDIDATE]: Term 4 pre-election: Requested pre-vote from peers 4a699fa7b9c14abcb5f48d8825e6b46a (127.0.106.132:33907)
I20250905 08:22:46.036875  1016 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "326058ad48ec4612bb7cc7bcb83ab34a" candidate_uuid: "b1d30ac4b1674028bb4876321e5818cc" candidate_term: 4 candidate_status { last_received { term: 3 index: 3 } } ignore_live_leader: false dest_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a" is_pre_election: true
I20250905 08:22:46.037271  1016 raft_consensus.cc:2466] T 326058ad48ec4612bb7cc7bcb83ab34a P 4a699fa7b9c14abcb5f48d8825e6b46a [term 2 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate b1d30ac4b1674028bb4876321e5818cc in term 2.
I20250905 08:22:46.038138   817 leader_election.cc:304] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [CANDIDATE]: Term 4 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 2 voters: 2 yes votes; 0 no votes. yes voters: 4a699fa7b9c14abcb5f48d8825e6b46a, b1d30ac4b1674028bb4876321e5818cc; no voters: 
I20250905 08:22:46.038808  1250 raft_consensus.cc:2802] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [term 3 FOLLOWER]: Leader pre-election won for term 4
I20250905 08:22:46.039059  1250 raft_consensus.cc:491] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [term 3 FOLLOWER]: Starting leader election (detected failure of leader kudu-tools)
I20250905 08:22:46.039346  1250 raft_consensus.cc:3058] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [term 3 FOLLOWER]: Advancing to term 4
I20250905 08:22:46.043826  1250 raft_consensus.cc:513] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [term 4 FOLLOWER]: Starting leader election with config: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "b1d30ac4b1674028bb4876321e5818cc" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 37437 } } peers { permanent_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 33907 } } unsafe_config_change: true
I20250905 08:22:46.044878  1250 leader_election.cc:290] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [CANDIDATE]: Term 4 election: Requested vote from peers 4a699fa7b9c14abcb5f48d8825e6b46a (127.0.106.132:33907)
I20250905 08:22:46.045686  1016 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "326058ad48ec4612bb7cc7bcb83ab34a" candidate_uuid: "b1d30ac4b1674028bb4876321e5818cc" candidate_term: 4 candidate_status { last_received { term: 3 index: 3 } } ignore_live_leader: false dest_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a"
I20250905 08:22:46.046020  1016 raft_consensus.cc:3058] T 326058ad48ec4612bb7cc7bcb83ab34a P 4a699fa7b9c14abcb5f48d8825e6b46a [term 2 FOLLOWER]: Advancing to term 4
I20250905 08:22:46.050102  1016 raft_consensus.cc:2466] T 326058ad48ec4612bb7cc7bcb83ab34a P 4a699fa7b9c14abcb5f48d8825e6b46a [term 4 FOLLOWER]: Leader election vote request: Granting yes vote for candidate b1d30ac4b1674028bb4876321e5818cc in term 4.
I20250905 08:22:46.050863   817 leader_election.cc:304] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [CANDIDATE]: Term 4 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 2 voters: 2 yes votes; 0 no votes. yes voters: 4a699fa7b9c14abcb5f48d8825e6b46a, b1d30ac4b1674028bb4876321e5818cc; no voters: 
I20250905 08:22:46.051434  1250 raft_consensus.cc:2802] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [term 4 FOLLOWER]: Leader election won for term 4
I20250905 08:22:46.052392  1250 raft_consensus.cc:695] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [term 4 LEADER]: Becoming Leader. State: Replica: b1d30ac4b1674028bb4876321e5818cc, State: Running, Role: LEADER
I20250905 08:22:46.053328  1250 consensus_queue.cc:237] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 2, Committed index: 2, Last appended: 3.3, Last appended by leader: 0, Current term: 4, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "b1d30ac4b1674028bb4876321e5818cc" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 37437 } } peers { permanent_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 33907 } } unsafe_config_change: true
I20250905 08:22:46.060511  1153 catalog_manager.cc:5582] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc reported cstate change: term changed from 2 to 4, now has a pending config: VOTER b1d30ac4b1674028bb4876321e5818cc (127.0.106.131), VOTER 4a699fa7b9c14abcb5f48d8825e6b46a (127.0.106.132). New cstate: current_term: 4 leader_uuid: "b1d30ac4b1674028bb4876321e5818cc" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b1d30ac4b1674028bb4876321e5818cc" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 37437 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 33907 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "710a990db8704a02a561d580a36eeaa6" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36913 } } } pending_config { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "b1d30ac4b1674028bb4876321e5818cc" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 37437 } } peers { permanent_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 33907 } } unsafe_config_change: true }
I20250905 08:22:46.670115  1016 raft_consensus.cc:1273] T 326058ad48ec4612bb7cc7bcb83ab34a P 4a699fa7b9c14abcb5f48d8825e6b46a [term 4 FOLLOWER]: Refusing update from remote peer b1d30ac4b1674028bb4876321e5818cc: Log matching property violated. Preceding OpId in replica: term: 2 index: 2. Preceding OpId from leader: term: 4 index: 4. (index mismatch)
I20250905 08:22:46.671030  1257 consensus_queue.cc:1035] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [LEADER]: Connected to new peer: Peer: permanent_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 33907 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 4, Last known committed idx: 2, Time since last communication: 0.000s
I20250905 08:22:46.677610  1260 raft_consensus.cc:2953] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [term 4 LEADER]: Committing config change with OpId 3.3: config changed from index -1 to 3, VOTER 710a990db8704a02a561d580a36eeaa6 (127.0.106.130) evicted. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "b1d30ac4b1674028bb4876321e5818cc" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 37437 } } peers { permanent_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 33907 } } unsafe_config_change: true }
I20250905 08:22:46.678678  1016 raft_consensus.cc:2953] T 326058ad48ec4612bb7cc7bcb83ab34a P 4a699fa7b9c14abcb5f48d8825e6b46a [term 4 FOLLOWER]: Committing config change with OpId 3.3: config changed from index -1 to 3, VOTER 710a990db8704a02a561d580a36eeaa6 (127.0.106.130) evicted. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "b1d30ac4b1674028bb4876321e5818cc" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 37437 } } peers { permanent_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 33907 } } unsafe_config_change: true }
I20250905 08:22:46.686157  1153 catalog_manager.cc:5582] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc reported cstate change: config changed from index -1 to 3, VOTER 710a990db8704a02a561d580a36eeaa6 (127.0.106.130) evicted, no longer has a pending config: VOTER b1d30ac4b1674028bb4876321e5818cc (127.0.106.131), VOTER 4a699fa7b9c14abcb5f48d8825e6b46a (127.0.106.132). New cstate: current_term: 4 leader_uuid: "b1d30ac4b1674028bb4876321e5818cc" committed_config { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "b1d30ac4b1674028bb4876321e5818cc" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 37437 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 33907 } health_report { overall_health: HEALTHY } } unsafe_config_change: true }
W20250905 08:22:46.694625  1153 catalog_manager.cc:5774] Failed to send DeleteTablet RPC for tablet 326058ad48ec4612bb7cc7bcb83ab34a on TS 710a990db8704a02a561d580a36eeaa6: Not found: failed to reset TS proxy: Could not find TS for UUID 710a990db8704a02a561d580a36eeaa6
I20250905 08:22:46.709218   882 consensus_queue.cc:237] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 4, Committed index: 4, Last appended: 4.4, Last appended by leader: 0, Current term: 4, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "b1d30ac4b1674028bb4876321e5818cc" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 37437 } } peers { permanent_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 33907 } } peers { permanent_uuid: "8ac0c5dcff144305b212c5f105d3cab7" member_type: NON_VOTER last_known_addr { host: "127.0.106.129" port: 45995 } attrs { promote: true } } unsafe_config_change: true
I20250905 08:22:46.713385  1016 raft_consensus.cc:1273] T 326058ad48ec4612bb7cc7bcb83ab34a P 4a699fa7b9c14abcb5f48d8825e6b46a [term 4 FOLLOWER]: Refusing update from remote peer b1d30ac4b1674028bb4876321e5818cc: Log matching property violated. Preceding OpId in replica: term: 4 index: 4. Preceding OpId from leader: term: 4 index: 5. (index mismatch)
I20250905 08:22:46.714630  1257 consensus_queue.cc:1035] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [LEADER]: Connected to new peer: Peer: permanent_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 33907 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 5, Last known committed idx: 4, Time since last communication: 0.000s
I20250905 08:22:46.719337  1260 raft_consensus.cc:2953] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [term 4 LEADER]: Committing config change with OpId 4.5: config changed from index 3 to 5, NON_VOTER 8ac0c5dcff144305b212c5f105d3cab7 (127.0.106.129) added. New config: { opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "b1d30ac4b1674028bb4876321e5818cc" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 37437 } } peers { permanent_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 33907 } } peers { permanent_uuid: "8ac0c5dcff144305b212c5f105d3cab7" member_type: NON_VOTER last_known_addr { host: "127.0.106.129" port: 45995 } attrs { promote: true } } unsafe_config_change: true }
I20250905 08:22:46.720721  1016 raft_consensus.cc:2953] T 326058ad48ec4612bb7cc7bcb83ab34a P 4a699fa7b9c14abcb5f48d8825e6b46a [term 4 FOLLOWER]: Committing config change with OpId 4.5: config changed from index 3 to 5, NON_VOTER 8ac0c5dcff144305b212c5f105d3cab7 (127.0.106.129) added. New config: { opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "b1d30ac4b1674028bb4876321e5818cc" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 37437 } } peers { permanent_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 33907 } } peers { permanent_uuid: "8ac0c5dcff144305b212c5f105d3cab7" member_type: NON_VOTER last_known_addr { host: "127.0.106.129" port: 45995 } attrs { promote: true } } unsafe_config_change: true }
W20250905 08:22:46.723407   818 consensus_peers.cc:489] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc -> Peer 8ac0c5dcff144305b212c5f105d3cab7 (127.0.106.129:45995): Couldn't send request to peer 8ac0c5dcff144305b212c5f105d3cab7. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: 326058ad48ec4612bb7cc7bcb83ab34a. This is attempt 1: this message will repeat every 5th retry.
I20250905 08:22:46.726958  1139 catalog_manager.cc:5095] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet 326058ad48ec4612bb7cc7bcb83ab34a with cas_config_opid_index 3: ChangeConfig:ADD_PEER:NON_VOTER succeeded (attempt 1)
I20250905 08:22:46.729946  1152 catalog_manager.cc:5582] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc reported cstate change: config changed from index 3 to 5, NON_VOTER 8ac0c5dcff144305b212c5f105d3cab7 (127.0.106.129) added. New cstate: current_term: 4 leader_uuid: "b1d30ac4b1674028bb4876321e5818cc" committed_config { opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "b1d30ac4b1674028bb4876321e5818cc" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 37437 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 33907 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "8ac0c5dcff144305b212c5f105d3cab7" member_type: NON_VOTER last_known_addr { host: "127.0.106.129" port: 45995 } attrs { promote: true } health_report { overall_health: UNKNOWN } } unsafe_config_change: true }
W20250905 08:22:46.747174  1137 catalog_manager.cc:4726] Async tablet task DeleteTablet RPC for tablet 326058ad48ec4612bb7cc7bcb83ab34a on TS 710a990db8704a02a561d580a36eeaa6 failed: Not found: failed to reset TS proxy: Could not find TS for UUID 710a990db8704a02a561d580a36eeaa6
I20250905 08:22:47.275704  1270 ts_tablet_manager.cc:927] T 326058ad48ec4612bb7cc7bcb83ab34a P 8ac0c5dcff144305b212c5f105d3cab7: Initiating tablet copy from peer b1d30ac4b1674028bb4876321e5818cc (127.0.106.131:37437)
I20250905 08:22:47.277671  1270 tablet_copy_client.cc:323] T 326058ad48ec4612bb7cc7bcb83ab34a P 8ac0c5dcff144305b212c5f105d3cab7: tablet copy: Beginning tablet copy session from remote peer at address 127.0.106.131:37437
I20250905 08:22:47.286550   902 tablet_copy_service.cc:140] P b1d30ac4b1674028bb4876321e5818cc: Received BeginTabletCopySession request for tablet 326058ad48ec4612bb7cc7bcb83ab34a from peer 8ac0c5dcff144305b212c5f105d3cab7 ({username='slave'} at 127.0.106.129:34497)
I20250905 08:22:47.286914   902 tablet_copy_service.cc:161] P b1d30ac4b1674028bb4876321e5818cc: Beginning new tablet copy session on tablet 326058ad48ec4612bb7cc7bcb83ab34a from peer 8ac0c5dcff144305b212c5f105d3cab7 at {username='slave'} at 127.0.106.129:34497: session id = 8ac0c5dcff144305b212c5f105d3cab7-326058ad48ec4612bb7cc7bcb83ab34a
I20250905 08:22:47.291082   902 tablet_copy_source_session.cc:215] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc: Tablet Copy: opened 0 blocks and 1 log segments
I20250905 08:22:47.295713  1270 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 326058ad48ec4612bb7cc7bcb83ab34a. 1 dirs total, 0 dirs full, 0 dirs failed
I20250905 08:22:47.312141  1270 tablet_copy_client.cc:806] T 326058ad48ec4612bb7cc7bcb83ab34a P 8ac0c5dcff144305b212c5f105d3cab7: tablet copy: Starting download of 0 data blocks...
I20250905 08:22:47.312566  1270 tablet_copy_client.cc:670] T 326058ad48ec4612bb7cc7bcb83ab34a P 8ac0c5dcff144305b212c5f105d3cab7: tablet copy: Starting download of 1 WAL segments...
I20250905 08:22:47.315384  1270 tablet_copy_client.cc:538] T 326058ad48ec4612bb7cc7bcb83ab34a P 8ac0c5dcff144305b212c5f105d3cab7: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250905 08:22:47.320508  1270 tablet_bootstrap.cc:492] T 326058ad48ec4612bb7cc7bcb83ab34a P 8ac0c5dcff144305b212c5f105d3cab7: Bootstrap starting.
I20250905 08:22:47.330632  1270 log.cc:826] T 326058ad48ec4612bb7cc7bcb83ab34a P 8ac0c5dcff144305b212c5f105d3cab7: Log is configured to *not* fsync() on all Append() calls
I20250905 08:22:47.339926  1270 tablet_bootstrap.cc:492] T 326058ad48ec4612bb7cc7bcb83ab34a P 8ac0c5dcff144305b212c5f105d3cab7: Bootstrap replayed 1/1 log segments. Stats: ops{read=5 overwritten=0 applied=5 ignored=0} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250905 08:22:47.340456  1270 tablet_bootstrap.cc:492] T 326058ad48ec4612bb7cc7bcb83ab34a P 8ac0c5dcff144305b212c5f105d3cab7: Bootstrap complete.
I20250905 08:22:47.340884  1270 ts_tablet_manager.cc:1397] T 326058ad48ec4612bb7cc7bcb83ab34a P 8ac0c5dcff144305b212c5f105d3cab7: Time spent bootstrapping tablet: real 0.021s	user 0.015s	sys 0.004s
I20250905 08:22:47.355507  1270 raft_consensus.cc:357] T 326058ad48ec4612bb7cc7bcb83ab34a P 8ac0c5dcff144305b212c5f105d3cab7 [term 4 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "b1d30ac4b1674028bb4876321e5818cc" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 37437 } } peers { permanent_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 33907 } } peers { permanent_uuid: "8ac0c5dcff144305b212c5f105d3cab7" member_type: NON_VOTER last_known_addr { host: "127.0.106.129" port: 45995 } attrs { promote: true } } unsafe_config_change: true
I20250905 08:22:47.356319  1270 raft_consensus.cc:738] T 326058ad48ec4612bb7cc7bcb83ab34a P 8ac0c5dcff144305b212c5f105d3cab7 [term 4 LEARNER]: Becoming Follower/Learner. State: Replica: 8ac0c5dcff144305b212c5f105d3cab7, State: Initialized, Role: LEARNER
I20250905 08:22:47.356949  1270 consensus_queue.cc:260] T 326058ad48ec4612bb7cc7bcb83ab34a P 8ac0c5dcff144305b212c5f105d3cab7 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 5, Last appended: 4.5, Last appended by leader: 5, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "b1d30ac4b1674028bb4876321e5818cc" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 37437 } } peers { permanent_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 33907 } } peers { permanent_uuid: "8ac0c5dcff144305b212c5f105d3cab7" member_type: NON_VOTER last_known_addr { host: "127.0.106.129" port: 45995 } attrs { promote: true } } unsafe_config_change: true
I20250905 08:22:47.359951  1270 ts_tablet_manager.cc:1428] T 326058ad48ec4612bb7cc7bcb83ab34a P 8ac0c5dcff144305b212c5f105d3cab7: Time spent starting tablet: real 0.019s	user 0.019s	sys 0.001s
I20250905 08:22:47.361320   902 tablet_copy_service.cc:342] P b1d30ac4b1674028bb4876321e5818cc: Request end of tablet copy session 8ac0c5dcff144305b212c5f105d3cab7-326058ad48ec4612bb7cc7bcb83ab34a received from {username='slave'} at 127.0.106.129:34497
I20250905 08:22:47.361719   902 tablet_copy_service.cc:434] P b1d30ac4b1674028bb4876321e5818cc: ending tablet copy session 8ac0c5dcff144305b212c5f105d3cab7-326058ad48ec4612bb7cc7bcb83ab34a on tablet 326058ad48ec4612bb7cc7bcb83ab34a with peer 8ac0c5dcff144305b212c5f105d3cab7
I20250905 08:22:47.862365   616 raft_consensus.cc:1215] T 326058ad48ec4612bb7cc7bcb83ab34a P 8ac0c5dcff144305b212c5f105d3cab7 [term 4 LEARNER]: Deduplicated request from leader. Original: 4.4->[4.5-4.5]   Dedup: 4.5->[]
W20250905 08:22:47.914059  1137 catalog_manager.cc:4726] Async tablet task DeleteTablet RPC for tablet 326058ad48ec4612bb7cc7bcb83ab34a on TS 710a990db8704a02a561d580a36eeaa6 failed: Not found: failed to reset TS proxy: Could not find TS for UUID 710a990db8704a02a561d580a36eeaa6
I20250905 08:22:48.474803  1278 raft_consensus.cc:1062] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc: attempting to promote NON_VOTER 8ac0c5dcff144305b212c5f105d3cab7 to VOTER
I20250905 08:22:48.476236  1278 consensus_queue.cc:237] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 5, Committed index: 5, Last appended: 4.5, Last appended by leader: 0, Current term: 4, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "b1d30ac4b1674028bb4876321e5818cc" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 37437 } } peers { permanent_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 33907 } } peers { permanent_uuid: "8ac0c5dcff144305b212c5f105d3cab7" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 45995 } attrs { promote: false } } unsafe_config_change: true
I20250905 08:22:48.480453   616 raft_consensus.cc:1273] T 326058ad48ec4612bb7cc7bcb83ab34a P 8ac0c5dcff144305b212c5f105d3cab7 [term 4 LEARNER]: Refusing update from remote peer b1d30ac4b1674028bb4876321e5818cc: Log matching property violated. Preceding OpId in replica: term: 4 index: 5. Preceding OpId from leader: term: 4 index: 6. (index mismatch)
I20250905 08:22:48.481245  1016 raft_consensus.cc:1273] T 326058ad48ec4612bb7cc7bcb83ab34a P 4a699fa7b9c14abcb5f48d8825e6b46a [term 4 FOLLOWER]: Refusing update from remote peer b1d30ac4b1674028bb4876321e5818cc: Log matching property violated. Preceding OpId in replica: term: 4 index: 5. Preceding OpId from leader: term: 4 index: 6. (index mismatch)
I20250905 08:22:48.481530  1278 consensus_queue.cc:1035] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [LEADER]: Connected to new peer: Peer: permanent_uuid: "8ac0c5dcff144305b212c5f105d3cab7" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 45995 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 6, Last known committed idx: 5, Time since last communication: 0.000s
I20250905 08:22:48.482498  1283 consensus_queue.cc:1035] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [LEADER]: Connected to new peer: Peer: permanent_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 33907 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 6, Last known committed idx: 5, Time since last communication: 0.000s
I20250905 08:22:48.487641  1278 raft_consensus.cc:2953] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc [term 4 LEADER]: Committing config change with OpId 4.6: config changed from index 5 to 6, 8ac0c5dcff144305b212c5f105d3cab7 (127.0.106.129) changed from NON_VOTER to VOTER. New config: { opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "b1d30ac4b1674028bb4876321e5818cc" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 37437 } } peers { permanent_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 33907 } } peers { permanent_uuid: "8ac0c5dcff144305b212c5f105d3cab7" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 45995 } attrs { promote: false } } unsafe_config_change: true }
I20250905 08:22:48.489202   616 raft_consensus.cc:2953] T 326058ad48ec4612bb7cc7bcb83ab34a P 8ac0c5dcff144305b212c5f105d3cab7 [term 4 FOLLOWER]: Committing config change with OpId 4.6: config changed from index 5 to 6, 8ac0c5dcff144305b212c5f105d3cab7 (127.0.106.129) changed from NON_VOTER to VOTER. New config: { opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "b1d30ac4b1674028bb4876321e5818cc" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 37437 } } peers { permanent_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 33907 } } peers { permanent_uuid: "8ac0c5dcff144305b212c5f105d3cab7" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 45995 } attrs { promote: false } } unsafe_config_change: true }
I20250905 08:22:48.489552  1016 raft_consensus.cc:2953] T 326058ad48ec4612bb7cc7bcb83ab34a P 4a699fa7b9c14abcb5f48d8825e6b46a [term 4 FOLLOWER]: Committing config change with OpId 4.6: config changed from index 5 to 6, 8ac0c5dcff144305b212c5f105d3cab7 (127.0.106.129) changed from NON_VOTER to VOTER. New config: { opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "b1d30ac4b1674028bb4876321e5818cc" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 37437 } } peers { permanent_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 33907 } } peers { permanent_uuid: "8ac0c5dcff144305b212c5f105d3cab7" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 45995 } attrs { promote: false } } unsafe_config_change: true }
I20250905 08:22:48.499122  1153 catalog_manager.cc:5582] T 326058ad48ec4612bb7cc7bcb83ab34a P b1d30ac4b1674028bb4876321e5818cc reported cstate change: config changed from index 5 to 6, 8ac0c5dcff144305b212c5f105d3cab7 (127.0.106.129) changed from NON_VOTER to VOTER. New cstate: current_term: 4 leader_uuid: "b1d30ac4b1674028bb4876321e5818cc" committed_config { opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "b1d30ac4b1674028bb4876321e5818cc" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 37437 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 33907 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "8ac0c5dcff144305b212c5f105d3cab7" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 45995 } attrs { promote: false } health_report { overall_health: HEALTHY } } unsafe_config_change: true }
I20250905 08:22:48.521335   426 kudu-admin-test.cc:751] Waiting for Master to see new config...
I20250905 08:22:48.538898   426 kudu-admin-test.cc:756] Tablet locations:
tablet_locations {
  tablet_id: "326058ad48ec4612bb7cc7bcb83ab34a"
  DEPRECATED_stale: false
  partition {
    partition_key_start: ""
    partition_key_end: ""
  }
  interned_replicas {
    ts_info_idx: 0
    role: LEADER
  }
  interned_replicas {
    ts_info_idx: 1
    role: FOLLOWER
  }
  interned_replicas {
    ts_info_idx: 2
    role: FOLLOWER
  }
}
ts_infos {
  permanent_uuid: "b1d30ac4b1674028bb4876321e5818cc"
  rpc_addresses {
    host: "127.0.106.131"
    port: 37437
  }
}
ts_infos {
  permanent_uuid: "4a699fa7b9c14abcb5f48d8825e6b46a"
  rpc_addresses {
    host: "127.0.106.132"
    port: 33907
  }
}
ts_infos {
  permanent_uuid: "8ac0c5dcff144305b212c5f105d3cab7"
  rpc_addresses {
    host: "127.0.106.129"
    port: 45995
  }
}
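(Annotation: the tablet_locations dump above shows the tablet back at three voters after the copy and promotion. One hedged way to confirm the same placement from the command line is the cluster consistency checker, using this run's master address:

kudu cluster ksck 127.0.106.190:37529

ksck lists each tablet's leader and followers, so the b1d30ac4b1674028bb4876321e5818cc / 4a699fa7b9c14abcb5f48d8825e6b46a / 8ac0c5dcff144305b212c5f105d3cab7 configuration reported here should show up as healthy.)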
I20250905 08:22:48.542583   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 532
I20250905 08:22:48.564805   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 798
I20250905 08:22:48.591658   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 931
I20250905 08:22:48.615729   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 1103
2025-09-05T08:22:48Z chronyd exiting
[       OK ] AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes (20612 ms)
[ RUN      ] AdminCliTest.TestGracefulSpecificLeaderStepDown
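(Annotation: this test drives a targeted, graceful leader step-down through the admin CLI. A rough sketch of the kind of command involved, assuming the documented "kudu tablet leader_step_down" subcommand; the master address 127.0.106.190:44033 is taken from the flags below, the tablet id placeholder is not elided from this log, and the --new_leader_uuid flag for a graceful transfer is an assumption based on newer CLI versions:

kudu tablet leader_step_down 127.0.106.190:44033 <tablet_id> --new_leader_uuid=<target_ts_uuid>

Without the flag the current leader simply abdicates and a normal election follows; with it, leadership is handed to the named replica.)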
I20250905 08:22:48.674459   426 test_util.cc:276] Using random seed: -1962184573
I20250905 08:22:48.679880   426 ts_itest-base.cc:115] Starting cluster with:
I20250905 08:22:48.680042   426 ts_itest-base.cc:116] --------------
I20250905 08:22:48.680202   426 ts_itest-base.cc:117] 3 tablet servers
I20250905 08:22:48.680341   426 ts_itest-base.cc:118] 3 replicas per TS
I20250905 08:22:48.680473   426 ts_itest-base.cc:119] --------------
2025-09-05T08:22:48Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-09-05T08:22:48Z Disabled control of system clock
I20250905 08:22:48.712872   426 external_mini_cluster.cc:1366] Running /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
/tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.106.190:44033
--webserver_interface=127.0.106.190
--webserver_port=0
--builtin_ntp_servers=127.0.106.148:44957
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.0.106.190:44033
--catalog_manager_wait_for_new_tablets_to_elect_leader=false with env {}
W20250905 08:22:48.984436  1301 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:22:48.984884  1301 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250905 08:22:48.985222  1301 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:22:49.012791  1301 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250905 08:22:49.013049  1301 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250905 08:22:49.013247  1301 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250905 08:22:49.013438  1301 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250905 08:22:49.044647  1301 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:44957
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
--catalog_manager_wait_for_new_tablets_to_elect_leader=false
--ipki_ca_key_size=768
--master_addresses=127.0.106.190:44033
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.106.190:44033
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.0.106.190
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true

Master server version:
kudu 1.19.0-SNAPSHOT
revision e2e116a9cb215058aad161061214d11c617cedac
build type FASTDEBUG
built by None at 05 Sep 2025 08:06:49 UTC on 24a791456cd2
build id 7909
TSAN enabled
I20250905 08:22:49.045714  1301 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250905 08:22:49.047092  1301 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250905 08:22:49.057070  1307 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:22:49.058393  1308 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:22:50.171680  1309 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection time-out
W20250905 08:22:50.174513  1310 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:22:50.176247  1301 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.120s	user 0.422s	sys 0.698s
W20250905 08:22:50.176513  1301 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.120s	user 0.422s	sys 0.698s
I20250905 08:22:50.176717  1301 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250905 08:22:50.177744  1301 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250905 08:22:50.180131  1301 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250905 08:22:50.181444  1301 hybrid_clock.cc:648] HybridClock initialized: now 1757060570181399 us; error 52 us; skew 500 ppm
I20250905 08:22:50.182142  1301 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250905 08:22:50.189267  1301 webserver.cc:480] Webserver started at http://127.0.106.190:40737/ using document root <none> and password file <none>
I20250905 08:22:50.190171  1301 fs_manager.cc:362] Metadata directory not provided
I20250905 08:22:50.190367  1301 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250905 08:22:50.190802  1301 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250905 08:22:50.194998  1301 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "6f975179a8b84f1584b23b6b62586ad0"
format_stamp: "Formatted at 2025-09-05 08:22:50 on dist-test-slave-0x95"
I20250905 08:22:50.196051  1301 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "6f975179a8b84f1584b23b6b62586ad0"
format_stamp: "Formatted at 2025-09-05 08:22:50 on dist-test-slave-0x95"
I20250905 08:22:50.203440  1301 fs_manager.cc:696] Time spent creating directory manager: real 0.007s	user 0.005s	sys 0.001s
I20250905 08:22:50.209133  1317 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:22:50.210045  1301 fs_manager.cc:730] Time spent opening block manager: real 0.004s	user 0.000s	sys 0.005s
I20250905 08:22:50.210331  1301 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
uuid: "6f975179a8b84f1584b23b6b62586ad0"
format_stamp: "Formatted at 2025-09-05 08:22:50 on dist-test-slave-0x95"
I20250905 08:22:50.210588  1301 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250905 08:22:50.288924  1301 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250905 08:22:50.290259  1301 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250905 08:22:50.290652  1301 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250905 08:22:50.353907  1301 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.190:44033
I20250905 08:22:50.353986  1368 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.190:44033 every 8 connection(s)
I20250905 08:22:50.356465  1301 server_base.cc:1179] Dumped server information to /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250905 08:22:50.361533  1369 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250905 08:22:50.363757   426 external_mini_cluster.cc:1428] Started /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu as pid 1301
I20250905 08:22:50.364168   426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250905 08:22:50.382931  1369 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 6f975179a8b84f1584b23b6b62586ad0: Bootstrap starting.
I20250905 08:22:50.388311  1369 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 6f975179a8b84f1584b23b6b62586ad0: Neither blocks nor log segments found. Creating new log.
I20250905 08:22:50.389892  1369 log.cc:826] T 00000000000000000000000000000000 P 6f975179a8b84f1584b23b6b62586ad0: Log is configured to *not* fsync() on all Append() calls
I20250905 08:22:50.393663  1369 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 6f975179a8b84f1584b23b6b62586ad0: No bootstrap required, opened a new log
I20250905 08:22:50.409794  1369 raft_consensus.cc:357] T 00000000000000000000000000000000 P 6f975179a8b84f1584b23b6b62586ad0 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "6f975179a8b84f1584b23b6b62586ad0" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 44033 } }
I20250905 08:22:50.410332  1369 raft_consensus.cc:383] T 00000000000000000000000000000000 P 6f975179a8b84f1584b23b6b62586ad0 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250905 08:22:50.410516  1369 raft_consensus.cc:738] T 00000000000000000000000000000000 P 6f975179a8b84f1584b23b6b62586ad0 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 6f975179a8b84f1584b23b6b62586ad0, State: Initialized, Role: FOLLOWER
I20250905 08:22:50.411083  1369 consensus_queue.cc:260] T 00000000000000000000000000000000 P 6f975179a8b84f1584b23b6b62586ad0 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "6f975179a8b84f1584b23b6b62586ad0" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 44033 } }
I20250905 08:22:50.411523  1369 raft_consensus.cc:397] T 00000000000000000000000000000000 P 6f975179a8b84f1584b23b6b62586ad0 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250905 08:22:50.411736  1369 raft_consensus.cc:491] T 00000000000000000000000000000000 P 6f975179a8b84f1584b23b6b62586ad0 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250905 08:22:50.412015  1369 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 6f975179a8b84f1584b23b6b62586ad0 [term 0 FOLLOWER]: Advancing to term 1
I20250905 08:22:50.415531  1369 raft_consensus.cc:513] T 00000000000000000000000000000000 P 6f975179a8b84f1584b23b6b62586ad0 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "6f975179a8b84f1584b23b6b62586ad0" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 44033 } }
I20250905 08:22:50.416141  1369 leader_election.cc:304] T 00000000000000000000000000000000 P 6f975179a8b84f1584b23b6b62586ad0 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 6f975179a8b84f1584b23b6b62586ad0; no voters: 
I20250905 08:22:50.417757  1369 leader_election.cc:290] T 00000000000000000000000000000000 P 6f975179a8b84f1584b23b6b62586ad0 [CANDIDATE]: Term 1 election: Requested vote from peers 
I20250905 08:22:50.418474  1374 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 6f975179a8b84f1584b23b6b62586ad0 [term 1 FOLLOWER]: Leader election won for term 1
I20250905 08:22:50.420595  1374 raft_consensus.cc:695] T 00000000000000000000000000000000 P 6f975179a8b84f1584b23b6b62586ad0 [term 1 LEADER]: Becoming Leader. State: Replica: 6f975179a8b84f1584b23b6b62586ad0, State: Running, Role: LEADER
I20250905 08:22:50.421258  1374 consensus_queue.cc:237] T 00000000000000000000000000000000 P 6f975179a8b84f1584b23b6b62586ad0 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "6f975179a8b84f1584b23b6b62586ad0" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 44033 } }
I20250905 08:22:50.422204  1369 sys_catalog.cc:564] T 00000000000000000000000000000000 P 6f975179a8b84f1584b23b6b62586ad0 [sys.catalog]: configured and running, proceeding with master startup.
I20250905 08:22:50.427590  1376 sys_catalog.cc:455] T 00000000000000000000000000000000 P 6f975179a8b84f1584b23b6b62586ad0 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 6f975179a8b84f1584b23b6b62586ad0. Latest consensus state: current_term: 1 leader_uuid: "6f975179a8b84f1584b23b6b62586ad0" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "6f975179a8b84f1584b23b6b62586ad0" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 44033 } } }
I20250905 08:22:50.428166  1376 sys_catalog.cc:458] T 00000000000000000000000000000000 P 6f975179a8b84f1584b23b6b62586ad0 [sys.catalog]: This master's current role is: LEADER
I20250905 08:22:50.427990  1375 sys_catalog.cc:455] T 00000000000000000000000000000000 P 6f975179a8b84f1584b23b6b62586ad0 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "6f975179a8b84f1584b23b6b62586ad0" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "6f975179a8b84f1584b23b6b62586ad0" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 44033 } } }
I20250905 08:22:50.428675  1375 sys_catalog.cc:458] T 00000000000000000000000000000000 P 6f975179a8b84f1584b23b6b62586ad0 [sys.catalog]: This master's current role is: LEADER
I20250905 08:22:50.434722  1382 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250905 08:22:50.445780  1382 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250905 08:22:50.460654  1382 catalog_manager.cc:1349] Generated new cluster ID: 12da59248f2042cba2bfab0142d7c20a
I20250905 08:22:50.460887  1382 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250905 08:22:50.496910  1382 catalog_manager.cc:1372] Generated new certificate authority record
I20250905 08:22:50.498355  1382 catalog_manager.cc:1506] Loading token signing keys...
I20250905 08:22:50.512439  1382 catalog_manager.cc:5955] T 00000000000000000000000000000000 P 6f975179a8b84f1584b23b6b62586ad0: Generated new TSK 0
I20250905 08:22:50.513222  1382 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250905 08:22:50.533156   426 external_mini_cluster.cc:1366] Running /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
/tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.129:0
--local_ip_for_outbound_sockets=127.0.106.129
--webserver_interface=127.0.106.129
--webserver_port=0
--tserver_master_addrs=127.0.106.190:44033
--builtin_ntp_servers=127.0.106.148:44957
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false with env {}
W20250905 08:22:50.805688  1393 flags.cc:425] Enabled unsafe flag: --enable_leader_failure_detection=false
W20250905 08:22:50.806515  1393 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:22:50.806778  1393 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250905 08:22:50.807243  1393 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:22:50.836172  1393 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250905 08:22:50.837018  1393 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.129
I20250905 08:22:50.868932  1393 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:44957
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.129:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.0.106.129
--webserver_port=0
--tserver_master_addrs=127.0.106.190:44033
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.129
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision e2e116a9cb215058aad161061214d11c617cedac
build type FASTDEBUG
built by None at 05 Sep 2025 08:06:49 UTC on 24a791456cd2
build id 7909
TSAN enabled
I20250905 08:22:50.870165  1393 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250905 08:22:50.871716  1393 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250905 08:22:50.883239  1399 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:22:52.286597  1398 debug-util.cc:398] Leaking SignalData structure 0x7b08000068a0 after lost signal to thread 1393
W20250905 08:22:52.535579  1398 kernel_stack_watchdog.cc:198] Thread 1393 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:642 for 399ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250905 08:22:50.884567  1400 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:22:52.536677  1393 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.652s	user 0.546s	sys 1.031s
W20250905 08:22:52.537184  1393 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.653s	user 0.546s	sys 1.031s
W20250905 08:22:52.538816  1402 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:22:52.541185  1401 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection timed out after 1655 milliseconds
I20250905 08:22:52.541225  1393 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250905 08:22:52.542680  1393 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250905 08:22:52.545185  1393 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250905 08:22:52.546665  1393 hybrid_clock.cc:648] HybridClock initialized: now 1757060572546616 us; error 40 us; skew 500 ppm
I20250905 08:22:52.547614  1393 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250905 08:22:52.554354  1393 webserver.cc:480] Webserver started at http://127.0.106.129:44035/ using document root <none> and password file <none>
I20250905 08:22:52.555469  1393 fs_manager.cc:362] Metadata directory not provided
I20250905 08:22:52.555706  1393 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250905 08:22:52.556246  1393 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250905 08:22:52.563496  1393 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "233b7197447b431b9c6accaf57a9501c"
format_stamp: "Formatted at 2025-09-05 08:22:52 on dist-test-slave-0x95"
I20250905 08:22:52.564713  1393 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "233b7197447b431b9c6accaf57a9501c"
format_stamp: "Formatted at 2025-09-05 08:22:52 on dist-test-slave-0x95"
W20250905 08:22:52.569351  1365 debug-util.cc:398] Leaking SignalData structure 0x7b080006f040 after lost signal to thread 1302
W20250905 08:22:52.570065  1365 debug-util.cc:398] Leaking SignalData structure 0x7b08000a2940 after lost signal to thread 1368
I20250905 08:22:52.571527  1393 fs_manager.cc:696] Time spent creating directory manager: real 0.006s	user 0.006s	sys 0.000s
I20250905 08:22:52.579046  1409 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:22:52.580317  1393 fs_manager.cc:730] Time spent opening block manager: real 0.004s	user 0.003s	sys 0.000s
I20250905 08:22:52.580680  1393 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "233b7197447b431b9c6accaf57a9501c"
format_stamp: "Formatted at 2025-09-05 08:22:52 on dist-test-slave-0x95"
I20250905 08:22:52.581094  1393 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250905 08:22:52.639593  1393 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250905 08:22:52.641577  1393 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250905 08:22:52.642088  1393 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250905 08:22:52.645413  1393 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250905 08:22:52.650871  1393 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250905 08:22:52.651129  1393 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:22:52.651402  1393 ts_tablet_manager.cc:610] Registered 0 tablets
I20250905 08:22:52.651605  1393 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:22:52.859692  1393 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.129:35601
I20250905 08:22:52.859717  1521 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.129:35601 every 8 connection(s)
I20250905 08:22:52.863759  1393 server_base.cc:1179] Dumped server information to /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250905 08:22:52.864467   426 external_mini_cluster.cc:1428] Started /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu as pid 1393
I20250905 08:22:52.864892   426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250905 08:22:52.871807   426 external_mini_cluster.cc:1366] Running /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
/tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.130:0
--local_ip_for_outbound_sockets=127.0.106.130
--webserver_interface=127.0.106.130
--webserver_port=0
--tserver_master_addrs=127.0.106.190:44033
--builtin_ntp_servers=127.0.106.148:44957
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false with env {}
I20250905 08:22:52.939610  1522 heartbeater.cc:344] Connected to a master server at 127.0.106.190:44033
I20250905 08:22:52.940013  1522 heartbeater.cc:461] Registering TS with master...
I20250905 08:22:52.940901  1522 heartbeater.cc:507] Master 127.0.106.190:44033 requested a full tablet report, sending...
I20250905 08:22:52.943455  1334 ts_manager.cc:194] Registered new tserver with Master: 233b7197447b431b9c6accaf57a9501c (127.0.106.129:35601)
I20250905 08:22:52.946105  1334 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.129:47535
W20250905 08:22:53.274602  1523 flags.cc:425] Enabled unsafe flag: --enable_leader_failure_detection=false
W20250905 08:22:53.275291  1523 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:22:53.275609  1523 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250905 08:22:53.276158  1523 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:22:53.308380  1523 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250905 08:22:53.309202  1523 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.130
I20250905 08:22:53.348098  1523 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:44957
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.130:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.0.106.130
--webserver_port=0
--tserver_master_addrs=127.0.106.190:44033
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.130
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision e2e116a9cb215058aad161061214d11c617cedac
build type FASTDEBUG
built by None at 05 Sep 2025 08:06:49 UTC on 24a791456cd2
build id 7909
TSAN enabled
I20250905 08:22:53.349409  1523 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250905 08:22:53.351142  1523 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250905 08:22:53.364331  1532 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250905 08:22:53.949728  1522 heartbeater.cc:499] Master 127.0.106.190:44033 was elected leader, sending a full tablet report...
W20250905 08:22:54.164999  1518 debug-util.cc:398] Leaking SignalData structure 0x7b08000ac060 after lost signal to thread 1394
W20250905 08:22:54.166085  1518 debug-util.cc:398] Leaking SignalData structure 0x7b08000acf40 after lost signal to thread 1521
W20250905 08:22:54.767473  1531 debug-util.cc:398] Leaking SignalData structure 0x7b08000068a0 after lost signal to thread 1523
W20250905 08:22:55.108587  1523 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.743s	user 0.507s	sys 1.111s
W20250905 08:22:55.110502  1523 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.745s	user 0.507s	sys 1.111s
W20250905 08:22:53.366492  1533 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:22:55.110919  1534 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1742 milliseconds
W20250905 08:22:55.111104  1535 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250905 08:22:55.111323  1523 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250905 08:22:55.116011  1523 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250905 08:22:55.119187  1523 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250905 08:22:55.122766  1523 hybrid_clock.cc:648] HybridClock initialized: now 1757060575122726 us; error 34 us; skew 500 ppm
I20250905 08:22:55.123584  1523 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250905 08:22:55.131878  1523 webserver.cc:480] Webserver started at http://127.0.106.130:33751/ using document root <none> and password file <none>
I20250905 08:22:55.133178  1523 fs_manager.cc:362] Metadata directory not provided
I20250905 08:22:55.133503  1523 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250905 08:22:55.134135  1523 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250905 08:22:55.138814  1523 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data/instance:
uuid: "37a8f659ffb749d295aba8801755cbb7"
format_stamp: "Formatted at 2025-09-05 08:22:55 on dist-test-slave-0x95"
I20250905 08:22:55.139778  1523 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal/instance:
uuid: "37a8f659ffb749d295aba8801755cbb7"
format_stamp: "Formatted at 2025-09-05 08:22:55 on dist-test-slave-0x95"
I20250905 08:22:55.149356  1523 fs_manager.cc:696] Time spent creating directory manager: real 0.009s	user 0.008s	sys 0.000s
I20250905 08:22:55.157734  1542 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:22:55.159096  1523 fs_manager.cc:730] Time spent opening block manager: real 0.005s	user 0.002s	sys 0.003s
I20250905 08:22:55.159524  1523 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "37a8f659ffb749d295aba8801755cbb7"
format_stamp: "Formatted at 2025-09-05 08:22:55 on dist-test-slave-0x95"
I20250905 08:22:55.160116  1523 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250905 08:22:55.211239  1523 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250905 08:22:55.212633  1523 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250905 08:22:55.213038  1523 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250905 08:22:55.215916  1523 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250905 08:22:55.220204  1523 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250905 08:22:55.220377  1523 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:22:55.220621  1523 ts_tablet_manager.cc:610] Registered 0 tablets
I20250905 08:22:55.220782  1523 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:22:55.408252  1523 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.130:39751
I20250905 08:22:55.408344  1654 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.130:39751 every 8 connection(s)
I20250905 08:22:55.410732  1523 server_base.cc:1179] Dumped server information to /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250905 08:22:55.411209   426 external_mini_cluster.cc:1428] Started /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu as pid 1523
I20250905 08:22:55.411626   426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal/instance
I20250905 08:22:55.420917   426 external_mini_cluster.cc:1366] Running /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
/tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.131:0
--local_ip_for_outbound_sockets=127.0.106.131
--webserver_interface=127.0.106.131
--webserver_port=0
--tserver_master_addrs=127.0.106.190:44033
--builtin_ntp_servers=127.0.106.148:44957
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false with env {}
I20250905 08:22:55.487989  1655 heartbeater.cc:344] Connected to a master server at 127.0.106.190:44033
I20250905 08:22:55.488396  1655 heartbeater.cc:461] Registering TS with master...
I20250905 08:22:55.489223  1655 heartbeater.cc:507] Master 127.0.106.190:44033 requested a full tablet report, sending...
I20250905 08:22:55.491168  1334 ts_manager.cc:194] Registered new tserver with Master: 37a8f659ffb749d295aba8801755cbb7 (127.0.106.130:39751)
I20250905 08:22:55.492762  1334 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.130:39037
W20250905 08:22:55.797204  1656 flags.cc:425] Enabled unsafe flag: --enable_leader_failure_detection=false
W20250905 08:22:55.797755  1656 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:22:55.798012  1656 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250905 08:22:55.798519  1656 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:22:55.827394  1656 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250905 08:22:55.828217  1656 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.131
I20250905 08:22:55.861254  1656 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:44957
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.131:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.0.106.131
--webserver_port=0
--tserver_master_addrs=127.0.106.190:44033
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.131
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision e2e116a9cb215058aad161061214d11c617cedac
build type FASTDEBUG
built by None at 05 Sep 2025 08:06:49 UTC on 24a791456cd2
build id 7909
TSAN enabled
I20250905 08:22:55.862444  1656 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250905 08:22:55.864005  1656 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250905 08:22:55.875270  1665 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250905 08:22:56.501199  1655 heartbeater.cc:499] Master 127.0.106.190:44033 was elected leader, sending a full tablet report...
W20250905 08:22:57.279287  1664 debug-util.cc:398] Leaking SignalData structure 0x7b08000068a0 after lost signal to thread 1656
W20250905 08:22:57.518121  1664 kernel_stack_watchdog.cc:198] Thread 1656 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:642 for 400ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250905 08:22:55.876245  1666 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:22:57.518715  1656 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.642s	user 0.593s	sys 1.000s
W20250905 08:22:57.519111  1656 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.643s	user 0.593s	sys 1.000s
W20250905 08:22:57.520931  1668 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:22:57.523252  1667 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1642 milliseconds
I20250905 08:22:57.523248  1656 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250905 08:22:57.524581  1656 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250905 08:22:57.526424  1656 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250905 08:22:57.527727  1656 hybrid_clock.cc:648] HybridClock initialized: now 1757060577527686 us; error 39 us; skew 500 ppm
I20250905 08:22:57.528481  1656 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250905 08:22:57.534034  1656 webserver.cc:480] Webserver started at http://127.0.106.131:46689/ using document root <none> and password file <none>
I20250905 08:22:57.534922  1656 fs_manager.cc:362] Metadata directory not provided
I20250905 08:22:57.535111  1656 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250905 08:22:57.535521  1656 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250905 08:22:57.539647  1656 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data/instance:
uuid: "b346455a86e34770b790a74bedca168d"
format_stamp: "Formatted at 2025-09-05 08:22:57 on dist-test-slave-0x95"
I20250905 08:22:57.540621  1656 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal/instance:
uuid: "b346455a86e34770b790a74bedca168d"
format_stamp: "Formatted at 2025-09-05 08:22:57 on dist-test-slave-0x95"
I20250905 08:22:57.546900  1656 fs_manager.cc:696] Time spent creating directory manager: real 0.006s	user 0.003s	sys 0.003s
I20250905 08:22:57.551996  1675 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:22:57.552888  1656 fs_manager.cc:730] Time spent opening block manager: real 0.003s	user 0.003s	sys 0.001s
I20250905 08:22:57.553180  1656 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "b346455a86e34770b790a74bedca168d"
format_stamp: "Formatted at 2025-09-05 08:22:57 on dist-test-slave-0x95"
I20250905 08:22:57.553486  1656 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250905 08:22:57.602835  1656 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250905 08:22:57.604108  1656 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250905 08:22:57.604516  1656 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250905 08:22:57.606822  1656 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250905 08:22:57.610564  1656 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250905 08:22:57.610733  1656 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:22:57.610975  1656 ts_tablet_manager.cc:610] Registered 0 tablets
I20250905 08:22:57.611107  1656 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:22:57.740842  1656 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.131:45771
I20250905 08:22:57.740931  1787 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.131:45771 every 8 connection(s)
I20250905 08:22:57.743140  1656 server_base.cc:1179] Dumped server information to /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250905 08:22:57.749071   426 external_mini_cluster.cc:1428] Started /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu as pid 1656
I20250905 08:22:57.749521   426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal/instance
I20250905 08:22:57.762816  1788 heartbeater.cc:344] Connected to a master server at 127.0.106.190:44033
I20250905 08:22:57.763222  1788 heartbeater.cc:461] Registering TS with master...
I20250905 08:22:57.764150  1788 heartbeater.cc:507] Master 127.0.106.190:44033 requested a full tablet report, sending...
I20250905 08:22:57.766350  1334 ts_manager.cc:194] Registered new tserver with Master: b346455a86e34770b790a74bedca168d (127.0.106.131:45771)
I20250905 08:22:57.767567  1334 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.131:37367
I20250905 08:22:57.768857   426 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250905 08:22:57.798576  1333 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:44754:
name: "TestTable"
schema {
  columns {
    name: "key"
    type: INT32
    is_key: true
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "int_val"
    type: INT32
    is_key: false
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "string_val"
    type: STRING
    is_key: false
    is_nullable: true
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
  range_schema {
    columns {
      name: "key"
    }
  }
}
owner: "alice"
W20250905 08:22:57.816751  1333 catalog_manager.cc:6944] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table TestTable in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250905 08:22:57.862408  1457 tablet_service.cc:1468] Processing CreateTablet for tablet c70abb14f6bd465999abcb18e3185ad2 (DEFAULT_TABLE table=TestTable [id=70ce49ca4e2a458cb2b3bbb6ccfc5b4a]), partition=RANGE (key) PARTITION UNBOUNDED
I20250905 08:22:57.862481  1590 tablet_service.cc:1468] Processing CreateTablet for tablet c70abb14f6bd465999abcb18e3185ad2 (DEFAULT_TABLE table=TestTable [id=70ce49ca4e2a458cb2b3bbb6ccfc5b4a]), partition=RANGE (key) PARTITION UNBOUNDED
I20250905 08:22:57.864503  1590 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet c70abb14f6bd465999abcb18e3185ad2. 1 dirs total, 0 dirs full, 0 dirs failed
I20250905 08:22:57.864555  1457 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet c70abb14f6bd465999abcb18e3185ad2. 1 dirs total, 0 dirs full, 0 dirs failed
I20250905 08:22:57.871942  1723 tablet_service.cc:1468] Processing CreateTablet for tablet c70abb14f6bd465999abcb18e3185ad2 (DEFAULT_TABLE table=TestTable [id=70ce49ca4e2a458cb2b3bbb6ccfc5b4a]), partition=RANGE (key) PARTITION UNBOUNDED
I20250905 08:22:57.873797  1723 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet c70abb14f6bd465999abcb18e3185ad2. 1 dirs total, 0 dirs full, 0 dirs failed
I20250905 08:22:57.885180  1807 tablet_bootstrap.cc:492] T c70abb14f6bd465999abcb18e3185ad2 P 233b7197447b431b9c6accaf57a9501c: Bootstrap starting.
I20250905 08:22:57.890461  1808 tablet_bootstrap.cc:492] T c70abb14f6bd465999abcb18e3185ad2 P 37a8f659ffb749d295aba8801755cbb7: Bootstrap starting.
I20250905 08:22:57.892385  1807 tablet_bootstrap.cc:654] T c70abb14f6bd465999abcb18e3185ad2 P 233b7197447b431b9c6accaf57a9501c: Neither blocks nor log segments found. Creating new log.
I20250905 08:22:57.894783  1807 log.cc:826] T c70abb14f6bd465999abcb18e3185ad2 P 233b7197447b431b9c6accaf57a9501c: Log is configured to *not* fsync() on all Append() calls
I20250905 08:22:57.898218  1809 tablet_bootstrap.cc:492] T c70abb14f6bd465999abcb18e3185ad2 P b346455a86e34770b790a74bedca168d: Bootstrap starting.
I20250905 08:22:57.899204  1808 tablet_bootstrap.cc:654] T c70abb14f6bd465999abcb18e3185ad2 P 37a8f659ffb749d295aba8801755cbb7: Neither blocks nor log segments found. Creating new log.
I20250905 08:22:57.902151  1808 log.cc:826] T c70abb14f6bd465999abcb18e3185ad2 P 37a8f659ffb749d295aba8801755cbb7: Log is configured to *not* fsync() on all Append() calls
I20250905 08:22:57.903913  1809 tablet_bootstrap.cc:654] T c70abb14f6bd465999abcb18e3185ad2 P b346455a86e34770b790a74bedca168d: Neither blocks nor log segments found. Creating new log.
I20250905 08:22:57.905454  1809 log.cc:826] T c70abb14f6bd465999abcb18e3185ad2 P b346455a86e34770b790a74bedca168d: Log is configured to *not* fsync() on all Append() calls
I20250905 08:22:57.906275  1807 tablet_bootstrap.cc:492] T c70abb14f6bd465999abcb18e3185ad2 P 233b7197447b431b9c6accaf57a9501c: No bootstrap required, opened a new log
I20250905 08:22:57.906662  1807 ts_tablet_manager.cc:1397] T c70abb14f6bd465999abcb18e3185ad2 P 233b7197447b431b9c6accaf57a9501c: Time spent bootstrapping tablet: real 0.022s	user 0.014s	sys 0.006s
I20250905 08:22:57.909054  1808 tablet_bootstrap.cc:492] T c70abb14f6bd465999abcb18e3185ad2 P 37a8f659ffb749d295aba8801755cbb7: No bootstrap required, opened a new log
I20250905 08:22:57.909518  1808 ts_tablet_manager.cc:1397] T c70abb14f6bd465999abcb18e3185ad2 P 37a8f659ffb749d295aba8801755cbb7: Time spent bootstrapping tablet: real 0.020s	user 0.016s	sys 0.000s
I20250905 08:22:57.910471  1809 tablet_bootstrap.cc:492] T c70abb14f6bd465999abcb18e3185ad2 P b346455a86e34770b790a74bedca168d: No bootstrap required, opened a new log
I20250905 08:22:57.910806  1809 ts_tablet_manager.cc:1397] T c70abb14f6bd465999abcb18e3185ad2 P b346455a86e34770b790a74bedca168d: Time spent bootstrapping tablet: real 0.013s	user 0.007s	sys 0.004s
I20250905 08:22:57.926578  1807 raft_consensus.cc:357] T c70abb14f6bd465999abcb18e3185ad2 P 233b7197447b431b9c6accaf57a9501c [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "233b7197447b431b9c6accaf57a9501c" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 35601 } } peers { permanent_uuid: "37a8f659ffb749d295aba8801755cbb7" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 39751 } } peers { permanent_uuid: "b346455a86e34770b790a74bedca168d" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 45771 } }
I20250905 08:22:57.927182  1807 raft_consensus.cc:738] T c70abb14f6bd465999abcb18e3185ad2 P 233b7197447b431b9c6accaf57a9501c [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 233b7197447b431b9c6accaf57a9501c, State: Initialized, Role: FOLLOWER
I20250905 08:22:57.927695  1807 consensus_queue.cc:260] T c70abb14f6bd465999abcb18e3185ad2 P 233b7197447b431b9c6accaf57a9501c [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "233b7197447b431b9c6accaf57a9501c" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 35601 } } peers { permanent_uuid: "37a8f659ffb749d295aba8801755cbb7" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 39751 } } peers { permanent_uuid: "b346455a86e34770b790a74bedca168d" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 45771 } }
I20250905 08:22:57.934319  1807 ts_tablet_manager.cc:1428] T c70abb14f6bd465999abcb18e3185ad2 P 233b7197447b431b9c6accaf57a9501c: Time spent starting tablet: real 0.027s	user 0.019s	sys 0.007s
I20250905 08:22:57.936172  1808 raft_consensus.cc:357] T c70abb14f6bd465999abcb18e3185ad2 P 37a8f659ffb749d295aba8801755cbb7 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "233b7197447b431b9c6accaf57a9501c" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 35601 } } peers { permanent_uuid: "37a8f659ffb749d295aba8801755cbb7" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 39751 } } peers { permanent_uuid: "b346455a86e34770b790a74bedca168d" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 45771 } }
I20250905 08:22:57.936807  1808 raft_consensus.cc:738] T c70abb14f6bd465999abcb18e3185ad2 P 37a8f659ffb749d295aba8801755cbb7 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 37a8f659ffb749d295aba8801755cbb7, State: Initialized, Role: FOLLOWER
I20250905 08:22:57.936869  1809 raft_consensus.cc:357] T c70abb14f6bd465999abcb18e3185ad2 P b346455a86e34770b790a74bedca168d [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "233b7197447b431b9c6accaf57a9501c" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 35601 } } peers { permanent_uuid: "37a8f659ffb749d295aba8801755cbb7" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 39751 } } peers { permanent_uuid: "b346455a86e34770b790a74bedca168d" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 45771 } }
I20250905 08:22:57.937716  1809 raft_consensus.cc:738] T c70abb14f6bd465999abcb18e3185ad2 P b346455a86e34770b790a74bedca168d [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: b346455a86e34770b790a74bedca168d, State: Initialized, Role: FOLLOWER
I20250905 08:22:57.937500  1808 consensus_queue.cc:260] T c70abb14f6bd465999abcb18e3185ad2 P 37a8f659ffb749d295aba8801755cbb7 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "233b7197447b431b9c6accaf57a9501c" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 35601 } } peers { permanent_uuid: "37a8f659ffb749d295aba8801755cbb7" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 39751 } } peers { permanent_uuid: "b346455a86e34770b790a74bedca168d" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 45771 } }
I20250905 08:22:57.938386  1809 consensus_queue.cc:260] T c70abb14f6bd465999abcb18e3185ad2 P b346455a86e34770b790a74bedca168d [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "233b7197447b431b9c6accaf57a9501c" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 35601 } } peers { permanent_uuid: "37a8f659ffb749d295aba8801755cbb7" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 39751 } } peers { permanent_uuid: "b346455a86e34770b790a74bedca168d" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 45771 } }
I20250905 08:22:57.941237  1808 ts_tablet_manager.cc:1428] T c70abb14f6bd465999abcb18e3185ad2 P 37a8f659ffb749d295aba8801755cbb7: Time spent starting tablet: real 0.031s	user 0.027s	sys 0.004s
I20250905 08:22:57.942929  1788 heartbeater.cc:499] Master 127.0.106.190:44033 was elected leader, sending a full tablet report...
I20250905 08:22:57.944178  1809 ts_tablet_manager.cc:1428] T c70abb14f6bd465999abcb18e3185ad2 P b346455a86e34770b790a74bedca168d: Time spent starting tablet: real 0.033s	user 0.028s	sys 0.004s
I20250905 08:22:57.960744   426 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250905 08:22:57.963629   426 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 233b7197447b431b9c6accaf57a9501c to finish bootstrapping
W20250905 08:22:57.972726  1657 tablet.cc:2378] T c70abb14f6bd465999abcb18e3185ad2 P 37a8f659ffb749d295aba8801755cbb7: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250905 08:22:57.975150   426 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 37a8f659ffb749d295aba8801755cbb7 to finish bootstrapping
I20250905 08:22:57.983785   426 ts_itest-base.cc:246] Waiting for 1 tablets on tserver b346455a86e34770b790a74bedca168d to finish bootstrapping
W20250905 08:22:57.997357  1789 tablet.cc:2378] T c70abb14f6bd465999abcb18e3185ad2 P b346455a86e34770b790a74bedca168d: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250905 08:22:58.020767  1477 tablet_service.cc:1940] Received Run Leader Election RPC: tablet_id: "c70abb14f6bd465999abcb18e3185ad2"
dest_uuid: "233b7197447b431b9c6accaf57a9501c"
 from {username='slave'} at 127.0.0.1:60626
I20250905 08:22:58.021253  1477 raft_consensus.cc:491] T c70abb14f6bd465999abcb18e3185ad2 P 233b7197447b431b9c6accaf57a9501c [term 0 FOLLOWER]: Starting forced leader election (received explicit request)
I20250905 08:22:58.021545  1477 raft_consensus.cc:3058] T c70abb14f6bd465999abcb18e3185ad2 P 233b7197447b431b9c6accaf57a9501c [term 0 FOLLOWER]: Advancing to term 1
I20250905 08:22:58.025748  1477 raft_consensus.cc:513] T c70abb14f6bd465999abcb18e3185ad2 P 233b7197447b431b9c6accaf57a9501c [term 1 FOLLOWER]: Starting forced leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "233b7197447b431b9c6accaf57a9501c" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 35601 } } peers { permanent_uuid: "37a8f659ffb749d295aba8801755cbb7" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 39751 } } peers { permanent_uuid: "b346455a86e34770b790a74bedca168d" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 45771 } }
I20250905 08:22:58.027694  1477 leader_election.cc:290] T c70abb14f6bd465999abcb18e3185ad2 P 233b7197447b431b9c6accaf57a9501c [CANDIDATE]: Term 1 election: Requested vote from peers 37a8f659ffb749d295aba8801755cbb7 (127.0.106.130:39751), b346455a86e34770b790a74bedca168d (127.0.106.131:45771)
I20250905 08:22:58.036741   426 cluster_itest_util.cc:257] Not converged past 1 yet: 0.0 0.0 0.0
I20250905 08:22:58.038872  1610 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "c70abb14f6bd465999abcb18e3185ad2" candidate_uuid: "233b7197447b431b9c6accaf57a9501c" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: true dest_uuid: "37a8f659ffb749d295aba8801755cbb7"
I20250905 08:22:58.039433  1610 raft_consensus.cc:3058] T c70abb14f6bd465999abcb18e3185ad2 P 37a8f659ffb749d295aba8801755cbb7 [term 0 FOLLOWER]: Advancing to term 1
I20250905 08:22:58.040570  1743 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "c70abb14f6bd465999abcb18e3185ad2" candidate_uuid: "233b7197447b431b9c6accaf57a9501c" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: true dest_uuid: "b346455a86e34770b790a74bedca168d"
I20250905 08:22:58.041065  1743 raft_consensus.cc:3058] T c70abb14f6bd465999abcb18e3185ad2 P b346455a86e34770b790a74bedca168d [term 0 FOLLOWER]: Advancing to term 1
I20250905 08:22:58.043784  1610 raft_consensus.cc:2466] T c70abb14f6bd465999abcb18e3185ad2 P 37a8f659ffb749d295aba8801755cbb7 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 233b7197447b431b9c6accaf57a9501c in term 1.
I20250905 08:22:58.044719  1410 leader_election.cc:304] T c70abb14f6bd465999abcb18e3185ad2 P 233b7197447b431b9c6accaf57a9501c [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 233b7197447b431b9c6accaf57a9501c, 37a8f659ffb749d295aba8801755cbb7; no voters: 
I20250905 08:22:58.045271  1743 raft_consensus.cc:2466] T c70abb14f6bd465999abcb18e3185ad2 P b346455a86e34770b790a74bedca168d [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 233b7197447b431b9c6accaf57a9501c in term 1.
I20250905 08:22:58.045436  1813 raft_consensus.cc:2802] T c70abb14f6bd465999abcb18e3185ad2 P 233b7197447b431b9c6accaf57a9501c [term 1 FOLLOWER]: Leader election won for term 1
I20250905 08:22:58.046908  1813 raft_consensus.cc:695] T c70abb14f6bd465999abcb18e3185ad2 P 233b7197447b431b9c6accaf57a9501c [term 1 LEADER]: Becoming Leader. State: Replica: 233b7197447b431b9c6accaf57a9501c, State: Running, Role: LEADER
I20250905 08:22:58.047772  1813 consensus_queue.cc:237] T c70abb14f6bd465999abcb18e3185ad2 P 233b7197447b431b9c6accaf57a9501c [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "233b7197447b431b9c6accaf57a9501c" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 35601 } } peers { permanent_uuid: "37a8f659ffb749d295aba8801755cbb7" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 39751 } } peers { permanent_uuid: "b346455a86e34770b790a74bedca168d" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 45771 } }
I20250905 08:22:58.055668  1332 catalog_manager.cc:5582] T c70abb14f6bd465999abcb18e3185ad2 P 233b7197447b431b9c6accaf57a9501c reported cstate change: term changed from 0 to 1, leader changed from <none> to 233b7197447b431b9c6accaf57a9501c (127.0.106.129). New cstate: current_term: 1 leader_uuid: "233b7197447b431b9c6accaf57a9501c" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "233b7197447b431b9c6accaf57a9501c" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 35601 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "37a8f659ffb749d295aba8801755cbb7" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 39751 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "b346455a86e34770b790a74bedca168d" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 45771 } health_report { overall_health: UNKNOWN } } }
I20250905 08:22:58.142809   426 cluster_itest_util.cc:257] Not converged past 1 yet: 1.1 0.0 0.0
W20250905 08:22:58.179749  1524 tablet.cc:2378] T c70abb14f6bd465999abcb18e3185ad2 P 233b7197447b431b9c6accaf57a9501c: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250905 08:22:58.347661   426 cluster_itest_util.cc:257] Not converged past 1 yet: 1.1 0.0 0.0
I20250905 08:22:58.594568  1823 consensus_queue.cc:1035] T c70abb14f6bd465999abcb18e3185ad2 P 233b7197447b431b9c6accaf57a9501c [LEADER]: Connected to new peer: Peer: permanent_uuid: "37a8f659ffb749d295aba8801755cbb7" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 39751 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250905 08:22:58.611377  1829 consensus_queue.cc:1035] T c70abb14f6bd465999abcb18e3185ad2 P 233b7197447b431b9c6accaf57a9501c [LEADER]: Connected to new peer: Peer: permanent_uuid: "b346455a86e34770b790a74bedca168d" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 45771 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250905 08:23:00.254326  1477 tablet_service.cc:1968] Received LeaderStepDown RPC: tablet_id: "c70abb14f6bd465999abcb18e3185ad2"
dest_uuid: "233b7197447b431b9c6accaf57a9501c"
mode: GRACEFUL
 from {username='slave'} at 127.0.0.1:60636
I20250905 08:23:00.254879  1477 raft_consensus.cc:604] T c70abb14f6bd465999abcb18e3185ad2 P 233b7197447b431b9c6accaf57a9501c [term 1 LEADER]: Received request to transfer leadership
I20250905 08:23:00.615684  1829 raft_consensus.cc:991] T c70abb14f6bd465999abcb18e3185ad2 P 233b7197447b431b9c6accaf57a9501c: : Instructing follower b346455a86e34770b790a74bedca168d to start an election
I20250905 08:23:00.616091  1853 raft_consensus.cc:1079] T c70abb14f6bd465999abcb18e3185ad2 P 233b7197447b431b9c6accaf57a9501c [term 1 LEADER]: Signalling peer b346455a86e34770b790a74bedca168d to start an election
I20250905 08:23:00.617246  1743 tablet_service.cc:1940] Received Run Leader Election RPC: tablet_id: "c70abb14f6bd465999abcb18e3185ad2"
dest_uuid: "b346455a86e34770b790a74bedca168d"
 from {username='slave'} at 127.0.106.129:48857
I20250905 08:23:00.617748  1743 raft_consensus.cc:491] T c70abb14f6bd465999abcb18e3185ad2 P b346455a86e34770b790a74bedca168d [term 1 FOLLOWER]: Starting forced leader election (received explicit request)
I20250905 08:23:00.617993  1743 raft_consensus.cc:3058] T c70abb14f6bd465999abcb18e3185ad2 P b346455a86e34770b790a74bedca168d [term 1 FOLLOWER]: Advancing to term 2
I20250905 08:23:00.622206  1743 raft_consensus.cc:513] T c70abb14f6bd465999abcb18e3185ad2 P b346455a86e34770b790a74bedca168d [term 2 FOLLOWER]: Starting forced leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "233b7197447b431b9c6accaf57a9501c" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 35601 } } peers { permanent_uuid: "37a8f659ffb749d295aba8801755cbb7" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 39751 } } peers { permanent_uuid: "b346455a86e34770b790a74bedca168d" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 45771 } }
I20250905 08:23:00.624125  1743 leader_election.cc:290] T c70abb14f6bd465999abcb18e3185ad2 P b346455a86e34770b790a74bedca168d [CANDIDATE]: Term 2 election: Requested vote from peers 233b7197447b431b9c6accaf57a9501c (127.0.106.129:35601), 37a8f659ffb749d295aba8801755cbb7 (127.0.106.130:39751)
I20250905 08:23:00.633667  1477 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "c70abb14f6bd465999abcb18e3185ad2" candidate_uuid: "b346455a86e34770b790a74bedca168d" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: true dest_uuid: "233b7197447b431b9c6accaf57a9501c"
I20250905 08:23:00.634289  1477 raft_consensus.cc:3053] T c70abb14f6bd465999abcb18e3185ad2 P 233b7197447b431b9c6accaf57a9501c [term 1 LEADER]: Stepping down as leader of term 1
I20250905 08:23:00.634683  1477 raft_consensus.cc:738] T c70abb14f6bd465999abcb18e3185ad2 P 233b7197447b431b9c6accaf57a9501c [term 1 LEADER]: Becoming Follower/Learner. State: Replica: 233b7197447b431b9c6accaf57a9501c, State: Running, Role: LEADER
I20250905 08:23:00.635353  1610 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "c70abb14f6bd465999abcb18e3185ad2" candidate_uuid: "b346455a86e34770b790a74bedca168d" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: true dest_uuid: "37a8f659ffb749d295aba8801755cbb7"
I20250905 08:23:00.635210  1477 consensus_queue.cc:260] T c70abb14f6bd465999abcb18e3185ad2 P 233b7197447b431b9c6accaf57a9501c [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 1, Committed index: 1, Last appended: 1.1, Last appended by leader: 1, Current term: 1, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "233b7197447b431b9c6accaf57a9501c" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 35601 } } peers { permanent_uuid: "37a8f659ffb749d295aba8801755cbb7" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 39751 } } peers { permanent_uuid: "b346455a86e34770b790a74bedca168d" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 45771 } }
I20250905 08:23:00.635895  1610 raft_consensus.cc:3058] T c70abb14f6bd465999abcb18e3185ad2 P 37a8f659ffb749d295aba8801755cbb7 [term 1 FOLLOWER]: Advancing to term 2
I20250905 08:23:00.636451  1477 raft_consensus.cc:3058] T c70abb14f6bd465999abcb18e3185ad2 P 233b7197447b431b9c6accaf57a9501c [term 1 FOLLOWER]: Advancing to term 2
I20250905 08:23:00.641924  1610 raft_consensus.cc:2466] T c70abb14f6bd465999abcb18e3185ad2 P 37a8f659ffb749d295aba8801755cbb7 [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate b346455a86e34770b790a74bedca168d in term 2.
I20250905 08:23:00.642479  1477 raft_consensus.cc:2466] T c70abb14f6bd465999abcb18e3185ad2 P 233b7197447b431b9c6accaf57a9501c [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate b346455a86e34770b790a74bedca168d in term 2.
I20250905 08:23:00.643080  1676 leader_election.cc:304] T c70abb14f6bd465999abcb18e3185ad2 P b346455a86e34770b790a74bedca168d [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 37a8f659ffb749d295aba8801755cbb7, b346455a86e34770b790a74bedca168d; no voters: 
I20250905 08:23:00.645414  1857 raft_consensus.cc:2802] T c70abb14f6bd465999abcb18e3185ad2 P b346455a86e34770b790a74bedca168d [term 2 FOLLOWER]: Leader election won for term 2
I20250905 08:23:00.646672  1857 raft_consensus.cc:695] T c70abb14f6bd465999abcb18e3185ad2 P b346455a86e34770b790a74bedca168d [term 2 LEADER]: Becoming Leader. State: Replica: b346455a86e34770b790a74bedca168d, State: Running, Role: LEADER
I20250905 08:23:00.647310  1857 consensus_queue.cc:237] T c70abb14f6bd465999abcb18e3185ad2 P b346455a86e34770b790a74bedca168d [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 1, Committed index: 1, Last appended: 1.1, Last appended by leader: 1, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "233b7197447b431b9c6accaf57a9501c" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 35601 } } peers { permanent_uuid: "37a8f659ffb749d295aba8801755cbb7" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 39751 } } peers { permanent_uuid: "b346455a86e34770b790a74bedca168d" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 45771 } }
I20250905 08:23:00.654103  1331 catalog_manager.cc:5582] T c70abb14f6bd465999abcb18e3185ad2 P b346455a86e34770b790a74bedca168d reported cstate change: term changed from 1 to 2, leader changed from 233b7197447b431b9c6accaf57a9501c (127.0.106.129) to b346455a86e34770b790a74bedca168d (127.0.106.131). New cstate: current_term: 2 leader_uuid: "b346455a86e34770b790a74bedca168d" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "233b7197447b431b9c6accaf57a9501c" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 35601 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "37a8f659ffb749d295aba8801755cbb7" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 39751 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "b346455a86e34770b790a74bedca168d" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 45771 } health_report { overall_health: HEALTHY } } }
I20250905 08:23:01.072014  1610 raft_consensus.cc:1273] T c70abb14f6bd465999abcb18e3185ad2 P 37a8f659ffb749d295aba8801755cbb7 [term 2 FOLLOWER]: Refusing update from remote peer b346455a86e34770b790a74bedca168d: Log matching property violated. Preceding OpId in replica: term: 1 index: 1. Preceding OpId from leader: term: 2 index: 2. (index mismatch)
I20250905 08:23:01.072821  1477 raft_consensus.cc:1273] T c70abb14f6bd465999abcb18e3185ad2 P 233b7197447b431b9c6accaf57a9501c [term 2 FOLLOWER]: Refusing update from remote peer b346455a86e34770b790a74bedca168d: Log matching property violated. Preceding OpId in replica: term: 1 index: 1. Preceding OpId from leader: term: 2 index: 2. (index mismatch)
I20250905 08:23:01.073227  1858 consensus_queue.cc:1035] T c70abb14f6bd465999abcb18e3185ad2 P b346455a86e34770b790a74bedca168d [LEADER]: Connected to new peer: Peer: permanent_uuid: "37a8f659ffb749d295aba8801755cbb7" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 39751 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 1, Time since last communication: 0.000s
I20250905 08:23:01.074362  1857 consensus_queue.cc:1035] T c70abb14f6bd465999abcb18e3185ad2 P b346455a86e34770b790a74bedca168d [LEADER]: Connected to new peer: Peer: permanent_uuid: "233b7197447b431b9c6accaf57a9501c" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 35601 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 1, Time since last communication: 0.000s
I20250905 08:23:02.865850  1477 tablet_service.cc:1968] Received LeaderStepDown RPC: tablet_id: "c70abb14f6bd465999abcb18e3185ad2"
dest_uuid: "233b7197447b431b9c6accaf57a9501c"
mode: GRACEFUL
 from {username='slave'} at 127.0.0.1:60638
I20250905 08:23:02.866501  1477 raft_consensus.cc:604] T c70abb14f6bd465999abcb18e3185ad2 P 233b7197447b431b9c6accaf57a9501c [term 2 FOLLOWER]: Received request to transfer leadership
I20250905 08:23:02.866875  1477 raft_consensus.cc:612] T c70abb14f6bd465999abcb18e3185ad2 P 233b7197447b431b9c6accaf57a9501c [term 2 FOLLOWER]: Rejecting request to transfer leadership while not leader
W20250905 08:23:02.945978  1518 debug-util.cc:398] Leaking SignalData structure 0x7b0800083c40 after lost signal to thread 1394
I20250905 08:23:03.902925   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 1393
I20250905 08:23:03.926837   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 1523
I20250905 08:23:03.947729   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 1656
I20250905 08:23:03.970747   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 1301
2025-09-05T08:23:03Z chronyd exiting
[       OK ] AdminCliTest.TestGracefulSpecificLeaderStepDown (15348 ms)
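The GRACEFUL LeaderStepDown RPCs above are the kind issued by the kudu CLI's leader step-down tool. A minimal sketch of such an invocation, reusing the tablet ID from this log; the --new_leader_uuid flag name is an assumption and may differ between versions:

    # Ask the current leader of the tablet to hand off leadership gracefully.
    # <master_addrs> stands in for this cluster's master RPC address(es);
    # --new_leader_uuid (assumed flag) names the intended successor replica.
    kudu tablet leader_step_down \
      --new_leader_uuid=b346455a86e34770b790a74bedca168d \
      <master_addrs> c70abb14f6bd465999abcb18e3185ad2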
[ RUN      ] AdminCliTest.TestDescribeTableColumnFlags
I20250905 08:23:04.022871   426 test_util.cc:276] Using random seed: -1946836163
I20250905 08:23:04.026520   426 ts_itest-base.cc:115] Starting cluster with:
I20250905 08:23:04.026708   426 ts_itest-base.cc:116] --------------
I20250905 08:23:04.026847   426 ts_itest-base.cc:117] 3 tablet servers
I20250905 08:23:04.026968   426 ts_itest-base.cc:118] 3 replicas per TS
I20250905 08:23:04.027083   426 ts_itest-base.cc:119] --------------
2025-09-05T08:23:04Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-09-05T08:23:04Z Disabled control of system clock
I20250905 08:23:04.059134   426 external_mini_cluster.cc:1366] Running /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
/tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.106.190:46267
--webserver_interface=127.0.106.190
--webserver_port=0
--builtin_ntp_servers=127.0.106.148:33593
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.0.106.190:46267 with env {}
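With the master launched on 127.0.106.190:46267 using the flags above, its liveness can be checked from the same binary. A minimal sketch using the address taken from this log:

    # List the masters known at the RPC address used in this test run;
    # a healthy single-master cluster prints one entry with its UUID and role.
    kudu master list 127.0.106.190:46267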
W20250905 08:23:04.330410  1900 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:23:04.330870  1900 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250905 08:23:04.331219  1900 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:23:04.359711  1900 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250905 08:23:04.359962  1900 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250905 08:23:04.360154  1900 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250905 08:23:04.360361  1900 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250905 08:23:04.391811  1900 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:33593
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.0.106.190:46267
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.106.190:46267
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.0.106.190
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true

Master server version:
kudu 1.19.0-SNAPSHOT
revision e2e116a9cb215058aad161061214d11c617cedac
build type FASTDEBUG
built by None at 05 Sep 2025 08:06:49 UTC on 24a791456cd2
build id 7909
TSAN enabled
I20250905 08:23:04.392871  1900 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250905 08:23:04.394330  1900 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250905 08:23:04.404505  1906 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:23:04.404863  1907 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:23:05.759291  1908 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1349 milliseconds
W20250905 08:23:05.759987  1909 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:23:05.762120  1900 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.358s	user 0.001s	sys 0.007s
W20250905 08:23:05.762368  1900 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.358s	user 0.001s	sys 0.007s
I20250905 08:23:05.762588  1900 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250905 08:23:05.763602  1900 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250905 08:23:05.766430  1900 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250905 08:23:05.767874  1900 hybrid_clock.cc:648] HybridClock initialized: now 1757060585767788 us; error 63 us; skew 500 ppm
I20250905 08:23:05.768657  1900 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250905 08:23:05.776006  1900 webserver.cc:480] Webserver started at http://127.0.106.190:33035/ using document root <none> and password file <none>
I20250905 08:23:05.776880  1900 fs_manager.cc:362] Metadata directory not provided
I20250905 08:23:05.777084  1900 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250905 08:23:05.777520  1900 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250905 08:23:05.781685  1900 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "1297abdacce646e5a804af7a09a3468d"
format_stamp: "Formatted at 2025-09-05 08:23:05 on dist-test-slave-0x95"
I20250905 08:23:05.782670  1900 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "1297abdacce646e5a804af7a09a3468d"
format_stamp: "Formatted at 2025-09-05 08:23:05 on dist-test-slave-0x95"
I20250905 08:23:05.790124  1900 fs_manager.cc:696] Time spent creating directory manager: real 0.007s	user 0.009s	sys 0.000s
I20250905 08:23:05.795673  1916 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:23:05.796700  1900 fs_manager.cc:730] Time spent opening block manager: real 0.004s	user 0.001s	sys 0.002s
I20250905 08:23:05.796994  1900 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
uuid: "1297abdacce646e5a804af7a09a3468d"
format_stamp: "Formatted at 2025-09-05 08:23:05 on dist-test-slave-0x95"
I20250905 08:23:05.797293  1900 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250905 08:23:05.866696  1900 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250905 08:23:05.868072  1900 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250905 08:23:05.868453  1900 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250905 08:23:05.933686  1900 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.190:46267
I20250905 08:23:05.933766  1967 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.190:46267 every 8 connection(s)
I20250905 08:23:05.936254  1900 server_base.cc:1179] Dumped server information to /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250905 08:23:05.941589  1968 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250905 08:23:05.944109   426 external_mini_cluster.cc:1428] Started /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu as pid 1900
I20250905 08:23:05.944545   426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250905 08:23:05.962126  1968 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 1297abdacce646e5a804af7a09a3468d: Bootstrap starting.
I20250905 08:23:05.967304  1968 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 1297abdacce646e5a804af7a09a3468d: Neither blocks nor log segments found. Creating new log.
I20250905 08:23:05.969352  1968 log.cc:826] T 00000000000000000000000000000000 P 1297abdacce646e5a804af7a09a3468d: Log is configured to *not* fsync() on all Append() calls
I20250905 08:23:05.973155  1968 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 1297abdacce646e5a804af7a09a3468d: No bootstrap required, opened a new log
I20250905 08:23:05.989598  1968 raft_consensus.cc:357] T 00000000000000000000000000000000 P 1297abdacce646e5a804af7a09a3468d [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "1297abdacce646e5a804af7a09a3468d" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 46267 } }
I20250905 08:23:05.990252  1968 raft_consensus.cc:383] T 00000000000000000000000000000000 P 1297abdacce646e5a804af7a09a3468d [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250905 08:23:05.990506  1968 raft_consensus.cc:738] T 00000000000000000000000000000000 P 1297abdacce646e5a804af7a09a3468d [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 1297abdacce646e5a804af7a09a3468d, State: Initialized, Role: FOLLOWER
I20250905 08:23:05.991168  1968 consensus_queue.cc:260] T 00000000000000000000000000000000 P 1297abdacce646e5a804af7a09a3468d [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "1297abdacce646e5a804af7a09a3468d" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 46267 } }
I20250905 08:23:05.991698  1968 raft_consensus.cc:397] T 00000000000000000000000000000000 P 1297abdacce646e5a804af7a09a3468d [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250905 08:23:05.991997  1968 raft_consensus.cc:491] T 00000000000000000000000000000000 P 1297abdacce646e5a804af7a09a3468d [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250905 08:23:05.992316  1968 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 1297abdacce646e5a804af7a09a3468d [term 0 FOLLOWER]: Advancing to term 1
I20250905 08:23:05.996806  1968 raft_consensus.cc:513] T 00000000000000000000000000000000 P 1297abdacce646e5a804af7a09a3468d [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "1297abdacce646e5a804af7a09a3468d" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 46267 } }
I20250905 08:23:05.997545  1968 leader_election.cc:304] T 00000000000000000000000000000000 P 1297abdacce646e5a804af7a09a3468d [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 1297abdacce646e5a804af7a09a3468d; no voters: 
I20250905 08:23:05.999307  1968 leader_election.cc:290] T 00000000000000000000000000000000 P 1297abdacce646e5a804af7a09a3468d [CANDIDATE]: Term 1 election: Requested vote from peers 
I20250905 08:23:06.000002  1973 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 1297abdacce646e5a804af7a09a3468d [term 1 FOLLOWER]: Leader election won for term 1
I20250905 08:23:06.002100  1973 raft_consensus.cc:695] T 00000000000000000000000000000000 P 1297abdacce646e5a804af7a09a3468d [term 1 LEADER]: Becoming Leader. State: Replica: 1297abdacce646e5a804af7a09a3468d, State: Running, Role: LEADER
I20250905 08:23:06.002923  1973 consensus_queue.cc:237] T 00000000000000000000000000000000 P 1297abdacce646e5a804af7a09a3468d [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "1297abdacce646e5a804af7a09a3468d" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 46267 } }
I20250905 08:23:06.003937  1968 sys_catalog.cc:564] T 00000000000000000000000000000000 P 1297abdacce646e5a804af7a09a3468d [sys.catalog]: configured and running, proceeding with master startup.
I20250905 08:23:06.012393  1975 sys_catalog.cc:455] T 00000000000000000000000000000000 P 1297abdacce646e5a804af7a09a3468d [sys.catalog]: SysCatalogTable state changed. Reason: New leader 1297abdacce646e5a804af7a09a3468d. Latest consensus state: current_term: 1 leader_uuid: "1297abdacce646e5a804af7a09a3468d" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "1297abdacce646e5a804af7a09a3468d" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 46267 } } }
I20250905 08:23:06.012856  1975 sys_catalog.cc:458] T 00000000000000000000000000000000 P 1297abdacce646e5a804af7a09a3468d [sys.catalog]: This master's current role is: LEADER
I20250905 08:23:06.012799  1974 sys_catalog.cc:455] T 00000000000000000000000000000000 P 1297abdacce646e5a804af7a09a3468d [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "1297abdacce646e5a804af7a09a3468d" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "1297abdacce646e5a804af7a09a3468d" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 46267 } } }
I20250905 08:23:06.014140  1974 sys_catalog.cc:458] T 00000000000000000000000000000000 P 1297abdacce646e5a804af7a09a3468d [sys.catalog]: This master's current role is: LEADER
I20250905 08:23:06.015296  1981 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250905 08:23:06.029958  1981 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250905 08:23:06.047935  1981 catalog_manager.cc:1349] Generated new cluster ID: 8964f339fe63438eb820bd09f31d4fca
I20250905 08:23:06.048185  1981 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250905 08:23:06.062381  1981 catalog_manager.cc:1372] Generated new certificate authority record
I20250905 08:23:06.063618  1981 catalog_manager.cc:1506] Loading token signing keys...
I20250905 08:23:06.078444  1981 catalog_manager.cc:5955] T 00000000000000000000000000000000 P 1297abdacce646e5a804af7a09a3468d: Generated new TSK 0
I20250905 08:23:06.079187  1981 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250905 08:23:06.094038   426 external_mini_cluster.cc:1366] Running /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
/tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.129:0
--local_ip_for_outbound_sockets=127.0.106.129
--webserver_interface=127.0.106.129
--webserver_port=0
--tserver_master_addrs=127.0.106.190:46267
--builtin_ntp_servers=127.0.106.148:33593
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
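Once a tablet server starts it registers with the master (see the heartbeater and ts_manager lines below). A minimal sketch for confirming registration against this cluster, again using the master address from the log:

    # List the tablet servers registered with the master at 127.0.106.190:46267;
    # each registered tserver's UUID and bound RPC address should appear.
    kudu tserver list 127.0.106.190:46267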
W20250905 08:23:06.392073  1992 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:23:06.392547  1992 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250905 08:23:06.393026  1992 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:23:06.421169  1992 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250905 08:23:06.421944  1992 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.129
I20250905 08:23:06.452548  1992 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:33593
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.129:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.0.106.129
--webserver_port=0
--tserver_master_addrs=127.0.106.190:46267
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.129
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision e2e116a9cb215058aad161061214d11c617cedac
build type FASTDEBUG
built by None at 05 Sep 2025 08:06:49 UTC on 24a791456cd2
build id 7909
TSAN enabled
I20250905 08:23:06.453717  1992 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250905 08:23:06.455101  1992 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250905 08:23:06.466451  1998 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:23:06.468954  1999 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:23:07.675173  2000 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1207 milliseconds
W20250905 08:23:07.675940  2001 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250905 08:23:07.676091  1992 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250905 08:23:07.679832  1992 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250905 08:23:07.682576  1992 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250905 08:23:07.683977  1992 hybrid_clock.cc:648] HybridClock initialized: now 1757060587683912 us; error 68 us; skew 500 ppm
I20250905 08:23:07.684726  1992 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250905 08:23:07.691378  1992 webserver.cc:480] Webserver started at http://127.0.106.129:35021/ using document root <none> and password file <none>
I20250905 08:23:07.692293  1992 fs_manager.cc:362] Metadata directory not provided
I20250905 08:23:07.692476  1992 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250905 08:23:07.692909  1992 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250905 08:23:07.697371  1992 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "756bc2c8a1534a4291c60e6fef150cd5"
format_stamp: "Formatted at 2025-09-05 08:23:07 on dist-test-slave-0x95"
I20250905 08:23:07.698421  1992 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "756bc2c8a1534a4291c60e6fef150cd5"
format_stamp: "Formatted at 2025-09-05 08:23:07 on dist-test-slave-0x95"
I20250905 08:23:07.705578  1992 fs_manager.cc:696] Time spent creating directory manager: real 0.007s	user 0.008s	sys 0.000s
I20250905 08:23:07.712445  2008 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:23:07.713557  1992 fs_manager.cc:730] Time spent opening block manager: real 0.004s	user 0.004s	sys 0.000s
I20250905 08:23:07.713929  1992 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "756bc2c8a1534a4291c60e6fef150cd5"
format_stamp: "Formatted at 2025-09-05 08:23:07 on dist-test-slave-0x95"
I20250905 08:23:07.714293  1992 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250905 08:23:07.775133  1992 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250905 08:23:07.776516  1992 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250905 08:23:07.776922  1992 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250905 08:23:07.779806  1992 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250905 08:23:07.783560  1992 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250905 08:23:07.783761  1992 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:23:07.784003  1992 ts_tablet_manager.cc:610] Registered 0 tablets
I20250905 08:23:07.784147  1992 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:23:07.952257  1992 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.129:43955
I20250905 08:23:07.952411  2120 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.129:43955 every 8 connection(s)
I20250905 08:23:07.954519  1992 server_base.cc:1179] Dumped server information to /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250905 08:23:07.955305   426 external_mini_cluster.cc:1428] Started /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu as pid 1992
I20250905 08:23:07.955786   426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250905 08:23:07.963400   426 external_mini_cluster.cc:1366] Running /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
/tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.130:0
--local_ip_for_outbound_sockets=127.0.106.130
--webserver_interface=127.0.106.130
--webserver_port=0
--tserver_master_addrs=127.0.106.190:46267
--builtin_ntp_servers=127.0.106.148:33593
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250905 08:23:07.990442  2121 heartbeater.cc:344] Connected to a master server at 127.0.106.190:46267
I20250905 08:23:07.990790  2121 heartbeater.cc:461] Registering TS with master...
I20250905 08:23:07.991801  2121 heartbeater.cc:507] Master 127.0.106.190:46267 requested a full tablet report, sending...
I20250905 08:23:07.993870  1933 ts_manager.cc:194] Registered new tserver with Master: 756bc2c8a1534a4291c60e6fef150cd5 (127.0.106.129:43955)
I20250905 08:23:07.995608  1933 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.129:58965
W20250905 08:23:08.270437  2125 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:23:08.270922  2125 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250905 08:23:08.271385  2125 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:23:08.301440  2125 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250905 08:23:08.302268  2125 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.130
I20250905 08:23:08.335892  2125 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:33593
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.130:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.0.106.130
--webserver_port=0
--tserver_master_addrs=127.0.106.190:46267
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.130
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision e2e116a9cb215058aad161061214d11c617cedac
build type FASTDEBUG
built by None at 05 Sep 2025 08:06:49 UTC on 24a791456cd2
build id 7909
TSAN enabled
I20250905 08:23:08.337455  2125 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250905 08:23:08.339143  2125 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250905 08:23:08.350495  2132 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250905 08:23:08.998720  2121 heartbeater.cc:499] Master 127.0.106.190:46267 was elected leader, sending a full tablet report...
W20250905 08:23:08.352394  2133 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:23:09.468266  2135 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:23:09.470008  2134 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1114 milliseconds
I20250905 08:23:09.470090  2125 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250905 08:23:09.471239  2125 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250905 08:23:09.473841  2125 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250905 08:23:09.475247  2125 hybrid_clock.cc:648] HybridClock initialized: now 1757060589475210 us; error 38 us; skew 500 ppm
I20250905 08:23:09.476015  2125 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250905 08:23:09.482020  2125 webserver.cc:480] Webserver started at http://127.0.106.130:41473/ using document root <none> and password file <none>
I20250905 08:23:09.482878  2125 fs_manager.cc:362] Metadata directory not provided
I20250905 08:23:09.483078  2125 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250905 08:23:09.483481  2125 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250905 08:23:09.489125  2125 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data/instance:
uuid: "471dd246d799432e80c445a1b7539386"
format_stamp: "Formatted at 2025-09-05 08:23:09 on dist-test-slave-0x95"
I20250905 08:23:09.490127  2125 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal/instance:
uuid: "471dd246d799432e80c445a1b7539386"
format_stamp: "Formatted at 2025-09-05 08:23:09 on dist-test-slave-0x95"
I20250905 08:23:09.496688  2125 fs_manager.cc:696] Time spent creating directory manager: real 0.006s	user 0.007s	sys 0.001s
I20250905 08:23:09.501688  2142 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:23:09.502529  2125 fs_manager.cc:730] Time spent opening block manager: real 0.003s	user 0.004s	sys 0.001s
I20250905 08:23:09.502800  2125 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "471dd246d799432e80c445a1b7539386"
format_stamp: "Formatted at 2025-09-05 08:23:09 on dist-test-slave-0x95"
I20250905 08:23:09.503069  2125 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250905 08:23:09.565362  2125 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250905 08:23:09.566622  2125 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250905 08:23:09.567023  2125 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250905 08:23:09.569384  2125 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250905 08:23:09.573086  2125 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250905 08:23:09.573266  2125 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:23:09.573469  2125 ts_tablet_manager.cc:610] Registered 0 tablets
I20250905 08:23:09.573601  2125 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:23:09.700919  2125 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.130:45291
I20250905 08:23:09.701013  2254 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.130:45291 every 8 connection(s)
I20250905 08:23:09.703439  2125 server_base.cc:1179] Dumped server information to /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250905 08:23:09.708128   426 external_mini_cluster.cc:1428] Started /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu as pid 2125
I20250905 08:23:09.708874   426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal/instance
I20250905 08:23:09.715907   426 external_mini_cluster.cc:1366] Running /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
/tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.131:0
--local_ip_for_outbound_sockets=127.0.106.131
--webserver_interface=127.0.106.131
--webserver_port=0
--tserver_master_addrs=127.0.106.190:46267
--builtin_ntp_servers=127.0.106.148:33593
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250905 08:23:09.725888  2255 heartbeater.cc:344] Connected to a master server at 127.0.106.190:46267
I20250905 08:23:09.726339  2255 heartbeater.cc:461] Registering TS with master...
I20250905 08:23:09.727617  2255 heartbeater.cc:507] Master 127.0.106.190:46267 requested a full tablet report, sending...
I20250905 08:23:09.729815  1933 ts_manager.cc:194] Registered new tserver with Master: 471dd246d799432e80c445a1b7539386 (127.0.106.130:45291)
I20250905 08:23:09.730962  1933 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.130:59307
W20250905 08:23:10.012072  2259 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:23:10.012518  2259 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250905 08:23:10.012969  2259 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:23:10.041318  2259 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250905 08:23:10.042090  2259 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.131
I20250905 08:23:10.072407  2259 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:33593
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.131:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.0.106.131
--webserver_port=0
--tserver_master_addrs=127.0.106.190:46267
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.131
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision e2e116a9cb215058aad161061214d11c617cedac
build type FASTDEBUG
built by None at 05 Sep 2025 08:06:49 UTC on 24a791456cd2
build id 7909
TSAN enabled
I20250905 08:23:10.073536  2259 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250905 08:23:10.074970  2259 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250905 08:23:10.085981  2265 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250905 08:23:10.734521  2255 heartbeater.cc:499] Master 127.0.106.190:46267 was elected leader, sending a full tablet report...
W20250905 08:23:10.086658  2266 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:23:11.442813  2268 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:23:11.446604  2259 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.360s	user 0.423s	sys 0.907s
W20250905 08:23:11.446870  2259 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.361s	user 0.423s	sys 0.907s
W20250905 08:23:11.447520  2267 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1359 milliseconds
I20250905 08:23:11.447526  2259 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250905 08:23:11.448750  2259 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250905 08:23:11.455482  2259 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250905 08:23:11.456785  2259 hybrid_clock.cc:648] HybridClock initialized: now 1757060591456762 us; error 40 us; skew 500 ppm
I20250905 08:23:11.457727  2259 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250905 08:23:11.464349  2259 webserver.cc:480] Webserver started at http://127.0.106.131:46001/ using document root <none> and password file <none>
I20250905 08:23:11.465257  2259 fs_manager.cc:362] Metadata directory not provided
I20250905 08:23:11.465467  2259 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250905 08:23:11.465881  2259 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250905 08:23:11.470126  2259 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data/instance:
uuid: "3e0812be3ee6439db579b18b371b3173"
format_stamp: "Formatted at 2025-09-05 08:23:11 on dist-test-slave-0x95"
I20250905 08:23:11.471351  2259 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal/instance:
uuid: "3e0812be3ee6439db579b18b371b3173"
format_stamp: "Formatted at 2025-09-05 08:23:11 on dist-test-slave-0x95"
I20250905 08:23:11.478132  2259 fs_manager.cc:696] Time spent creating directory manager: real 0.006s	user 0.000s	sys 0.005s
I20250905 08:23:11.483731  2275 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:23:11.484794  2259 fs_manager.cc:730] Time spent opening block manager: real 0.004s	user 0.002s	sys 0.001s
I20250905 08:23:11.485066  2259 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "3e0812be3ee6439db579b18b371b3173"
format_stamp: "Formatted at 2025-09-05 08:23:11 on dist-test-slave-0x95"
I20250905 08:23:11.485335  2259 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250905 08:23:11.548271  2259 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250905 08:23:11.549504  2259 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250905 08:23:11.549861  2259 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250905 08:23:11.552062  2259 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250905 08:23:11.555416  2259 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250905 08:23:11.555630  2259 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:23:11.555871  2259 ts_tablet_manager.cc:610] Registered 0 tablets
I20250905 08:23:11.556017  2259 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:23:11.679859  2259 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.131:43701
I20250905 08:23:11.679976  2387 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.131:43701 every 8 connection(s)
I20250905 08:23:11.682186  2259 server_base.cc:1179] Dumped server information to /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250905 08:23:11.691362   426 external_mini_cluster.cc:1428] Started /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu as pid 2259
I20250905 08:23:11.691805   426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal/instance
I20250905 08:23:11.708068  2388 heartbeater.cc:344] Connected to a master server at 127.0.106.190:46267
I20250905 08:23:11.708550  2388 heartbeater.cc:461] Registering TS with master...
I20250905 08:23:11.709647  2388 heartbeater.cc:507] Master 127.0.106.190:46267 requested a full tablet report, sending...
I20250905 08:23:11.711449  1933 ts_manager.cc:194] Registered new tserver with Master: 3e0812be3ee6439db579b18b371b3173 (127.0.106.131:43701)
I20250905 08:23:11.712515  1933 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.131:40665
I20250905 08:23:11.723919   426 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250905 08:23:11.755481  1933 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:58876:
name: "TestTable"
schema {
  columns {
    name: "key"
    type: INT32
    is_key: true
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "int_val"
    type: INT32
    is_key: false
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "string_val"
    type: STRING
    is_key: false
    is_nullable: true
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
  range_schema {
    columns {
      name: "key"
    }
  }
}
owner: "alice"
W20250905 08:23:11.773219  1933 catalog_manager.cc:6944] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table TestTable in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250905 08:23:11.835557  2056 tablet_service.cc:1468] Processing CreateTablet for tablet 587ebc30cc7442a689818af872e1b64a (DEFAULT_TABLE table=TestTable [id=816ad39a73154eaca86b6a8cde5ef332]), partition=RANGE (key) PARTITION UNBOUNDED
I20250905 08:23:11.835937  2323 tablet_service.cc:1468] Processing CreateTablet for tablet 587ebc30cc7442a689818af872e1b64a (DEFAULT_TABLE table=TestTable [id=816ad39a73154eaca86b6a8cde5ef332]), partition=RANGE (key) PARTITION UNBOUNDED
I20250905 08:23:11.837584  2323 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 587ebc30cc7442a689818af872e1b64a. 1 dirs total, 0 dirs full, 0 dirs failed
I20250905 08:23:11.837590  2056 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 587ebc30cc7442a689818af872e1b64a. 1 dirs total, 0 dirs full, 0 dirs failed
I20250905 08:23:11.837586  2190 tablet_service.cc:1468] Processing CreateTablet for tablet 587ebc30cc7442a689818af872e1b64a (DEFAULT_TABLE table=TestTable [id=816ad39a73154eaca86b6a8cde5ef332]), partition=RANGE (key) PARTITION UNBOUNDED
I20250905 08:23:11.839310  2190 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 587ebc30cc7442a689818af872e1b64a. 1 dirs total, 0 dirs full, 0 dirs failed
I20250905 08:23:11.859930  2407 tablet_bootstrap.cc:492] T 587ebc30cc7442a689818af872e1b64a P 3e0812be3ee6439db579b18b371b3173: Bootstrap starting.
I20250905 08:23:11.863576  2408 tablet_bootstrap.cc:492] T 587ebc30cc7442a689818af872e1b64a P 756bc2c8a1534a4291c60e6fef150cd5: Bootstrap starting.
I20250905 08:23:11.865590  2409 tablet_bootstrap.cc:492] T 587ebc30cc7442a689818af872e1b64a P 471dd246d799432e80c445a1b7539386: Bootstrap starting.
I20250905 08:23:11.866461  2407 tablet_bootstrap.cc:654] T 587ebc30cc7442a689818af872e1b64a P 3e0812be3ee6439db579b18b371b3173: Neither blocks nor log segments found. Creating new log.
I20250905 08:23:11.868772  2407 log.cc:826] T 587ebc30cc7442a689818af872e1b64a P 3e0812be3ee6439db579b18b371b3173: Log is configured to *not* fsync() on all Append() calls
I20250905 08:23:11.870638  2408 tablet_bootstrap.cc:654] T 587ebc30cc7442a689818af872e1b64a P 756bc2c8a1534a4291c60e6fef150cd5: Neither blocks nor log segments found. Creating new log.
I20250905 08:23:11.870673  2409 tablet_bootstrap.cc:654] T 587ebc30cc7442a689818af872e1b64a P 471dd246d799432e80c445a1b7539386: Neither blocks nor log segments found. Creating new log.
I20250905 08:23:11.872933  2409 log.cc:826] T 587ebc30cc7442a689818af872e1b64a P 471dd246d799432e80c445a1b7539386: Log is configured to *not* fsync() on all Append() calls
I20250905 08:23:11.873370  2408 log.cc:826] T 587ebc30cc7442a689818af872e1b64a P 756bc2c8a1534a4291c60e6fef150cd5: Log is configured to *not* fsync() on all Append() calls
I20250905 08:23:11.876210  2407 tablet_bootstrap.cc:492] T 587ebc30cc7442a689818af872e1b64a P 3e0812be3ee6439db579b18b371b3173: No bootstrap required, opened a new log
I20250905 08:23:11.876701  2407 ts_tablet_manager.cc:1397] T 587ebc30cc7442a689818af872e1b64a P 3e0812be3ee6439db579b18b371b3173: Time spent bootstrapping tablet: real 0.017s	user 0.013s	sys 0.004s
I20250905 08:23:11.877883  2409 tablet_bootstrap.cc:492] T 587ebc30cc7442a689818af872e1b64a P 471dd246d799432e80c445a1b7539386: No bootstrap required, opened a new log
I20250905 08:23:11.878262  2409 ts_tablet_manager.cc:1397] T 587ebc30cc7442a689818af872e1b64a P 471dd246d799432e80c445a1b7539386: Time spent bootstrapping tablet: real 0.013s	user 0.005s	sys 0.005s
I20250905 08:23:11.879314  2408 tablet_bootstrap.cc:492] T 587ebc30cc7442a689818af872e1b64a P 756bc2c8a1534a4291c60e6fef150cd5: No bootstrap required, opened a new log
I20250905 08:23:11.879766  2408 ts_tablet_manager.cc:1397] T 587ebc30cc7442a689818af872e1b64a P 756bc2c8a1534a4291c60e6fef150cd5: Time spent bootstrapping tablet: real 0.017s	user 0.004s	sys 0.010s
I20250905 08:23:11.895561  2409 raft_consensus.cc:357] T 587ebc30cc7442a689818af872e1b64a P 471dd246d799432e80c445a1b7539386 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "3e0812be3ee6439db579b18b371b3173" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43701 } } peers { permanent_uuid: "756bc2c8a1534a4291c60e6fef150cd5" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 43955 } } peers { permanent_uuid: "471dd246d799432e80c445a1b7539386" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 45291 } }
I20250905 08:23:11.896207  2409 raft_consensus.cc:383] T 587ebc30cc7442a689818af872e1b64a P 471dd246d799432e80c445a1b7539386 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250905 08:23:11.896458  2409 raft_consensus.cc:738] T 587ebc30cc7442a689818af872e1b64a P 471dd246d799432e80c445a1b7539386 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 471dd246d799432e80c445a1b7539386, State: Initialized, Role: FOLLOWER
I20250905 08:23:11.897130  2409 consensus_queue.cc:260] T 587ebc30cc7442a689818af872e1b64a P 471dd246d799432e80c445a1b7539386 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "3e0812be3ee6439db579b18b371b3173" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43701 } } peers { permanent_uuid: "756bc2c8a1534a4291c60e6fef150cd5" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 43955 } } peers { permanent_uuid: "471dd246d799432e80c445a1b7539386" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 45291 } }
I20250905 08:23:11.900955  2409 ts_tablet_manager.cc:1428] T 587ebc30cc7442a689818af872e1b64a P 471dd246d799432e80c445a1b7539386: Time spent starting tablet: real 0.023s	user 0.019s	sys 0.004s
I20250905 08:23:11.902931  2407 raft_consensus.cc:357] T 587ebc30cc7442a689818af872e1b64a P 3e0812be3ee6439db579b18b371b3173 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "3e0812be3ee6439db579b18b371b3173" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43701 } } peers { permanent_uuid: "756bc2c8a1534a4291c60e6fef150cd5" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 43955 } } peers { permanent_uuid: "471dd246d799432e80c445a1b7539386" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 45291 } }
I20250905 08:23:11.903684  2407 raft_consensus.cc:383] T 587ebc30cc7442a689818af872e1b64a P 3e0812be3ee6439db579b18b371b3173 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250905 08:23:11.903981  2407 raft_consensus.cc:738] T 587ebc30cc7442a689818af872e1b64a P 3e0812be3ee6439db579b18b371b3173 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 3e0812be3ee6439db579b18b371b3173, State: Initialized, Role: FOLLOWER
I20250905 08:23:11.904632  2408 raft_consensus.cc:357] T 587ebc30cc7442a689818af872e1b64a P 756bc2c8a1534a4291c60e6fef150cd5 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "3e0812be3ee6439db579b18b371b3173" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43701 } } peers { permanent_uuid: "756bc2c8a1534a4291c60e6fef150cd5" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 43955 } } peers { permanent_uuid: "471dd246d799432e80c445a1b7539386" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 45291 } }
I20250905 08:23:11.904732  2407 consensus_queue.cc:260] T 587ebc30cc7442a689818af872e1b64a P 3e0812be3ee6439db579b18b371b3173 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "3e0812be3ee6439db579b18b371b3173" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43701 } } peers { permanent_uuid: "756bc2c8a1534a4291c60e6fef150cd5" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 43955 } } peers { permanent_uuid: "471dd246d799432e80c445a1b7539386" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 45291 } }
I20250905 08:23:11.905413  2408 raft_consensus.cc:383] T 587ebc30cc7442a689818af872e1b64a P 756bc2c8a1534a4291c60e6fef150cd5 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250905 08:23:11.905681  2408 raft_consensus.cc:738] T 587ebc30cc7442a689818af872e1b64a P 756bc2c8a1534a4291c60e6fef150cd5 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 756bc2c8a1534a4291c60e6fef150cd5, State: Initialized, Role: FOLLOWER
I20250905 08:23:11.906584  2408 consensus_queue.cc:260] T 587ebc30cc7442a689818af872e1b64a P 756bc2c8a1534a4291c60e6fef150cd5 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "3e0812be3ee6439db579b18b371b3173" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43701 } } peers { permanent_uuid: "756bc2c8a1534a4291c60e6fef150cd5" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 43955 } } peers { permanent_uuid: "471dd246d799432e80c445a1b7539386" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 45291 } }
I20250905 08:23:11.912793  2388 heartbeater.cc:499] Master 127.0.106.190:46267 was elected leader, sending a full tablet report...
I20250905 08:23:11.914283  2407 ts_tablet_manager.cc:1428] T 587ebc30cc7442a689818af872e1b64a P 3e0812be3ee6439db579b18b371b3173: Time spent starting tablet: real 0.037s	user 0.020s	sys 0.014s
I20250905 08:23:11.914430  2408 ts_tablet_manager.cc:1428] T 587ebc30cc7442a689818af872e1b64a P 756bc2c8a1534a4291c60e6fef150cd5: Time spent starting tablet: real 0.034s	user 0.030s	sys 0.003s
W20250905 08:23:11.936285  2389 tablet.cc:2378] T 587ebc30cc7442a689818af872e1b64a P 3e0812be3ee6439db579b18b371b3173: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250905 08:23:11.959722  2256 tablet.cc:2378] T 587ebc30cc7442a689818af872e1b64a P 471dd246d799432e80c445a1b7539386: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250905 08:23:11.968923  2122 tablet.cc:2378] T 587ebc30cc7442a689818af872e1b64a P 756bc2c8a1534a4291c60e6fef150cd5: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250905 08:23:11.985569  2413 raft_consensus.cc:491] T 587ebc30cc7442a689818af872e1b64a P 471dd246d799432e80c445a1b7539386 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250905 08:23:11.985924  2413 raft_consensus.cc:513] T 587ebc30cc7442a689818af872e1b64a P 471dd246d799432e80c445a1b7539386 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "3e0812be3ee6439db579b18b371b3173" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43701 } } peers { permanent_uuid: "756bc2c8a1534a4291c60e6fef150cd5" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 43955 } } peers { permanent_uuid: "471dd246d799432e80c445a1b7539386" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 45291 } }
I20250905 08:23:11.987933  2413 leader_election.cc:290] T 587ebc30cc7442a689818af872e1b64a P 471dd246d799432e80c445a1b7539386 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 3e0812be3ee6439db579b18b371b3173 (127.0.106.131:43701), 756bc2c8a1534a4291c60e6fef150cd5 (127.0.106.129:43955)
I20250905 08:23:12.000391  2343 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "587ebc30cc7442a689818af872e1b64a" candidate_uuid: "471dd246d799432e80c445a1b7539386" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "3e0812be3ee6439db579b18b371b3173" is_pre_election: true
I20250905 08:23:12.001044  2343 raft_consensus.cc:2466] T 587ebc30cc7442a689818af872e1b64a P 3e0812be3ee6439db579b18b371b3173 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 471dd246d799432e80c445a1b7539386 in term 0.
I20250905 08:23:12.001395  2076 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "587ebc30cc7442a689818af872e1b64a" candidate_uuid: "471dd246d799432e80c445a1b7539386" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "756bc2c8a1534a4291c60e6fef150cd5" is_pre_election: true
I20250905 08:23:12.002169  2076 raft_consensus.cc:2466] T 587ebc30cc7442a689818af872e1b64a P 756bc2c8a1534a4291c60e6fef150cd5 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 471dd246d799432e80c445a1b7539386 in term 0.
I20250905 08:23:12.002254  2145 leader_election.cc:304] T 587ebc30cc7442a689818af872e1b64a P 471dd246d799432e80c445a1b7539386 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 3e0812be3ee6439db579b18b371b3173, 471dd246d799432e80c445a1b7539386; no voters: 
I20250905 08:23:12.003021  2413 raft_consensus.cc:2802] T 587ebc30cc7442a689818af872e1b64a P 471dd246d799432e80c445a1b7539386 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250905 08:23:12.003276  2413 raft_consensus.cc:491] T 587ebc30cc7442a689818af872e1b64a P 471dd246d799432e80c445a1b7539386 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250905 08:23:12.003499  2413 raft_consensus.cc:3058] T 587ebc30cc7442a689818af872e1b64a P 471dd246d799432e80c445a1b7539386 [term 0 FOLLOWER]: Advancing to term 1
I20250905 08:23:12.007462  2413 raft_consensus.cc:513] T 587ebc30cc7442a689818af872e1b64a P 471dd246d799432e80c445a1b7539386 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "3e0812be3ee6439db579b18b371b3173" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43701 } } peers { permanent_uuid: "756bc2c8a1534a4291c60e6fef150cd5" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 43955 } } peers { permanent_uuid: "471dd246d799432e80c445a1b7539386" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 45291 } }
I20250905 08:23:12.008757  2413 leader_election.cc:290] T 587ebc30cc7442a689818af872e1b64a P 471dd246d799432e80c445a1b7539386 [CANDIDATE]: Term 1 election: Requested vote from peers 3e0812be3ee6439db579b18b371b3173 (127.0.106.131:43701), 756bc2c8a1534a4291c60e6fef150cd5 (127.0.106.129:43955)
I20250905 08:23:12.009447  2343 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "587ebc30cc7442a689818af872e1b64a" candidate_uuid: "471dd246d799432e80c445a1b7539386" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "3e0812be3ee6439db579b18b371b3173"
I20250905 08:23:12.009502  2076 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "587ebc30cc7442a689818af872e1b64a" candidate_uuid: "471dd246d799432e80c445a1b7539386" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "756bc2c8a1534a4291c60e6fef150cd5"
I20250905 08:23:12.009780  2343 raft_consensus.cc:3058] T 587ebc30cc7442a689818af872e1b64a P 3e0812be3ee6439db579b18b371b3173 [term 0 FOLLOWER]: Advancing to term 1
I20250905 08:23:12.009862  2076 raft_consensus.cc:3058] T 587ebc30cc7442a689818af872e1b64a P 756bc2c8a1534a4291c60e6fef150cd5 [term 0 FOLLOWER]: Advancing to term 1
I20250905 08:23:12.013536  2343 raft_consensus.cc:2466] T 587ebc30cc7442a689818af872e1b64a P 3e0812be3ee6439db579b18b371b3173 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 471dd246d799432e80c445a1b7539386 in term 1.
I20250905 08:23:12.013651  2076 raft_consensus.cc:2466] T 587ebc30cc7442a689818af872e1b64a P 756bc2c8a1534a4291c60e6fef150cd5 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 471dd246d799432e80c445a1b7539386 in term 1.
I20250905 08:23:12.014196  2145 leader_election.cc:304] T 587ebc30cc7442a689818af872e1b64a P 471dd246d799432e80c445a1b7539386 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 3e0812be3ee6439db579b18b371b3173, 471dd246d799432e80c445a1b7539386; no voters: 
I20250905 08:23:12.014808  2413 raft_consensus.cc:2802] T 587ebc30cc7442a689818af872e1b64a P 471dd246d799432e80c445a1b7539386 [term 1 FOLLOWER]: Leader election won for term 1
I20250905 08:23:12.016273  2413 raft_consensus.cc:695] T 587ebc30cc7442a689818af872e1b64a P 471dd246d799432e80c445a1b7539386 [term 1 LEADER]: Becoming Leader. State: Replica: 471dd246d799432e80c445a1b7539386, State: Running, Role: LEADER
I20250905 08:23:12.016917  2413 consensus_queue.cc:237] T 587ebc30cc7442a689818af872e1b64a P 471dd246d799432e80c445a1b7539386 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "3e0812be3ee6439db579b18b371b3173" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43701 } } peers { permanent_uuid: "756bc2c8a1534a4291c60e6fef150cd5" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 43955 } } peers { permanent_uuid: "471dd246d799432e80c445a1b7539386" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 45291 } }
I20250905 08:23:12.027645  1933 catalog_manager.cc:5582] T 587ebc30cc7442a689818af872e1b64a P 471dd246d799432e80c445a1b7539386 reported cstate change: term changed from 0 to 1, leader changed from <none> to 471dd246d799432e80c445a1b7539386 (127.0.106.130). New cstate: current_term: 1 leader_uuid: "471dd246d799432e80c445a1b7539386" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "3e0812be3ee6439db579b18b371b3173" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43701 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "756bc2c8a1534a4291c60e6fef150cd5" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 43955 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "471dd246d799432e80c445a1b7539386" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 45291 } health_report { overall_health: HEALTHY } } }
I20250905 08:23:12.055814   426 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250905 08:23:12.058679   426 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 756bc2c8a1534a4291c60e6fef150cd5 to finish bootstrapping
I20250905 08:23:12.070096   426 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 471dd246d799432e80c445a1b7539386 to finish bootstrapping
I20250905 08:23:12.080262   426 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 3e0812be3ee6439db579b18b371b3173 to finish bootstrapping
I20250905 08:23:12.092295  1933 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:58876:
name: "TestAnotherTable"
schema {
  columns {
    name: "foo"
    type: INT32
    is_key: true
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "bar"
    type: INT32
    is_key: false
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    comment: "comment for bar"
    immutable: false
  }
}
split_rows_range_bounds {
}
partition_schema {
  range_schema {
    columns {
      name: "foo"
    }
  }
}
W20250905 08:23:12.093690  1933 catalog_manager.cc:6944] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table TestAnotherTable in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250905 08:23:12.109300  2056 tablet_service.cc:1468] Processing CreateTablet for tablet 3bf27e0f0c36447ca775376df0b6d7c7 (DEFAULT_TABLE table=TestAnotherTable [id=a63ea454bf9045bc89fbbe6514bb065c]), partition=RANGE (foo) PARTITION UNBOUNDED
I20250905 08:23:12.110304  2056 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 3bf27e0f0c36447ca775376df0b6d7c7. 1 dirs total, 0 dirs full, 0 dirs failed
I20250905 08:23:12.110226  2323 tablet_service.cc:1468] Processing CreateTablet for tablet 3bf27e0f0c36447ca775376df0b6d7c7 (DEFAULT_TABLE table=TestAnotherTable [id=a63ea454bf9045bc89fbbe6514bb065c]), partition=RANGE (foo) PARTITION UNBOUNDED
I20250905 08:23:12.110152  2190 tablet_service.cc:1468] Processing CreateTablet for tablet 3bf27e0f0c36447ca775376df0b6d7c7 (DEFAULT_TABLE table=TestAnotherTable [id=a63ea454bf9045bc89fbbe6514bb065c]), partition=RANGE (foo) PARTITION UNBOUNDED
I20250905 08:23:12.110941  2323 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 3bf27e0f0c36447ca775376df0b6d7c7. 1 dirs total, 0 dirs full, 0 dirs failed
I20250905 08:23:12.111088  2190 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 3bf27e0f0c36447ca775376df0b6d7c7. 1 dirs total, 0 dirs full, 0 dirs failed
I20250905 08:23:12.121343  2407 tablet_bootstrap.cc:492] T 3bf27e0f0c36447ca775376df0b6d7c7 P 3e0812be3ee6439db579b18b371b3173: Bootstrap starting.
I20250905 08:23:12.123323  2408 tablet_bootstrap.cc:492] T 3bf27e0f0c36447ca775376df0b6d7c7 P 756bc2c8a1534a4291c60e6fef150cd5: Bootstrap starting.
I20250905 08:23:12.124023  2409 tablet_bootstrap.cc:492] T 3bf27e0f0c36447ca775376df0b6d7c7 P 471dd246d799432e80c445a1b7539386: Bootstrap starting.
I20250905 08:23:12.126513  2407 tablet_bootstrap.cc:654] T 3bf27e0f0c36447ca775376df0b6d7c7 P 3e0812be3ee6439db579b18b371b3173: Neither blocks nor log segments found. Creating new log.
I20250905 08:23:12.128811  2408 tablet_bootstrap.cc:654] T 3bf27e0f0c36447ca775376df0b6d7c7 P 756bc2c8a1534a4291c60e6fef150cd5: Neither blocks nor log segments found. Creating new log.
I20250905 08:23:12.129514  2409 tablet_bootstrap.cc:654] T 3bf27e0f0c36447ca775376df0b6d7c7 P 471dd246d799432e80c445a1b7539386: Neither blocks nor log segments found. Creating new log.
I20250905 08:23:12.132110  2407 tablet_bootstrap.cc:492] T 3bf27e0f0c36447ca775376df0b6d7c7 P 3e0812be3ee6439db579b18b371b3173: No bootstrap required, opened a new log
I20250905 08:23:12.132475  2407 ts_tablet_manager.cc:1397] T 3bf27e0f0c36447ca775376df0b6d7c7 P 3e0812be3ee6439db579b18b371b3173: Time spent bootstrapping tablet: real 0.011s	user 0.006s	sys 0.005s
I20250905 08:23:12.134410  2408 tablet_bootstrap.cc:492] T 3bf27e0f0c36447ca775376df0b6d7c7 P 756bc2c8a1534a4291c60e6fef150cd5: No bootstrap required, opened a new log
I20250905 08:23:12.134686  2408 ts_tablet_manager.cc:1397] T 3bf27e0f0c36447ca775376df0b6d7c7 P 756bc2c8a1534a4291c60e6fef150cd5: Time spent bootstrapping tablet: real 0.012s	user 0.007s	sys 0.003s
I20250905 08:23:12.134562  2407 raft_consensus.cc:357] T 3bf27e0f0c36447ca775376df0b6d7c7 P 3e0812be3ee6439db579b18b371b3173 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "756bc2c8a1534a4291c60e6fef150cd5" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 43955 } } peers { permanent_uuid: "471dd246d799432e80c445a1b7539386" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 45291 } } peers { permanent_uuid: "3e0812be3ee6439db579b18b371b3173" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43701 } }
I20250905 08:23:12.135257  2407 raft_consensus.cc:383] T 3bf27e0f0c36447ca775376df0b6d7c7 P 3e0812be3ee6439db579b18b371b3173 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250905 08:23:12.135557  2407 raft_consensus.cc:738] T 3bf27e0f0c36447ca775376df0b6d7c7 P 3e0812be3ee6439db579b18b371b3173 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 3e0812be3ee6439db579b18b371b3173, State: Initialized, Role: FOLLOWER
I20250905 08:23:12.136368  2409 tablet_bootstrap.cc:492] T 3bf27e0f0c36447ca775376df0b6d7c7 P 471dd246d799432e80c445a1b7539386: No bootstrap required, opened a new log
I20250905 08:23:12.136288  2407 consensus_queue.cc:260] T 3bf27e0f0c36447ca775376df0b6d7c7 P 3e0812be3ee6439db579b18b371b3173 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "756bc2c8a1534a4291c60e6fef150cd5" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 43955 } } peers { permanent_uuid: "471dd246d799432e80c445a1b7539386" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 45291 } } peers { permanent_uuid: "3e0812be3ee6439db579b18b371b3173" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43701 } }
I20250905 08:23:12.136817  2409 ts_tablet_manager.cc:1397] T 3bf27e0f0c36447ca775376df0b6d7c7 P 471dd246d799432e80c445a1b7539386: Time spent bootstrapping tablet: real 0.013s	user 0.009s	sys 0.003s
I20250905 08:23:12.137221  2408 raft_consensus.cc:357] T 3bf27e0f0c36447ca775376df0b6d7c7 P 756bc2c8a1534a4291c60e6fef150cd5 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "756bc2c8a1534a4291c60e6fef150cd5" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 43955 } } peers { permanent_uuid: "471dd246d799432e80c445a1b7539386" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 45291 } } peers { permanent_uuid: "3e0812be3ee6439db579b18b371b3173" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43701 } }
I20250905 08:23:12.138247  2408 raft_consensus.cc:383] T 3bf27e0f0c36447ca775376df0b6d7c7 P 756bc2c8a1534a4291c60e6fef150cd5 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250905 08:23:12.138612  2407 ts_tablet_manager.cc:1428] T 3bf27e0f0c36447ca775376df0b6d7c7 P 3e0812be3ee6439db579b18b371b3173: Time spent starting tablet: real 0.006s	user 0.001s	sys 0.003s
I20250905 08:23:12.138563  2408 raft_consensus.cc:738] T 3bf27e0f0c36447ca775376df0b6d7c7 P 756bc2c8a1534a4291c60e6fef150cd5 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 756bc2c8a1534a4291c60e6fef150cd5, State: Initialized, Role: FOLLOWER
I20250905 08:23:12.139482  2409 raft_consensus.cc:357] T 3bf27e0f0c36447ca775376df0b6d7c7 P 471dd246d799432e80c445a1b7539386 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "756bc2c8a1534a4291c60e6fef150cd5" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 43955 } } peers { permanent_uuid: "471dd246d799432e80c445a1b7539386" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 45291 } } peers { permanent_uuid: "3e0812be3ee6439db579b18b371b3173" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43701 } }
I20250905 08:23:12.139583  2408 consensus_queue.cc:260] T 3bf27e0f0c36447ca775376df0b6d7c7 P 756bc2c8a1534a4291c60e6fef150cd5 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "756bc2c8a1534a4291c60e6fef150cd5" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 43955 } } peers { permanent_uuid: "471dd246d799432e80c445a1b7539386" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 45291 } } peers { permanent_uuid: "3e0812be3ee6439db579b18b371b3173" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43701 } }
I20250905 08:23:12.140180  2409 raft_consensus.cc:383] T 3bf27e0f0c36447ca775376df0b6d7c7 P 471dd246d799432e80c445a1b7539386 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250905 08:23:12.140448  2409 raft_consensus.cc:738] T 3bf27e0f0c36447ca775376df0b6d7c7 P 471dd246d799432e80c445a1b7539386 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 471dd246d799432e80c445a1b7539386, State: Initialized, Role: FOLLOWER
I20250905 08:23:12.141155  2409 consensus_queue.cc:260] T 3bf27e0f0c36447ca775376df0b6d7c7 P 471dd246d799432e80c445a1b7539386 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "756bc2c8a1534a4291c60e6fef150cd5" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 43955 } } peers { permanent_uuid: "471dd246d799432e80c445a1b7539386" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 45291 } } peers { permanent_uuid: "3e0812be3ee6439db579b18b371b3173" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43701 } }
I20250905 08:23:12.141906  2408 ts_tablet_manager.cc:1428] T 3bf27e0f0c36447ca775376df0b6d7c7 P 756bc2c8a1534a4291c60e6fef150cd5: Time spent starting tablet: real 0.007s	user 0.006s	sys 0.000s
I20250905 08:23:12.146049  2409 ts_tablet_manager.cc:1428] T 3bf27e0f0c36447ca775376df0b6d7c7 P 471dd246d799432e80c445a1b7539386: Time spent starting tablet: real 0.009s	user 0.001s	sys 0.004s
I20250905 08:23:12.173380  2414 raft_consensus.cc:491] T 3bf27e0f0c36447ca775376df0b6d7c7 P 3e0812be3ee6439db579b18b371b3173 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250905 08:23:12.173739  2414 raft_consensus.cc:513] T 3bf27e0f0c36447ca775376df0b6d7c7 P 3e0812be3ee6439db579b18b371b3173 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "756bc2c8a1534a4291c60e6fef150cd5" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 43955 } } peers { permanent_uuid: "471dd246d799432e80c445a1b7539386" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 45291 } } peers { permanent_uuid: "3e0812be3ee6439db579b18b371b3173" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43701 } }
I20250905 08:23:12.175884  2414 leader_election.cc:290] T 3bf27e0f0c36447ca775376df0b6d7c7 P 3e0812be3ee6439db579b18b371b3173 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 756bc2c8a1534a4291c60e6fef150cd5 (127.0.106.129:43955), 471dd246d799432e80c445a1b7539386 (127.0.106.130:45291)
I20250905 08:23:12.184043  2076 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "3bf27e0f0c36447ca775376df0b6d7c7" candidate_uuid: "3e0812be3ee6439db579b18b371b3173" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "756bc2c8a1534a4291c60e6fef150cd5" is_pre_election: true
I20250905 08:23:12.184487  2076 raft_consensus.cc:2466] T 3bf27e0f0c36447ca775376df0b6d7c7 P 756bc2c8a1534a4291c60e6fef150cd5 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 3e0812be3ee6439db579b18b371b3173 in term 0.
I20250905 08:23:12.185412  2279 leader_election.cc:304] T 3bf27e0f0c36447ca775376df0b6d7c7 P 3e0812be3ee6439db579b18b371b3173 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 3e0812be3ee6439db579b18b371b3173, 756bc2c8a1534a4291c60e6fef150cd5; no voters: 
I20250905 08:23:12.186039  2414 raft_consensus.cc:2802] T 3bf27e0f0c36447ca775376df0b6d7c7 P 3e0812be3ee6439db579b18b371b3173 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250905 08:23:12.186357  2414 raft_consensus.cc:491] T 3bf27e0f0c36447ca775376df0b6d7c7 P 3e0812be3ee6439db579b18b371b3173 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250905 08:23:12.186339  2210 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "3bf27e0f0c36447ca775376df0b6d7c7" candidate_uuid: "3e0812be3ee6439db579b18b371b3173" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "471dd246d799432e80c445a1b7539386" is_pre_election: true
I20250905 08:23:12.186637  2414 raft_consensus.cc:3058] T 3bf27e0f0c36447ca775376df0b6d7c7 P 3e0812be3ee6439db579b18b371b3173 [term 0 FOLLOWER]: Advancing to term 1
I20250905 08:23:12.187059  2210 raft_consensus.cc:2466] T 3bf27e0f0c36447ca775376df0b6d7c7 P 471dd246d799432e80c445a1b7539386 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 3e0812be3ee6439db579b18b371b3173 in term 0.
I20250905 08:23:12.191093  2414 raft_consensus.cc:513] T 3bf27e0f0c36447ca775376df0b6d7c7 P 3e0812be3ee6439db579b18b371b3173 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "756bc2c8a1534a4291c60e6fef150cd5" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 43955 } } peers { permanent_uuid: "471dd246d799432e80c445a1b7539386" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 45291 } } peers { permanent_uuid: "3e0812be3ee6439db579b18b371b3173" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43701 } }
I20250905 08:23:12.192283  2414 leader_election.cc:290] T 3bf27e0f0c36447ca775376df0b6d7c7 P 3e0812be3ee6439db579b18b371b3173 [CANDIDATE]: Term 1 election: Requested vote from peers 756bc2c8a1534a4291c60e6fef150cd5 (127.0.106.129:43955), 471dd246d799432e80c445a1b7539386 (127.0.106.130:45291)
I20250905 08:23:12.192847  2076 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "3bf27e0f0c36447ca775376df0b6d7c7" candidate_uuid: "3e0812be3ee6439db579b18b371b3173" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "756bc2c8a1534a4291c60e6fef150cd5"
I20250905 08:23:12.193060  2210 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "3bf27e0f0c36447ca775376df0b6d7c7" candidate_uuid: "3e0812be3ee6439db579b18b371b3173" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "471dd246d799432e80c445a1b7539386"
I20250905 08:23:12.193225  2076 raft_consensus.cc:3058] T 3bf27e0f0c36447ca775376df0b6d7c7 P 756bc2c8a1534a4291c60e6fef150cd5 [term 0 FOLLOWER]: Advancing to term 1
I20250905 08:23:12.193466  2210 raft_consensus.cc:3058] T 3bf27e0f0c36447ca775376df0b6d7c7 P 471dd246d799432e80c445a1b7539386 [term 0 FOLLOWER]: Advancing to term 1
I20250905 08:23:12.196653  2076 raft_consensus.cc:2466] T 3bf27e0f0c36447ca775376df0b6d7c7 P 756bc2c8a1534a4291c60e6fef150cd5 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 3e0812be3ee6439db579b18b371b3173 in term 1.
I20250905 08:23:12.197311  2210 raft_consensus.cc:2466] T 3bf27e0f0c36447ca775376df0b6d7c7 P 471dd246d799432e80c445a1b7539386 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 3e0812be3ee6439db579b18b371b3173 in term 1.
I20250905 08:23:12.197412  2279 leader_election.cc:304] T 3bf27e0f0c36447ca775376df0b6d7c7 P 3e0812be3ee6439db579b18b371b3173 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 3e0812be3ee6439db579b18b371b3173, 756bc2c8a1534a4291c60e6fef150cd5; no voters: 
I20250905 08:23:12.198122  2414 raft_consensus.cc:2802] T 3bf27e0f0c36447ca775376df0b6d7c7 P 3e0812be3ee6439db579b18b371b3173 [term 1 FOLLOWER]: Leader election won for term 1
I20250905 08:23:12.200428  2414 raft_consensus.cc:695] T 3bf27e0f0c36447ca775376df0b6d7c7 P 3e0812be3ee6439db579b18b371b3173 [term 1 LEADER]: Becoming Leader. State: Replica: 3e0812be3ee6439db579b18b371b3173, State: Running, Role: LEADER
I20250905 08:23:12.201174  2414 consensus_queue.cc:237] T 3bf27e0f0c36447ca775376df0b6d7c7 P 3e0812be3ee6439db579b18b371b3173 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "756bc2c8a1534a4291c60e6fef150cd5" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 43955 } } peers { permanent_uuid: "471dd246d799432e80c445a1b7539386" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 45291 } } peers { permanent_uuid: "3e0812be3ee6439db579b18b371b3173" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43701 } }
I20250905 08:23:12.209909  1931 catalog_manager.cc:5582] T 3bf27e0f0c36447ca775376df0b6d7c7 P 3e0812be3ee6439db579b18b371b3173 reported cstate change: term changed from 0 to 1, leader changed from <none> to 3e0812be3ee6439db579b18b371b3173 (127.0.106.131). New cstate: current_term: 1 leader_uuid: "3e0812be3ee6439db579b18b371b3173" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "756bc2c8a1534a4291c60e6fef150cd5" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 43955 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "471dd246d799432e80c445a1b7539386" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 45291 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "3e0812be3ee6439db579b18b371b3173" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43701 } health_report { overall_health: HEALTHY } } }
I20250905 08:23:12.485484  2413 consensus_queue.cc:1035] T 587ebc30cc7442a689818af872e1b64a P 471dd246d799432e80c445a1b7539386 [LEADER]: Connected to new peer: Peer: permanent_uuid: "3e0812be3ee6439db579b18b371b3173" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43701 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
W20250905 08:23:12.504896  2427 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
I20250905 08:23:12.504825  2418 consensus_queue.cc:1035] T 587ebc30cc7442a689818af872e1b64a P 471dd246d799432e80c445a1b7539386 [LEADER]: Connected to new peer: Peer: permanent_uuid: "756bc2c8a1534a4291c60e6fef150cd5" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 43955 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
W20250905 08:23:12.505762  2427 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:23:12.540889  2427 flags.cc:425] Enabled experimental flag: --enable_multi_tenancy=false
I20250905 08:23:12.659263  2414 consensus_queue.cc:1035] T 3bf27e0f0c36447ca775376df0b6d7c7 P 3e0812be3ee6439db579b18b371b3173 [LEADER]: Connected to new peer: Peer: permanent_uuid: "471dd246d799432e80c445a1b7539386" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 45291 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.001s
I20250905 08:23:12.678133  2425 consensus_queue.cc:1035] T 3bf27e0f0c36447ca775376df0b6d7c7 P 3e0812be3ee6439db579b18b371b3173 [LEADER]: Connected to new peer: Peer: permanent_uuid: "756bc2c8a1534a4291c60e6fef150cd5" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 43955 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
W20250905 08:23:13.788938  2427 thread.cc:641] rpc reactor (reactor) Time spent creating pthread: real 1.212s	user 0.535s	sys 0.661s
W20250905 08:23:13.789333  2427 thread.cc:608] rpc reactor (reactor) Time spent starting thread: real 1.212s	user 0.535s	sys 0.661s
W20250905 08:23:15.155390  2452 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:23:15.155882  2452 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:23:15.184007  2452 flags.cc:425] Enabled experimental flag: --enable_multi_tenancy=false
W20250905 08:23:16.384099  2452 thread.cc:641] rpc reactor (reactor) Time spent creating pthread: real 1.165s	user 0.450s	sys 0.714s
W20250905 08:23:16.384477  2452 thread.cc:608] rpc reactor (reactor) Time spent starting thread: real 1.166s	user 0.450s	sys 0.714s
W20250905 08:23:17.737462  2468 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:23:17.737986  2468 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:23:17.766934  2468 flags.cc:425] Enabled experimental flag: --enable_multi_tenancy=false
W20250905 08:23:19.003152  2468 thread.cc:641] rpc reactor (reactor) Time spent creating pthread: real 1.198s	user 0.507s	sys 0.686s
W20250905 08:23:19.003444  2468 thread.cc:608] rpc reactor (reactor) Time spent starting thread: real 1.199s	user 0.507s	sys 0.686s
W20250905 08:23:20.342916  2485 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:23:20.343422  2485 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:23:20.371114  2485 flags.cc:425] Enabled experimental flag: --enable_multi_tenancy=false
W20250905 08:23:21.545177  2485 thread.cc:641] rpc reactor (reactor) Time spent creating pthread: real 1.139s	user 0.459s	sys 0.679s
W20250905 08:23:21.545593  2485 thread.cc:608] rpc reactor (reactor) Time spent starting thread: real 1.139s	user 0.459s	sys 0.679s
I20250905 08:23:22.614943   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 1992
I20250905 08:23:22.638463   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 2125
I20250905 08:23:22.663380   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 2259
I20250905 08:23:22.689236   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 1900
2025-09-05T08:23:22Z chronyd exiting
[       OK ] AdminCliTest.TestDescribeTableColumnFlags (18720 ms)
[ RUN      ] AdminCliTest.TestAuthzResetCacheNotAuthorized
I20250905 08:23:22.743520   426 test_util.cc:276] Using random seed: -1928115516
I20250905 08:23:22.747400   426 ts_itest-base.cc:115] Starting cluster with:
I20250905 08:23:22.747567   426 ts_itest-base.cc:116] --------------
I20250905 08:23:22.747730   426 ts_itest-base.cc:117] 3 tablet servers
I20250905 08:23:22.747911   426 ts_itest-base.cc:118] 3 replicas per TS
I20250905 08:23:22.748066   426 ts_itest-base.cc:119] --------------
2025-09-05T08:23:22Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-09-05T08:23:22Z Disabled control of system clock
I20250905 08:23:22.781175   426 external_mini_cluster.cc:1366] Running /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
/tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.106.190:42529
--webserver_interface=127.0.106.190
--webserver_port=0
--builtin_ntp_servers=127.0.106.148:32909
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.0.106.190:42529
--superuser_acl=no-such-user with env {}
W20250905 08:23:23.051549  2505 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:23:23.052084  2505 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250905 08:23:23.052536  2505 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:23:23.080917  2505 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250905 08:23:23.081243  2505 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250905 08:23:23.081509  2505 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250905 08:23:23.081755  2505 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250905 08:23:23.114635  2505 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:32909
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.0.106.190:42529
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.106.190:42529
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--superuser_acl=<redacted>
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.0.106.190
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true

Master server version:
kudu 1.19.0-SNAPSHOT
revision e2e116a9cb215058aad161061214d11c617cedac
build type FASTDEBUG
built by None at 05 Sep 2025 08:06:49 UTC on 24a791456cd2
build id 7909
TSAN enabled
I20250905 08:23:23.115844  2505 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250905 08:23:23.117301  2505 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250905 08:23:23.127120  2511 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:23:24.531056  2510 debug-util.cc:398] Leaking SignalData structure 0x7b0800034ec0 after lost signal to thread 2505
W20250905 08:23:24.832234  2505 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.704s	user 0.515s	sys 1.189s
W20250905 08:23:24.832608  2505 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.705s	user 0.515s	sys 1.189s
W20250905 08:23:23.128563  2512 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:23:24.833124  2513 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1704 milliseconds
W20250905 08:23:24.834208  2514 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250905 08:23:24.834172  2505 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250905 08:23:24.837340  2505 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250905 08:23:24.839669  2505 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250905 08:23:24.841027  2505 hybrid_clock.cc:648] HybridClock initialized: now 1757060604841002 us; error 33 us; skew 500 ppm
I20250905 08:23:24.841907  2505 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250905 08:23:24.847541  2505 webserver.cc:480] Webserver started at http://127.0.106.190:36901/ using document root <none> and password file <none>
I20250905 08:23:24.848448  2505 fs_manager.cc:362] Metadata directory not provided
I20250905 08:23:24.848647  2505 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250905 08:23:24.849094  2505 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250905 08:23:24.854547  2505 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "a38e1b56fb3c4e46b13c8799076d8a89"
format_stamp: "Formatted at 2025-09-05 08:23:24 on dist-test-slave-0x95"
I20250905 08:23:24.855583  2505 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "a38e1b56fb3c4e46b13c8799076d8a89"
format_stamp: "Formatted at 2025-09-05 08:23:24 on dist-test-slave-0x95"
I20250905 08:23:24.862430  2505 fs_manager.cc:696] Time spent creating directory manager: real 0.006s	user 0.005s	sys 0.001s
I20250905 08:23:24.867255  2521 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:23:24.868149  2505 fs_manager.cc:730] Time spent opening block manager: real 0.003s	user 0.001s	sys 0.003s
I20250905 08:23:24.868453  2505 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
uuid: "a38e1b56fb3c4e46b13c8799076d8a89"
format_stamp: "Formatted at 2025-09-05 08:23:24 on dist-test-slave-0x95"
I20250905 08:23:24.868718  2505 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250905 08:23:24.924247  2505 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250905 08:23:24.925585  2505 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250905 08:23:24.926003  2505 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250905 08:23:24.989423  2505 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.190:42529
I20250905 08:23:24.989483  2572 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.190:42529 every 8 connection(s)
I20250905 08:23:24.992040  2505 server_base.cc:1179] Dumped server information to /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250905 08:23:24.997251  2573 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250905 08:23:25.000267   426 external_mini_cluster.cc:1428] Started /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu as pid 2505
I20250905 08:23:25.000664   426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250905 08:23:25.017042  2573 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P a38e1b56fb3c4e46b13c8799076d8a89: Bootstrap starting.
I20250905 08:23:25.021848  2573 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P a38e1b56fb3c4e46b13c8799076d8a89: Neither blocks nor log segments found. Creating new log.
I20250905 08:23:25.023339  2573 log.cc:826] T 00000000000000000000000000000000 P a38e1b56fb3c4e46b13c8799076d8a89: Log is configured to *not* fsync() on all Append() calls
I20250905 08:23:25.027526  2573 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P a38e1b56fb3c4e46b13c8799076d8a89: No bootstrap required, opened a new log
I20250905 08:23:25.044898  2573 raft_consensus.cc:357] T 00000000000000000000000000000000 P a38e1b56fb3c4e46b13c8799076d8a89 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a38e1b56fb3c4e46b13c8799076d8a89" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 42529 } }
I20250905 08:23:25.045537  2573 raft_consensus.cc:383] T 00000000000000000000000000000000 P a38e1b56fb3c4e46b13c8799076d8a89 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250905 08:23:25.045758  2573 raft_consensus.cc:738] T 00000000000000000000000000000000 P a38e1b56fb3c4e46b13c8799076d8a89 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: a38e1b56fb3c4e46b13c8799076d8a89, State: Initialized, Role: FOLLOWER
I20250905 08:23:25.046337  2573 consensus_queue.cc:260] T 00000000000000000000000000000000 P a38e1b56fb3c4e46b13c8799076d8a89 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a38e1b56fb3c4e46b13c8799076d8a89" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 42529 } }
I20250905 08:23:25.046806  2573 raft_consensus.cc:397] T 00000000000000000000000000000000 P a38e1b56fb3c4e46b13c8799076d8a89 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250905 08:23:25.047046  2573 raft_consensus.cc:491] T 00000000000000000000000000000000 P a38e1b56fb3c4e46b13c8799076d8a89 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250905 08:23:25.047291  2573 raft_consensus.cc:3058] T 00000000000000000000000000000000 P a38e1b56fb3c4e46b13c8799076d8a89 [term 0 FOLLOWER]: Advancing to term 1
I20250905 08:23:25.051515  2573 raft_consensus.cc:513] T 00000000000000000000000000000000 P a38e1b56fb3c4e46b13c8799076d8a89 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a38e1b56fb3c4e46b13c8799076d8a89" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 42529 } }
I20250905 08:23:25.052172  2573 leader_election.cc:304] T 00000000000000000000000000000000 P a38e1b56fb3c4e46b13c8799076d8a89 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: a38e1b56fb3c4e46b13c8799076d8a89; no voters: 
I20250905 08:23:25.053617  2573 leader_election.cc:290] T 00000000000000000000000000000000 P a38e1b56fb3c4e46b13c8799076d8a89 [CANDIDATE]: Term 1 election: Requested vote from peers 
I20250905 08:23:25.054144  2578 raft_consensus.cc:2802] T 00000000000000000000000000000000 P a38e1b56fb3c4e46b13c8799076d8a89 [term 1 FOLLOWER]: Leader election won for term 1
I20250905 08:23:25.056141  2578 raft_consensus.cc:695] T 00000000000000000000000000000000 P a38e1b56fb3c4e46b13c8799076d8a89 [term 1 LEADER]: Becoming Leader. State: Replica: a38e1b56fb3c4e46b13c8799076d8a89, State: Running, Role: LEADER
I20250905 08:23:25.056792  2578 consensus_queue.cc:237] T 00000000000000000000000000000000 P a38e1b56fb3c4e46b13c8799076d8a89 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a38e1b56fb3c4e46b13c8799076d8a89" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 42529 } }
I20250905 08:23:25.057600  2573 sys_catalog.cc:564] T 00000000000000000000000000000000 P a38e1b56fb3c4e46b13c8799076d8a89 [sys.catalog]: configured and running, proceeding with master startup.
I20250905 08:23:25.065956  2579 sys_catalog.cc:455] T 00000000000000000000000000000000 P a38e1b56fb3c4e46b13c8799076d8a89 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "a38e1b56fb3c4e46b13c8799076d8a89" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a38e1b56fb3c4e46b13c8799076d8a89" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 42529 } } }
I20250905 08:23:25.065963  2580 sys_catalog.cc:455] T 00000000000000000000000000000000 P a38e1b56fb3c4e46b13c8799076d8a89 [sys.catalog]: SysCatalogTable state changed. Reason: New leader a38e1b56fb3c4e46b13c8799076d8a89. Latest consensus state: current_term: 1 leader_uuid: "a38e1b56fb3c4e46b13c8799076d8a89" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a38e1b56fb3c4e46b13c8799076d8a89" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 42529 } } }
I20250905 08:23:25.066715  2580 sys_catalog.cc:458] T 00000000000000000000000000000000 P a38e1b56fb3c4e46b13c8799076d8a89 [sys.catalog]: This master's current role is: LEADER
I20250905 08:23:25.066712  2579 sys_catalog.cc:458] T 00000000000000000000000000000000 P a38e1b56fb3c4e46b13c8799076d8a89 [sys.catalog]: This master's current role is: LEADER
I20250905 08:23:25.070824  2587 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250905 08:23:25.082201  2587 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250905 08:23:25.095729  2587 catalog_manager.cc:1349] Generated new cluster ID: 8ee2432459dd447281e0efd0a8171fe3
I20250905 08:23:25.095981  2587 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250905 08:23:25.122054  2587 catalog_manager.cc:1372] Generated new certificate authority record
I20250905 08:23:25.123795  2587 catalog_manager.cc:1506] Loading token signing keys...
I20250905 08:23:25.145951  2587 catalog_manager.cc:5955] T 00000000000000000000000000000000 P a38e1b56fb3c4e46b13c8799076d8a89: Generated new TSK 0
I20250905 08:23:25.146660  2587 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250905 08:23:25.170758   426 external_mini_cluster.cc:1366] Running /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
/tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.129:0
--local_ip_for_outbound_sockets=127.0.106.129
--webserver_interface=127.0.106.129
--webserver_port=0
--tserver_master_addrs=127.0.106.190:42529
--builtin_ntp_servers=127.0.106.148:32909
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
W20250905 08:23:25.471623  2597 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:23:25.472123  2597 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250905 08:23:25.472572  2597 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:23:25.499667  2597 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250905 08:23:25.500571  2597 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.129
I20250905 08:23:25.530652  2597 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:32909
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.129:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.0.106.129
--webserver_port=0
--tserver_master_addrs=127.0.106.190:42529
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.129
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision e2e116a9cb215058aad161061214d11c617cedac
build type FASTDEBUG
built by None at 05 Sep 2025 08:06:49 UTC on 24a791456cd2
build id 7909
TSAN enabled
I20250905 08:23:25.531802  2597 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250905 08:23:25.533249  2597 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250905 08:23:25.544510  2603 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:23:26.898891  2606 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:23:26.900880  2605 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1354 milliseconds
W20250905 08:23:25.545599  2604 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:23:26.904006  2597 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.359s	user 0.392s	sys 0.861s
W20250905 08:23:26.904371  2597 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.359s	user 0.392s	sys 0.861s
I20250905 08:23:26.904662  2597 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250905 08:23:26.906522  2597 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250905 08:23:26.913947  2597 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250905 08:23:26.915444  2597 hybrid_clock.cc:648] HybridClock initialized: now 1757060606915375 us; error 67 us; skew 500 ppm
I20250905 08:23:26.916549  2597 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250905 08:23:26.925133  2597 webserver.cc:480] Webserver started at http://127.0.106.129:43519/ using document root <none> and password file <none>
I20250905 08:23:26.926471  2597 fs_manager.cc:362] Metadata directory not provided
I20250905 08:23:26.926769  2597 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250905 08:23:26.927376  2597 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250905 08:23:26.934595  2597 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "6c7aef2960f544c5ab645c9cadae1d33"
format_stamp: "Formatted at 2025-09-05 08:23:26 on dist-test-slave-0x95"
I20250905 08:23:26.936231  2597 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "6c7aef2960f544c5ab645c9cadae1d33"
format_stamp: "Formatted at 2025-09-05 08:23:26 on dist-test-slave-0x95"
I20250905 08:23:26.946445  2597 fs_manager.cc:696] Time spent creating directory manager: real 0.009s	user 0.008s	sys 0.000s
I20250905 08:23:26.954327  2613 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:23:26.955593  2597 fs_manager.cc:730] Time spent opening block manager: real 0.005s	user 0.006s	sys 0.000s
I20250905 08:23:26.956056  2597 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "6c7aef2960f544c5ab645c9cadae1d33"
format_stamp: "Formatted at 2025-09-05 08:23:26 on dist-test-slave-0x95"
I20250905 08:23:26.956521  2597 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250905 08:23:27.039228  2597 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250905 08:23:27.040599  2597 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250905 08:23:27.041013  2597 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250905 08:23:27.043543  2597 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250905 08:23:27.047437  2597 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250905 08:23:27.047638  2597 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:23:27.047924  2597 ts_tablet_manager.cc:610] Registered 0 tablets
I20250905 08:23:27.048095  2597 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:23:27.194590  2597 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.129:46579
I20250905 08:23:27.194705  2725 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.129:46579 every 8 connection(s)
I20250905 08:23:27.197245  2597 server_base.cc:1179] Dumped server information to /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250905 08:23:27.204424   426 external_mini_cluster.cc:1428] Started /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu as pid 2597
I20250905 08:23:27.204854   426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250905 08:23:27.211556   426 external_mini_cluster.cc:1366] Running /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
/tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.130:0
--local_ip_for_outbound_sockets=127.0.106.130
--webserver_interface=127.0.106.130
--webserver_port=0
--tserver_master_addrs=127.0.106.190:42529
--builtin_ntp_servers=127.0.106.148:32909
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250905 08:23:27.217983  2726 heartbeater.cc:344] Connected to a master server at 127.0.106.190:42529
I20250905 08:23:27.218355  2726 heartbeater.cc:461] Registering TS with master...
I20250905 08:23:27.219213  2726 heartbeater.cc:507] Master 127.0.106.190:42529 requested a full tablet report, sending...
I20250905 08:23:27.222051  2538 ts_manager.cc:194] Registered new tserver with Master: 6c7aef2960f544c5ab645c9cadae1d33 (127.0.106.129:46579)
I20250905 08:23:27.224390  2538 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.129:38145
W20250905 08:23:27.487059  2730 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:23:27.487463  2730 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250905 08:23:27.487943  2730 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:23:27.515203  2730 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250905 08:23:27.515955  2730 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.130
I20250905 08:23:27.546444  2730 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:32909
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.130:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.0.106.130
--webserver_port=0
--tserver_master_addrs=127.0.106.190:42529
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.130
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision e2e116a9cb215058aad161061214d11c617cedac
build type FASTDEBUG
built by None at 05 Sep 2025 08:06:49 UTC on 24a791456cd2
build id 7909
TSAN enabled
I20250905 08:23:27.547509  2730 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250905 08:23:27.548909  2730 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250905 08:23:27.560374  2736 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250905 08:23:28.228121  2726 heartbeater.cc:499] Master 127.0.106.190:42529 was elected leader, sending a full tablet report...
W20250905 08:23:27.560917  2737 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:23:28.677770  2739 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:23:28.679950  2738 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1116 milliseconds
I20250905 08:23:28.680087  2730 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250905 08:23:28.681253  2730 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250905 08:23:28.683203  2730 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250905 08:23:28.684499  2730 hybrid_clock.cc:648] HybridClock initialized: now 1757060608684475 us; error 27 us; skew 500 ppm
I20250905 08:23:28.685163  2730 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250905 08:23:28.690759  2730 webserver.cc:480] Webserver started at http://127.0.106.130:42445/ using document root <none> and password file <none>
I20250905 08:23:28.691613  2730 fs_manager.cc:362] Metadata directory not provided
I20250905 08:23:28.691798  2730 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250905 08:23:28.692238  2730 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250905 08:23:28.696297  2730 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data/instance:
uuid: "88282cb85d6f41519619c021cdeefc79"
format_stamp: "Formatted at 2025-09-05 08:23:28 on dist-test-slave-0x95"
I20250905 08:23:28.697234  2730 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal/instance:
uuid: "88282cb85d6f41519619c021cdeefc79"
format_stamp: "Formatted at 2025-09-05 08:23:28 on dist-test-slave-0x95"
I20250905 08:23:28.703320  2730 fs_manager.cc:696] Time spent creating directory manager: real 0.006s	user 0.006s	sys 0.001s
I20250905 08:23:28.708140  2747 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:23:28.708963  2730 fs_manager.cc:730] Time spent opening block manager: real 0.003s	user 0.004s	sys 0.000s
I20250905 08:23:28.709223  2730 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "88282cb85d6f41519619c021cdeefc79"
format_stamp: "Formatted at 2025-09-05 08:23:28 on dist-test-slave-0x95"
I20250905 08:23:28.709472  2730 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250905 08:23:28.753877  2730 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250905 08:23:28.755059  2730 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250905 08:23:28.755419  2730 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250905 08:23:28.757630  2730 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250905 08:23:28.761211  2730 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250905 08:23:28.761394  2730 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:23:28.761647  2730 ts_tablet_manager.cc:610] Registered 0 tablets
I20250905 08:23:28.761795  2730 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:23:28.884014  2730 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.130:39315
I20250905 08:23:28.884109  2859 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.130:39315 every 8 connection(s)
I20250905 08:23:28.886173  2730 server_base.cc:1179] Dumped server information to /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250905 08:23:28.893560   426 external_mini_cluster.cc:1428] Started /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu as pid 2730
I20250905 08:23:28.894017   426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal/instance
I20250905 08:23:28.900259   426 external_mini_cluster.cc:1366] Running /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
/tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.131:0
--local_ip_for_outbound_sockets=127.0.106.131
--webserver_interface=127.0.106.131
--webserver_port=0
--tserver_master_addrs=127.0.106.190:42529
--builtin_ntp_servers=127.0.106.148:32909
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250905 08:23:28.904778  2860 heartbeater.cc:344] Connected to a master server at 127.0.106.190:42529
I20250905 08:23:28.905150  2860 heartbeater.cc:461] Registering TS with master...
I20250905 08:23:28.906008  2860 heartbeater.cc:507] Master 127.0.106.190:42529 requested a full tablet report, sending...
I20250905 08:23:28.907788  2538 ts_manager.cc:194] Registered new tserver with Master: 88282cb85d6f41519619c021cdeefc79 (127.0.106.130:39315)
I20250905 08:23:28.909046  2538 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.130:32771
W20250905 08:23:29.170857  2864 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:23:29.171371  2864 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250905 08:23:29.171926  2864 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:23:29.207983  2864 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250905 08:23:29.208690  2864 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.131
I20250905 08:23:29.239738  2864 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:32909
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.131:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.0.106.131
--webserver_port=0
--tserver_master_addrs=127.0.106.190:42529
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.131
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision e2e116a9cb215058aad161061214d11c617cedac
build type FASTDEBUG
built by None at 05 Sep 2025 08:06:49 UTC on 24a791456cd2
build id 7909
TSAN enabled
I20250905 08:23:29.240869  2864 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250905 08:23:29.242475  2864 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250905 08:23:29.253053  2870 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250905 08:23:29.912422  2860 heartbeater.cc:499] Master 127.0.106.190:42529 was elected leader, sending a full tablet report...
W20250905 08:23:29.253731  2871 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:23:30.375402  2872 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection time-out
W20250905 08:23:30.377112  2873 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:23:30.379773  2864 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.127s	user 0.408s	sys 0.718s
W20250905 08:23:30.380167  2864 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.127s	user 0.408s	sys 0.719s
I20250905 08:23:30.380470  2864 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250905 08:23:30.381981  2864 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250905 08:23:30.384575  2864 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250905 08:23:30.386041  2864 hybrid_clock.cc:648] HybridClock initialized: now 1757060610385980 us; error 49 us; skew 500 ppm
I20250905 08:23:30.387117  2864 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250905 08:23:30.395345  2864 webserver.cc:480] Webserver started at http://127.0.106.131:36821/ using document root <none> and password file <none>
I20250905 08:23:30.396636  2864 fs_manager.cc:362] Metadata directory not provided
I20250905 08:23:30.396922  2864 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250905 08:23:30.397583  2864 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250905 08:23:30.404599  2864 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data/instance:
uuid: "8322f2a94d3d4d778f8d76355046997d"
format_stamp: "Formatted at 2025-09-05 08:23:30 on dist-test-slave-0x95"
I20250905 08:23:30.406066  2864 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal/instance:
uuid: "8322f2a94d3d4d778f8d76355046997d"
format_stamp: "Formatted at 2025-09-05 08:23:30 on dist-test-slave-0x95"
I20250905 08:23:30.415392  2864 fs_manager.cc:696] Time spent creating directory manager: real 0.009s	user 0.005s	sys 0.005s
I20250905 08:23:30.423138  2880 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:23:30.424234  2864 fs_manager.cc:730] Time spent opening block manager: real 0.004s	user 0.003s	sys 0.000s
I20250905 08:23:30.424619  2864 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "8322f2a94d3d4d778f8d76355046997d"
format_stamp: "Formatted at 2025-09-05 08:23:30 on dist-test-slave-0x95"
I20250905 08:23:30.425029  2864 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250905 08:23:30.525405  2864 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250905 08:23:30.526646  2864 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250905 08:23:30.526978  2864 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250905 08:23:30.529194  2864 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250905 08:23:30.532987  2864 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250905 08:23:30.533154  2864 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:23:30.533326  2864 ts_tablet_manager.cc:610] Registered 0 tablets
I20250905 08:23:30.533452  2864 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:23:30.660876  2864 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.131:40683
I20250905 08:23:30.661062  2992 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.131:40683 every 8 connection(s)
I20250905 08:23:30.663432  2864 server_base.cc:1179] Dumped server information to /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250905 08:23:30.663913   426 external_mini_cluster.cc:1428] Started /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu as pid 2864
I20250905 08:23:30.664270   426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal/instance
I20250905 08:23:30.687564  2993 heartbeater.cc:344] Connected to a master server at 127.0.106.190:42529
I20250905 08:23:30.687943  2993 heartbeater.cc:461] Registering TS with master...
I20250905 08:23:30.688781  2993 heartbeater.cc:507] Master 127.0.106.190:42529 requested a full tablet report, sending...
I20250905 08:23:30.690492  2538 ts_manager.cc:194] Registered new tserver with Master: 8322f2a94d3d4d778f8d76355046997d (127.0.106.131:40683)
I20250905 08:23:30.691659  2538 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.131:41281
I20250905 08:23:30.697685   426 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250905 08:23:30.727998  2538 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:47664:
name: "TestTable"
schema {
  columns {
    name: "key"
    type: INT32
    is_key: true
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "int_val"
    type: INT32
    is_key: false
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "string_val"
    type: STRING
    is_key: false
    is_nullable: true
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
  range_schema {
    columns {
      name: "key"
    }
  }
}
owner: "alice"
W20250905 08:23:30.744747  2538 catalog_manager.cc:6944] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table TestTable in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250905 08:23:30.787972  2928 tablet_service.cc:1468] Processing CreateTablet for tablet 984a87708b02458ca138e96c6dbeb1a7 (DEFAULT_TABLE table=TestTable [id=ca6d1445715d48098410c65a000b6e69]), partition=RANGE (key) PARTITION UNBOUNDED
I20250905 08:23:30.789659  2928 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 984a87708b02458ca138e96c6dbeb1a7. 1 dirs total, 0 dirs full, 0 dirs failed
I20250905 08:23:30.792563  2795 tablet_service.cc:1468] Processing CreateTablet for tablet 984a87708b02458ca138e96c6dbeb1a7 (DEFAULT_TABLE table=TestTable [id=ca6d1445715d48098410c65a000b6e69]), partition=RANGE (key) PARTITION UNBOUNDED
I20250905 08:23:30.792464  2661 tablet_service.cc:1468] Processing CreateTablet for tablet 984a87708b02458ca138e96c6dbeb1a7 (DEFAULT_TABLE table=TestTable [id=ca6d1445715d48098410c65a000b6e69]), partition=RANGE (key) PARTITION UNBOUNDED
I20250905 08:23:30.794085  2795 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 984a87708b02458ca138e96c6dbeb1a7. 1 dirs total, 0 dirs full, 0 dirs failed
I20250905 08:23:30.794828  2661 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 984a87708b02458ca138e96c6dbeb1a7. 1 dirs total, 0 dirs full, 0 dirs failed
I20250905 08:23:30.811658  3012 tablet_bootstrap.cc:492] T 984a87708b02458ca138e96c6dbeb1a7 P 8322f2a94d3d4d778f8d76355046997d: Bootstrap starting.
I20250905 08:23:30.816954  3012 tablet_bootstrap.cc:654] T 984a87708b02458ca138e96c6dbeb1a7 P 8322f2a94d3d4d778f8d76355046997d: Neither blocks nor log segments found. Creating new log.
I20250905 08:23:30.818930  3012 log.cc:826] T 984a87708b02458ca138e96c6dbeb1a7 P 8322f2a94d3d4d778f8d76355046997d: Log is configured to *not* fsync() on all Append() calls
I20250905 08:23:30.821280  3013 tablet_bootstrap.cc:492] T 984a87708b02458ca138e96c6dbeb1a7 P 88282cb85d6f41519619c021cdeefc79: Bootstrap starting.
I20250905 08:23:30.823899  3014 tablet_bootstrap.cc:492] T 984a87708b02458ca138e96c6dbeb1a7 P 6c7aef2960f544c5ab645c9cadae1d33: Bootstrap starting.
I20250905 08:23:30.825069  3012 tablet_bootstrap.cc:492] T 984a87708b02458ca138e96c6dbeb1a7 P 8322f2a94d3d4d778f8d76355046997d: No bootstrap required, opened a new log
I20250905 08:23:30.825408  3012 ts_tablet_manager.cc:1397] T 984a87708b02458ca138e96c6dbeb1a7 P 8322f2a94d3d4d778f8d76355046997d: Time spent bootstrapping tablet: real 0.014s	user 0.012s	sys 0.000s
I20250905 08:23:30.828104  3013 tablet_bootstrap.cc:654] T 984a87708b02458ca138e96c6dbeb1a7 P 88282cb85d6f41519619c021cdeefc79: Neither blocks nor log segments found. Creating new log.
I20250905 08:23:30.830220  3013 log.cc:826] T 984a87708b02458ca138e96c6dbeb1a7 P 88282cb85d6f41519619c021cdeefc79: Log is configured to *not* fsync() on all Append() calls
I20250905 08:23:30.830466  3014 tablet_bootstrap.cc:654] T 984a87708b02458ca138e96c6dbeb1a7 P 6c7aef2960f544c5ab645c9cadae1d33: Neither blocks nor log segments found. Creating new log.
I20250905 08:23:30.832818  3014 log.cc:826] T 984a87708b02458ca138e96c6dbeb1a7 P 6c7aef2960f544c5ab645c9cadae1d33: Log is configured to *not* fsync() on all Append() calls
I20250905 08:23:30.835440  3013 tablet_bootstrap.cc:492] T 984a87708b02458ca138e96c6dbeb1a7 P 88282cb85d6f41519619c021cdeefc79: No bootstrap required, opened a new log
I20250905 08:23:30.835896  3013 ts_tablet_manager.cc:1397] T 984a87708b02458ca138e96c6dbeb1a7 P 88282cb85d6f41519619c021cdeefc79: Time spent bootstrapping tablet: real 0.015s	user 0.009s	sys 0.005s
I20250905 08:23:30.837625  3014 tablet_bootstrap.cc:492] T 984a87708b02458ca138e96c6dbeb1a7 P 6c7aef2960f544c5ab645c9cadae1d33: No bootstrap required, opened a new log
I20250905 08:23:30.838022  3014 ts_tablet_manager.cc:1397] T 984a87708b02458ca138e96c6dbeb1a7 P 6c7aef2960f544c5ab645c9cadae1d33: Time spent bootstrapping tablet: real 0.015s	user 0.009s	sys 0.002s
I20250905 08:23:30.842905  3012 raft_consensus.cc:357] T 984a87708b02458ca138e96c6dbeb1a7 P 8322f2a94d3d4d778f8d76355046997d [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "8322f2a94d3d4d778f8d76355046997d" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40683 } } peers { permanent_uuid: "6c7aef2960f544c5ab645c9cadae1d33" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 46579 } } peers { permanent_uuid: "88282cb85d6f41519619c021cdeefc79" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 39315 } }
I20250905 08:23:30.843592  3012 raft_consensus.cc:383] T 984a87708b02458ca138e96c6dbeb1a7 P 8322f2a94d3d4d778f8d76355046997d [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250905 08:23:30.843883  3012 raft_consensus.cc:738] T 984a87708b02458ca138e96c6dbeb1a7 P 8322f2a94d3d4d778f8d76355046997d [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 8322f2a94d3d4d778f8d76355046997d, State: Initialized, Role: FOLLOWER
I20250905 08:23:30.844669  3012 consensus_queue.cc:260] T 984a87708b02458ca138e96c6dbeb1a7 P 8322f2a94d3d4d778f8d76355046997d [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "8322f2a94d3d4d778f8d76355046997d" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40683 } } peers { permanent_uuid: "6c7aef2960f544c5ab645c9cadae1d33" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 46579 } } peers { permanent_uuid: "88282cb85d6f41519619c021cdeefc79" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 39315 } }
I20250905 08:23:30.847826  2993 heartbeater.cc:499] Master 127.0.106.190:42529 was elected leader, sending a full tablet report...
I20250905 08:23:30.849000  3012 ts_tablet_manager.cc:1428] T 984a87708b02458ca138e96c6dbeb1a7 P 8322f2a94d3d4d778f8d76355046997d: Time spent starting tablet: real 0.023s	user 0.024s	sys 0.000s
I20250905 08:23:30.856297  3013 raft_consensus.cc:357] T 984a87708b02458ca138e96c6dbeb1a7 P 88282cb85d6f41519619c021cdeefc79 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "8322f2a94d3d4d778f8d76355046997d" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40683 } } peers { permanent_uuid: "6c7aef2960f544c5ab645c9cadae1d33" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 46579 } } peers { permanent_uuid: "88282cb85d6f41519619c021cdeefc79" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 39315 } }
I20250905 08:23:30.857105  3013 raft_consensus.cc:383] T 984a87708b02458ca138e96c6dbeb1a7 P 88282cb85d6f41519619c021cdeefc79 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250905 08:23:30.857376  3013 raft_consensus.cc:738] T 984a87708b02458ca138e96c6dbeb1a7 P 88282cb85d6f41519619c021cdeefc79 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 88282cb85d6f41519619c021cdeefc79, State: Initialized, Role: FOLLOWER
I20250905 08:23:30.858076  3013 consensus_queue.cc:260] T 984a87708b02458ca138e96c6dbeb1a7 P 88282cb85d6f41519619c021cdeefc79 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "8322f2a94d3d4d778f8d76355046997d" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40683 } } peers { permanent_uuid: "6c7aef2960f544c5ab645c9cadae1d33" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 46579 } } peers { permanent_uuid: "88282cb85d6f41519619c021cdeefc79" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 39315 } }
I20250905 08:23:30.862524  3013 ts_tablet_manager.cc:1428] T 984a87708b02458ca138e96c6dbeb1a7 P 88282cb85d6f41519619c021cdeefc79: Time spent starting tablet: real 0.026s	user 0.025s	sys 0.000s
I20250905 08:23:30.862926  3014 raft_consensus.cc:357] T 984a87708b02458ca138e96c6dbeb1a7 P 6c7aef2960f544c5ab645c9cadae1d33 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "8322f2a94d3d4d778f8d76355046997d" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40683 } } peers { permanent_uuid: "6c7aef2960f544c5ab645c9cadae1d33" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 46579 } } peers { permanent_uuid: "88282cb85d6f41519619c021cdeefc79" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 39315 } }
I20250905 08:23:30.863647  3014 raft_consensus.cc:383] T 984a87708b02458ca138e96c6dbeb1a7 P 6c7aef2960f544c5ab645c9cadae1d33 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250905 08:23:30.863858  3014 raft_consensus.cc:738] T 984a87708b02458ca138e96c6dbeb1a7 P 6c7aef2960f544c5ab645c9cadae1d33 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 6c7aef2960f544c5ab645c9cadae1d33, State: Initialized, Role: FOLLOWER
I20250905 08:23:30.864473  3014 consensus_queue.cc:260] T 984a87708b02458ca138e96c6dbeb1a7 P 6c7aef2960f544c5ab645c9cadae1d33 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "8322f2a94d3d4d778f8d76355046997d" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40683 } } peers { permanent_uuid: "6c7aef2960f544c5ab645c9cadae1d33" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 46579 } } peers { permanent_uuid: "88282cb85d6f41519619c021cdeefc79" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 39315 } }
I20250905 08:23:30.867686  3014 ts_tablet_manager.cc:1428] T 984a87708b02458ca138e96c6dbeb1a7 P 6c7aef2960f544c5ab645c9cadae1d33: Time spent starting tablet: real 0.029s	user 0.028s	sys 0.002s
W20250905 08:23:30.892220  2861 tablet.cc:2378] T 984a87708b02458ca138e96c6dbeb1a7 P 88282cb85d6f41519619c021cdeefc79: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250905 08:23:30.918323  2994 tablet.cc:2378] T 984a87708b02458ca138e96c6dbeb1a7 P 8322f2a94d3d4d778f8d76355046997d: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250905 08:23:30.949889  3020 raft_consensus.cc:491] T 984a87708b02458ca138e96c6dbeb1a7 P 6c7aef2960f544c5ab645c9cadae1d33 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250905 08:23:30.950307  3020 raft_consensus.cc:513] T 984a87708b02458ca138e96c6dbeb1a7 P 6c7aef2960f544c5ab645c9cadae1d33 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "8322f2a94d3d4d778f8d76355046997d" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40683 } } peers { permanent_uuid: "6c7aef2960f544c5ab645c9cadae1d33" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 46579 } } peers { permanent_uuid: "88282cb85d6f41519619c021cdeefc79" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 39315 } }
I20250905 08:23:30.952672  3020 leader_election.cc:290] T 984a87708b02458ca138e96c6dbeb1a7 P 6c7aef2960f544c5ab645c9cadae1d33 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 8322f2a94d3d4d778f8d76355046997d (127.0.106.131:40683), 88282cb85d6f41519619c021cdeefc79 (127.0.106.130:39315)
W20250905 08:23:30.955355  2727 tablet.cc:2378] T 984a87708b02458ca138e96c6dbeb1a7 P 6c7aef2960f544c5ab645c9cadae1d33: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250905 08:23:30.965729  2948 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "984a87708b02458ca138e96c6dbeb1a7" candidate_uuid: "6c7aef2960f544c5ab645c9cadae1d33" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "8322f2a94d3d4d778f8d76355046997d" is_pre_election: true
I20250905 08:23:30.965729  2815 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "984a87708b02458ca138e96c6dbeb1a7" candidate_uuid: "6c7aef2960f544c5ab645c9cadae1d33" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "88282cb85d6f41519619c021cdeefc79" is_pre_election: true
I20250905 08:23:30.966490  2815 raft_consensus.cc:2466] T 984a87708b02458ca138e96c6dbeb1a7 P 88282cb85d6f41519619c021cdeefc79 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 6c7aef2960f544c5ab645c9cadae1d33 in term 0.
I20250905 08:23:30.966476  2948 raft_consensus.cc:2466] T 984a87708b02458ca138e96c6dbeb1a7 P 8322f2a94d3d4d778f8d76355046997d [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 6c7aef2960f544c5ab645c9cadae1d33 in term 0.
I20250905 08:23:30.967603  2617 leader_election.cc:304] T 984a87708b02458ca138e96c6dbeb1a7 P 6c7aef2960f544c5ab645c9cadae1d33 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 6c7aef2960f544c5ab645c9cadae1d33, 8322f2a94d3d4d778f8d76355046997d; no voters: 
I20250905 08:23:30.968217  3020 raft_consensus.cc:2802] T 984a87708b02458ca138e96c6dbeb1a7 P 6c7aef2960f544c5ab645c9cadae1d33 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250905 08:23:30.968468  3020 raft_consensus.cc:491] T 984a87708b02458ca138e96c6dbeb1a7 P 6c7aef2960f544c5ab645c9cadae1d33 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250905 08:23:30.968708  3020 raft_consensus.cc:3058] T 984a87708b02458ca138e96c6dbeb1a7 P 6c7aef2960f544c5ab645c9cadae1d33 [term 0 FOLLOWER]: Advancing to term 1
I20250905 08:23:30.972785  3020 raft_consensus.cc:513] T 984a87708b02458ca138e96c6dbeb1a7 P 6c7aef2960f544c5ab645c9cadae1d33 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "8322f2a94d3d4d778f8d76355046997d" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40683 } } peers { permanent_uuid: "6c7aef2960f544c5ab645c9cadae1d33" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 46579 } } peers { permanent_uuid: "88282cb85d6f41519619c021cdeefc79" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 39315 } }
I20250905 08:23:30.974233  3020 leader_election.cc:290] T 984a87708b02458ca138e96c6dbeb1a7 P 6c7aef2960f544c5ab645c9cadae1d33 [CANDIDATE]: Term 1 election: Requested vote from peers 8322f2a94d3d4d778f8d76355046997d (127.0.106.131:40683), 88282cb85d6f41519619c021cdeefc79 (127.0.106.130:39315)
I20250905 08:23:30.974778  2948 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "984a87708b02458ca138e96c6dbeb1a7" candidate_uuid: "6c7aef2960f544c5ab645c9cadae1d33" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "8322f2a94d3d4d778f8d76355046997d"
I20250905 08:23:30.975041  2815 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "984a87708b02458ca138e96c6dbeb1a7" candidate_uuid: "6c7aef2960f544c5ab645c9cadae1d33" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "88282cb85d6f41519619c021cdeefc79"
I20250905 08:23:30.975214  2948 raft_consensus.cc:3058] T 984a87708b02458ca138e96c6dbeb1a7 P 8322f2a94d3d4d778f8d76355046997d [term 0 FOLLOWER]: Advancing to term 1
I20250905 08:23:30.975445  2815 raft_consensus.cc:3058] T 984a87708b02458ca138e96c6dbeb1a7 P 88282cb85d6f41519619c021cdeefc79 [term 0 FOLLOWER]: Advancing to term 1
I20250905 08:23:30.979193  2948 raft_consensus.cc:2466] T 984a87708b02458ca138e96c6dbeb1a7 P 8322f2a94d3d4d778f8d76355046997d [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 6c7aef2960f544c5ab645c9cadae1d33 in term 1.
I20250905 08:23:30.979807  2617 leader_election.cc:304] T 984a87708b02458ca138e96c6dbeb1a7 P 6c7aef2960f544c5ab645c9cadae1d33 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 6c7aef2960f544c5ab645c9cadae1d33, 8322f2a94d3d4d778f8d76355046997d; no voters: 
I20250905 08:23:30.979867  2815 raft_consensus.cc:2466] T 984a87708b02458ca138e96c6dbeb1a7 P 88282cb85d6f41519619c021cdeefc79 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 6c7aef2960f544c5ab645c9cadae1d33 in term 1.
I20250905 08:23:30.980406  3020 raft_consensus.cc:2802] T 984a87708b02458ca138e96c6dbeb1a7 P 6c7aef2960f544c5ab645c9cadae1d33 [term 1 FOLLOWER]: Leader election won for term 1
I20250905 08:23:30.981884  3020 raft_consensus.cc:695] T 984a87708b02458ca138e96c6dbeb1a7 P 6c7aef2960f544c5ab645c9cadae1d33 [term 1 LEADER]: Becoming Leader. State: Replica: 6c7aef2960f544c5ab645c9cadae1d33, State: Running, Role: LEADER
I20250905 08:23:30.982669  3020 consensus_queue.cc:237] T 984a87708b02458ca138e96c6dbeb1a7 P 6c7aef2960f544c5ab645c9cadae1d33 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "8322f2a94d3d4d778f8d76355046997d" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40683 } } peers { permanent_uuid: "6c7aef2960f544c5ab645c9cadae1d33" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 46579 } } peers { permanent_uuid: "88282cb85d6f41519619c021cdeefc79" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 39315 } }
I20250905 08:23:30.992035  2537 catalog_manager.cc:5582] T 984a87708b02458ca138e96c6dbeb1a7 P 6c7aef2960f544c5ab645c9cadae1d33 reported cstate change: term changed from 0 to 1, leader changed from <none> to 6c7aef2960f544c5ab645c9cadae1d33 (127.0.106.129). New cstate: current_term: 1 leader_uuid: "6c7aef2960f544c5ab645c9cadae1d33" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "8322f2a94d3d4d778f8d76355046997d" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40683 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "6c7aef2960f544c5ab645c9cadae1d33" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 46579 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "88282cb85d6f41519619c021cdeefc79" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 39315 } health_report { overall_health: UNKNOWN } } }
I20250905 08:23:31.030342   426 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250905 08:23:31.033290   426 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 6c7aef2960f544c5ab645c9cadae1d33 to finish bootstrapping
I20250905 08:23:31.044332   426 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 88282cb85d6f41519619c021cdeefc79 to finish bootstrapping
I20250905 08:23:31.053705   426 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 8322f2a94d3d4d778f8d76355046997d to finish bootstrapping
I20250905 08:23:31.427634  3020 consensus_queue.cc:1035] T 984a87708b02458ca138e96c6dbeb1a7 P 6c7aef2960f544c5ab645c9cadae1d33 [LEADER]: Connected to new peer: Peer: permanent_uuid: "8322f2a94d3d4d778f8d76355046997d" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40683 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.001s
I20250905 08:23:31.452682  3023 consensus_queue.cc:1035] T 984a87708b02458ca138e96c6dbeb1a7 P 6c7aef2960f544c5ab645c9cadae1d33 [LEADER]: Connected to new peer: Peer: permanent_uuid: "88282cb85d6f41519619c021cdeefc79" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 39315 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
W20250905 08:23:32.569532  2537 server_base.cc:1129] Unauthorized access attempt to method kudu.master.MasterService.RefreshAuthzCache from {username='slave'} at 127.0.0.1:47688
I20250905 08:23:33.600889   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 2597
I20250905 08:23:33.628130   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 2730
I20250905 08:23:33.649371   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 2864
I20250905 08:23:33.672384   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 2505
2025-09-05T08:23:33Z chronyd exiting
[       OK ] AdminCliTest.TestAuthzResetCacheNotAuthorized (10975 ms)
[ RUN      ] AdminCliTest.TestRebuildTables
I20250905 08:23:33.718596   426 test_util.cc:276] Using random seed: -1917140436
I20250905 08:23:33.722090   426 ts_itest-base.cc:115] Starting cluster with:
I20250905 08:23:33.722229   426 ts_itest-base.cc:116] --------------
I20250905 08:23:33.722400   426 ts_itest-base.cc:117] 3 tablet servers
I20250905 08:23:33.722532   426 ts_itest-base.cc:118] 3 replicas per TS
I20250905 08:23:33.722688   426 ts_itest-base.cc:119] --------------
2025-09-05T08:23:33Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-09-05T08:23:33Z Disabled control of system clock
I20250905 08:23:33.752570   426 external_mini_cluster.cc:1366] Running /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
/tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.106.190:35711
--webserver_interface=127.0.106.190
--webserver_port=0
--builtin_ntp_servers=127.0.106.148:41569
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.0.106.190:35711 with env {}
W20250905 08:23:34.029109  3064 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:23:34.029626  3064 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250905 08:23:34.030005  3064 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:23:34.058686  3064 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250905 08:23:34.058943  3064 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250905 08:23:34.059121  3064 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250905 08:23:34.059294  3064 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250905 08:23:34.090338  3064 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:41569
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.0.106.190:35711
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.106.190:35711
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.0.106.190
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true

Master server version:
kudu 1.19.0-SNAPSHOT
revision e2e116a9cb215058aad161061214d11c617cedac
build type FASTDEBUG
built by None at 05 Sep 2025 08:06:49 UTC on 24a791456cd2
build id 7909
TSAN enabled
I20250905 08:23:34.091369  3064 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250905 08:23:34.092878  3064 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250905 08:23:34.102357  3070 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:23:34.102645  3071 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:23:35.241079  3072 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1136 milliseconds
W20250905 08:23:35.242185  3073 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:23:35.244446  3064 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.142s	user 0.343s	sys 0.791s
W20250905 08:23:35.244755  3064 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.143s	user 0.343s	sys 0.791s
I20250905 08:23:35.244998  3064 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250905 08:23:35.246146  3064 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250905 08:23:35.248562  3064 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250905 08:23:35.249888  3064 hybrid_clock.cc:648] HybridClock initialized: now 1757060615249860 us; error 33 us; skew 500 ppm
I20250905 08:23:35.250576  3064 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250905 08:23:35.260591  3064 webserver.cc:480] Webserver started at http://127.0.106.190:42537/ using document root <none> and password file <none>
I20250905 08:23:35.261428  3064 fs_manager.cc:362] Metadata directory not provided
I20250905 08:23:35.261633  3064 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250905 08:23:35.262094  3064 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250905 08:23:35.266949  3064 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "3b7c796ecec441dbaff957664cc80c2c"
format_stamp: "Formatted at 2025-09-05 08:23:35 on dist-test-slave-0x95"
I20250905 08:23:35.267933  3064 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "3b7c796ecec441dbaff957664cc80c2c"
format_stamp: "Formatted at 2025-09-05 08:23:35 on dist-test-slave-0x95"
I20250905 08:23:35.275111  3064 fs_manager.cc:696] Time spent creating directory manager: real 0.007s	user 0.005s	sys 0.001s
I20250905 08:23:35.280571  3080 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:23:35.281597  3064 fs_manager.cc:730] Time spent opening block manager: real 0.004s	user 0.004s	sys 0.000s
I20250905 08:23:35.281931  3064 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
uuid: "3b7c796ecec441dbaff957664cc80c2c"
format_stamp: "Formatted at 2025-09-05 08:23:35 on dist-test-slave-0x95"
I20250905 08:23:35.282230  3064 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250905 08:23:35.346930  3064 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250905 08:23:35.348196  3064 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250905 08:23:35.348564  3064 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250905 08:23:35.419412  3064 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.190:35711
I20250905 08:23:35.419474  3131 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.190:35711 every 8 connection(s)
I20250905 08:23:35.422605  3064 server_base.cc:1179] Dumped server information to /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250905 08:23:35.428048  3132 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250905 08:23:35.427958   426 external_mini_cluster.cc:1428] Started /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu as pid 3064
I20250905 08:23:35.428402   426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250905 08:23:35.453469  3132 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c: Bootstrap starting.
I20250905 08:23:35.459103  3132 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c: Neither blocks nor log segments found. Creating new log.
I20250905 08:23:35.460765  3132 log.cc:826] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c: Log is configured to *not* fsync() on all Append() calls
I20250905 08:23:35.464666  3132 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c: No bootstrap required, opened a new log
I20250905 08:23:35.479568  3132 raft_consensus.cc:357] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "3b7c796ecec441dbaff957664cc80c2c" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 35711 } }
I20250905 08:23:35.480144  3132 raft_consensus.cc:383] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250905 08:23:35.480355  3132 raft_consensus.cc:738] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 3b7c796ecec441dbaff957664cc80c2c, State: Initialized, Role: FOLLOWER
I20250905 08:23:35.481103  3132 consensus_queue.cc:260] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "3b7c796ecec441dbaff957664cc80c2c" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 35711 } }
I20250905 08:23:35.481595  3132 raft_consensus.cc:397] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250905 08:23:35.481837  3132 raft_consensus.cc:491] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250905 08:23:35.482076  3132 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [term 0 FOLLOWER]: Advancing to term 1
I20250905 08:23:35.485541  3132 raft_consensus.cc:513] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "3b7c796ecec441dbaff957664cc80c2c" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 35711 } }
I20250905 08:23:35.486068  3132 leader_election.cc:304] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 3b7c796ecec441dbaff957664cc80c2c; no voters: 
I20250905 08:23:35.487804  3132 leader_election.cc:290] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [CANDIDATE]: Term 1 election: Requested vote from peers 
I20250905 08:23:35.488456  3137 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [term 1 FOLLOWER]: Leader election won for term 1
I20250905 08:23:35.490639  3137 raft_consensus.cc:695] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [term 1 LEADER]: Becoming Leader. State: Replica: 3b7c796ecec441dbaff957664cc80c2c, State: Running, Role: LEADER
I20250905 08:23:35.491338  3137 consensus_queue.cc:237] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "3b7c796ecec441dbaff957664cc80c2c" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 35711 } }
I20250905 08:23:35.492172  3132 sys_catalog.cc:564] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [sys.catalog]: configured and running, proceeding with master startup.
I20250905 08:23:35.497395  3138 sys_catalog.cc:455] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "3b7c796ecec441dbaff957664cc80c2c" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "3b7c796ecec441dbaff957664cc80c2c" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 35711 } } }
I20250905 08:23:35.497959  3138 sys_catalog.cc:458] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [sys.catalog]: This master's current role is: LEADER
I20250905 08:23:35.498831  3139 sys_catalog.cc:455] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [sys.catalog]: SysCatalogTable state changed. Reason: New leader 3b7c796ecec441dbaff957664cc80c2c. Latest consensus state: current_term: 1 leader_uuid: "3b7c796ecec441dbaff957664cc80c2c" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "3b7c796ecec441dbaff957664cc80c2c" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 35711 } } }
I20250905 08:23:35.499457  3139 sys_catalog.cc:458] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [sys.catalog]: This master's current role is: LEADER
I20250905 08:23:35.504607  3146 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250905 08:23:35.514425  3146 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250905 08:23:35.528734  3146 catalog_manager.cc:1349] Generated new cluster ID: 005533fd8e8d4b80b3b44f16d93c1bfa
I20250905 08:23:35.528939  3146 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250905 08:23:35.542794  3146 catalog_manager.cc:1372] Generated new certificate authority record
I20250905 08:23:35.543958  3146 catalog_manager.cc:1506] Loading token signing keys...
I20250905 08:23:35.570053  3146 catalog_manager.cc:5955] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c: Generated new TSK 0
I20250905 08:23:35.570948  3146 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250905 08:23:35.584753   426 external_mini_cluster.cc:1366] Running /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
/tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.129:0
--local_ip_for_outbound_sockets=127.0.106.129
--webserver_interface=127.0.106.129
--webserver_port=0
--tserver_master_addrs=127.0.106.190:35711
--builtin_ntp_servers=127.0.106.148:41569
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
W20250905 08:23:35.856047  3156 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:23:35.856561  3156 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250905 08:23:35.857098  3156 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:23:35.884820  3156 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250905 08:23:35.885557  3156 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.129
I20250905 08:23:35.916851  3156 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:41569
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.129:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.0.106.129
--webserver_port=0
--tserver_master_addrs=127.0.106.190:35711
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.129
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision e2e116a9cb215058aad161061214d11c617cedac
build type FASTDEBUG
built by None at 05 Sep 2025 08:06:49 UTC on 24a791456cd2
build id 7909
TSAN enabled
I20250905 08:23:35.917961  3156 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250905 08:23:35.919430  3156 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250905 08:23:35.930430  3162 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:23:37.334254  3161 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 3156
W20250905 08:23:37.711335  3161 kernel_stack_watchdog.cc:198] Thread 3156 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:642 for 400ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250905 08:23:35.931811  3163 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:23:37.713616  3165 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:23:37.713661  3164 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1781 milliseconds
W20250905 08:23:37.712935  3156 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.781s	user 0.641s	sys 1.095s
W20250905 08:23:37.714347  3156 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.783s	user 0.641s	sys 1.095s
I20250905 08:23:37.714638  3156 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250905 08:23:37.717382  3156 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250905 08:23:37.719362  3156 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250905 08:23:37.720690  3156 hybrid_clock.cc:648] HybridClock initialized: now 1757060617720644 us; error 53 us; skew 500 ppm
I20250905 08:23:37.721474  3156 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250905 08:23:37.727746  3156 webserver.cc:480] Webserver started at http://127.0.106.129:34887/ using document root <none> and password file <none>
I20250905 08:23:37.728716  3156 fs_manager.cc:362] Metadata directory not provided
I20250905 08:23:37.728912  3156 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250905 08:23:37.729297  3156 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250905 08:23:37.733424  3156 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "ea76bcce6ea441cfa874b93546e60292"
format_stamp: "Formatted at 2025-09-05 08:23:37 on dist-test-slave-0x95"
I20250905 08:23:37.734431  3156 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "ea76bcce6ea441cfa874b93546e60292"
format_stamp: "Formatted at 2025-09-05 08:23:37 on dist-test-slave-0x95"
I20250905 08:23:37.741060  3156 fs_manager.cc:696] Time spent creating directory manager: real 0.006s	user 0.008s	sys 0.000s
I20250905 08:23:37.746341  3172 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:23:37.747375  3156 fs_manager.cc:730] Time spent opening block manager: real 0.003s	user 0.002s	sys 0.000s
I20250905 08:23:37.747639  3156 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "ea76bcce6ea441cfa874b93546e60292"
format_stamp: "Formatted at 2025-09-05 08:23:37 on dist-test-slave-0x95"
I20250905 08:23:37.747931  3156 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250905 08:23:37.813375  3156 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250905 08:23:37.814746  3156 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250905 08:23:37.815141  3156 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250905 08:23:37.817492  3156 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250905 08:23:37.821297  3156 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250905 08:23:37.821493  3156 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:23:37.821717  3156 ts_tablet_manager.cc:610] Registered 0 tablets
I20250905 08:23:37.821875  3156 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:23:37.974854  3156 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.129:42483
I20250905 08:23:37.975014  3284 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.129:42483 every 8 connection(s)
I20250905 08:23:37.977283  3156 server_base.cc:1179] Dumped server information to /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250905 08:23:37.981637   426 external_mini_cluster.cc:1428] Started /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu as pid 3156
I20250905 08:23:37.982108   426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250905 08:23:37.990659   426 external_mini_cluster.cc:1366] Running /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
/tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.130:0
--local_ip_for_outbound_sockets=127.0.106.130
--webserver_interface=127.0.106.130
--webserver_port=0
--tserver_master_addrs=127.0.106.190:35711
--builtin_ntp_servers=127.0.106.148:41569
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250905 08:23:38.008162  3285 heartbeater.cc:344] Connected to a master server at 127.0.106.190:35711
I20250905 08:23:38.008597  3285 heartbeater.cc:461] Registering TS with master...
I20250905 08:23:38.009512  3285 heartbeater.cc:507] Master 127.0.106.190:35711 requested a full tablet report, sending...
I20250905 08:23:38.011655  3097 ts_manager.cc:194] Registered new tserver with Master: ea76bcce6ea441cfa874b93546e60292 (127.0.106.129:42483)
I20250905 08:23:38.013609  3097 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.129:56071
W20250905 08:23:38.280771  3289 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:23:38.281225  3289 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250905 08:23:38.281702  3289 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:23:38.310074  3289 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250905 08:23:38.310847  3289 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.130
I20250905 08:23:38.344885  3289 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:41569
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.130:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.0.106.130
--webserver_port=0
--tserver_master_addrs=127.0.106.190:35711
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.130
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision e2e116a9cb215058aad161061214d11c617cedac
build type FASTDEBUG
built by None at 05 Sep 2025 08:06:49 UTC on 24a791456cd2
build id 7909
TSAN enabled
I20250905 08:23:38.346143  3289 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250905 08:23:38.347685  3289 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250905 08:23:38.358417  3296 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250905 08:23:39.016400  3285 heartbeater.cc:499] Master 127.0.106.190:35711 was elected leader, sending a full tablet report...
W20250905 08:23:39.762923  3295 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 3289
W20250905 08:23:39.935549  3295 kernel_stack_watchdog.cc:198] Thread 3289 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:642 for 400ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250905 08:23:38.360373  3297 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:23:39.936793  3289 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.577s	user 0.000s	sys 0.002s
W20250905 08:23:39.937147  3289 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.578s	user 0.000s	sys 0.002s
W20250905 08:23:39.937142  3298 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1576 milliseconds
W20250905 08:23:39.941668  3299 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250905 08:23:39.941695  3289 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250905 08:23:39.942898  3289 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250905 08:23:39.944850  3289 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250905 08:23:39.946178  3289 hybrid_clock.cc:648] HybridClock initialized: now 1757060619946137 us; error 41 us; skew 500 ppm
I20250905 08:23:39.946923  3289 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250905 08:23:39.952597  3289 webserver.cc:480] Webserver started at http://127.0.106.130:40331/ using document root <none> and password file <none>
I20250905 08:23:39.953497  3289 fs_manager.cc:362] Metadata directory not provided
I20250905 08:23:39.953750  3289 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250905 08:23:39.954385  3289 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250905 08:23:39.960767  3289 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data/instance:
uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00"
format_stamp: "Formatted at 2025-09-05 08:23:39 on dist-test-slave-0x95"
I20250905 08:23:39.962054  3289 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal/instance:
uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00"
format_stamp: "Formatted at 2025-09-05 08:23:39 on dist-test-slave-0x95"
I20250905 08:23:39.969261  3289 fs_manager.cc:696] Time spent creating directory manager: real 0.007s	user 0.004s	sys 0.004s
I20250905 08:23:39.975323  3306 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:23:39.976384  3289 fs_manager.cc:730] Time spent opening block manager: real 0.004s	user 0.003s	sys 0.000s
I20250905 08:23:39.976727  3289 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00"
format_stamp: "Formatted at 2025-09-05 08:23:39 on dist-test-slave-0x95"
I20250905 08:23:39.977131  3289 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250905 08:23:40.029659  3289 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250905 08:23:40.030861  3289 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250905 08:23:40.031193  3289 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250905 08:23:40.033484  3289 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250905 08:23:40.036808  3289 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250905 08:23:40.036971  3289 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:23:40.037143  3289 ts_tablet_manager.cc:610] Registered 0 tablets
I20250905 08:23:40.037251  3289 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:23:40.157487  3289 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.130:34297
I20250905 08:23:40.157579  3418 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.130:34297 every 8 connection(s)
I20250905 08:23:40.159641  3289 server_base.cc:1179] Dumped server information to /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250905 08:23:40.164381   426 external_mini_cluster.cc:1428] Started /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu as pid 3289
I20250905 08:23:40.165133   426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal/instance
I20250905 08:23:40.171142   426 external_mini_cluster.cc:1366] Running /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
/tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.131:0
--local_ip_for_outbound_sockets=127.0.106.131
--webserver_interface=127.0.106.131
--webserver_port=0
--tserver_master_addrs=127.0.106.190:35711
--builtin_ntp_servers=127.0.106.148:41569
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250905 08:23:40.178347  3419 heartbeater.cc:344] Connected to a master server at 127.0.106.190:35711
I20250905 08:23:40.178799  3419 heartbeater.cc:461] Registering TS with master...
I20250905 08:23:40.179983  3419 heartbeater.cc:507] Master 127.0.106.190:35711 requested a full tablet report, sending...
I20250905 08:23:40.182152  3097 ts_manager.cc:194] Registered new tserver with Master: cad2e126b3ba40d9ad88ef5ddc39bb00 (127.0.106.130:34297)
I20250905 08:23:40.183341  3097 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.130:33335
W20250905 08:23:40.439831  3423 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:23:40.440275  3423 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250905 08:23:40.440737  3423 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:23:40.468729  3423 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250905 08:23:40.469409  3423 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.131
I20250905 08:23:40.503201  3423 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:41569
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.131:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.0.106.131
--webserver_port=0
--tserver_master_addrs=127.0.106.190:35711
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.131
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision e2e116a9cb215058aad161061214d11c617cedac
build type FASTDEBUG
built by None at 05 Sep 2025 08:06:49 UTC on 24a791456cd2
build id 7909
TSAN enabled
I20250905 08:23:40.504403  3423 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250905 08:23:40.505904  3423 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250905 08:23:40.516942  3429 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250905 08:23:41.186648  3419 heartbeater.cc:499] Master 127.0.106.190:35711 was elected leader, sending a full tablet report...
W20250905 08:23:41.922392  3428 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 3423
W20250905 08:23:42.007476  3428 kernel_stack_watchdog.cc:198] Thread 3423 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:642 for 402ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250905 08:23:40.517360  3430 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:23:42.008137  3423 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.491s	user 0.000s	sys 0.002s
W20250905 08:23:42.008473  3423 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.492s	user 0.000s	sys 0.002s
W20250905 08:23:42.008713  3431 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1490 milliseconds
I20250905 08:23:42.009966  3423 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
W20250905 08:23:42.010000  3432 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250905 08:23:42.013048  3423 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250905 08:23:42.014930  3423 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250905 08:23:42.016220  3423 hybrid_clock.cc:648] HybridClock initialized: now 1757060622016168 us; error 55 us; skew 500 ppm
I20250905 08:23:42.016963  3423 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250905 08:23:42.022411  3423 webserver.cc:480] Webserver started at http://127.0.106.131:45505/ using document root <none> and password file <none>
I20250905 08:23:42.023314  3423 fs_manager.cc:362] Metadata directory not provided
I20250905 08:23:42.023515  3423 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250905 08:23:42.023973  3423 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250905 08:23:42.027992  3423 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data/instance:
uuid: "4343ba2e6dc5477b8097b27ea603906b"
format_stamp: "Formatted at 2025-09-05 08:23:42 on dist-test-slave-0x95"
I20250905 08:23:42.029060  3423 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal/instance:
uuid: "4343ba2e6dc5477b8097b27ea603906b"
format_stamp: "Formatted at 2025-09-05 08:23:42 on dist-test-slave-0x95"
I20250905 08:23:42.035995  3423 fs_manager.cc:696] Time spent creating directory manager: real 0.006s	user 0.007s	sys 0.001s
I20250905 08:23:42.040777  3439 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:23:42.041654  3423 fs_manager.cc:730] Time spent opening block manager: real 0.003s	user 0.004s	sys 0.000s
I20250905 08:23:42.041921  3423 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "4343ba2e6dc5477b8097b27ea603906b"
format_stamp: "Formatted at 2025-09-05 08:23:42 on dist-test-slave-0x95"
I20250905 08:23:42.042207  3423 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250905 08:23:42.092526  3423 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250905 08:23:42.093971  3423 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250905 08:23:42.094375  3423 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250905 08:23:42.096719  3423 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250905 08:23:42.100407  3423 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250905 08:23:42.100605  3423 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:23:42.100843  3423 ts_tablet_manager.cc:610] Registered 0 tablets
I20250905 08:23:42.101001  3423 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:23:42.230016  3423 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.131:40383
I20250905 08:23:42.230113  3551 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.131:40383 every 8 connection(s)
I20250905 08:23:42.232224  3423 server_base.cc:1179] Dumped server information to /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250905 08:23:42.238476   426 external_mini_cluster.cc:1428] Started /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu as pid 3423
I20250905 08:23:42.238924   426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal/instance
I20250905 08:23:42.250825  3552 heartbeater.cc:344] Connected to a master server at 127.0.106.190:35711
I20250905 08:23:42.251161  3552 heartbeater.cc:461] Registering TS with master...
I20250905 08:23:42.252080  3552 heartbeater.cc:507] Master 127.0.106.190:35711 requested a full tablet report, sending...
I20250905 08:23:42.253928  3097 ts_manager.cc:194] Registered new tserver with Master: 4343ba2e6dc5477b8097b27ea603906b (127.0.106.131:40383)
I20250905 08:23:42.255144  3097 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.131:53177
I20250905 08:23:42.257522   426 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250905 08:23:42.287760   426 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250905 08:23:42.288071   426 test_util.cc:276] Using random seed: -1908570954
I20250905 08:23:42.322616  3097 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:51858:
name: "TestTable"
schema {
  columns {
    name: "key"
    type: INT32
    is_key: true
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "int_val"
    type: INT32
    is_key: false
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "string_val"
    type: STRING
    is_key: false
    is_nullable: true
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
}
num_replicas: 1
split_rows_range_bounds {
}
partition_schema {
  range_schema {
    columns {
      name: "key"
    }
  }
}
I20250905 08:23:42.360628  3487 tablet_service.cc:1468] Processing CreateTablet for tablet 339d359c12c14106aa07add6dbc3309f (DEFAULT_TABLE table=TestTable [id=730aaacb1d1749ddba90a5883aa0a363]), partition=RANGE (key) PARTITION UNBOUNDED
I20250905 08:23:42.361923  3487 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 339d359c12c14106aa07add6dbc3309f. 1 dirs total, 0 dirs full, 0 dirs failed
I20250905 08:23:42.379913  3572 tablet_bootstrap.cc:492] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b: Bootstrap starting.
I20250905 08:23:42.385107  3572 tablet_bootstrap.cc:654] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b: Neither blocks nor log segments found. Creating new log.
I20250905 08:23:42.386705  3572 log.cc:826] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b: Log is configured to *not* fsync() on all Append() calls
I20250905 08:23:42.390751  3572 tablet_bootstrap.cc:492] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b: No bootstrap required, opened a new log
I20250905 08:23:42.391099  3572 ts_tablet_manager.cc:1397] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b: Time spent bootstrapping tablet: real 0.012s	user 0.002s	sys 0.007s
I20250905 08:23:42.406606  3572 raft_consensus.cc:357] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } }
I20250905 08:23:42.407038  3572 raft_consensus.cc:383] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250905 08:23:42.407246  3572 raft_consensus.cc:738] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 4343ba2e6dc5477b8097b27ea603906b, State: Initialized, Role: FOLLOWER
I20250905 08:23:42.407856  3572 consensus_queue.cc:260] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } }
I20250905 08:23:42.408308  3572 raft_consensus.cc:397] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250905 08:23:42.408543  3572 raft_consensus.cc:491] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250905 08:23:42.408818  3572 raft_consensus.cc:3058] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b [term 0 FOLLOWER]: Advancing to term 1
I20250905 08:23:42.413723  3572 raft_consensus.cc:513] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } }
I20250905 08:23:42.414510  3572 leader_election.cc:304] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 4343ba2e6dc5477b8097b27ea603906b; no voters: 
I20250905 08:23:42.416137  3572 leader_election.cc:290] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b [CANDIDATE]: Term 1 election: Requested vote from peers 
I20250905 08:23:42.416805  3574 raft_consensus.cc:2802] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b [term 1 FOLLOWER]: Leader election won for term 1
I20250905 08:23:42.419281  3572 ts_tablet_manager.cc:1428] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b: Time spent starting tablet: real 0.028s	user 0.027s	sys 0.001s
I20250905 08:23:42.419559  3552 heartbeater.cc:499] Master 127.0.106.190:35711 was elected leader, sending a full tablet report...
I20250905 08:23:42.420301  3574 raft_consensus.cc:695] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b [term 1 LEADER]: Becoming Leader. State: Replica: 4343ba2e6dc5477b8097b27ea603906b, State: Running, Role: LEADER
I20250905 08:23:42.420939  3574 consensus_queue.cc:237] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } }
I20250905 08:23:42.432444  3096 catalog_manager.cc:5582] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b reported cstate change: term changed from 0 to 1, leader changed from <none> to 4343ba2e6dc5477b8097b27ea603906b (127.0.106.131). New cstate: current_term: 1 leader_uuid: "4343ba2e6dc5477b8097b27ea603906b" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } health_report { overall_health: HEALTHY } } }
I20250905 08:23:42.603477   426 test_util.cc:276] Using random seed: -1908255566
I20250905 08:23:42.624161  3095 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:51866:
name: "TestTable1"
schema {
  columns {
    name: "key"
    type: INT32
    is_key: true
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "int_val"
    type: INT32
    is_key: false
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "string_val"
    type: STRING
    is_key: false
    is_nullable: true
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
}
num_replicas: 1
split_rows_range_bounds {
}
partition_schema {
  range_schema {
    columns {
      name: "key"
    }
  }
}
I20250905 08:23:42.651803  3354 tablet_service.cc:1468] Processing CreateTablet for tablet 778e19d4e8684ca191e4734027cc9b36 (DEFAULT_TABLE table=TestTable1 [id=7d8dc0c608e5483f83f8c380c5d1a36b]), partition=RANGE (key) PARTITION UNBOUNDED
I20250905 08:23:42.653183  3354 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 778e19d4e8684ca191e4734027cc9b36. 1 dirs total, 0 dirs full, 0 dirs failed
I20250905 08:23:42.670351  3594 tablet_bootstrap.cc:492] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00: Bootstrap starting.
I20250905 08:23:42.675532  3594 tablet_bootstrap.cc:654] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00: Neither blocks nor log segments found. Creating new log.
I20250905 08:23:42.677053  3594 log.cc:826] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00: Log is configured to *not* fsync() on all Append() calls
I20250905 08:23:42.680879  3594 tablet_bootstrap.cc:492] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00: No bootstrap required, opened a new log
I20250905 08:23:42.681259  3594 ts_tablet_manager.cc:1397] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00: Time spent bootstrapping tablet: real 0.011s	user 0.005s	sys 0.005s
I20250905 08:23:42.696950  3594 raft_consensus.cc:357] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } }
I20250905 08:23:42.697418  3594 raft_consensus.cc:383] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250905 08:23:42.697614  3594 raft_consensus.cc:738] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: cad2e126b3ba40d9ad88ef5ddc39bb00, State: Initialized, Role: FOLLOWER
I20250905 08:23:42.698227  3594 consensus_queue.cc:260] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } }
I20250905 08:23:42.698700  3594 raft_consensus.cc:397] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250905 08:23:42.698935  3594 raft_consensus.cc:491] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250905 08:23:42.699190  3594 raft_consensus.cc:3058] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [term 0 FOLLOWER]: Advancing to term 1
I20250905 08:23:42.702886  3594 raft_consensus.cc:513] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } }
I20250905 08:23:42.703500  3594 leader_election.cc:304] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: cad2e126b3ba40d9ad88ef5ddc39bb00; no voters: 
I20250905 08:23:42.705152  3594 leader_election.cc:290] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [CANDIDATE]: Term 1 election: Requested vote from peers 
I20250905 08:23:42.705479  3596 raft_consensus.cc:2802] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [term 1 FOLLOWER]: Leader election won for term 1
I20250905 08:23:42.708323  3596 raft_consensus.cc:695] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [term 1 LEADER]: Becoming Leader. State: Replica: cad2e126b3ba40d9ad88ef5ddc39bb00, State: Running, Role: LEADER
I20250905 08:23:42.709084  3594 ts_tablet_manager.cc:1428] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00: Time spent starting tablet: real 0.028s	user 0.028s	sys 0.000s
I20250905 08:23:42.709064  3596 consensus_queue.cc:237] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } }
I20250905 08:23:42.718624  3095 catalog_manager.cc:5582] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 reported cstate change: term changed from 0 to 1, leader changed from <none> to cad2e126b3ba40d9ad88ef5ddc39bb00 (127.0.106.130). New cstate: current_term: 1 leader_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } health_report { overall_health: HEALTHY } } }
I20250905 08:23:42.869140   426 test_util.cc:276] Using random seed: -1907989892
I20250905 08:23:42.887650  3094 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:51868:
name: "TestTable2"
schema {
  columns {
    name: "key"
    type: INT32
    is_key: true
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "int_val"
    type: INT32
    is_key: false
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "string_val"
    type: STRING
    is_key: false
    is_nullable: true
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
}
num_replicas: 1
split_rows_range_bounds {
}
partition_schema {
  range_schema {
    columns {
      name: "key"
    }
  }
}
I20250905 08:23:42.914543  3220 tablet_service.cc:1468] Processing CreateTablet for tablet b3cc066ed2004a1390fef6fc0eb08162 (DEFAULT_TABLE table=TestTable2 [id=4a36f6f3139f4dc1870aa0ec53f8c2ea]), partition=RANGE (key) PARTITION UNBOUNDED
I20250905 08:23:42.916066  3220 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet b3cc066ed2004a1390fef6fc0eb08162. 1 dirs total, 0 dirs full, 0 dirs failed
I20250905 08:23:42.933220  3615 tablet_bootstrap.cc:492] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292: Bootstrap starting.
I20250905 08:23:42.938691  3615 tablet_bootstrap.cc:654] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292: Neither blocks nor log segments found. Creating new log.
I20250905 08:23:42.940356  3615 log.cc:826] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292: Log is configured to *not* fsync() on all Append() calls
I20250905 08:23:42.944146  3615 tablet_bootstrap.cc:492] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292: No bootstrap required, opened a new log
I20250905 08:23:42.944511  3615 ts_tablet_manager.cc:1397] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292: Time spent bootstrapping tablet: real 0.012s	user 0.006s	sys 0.005s
I20250905 08:23:42.959833  3615 raft_consensus.cc:357] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } }
I20250905 08:23:42.960323  3615 raft_consensus.cc:383] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250905 08:23:42.960522  3615 raft_consensus.cc:738] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: ea76bcce6ea441cfa874b93546e60292, State: Initialized, Role: FOLLOWER
I20250905 08:23:42.961128  3615 consensus_queue.cc:260] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } }
I20250905 08:23:42.961583  3615 raft_consensus.cc:397] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250905 08:23:42.961818  3615 raft_consensus.cc:491] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250905 08:23:42.962101  3615 raft_consensus.cc:3058] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292 [term 0 FOLLOWER]: Advancing to term 1
I20250905 08:23:42.965746  3615 raft_consensus.cc:513] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } }
I20250905 08:23:42.966341  3615 leader_election.cc:304] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: ea76bcce6ea441cfa874b93546e60292; no voters: 
I20250905 08:23:42.967758  3615 leader_election.cc:290] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292 [CANDIDATE]: Term 1 election: Requested vote from peers 
I20250905 08:23:42.968112  3617 raft_consensus.cc:2802] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292 [term 1 FOLLOWER]: Leader election won for term 1
I20250905 08:23:42.970672  3617 raft_consensus.cc:695] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292 [term 1 LEADER]: Becoming Leader. State: Replica: ea76bcce6ea441cfa874b93546e60292, State: Running, Role: LEADER
I20250905 08:23:42.971599  3617 consensus_queue.cc:237] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } }
I20250905 08:23:42.973862  3615 ts_tablet_manager.cc:1428] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292: Time spent starting tablet: real 0.029s	user 0.022s	sys 0.007s
I20250905 08:23:42.983199  3094 catalog_manager.cc:5582] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292 reported cstate change: term changed from 0 to 1, leader changed from <none> to ea76bcce6ea441cfa874b93546e60292 (127.0.106.129). New cstate: current_term: 1 leader_uuid: "ea76bcce6ea441cfa874b93546e60292" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } health_report { overall_health: HEALTHY } } }
W20250905 08:23:43.114073  3625 meta_cache.cc:1261] Time spent looking up entry by key: real 0.081s	user 0.000s	sys 0.000s
W20250905 08:23:43.114270  3612 meta_cache.cc:1261] Time spent looking up entry by key: real 0.080s	user 0.002s	sys 0.020s
W20250905 08:23:43.114866  3622 meta_cache.cc:1261] Time spent looking up entry by key: real 0.063s	user 0.013s	sys 0.000s
W20250905 08:23:43.115106  3624 meta_cache.cc:1261] Time spent looking up entry by key: real 0.064s	user 0.000s	sys 0.006s
W20250905 08:23:43.114081  3623 meta_cache.cc:1261] Time spent looking up entry by key: real 0.080s	user 0.030s	sys 0.046s
W20250905 08:23:43.268290  3609 meta_cache.cc:1261] Time spent looking up entry by key: real 0.235s	user 0.008s	sys 0.015s
W20250905 08:23:43.114615  3611 meta_cache.cc:1261] Time spent looking up entry by key: real 0.081s	user 0.002s	sys 0.004s
W20250905 08:23:43.118062  3610 meta_cache.cc:1261] Time spent looking up entry by key: real 0.078s	user 0.000s	sys 0.005s
I20250905 08:23:43.376067   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 3064
W20250905 08:23:43.445900  3552 heartbeater.cc:646] Failed to heartbeat to 127.0.106.190:35711 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.0.106.190:35711: connect: Connection refused (error 111)
W20250905 08:23:43.734110  3419 heartbeater.cc:646] Failed to heartbeat to 127.0.106.190:35711 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.0.106.190:35711: connect: Connection refused (error 111)
W20250905 08:23:44.022601  3285 heartbeater.cc:646] Failed to heartbeat to 127.0.106.190:35711 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.0.106.190:35711: connect: Connection refused (error 111)
W20250905 08:23:44.114823  3415 debug-util.cc:398] Leaking SignalData structure 0x7b08000acd60 after lost signal to thread 3291
W20250905 08:23:44.116465  3415 debug-util.cc:398] Leaking SignalData structure 0x7b08000acd80 after lost signal to thread 3418
I20250905 08:23:48.374119   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 3156
I20250905 08:23:48.395615   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 3289
I20250905 08:23:48.418066   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 3423
I20250905 08:23:48.443537   426 external_mini_cluster.cc:1366] Running /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
/tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.106.190:35711
--webserver_interface=127.0.106.190
--webserver_port=42537
--builtin_ntp_servers=127.0.106.148:41569
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.0.106.190:35711 with env {}
W20250905 08:23:48.724005  3696 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:23:48.724557  3696 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250905 08:23:48.725005  3696 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:23:48.754166  3696 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250905 08:23:48.754410  3696 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250905 08:23:48.754637  3696 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250905 08:23:48.754837  3696 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250905 08:23:48.786998  3696 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:41569
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.0.106.190:35711
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.106.190:35711
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.0.106.190
--webserver_port=42537
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true

Master server version:
kudu 1.19.0-SNAPSHOT
revision e2e116a9cb215058aad161061214d11c617cedac
build type FASTDEBUG
built by None at 05 Sep 2025 08:06:49 UTC on 24a791456cd2
build id 7909
TSAN enabled
I20250905 08:23:48.788130  3696 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250905 08:23:48.789500  3696 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250905 08:23:48.798565  3702 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:23:50.202952  3701 debug-util.cc:398] Leaking SignalData structure 0x7b0800037cc0 after lost signal to thread 3696
W20250905 08:23:50.468528  3701 kernel_stack_watchdog.cc:198] Thread 3696 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:642 for 400ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250905 08:23:48.800285  3703 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:23:50.469384  3696 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.670s	user 0.002s	sys 0.000s
W20250905 08:23:50.469776  3696 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.670s	user 0.002s	sys 0.000s
W20250905 08:23:50.469877  3704 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1668 milliseconds
I20250905 08:23:50.471246  3696 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
W20250905 08:23:50.471325  3705 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250905 08:23:50.474022  3696 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250905 08:23:50.476250  3696 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250905 08:23:50.477551  3696 hybrid_clock.cc:648] HybridClock initialized: now 1757060630477512 us; error 45 us; skew 500 ppm
I20250905 08:23:50.478220  3696 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250905 08:23:50.483582  3696 webserver.cc:480] Webserver started at http://127.0.106.190:42537/ using document root <none> and password file <none>
I20250905 08:23:50.484472  3696 fs_manager.cc:362] Metadata directory not provided
I20250905 08:23:50.484664  3696 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250905 08:23:50.491449  3696 fs_manager.cc:714] Time spent opening directory manager: real 0.004s	user 0.002s	sys 0.005s
I20250905 08:23:50.496244  3712 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:23:50.497956  3696 fs_manager.cc:730] Time spent opening block manager: real 0.005s	user 0.000s	sys 0.004s
I20250905 08:23:50.498301  3696 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
uuid: "3b7c796ecec441dbaff957664cc80c2c"
format_stamp: "Formatted at 2025-09-05 08:23:35 on dist-test-slave-0x95"
I20250905 08:23:50.500810  3696 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250905 08:23:50.545094  3696 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250905 08:23:50.546408  3696 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250905 08:23:50.546782  3696 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250905 08:23:50.611764  3696 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.190:35711
I20250905 08:23:50.611891  3763 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.190:35711 every 8 connection(s)
I20250905 08:23:50.614408  3696 server_base.cc:1179] Dumped server information to /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250905 08:23:50.621600   426 external_mini_cluster.cc:1428] Started /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu as pid 3696
I20250905 08:23:50.623569   426 external_mini_cluster.cc:1366] Running /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
/tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.129:42483
--local_ip_for_outbound_sockets=127.0.106.129
--tserver_master_addrs=127.0.106.190:35711
--webserver_port=34887
--webserver_interface=127.0.106.129
--builtin_ntp_servers=127.0.106.148:41569
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250905 08:23:50.626418  3764 sys_catalog.cc:263] Verifying existing consensus state
I20250905 08:23:50.633836  3764 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c: Bootstrap starting.
I20250905 08:23:50.643056  3764 log.cc:826] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c: Log is configured to *not* fsync() on all Append() calls
I20250905 08:23:50.688952  3764 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c: Bootstrap replayed 1/1 log segments. Stats: ops{read=19 overwritten=0 applied=19 ignored=0} inserts{seen=13 ignored=0} mutations{seen=11 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250905 08:23:50.689754  3764 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c: Bootstrap complete.
I20250905 08:23:50.708969  3764 raft_consensus.cc:357] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "3b7c796ecec441dbaff957664cc80c2c" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 35711 } }
I20250905 08:23:50.710912  3764 raft_consensus.cc:738] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: 3b7c796ecec441dbaff957664cc80c2c, State: Initialized, Role: FOLLOWER
I20250905 08:23:50.711666  3764 consensus_queue.cc:260] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 19, Last appended: 2.19, Last appended by leader: 19, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "3b7c796ecec441dbaff957664cc80c2c" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 35711 } }
I20250905 08:23:50.712170  3764 raft_consensus.cc:397] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [term 2 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250905 08:23:50.712481  3764 raft_consensus.cc:491] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [term 2 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250905 08:23:50.712755  3764 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [term 2 FOLLOWER]: Advancing to term 3
I20250905 08:23:50.717491  3764 raft_consensus.cc:513] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [term 3 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "3b7c796ecec441dbaff957664cc80c2c" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 35711 } }
I20250905 08:23:50.718001  3764 leader_election.cc:304] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [CANDIDATE]: Term 3 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 3b7c796ecec441dbaff957664cc80c2c; no voters: 
I20250905 08:23:50.719987  3764 leader_election.cc:290] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [CANDIDATE]: Term 3 election: Requested vote from peers 
I20250905 08:23:50.720223  3768 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [term 3 FOLLOWER]: Leader election won for term 3
I20250905 08:23:50.723023  3768 raft_consensus.cc:695] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [term 3 LEADER]: Becoming Leader. State: Replica: 3b7c796ecec441dbaff957664cc80c2c, State: Running, Role: LEADER
I20250905 08:23:50.723738  3768 consensus_queue.cc:237] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 19, Committed index: 19, Last appended: 2.19, Last appended by leader: 19, Current term: 3, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "3b7c796ecec441dbaff957664cc80c2c" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 35711 } }
I20250905 08:23:50.724341  3764 sys_catalog.cc:564] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [sys.catalog]: configured and running, proceeding with master startup.
I20250905 08:23:50.732491  3770 sys_catalog.cc:455] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [sys.catalog]: SysCatalogTable state changed. Reason: New leader 3b7c796ecec441dbaff957664cc80c2c. Latest consensus state: current_term: 3 leader_uuid: "3b7c796ecec441dbaff957664cc80c2c" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "3b7c796ecec441dbaff957664cc80c2c" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 35711 } } }
I20250905 08:23:50.733484  3770 sys_catalog.cc:458] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [sys.catalog]: This master's current role is: LEADER
I20250905 08:23:50.739856  3769 sys_catalog.cc:455] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 3 leader_uuid: "3b7c796ecec441dbaff957664cc80c2c" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "3b7c796ecec441dbaff957664cc80c2c" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 35711 } } }
I20250905 08:23:50.740605  3769 sys_catalog.cc:458] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [sys.catalog]: This master's current role is: LEADER
I20250905 08:23:50.742126  3774 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250905 08:23:50.754523  3774 catalog_manager.cc:671] Loaded metadata for table TestTable2 [id=4a36f6f3139f4dc1870aa0ec53f8c2ea]
I20250905 08:23:50.756263  3774 catalog_manager.cc:671] Loaded metadata for table TestTable1 [id=60cce2a86f5445939dd931e802a2d524]
I20250905 08:23:50.757824  3774 catalog_manager.cc:671] Loaded metadata for table TestTable [id=f2f96634e2d248b6b6bcec90878299d3]
I20250905 08:23:50.765182  3774 tablet_loader.cc:96] loaded metadata for tablet 339d359c12c14106aa07add6dbc3309f (table TestTable [id=f2f96634e2d248b6b6bcec90878299d3])
I20250905 08:23:50.766409  3774 tablet_loader.cc:96] loaded metadata for tablet 778e19d4e8684ca191e4734027cc9b36 (table TestTable1 [id=60cce2a86f5445939dd931e802a2d524])
I20250905 08:23:50.767798  3774 tablet_loader.cc:96] loaded metadata for tablet b3cc066ed2004a1390fef6fc0eb08162 (table TestTable2 [id=4a36f6f3139f4dc1870aa0ec53f8c2ea])
I20250905 08:23:50.769250  3774 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250905 08:23:50.774421  3774 catalog_manager.cc:1261] Loaded cluster ID: 005533fd8e8d4b80b3b44f16d93c1bfa
I20250905 08:23:50.774693  3774 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250905 08:23:50.782016  3774 catalog_manager.cc:1506] Loading token signing keys...
I20250905 08:23:50.787217  3774 catalog_manager.cc:5966] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c: Loaded TSK: 0
I20250905 08:23:50.788621  3774 catalog_manager.cc:1516] Initializing in-progress tserver states...
W20250905 08:23:50.936522  3766 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:23:50.936987  3766 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250905 08:23:50.937441  3766 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:23:50.966797  3766 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250905 08:23:50.967598  3766 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.129
I20250905 08:23:50.999958  3766 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:41569
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.129:42483
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.0.106.129
--webserver_port=34887
--tserver_master_addrs=127.0.106.190:35711
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.129
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision e2e116a9cb215058aad161061214d11c617cedac
build type FASTDEBUG
built by None at 05 Sep 2025 08:06:49 UTC on 24a791456cd2
build id 7909
TSAN enabled
I20250905 08:23:51.001082  3766 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250905 08:23:51.002456  3766 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250905 08:23:51.015185  3791 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:23:51.016381  3792 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:23:52.359025  3766 thread.cc:641] GCE (cloud detector) Time spent creating pthread: real 1.345s	user 0.467s	sys 0.873s
W20250905 08:23:52.359385  3766 thread.cc:608] GCE (cloud detector) Time spent starting thread: real 1.345s	user 0.467s	sys 0.873s
I20250905 08:23:52.366847  3766 server_base.cc:1047] running on GCE node
W20250905 08:23:52.368393  3796 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250905 08:23:52.369845  3766 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250905 08:23:52.372382  3766 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250905 08:23:52.373838  3766 hybrid_clock.cc:648] HybridClock initialized: now 1757060632373762 us; error 66 us; skew 500 ppm
I20250905 08:23:52.374998  3766 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250905 08:23:52.382946  3766 webserver.cc:480] Webserver started at http://127.0.106.129:34887/ using document root <none> and password file <none>
I20250905 08:23:52.384147  3766 fs_manager.cc:362] Metadata directory not provided
I20250905 08:23:52.384418  3766 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250905 08:23:52.394927  3766 fs_manager.cc:714] Time spent opening directory manager: real 0.006s	user 0.005s	sys 0.001s
I20250905 08:23:52.400624  3801 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:23:52.401840  3766 fs_manager.cc:730] Time spent opening block manager: real 0.004s	user 0.004s	sys 0.000s
I20250905 08:23:52.402220  3766 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "ea76bcce6ea441cfa874b93546e60292"
format_stamp: "Formatted at 2025-09-05 08:23:37 on dist-test-slave-0x95"
I20250905 08:23:52.404878  3766 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250905 08:23:52.488878  3766 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250905 08:23:52.490664  3766 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250905 08:23:52.491180  3766 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250905 08:23:52.496841  3766 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250905 08:23:52.503793  3808 ts_tablet_manager.cc:542] Loading tablet metadata (0/1 complete)
I20250905 08:23:52.511570  3766 ts_tablet_manager.cc:579] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20250905 08:23:52.511868  3766 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.009s	user 0.001s	sys 0.001s
I20250905 08:23:52.512192  3766 ts_tablet_manager.cc:594] Registering tablets (0/1 complete)
I20250905 08:23:52.518926  3766 ts_tablet_manager.cc:610] Registered 1 tablets
I20250905 08:23:52.519151  3766 ts_tablet_manager.cc:589] Time spent register tablets: real 0.007s	user 0.004s	sys 0.000s
I20250905 08:23:52.519497  3808 tablet_bootstrap.cc:492] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292: Bootstrap starting.
I20250905 08:23:52.589857  3808 log.cc:826] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292: Log is configured to *not* fsync() on all Append() calls
I20250905 08:23:52.690716  3808 tablet_bootstrap.cc:492] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292: Bootstrap replayed 1/1 log segments. Stats: ops{read=6 overwritten=0 applied=6 ignored=0} inserts{seen=250 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250905 08:23:52.691717  3808 tablet_bootstrap.cc:492] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292: Bootstrap complete.
I20250905 08:23:52.693315  3808 ts_tablet_manager.cc:1397] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292: Time spent bootstrapping tablet: real 0.174s	user 0.140s	sys 0.027s
I20250905 08:23:52.713629  3808 raft_consensus.cc:357] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } }
I20250905 08:23:52.716529  3808 raft_consensus.cc:738] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: ea76bcce6ea441cfa874b93546e60292, State: Initialized, Role: FOLLOWER
I20250905 08:23:52.717377  3808 consensus_queue.cc:260] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 6, Last appended: 1.6, Last appended by leader: 6, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } }
I20250905 08:23:52.717986  3808 raft_consensus.cc:397] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292 [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250905 08:23:52.718281  3808 raft_consensus.cc:491] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292 [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250905 08:23:52.718647  3808 raft_consensus.cc:3058] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292 [term 1 FOLLOWER]: Advancing to term 2
I20250905 08:23:52.726223  3808 raft_consensus.cc:513] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } }
I20250905 08:23:52.726989  3808 leader_election.cc:304] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: ea76bcce6ea441cfa874b93546e60292; no voters: 
I20250905 08:23:52.728821  3808 leader_election.cc:290] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292 [CANDIDATE]: Term 2 election: Requested vote from peers 
I20250905 08:23:52.729379  3916 raft_consensus.cc:2802] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292 [term 2 FOLLOWER]: Leader election won for term 2
I20250905 08:23:52.733844  3915 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.129:42483 every 8 connection(s)
I20250905 08:23:52.734345  3766 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.129:42483
I20250905 08:23:52.738456  3766 server_base.cc:1179] Dumped server information to /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250905 08:23:52.738900  3916 raft_consensus.cc:695] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292 [term 2 LEADER]: Becoming Leader. State: Replica: ea76bcce6ea441cfa874b93546e60292, State: Running, Role: LEADER
I20250905 08:23:52.739679  3916 consensus_queue.cc:237] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 6, Committed index: 6, Last appended: 1.6, Last appended by leader: 6, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } }
I20250905 08:23:52.745009   426 external_mini_cluster.cc:1428] Started /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu as pid 3766
I20250905 08:23:52.745898  3808 ts_tablet_manager.cc:1428] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292: Time spent starting tablet: real 0.052s	user 0.043s	sys 0.008s
I20250905 08:23:52.747009   426 external_mini_cluster.cc:1366] Running /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
/tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.130:34297
--local_ip_for_outbound_sockets=127.0.106.130
--tserver_master_addrs=127.0.106.190:35711
--webserver_port=40331
--webserver_interface=127.0.106.130
--builtin_ntp_servers=127.0.106.148:41569
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250905 08:23:52.803802  3919 heartbeater.cc:344] Connected to a master server at 127.0.106.190:35711
I20250905 08:23:52.804275  3919 heartbeater.cc:461] Registering TS with master...
I20250905 08:23:52.805147  3919 heartbeater.cc:507] Master 127.0.106.190:35711 requested a full tablet report, sending...
I20250905 08:23:52.808869  3729 ts_manager.cc:194] Registered new tserver with Master: ea76bcce6ea441cfa874b93546e60292 (127.0.106.129:42483)
I20250905 08:23:52.811587  3729 catalog_manager.cc:5582] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292 reported cstate change: term changed from 1 to 2. New cstate: current_term: 2 leader_uuid: "ea76bcce6ea441cfa874b93546e60292" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } health_report { overall_health: HEALTHY } } }
I20250905 08:23:52.842947  3729 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.129:47365
I20250905 08:23:52.846156  3919 heartbeater.cc:499] Master 127.0.106.190:35711 was elected leader, sending a full tablet report...
W20250905 08:23:53.064687  3922 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:23:53.065167  3922 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250905 08:23:53.065662  3922 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:23:53.093858  3922 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250905 08:23:53.094626  3922 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.130
I20250905 08:23:53.126302  3922 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:41569
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.130:34297
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.0.106.130
--webserver_port=40331
--tserver_master_addrs=127.0.106.190:35711
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.130
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision e2e116a9cb215058aad161061214d11c617cedac
build type FASTDEBUG
built by None at 05 Sep 2025 08:06:49 UTC on 24a791456cd2
build id 7909
TSAN enabled
I20250905 08:23:53.127537  3922 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250905 08:23:53.129040  3922 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250905 08:23:53.140542  3935 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:23:54.544871  3934 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 3922
W20250905 08:23:54.588804  3934 kernel_stack_watchdog.cc:198] Thread 3922 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:642 for 400ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250905 08:23:53.141916  3936 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:23:54.589999  3922 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.448s	user 0.000s	sys 0.002s
W20250905 08:23:54.590355  3922 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.449s	user 0.000s	sys 0.002s
W20250905 08:23:54.592239  3938 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:23:54.594911  3937 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1447 milliseconds
I20250905 08:23:54.594924  3922 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250905 08:23:54.596107  3922 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250905 08:23:54.598555  3922 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250905 08:23:54.599920  3922 hybrid_clock.cc:648] HybridClock initialized: now 1757060634599876 us; error 49 us; skew 500 ppm
I20250905 08:23:54.600669  3922 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250905 08:23:54.606918  3922 webserver.cc:480] Webserver started at http://127.0.106.130:40331/ using document root <none> and password file <none>
I20250905 08:23:54.607853  3922 fs_manager.cc:362] Metadata directory not provided
I20250905 08:23:54.608076  3922 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250905 08:23:54.617058  3922 fs_manager.cc:714] Time spent opening directory manager: real 0.006s	user 0.008s	sys 0.000s
I20250905 08:23:54.622197  3945 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:23:54.623221  3922 fs_manager.cc:730] Time spent opening block manager: real 0.004s	user 0.004s	sys 0.001s
I20250905 08:23:54.623564  3922 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00"
format_stamp: "Formatted at 2025-09-05 08:23:39 on dist-test-slave-0x95"
I20250905 08:23:54.626143  3922 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250905 08:23:54.675163  3922 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250905 08:23:54.676451  3922 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250905 08:23:54.676834  3922 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250905 08:23:54.678933  3922 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250905 08:23:54.684037  3952 ts_tablet_manager.cc:542] Loading tablet metadata (0/1 complete)
I20250905 08:23:54.691395  3922 ts_tablet_manager.cc:579] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20250905 08:23:54.691650  3922 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.009s	user 0.001s	sys 0.000s
I20250905 08:23:54.691978  3922 ts_tablet_manager.cc:594] Registering tablets (0/1 complete)
I20250905 08:23:54.695982  3922 ts_tablet_manager.cc:610] Registered 1 tablets
I20250905 08:23:54.696167  3922 ts_tablet_manager.cc:589] Time spent register tablets: real 0.004s	user 0.004s	sys 0.000s
I20250905 08:23:54.696521  3952 tablet_bootstrap.cc:492] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00: Bootstrap starting.
I20250905 08:23:54.756093  3952 log.cc:826] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00: Log is configured to *not* fsync() on all Append() calls
I20250905 08:23:54.855237  3922 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.130:34297
I20250905 08:23:54.858960  3922 server_base.cc:1179] Dumped server information to /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250905 08:23:54.859894  4059 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.130:34297 every 8 connection(s)
I20250905 08:23:54.862704   426 external_mini_cluster.cc:1428] Started /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu as pid 3922
I20250905 08:23:54.864487   426 external_mini_cluster.cc:1366] Running /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
/tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.131:40383
--local_ip_for_outbound_sockets=127.0.106.131
--tserver_master_addrs=127.0.106.190:35711
--webserver_port=45505
--webserver_interface=127.0.106.131
--builtin_ntp_servers=127.0.106.148:41569
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250905 08:23:54.888746  3952 tablet_bootstrap.cc:492] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00: Bootstrap replayed 1/1 log segments. Stats: ops{read=6 overwritten=0 applied=6 ignored=0} inserts{seen=250 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250905 08:23:54.889700  3952 tablet_bootstrap.cc:492] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00: Bootstrap complete.
I20250905 08:23:54.891296  3952 ts_tablet_manager.cc:1397] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00: Time spent bootstrapping tablet: real 0.195s	user 0.138s	sys 0.042s
I20250905 08:23:54.898715  4060 heartbeater.cc:344] Connected to a master server at 127.0.106.190:35711
I20250905 08:23:54.899215  4060 heartbeater.cc:461] Registering TS with master...
I20250905 08:23:54.900525  4060 heartbeater.cc:507] Master 127.0.106.190:35711 requested a full tablet report, sending...
I20250905 08:23:54.904210  3729 ts_manager.cc:194] Registered new tserver with Master: cad2e126b3ba40d9ad88ef5ddc39bb00 (127.0.106.130:34297)
I20250905 08:23:54.907321  3729 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.130:54635
I20250905 08:23:54.907953  3952 raft_consensus.cc:357] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } }
I20250905 08:23:54.910862  3952 raft_consensus.cc:738] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: cad2e126b3ba40d9ad88ef5ddc39bb00, State: Initialized, Role: FOLLOWER
I20250905 08:23:54.911672  3952 consensus_queue.cc:260] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 6, Last appended: 1.6, Last appended by leader: 6, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } }
I20250905 08:23:54.912358  3952 raft_consensus.cc:397] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250905 08:23:54.912722  3952 raft_consensus.cc:491] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250905 08:23:54.913089  3952 raft_consensus.cc:3058] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [term 1 FOLLOWER]: Advancing to term 2
I20250905 08:23:54.922139  3952 raft_consensus.cc:513] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } }
I20250905 08:23:54.922865  3952 leader_election.cc:304] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: cad2e126b3ba40d9ad88ef5ddc39bb00; no voters: 
I20250905 08:23:54.925099  3952 leader_election.cc:290] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [CANDIDATE]: Term 2 election: Requested vote from peers 
I20250905 08:23:54.925614  4065 raft_consensus.cc:2802] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [term 2 FOLLOWER]: Leader election won for term 2
I20250905 08:23:54.928368  4060 heartbeater.cc:499] Master 127.0.106.190:35711 was elected leader, sending a full tablet report...
I20250905 08:23:54.929165  3952 ts_tablet_manager.cc:1428] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00: Time spent starting tablet: real 0.038s	user 0.028s	sys 0.008s
I20250905 08:23:54.938850  4065 raft_consensus.cc:695] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [term 2 LEADER]: Becoming Leader. State: Replica: cad2e126b3ba40d9ad88ef5ddc39bb00, State: Running, Role: LEADER
I20250905 08:23:54.939635  4065 consensus_queue.cc:237] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 6, Committed index: 6, Last appended: 1.6, Last appended by leader: 6, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } }
I20250905 08:23:54.949826  3729 catalog_manager.cc:5582] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 reported cstate change: term changed from 0 to 2, leader changed from <none> to cad2e126b3ba40d9ad88ef5ddc39bb00 (127.0.106.130), VOTER cad2e126b3ba40d9ad88ef5ddc39bb00 (127.0.106.130) added. New cstate: current_term: 2 leader_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } health_report { overall_health: HEALTHY } } }
I20250905 08:23:54.975391  4015 consensus_queue.cc:237] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 7, Committed index: 7, Last appended: 2.7, Last appended by leader: 6, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: 8 OBSOLETE_local: true peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: NON_VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: true } }
I20250905 08:23:54.978964  4065 raft_consensus.cc:2953] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [term 2 LEADER]: Committing config change with OpId 2.8: config changed from index -1 to 8, NON_VOTER ea76bcce6ea441cfa874b93546e60292 (127.0.106.129) added. New config: { opid_index: 8 OBSOLETE_local: true peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: NON_VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: true } } }
I20250905 08:23:54.988700  3715 catalog_manager.cc:5095] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet 778e19d4e8684ca191e4734027cc9b36 with cas_config_opid_index -1: ChangeConfig:ADD_PEER:NON_VOTER succeeded (attempt 1)
W20250905 08:23:54.994167  3948 consensus_peers.cc:489] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 -> Peer ea76bcce6ea441cfa874b93546e60292 (127.0.106.129:42483): Couldn't send request to peer ea76bcce6ea441cfa874b93546e60292. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: 778e19d4e8684ca191e4734027cc9b36. This is attempt 1: this message will repeat every 5th retry.
I20250905 08:23:54.995003  3729 catalog_manager.cc:5582] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 reported cstate change: config changed from index -1 to 8, NON_VOTER ea76bcce6ea441cfa874b93546e60292 (127.0.106.129) added. New cstate: current_term: 2 leader_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" committed_config { opid_index: 8 OBSOLETE_local: true peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: NON_VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: true } health_report { overall_health: UNKNOWN } } }
W20250905 08:23:55.002418  3729 catalog_manager.cc:5260] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet 778e19d4e8684ca191e4734027cc9b36 with cas_config_opid_index 8: no extra replica candidate found for tablet 778e19d4e8684ca191e4734027cc9b36 (table TestTable1 [id=60cce2a86f5445939dd931e802a2d524]): Not found: could not select location for extra replica: not enough tablet servers to satisfy replica placement policy: the total number of registered tablet servers (2) does not allow for adding an extra replica; consider bringing up more to have at least 4 tablet servers up and running
W20250905 08:23:55.187942  4064 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:23:55.188414  4064 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250905 08:23:55.188941  4064 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:23:55.217963  4064 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250905 08:23:55.218767  4064 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.131
I20250905 08:23:55.250228  4064 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:41569
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.131:40383
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.0.106.131
--webserver_port=45505
--tserver_master_addrs=127.0.106.190:35711
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.131
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision e2e116a9cb215058aad161061214d11c617cedac
build type FASTDEBUG
built by None at 05 Sep 2025 08:06:49 UTC on 24a791456cd2
build id 7909
TSAN enabled
I20250905 08:23:55.251396  4064 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250905 08:23:55.252888  4064 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250905 08:23:55.264034  4082 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250905 08:23:55.545728  4089 ts_tablet_manager.cc:927] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292: Initiating tablet copy from peer cad2e126b3ba40d9ad88ef5ddc39bb00 (127.0.106.130:34297)
I20250905 08:23:55.548308  4089 tablet_copy_client.cc:323] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292: tablet copy: Beginning tablet copy session from remote peer at address 127.0.106.130:34297
I20250905 08:23:55.567390  4035 tablet_copy_service.cc:140] P cad2e126b3ba40d9ad88ef5ddc39bb00: Received BeginTabletCopySession request for tablet 778e19d4e8684ca191e4734027cc9b36 from peer ea76bcce6ea441cfa874b93546e60292 ({username='slave'} at 127.0.106.129:48179)
I20250905 08:23:55.568213  4035 tablet_copy_service.cc:161] P cad2e126b3ba40d9ad88ef5ddc39bb00: Beginning new tablet copy session on tablet 778e19d4e8684ca191e4734027cc9b36 from peer ea76bcce6ea441cfa874b93546e60292 at {username='slave'} at 127.0.106.129:48179: session id = ea76bcce6ea441cfa874b93546e60292-778e19d4e8684ca191e4734027cc9b36
I20250905 08:23:55.578495  4035 tablet_copy_source_session.cc:215] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00: Tablet Copy: opened 0 blocks and 1 log segments
I20250905 08:23:55.582993  4089 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 778e19d4e8684ca191e4734027cc9b36. 1 dirs total, 0 dirs full, 0 dirs failed
I20250905 08:23:55.592249  4089 tablet_copy_client.cc:806] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292: tablet copy: Starting download of 0 data blocks...
I20250905 08:23:55.592736  4089 tablet_copy_client.cc:670] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292: tablet copy: Starting download of 1 WAL segments...
I20250905 08:23:55.597383  4089 tablet_copy_client.cc:538] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250905 08:23:55.602377  4089 tablet_bootstrap.cc:492] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292: Bootstrap starting.
I20250905 08:23:55.668334  4089 tablet_bootstrap.cc:492] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292: Bootstrap replayed 1/1 log segments. Stats: ops{read=8 overwritten=0 applied=8 ignored=0} inserts{seen=250 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250905 08:23:55.668924  4089 tablet_bootstrap.cc:492] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292: Bootstrap complete.
I20250905 08:23:55.669438  4089 ts_tablet_manager.cc:1397] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292: Time spent bootstrapping tablet: real 0.067s	user 0.053s	sys 0.016s
I20250905 08:23:55.671900  4089 raft_consensus.cc:357] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292 [term 2 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 8 OBSOLETE_local: true peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: NON_VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: true } }
I20250905 08:23:55.672276  4089 raft_consensus.cc:738] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292 [term 2 LEARNER]: Becoming Follower/Learner. State: Replica: ea76bcce6ea441cfa874b93546e60292, State: Initialized, Role: LEARNER
I20250905 08:23:55.672680  4089 consensus_queue.cc:260] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 8, Last appended: 2.8, Last appended by leader: 8, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 8 OBSOLETE_local: true peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: NON_VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: true } }
I20250905 08:23:55.675410  4089 ts_tablet_manager.cc:1428] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292: Time spent starting tablet: real 0.006s	user 0.004s	sys 0.000s
I20250905 08:23:55.677443  4035 tablet_copy_service.cc:342] P cad2e126b3ba40d9ad88ef5ddc39bb00: Request end of tablet copy session ea76bcce6ea441cfa874b93546e60292-778e19d4e8684ca191e4734027cc9b36 received from {username='slave'} at 127.0.106.129:48179
I20250905 08:23:55.677861  4035 tablet_copy_service.cc:434] P cad2e126b3ba40d9ad88ef5ddc39bb00: ending tablet copy session ea76bcce6ea441cfa874b93546e60292-778e19d4e8684ca191e4734027cc9b36 on tablet 778e19d4e8684ca191e4734027cc9b36 with peer ea76bcce6ea441cfa874b93546e60292
I20250905 08:23:55.961380  3871 raft_consensus.cc:1215] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292 [term 2 LEARNER]: Deduplicated request from leader. Original: 2.7->[2.8-2.8]   Dedup: 2.8->[]
I20250905 08:23:56.384866  4088 raft_consensus.cc:1062] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00: attempting to promote NON_VOTER ea76bcce6ea441cfa874b93546e60292 to VOTER
I20250905 08:23:56.386185  4088 consensus_queue.cc:237] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 8, Committed index: 8, Last appended: 2.8, Last appended by leader: 6, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 9 OBSOLETE_local: true peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } }
I20250905 08:23:56.390169  3871 raft_consensus.cc:1273] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292 [term 2 LEARNER]: Refusing update from remote peer cad2e126b3ba40d9ad88ef5ddc39bb00: Log matching property violated. Preceding OpId in replica: term: 2 index: 8. Preceding OpId from leader: term: 2 index: 9. (index mismatch)
I20250905 08:23:56.391304  4096 consensus_queue.cc:1035] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [LEADER]: Connected to new peer: Peer: permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 9, Last known committed idx: 8, Time since last communication: 0.001s
W20250905 08:23:55.268028  4085 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:23:55.264420  4083 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:23:56.396869  4084 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection time-out
I20250905 08:23:56.396898  4064 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250905 08:23:56.398540  4088 raft_consensus.cc:2953] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [term 2 LEADER]: Committing config change with OpId 2.9: config changed from index 8 to 9, ea76bcce6ea441cfa874b93546e60292 (127.0.106.129) changed from NON_VOTER to VOTER. New config: { opid_index: 9 OBSOLETE_local: true peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } } }
I20250905 08:23:56.400022  3871 raft_consensus.cc:2953] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292 [term 2 FOLLOWER]: Committing config change with OpId 2.9: config changed from index 8 to 9, ea76bcce6ea441cfa874b93546e60292 (127.0.106.129) changed from NON_VOTER to VOTER. New config: { opid_index: 9 OBSOLETE_local: true peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } } }
I20250905 08:23:56.404922  4064 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250905 08:23:56.407394  4064 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250905 08:23:56.408912  4064 hybrid_clock.cc:648] HybridClock initialized: now 1757060636408870 us; error 40 us; skew 500 ppm
I20250905 08:23:56.409790  4064 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250905 08:23:56.409865  3728 catalog_manager.cc:5582] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 reported cstate change: config changed from index 8 to 9, ea76bcce6ea441cfa874b93546e60292 (127.0.106.129) changed from NON_VOTER to VOTER. New cstate: current_term: 2 leader_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" committed_config { opid_index: 9 OBSOLETE_local: true peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } health_report { overall_health: HEALTHY } } }
I20250905 08:23:56.417013  4064 webserver.cc:480] Webserver started at http://127.0.106.131:45505/ using document root <none> and password file <none>
I20250905 08:23:56.418079  4064 fs_manager.cc:362] Metadata directory not provided
I20250905 08:23:56.418359  4064 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250905 08:23:56.427516  4064 fs_manager.cc:714] Time spent opening directory manager: real 0.006s	user 0.006s	sys 0.001s
I20250905 08:23:56.431545  4107 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:23:56.432507  4064 fs_manager.cc:730] Time spent opening block manager: real 0.003s	user 0.000s	sys 0.003s
I20250905 08:23:56.432778  4064 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "4343ba2e6dc5477b8097b27ea603906b"
format_stamp: "Formatted at 2025-09-05 08:23:42 on dist-test-slave-0x95"
I20250905 08:23:56.434505  4064 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250905 08:23:56.486824  4064 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250905 08:23:56.488107  4064 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250905 08:23:56.488469  4064 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250905 08:23:56.490655  4064 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250905 08:23:56.497781  4114 ts_tablet_manager.cc:542] Loading tablet metadata (0/1 complete)
I20250905 08:23:56.504834  4064 ts_tablet_manager.cc:579] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20250905 08:23:56.505039  4064 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.009s	user 0.002s	sys 0.000s
I20250905 08:23:56.505254  4064 ts_tablet_manager.cc:594] Registering tablets (0/1 complete)
I20250905 08:23:56.509449  4064 ts_tablet_manager.cc:610] Registered 1 tablets
I20250905 08:23:56.509618  4064 ts_tablet_manager.cc:589] Time spent register tablets: real 0.004s	user 0.003s	sys 0.000s
I20250905 08:23:56.510018  4114 tablet_bootstrap.cc:492] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b: Bootstrap starting.
I20250905 08:23:56.565055  4114 log.cc:826] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b: Log is configured to *not* fsync() on all Append() calls
I20250905 08:23:56.676472  4064 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.131:40383
I20250905 08:23:56.676627  4221 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.131:40383 every 8 connection(s)
I20250905 08:23:56.680521  4064 server_base.cc:1179] Dumped server information to /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250905 08:23:56.685459   426 external_mini_cluster.cc:1428] Started /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu as pid 4064
I20250905 08:23:56.691864  4114 tablet_bootstrap.cc:492] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b: Bootstrap replayed 1/1 log segments. Stats: ops{read=9 overwritten=0 applied=9 ignored=0} inserts{seen=400 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250905 08:23:56.692819  4114 tablet_bootstrap.cc:492] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b: Bootstrap complete.
I20250905 08:23:56.694360  4114 ts_tablet_manager.cc:1397] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b: Time spent bootstrapping tablet: real 0.185s	user 0.128s	sys 0.047s
I20250905 08:23:56.702426  4222 heartbeater.cc:344] Connected to a master server at 127.0.106.190:35711
I20250905 08:23:56.702848  4222 heartbeater.cc:461] Registering TS with master...
I20250905 08:23:56.703980  4222 heartbeater.cc:507] Master 127.0.106.190:35711 requested a full tablet report, sending...
I20250905 08:23:56.706821  3728 ts_manager.cc:194] Registered new tserver with Master: 4343ba2e6dc5477b8097b27ea603906b (127.0.106.131:40383)
I20250905 08:23:56.709074  3728 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.131:35957
I20250905 08:23:56.707521  4114 raft_consensus.cc:357] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } }
I20250905 08:23:56.709492  4114 raft_consensus.cc:738] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 4343ba2e6dc5477b8097b27ea603906b, State: Initialized, Role: FOLLOWER
I20250905 08:23:56.710122  4114 consensus_queue.cc:260] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 9, Last appended: 1.9, Last appended by leader: 9, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } }
I20250905 08:23:56.710549  4114 raft_consensus.cc:397] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250905 08:23:56.710776  4114 raft_consensus.cc:491] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250905 08:23:56.711032  4114 raft_consensus.cc:3058] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b [term 1 FOLLOWER]: Advancing to term 2
I20250905 08:23:56.715909  4114 raft_consensus.cc:513] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } }
I20250905 08:23:56.716568  4114 leader_election.cc:304] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 4343ba2e6dc5477b8097b27ea603906b; no voters: 
I20250905 08:23:56.718384  4114 leader_election.cc:290] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b [CANDIDATE]: Term 2 election: Requested vote from peers 
I20250905 08:23:56.718768   426 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250905 08:23:56.719218  4228 raft_consensus.cc:2802] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b [term 2 FOLLOWER]: Leader election won for term 2
I20250905 08:23:56.721477  4114 ts_tablet_manager.cc:1428] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b: Time spent starting tablet: real 0.027s	user 0.023s	sys 0.005s
I20250905 08:23:56.721447  4222 heartbeater.cc:499] Master 127.0.106.190:35711 was elected leader, sending a full tablet report...
I20250905 08:23:56.722327  4228 raft_consensus.cc:695] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b [term 2 LEADER]: Becoming Leader. State: Replica: 4343ba2e6dc5477b8097b27ea603906b, State: Running, Role: LEADER
I20250905 08:23:56.722965  4228 consensus_queue.cc:237] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 9, Committed index: 9, Last appended: 1.9, Last appended by leader: 9, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } }
I20250905 08:23:56.723380   426 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
W20250905 08:23:56.726436   426 ts_itest-base.cc:209] found only 0 out of 3 replicas of tablet 339d359c12c14106aa07add6dbc3309f: tablet_id: "339d359c12c14106aa07add6dbc3309f" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" }
I20250905 08:23:56.733497  3728 catalog_manager.cc:5582] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b reported cstate change: term changed from 0 to 2, leader changed from <none> to 4343ba2e6dc5477b8097b27ea603906b (127.0.106.131), VOTER 4343ba2e6dc5477b8097b27ea603906b (127.0.106.131) added. New cstate: current_term: 2 leader_uuid: "4343ba2e6dc5477b8097b27ea603906b" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } health_report { overall_health: HEALTHY } } }
I20250905 08:23:56.759562  4177 consensus_queue.cc:237] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 10, Committed index: 10, Last appended: 2.10, Last appended by leader: 9, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } } peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: NON_VOTER last_known_addr { host: "127.0.106.130" port: 34297 } attrs { promote: true } }
I20250905 08:23:56.762657  4230 raft_consensus.cc:2953] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b [term 2 LEADER]: Committing config change with OpId 2.11: config changed from index -1 to 11, NON_VOTER cad2e126b3ba40d9ad88ef5ddc39bb00 (127.0.106.130) added. New config: { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } } peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: NON_VOTER last_known_addr { host: "127.0.106.130" port: 34297 } attrs { promote: true } } }
I20250905 08:23:56.766096  4015 consensus_queue.cc:237] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 9, Committed index: 9, Last appended: 2.9, Last appended by leader: 6, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } } peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: NON_VOTER last_known_addr { host: "127.0.106.131" port: 40383 } attrs { promote: true } }
I20250905 08:23:56.773448  3715 catalog_manager.cc:5095] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet 339d359c12c14106aa07add6dbc3309f with cas_config_opid_index -1: ChangeConfig:ADD_PEER:NON_VOTER succeeded (attempt 1)
W20250905 08:23:56.774595  4110 consensus_peers.cc:489] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b -> Peer cad2e126b3ba40d9ad88ef5ddc39bb00 (127.0.106.130:34297): Couldn't send request to peer cad2e126b3ba40d9ad88ef5ddc39bb00. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: 339d359c12c14106aa07add6dbc3309f. This is attempt 1: this message will repeat every 5th retry.
I20250905 08:23:56.776294  3871 raft_consensus.cc:1273] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292 [term 2 FOLLOWER]: Refusing update from remote peer cad2e126b3ba40d9ad88ef5ddc39bb00: Log matching property violated. Preceding OpId in replica: term: 2 index: 9. Preceding OpId from leader: term: 2 index: 10. (index mismatch)
I20250905 08:23:56.776593  3728 catalog_manager.cc:5582] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b reported cstate change: config changed from index -1 to 11, NON_VOTER cad2e126b3ba40d9ad88ef5ddc39bb00 (127.0.106.130) added. New cstate: current_term: 2 leader_uuid: "4343ba2e6dc5477b8097b27ea603906b" committed_config { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: NON_VOTER last_known_addr { host: "127.0.106.130" port: 34297 } attrs { promote: true } health_report { overall_health: UNKNOWN } } }
I20250905 08:23:56.777662  4095 consensus_queue.cc:1035] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [LEADER]: Connected to new peer: Peer: permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 10, Last known committed idx: 9, Time since last communication: 0.001s
I20250905 08:23:56.783214  4088 raft_consensus.cc:2953] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [term 2 LEADER]: Committing config change with OpId 2.10: config changed from index 9 to 10, NON_VOTER 4343ba2e6dc5477b8097b27ea603906b (127.0.106.131) added. New config: { opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } } peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: NON_VOTER last_known_addr { host: "127.0.106.131" port: 40383 } attrs { promote: true } } }
W20250905 08:23:56.785565  3948 consensus_peers.cc:489] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 -> Peer 4343ba2e6dc5477b8097b27ea603906b (127.0.106.131:40383): Couldn't send request to peer 4343ba2e6dc5477b8097b27ea603906b. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: 778e19d4e8684ca191e4734027cc9b36. This is attempt 1: this message will repeat every 5th retry.
I20250905 08:23:56.787322  4177 consensus_queue.cc:237] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 11, Committed index: 11, Last appended: 2.11, Last appended by leader: 9, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } } peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: NON_VOTER last_known_addr { host: "127.0.106.130" port: 34297 } attrs { promote: true } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: NON_VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: true } }
I20250905 08:23:56.788437  3871 raft_consensus.cc:2953] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292 [term 2 FOLLOWER]: Committing config change with OpId 2.10: config changed from index 9 to 10, NON_VOTER 4343ba2e6dc5477b8097b27ea603906b (127.0.106.131) added. New config: { opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } } peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: NON_VOTER last_known_addr { host: "127.0.106.131" port: 40383 } attrs { promote: true } } }
I20250905 08:23:56.791184  4230 raft_consensus.cc:2953] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b [term 2 LEADER]: Committing config change with OpId 2.12: config changed from index 11 to 12, NON_VOTER ea76bcce6ea441cfa874b93546e60292 (127.0.106.129) added. New config: { opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } } peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: NON_VOTER last_known_addr { host: "127.0.106.130" port: 34297 } attrs { promote: true } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: NON_VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: true } } }
W20250905 08:23:56.793040  4110 consensus_peers.cc:489] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b -> Peer cad2e126b3ba40d9ad88ef5ddc39bb00 (127.0.106.130:34297): Couldn't send request to peer cad2e126b3ba40d9ad88ef5ddc39bb00. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: 339d359c12c14106aa07add6dbc3309f. This is attempt 1: this message will repeat every 5th retry.
I20250905 08:23:56.798352  3715 catalog_manager.cc:5095] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet 778e19d4e8684ca191e4734027cc9b36 with cas_config_opid_index 9: ChangeConfig:ADD_PEER:NON_VOTER succeeded (attempt 5)
I20250905 08:23:56.798352  3728 catalog_manager.cc:5582] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292 reported cstate change: config changed from index 9 to 10, NON_VOTER 4343ba2e6dc5477b8097b27ea603906b (127.0.106.131) added. New cstate: current_term: 2 leader_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" committed_config { opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } } peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: NON_VOTER last_known_addr { host: "127.0.106.131" port: 40383 } attrs { promote: true } } }
I20250905 08:23:56.802119  3715 catalog_manager.cc:5095] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet 339d359c12c14106aa07add6dbc3309f with cas_config_opid_index 11: ChangeConfig:ADD_PEER:NON_VOTER succeeded (attempt 1)
I20250905 08:23:56.806764  3727 catalog_manager.cc:5582] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b reported cstate change: config changed from index 11 to 12, NON_VOTER ea76bcce6ea441cfa874b93546e60292 (127.0.106.129) added. New cstate: current_term: 2 leader_uuid: "4343ba2e6dc5477b8097b27ea603906b" committed_config { opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: NON_VOTER last_known_addr { host: "127.0.106.130" port: 34297 } attrs { promote: true } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: NON_VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: true } health_report { overall_health: UNKNOWN } } }
W20250905 08:23:56.810851  4110 consensus_peers.cc:489] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b -> Peer ea76bcce6ea441cfa874b93546e60292 (127.0.106.129:42483): Couldn't send request to peer ea76bcce6ea441cfa874b93546e60292. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: 339d359c12c14106aa07add6dbc3309f. This is attempt 1: this message will repeat every 5th retry.
I20250905 08:23:57.204144  4238 ts_tablet_manager.cc:927] T 778e19d4e8684ca191e4734027cc9b36 P 4343ba2e6dc5477b8097b27ea603906b: Initiating tablet copy from peer cad2e126b3ba40d9ad88ef5ddc39bb00 (127.0.106.130:34297)
I20250905 08:23:57.206043  4238 tablet_copy_client.cc:323] T 778e19d4e8684ca191e4734027cc9b36 P 4343ba2e6dc5477b8097b27ea603906b: tablet copy: Beginning tablet copy session from remote peer at address 127.0.106.130:34297
I20250905 08:23:57.207382  4035 tablet_copy_service.cc:140] P cad2e126b3ba40d9ad88ef5ddc39bb00: Received BeginTabletCopySession request for tablet 778e19d4e8684ca191e4734027cc9b36 from peer 4343ba2e6dc5477b8097b27ea603906b ({username='slave'} at 127.0.106.131:53147)
I20250905 08:23:57.207733  4035 tablet_copy_service.cc:161] P cad2e126b3ba40d9ad88ef5ddc39bb00: Beginning new tablet copy session on tablet 778e19d4e8684ca191e4734027cc9b36 from peer 4343ba2e6dc5477b8097b27ea603906b at {username='slave'} at 127.0.106.131:53147: session id = 4343ba2e6dc5477b8097b27ea603906b-778e19d4e8684ca191e4734027cc9b36
I20250905 08:23:57.211362  4035 tablet_copy_source_session.cc:215] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00: Tablet Copy: opened 0 blocks and 1 log segments
I20250905 08:23:57.213618  4238 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 778e19d4e8684ca191e4734027cc9b36. 1 dirs total, 0 dirs full, 0 dirs failed
I20250905 08:23:57.225091  4238 tablet_copy_client.cc:806] T 778e19d4e8684ca191e4734027cc9b36 P 4343ba2e6dc5477b8097b27ea603906b: tablet copy: Starting download of 0 data blocks...
I20250905 08:23:57.225576  4238 tablet_copy_client.cc:670] T 778e19d4e8684ca191e4734027cc9b36 P 4343ba2e6dc5477b8097b27ea603906b: tablet copy: Starting download of 1 WAL segments...
I20250905 08:23:57.228561  4238 tablet_copy_client.cc:538] T 778e19d4e8684ca191e4734027cc9b36 P 4343ba2e6dc5477b8097b27ea603906b: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250905 08:23:57.233189  4238 tablet_bootstrap.cc:492] T 778e19d4e8684ca191e4734027cc9b36 P 4343ba2e6dc5477b8097b27ea603906b: Bootstrap starting.
I20250905 08:23:57.298659  4238 tablet_bootstrap.cc:492] T 778e19d4e8684ca191e4734027cc9b36 P 4343ba2e6dc5477b8097b27ea603906b: Bootstrap replayed 1/1 log segments. Stats: ops{read=10 overwritten=0 applied=10 ignored=0} inserts{seen=250 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250905 08:23:57.299317  4238 tablet_bootstrap.cc:492] T 778e19d4e8684ca191e4734027cc9b36 P 4343ba2e6dc5477b8097b27ea603906b: Bootstrap complete.
I20250905 08:23:57.299330  3714 catalog_manager.cc:5129] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet 778e19d4e8684ca191e4734027cc9b36 with cas_config_opid_index 8: aborting the task: latest config opid_index 10; task opid_index 8
I20250905 08:23:57.299811  4238 ts_tablet_manager.cc:1397] T 778e19d4e8684ca191e4734027cc9b36 P 4343ba2e6dc5477b8097b27ea603906b: Time spent bootstrapping tablet: real 0.067s	user 0.056s	sys 0.008s
I20250905 08:23:57.301301  4238 raft_consensus.cc:357] T 778e19d4e8684ca191e4734027cc9b36 P 4343ba2e6dc5477b8097b27ea603906b [term 2 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } } peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: NON_VOTER last_known_addr { host: "127.0.106.131" port: 40383 } attrs { promote: true } }
I20250905 08:23:57.301720  4238 raft_consensus.cc:738] T 778e19d4e8684ca191e4734027cc9b36 P 4343ba2e6dc5477b8097b27ea603906b [term 2 LEARNER]: Becoming Follower/Learner. State: Replica: 4343ba2e6dc5477b8097b27ea603906b, State: Initialized, Role: LEARNER
I20250905 08:23:57.302134  4238 consensus_queue.cc:260] T 778e19d4e8684ca191e4734027cc9b36 P 4343ba2e6dc5477b8097b27ea603906b [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 10, Last appended: 2.10, Last appended by leader: 10, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } } peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: NON_VOTER last_known_addr { host: "127.0.106.131" port: 40383 } attrs { promote: true } }
I20250905 08:23:57.303757  4238 ts_tablet_manager.cc:1428] T 778e19d4e8684ca191e4734027cc9b36 P 4343ba2e6dc5477b8097b27ea603906b: Time spent starting tablet: real 0.004s	user 0.000s	sys 0.000s
I20250905 08:23:57.305260  4035 tablet_copy_service.cc:342] P cad2e126b3ba40d9ad88ef5ddc39bb00: Request end of tablet copy session 4343ba2e6dc5477b8097b27ea603906b-778e19d4e8684ca191e4734027cc9b36 received from {username='slave'} at 127.0.106.131:53147
I20250905 08:23:57.305645  4035 tablet_copy_service.cc:434] P cad2e126b3ba40d9ad88ef5ddc39bb00: ending tablet copy session 4343ba2e6dc5477b8097b27ea603906b-778e19d4e8684ca191e4734027cc9b36 on tablet 778e19d4e8684ca191e4734027cc9b36 with peer 4343ba2e6dc5477b8097b27ea603906b
I20250905 08:23:57.305948  4241 ts_tablet_manager.cc:927] T 339d359c12c14106aa07add6dbc3309f P cad2e126b3ba40d9ad88ef5ddc39bb00: Initiating tablet copy from peer 4343ba2e6dc5477b8097b27ea603906b (127.0.106.131:40383)
I20250905 08:23:57.308292  4241 tablet_copy_client.cc:323] T 339d359c12c14106aa07add6dbc3309f P cad2e126b3ba40d9ad88ef5ddc39bb00: tablet copy: Beginning tablet copy session from remote peer at address 127.0.106.131:40383
I20250905 08:23:57.309630  4197 tablet_copy_service.cc:140] P 4343ba2e6dc5477b8097b27ea603906b: Received BeginTabletCopySession request for tablet 339d359c12c14106aa07add6dbc3309f from peer cad2e126b3ba40d9ad88ef5ddc39bb00 ({username='slave'} at 127.0.106.130:41885)
I20250905 08:23:57.310067  4197 tablet_copy_service.cc:161] P 4343ba2e6dc5477b8097b27ea603906b: Beginning new tablet copy session on tablet 339d359c12c14106aa07add6dbc3309f from peer cad2e126b3ba40d9ad88ef5ddc39bb00 at {username='slave'} at 127.0.106.130:41885: session id = cad2e126b3ba40d9ad88ef5ddc39bb00-339d359c12c14106aa07add6dbc3309f
I20250905 08:23:57.315451  4197 tablet_copy_source_session.cc:215] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b: Tablet Copy: opened 0 blocks and 1 log segments
I20250905 08:23:57.317999  4241 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 339d359c12c14106aa07add6dbc3309f. 1 dirs total, 0 dirs full, 0 dirs failed
I20250905 08:23:57.327939  4241 tablet_copy_client.cc:806] T 339d359c12c14106aa07add6dbc3309f P cad2e126b3ba40d9ad88ef5ddc39bb00: tablet copy: Starting download of 0 data blocks...
I20250905 08:23:57.328366  4241 tablet_copy_client.cc:670] T 339d359c12c14106aa07add6dbc3309f P cad2e126b3ba40d9ad88ef5ddc39bb00: tablet copy: Starting download of 1 WAL segments...
I20250905 08:23:57.331379  4241 tablet_copy_client.cc:538] T 339d359c12c14106aa07add6dbc3309f P cad2e126b3ba40d9ad88ef5ddc39bb00: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250905 08:23:57.334682  4243 ts_tablet_manager.cc:927] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292: Initiating tablet copy from peer 4343ba2e6dc5477b8097b27ea603906b (127.0.106.131:40383)
I20250905 08:23:57.335930  4243 tablet_copy_client.cc:323] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292: tablet copy: Beginning tablet copy session from remote peer at address 127.0.106.131:40383
I20250905 08:23:57.337071  4241 tablet_bootstrap.cc:492] T 339d359c12c14106aa07add6dbc3309f P cad2e126b3ba40d9ad88ef5ddc39bb00: Bootstrap starting.
I20250905 08:23:57.345283  4197 tablet_copy_service.cc:140] P 4343ba2e6dc5477b8097b27ea603906b: Received BeginTabletCopySession request for tablet 339d359c12c14106aa07add6dbc3309f from peer ea76bcce6ea441cfa874b93546e60292 ({username='slave'} at 127.0.106.129:42285)
I20250905 08:23:57.345625  4197 tablet_copy_service.cc:161] P 4343ba2e6dc5477b8097b27ea603906b: Beginning new tablet copy session on tablet 339d359c12c14106aa07add6dbc3309f from peer ea76bcce6ea441cfa874b93546e60292 at {username='slave'} at 127.0.106.129:42285: session id = ea76bcce6ea441cfa874b93546e60292-339d359c12c14106aa07add6dbc3309f
I20250905 08:23:57.349298  4197 tablet_copy_source_session.cc:215] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b: Tablet Copy: opened 0 blocks and 1 log segments
I20250905 08:23:57.351043  4243 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 339d359c12c14106aa07add6dbc3309f. 1 dirs total, 0 dirs full, 0 dirs failed
I20250905 08:23:57.359593  4243 tablet_copy_client.cc:806] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292: tablet copy: Starting download of 0 data blocks...
I20250905 08:23:57.359980  4243 tablet_copy_client.cc:670] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292: tablet copy: Starting download of 1 WAL segments...
I20250905 08:23:57.362877  4243 tablet_copy_client.cc:538] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250905 08:23:57.367645  4243 tablet_bootstrap.cc:492] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292: Bootstrap starting.
I20250905 08:23:57.426520  4241 tablet_bootstrap.cc:492] T 339d359c12c14106aa07add6dbc3309f P cad2e126b3ba40d9ad88ef5ddc39bb00: Bootstrap replayed 1/1 log segments. Stats: ops{read=12 overwritten=0 applied=12 ignored=0} inserts{seen=400 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250905 08:23:57.427455  4241 tablet_bootstrap.cc:492] T 339d359c12c14106aa07add6dbc3309f P cad2e126b3ba40d9ad88ef5ddc39bb00: Bootstrap complete.
I20250905 08:23:57.427841  4241 ts_tablet_manager.cc:1397] T 339d359c12c14106aa07add6dbc3309f P cad2e126b3ba40d9ad88ef5ddc39bb00: Time spent bootstrapping tablet: real 0.091s	user 0.071s	sys 0.017s
I20250905 08:23:57.429240  4241 raft_consensus.cc:357] T 339d359c12c14106aa07add6dbc3309f P cad2e126b3ba40d9ad88ef5ddc39bb00 [term 2 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } } peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: NON_VOTER last_known_addr { host: "127.0.106.130" port: 34297 } attrs { promote: true } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: NON_VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: true } }
I20250905 08:23:57.429626  4241 raft_consensus.cc:738] T 339d359c12c14106aa07add6dbc3309f P cad2e126b3ba40d9ad88ef5ddc39bb00 [term 2 LEARNER]: Becoming Follower/Learner. State: Replica: cad2e126b3ba40d9ad88ef5ddc39bb00, State: Initialized, Role: LEARNER
I20250905 08:23:57.430002  4241 consensus_queue.cc:260] T 339d359c12c14106aa07add6dbc3309f P cad2e126b3ba40d9ad88ef5ddc39bb00 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 12, Last appended: 2.12, Last appended by leader: 12, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } } peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: NON_VOTER last_known_addr { host: "127.0.106.130" port: 34297 } attrs { promote: true } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: NON_VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: true } }
I20250905 08:23:57.431335  4241 ts_tablet_manager.cc:1428] T 339d359c12c14106aa07add6dbc3309f P cad2e126b3ba40d9ad88ef5ddc39bb00: Time spent starting tablet: real 0.003s	user 0.000s	sys 0.000s
I20250905 08:23:57.432689  4197 tablet_copy_service.cc:342] P 4343ba2e6dc5477b8097b27ea603906b: Request end of tablet copy session cad2e126b3ba40d9ad88ef5ddc39bb00-339d359c12c14106aa07add6dbc3309f received from {username='slave'} at 127.0.106.130:41885
I20250905 08:23:57.432991  4197 tablet_copy_service.cc:434] P 4343ba2e6dc5477b8097b27ea603906b: ending tablet copy session cad2e126b3ba40d9ad88ef5ddc39bb00-339d359c12c14106aa07add6dbc3309f on tablet 339d359c12c14106aa07add6dbc3309f with peer cad2e126b3ba40d9ad88ef5ddc39bb00
I20250905 08:23:57.458428  4243 tablet_bootstrap.cc:492] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292: Bootstrap replayed 1/1 log segments. Stats: ops{read=12 overwritten=0 applied=12 ignored=0} inserts{seen=400 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250905 08:23:57.458966  4243 tablet_bootstrap.cc:492] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292: Bootstrap complete.
I20250905 08:23:57.459309  4243 ts_tablet_manager.cc:1397] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292: Time spent bootstrapping tablet: real 0.092s	user 0.087s	sys 0.005s
I20250905 08:23:57.460705  4243 raft_consensus.cc:357] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292 [term 2 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } } peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: NON_VOTER last_known_addr { host: "127.0.106.130" port: 34297 } attrs { promote: true } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: NON_VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: true } }
I20250905 08:23:57.461102  4243 raft_consensus.cc:738] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292 [term 2 LEARNER]: Becoming Follower/Learner. State: Replica: ea76bcce6ea441cfa874b93546e60292, State: Initialized, Role: LEARNER
I20250905 08:23:57.461380  4243 consensus_queue.cc:260] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 12, Last appended: 2.12, Last appended by leader: 12, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } } peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: NON_VOTER last_known_addr { host: "127.0.106.130" port: 34297 } attrs { promote: true } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: NON_VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: true } }
I20250905 08:23:57.463388  4243 ts_tablet_manager.cc:1428] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292: Time spent starting tablet: real 0.004s	user 0.007s	sys 0.000s
I20250905 08:23:57.464753  4197 tablet_copy_service.cc:342] P 4343ba2e6dc5477b8097b27ea603906b: Request end of tablet copy session ea76bcce6ea441cfa874b93546e60292-339d359c12c14106aa07add6dbc3309f received from {username='slave'} at 127.0.106.129:42285
I20250905 08:23:57.465077  4197 tablet_copy_service.cc:434] P 4343ba2e6dc5477b8097b27ea603906b: ending tablet copy session ea76bcce6ea441cfa874b93546e60292-339d359c12c14106aa07add6dbc3309f on tablet 339d359c12c14106aa07add6dbc3309f with peer ea76bcce6ea441cfa874b93546e60292
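(Editor's note: the copy/bootstrap sequence above — BeginTabletCopySession, WAL segment download, superblock replacement, bootstrap replay, end of session — is broadly the same tablet-copy mechanism an operator can trigger by hand with the kudu CLI. A minimal sketch only, with the tablet id and tablet server addresses taken from the log above; any additional flags the subcommand accepts are not shown in this log:

  kudu remote_replica copy 339d359c12c14106aa07add6dbc3309f 127.0.106.131:40383 127.0.106.130:34297
)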
I20250905 08:23:57.676756  4177 raft_consensus.cc:1215] T 778e19d4e8684ca191e4734027cc9b36 P 4343ba2e6dc5477b8097b27ea603906b [term 2 LEARNER]: Deduplicated request from leader. Original: 2.9->[2.10-2.10]   Dedup: 2.10->[]
I20250905 08:23:57.720383  4015 raft_consensus.cc:1215] T 339d359c12c14106aa07add6dbc3309f P cad2e126b3ba40d9ad88ef5ddc39bb00 [term 2 LEARNER]: Deduplicated request from leader. Original: 2.11->[2.12-2.12]   Dedup: 2.12->[]
I20250905 08:23:57.730839   426 ts_itest-base.cc:246] Waiting for 1 tablets on tserver ea76bcce6ea441cfa874b93546e60292 to finish bootstrapping
I20250905 08:23:57.744940   426 ts_itest-base.cc:246] Waiting for 1 tablets on tserver cad2e126b3ba40d9ad88ef5ddc39bb00 to finish bootstrapping
I20250905 08:23:57.757165   426 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 4343ba2e6dc5477b8097b27ea603906b to finish bootstrapping
I20250905 08:23:57.782893  3871 raft_consensus.cc:1215] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292 [term 2 LEARNER]: Deduplicated request from leader. Original: 2.11->[2.12-2.12]   Dedup: 2.12->[]
I20250905 08:23:58.022086  4157 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250905 08:23:58.027697  3995 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250905 08:23:58.030532  3845 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250905 08:23:58.110649  4251 raft_consensus.cc:1062] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00: attempting to promote NON_VOTER 4343ba2e6dc5477b8097b27ea603906b to VOTER
I20250905 08:23:58.112350  4251 consensus_queue.cc:237] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 10, Committed index: 10, Last appended: 2.10, Last appended by leader: 6, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } } peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } attrs { promote: false } }
I20250905 08:23:58.127539  4177 raft_consensus.cc:1273] T 778e19d4e8684ca191e4734027cc9b36 P 4343ba2e6dc5477b8097b27ea603906b [term 2 LEARNER]: Refusing update from remote peer cad2e126b3ba40d9ad88ef5ddc39bb00: Log matching property violated. Preceding OpId in replica: term: 2 index: 10. Preceding OpId from leader: term: 2 index: 11. (index mismatch)
I20250905 08:23:58.128985  4088 consensus_queue.cc:1035] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [LEADER]: Connected to new peer: Peer: permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 11, Last known committed idx: 10, Time since last communication: 0.000s
I20250905 08:23:58.135120  3871 raft_consensus.cc:1273] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292 [term 2 FOLLOWER]: Refusing update from remote peer cad2e126b3ba40d9ad88ef5ddc39bb00: Log matching property violated. Preceding OpId in replica: term: 2 index: 10. Preceding OpId from leader: term: 2 index: 11. (index mismatch)
I20250905 08:23:58.136456  4088 consensus_queue.cc:1035] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [LEADER]: Connected to new peer: Peer: permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 11, Last known committed idx: 10, Time since last communication: 0.000s
I20250905 08:23:58.151883  4250 raft_consensus.cc:2953] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [term 2 LEADER]: Committing config change with OpId 2.11: config changed from index 10 to 11, 4343ba2e6dc5477b8097b27ea603906b (127.0.106.131) changed from NON_VOTER to VOTER. New config: { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } } peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } attrs { promote: false } } }
I20250905 08:23:58.157400  3870 raft_consensus.cc:2953] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292 [term 2 FOLLOWER]: Committing config change with OpId 2.11: config changed from index 10 to 11, 4343ba2e6dc5477b8097b27ea603906b (127.0.106.131) changed from NON_VOTER to VOTER. New config: { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } } peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } attrs { promote: false } } }
I20250905 08:23:58.159725  4177 raft_consensus.cc:2953] T 778e19d4e8684ca191e4734027cc9b36 P 4343ba2e6dc5477b8097b27ea603906b [term 2 FOLLOWER]: Committing config change with OpId 2.11: config changed from index 10 to 11, 4343ba2e6dc5477b8097b27ea603906b (127.0.106.131) changed from NON_VOTER to VOTER. New config: { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } } peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } attrs { promote: false } } }
I20250905 08:23:58.169732  3727 catalog_manager.cc:5582] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292 reported cstate change: config changed from index 10 to 11, 4343ba2e6dc5477b8097b27ea603906b (127.0.106.131) changed from NON_VOTER to VOTER. New cstate: current_term: 2 leader_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" committed_config { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } } peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } attrs { promote: false } } }
I20250905 08:23:58.203599  4252 raft_consensus.cc:1062] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b: attempting to promote NON_VOTER ea76bcce6ea441cfa874b93546e60292 to VOTER
I20250905 08:23:58.205951  4252 consensus_queue.cc:237] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 12, Committed index: 12, Last appended: 2.12, Last appended by leader: 9, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } } peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: NON_VOTER last_known_addr { host: "127.0.106.130" port: 34297 } attrs { promote: true } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } }
I20250905 08:23:58.224541  3870 raft_consensus.cc:1273] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292 [term 2 LEARNER]: Refusing update from remote peer 4343ba2e6dc5477b8097b27ea603906b: Log matching property violated. Preceding OpId in replica: term: 2 index: 12. Preceding OpId from leader: term: 2 index: 13. (index mismatch)
I20250905 08:23:58.225670  4015 raft_consensus.cc:1273] T 339d359c12c14106aa07add6dbc3309f P cad2e126b3ba40d9ad88ef5ddc39bb00 [term 2 LEARNER]: Refusing update from remote peer 4343ba2e6dc5477b8097b27ea603906b: Log matching property violated. Preceding OpId in replica: term: 2 index: 12. Preceding OpId from leader: term: 2 index: 13. (index mismatch)
I20250905 08:23:58.227788  4230 consensus_queue.cc:1035] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b [LEADER]: Connected to new peer: Peer: permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: NON_VOTER last_known_addr { host: "127.0.106.130" port: 34297 } attrs { promote: true }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 13, Last known committed idx: 12, Time since last communication: 0.000s
I20250905 08:23:58.229102  4253 consensus_queue.cc:1035] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b [LEADER]: Connected to new peer: Peer: permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 13, Last known committed idx: 12, Time since last communication: 0.000s
I20250905 08:23:58.265946  4253 raft_consensus.cc:2953] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b [term 2 LEADER]: Committing config change with OpId 2.13: config changed from index 12 to 13, ea76bcce6ea441cfa874b93546e60292 (127.0.106.129) changed from NON_VOTER to VOTER. New config: { opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } } peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: NON_VOTER last_known_addr { host: "127.0.106.130" port: 34297 } attrs { promote: true } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } } }
I20250905 08:23:58.268299  3870 raft_consensus.cc:2953] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292 [term 2 FOLLOWER]: Committing config change with OpId 2.13: config changed from index 12 to 13, ea76bcce6ea441cfa874b93546e60292 (127.0.106.129) changed from NON_VOTER to VOTER. New config: { opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } } peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: NON_VOTER last_known_addr { host: "127.0.106.130" port: 34297 } attrs { promote: true } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } } }
I20250905 08:23:58.269687  4015 raft_consensus.cc:2953] T 339d359c12c14106aa07add6dbc3309f P cad2e126b3ba40d9ad88ef5ddc39bb00 [term 2 LEARNER]: Committing config change with OpId 2.13: config changed from index 12 to 13, ea76bcce6ea441cfa874b93546e60292 (127.0.106.129) changed from NON_VOTER to VOTER. New config: { opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } } peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: NON_VOTER last_known_addr { host: "127.0.106.130" port: 34297 } attrs { promote: true } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } } }
I20250905 08:23:58.277415  4252 raft_consensus.cc:1062] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b: attempting to promote NON_VOTER cad2e126b3ba40d9ad88ef5ddc39bb00 to VOTER
I20250905 08:23:58.278512  3728 catalog_manager.cc:5582] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b reported cstate change: config changed from index 12 to 13, ea76bcce6ea441cfa874b93546e60292 (127.0.106.129) changed from NON_VOTER to VOTER. New cstate: current_term: 2 leader_uuid: "4343ba2e6dc5477b8097b27ea603906b" committed_config { opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: NON_VOTER last_known_addr { host: "127.0.106.130" port: 34297 } attrs { promote: true } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } health_report { overall_health: HEALTHY } } }
I20250905 08:23:58.279817  4252 consensus_queue.cc:237] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 13, Committed index: 13, Last appended: 2.13, Last appended by leader: 9, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } } peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } attrs { promote: false } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } }
I20250905 08:23:58.288393  3870 raft_consensus.cc:1273] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292 [term 2 FOLLOWER]: Refusing update from remote peer 4343ba2e6dc5477b8097b27ea603906b: Log matching property violated. Preceding OpId in replica: term: 2 index: 13. Preceding OpId from leader: term: 2 index: 14. (index mismatch)
I20250905 08:23:58.290560  4294 consensus_queue.cc:1035] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b [LEADER]: Connected to new peer: Peer: permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 14, Last known committed idx: 13, Time since last communication: 0.001s
I20250905 08:23:58.297850  4015 raft_consensus.cc:1273] T 339d359c12c14106aa07add6dbc3309f P cad2e126b3ba40d9ad88ef5ddc39bb00 [term 2 LEARNER]: Refusing update from remote peer 4343ba2e6dc5477b8097b27ea603906b: Log matching property violated. Preceding OpId in replica: term: 2 index: 13. Preceding OpId from leader: term: 2 index: 14. (index mismatch)
I20250905 08:23:58.299057  4295 consensus_queue.cc:1035] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b [LEADER]: Connected to new peer: Peer: permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 14, Last known committed idx: 13, Time since last communication: 0.001s
I20250905 08:23:58.302644  4294 raft_consensus.cc:2953] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b [term 2 LEADER]: Committing config change with OpId 2.14: config changed from index 13 to 14, cad2e126b3ba40d9ad88ef5ddc39bb00 (127.0.106.130) changed from NON_VOTER to VOTER. New config: { opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } } peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } attrs { promote: false } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } } }
I20250905 08:23:58.304142  3870 raft_consensus.cc:2953] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292 [term 2 FOLLOWER]: Committing config change with OpId 2.14: config changed from index 13 to 14, cad2e126b3ba40d9ad88ef5ddc39bb00 (127.0.106.130) changed from NON_VOTER to VOTER. New config: { opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } } peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } attrs { promote: false } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } } }
Master Summary
               UUID               |       Address       | Status
----------------------------------+---------------------+---------
 3b7c796ecec441dbaff957664cc80c2c | 127.0.106.190:35711 | HEALTHY

Unusual flags for Master:
               Flag               |                                                                             Value                                                                             |      Tags       |         Master
----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
 ipki_ca_key_size                 | 768                                                                                                                                                           | experimental    | all 1 server(s) checked
 ipki_server_key_size             | 768                                                                                                                                                           | experimental    | all 1 server(s) checked
 never_fsync                      | true                                                                                                                                                          | unsafe,advanced | all 1 server(s) checked
 openssl_security_level_override  | 0                                                                                                                                                             | unsafe,hidden   | all 1 server(s) checked
 rpc_reuseport                    | true                                                                                                                                                          | experimental    | all 1 server(s) checked
 rpc_server_allow_ephemeral_ports | true                                                                                                                                                          | unsafe          | all 1 server(s) checked
 server_dump_info_format          | pb                                                                                                                                                            | hidden          | all 1 server(s) checked
 server_dump_info_path            | /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data/info.pb | hidden          | all 1 server(s) checked
 tsk_num_rsa_bits                 | 512                                                                                                                                                           | experimental    | all 1 server(s) checked

Flags of checked categories for Master:
        Flag         |        Value        |         Master
---------------------+---------------------+-------------------------
 builtin_ntp_servers | 127.0.106.148:41569 | all 1 server(s) checked
 time_source         | builtin             | all 1 server(s) checked

I20250905 08:23:58.309758  4015 raft_consensus.cc:2953] T 339d359c12c14106aa07add6dbc3309f P cad2e126b3ba40d9ad88ef5ddc39bb00 [term 2 FOLLOWER]: Committing config change with OpId 2.14: config changed from index 13 to 14, cad2e126b3ba40d9ad88ef5ddc39bb00 (127.0.106.130) changed from NON_VOTER to VOTER. New config: { opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } } peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } attrs { promote: false } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } } }
Tablet Server Summary
               UUID               |       Address       | Status  | Location | Tablet Leaders | Active Scanners
----------------------------------+---------------------+---------+----------+----------------+-----------------
 4343ba2e6dc5477b8097b27ea603906b | 127.0.106.131:40383 | HEALTHY | <none>   |       1        |       0
 cad2e126b3ba40d9ad88ef5ddc39bb00 | 127.0.106.130:34297 | HEALTHY | <none>   |       1        |       0
 ea76bcce6ea441cfa874b93546e60292 | 127.0.106.129:42483 | HEALTHY | <none>   |       1        |       0

Tablet Server Location Summary
 Location |  Count
----------+---------
 <none>   |       3

Unusual flags for Tablet Server:
               Flag               |                                                                           Value                                                                           |      Tags       |      Tablet Server
----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
 ipki_server_key_size             | 768                                                                                                                                                       | experimental    | all 3 server(s) checked
 local_ip_for_outbound_sockets    | 127.0.106.129                                                                                                                                             | experimental    | 127.0.106.129:42483
 local_ip_for_outbound_sockets    | 127.0.106.130                                                                                                                                             | experimental    | 127.0.106.130:34297
 local_ip_for_outbound_sockets    | 127.0.106.131                                                                                                                                             | experimental    | 127.0.106.131:40383
 never_fsync                      | true                                                                                                                                                      | unsafe,advanced | all 3 server(s) checked
 openssl_security_level_override  | 0                                                                                                                                                         | unsafe,hidden   | all 3 server(s) checked
 rpc_server_allow_ephemeral_ports | true                                                                                                                                                      | unsafe          | all 3 server(s) checked
 server_dump_info_format          | pb                                                                                                                                                        | hidden          | all 3 server(s) checked
 server_dump_info_path            | /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb | hidden          | 127.0.106.129:42483
 server_dump_info_path            | /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb | hidden          | 127.0.106.130:34297
 server_dump_info_path            | /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb | hidden          | 127.0.106.131:40383

Flags of checked categories for Tablet Server:
        Flag         |        Value        |      Tablet Server
---------------------+---------------------+-------------------------
 builtin_ntp_servers | 127.0.106.148:41569 | all 3 server(s) checked
 time_source         | builtin             | all 3 server(s) checked
I20250905 08:23:58.314296  3729 catalog_manager.cc:5582] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b reported cstate change: config changed from index 13 to 14, cad2e126b3ba40d9ad88ef5ddc39bb00 (127.0.106.130) changed from NON_VOTER to VOTER. New cstate: current_term: 2 leader_uuid: "4343ba2e6dc5477b8097b27ea603906b" committed_config { opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } attrs { promote: false } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } health_report { overall_health: HEALTHY } } }

Version Summary
     Version     |         Servers
-----------------+-------------------------
 1.19.0-SNAPSHOT | all 4 server(s) checked

Tablet Summary
The cluster doesn't have any matching system tables
Summary by table
    Name    | RF | Status  | Total Tablets | Healthy | Recovering | Under-replicated | Unavailable
------------+----+---------+---------------+---------+------------+------------------+-------------
 TestTable  | 3  | HEALTHY | 1             | 1       | 0          | 0                | 0
 TestTable1 | 3  | HEALTHY | 1             | 1       | 0          | 0                | 0
 TestTable2 | 1  | HEALTHY | 1             | 1       | 0          | 0                | 0

Tablet Replica Count Summary
   Statistic    | Replica Count
----------------+---------------
 Minimum        | 2
 First Quartile | 2
 Median         | 2
 Third Quartile | 3
 Maximum        | 3

Total Count Summary
                | Total Count
----------------+-------------
 Masters        | 1
 Tablet Servers | 3
 Tables         | 3
 Tablets        | 3
 Replicas       | 7

==================
Warnings:
==================
Some masters have unsafe, experimental, or hidden flags set
Some tablet servers have unsafe, experimental, or hidden flags set

OK
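(Editor's note: the cluster health report above, from "Master Summary" through the flag warnings and the final "OK", is the output of Kudu's cluster consistency check. A minimal sketch of an equivalent manual invocation, assuming the single master address shown in the Master Summary; the test harness may pass additional options not visible in this log:

  kudu cluster ksck 127.0.106.190:35711
)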
I20250905 08:23:58.328938   426 log_verifier.cc:126] Checking tablet 339d359c12c14106aa07add6dbc3309f
I20250905 08:23:58.418573   426 log_verifier.cc:177] Verified matching terms for 14 ops in tablet 339d359c12c14106aa07add6dbc3309f
I20250905 08:23:58.418859   426 log_verifier.cc:126] Checking tablet 778e19d4e8684ca191e4734027cc9b36
I20250905 08:23:58.490523   426 log_verifier.cc:177] Verified matching terms for 11 ops in tablet 778e19d4e8684ca191e4734027cc9b36
I20250905 08:23:58.490716   426 log_verifier.cc:126] Checking tablet b3cc066ed2004a1390fef6fc0eb08162
I20250905 08:23:58.514508   426 log_verifier.cc:177] Verified matching terms for 7 ops in tablet b3cc066ed2004a1390fef6fc0eb08162
I20250905 08:23:58.514899   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 3696
I20250905 08:23:58.545152   426 minidump.cc:252] Setting minidump size limit to 20M
I20250905 08:23:58.546527   426 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250905 08:23:58.547839   426 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250905 08:23:58.557834  4297 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:23:58.559013  4298 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250905 08:23:58.559407   426 server_base.cc:1047] running on GCE node
W20250905 08:23:58.559772  4300 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250905 08:23:58.560690   426 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250905 08:23:58.560858   426 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250905 08:23:58.560973   426 hybrid_clock.cc:648] HybridClock initialized: now 1757060638560961 us; error 0 us; skew 500 ppm
I20250905 08:23:58.561450   426 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250905 08:23:58.564030   426 webserver.cc:480] Webserver started at http://0.0.0.0:43325/ using document root <none> and password file <none>
I20250905 08:23:58.564769   426 fs_manager.cc:362] Metadata directory not provided
I20250905 08:23:58.564934   426 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250905 08:23:58.569522   426 fs_manager.cc:714] Time spent opening directory manager: real 0.003s	user 0.005s	sys 0.000s
I20250905 08:23:58.572635  4305 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:23:58.573351   426 fs_manager.cc:730] Time spent opening block manager: real 0.002s	user 0.003s	sys 0.000s
I20250905 08:23:58.573594   426 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
uuid: "3b7c796ecec441dbaff957664cc80c2c"
format_stamp: "Formatted at 2025-09-05 08:23:35 on dist-test-slave-0x95"
I20250905 08:23:58.575109   426 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250905 08:23:58.593191   426 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250905 08:23:58.594326   426 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250905 08:23:58.594691   426 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250905 08:23:58.602108   426 sys_catalog.cc:263] Verifying existing consensus state
W20250905 08:23:58.605055   426 sys_catalog.cc:243] For a single master config, on-disk Raft master: 127.0.106.190:35711 exists but no master address supplied!
I20250905 08:23:58.606649   426 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c: Bootstrap starting.
I20250905 08:23:58.644069   426 log.cc:826] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c: Log is configured to *not* fsync() on all Append() calls
I20250905 08:23:58.706429   426 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c: Bootstrap replayed 1/1 log segments. Stats: ops{read=31 overwritten=0 applied=31 ignored=0} inserts{seen=13 ignored=0} mutations{seen=22 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250905 08:23:58.707142   426 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c: Bootstrap complete.
I20250905 08:23:58.719986   426 raft_consensus.cc:357] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [term 3 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "3b7c796ecec441dbaff957664cc80c2c" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 35711 } }
I20250905 08:23:58.720551   426 raft_consensus.cc:738] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [term 3 FOLLOWER]: Becoming Follower/Learner. State: Replica: 3b7c796ecec441dbaff957664cc80c2c, State: Initialized, Role: FOLLOWER
I20250905 08:23:58.721194   426 consensus_queue.cc:260] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 31, Last appended: 3.31, Last appended by leader: 31, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "3b7c796ecec441dbaff957664cc80c2c" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 35711 } }
I20250905 08:23:58.721648   426 raft_consensus.cc:397] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [term 3 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250905 08:23:58.721910   426 raft_consensus.cc:491] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [term 3 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250905 08:23:58.722216   426 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [term 3 FOLLOWER]: Advancing to term 4
I20250905 08:23:58.727138   426 raft_consensus.cc:513] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [term 4 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "3b7c796ecec441dbaff957664cc80c2c" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 35711 } }
I20250905 08:23:58.727766   426 leader_election.cc:304] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [CANDIDATE]: Term 4 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 3b7c796ecec441dbaff957664cc80c2c; no voters: 
I20250905 08:23:58.728981   426 leader_election.cc:290] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [CANDIDATE]: Term 4 election: Requested vote from peers 
I20250905 08:23:58.729250  4312 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [term 4 FOLLOWER]: Leader election won for term 4
I20250905 08:23:58.733060  4312 raft_consensus.cc:695] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [term 4 LEADER]: Becoming Leader. State: Replica: 3b7c796ecec441dbaff957664cc80c2c, State: Running, Role: LEADER
I20250905 08:23:58.733783  4312 consensus_queue.cc:237] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 31, Committed index: 31, Last appended: 3.31, Last appended by leader: 31, Current term: 4, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "3b7c796ecec441dbaff957664cc80c2c" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 35711 } }
I20250905 08:23:58.741271  4314 sys_catalog.cc:455] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [sys.catalog]: SysCatalogTable state changed. Reason: New leader 3b7c796ecec441dbaff957664cc80c2c. Latest consensus state: current_term: 4 leader_uuid: "3b7c796ecec441dbaff957664cc80c2c" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "3b7c796ecec441dbaff957664cc80c2c" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 35711 } } }
I20250905 08:23:58.741781  4314 sys_catalog.cc:458] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [sys.catalog]: This master's current role is: LEADER
I20250905 08:23:58.742532  4313 sys_catalog.cc:455] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 4 leader_uuid: "3b7c796ecec441dbaff957664cc80c2c" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "3b7c796ecec441dbaff957664cc80c2c" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 35711 } } }
I20250905 08:23:58.742916  4313 sys_catalog.cc:458] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [sys.catalog]: This master's current role is: LEADER
I20250905 08:23:58.773906   426 tablet_replica.cc:331] stopping tablet replica
I20250905 08:23:58.774487   426 raft_consensus.cc:2241] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [term 4 LEADER]: Raft consensus shutting down.
I20250905 08:23:58.774896   426 raft_consensus.cc:2270] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [term 4 FOLLOWER]: Raft consensus is shut down!
I20250905 08:23:58.777524   426 master.cc:561] Master@0.0.0.0:7051 shutting down...
W20250905 08:23:58.778062   426 acceptor_pool.cc:196] Could not shut down acceptor socket on 0.0.0.0:7051: Network error: shutdown error: Transport endpoint is not connected (error 107)
I20250905 08:23:58.893564   426 master.cc:583] Master@0.0.0.0:7051 shutdown complete.
W20250905 08:23:59.352197  4222 heartbeater.cc:646] Failed to heartbeat to 127.0.106.190:35711 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.0.106.190:35711: connect: Connection refused (error 111)
W20250905 08:23:59.361917  3919 heartbeater.cc:646] Failed to heartbeat to 127.0.106.190:35711 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.0.106.190:35711: connect: Connection refused (error 111)
W20250905 08:23:59.367203  4060 heartbeater.cc:646] Failed to heartbeat to 127.0.106.190:35711 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.0.106.190:35711: connect: Connection refused (error 111)
W20250905 08:23:59.847391  3912 debug-util.cc:398] Leaking SignalData structure 0x7b08000c32a0 after lost signal to thread 3786
W20250905 08:23:59.848445  3912 debug-util.cc:398] Leaking SignalData structure 0x7b08000c3320 after lost signal to thread 3915
I20250905 08:24:04.097739   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 3766
I20250905 08:24:04.124938   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 3922
I20250905 08:24:04.148859   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 4064
I20250905 08:24:04.175153   426 external_mini_cluster.cc:1366] Running /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
/tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.106.190:35711
--webserver_interface=127.0.106.190
--webserver_port=42537
--builtin_ntp_servers=127.0.106.148:41569
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.0.106.190:35711 with env {}
W20250905 08:24:04.448762  4387 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:24:04.449246  4387 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250905 08:24:04.449653  4387 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:24:04.477259  4387 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250905 08:24:04.477550  4387 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250905 08:24:04.477823  4387 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250905 08:24:04.478065  4387 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250905 08:24:04.511922  4387 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:41569
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.0.106.190:35711
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.106.190:35711
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.0.106.190
--webserver_port=42537
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true

Master server version:
kudu 1.19.0-SNAPSHOT
revision e2e116a9cb215058aad161061214d11c617cedac
build type FASTDEBUG
built by None at 05 Sep 2025 08:06:49 UTC on 24a791456cd2
build id 7909
TSAN enabled
I20250905 08:24:04.513046  4387 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250905 08:24:04.514438  4387 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250905 08:24:04.524729  4393 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:24:04.529031  4396 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:24:04.525413  4394 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:24:05.651611  4395 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1123 milliseconds
I20250905 08:24:05.651706  4387 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250905 08:24:05.652761  4387 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250905 08:24:05.655444  4387 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250905 08:24:05.656832  4387 hybrid_clock.cc:648] HybridClock initialized: now 1757060645656784 us; error 65 us; skew 500 ppm
I20250905 08:24:05.657541  4387 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250905 08:24:05.663403  4387 webserver.cc:480] Webserver started at http://127.0.106.190:42537/ using document root <none> and password file <none>
I20250905 08:24:05.664259  4387 fs_manager.cc:362] Metadata directory not provided
I20250905 08:24:05.664450  4387 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250905 08:24:05.671870  4387 fs_manager.cc:714] Time spent opening directory manager: real 0.005s	user 0.005s	sys 0.002s
I20250905 08:24:05.675565  4403 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:24:05.676476  4387 fs_manager.cc:730] Time spent opening block manager: real 0.003s	user 0.003s	sys 0.000s
I20250905 08:24:05.676759  4387 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
uuid: "3b7c796ecec441dbaff957664cc80c2c"
format_stamp: "Formatted at 2025-09-05 08:23:35 on dist-test-slave-0x95"
I20250905 08:24:05.678442  4387 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250905 08:24:05.721787  4387 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250905 08:24:05.722977  4387 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250905 08:24:05.723383  4387 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250905 08:24:05.786899  4387 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.190:35711
I20250905 08:24:05.786954  4454 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.190:35711 every 8 connection(s)
I20250905 08:24:05.789496  4387 server_base.cc:1179] Dumped server information to /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250905 08:24:05.796077   426 external_mini_cluster.cc:1428] Started /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu as pid 4387
I20250905 08:24:05.797994   426 external_mini_cluster.cc:1366] Running /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
/tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.129:42483
--local_ip_for_outbound_sockets=127.0.106.129
--tserver_master_addrs=127.0.106.190:35711
--webserver_port=34887
--webserver_interface=127.0.106.129
--builtin_ntp_servers=127.0.106.148:41569
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250905 08:24:05.800417  4455 sys_catalog.cc:263] Verifying existing consensus state
I20250905 08:24:05.807577  4455 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c: Bootstrap starting.
I20250905 08:24:05.816844  4455 log.cc:826] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c: Log is configured to *not* fsync() on all Append() calls
I20250905 08:24:05.888116  4455 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c: Bootstrap replayed 1/1 log segments. Stats: ops{read=35 overwritten=0 applied=35 ignored=0} inserts{seen=15 ignored=0} mutations{seen=24 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250905 08:24:05.888855  4455 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c: Bootstrap complete.
I20250905 08:24:05.905010  4455 raft_consensus.cc:357] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [term 5 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "3b7c796ecec441dbaff957664cc80c2c" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 35711 } }
I20250905 08:24:05.906816  4455 raft_consensus.cc:738] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [term 5 FOLLOWER]: Becoming Follower/Learner. State: Replica: 3b7c796ecec441dbaff957664cc80c2c, State: Initialized, Role: FOLLOWER
I20250905 08:24:05.907413  4455 consensus_queue.cc:260] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 35, Last appended: 5.35, Last appended by leader: 35, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "3b7c796ecec441dbaff957664cc80c2c" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 35711 } }
I20250905 08:24:05.907851  4455 raft_consensus.cc:397] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [term 5 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250905 08:24:05.908134  4455 raft_consensus.cc:491] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [term 5 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250905 08:24:05.908390  4455 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [term 5 FOLLOWER]: Advancing to term 6
I20250905 08:24:05.912549  4455 raft_consensus.cc:513] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [term 6 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "3b7c796ecec441dbaff957664cc80c2c" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 35711 } }
I20250905 08:24:05.913079  4455 leader_election.cc:304] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [CANDIDATE]: Term 6 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 3b7c796ecec441dbaff957664cc80c2c; no voters: 
I20250905 08:24:05.914644  4455 leader_election.cc:290] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [CANDIDATE]: Term 6 election: Requested vote from peers 
I20250905 08:24:05.915069  4459 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [term 6 FOLLOWER]: Leader election won for term 6
I20250905 08:24:05.917634  4459 raft_consensus.cc:695] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [term 6 LEADER]: Becoming Leader. State: Replica: 3b7c796ecec441dbaff957664cc80c2c, State: Running, Role: LEADER
I20250905 08:24:05.918686  4455 sys_catalog.cc:564] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [sys.catalog]: configured and running, proceeding with master startup.
I20250905 08:24:05.918516  4459 consensus_queue.cc:237] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 35, Committed index: 35, Last appended: 5.35, Last appended by leader: 35, Current term: 6, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "3b7c796ecec441dbaff957664cc80c2c" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 35711 } }
I20250905 08:24:05.930145  4461 sys_catalog.cc:455] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [sys.catalog]: SysCatalogTable state changed. Reason: New leader 3b7c796ecec441dbaff957664cc80c2c. Latest consensus state: current_term: 6 leader_uuid: "3b7c796ecec441dbaff957664cc80c2c" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "3b7c796ecec441dbaff957664cc80c2c" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 35711 } } }
I20250905 08:24:05.930038  4460 sys_catalog.cc:455] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 6 leader_uuid: "3b7c796ecec441dbaff957664cc80c2c" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "3b7c796ecec441dbaff957664cc80c2c" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 35711 } } }
I20250905 08:24:05.931155  4461 sys_catalog.cc:458] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [sys.catalog]: This master's current role is: LEADER
I20250905 08:24:05.931336  4460 sys_catalog.cc:458] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c [sys.catalog]: This master's current role is: LEADER
I20250905 08:24:05.934388  4467 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250905 08:24:05.945824  4467 catalog_manager.cc:671] Loaded metadata for table TestTable [id=0b714f08d18e45cca428de02144c6f28]
I20250905 08:24:05.947546  4467 catalog_manager.cc:671] Loaded metadata for table TestTable2 [id=4a36f6f3139f4dc1870aa0ec53f8c2ea]
I20250905 08:24:05.949225  4467 catalog_manager.cc:671] Loaded metadata for table TestTable1 [id=60cce2a86f5445939dd931e802a2d524]
I20250905 08:24:05.956647  4467 tablet_loader.cc:96] loaded metadata for tablet 339d359c12c14106aa07add6dbc3309f (table TestTable [id=0b714f08d18e45cca428de02144c6f28])
I20250905 08:24:05.958115  4467 tablet_loader.cc:96] loaded metadata for tablet 778e19d4e8684ca191e4734027cc9b36 (table TestTable1 [id=60cce2a86f5445939dd931e802a2d524])
I20250905 08:24:05.959497  4467 tablet_loader.cc:96] loaded metadata for tablet b3cc066ed2004a1390fef6fc0eb08162 (table TestTable2 [id=4a36f6f3139f4dc1870aa0ec53f8c2ea])
I20250905 08:24:05.960929  4467 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250905 08:24:05.966323  4467 catalog_manager.cc:1261] Loaded cluster ID: 005533fd8e8d4b80b3b44f16d93c1bfa
I20250905 08:24:05.966605  4467 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250905 08:24:05.974622  4467 catalog_manager.cc:1506] Loading token signing keys...
I20250905 08:24:05.979813  4467 catalog_manager.cc:5966] T 00000000000000000000000000000000 P 3b7c796ecec441dbaff957664cc80c2c: Loaded TSK: 0
I20250905 08:24:05.981209  4467 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250905 08:24:05.982010  4476 catalog_manager.cc:797] Waiting for catalog manager background task thread to start: Service unavailable: Catalog manager is not initialized. State: Starting
W20250905 08:24:06.107245  4457 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:24:06.107697  4457 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250905 08:24:06.108322  4457 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:24:06.137274  4457 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250905 08:24:06.138108  4457 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.129
I20250905 08:24:06.169432  4457 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:41569
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.129:42483
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.0.106.129
--webserver_port=34887
--tserver_master_addrs=127.0.106.190:35711
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.129
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision e2e116a9cb215058aad161061214d11c617cedac
build type FASTDEBUG
built by None at 05 Sep 2025 08:06:49 UTC on 24a791456cd2
build id 7909
TSAN enabled
I20250905 08:24:06.170615  4457 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250905 08:24:06.172058  4457 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250905 08:24:06.183779  4482 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:24:07.585525  4481 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 4457
W20250905 08:24:07.595139  4481 kernel_stack_watchdog.cc:198] Thread 4457 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:642 for 398ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250905 08:24:06.184782  4483 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:24:07.596532  4484 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1410 milliseconds
W20250905 08:24:07.596882  4457 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.412s	user 0.369s	sys 1.017s
W20250905 08:24:07.597173  4457 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.413s	user 0.369s	sys 1.017s
I20250905 08:24:07.598031  4457 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
W20250905 08:24:07.598109  4485 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250905 08:24:07.601186  4457 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250905 08:24:07.603104  4457 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250905 08:24:07.604434  4457 hybrid_clock.cc:648] HybridClock initialized: now 1757060647604388 us; error 51 us; skew 500 ppm
I20250905 08:24:07.605201  4457 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250905 08:24:07.611320  4457 webserver.cc:480] Webserver started at http://127.0.106.129:34887/ using document root <none> and password file <none>
I20250905 08:24:07.612131  4457 fs_manager.cc:362] Metadata directory not provided
I20250905 08:24:07.612334  4457 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250905 08:24:07.619247  4457 fs_manager.cc:714] Time spent opening directory manager: real 0.004s	user 0.006s	sys 0.000s
I20250905 08:24:07.624867  4492 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:24:07.625890  4457 fs_manager.cc:730] Time spent opening block manager: real 0.004s	user 0.002s	sys 0.001s
I20250905 08:24:07.626173  4457 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "ea76bcce6ea441cfa874b93546e60292"
format_stamp: "Formatted at 2025-09-05 08:23:37 on dist-test-slave-0x95"
I20250905 08:24:07.627972  4457 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250905 08:24:07.675182  4457 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250905 08:24:07.676595  4457 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250905 08:24:07.677001  4457 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250905 08:24:07.679464  4457 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250905 08:24:07.684863  4499 ts_tablet_manager.cc:542] Loading tablet metadata (0/3 complete)
I20250905 08:24:07.708232  4457 ts_tablet_manager.cc:579] Loaded tablet metadata (3 total tablets, 3 live tablets)
I20250905 08:24:07.708479  4457 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.025s	user 0.000s	sys 0.002s
I20250905 08:24:07.708724  4457 ts_tablet_manager.cc:594] Registering tablets (0/3 complete)
I20250905 08:24:07.713364  4499 tablet_bootstrap.cc:492] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292: Bootstrap starting.
I20250905 08:24:07.718539  4457 ts_tablet_manager.cc:610] Registered 3 tablets
I20250905 08:24:07.718734  4457 ts_tablet_manager.cc:589] Time spent register tablets: real 0.010s	user 0.008s	sys 0.000s
I20250905 08:24:07.776211  4499 log.cc:826] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292: Log is configured to *not* fsync() on all Append() calls
I20250905 08:24:07.907011  4457 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.129:42483
I20250905 08:24:07.907387  4499 tablet_bootstrap.cc:492] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292: Bootstrap replayed 1/1 log segments. Stats: ops{read=11 overwritten=0 applied=11 ignored=0} inserts{seen=250 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250905 08:24:07.907338  4606 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.129:42483 every 8 connection(s)
I20250905 08:24:07.908919  4499 tablet_bootstrap.cc:492] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292: Bootstrap complete.
I20250905 08:24:07.910470  4457 server_base.cc:1179] Dumped server information to /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250905 08:24:07.910467  4499 ts_tablet_manager.cc:1397] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292: Time spent bootstrapping tablet: real 0.198s	user 0.133s	sys 0.053s
I20250905 08:24:07.918232   426 external_mini_cluster.cc:1428] Started /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu as pid 4457
I20250905 08:24:07.920060   426 external_mini_cluster.cc:1366] Running /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
/tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.130:34297
--local_ip_for_outbound_sockets=127.0.106.130
--tserver_master_addrs=127.0.106.190:35711
--webserver_port=40331
--webserver_interface=127.0.106.130
--builtin_ntp_servers=127.0.106.148:41569
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250905 08:24:07.938629  4499 raft_consensus.cc:357] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292 [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } } peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } attrs { promote: false } }
I20250905 08:24:07.942896  4499 raft_consensus.cc:738] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292 [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: ea76bcce6ea441cfa874b93546e60292, State: Initialized, Role: FOLLOWER
I20250905 08:24:07.944180  4499 consensus_queue.cc:260] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 11, Last appended: 2.11, Last appended by leader: 11, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } } peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } attrs { promote: false } }
I20250905 08:24:07.955139  4499 ts_tablet_manager.cc:1428] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292: Time spent starting tablet: real 0.044s	user 0.034s	sys 0.007s
I20250905 08:24:07.956143  4499 tablet_bootstrap.cc:492] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292: Bootstrap starting.
I20250905 08:24:07.969640  4607 heartbeater.cc:344] Connected to a master server at 127.0.106.190:35711
I20250905 08:24:07.970050  4607 heartbeater.cc:461] Registering TS with master...
I20250905 08:24:07.971107  4607 heartbeater.cc:507] Master 127.0.106.190:35711 requested a full tablet report, sending...
I20250905 08:24:07.976011  4420 ts_manager.cc:194] Registered new tserver with Master: ea76bcce6ea441cfa874b93546e60292 (127.0.106.129:42483)
I20250905 08:24:07.981745  4420 catalog_manager.cc:5582] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292 reported cstate change: config changed from index -1 to 14, term changed from 0 to 2, VOTER 4343ba2e6dc5477b8097b27ea603906b (127.0.106.131) added, VOTER cad2e126b3ba40d9ad88ef5ddc39bb00 (127.0.106.130) added, VOTER ea76bcce6ea441cfa874b93546e60292 (127.0.106.129) added. New cstate: current_term: 2 committed_config { opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } } peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } attrs { promote: false } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } } }
I20250905 08:24:08.047564  4420 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.129:46143
I20250905 08:24:08.051623  4607 heartbeater.cc:499] Master 127.0.106.190:35711 was elected leader, sending a full tablet report...
I20250905 08:24:08.104354  4499 tablet_bootstrap.cc:492] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292: Bootstrap replayed 1/1 log segments. Stats: ops{read=14 overwritten=0 applied=14 ignored=0} inserts{seen=400 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250905 08:24:08.104974  4499 tablet_bootstrap.cc:492] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292: Bootstrap complete.
I20250905 08:24:08.106140  4499 ts_tablet_manager.cc:1397] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292: Time spent bootstrapping tablet: real 0.150s	user 0.135s	sys 0.012s
I20250905 08:24:08.107580  4499 raft_consensus.cc:357] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292 [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } } peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } attrs { promote: false } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } }
I20250905 08:24:08.108034  4499 raft_consensus.cc:738] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292 [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: ea76bcce6ea441cfa874b93546e60292, State: Initialized, Role: FOLLOWER
I20250905 08:24:08.108651  4499 consensus_queue.cc:260] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 14, Last appended: 2.14, Last appended by leader: 14, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } } peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } attrs { promote: false } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } }
I20250905 08:24:08.110299  4499 ts_tablet_manager.cc:1428] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292: Time spent starting tablet: real 0.004s	user 0.005s	sys 0.000s
I20250905 08:24:08.110862  4499 tablet_bootstrap.cc:492] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292: Bootstrap starting.
I20250905 08:24:08.214113  4499 tablet_bootstrap.cc:492] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292: Bootstrap replayed 1/1 log segments. Stats: ops{read=7 overwritten=0 applied=7 ignored=0} inserts{seen=250 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250905 08:24:08.214911  4499 tablet_bootstrap.cc:492] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292: Bootstrap complete.
I20250905 08:24:08.216235  4499 ts_tablet_manager.cc:1397] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292: Time spent bootstrapping tablet: real 0.105s	user 0.097s	sys 0.007s
I20250905 08:24:08.218159  4499 raft_consensus.cc:357] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292 [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } }
I20250905 08:24:08.218657  4499 raft_consensus.cc:738] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292 [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: ea76bcce6ea441cfa874b93546e60292, State: Initialized, Role: FOLLOWER
I20250905 08:24:08.219213  4499 consensus_queue.cc:260] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 7, Last appended: 2.7, Last appended by leader: 7, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } }
I20250905 08:24:08.219843  4499 raft_consensus.cc:397] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292 [term 2 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250905 08:24:08.220220  4499 raft_consensus.cc:491] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292 [term 2 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250905 08:24:08.220575  4499 raft_consensus.cc:3058] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292 [term 2 FOLLOWER]: Advancing to term 3
I20250905 08:24:08.228072  4499 raft_consensus.cc:513] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292 [term 3 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } }
I20250905 08:24:08.228917  4499 leader_election.cc:304] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292 [CANDIDATE]: Term 3 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: ea76bcce6ea441cfa874b93546e60292; no voters: 
I20250905 08:24:08.229550  4499 leader_election.cc:290] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292 [CANDIDATE]: Term 3 election: Requested vote from peers 
I20250905 08:24:08.229794  4612 raft_consensus.cc:2802] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292 [term 3 FOLLOWER]: Leader election won for term 3
I20250905 08:24:08.238307  4612 raft_consensus.cc:695] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292 [term 3 LEADER]: Becoming Leader. State: Replica: ea76bcce6ea441cfa874b93546e60292, State: Running, Role: LEADER
I20250905 08:24:08.239214  4612 consensus_queue.cc:237] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 7, Committed index: 7, Last appended: 2.7, Last appended by leader: 7, Current term: 3, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } }
I20250905 08:24:08.254038  4499 ts_tablet_manager.cc:1428] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292: Time spent starting tablet: real 0.038s	user 0.017s	sys 0.020s
I20250905 08:24:08.254434  4420 catalog_manager.cc:5582] T b3cc066ed2004a1390fef6fc0eb08162 P ea76bcce6ea441cfa874b93546e60292 reported cstate change: term changed from 2 to 3. New cstate: current_term: 3 leader_uuid: "ea76bcce6ea441cfa874b93546e60292" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } health_report { overall_health: HEALTHY } } }
W20250905 08:24:08.341820  4611 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:24:08.342355  4611 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250905 08:24:08.342850  4611 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:24:08.375321  4611 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250905 08:24:08.376228  4611 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.130
I20250905 08:24:08.410774  4611 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:41569
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.130:34297
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.0.106.130
--webserver_port=40331
--tserver_master_addrs=127.0.106.190:35711
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.130
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision e2e116a9cb215058aad161061214d11c617cedac
build type FASTDEBUG
built by None at 05 Sep 2025 08:06:49 UTC on 24a791456cd2
build id 7909
TSAN enabled
I20250905 08:24:08.412075  4611 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250905 08:24:08.413604  4611 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250905 08:24:08.425699  4629 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250905 08:24:09.428316  4635 raft_consensus.cc:491] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292 [term 2 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250905 08:24:09.429195  4635 raft_consensus.cc:513] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292 [term 2 FOLLOWER]: Starting pre-election with config: opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } } peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } attrs { promote: false } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } }
W20250905 08:24:09.438824  4495 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.0.106.131:40383: connect: Connection refused (error 111)
I20250905 08:24:09.440492  4635 leader_election.cc:290] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292 [CANDIDATE]: Term 3 pre-election: Requested pre-vote from peers 4343ba2e6dc5477b8097b27ea603906b (127.0.106.131:40383), cad2e126b3ba40d9ad88ef5ddc39bb00 (127.0.106.130:34297)
W20250905 08:24:09.448611  4495 leader_election.cc:336] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292 [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer 4343ba2e6dc5477b8097b27ea603906b (127.0.106.131:40383): Network error: Client connection negotiation failed: client connection to 127.0.106.131:40383: connect: Connection refused (error 111)
W20250905 08:24:09.449360  4495 leader_election.cc:336] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292 [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer cad2e126b3ba40d9ad88ef5ddc39bb00 (127.0.106.130:34297): Network error: Client connection negotiation failed: client connection to 127.0.106.130:34297: connect: Connection refused (error 111)
I20250905 08:24:09.449955  4495 leader_election.cc:304] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292 [CANDIDATE]: Term 3 pre-election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: ea76bcce6ea441cfa874b93546e60292; no voters: 4343ba2e6dc5477b8097b27ea603906b, cad2e126b3ba40d9ad88ef5ddc39bb00
I20250905 08:24:09.450999  4635 raft_consensus.cc:2747] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292 [term 2 FOLLOWER]: Leader pre-election lost for term 3. Reason: could not achieve majority
I20250905 08:24:09.741605  4635 raft_consensus.cc:491] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292 [term 2 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250905 08:24:09.742218  4635 raft_consensus.cc:513] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292 [term 2 FOLLOWER]: Starting pre-election with config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } } peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } attrs { promote: false } }
I20250905 08:24:09.744705  4635 leader_election.cc:290] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292 [CANDIDATE]: Term 3 pre-election: Requested pre-vote from peers cad2e126b3ba40d9ad88ef5ddc39bb00 (127.0.106.130:34297), 4343ba2e6dc5477b8097b27ea603906b (127.0.106.131:40383)
W20250905 08:24:09.758626  4495 leader_election.cc:336] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292 [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer cad2e126b3ba40d9ad88ef5ddc39bb00 (127.0.106.130:34297): Network error: Client connection negotiation failed: client connection to 127.0.106.130:34297: connect: Connection refused (error 111)
W20250905 08:24:09.759502  4495 leader_election.cc:336] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292 [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer 4343ba2e6dc5477b8097b27ea603906b (127.0.106.131:40383): Network error: Client connection negotiation failed: client connection to 127.0.106.131:40383: connect: Connection refused (error 111)
I20250905 08:24:09.760067  4495 leader_election.cc:304] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292 [CANDIDATE]: Term 3 pre-election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: ea76bcce6ea441cfa874b93546e60292; no voters: 4343ba2e6dc5477b8097b27ea603906b, cad2e126b3ba40d9ad88ef5ddc39bb00
I20250905 08:24:09.760917  4635 raft_consensus.cc:2747] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292 [term 2 FOLLOWER]: Leader pre-election lost for term 3. Reason: could not achieve majority
W20250905 08:24:09.828789  4628 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 4611
W20250905 08:24:10.103220  4628 kernel_stack_watchdog.cc:198] Thread 4611 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:642 for 398ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250905 08:24:08.426798  4630 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:24:10.104252  4611 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.676s	user 0.002s	sys 0.000s
W20250905 08:24:10.104532  4611 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.677s	user 0.002s	sys 0.000s
W20250905 08:24:10.108568  4631 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1681 milliseconds
W20250905 08:24:10.109405  4633 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250905 08:24:10.109472  4611 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250905 08:24:10.110522  4611 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250905 08:24:10.112653  4611 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250905 08:24:10.114070  4611 hybrid_clock.cc:648] HybridClock initialized: now 1757060650114023 us; error 51 us; skew 500 ppm
I20250905 08:24:10.114748  4611 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250905 08:24:10.120301  4611 webserver.cc:480] Webserver started at http://127.0.106.130:40331/ using document root <none> and password file <none>
I20250905 08:24:10.121102  4611 fs_manager.cc:362] Metadata directory not provided
I20250905 08:24:10.121310  4611 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250905 08:24:10.128041  4611 fs_manager.cc:714] Time spent opening directory manager: real 0.004s	user 0.005s	sys 0.002s
I20250905 08:24:10.132314  4642 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:24:10.133266  4611 fs_manager.cc:730] Time spent opening block manager: real 0.003s	user 0.000s	sys 0.004s
I20250905 08:24:10.133563  4611 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00"
format_stamp: "Formatted at 2025-09-05 08:23:39 on dist-test-slave-0x95"
I20250905 08:24:10.135337  4611 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250905 08:24:10.181442  4611 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250905 08:24:10.182818  4611 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250905 08:24:10.183233  4611 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250905 08:24:10.185518  4611 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250905 08:24:10.190750  4649 ts_tablet_manager.cc:542] Loading tablet metadata (0/2 complete)
I20250905 08:24:10.201987  4611 ts_tablet_manager.cc:579] Loaded tablet metadata (2 total tablets, 2 live tablets)
I20250905 08:24:10.202212  4611 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.013s	user 0.002s	sys 0.000s
I20250905 08:24:10.202490  4611 ts_tablet_manager.cc:594] Registering tablets (0/2 complete)
I20250905 08:24:10.207401  4649 tablet_bootstrap.cc:492] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00: Bootstrap starting.
I20250905 08:24:10.210180  4611 ts_tablet_manager.cc:610] Registered 2 tablets
I20250905 08:24:10.210379  4611 ts_tablet_manager.cc:589] Time spent register tablets: real 0.008s	user 0.006s	sys 0.000s
I20250905 08:24:10.257795  4649 log.cc:826] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00: Log is configured to *not* fsync() on all Append() calls
I20250905 08:24:10.363977  4611 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.130:34297
I20250905 08:24:10.364125  4756 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.130:34297 every 8 connection(s)
I20250905 08:24:10.367249  4611 server_base.cc:1179] Dumped server information to /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250905 08:24:10.370213   426 external_mini_cluster.cc:1428] Started /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu as pid 4611
I20250905 08:24:10.372393   426 external_mini_cluster.cc:1366] Running /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
/tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.131:40383
--local_ip_for_outbound_sockets=127.0.106.131
--tserver_master_addrs=127.0.106.190:35711
--webserver_port=45505
--webserver_interface=127.0.106.131
--builtin_ntp_servers=127.0.106.148:41569
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250905 08:24:10.381357  4649 tablet_bootstrap.cc:492] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00: Bootstrap replayed 1/1 log segments. Stats: ops{read=11 overwritten=0 applied=11 ignored=0} inserts{seen=250 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250905 08:24:10.382472  4649 tablet_bootstrap.cc:492] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00: Bootstrap complete.
I20250905 08:24:10.384413  4649 ts_tablet_manager.cc:1397] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00: Time spent bootstrapping tablet: real 0.177s	user 0.137s	sys 0.039s
I20250905 08:24:10.402503  4649 raft_consensus.cc:357] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } } peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } attrs { promote: false } }
I20250905 08:24:10.404708  4649 raft_consensus.cc:738] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: cad2e126b3ba40d9ad88ef5ddc39bb00, State: Initialized, Role: FOLLOWER
I20250905 08:24:10.405444  4649 consensus_queue.cc:260] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 11, Last appended: 2.11, Last appended by leader: 11, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } } peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } attrs { promote: false } }
I20250905 08:24:10.409114  4649 ts_tablet_manager.cc:1428] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00: Time spent starting tablet: real 0.024s	user 0.018s	sys 0.000s
I20250905 08:24:10.409794  4649 tablet_bootstrap.cc:492] T 339d359c12c14106aa07add6dbc3309f P cad2e126b3ba40d9ad88ef5ddc39bb00: Bootstrap starting.
I20250905 08:24:10.411062  4757 heartbeater.cc:344] Connected to a master server at 127.0.106.190:35711
I20250905 08:24:10.411502  4757 heartbeater.cc:461] Registering TS with master...
I20250905 08:24:10.412626  4757 heartbeater.cc:507] Master 127.0.106.190:35711 requested a full tablet report, sending...
I20250905 08:24:10.416452  4420 ts_manager.cc:194] Registered new tserver with Master: cad2e126b3ba40d9ad88ef5ddc39bb00 (127.0.106.130:34297)
I20250905 08:24:10.420341  4420 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.130:49837
I20250905 08:24:10.423365  4757 heartbeater.cc:499] Master 127.0.106.190:35711 was elected leader, sending a full tablet report...
I20250905 08:24:10.520962  4649 tablet_bootstrap.cc:492] T 339d359c12c14106aa07add6dbc3309f P cad2e126b3ba40d9ad88ef5ddc39bb00: Bootstrap replayed 1/1 log segments. Stats: ops{read=14 overwritten=0 applied=14 ignored=0} inserts{seen=400 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250905 08:24:10.521567  4649 tablet_bootstrap.cc:492] T 339d359c12c14106aa07add6dbc3309f P cad2e126b3ba40d9ad88ef5ddc39bb00: Bootstrap complete.
I20250905 08:24:10.522593  4649 ts_tablet_manager.cc:1397] T 339d359c12c14106aa07add6dbc3309f P cad2e126b3ba40d9ad88ef5ddc39bb00: Time spent bootstrapping tablet: real 0.113s	user 0.102s	sys 0.008s
I20250905 08:24:10.523980  4649 raft_consensus.cc:357] T 339d359c12c14106aa07add6dbc3309f P cad2e126b3ba40d9ad88ef5ddc39bb00 [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } } peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } attrs { promote: false } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } }
I20250905 08:24:10.524363  4649 raft_consensus.cc:738] T 339d359c12c14106aa07add6dbc3309f P cad2e126b3ba40d9ad88ef5ddc39bb00 [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: cad2e126b3ba40d9ad88ef5ddc39bb00, State: Initialized, Role: FOLLOWER
I20250905 08:24:10.524818  4649 consensus_queue.cc:260] T 339d359c12c14106aa07add6dbc3309f P cad2e126b3ba40d9ad88ef5ddc39bb00 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 14, Last appended: 2.14, Last appended by leader: 14, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } } peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } attrs { promote: false } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } }
I20250905 08:24:10.526010  4649 ts_tablet_manager.cc:1428] T 339d359c12c14106aa07add6dbc3309f P cad2e126b3ba40d9ad88ef5ddc39bb00: Time spent starting tablet: real 0.003s	user 0.004s	sys 0.000s
W20250905 08:24:10.676422  4761 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:24:10.676846  4761 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250905 08:24:10.677346  4761 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:24:10.704381  4761 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250905 08:24:10.705157  4761 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.131
I20250905 08:24:10.735965  4761 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:41569
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.131:40383
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.0.106.131
--webserver_port=45505
--tserver_master_addrs=127.0.106.190:35711
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.131
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision e2e116a9cb215058aad161061214d11c617cedac
build type FASTDEBUG
built by None at 05 Sep 2025 08:06:49 UTC on 24a791456cd2
build id 7909
TSAN enabled
I20250905 08:24:10.737102  4761 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250905 08:24:10.738639  4761 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250905 08:24:10.749118  4769 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250905 08:24:11.454674  4775 raft_consensus.cc:491] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292 [term 2 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250905 08:24:11.458395  4775 raft_consensus.cc:513] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292 [term 2 FOLLOWER]: Starting pre-election with config: opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } } peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } attrs { promote: false } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } }
I20250905 08:24:11.467561  4775 leader_election.cc:290] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292 [CANDIDATE]: Term 3 pre-election: Requested pre-vote from peers 4343ba2e6dc5477b8097b27ea603906b (127.0.106.131:40383), cad2e126b3ba40d9ad88ef5ddc39bb00 (127.0.106.130:34297)
W20250905 08:24:11.489777  4495 leader_election.cc:336] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292 [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer 4343ba2e6dc5477b8097b27ea603906b (127.0.106.131:40383): Network error: Client connection negotiation failed: client connection to 127.0.106.131:40383: connect: Connection refused (error 111)
I20250905 08:24:11.500808  4712 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "339d359c12c14106aa07add6dbc3309f" candidate_uuid: "ea76bcce6ea441cfa874b93546e60292" candidate_term: 3 candidate_status { last_received { term: 2 index: 14 } } ignore_live_leader: false dest_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" is_pre_election: true
I20250905 08:24:11.501576  4712 raft_consensus.cc:2466] T 339d359c12c14106aa07add6dbc3309f P cad2e126b3ba40d9ad88ef5ddc39bb00 [term 2 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate ea76bcce6ea441cfa874b93546e60292 in term 2.
I20250905 08:24:11.503603  4495 leader_election.cc:304] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292 [CANDIDATE]: Term 3 pre-election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: cad2e126b3ba40d9ad88ef5ddc39bb00, ea76bcce6ea441cfa874b93546e60292; no voters: 4343ba2e6dc5477b8097b27ea603906b
I20250905 08:24:11.504864  4775 raft_consensus.cc:2802] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292 [term 2 FOLLOWER]: Leader pre-election won for term 3
I20250905 08:24:11.505240  4775 raft_consensus.cc:491] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292 [term 2 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250905 08:24:11.505594  4775 raft_consensus.cc:3058] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292 [term 2 FOLLOWER]: Advancing to term 3
I20250905 08:24:11.514256  4775 raft_consensus.cc:513] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292 [term 3 FOLLOWER]: Starting leader election with config: opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } } peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } attrs { promote: false } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } }
W20250905 08:24:11.521598  4495 leader_election.cc:336] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292 [CANDIDATE]: Term 3 election: RPC error from VoteRequest() call to peer 4343ba2e6dc5477b8097b27ea603906b (127.0.106.131:40383): Network error: Client connection negotiation failed: client connection to 127.0.106.131:40383: connect: Connection refused (error 111)
I20250905 08:24:11.523864  4712 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "339d359c12c14106aa07add6dbc3309f" candidate_uuid: "ea76bcce6ea441cfa874b93546e60292" candidate_term: 3 candidate_status { last_received { term: 2 index: 14 } } ignore_live_leader: false dest_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00"
I20250905 08:24:11.524487  4712 raft_consensus.cc:3058] T 339d359c12c14106aa07add6dbc3309f P cad2e126b3ba40d9ad88ef5ddc39bb00 [term 2 FOLLOWER]: Advancing to term 3
I20250905 08:24:11.531920  4775 leader_election.cc:290] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292 [CANDIDATE]: Term 3 election: Requested vote from peers 4343ba2e6dc5477b8097b27ea603906b (127.0.106.131:40383), cad2e126b3ba40d9ad88ef5ddc39bb00 (127.0.106.130:34297)
I20250905 08:24:11.539950  4712 raft_consensus.cc:2466] T 339d359c12c14106aa07add6dbc3309f P cad2e126b3ba40d9ad88ef5ddc39bb00 [term 3 FOLLOWER]: Leader election vote request: Granting yes vote for candidate ea76bcce6ea441cfa874b93546e60292 in term 3.
I20250905 08:24:11.541270  4495 leader_election.cc:304] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292 [CANDIDATE]: Term 3 election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: cad2e126b3ba40d9ad88ef5ddc39bb00, ea76bcce6ea441cfa874b93546e60292; no voters: 4343ba2e6dc5477b8097b27ea603906b
I20250905 08:24:11.542475  4775 raft_consensus.cc:2802] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292 [term 3 FOLLOWER]: Leader election won for term 3
I20250905 08:24:11.551723  4775 raft_consensus.cc:695] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292 [term 3 LEADER]: Becoming Leader. State: Replica: ea76bcce6ea441cfa874b93546e60292, State: Running, Role: LEADER
I20250905 08:24:11.552850  4775 consensus_queue.cc:237] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 14, Committed index: 14, Last appended: 2.14, Last appended by leader: 14, Current term: 3, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } } peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } attrs { promote: false } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } }
I20250905 08:24:11.566902  4420 catalog_manager.cc:5582] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292 reported cstate change: term changed from 2 to 3, leader changed from <none> to ea76bcce6ea441cfa874b93546e60292 (127.0.106.129). New cstate: current_term: 3 leader_uuid: "ea76bcce6ea441cfa874b93546e60292" committed_config { opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } attrs { promote: false } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } health_report { overall_health: HEALTHY } } }
I20250905 08:24:11.849551  4786 raft_consensus.cc:491] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [term 2 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250905 08:24:11.850340  4786 raft_consensus.cc:513] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [term 2 FOLLOWER]: Starting pre-election with config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } } peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } attrs { promote: false } }
I20250905 08:24:11.876880  4786 leader_election.cc:290] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [CANDIDATE]: Term 3 pre-election: Requested pre-vote from peers ea76bcce6ea441cfa874b93546e60292 (127.0.106.129:42483), 4343ba2e6dc5477b8097b27ea603906b (127.0.106.131:40383)
W20250905 08:24:11.909041  4645 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.0.106.131:40383: connect: Connection refused (error 111)
I20250905 08:24:11.915043  4775 raft_consensus.cc:491] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292 [term 2 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250905 08:24:11.915622  4775 raft_consensus.cc:513] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292 [term 2 FOLLOWER]: Starting pre-election with config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } } peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } attrs { promote: false } }
I20250905 08:24:11.917783  4775 leader_election.cc:290] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292 [CANDIDATE]: Term 3 pre-election: Requested pre-vote from peers cad2e126b3ba40d9ad88ef5ddc39bb00 (127.0.106.130:34297), 4343ba2e6dc5477b8097b27ea603906b (127.0.106.131:40383)
I20250905 08:24:11.921911  4712 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "778e19d4e8684ca191e4734027cc9b36" candidate_uuid: "ea76bcce6ea441cfa874b93546e60292" candidate_term: 3 candidate_status { last_received { term: 2 index: 11 } } ignore_live_leader: false dest_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" is_pre_election: true
I20250905 08:24:11.922646  4712 raft_consensus.cc:2466] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [term 2 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate ea76bcce6ea441cfa874b93546e60292 in term 2.
I20250905 08:24:11.924741  4495 leader_election.cc:304] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292 [CANDIDATE]: Term 3 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: cad2e126b3ba40d9ad88ef5ddc39bb00, ea76bcce6ea441cfa874b93546e60292; no voters: 
I20250905 08:24:11.928462  4775 raft_consensus.cc:2802] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292 [term 2 FOLLOWER]: Leader pre-election won for term 3
I20250905 08:24:11.928911  4775 raft_consensus.cc:491] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292 [term 2 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250905 08:24:11.929419  4775 raft_consensus.cc:3058] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292 [term 2 FOLLOWER]: Advancing to term 3
W20250905 08:24:11.934258  4495 leader_election.cc:336] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292 [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer 4343ba2e6dc5477b8097b27ea603906b (127.0.106.131:40383): Network error: Client connection negotiation failed: client connection to 127.0.106.131:40383: connect: Connection refused (error 111)
W20250905 08:24:11.950737  4645 leader_election.cc:336] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer 4343ba2e6dc5477b8097b27ea603906b (127.0.106.131:40383): Network error: Client connection negotiation failed: client connection to 127.0.106.131:40383: connect: Connection refused (error 111)
I20250905 08:24:11.952700  4775 raft_consensus.cc:513] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292 [term 3 FOLLOWER]: Starting leader election with config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } } peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } attrs { promote: false } }
I20250905 08:24:11.956239  4562 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "778e19d4e8684ca191e4734027cc9b36" candidate_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" candidate_term: 3 candidate_status { last_received { term: 2 index: 11 } } ignore_live_leader: false dest_uuid: "ea76bcce6ea441cfa874b93546e60292" is_pre_election: true
I20250905 08:24:11.960005  4562 raft_consensus.cc:2391] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292 [term 3 FOLLOWER]: Leader pre-election vote request: Denying vote to candidate cad2e126b3ba40d9ad88ef5ddc39bb00 in current term 3: Already voted for candidate ea76bcce6ea441cfa874b93546e60292 in this term.
I20250905 08:24:11.962280  4645 leader_election.cc:304] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [CANDIDATE]: Term 3 pre-election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: cad2e126b3ba40d9ad88ef5ddc39bb00; no voters: 4343ba2e6dc5477b8097b27ea603906b, ea76bcce6ea441cfa874b93546e60292
I20250905 08:24:11.963421  4786 raft_consensus.cc:3058] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [term 2 FOLLOWER]: Advancing to term 3
I20250905 08:24:11.974355  4786 raft_consensus.cc:2747] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [term 3 FOLLOWER]: Leader pre-election lost for term 3. Reason: could not achieve majority
I20250905 08:24:11.977633  4712 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "778e19d4e8684ca191e4734027cc9b36" candidate_uuid: "ea76bcce6ea441cfa874b93546e60292" candidate_term: 3 candidate_status { last_received { term: 2 index: 11 } } ignore_live_leader: false dest_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00"
I20250905 08:24:11.988837  4712 raft_consensus.cc:2466] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [term 3 FOLLOWER]: Leader election vote request: Granting yes vote for candidate ea76bcce6ea441cfa874b93546e60292 in term 3.
W20250905 08:24:11.995637  4495 leader_election.cc:336] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292 [CANDIDATE]: Term 3 election: RPC error from VoteRequest() call to peer 4343ba2e6dc5477b8097b27ea603906b (127.0.106.131:40383): Network error: Client connection negotiation failed: client connection to 127.0.106.131:40383: connect: Connection refused (error 111)
I20250905 08:24:11.997645  4775 leader_election.cc:290] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292 [CANDIDATE]: Term 3 election: Requested vote from peers cad2e126b3ba40d9ad88ef5ddc39bb00 (127.0.106.130:34297), 4343ba2e6dc5477b8097b27ea603906b (127.0.106.131:40383)
I20250905 08:24:11.998966  4495 leader_election.cc:304] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292 [CANDIDATE]: Term 3 election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: cad2e126b3ba40d9ad88ef5ddc39bb00, ea76bcce6ea441cfa874b93546e60292; no voters: 4343ba2e6dc5477b8097b27ea603906b
I20250905 08:24:12.008711  4780 raft_consensus.cc:2802] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292 [term 3 FOLLOWER]: Leader election won for term 3
I20250905 08:24:12.019001  4780 raft_consensus.cc:695] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292 [term 3 LEADER]: Becoming Leader. State: Replica: ea76bcce6ea441cfa874b93546e60292, State: Running, Role: LEADER
I20250905 08:24:12.022121  4780 consensus_queue.cc:237] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 11, Committed index: 11, Last appended: 2.11, Last appended by leader: 11, Current term: 3, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } } peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } attrs { promote: false } }
W20250905 08:24:12.035231  4495 consensus_peers.cc:489] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292 -> Peer 4343ba2e6dc5477b8097b27ea603906b (127.0.106.131:40383): Couldn't send request to peer 4343ba2e6dc5477b8097b27ea603906b. Status: Network error: Client connection negotiation failed: client connection to 127.0.106.131:40383: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20250905 08:24:12.045265  4420 catalog_manager.cc:5582] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292 reported cstate change: term changed from 2 to 3, leader changed from cad2e126b3ba40d9ad88ef5ddc39bb00 (127.0.106.130) to ea76bcce6ea441cfa874b93546e60292 (127.0.106.129). New cstate: current_term: 3 leader_uuid: "ea76bcce6ea441cfa874b93546e60292" committed_config { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } attrs { promote: false } health_report { overall_health: UNKNOWN } } }
I20250905 08:24:12.156881  4712 raft_consensus.cc:1273] T 339d359c12c14106aa07add6dbc3309f P cad2e126b3ba40d9ad88ef5ddc39bb00 [term 3 FOLLOWER]: Refusing update from remote peer ea76bcce6ea441cfa874b93546e60292: Log matching property violated. Preceding OpId in replica: term: 2 index: 14. Preceding OpId from leader: term: 3 index: 15. (index mismatch)
I20250905 08:24:12.159490  4780 consensus_queue.cc:1035] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292 [LEADER]: Connected to new peer: Peer: permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 15, Last known committed idx: 14, Time since last communication: 0.001s
W20250905 08:24:12.153681  4768 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 4761
W20250905 08:24:12.206946  4768 kernel_stack_watchdog.cc:198] Thread 4761 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:642 for 401ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250905 08:24:12.207298  4761 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.458s	user 0.519s	sys 0.932s
W20250905 08:24:12.207612  4761 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.458s	user 0.519s	sys 0.932s
W20250905 08:24:12.207870  4771 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1455 milliseconds
W20250905 08:24:10.749931  4770 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:24:12.208956  4772 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250905 08:24:12.209390  4761 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250905 08:24:12.213675  4761 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250905 08:24:12.216333  4761 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250905 08:24:12.217744  4761 hybrid_clock.cc:648] HybridClock initialized: now 1757060652217711 us; error 33 us; skew 500 ppm
I20250905 08:24:12.218680  4761 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250905 08:24:12.225960  4562 consensus_queue.cc:237] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 15, Committed index: 15, Last appended: 3.15, Last appended by leader: 14, Current term: 3, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 16 OBSOLETE_local: true peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } attrs { promote: false } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } }
I20250905 08:24:12.226486  4761 webserver.cc:480] Webserver started at http://127.0.106.131:45505/ using document root <none> and password file <none>
I20250905 08:24:12.227346  4761 fs_manager.cc:362] Metadata directory not provided
I20250905 08:24:12.227538  4761 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250905 08:24:12.230396  4711 raft_consensus.cc:1273] T 339d359c12c14106aa07add6dbc3309f P cad2e126b3ba40d9ad88ef5ddc39bb00 [term 3 FOLLOWER]: Refusing update from remote peer ea76bcce6ea441cfa874b93546e60292: Log matching property violated. Preceding OpId in replica: term: 3 index: 15. Preceding OpId from leader: term: 3 index: 16. (index mismatch)
I20250905 08:24:12.231451  4780 consensus_queue.cc:1035] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292 [LEADER]: Connected to new peer: Peer: permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 16, Last known committed idx: 15, Time since last communication: 0.000s
I20250905 08:24:12.237012  4761 fs_manager.cc:714] Time spent opening directory manager: real 0.006s	user 0.005s	sys 0.002s
I20250905 08:24:12.236702  4775 raft_consensus.cc:2953] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292 [term 3 LEADER]: Committing config change with OpId 3.16: config changed from index 14 to 16, VOTER 4343ba2e6dc5477b8097b27ea603906b (127.0.106.131) evicted. New config: { opid_index: 16 OBSOLETE_local: true peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } attrs { promote: false } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } } }
I20250905 08:24:12.237973  4711 raft_consensus.cc:2953] T 339d359c12c14106aa07add6dbc3309f P cad2e126b3ba40d9ad88ef5ddc39bb00 [term 3 FOLLOWER]: Committing config change with OpId 3.16: config changed from index 14 to 16, VOTER 4343ba2e6dc5477b8097b27ea603906b (127.0.106.131) evicted. New config: { opid_index: 16 OBSOLETE_local: true peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } attrs { promote: false } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } } }
I20250905 08:24:12.242345  4406 catalog_manager.cc:5095] ChangeConfig:REMOVE_PEER RPC for tablet 339d359c12c14106aa07add6dbc3309f with cas_config_opid_index 14: ChangeConfig:REMOVE_PEER succeeded (attempt 1)
I20250905 08:24:12.243347  4801 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:24:12.244359  4761 fs_manager.cc:730] Time spent opening block manager: real 0.005s	user 0.004s	sys 0.000s
I20250905 08:24:12.244660  4761 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "4343ba2e6dc5477b8097b27ea603906b"
format_stamp: "Formatted at 2025-09-05 08:23:42 on dist-test-slave-0x95"
I20250905 08:24:12.246907  4761 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250905 08:24:12.247172  4420 catalog_manager.cc:5582] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292 reported cstate change: config changed from index 14 to 16, VOTER 4343ba2e6dc5477b8097b27ea603906b (127.0.106.131) evicted. New cstate: current_term: 3 leader_uuid: "ea76bcce6ea441cfa874b93546e60292" committed_config { opid_index: 16 OBSOLETE_local: true peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } attrs { promote: false } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } health_report { overall_health: HEALTHY } } }
W20250905 08:24:12.254829  4420 catalog_manager.cc:5774] Failed to send DeleteTablet RPC for tablet 339d359c12c14106aa07add6dbc3309f on TS 4343ba2e6dc5477b8097b27ea603906b: Not found: failed to reset TS proxy: Could not find TS for UUID 4343ba2e6dc5477b8097b27ea603906b
I20250905 08:24:12.257714  4562 consensus_queue.cc:237] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 16, Committed index: 16, Last appended: 3.16, Last appended by leader: 14, Current term: 3, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: 17 OBSOLETE_local: true peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } }
I20250905 08:24:12.259706  4780 raft_consensus.cc:2953] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292 [term 3 LEADER]: Committing config change with OpId 3.17: config changed from index 16 to 17, VOTER cad2e126b3ba40d9ad88ef5ddc39bb00 (127.0.106.130) evicted. New config: { opid_index: 17 OBSOLETE_local: true peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } } }
I20250905 08:24:12.265081  4406 catalog_manager.cc:5095] ChangeConfig:REMOVE_PEER RPC for tablet 339d359c12c14106aa07add6dbc3309f with cas_config_opid_index 16: ChangeConfig:REMOVE_PEER succeeded (attempt 1)
I20250905 08:24:12.267357  4420 catalog_manager.cc:5582] T 339d359c12c14106aa07add6dbc3309f P ea76bcce6ea441cfa874b93546e60292 reported cstate change: config changed from index 16 to 17, VOTER cad2e126b3ba40d9ad88ef5ddc39bb00 (127.0.106.130) evicted. New cstate: current_term: 3 leader_uuid: "ea76bcce6ea441cfa874b93546e60292" committed_config { opid_index: 17 OBSOLETE_local: true peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } health_report { overall_health: HEALTHY } } }
I20250905 08:24:12.284626  4692 tablet_service.cc:1515] Processing DeleteTablet for tablet 339d359c12c14106aa07add6dbc3309f with delete_type TABLET_DATA_TOMBSTONED (TS cad2e126b3ba40d9ad88ef5ddc39bb00 not found in new config with opid_index 17) from {username='slave'} at 127.0.0.1:50272
I20250905 08:24:12.287621  4803 tablet_replica.cc:331] stopping tablet replica
I20250905 08:24:12.288245  4803 raft_consensus.cc:2241] T 339d359c12c14106aa07add6dbc3309f P cad2e126b3ba40d9ad88ef5ddc39bb00 [term 3 FOLLOWER]: Raft consensus shutting down.
I20250905 08:24:12.288699  4803 raft_consensus.cc:2270] T 339d359c12c14106aa07add6dbc3309f P cad2e126b3ba40d9ad88ef5ddc39bb00 [term 3 FOLLOWER]: Raft consensus is shut down!
I20250905 08:24:12.292254  4803 ts_tablet_manager.cc:1905] T 339d359c12c14106aa07add6dbc3309f P cad2e126b3ba40d9ad88ef5ddc39bb00: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20250905 08:24:12.303699  4761 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250905 08:24:12.305531  4761 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250905 08:24:12.305578  4803 ts_tablet_manager.cc:1918] T 339d359c12c14106aa07add6dbc3309f P cad2e126b3ba40d9ad88ef5ddc39bb00: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 3.16
I20250905 08:24:12.305922  4803 log.cc:1199] T 339d359c12c14106aa07add6dbc3309f P cad2e126b3ba40d9ad88ef5ddc39bb00: Deleting WAL directory at /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal/wals/339d359c12c14106aa07add6dbc3309f
I20250905 08:24:12.306037  4761 kserver.cc:163] Server-wide thread pool size limit: 3276
W20250905 08:24:12.307322  4405 catalog_manager.cc:4726] Async tablet task DeleteTablet RPC for tablet 339d359c12c14106aa07add6dbc3309f on TS 4343ba2e6dc5477b8097b27ea603906b failed: Not found: failed to reset TS proxy: Could not find TS for UUID 4343ba2e6dc5477b8097b27ea603906b
I20250905 08:24:12.307607  4406 catalog_manager.cc:4928] TS cad2e126b3ba40d9ad88ef5ddc39bb00 (127.0.106.130:34297): tablet 339d359c12c14106aa07add6dbc3309f (table TestTable [id=0b714f08d18e45cca428de02144c6f28]) successfully deleted
I20250905 08:24:12.308987  4761 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250905 08:24:12.313972  4810 ts_tablet_manager.cc:542] Loading tablet metadata (0/2 complete)
I20250905 08:24:12.324990  4761 ts_tablet_manager.cc:579] Loaded tablet metadata (2 total tablets, 2 live tablets)
I20250905 08:24:12.325237  4761 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.013s	user 0.000s	sys 0.001s
I20250905 08:24:12.325452  4761 ts_tablet_manager.cc:594] Registering tablets (0/2 complete)
I20250905 08:24:12.330078  4810 tablet_bootstrap.cc:492] T 778e19d4e8684ca191e4734027cc9b36 P 4343ba2e6dc5477b8097b27ea603906b: Bootstrap starting.
I20250905 08:24:12.332554  4761 ts_tablet_manager.cc:610] Registered 2 tablets
I20250905 08:24:12.332733  4761 ts_tablet_manager.cc:589] Time spent register tablets: real 0.007s	user 0.004s	sys 0.003s
I20250905 08:24:12.387315  4810 log.cc:826] T 778e19d4e8684ca191e4734027cc9b36 P 4343ba2e6dc5477b8097b27ea603906b: Log is configured to *not* fsync() on all Append() calls
I20250905 08:24:12.471619  4810 tablet_bootstrap.cc:492] T 778e19d4e8684ca191e4734027cc9b36 P 4343ba2e6dc5477b8097b27ea603906b: Bootstrap replayed 1/1 log segments. Stats: ops{read=11 overwritten=0 applied=11 ignored=0} inserts{seen=250 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250905 08:24:12.472520  4810 tablet_bootstrap.cc:492] T 778e19d4e8684ca191e4734027cc9b36 P 4343ba2e6dc5477b8097b27ea603906b: Bootstrap complete.
I20250905 08:24:12.473937  4810 ts_tablet_manager.cc:1397] T 778e19d4e8684ca191e4734027cc9b36 P 4343ba2e6dc5477b8097b27ea603906b: Time spent bootstrapping tablet: real 0.144s	user 0.091s	sys 0.049s
I20250905 08:24:12.489481  4810 raft_consensus.cc:357] T 778e19d4e8684ca191e4734027cc9b36 P 4343ba2e6dc5477b8097b27ea603906b [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } } peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } attrs { promote: false } }
I20250905 08:24:12.491468  4810 raft_consensus.cc:738] T 778e19d4e8684ca191e4734027cc9b36 P 4343ba2e6dc5477b8097b27ea603906b [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: 4343ba2e6dc5477b8097b27ea603906b, State: Initialized, Role: FOLLOWER
I20250905 08:24:12.492331  4810 consensus_queue.cc:260] T 778e19d4e8684ca191e4734027cc9b36 P 4343ba2e6dc5477b8097b27ea603906b [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 11, Last appended: 2.11, Last appended by leader: 11, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } } peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } attrs { promote: false } }
I20250905 08:24:12.496330  4810 ts_tablet_manager.cc:1428] T 778e19d4e8684ca191e4734027cc9b36 P 4343ba2e6dc5477b8097b27ea603906b: Time spent starting tablet: real 0.022s	user 0.020s	sys 0.004s
I20250905 08:24:12.497099  4810 tablet_bootstrap.cc:492] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b: Bootstrap starting.
I20250905 08:24:12.521234  4761 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.131:40383
I20250905 08:24:12.521457  4918 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.131:40383 every 8 connection(s)
I20250905 08:24:12.524430  4761 server_base.cc:1179] Dumped server information to /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250905 08:24:12.529327   426 external_mini_cluster.cc:1428] Started /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu as pid 4761
I20250905 08:24:12.559640  4919 heartbeater.cc:344] Connected to a master server at 127.0.106.190:35711
I20250905 08:24:12.560122  4919 heartbeater.cc:461] Registering TS with master...
I20250905 08:24:12.561262  4919 heartbeater.cc:507] Master 127.0.106.190:35711 requested a full tablet report, sending...
I20250905 08:24:12.565275  4420 ts_manager.cc:194] Registered new tserver with Master: 4343ba2e6dc5477b8097b27ea603906b (127.0.106.131:40383)
I20250905 08:24:12.572224   426 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250905 08:24:12.576809   426 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250905 08:24:12.579540  4420 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.131:59217
W20250905 08:24:12.581569   426 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 339d359c12c14106aa07add6dbc3309f: tablet_id: "339d359c12c14106aa07add6dbc3309f" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
I20250905 08:24:12.590323  4873 raft_consensus.cc:3058] T 778e19d4e8684ca191e4734027cc9b36 P 4343ba2e6dc5477b8097b27ea603906b [term 2 FOLLOWER]: Advancing to term 3
I20250905 08:24:12.596038  4873 raft_consensus.cc:1273] T 778e19d4e8684ca191e4734027cc9b36 P 4343ba2e6dc5477b8097b27ea603906b [term 3 FOLLOWER]: Refusing update from remote peer ea76bcce6ea441cfa874b93546e60292: Log matching property violated. Preceding OpId in replica: term: 2 index: 11. Preceding OpId from leader: term: 3 index: 12. (index mismatch)
I20250905 08:24:12.597354  4780 consensus_queue.cc:1035] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292 [LEADER]: Connected to new peer: Peer: permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 12, Last known committed idx: 11, Time since last communication: 0.000s
I20250905 08:24:12.599875  4919 heartbeater.cc:499] Master 127.0.106.190:35711 was elected leader, sending a full tablet report...
I20250905 08:24:12.609593  4711 raft_consensus.cc:1273] T 778e19d4e8684ca191e4734027cc9b36 P cad2e126b3ba40d9ad88ef5ddc39bb00 [term 3 FOLLOWER]: Refusing update from remote peer ea76bcce6ea441cfa874b93546e60292: Log matching property violated. Preceding OpId in replica: term: 2 index: 11. Preceding OpId from leader: term: 3 index: 12. (index mismatch)
I20250905 08:24:12.611094  4791 consensus_queue.cc:1035] T 778e19d4e8684ca191e4734027cc9b36 P ea76bcce6ea441cfa874b93546e60292 [LEADER]: Connected to new peer: Peer: permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 12, Last known committed idx: 11, Time since last communication: 0.000s
I20250905 08:24:12.634727  4853 tablet_service.cc:1515] Processing DeleteTablet for tablet 339d359c12c14106aa07add6dbc3309f with delete_type TABLET_DATA_TOMBSTONED (TS 4343ba2e6dc5477b8097b27ea603906b not found in new config with opid_index 16) from {username='slave'} at 127.0.0.1:57334
W20250905 08:24:12.640230  4406 catalog_manager.cc:4908] TS 4343ba2e6dc5477b8097b27ea603906b (127.0.106.131:40383): delete failed for tablet 339d359c12c14106aa07add6dbc3309f because tablet deleting was already in progress. No further retry: Already present: State transition of tablet 339d359c12c14106aa07add6dbc3309f already in progress: opening tablet
I20250905 08:24:12.647933  4810 tablet_bootstrap.cc:492] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b: Bootstrap replayed 1/1 log segments. Stats: ops{read=14 overwritten=0 applied=14 ignored=0} inserts{seen=400 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250905 08:24:12.648581  4810 tablet_bootstrap.cc:492] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b: Bootstrap complete.
I20250905 08:24:12.649695  4810 ts_tablet_manager.cc:1397] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b: Time spent bootstrapping tablet: real 0.153s	user 0.132s	sys 0.015s
I20250905 08:24:12.651142  4810 raft_consensus.cc:357] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } } peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } attrs { promote: false } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } }
I20250905 08:24:12.651583  4810 raft_consensus.cc:738] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: 4343ba2e6dc5477b8097b27ea603906b, State: Initialized, Role: FOLLOWER
I20250905 08:24:12.652029  4810 consensus_queue.cc:260] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 14, Last appended: 2.14, Last appended by leader: 14, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "4343ba2e6dc5477b8097b27ea603906b" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40383 } } peers { permanent_uuid: "cad2e126b3ba40d9ad88ef5ddc39bb00" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34297 } attrs { promote: false } } peers { permanent_uuid: "ea76bcce6ea441cfa874b93546e60292" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42483 } attrs { promote: false } }
I20250905 08:24:12.653404  4810 ts_tablet_manager.cc:1428] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b: Time spent starting tablet: real 0.003s	user 0.004s	sys 0.000s
I20250905 08:24:12.657087  4853 tablet_service.cc:1515] Processing DeleteTablet for tablet 339d359c12c14106aa07add6dbc3309f with delete_type TABLET_DATA_TOMBSTONED (Replica with old config index 14 (current committed config index is 17)) from {username='slave'} at 127.0.0.1:57334
I20250905 08:24:12.657878  4931 tablet_replica.cc:331] stopping tablet replica
I20250905 08:24:12.658421  4931 raft_consensus.cc:2241] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b [term 2 FOLLOWER]: Raft consensus shutting down.
I20250905 08:24:12.658800  4931 raft_consensus.cc:2270] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b [term 2 FOLLOWER]: Raft consensus is shut down!
I20250905 08:24:12.661159  4931 ts_tablet_manager.cc:1905] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20250905 08:24:12.669505  4931 ts_tablet_manager.cc:1918] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 2.14
I20250905 08:24:12.669857  4931 log.cc:1199] T 339d359c12c14106aa07add6dbc3309f P 4343ba2e6dc5477b8097b27ea603906b: Deleting WAL directory at /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal/wals/339d359c12c14106aa07add6dbc3309f
I20250905 08:24:12.671010  4406 catalog_manager.cc:4928] TS 4343ba2e6dc5477b8097b27ea603906b (127.0.106.131:40383): tablet 339d359c12c14106aa07add6dbc3309f (table TestTable [id=0b714f08d18e45cca428de02144c6f28]) successfully deleted
W20250905 08:24:13.585134   426 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 339d359c12c14106aa07add6dbc3309f: tablet_id: "339d359c12c14106aa07add6dbc3309f" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250905 08:24:14.588907   426 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 339d359c12c14106aa07add6dbc3309f: tablet_id: "339d359c12c14106aa07add6dbc3309f" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250905 08:24:15.591786   426 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 339d359c12c14106aa07add6dbc3309f: tablet_id: "339d359c12c14106aa07add6dbc3309f" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250905 08:24:16.594791   426 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 339d359c12c14106aa07add6dbc3309f: tablet_id: "339d359c12c14106aa07add6dbc3309f" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250905 08:24:17.597882   426 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 339d359c12c14106aa07add6dbc3309f: tablet_id: "339d359c12c14106aa07add6dbc3309f" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250905 08:24:18.601663   426 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 339d359c12c14106aa07add6dbc3309f: tablet_id: "339d359c12c14106aa07add6dbc3309f" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250905 08:24:19.604970   426 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 339d359c12c14106aa07add6dbc3309f: tablet_id: "339d359c12c14106aa07add6dbc3309f" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250905 08:24:20.608230   426 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 339d359c12c14106aa07add6dbc3309f: tablet_id: "339d359c12c14106aa07add6dbc3309f" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250905 08:24:21.611335   426 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 339d359c12c14106aa07add6dbc3309f: tablet_id: "339d359c12c14106aa07add6dbc3309f" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250905 08:24:22.614331   426 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 339d359c12c14106aa07add6dbc3309f: tablet_id: "339d359c12c14106aa07add6dbc3309f" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250905 08:24:23.617482   426 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 339d359c12c14106aa07add6dbc3309f: tablet_id: "339d359c12c14106aa07add6dbc3309f" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250905 08:24:24.621771   426 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 339d359c12c14106aa07add6dbc3309f: tablet_id: "339d359c12c14106aa07add6dbc3309f" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250905 08:24:25.624818   426 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 339d359c12c14106aa07add6dbc3309f: tablet_id: "339d359c12c14106aa07add6dbc3309f" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250905 08:24:26.627898   426 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 339d359c12c14106aa07add6dbc3309f: tablet_id: "339d359c12c14106aa07add6dbc3309f" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250905 08:24:27.630959   426 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 339d359c12c14106aa07add6dbc3309f: tablet_id: "339d359c12c14106aa07add6dbc3309f" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250905 08:24:28.634631   426 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 339d359c12c14106aa07add6dbc3309f: tablet_id: "339d359c12c14106aa07add6dbc3309f" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250905 08:24:29.638042   426 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 339d359c12c14106aa07add6dbc3309f: tablet_id: "339d359c12c14106aa07add6dbc3309f" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250905 08:24:30.641022   426 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 339d359c12c14106aa07add6dbc3309f: tablet_id: "339d359c12c14106aa07add6dbc3309f" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250905 08:24:31.644176   426 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 339d359c12c14106aa07add6dbc3309f: tablet_id: "339d359c12c14106aa07add6dbc3309f" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
/home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/tools/kudu-admin-test.cc:3914: Failure
Failed
Bad status: Not found: not all replicas of tablets comprising table TestTable are registered yet
I20250905 08:24:32.646919   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 4457
I20250905 08:24:32.673401   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 4611
I20250905 08:24:32.698489   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 4761
I20250905 08:24:32.721885   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 4387
2025-09-05T08:24:32Z chronyd exiting
I20250905 08:24:32.767566   426 test_util.cc:183] -----------------------------------------------
I20250905 08:24:32.767733   426 test_util.cc:184] Had failures, leaving test files at /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1757060547998030-426-0
[  FAILED  ] AdminCliTest.TestRebuildTables (59051 ms)
[----------] 5 tests from AdminCliTest (124708 ms total)

[----------] 1 test from EnableKudu1097AndDownTS/MoveTabletParamTest
[ RUN      ] EnableKudu1097AndDownTS/MoveTabletParamTest.Test/4
I20250905 08:24:32.771140   426 test_util.cc:276] Using random seed: -1858087888
I20250905 08:24:32.774832   426 ts_itest-base.cc:115] Starting cluster with:
I20250905 08:24:32.774961   426 ts_itest-base.cc:116] --------------
I20250905 08:24:32.775061   426 ts_itest-base.cc:117] 5 tablet servers
I20250905 08:24:32.775163   426 ts_itest-base.cc:118] 3 replicas per TS
I20250905 08:24:32.775254   426 ts_itest-base.cc:119] --------------
2025-09-05T08:24:32Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-09-05T08:24:32Z Disabled control of system clock
I20250905 08:24:32.812925   426 external_mini_cluster.cc:1366] Running /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
/tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.106.190:43615
--webserver_interface=127.0.106.190
--webserver_port=0
--builtin_ntp_servers=127.0.106.148:45773
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.0.106.190:43615
--raft_prepare_replacement_before_eviction=true with env {}
W20250905 08:24:33.080070  4951 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:24:33.080559  4951 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250905 08:24:33.081003  4951 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:24:33.109267  4951 flags.cc:425] Enabled experimental flag: --raft_prepare_replacement_before_eviction=true
W20250905 08:24:33.109573  4951 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250905 08:24:33.109776  4951 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250905 08:24:33.109967  4951 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250905 08:24:33.110229  4951 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250905 08:24:33.141678  4951 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:45773
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.0.106.190:43615
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.106.190:43615
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.0.106.190
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true

Master server version:
kudu 1.19.0-SNAPSHOT
revision e2e116a9cb215058aad161061214d11c617cedac
build type FASTDEBUG
built by None at 05 Sep 2025 08:06:49 UTC on 24a791456cd2
build id 7909
TSAN enabled
I20250905 08:24:33.142729  4951 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250905 08:24:33.144205  4951 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250905 08:24:33.153789  4957 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:24:34.557155  4956 debug-util.cc:398] Leaking SignalData structure 0x7b0800037cc0 after lost signal to thread 4951
W20250905 08:24:33.154876  4958 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:24:34.842092  4951 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.687s	user 0.560s	sys 1.127s
W20250905 08:24:34.843004  4951 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.688s	user 0.560s	sys 1.127s
W20250905 08:24:34.844898  4960 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:24:34.846581  4959 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1687 milliseconds
I20250905 08:24:34.846585  4951 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250905 08:24:34.847905  4951 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250905 08:24:34.850126  4951 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250905 08:24:34.851447  4951 hybrid_clock.cc:648] HybridClock initialized: now 1757060674851410 us; error 44 us; skew 500 ppm
I20250905 08:24:34.852176  4951 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250905 08:24:34.857679  4951 webserver.cc:480] Webserver started at http://127.0.106.190:45751/ using document root <none> and password file <none>
I20250905 08:24:34.858481  4951 fs_manager.cc:362] Metadata directory not provided
I20250905 08:24:34.858667  4951 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250905 08:24:34.859066  4951 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250905 08:24:34.863085  4951 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "b1182f3fcd0644ddb13a766fd6916d88"
format_stamp: "Formatted at 2025-09-05 08:24:34 on dist-test-slave-0x95"
I20250905 08:24:34.864126  4951 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "b1182f3fcd0644ddb13a766fd6916d88"
format_stamp: "Formatted at 2025-09-05 08:24:34 on dist-test-slave-0x95"
I20250905 08:24:34.870384  4951 fs_manager.cc:696] Time spent creating directory manager: real 0.006s	user 0.001s	sys 0.005s
I20250905 08:24:34.875460  4967 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:24:34.876317  4951 fs_manager.cc:730] Time spent opening block manager: real 0.003s	user 0.003s	sys 0.001s
I20250905 08:24:34.876583  4951 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
uuid: "b1182f3fcd0644ddb13a766fd6916d88"
format_stamp: "Formatted at 2025-09-05 08:24:34 on dist-test-slave-0x95"
I20250905 08:24:34.876844  4951 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250905 08:24:34.921437  4951 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250905 08:24:34.922804  4951 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250905 08:24:34.923202  4951 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250905 08:24:34.989261  4951 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.190:43615
I20250905 08:24:34.989341  5018 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.190:43615 every 8 connection(s)
I20250905 08:24:34.991881  4951 server_base.cc:1179] Dumped server information to /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250905 08:24:34.995263   426 external_mini_cluster.cc:1428] Started /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu as pid 4951
I20250905 08:24:34.995755   426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250905 08:24:34.998339  5019 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250905 08:24:35.018575  5019 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P b1182f3fcd0644ddb13a766fd6916d88: Bootstrap starting.
I20250905 08:24:35.023950  5019 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P b1182f3fcd0644ddb13a766fd6916d88: Neither blocks nor log segments found. Creating new log.
I20250905 08:24:35.025475  5019 log.cc:826] T 00000000000000000000000000000000 P b1182f3fcd0644ddb13a766fd6916d88: Log is configured to *not* fsync() on all Append() calls
I20250905 08:24:35.029721  5019 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P b1182f3fcd0644ddb13a766fd6916d88: No bootstrap required, opened a new log
I20250905 08:24:35.044024  5019 raft_consensus.cc:357] T 00000000000000000000000000000000 P b1182f3fcd0644ddb13a766fd6916d88 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b1182f3fcd0644ddb13a766fd6916d88" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 43615 } }
I20250905 08:24:35.044579  5019 raft_consensus.cc:383] T 00000000000000000000000000000000 P b1182f3fcd0644ddb13a766fd6916d88 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250905 08:24:35.044808  5019 raft_consensus.cc:738] T 00000000000000000000000000000000 P b1182f3fcd0644ddb13a766fd6916d88 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: b1182f3fcd0644ddb13a766fd6916d88, State: Initialized, Role: FOLLOWER
I20250905 08:24:35.045475  5019 consensus_queue.cc:260] T 00000000000000000000000000000000 P b1182f3fcd0644ddb13a766fd6916d88 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b1182f3fcd0644ddb13a766fd6916d88" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 43615 } }
I20250905 08:24:35.045908  5019 raft_consensus.cc:397] T 00000000000000000000000000000000 P b1182f3fcd0644ddb13a766fd6916d88 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250905 08:24:35.046216  5019 raft_consensus.cc:491] T 00000000000000000000000000000000 P b1182f3fcd0644ddb13a766fd6916d88 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250905 08:24:35.046557  5019 raft_consensus.cc:3058] T 00000000000000000000000000000000 P b1182f3fcd0644ddb13a766fd6916d88 [term 0 FOLLOWER]: Advancing to term 1
I20250905 08:24:35.049988  5019 raft_consensus.cc:513] T 00000000000000000000000000000000 P b1182f3fcd0644ddb13a766fd6916d88 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b1182f3fcd0644ddb13a766fd6916d88" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 43615 } }
I20250905 08:24:35.050613  5019 leader_election.cc:304] T 00000000000000000000000000000000 P b1182f3fcd0644ddb13a766fd6916d88 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: b1182f3fcd0644ddb13a766fd6916d88; no voters: 
I20250905 08:24:35.052122  5019 leader_election.cc:290] T 00000000000000000000000000000000 P b1182f3fcd0644ddb13a766fd6916d88 [CANDIDATE]: Term 1 election: Requested vote from peers 
I20250905 08:24:35.052774  5024 raft_consensus.cc:2802] T 00000000000000000000000000000000 P b1182f3fcd0644ddb13a766fd6916d88 [term 1 FOLLOWER]: Leader election won for term 1
I20250905 08:24:35.055266  5024 raft_consensus.cc:695] T 00000000000000000000000000000000 P b1182f3fcd0644ddb13a766fd6916d88 [term 1 LEADER]: Becoming Leader. State: Replica: b1182f3fcd0644ddb13a766fd6916d88, State: Running, Role: LEADER
I20250905 08:24:35.056098  5024 consensus_queue.cc:237] T 00000000000000000000000000000000 P b1182f3fcd0644ddb13a766fd6916d88 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b1182f3fcd0644ddb13a766fd6916d88" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 43615 } }
I20250905 08:24:35.056416  5019 sys_catalog.cc:564] T 00000000000000000000000000000000 P b1182f3fcd0644ddb13a766fd6916d88 [sys.catalog]: configured and running, proceeding with master startup.
I20250905 08:24:35.065704  5025 sys_catalog.cc:455] T 00000000000000000000000000000000 P b1182f3fcd0644ddb13a766fd6916d88 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "b1182f3fcd0644ddb13a766fd6916d88" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b1182f3fcd0644ddb13a766fd6916d88" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 43615 } } }
I20250905 08:24:35.065897  5026 sys_catalog.cc:455] T 00000000000000000000000000000000 P b1182f3fcd0644ddb13a766fd6916d88 [sys.catalog]: SysCatalogTable state changed. Reason: New leader b1182f3fcd0644ddb13a766fd6916d88. Latest consensus state: current_term: 1 leader_uuid: "b1182f3fcd0644ddb13a766fd6916d88" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "b1182f3fcd0644ddb13a766fd6916d88" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 43615 } } }
I20250905 08:24:35.066314  5025 sys_catalog.cc:458] T 00000000000000000000000000000000 P b1182f3fcd0644ddb13a766fd6916d88 [sys.catalog]: This master's current role is: LEADER
I20250905 08:24:35.066648  5026 sys_catalog.cc:458] T 00000000000000000000000000000000 P b1182f3fcd0644ddb13a766fd6916d88 [sys.catalog]: This master's current role is: LEADER
I20250905 08:24:35.069053  5032 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250905 08:24:35.078548  5032 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250905 08:24:35.095713  5032 catalog_manager.cc:1349] Generated new cluster ID: 50895e520b384ff4a4344e879edbdb73
I20250905 08:24:35.095984  5032 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250905 08:24:35.119360  5032 catalog_manager.cc:1372] Generated new certificate authority record
I20250905 08:24:35.120944  5032 catalog_manager.cc:1506] Loading token signing keys...
I20250905 08:24:35.135521  5032 catalog_manager.cc:5955] T 00000000000000000000000000000000 P b1182f3fcd0644ddb13a766fd6916d88: Generated new TSK 0
I20250905 08:24:35.136282  5032 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250905 08:24:35.155206   426 external_mini_cluster.cc:1366] Running /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
/tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.129:0
--local_ip_for_outbound_sockets=127.0.106.129
--webserver_interface=127.0.106.129
--webserver_port=0
--tserver_master_addrs=127.0.106.190:43615
--builtin_ntp_servers=127.0.106.148:45773
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--raft_prepare_replacement_before_eviction=true with env {}
W20250905 08:24:35.452003  5043 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:24:35.452401  5043 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250905 08:24:35.452813  5043 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:24:35.479875  5043 flags.cc:425] Enabled experimental flag: --raft_prepare_replacement_before_eviction=true
W20250905 08:24:35.480209  5043 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250905 08:24:35.480893  5043 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.129
I20250905 08:24:35.511360  5043 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:45773
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.129:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.0.106.129
--webserver_port=0
--tserver_master_addrs=127.0.106.190:43615
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.129
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision e2e116a9cb215058aad161061214d11c617cedac
build type FASTDEBUG
built by None at 05 Sep 2025 08:06:49 UTC on 24a791456cd2
build id 7909
TSAN enabled
I20250905 08:24:35.512524  5043 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250905 08:24:35.513901  5043 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250905 08:24:35.525312  5049 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:24:36.928622  5048 debug-util.cc:398] Leaking SignalData structure 0x7b0800034ea0 after lost signal to thread 5043
W20250905 08:24:37.176101  5043 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.650s	user 0.583s	sys 1.034s
W20250905 08:24:35.526245  5050 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:24:37.176425  5043 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.650s	user 0.583s	sys 1.034s
W20250905 08:24:37.178339  5052 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:24:37.181725  5051 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1655 milliseconds
I20250905 08:24:37.181757  5043 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250905 08:24:37.183039  5043 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250905 08:24:37.185444  5043 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250905 08:24:37.186882  5043 hybrid_clock.cc:648] HybridClock initialized: now 1757060677186837 us; error 32 us; skew 500 ppm
I20250905 08:24:37.187891  5043 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250905 08:24:37.194684  5043 webserver.cc:480] Webserver started at http://127.0.106.129:40511/ using document root <none> and password file <none>
I20250905 08:24:37.195840  5043 fs_manager.cc:362] Metadata directory not provided
I20250905 08:24:37.196141  5043 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250905 08:24:37.196660  5043 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250905 08:24:37.202368  5043 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "05510d20b8924d02b8f9c93fe68298c6"
format_stamp: "Formatted at 2025-09-05 08:24:37 on dist-test-slave-0x95"
I20250905 08:24:37.203589  5043 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "05510d20b8924d02b8f9c93fe68298c6"
format_stamp: "Formatted at 2025-09-05 08:24:37 on dist-test-slave-0x95"
I20250905 08:24:37.212093  5043 fs_manager.cc:696] Time spent creating directory manager: real 0.008s	user 0.007s	sys 0.000s
I20250905 08:24:37.218509  5059 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:24:37.219511  5043 fs_manager.cc:730] Time spent opening block manager: real 0.003s	user 0.002s	sys 0.000s
I20250905 08:24:37.219887  5043 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "05510d20b8924d02b8f9c93fe68298c6"
format_stamp: "Formatted at 2025-09-05 08:24:37 on dist-test-slave-0x95"
I20250905 08:24:37.220189  5043 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250905 08:24:37.272859  5043 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250905 08:24:37.274044  5043 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250905 08:24:37.274447  5043 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250905 08:24:37.276801  5043 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250905 08:24:37.280280  5043 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250905 08:24:37.280458  5043 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:24:37.280695  5043 ts_tablet_manager.cc:610] Registered 0 tablets
I20250905 08:24:37.280831  5043 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:24:37.431999  5043 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.129:44647
I20250905 08:24:37.432123  5171 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.129:44647 every 8 connection(s)
I20250905 08:24:37.434468  5043 server_base.cc:1179] Dumped server information to /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250905 08:24:37.437775   426 external_mini_cluster.cc:1428] Started /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu as pid 5043
I20250905 08:24:37.438201   426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250905 08:24:37.445327   426 external_mini_cluster.cc:1366] Running /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
/tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.130:0
--local_ip_for_outbound_sockets=127.0.106.130
--webserver_interface=127.0.106.130
--webserver_port=0
--tserver_master_addrs=127.0.106.190:43615
--builtin_ntp_servers=127.0.106.148:45773
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--raft_prepare_replacement_before_eviction=true with env {}
I20250905 08:24:37.466528  5172 heartbeater.cc:344] Connected to a master server at 127.0.106.190:43615
I20250905 08:24:37.466915  5172 heartbeater.cc:461] Registering TS with master...
I20250905 08:24:37.467789  5172 heartbeater.cc:507] Master 127.0.106.190:43615 requested a full tablet report, sending...
I20250905 08:24:37.470031  4984 ts_manager.cc:194] Registered new tserver with Master: 05510d20b8924d02b8f9c93fe68298c6 (127.0.106.129:44647)
I20250905 08:24:37.471886  4984 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.129:50475
W20250905 08:24:37.747726  5176 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:24:37.748188  5176 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250905 08:24:37.748636  5176 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:24:37.777506  5176 flags.cc:425] Enabled experimental flag: --raft_prepare_replacement_before_eviction=true
W20250905 08:24:37.777868  5176 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250905 08:24:37.778991  5176 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.130
I20250905 08:24:37.811179  5176 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:45773
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.130:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.0.106.130
--webserver_port=0
--tserver_master_addrs=127.0.106.190:43615
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.130
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision e2e116a9cb215058aad161061214d11c617cedac
build type FASTDEBUG
built by None at 05 Sep 2025 08:06:49 UTC on 24a791456cd2
build id 7909
TSAN enabled
I20250905 08:24:37.812343  5176 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250905 08:24:37.813853  5176 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250905 08:24:37.824941  5182 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250905 08:24:38.474752  5172 heartbeater.cc:499] Master 127.0.106.190:43615 was elected leader, sending a full tablet report...
W20250905 08:24:37.825829  5183 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:24:39.137565  5185 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:24:39.140486  5184 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1313 milliseconds
W20250905 08:24:39.141605  5176 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.317s	user 0.452s	sys 0.854s
W20250905 08:24:39.141968  5176 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.317s	user 0.452s	sys 0.854s
I20250905 08:24:39.142293  5176 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250905 08:24:39.143760  5176 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250905 08:24:39.146422  5176 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250905 08:24:39.147898  5176 hybrid_clock.cc:648] HybridClock initialized: now 1757060679147837 us; error 48 us; skew 500 ppm
I20250905 08:24:39.148940  5176 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250905 08:24:39.157163  5176 webserver.cc:480] Webserver started at http://127.0.106.130:34075/ using document root <none> and password file <none>
I20250905 08:24:39.158450  5176 fs_manager.cc:362] Metadata directory not provided
I20250905 08:24:39.158730  5176 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250905 08:24:39.159353  5176 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250905 08:24:39.166180  5176 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data/instance:
uuid: "6f984f9cb10c4c91b8023f0365022039"
format_stamp: "Formatted at 2025-09-05 08:24:39 on dist-test-slave-0x95"
I20250905 08:24:39.167598  5176 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal/instance:
uuid: "6f984f9cb10c4c91b8023f0365022039"
format_stamp: "Formatted at 2025-09-05 08:24:39 on dist-test-slave-0x95"
I20250905 08:24:39.177294  5176 fs_manager.cc:696] Time spent creating directory manager: real 0.009s	user 0.006s	sys 0.002s
I20250905 08:24:39.182817  5193 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:24:39.183866  5176 fs_manager.cc:730] Time spent opening block manager: real 0.004s	user 0.002s	sys 0.001s
I20250905 08:24:39.184147  5176 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "6f984f9cb10c4c91b8023f0365022039"
format_stamp: "Formatted at 2025-09-05 08:24:39 on dist-test-slave-0x95"
I20250905 08:24:39.184460  5176 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250905 08:24:39.248800  5176 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250905 08:24:39.250171  5176 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250905 08:24:39.250567  5176 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250905 08:24:39.252857  5176 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250905 08:24:39.256531  5176 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250905 08:24:39.256709  5176 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:24:39.256906  5176 ts_tablet_manager.cc:610] Registered 0 tablets
I20250905 08:24:39.257046  5176 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:24:39.379926  5176 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.130:40703
I20250905 08:24:39.380052  5305 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.130:40703 every 8 connection(s)
I20250905 08:24:39.382164  5176 server_base.cc:1179] Dumped server information to /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250905 08:24:39.389283   426 external_mini_cluster.cc:1428] Started /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu as pid 5176
I20250905 08:24:39.389842   426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-1/wal/instance
I20250905 08:24:39.396448   426 external_mini_cluster.cc:1366] Running /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
/tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.131:0
--local_ip_for_outbound_sockets=127.0.106.131
--webserver_interface=127.0.106.131
--webserver_port=0
--tserver_master_addrs=127.0.106.190:43615
--builtin_ntp_servers=127.0.106.148:45773
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--raft_prepare_replacement_before_eviction=true with env {}
I20250905 08:24:39.402441  5306 heartbeater.cc:344] Connected to a master server at 127.0.106.190:43615
I20250905 08:24:39.402813  5306 heartbeater.cc:461] Registering TS with master...
I20250905 08:24:39.403801  5306 heartbeater.cc:507] Master 127.0.106.190:43615 requested a full tablet report, sending...
I20250905 08:24:39.405907  4984 ts_manager.cc:194] Registered new tserver with Master: 6f984f9cb10c4c91b8023f0365022039 (127.0.106.130:40703)
I20250905 08:24:39.407016  4984 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.130:36883
W20250905 08:24:39.677524  5310 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:24:39.677973  5310 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250905 08:24:39.678442  5310 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:24:39.707293  5310 flags.cc:425] Enabled experimental flag: --raft_prepare_replacement_before_eviction=true
W20250905 08:24:39.707779  5310 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250905 08:24:39.708523  5310 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.131
I20250905 08:24:39.740386  5310 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:45773
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.131:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.0.106.131
--webserver_port=0
--tserver_master_addrs=127.0.106.190:43615
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.131
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision e2e116a9cb215058aad161061214d11c617cedac
build type FASTDEBUG
built by None at 05 Sep 2025 08:06:49 UTC on 24a791456cd2
build id 7909
TSAN enabled
I20250905 08:24:39.741473  5310 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250905 08:24:39.742794  5310 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250905 08:24:39.753209  5316 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250905 08:24:40.409907  5306 heartbeater.cc:499] Master 127.0.106.190:43615 was elected leader, sending a full tablet report...
W20250905 08:24:39.754004  5317 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:24:40.982564  5318 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1227 milliseconds
W20250905 08:24:40.982669  5310 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.229s	user 0.410s	sys 0.808s
W20250905 08:24:40.983170  5319 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:24:40.983198  5310 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.230s	user 0.411s	sys 0.810s
I20250905 08:24:40.983543  5310 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250905 08:24:40.985055  5310 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250905 08:24:40.987802  5310 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250905 08:24:40.989297  5310 hybrid_clock.cc:648] HybridClock initialized: now 1757060680989222 us; error 72 us; skew 500 ppm
I20250905 08:24:40.990447  5310 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250905 08:24:40.998983  5310 webserver.cc:480] Webserver started at http://127.0.106.131:38843/ using document root <none> and password file <none>
I20250905 08:24:41.000363  5310 fs_manager.cc:362] Metadata directory not provided
I20250905 08:24:41.000658  5310 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250905 08:24:41.001332  5310 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250905 08:24:41.008572  5310 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data/instance:
uuid: "028e737be1ca41e8ac40f2b7e9124802"
format_stamp: "Formatted at 2025-09-05 08:24:40 on dist-test-slave-0x95"
I20250905 08:24:41.009958  5310 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal/instance:
uuid: "028e737be1ca41e8ac40f2b7e9124802"
format_stamp: "Formatted at 2025-09-05 08:24:40 on dist-test-slave-0x95"
I20250905 08:24:41.018851  5310 fs_manager.cc:696] Time spent creating directory manager: real 0.008s	user 0.005s	sys 0.005s
I20250905 08:24:41.026315  5326 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:24:41.027326  5310 fs_manager.cc:730] Time spent opening block manager: real 0.004s	user 0.004s	sys 0.002s
I20250905 08:24:41.027655  5310 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "028e737be1ca41e8ac40f2b7e9124802"
format_stamp: "Formatted at 2025-09-05 08:24:40 on dist-test-slave-0x95"
I20250905 08:24:41.028049  5310 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250905 08:24:41.102365  5310 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250905 08:24:41.103547  5310 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250905 08:24:41.103931  5310 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250905 08:24:41.106177  5310 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250905 08:24:41.109758  5310 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250905 08:24:41.109926  5310 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:24:41.110109  5310 ts_tablet_manager.cc:610] Registered 0 tablets
I20250905 08:24:41.110265  5310 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:24:41.232108  5310 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.131:38913
I20250905 08:24:41.232210  5438 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.131:38913 every 8 connection(s)
I20250905 08:24:41.234493  5310 server_base.cc:1179] Dumped server information to /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250905 08:24:41.242944   426 external_mini_cluster.cc:1428] Started /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu as pid 5310
I20250905 08:24:41.243326   426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-2/wal/instance
I20250905 08:24:41.249008   426 external_mini_cluster.cc:1366] Running /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
/tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-3/wal
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-3/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-3/logs
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-3/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.132:0
--local_ip_for_outbound_sockets=127.0.106.132
--webserver_interface=127.0.106.132
--webserver_port=0
--tserver_master_addrs=127.0.106.190:43615
--builtin_ntp_servers=127.0.106.148:45773
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--raft_prepare_replacement_before_eviction=true with env {}
I20250905 08:24:41.259145  5439 heartbeater.cc:344] Connected to a master server at 127.0.106.190:43615
I20250905 08:24:41.259483  5439 heartbeater.cc:461] Registering TS with master...
I20250905 08:24:41.260365  5439 heartbeater.cc:507] Master 127.0.106.190:43615 requested a full tablet report, sending...
I20250905 08:24:41.262017  4984 ts_manager.cc:194] Registered new tserver with Master: 028e737be1ca41e8ac40f2b7e9124802 (127.0.106.131:38913)
I20250905 08:24:41.263013  4984 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.131:44555
W20250905 08:24:41.533877  5443 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:24:41.534284  5443 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250905 08:24:41.534694  5443 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:24:41.563103  5443 flags.cc:425] Enabled experimental flag: --raft_prepare_replacement_before_eviction=true
W20250905 08:24:41.563452  5443 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250905 08:24:41.564242  5443 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.132
I20250905 08:24:41.594758  5443 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:45773
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-3/data
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-3/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.132:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-3/data/info.pb
--webserver_interface=127.0.106.132
--webserver_port=0
--tserver_master_addrs=127.0.106.190:43615
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.132
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-3/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision e2e116a9cb215058aad161061214d11c617cedac
build type FASTDEBUG
built by None at 05 Sep 2025 08:06:49 UTC on 24a791456cd2
build id 7909
TSAN enabled
I20250905 08:24:41.595942  5443 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250905 08:24:41.597357  5443 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250905 08:24:41.609174  5449 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250905 08:24:42.265709  5439 heartbeater.cc:499] Master 127.0.106.190:43615 was elected leader, sending a full tablet report...
W20250905 08:24:43.012626  5448 debug-util.cc:398] Leaking SignalData structure 0x7b0800034ea0 after lost signal to thread 5443
W20250905 08:24:41.609788  5450 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:24:43.199538  5443 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.590s	user 0.566s	sys 1.023s
W20250905 08:24:43.199911  5443 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.591s	user 0.567s	sys 1.023s
W20250905 08:24:43.201670  5452 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:24:43.204684  5451 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1593 milliseconds
I20250905 08:24:43.204741  5443 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250905 08:24:43.205853  5443 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250905 08:24:43.207764  5443 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250905 08:24:43.209124  5443 hybrid_clock.cc:648] HybridClock initialized: now 1757060683209087 us; error 41 us; skew 500 ppm
I20250905 08:24:43.209808  5443 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250905 08:24:43.215420  5443 webserver.cc:480] Webserver started at http://127.0.106.132:35107/ using document root <none> and password file <none>
I20250905 08:24:43.216298  5443 fs_manager.cc:362] Metadata directory not provided
I20250905 08:24:43.216504  5443 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250905 08:24:43.216920  5443 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250905 08:24:43.220834  5443 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-3/data/instance:
uuid: "a2dda94e4edf4d5a98b11a686a2e4028"
format_stamp: "Formatted at 2025-09-05 08:24:43 on dist-test-slave-0x95"
I20250905 08:24:43.221760  5443 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-3/wal/instance:
uuid: "a2dda94e4edf4d5a98b11a686a2e4028"
format_stamp: "Formatted at 2025-09-05 08:24:43 on dist-test-slave-0x95"
I20250905 08:24:43.228281  5443 fs_manager.cc:696] Time spent creating directory manager: real 0.006s	user 0.005s	sys 0.002s
I20250905 08:24:43.234072  5460 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:24:43.235018  5443 fs_manager.cc:730] Time spent opening block manager: real 0.004s	user 0.005s	sys 0.001s
I20250905 08:24:43.235296  5443 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-3/data,/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-3/wal
uuid: "a2dda94e4edf4d5a98b11a686a2e4028"
format_stamp: "Formatted at 2025-09-05 08:24:43 on dist-test-slave-0x95"
I20250905 08:24:43.235592  5443 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-3/wal
metadata directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-3/wal
1 data directories: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-3/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250905 08:24:43.293294  5443 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250905 08:24:43.294586  5443 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250905 08:24:43.294947  5443 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250905 08:24:43.297281  5443 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250905 08:24:43.301003  5443 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250905 08:24:43.301169  5443 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:24:43.301345  5443 ts_tablet_manager.cc:610] Registered 0 tablets
I20250905 08:24:43.301469  5443 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:24:43.426019  5443 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.132:46433
I20250905 08:24:43.426126  5572 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.132:46433 every 8 connection(s)
I20250905 08:24:43.428300  5443 server_base.cc:1179] Dumped server information to /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-3/data/info.pb
I20250905 08:24:43.434811   426 external_mini_cluster.cc:1428] Started /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu as pid 5443
I20250905 08:24:43.435251   426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-3/wal/instance
I20250905 08:24:43.442337   426 external_mini_cluster.cc:1366] Running /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
/tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-4/wal
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-4/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-4/logs
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-4/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.133:0
--local_ip_for_outbound_sockets=127.0.106.133
--webserver_interface=127.0.106.133
--webserver_port=0
--tserver_master_addrs=127.0.106.190:43615
--builtin_ntp_servers=127.0.106.148:45773
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--raft_prepare_replacement_before_eviction=true with env {}
I20250905 08:24:43.450253  5573 heartbeater.cc:344] Connected to a master server at 127.0.106.190:43615
I20250905 08:24:43.450765  5573 heartbeater.cc:461] Registering TS with master...
I20250905 08:24:43.452109  5573 heartbeater.cc:507] Master 127.0.106.190:43615 requested a full tablet report, sending...
I20250905 08:24:43.454097  4984 ts_manager.cc:194] Registered new tserver with Master: a2dda94e4edf4d5a98b11a686a2e4028 (127.0.106.132:46433)
I20250905 08:24:43.455255  4984 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.132:47435
W20250905 08:24:43.728190  5577 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:24:43.728632  5577 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250905 08:24:43.729064  5577 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:24:43.758337  5577 flags.cc:425] Enabled experimental flag: --raft_prepare_replacement_before_eviction=true
W20250905 08:24:43.758692  5577 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250905 08:24:43.759382  5577 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.133
I20250905 08:24:43.791554  5577 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:45773
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-4/data
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-4/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.133:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-4/data/info.pb
--webserver_interface=127.0.106.133
--webserver_port=0
--tserver_master_addrs=127.0.106.190:43615
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.133
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-4/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision e2e116a9cb215058aad161061214d11c617cedac
build type FASTDEBUG
built by None at 05 Sep 2025 08:06:49 UTC on 24a791456cd2
build id 7909
TSAN enabled
I20250905 08:24:43.792761  5577 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250905 08:24:43.794257  5577 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250905 08:24:43.805789  5583 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250905 08:24:44.458197  5573 heartbeater.cc:499] Master 127.0.106.190:43615 was elected leader, sending a full tablet report...
W20250905 08:24:43.806136  5584 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:24:44.944427  5586 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:24:44.946331  5577 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.139s	user 0.006s	sys 0.003s
W20250905 08:24:44.946591  5577 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.140s	user 0.006s	sys 0.003s
I20250905 08:24:44.946806  5577 server_base.cc:1047] running on GCE node
I20250905 08:24:44.947731  5577 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250905 08:24:44.949923  5577 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250905 08:24:44.951279  5577 hybrid_clock.cc:648] HybridClock initialized: now 1757060684951249 us; error 31 us; skew 500 ppm
I20250905 08:24:44.952082  5577 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250905 08:24:44.958798  5577 webserver.cc:480] Webserver started at http://127.0.106.133:37677/ using document root <none> and password file <none>
I20250905 08:24:44.959976  5577 fs_manager.cc:362] Metadata directory not provided
I20250905 08:24:44.960208  5577 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250905 08:24:44.960701  5577 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250905 08:24:44.964924  5577 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-4/data/instance:
uuid: "e2bd34d2bbeb47eebb67fdbfd579e565"
format_stamp: "Formatted at 2025-09-05 08:24:44 on dist-test-slave-0x95"
I20250905 08:24:44.966004  5577 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-4/wal/instance:
uuid: "e2bd34d2bbeb47eebb67fdbfd579e565"
format_stamp: "Formatted at 2025-09-05 08:24:44 on dist-test-slave-0x95"
I20250905 08:24:44.973467  5577 fs_manager.cc:696] Time spent creating directory manager: real 0.007s	user 0.004s	sys 0.004s
I20250905 08:24:44.979218  5593 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:24:44.980343  5577 fs_manager.cc:730] Time spent opening block manager: real 0.004s	user 0.002s	sys 0.000s
I20250905 08:24:44.980628  5577 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-4/data,/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-4/wal
uuid: "e2bd34d2bbeb47eebb67fdbfd579e565"
format_stamp: "Formatted at 2025-09-05 08:24:44 on dist-test-slave-0x95"
I20250905 08:24:44.980921  5577 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-4/wal
metadata directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-4/wal
1 data directories: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-4/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250905 08:24:45.051096  5577 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250905 08:24:45.052548  5577 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250905 08:24:45.052958  5577 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250905 08:24:45.055478  5577 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250905 08:24:45.059525  5577 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250905 08:24:45.059739  5577 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:24:45.060024  5577 ts_tablet_manager.cc:610] Registered 0 tablets
I20250905 08:24:45.060184  5577 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:24:45.198618  5577 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.133:40845
I20250905 08:24:45.198565  5705 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.133:40845 every 8 connection(s)
I20250905 08:24:45.201689  5577 server_base.cc:1179] Dumped server information to /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-4/data/info.pb
I20250905 08:24:45.210085   426 external_mini_cluster.cc:1428] Started /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu as pid 5577
I20250905 08:24:45.210546   426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1757060547998030-426-0/raft_consensus-itest-cluster/ts-4/wal/instance
I20250905 08:24:45.222869  5706 heartbeater.cc:344] Connected to a master server at 127.0.106.190:43615
I20250905 08:24:45.223212  5706 heartbeater.cc:461] Registering TS with master...
I20250905 08:24:45.224172  5706 heartbeater.cc:507] Master 127.0.106.190:43615 requested a full tablet report, sending...
I20250905 08:24:45.225870  4984 ts_manager.cc:194] Registered new tserver with Master: e2bd34d2bbeb47eebb67fdbfd579e565 (127.0.106.133:40845)
I20250905 08:24:45.227021  4984 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.133:43721
I20250905 08:24:45.231436   426 external_mini_cluster.cc:949] 5 TS(s) registered with all masters
I20250905 08:24:45.264863  4983 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:33318:
name: "TestTable"
schema {
  columns {
    name: "key"
    type: INT32
    is_key: true
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "int_val"
    type: INT32
    is_key: false
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "string_val"
    type: STRING
    is_key: false
    is_nullable: true
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
  range_schema {
    columns {
      name: "key"
    }
  }
}
owner: "alice"
I20250905 08:24:45.354096  5374 tablet_service.cc:1468] Processing CreateTablet for tablet b26d9d2bf7b040e8b6be159eb6307988 (DEFAULT_TABLE table=TestTable [id=5e4a6d2337994effb3b63578e7d9fe91]), partition=RANGE (key) PARTITION UNBOUNDED
I20250905 08:24:45.356024  5374 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet b26d9d2bf7b040e8b6be159eb6307988. 1 dirs total, 0 dirs full, 0 dirs failed
I20250905 08:24:45.357460  5241 tablet_service.cc:1468] Processing CreateTablet for tablet b26d9d2bf7b040e8b6be159eb6307988 (DEFAULT_TABLE table=TestTable [id=5e4a6d2337994effb3b63578e7d9fe91]), partition=RANGE (key) PARTITION UNBOUNDED
I20250905 08:24:45.358772  5241 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet b26d9d2bf7b040e8b6be159eb6307988. 1 dirs total, 0 dirs full, 0 dirs failed
I20250905 08:24:45.358505  5641 tablet_service.cc:1468] Processing CreateTablet for tablet b26d9d2bf7b040e8b6be159eb6307988 (DEFAULT_TABLE table=TestTable [id=5e4a6d2337994effb3b63578e7d9fe91]), partition=RANGE (key) PARTITION UNBOUNDED
I20250905 08:24:45.360128  5641 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet b26d9d2bf7b040e8b6be159eb6307988. 1 dirs total, 0 dirs full, 0 dirs failed
I20250905 08:24:45.385322  5725 tablet_bootstrap.cc:492] T b26d9d2bf7b040e8b6be159eb6307988 P 6f984f9cb10c4c91b8023f0365022039: Bootstrap starting.
I20250905 08:24:45.391714  5727 tablet_bootstrap.cc:492] T b26d9d2bf7b040e8b6be159eb6307988 P e2bd34d2bbeb47eebb67fdbfd579e565: Bootstrap starting.
I20250905 08:24:45.397109  5726 tablet_bootstrap.cc:492] T b26d9d2bf7b040e8b6be159eb6307988 P 028e737be1ca41e8ac40f2b7e9124802: Bootstrap starting.
I20250905 08:24:45.397984  5725 tablet_bootstrap.cc:654] T b26d9d2bf7b040e8b6be159eb6307988 P 6f984f9cb10c4c91b8023f0365022039: Neither blocks nor log segments found. Creating new log.
I20250905 08:24:45.400619  5725 log.cc:826] T b26d9d2bf7b040e8b6be159eb6307988 P 6f984f9cb10c4c91b8023f0365022039: Log is configured to *not* fsync() on all Append() calls
I20250905 08:24:45.401503  5727 tablet_bootstrap.cc:654] T b26d9d2bf7b040e8b6be159eb6307988 P e2bd34d2bbeb47eebb67fdbfd579e565: Neither blocks nor log segments found. Creating new log.
I20250905 08:24:45.404161  5727 log.cc:826] T b26d9d2bf7b040e8b6be159eb6307988 P e2bd34d2bbeb47eebb67fdbfd579e565: Log is configured to *not* fsync() on all Append() calls
I20250905 08:24:45.404498  5726 tablet_bootstrap.cc:654] T b26d9d2bf7b040e8b6be159eb6307988 P 028e737be1ca41e8ac40f2b7e9124802: Neither blocks nor log segments found. Creating new log.
I20250905 08:24:45.407029  5726 log.cc:826] T b26d9d2bf7b040e8b6be159eb6307988 P 028e737be1ca41e8ac40f2b7e9124802: Log is configured to *not* fsync() on all Append() calls
I20250905 08:24:45.414045  5726 tablet_bootstrap.cc:492] T b26d9d2bf7b040e8b6be159eb6307988 P 028e737be1ca41e8ac40f2b7e9124802: No bootstrap required, opened a new log
I20250905 08:24:45.414352  5725 tablet_bootstrap.cc:492] T b26d9d2bf7b040e8b6be159eb6307988 P 6f984f9cb10c4c91b8023f0365022039: No bootstrap required, opened a new log
I20250905 08:24:45.414496  5726 ts_tablet_manager.cc:1397] T b26d9d2bf7b040e8b6be159eb6307988 P 028e737be1ca41e8ac40f2b7e9124802: Time spent bootstrapping tablet: real 0.018s	user 0.015s	sys 0.000s
I20250905 08:24:45.414944  5725 ts_tablet_manager.cc:1397] T b26d9d2bf7b040e8b6be159eb6307988 P 6f984f9cb10c4c91b8023f0365022039: Time spent bootstrapping tablet: real 0.030s	user 0.009s	sys 0.011s
I20250905 08:24:45.420611  5727 tablet_bootstrap.cc:492] T b26d9d2bf7b040e8b6be159eb6307988 P e2bd34d2bbeb47eebb67fdbfd579e565: No bootstrap required, opened a new log
I20250905 08:24:45.421018  5727 ts_tablet_manager.cc:1397] T b26d9d2bf7b040e8b6be159eb6307988 P e2bd34d2bbeb47eebb67fdbfd579e565: Time spent bootstrapping tablet: real 0.030s	user 0.019s	sys 0.007s
I20250905 08:24:45.445508  5725 raft_consensus.cc:357] T b26d9d2bf7b040e8b6be159eb6307988 P 6f984f9cb10c4c91b8023f0365022039 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "028e737be1ca41e8ac40f2b7e9124802" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 38913 } } peers { permanent_uuid: "e2bd34d2bbeb47eebb67fdbfd579e565" member_type: VOTER last_known_addr { host: "127.0.106.133" port: 40845 } } peers { permanent_uuid: "6f984f9cb10c4c91b8023f0365022039" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 40703 } }
I20250905 08:24:45.446864  5725 raft_consensus.cc:383] T b26d9d2bf7b040e8b6be159eb6307988 P 6f984f9cb10c4c91b8023f0365022039 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250905 08:24:45.447180  5725 raft_consensus.cc:738] T b26d9d2bf7b040e8b6be159eb6307988 P 6f984f9cb10c4c91b8023f0365022039 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 6f984f9cb10c4c91b8023f0365022039, State: Initialized, Role: FOLLOWER
I20250905 08:24:45.448240  5725 consensus_queue.cc:260] T b26d9d2bf7b040e8b6be159eb6307988 P 6f984f9cb10c4c91b8023f0365022039 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "028e737be1ca41e8ac40f2b7e9124802" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 38913 } } peers { permanent_uuid: "e2bd34d2bbeb47eebb67fdbfd579e565" member_type: VOTER last_known_addr { host: "127.0.106.133" port: 40845 } } peers { permanent_uuid: "6f984f9cb10c4c91b8023f0365022039" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 40703 } }
I20250905 08:24:45.449615  5726 raft_consensus.cc:357] T b26d9d2bf7b040e8b6be159eb6307988 P 028e737be1ca41e8ac40f2b7e9124802 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "028e737be1ca41e8ac40f2b7e9124802" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 38913 } } peers { permanent_uuid: "e2bd34d2bbeb47eebb67fdbfd579e565" member_type: VOTER last_known_addr { host: "127.0.106.133" port: 40845 } } peers { permanent_uuid: "6f984f9cb10c4c91b8023f0365022039" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 40703 } }
I20250905 08:24:45.451174  5726 raft_consensus.cc:383] T b26d9d2bf7b040e8b6be159eb6307988 P 028e737be1ca41e8ac40f2b7e9124802 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250905 08:24:45.451542  5726 raft_consensus.cc:738] T b26d9d2bf7b040e8b6be159eb6307988 P 028e737be1ca41e8ac40f2b7e9124802 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 028e737be1ca41e8ac40f2b7e9124802, State: Initialized, Role: FOLLOWER
I20250905 08:24:45.452699  5726 consensus_queue.cc:260] T b26d9d2bf7b040e8b6be159eb6307988 P 028e737be1ca41e8ac40f2b7e9124802 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "028e737be1ca41e8ac40f2b7e9124802" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 38913 } } peers { permanent_uuid: "e2bd34d2bbeb47eebb67fdbfd579e565" member_type: VOTER last_known_addr { host: "127.0.106.133" port: 40845 } } peers { permanent_uuid: "6f984f9cb10c4c91b8023f0365022039" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 40703 } }
I20250905 08:24:45.458283  5727 raft_consensus.cc:357] T b26d9d2bf7b040e8b6be159eb6307988 P e2bd34d2bbeb47eebb67fdbfd579e565 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "028e737be1ca41e8ac40f2b7e9124802" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 38913 } } peers { permanent_uuid: "e2bd34d2bbeb47eebb67fdbfd579e565" member_type: VOTER last_known_addr { host: "127.0.106.133" port: 40845 } } peers { permanent_uuid: "6f984f9cb10c4c91b8023f0365022039" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 40703 } }
I20250905 08:24:45.459200  5727 raft_consensus.cc:383] T b26d9d2bf7b040e8b6be159eb6307988 P e2bd34d2bbeb47eebb67fdbfd579e565 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250905 08:24:45.459542  5727 raft_consensus.cc:738] T b26d9d2bf7b040e8b6be159eb6307988 P e2bd34d2bbeb47eebb67fdbfd579e565 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: e2bd34d2bbeb47eebb67fdbfd579e565, State: Initialized, Role: FOLLOWER
I20250905 08:24:45.460517  5727 consensus_queue.cc:260] T b26d9d2bf7b040e8b6be159eb6307988 P e2bd34d2bbeb47eebb67fdbfd579e565 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "028e737be1ca41e8ac40f2b7e9124802" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 38913 } } peers { permanent_uuid: "e2bd34d2bbeb47eebb67fdbfd579e565" member_type: VOTER last_known_addr { host: "127.0.106.133" port: 40845 } } peers { permanent_uuid: "6f984f9cb10c4c91b8023f0365022039" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 40703 } }
I20250905 08:24:45.464028  5725 ts_tablet_manager.cc:1428] T b26d9d2bf7b040e8b6be159eb6307988 P 6f984f9cb10c4c91b8023f0365022039: Time spent starting tablet: real 0.049s	user 0.025s	sys 0.010s
I20250905 08:24:45.464257  5706 heartbeater.cc:499] Master 127.0.106.190:43615 was elected leader, sending a full tablet report...
I20250905 08:24:45.466764  5727 ts_tablet_manager.cc:1428] T b26d9d2bf7b040e8b6be159eb6307988 P e2bd34d2bbeb47eebb67fdbfd579e565: Time spent starting tablet: real 0.046s	user 0.032s	sys 0.000s
I20250905 08:24:45.467609  5726 ts_tablet_manager.cc:1428] T b26d9d2bf7b040e8b6be159eb6307988 P 028e737be1ca41e8ac40f2b7e9124802: Time spent starting tablet: real 0.053s	user 0.021s	sys 0.011s
W20250905 08:24:45.505681  5440 tablet.cc:2378] T b26d9d2bf7b040e8b6be159eb6307988 P 028e737be1ca41e8ac40f2b7e9124802: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250905 08:24:45.642678  5307 tablet.cc:2378] T b26d9d2bf7b040e8b6be159eb6307988 P 6f984f9cb10c4c91b8023f0365022039: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250905 08:24:45.706514  5707 tablet.cc:2378] T b26d9d2bf7b040e8b6be159eb6307988 P e2bd34d2bbeb47eebb67fdbfd579e565: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250905 08:24:45.812405  5733 raft_consensus.cc:491] T b26d9d2bf7b040e8b6be159eb6307988 P 028e737be1ca41e8ac40f2b7e9124802 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250905 08:24:45.812860  5733 raft_consensus.cc:513] T b26d9d2bf7b040e8b6be159eb6307988 P 028e737be1ca41e8ac40f2b7e9124802 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "028e737be1ca41e8ac40f2b7e9124802" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 38913 } } peers { permanent_uuid: "e2bd34d2bbeb47eebb67fdbfd579e565" member_type: VOTER last_known_addr { host: "127.0.106.133" port: 40845 } } peers { permanent_uuid: "6f984f9cb10c4c91b8023f0365022039" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 40703 } }
I20250905 08:24:45.814957  5733 leader_election.cc:290] T b26d9d2bf7b040e8b6be159eb6307988 P 028e737be1ca41e8ac40f2b7e9124802 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers e2bd34d2bbeb47eebb67fdbfd579e565 (127.0.106.133:40845), 6f984f9cb10c4c91b8023f0365022039 (127.0.106.130:40703)
I20250905 08:24:45.826932  5661 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "b26d9d2bf7b040e8b6be159eb6307988" candidate_uuid: "028e737be1ca41e8ac40f2b7e9124802" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "e2bd34d2bbeb47eebb67fdbfd579e565" is_pre_election: true
I20250905 08:24:45.826918  5261 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "b26d9d2bf7b040e8b6be159eb6307988" candidate_uuid: "028e737be1ca41e8ac40f2b7e9124802" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "6f984f9cb10c4c91b8023f0365022039" is_pre_election: true
I20250905 08:24:45.827715  5661 raft_consensus.cc:2466] T b26d9d2bf7b040e8b6be159eb6307988 P e2bd34d2bbeb47eebb67fdbfd579e565 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 028e737be1ca41e8ac40f2b7e9124802 in term 0.
I20250905 08:24:45.827714  5261 raft_consensus.cc:2466] T b26d9d2bf7b040e8b6be159eb6307988 P 6f984f9cb10c4c91b8023f0365022039 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 028e737be1ca41e8ac40f2b7e9124802 in term 0.
I20250905 08:24:45.828917  5328 leader_election.cc:304] T b26d9d2bf7b040e8b6be159eb6307988 P 028e737be1ca41e8ac40f2b7e9124802 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 028e737be1ca41e8ac40f2b7e9124802, 6f984f9cb10c4c91b8023f0365022039; no voters: 
I20250905 08:24:45.829535  5733 raft_consensus.cc:2802] T b26d9d2bf7b040e8b6be159eb6307988 P 028e737be1ca41e8ac40f2b7e9124802 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250905 08:24:45.829857  5733 raft_consensus.cc:491] T b26d9d2bf7b040e8b6be159eb6307988 P 028e737be1ca41e8ac40f2b7e9124802 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250905 08:24:45.830101  5733 raft_consensus.cc:3058] T b26d9d2bf7b040e8b6be159eb6307988 P 028e737be1ca41e8ac40f2b7e9124802 [term 0 FOLLOWER]: Advancing to term 1
I20250905 08:24:45.834255  5733 raft_consensus.cc:513] T b26d9d2bf7b040e8b6be159eb6307988 P 028e737be1ca41e8ac40f2b7e9124802 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "028e737be1ca41e8ac40f2b7e9124802" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 38913 } } peers { permanent_uuid: "e2bd34d2bbeb47eebb67fdbfd579e565" member_type: VOTER last_known_addr { host: "127.0.106.133" port: 40845 } } peers { permanent_uuid: "6f984f9cb10c4c91b8023f0365022039" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 40703 } }
I20250905 08:24:45.835516  5733 leader_election.cc:290] T b26d9d2bf7b040e8b6be159eb6307988 P 028e737be1ca41e8ac40f2b7e9124802 [CANDIDATE]: Term 1 election: Requested vote from peers e2bd34d2bbeb47eebb67fdbfd579e565 (127.0.106.133:40845), 6f984f9cb10c4c91b8023f0365022039 (127.0.106.130:40703)
I20250905 08:24:45.836174  5661 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "b26d9d2bf7b040e8b6be159eb6307988" candidate_uuid: "028e737be1ca41e8ac40f2b7e9124802" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "e2bd34d2bbeb47eebb67fdbfd579e565"
I20250905 08:24:45.836354  5261 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "b26d9d2bf7b040e8b6be159eb6307988" candidate_uuid: "028e737be1ca41e8ac40f2b7e9124802" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "6f984f9cb10c4c91b8023f0365022039"
I20250905 08:24:45.836558  5661 raft_consensus.cc:3058] T b26d9d2bf7b040e8b6be159eb6307988 P e2bd34d2bbeb47eebb67fdbfd579e565 [term 0 FOLLOWER]: Advancing to term 1
I20250905 08:24:45.836836  5261 raft_consensus.cc:3058] T b26d9d2bf7b040e8b6be159eb6307988 P 6f984f9cb10c4c91b8023f0365022039 [term 0 FOLLOWER]: Advancing to term 1
I20250905 08:24:45.840653  5661 raft_consensus.cc:2466] T b26d9d2bf7b040e8b6be159eb6307988 P e2bd34d2bbeb47eebb67fdbfd579e565 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 028e737be1ca41e8ac40f2b7e9124802 in term 1.
I20250905 08:24:45.840920  5261 raft_consensus.cc:2466] T b26d9d2bf7b040e8b6be159eb6307988 P 6f984f9cb10c4c91b8023f0365022039 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 028e737be1ca41e8ac40f2b7e9124802 in term 1.
I20250905 08:24:45.841496  5330 leader_election.cc:304] T b26d9d2bf7b040e8b6be159eb6307988 P 028e737be1ca41e8ac40f2b7e9124802 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 028e737be1ca41e8ac40f2b7e9124802, e2bd34d2bbeb47eebb67fdbfd579e565; no voters: 
I20250905 08:24:45.842206  5733 raft_consensus.cc:2802] T b26d9d2bf7b040e8b6be159eb6307988 P 028e737be1ca41e8ac40f2b7e9124802 [term 1 FOLLOWER]: Leader election won for term 1
I20250905 08:24:45.843554  5733 raft_consensus.cc:695] T b26d9d2bf7b040e8b6be159eb6307988 P 028e737be1ca41e8ac40f2b7e9124802 [term 1 LEADER]: Becoming Leader. State: Replica: 028e737be1ca41e8ac40f2b7e9124802, State: Running, Role: LEADER
I20250905 08:24:45.844317  5733 consensus_queue.cc:237] T b26d9d2bf7b040e8b6be159eb6307988 P 028e737be1ca41e8ac40f2b7e9124802 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "028e737be1ca41e8ac40f2b7e9124802" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 38913 } } peers { permanent_uuid: "e2bd34d2bbeb47eebb67fdbfd579e565" member_type: VOTER last_known_addr { host: "127.0.106.133" port: 40845 } } peers { permanent_uuid: "6f984f9cb10c4c91b8023f0365022039" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 40703 } }
I20250905 08:24:45.853896  4982 catalog_manager.cc:5582] T b26d9d2bf7b040e8b6be159eb6307988 P 028e737be1ca41e8ac40f2b7e9124802 reported cstate change: term changed from 0 to 1, leader changed from <none> to 028e737be1ca41e8ac40f2b7e9124802 (127.0.106.131). New cstate: current_term: 1 leader_uuid: "028e737be1ca41e8ac40f2b7e9124802" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "028e737be1ca41e8ac40f2b7e9124802" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 38913 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "e2bd34d2bbeb47eebb67fdbfd579e565" member_type: VOTER last_known_addr { host: "127.0.106.133" port: 40845 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "6f984f9cb10c4c91b8023f0365022039" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 40703 } health_report { overall_health: UNKNOWN } } }
I20250905 08:24:45.914289   426 external_mini_cluster.cc:949] 5 TS(s) registered with all masters
I20250905 08:24:45.917836   426 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 6f984f9cb10c4c91b8023f0365022039 to finish bootstrapping
I20250905 08:24:45.931463   426 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 028e737be1ca41e8ac40f2b7e9124802 to finish bootstrapping
I20250905 08:24:45.944036   426 ts_itest-base.cc:246] Waiting for 1 tablets on tserver e2bd34d2bbeb47eebb67fdbfd579e565 to finish bootstrapping
I20250905 08:24:45.954195   426 test_util.cc:276] Using random seed: -1844904828
I20250905 08:24:45.977495   426 test_workload.cc:405] TestWorkload: Skipping table creation because table TestTable already exists
I20250905 08:24:45.978235   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 5577
I20250905 08:24:46.010080  5261 raft_consensus.cc:1273] T b26d9d2bf7b040e8b6be159eb6307988 P 6f984f9cb10c4c91b8023f0365022039 [term 1 FOLLOWER]: Refusing update from remote peer 028e737be1ca41e8ac40f2b7e9124802: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
W20250905 08:24:46.011370  5330 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.0.106.133:40845: connect: Connection refused (error 111)
I20250905 08:24:46.011778  5733 consensus_queue.cc:1035] T b26d9d2bf7b040e8b6be159eb6307988 P 028e737be1ca41e8ac40f2b7e9124802 [LEADER]: Connected to new peer: Peer: permanent_uuid: "6f984f9cb10c4c91b8023f0365022039" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 40703 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
W20250905 08:24:46.024289  5330 consensus_peers.cc:489] T b26d9d2bf7b040e8b6be159eb6307988 P 028e737be1ca41e8ac40f2b7e9124802 -> Peer e2bd34d2bbeb47eebb67fdbfd579e565 (127.0.106.133:40845): Couldn't send request to peer e2bd34d2bbeb47eebb67fdbfd579e565. Status: Network error: Client connection negotiation failed: client connection to 127.0.106.133:40845: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20250905 08:24:46.035691  5752 mvcc.cc:204] Tried to move back new op lower bound from 7196920569881735168 to 7196920569224900608. Current Snapshot: MvccSnapshot[applied={T|T < 7196920569881735168}]
W20250905 08:24:48.267470  5330 consensus_peers.cc:489] T b26d9d2bf7b040e8b6be159eb6307988 P 028e737be1ca41e8ac40f2b7e9124802 -> Peer e2bd34d2bbeb47eebb67fdbfd579e565 (127.0.106.133:40845): Couldn't send request to peer e2bd34d2bbeb47eebb67fdbfd579e565. Status: Network error: Client connection negotiation failed: client connection to 127.0.106.133:40845: connect: Connection refused (error 111). This is attempt 6: this message will repeat every 5th retry.
I20250905 08:24:48.522847  5241 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250905 08:24:48.526602  5107 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250905 08:24:48.527617  5508 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250905 08:24:48.531821  5374 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250905 08:24:50.262046  5374 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250905 08:24:50.290529  5508 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250905 08:24:50.294087  5241 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250905 08:24:50.296701  5107 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
W20250905 08:24:51.030716  5330 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.0.106.133:40845: connect: Connection refused (error 111) [suppressed 9 similar messages]
W20250905 08:24:51.033063  5330 consensus_peers.cc:489] T b26d9d2bf7b040e8b6be159eb6307988 P 028e737be1ca41e8ac40f2b7e9124802 -> Peer e2bd34d2bbeb47eebb67fdbfd579e565 (127.0.106.133:40845): Couldn't send request to peer e2bd34d2bbeb47eebb67fdbfd579e565. Status: Network error: Client connection negotiation failed: client connection to 127.0.106.133:40845: connect: Connection refused (error 111). This is attempt 11: this message will repeat every 5th retry.
I20250905 08:24:52.732563   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 5043
I20250905 08:24:52.755008   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 5176
I20250905 08:24:52.786389   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 5310
I20250905 08:24:52.824373   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 5443
I20250905 08:24:52.845520   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 4951
2025-09-05T08:24:52Z chronyd exiting
[       OK ] EnableKudu1097AndDownTS/MoveTabletParamTest.Test/4 (20133 ms)
[----------] 1 test from EnableKudu1097AndDownTS/MoveTabletParamTest (20133 ms total)

[----------] 1 test from ListTableCliSimpleParamTest
[ RUN      ] ListTableCliSimpleParamTest.TestListTables/2
I20250905 08:24:52.904990   426 test_util.cc:276] Using random seed: -1837954042
I20250905 08:24:52.908843   426 ts_itest-base.cc:115] Starting cluster with:
I20250905 08:24:52.908982   426 ts_itest-base.cc:116] --------------
I20250905 08:24:52.909129   426 ts_itest-base.cc:117] 1 tablet servers
I20250905 08:24:52.909251   426 ts_itest-base.cc:118] 1 replicas per TS
I20250905 08:24:52.909394   426 ts_itest-base.cc:119] --------------
2025-09-05T08:24:52Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-09-05T08:24:52Z Disabled control of system clock
I20250905 08:24:52.948112   426 external_mini_cluster.cc:1366] Running /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
/tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.106.190:33925
--webserver_interface=127.0.106.190
--webserver_port=0
--builtin_ntp_servers=127.0.106.148:33245
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.0.106.190:33925 with env {}
W20250905 08:24:53.229553  5858 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:24:53.230067  5858 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250905 08:24:53.230477  5858 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:24:53.259457  5858 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250905 08:24:53.259780  5858 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250905 08:24:53.260051  5858 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250905 08:24:53.260303  5858 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250905 08:24:53.292636  5858 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:33245
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.0.106.190:33925
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.106.190:33925
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.0.106.190
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true

Master server version:
kudu 1.19.0-SNAPSHOT
revision e2e116a9cb215058aad161061214d11c617cedac
build type FASTDEBUG
built by None at 05 Sep 2025 08:06:49 UTC on 24a791456cd2
build id 7909
TSAN enabled
I20250905 08:24:53.293875  5858 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250905 08:24:53.295387  5858 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250905 08:24:53.305601  5864 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:24:53.309988  5867 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:24:53.305888  5865 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:24:54.443361  5866 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1132 milliseconds
I20250905 08:24:54.443471  5858 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250905 08:24:54.444679  5858 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250905 08:24:54.446919  5858 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250905 08:24:54.448246  5858 hybrid_clock.cc:648] HybridClock initialized: now 1757060694448210 us; error 58 us; skew 500 ppm
I20250905 08:24:54.448997  5858 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250905 08:24:54.454819  5858 webserver.cc:480] Webserver started at http://127.0.106.190:35669/ using document root <none> and password file <none>
I20250905 08:24:54.455662  5858 fs_manager.cc:362] Metadata directory not provided
I20250905 08:24:54.455878  5858 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250905 08:24:54.456267  5858 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250905 08:24:54.460310  5858 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "2415d202470548f7b599bf195ff98874"
format_stamp: "Formatted at 2025-09-05 08:24:54 on dist-test-slave-0x95"
I20250905 08:24:54.461230  5858 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "2415d202470548f7b599bf195ff98874"
format_stamp: "Formatted at 2025-09-05 08:24:54 on dist-test-slave-0x95"
I20250905 08:24:54.467320  5858 fs_manager.cc:696] Time spent creating directory manager: real 0.006s	user 0.006s	sys 0.001s
I20250905 08:24:54.472215  5874 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:24:54.473138  5858 fs_manager.cc:730] Time spent opening block manager: real 0.003s	user 0.004s	sys 0.000s
I20250905 08:24:54.473403  5858 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
uuid: "2415d202470548f7b599bf195ff98874"
format_stamp: "Formatted at 2025-09-05 08:24:54 on dist-test-slave-0x95"
I20250905 08:24:54.473644  5858 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250905 08:24:54.520753  5858 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250905 08:24:54.521983  5858 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250905 08:24:54.522327  5858 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250905 08:24:54.588429  5858 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.190:33925
I20250905 08:24:54.588500  5925 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.190:33925 every 8 connection(s)
I20250905 08:24:54.591007  5858 server_base.cc:1179] Dumped server information to /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250905 08:24:54.596488  5926 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250905 08:24:54.598385   426 external_mini_cluster.cc:1428] Started /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu as pid 5858
I20250905 08:24:54.598770   426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1757060547998030-426-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250905 08:24:54.618227  5926 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 2415d202470548f7b599bf195ff98874: Bootstrap starting.
I20250905 08:24:54.623296  5926 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 2415d202470548f7b599bf195ff98874: Neither blocks nor log segments found. Creating new log.
I20250905 08:24:54.625349  5926 log.cc:826] T 00000000000000000000000000000000 P 2415d202470548f7b599bf195ff98874: Log is configured to *not* fsync() on all Append() calls
I20250905 08:24:54.629956  5926 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 2415d202470548f7b599bf195ff98874: No bootstrap required, opened a new log
I20250905 08:24:54.646517  5926 raft_consensus.cc:357] T 00000000000000000000000000000000 P 2415d202470548f7b599bf195ff98874 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "2415d202470548f7b599bf195ff98874" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 33925 } }
I20250905 08:24:54.647049  5926 raft_consensus.cc:383] T 00000000000000000000000000000000 P 2415d202470548f7b599bf195ff98874 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250905 08:24:54.647228  5926 raft_consensus.cc:738] T 00000000000000000000000000000000 P 2415d202470548f7b599bf195ff98874 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 2415d202470548f7b599bf195ff98874, State: Initialized, Role: FOLLOWER
I20250905 08:24:54.647753  5926 consensus_queue.cc:260] T 00000000000000000000000000000000 P 2415d202470548f7b599bf195ff98874 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "2415d202470548f7b599bf195ff98874" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 33925 } }
I20250905 08:24:54.648277  5926 raft_consensus.cc:397] T 00000000000000000000000000000000 P 2415d202470548f7b599bf195ff98874 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250905 08:24:54.648504  5926 raft_consensus.cc:491] T 00000000000000000000000000000000 P 2415d202470548f7b599bf195ff98874 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250905 08:24:54.648741  5926 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 2415d202470548f7b599bf195ff98874 [term 0 FOLLOWER]: Advancing to term 1
I20250905 08:24:54.652987  5926 raft_consensus.cc:513] T 00000000000000000000000000000000 P 2415d202470548f7b599bf195ff98874 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "2415d202470548f7b599bf195ff98874" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 33925 } }
I20250905 08:24:54.653548  5926 leader_election.cc:304] T 00000000000000000000000000000000 P 2415d202470548f7b599bf195ff98874 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 2415d202470548f7b599bf195ff98874; no voters: 
I20250905 08:24:54.654990  5926 leader_election.cc:290] T 00000000000000000000000000000000 P 2415d202470548f7b599bf195ff98874 [CANDIDATE]: Term 1 election: Requested vote from peers 
I20250905 08:24:54.655668  5931 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 2415d202470548f7b599bf195ff98874 [term 1 FOLLOWER]: Leader election won for term 1
I20250905 08:24:54.657586  5931 raft_consensus.cc:695] T 00000000000000000000000000000000 P 2415d202470548f7b599bf195ff98874 [term 1 LEADER]: Becoming Leader. State: Replica: 2415d202470548f7b599bf195ff98874, State: Running, Role: LEADER
I20250905 08:24:54.658211  5931 consensus_queue.cc:237] T 00000000000000000000000000000000 P 2415d202470548f7b599bf195ff98874 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "2415d202470548f7b599bf195ff98874" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 33925 } }
I20250905 08:24:54.658612  5926 sys_catalog.cc:564] T 00000000000000000000000000000000 P 2415d202470548f7b599bf195ff98874 [sys.catalog]: configured and running, proceeding with master startup.
I20250905 08:24:54.668637  5933 sys_catalog.cc:455] T 00000000000000000000000000000000 P 2415d202470548f7b599bf195ff98874 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 2415d202470548f7b599bf195ff98874. Latest consensus state: current_term: 1 leader_uuid: "2415d202470548f7b599bf195ff98874" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "2415d202470548f7b599bf195ff98874" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 33925 } } }
I20250905 08:24:54.670936  5933 sys_catalog.cc:458] T 00000000000000000000000000000000 P 2415d202470548f7b599bf195ff98874 [sys.catalog]: This master's current role is: LEADER
I20250905 08:24:54.671484  5932 sys_catalog.cc:455] T 00000000000000000000000000000000 P 2415d202470548f7b599bf195ff98874 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "2415d202470548f7b599bf195ff98874" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "2415d202470548f7b599bf195ff98874" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 33925 } } }
I20250905 08:24:54.672230  5932 sys_catalog.cc:458] T 00000000000000000000000000000000 P 2415d202470548f7b599bf195ff98874 [sys.catalog]: This master's current role is: LEADER
I20250905 08:24:54.674188  5941 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250905 08:24:54.684399  5941 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250905 08:24:54.697098  5941 catalog_manager.cc:1349] Generated new cluster ID: 0f653a106ee24a57aafdaf4134a76793
I20250905 08:24:54.697347  5941 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250905 08:24:54.721953  5941 catalog_manager.cc:1372] Generated new certificate authority record
I20250905 08:24:54.723552  5941 catalog_manager.cc:1506] Loading token signing keys...
I20250905 08:24:54.733471  5941 catalog_manager.cc:5955] T 00000000000000000000000000000000 P 2415d202470548f7b599bf195ff98874: Generated new TSK 0
I20250905 08:24:54.734326  5941 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250905 08:24:54.757340   426 external_mini_cluster.cc:1366] Running /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
/tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.129:0
--local_ip_for_outbound_sockets=127.0.106.129
--webserver_interface=127.0.106.129
--webserver_port=0
--tserver_master_addrs=127.0.106.190:33925
--builtin_ntp_servers=127.0.106.148:33245
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
W20250905 08:24:55.049615  5950 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:24:55.050079  5950 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250905 08:24:55.050552  5950 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:24:55.079660  5950 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250905 08:24:55.080446  5950 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.129
I20250905 08:24:55.112627  5950 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:33245
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.129:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.0.106.129
--webserver_port=0
--tserver_master_addrs=127.0.106.190:33925
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.129
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision e2e116a9cb215058aad161061214d11c617cedac
build type FASTDEBUG
built by None at 05 Sep 2025 08:06:49 UTC on 24a791456cd2
build id 7909
TSAN enabled
I20250905 08:24:55.114003  5950 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250905 08:24:55.115525  5950 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250905 08:24:55.128221  5956 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:24:56.528548  5955 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 5950
W20250905 08:24:55.129835  5957 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:24:56.839424  5950 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.712s	user 0.599s	sys 1.061s
W20250905 08:24:56.840931  5950 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.713s	user 0.599s	sys 1.061s
W20250905 08:24:56.840936  5958 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1712 milliseconds
W20250905 08:24:56.842022  5959 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250905 08:24:56.841974  5950 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250905 08:24:56.845191  5950 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250905 08:24:56.847198  5950 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250905 08:24:56.848549  5950 hybrid_clock.cc:648] HybridClock initialized: now 1757060696848517 us; error 35 us; skew 500 ppm
I20250905 08:24:56.849310  5950 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250905 08:24:56.855034  5950 webserver.cc:480] Webserver started at http://127.0.106.129:35043/ using document root <none> and password file <none>
I20250905 08:24:56.855984  5950 fs_manager.cc:362] Metadata directory not provided
I20250905 08:24:56.856199  5950 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250905 08:24:56.856616  5950 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250905 08:24:56.860976  5950 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "f7f4f11156364fe6a9a2554a9b99f4ca"
format_stamp: "Formatted at 2025-09-05 08:24:56 on dist-test-slave-0x95"
I20250905 08:24:56.861976  5950 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "f7f4f11156364fe6a9a2554a9b99f4ca"
format_stamp: "Formatted at 2025-09-05 08:24:56 on dist-test-slave-0x95"
I20250905 08:24:56.868371  5950 fs_manager.cc:696] Time spent creating directory manager: real 0.006s	user 0.001s	sys 0.006s
I20250905 08:24:56.873908  5966 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:24:56.874902  5950 fs_manager.cc:730] Time spent opening block manager: real 0.004s	user 0.001s	sys 0.005s
I20250905 08:24:56.875175  5950 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "f7f4f11156364fe6a9a2554a9b99f4ca"
format_stamp: "Formatted at 2025-09-05 08:24:56 on dist-test-slave-0x95"
I20250905 08:24:56.875465  5950 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250905 08:24:56.919239  5950 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250905 08:24:56.920646  5950 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250905 08:24:56.921042  5950 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250905 08:24:56.923408  5950 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250905 08:24:56.927840  5950 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250905 08:24:56.928025  5950 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.001s	sys 0.000s
I20250905 08:24:56.928277  5950 ts_tablet_manager.cc:610] Registered 0 tablets
I20250905 08:24:56.928431  5950 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:24:57.086669  5950 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.129:36017
I20250905 08:24:57.086771  6078 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.129:36017 every 8 connection(s)
I20250905 08:24:57.089253  5950 server_base.cc:1179] Dumped server information to /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250905 08:24:57.095984   426 external_mini_cluster.cc:1428] Started /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu as pid 5950
I20250905 08:24:57.096503   426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1757060547998030-426-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250905 08:24:57.110694  6079 heartbeater.cc:344] Connected to a master server at 127.0.106.190:33925
I20250905 08:24:57.111063  6079 heartbeater.cc:461] Registering TS with master...
I20250905 08:24:57.111958  6079 heartbeater.cc:507] Master 127.0.106.190:33925 requested a full tablet report, sending...
I20250905 08:24:57.114401  5891 ts_manager.cc:194] Registered new tserver with Master: f7f4f11156364fe6a9a2554a9b99f4ca (127.0.106.129:36017)
I20250905 08:24:57.116158   426 external_mini_cluster.cc:949] 1 TS(s) registered with all masters
I20250905 08:24:57.116456  5891 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.129:33231
I20250905 08:24:57.149235  5891 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:39016:
name: "TestTable"
schema {
  columns {
    name: "key"
    type: INT32
    is_key: true
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "int_val"
    type: INT32
    is_key: false
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "string_val"
    type: STRING
    is_key: false
    is_nullable: true
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
}
num_replicas: 1
split_rows_range_bounds {
}
partition_schema {
  range_schema {
    columns {
      name: "key"
    }
  }
}
owner: "alice"
I20250905 08:24:57.206024  6014 tablet_service.cc:1468] Processing CreateTablet for tablet 52cf053238c24bda9eb415c5ad95e5b4 (DEFAULT_TABLE table=TestTable [id=a8f42eb086a148cd907195134db343b2]), partition=RANGE (key) PARTITION UNBOUNDED
I20250905 08:24:57.207564  6014 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 52cf053238c24bda9eb415c5ad95e5b4. 1 dirs total, 0 dirs full, 0 dirs failed
I20250905 08:24:57.225760  6094 tablet_bootstrap.cc:492] T 52cf053238c24bda9eb415c5ad95e5b4 P f7f4f11156364fe6a9a2554a9b99f4ca: Bootstrap starting.
I20250905 08:24:57.230787  6094 tablet_bootstrap.cc:654] T 52cf053238c24bda9eb415c5ad95e5b4 P f7f4f11156364fe6a9a2554a9b99f4ca: Neither blocks nor log segments found. Creating new log.
I20250905 08:24:57.232381  6094 log.cc:826] T 52cf053238c24bda9eb415c5ad95e5b4 P f7f4f11156364fe6a9a2554a9b99f4ca: Log is configured to *not* fsync() on all Append() calls
I20250905 08:24:57.236574  6094 tablet_bootstrap.cc:492] T 52cf053238c24bda9eb415c5ad95e5b4 P f7f4f11156364fe6a9a2554a9b99f4ca: No bootstrap required, opened a new log
I20250905 08:24:57.236903  6094 ts_tablet_manager.cc:1397] T 52cf053238c24bda9eb415c5ad95e5b4 P f7f4f11156364fe6a9a2554a9b99f4ca: Time spent bootstrapping tablet: real 0.012s	user 0.008s	sys 0.002s
I20250905 08:24:57.253924  6094 raft_consensus.cc:357] T 52cf053238c24bda9eb415c5ad95e5b4 P f7f4f11156364fe6a9a2554a9b99f4ca [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f7f4f11156364fe6a9a2554a9b99f4ca" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 36017 } }
I20250905 08:24:57.254345  6094 raft_consensus.cc:383] T 52cf053238c24bda9eb415c5ad95e5b4 P f7f4f11156364fe6a9a2554a9b99f4ca [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250905 08:24:57.254549  6094 raft_consensus.cc:738] T 52cf053238c24bda9eb415c5ad95e5b4 P f7f4f11156364fe6a9a2554a9b99f4ca [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: f7f4f11156364fe6a9a2554a9b99f4ca, State: Initialized, Role: FOLLOWER
I20250905 08:24:57.255118  6094 consensus_queue.cc:260] T 52cf053238c24bda9eb415c5ad95e5b4 P f7f4f11156364fe6a9a2554a9b99f4ca [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f7f4f11156364fe6a9a2554a9b99f4ca" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 36017 } }
I20250905 08:24:57.255551  6094 raft_consensus.cc:397] T 52cf053238c24bda9eb415c5ad95e5b4 P f7f4f11156364fe6a9a2554a9b99f4ca [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250905 08:24:57.255769  6094 raft_consensus.cc:491] T 52cf053238c24bda9eb415c5ad95e5b4 P f7f4f11156364fe6a9a2554a9b99f4ca [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250905 08:24:57.256065  6094 raft_consensus.cc:3058] T 52cf053238c24bda9eb415c5ad95e5b4 P f7f4f11156364fe6a9a2554a9b99f4ca [term 0 FOLLOWER]: Advancing to term 1
I20250905 08:24:57.260255  6094 raft_consensus.cc:513] T 52cf053238c24bda9eb415c5ad95e5b4 P f7f4f11156364fe6a9a2554a9b99f4ca [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f7f4f11156364fe6a9a2554a9b99f4ca" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 36017 } }
I20250905 08:24:57.260895  6094 leader_election.cc:304] T 52cf053238c24bda9eb415c5ad95e5b4 P f7f4f11156364fe6a9a2554a9b99f4ca [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: f7f4f11156364fe6a9a2554a9b99f4ca; no voters: 
I20250905 08:24:57.262517  6094 leader_election.cc:290] T 52cf053238c24bda9eb415c5ad95e5b4 P f7f4f11156364fe6a9a2554a9b99f4ca [CANDIDATE]: Term 1 election: Requested vote from peers 
I20250905 08:24:57.263399  6096 raft_consensus.cc:2802] T 52cf053238c24bda9eb415c5ad95e5b4 P f7f4f11156364fe6a9a2554a9b99f4ca [term 1 FOLLOWER]: Leader election won for term 1
I20250905 08:24:57.265588  6096 raft_consensus.cc:695] T 52cf053238c24bda9eb415c5ad95e5b4 P f7f4f11156364fe6a9a2554a9b99f4ca [term 1 LEADER]: Becoming Leader. State: Replica: f7f4f11156364fe6a9a2554a9b99f4ca, State: Running, Role: LEADER
I20250905 08:24:57.266119  6079 heartbeater.cc:499] Master 127.0.106.190:33925 was elected leader, sending a full tablet report...
I20250905 08:24:57.266392  6096 consensus_queue.cc:237] T 52cf053238c24bda9eb415c5ad95e5b4 P f7f4f11156364fe6a9a2554a9b99f4ca [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f7f4f11156364fe6a9a2554a9b99f4ca" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 36017 } }
I20250905 08:24:57.267793  6094 ts_tablet_manager.cc:1428] T 52cf053238c24bda9eb415c5ad95e5b4 P f7f4f11156364fe6a9a2554a9b99f4ca: Time spent starting tablet: real 0.031s	user 0.023s	sys 0.008s
I20250905 08:24:57.279064  5890 catalog_manager.cc:5582] T 52cf053238c24bda9eb415c5ad95e5b4 P f7f4f11156364fe6a9a2554a9b99f4ca reported cstate change: term changed from 0 to 1, leader changed from <none> to f7f4f11156364fe6a9a2554a9b99f4ca (127.0.106.129). New cstate: current_term: 1 leader_uuid: "f7f4f11156364fe6a9a2554a9b99f4ca" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f7f4f11156364fe6a9a2554a9b99f4ca" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 36017 } health_report { overall_health: HEALTHY } } }
I20250905 08:24:57.303833   426 external_mini_cluster.cc:949] 1 TS(s) registered with all masters
I20250905 08:24:57.306680   426 ts_itest-base.cc:246] Waiting for 1 tablets on tserver f7f4f11156364fe6a9a2554a9b99f4ca to finish bootstrapping
I20250905 08:24:59.923049   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 5950
I20250905 08:24:59.945930   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 5858
2025-09-05T08:24:59Z chronyd exiting
[       OK ] ListTableCliSimpleParamTest.TestListTables/2 (7099 ms)
[----------] 1 test from ListTableCliSimpleParamTest (7099 ms total)

[----------] 1 test from ListTableCliParamTest
[ RUN      ] ListTableCliParamTest.ListTabletWithPartitionInfo/4
I20250905 08:25:00.004276   426 test_util.cc:276] Using random seed: -1830854759
[       OK ] ListTableCliParamTest.ListTabletWithPartitionInfo/4 (10 ms)
[----------] 1 test from ListTableCliParamTest (10 ms total)

[----------] 1 test from IsSecure/SecureClusterAdminCliParamTest
[ RUN      ] IsSecure/SecureClusterAdminCliParamTest.TestRebuildMaster/0
2025-09-05T08:25:00Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-09-05T08:25:00Z Disabled control of system clock
I20250905 08:25:00.051605   426 external_mini_cluster.cc:1366] Running /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
/tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/master-0/wal
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/master-0/logs
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.106.190:37837
--webserver_interface=127.0.106.190
--webserver_port=0
--builtin_ntp_servers=127.0.106.148:45433
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.0.106.190:37837 with env {}
W20250905 08:25:00.332163  6123 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:25:00.332695  6123 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250905 08:25:00.333089  6123 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:25:00.361300  6123 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250905 08:25:00.361519  6123 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250905 08:25:00.361697  6123 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250905 08:25:00.361862  6123 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250905 08:25:00.393193  6123 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:45433
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/master-0/data
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.0.106.190:37837
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.106.190:37837
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/master-0/data/info.pb
--webserver_interface=127.0.106.190
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/master-0/logs
--logbuflevel=-1
--logtostderr=true

Master server version:
kudu 1.19.0-SNAPSHOT
revision e2e116a9cb215058aad161061214d11c617cedac
build type FASTDEBUG
built by None at 05 Sep 2025 08:06:49 UTC on 24a791456cd2
build id 7909
TSAN enabled
I20250905 08:25:00.394310  6123 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250905 08:25:00.395707  6123 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250905 08:25:00.405287  6129 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:25:00.405915  6130 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:25:01.511948  6123 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.106s	user 0.313s	sys 0.785s
W20250905 08:25:01.512326  6123 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.107s	user 0.313s	sys 0.785s
W20250905 08:25:01.512375  6132 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:25:01.512876  6131 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1105 milliseconds
I20250905 08:25:01.512930  6123 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250905 08:25:01.514210  6123 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250905 08:25:01.516425  6123 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250905 08:25:01.517727  6123 hybrid_clock.cc:648] HybridClock initialized: now 1757060701517697 us; error 33 us; skew 500 ppm
I20250905 08:25:01.518414  6123 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250905 08:25:01.524797  6123 webserver.cc:480] Webserver started at http://127.0.106.190:44527/ using document root <none> and password file <none>
I20250905 08:25:01.525722  6123 fs_manager.cc:362] Metadata directory not provided
I20250905 08:25:01.525913  6123 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250905 08:25:01.526295  6123 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250905 08:25:01.531476  6123 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/master-0/data/instance:
uuid: "d53852727b2545f6b996cd5df9c5613c"
format_stamp: "Formatted at 2025-09-05 08:25:01 on dist-test-slave-0x95"
I20250905 08:25:01.532428  6123 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/master-0/wal/instance:
uuid: "d53852727b2545f6b996cd5df9c5613c"
format_stamp: "Formatted at 2025-09-05 08:25:01 on dist-test-slave-0x95"
I20250905 08:25:01.539624  6123 fs_manager.cc:696] Time spent creating directory manager: real 0.007s	user 0.005s	sys 0.001s
I20250905 08:25:01.545058  6139 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:25:01.546079  6123 fs_manager.cc:730] Time spent opening block manager: real 0.004s	user 0.003s	sys 0.002s
I20250905 08:25:01.546402  6123 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/master-0/data,/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/master-0/wal
uuid: "d53852727b2545f6b996cd5df9c5613c"
format_stamp: "Formatted at 2025-09-05 08:25:01 on dist-test-slave-0x95"
I20250905 08:25:01.546705  6123 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/master-0/wal
metadata directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/master-0/wal
1 data directories: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250905 08:25:01.613010  6123 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250905 08:25:01.614390  6123 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250905 08:25:01.614763  6123 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250905 08:25:01.677927  6123 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.190:37837
I20250905 08:25:01.678002  6190 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.190:37837 every 8 connection(s)
I20250905 08:25:01.680514  6123 server_base.cc:1179] Dumped server information to /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/master-0/data/info.pb
I20250905 08:25:01.683115   426 external_mini_cluster.cc:1428] Started /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu as pid 6123
I20250905 08:25:01.683604   426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/master-0/wal/instance
I20250905 08:25:01.686864  6191 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250905 08:25:01.708034  6191 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P d53852727b2545f6b996cd5df9c5613c: Bootstrap starting.
I20250905 08:25:01.713488  6191 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P d53852727b2545f6b996cd5df9c5613c: Neither blocks nor log segments found. Creating new log.
I20250905 08:25:01.715209  6191 log.cc:826] T 00000000000000000000000000000000 P d53852727b2545f6b996cd5df9c5613c: Log is configured to *not* fsync() on all Append() calls
I20250905 08:25:01.718638  6191 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P d53852727b2545f6b996cd5df9c5613c: No bootstrap required, opened a new log
I20250905 08:25:01.733817  6191 raft_consensus.cc:357] T 00000000000000000000000000000000 P d53852727b2545f6b996cd5df9c5613c [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d53852727b2545f6b996cd5df9c5613c" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 37837 } }
I20250905 08:25:01.734372  6191 raft_consensus.cc:383] T 00000000000000000000000000000000 P d53852727b2545f6b996cd5df9c5613c [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250905 08:25:01.734581  6191 raft_consensus.cc:738] T 00000000000000000000000000000000 P d53852727b2545f6b996cd5df9c5613c [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: d53852727b2545f6b996cd5df9c5613c, State: Initialized, Role: FOLLOWER
I20250905 08:25:01.735112  6191 consensus_queue.cc:260] T 00000000000000000000000000000000 P d53852727b2545f6b996cd5df9c5613c [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d53852727b2545f6b996cd5df9c5613c" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 37837 } }
I20250905 08:25:01.735532  6191 raft_consensus.cc:397] T 00000000000000000000000000000000 P d53852727b2545f6b996cd5df9c5613c [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250905 08:25:01.735752  6191 raft_consensus.cc:491] T 00000000000000000000000000000000 P d53852727b2545f6b996cd5df9c5613c [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250905 08:25:01.736027  6191 raft_consensus.cc:3058] T 00000000000000000000000000000000 P d53852727b2545f6b996cd5df9c5613c [term 0 FOLLOWER]: Advancing to term 1
I20250905 08:25:01.739653  6191 raft_consensus.cc:513] T 00000000000000000000000000000000 P d53852727b2545f6b996cd5df9c5613c [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d53852727b2545f6b996cd5df9c5613c" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 37837 } }
I20250905 08:25:01.740299  6191 leader_election.cc:304] T 00000000000000000000000000000000 P d53852727b2545f6b996cd5df9c5613c [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: d53852727b2545f6b996cd5df9c5613c; no voters: 
I20250905 08:25:01.741683  6191 leader_election.cc:290] T 00000000000000000000000000000000 P d53852727b2545f6b996cd5df9c5613c [CANDIDATE]: Term 1 election: Requested vote from peers 
I20250905 08:25:01.742314  6196 raft_consensus.cc:2802] T 00000000000000000000000000000000 P d53852727b2545f6b996cd5df9c5613c [term 1 FOLLOWER]: Leader election won for term 1
I20250905 08:25:01.744449  6196 raft_consensus.cc:695] T 00000000000000000000000000000000 P d53852727b2545f6b996cd5df9c5613c [term 1 LEADER]: Becoming Leader. State: Replica: d53852727b2545f6b996cd5df9c5613c, State: Running, Role: LEADER
I20250905 08:25:01.745060  6196 consensus_queue.cc:237] T 00000000000000000000000000000000 P d53852727b2545f6b996cd5df9c5613c [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d53852727b2545f6b996cd5df9c5613c" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 37837 } }
I20250905 08:25:01.745813  6191 sys_catalog.cc:564] T 00000000000000000000000000000000 P d53852727b2545f6b996cd5df9c5613c [sys.catalog]: configured and running, proceeding with master startup.
I20250905 08:25:01.751551  6198 sys_catalog.cc:455] T 00000000000000000000000000000000 P d53852727b2545f6b996cd5df9c5613c [sys.catalog]: SysCatalogTable state changed. Reason: New leader d53852727b2545f6b996cd5df9c5613c. Latest consensus state: current_term: 1 leader_uuid: "d53852727b2545f6b996cd5df9c5613c" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d53852727b2545f6b996cd5df9c5613c" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 37837 } } }
I20250905 08:25:01.752228  6198 sys_catalog.cc:458] T 00000000000000000000000000000000 P d53852727b2545f6b996cd5df9c5613c [sys.catalog]: This master's current role is: LEADER
I20250905 08:25:01.753682  6197 sys_catalog.cc:455] T 00000000000000000000000000000000 P d53852727b2545f6b996cd5df9c5613c [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "d53852727b2545f6b996cd5df9c5613c" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d53852727b2545f6b996cd5df9c5613c" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 37837 } } }
I20250905 08:25:01.754415  6197 sys_catalog.cc:458] T 00000000000000000000000000000000 P d53852727b2545f6b996cd5df9c5613c [sys.catalog]: This master's current role is: LEADER
I20250905 08:25:01.756345  6204 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250905 08:25:01.767227  6204 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250905 08:25:01.780638  6204 catalog_manager.cc:1349] Generated new cluster ID: c60fdf83f790404fad208568f3aa61ae
I20250905 08:25:01.780926  6204 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250905 08:25:01.801398  6204 catalog_manager.cc:1372] Generated new certificate authority record
I20250905 08:25:01.803169  6204 catalog_manager.cc:1506] Loading token signing keys...
I20250905 08:25:01.818463  6204 catalog_manager.cc:5955] T 00000000000000000000000000000000 P d53852727b2545f6b996cd5df9c5613c: Generated new TSK 0
I20250905 08:25:01.819351  6204 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250905 08:25:01.828685   426 external_mini_cluster.cc:1366] Running /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
/tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-0/wal
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-0/logs
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.129:0
--local_ip_for_outbound_sockets=127.0.106.129
--webserver_interface=127.0.106.129
--webserver_port=0
--tserver_master_addrs=127.0.106.190:37837
--builtin_ntp_servers=127.0.106.148:45433
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
W20250905 08:25:02.112010  6215 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:25:02.112425  6215 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250905 08:25:02.112857  6215 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:25:02.142467  6215 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250905 08:25:02.143265  6215 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.129
I20250905 08:25:02.175936  6215 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:45433
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-0/data
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.129:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-0/data/info.pb
--webserver_interface=127.0.106.129
--webserver_port=0
--tserver_master_addrs=127.0.106.190:37837
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.129
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-0/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision e2e116a9cb215058aad161061214d11c617cedac
build type FASTDEBUG
built by None at 05 Sep 2025 08:06:49 UTC on 24a791456cd2
build id 7909
TSAN enabled
I20250905 08:25:02.177116  6215 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250905 08:25:02.178720  6215 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250905 08:25:02.190124  6221 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:25:02.192505  6222 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:25:02.193552  6224 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:25:03.639223  6223 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1445 milliseconds
I20250905 08:25:03.639323  6215 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250905 08:25:03.640501  6215 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250905 08:25:03.642939  6215 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250905 08:25:03.644321  6215 hybrid_clock.cc:648] HybridClock initialized: now 1757060703644279 us; error 60 us; skew 500 ppm
I20250905 08:25:03.645008  6215 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250905 08:25:03.651046  6215 webserver.cc:480] Webserver started at http://127.0.106.129:39097/ using document root <none> and password file <none>
I20250905 08:25:03.651959  6215 fs_manager.cc:362] Metadata directory not provided
I20250905 08:25:03.652156  6215 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250905 08:25:03.652577  6215 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250905 08:25:03.656823  6215 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-0/data/instance:
uuid: "2fecb70240c04dc0990ccc827917c296"
format_stamp: "Formatted at 2025-09-05 08:25:03 on dist-test-slave-0x95"
I20250905 08:25:03.658061  6215 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-0/wal/instance:
uuid: "2fecb70240c04dc0990ccc827917c296"
format_stamp: "Formatted at 2025-09-05 08:25:03 on dist-test-slave-0x95"
I20250905 08:25:03.664640  6215 fs_manager.cc:696] Time spent creating directory manager: real 0.006s	user 0.008s	sys 0.000s
I20250905 08:25:03.669812  6231 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:25:03.670751  6215 fs_manager.cc:730] Time spent opening block manager: real 0.003s	user 0.004s	sys 0.001s
I20250905 08:25:03.671032  6215 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-0/data,/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-0/wal
uuid: "2fecb70240c04dc0990ccc827917c296"
format_stamp: "Formatted at 2025-09-05 08:25:03 on dist-test-slave-0x95"
I20250905 08:25:03.671308  6215 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-0/wal
metadata directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-0/wal
1 data directories: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250905 08:25:03.729528  6215 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250905 08:25:03.730866  6215 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250905 08:25:03.731230  6215 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250905 08:25:03.733445  6215 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250905 08:25:03.737022  6215 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250905 08:25:03.737187  6215 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:25:03.737434  6215 ts_tablet_manager.cc:610] Registered 0 tablets
I20250905 08:25:03.737572  6215 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:25:03.860620  6215 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.129:38999
I20250905 08:25:03.860718  6343 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.129:38999 every 8 connection(s)
I20250905 08:25:03.862859  6215 server_base.cc:1179] Dumped server information to /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-0/data/info.pb
I20250905 08:25:03.869422   426 external_mini_cluster.cc:1428] Started /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu as pid 6215
I20250905 08:25:03.869801   426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-0/wal/instance
I20250905 08:25:03.875329   426 external_mini_cluster.cc:1366] Running /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
/tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-1/wal
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-1/logs
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.130:0
--local_ip_for_outbound_sockets=127.0.106.130
--webserver_interface=127.0.106.130
--webserver_port=0
--tserver_master_addrs=127.0.106.190:37837
--builtin_ntp_servers=127.0.106.148:45433
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
I20250905 08:25:03.881345  6344 heartbeater.cc:344] Connected to a master server at 127.0.106.190:37837
I20250905 08:25:03.881784  6344 heartbeater.cc:461] Registering TS with master...
I20250905 08:25:03.882934  6344 heartbeater.cc:507] Master 127.0.106.190:37837 requested a full tablet report, sending...
I20250905 08:25:03.885488  6156 ts_manager.cc:194] Registered new tserver with Master: 2fecb70240c04dc0990ccc827917c296 (127.0.106.129:38999)
I20250905 08:25:03.887967  6156 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.129:56175
W20250905 08:25:04.150285  6348 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:25:04.150733  6348 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250905 08:25:04.151142  6348 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:25:04.179955  6348 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250905 08:25:04.180727  6348 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.130
I20250905 08:25:04.211989  6348 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:45433
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-1/data
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.130:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-1/data/info.pb
--webserver_interface=127.0.106.130
--webserver_port=0
--tserver_master_addrs=127.0.106.190:37837
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.130
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-1/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision e2e116a9cb215058aad161061214d11c617cedac
build type FASTDEBUG
built by None at 05 Sep 2025 08:06:49 UTC on 24a791456cd2
build id 7909
TSAN enabled
I20250905 08:25:04.213040  6348 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250905 08:25:04.214545  6348 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250905 08:25:04.225432  6354 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250905 08:25:04.891155  6344 heartbeater.cc:499] Master 127.0.106.190:37837 was elected leader, sending a full tablet report...
W20250905 08:25:05.629796  6353 debug-util.cc:398] Leaking SignalData structure 0x7b0800034ea0 after lost signal to thread 6348
W20250905 08:25:05.810703  6353 kernel_stack_watchdog.cc:198] Thread 6348 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:642 for 400ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250905 08:25:05.811204  6348 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.585s	user 0.546s	sys 1.031s
W20250905 08:25:04.226225  6355 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:25:05.811547  6348 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.586s	user 0.546s	sys 1.031s
W20250905 08:25:05.814838  6356 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1585 milliseconds
W20250905 08:25:05.816188  6357 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250905 08:25:05.816237  6348 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250905 08:25:05.817286  6348 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250905 08:25:05.819087  6348 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250905 08:25:05.820446  6348 hybrid_clock.cc:648] HybridClock initialized: now 1757060705820401 us; error 41 us; skew 500 ppm
I20250905 08:25:05.821225  6348 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250905 08:25:05.829699  6348 webserver.cc:480] Webserver started at http://127.0.106.130:39937/ using document root <none> and password file <none>
I20250905 08:25:05.830683  6348 fs_manager.cc:362] Metadata directory not provided
I20250905 08:25:05.830884  6348 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250905 08:25:05.831318  6348 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250905 08:25:05.835629  6348 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-1/data/instance:
uuid: "d6b4974606744830b0cac4bdf3ce0372"
format_stamp: "Formatted at 2025-09-05 08:25:05 on dist-test-slave-0x95"
I20250905 08:25:05.836596  6348 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-1/wal/instance:
uuid: "d6b4974606744830b0cac4bdf3ce0372"
format_stamp: "Formatted at 2025-09-05 08:25:05 on dist-test-slave-0x95"
I20250905 08:25:05.843103  6348 fs_manager.cc:696] Time spent creating directory manager: real 0.006s	user 0.005s	sys 0.001s
I20250905 08:25:05.848191  6364 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:25:05.849121  6348 fs_manager.cc:730] Time spent opening block manager: real 0.003s	user 0.003s	sys 0.001s
I20250905 08:25:05.849417  6348 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-1/data,/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-1/wal
uuid: "d6b4974606744830b0cac4bdf3ce0372"
format_stamp: "Formatted at 2025-09-05 08:25:05 on dist-test-slave-0x95"
I20250905 08:25:05.849749  6348 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-1/wal
metadata directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-1/wal
1 data directories: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250905 08:25:05.912130  6348 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250905 08:25:05.913568  6348 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250905 08:25:05.913975  6348 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250905 08:25:05.916363  6348 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250905 08:25:05.920009  6348 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250905 08:25:05.920197  6348 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:25:05.920439  6348 ts_tablet_manager.cc:610] Registered 0 tablets
I20250905 08:25:05.920574  6348 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:25:06.041697  6348 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.130:36861
I20250905 08:25:06.041795  6476 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.130:36861 every 8 connection(s)
I20250905 08:25:06.044005  6348 server_base.cc:1179] Dumped server information to /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-1/data/info.pb
I20250905 08:25:06.048821   426 external_mini_cluster.cc:1428] Started /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu as pid 6348
I20250905 08:25:06.049263   426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-1/wal/instance
I20250905 08:25:06.054662   426 external_mini_cluster.cc:1366] Running /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
/tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-2/wal
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-2/logs
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.131:0
--local_ip_for_outbound_sockets=127.0.106.131
--webserver_interface=127.0.106.131
--webserver_port=0
--tserver_master_addrs=127.0.106.190:37837
--builtin_ntp_servers=127.0.106.148:45433
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
I20250905 08:25:06.063297  6477 heartbeater.cc:344] Connected to a master server at 127.0.106.190:37837
I20250905 08:25:06.063659  6477 heartbeater.cc:461] Registering TS with master...
I20250905 08:25:06.064558  6477 heartbeater.cc:507] Master 127.0.106.190:37837 requested a full tablet report, sending...
I20250905 08:25:06.066519  6156 ts_manager.cc:194] Registered new tserver with Master: d6b4974606744830b0cac4bdf3ce0372 (127.0.106.130:36861)
I20250905 08:25:06.067660  6156 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.130:53403
W20250905 08:25:06.345031  6481 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:25:06.345533  6481 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250905 08:25:06.346019  6481 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:25:06.376845  6481 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250905 08:25:06.377656  6481 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.131
I20250905 08:25:06.410112  6481 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:45433
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-2/data
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.131:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-2/data/info.pb
--webserver_interface=127.0.106.131
--webserver_port=0
--tserver_master_addrs=127.0.106.190:37837
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.131
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-2/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision e2e116a9cb215058aad161061214d11c617cedac
build type FASTDEBUG
built by None at 05 Sep 2025 08:06:49 UTC on 24a791456cd2
build id 7909
TSAN enabled
I20250905 08:25:06.411271  6481 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250905 08:25:06.412817  6481 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250905 08:25:06.424777  6487 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250905 08:25:07.070384  6477 heartbeater.cc:499] Master 127.0.106.190:37837 was elected leader, sending a full tablet report...
W20250905 08:25:06.425649  6488 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:25:07.827402  6486 debug-util.cc:398] Leaking SignalData structure 0x7b0800034ea0 after lost signal to thread 6481
W20250905 08:25:08.066761  6481 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.642s	user 0.552s	sys 1.089s
W20250905 08:25:08.067644  6481 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.643s	user 0.552s	sys 1.090s
W20250905 08:25:08.069377  6490 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:25:08.070835  6489 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1642 milliseconds
I20250905 08:25:08.070854  6481 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250905 08:25:08.072093  6481 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250905 08:25:08.074023  6481 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250905 08:25:08.075357  6481 hybrid_clock.cc:648] HybridClock initialized: now 1757060708075303 us; error 52 us; skew 500 ppm
I20250905 08:25:08.076223  6481 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250905 08:25:08.082187  6481 webserver.cc:480] Webserver started at http://127.0.106.131:33793/ using document root <none> and password file <none>
I20250905 08:25:08.083086  6481 fs_manager.cc:362] Metadata directory not provided
I20250905 08:25:08.083292  6481 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250905 08:25:08.083694  6481 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250905 08:25:08.087683  6481 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-2/data/instance:
uuid: "9ff1e0785190406d8a41f92f2fb869fb"
format_stamp: "Formatted at 2025-09-05 08:25:08 on dist-test-slave-0x95"
I20250905 08:25:08.088709  6481 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-2/wal/instance:
uuid: "9ff1e0785190406d8a41f92f2fb869fb"
format_stamp: "Formatted at 2025-09-05 08:25:08 on dist-test-slave-0x95"
I20250905 08:25:08.095228  6481 fs_manager.cc:696] Time spent creating directory manager: real 0.006s	user 0.006s	sys 0.000s
I20250905 08:25:08.100198  6497 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:25:08.101100  6481 fs_manager.cc:730] Time spent opening block manager: real 0.003s	user 0.004s	sys 0.000s
I20250905 08:25:08.101390  6481 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-2/data,/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-2/wal
uuid: "9ff1e0785190406d8a41f92f2fb869fb"
format_stamp: "Formatted at 2025-09-05 08:25:08 on dist-test-slave-0x95"
I20250905 08:25:08.101719  6481 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-2/wal
metadata directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-2/wal
1 data directories: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250905 08:25:08.148547  6481 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250905 08:25:08.149843  6481 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250905 08:25:08.150259  6481 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250905 08:25:08.152616  6481 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250905 08:25:08.156235  6481 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250905 08:25:08.156425  6481 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:25:08.156647  6481 ts_tablet_manager.cc:610] Registered 0 tablets
I20250905 08:25:08.156800  6481 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:25:08.282724  6481 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.131:34783
I20250905 08:25:08.282828  6609 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.131:34783 every 8 connection(s)
I20250905 08:25:08.285837  6481 server_base.cc:1179] Dumped server information to /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-2/data/info.pb
I20250905 08:25:08.290062   426 external_mini_cluster.cc:1428] Started /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu as pid 6481
I20250905 08:25:08.290535   426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-2/wal/instance
I20250905 08:25:08.312224  6610 heartbeater.cc:344] Connected to a master server at 127.0.106.190:37837
I20250905 08:25:08.312613  6610 heartbeater.cc:461] Registering TS with master...
I20250905 08:25:08.313593  6610 heartbeater.cc:507] Master 127.0.106.190:37837 requested a full tablet report, sending...
I20250905 08:25:08.315459  6156 ts_manager.cc:194] Registered new tserver with Master: 9ff1e0785190406d8a41f92f2fb869fb (127.0.106.131:34783)
I20250905 08:25:08.316617  6156 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.131:45231
I20250905 08:25:08.323477   426 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250905 08:25:08.348477   426 test_util.cc:276] Using random seed: -1822510555
I20250905 08:25:08.386987  6156 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:51300:
name: "pre_rebuild"
schema {
  columns {
    name: "key"
    type: INT32
    is_key: true
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "int_val"
    type: INT32
    is_key: false
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "string_val"
    type: STRING
    is_key: false
    is_nullable: true
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
  range_schema {
    columns {
      name: "key"
    }
  }
}
W20250905 08:25:08.389241  6156 catalog_manager.cc:6944] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table pre_rebuild in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250905 08:25:08.448295  6545 tablet_service.cc:1468] Processing CreateTablet for tablet f0e558aafe044a9a9f3334d3a81ce0da (DEFAULT_TABLE table=pre_rebuild [id=c77b0bcade5c4a379a16fdfcc56d8772]), partition=RANGE (key) PARTITION UNBOUNDED
I20250905 08:25:08.450084  6545 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet f0e558aafe044a9a9f3334d3a81ce0da. 1 dirs total, 0 dirs full, 0 dirs failed
I20250905 08:25:08.453503  6412 tablet_service.cc:1468] Processing CreateTablet for tablet f0e558aafe044a9a9f3334d3a81ce0da (DEFAULT_TABLE table=pre_rebuild [id=c77b0bcade5c4a379a16fdfcc56d8772]), partition=RANGE (key) PARTITION UNBOUNDED
I20250905 08:25:08.455248  6412 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet f0e558aafe044a9a9f3334d3a81ce0da. 1 dirs total, 0 dirs full, 0 dirs failed
I20250905 08:25:08.458058  6279 tablet_service.cc:1468] Processing CreateTablet for tablet f0e558aafe044a9a9f3334d3a81ce0da (DEFAULT_TABLE table=pre_rebuild [id=c77b0bcade5c4a379a16fdfcc56d8772]), partition=RANGE (key) PARTITION UNBOUNDED
I20250905 08:25:08.459870  6279 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet f0e558aafe044a9a9f3334d3a81ce0da. 1 dirs total, 0 dirs full, 0 dirs failed
I20250905 08:25:08.475800  6635 tablet_bootstrap.cc:492] T f0e558aafe044a9a9f3334d3a81ce0da P 9ff1e0785190406d8a41f92f2fb869fb: Bootstrap starting.
I20250905 08:25:08.480819  6635 tablet_bootstrap.cc:654] T f0e558aafe044a9a9f3334d3a81ce0da P 9ff1e0785190406d8a41f92f2fb869fb: Neither blocks nor log segments found. Creating new log.
I20250905 08:25:08.481243  6636 tablet_bootstrap.cc:492] T f0e558aafe044a9a9f3334d3a81ce0da P d6b4974606744830b0cac4bdf3ce0372: Bootstrap starting.
I20250905 08:25:08.482995  6635 log.cc:826] T f0e558aafe044a9a9f3334d3a81ce0da P 9ff1e0785190406d8a41f92f2fb869fb: Log is configured to *not* fsync() on all Append() calls
I20250905 08:25:08.488651  6636 tablet_bootstrap.cc:654] T f0e558aafe044a9a9f3334d3a81ce0da P d6b4974606744830b0cac4bdf3ce0372: Neither blocks nor log segments found. Creating new log.
I20250905 08:25:08.490826  6636 log.cc:826] T f0e558aafe044a9a9f3334d3a81ce0da P d6b4974606744830b0cac4bdf3ce0372: Log is configured to *not* fsync() on all Append() calls
I20250905 08:25:08.491362  6635 tablet_bootstrap.cc:492] T f0e558aafe044a9a9f3334d3a81ce0da P 9ff1e0785190406d8a41f92f2fb869fb: No bootstrap required, opened a new log
I20250905 08:25:08.491657  6637 tablet_bootstrap.cc:492] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296: Bootstrap starting.
I20250905 08:25:08.491888  6635 ts_tablet_manager.cc:1397] T f0e558aafe044a9a9f3334d3a81ce0da P 9ff1e0785190406d8a41f92f2fb869fb: Time spent bootstrapping tablet: real 0.017s	user 0.008s	sys 0.006s
I20250905 08:25:08.499701  6637 tablet_bootstrap.cc:654] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296: Neither blocks nor log segments found. Creating new log.
I20250905 08:25:08.502300  6636 tablet_bootstrap.cc:492] T f0e558aafe044a9a9f3334d3a81ce0da P d6b4974606744830b0cac4bdf3ce0372: No bootstrap required, opened a new log
I20250905 08:25:08.502565  6637 log.cc:826] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296: Log is configured to *not* fsync() on all Append() calls
I20250905 08:25:08.502871  6636 ts_tablet_manager.cc:1397] T f0e558aafe044a9a9f3334d3a81ce0da P d6b4974606744830b0cac4bdf3ce0372: Time spent bootstrapping tablet: real 0.022s	user 0.015s	sys 0.005s
I20250905 08:25:08.511153  6637 tablet_bootstrap.cc:492] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296: No bootstrap required, opened a new log
I20250905 08:25:08.511644  6637 ts_tablet_manager.cc:1397] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296: Time spent bootstrapping tablet: real 0.022s	user 0.000s	sys 0.016s
I20250905 08:25:08.518271  6635 raft_consensus.cc:357] T f0e558aafe044a9a9f3334d3a81ce0da P 9ff1e0785190406d8a41f92f2fb869fb [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "9ff1e0785190406d8a41f92f2fb869fb" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 34783 } } peers { permanent_uuid: "d6b4974606744830b0cac4bdf3ce0372" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36861 } } peers { permanent_uuid: "2fecb70240c04dc0990ccc827917c296" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 38999 } }
I20250905 08:25:08.519225  6635 raft_consensus.cc:383] T f0e558aafe044a9a9f3334d3a81ce0da P 9ff1e0785190406d8a41f92f2fb869fb [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250905 08:25:08.519577  6635 raft_consensus.cc:738] T f0e558aafe044a9a9f3334d3a81ce0da P 9ff1e0785190406d8a41f92f2fb869fb [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 9ff1e0785190406d8a41f92f2fb869fb, State: Initialized, Role: FOLLOWER
I20250905 08:25:08.520609  6635 consensus_queue.cc:260] T f0e558aafe044a9a9f3334d3a81ce0da P 9ff1e0785190406d8a41f92f2fb869fb [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "9ff1e0785190406d8a41f92f2fb869fb" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 34783 } } peers { permanent_uuid: "d6b4974606744830b0cac4bdf3ce0372" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36861 } } peers { permanent_uuid: "2fecb70240c04dc0990ccc827917c296" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 38999 } }
I20250905 08:25:08.524046  6610 heartbeater.cc:499] Master 127.0.106.190:37837 was elected leader, sending a full tablet report...
I20250905 08:25:08.525334  6635 ts_tablet_manager.cc:1428] T f0e558aafe044a9a9f3334d3a81ce0da P 9ff1e0785190406d8a41f92f2fb869fb: Time spent starting tablet: real 0.033s	user 0.026s	sys 0.005s
I20250905 08:25:08.528721  6636 raft_consensus.cc:357] T f0e558aafe044a9a9f3334d3a81ce0da P d6b4974606744830b0cac4bdf3ce0372 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "9ff1e0785190406d8a41f92f2fb869fb" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 34783 } } peers { permanent_uuid: "d6b4974606744830b0cac4bdf3ce0372" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36861 } } peers { permanent_uuid: "2fecb70240c04dc0990ccc827917c296" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 38999 } }
I20250905 08:25:08.529516  6636 raft_consensus.cc:383] T f0e558aafe044a9a9f3334d3a81ce0da P d6b4974606744830b0cac4bdf3ce0372 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250905 08:25:08.529798  6636 raft_consensus.cc:738] T f0e558aafe044a9a9f3334d3a81ce0da P d6b4974606744830b0cac4bdf3ce0372 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: d6b4974606744830b0cac4bdf3ce0372, State: Initialized, Role: FOLLOWER
I20250905 08:25:08.530634  6636 consensus_queue.cc:260] T f0e558aafe044a9a9f3334d3a81ce0da P d6b4974606744830b0cac4bdf3ce0372 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "9ff1e0785190406d8a41f92f2fb869fb" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 34783 } } peers { permanent_uuid: "d6b4974606744830b0cac4bdf3ce0372" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36861 } } peers { permanent_uuid: "2fecb70240c04dc0990ccc827917c296" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 38999 } }
I20250905 08:25:08.534219  6637 raft_consensus.cc:357] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "9ff1e0785190406d8a41f92f2fb869fb" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 34783 } } peers { permanent_uuid: "d6b4974606744830b0cac4bdf3ce0372" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36861 } } peers { permanent_uuid: "2fecb70240c04dc0990ccc827917c296" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 38999 } }
I20250905 08:25:08.534893  6637 raft_consensus.cc:383] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250905 08:25:08.535049  6636 ts_tablet_manager.cc:1428] T f0e558aafe044a9a9f3334d3a81ce0da P d6b4974606744830b0cac4bdf3ce0372: Time spent starting tablet: real 0.032s	user 0.029s	sys 0.000s
I20250905 08:25:08.535125  6637 raft_consensus.cc:738] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 2fecb70240c04dc0990ccc827917c296, State: Initialized, Role: FOLLOWER
I20250905 08:25:08.535959  6637 consensus_queue.cc:260] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "9ff1e0785190406d8a41f92f2fb869fb" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 34783 } } peers { permanent_uuid: "d6b4974606744830b0cac4bdf3ce0372" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36861 } } peers { permanent_uuid: "2fecb70240c04dc0990ccc827917c296" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 38999 } }
I20250905 08:25:08.539769  6637 ts_tablet_manager.cc:1428] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296: Time spent starting tablet: real 0.028s	user 0.020s	sys 0.006s
W20250905 08:25:08.543529  6611 tablet.cc:2378] T f0e558aafe044a9a9f3334d3a81ce0da P 9ff1e0785190406d8a41f92f2fb869fb: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250905 08:25:08.550678  6478 tablet.cc:2378] T f0e558aafe044a9a9f3334d3a81ce0da P d6b4974606744830b0cac4bdf3ce0372: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250905 08:25:08.621824  6345 tablet.cc:2378] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250905 08:25:08.674597  6641 raft_consensus.cc:491] T f0e558aafe044a9a9f3334d3a81ce0da P 9ff1e0785190406d8a41f92f2fb869fb [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250905 08:25:08.675055  6641 raft_consensus.cc:513] T f0e558aafe044a9a9f3334d3a81ce0da P 9ff1e0785190406d8a41f92f2fb869fb [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "9ff1e0785190406d8a41f92f2fb869fb" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 34783 } } peers { permanent_uuid: "d6b4974606744830b0cac4bdf3ce0372" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36861 } } peers { permanent_uuid: "2fecb70240c04dc0990ccc827917c296" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 38999 } }
I20250905 08:25:08.678213  6641 leader_election.cc:290] T f0e558aafe044a9a9f3334d3a81ce0da P 9ff1e0785190406d8a41f92f2fb869fb [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers d6b4974606744830b0cac4bdf3ce0372 (127.0.106.130:36861), 2fecb70240c04dc0990ccc827917c296 (127.0.106.129:38999)
I20250905 08:25:08.687400  6432 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "f0e558aafe044a9a9f3334d3a81ce0da" candidate_uuid: "9ff1e0785190406d8a41f92f2fb869fb" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "d6b4974606744830b0cac4bdf3ce0372" is_pre_election: true
I20250905 08:25:08.688160  6432 raft_consensus.cc:2466] T f0e558aafe044a9a9f3334d3a81ce0da P d6b4974606744830b0cac4bdf3ce0372 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 9ff1e0785190406d8a41f92f2fb869fb in term 0.
I20250905 08:25:08.688995  6643 raft_consensus.cc:491] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250905 08:25:08.689450  6498 leader_election.cc:304] T f0e558aafe044a9a9f3334d3a81ce0da P 9ff1e0785190406d8a41f92f2fb869fb [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 9ff1e0785190406d8a41f92f2fb869fb, d6b4974606744830b0cac4bdf3ce0372; no voters: 
I20250905 08:25:08.689529  6643 raft_consensus.cc:513] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "9ff1e0785190406d8a41f92f2fb869fb" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 34783 } } peers { permanent_uuid: "d6b4974606744830b0cac4bdf3ce0372" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36861 } } peers { permanent_uuid: "2fecb70240c04dc0990ccc827917c296" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 38999 } }
I20250905 08:25:08.690248  6641 raft_consensus.cc:2802] T f0e558aafe044a9a9f3334d3a81ce0da P 9ff1e0785190406d8a41f92f2fb869fb [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250905 08:25:08.690299  6299 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "f0e558aafe044a9a9f3334d3a81ce0da" candidate_uuid: "9ff1e0785190406d8a41f92f2fb869fb" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "2fecb70240c04dc0990ccc827917c296" is_pre_election: true
I20250905 08:25:08.690551  6641 raft_consensus.cc:491] T f0e558aafe044a9a9f3334d3a81ce0da P 9ff1e0785190406d8a41f92f2fb869fb [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250905 08:25:08.690834  6641 raft_consensus.cc:3058] T f0e558aafe044a9a9f3334d3a81ce0da P 9ff1e0785190406d8a41f92f2fb869fb [term 0 FOLLOWER]: Advancing to term 1
I20250905 08:25:08.691015  6299 raft_consensus.cc:2466] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 9ff1e0785190406d8a41f92f2fb869fb in term 0.
I20250905 08:25:08.692539  6643 leader_election.cc:290] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 9ff1e0785190406d8a41f92f2fb869fb (127.0.106.131:34783), d6b4974606744830b0cac4bdf3ce0372 (127.0.106.130:36861)
I20250905 08:25:08.698761  6641 raft_consensus.cc:513] T f0e558aafe044a9a9f3334d3a81ce0da P 9ff1e0785190406d8a41f92f2fb869fb [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "9ff1e0785190406d8a41f92f2fb869fb" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 34783 } } peers { permanent_uuid: "d6b4974606744830b0cac4bdf3ce0372" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36861 } } peers { permanent_uuid: "2fecb70240c04dc0990ccc827917c296" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 38999 } }
I20250905 08:25:08.700492  6641 leader_election.cc:290] T f0e558aafe044a9a9f3334d3a81ce0da P 9ff1e0785190406d8a41f92f2fb869fb [CANDIDATE]: Term 1 election: Requested vote from peers d6b4974606744830b0cac4bdf3ce0372 (127.0.106.130:36861), 2fecb70240c04dc0990ccc827917c296 (127.0.106.129:38999)
I20250905 08:25:08.701138  6432 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "f0e558aafe044a9a9f3334d3a81ce0da" candidate_uuid: "9ff1e0785190406d8a41f92f2fb869fb" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "d6b4974606744830b0cac4bdf3ce0372"
I20250905 08:25:08.701421  6299 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "f0e558aafe044a9a9f3334d3a81ce0da" candidate_uuid: "9ff1e0785190406d8a41f92f2fb869fb" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "2fecb70240c04dc0990ccc827917c296"
I20250905 08:25:08.701606  6432 raft_consensus.cc:3058] T f0e558aafe044a9a9f3334d3a81ce0da P d6b4974606744830b0cac4bdf3ce0372 [term 0 FOLLOWER]: Advancing to term 1
I20250905 08:25:08.701934  6299 raft_consensus.cc:3058] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296 [term 0 FOLLOWER]: Advancing to term 1
I20250905 08:25:08.707762  6432 raft_consensus.cc:2466] T f0e558aafe044a9a9f3334d3a81ce0da P d6b4974606744830b0cac4bdf3ce0372 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 9ff1e0785190406d8a41f92f2fb869fb in term 1.
I20250905 08:25:08.708446  6299 raft_consensus.cc:2466] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 9ff1e0785190406d8a41f92f2fb869fb in term 1.
I20250905 08:25:08.708626  6498 leader_election.cc:304] T f0e558aafe044a9a9f3334d3a81ce0da P 9ff1e0785190406d8a41f92f2fb869fb [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 9ff1e0785190406d8a41f92f2fb869fb, d6b4974606744830b0cac4bdf3ce0372; no voters: 
I20250905 08:25:08.709334  6641 raft_consensus.cc:2802] T f0e558aafe044a9a9f3334d3a81ce0da P 9ff1e0785190406d8a41f92f2fb869fb [term 1 FOLLOWER]: Leader election won for term 1
I20250905 08:25:08.712129  6565 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "f0e558aafe044a9a9f3334d3a81ce0da" candidate_uuid: "2fecb70240c04dc0990ccc827917c296" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "9ff1e0785190406d8a41f92f2fb869fb" is_pre_election: true
I20250905 08:25:08.712836  6641 raft_consensus.cc:695] T f0e558aafe044a9a9f3334d3a81ce0da P 9ff1e0785190406d8a41f92f2fb869fb [term 1 LEADER]: Becoming Leader. State: Replica: 9ff1e0785190406d8a41f92f2fb869fb, State: Running, Role: LEADER
I20250905 08:25:08.713697  6432 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "f0e558aafe044a9a9f3334d3a81ce0da" candidate_uuid: "2fecb70240c04dc0990ccc827917c296" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "d6b4974606744830b0cac4bdf3ce0372" is_pre_election: true
I20250905 08:25:08.713649  6641 consensus_queue.cc:237] T f0e558aafe044a9a9f3334d3a81ce0da P 9ff1e0785190406d8a41f92f2fb869fb [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "9ff1e0785190406d8a41f92f2fb869fb" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 34783 } } peers { permanent_uuid: "d6b4974606744830b0cac4bdf3ce0372" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36861 } } peers { permanent_uuid: "2fecb70240c04dc0990ccc827917c296" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 38999 } }
I20250905 08:25:08.714432  6432 raft_consensus.cc:2391] T f0e558aafe044a9a9f3334d3a81ce0da P d6b4974606744830b0cac4bdf3ce0372 [term 1 FOLLOWER]: Leader pre-election vote request: Denying vote to candidate 2fecb70240c04dc0990ccc827917c296 in current term 1: Already voted for candidate 9ff1e0785190406d8a41f92f2fb869fb in this term.
I20250905 08:25:08.722888  6232 leader_election.cc:304] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: 2fecb70240c04dc0990ccc827917c296; no voters: 9ff1e0785190406d8a41f92f2fb869fb, d6b4974606744830b0cac4bdf3ce0372
I20250905 08:25:08.723685  6643 raft_consensus.cc:2747] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296 [term 1 FOLLOWER]: Leader pre-election lost for term 1. Reason: could not achieve majority
I20250905 08:25:08.726457  6156 catalog_manager.cc:5582] T f0e558aafe044a9a9f3334d3a81ce0da P 9ff1e0785190406d8a41f92f2fb869fb reported cstate change: term changed from 0 to 1, leader changed from <none> to 9ff1e0785190406d8a41f92f2fb869fb (127.0.106.131). New cstate: current_term: 1 leader_uuid: "9ff1e0785190406d8a41f92f2fb869fb" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "9ff1e0785190406d8a41f92f2fb869fb" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 34783 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "d6b4974606744830b0cac4bdf3ce0372" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36861 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "2fecb70240c04dc0990ccc827917c296" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 38999 } health_report { overall_health: UNKNOWN } } }
I20250905 08:25:08.858017  6299 raft_consensus.cc:1273] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296 [term 1 FOLLOWER]: Refusing update from remote peer 9ff1e0785190406d8a41f92f2fb869fb: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250905 08:25:08.858330  6432 raft_consensus.cc:1273] T f0e558aafe044a9a9f3334d3a81ce0da P d6b4974606744830b0cac4bdf3ce0372 [term 1 FOLLOWER]: Refusing update from remote peer 9ff1e0785190406d8a41f92f2fb869fb: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250905 08:25:08.859745  6647 consensus_queue.cc:1035] T f0e558aafe044a9a9f3334d3a81ce0da P 9ff1e0785190406d8a41f92f2fb869fb [LEADER]: Connected to new peer: Peer: permanent_uuid: "2fecb70240c04dc0990ccc827917c296" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 38999 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.001s
I20250905 08:25:08.860352  6641 consensus_queue.cc:1035] T f0e558aafe044a9a9f3334d3a81ce0da P 9ff1e0785190406d8a41f92f2fb869fb [LEADER]: Connected to new peer: Peer: permanent_uuid: "d6b4974606744830b0cac4bdf3ce0372" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36861 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250905 08:25:08.895956  6654 mvcc.cc:204] Tried to move back new op lower bound from 7196920663468593152 to 7196920662898331648. Current Snapshot: MvccSnapshot[applied={T|T < 7196920663468593152}]
I20250905 08:25:08.905046  6656 mvcc.cc:204] Tried to move back new op lower bound from 7196920663468593152 to 7196920662898331648. Current Snapshot: MvccSnapshot[applied={T|T < 7196920663468593152}]
I20250905 08:25:13.581637   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 6123
W20250905 08:25:13.947288  6689 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:25:13.947860  6689 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:25:13.962226  6610 heartbeater.cc:646] Failed to heartbeat to 127.0.106.190:37837 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.0.106.190:37837: connect: Connection refused (error 111)
W20250905 08:25:13.976492  6477 heartbeater.cc:646] Failed to heartbeat to 127.0.106.190:37837 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.0.106.190:37837: connect: Connection refused (error 111)
W20250905 08:25:13.979988  6689 flags.cc:425] Enabled experimental flag: --enable_multi_tenancy=false
W20250905 08:25:13.989051  6344 heartbeater.cc:646] Failed to heartbeat to 127.0.106.190:37837 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.0.106.190:37837: connect: Connection refused (error 111)
W20250905 08:25:15.398492  6689 thread.cc:641] rpc reactor (reactor) Time spent creating pthread: real 1.380s	user 0.447s	sys 0.927s
W20250905 08:25:15.398896  6689 thread.cc:608] rpc reactor (reactor) Time spent starting thread: real 1.381s	user 0.448s	sys 0.928s
I20250905 08:25:15.523177  6689 minidump.cc:252] Setting minidump size limit to 20M
I20250905 08:25:15.525449  6689 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250905 08:25:15.526418  6689 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250905 08:25:15.536543  6722 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:25:15.536994  6723 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:25:15.694366  6725 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250905 08:25:15.694898  6689 server_base.cc:1047] running on GCE node
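The instance_detector warnings together with the "running on GCE node" line reflect probing each cloud's metadata endpoint and keeping whichever answers; when none responds (as in a later startup below), the server falls back to treating the node as non-cloud. A hedged sketch of that pattern; the endpoints and helper names are illustrative, not Kudu's code:

    from concurrent.futures import ThreadPoolExecutor
    import urllib.request

    PROBES = {
        "GCE": ("http://metadata.google.internal/computeMetadata/v1/", {"Metadata-Flavor": "Google"}),
        "AWS": ("http://169.254.169.254/latest/meta-data/", {}),
    }

    def probe(name, url, headers, timeout=1.0):
        try:
            urllib.request.urlopen(urllib.request.Request(url, headers=headers), timeout=timeout)
            return name
        except Exception:
            return None  # 404s and timeouts correspond to the warnings seen above

    def detect_cloud():
        with ThreadPoolExecutor() as pool:
            hits = pool.map(lambda item: probe(item[0], *item[1]), PROBES.items())
        return next((h for h in hits if h), "non-cloud environment")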
I20250905 08:25:15.696192  6689 hybrid_clock.cc:584] initializing the hybrid clock with 'system' time source
I20250905 08:25:15.696720  6689 hybrid_clock.cc:648] HybridClock initialized: now 1757060715696695 us; error 131516 us; skew 500 ppm
I20250905 08:25:15.697569  6689 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250905 08:25:15.702432  6689 webserver.cc:480] Webserver started at http://0.0.0.0:45481/ using document root <none> and password file <none>
I20250905 08:25:15.703496  6689 fs_manager.cc:362] Metadata directory not provided
I20250905 08:25:15.703750  6689 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250905 08:25:15.704258  6689 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250905 08:25:15.710031  6689 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/master-0/data/instance:
uuid: "9b4fd55bb34c4b0e8400c9dcc5da2dae"
format_stamp: "Formatted at 2025-09-05 08:25:15 on dist-test-slave-0x95"
I20250905 08:25:15.711365  6689 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/master-0/wal/instance:
uuid: "9b4fd55bb34c4b0e8400c9dcc5da2dae"
format_stamp: "Formatted at 2025-09-05 08:25:15 on dist-test-slave-0x95"
I20250905 08:25:15.718470  6689 fs_manager.cc:696] Time spent creating directory manager: real 0.006s	user 0.005s	sys 0.001s
I20250905 08:25:15.724014  6730 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:25:15.724886  6689 fs_manager.cc:730] Time spent opening block manager: real 0.003s	user 0.003s	sys 0.001s
I20250905 08:25:15.725210  6689 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/master-0/data,/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/master-0/wal
uuid: "9b4fd55bb34c4b0e8400c9dcc5da2dae"
format_stamp: "Formatted at 2025-09-05 08:25:15 on dist-test-slave-0x95"
I20250905 08:25:15.725579  6689 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/master-0/wal
metadata directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/master-0/wal
1 data directories: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250905 08:25:15.783512  6689 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250905 08:25:15.784863  6689 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250905 08:25:15.785269  6689 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250905 08:25:15.789662  6689 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250905 08:25:15.802875  6689 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 9b4fd55bb34c4b0e8400c9dcc5da2dae: Bootstrap starting.
I20250905 08:25:15.807224  6689 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 9b4fd55bb34c4b0e8400c9dcc5da2dae: Neither blocks nor log segments found. Creating new log.
I20250905 08:25:15.808907  6689 log.cc:826] T 00000000000000000000000000000000 P 9b4fd55bb34c4b0e8400c9dcc5da2dae: Log is configured to *not* fsync() on all Append() calls
I20250905 08:25:15.813824  6689 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 9b4fd55bb34c4b0e8400c9dcc5da2dae: No bootstrap required, opened a new log
I20250905 08:25:15.828444  6689 raft_consensus.cc:357] T 00000000000000000000000000000000 P 9b4fd55bb34c4b0e8400c9dcc5da2dae [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9b4fd55bb34c4b0e8400c9dcc5da2dae" member_type: VOTER }
I20250905 08:25:15.828811  6689 raft_consensus.cc:383] T 00000000000000000000000000000000 P 9b4fd55bb34c4b0e8400c9dcc5da2dae [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250905 08:25:15.828981  6689 raft_consensus.cc:738] T 00000000000000000000000000000000 P 9b4fd55bb34c4b0e8400c9dcc5da2dae [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 9b4fd55bb34c4b0e8400c9dcc5da2dae, State: Initialized, Role: FOLLOWER
I20250905 08:25:15.829469  6689 consensus_queue.cc:260] T 00000000000000000000000000000000 P 9b4fd55bb34c4b0e8400c9dcc5da2dae [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9b4fd55bb34c4b0e8400c9dcc5da2dae" member_type: VOTER }
I20250905 08:25:15.829857  6689 raft_consensus.cc:397] T 00000000000000000000000000000000 P 9b4fd55bb34c4b0e8400c9dcc5da2dae [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250905 08:25:15.830098  6689 raft_consensus.cc:491] T 00000000000000000000000000000000 P 9b4fd55bb34c4b0e8400c9dcc5da2dae [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250905 08:25:15.830354  6689 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 9b4fd55bb34c4b0e8400c9dcc5da2dae [term 0 FOLLOWER]: Advancing to term 1
I20250905 08:25:15.833936  6689 raft_consensus.cc:513] T 00000000000000000000000000000000 P 9b4fd55bb34c4b0e8400c9dcc5da2dae [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9b4fd55bb34c4b0e8400c9dcc5da2dae" member_type: VOTER }
I20250905 08:25:15.834477  6689 leader_election.cc:304] T 00000000000000000000000000000000 P 9b4fd55bb34c4b0e8400c9dcc5da2dae [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 9b4fd55bb34c4b0e8400c9dcc5da2dae; no voters: 
I20250905 08:25:15.836074  6689 leader_election.cc:290] T 00000000000000000000000000000000 P 9b4fd55bb34c4b0e8400c9dcc5da2dae [CANDIDATE]: Term 1 election: Requested vote from peers 
I20250905 08:25:15.836273  6737 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 9b4fd55bb34c4b0e8400c9dcc5da2dae [term 1 FOLLOWER]: Leader election won for term 1
I20250905 08:25:15.838165  6737 raft_consensus.cc:695] T 00000000000000000000000000000000 P 9b4fd55bb34c4b0e8400c9dcc5da2dae [term 1 LEADER]: Becoming Leader. State: Replica: 9b4fd55bb34c4b0e8400c9dcc5da2dae, State: Running, Role: LEADER
I20250905 08:25:15.838932  6737 consensus_queue.cc:237] T 00000000000000000000000000000000 P 9b4fd55bb34c4b0e8400c9dcc5da2dae [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9b4fd55bb34c4b0e8400c9dcc5da2dae" member_type: VOTER }
I20250905 08:25:15.845956  6738 sys_catalog.cc:455] T 00000000000000000000000000000000 P 9b4fd55bb34c4b0e8400c9dcc5da2dae [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "9b4fd55bb34c4b0e8400c9dcc5da2dae" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9b4fd55bb34c4b0e8400c9dcc5da2dae" member_type: VOTER } }
I20250905 08:25:15.846400  6738 sys_catalog.cc:458] T 00000000000000000000000000000000 P 9b4fd55bb34c4b0e8400c9dcc5da2dae [sys.catalog]: This master's current role is: LEADER
I20250905 08:25:15.847046  6739 sys_catalog.cc:455] T 00000000000000000000000000000000 P 9b4fd55bb34c4b0e8400c9dcc5da2dae [sys.catalog]: SysCatalogTable state changed. Reason: New leader 9b4fd55bb34c4b0e8400c9dcc5da2dae. Latest consensus state: current_term: 1 leader_uuid: "9b4fd55bb34c4b0e8400c9dcc5da2dae" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9b4fd55bb34c4b0e8400c9dcc5da2dae" member_type: VOTER } }
I20250905 08:25:15.847394  6739 sys_catalog.cc:458] T 00000000000000000000000000000000 P 9b4fd55bb34c4b0e8400c9dcc5da2dae [sys.catalog]: This master's current role is: LEADER
I20250905 08:25:15.857268  6689 tablet_replica.cc:331] T 00000000000000000000000000000000 P 9b4fd55bb34c4b0e8400c9dcc5da2dae: stopping tablet replica
I20250905 08:25:15.857707  6689 raft_consensus.cc:2241] T 00000000000000000000000000000000 P 9b4fd55bb34c4b0e8400c9dcc5da2dae [term 1 LEADER]: Raft consensus shutting down.
I20250905 08:25:15.858049  6689 raft_consensus.cc:2270] T 00000000000000000000000000000000 P 9b4fd55bb34c4b0e8400c9dcc5da2dae [term 1 FOLLOWER]: Raft consensus is shut down!
I20250905 08:25:15.859879  6689 master.cc:561] Master@0.0.0.0:7051 shutting down...
W20250905 08:25:15.860262  6689 acceptor_pool.cc:196] Could not shut down acceptor socket on 0.0.0.0:7051: Network error: shutdown error: Transport endpoint is not connected (error 107)
I20250905 08:25:15.886555  6689 master.cc:583] Master@0.0.0.0:7051 shutdown complete.
I20250905 08:25:16.914520   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 6215
I20250905 08:25:16.944598   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 6348
I20250905 08:25:16.975117   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 6481
I20250905 08:25:17.012295   426 external_mini_cluster.cc:1366] Running /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
/tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/master-0/wal
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/master-0/logs
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.106.190:37837
--webserver_interface=127.0.106.190
--webserver_port=44527
--builtin_ntp_servers=127.0.106.148:45433
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.0.106.190:37837 with env {}
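The argv dump above is the complete relaunch command: the common flags, the "master run" subcommand, then the master-specific flags. A hedged sketch of how a harness could assemble and start it; the helper is hypothetical, only the flag layout comes from the log:

    import subprocess

    def start_kudu_master(kudu_bin, common_flags, master_flags):
        # Mirrors the layout above: <binary> <common flags> master run <master flags>
        argv = [kudu_bin, *common_flags, "master", "run", *master_flags]
        return subprocess.Popen(argv)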
W20250905 08:25:17.293453  6747 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:25:17.294015  6747 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250905 08:25:17.294451  6747 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:25:17.325062  6747 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250905 08:25:17.325349  6747 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250905 08:25:17.325591  6747 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250905 08:25:17.325829  6747 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250905 08:25:17.357522  6747 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:45433
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/master-0/data
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.0.106.190:37837
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.106.190:37837
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/master-0/data/info.pb
--webserver_interface=127.0.106.190
--webserver_port=44527
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/master-0/logs
--logbuflevel=-1
--logtostderr=true

Master server version:
kudu 1.19.0-SNAPSHOT
revision e2e116a9cb215058aad161061214d11c617cedac
build type FASTDEBUG
built by None at 05 Sep 2025 08:06:49 UTC on 24a791456cd2
build id 7909
TSAN enabled
I20250905 08:25:17.358625  6747 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250905 08:25:17.360069  6747 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250905 08:25:17.369529  6753 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:25:17.370054  6754 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:25:18.518347  6755 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1143 milliseconds
W20250905 08:25:18.518460  6747 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.148s	user 0.418s	sys 0.725s
W20250905 08:25:18.518864  6747 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.149s	user 0.419s	sys 0.726s
W20250905 08:25:18.519722  6756 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250905 08:25:18.519759  6747 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250905 08:25:18.521008  6747 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250905 08:25:18.528434  6747 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250905 08:25:18.529807  6747 hybrid_clock.cc:648] HybridClock initialized: now 1757060718529769 us; error 38 us; skew 500 ppm
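The "error 38 us; skew 500 ppm" figures above bound the hybrid clock's uncertainty. Assuming the usual model where the maximum error grows at the skew rate between synchronizations, the bound one second after this sync would be roughly:

    initial_error_us = 38       # from the log line above
    skew_ppm = 500              # 500 microseconds of drift per second of wall time
    elapsed_s = 1.0
    print(initial_error_us + skew_ppm * elapsed_s)  # ~538 us of maximum error after 1 s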
I20250905 08:25:18.530697  6747 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250905 08:25:18.537153  6747 webserver.cc:480] Webserver started at http://127.0.106.190:44527/ using document root <none> and password file <none>
I20250905 08:25:18.537932  6747 fs_manager.cc:362] Metadata directory not provided
I20250905 08:25:18.538126  6747 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250905 08:25:18.545708  6747 fs_manager.cc:714] Time spent opening directory manager: real 0.005s	user 0.004s	sys 0.000s
I20250905 08:25:18.550016  6764 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:25:18.550989  6747 fs_manager.cc:730] Time spent opening block manager: real 0.004s	user 0.002s	sys 0.002s
I20250905 08:25:18.551277  6747 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/master-0/data,/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/master-0/wal
uuid: "9b4fd55bb34c4b0e8400c9dcc5da2dae"
format_stamp: "Formatted at 2025-09-05 08:25:15 on dist-test-slave-0x95"
I20250905 08:25:18.553140  6747 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/master-0/wal
metadata directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/master-0/wal
1 data directories: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250905 08:25:18.615897  6747 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250905 08:25:18.617702  6747 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250905 08:25:18.618080  6747 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250905 08:25:18.683365  6747 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.190:37837
I20250905 08:25:18.683434  6815 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.190:37837 every 8 connection(s)
I20250905 08:25:18.686094  6747 server_base.cc:1179] Dumped server information to /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/master-0/data/info.pb
I20250905 08:25:18.692718   426 external_mini_cluster.cc:1428] Started /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu as pid 6747
I20250905 08:25:18.694391   426 external_mini_cluster.cc:1366] Running /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
/tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-0/wal
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-0/logs
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.129:38999
--local_ip_for_outbound_sockets=127.0.106.129
--tserver_master_addrs=127.0.106.190:37837
--webserver_port=39097
--webserver_interface=127.0.106.129
--builtin_ntp_servers=127.0.106.148:45433
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
I20250905 08:25:18.695814  6816 sys_catalog.cc:263] Verifying existing consensus state
I20250905 08:25:18.706009  6816 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 9b4fd55bb34c4b0e8400c9dcc5da2dae: Bootstrap starting.
I20250905 08:25:18.715444  6816 log.cc:826] T 00000000000000000000000000000000 P 9b4fd55bb34c4b0e8400c9dcc5da2dae: Log is configured to *not* fsync() on all Append() calls
I20250905 08:25:18.726441  6816 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 9b4fd55bb34c4b0e8400c9dcc5da2dae: Bootstrap replayed 1/1 log segments. Stats: ops{read=2 overwritten=0 applied=2 ignored=0} inserts{seen=2 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250905 08:25:18.727113  6816 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 9b4fd55bb34c4b0e8400c9dcc5da2dae: Bootstrap complete.
I20250905 08:25:18.746886  6816 raft_consensus.cc:357] T 00000000000000000000000000000000 P 9b4fd55bb34c4b0e8400c9dcc5da2dae [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9b4fd55bb34c4b0e8400c9dcc5da2dae" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 37837 } }
I20250905 08:25:18.747594  6816 raft_consensus.cc:738] T 00000000000000000000000000000000 P 9b4fd55bb34c4b0e8400c9dcc5da2dae [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 9b4fd55bb34c4b0e8400c9dcc5da2dae, State: Initialized, Role: FOLLOWER
I20250905 08:25:18.748385  6816 consensus_queue.cc:260] T 00000000000000000000000000000000 P 9b4fd55bb34c4b0e8400c9dcc5da2dae [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 2, Last appended: 1.2, Last appended by leader: 2, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9b4fd55bb34c4b0e8400c9dcc5da2dae" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 37837 } }
I20250905 08:25:18.748955  6816 raft_consensus.cc:397] T 00000000000000000000000000000000 P 9b4fd55bb34c4b0e8400c9dcc5da2dae [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250905 08:25:18.749234  6816 raft_consensus.cc:491] T 00000000000000000000000000000000 P 9b4fd55bb34c4b0e8400c9dcc5da2dae [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250905 08:25:18.749542  6816 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 9b4fd55bb34c4b0e8400c9dcc5da2dae [term 1 FOLLOWER]: Advancing to term 2
I20250905 08:25:18.755129  6816 raft_consensus.cc:513] T 00000000000000000000000000000000 P 9b4fd55bb34c4b0e8400c9dcc5da2dae [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9b4fd55bb34c4b0e8400c9dcc5da2dae" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 37837 } }
I20250905 08:25:18.755923  6816 leader_election.cc:304] T 00000000000000000000000000000000 P 9b4fd55bb34c4b0e8400c9dcc5da2dae [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 9b4fd55bb34c4b0e8400c9dcc5da2dae; no voters: 
I20250905 08:25:18.757853  6816 leader_election.cc:290] T 00000000000000000000000000000000 P 9b4fd55bb34c4b0e8400c9dcc5da2dae [CANDIDATE]: Term 2 election: Requested vote from peers 
I20250905 08:25:18.758126  6820 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 9b4fd55bb34c4b0e8400c9dcc5da2dae [term 2 FOLLOWER]: Leader election won for term 2
I20250905 08:25:18.760445  6820 raft_consensus.cc:695] T 00000000000000000000000000000000 P 9b4fd55bb34c4b0e8400c9dcc5da2dae [term 2 LEADER]: Becoming Leader. State: Replica: 9b4fd55bb34c4b0e8400c9dcc5da2dae, State: Running, Role: LEADER
I20250905 08:25:18.761157  6820 consensus_queue.cc:237] T 00000000000000000000000000000000 P 9b4fd55bb34c4b0e8400c9dcc5da2dae [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 2, Committed index: 2, Last appended: 1.2, Last appended by leader: 2, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9b4fd55bb34c4b0e8400c9dcc5da2dae" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 37837 } }
I20250905 08:25:18.763224  6816 sys_catalog.cc:564] T 00000000000000000000000000000000 P 9b4fd55bb34c4b0e8400c9dcc5da2dae [sys.catalog]: configured and running, proceeding with master startup.
I20250905 08:25:18.768975  6821 sys_catalog.cc:455] T 00000000000000000000000000000000 P 9b4fd55bb34c4b0e8400c9dcc5da2dae [sys.catalog]: SysCatalogTable state changed. Reason: New leader 9b4fd55bb34c4b0e8400c9dcc5da2dae. Latest consensus state: current_term: 2 leader_uuid: "9b4fd55bb34c4b0e8400c9dcc5da2dae" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9b4fd55bb34c4b0e8400c9dcc5da2dae" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 37837 } } }
I20250905 08:25:18.769681  6821 sys_catalog.cc:458] T 00000000000000000000000000000000 P 9b4fd55bb34c4b0e8400c9dcc5da2dae [sys.catalog]: This master's current role is: LEADER
I20250905 08:25:18.772553  6822 sys_catalog.cc:455] T 00000000000000000000000000000000 P 9b4fd55bb34c4b0e8400c9dcc5da2dae [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 2 leader_uuid: "9b4fd55bb34c4b0e8400c9dcc5da2dae" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9b4fd55bb34c4b0e8400c9dcc5da2dae" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 37837 } } }
I20250905 08:25:18.773476  6822 sys_catalog.cc:458] T 00000000000000000000000000000000 P 9b4fd55bb34c4b0e8400c9dcc5da2dae [sys.catalog]: This master's current role is: LEADER
I20250905 08:25:18.777274  6827 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250905 08:25:18.787935  6827 catalog_manager.cc:671] Loaded metadata for table pre_rebuild [id=70247959484d4acd8cdc84064aeb8bc0]
I20250905 08:25:18.794791  6827 tablet_loader.cc:96] loaded metadata for tablet f0e558aafe044a9a9f3334d3a81ce0da (table pre_rebuild [id=70247959484d4acd8cdc84064aeb8bc0])
I20250905 08:25:18.796499  6827 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250905 08:25:18.825327  6827 catalog_manager.cc:1349] Generated new cluster ID: 2930a32710de43259b906927f8727fc9
I20250905 08:25:18.825603  6827 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250905 08:25:18.842104  6838 catalog_manager.cc:797] Waiting for catalog manager background task thread to start: Service unavailable: Catalog manager is not initialized. State: Starting
I20250905 08:25:18.851816  6827 catalog_manager.cc:1372] Generated new certificate authority record
I20250905 08:25:18.853652  6827 catalog_manager.cc:1506] Loading token signing keys...
I20250905 08:25:18.878156  6827 catalog_manager.cc:5955] T 00000000000000000000000000000000 P 9b4fd55bb34c4b0e8400c9dcc5da2dae: Generated new TSK 0
I20250905 08:25:18.879083  6827 catalog_manager.cc:1516] Initializing in-progress tserver states...
W20250905 08:25:19.020165  6818 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:25:19.020627  6818 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250905 08:25:19.021036  6818 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:25:19.049088  6818 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250905 08:25:19.049796  6818 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.129
I20250905 08:25:19.081141  6818 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:45433
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-0/data
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.129:38999
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-0/data/info.pb
--webserver_interface=127.0.106.129
--webserver_port=39097
--tserver_master_addrs=127.0.106.190:37837
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.129
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-0/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision e2e116a9cb215058aad161061214d11c617cedac
build type FASTDEBUG
built by None at 05 Sep 2025 08:06:49 UTC on 24a791456cd2
build id 7909
TSAN enabled
I20250905 08:25:19.082362  6818 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250905 08:25:19.083935  6818 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250905 08:25:19.094947  6844 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:25:20.498967  6843 debug-util.cc:398] Leaking SignalData structure 0x7b0800034ea0 after lost signal to thread 6818
W20250905 08:25:20.890647  6818 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.794s	user 0.649s	sys 1.071s
W20250905 08:25:20.891036  6818 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.795s	user 0.650s	sys 1.071s
W20250905 08:25:19.096455  6845 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:25:20.892104  6846 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection timed out after 1795 milliseconds
W20250905 08:25:20.893028  6847 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250905 08:25:20.892951  6818 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250905 08:25:20.896379  6818 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250905 08:25:20.898320  6818 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250905 08:25:20.899711  6818 hybrid_clock.cc:648] HybridClock initialized: now 1757060720899677 us; error 33 us; skew 500 ppm
I20250905 08:25:20.900470  6818 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250905 08:25:20.906499  6818 webserver.cc:480] Webserver started at http://127.0.106.129:39097/ using document root <none> and password file <none>
I20250905 08:25:20.907342  6818 fs_manager.cc:362] Metadata directory not provided
I20250905 08:25:20.907536  6818 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250905 08:25:20.914862  6818 fs_manager.cc:714] Time spent opening directory manager: real 0.005s	user 0.001s	sys 0.005s
I20250905 08:25:20.919345  6854 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:25:20.920291  6818 fs_manager.cc:730] Time spent opening block manager: real 0.003s	user 0.003s	sys 0.000s
I20250905 08:25:20.920552  6818 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-0/data,/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-0/wal
uuid: "2fecb70240c04dc0990ccc827917c296"
format_stamp: "Formatted at 2025-09-05 08:25:03 on dist-test-slave-0x95"
I20250905 08:25:20.922143  6818 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-0/wal
metadata directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-0/wal
1 data directories: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250905 08:25:20.968967  6818 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250905 08:25:20.970214  6818 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250905 08:25:20.970623  6818 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250905 08:25:20.973505  6818 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250905 08:25:20.979315  6861 ts_tablet_manager.cc:542] Loading tablet metadata (0/1 complete)
I20250905 08:25:20.986474  6818 ts_tablet_manager.cc:579] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20250905 08:25:20.986657  6818 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.009s	user 0.002s	sys 0.000s
I20250905 08:25:20.986932  6818 ts_tablet_manager.cc:594] Registering tablets (0/1 complete)
I20250905 08:25:20.990994  6818 ts_tablet_manager.cc:610] Registered 1 tablets
I20250905 08:25:20.991199  6818 ts_tablet_manager.cc:589] Time spent register tablets: real 0.004s	user 0.001s	sys 0.000s
I20250905 08:25:20.991520  6861 tablet_bootstrap.cc:492] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296: Bootstrap starting.
I20250905 08:25:21.168630  6818 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.129:38999
I20250905 08:25:21.168756  6967 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.129:38999 every 8 connection(s)
I20250905 08:25:21.171342  6818 server_base.cc:1179] Dumped server information to /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-0/data/info.pb
I20250905 08:25:21.180908   426 external_mini_cluster.cc:1428] Started /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu as pid 6818
I20250905 08:25:21.183126   426 external_mini_cluster.cc:1366] Running /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
/tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-1/wal
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-1/logs
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.130:36861
--local_ip_for_outbound_sockets=127.0.106.130
--tserver_master_addrs=127.0.106.190:37837
--webserver_port=39937
--webserver_interface=127.0.106.130
--builtin_ntp_servers=127.0.106.148:45433
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
I20250905 08:25:21.236274  6968 heartbeater.cc:344] Connected to a master server at 127.0.106.190:37837
I20250905 08:25:21.236743  6968 heartbeater.cc:461] Registering TS with master...
I20250905 08:25:21.237907  6968 heartbeater.cc:507] Master 127.0.106.190:37837 requested a full tablet report, sending...
I20250905 08:25:21.242036  6781 ts_manager.cc:194] Registered new tserver with Master: 2fecb70240c04dc0990ccc827917c296 (127.0.106.129:38999)
I20250905 08:25:21.249015  6781 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.129:38149
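The heartbeater and ts_manager lines above show the first contact between a tablet server and the master: connect, register, send the full tablet report the master requested, and receive a signed X509 certificate. A hedged sketch of that sequence using stub objects so it runs; the method names are illustrative, not Kudu's RPC interface:

    class MasterStub:
        # Stand-in for the master so the sketch is runnable.
        def register_tserver(self, uuid, addr): print("registered", uuid, addr)
        def process_tablet_report(self, uuid, report): print("tablet report:", len(report), "tablets")
        def sign_certificate(self, csr): return "signed:" + csr

    def first_contact(master, uuid, addr, report, csr):
        master.register_tserver(uuid, addr)         # "Registering TS with master..."
        master.process_tablet_report(uuid, report)  # full report requested on first contact
        return master.sign_certificate(csr)         # "Signed X509 certificate for tserver"

    first_contact(MasterStub(), "2fecb70240c04dc0990ccc827917c296",
                  "127.0.106.129:38999", ["f0e558aafe044a9a9f3334d3a81ce0da"], "csr-pem")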
I20250905 08:25:21.302183  6861 log.cc:826] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296: Log is configured to *not* fsync() on all Append() calls
W20250905 08:25:21.503458  6972 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:25:21.503952  6972 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250905 08:25:21.504470  6972 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:25:21.533831  6972 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250905 08:25:21.534660  6972 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.130
I20250905 08:25:21.566879  6972 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:45433
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-1/data
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.130:36861
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-1/data/info.pb
--webserver_interface=127.0.106.130
--webserver_port=39937
--tserver_master_addrs=127.0.106.190:37837
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.130
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-1/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision e2e116a9cb215058aad161061214d11c617cedac
build type FASTDEBUG
built by None at 05 Sep 2025 08:06:49 UTC on 24a791456cd2
build id 7909
TSAN enabled
I20250905 08:25:21.568102  6972 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250905 08:25:21.569537  6972 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250905 08:25:21.581104  6979 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250905 08:25:22.253166  6968 heartbeater.cc:499] Master 127.0.106.190:37837 was elected leader, sending a full tablet report...
W20250905 08:25:21.582398  6980 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:25:22.930838  6972 thread.cc:641] GCE (cloud detector) Time spent creating pthread: real 1.350s	user 0.483s	sys 0.863s
W20250905 08:25:22.931208  6972 thread.cc:608] GCE (cloud detector) Time spent starting thread: real 1.351s	user 0.484s	sys 0.865s
I20250905 08:25:22.933291  6972 server_base.cc:1047] running on GCE node
W20250905 08:25:22.934334  6984 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250905 08:25:22.935817  6972 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250905 08:25:22.938513  6972 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250905 08:25:22.939932  6972 hybrid_clock.cc:648] HybridClock initialized: now 1757060722939859 us; error 61 us; skew 500 ppm
I20250905 08:25:22.940985  6972 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250905 08:25:22.949242  6972 webserver.cc:480] Webserver started at http://127.0.106.130:39937/ using document root <none> and password file <none>
I20250905 08:25:22.950592  6972 fs_manager.cc:362] Metadata directory not provided
I20250905 08:25:22.950904  6972 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250905 08:25:22.962236  6972 fs_manager.cc:714] Time spent opening directory manager: real 0.006s	user 0.000s	sys 0.005s
I20250905 08:25:22.968307  6989 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:25:22.969657  6972 fs_manager.cc:730] Time spent opening block manager: real 0.005s	user 0.003s	sys 0.001s
I20250905 08:25:22.970153  6972 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-1/data,/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-1/wal
uuid: "d6b4974606744830b0cac4bdf3ce0372"
format_stamp: "Formatted at 2025-09-05 08:25:05 on dist-test-slave-0x95"
I20250905 08:25:22.972723  6972 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-1/wal
metadata directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-1/wal
1 data directories: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250905 08:25:23.047612  6972 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250905 08:25:23.049501  6972 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250905 08:25:23.050122  6972 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250905 08:25:23.053438  6972 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250905 08:25:23.061111  6996 ts_tablet_manager.cc:542] Loading tablet metadata (0/1 complete)
I20250905 08:25:23.071519  6972 ts_tablet_manager.cc:579] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20250905 08:25:23.071712  6972 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.012s	user 0.000s	sys 0.002s
I20250905 08:25:23.071954  6972 ts_tablet_manager.cc:594] Registering tablets (0/1 complete)
I20250905 08:25:23.076078  6972 ts_tablet_manager.cc:610] Registered 1 tablets
I20250905 08:25:23.076258  6972 ts_tablet_manager.cc:589] Time spent register tablets: real 0.004s	user 0.002s	sys 0.002s
I20250905 08:25:23.076691  6996 tablet_bootstrap.cc:492] T f0e558aafe044a9a9f3334d3a81ce0da P d6b4974606744830b0cac4bdf3ce0372: Bootstrap starting.
I20250905 08:25:23.300416  6972 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.130:36861
I20250905 08:25:23.300631  7102 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.130:36861 every 8 connection(s)
I20250905 08:25:23.302983  6972 server_base.cc:1179] Dumped server information to /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-1/data/info.pb
I20250905 08:25:23.310563   426 external_mini_cluster.cc:1428] Started /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu as pid 6972
I20250905 08:25:23.312335   426 external_mini_cluster.cc:1366] Running /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
/tmp/dist-test-task9wMJuX/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-2/wal
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-2/logs
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.131:34783
--local_ip_for_outbound_sockets=127.0.106.131
--tserver_master_addrs=127.0.106.190:37837
--webserver_port=33793
--webserver_interface=127.0.106.131
--builtin_ntp_servers=127.0.106.148:45433
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
I20250905 08:25:23.347062  7103 heartbeater.cc:344] Connected to a master server at 127.0.106.190:37837
I20250905 08:25:23.347594  7103 heartbeater.cc:461] Registering TS with master...
I20250905 08:25:23.349006  7103 heartbeater.cc:507] Master 127.0.106.190:37837 requested a full tablet report, sending...
I20250905 08:25:23.353261  6781 ts_manager.cc:194] Registered new tserver with Master: d6b4974606744830b0cac4bdf3ce0372 (127.0.106.130:36861)
I20250905 08:25:23.356423  6781 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.130:55369
I20250905 08:25:23.363497  6996 log.cc:826] T f0e558aafe044a9a9f3334d3a81ce0da P d6b4974606744830b0cac4bdf3ce0372: Log is configured to *not* fsync() on all Append() calls
W20250905 08:25:23.769908  7107 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250905 08:25:23.770500  7107 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250905 08:25:23.771256  7107 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250905 08:25:23.824301  7107 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250905 08:25:23.825635  7107 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.131
I20250905 08:25:23.882678  7107 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:45433
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-2/data
--fs_wal_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.131:34783
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-2/data/info.pb
--webserver_interface=127.0.106.131
--webserver_port=33793
--tserver_master_addrs=127.0.106.190:37837
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.131
--log_dir=/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-2/logs
--logbuflevel=-1
--logtostderr=true

Tablet server version:
kudu 1.19.0-SNAPSHOT
revision e2e116a9cb215058aad161061214d11c617cedac
build type FASTDEBUG
built by None at 05 Sep 2025 08:06:49 UTC on 24a791456cd2
build id 7909
TSAN enabled
I20250905 08:25:23.884290  7107 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250905 08:25:23.886286  7107 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250905 08:25:23.900421  7114 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250905 08:25:24.360478  7103 heartbeater.cc:499] Master 127.0.106.190:37837 was elected leader, sending a full tablet report...
I20250905 08:25:25.366622  6861 tablet_bootstrap.cc:492] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296: Bootstrap replayed 1/1 log segments. Stats: ops{read=205 overwritten=0 applied=205 ignored=0} inserts{seen=10200 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250905 08:25:25.367571  6861 tablet_bootstrap.cc:492] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296: Bootstrap complete.
I20250905 08:25:25.373471  6861 ts_tablet_manager.cc:1397] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296: Time spent bootstrapping tablet: real 4.382s	user 3.577s	sys 0.062s
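The bootstrap stats two lines up pair with the 4.382 s real time here; a quick rate check from those logged numbers:

    ops, inserts, real_s = 205, 10200, 4.382  # from the bootstrap summary above
    print(round(ops / real_s, 1))     # ~46.8 replayed WAL ops per second
    print(round(inserts / real_s))    # ~2328 replayed row inserts per second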
I20250905 08:25:25.398698  6861 raft_consensus.cc:357] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "9ff1e0785190406d8a41f92f2fb869fb" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 34783 } } peers { permanent_uuid: "d6b4974606744830b0cac4bdf3ce0372" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36861 } } peers { permanent_uuid: "2fecb70240c04dc0990ccc827917c296" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 38999 } }
I20250905 08:25:25.403148  6861 raft_consensus.cc:738] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 2fecb70240c04dc0990ccc827917c296, State: Initialized, Role: FOLLOWER
I20250905 08:25:25.404646  6861 consensus_queue.cc:260] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 205, Last appended: 1.205, Last appended by leader: 205, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "9ff1e0785190406d8a41f92f2fb869fb" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 34783 } } peers { permanent_uuid: "d6b4974606744830b0cac4bdf3ce0372" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36861 } } peers { permanent_uuid: "2fecb70240c04dc0990ccc827917c296" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 38999 } }
I20250905 08:25:25.414911  6861 ts_tablet_manager.cc:1428] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296: Time spent starting tablet: real 0.041s	user 0.029s	sys 0.008s
W20250905 08:25:25.302253  7113 debug-util.cc:398] Leaking SignalData structure 0x7b0800034ea0 after lost signal to thread 7107
W20250905 08:25:25.906493  7113 kernel_stack_watchdog.cc:198] Thread 7107 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:642 for 397ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250905 08:25:23.903090  7115 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:25:25.907805  7107 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 2.005s	user 0.708s	sys 1.248s
W20250905 08:25:25.908248  7107 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 2.005s	user 0.709s	sys 1.248s
W20250905 08:25:25.909426  7117 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250905 08:25:25.911659  7116 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 2004 milliseconds
I20250905 08:25:25.911729  7107 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250905 08:25:25.913055  7107 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250905 08:25:25.915491  7107 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250905 08:25:25.916944  7107 hybrid_clock.cc:648] HybridClock initialized: now 1757060725916891 us; error 48 us; skew 500 ppm
I20250905 08:25:25.917905  7107 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250905 08:25:25.924407  7107 webserver.cc:480] Webserver started at http://127.0.106.131:33793/ using document root <none> and password file <none>
I20250905 08:25:25.925590  7107 fs_manager.cc:362] Metadata directory not provided
I20250905 08:25:25.925848  7107 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250905 08:25:25.935302  7107 fs_manager.cc:714] Time spent opening directory manager: real 0.006s	user 0.005s	sys 0.001s
I20250905 08:25:25.940505  7125 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s	user 0.000s	sys 0.000s
I20250905 08:25:25.941504  7107 fs_manager.cc:730] Time spent opening block manager: real 0.004s	user 0.000s	sys 0.004s
I20250905 08:25:25.941833  7107 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-2/data,/tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-2/wal
uuid: "9ff1e0785190406d8a41f92f2fb869fb"
format_stamp: "Formatted at 2025-09-05 08:25:08 on dist-test-slave-0x95"
I20250905 08:25:25.944314  7107 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-2/wal
metadata directory: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-2/wal
1 data directories: /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250905 08:25:25.999989  7107 rpc_server.cc:225] running with OpenSSL 1.1.1  11 Sep 2018
I20250905 08:25:26.001585  7107 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250905 08:25:26.002045  7107 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250905 08:25:26.005172  7107 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250905 08:25:26.011478  7132 ts_tablet_manager.cc:542] Loading tablet metadata (0/1 complete)
I20250905 08:25:26.018855  7107 ts_tablet_manager.cc:579] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20250905 08:25:26.019083  7107 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.009s	user 0.001s	sys 0.001s
I20250905 08:25:26.019383  7107 ts_tablet_manager.cc:594] Registering tablets (0/1 complete)
I20250905 08:25:26.025542  7107 ts_tablet_manager.cc:610] Registered 1 tablets
I20250905 08:25:26.025748  7107 ts_tablet_manager.cc:589] Time spent register tablets: real 0.006s	user 0.003s	sys 0.002s
I20250905 08:25:26.026048  7132 tablet_bootstrap.cc:492] T f0e558aafe044a9a9f3334d3a81ce0da P 9ff1e0785190406d8a41f92f2fb869fb: Bootstrap starting.
I20250905 08:25:26.237690  7107 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.131:34783
I20250905 08:25:26.237880  7238 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.131:34783 every 8 connection(s)
I20250905 08:25:26.241158  7107 server_base.cc:1179] Dumped server information to /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-2/data/info.pb
I20250905 08:25:26.247609   426 external_mini_cluster.cc:1428] Started /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu as pid 7107
I20250905 08:25:26.286350  7239 heartbeater.cc:344] Connected to a master server at 127.0.106.190:37837
I20250905 08:25:26.286748  7239 heartbeater.cc:461] Registering TS with master...
I20250905 08:25:26.287940  7239 heartbeater.cc:507] Master 127.0.106.190:37837 requested a full tablet report, sending...
I20250905 08:25:26.290946  6781 ts_manager.cc:194] Registered new tserver with Master: 9ff1e0785190406d8a41f92f2fb869fb (127.0.106.131:34783)
I20250905 08:25:26.292855  6781 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.131:53507
I20250905 08:25:26.303303   426 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250905 08:25:26.372334  7132 log.cc:826] T f0e558aafe044a9a9f3334d3a81ce0da P 9ff1e0785190406d8a41f92f2fb869fb: Log is configured to *not* fsync() on all Append() calls
I20250905 08:25:26.666253  6996 tablet_bootstrap.cc:492] T f0e558aafe044a9a9f3334d3a81ce0da P d6b4974606744830b0cac4bdf3ce0372: Bootstrap replayed 1/1 log segments. Stats: ops{read=205 overwritten=0 applied=205 ignored=0} inserts{seen=10200 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250905 08:25:26.666996  6996 tablet_bootstrap.cc:492] T f0e558aafe044a9a9f3334d3a81ce0da P d6b4974606744830b0cac4bdf3ce0372: Bootstrap complete.
I20250905 08:25:26.668222  6996 ts_tablet_manager.cc:1397] T f0e558aafe044a9a9f3334d3a81ce0da P d6b4974606744830b0cac4bdf3ce0372: Time spent bootstrapping tablet: real 3.592s	user 3.008s	sys 0.048s
I20250905 08:25:26.677831  6996 raft_consensus.cc:357] T f0e558aafe044a9a9f3334d3a81ce0da P d6b4974606744830b0cac4bdf3ce0372 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "9ff1e0785190406d8a41f92f2fb869fb" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 34783 } } peers { permanent_uuid: "d6b4974606744830b0cac4bdf3ce0372" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36861 } } peers { permanent_uuid: "2fecb70240c04dc0990ccc827917c296" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 38999 } }
I20250905 08:25:26.679663  6996 raft_consensus.cc:738] T f0e558aafe044a9a9f3334d3a81ce0da P d6b4974606744830b0cac4bdf3ce0372 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: d6b4974606744830b0cac4bdf3ce0372, State: Initialized, Role: FOLLOWER
I20250905 08:25:26.680258  6996 consensus_queue.cc:260] T f0e558aafe044a9a9f3334d3a81ce0da P d6b4974606744830b0cac4bdf3ce0372 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 205, Last appended: 1.205, Last appended by leader: 205, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "9ff1e0785190406d8a41f92f2fb869fb" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 34783 } } peers { permanent_uuid: "d6b4974606744830b0cac4bdf3ce0372" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36861 } } peers { permanent_uuid: "2fecb70240c04dc0990ccc827917c296" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 38999 } }
I20250905 08:25:26.683251  6996 ts_tablet_manager.cc:1428] T f0e558aafe044a9a9f3334d3a81ce0da P d6b4974606744830b0cac4bdf3ce0372: Time spent starting tablet: real 0.015s	user 0.014s	sys 0.000s
I20250905 08:25:27.182284  7252 raft_consensus.cc:491] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296 [term 1 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250905 08:25:27.182837  7252 raft_consensus.cc:513] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296 [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "9ff1e0785190406d8a41f92f2fb869fb" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 34783 } } peers { permanent_uuid: "d6b4974606744830b0cac4bdf3ce0372" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36861 } } peers { permanent_uuid: "2fecb70240c04dc0990ccc827917c296" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 38999 } }
I20250905 08:25:27.185686  7252 leader_election.cc:290] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296 [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers 9ff1e0785190406d8a41f92f2fb869fb (127.0.106.131:34783), d6b4974606744830b0cac4bdf3ce0372 (127.0.106.130:36861)
I20250905 08:25:27.209823  7058 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "f0e558aafe044a9a9f3334d3a81ce0da" candidate_uuid: "2fecb70240c04dc0990ccc827917c296" candidate_term: 2 candidate_status { last_received { term: 1 index: 205 } } ignore_live_leader: false dest_uuid: "d6b4974606744830b0cac4bdf3ce0372" is_pre_election: true
I20250905 08:25:27.210616  7058 raft_consensus.cc:2466] T f0e558aafe044a9a9f3334d3a81ce0da P d6b4974606744830b0cac4bdf3ce0372 [term 1 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 2fecb70240c04dc0990ccc827917c296 in term 1.
I20250905 08:25:27.212134  6855 leader_election.cc:304] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296 [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 2fecb70240c04dc0990ccc827917c296, d6b4974606744830b0cac4bdf3ce0372; no voters: 
I20250905 08:25:27.212873  7252 raft_consensus.cc:2802] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296 [term 1 FOLLOWER]: Leader pre-election won for term 2
I20250905 08:25:27.213188  7252 raft_consensus.cc:491] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296 [term 1 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250905 08:25:27.213490  7252 raft_consensus.cc:3058] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296 [term 1 FOLLOWER]: Advancing to term 2
I20250905 08:25:27.208801  7194 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "f0e558aafe044a9a9f3334d3a81ce0da" candidate_uuid: "2fecb70240c04dc0990ccc827917c296" candidate_term: 2 candidate_status { last_received { term: 1 index: 205 } } ignore_live_leader: false dest_uuid: "9ff1e0785190406d8a41f92f2fb869fb" is_pre_election: true
W20250905 08:25:27.217952  6855 leader_election.cc:343] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296 [CANDIDATE]: Term 2 pre-election: Tablet error from VoteRequest() call to peer 9ff1e0785190406d8a41f92f2fb869fb (127.0.106.131:34783): Illegal state: must be running to vote when last-logged opid is not known
I20250905 08:25:27.221678  7252 raft_consensus.cc:513] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "9ff1e0785190406d8a41f92f2fb869fb" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 34783 } } peers { permanent_uuid: "d6b4974606744830b0cac4bdf3ce0372" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36861 } } peers { permanent_uuid: "2fecb70240c04dc0990ccc827917c296" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 38999 } }
I20250905 08:25:27.223261  7252 leader_election.cc:290] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296 [CANDIDATE]: Term 2 election: Requested vote from peers 9ff1e0785190406d8a41f92f2fb869fb (127.0.106.131:34783), d6b4974606744830b0cac4bdf3ce0372 (127.0.106.130:36861)
I20250905 08:25:27.223865  7194 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "f0e558aafe044a9a9f3334d3a81ce0da" candidate_uuid: "2fecb70240c04dc0990ccc827917c296" candidate_term: 2 candidate_status { last_received { term: 1 index: 205 } } ignore_live_leader: false dest_uuid: "9ff1e0785190406d8a41f92f2fb869fb"
I20250905 08:25:27.224201  7058 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "f0e558aafe044a9a9f3334d3a81ce0da" candidate_uuid: "2fecb70240c04dc0990ccc827917c296" candidate_term: 2 candidate_status { last_received { term: 1 index: 205 } } ignore_live_leader: false dest_uuid: "d6b4974606744830b0cac4bdf3ce0372"
W20250905 08:25:27.224756  6855 leader_election.cc:343] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296 [CANDIDATE]: Term 2 election: Tablet error from VoteRequest() call to peer 9ff1e0785190406d8a41f92f2fb869fb (127.0.106.131:34783): Illegal state: must be running to vote when last-logged opid is not known
I20250905 08:25:27.224740  7058 raft_consensus.cc:3058] T f0e558aafe044a9a9f3334d3a81ce0da P d6b4974606744830b0cac4bdf3ce0372 [term 1 FOLLOWER]: Advancing to term 2
I20250905 08:25:27.232725  7058 raft_consensus.cc:2466] T f0e558aafe044a9a9f3334d3a81ce0da P d6b4974606744830b0cac4bdf3ce0372 [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 2fecb70240c04dc0990ccc827917c296 in term 2.
I20250905 08:25:27.233546  6855 leader_election.cc:304] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 2fecb70240c04dc0990ccc827917c296, d6b4974606744830b0cac4bdf3ce0372; no voters: 9ff1e0785190406d8a41f92f2fb869fb
I20250905 08:25:27.234212  7252 raft_consensus.cc:2802] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296 [term 2 FOLLOWER]: Leader election won for term 2
I20250905 08:25:27.235819  7252 raft_consensus.cc:695] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296 [term 2 LEADER]: Becoming Leader. State: Replica: 2fecb70240c04dc0990ccc827917c296, State: Running, Role: LEADER
I20250905 08:25:27.236815  7252 consensus_queue.cc:237] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 205, Committed index: 205, Last appended: 1.205, Last appended by leader: 205, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "9ff1e0785190406d8a41f92f2fb869fb" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 34783 } } peers { permanent_uuid: "d6b4974606744830b0cac4bdf3ce0372" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36861 } } peers { permanent_uuid: "2fecb70240c04dc0990ccc827917c296" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 38999 } }
I20250905 08:25:27.248497  6781 catalog_manager.cc:5582] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296 reported cstate change: term changed from 0 to 2, leader changed from <none> to 2fecb70240c04dc0990ccc827917c296 (127.0.106.129), VOTER 2fecb70240c04dc0990ccc827917c296 (127.0.106.129) added, VOTER 9ff1e0785190406d8a41f92f2fb869fb (127.0.106.131) added, VOTER d6b4974606744830b0cac4bdf3ce0372 (127.0.106.130) added. New cstate: current_term: 2 leader_uuid: "2fecb70240c04dc0990ccc827917c296" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "9ff1e0785190406d8a41f92f2fb869fb" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 34783 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "d6b4974606744830b0cac4bdf3ce0372" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36861 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "2fecb70240c04dc0990ccc827917c296" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 38999 } health_report { overall_health: HEALTHY } } }
I20250905 08:25:27.295578  7239 heartbeater.cc:499] Master 127.0.106.190:37837 was elected leader, sending a full tablet report...
I20250905 08:25:27.681102  7058 raft_consensus.cc:1273] T f0e558aafe044a9a9f3334d3a81ce0da P d6b4974606744830b0cac4bdf3ce0372 [term 2 FOLLOWER]: Refusing update from remote peer 2fecb70240c04dc0990ccc827917c296: Log matching property violated. Preceding OpId in replica: term: 1 index: 205. Preceding OpId from leader: term: 2 index: 206. (index mismatch)
I20250905 08:25:27.682988  7252 consensus_queue.cc:1035] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296 [LEADER]: Connected to new peer: Peer: permanent_uuid: "d6b4974606744830b0cac4bdf3ce0372" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36861 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 206, Last known committed idx: 205, Time since last communication: 0.000s
W20250905 08:25:27.693259   426 scanner-internal.cc:458] Time spent opening tablet: real 1.346s	user 0.004s	sys 0.002s
W20250905 08:25:27.704483  6855 consensus_peers.cc:489] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296 -> Peer 9ff1e0785190406d8a41f92f2fb869fb (127.0.106.131:34783): Couldn't send request to peer 9ff1e0785190406d8a41f92f2fb869fb. Error code: TABLET_NOT_RUNNING (12). Status: Illegal state: Tablet not RUNNING: BOOTSTRAPPING. This is attempt 1: this message will repeat every 5th retry.
I20250905 08:25:27.797904  6923 consensus_queue.cc:237] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 206, Committed index: 206, Last appended: 2.206, Last appended by leader: 205, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 207 OBSOLETE_local: false peers { permanent_uuid: "d6b4974606744830b0cac4bdf3ce0372" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36861 } } peers { permanent_uuid: "2fecb70240c04dc0990ccc827917c296" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 38999 } }
I20250905 08:25:27.803169  7058 raft_consensus.cc:1273] T f0e558aafe044a9a9f3334d3a81ce0da P d6b4974606744830b0cac4bdf3ce0372 [term 2 FOLLOWER]: Refusing update from remote peer 2fecb70240c04dc0990ccc827917c296: Log matching property violated. Preceding OpId in replica: term: 2 index: 206. Preceding OpId from leader: term: 2 index: 207. (index mismatch)
I20250905 08:25:27.804697  7252 consensus_queue.cc:1035] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296 [LEADER]: Connected to new peer: Peer: permanent_uuid: "d6b4974606744830b0cac4bdf3ce0372" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36861 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 207, Last known committed idx: 206, Time since last communication: 0.000s
I20250905 08:25:27.811270  7270 raft_consensus.cc:2953] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296 [term 2 LEADER]: Committing config change with OpId 2.207: config changed from index -1 to 207, VOTER 9ff1e0785190406d8a41f92f2fb869fb (127.0.106.131) evicted. New config: { opid_index: 207 OBSOLETE_local: false peers { permanent_uuid: "d6b4974606744830b0cac4bdf3ce0372" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36861 } } peers { permanent_uuid: "2fecb70240c04dc0990ccc827917c296" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 38999 } } }
I20250905 08:25:27.812569  7058 raft_consensus.cc:2953] T f0e558aafe044a9a9f3334d3a81ce0da P d6b4974606744830b0cac4bdf3ce0372 [term 2 FOLLOWER]: Committing config change with OpId 2.207: config changed from index -1 to 207, VOTER 9ff1e0785190406d8a41f92f2fb869fb (127.0.106.131) evicted. New config: { opid_index: 207 OBSOLETE_local: false peers { permanent_uuid: "d6b4974606744830b0cac4bdf3ce0372" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36861 } } peers { permanent_uuid: "2fecb70240c04dc0990ccc827917c296" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 38999 } } }
I20250905 08:25:27.820392  6766 catalog_manager.cc:5095] ChangeConfig:REMOVE_PEER RPC for tablet f0e558aafe044a9a9f3334d3a81ce0da with cas_config_opid_index -1: ChangeConfig:REMOVE_PEER succeeded (attempt 1)
I20250905 08:25:27.824002  6781 catalog_manager.cc:5582] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296 reported cstate change: config changed from index -1 to 207, VOTER 9ff1e0785190406d8a41f92f2fb869fb (127.0.106.131) evicted. New cstate: current_term: 2 leader_uuid: "2fecb70240c04dc0990ccc827917c296" committed_config { opid_index: 207 OBSOLETE_local: false peers { permanent_uuid: "d6b4974606744830b0cac4bdf3ce0372" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36861 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "2fecb70240c04dc0990ccc827917c296" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 38999 } health_report { overall_health: HEALTHY } } }
I20250905 08:25:27.871881  6923 consensus_queue.cc:237] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 207, Committed index: 207, Last appended: 2.207, Last appended by leader: 205, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: 208 OBSOLETE_local: false peers { permanent_uuid: "2fecb70240c04dc0990ccc827917c296" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 38999 } }
I20250905 08:25:27.873854  7270 raft_consensus.cc:2953] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296 [term 2 LEADER]: Committing config change with OpId 2.208: config changed from index 207 to 208, VOTER d6b4974606744830b0cac4bdf3ce0372 (127.0.106.130) evicted. New config: { opid_index: 208 OBSOLETE_local: false peers { permanent_uuid: "2fecb70240c04dc0990ccc827917c296" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 38999 } } }
I20250905 08:25:27.881508  6766 catalog_manager.cc:5095] ChangeConfig:REMOVE_PEER RPC for tablet f0e558aafe044a9a9f3334d3a81ce0da with cas_config_opid_index 207: ChangeConfig:REMOVE_PEER succeeded (attempt 1)
I20250905 08:25:27.884199  6781 catalog_manager.cc:5582] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296 reported cstate change: config changed from index 207 to 208, VOTER d6b4974606744830b0cac4bdf3ce0372 (127.0.106.130) evicted. New cstate: current_term: 2 leader_uuid: "2fecb70240c04dc0990ccc827917c296" committed_config { opid_index: 208 OBSOLETE_local: false peers { permanent_uuid: "2fecb70240c04dc0990ccc827917c296" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 38999 } health_report { overall_health: HEALTHY } } }
I20250905 08:25:27.946502  7038 tablet_service.cc:1515] Processing DeleteTablet for tablet f0e558aafe044a9a9f3334d3a81ce0da with delete_type TABLET_DATA_TOMBSTONED (TS d6b4974606744830b0cac4bdf3ce0372 not found in new config with opid_index 208) from {username='slave'} at 127.0.0.1:32852
I20250905 08:25:27.959753  7174 tablet_service.cc:1515] Processing DeleteTablet for tablet f0e558aafe044a9a9f3334d3a81ce0da with delete_type TABLET_DATA_TOMBSTONED (TS 9ff1e0785190406d8a41f92f2fb869fb not found in new config with opid_index 207) from {username='slave'} at 127.0.0.1:49766
I20250905 08:25:27.965442  7281 tablet_replica.cc:331] stopping tablet replica
I20250905 08:25:27.966907  7281 raft_consensus.cc:2241] T f0e558aafe044a9a9f3334d3a81ce0da P d6b4974606744830b0cac4bdf3ce0372 [term 2 FOLLOWER]: Raft consensus shutting down.
I20250905 08:25:27.967473  7281 raft_consensus.cc:2270] T f0e558aafe044a9a9f3334d3a81ce0da P d6b4974606744830b0cac4bdf3ce0372 [term 2 FOLLOWER]: Raft consensus is shut down!
W20250905 08:25:27.967665  6765 catalog_manager.cc:4908] TS 9ff1e0785190406d8a41f92f2fb869fb (127.0.106.131:34783): delete failed for tablet f0e558aafe044a9a9f3334d3a81ce0da because tablet deleting was already in progress. No further retry: Already present: State transition of tablet f0e558aafe044a9a9f3334d3a81ce0da already in progress: opening tablet
I20250905 08:25:27.996816  7281 ts_tablet_manager.cc:1905] T f0e558aafe044a9a9f3334d3a81ce0da P d6b4974606744830b0cac4bdf3ce0372: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20250905 08:25:28.010227  7281 ts_tablet_manager.cc:1918] T f0e558aafe044a9a9f3334d3a81ce0da P d6b4974606744830b0cac4bdf3ce0372: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 2.207
I20250905 08:25:28.010700  7281 log.cc:1199] T f0e558aafe044a9a9f3334d3a81ce0da P d6b4974606744830b0cac4bdf3ce0372: Deleting WAL directory at /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-1/wal/wals/f0e558aafe044a9a9f3334d3a81ce0da
I20250905 08:25:28.012295  6765 catalog_manager.cc:4928] TS d6b4974606744830b0cac4bdf3ce0372 (127.0.106.130:36861): tablet f0e558aafe044a9a9f3334d3a81ce0da (table pre_rebuild [id=70247959484d4acd8cdc84064aeb8bc0]) successfully deleted
I20250905 08:25:28.524927  7132 tablet_bootstrap.cc:492] T f0e558aafe044a9a9f3334d3a81ce0da P 9ff1e0785190406d8a41f92f2fb869fb: Bootstrap replayed 1/1 log segments. Stats: ops{read=205 overwritten=0 applied=205 ignored=0} inserts{seen=10200 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250905 08:25:28.525702  7132 tablet_bootstrap.cc:492] T f0e558aafe044a9a9f3334d3a81ce0da P 9ff1e0785190406d8a41f92f2fb869fb: Bootstrap complete.
I20250905 08:25:28.527484  7132 ts_tablet_manager.cc:1397] T f0e558aafe044a9a9f3334d3a81ce0da P 9ff1e0785190406d8a41f92f2fb869fb: Time spent bootstrapping tablet: real 2.502s	user 2.384s	sys 0.064s
I20250905 08:25:28.534626  7132 raft_consensus.cc:357] T f0e558aafe044a9a9f3334d3a81ce0da P 9ff1e0785190406d8a41f92f2fb869fb [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "9ff1e0785190406d8a41f92f2fb869fb" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 34783 } } peers { permanent_uuid: "d6b4974606744830b0cac4bdf3ce0372" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36861 } } peers { permanent_uuid: "2fecb70240c04dc0990ccc827917c296" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 38999 } }
I20250905 08:25:28.537732  7132 raft_consensus.cc:738] T f0e558aafe044a9a9f3334d3a81ce0da P 9ff1e0785190406d8a41f92f2fb869fb [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 9ff1e0785190406d8a41f92f2fb869fb, State: Initialized, Role: FOLLOWER
I20250905 08:25:28.538470  7132 consensus_queue.cc:260] T f0e558aafe044a9a9f3334d3a81ce0da P 9ff1e0785190406d8a41f92f2fb869fb [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 205, Last appended: 1.205, Last appended by leader: 205, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "9ff1e0785190406d8a41f92f2fb869fb" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 34783 } } peers { permanent_uuid: "d6b4974606744830b0cac4bdf3ce0372" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36861 } } peers { permanent_uuid: "2fecb70240c04dc0990ccc827917c296" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 38999 } }
I20250905 08:25:28.544454  7132 ts_tablet_manager.cc:1428] T f0e558aafe044a9a9f3334d3a81ce0da P 9ff1e0785190406d8a41f92f2fb869fb: Time spent starting tablet: real 0.017s	user 0.011s	sys 0.004s
I20250905 08:25:28.548667  7174 tablet_service.cc:1515] Processing DeleteTablet for tablet f0e558aafe044a9a9f3334d3a81ce0da with delete_type TABLET_DATA_TOMBSTONED (Replica has no consensus available (current committed config index is 208)) from {username='slave'} at 127.0.0.1:49766
I20250905 08:25:28.557246  7298 tablet_replica.cc:331] stopping tablet replica
I20250905 08:25:28.557915  7298 raft_consensus.cc:2241] T f0e558aafe044a9a9f3334d3a81ce0da P 9ff1e0785190406d8a41f92f2fb869fb [term 1 FOLLOWER]: Raft consensus shutting down.
I20250905 08:25:28.558367  7298 raft_consensus.cc:2270] T f0e558aafe044a9a9f3334d3a81ce0da P 9ff1e0785190406d8a41f92f2fb869fb [term 1 FOLLOWER]: Raft consensus is shut down!
I20250905 08:25:28.584482  7298 ts_tablet_manager.cc:1905] T f0e558aafe044a9a9f3334d3a81ce0da P 9ff1e0785190406d8a41f92f2fb869fb: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20250905 08:25:28.599951  7298 ts_tablet_manager.cc:1918] T f0e558aafe044a9a9f3334d3a81ce0da P 9ff1e0785190406d8a41f92f2fb869fb: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 1.205
I20250905 08:25:28.600414  7298 log.cc:1199] T f0e558aafe044a9a9f3334d3a81ce0da P 9ff1e0785190406d8a41f92f2fb869fb: Deleting WAL directory at /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-2/wal/wals/f0e558aafe044a9a9f3334d3a81ce0da
I20250905 08:25:28.601648  6765 catalog_manager.cc:4928] TS 9ff1e0785190406d8a41f92f2fb869fb (127.0.106.131:34783): tablet f0e558aafe044a9a9f3334d3a81ce0da (table pre_rebuild [id=70247959484d4acd8cdc84064aeb8bc0]) successfully deleted
I20250905 08:25:28.640313  6903 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250905 08:25:28.665726  7174 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250905 08:25:28.677901  7038 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
Master Summary
               UUID               |       Address       | Status
----------------------------------+---------------------+---------
 9b4fd55bb34c4b0e8400c9dcc5da2dae | 127.0.106.190:37837 | HEALTHY

Unusual flags for Master:
               Flag               |                                                                                     Value                                                                                      |      Tags       |         Master
----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
 ipki_ca_key_size                 | 768                                                                                                                                                                            | experimental    | all 1 server(s) checked
 ipki_server_key_size             | 768                                                                                                                                                                            | experimental    | all 1 server(s) checked
 never_fsync                      | true                                                                                                                                                                           | unsafe,advanced | all 1 server(s) checked
 openssl_security_level_override  | 0                                                                                                                                                                              | unsafe,hidden   | all 1 server(s) checked
 rpc_reuseport                    | true                                                                                                                                                                           | experimental    | all 1 server(s) checked
 rpc_server_allow_ephemeral_ports | true                                                                                                                                                                           | unsafe          | all 1 server(s) checked
 server_dump_info_format          | pb                                                                                                                                                                             | hidden          | all 1 server(s) checked
 server_dump_info_path            | /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/master-0/data/info.pb | hidden          | all 1 server(s) checked
 tsk_num_rsa_bits                 | 512                                                                                                                                                                            | experimental    | all 1 server(s) checked

Flags of checked categories for Master:
        Flag         |        Value        |         Master
---------------------+---------------------+-------------------------
 builtin_ntp_servers | 127.0.106.148:45433 | all 1 server(s) checked
 time_source         | builtin             | all 1 server(s) checked

Tablet Server Summary
               UUID               |       Address       | Status  | Location | Tablet Leaders | Active Scanners
----------------------------------+---------------------+---------+----------+----------------+-----------------
 2fecb70240c04dc0990ccc827917c296 | 127.0.106.129:38999 | HEALTHY | <none>   |       1        |       0
 9ff1e0785190406d8a41f92f2fb869fb | 127.0.106.131:34783 | HEALTHY | <none>   |       0        |       0
 d6b4974606744830b0cac4bdf3ce0372 | 127.0.106.130:36861 | HEALTHY | <none>   |       0        |       0

Tablet Server Location Summary
 Location |  Count
----------+---------
 <none>   |       3

Unusual flags for Tablet Server:
               Flag               |                                                                                   Value                                                                                    |      Tags       |      Tablet Server
----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
 ipki_server_key_size             | 768                                                                                                                                                                        | experimental    | all 3 server(s) checked
 local_ip_for_outbound_sockets    | 127.0.106.129                                                                                                                                                              | experimental    | 127.0.106.129:38999
 local_ip_for_outbound_sockets    | 127.0.106.130                                                                                                                                                              | experimental    | 127.0.106.130:36861
 local_ip_for_outbound_sockets    | 127.0.106.131                                                                                                                                                              | experimental    | 127.0.106.131:34783
 never_fsync                      | true                                                                                                                                                                       | unsafe,advanced | all 3 server(s) checked
 openssl_security_level_override  | 0                                                                                                                                                                          | unsafe,hidden   | all 3 server(s) checked
 rpc_server_allow_ephemeral_ports | true                                                                                                                                                                       | unsafe          | all 3 server(s) checked
 server_dump_info_format          | pb                                                                                                                                                                         | hidden          | all 3 server(s) checked
 server_dump_info_path            | /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-0/data/info.pb | hidden          | 127.0.106.129:38999
 server_dump_info_path            | /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-1/data/info.pb | hidden          | 127.0.106.130:36861
 server_dump_info_path            | /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-2/data/info.pb | hidden          | 127.0.106.131:34783

Flags of checked categories for Tablet Server:
        Flag         |        Value        |      Tablet Server
---------------------+---------------------+-------------------------
 builtin_ntp_servers | 127.0.106.148:45433 | all 3 server(s) checked
 time_source         | builtin             | all 3 server(s) checked

Version Summary
     Version     |         Servers
-----------------+-------------------------
 1.19.0-SNAPSHOT | all 4 server(s) checked

Tablet Summary
The cluster doesn't have any matching system tables
Summary by table
    Name     | RF | Status  | Total Tablets | Healthy | Recovering | Under-replicated | Unavailable
-------------+----+---------+---------------+---------+------------+------------------+-------------
 pre_rebuild | 1  | HEALTHY | 1             | 1       | 0          | 0                | 0

Tablet Replica Count Summary
   Statistic    | Replica Count
----------------+---------------
 Minimum        | 0
 First Quartile | 0
 Median         | 0
 Third Quartile | 1
 Maximum        | 1

Total Count Summary
                | Total Count
----------------+-------------
 Masters        | 1
 Tablet Servers | 3
 Tables         | 1
 Tablets        | 1
 Replicas       | 1

==================
Warnings:
==================
Some masters have unsafe, experimental, or hidden flags set
Some tablet servers have unsafe, experimental, or hidden flags set

OK
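
The health report ending in "OK" above matches the summary format of Kudu's ksck cluster check, run here against the single master at 127.0.106.190:37837. A minimal sketch of reproducing an equivalent check outside the test harness (assumption: the report was produced via the kudu cluster ksck CLI subcommand and a kudu binary is available on PATH):

import subprocess

# Master RPC address taken from the Master Summary above.
MASTER_ADDR = "127.0.106.190:37837"

# "kudu cluster ksck" prints Master/Tablet Server/Tablet summaries like the
# report above and exits non-zero when the cluster is unhealthy.
result = subprocess.run(["kudu", "cluster", "ksck", MASTER_ADDR],
                        capture_output=True, text=True, check=False)
print(result.stdout)
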
I20250905 08:25:28.940515   426 log_verifier.cc:126] Checking tablet f0e558aafe044a9a9f3334d3a81ce0da
I20250905 08:25:29.189438   426 log_verifier.cc:177] Verified matching terms for 208 ops in tablet f0e558aafe044a9a9f3334d3a81ce0da
I20250905 08:25:29.192147  6780 catalog_manager.cc:2482] Servicing SoftDeleteTable request from {username='slave'} at 127.0.0.1:43446:
table { table_name: "pre_rebuild" } modify_external_catalogs: true
I20250905 08:25:29.192734  6780 catalog_manager.cc:2730] Servicing DeleteTable request from {username='slave'} at 127.0.0.1:43446:
table { table_name: "pre_rebuild" } modify_external_catalogs: true
I20250905 08:25:29.205760  6780 catalog_manager.cc:5869] T 00000000000000000000000000000000 P 9b4fd55bb34c4b0e8400c9dcc5da2dae: Sending DeleteTablet for 1 replicas of tablet f0e558aafe044a9a9f3334d3a81ce0da
I20250905 08:25:29.207484   426 test_util.cc:276] Using random seed: -1801651552
I20250905 08:25:29.208205  6903 tablet_service.cc:1515] Processing DeleteTablet for tablet f0e558aafe044a9a9f3334d3a81ce0da with delete_type TABLET_DATA_DELETED (Table deleted at 2025-09-05 08:25:29 UTC) from {username='slave'} at 127.0.0.1:44604
I20250905 08:25:29.216502  7317 tablet_replica.cc:331] stopping tablet replica
I20250905 08:25:29.217605  7317 raft_consensus.cc:2241] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296 [term 2 LEADER]: Raft consensus shutting down.
I20250905 08:25:29.218762  7317 raft_consensus.cc:2270] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296 [term 2 FOLLOWER]: Raft consensus is shut down!
I20250905 08:25:29.251047  6780 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:52736:
name: "post_rebuild"
schema {
  columns {
    name: "key"
    type: INT32
    is_key: true
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "int_val"
    type: INT32
    is_key: false
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "string_val"
    type: STRING
    is_key: false
    is_nullable: true
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
  range_schema {
    columns {
      name: "key"
    }
  }
}
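
The CreateTable request logged above (key INT32 primary key, int_val INT32 NOT NULL, string_val nullable STRING, num_replicas: 3, range-partitioned on "key") could be issued from a client roughly as follows. This is only a sketch using the kudu-python client; the test itself drives the request through its own C++ client, and the master host/port below are copied from the minicluster master seen earlier in this log:

import kudu
from kudu.client import Partitioning

# Connect to the single minicluster master shown in the log.
client = kudu.connect(host="127.0.106.190", port=37837)

# Schema mirroring the logged request: key INT32 PK, int_val INT32 NOT NULL,
# string_val STRING NULLABLE.
builder = kudu.schema_builder()
builder.add_column("key").type(kudu.int32).nullable(False).primary_key()
builder.add_column("int_val", type_=kudu.int32, nullable=False)
builder.add_column("string_val", type_=kudu.string, nullable=True)
schema = builder.build()

# Unbounded range partition on "key", matching partition_schema above.
partitioning = Partitioning().set_range_partition_columns(["key"])

# The request sets num_replicas: 3; assumption: the Python client exposes
# this as the n_replicas argument of create_table().
client.create_table("post_rebuild", schema, partitioning, n_replicas=3)
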
W20250905 08:25:29.254329  6780 catalog_manager.cc:6944] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table post_rebuild in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250905 08:25:29.267423  7317 ts_tablet_manager.cc:1905] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296: Deleting tablet data with delete state TABLET_DATA_DELETED
I20250905 08:25:29.278825  7038 tablet_service.cc:1468] Processing CreateTablet for tablet 6fc140a8b3da4b0f8d54bd86eb6a7cca (DEFAULT_TABLE table=post_rebuild [id=eb7ecf80293d4ecc95daae483478fe06]), partition=RANGE (key) PARTITION UNBOUNDED
I20250905 08:25:29.280139  7038 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 6fc140a8b3da4b0f8d54bd86eb6a7cca. 1 dirs total, 0 dirs full, 0 dirs failed
I20250905 08:25:29.283955  7317 ts_tablet_manager.cc:1918] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296: tablet deleted with delete type TABLET_DATA_DELETED: last-logged OpId 2.208
I20250905 08:25:29.284411  7317 log.cc:1199] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296: Deleting WAL directory at /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-0/wal/wals/f0e558aafe044a9a9f3334d3a81ce0da
I20250905 08:25:29.285559  7317 ts_tablet_manager.cc:1939] T f0e558aafe044a9a9f3334d3a81ce0da P 2fecb70240c04dc0990ccc827917c296: Deleting consensus metadata
I20250905 08:25:29.288591  6766 catalog_manager.cc:4928] TS 2fecb70240c04dc0990ccc827917c296 (127.0.106.129:38999): tablet f0e558aafe044a9a9f3334d3a81ce0da (table pre_rebuild [id=70247959484d4acd8cdc84064aeb8bc0]) successfully deleted
I20250905 08:25:29.290897  7174 tablet_service.cc:1468] Processing CreateTablet for tablet 6fc140a8b3da4b0f8d54bd86eb6a7cca (DEFAULT_TABLE table=post_rebuild [id=eb7ecf80293d4ecc95daae483478fe06]), partition=RANGE (key) PARTITION UNBOUNDED
I20250905 08:25:29.290899  6903 tablet_service.cc:1468] Processing CreateTablet for tablet 6fc140a8b3da4b0f8d54bd86eb6a7cca (DEFAULT_TABLE table=post_rebuild [id=eb7ecf80293d4ecc95daae483478fe06]), partition=RANGE (key) PARTITION UNBOUNDED
I20250905 08:25:29.292169  7174 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 6fc140a8b3da4b0f8d54bd86eb6a7cca. 1 dirs total, 0 dirs full, 0 dirs failed
I20250905 08:25:29.292306  6903 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 6fc140a8b3da4b0f8d54bd86eb6a7cca. 1 dirs total, 0 dirs full, 0 dirs failed
I20250905 08:25:29.313411  7325 tablet_bootstrap.cc:492] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P 9ff1e0785190406d8a41f92f2fb869fb: Bootstrap starting.
I20250905 08:25:29.315570  7324 tablet_bootstrap.cc:492] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P 2fecb70240c04dc0990ccc827917c296: Bootstrap starting.
I20250905 08:25:29.321126  7326 tablet_bootstrap.cc:492] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P d6b4974606744830b0cac4bdf3ce0372: Bootstrap starting.
I20250905 08:25:29.322481  7325 tablet_bootstrap.cc:654] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P 9ff1e0785190406d8a41f92f2fb869fb: Neither blocks nor log segments found. Creating new log.
I20250905 08:25:29.327237  7324 tablet_bootstrap.cc:654] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P 2fecb70240c04dc0990ccc827917c296: Neither blocks nor log segments found. Creating new log.
I20250905 08:25:29.328116  7326 tablet_bootstrap.cc:654] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P d6b4974606744830b0cac4bdf3ce0372: Neither blocks nor log segments found. Creating new log.
I20250905 08:25:29.337335  7325 tablet_bootstrap.cc:492] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P 9ff1e0785190406d8a41f92f2fb869fb: No bootstrap required, opened a new log
I20250905 08:25:29.337729  7325 ts_tablet_manager.cc:1397] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P 9ff1e0785190406d8a41f92f2fb869fb: Time spent bootstrapping tablet: real 0.025s	user 0.009s	sys 0.005s
I20250905 08:25:29.340116  7325 raft_consensus.cc:357] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P 9ff1e0785190406d8a41f92f2fb869fb [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d6b4974606744830b0cac4bdf3ce0372" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36861 } } peers { permanent_uuid: "2fecb70240c04dc0990ccc827917c296" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 38999 } } peers { permanent_uuid: "9ff1e0785190406d8a41f92f2fb869fb" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 34783 } }
I20250905 08:25:29.340749  7325 raft_consensus.cc:383] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P 9ff1e0785190406d8a41f92f2fb869fb [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250905 08:25:29.340991  7325 raft_consensus.cc:738] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P 9ff1e0785190406d8a41f92f2fb869fb [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 9ff1e0785190406d8a41f92f2fb869fb, State: Initialized, Role: FOLLOWER
I20250905 08:25:29.341625  7325 consensus_queue.cc:260] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P 9ff1e0785190406d8a41f92f2fb869fb [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d6b4974606744830b0cac4bdf3ce0372" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36861 } } peers { permanent_uuid: "2fecb70240c04dc0990ccc827917c296" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 38999 } } peers { permanent_uuid: "9ff1e0785190406d8a41f92f2fb869fb" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 34783 } }
I20250905 08:25:29.344745  7326 tablet_bootstrap.cc:492] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P d6b4974606744830b0cac4bdf3ce0372: No bootstrap required, opened a new log
I20250905 08:25:29.345129  7326 ts_tablet_manager.cc:1397] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P d6b4974606744830b0cac4bdf3ce0372: Time spent bootstrapping tablet: real 0.024s	user 0.004s	sys 0.008s
I20250905 08:25:29.347460  7326 raft_consensus.cc:357] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P d6b4974606744830b0cac4bdf3ce0372 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d6b4974606744830b0cac4bdf3ce0372" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36861 } } peers { permanent_uuid: "2fecb70240c04dc0990ccc827917c296" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 38999 } } peers { permanent_uuid: "9ff1e0785190406d8a41f92f2fb869fb" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 34783 } }
I20250905 08:25:29.348073  7326 raft_consensus.cc:383] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P d6b4974606744830b0cac4bdf3ce0372 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250905 08:25:29.348325  7326 raft_consensus.cc:738] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P d6b4974606744830b0cac4bdf3ce0372 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: d6b4974606744830b0cac4bdf3ce0372, State: Initialized, Role: FOLLOWER
I20250905 08:25:29.349035  7326 consensus_queue.cc:260] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P d6b4974606744830b0cac4bdf3ce0372 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d6b4974606744830b0cac4bdf3ce0372" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36861 } } peers { permanent_uuid: "2fecb70240c04dc0990ccc827917c296" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 38999 } } peers { permanent_uuid: "9ff1e0785190406d8a41f92f2fb869fb" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 34783 } }
I20250905 08:25:29.351534  7325 ts_tablet_manager.cc:1428] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P 9ff1e0785190406d8a41f92f2fb869fb: Time spent starting tablet: real 0.014s	user 0.006s	sys 0.000s
I20250905 08:25:29.360102  7324 tablet_bootstrap.cc:492] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P 2fecb70240c04dc0990ccc827917c296: No bootstrap required, opened a new log
I20250905 08:25:29.360500  7324 ts_tablet_manager.cc:1397] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P 2fecb70240c04dc0990ccc827917c296: Time spent bootstrapping tablet: real 0.045s	user 0.004s	sys 0.018s
I20250905 08:25:29.360999  7330 raft_consensus.cc:491] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P 9ff1e0785190406d8a41f92f2fb869fb [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250905 08:25:29.361819  7326 ts_tablet_manager.cc:1428] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P d6b4974606744830b0cac4bdf3ce0372: Time spent starting tablet: real 0.016s	user 0.008s	sys 0.005s
I20250905 08:25:29.361438  7330 raft_consensus.cc:513] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P 9ff1e0785190406d8a41f92f2fb869fb [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d6b4974606744830b0cac4bdf3ce0372" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36861 } } peers { permanent_uuid: "2fecb70240c04dc0990ccc827917c296" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 38999 } } peers { permanent_uuid: "9ff1e0785190406d8a41f92f2fb869fb" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 34783 } }
I20250905 08:25:29.368618  7330 leader_election.cc:290] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P 9ff1e0785190406d8a41f92f2fb869fb [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers d6b4974606744830b0cac4bdf3ce0372 (127.0.106.130:36861), 2fecb70240c04dc0990ccc827917c296 (127.0.106.129:38999)
I20250905 08:25:29.371486  7324 raft_consensus.cc:357] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P 2fecb70240c04dc0990ccc827917c296 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d6b4974606744830b0cac4bdf3ce0372" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36861 } } peers { permanent_uuid: "2fecb70240c04dc0990ccc827917c296" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 38999 } } peers { permanent_uuid: "9ff1e0785190406d8a41f92f2fb869fb" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 34783 } }
I20250905 08:25:29.372260  7324 raft_consensus.cc:383] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P 2fecb70240c04dc0990ccc827917c296 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250905 08:25:29.372574  7324 raft_consensus.cc:738] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P 2fecb70240c04dc0990ccc827917c296 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 2fecb70240c04dc0990ccc827917c296, State: Initialized, Role: FOLLOWER
I20250905 08:25:29.373327  7324 consensus_queue.cc:260] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P 2fecb70240c04dc0990ccc827917c296 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d6b4974606744830b0cac4bdf3ce0372" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36861 } } peers { permanent_uuid: "2fecb70240c04dc0990ccc827917c296" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 38999 } } peers { permanent_uuid: "9ff1e0785190406d8a41f92f2fb869fb" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 34783 } }
I20250905 08:25:29.379276  7058 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "6fc140a8b3da4b0f8d54bd86eb6a7cca" candidate_uuid: "9ff1e0785190406d8a41f92f2fb869fb" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "d6b4974606744830b0cac4bdf3ce0372" is_pre_election: true
I20250905 08:25:29.379894  7058 raft_consensus.cc:2466] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P d6b4974606744830b0cac4bdf3ce0372 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 9ff1e0785190406d8a41f92f2fb869fb in term 0.
I20250905 08:25:29.381163  7126 leader_election.cc:304] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P 9ff1e0785190406d8a41f92f2fb869fb [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 9ff1e0785190406d8a41f92f2fb869fb, d6b4974606744830b0cac4bdf3ce0372; no voters: 
I20250905 08:25:29.382062  7330 raft_consensus.cc:2802] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P 9ff1e0785190406d8a41f92f2fb869fb [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250905 08:25:29.382447  7330 raft_consensus.cc:491] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P 9ff1e0785190406d8a41f92f2fb869fb [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250905 08:25:29.382819  7330 raft_consensus.cc:3058] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P 9ff1e0785190406d8a41f92f2fb869fb [term 0 FOLLOWER]: Advancing to term 1
I20250905 08:25:29.389421  7330 raft_consensus.cc:513] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P 9ff1e0785190406d8a41f92f2fb869fb [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d6b4974606744830b0cac4bdf3ce0372" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36861 } } peers { permanent_uuid: "2fecb70240c04dc0990ccc827917c296" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 38999 } } peers { permanent_uuid: "9ff1e0785190406d8a41f92f2fb869fb" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 34783 } }
I20250905 08:25:29.391911  7058 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "6fc140a8b3da4b0f8d54bd86eb6a7cca" candidate_uuid: "9ff1e0785190406d8a41f92f2fb869fb" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "d6b4974606744830b0cac4bdf3ce0372"
I20250905 08:25:29.392369  7058 raft_consensus.cc:3058] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P d6b4974606744830b0cac4bdf3ce0372 [term 0 FOLLOWER]: Advancing to term 1
I20250905 08:25:29.393554  7324 ts_tablet_manager.cc:1428] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P 2fecb70240c04dc0990ccc827917c296: Time spent starting tablet: real 0.033s	user 0.010s	sys 0.011s
I20250905 08:25:29.395928  6923 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "6fc140a8b3da4b0f8d54bd86eb6a7cca" candidate_uuid: "9ff1e0785190406d8a41f92f2fb869fb" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "2fecb70240c04dc0990ccc827917c296" is_pre_election: true
I20250905 08:25:29.396742  6923 raft_consensus.cc:2466] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P 2fecb70240c04dc0990ccc827917c296 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 9ff1e0785190406d8a41f92f2fb869fb in term 0.
I20250905 08:25:29.398447  7330 leader_election.cc:290] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P 9ff1e0785190406d8a41f92f2fb869fb [CANDIDATE]: Term 1 election: Requested vote from peers d6b4974606744830b0cac4bdf3ce0372 (127.0.106.130:36861), 2fecb70240c04dc0990ccc827917c296 (127.0.106.129:38999)
I20250905 08:25:29.398679  7058 raft_consensus.cc:2466] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P d6b4974606744830b0cac4bdf3ce0372 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 9ff1e0785190406d8a41f92f2fb869fb in term 1.
I20250905 08:25:29.399581  7126 leader_election.cc:304] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P 9ff1e0785190406d8a41f92f2fb869fb [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 9ff1e0785190406d8a41f92f2fb869fb, d6b4974606744830b0cac4bdf3ce0372; no voters: 
I20250905 08:25:29.400027  6923 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "6fc140a8b3da4b0f8d54bd86eb6a7cca" candidate_uuid: "9ff1e0785190406d8a41f92f2fb869fb" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "2fecb70240c04dc0990ccc827917c296"
I20250905 08:25:29.400276  7330 raft_consensus.cc:2802] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P 9ff1e0785190406d8a41f92f2fb869fb [term 1 FOLLOWER]: Leader election won for term 1
I20250905 08:25:29.400444  6923 raft_consensus.cc:3058] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P 2fecb70240c04dc0990ccc827917c296 [term 0 FOLLOWER]: Advancing to term 1
I20250905 08:25:29.405028  7330 raft_consensus.cc:695] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P 9ff1e0785190406d8a41f92f2fb869fb [term 1 LEADER]: Becoming Leader. State: Replica: 9ff1e0785190406d8a41f92f2fb869fb, State: Running, Role: LEADER
I20250905 08:25:29.405926  7330 consensus_queue.cc:237] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P 9ff1e0785190406d8a41f92f2fb869fb [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d6b4974606744830b0cac4bdf3ce0372" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36861 } } peers { permanent_uuid: "2fecb70240c04dc0990ccc827917c296" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 38999 } } peers { permanent_uuid: "9ff1e0785190406d8a41f92f2fb869fb" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 34783 } }
I20250905 08:25:29.406445  6923 raft_consensus.cc:2466] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P 2fecb70240c04dc0990ccc827917c296 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 9ff1e0785190406d8a41f92f2fb869fb in term 1.
I20250905 08:25:29.416883  6781 catalog_manager.cc:5582] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P 9ff1e0785190406d8a41f92f2fb869fb reported cstate change: term changed from 0 to 1, leader changed from <none> to 9ff1e0785190406d8a41f92f2fb869fb (127.0.106.131). New cstate: current_term: 1 leader_uuid: "9ff1e0785190406d8a41f92f2fb869fb" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d6b4974606744830b0cac4bdf3ce0372" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36861 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "2fecb70240c04dc0990ccc827917c296" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 38999 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "9ff1e0785190406d8a41f92f2fb869fb" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 34783 } health_report { overall_health: HEALTHY } } }
W20250905 08:25:29.474004  6969 tablet.cc:2378] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P 2fecb70240c04dc0990ccc827917c296: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250905 08:25:29.518842  7240 tablet.cc:2378] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P 9ff1e0785190406d8a41f92f2fb869fb: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250905 08:25:29.584447  7104 tablet.cc:2378] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P d6b4974606744830b0cac4bdf3ce0372: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250905 08:25:29.622000  7058 raft_consensus.cc:1273] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P d6b4974606744830b0cac4bdf3ce0372 [term 1 FOLLOWER]: Refusing update from remote peer 9ff1e0785190406d8a41f92f2fb869fb: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250905 08:25:29.622036  6923 raft_consensus.cc:1273] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P 2fecb70240c04dc0990ccc827917c296 [term 1 FOLLOWER]: Refusing update from remote peer 9ff1e0785190406d8a41f92f2fb869fb: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250905 08:25:29.623462  7330 consensus_queue.cc:1035] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P 9ff1e0785190406d8a41f92f2fb869fb [LEADER]: Connected to new peer: Peer: permanent_uuid: "2fecb70240c04dc0990ccc827917c296" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 38999 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250905 08:25:29.624176  7337 consensus_queue.cc:1035] T 6fc140a8b3da4b0f8d54bd86eb6a7cca P 9ff1e0785190406d8a41f92f2fb869fb [LEADER]: Connected to new peer: Peer: permanent_uuid: "d6b4974606744830b0cac4bdf3ce0372" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36861 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250905 08:25:29.653337  7345 mvcc.cc:204] Tried to move back new op lower bound from 7196920748517191680 to 7196920747656617984. Current Snapshot: MvccSnapshot[applied={T|T < 7196920748517191680}]
I20250905 08:25:29.657904  7347 mvcc.cc:204] Tried to move back new op lower bound from 7196920748517191680 to 7196920747656617984. Current Snapshot: MvccSnapshot[applied={T|T < 7196920748517191680}]
I20250905 08:25:29.667657  7346 mvcc.cc:204] Tried to move back new op lower bound from 7196920748517191680 to 7196920747656617984. Current Snapshot: MvccSnapshot[applied={T|T < 7196920748517191680}]
W20250905 08:25:30.384569  7127 outbound_call.cc:321] RPC callback for RPC call kudu.consensus.ConsensusService.UpdateConsensus -> {remote=127.0.106.129:38999, user_credentials={real_user=slave}} blocked reactor thread for 44232.1us
W20250905 08:25:33.320667  7340 meta_cache.cc:1261] Time spent looking up entry by key: real 0.062s	user 0.000s	sys 0.000s
I20250905 08:25:34.147450  6903 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250905 08:25:34.150184  7174 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250905 08:25:34.157234  7038 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
Master Summary
               UUID               |       Address       | Status
----------------------------------+---------------------+---------
 9b4fd55bb34c4b0e8400c9dcc5da2dae | 127.0.106.190:37837 | HEALTHY

Unusual flags for Master:
               Flag               |                                                                                     Value                                                                                      |      Tags       |         Master
----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
 ipki_ca_key_size                 | 768                                                                                                                                                                            | experimental    | all 1 server(s) checked
 ipki_server_key_size             | 768                                                                                                                                                                            | experimental    | all 1 server(s) checked
 never_fsync                      | true                                                                                                                                                                           | unsafe,advanced | all 1 server(s) checked
 openssl_security_level_override  | 0                                                                                                                                                                              | unsafe,hidden   | all 1 server(s) checked
 rpc_reuseport                    | true                                                                                                                                                                           | experimental    | all 1 server(s) checked
 rpc_server_allow_ephemeral_ports | true                                                                                                                                                                           | unsafe          | all 1 server(s) checked
 server_dump_info_format          | pb                                                                                                                                                                             | hidden          | all 1 server(s) checked
 server_dump_info_path            | /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/master-0/data/info.pb | hidden          | all 1 server(s) checked
 tsk_num_rsa_bits                 | 512                                                                                                                                                                            | experimental    | all 1 server(s) checked

Flags of checked categories for Master:
        Flag         |        Value        |         Master
---------------------+---------------------+-------------------------
 builtin_ntp_servers | 127.0.106.148:45433 | all 1 server(s) checked
 time_source         | builtin             | all 1 server(s) checked

Tablet Server Summary
               UUID               |       Address       | Status  | Location | Tablet Leaders | Active Scanners
----------------------------------+---------------------+---------+----------+----------------+-----------------
 2fecb70240c04dc0990ccc827917c296 | 127.0.106.129:38999 | HEALTHY | <none>   |       0        |       0
 9ff1e0785190406d8a41f92f2fb869fb | 127.0.106.131:34783 | HEALTHY | <none>   |       1        |       0
 d6b4974606744830b0cac4bdf3ce0372 | 127.0.106.130:36861 | HEALTHY | <none>   |       0        |       0

Tablet Server Location Summary
 Location |  Count
----------+---------
 <none>   |       3

Unusual flags for Tablet Server:
               Flag               |                                                                                   Value                                                                                    |      Tags       |      Tablet Server
----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
 ipki_server_key_size             | 768                                                                                                                                                                        | experimental    | all 3 server(s) checked
 local_ip_for_outbound_sockets    | 127.0.106.129                                                                                                                                                              | experimental    | 127.0.106.129:38999
 local_ip_for_outbound_sockets    | 127.0.106.130                                                                                                                                                              | experimental    | 127.0.106.130:36861
 local_ip_for_outbound_sockets    | 127.0.106.131                                                                                                                                                              | experimental    | 127.0.106.131:34783
 never_fsync                      | true                                                                                                                                                                       | unsafe,advanced | all 3 server(s) checked
 openssl_security_level_override  | 0                                                                                                                                                                          | unsafe,hidden   | all 3 server(s) checked
 rpc_server_allow_ephemeral_ports | true                                                                                                                                                                       | unsafe          | all 3 server(s) checked
 server_dump_info_format          | pb                                                                                                                                                                         | hidden          | all 3 server(s) checked
 server_dump_info_path            | /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-0/data/info.pb | hidden          | 127.0.106.129:38999
 server_dump_info_path            | /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-1/data/info.pb | hidden          | 127.0.106.130:36861
 server_dump_info_path            | /tmp/dist-test-task9wMJuX/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1757060547998030-426-0/minicluster-data/ts-2/data/info.pb | hidden          | 127.0.106.131:34783

Flags of checked categories for Tablet Server:
        Flag         |        Value        |      Tablet Server
---------------------+---------------------+-------------------------
 builtin_ntp_servers | 127.0.106.148:45433 | all 3 server(s) checked
 time_source         | builtin             | all 3 server(s) checked

Version Summary
     Version     |         Servers
-----------------+-------------------------
 1.19.0-SNAPSHOT | all 4 server(s) checked

Tablet Summary
The cluster doesn't have any matching system tables
Summary by table
     Name     | RF | Status  | Total Tablets | Healthy | Recovering | Under-replicated | Unavailable
--------------+----+---------+---------------+---------+------------+------------------+-------------
 post_rebuild | 3  | HEALTHY | 1             | 1       | 0          | 0                | 0

Tablet Replica Count Summary
   Statistic    | Replica Count
----------------+---------------
 Minimum        | 1
 First Quartile | 1
 Median         | 1
 Third Quartile | 1
 Maximum        | 1

Total Count Summary
                | Total Count
----------------+-------------
 Masters        | 1
 Tablet Servers | 3
 Tables         | 1
 Tablets        | 1
 Replicas       | 3

==================
Warnings:
==================
Some masters have unsafe, experimental, or hidden flags set
Some tablet servers have unsafe, experimental, or hidden flags set

OK
I20250905 08:25:34.385111   426 log_verifier.cc:126] Checking tablet 6fc140a8b3da4b0f8d54bd86eb6a7cca
W20250905 08:25:34.770102  7099 debug-util.cc:398] Leaking SignalData structure 0x7b08000af3a0 after lost signal to thread 6974
I20250905 08:25:35.105470   426 log_verifier.cc:177] Verified matching terms for 205 ops in tablet 6fc140a8b3da4b0f8d54bd86eb6a7cca
I20250905 08:25:35.106274   426 log_verifier.cc:126] Checking tablet f0e558aafe044a9a9f3334d3a81ce0da
I20250905 08:25:35.106562   426 log_verifier.cc:177] Verified matching terms for 0 ops in tablet f0e558aafe044a9a9f3334d3a81ce0da
I20250905 08:25:35.131902   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 6818
I20250905 08:25:35.174060   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 6972
I20250905 08:25:35.211719   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 7107
I20250905 08:25:35.251437   426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-task9wMJuX/build/tsan/bin/kudu with pid 6747
2025-09-05T08:25:35Z chronyd exiting
[       OK ] IsSecure/SecureClusterAdminCliParamTest.TestRebuildMaster/0 (35293 ms)
[----------] 1 test from IsSecure/SecureClusterAdminCliParamTest (35293 ms total)

[----------] Global test environment tear-down
[==========] 9 tests from 5 test suites ran. (187247 ms total)
[  PASSED  ] 8 tests.
[  FAILED  ] 1 test, listed below:
[  FAILED  ] AdminCliTest.TestRebuildTables

 1 FAILED TEST