Note: This is test shard 6 of 8.
[==========] Running 9 tests from 5 test suites.
[----------] Global test environment set-up.
[----------] 5 tests from AdminCliTest
[ RUN      ] AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes
WARNING: Logging before InitGoogleLogging() is written to STDERR
I20250809 19:57:24.668394 26098 test_util.cc:276] Using random seed: 426345344
W20250809 19:57:25.692576 26098 thread.cc:641] rpc reactor (reactor) Time spent creating pthread: real 0.987s user 0.374s sys 0.612s
W20250809 19:57:25.692868 26098 thread.cc:608] rpc reactor (reactor) Time spent starting thread: real 0.988s user 0.374s sys 0.612s
I20250809 19:57:25.707785 26098 ts_itest-base.cc:115] Starting cluster with:
I20250809 19:57:25.707938 26098 ts_itest-base.cc:116] --------------
I20250809 19:57:25.708087 26098 ts_itest-base.cc:117] 4 tablet servers
I20250809 19:57:25.708227 26098 ts_itest-base.cc:118] 3 replicas per TS
I20250809 19:57:25.708355 26098 ts_itest-base.cc:119] --------------
2025-08-09T19:57:25Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-08-09T19:57:25Z Disabled control of system clock
I20250809 19:57:25.740284 26098 external_mini_cluster.cc:1366] Running /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
/tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.25.124.190:40527
--webserver_interface=127.25.124.190
--webserver_port=0
--builtin_ntp_servers=127.25.124.148:37351
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.25.124.190:40527 with env {}
W20250809 19:57:25.993822 26112 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 19:57:25.994257 26112 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250809 19:57:25.994603 26112 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 19:57:26.019534 26112 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250809 19:57:26.019754 26112 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250809 19:57:26.019922 26112 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250809 19:57:26.020084 26112 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250809 19:57:26.047899 26112 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.25.124.148:37351
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.25.124.190:40527
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.25.124.190:40527
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.25.124.190
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 09 Aug 2025 19:43:21 UTC on 5fd53c4cbb9d
build id 7489
TSAN enabled
I20250809 19:57:26.048940 26112 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250809 19:57:26.050246 26112 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250809 19:57:26.059610 26118 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:57:27.462882 26117 debug-util.cc:398] Leaking SignalData structure 0x7b0800037cc0 after lost signal to thread 26112
W20250809 19:57:27.574267 26112 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.515s user 0.499s sys 1.015s
W20250809 19:57:27.574587 26112 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.515s user 0.499s sys 1.015s
W20250809 19:57:26.059929 26119 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:57:27.576545 26121 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:57:27.578580 26120 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1515 milliseconds
I20250809 19:57:27.578644 26112 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250809 19:57:27.579681 26112 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250809 19:57:27.581736 26112 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250809 19:57:27.583045 26112 hybrid_clock.cc:648] HybridClock initialized: now 1754769447583006 us; error 37 us; skew 500 ppm
I20250809 19:57:27.583716 26112 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250809 19:57:27.588711 26112 webserver.cc:489] Webserver started at http://127.25.124.190:36055/ using document root <none> and password file <none>
I20250809 19:57:27.589489 26112 fs_manager.cc:362] Metadata directory not provided
I20250809 19:57:27.589672 26112 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250809 19:57:27.590075 26112 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250809 19:57:27.593694 26112 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "d2080b6d0e634b60a14fa83483c1a4c0"
format_stamp: "Formatted at 2025-08-09 19:57:27 on dist-test-slave-xzln"
I20250809 19:57:27.594607 26112 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "d2080b6d0e634b60a14fa83483c1a4c0"
format_stamp: "Formatted at 2025-08-09 19:57:27 on dist-test-slave-xzln"
I20250809 19:57:27.600520 26112 fs_manager.cc:696] Time spent creating directory manager: real 0.005s user 0.006s sys 0.002s
I20250809 19:57:27.605079 26128 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250809 19:57:27.605924 26112 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.004s sys 0.000s
I20250809 19:57:27.606190 26112 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
uuid: "d2080b6d0e634b60a14fa83483c1a4c0"
format_stamp: "Formatted at 2025-08-09 19:57:27 on dist-test-slave-xzln"
I20250809 19:57:27.606460 26112 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250809 19:57:27.649559 26112 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250809 19:57:27.650748 26112 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250809 19:57:27.651103 26112 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250809 19:57:27.709412 26112 rpc_server.cc:307] RPC server started. Bound to: 127.25.124.190:40527
I20250809 19:57:27.709473 26179 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.25.124.190:40527 every 8 connection(s)
I20250809 19:57:27.711652 26112 server_base.cc:1179] Dumped server information to /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250809 19:57:27.715848 26180 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250809 19:57:27.722195 26098 external_mini_cluster.cc:1428] Started /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu as pid 26112
I20250809 19:57:27.722537 26098 external_mini_cluster.cc:1442] Reading /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250809 19:57:27.731595 26180 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P d2080b6d0e634b60a14fa83483c1a4c0: Bootstrap starting.
I20250809 19:57:27.735867 26180 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P d2080b6d0e634b60a14fa83483c1a4c0: Neither blocks nor log segments found. Creating new log.
I20250809 19:57:27.737208 26180 log.cc:826] T 00000000000000000000000000000000 P d2080b6d0e634b60a14fa83483c1a4c0: Log is configured to *not* fsync() on all Append() calls
I20250809 19:57:27.741055 26180 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P d2080b6d0e634b60a14fa83483c1a4c0: No bootstrap required, opened a new log
I20250809 19:57:27.756125 26180 raft_consensus.cc:357] T 00000000000000000000000000000000 P d2080b6d0e634b60a14fa83483c1a4c0 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d2080b6d0e634b60a14fa83483c1a4c0" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 40527 } }
I20250809 19:57:27.756696 26180 raft_consensus.cc:383] T 00000000000000000000000000000000 P d2080b6d0e634b60a14fa83483c1a4c0 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250809 19:57:27.756968 26180 raft_consensus.cc:738] T 00000000000000000000000000000000 P d2080b6d0e634b60a14fa83483c1a4c0 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: d2080b6d0e634b60a14fa83483c1a4c0, State: Initialized, Role: FOLLOWER
I20250809 19:57:27.757583 26180 consensus_queue.cc:260] T 00000000000000000000000000000000 P d2080b6d0e634b60a14fa83483c1a4c0 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d2080b6d0e634b60a14fa83483c1a4c0" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 40527 } }
I20250809 19:57:27.758018 26180 raft_consensus.cc:397] T 00000000000000000000000000000000 P d2080b6d0e634b60a14fa83483c1a4c0 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250809 19:57:27.758245 26180 raft_consensus.cc:491] T 00000000000000000000000000000000 P d2080b6d0e634b60a14fa83483c1a4c0 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250809 19:57:27.758548 26180 raft_consensus.cc:3058] T 00000000000000000000000000000000 P d2080b6d0e634b60a14fa83483c1a4c0 [term 0 FOLLOWER]: Advancing to term 1
I20250809 19:57:27.761785 26180 raft_consensus.cc:513] T 00000000000000000000000000000000 P d2080b6d0e634b60a14fa83483c1a4c0 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d2080b6d0e634b60a14fa83483c1a4c0" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 40527 } }
I20250809 19:57:27.762326 26180 leader_election.cc:304] T 00000000000000000000000000000000 P d2080b6d0e634b60a14fa83483c1a4c0 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: d2080b6d0e634b60a14fa83483c1a4c0; no voters:
I20250809 19:57:27.763799 26180 leader_election.cc:290] T 00000000000000000000000000000000 P d2080b6d0e634b60a14fa83483c1a4c0 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250809 19:57:27.764513 26185 raft_consensus.cc:2802] T 00000000000000000000000000000000 P d2080b6d0e634b60a14fa83483c1a4c0 [term 1 FOLLOWER]: Leader election won for term 1
I20250809 19:57:27.766320 26185 raft_consensus.cc:695] T 00000000000000000000000000000000 P d2080b6d0e634b60a14fa83483c1a4c0 [term 1 LEADER]: Becoming Leader. State: Replica: d2080b6d0e634b60a14fa83483c1a4c0, State: Running, Role: LEADER
I20250809 19:57:27.766929 26185 consensus_queue.cc:237] T 00000000000000000000000000000000 P d2080b6d0e634b60a14fa83483c1a4c0 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d2080b6d0e634b60a14fa83483c1a4c0" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 40527 } }
I20250809 19:57:27.767757 26180 sys_catalog.cc:564] T 00000000000000000000000000000000 P d2080b6d0e634b60a14fa83483c1a4c0 [sys.catalog]: configured and running, proceeding with master startup.
I20250809 19:57:27.773885 26186 sys_catalog.cc:455] T 00000000000000000000000000000000 P d2080b6d0e634b60a14fa83483c1a4c0 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "d2080b6d0e634b60a14fa83483c1a4c0" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d2080b6d0e634b60a14fa83483c1a4c0" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 40527 } } }
I20250809 19:57:27.774144 26187 sys_catalog.cc:455] T 00000000000000000000000000000000 P d2080b6d0e634b60a14fa83483c1a4c0 [sys.catalog]: SysCatalogTable state changed. Reason: New leader d2080b6d0e634b60a14fa83483c1a4c0. Latest consensus state: current_term: 1 leader_uuid: "d2080b6d0e634b60a14fa83483c1a4c0" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d2080b6d0e634b60a14fa83483c1a4c0" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 40527 } } }
I20250809 19:57:27.774443 26186 sys_catalog.cc:458] T 00000000000000000000000000000000 P d2080b6d0e634b60a14fa83483c1a4c0 [sys.catalog]: This master's current role is: LEADER
I20250809 19:57:27.774608 26187 sys_catalog.cc:458] T 00000000000000000000000000000000 P d2080b6d0e634b60a14fa83483c1a4c0 [sys.catalog]: This master's current role is: LEADER
I20250809 19:57:27.778937 26191 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250809 19:57:27.790539 26191 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250809 19:57:27.802414 26191 catalog_manager.cc:1349] Generated new cluster ID: 611c354b6fbf42a7bd014f69a4c07599
I20250809 19:57:27.802631 26191 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250809 19:57:27.815732 26191 catalog_manager.cc:1372] Generated new certificate authority record
I20250809 19:57:27.816807 26191 catalog_manager.cc:1506] Loading token signing keys...
I20250809 19:57:27.827936 26191 catalog_manager.cc:5955] T 00000000000000000000000000000000 P d2080b6d0e634b60a14fa83483c1a4c0: Generated new TSK 0
I20250809 19:57:27.828593 26191 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250809 19:57:27.844604 26098 external_mini_cluster.cc:1366] Running /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
/tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.25.124.129:0
--local_ip_for_outbound_sockets=127.25.124.129
--webserver_interface=127.25.124.129
--webserver_port=0
--tserver_master_addrs=127.25.124.190:40527
--builtin_ntp_servers=127.25.124.148:37351
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
W20250809 19:57:28.087833 26204 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 19:57:28.088210 26204 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250809 19:57:28.088621 26204 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 19:57:28.113726 26204 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250809 19:57:28.114377 26204 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.25.124.129
I20250809 19:57:28.141197 26204 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.25.124.148:37351
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.25.124.129:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.25.124.129
--webserver_port=0
--tserver_master_addrs=127.25.124.190:40527
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.25.124.129
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 09 Aug 2025 19:43:21 UTC on 5fd53c4cbb9d
build id 7489
TSAN enabled
I20250809 19:57:28.142189 26204 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250809 19:57:28.143488 26204 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250809 19:57:28.153682 26210 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:57:29.556710 26209 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 26204
W20250809 19:57:29.833600 26204 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.679s user 0.569s sys 1.055s
W20250809 19:57:29.833931 26204 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.679s user 0.569s sys 1.055s
W20250809 19:57:28.154924 26211 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:57:29.835242 26212 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1679 milliseconds
W20250809 19:57:29.835951 26213 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250809 19:57:29.835901 26204 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250809 19:57:29.839385 26204 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250809 19:57:29.841192 26204 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250809 19:57:29.842511 26204 hybrid_clock.cc:648] HybridClock initialized: now 1754769449842467 us; error 44 us; skew 500 ppm
I20250809 19:57:29.843170 26204 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250809 19:57:29.848937 26204 webserver.cc:489] Webserver started at http://127.25.124.129:33965/ using document root <none> and password file <none>
I20250809 19:57:29.849680 26204 fs_manager.cc:362] Metadata directory not provided
I20250809 19:57:29.849874 26204 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250809 19:57:29.850236 26204 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250809 19:57:29.854244 26204 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "7ab907e815504215a23b5c93c8cfb057"
format_stamp: "Formatted at 2025-08-09 19:57:29 on dist-test-slave-xzln"
I20250809 19:57:29.855185 26204 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "7ab907e815504215a23b5c93c8cfb057"
format_stamp: "Formatted at 2025-08-09 19:57:29 on dist-test-slave-xzln"
I20250809 19:57:29.861258 26204 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.005s sys 0.001s
I20250809 19:57:29.866391 26220 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250809 19:57:29.867297 26204 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.002s sys 0.000s
I20250809 19:57:29.867548 26204 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "7ab907e815504215a23b5c93c8cfb057"
format_stamp: "Formatted at 2025-08-09 19:57:29 on dist-test-slave-xzln"
I20250809 19:57:29.867820 26204 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250809 19:57:29.939069 26204 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250809 19:57:29.940212 26204 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250809 19:57:29.940562 26204 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250809 19:57:29.943573 26204 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250809 19:57:29.946851 26204 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250809 19:57:29.947029 26204 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250809 19:57:29.947273 26204 ts_tablet_manager.cc:610] Registered 0 tablets
I20250809 19:57:29.947412 26204 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250809 19:57:30.108173 26204 rpc_server.cc:307] RPC server started. Bound to: 127.25.124.129:32913
I20250809 19:57:30.108307 26332 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.25.124.129:32913 every 8 connection(s)
I20250809 19:57:30.110332 26204 server_base.cc:1179] Dumped server information to /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250809 19:57:30.120643 26098 external_mini_cluster.cc:1428] Started /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu as pid 26204
I20250809 19:57:30.121032 26098 external_mini_cluster.cc:1442] Reading /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250809 19:57:30.126643 26098 external_mini_cluster.cc:1366] Running /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
/tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.25.124.130:0
--local_ip_for_outbound_sockets=127.25.124.130
--webserver_interface=127.25.124.130
--webserver_port=0
--tserver_master_addrs=127.25.124.190:40527
--builtin_ntp_servers=127.25.124.148:37351
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250809 19:57:30.131045 26333 heartbeater.cc:344] Connected to a master server at 127.25.124.190:40527
I20250809 19:57:30.131489 26333 heartbeater.cc:461] Registering TS with master...
I20250809 19:57:30.132586 26333 heartbeater.cc:507] Master 127.25.124.190:40527 requested a full tablet report, sending...
I20250809 19:57:30.134631 26145 ts_manager.cc:194] Registered new tserver with Master: 7ab907e815504215a23b5c93c8cfb057 (127.25.124.129:32913)
I20250809 19:57:30.136376 26145 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.25.124.129:55789
W20250809 19:57:30.390926 26337 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 19:57:30.391315 26337 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250809 19:57:30.391682 26337 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 19:57:30.416267 26337 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250809 19:57:30.416877 26337 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.25.124.130
I20250809 19:57:30.443797 26337 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.25.124.148:37351
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.25.124.130:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.25.124.130
--webserver_port=0
--tserver_master_addrs=127.25.124.190:40527
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.25.124.130
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 09 Aug 2025 19:43:21 UTC on 5fd53c4cbb9d
build id 7489
TSAN enabled
I20250809 19:57:30.444727 26337 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250809 19:57:30.445976 26337 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250809 19:57:30.456267 26343 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250809 19:57:31.139034 26333 heartbeater.cc:499] Master 127.25.124.190:40527 was elected leader, sending a full tablet report...
W20250809 19:57:30.456943 26344 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:57:31.641317 26346 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:57:31.642870 26345 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1181 milliseconds
W20250809 19:57:31.644394 26337 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.188s user 0.428s sys 0.751s
W20250809 19:57:31.644748 26337 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.189s user 0.428s sys 0.751s
I20250809 19:57:31.645032 26337 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250809 19:57:31.646498 26337 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250809 19:57:31.649073 26337 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250809 19:57:31.650539 26337 hybrid_clock.cc:648] HybridClock initialized: now 1754769451650504 us; error 25 us; skew 500 ppm
I20250809 19:57:31.651665 26337 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250809 19:57:31.660171 26337 webserver.cc:489] Webserver started at http://127.25.124.130:35741/ using document root <none> and password file <none>
I20250809 19:57:31.661408 26337 fs_manager.cc:362] Metadata directory not provided
I20250809 19:57:31.661692 26337 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250809 19:57:31.662293 26337 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250809 19:57:31.669085 26337 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data/instance:
uuid: "d59a8f553d8046d68b5f96555ae1582e"
format_stamp: "Formatted at 2025-08-09 19:57:31 on dist-test-slave-xzln"
I20250809 19:57:31.670496 26337 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal/instance:
uuid: "d59a8f553d8046d68b5f96555ae1582e"
format_stamp: "Formatted at 2025-08-09 19:57:31 on dist-test-slave-xzln"
I20250809 19:57:31.679052 26337 fs_manager.cc:696] Time spent creating directory manager: real 0.008s user 0.009s sys 0.002s
I20250809 19:57:31.686507 26353 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250809 19:57:31.687582 26337 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.000s
I20250809 19:57:31.687964 26337 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "d59a8f553d8046d68b5f96555ae1582e"
format_stamp: "Formatted at 2025-08-09 19:57:31 on dist-test-slave-xzln"
I20250809 19:57:31.688376 26337 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250809 19:57:31.778748 26337 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250809 19:57:31.780555 26337 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250809 19:57:31.781122 26337 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250809 19:57:31.783427 26337 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250809 19:57:31.787149 26337 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250809 19:57:31.787369 26337 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.001s sys 0.000s
I20250809 19:57:31.787550 26337 ts_tablet_manager.cc:610] Registered 0 tablets
I20250809 19:57:31.787671 26337 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250809 19:57:31.919025 26337 rpc_server.cc:307] RPC server started. Bound to: 127.25.124.130:34207
I20250809 19:57:31.919097 26465 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.25.124.130:34207 every 8 connection(s)
I20250809 19:57:31.921386 26337 server_base.cc:1179] Dumped server information to /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250809 19:57:31.927030 26098 external_mini_cluster.cc:1428] Started /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu as pid 26337
I20250809 19:57:31.927407 26098 external_mini_cluster.cc:1442] Reading /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal/instance
I20250809 19:57:31.932363 26098 external_mini_cluster.cc:1366] Running /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
/tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.25.124.131:0
--local_ip_for_outbound_sockets=127.25.124.131
--webserver_interface=127.25.124.131
--webserver_port=0
--tserver_master_addrs=127.25.124.190:40527
--builtin_ntp_servers=127.25.124.148:37351
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250809 19:57:31.941326 26466 heartbeater.cc:344] Connected to a master server at 127.25.124.190:40527
I20250809 19:57:31.941689 26466 heartbeater.cc:461] Registering TS with master...
I20250809 19:57:31.942525 26466 heartbeater.cc:507] Master 127.25.124.190:40527 requested a full tablet report, sending...
I20250809 19:57:31.944319 26145 ts_manager.cc:194] Registered new tserver with Master: d59a8f553d8046d68b5f96555ae1582e (127.25.124.130:34207)
I20250809 19:57:31.945425 26145 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.25.124.130:44917
W20250809 19:57:32.193748 26470 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 19:57:32.194150 26470 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250809 19:57:32.194571 26470 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 19:57:32.220579 26470 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250809 19:57:32.221246 26470 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.25.124.131
I20250809 19:57:32.252260 26470 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.25.124.148:37351
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.25.124.131:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.25.124.131
--webserver_port=0
--tserver_master_addrs=127.25.124.190:40527
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.25.124.131
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 09 Aug 2025 19:43:21 UTC on 5fd53c4cbb9d
build id 7489
TSAN enabled
I20250809 19:57:32.253237 26470 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250809 19:57:32.254503 26470 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250809 19:57:32.264758 26476 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250809 19:57:32.948596 26466 heartbeater.cc:499] Master 127.25.124.190:40527 was elected leader, sending a full tablet report...
W20250809 19:57:32.265712 26477 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:57:33.302791 26479 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:57:33.304426 26478 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection timed out after 1038 milliseconds
W20250809 19:57:33.305076 26470 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.040s user 0.343s sys 0.695s
W20250809 19:57:33.305317 26470 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.040s user 0.343s sys 0.695s
I20250809 19:57:33.305506 26470 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250809 19:57:33.306416 26470 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250809 19:57:33.308840 26470 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250809 19:57:33.310161 26470 hybrid_clock.cc:648] HybridClock initialized: now 1754769453310125 us; error 34 us; skew 500 ppm
I20250809 19:57:33.310854 26470 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250809 19:57:33.318796 26470 webserver.cc:489] Webserver started at http://127.25.124.131:43931/ using document root <none> and password file <none>
I20250809 19:57:33.319630 26470 fs_manager.cc:362] Metadata directory not provided
I20250809 19:57:33.319810 26470 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250809 19:57:33.320217 26470 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250809 19:57:33.323971 26470 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data/instance:
uuid: "0da9de23653d4a7587ca3db8c02fa927"
format_stamp: "Formatted at 2025-08-09 19:57:33 on dist-test-slave-xzln"
I20250809 19:57:33.324920 26470 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal/instance:
uuid: "0da9de23653d4a7587ca3db8c02fa927"
format_stamp: "Formatted at 2025-08-09 19:57:33 on dist-test-slave-xzln"
I20250809 19:57:33.332026 26470 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.006s sys 0.002s
I20250809 19:57:33.337909 26486 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250809 19:57:33.338841 26470 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.005s sys 0.001s
I20250809 19:57:33.339111 26470 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "0da9de23653d4a7587ca3db8c02fa927"
format_stamp: "Formatted at 2025-08-09 19:57:33 on dist-test-slave-xzln"
I20250809 19:57:33.339398 26470 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250809 19:57:33.402225 26470 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250809 19:57:33.403333 26470 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250809 19:57:33.403658 26470 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250809 19:57:33.405639 26470 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250809 19:57:33.409205 26470 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250809 19:57:33.409370 26470 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250809 19:57:33.409576 26470 ts_tablet_manager.cc:610] Registered 0 tablets
I20250809 19:57:33.409705 26470 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250809 19:57:33.538993 26470 rpc_server.cc:307] RPC server started. Bound to: 127.25.124.131:45703
I20250809 19:57:33.539072 26598 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.25.124.131:45703 every 8 connection(s)
I20250809 19:57:33.541235 26470 server_base.cc:1179] Dumped server information to /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250809 19:57:33.549180 26098 external_mini_cluster.cc:1428] Started /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu as pid 26470
I20250809 19:57:33.549506 26098 external_mini_cluster.cc:1442] Reading /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal/instance
I20250809 19:57:33.554630 26098 external_mini_cluster.cc:1366] Running /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
/tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-3/wal
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-3/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-3/logs
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-3/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.25.124.132:0
--local_ip_for_outbound_sockets=127.25.124.132
--webserver_interface=127.25.124.132
--webserver_port=0
--tserver_master_addrs=127.25.124.190:40527
--builtin_ntp_servers=127.25.124.148:37351
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250809 19:57:33.568089 26599 heartbeater.cc:344] Connected to a master server at 127.25.124.190:40527
I20250809 19:57:33.568418 26599 heartbeater.cc:461] Registering TS with master...
I20250809 19:57:33.569296 26599 heartbeater.cc:507] Master 127.25.124.190:40527 requested a full tablet report, sending...
I20250809 19:57:33.571127 26145 ts_manager.cc:194] Registered new tserver with Master: 0da9de23653d4a7587ca3db8c02fa927 (127.25.124.131:45703)
I20250809 19:57:33.572592 26145 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.25.124.131:55437
W20250809 19:57:33.815802 26603 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 19:57:33.816200 26603 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250809 19:57:33.816612 26603 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 19:57:33.841159 26603 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250809 19:57:33.841818 26603 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.25.124.132
I20250809 19:57:33.868904 26603 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.25.124.148:37351
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-3/data
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-3/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.25.124.132:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-3/data/info.pb
--webserver_interface=127.25.124.132
--webserver_port=0
--tserver_master_addrs=127.25.124.190:40527
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.25.124.132
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-3/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 09 Aug 2025 19:43:21 UTC on 5fd53c4cbb9d
build id 7489
TSAN enabled
I20250809 19:57:33.869948 26603 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250809 19:57:33.871223 26603 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250809 19:57:33.881771 26609 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250809 19:57:34.575297 26599 heartbeater.cc:499] Master 127.25.124.190:40527 was elected leader, sending a full tablet report...
W20250809 19:57:35.284938 26608 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 26603
W20250809 19:57:35.376673 26608 kernel_stack_watchdog.cc:198] Thread 26603 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:642 for 400ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250809 19:57:33.882302 26610 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:57:35.377799 26603 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.496s user 0.566s sys 0.928s
W20250809 19:57:35.378113 26603 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.496s user 0.566s sys 0.928s
W20250809 19:57:35.379905 26612 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:57:35.381693 26611 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1495 milliseconds
I20250809 19:57:35.381744 26603 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250809 19:57:35.382757 26603 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250809 19:57:35.384564 26603 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250809 19:57:35.385936 26603 hybrid_clock.cc:648] HybridClock initialized: now 1754769455385895 us; error 34 us; skew 500 ppm
I20250809 19:57:35.386560 26603 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250809 19:57:35.391571 26603 webserver.cc:489] Webserver started at http://127.25.124.132:38915/ using document root <none> and password file <none>
I20250809 19:57:35.392297 26603 fs_manager.cc:362] Metadata directory not provided
I20250809 19:57:35.392462 26603 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250809 19:57:35.392788 26603 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250809 19:57:35.396391 26603 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-3/data/instance:
uuid: "ca969d1884d04201a64ebb5be92367dc"
format_stamp: "Formatted at 2025-08-09 19:57:35 on dist-test-slave-xzln"
I20250809 19:57:35.397250 26603 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-3/wal/instance:
uuid: "ca969d1884d04201a64ebb5be92367dc"
format_stamp: "Formatted at 2025-08-09 19:57:35 on dist-test-slave-xzln"
I20250809 19:57:35.403244 26603 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.007s sys 0.001s
I20250809 19:57:35.408052 26620 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250809 19:57:35.408804 26603 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.003s sys 0.002s
I20250809 19:57:35.409053 26603 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-3/data,/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-3/wal
uuid: "ca969d1884d04201a64ebb5be92367dc"
format_stamp: "Formatted at 2025-08-09 19:57:35 on dist-test-slave-xzln"
I20250809 19:57:35.409314 26603 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-3/wal
metadata directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-3/wal
1 data directories: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-3/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250809 19:57:35.459971 26603 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250809 19:57:35.461093 26603 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250809 19:57:35.461458 26603 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250809 19:57:35.463843 26603 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250809 19:57:35.467072 26603 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250809 19:57:35.467276 26603 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.001s sys 0.000s
I20250809 19:57:35.467473 26603 ts_tablet_manager.cc:610] Registered 0 tablets
I20250809 19:57:35.467602 26603 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250809 19:57:35.602042 26603 rpc_server.cc:307] RPC server started. Bound to: 127.25.124.132:34161
I20250809 19:57:35.602560 26732 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.25.124.132:34161 every 8 connection(s)
I20250809 19:57:35.604712 26603 server_base.cc:1179] Dumped server information to /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-3/data/info.pb
I20250809 19:57:35.607430 26098 external_mini_cluster.cc:1428] Started /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu as pid 26603
I20250809 19:57:35.608098 26098 external_mini_cluster.cc:1442] Reading /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-3/wal/instance
I20250809 19:57:35.623296 26733 heartbeater.cc:344] Connected to a master server at 127.25.124.190:40527
I20250809 19:57:35.623613 26733 heartbeater.cc:461] Registering TS with master...
I20250809 19:57:35.624377 26733 heartbeater.cc:507] Master 127.25.124.190:40527 requested a full tablet report, sending...
I20250809 19:57:35.626101 26144 ts_manager.cc:194] Registered new tserver with Master: ca969d1884d04201a64ebb5be92367dc (127.25.124.132:34161)
I20250809 19:57:35.627996 26144 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.25.124.132:44075
I20250809 19:57:35.628057 26098 external_mini_cluster.cc:949] 4 TS(s) registered with all masters
I20250809 19:57:35.663036 26144 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:35446:
name: "TestTable"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
owner: "alice"
I20250809 19:57:35.734733 26401 tablet_service.cc:1468] Processing CreateTablet for tablet 66f672a0b8a14232ab4403e334019f13 (DEFAULT_TABLE table=TestTable [id=129791ca401e456b80959ca8eec5784d]), partition=RANGE (key) PARTITION UNBOUNDED
I20250809 19:57:35.736483 26401 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 66f672a0b8a14232ab4403e334019f13. 1 dirs total, 0 dirs full, 0 dirs failed
I20250809 19:57:35.736361 26668 tablet_service.cc:1468] Processing CreateTablet for tablet 66f672a0b8a14232ab4403e334019f13 (DEFAULT_TABLE table=TestTable [id=129791ca401e456b80959ca8eec5784d]), partition=RANGE (key) PARTITION UNBOUNDED
I20250809 19:57:35.737844 26668 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 66f672a0b8a14232ab4403e334019f13. 1 dirs total, 0 dirs full, 0 dirs failed
I20250809 19:57:35.732738 26534 tablet_service.cc:1468] Processing CreateTablet for tablet 66f672a0b8a14232ab4403e334019f13 (DEFAULT_TABLE table=TestTable [id=129791ca401e456b80959ca8eec5784d]), partition=RANGE (key) PARTITION UNBOUNDED
I20250809 19:57:35.740144 26534 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 66f672a0b8a14232ab4403e334019f13. 1 dirs total, 0 dirs full, 0 dirs failed
I20250809 19:57:35.766971 26753 tablet_bootstrap.cc:492] T 66f672a0b8a14232ab4403e334019f13 P ca969d1884d04201a64ebb5be92367dc: Bootstrap starting.
I20250809 19:57:35.770236 26752 tablet_bootstrap.cc:492] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e: Bootstrap starting.
I20250809 19:57:35.773137 26754 tablet_bootstrap.cc:492] T 66f672a0b8a14232ab4403e334019f13 P 0da9de23653d4a7587ca3db8c02fa927: Bootstrap starting.
I20250809 19:57:35.776703 26752 tablet_bootstrap.cc:654] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e: Neither blocks nor log segments found. Creating new log.
I20250809 19:57:35.777683 26753 tablet_bootstrap.cc:654] T 66f672a0b8a14232ab4403e334019f13 P ca969d1884d04201a64ebb5be92367dc: Neither blocks nor log segments found. Creating new log.
I20250809 19:57:35.777722 26754 tablet_bootstrap.cc:654] T 66f672a0b8a14232ab4403e334019f13 P 0da9de23653d4a7587ca3db8c02fa927: Neither blocks nor log segments found. Creating new log.
I20250809 19:57:35.779098 26752 log.cc:826] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e: Log is configured to *not* fsync() on all Append() calls
I20250809 19:57:35.779631 26753 log.cc:826] T 66f672a0b8a14232ab4403e334019f13 P ca969d1884d04201a64ebb5be92367dc: Log is configured to *not* fsync() on all Append() calls
I20250809 19:57:35.780185 26754 log.cc:826] T 66f672a0b8a14232ab4403e334019f13 P 0da9de23653d4a7587ca3db8c02fa927: Log is configured to *not* fsync() on all Append() calls
I20250809 19:57:35.784315 26752 tablet_bootstrap.cc:492] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e: No bootstrap required, opened a new log
I20250809 19:57:35.784657 26752 ts_tablet_manager.cc:1397] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e: Time spent bootstrapping tablet: real 0.015s user 0.012s sys 0.001s
I20250809 19:57:35.785429 26753 tablet_bootstrap.cc:492] T 66f672a0b8a14232ab4403e334019f13 P ca969d1884d04201a64ebb5be92367dc: No bootstrap required, opened a new log
I20250809 19:57:35.785429 26754 tablet_bootstrap.cc:492] T 66f672a0b8a14232ab4403e334019f13 P 0da9de23653d4a7587ca3db8c02fa927: No bootstrap required, opened a new log
I20250809 19:57:35.785795 26754 ts_tablet_manager.cc:1397] T 66f672a0b8a14232ab4403e334019f13 P 0da9de23653d4a7587ca3db8c02fa927: Time spent bootstrapping tablet: real 0.013s user 0.005s sys 0.005s
I20250809 19:57:35.785799 26753 ts_tablet_manager.cc:1397] T 66f672a0b8a14232ab4403e334019f13 P ca969d1884d04201a64ebb5be92367dc: Time spent bootstrapping tablet: real 0.019s user 0.008s sys 0.005s
I20250809 19:57:35.799685 26752 raft_consensus.cc:357] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0da9de23653d4a7587ca3db8c02fa927" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 45703 } } peers { permanent_uuid: "ca969d1884d04201a64ebb5be92367dc" member_type: VOTER last_known_addr { host: "127.25.124.132" port: 34161 } } peers { permanent_uuid: "d59a8f553d8046d68b5f96555ae1582e" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 34207 } }
I20250809 19:57:35.800196 26752 raft_consensus.cc:383] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250809 19:57:35.800406 26752 raft_consensus.cc:738] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: d59a8f553d8046d68b5f96555ae1582e, State: Initialized, Role: FOLLOWER
I20250809 19:57:35.801054 26752 consensus_queue.cc:260] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0da9de23653d4a7587ca3db8c02fa927" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 45703 } } peers { permanent_uuid: "ca969d1884d04201a64ebb5be92367dc" member_type: VOTER last_known_addr { host: "127.25.124.132" port: 34161 } } peers { permanent_uuid: "d59a8f553d8046d68b5f96555ae1582e" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 34207 } }
I20250809 19:57:35.807278 26752 ts_tablet_manager.cc:1428] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e: Time spent starting tablet: real 0.022s user 0.019s sys 0.001s
I20250809 19:57:35.808175 26754 raft_consensus.cc:357] T 66f672a0b8a14232ab4403e334019f13 P 0da9de23653d4a7587ca3db8c02fa927 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0da9de23653d4a7587ca3db8c02fa927" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 45703 } } peers { permanent_uuid: "ca969d1884d04201a64ebb5be92367dc" member_type: VOTER last_known_addr { host: "127.25.124.132" port: 34161 } } peers { permanent_uuid: "d59a8f553d8046d68b5f96555ae1582e" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 34207 } }
I20250809 19:57:35.808331 26753 raft_consensus.cc:357] T 66f672a0b8a14232ab4403e334019f13 P ca969d1884d04201a64ebb5be92367dc [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0da9de23653d4a7587ca3db8c02fa927" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 45703 } } peers { permanent_uuid: "ca969d1884d04201a64ebb5be92367dc" member_type: VOTER last_known_addr { host: "127.25.124.132" port: 34161 } } peers { permanent_uuid: "d59a8f553d8046d68b5f96555ae1582e" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 34207 } }
I20250809 19:57:35.808970 26754 raft_consensus.cc:383] T 66f672a0b8a14232ab4403e334019f13 P 0da9de23653d4a7587ca3db8c02fa927 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250809 19:57:35.809005 26753 raft_consensus.cc:383] T 66f672a0b8a14232ab4403e334019f13 P ca969d1884d04201a64ebb5be92367dc [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250809 19:57:35.809228 26753 raft_consensus.cc:738] T 66f672a0b8a14232ab4403e334019f13 P ca969d1884d04201a64ebb5be92367dc [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: ca969d1884d04201a64ebb5be92367dc, State: Initialized, Role: FOLLOWER
I20250809 19:57:35.809262 26754 raft_consensus.cc:738] T 66f672a0b8a14232ab4403e334019f13 P 0da9de23653d4a7587ca3db8c02fa927 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 0da9de23653d4a7587ca3db8c02fa927, State: Initialized, Role: FOLLOWER
I20250809 19:57:35.809957 26753 consensus_queue.cc:260] T 66f672a0b8a14232ab4403e334019f13 P ca969d1884d04201a64ebb5be92367dc [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0da9de23653d4a7587ca3db8c02fa927" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 45703 } } peers { permanent_uuid: "ca969d1884d04201a64ebb5be92367dc" member_type: VOTER last_known_addr { host: "127.25.124.132" port: 34161 } } peers { permanent_uuid: "d59a8f553d8046d68b5f96555ae1582e" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 34207 } }
I20250809 19:57:35.810098 26754 consensus_queue.cc:260] T 66f672a0b8a14232ab4403e334019f13 P 0da9de23653d4a7587ca3db8c02fa927 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0da9de23653d4a7587ca3db8c02fa927" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 45703 } } peers { permanent_uuid: "ca969d1884d04201a64ebb5be92367dc" member_type: VOTER last_known_addr { host: "127.25.124.132" port: 34161 } } peers { permanent_uuid: "d59a8f553d8046d68b5f96555ae1582e" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 34207 } }
I20250809 19:57:35.812930 26733 heartbeater.cc:499] Master 127.25.124.190:40527 was elected leader, sending a full tablet report...
I20250809 19:57:35.815068 26753 ts_tablet_manager.cc:1428] T 66f672a0b8a14232ab4403e334019f13 P ca969d1884d04201a64ebb5be92367dc: Time spent starting tablet: real 0.029s user 0.022s sys 0.008s
I20250809 19:57:35.817231 26754 ts_tablet_manager.cc:1428] T 66f672a0b8a14232ab4403e334019f13 P 0da9de23653d4a7587ca3db8c02fa927: Time spent starting tablet: real 0.031s user 0.028s sys 0.003s
W20250809 19:57:35.859205 26734 tablet.cc:2378] T 66f672a0b8a14232ab4403e334019f13 P ca969d1884d04201a64ebb5be92367dc: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250809 19:57:35.930855 26467 tablet.cc:2378] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250809 19:57:36.006862 26760 raft_consensus.cc:491] T 66f672a0b8a14232ab4403e334019f13 P 0da9de23653d4a7587ca3db8c02fa927 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250809 19:57:36.007333 26760 raft_consensus.cc:513] T 66f672a0b8a14232ab4403e334019f13 P 0da9de23653d4a7587ca3db8c02fa927 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0da9de23653d4a7587ca3db8c02fa927" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 45703 } } peers { permanent_uuid: "ca969d1884d04201a64ebb5be92367dc" member_type: VOTER last_known_addr { host: "127.25.124.132" port: 34161 } } peers { permanent_uuid: "d59a8f553d8046d68b5f96555ae1582e" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 34207 } }
I20250809 19:57:36.009308 26760 leader_election.cc:290] T 66f672a0b8a14232ab4403e334019f13 P 0da9de23653d4a7587ca3db8c02fa927 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers ca969d1884d04201a64ebb5be92367dc (127.25.124.132:34161), d59a8f553d8046d68b5f96555ae1582e (127.25.124.130:34207)
I20250809 19:57:36.019429 26421 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "66f672a0b8a14232ab4403e334019f13" candidate_uuid: "0da9de23653d4a7587ca3db8c02fa927" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "d59a8f553d8046d68b5f96555ae1582e" is_pre_election: true
I20250809 19:57:36.019529 26688 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "66f672a0b8a14232ab4403e334019f13" candidate_uuid: "0da9de23653d4a7587ca3db8c02fa927" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "ca969d1884d04201a64ebb5be92367dc" is_pre_election: true
I20250809 19:57:36.020021 26421 raft_consensus.cc:2466] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 0da9de23653d4a7587ca3db8c02fa927 in term 0.
I20250809 19:57:36.020102 26688 raft_consensus.cc:2466] T 66f672a0b8a14232ab4403e334019f13 P ca969d1884d04201a64ebb5be92367dc [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 0da9de23653d4a7587ca3db8c02fa927 in term 0.
I20250809 19:57:36.021075 26487 leader_election.cc:304] T 66f672a0b8a14232ab4403e334019f13 P 0da9de23653d4a7587ca3db8c02fa927 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 0da9de23653d4a7587ca3db8c02fa927, ca969d1884d04201a64ebb5be92367dc; no voters:
I20250809 19:57:36.021646 26760 raft_consensus.cc:2802] T 66f672a0b8a14232ab4403e334019f13 P 0da9de23653d4a7587ca3db8c02fa927 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250809 19:57:36.021857 26760 raft_consensus.cc:491] T 66f672a0b8a14232ab4403e334019f13 P 0da9de23653d4a7587ca3db8c02fa927 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250809 19:57:36.022068 26760 raft_consensus.cc:3058] T 66f672a0b8a14232ab4403e334019f13 P 0da9de23653d4a7587ca3db8c02fa927 [term 0 FOLLOWER]: Advancing to term 1
I20250809 19:57:36.025641 26760 raft_consensus.cc:513] T 66f672a0b8a14232ab4403e334019f13 P 0da9de23653d4a7587ca3db8c02fa927 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0da9de23653d4a7587ca3db8c02fa927" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 45703 } } peers { permanent_uuid: "ca969d1884d04201a64ebb5be92367dc" member_type: VOTER last_known_addr { host: "127.25.124.132" port: 34161 } } peers { permanent_uuid: "d59a8f553d8046d68b5f96555ae1582e" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 34207 } }
I20250809 19:57:36.026796 26760 leader_election.cc:290] T 66f672a0b8a14232ab4403e334019f13 P 0da9de23653d4a7587ca3db8c02fa927 [CANDIDATE]: Term 1 election: Requested vote from peers ca969d1884d04201a64ebb5be92367dc (127.25.124.132:34161), d59a8f553d8046d68b5f96555ae1582e (127.25.124.130:34207)
I20250809 19:57:36.027377 26688 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "66f672a0b8a14232ab4403e334019f13" candidate_uuid: "0da9de23653d4a7587ca3db8c02fa927" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "ca969d1884d04201a64ebb5be92367dc"
I20250809 19:57:36.027599 26421 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "66f672a0b8a14232ab4403e334019f13" candidate_uuid: "0da9de23653d4a7587ca3db8c02fa927" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "d59a8f553d8046d68b5f96555ae1582e"
I20250809 19:57:36.027799 26688 raft_consensus.cc:3058] T 66f672a0b8a14232ab4403e334019f13 P ca969d1884d04201a64ebb5be92367dc [term 0 FOLLOWER]: Advancing to term 1
I20250809 19:57:36.028016 26421 raft_consensus.cc:3058] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [term 0 FOLLOWER]: Advancing to term 1
I20250809 19:57:36.033654 26688 raft_consensus.cc:2466] T 66f672a0b8a14232ab4403e334019f13 P ca969d1884d04201a64ebb5be92367dc [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 0da9de23653d4a7587ca3db8c02fa927 in term 1.
I20250809 19:57:36.033761 26421 raft_consensus.cc:2466] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 0da9de23653d4a7587ca3db8c02fa927 in term 1.
I20250809 19:57:36.034428 26487 leader_election.cc:304] T 66f672a0b8a14232ab4403e334019f13 P 0da9de23653d4a7587ca3db8c02fa927 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 0da9de23653d4a7587ca3db8c02fa927, ca969d1884d04201a64ebb5be92367dc; no voters:
I20250809 19:57:36.034919 26760 raft_consensus.cc:2802] T 66f672a0b8a14232ab4403e334019f13 P 0da9de23653d4a7587ca3db8c02fa927 [term 1 FOLLOWER]: Leader election won for term 1
I20250809 19:57:36.036231 26760 raft_consensus.cc:695] T 66f672a0b8a14232ab4403e334019f13 P 0da9de23653d4a7587ca3db8c02fa927 [term 1 LEADER]: Becoming Leader. State: Replica: 0da9de23653d4a7587ca3db8c02fa927, State: Running, Role: LEADER
I20250809 19:57:36.036827 26760 consensus_queue.cc:237] T 66f672a0b8a14232ab4403e334019f13 P 0da9de23653d4a7587ca3db8c02fa927 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0da9de23653d4a7587ca3db8c02fa927" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 45703 } } peers { permanent_uuid: "ca969d1884d04201a64ebb5be92367dc" member_type: VOTER last_known_addr { host: "127.25.124.132" port: 34161 } } peers { permanent_uuid: "d59a8f553d8046d68b5f96555ae1582e" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 34207 } }
I20250809 19:57:36.045593 26145 catalog_manager.cc:5582] T 66f672a0b8a14232ab4403e334019f13 P 0da9de23653d4a7587ca3db8c02fa927 reported cstate change: term changed from 0 to 1, leader changed from <none> to 0da9de23653d4a7587ca3db8c02fa927 (127.25.124.131). New cstate: current_term: 1 leader_uuid: "0da9de23653d4a7587ca3db8c02fa927" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0da9de23653d4a7587ca3db8c02fa927" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 45703 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "ca969d1884d04201a64ebb5be92367dc" member_type: VOTER last_known_addr { host: "127.25.124.132" port: 34161 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "d59a8f553d8046d68b5f96555ae1582e" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 34207 } health_report { overall_health: UNKNOWN } } }
W20250809 19:57:36.047663 26600 tablet.cc:2378] T 66f672a0b8a14232ab4403e334019f13 P 0da9de23653d4a7587ca3db8c02fa927: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250809 19:57:36.085843 26098 external_mini_cluster.cc:949] 4 TS(s) registered with all masters
I20250809 19:57:36.088727 26098 ts_itest-base.cc:246] Waiting for 1 tablets on tserver d59a8f553d8046d68b5f96555ae1582e to finish bootstrapping
I20250809 19:57:36.099937 26098 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 0da9de23653d4a7587ca3db8c02fa927 to finish bootstrapping
I20250809 19:57:36.108227 26098 ts_itest-base.cc:246] Waiting for 1 tablets on tserver ca969d1884d04201a64ebb5be92367dc to finish bootstrapping
I20250809 19:57:36.117005 26098 kudu-admin-test.cc:709] Waiting for Master to see the current replicas...
I20250809 19:57:36.119390 26098 kudu-admin-test.cc:716] Tablet locations:
tablet_locations {
tablet_id: "66f672a0b8a14232ab4403e334019f13"
DEPRECATED_stale: false
partition {
partition_key_start: ""
partition_key_end: ""
}
interned_replicas {
ts_info_idx: 0
role: LEADER
}
interned_replicas {
ts_info_idx: 1
role: FOLLOWER
}
interned_replicas {
ts_info_idx: 2
role: FOLLOWER
}
}
ts_infos {
permanent_uuid: "0da9de23653d4a7587ca3db8c02fa927"
rpc_addresses {
host: "127.25.124.131"
port: 45703
}
}
ts_infos {
permanent_uuid: "ca969d1884d04201a64ebb5be92367dc"
rpc_addresses {
host: "127.25.124.132"
port: 34161
}
}
ts_infos {
permanent_uuid: "d59a8f553d8046d68b5f96555ae1582e"
rpc_addresses {
host: "127.25.124.130"
port: 34207
}
}
I20250809 19:57:36.512434 26760 consensus_queue.cc:1035] T 66f672a0b8a14232ab4403e334019f13 P 0da9de23653d4a7587ca3db8c02fa927 [LEADER]: Connected to new peer: Peer: permanent_uuid: "d59a8f553d8046d68b5f96555ae1582e" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 34207 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250809 19:57:36.525000 26764 consensus_queue.cc:1035] T 66f672a0b8a14232ab4403e334019f13 P 0da9de23653d4a7587ca3db8c02fa927 [LEADER]: Connected to new peer: Peer: permanent_uuid: "ca969d1884d04201a64ebb5be92367dc" member_type: VOTER last_known_addr { host: "127.25.124.132" port: 34161 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250809 19:57:36.530543 26098 external_mini_cluster.cc:1658] Killing /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu with pid 26470
W20250809 19:57:36.556200 26624 connection.cc:537] server connection from 127.25.124.131:35869 recv error: Network error: recv error from unknown peer: Transport endpoint is not connected (error 107)
W20250809 19:57:36.556612 26129 connection.cc:537] server connection from 127.25.124.131:55437 recv error: Network error: recv error from unknown peer: Transport endpoint is not connected (error 107)
I20250809 19:57:36.557327 26098 external_mini_cluster.cc:1658] Killing /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu with pid 26112
I20250809 19:57:36.579617 26098 external_mini_cluster.cc:1366] Running /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
/tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.25.124.190:40527
--webserver_interface=127.25.124.190
--webserver_port=36055
--builtin_ntp_servers=127.25.124.148:37351
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.25.124.190:40527 with env {}
W20250809 19:57:36.832911 26776 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 19:57:36.833320 26776 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250809 19:57:36.833679 26776 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 19:57:36.858994 26776 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250809 19:57:36.859225 26776 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250809 19:57:36.859395 26776 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250809 19:57:36.859555 26776 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250809 19:57:36.887434 26776 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.25.124.148:37351
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.25.124.190:40527
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.25.124.190:40527
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.25.124.190
--webserver_port=36055
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 09 Aug 2025 19:43:21 UTC on 5fd53c4cbb9d
build id 7489
TSAN enabled
I20250809 19:57:36.888394 26776 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250809 19:57:36.889623 26776 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250809 19:57:36.897801 26782 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:57:37.165699 26333 heartbeater.cc:646] Failed to heartbeat to 127.25.124.190:40527 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.25.124.190:40527: connect: Connection refused (error 111)
W20250809 19:57:37.536270 26466 heartbeater.cc:646] Failed to heartbeat to 127.25.124.190:40527 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.25.124.190:40527: connect: Connection refused (error 111)
W20250809 19:57:37.550012 26733 heartbeater.cc:646] Failed to heartbeat to 127.25.124.190:40527 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.25.124.190:40527: connect: Connection refused (error 111)
W20250809 19:57:36.899441 26783 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:57:37.963579 26785 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:57:37.965341 26784 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1062 milliseconds
I20250809 19:57:37.965417 26776 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250809 19:57:37.966476 26776 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250809 19:57:37.969147 26776 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250809 19:57:37.970527 26776 hybrid_clock.cc:648] HybridClock initialized: now 1754769457970490 us; error 38 us; skew 500 ppm
I20250809 19:57:37.971271 26776 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250809 19:57:37.976960 26776 webserver.cc:489] Webserver started at http://127.25.124.190:36055/ using document root <none> and password file <none>
I20250809 19:57:37.977715 26776 fs_manager.cc:362] Metadata directory not provided
I20250809 19:57:37.977883 26776 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250809 19:57:37.984826 26776 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.001s sys 0.003s
I20250809 19:57:37.988539 26795 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250809 19:57:37.989538 26776 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.004s sys 0.000s
I20250809 19:57:37.989805 26776 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
uuid: "d2080b6d0e634b60a14fa83483c1a4c0"
format_stamp: "Formatted at 2025-08-09 19:57:27 on dist-test-slave-xzln"
I20250809 19:57:37.991461 26776 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250809 19:57:38.023720 26798 raft_consensus.cc:491] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [term 1 FOLLOWER]: Starting pre-election (detected failure of leader 0da9de23653d4a7587ca3db8c02fa927)
I20250809 19:57:38.024055 26798 raft_consensus.cc:513] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0da9de23653d4a7587ca3db8c02fa927" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 45703 } } peers { permanent_uuid: "ca969d1884d04201a64ebb5be92367dc" member_type: VOTER last_known_addr { host: "127.25.124.132" port: 34161 } } peers { permanent_uuid: "d59a8f553d8046d68b5f96555ae1582e" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 34207 } }
I20250809 19:57:38.025839 26798 leader_election.cc:290] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers 0da9de23653d4a7587ca3db8c02fa927 (127.25.124.131:45703), ca969d1884d04201a64ebb5be92367dc (127.25.124.132:34161)
W20250809 19:57:38.027050 26357 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.25.124.131:45703: connect: Connection refused (error 111)
W20250809 19:57:38.031164 26357 leader_election.cc:336] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [CANDIDATE]: Term 2 pre-election: RPC error from VoteRequest() call to peer 0da9de23653d4a7587ca3db8c02fa927 (127.25.124.131:45703): Network error: Client connection negotiation failed: client connection to 127.25.124.131:45703: connect: Connection refused (error 111)
I20250809 19:57:38.032903 26804 raft_consensus.cc:491] T 66f672a0b8a14232ab4403e334019f13 P ca969d1884d04201a64ebb5be92367dc [term 1 FOLLOWER]: Starting pre-election (detected failure of leader 0da9de23653d4a7587ca3db8c02fa927)
I20250809 19:57:38.033346 26804 raft_consensus.cc:513] T 66f672a0b8a14232ab4403e334019f13 P ca969d1884d04201a64ebb5be92367dc [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0da9de23653d4a7587ca3db8c02fa927" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 45703 } } peers { permanent_uuid: "ca969d1884d04201a64ebb5be92367dc" member_type: VOTER last_known_addr { host: "127.25.124.132" port: 34161 } } peers { permanent_uuid: "d59a8f553d8046d68b5f96555ae1582e" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 34207 } }
I20250809 19:57:38.035516 26804 leader_election.cc:290] T 66f672a0b8a14232ab4403e334019f13 P ca969d1884d04201a64ebb5be92367dc [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers 0da9de23653d4a7587ca3db8c02fa927 (127.25.124.131:45703), d59a8f553d8046d68b5f96555ae1582e (127.25.124.130:34207)
W20250809 19:57:38.038480 26624 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.25.124.131:45703: connect: Connection refused (error 111)
I20250809 19:57:38.042376 26776 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250809 19:57:38.043167 26688 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "66f672a0b8a14232ab4403e334019f13" candidate_uuid: "d59a8f553d8046d68b5f96555ae1582e" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: false dest_uuid: "ca969d1884d04201a64ebb5be92367dc" is_pre_election: true
I20250809 19:57:38.043808 26688 raft_consensus.cc:2466] T 66f672a0b8a14232ab4403e334019f13 P ca969d1884d04201a64ebb5be92367dc [term 1 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate d59a8f553d8046d68b5f96555ae1582e in term 1.
I20250809 19:57:38.044054 26776 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250809 19:57:38.044584 26776 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250809 19:57:38.045094 26354 leader_election.cc:304] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: ca969d1884d04201a64ebb5be92367dc, d59a8f553d8046d68b5f96555ae1582e; no voters: 0da9de23653d4a7587ca3db8c02fa927
I20250809 19:57:38.046337 26798 raft_consensus.cc:2802] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [term 1 FOLLOWER]: Leader pre-election won for term 2
I20250809 19:57:38.046710 26798 raft_consensus.cc:491] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [term 1 FOLLOWER]: Starting leader election (detected failure of leader 0da9de23653d4a7587ca3db8c02fa927)
I20250809 19:57:38.047078 26798 raft_consensus.cc:3058] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [term 1 FOLLOWER]: Advancing to term 2
I20250809 19:57:38.050479 26421 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "66f672a0b8a14232ab4403e334019f13" candidate_uuid: "ca969d1884d04201a64ebb5be92367dc" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: false dest_uuid: "d59a8f553d8046d68b5f96555ae1582e" is_pre_election: true
W20250809 19:57:38.051012 26624 leader_election.cc:336] T 66f672a0b8a14232ab4403e334019f13 P ca969d1884d04201a64ebb5be92367dc [CANDIDATE]: Term 2 pre-election: RPC error from VoteRequest() call to peer 0da9de23653d4a7587ca3db8c02fa927 (127.25.124.131:45703): Network error: Client connection negotiation failed: client connection to 127.25.124.131:45703: connect: Connection refused (error 111)
I20250809 19:57:38.052889 26798 raft_consensus.cc:513] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0da9de23653d4a7587ca3db8c02fa927" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 45703 } } peers { permanent_uuid: "ca969d1884d04201a64ebb5be92367dc" member_type: VOTER last_known_addr { host: "127.25.124.132" port: 34161 } } peers { permanent_uuid: "d59a8f553d8046d68b5f96555ae1582e" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 34207 } }
I20250809 19:57:38.053714 26421 raft_consensus.cc:2391] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [term 2 FOLLOWER]: Leader pre-election vote request: Denying vote to candidate ca969d1884d04201a64ebb5be92367dc in current term 2: Already voted for candidate d59a8f553d8046d68b5f96555ae1582e in this term.
I20250809 19:57:38.054894 26688 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "66f672a0b8a14232ab4403e334019f13" candidate_uuid: "d59a8f553d8046d68b5f96555ae1582e" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: false dest_uuid: "ca969d1884d04201a64ebb5be92367dc"
I20250809 19:57:38.055341 26688 raft_consensus.cc:3058] T 66f672a0b8a14232ab4403e334019f13 P ca969d1884d04201a64ebb5be92367dc [term 1 FOLLOWER]: Advancing to term 2
I20250809 19:57:38.057044 26623 leader_election.cc:304] T 66f672a0b8a14232ab4403e334019f13 P ca969d1884d04201a64ebb5be92367dc [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: ca969d1884d04201a64ebb5be92367dc; no voters: 0da9de23653d4a7587ca3db8c02fa927, d59a8f553d8046d68b5f96555ae1582e
W20250809 19:57:38.057546 26357 leader_election.cc:336] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [CANDIDATE]: Term 2 election: RPC error from VoteRequest() call to peer 0da9de23653d4a7587ca3db8c02fa927 (127.25.124.131:45703): Network error: Client connection negotiation failed: client connection to 127.25.124.131:45703: connect: Connection refused (error 111)
I20250809 19:57:38.058061 26798 leader_election.cc:290] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [CANDIDATE]: Term 2 election: Requested vote from peers 0da9de23653d4a7587ca3db8c02fa927 (127.25.124.131:45703), ca969d1884d04201a64ebb5be92367dc (127.25.124.132:34161)
I20250809 19:57:38.060006 26688 raft_consensus.cc:2466] T 66f672a0b8a14232ab4403e334019f13 P ca969d1884d04201a64ebb5be92367dc [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate d59a8f553d8046d68b5f96555ae1582e in term 2.
I20250809 19:57:38.060295 26804 raft_consensus.cc:2747] T 66f672a0b8a14232ab4403e334019f13 P ca969d1884d04201a64ebb5be92367dc [term 2 FOLLOWER]: Leader pre-election lost for term 2. Reason: could not achieve majority
I20250809 19:57:38.060731 26354 leader_election.cc:304] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: ca969d1884d04201a64ebb5be92367dc, d59a8f553d8046d68b5f96555ae1582e; no voters: 0da9de23653d4a7587ca3db8c02fa927
I20250809 19:57:38.061364 26798 raft_consensus.cc:2802] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [term 2 FOLLOWER]: Leader election won for term 2
I20250809 19:57:38.063330 26798 raft_consensus.cc:695] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [term 2 LEADER]: Becoming Leader. State: Replica: d59a8f553d8046d68b5f96555ae1582e, State: Running, Role: LEADER
I20250809 19:57:38.064185 26798 consensus_queue.cc:237] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 1, Committed index: 1, Last appended: 1.1, Last appended by leader: 1, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0da9de23653d4a7587ca3db8c02fa927" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 45703 } } peers { permanent_uuid: "ca969d1884d04201a64ebb5be92367dc" member_type: VOTER last_known_addr { host: "127.25.124.132" port: 34161 } } peers { permanent_uuid: "d59a8f553d8046d68b5f96555ae1582e" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 34207 } }
I20250809 19:57:38.116848 26776 rpc_server.cc:307] RPC server started. Bound to: 127.25.124.190:40527
I20250809 19:57:38.116968 26857 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.25.124.190:40527 every 8 connection(s)
I20250809 19:57:38.119170 26776 server_base.cc:1179] Dumped server information to /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250809 19:57:38.124011 26098 external_mini_cluster.cc:1428] Started /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu as pid 26776
I20250809 19:57:38.124418 26098 kudu-admin-test.cc:735] Forcing unsafe config change on tserver d59a8f553d8046d68b5f96555ae1582e
I20250809 19:57:38.127516 26858 sys_catalog.cc:263] Verifying existing consensus state
I20250809 19:57:38.131285 26858 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P d2080b6d0e634b60a14fa83483c1a4c0: Bootstrap starting.
I20250809 19:57:38.161368 26858 log.cc:826] T 00000000000000000000000000000000 P d2080b6d0e634b60a14fa83483c1a4c0: Log is configured to *not* fsync() on all Append() calls
I20250809 19:57:38.188642 26858 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P d2080b6d0e634b60a14fa83483c1a4c0: Bootstrap replayed 1/1 log segments. Stats: ops{read=7 overwritten=0 applied=7 ignored=0} inserts{seen=5 ignored=0} mutations{seen=2 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250809 19:57:38.189519 26858 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P d2080b6d0e634b60a14fa83483c1a4c0: Bootstrap complete.
I20250809 19:57:38.201152 26333 heartbeater.cc:344] Connected to a master server at 127.25.124.190:40527
I20250809 19:57:38.207877 26858 raft_consensus.cc:357] T 00000000000000000000000000000000 P d2080b6d0e634b60a14fa83483c1a4c0 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d2080b6d0e634b60a14fa83483c1a4c0" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 40527 } }
I20250809 19:57:38.209708 26858 raft_consensus.cc:738] T 00000000000000000000000000000000 P d2080b6d0e634b60a14fa83483c1a4c0 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: d2080b6d0e634b60a14fa83483c1a4c0, State: Initialized, Role: FOLLOWER
I20250809 19:57:38.210310 26858 consensus_queue.cc:260] T 00000000000000000000000000000000 P d2080b6d0e634b60a14fa83483c1a4c0 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 7, Last appended: 1.7, Last appended by leader: 7, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d2080b6d0e634b60a14fa83483c1a4c0" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 40527 } }
I20250809 19:57:38.210724 26858 raft_consensus.cc:397] T 00000000000000000000000000000000 P d2080b6d0e634b60a14fa83483c1a4c0 [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250809 19:57:38.210948 26858 raft_consensus.cc:491] T 00000000000000000000000000000000 P d2080b6d0e634b60a14fa83483c1a4c0 [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250809 19:57:38.211226 26858 raft_consensus.cc:3058] T 00000000000000000000000000000000 P d2080b6d0e634b60a14fa83483c1a4c0 [term 1 FOLLOWER]: Advancing to term 2
I20250809 19:57:38.215792 26858 raft_consensus.cc:513] T 00000000000000000000000000000000 P d2080b6d0e634b60a14fa83483c1a4c0 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d2080b6d0e634b60a14fa83483c1a4c0" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 40527 } }
I20250809 19:57:38.216302 26858 leader_election.cc:304] T 00000000000000000000000000000000 P d2080b6d0e634b60a14fa83483c1a4c0 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: d2080b6d0e634b60a14fa83483c1a4c0; no voters:
I20250809 19:57:38.218120 26858 leader_election.cc:290] T 00000000000000000000000000000000 P d2080b6d0e634b60a14fa83483c1a4c0 [CANDIDATE]: Term 2 election: Requested vote from peers
I20250809 19:57:38.218462 26864 raft_consensus.cc:2802] T 00000000000000000000000000000000 P d2080b6d0e634b60a14fa83483c1a4c0 [term 2 FOLLOWER]: Leader election won for term 2
I20250809 19:57:38.220532 26864 raft_consensus.cc:695] T 00000000000000000000000000000000 P d2080b6d0e634b60a14fa83483c1a4c0 [term 2 LEADER]: Becoming Leader. State: Replica: d2080b6d0e634b60a14fa83483c1a4c0, State: Running, Role: LEADER
I20250809 19:57:38.221252 26864 consensus_queue.cc:237] T 00000000000000000000000000000000 P d2080b6d0e634b60a14fa83483c1a4c0 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 7, Committed index: 7, Last appended: 1.7, Last appended by leader: 7, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d2080b6d0e634b60a14fa83483c1a4c0" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 40527 } }
I20250809 19:57:38.223205 26858 sys_catalog.cc:564] T 00000000000000000000000000000000 P d2080b6d0e634b60a14fa83483c1a4c0 [sys.catalog]: configured and running, proceeding with master startup.
I20250809 19:57:38.230095 26866 sys_catalog.cc:455] T 00000000000000000000000000000000 P d2080b6d0e634b60a14fa83483c1a4c0 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 2 leader_uuid: "d2080b6d0e634b60a14fa83483c1a4c0" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d2080b6d0e634b60a14fa83483c1a4c0" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 40527 } } }
I20250809 19:57:38.230240 26865 sys_catalog.cc:455] T 00000000000000000000000000000000 P d2080b6d0e634b60a14fa83483c1a4c0 [sys.catalog]: SysCatalogTable state changed. Reason: New leader d2080b6d0e634b60a14fa83483c1a4c0. Latest consensus state: current_term: 2 leader_uuid: "d2080b6d0e634b60a14fa83483c1a4c0" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d2080b6d0e634b60a14fa83483c1a4c0" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 40527 } } }
I20250809 19:57:38.230700 26866 sys_catalog.cc:458] T 00000000000000000000000000000000 P d2080b6d0e634b60a14fa83483c1a4c0 [sys.catalog]: This master's current role is: LEADER
I20250809 19:57:38.230894 26865 sys_catalog.cc:458] T 00000000000000000000000000000000 P d2080b6d0e634b60a14fa83483c1a4c0 [sys.catalog]: This master's current role is: LEADER
I20250809 19:57:38.243851 26871 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250809 19:57:38.254503 26871 catalog_manager.cc:671] Loaded metadata for table TestTable [id=129791ca401e456b80959ca8eec5784d]
I20250809 19:57:38.261950 26871 tablet_loader.cc:96] loaded metadata for tablet 66f672a0b8a14232ab4403e334019f13 (table TestTable [id=129791ca401e456b80959ca8eec5784d])
I20250809 19:57:38.263485 26871 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250809 19:57:38.268848 26871 catalog_manager.cc:1261] Loaded cluster ID: 611c354b6fbf42a7bd014f69a4c07599
I20250809 19:57:38.269101 26871 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250809 19:57:38.276161 26871 catalog_manager.cc:1506] Loading token signing keys...
I20250809 19:57:38.281042 26871 catalog_manager.cc:5966] T 00000000000000000000000000000000 P d2080b6d0e634b60a14fa83483c1a4c0: Loaded TSK: 0
I20250809 19:57:38.282577 26871 catalog_manager.cc:1516] Initializing in-progress tserver states...
W20250809 19:57:38.430732 26860 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 19:57:38.431233 26860 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 19:57:38.457527 26860 flags.cc:425] Enabled experimental flag: --enable_multi_tenancy=false
I20250809 19:57:38.579478 26733 heartbeater.cc:344] Connected to a master server at 127.25.124.190:40527
I20250809 19:57:38.587355 26823 master_service.cc:432] Got heartbeat from unknown tserver (permanent_uuid: "ca969d1884d04201a64ebb5be92367dc" instance_seqno: 1754769455570974) as {username='slave'} at 127.25.124.132:57745; Asking this server to re-register.
I20250809 19:57:38.589262 26733 heartbeater.cc:461] Registering TS with master...
I20250809 19:57:38.590031 26733 heartbeater.cc:507] Master 127.25.124.190:40527 requested a full tablet report, sending...
I20250809 19:57:38.593380 26822 ts_manager.cc:194] Registered new tserver with Master: ca969d1884d04201a64ebb5be92367dc (127.25.124.132:34161)
I20250809 19:57:38.594014 26688 raft_consensus.cc:1273] T 66f672a0b8a14232ab4403e334019f13 P ca969d1884d04201a64ebb5be92367dc [term 2 FOLLOWER]: Refusing update from remote peer d59a8f553d8046d68b5f96555ae1582e: Log matching property violated. Preceding OpId in replica: term: 1 index: 1. Preceding OpId from leader: term: 2 index: 2. (index mismatch)
I20250809 19:57:38.599861 26893 consensus_queue.cc:1035] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [LEADER]: Connected to new peer: Peer: permanent_uuid: "ca969d1884d04201a64ebb5be92367dc" member_type: VOTER last_known_addr { host: "127.25.124.132" port: 34161 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 1, Time since last communication: 0.000s
I20250809 19:57:38.653081 26822 catalog_manager.cc:5582] T 66f672a0b8a14232ab4403e334019f13 P ca969d1884d04201a64ebb5be92367dc reported cstate change: term changed from 1 to 2, leader changed from 0da9de23653d4a7587ca3db8c02fa927 (127.25.124.131) to d59a8f553d8046d68b5f96555ae1582e (127.25.124.130). New cstate: current_term: 2 leader_uuid: "d59a8f553d8046d68b5f96555ae1582e" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0da9de23653d4a7587ca3db8c02fa927" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 45703 } } peers { permanent_uuid: "ca969d1884d04201a64ebb5be92367dc" member_type: VOTER last_known_addr { host: "127.25.124.132" port: 34161 } } peers { permanent_uuid: "d59a8f553d8046d68b5f96555ae1582e" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 34207 } } }
W20250809 19:57:38.719717 26357 consensus_peers.cc:489] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e -> Peer 0da9de23653d4a7587ca3db8c02fa927 (127.25.124.131:45703): Couldn't send request to peer 0da9de23653d4a7587ca3db8c02fa927. Status: Network error: Client connection negotiation failed: client connection to 127.25.124.131:45703: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20250809 19:57:38.723387 26466 heartbeater.cc:344] Connected to a master server at 127.25.124.190:40527
I20250809 19:57:38.726708 26823 master_service.cc:432] Got heartbeat from unknown tserver (permanent_uuid: "d59a8f553d8046d68b5f96555ae1582e" instance_seqno: 1754769451887572) as {username='slave'} at 127.25.124.130:35297; Asking this server to re-register.
I20250809 19:57:38.728061 26466 heartbeater.cc:461] Registering TS with master...
I20250809 19:57:38.728605 26466 heartbeater.cc:507] Master 127.25.124.190:40527 requested a full tablet report, sending...
I20250809 19:57:38.731288 26823 ts_manager.cc:194] Registered new tserver with Master: d59a8f553d8046d68b5f96555ae1582e (127.25.124.130:34207)
I20250809 19:57:39.206486 26823 master_service.cc:432] Got heartbeat from unknown tserver (permanent_uuid: "7ab907e815504215a23b5c93c8cfb057" instance_seqno: 1754769450071584) as {username='slave'} at 127.25.124.129:52939; Asking this server to re-register.
I20250809 19:57:39.208711 26333 heartbeater.cc:461] Registering TS with master...
I20250809 19:57:39.209671 26333 heartbeater.cc:507] Master 127.25.124.190:40527 requested a full tablet report, sending...
I20250809 19:57:39.212980 26823 ts_manager.cc:194] Registered new tserver with Master: 7ab907e815504215a23b5c93c8cfb057 (127.25.124.129:32913)
W20250809 19:57:39.656731 26860 thread.cc:641] rpc reactor (reactor) Time spent creating pthread: real 1.165s user 0.426s sys 0.653s
W20250809 19:57:39.657109 26860 thread.cc:608] rpc reactor (reactor) Time spent starting thread: real 1.165s user 0.426s sys 0.653s
I20250809 19:57:39.712358 26421 tablet_service.cc:1905] Received UnsafeChangeConfig RPC: dest_uuid: "d59a8f553d8046d68b5f96555ae1582e"
tablet_id: "66f672a0b8a14232ab4403e334019f13"
caller_id: "kudu-tools"
new_config {
peers {
permanent_uuid: "ca969d1884d04201a64ebb5be92367dc"
}
peers {
permanent_uuid: "d59a8f553d8046d68b5f96555ae1582e"
}
}
from {username='slave'} at 127.0.0.1:57356
W20250809 19:57:39.713419 26421 raft_consensus.cc:2216] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [term 2 LEADER]: PROCEEDING WITH UNSAFE CONFIG CHANGE ON THIS SERVER, COMMITTED CONFIG: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0da9de23653d4a7587ca3db8c02fa927" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 45703 } } peers { permanent_uuid: "ca969d1884d04201a64ebb5be92367dc" member_type: VOTER last_known_addr { host: "127.25.124.132" port: 34161 } } peers { permanent_uuid: "d59a8f553d8046d68b5f96555ae1582e" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 34207 } }NEW CONFIG: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "ca969d1884d04201a64ebb5be92367dc" member_type: VOTER last_known_addr { host: "127.25.124.132" port: 34161 } } peers { permanent_uuid: "d59a8f553d8046d68b5f96555ae1582e" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 34207 } } unsafe_config_change: true
I20250809 19:57:39.714146 26421 raft_consensus.cc:3053] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [term 2 LEADER]: Stepping down as leader of term 2
I20250809 19:57:39.714359 26421 raft_consensus.cc:738] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [term 2 LEADER]: Becoming Follower/Learner. State: Replica: d59a8f553d8046d68b5f96555ae1582e, State: Running, Role: LEADER
I20250809 19:57:39.714850 26421 consensus_queue.cc:260] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 2, Committed index: 2, Last appended: 2.2, Last appended by leader: 2, Current term: 2, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0da9de23653d4a7587ca3db8c02fa927" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 45703 } } peers { permanent_uuid: "ca969d1884d04201a64ebb5be92367dc" member_type: VOTER last_known_addr { host: "127.25.124.132" port: 34161 } } peers { permanent_uuid: "d59a8f553d8046d68b5f96555ae1582e" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 34207 } }
I20250809 19:57:39.715765 26421 raft_consensus.cc:3058] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [term 2 FOLLOWER]: Advancing to term 3
I20250809 19:57:40.694764 26915 raft_consensus.cc:491] T 66f672a0b8a14232ab4403e334019f13 P ca969d1884d04201a64ebb5be92367dc [term 2 FOLLOWER]: Starting pre-election (detected failure of leader d59a8f553d8046d68b5f96555ae1582e)
I20250809 19:57:40.695062 26915 raft_consensus.cc:513] T 66f672a0b8a14232ab4403e334019f13 P ca969d1884d04201a64ebb5be92367dc [term 2 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0da9de23653d4a7587ca3db8c02fa927" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 45703 } } peers { permanent_uuid: "ca969d1884d04201a64ebb5be92367dc" member_type: VOTER last_known_addr { host: "127.25.124.132" port: 34161 } } peers { permanent_uuid: "d59a8f553d8046d68b5f96555ae1582e" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 34207 } }
I20250809 19:57:40.696362 26915 leader_election.cc:290] T 66f672a0b8a14232ab4403e334019f13 P ca969d1884d04201a64ebb5be92367dc [CANDIDATE]: Term 3 pre-election: Requested pre-vote from peers 0da9de23653d4a7587ca3db8c02fa927 (127.25.124.131:45703), d59a8f553d8046d68b5f96555ae1582e (127.25.124.130:34207)
I20250809 19:57:40.697573 26421 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "66f672a0b8a14232ab4403e334019f13" candidate_uuid: "ca969d1884d04201a64ebb5be92367dc" candidate_term: 3 candidate_status { last_received { term: 2 index: 2 } } ignore_live_leader: false dest_uuid: "d59a8f553d8046d68b5f96555ae1582e" is_pre_election: true
W20250809 19:57:40.700433 26624 leader_election.cc:336] T 66f672a0b8a14232ab4403e334019f13 P ca969d1884d04201a64ebb5be92367dc [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer 0da9de23653d4a7587ca3db8c02fa927 (127.25.124.131:45703): Network error: Client connection negotiation failed: client connection to 127.25.124.131:45703: connect: Connection refused (error 111)
I20250809 19:57:40.700704 26624 leader_election.cc:304] T 66f672a0b8a14232ab4403e334019f13 P ca969d1884d04201a64ebb5be92367dc [CANDIDATE]: Term 3 pre-election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: ca969d1884d04201a64ebb5be92367dc; no voters: 0da9de23653d4a7587ca3db8c02fa927, d59a8f553d8046d68b5f96555ae1582e
I20250809 19:57:40.701141 26915 raft_consensus.cc:2747] T 66f672a0b8a14232ab4403e334019f13 P ca969d1884d04201a64ebb5be92367dc [term 2 FOLLOWER]: Leader pre-election lost for term 3. Reason: could not achieve majority
I20250809 19:57:41.223111 26920 raft_consensus.cc:491] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [term 3 FOLLOWER]: Starting pre-election (detected failure of leader kudu-tools)
I20250809 19:57:41.223508 26920 raft_consensus.cc:513] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [term 3 FOLLOWER]: Starting pre-election with config: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "ca969d1884d04201a64ebb5be92367dc" member_type: VOTER last_known_addr { host: "127.25.124.132" port: 34161 } } peers { permanent_uuid: "d59a8f553d8046d68b5f96555ae1582e" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 34207 } } unsafe_config_change: true
I20250809 19:57:41.224416 26920 leader_election.cc:290] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [CANDIDATE]: Term 4 pre-election: Requested pre-vote from peers ca969d1884d04201a64ebb5be92367dc (127.25.124.132:34161)
I20250809 19:57:41.225224 26687 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "66f672a0b8a14232ab4403e334019f13" candidate_uuid: "d59a8f553d8046d68b5f96555ae1582e" candidate_term: 4 candidate_status { last_received { term: 3 index: 3 } } ignore_live_leader: false dest_uuid: "ca969d1884d04201a64ebb5be92367dc" is_pre_election: true
I20250809 19:57:41.225594 26687 raft_consensus.cc:2466] T 66f672a0b8a14232ab4403e334019f13 P ca969d1884d04201a64ebb5be92367dc [term 2 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate d59a8f553d8046d68b5f96555ae1582e in term 2.
I20250809 19:57:41.226354 26354 leader_election.cc:304] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [CANDIDATE]: Term 4 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 2 voters: 2 yes votes; 0 no votes. yes voters: ca969d1884d04201a64ebb5be92367dc, d59a8f553d8046d68b5f96555ae1582e; no voters:
I20250809 19:57:41.226851 26920 raft_consensus.cc:2802] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [term 3 FOLLOWER]: Leader pre-election won for term 4
I20250809 19:57:41.227095 26920 raft_consensus.cc:491] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [term 3 FOLLOWER]: Starting leader election (detected failure of leader kudu-tools)
I20250809 19:57:41.227347 26920 raft_consensus.cc:3058] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [term 3 FOLLOWER]: Advancing to term 4
I20250809 19:57:41.231343 26920 raft_consensus.cc:513] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [term 4 FOLLOWER]: Starting leader election with config: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "ca969d1884d04201a64ebb5be92367dc" member_type: VOTER last_known_addr { host: "127.25.124.132" port: 34161 } } peers { permanent_uuid: "d59a8f553d8046d68b5f96555ae1582e" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 34207 } } unsafe_config_change: true
I20250809 19:57:41.232091 26920 leader_election.cc:290] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [CANDIDATE]: Term 4 election: Requested vote from peers ca969d1884d04201a64ebb5be92367dc (127.25.124.132:34161)
I20250809 19:57:41.232728 26687 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "66f672a0b8a14232ab4403e334019f13" candidate_uuid: "d59a8f553d8046d68b5f96555ae1582e" candidate_term: 4 candidate_status { last_received { term: 3 index: 3 } } ignore_live_leader: false dest_uuid: "ca969d1884d04201a64ebb5be92367dc"
I20250809 19:57:41.233043 26687 raft_consensus.cc:3058] T 66f672a0b8a14232ab4403e334019f13 P ca969d1884d04201a64ebb5be92367dc [term 2 FOLLOWER]: Advancing to term 4
I20250809 19:57:41.236483 26687 raft_consensus.cc:2466] T 66f672a0b8a14232ab4403e334019f13 P ca969d1884d04201a64ebb5be92367dc [term 4 FOLLOWER]: Leader election vote request: Granting yes vote for candidate d59a8f553d8046d68b5f96555ae1582e in term 4.
I20250809 19:57:41.237085 26354 leader_election.cc:304] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [CANDIDATE]: Term 4 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 2 voters: 2 yes votes; 0 no votes. yes voters: ca969d1884d04201a64ebb5be92367dc, d59a8f553d8046d68b5f96555ae1582e; no voters:
I20250809 19:57:41.237536 26920 raft_consensus.cc:2802] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [term 4 FOLLOWER]: Leader election won for term 4
I20250809 19:57:41.238148 26920 raft_consensus.cc:695] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [term 4 LEADER]: Becoming Leader. State: Replica: d59a8f553d8046d68b5f96555ae1582e, State: Running, Role: LEADER
I20250809 19:57:41.238690 26920 consensus_queue.cc:237] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 2, Committed index: 2, Last appended: 3.3, Last appended by leader: 0, Current term: 4, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "ca969d1884d04201a64ebb5be92367dc" member_type: VOTER last_known_addr { host: "127.25.124.132" port: 34161 } } peers { permanent_uuid: "d59a8f553d8046d68b5f96555ae1582e" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 34207 } } unsafe_config_change: true
I20250809 19:57:41.243984 26823 catalog_manager.cc:5582] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e reported cstate change: term changed from 2 to 4, now has a pending config: VOTER ca969d1884d04201a64ebb5be92367dc (127.25.124.132), VOTER d59a8f553d8046d68b5f96555ae1582e (127.25.124.130). New cstate: current_term: 4 leader_uuid: "d59a8f553d8046d68b5f96555ae1582e" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0da9de23653d4a7587ca3db8c02fa927" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 45703 } } peers { permanent_uuid: "ca969d1884d04201a64ebb5be92367dc" member_type: VOTER last_known_addr { host: "127.25.124.132" port: 34161 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "d59a8f553d8046d68b5f96555ae1582e" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 34207 } health_report { overall_health: HEALTHY } } } pending_config { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "ca969d1884d04201a64ebb5be92367dc" member_type: VOTER last_known_addr { host: "127.25.124.132" port: 34161 } } peers { permanent_uuid: "d59a8f553d8046d68b5f96555ae1582e" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 34207 } } unsafe_config_change: true }
I20250809 19:57:41.676298 26687 raft_consensus.cc:1273] T 66f672a0b8a14232ab4403e334019f13 P ca969d1884d04201a64ebb5be92367dc [term 4 FOLLOWER]: Refusing update from remote peer d59a8f553d8046d68b5f96555ae1582e: Log matching property violated. Preceding OpId in replica: term: 2 index: 2. Preceding OpId from leader: term: 4 index: 4. (index mismatch)
I20250809 19:57:41.677282 26920 consensus_queue.cc:1035] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [LEADER]: Connected to new peer: Peer: permanent_uuid: "ca969d1884d04201a64ebb5be92367dc" member_type: VOTER last_known_addr { host: "127.25.124.132" port: 34161 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 4, Last known committed idx: 2, Time since last communication: 0.000s
I20250809 19:57:41.683259 26921 raft_consensus.cc:2953] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [term 4 LEADER]: Committing config change with OpId 3.3: config changed from index -1 to 3, VOTER 0da9de23653d4a7587ca3db8c02fa927 (127.25.124.131) evicted. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "ca969d1884d04201a64ebb5be92367dc" member_type: VOTER last_known_addr { host: "127.25.124.132" port: 34161 } } peers { permanent_uuid: "d59a8f553d8046d68b5f96555ae1582e" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 34207 } } unsafe_config_change: true }
I20250809 19:57:41.684438 26687 raft_consensus.cc:2953] T 66f672a0b8a14232ab4403e334019f13 P ca969d1884d04201a64ebb5be92367dc [term 4 FOLLOWER]: Committing config change with OpId 3.3: config changed from index -1 to 3, VOTER 0da9de23653d4a7587ca3db8c02fa927 (127.25.124.131) evicted. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "ca969d1884d04201a64ebb5be92367dc" member_type: VOTER last_known_addr { host: "127.25.124.132" port: 34161 } } peers { permanent_uuid: "d59a8f553d8046d68b5f96555ae1582e" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 34207 } } unsafe_config_change: true }
I20250809 19:57:41.691716 26823 catalog_manager.cc:5582] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e reported cstate change: config changed from index -1 to 3, VOTER 0da9de23653d4a7587ca3db8c02fa927 (127.25.124.131) evicted, no longer has a pending config: VOTER ca969d1884d04201a64ebb5be92367dc (127.25.124.132), VOTER d59a8f553d8046d68b5f96555ae1582e (127.25.124.130). New cstate: current_term: 4 leader_uuid: "d59a8f553d8046d68b5f96555ae1582e" committed_config { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "ca969d1884d04201a64ebb5be92367dc" member_type: VOTER last_known_addr { host: "127.25.124.132" port: 34161 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "d59a8f553d8046d68b5f96555ae1582e" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 34207 } health_report { overall_health: HEALTHY } } unsafe_config_change: true }
W20250809 19:57:41.697227 26823 catalog_manager.cc:5774] Failed to send DeleteTablet RPC for tablet 66f672a0b8a14232ab4403e334019f13 on TS 0da9de23653d4a7587ca3db8c02fa927: Not found: failed to reset TS proxy: Could not find TS for UUID 0da9de23653d4a7587ca3db8c02fa927
I20250809 19:57:41.710095 26421 consensus_queue.cc:237] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 4, Committed index: 4, Last appended: 4.4, Last appended by leader: 0, Current term: 4, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "ca969d1884d04201a64ebb5be92367dc" member_type: VOTER last_known_addr { host: "127.25.124.132" port: 34161 } } peers { permanent_uuid: "d59a8f553d8046d68b5f96555ae1582e" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 34207 } } peers { permanent_uuid: "7ab907e815504215a23b5c93c8cfb057" member_type: NON_VOTER last_known_addr { host: "127.25.124.129" port: 32913 } attrs { promote: true } } unsafe_config_change: true
I20250809 19:57:41.713547 26687 raft_consensus.cc:1273] T 66f672a0b8a14232ab4403e334019f13 P ca969d1884d04201a64ebb5be92367dc [term 4 FOLLOWER]: Refusing update from remote peer d59a8f553d8046d68b5f96555ae1582e: Log matching property violated. Preceding OpId in replica: term: 4 index: 4. Preceding OpId from leader: term: 4 index: 5. (index mismatch)
I20250809 19:57:41.714900 26931 consensus_queue.cc:1035] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [LEADER]: Connected to new peer: Peer: permanent_uuid: "ca969d1884d04201a64ebb5be92367dc" member_type: VOTER last_known_addr { host: "127.25.124.132" port: 34161 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 5, Last known committed idx: 4, Time since last communication: 0.000s
I20250809 19:57:41.719241 26921 raft_consensus.cc:2953] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [term 4 LEADER]: Committing config change with OpId 4.5: config changed from index 3 to 5, NON_VOTER 7ab907e815504215a23b5c93c8cfb057 (127.25.124.129) added. New config: { opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "ca969d1884d04201a64ebb5be92367dc" member_type: VOTER last_known_addr { host: "127.25.124.132" port: 34161 } } peers { permanent_uuid: "d59a8f553d8046d68b5f96555ae1582e" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 34207 } } peers { permanent_uuid: "7ab907e815504215a23b5c93c8cfb057" member_type: NON_VOTER last_known_addr { host: "127.25.124.129" port: 32913 } attrs { promote: true } } unsafe_config_change: true }
I20250809 19:57:41.721053 26687 raft_consensus.cc:2953] T 66f672a0b8a14232ab4403e334019f13 P ca969d1884d04201a64ebb5be92367dc [term 4 FOLLOWER]: Committing config change with OpId 4.5: config changed from index 3 to 5, NON_VOTER 7ab907e815504215a23b5c93c8cfb057 (127.25.124.129) added. New config: { opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "ca969d1884d04201a64ebb5be92367dc" member_type: VOTER last_known_addr { host: "127.25.124.132" port: 34161 } } peers { permanent_uuid: "d59a8f553d8046d68b5f96555ae1582e" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 34207 } } peers { permanent_uuid: "7ab907e815504215a23b5c93c8cfb057" member_type: NON_VOTER last_known_addr { host: "127.25.124.129" port: 32913 } attrs { promote: true } } unsafe_config_change: true }
W20250809 19:57:41.723536 26356 consensus_peers.cc:489] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e -> Peer 7ab907e815504215a23b5c93c8cfb057 (127.25.124.129:32913): Couldn't send request to peer 7ab907e815504215a23b5c93c8cfb057. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: 66f672a0b8a14232ab4403e334019f13. This is attempt 1: this message will repeat every 5th retry.
I20250809 19:57:41.724946 26799 catalog_manager.cc:5095] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet 66f672a0b8a14232ab4403e334019f13 with cas_config_opid_index 3: ChangeConfig:ADD_PEER:NON_VOTER succeeded (attempt 1)
I20250809 19:57:41.727537 26822 catalog_manager.cc:5582] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e reported cstate change: config changed from index 3 to 5, NON_VOTER 7ab907e815504215a23b5c93c8cfb057 (127.25.124.129) added. New cstate: current_term: 4 leader_uuid: "d59a8f553d8046d68b5f96555ae1582e" committed_config { opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "ca969d1884d04201a64ebb5be92367dc" member_type: VOTER last_known_addr { host: "127.25.124.132" port: 34161 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "d59a8f553d8046d68b5f96555ae1582e" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 34207 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "7ab907e815504215a23b5c93c8cfb057" member_type: NON_VOTER last_known_addr { host: "127.25.124.129" port: 32913 } attrs { promote: true } health_report { overall_health: UNKNOWN } } unsafe_config_change: true }
W20250809 19:57:41.749552 26797 catalog_manager.cc:4726] Async tablet task DeleteTablet RPC for tablet 66f672a0b8a14232ab4403e334019f13 on TS 0da9de23653d4a7587ca3db8c02fa927 failed: Not found: failed to reset TS proxy: Could not find TS for UUID 0da9de23653d4a7587ca3db8c02fa927
I20250809 19:57:42.267699 26936 ts_tablet_manager.cc:927] T 66f672a0b8a14232ab4403e334019f13 P 7ab907e815504215a23b5c93c8cfb057: Initiating tablet copy from peer d59a8f553d8046d68b5f96555ae1582e (127.25.124.130:34207)
I20250809 19:57:42.269518 26936 tablet_copy_client.cc:323] T 66f672a0b8a14232ab4403e334019f13 P 7ab907e815504215a23b5c93c8cfb057: tablet copy: Beginning tablet copy session from remote peer at address 127.25.124.130:34207
I20250809 19:57:42.281800 26441 tablet_copy_service.cc:140] P d59a8f553d8046d68b5f96555ae1582e: Received BeginTabletCopySession request for tablet 66f672a0b8a14232ab4403e334019f13 from peer 7ab907e815504215a23b5c93c8cfb057 ({username='slave'} at 127.25.124.129:56213)
I20250809 19:57:42.282164 26441 tablet_copy_service.cc:161] P d59a8f553d8046d68b5f96555ae1582e: Beginning new tablet copy session on tablet 66f672a0b8a14232ab4403e334019f13 from peer 7ab907e815504215a23b5c93c8cfb057 at {username='slave'} at 127.25.124.129:56213: session id = 7ab907e815504215a23b5c93c8cfb057-66f672a0b8a14232ab4403e334019f13
I20250809 19:57:42.286224 26441 tablet_copy_source_session.cc:215] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e: Tablet Copy: opened 0 blocks and 1 log segments
I20250809 19:57:42.289984 26936 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 66f672a0b8a14232ab4403e334019f13. 1 dirs total, 0 dirs full, 0 dirs failed
I20250809 19:57:42.305384 26936 tablet_copy_client.cc:806] T 66f672a0b8a14232ab4403e334019f13 P 7ab907e815504215a23b5c93c8cfb057: tablet copy: Starting download of 0 data blocks...
I20250809 19:57:42.305759 26936 tablet_copy_client.cc:670] T 66f672a0b8a14232ab4403e334019f13 P 7ab907e815504215a23b5c93c8cfb057: tablet copy: Starting download of 1 WAL segments...
I20250809 19:57:42.308458 26936 tablet_copy_client.cc:538] T 66f672a0b8a14232ab4403e334019f13 P 7ab907e815504215a23b5c93c8cfb057: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250809 19:57:42.313045 26936 tablet_bootstrap.cc:492] T 66f672a0b8a14232ab4403e334019f13 P 7ab907e815504215a23b5c93c8cfb057: Bootstrap starting.
I20250809 19:57:42.323359 26936 log.cc:826] T 66f672a0b8a14232ab4403e334019f13 P 7ab907e815504215a23b5c93c8cfb057: Log is configured to *not* fsync() on all Append() calls
I20250809 19:57:42.332603 26936 tablet_bootstrap.cc:492] T 66f672a0b8a14232ab4403e334019f13 P 7ab907e815504215a23b5c93c8cfb057: Bootstrap replayed 1/1 log segments. Stats: ops{read=5 overwritten=0 applied=5 ignored=0} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250809 19:57:42.333149 26936 tablet_bootstrap.cc:492] T 66f672a0b8a14232ab4403e334019f13 P 7ab907e815504215a23b5c93c8cfb057: Bootstrap complete.
I20250809 19:57:42.333595 26936 ts_tablet_manager.cc:1397] T 66f672a0b8a14232ab4403e334019f13 P 7ab907e815504215a23b5c93c8cfb057: Time spent bootstrapping tablet: real 0.021s user 0.015s sys 0.004s
I20250809 19:57:42.348145 26936 raft_consensus.cc:357] T 66f672a0b8a14232ab4403e334019f13 P 7ab907e815504215a23b5c93c8cfb057 [term 4 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "ca969d1884d04201a64ebb5be92367dc" member_type: VOTER last_known_addr { host: "127.25.124.132" port: 34161 } } peers { permanent_uuid: "d59a8f553d8046d68b5f96555ae1582e" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 34207 } } peers { permanent_uuid: "7ab907e815504215a23b5c93c8cfb057" member_type: NON_VOTER last_known_addr { host: "127.25.124.129" port: 32913 } attrs { promote: true } } unsafe_config_change: true
I20250809 19:57:42.348899 26936 raft_consensus.cc:738] T 66f672a0b8a14232ab4403e334019f13 P 7ab907e815504215a23b5c93c8cfb057 [term 4 LEARNER]: Becoming Follower/Learner. State: Replica: 7ab907e815504215a23b5c93c8cfb057, State: Initialized, Role: LEARNER
I20250809 19:57:42.349548 26936 consensus_queue.cc:260] T 66f672a0b8a14232ab4403e334019f13 P 7ab907e815504215a23b5c93c8cfb057 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 5, Last appended: 4.5, Last appended by leader: 5, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "ca969d1884d04201a64ebb5be92367dc" member_type: VOTER last_known_addr { host: "127.25.124.132" port: 34161 } } peers { permanent_uuid: "d59a8f553d8046d68b5f96555ae1582e" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 34207 } } peers { permanent_uuid: "7ab907e815504215a23b5c93c8cfb057" member_type: NON_VOTER last_known_addr { host: "127.25.124.129" port: 32913 } attrs { promote: true } } unsafe_config_change: true
I20250809 19:57:42.352602 26936 ts_tablet_manager.cc:1428] T 66f672a0b8a14232ab4403e334019f13 P 7ab907e815504215a23b5c93c8cfb057: Time spent starting tablet: real 0.019s user 0.016s sys 0.004s
I20250809 19:57:42.354032 26441 tablet_copy_service.cc:342] P d59a8f553d8046d68b5f96555ae1582e: Request end of tablet copy session 7ab907e815504215a23b5c93c8cfb057-66f672a0b8a14232ab4403e334019f13 received from {username='slave'} at 127.25.124.129:56213
I20250809 19:57:42.354386 26441 tablet_copy_service.cc:434] P d59a8f553d8046d68b5f96555ae1582e: ending tablet copy session 7ab907e815504215a23b5c93c8cfb057-66f672a0b8a14232ab4403e334019f13 on tablet 66f672a0b8a14232ab4403e334019f13 with peer 7ab907e815504215a23b5c93c8cfb057
I20250809 19:57:42.841203 26288 raft_consensus.cc:1215] T 66f672a0b8a14232ab4403e334019f13 P 7ab907e815504215a23b5c93c8cfb057 [term 4 LEARNER]: Deduplicated request from leader. Original: 4.4->[4.5-4.5] Dedup: 4.5->[]
W20250809 19:57:42.915709 26797 catalog_manager.cc:4726] Async tablet task DeleteTablet RPC for tablet 66f672a0b8a14232ab4403e334019f13 on TS 0da9de23653d4a7587ca3db8c02fa927 failed: Not found: failed to reset TS proxy: Could not find TS for UUID 0da9de23653d4a7587ca3db8c02fa927
I20250809 19:57:43.382202 26943 raft_consensus.cc:1062] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e: attempting to promote NON_VOTER 7ab907e815504215a23b5c93c8cfb057 to VOTER
I20250809 19:57:43.383589 26943 consensus_queue.cc:237] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 5, Committed index: 5, Last appended: 4.5, Last appended by leader: 0, Current term: 4, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "ca969d1884d04201a64ebb5be92367dc" member_type: VOTER last_known_addr { host: "127.25.124.132" port: 34161 } } peers { permanent_uuid: "d59a8f553d8046d68b5f96555ae1582e" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 34207 } } peers { permanent_uuid: "7ab907e815504215a23b5c93c8cfb057" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 32913 } attrs { promote: false } } unsafe_config_change: true
I20250809 19:57:43.387914 26288 raft_consensus.cc:1273] T 66f672a0b8a14232ab4403e334019f13 P 7ab907e815504215a23b5c93c8cfb057 [term 4 LEARNER]: Refusing update from remote peer d59a8f553d8046d68b5f96555ae1582e: Log matching property violated. Preceding OpId in replica: term: 4 index: 5. Preceding OpId from leader: term: 4 index: 6. (index mismatch)
I20250809 19:57:43.388361 26687 raft_consensus.cc:1273] T 66f672a0b8a14232ab4403e334019f13 P ca969d1884d04201a64ebb5be92367dc [term 4 FOLLOWER]: Refusing update from remote peer d59a8f553d8046d68b5f96555ae1582e: Log matching property violated. Preceding OpId in replica: term: 4 index: 5. Preceding OpId from leader: term: 4 index: 6. (index mismatch)
I20250809 19:57:43.388995 26943 consensus_queue.cc:1035] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [LEADER]: Connected to new peer: Peer: permanent_uuid: "7ab907e815504215a23b5c93c8cfb057" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 32913 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 6, Last known committed idx: 5, Time since last communication: 0.000s
I20250809 19:57:43.389696 26944 consensus_queue.cc:1035] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [LEADER]: Connected to new peer: Peer: permanent_uuid: "ca969d1884d04201a64ebb5be92367dc" member_type: VOTER last_known_addr { host: "127.25.124.132" port: 34161 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 6, Last known committed idx: 5, Time since last communication: 0.000s
I20250809 19:57:43.395149 26943 raft_consensus.cc:2953] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e [term 4 LEADER]: Committing config change with OpId 4.6: config changed from index 5 to 6, 7ab907e815504215a23b5c93c8cfb057 (127.25.124.129) changed from NON_VOTER to VOTER. New config: { opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "ca969d1884d04201a64ebb5be92367dc" member_type: VOTER last_known_addr { host: "127.25.124.132" port: 34161 } } peers { permanent_uuid: "d59a8f553d8046d68b5f96555ae1582e" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 34207 } } peers { permanent_uuid: "7ab907e815504215a23b5c93c8cfb057" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 32913 } attrs { promote: false } } unsafe_config_change: true }
I20250809 19:57:43.396479 26687 raft_consensus.cc:2953] T 66f672a0b8a14232ab4403e334019f13 P ca969d1884d04201a64ebb5be92367dc [term 4 FOLLOWER]: Committing config change with OpId 4.6: config changed from index 5 to 6, 7ab907e815504215a23b5c93c8cfb057 (127.25.124.129) changed from NON_VOTER to VOTER. New config: { opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "ca969d1884d04201a64ebb5be92367dc" member_type: VOTER last_known_addr { host: "127.25.124.132" port: 34161 } } peers { permanent_uuid: "d59a8f553d8046d68b5f96555ae1582e" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 34207 } } peers { permanent_uuid: "7ab907e815504215a23b5c93c8cfb057" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 32913 } attrs { promote: false } } unsafe_config_change: true }
I20250809 19:57:43.398090 26287 raft_consensus.cc:2953] T 66f672a0b8a14232ab4403e334019f13 P 7ab907e815504215a23b5c93c8cfb057 [term 4 FOLLOWER]: Committing config change with OpId 4.6: config changed from index 5 to 6, 7ab907e815504215a23b5c93c8cfb057 (127.25.124.129) changed from NON_VOTER to VOTER. New config: { opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "ca969d1884d04201a64ebb5be92367dc" member_type: VOTER last_known_addr { host: "127.25.124.132" port: 34161 } } peers { permanent_uuid: "d59a8f553d8046d68b5f96555ae1582e" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 34207 } } peers { permanent_uuid: "7ab907e815504215a23b5c93c8cfb057" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 32913 } attrs { promote: false } } unsafe_config_change: true }
I20250809 19:57:43.406929 26822 catalog_manager.cc:5582] T 66f672a0b8a14232ab4403e334019f13 P d59a8f553d8046d68b5f96555ae1582e reported cstate change: config changed from index 5 to 6, 7ab907e815504215a23b5c93c8cfb057 (127.25.124.129) changed from NON_VOTER to VOTER. New cstate: current_term: 4 leader_uuid: "d59a8f553d8046d68b5f96555ae1582e" committed_config { opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "ca969d1884d04201a64ebb5be92367dc" member_type: VOTER last_known_addr { host: "127.25.124.132" port: 34161 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "d59a8f553d8046d68b5f96555ae1582e" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 34207 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "7ab907e815504215a23b5c93c8cfb057" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 32913 } attrs { promote: false } health_report { overall_health: HEALTHY } } unsafe_config_change: true }
I20250809 19:57:43.495510 26098 kudu-admin-test.cc:751] Waiting for Master to see new config...
I20250809 19:57:43.506121 26098 kudu-admin-test.cc:756] Tablet locations:
tablet_locations {
tablet_id: "66f672a0b8a14232ab4403e334019f13"
DEPRECATED_stale: false
partition {
partition_key_start: ""
partition_key_end: ""
}
interned_replicas {
ts_info_idx: 0
role: FOLLOWER
}
interned_replicas {
ts_info_idx: 1
role: LEADER
}
interned_replicas {
ts_info_idx: 2
role: FOLLOWER
}
}
ts_infos {
permanent_uuid: "ca969d1884d04201a64ebb5be92367dc"
rpc_addresses {
host: "127.25.124.132"
port: 34161
}
}
ts_infos {
permanent_uuid: "d59a8f553d8046d68b5f96555ae1582e"
rpc_addresses {
host: "127.25.124.130"
port: 34207
}
}
ts_infos {
permanent_uuid: "7ab907e815504215a23b5c93c8cfb057"
rpc_addresses {
host: "127.25.124.129"
port: 32913
}
}
I20250809 19:57:43.508055 26098 external_mini_cluster.cc:1658] Killing /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu with pid 26204
I20250809 19:57:43.528952 26098 external_mini_cluster.cc:1658] Killing /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu with pid 26337
I20250809 19:57:43.551959 26098 external_mini_cluster.cc:1658] Killing /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu with pid 26603
I20250809 19:57:43.575809 26098 external_mini_cluster.cc:1658] Killing /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu with pid 26776
2025-08-09T19:57:43Z chronyd exiting
[ OK ] AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes (18960 ms)
[ RUN ] AdminCliTest.TestGracefulSpecificLeaderStepDown
I20250809 19:57:43.628047 26098 test_util.cc:276] Using random seed: 445305084
I20250809 19:57:43.633098 26098 ts_itest-base.cc:115] Starting cluster with:
I20250809 19:57:43.633242 26098 ts_itest-base.cc:116] --------------
I20250809 19:57:43.633352 26098 ts_itest-base.cc:117] 3 tablet servers
I20250809 19:57:43.633450 26098 ts_itest-base.cc:118] 3 replicas per TS
I20250809 19:57:43.633549 26098 ts_itest-base.cc:119] --------------
2025-08-09T19:57:43Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-08-09T19:57:43Z Disabled control of system clock
I20250809 19:57:43.661670 26098 external_mini_cluster.cc:1366] Running /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
/tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.25.124.190:43259
--webserver_interface=127.25.124.190
--webserver_port=0
--builtin_ntp_servers=127.25.124.148:33875
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.25.124.190:43259
--catalog_manager_wait_for_new_tablets_to_elect_leader=false with env {}
W20250809 19:57:43.903578 26965 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 19:57:43.904028 26965 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250809 19:57:43.904409 26965 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 19:57:43.929442 26965 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250809 19:57:43.929721 26965 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250809 19:57:43.929939 26965 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250809 19:57:43.930153 26965 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250809 19:57:43.957479 26965 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.25.124.148:33875
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
--catalog_manager_wait_for_new_tablets_to_elect_leader=false
--ipki_ca_key_size=768
--master_addresses=127.25.124.190:43259
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.25.124.190:43259
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.25.124.190
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 09 Aug 2025 19:43:21 UTC on 5fd53c4cbb9d
build id 7489
TSAN enabled
I20250809 19:57:43.958462 26965 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250809 19:57:43.959782 26965 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250809 19:57:43.968613 26971 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:57:43.969199 26972 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:57:45.124713 26974 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:57:45.127128 26973 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1153 milliseconds
W20250809 19:57:45.127694 26965 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.159s user 0.423s sys 0.730s
W20250809 19:57:45.127959 26965 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.160s user 0.423s sys 0.730s
I20250809 19:57:45.128155 26965 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250809 19:57:45.129103 26965 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250809 19:57:45.131614 26965 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250809 19:57:45.132948 26965 hybrid_clock.cc:648] HybridClock initialized: now 1754769465132921 us; error 30 us; skew 500 ppm
I20250809 19:57:45.133664 26965 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250809 19:57:45.140707 26965 webserver.cc:489] Webserver started at http://127.25.124.190:35753/ using document root <none> and password file <none>
I20250809 19:57:45.141547 26965 fs_manager.cc:362] Metadata directory not provided
I20250809 19:57:45.141752 26965 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250809 19:57:45.142134 26965 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250809 19:57:45.145884 26965 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "7acc21eb30bd4ce5bf7187fdac0caf23"
format_stamp: "Formatted at 2025-08-09 19:57:45 on dist-test-slave-xzln"
I20250809 19:57:45.146854 26965 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "7acc21eb30bd4ce5bf7187fdac0caf23"
format_stamp: "Formatted at 2025-08-09 19:57:45 on dist-test-slave-xzln"
I20250809 19:57:45.153918 26965 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.005s sys 0.001s
I20250809 19:57:45.159051 26982 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250809 19:57:45.160056 26965 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.003s sys 0.001s
I20250809 19:57:45.160336 26965 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
uuid: "7acc21eb30bd4ce5bf7187fdac0caf23"
format_stamp: "Formatted at 2025-08-09 19:57:45 on dist-test-slave-xzln"
I20250809 19:57:45.160611 26965 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250809 19:57:45.216143 26965 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250809 19:57:45.217351 26965 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250809 19:57:45.217748 26965 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250809 19:57:45.277724 26965 rpc_server.cc:307] RPC server started. Bound to: 127.25.124.190:43259
I20250809 19:57:45.277770 27033 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.25.124.190:43259 every 8 connection(s)
I20250809 19:57:45.280098 26965 server_base.cc:1179] Dumped server information to /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250809 19:57:45.284243 27034 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250809 19:57:45.286289 26098 external_mini_cluster.cc:1428] Started /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu as pid 26965
I20250809 19:57:45.286752 26098 external_mini_cluster.cc:1442] Reading /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250809 19:57:45.301909 27034 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 7acc21eb30bd4ce5bf7187fdac0caf23: Bootstrap starting.
I20250809 19:57:45.306356 27034 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 7acc21eb30bd4ce5bf7187fdac0caf23: Neither blocks nor log segments found. Creating new log.
I20250809 19:57:45.307925 27034 log.cc:826] T 00000000000000000000000000000000 P 7acc21eb30bd4ce5bf7187fdac0caf23: Log is configured to *not* fsync() on all Append() calls
I20250809 19:57:45.312136 27034 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 7acc21eb30bd4ce5bf7187fdac0caf23: No bootstrap required, opened a new log
I20250809 19:57:45.326005 27034 raft_consensus.cc:357] T 00000000000000000000000000000000 P 7acc21eb30bd4ce5bf7187fdac0caf23 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7acc21eb30bd4ce5bf7187fdac0caf23" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 43259 } }
I20250809 19:57:45.326464 27034 raft_consensus.cc:383] T 00000000000000000000000000000000 P 7acc21eb30bd4ce5bf7187fdac0caf23 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250809 19:57:45.326668 27034 raft_consensus.cc:738] T 00000000000000000000000000000000 P 7acc21eb30bd4ce5bf7187fdac0caf23 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 7acc21eb30bd4ce5bf7187fdac0caf23, State: Initialized, Role: FOLLOWER
I20250809 19:57:45.327204 27034 consensus_queue.cc:260] T 00000000000000000000000000000000 P 7acc21eb30bd4ce5bf7187fdac0caf23 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7acc21eb30bd4ce5bf7187fdac0caf23" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 43259 } }
I20250809 19:57:45.327651 27034 raft_consensus.cc:397] T 00000000000000000000000000000000 P 7acc21eb30bd4ce5bf7187fdac0caf23 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250809 19:57:45.327872 27034 raft_consensus.cc:491] T 00000000000000000000000000000000 P 7acc21eb30bd4ce5bf7187fdac0caf23 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250809 19:57:45.328128 27034 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 7acc21eb30bd4ce5bf7187fdac0caf23 [term 0 FOLLOWER]: Advancing to term 1
I20250809 19:57:45.331611 27034 raft_consensus.cc:513] T 00000000000000000000000000000000 P 7acc21eb30bd4ce5bf7187fdac0caf23 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7acc21eb30bd4ce5bf7187fdac0caf23" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 43259 } }
I20250809 19:57:45.332130 27034 leader_election.cc:304] T 00000000000000000000000000000000 P 7acc21eb30bd4ce5bf7187fdac0caf23 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 7acc21eb30bd4ce5bf7187fdac0caf23; no voters:
I20250809 19:57:45.333451 27034 leader_election.cc:290] T 00000000000000000000000000000000 P 7acc21eb30bd4ce5bf7187fdac0caf23 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250809 19:57:45.334116 27039 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 7acc21eb30bd4ce5bf7187fdac0caf23 [term 1 FOLLOWER]: Leader election won for term 1
I20250809 19:57:45.335889 27039 raft_consensus.cc:695] T 00000000000000000000000000000000 P 7acc21eb30bd4ce5bf7187fdac0caf23 [term 1 LEADER]: Becoming Leader. State: Replica: 7acc21eb30bd4ce5bf7187fdac0caf23, State: Running, Role: LEADER
I20250809 19:57:45.336458 27039 consensus_queue.cc:237] T 00000000000000000000000000000000 P 7acc21eb30bd4ce5bf7187fdac0caf23 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7acc21eb30bd4ce5bf7187fdac0caf23" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 43259 } }
I20250809 19:57:45.336723 27034 sys_catalog.cc:564] T 00000000000000000000000000000000 P 7acc21eb30bd4ce5bf7187fdac0caf23 [sys.catalog]: configured and running, proceeding with master startup.
I20250809 19:57:45.341747 27041 sys_catalog.cc:455] T 00000000000000000000000000000000 P 7acc21eb30bd4ce5bf7187fdac0caf23 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 7acc21eb30bd4ce5bf7187fdac0caf23. Latest consensus state: current_term: 1 leader_uuid: "7acc21eb30bd4ce5bf7187fdac0caf23" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7acc21eb30bd4ce5bf7187fdac0caf23" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 43259 } } }
I20250809 19:57:45.342146 27041 sys_catalog.cc:458] T 00000000000000000000000000000000 P 7acc21eb30bd4ce5bf7187fdac0caf23 [sys.catalog]: This master's current role is: LEADER
I20250809 19:57:45.343518 27040 sys_catalog.cc:455] T 00000000000000000000000000000000 P 7acc21eb30bd4ce5bf7187fdac0caf23 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "7acc21eb30bd4ce5bf7187fdac0caf23" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7acc21eb30bd4ce5bf7187fdac0caf23" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 43259 } } }
I20250809 19:57:45.344341 27040 sys_catalog.cc:458] T 00000000000000000000000000000000 P 7acc21eb30bd4ce5bf7187fdac0caf23 [sys.catalog]: This master's current role is: LEADER
I20250809 19:57:45.346452 27044 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250809 19:57:45.355645 27044 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250809 19:57:45.372457 27044 catalog_manager.cc:1349] Generated new cluster ID: aa90dc5ff5d3488999da03b939d552c4
I20250809 19:57:45.372677 27044 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250809 19:57:45.396906 27044 catalog_manager.cc:1372] Generated new certificate authority record
I20250809 19:57:45.398046 27044 catalog_manager.cc:1506] Loading token signing keys...
I20250809 19:57:45.407613 27044 catalog_manager.cc:5955] T 00000000000000000000000000000000 P 7acc21eb30bd4ce5bf7187fdac0caf23: Generated new TSK 0
I20250809 19:57:45.408265 27044 catalog_manager.cc:1516] Initializing in-progress tserver states...
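[editor's note] The election summary above ("received 1 responses out of 1 voters: 1 yes votes") shows why a single-voter master config becomes leader immediately: one yes vote is already a strict majority. A minimal sketch of that vote arithmetic (not Kudu code), which also covers the 2-of-3 tablet election later in this log:

def majority_size(num_voters: int) -> int:
    # A Raft candidate needs a strict majority of the voter set.
    return num_voters // 2 + 1

def election_decided(yes_votes: int, no_votes: int, num_voters: int) -> str:
    m = majority_size(num_voters)
    if yes_votes >= m:
        return "candidate won"
    if no_votes >= m:
        return "candidate lost"
    return "undecided"

# Single-replica master config above: one self-vote is a majority.
assert election_decided(yes_votes=1, no_votes=0, num_voters=1) == "candidate won"
# Three-replica tablet config later in this log: two yes votes decide it.
assert election_decided(yes_votes=2, no_votes=0, num_voters=3) == "candidate won"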
I20250809 19:57:45.416810 26098 external_mini_cluster.cc:1366] Running /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
/tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.25.124.129:0
--local_ip_for_outbound_sockets=127.25.124.129
--webserver_interface=127.25.124.129
--webserver_port=0
--tserver_master_addrs=127.25.124.190:43259
--builtin_ntp_servers=127.25.124.148:33875
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false with env {}
W20250809 19:57:45.674535 27058 flags.cc:425] Enabled unsafe flag: --enable_leader_failure_detection=false
W20250809 19:57:45.675036 27058 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 19:57:45.675285 27058 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250809 19:57:45.675699 27058 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 19:57:45.700455 27058 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250809 19:57:45.701123 27058 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.25.124.129
I20250809 19:57:45.728803 27058 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.25.124.148:33875
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.25.124.129:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.25.124.129
--webserver_port=0
--tserver_master_addrs=127.25.124.190:43259
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.25.124.129
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 09 Aug 2025 19:43:21 UTC on 5fd53c4cbb9d
build id 7489
TSAN enabled
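[editor's note] The "non-default flags" dump above mixes "--key=value" entries with version/banner lines, and the launch command earlier passes some flags bare (e.g. "--never_fsync"), which the server echoes back as "--never_fsync=true". A minimal sketch (not Kudu code) for turning such a dump into a dict:

def parse_flag_dump(lines):
    flags = {}
    for line in lines:
        line = line.strip()
        if not line.startswith("--"):
            continue  # skip version/banner lines mixed into the dump
        body = line[2:]
        if "=" in body:
            key, value = body.split("=", 1)
        else:
            key, value = body, "true"  # bare flag means boolean true
        flags[key] = value
    return flags

example = [
    "--builtin_ntp_poll_interval_ms=100",
    "--time_source=builtin",
    "--never_fsync",
]
assert parse_flag_dump(example)["never_fsync"] == "true"
assert parse_flag_dump(example)["builtin_ntp_poll_interval_ms"] == "100"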
I20250809 19:57:45.729871 27058 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250809 19:57:45.731178 27058 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250809 19:57:45.741796 27064 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:57:45.744704 27065 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:57:45.745517 27067 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:57:46.747460 27066 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection time-out
I20250809 19:57:46.747538 27058 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250809 19:57:46.748530 27058 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250809 19:57:46.751631 27058 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250809 19:57:46.752936 27058 hybrid_clock.cc:648] HybridClock initialized: now 1754769466752896 us; error 56 us; skew 500 ppm
I20250809 19:57:46.753593 27058 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250809 19:57:46.759577 27058 webserver.cc:489] Webserver started at http://127.25.124.129:34675/ using document root <none> and password file <none>
I20250809 19:57:46.760378 27058 fs_manager.cc:362] Metadata directory not provided
I20250809 19:57:46.760561 27058 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250809 19:57:46.760950 27058 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250809 19:57:46.764582 27058 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "1ec5695a872d4e78ac757c39beab980c"
format_stamp: "Formatted at 2025-08-09 19:57:46 on dist-test-slave-xzln"
I20250809 19:57:46.765492 27058 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "1ec5695a872d4e78ac757c39beab980c"
format_stamp: "Formatted at 2025-08-09 19:57:46 on dist-test-slave-xzln"
I20250809 19:57:46.771232 27058 fs_manager.cc:696] Time spent creating directory manager: real 0.005s user 0.003s sys 0.004s
I20250809 19:57:46.775779 27074 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250809 19:57:46.776552 27058 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.003s sys 0.001s
I20250809 19:57:46.776823 27058 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "1ec5695a872d4e78ac757c39beab980c"
format_stamp: "Formatted at 2025-08-09 19:57:46 on dist-test-slave-xzln"
I20250809 19:57:46.777086 27058 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250809 19:57:46.823231 27058 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250809 19:57:46.824330 27058 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250809 19:57:46.824673 27058 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250809 19:57:46.826772 27058 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250809 19:57:46.830046 27058 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250809 19:57:46.830230 27058 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250809 19:57:46.830426 27058 ts_tablet_manager.cc:610] Registered 0 tablets
I20250809 19:57:46.830562 27058 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250809 19:57:46.941606 27058 rpc_server.cc:307] RPC server started. Bound to: 127.25.124.129:33289
I20250809 19:57:46.941665 27186 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.25.124.129:33289 every 8 connection(s)
I20250809 19:57:46.943809 27058 server_base.cc:1179] Dumped server information to /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250809 19:57:46.952149 26098 external_mini_cluster.cc:1428] Started /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu as pid 27058
I20250809 19:57:46.952538 26098 external_mini_cluster.cc:1442] Reading /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal/instance
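[editor's note] The test harness reads back the instance file it just formatted. A minimal sketch (not Kudu code) that extracts the uuid/format_stamp pairs from the textual instance-metadata dumps printed above; the on-disk instance file itself is a protobuf, so this only parses the log rendering:

import re

def parse_instance_dump(text: str) -> dict:
    fields = {}
    for key in ("uuid", "format_stamp"):
        m = re.search('%s: "([^"]*)"' % key, text)
        if m:
            fields[key] = m.group(1)
    return fields

dump = '''uuid: "1ec5695a872d4e78ac757c39beab980c"
format_stamp: "Formatted at 2025-08-09 19:57:46 on dist-test-slave-xzln"'''
parsed = parse_instance_dump(dump)
assert parsed["uuid"] == "1ec5695a872d4e78ac757c39beab980c"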
I20250809 19:57:46.958057 26098 external_mini_cluster.cc:1366] Running /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
/tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.25.124.130:0
--local_ip_for_outbound_sockets=127.25.124.130
--webserver_interface=127.25.124.130
--webserver_port=0
--tserver_master_addrs=127.25.124.190:43259
--builtin_ntp_servers=127.25.124.148:33875
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false with env {}
I20250809 19:57:46.962530 27187 heartbeater.cc:344] Connected to a master server at 127.25.124.190:43259
I20250809 19:57:46.962858 27187 heartbeater.cc:461] Registering TS with master...
I20250809 19:57:46.963754 27187 heartbeater.cc:507] Master 127.25.124.190:43259 requested a full tablet report, sending...
I20250809 19:57:46.965833 26999 ts_manager.cc:194] Registered new tserver with Master: 1ec5695a872d4e78ac757c39beab980c (127.25.124.129:33289)
I20250809 19:57:46.967545 26999 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.25.124.129:33701
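[editor's note] Every line in this log carries a glog-style prefix: severity letter, date, time, thread id, source file:line, then the message. A minimal parsing sketch (not Kudu code), tested against a sample shaped like the heartbeater lines above:

import re

GLOG_RE = re.compile(
    r"^(?P<sev>[IWEF])(?P<date>\d{8}) (?P<time>\d{2}:\d{2}:\d{2}\.\d{6}) "
    r"(?P<tid>\d+) (?P<src>\S+:\d+)\] (?P<msg>.*)$")

sample = ("I20250809 19:57:46.962530 27187 heartbeater.cc:344] "
          "Connected to a master server at 127.25.124.190:43259")
m = GLOG_RE.match(sample)
assert m and m.group("sev") == "I" and m.group("src") == "heartbeater.cc:344"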
W20250809 19:57:47.214345 27191 flags.cc:425] Enabled unsafe flag: --enable_leader_failure_detection=false
W20250809 19:57:47.214893 27191 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 19:57:47.215145 27191 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250809 19:57:47.215564 27191 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 19:57:47.242113 27191 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250809 19:57:47.242908 27191 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.25.124.130
I20250809 19:57:47.271411 27191 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.25.124.148:33875
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.25.124.130:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.25.124.130
--webserver_port=0
--tserver_master_addrs=127.25.124.190:43259
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.25.124.130
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 09 Aug 2025 19:43:21 UTC on 5fd53c4cbb9d
build id 7489
TSAN enabled
I20250809 19:57:47.272491 27191 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250809 19:57:47.273900 27191 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250809 19:57:47.285161 27197 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250809 19:57:47.970983 27187 heartbeater.cc:499] Master 127.25.124.190:43259 was elected leader, sending a full tablet report...
W20250809 19:57:48.688680 27196 debug-util.cc:398] Leaking SignalData structure 0x7b08000068a0 after lost signal to thread 27191
W20250809 19:57:48.802196 27191 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.516s user 0.570s sys 0.946s
W20250809 19:57:47.286170 27198 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:57:48.802561 27191 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.517s user 0.570s sys 0.946s
W20250809 19:57:48.804399 27200 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:57:48.806664 27199 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1516 milliseconds
I20250809 19:57:48.806674 27191 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250809 19:57:48.807776 27191 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250809 19:57:48.809458 27191 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250809 19:57:48.810746 27191 hybrid_clock.cc:648] HybridClock initialized: now 1754769468810690 us; error 52 us; skew 500 ppm
I20250809 19:57:48.811439 27191 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250809 19:57:48.816612 27191 webserver.cc:489] Webserver started at http://127.25.124.130:43897/ using document root <none> and password file <none>
I20250809 19:57:48.817390 27191 fs_manager.cc:362] Metadata directory not provided
I20250809 19:57:48.817562 27191 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250809 19:57:48.817940 27191 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250809 19:57:48.821600 27191 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data/instance:
uuid: "97bb1b2006e44262ad66762a69a5338f"
format_stamp: "Formatted at 2025-08-09 19:57:48 on dist-test-slave-xzln"
I20250809 19:57:48.822561 27191 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal/instance:
uuid: "97bb1b2006e44262ad66762a69a5338f"
format_stamp: "Formatted at 2025-08-09 19:57:48 on dist-test-slave-xzln"
I20250809 19:57:48.828258 27191 fs_manager.cc:696] Time spent creating directory manager: real 0.005s user 0.006s sys 0.002s
I20250809 19:57:48.832924 27207 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250809 19:57:48.833726 27191 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.003s sys 0.000s
I20250809 19:57:48.833983 27191 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "97bb1b2006e44262ad66762a69a5338f"
format_stamp: "Formatted at 2025-08-09 19:57:48 on dist-test-slave-xzln"
I20250809 19:57:48.834261 27191 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250809 19:57:48.881129 27191 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250809 19:57:48.882332 27191 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250809 19:57:48.882697 27191 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250809 19:57:48.884748 27191 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250809 19:57:48.887991 27191 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250809 19:57:48.888163 27191 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250809 19:57:48.888379 27191 ts_tablet_manager.cc:610] Registered 0 tablets
I20250809 19:57:48.888530 27191 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250809 19:57:49.006510 27191 rpc_server.cc:307] RPC server started. Bound to: 127.25.124.130:42323
I20250809 19:57:49.006625 27319 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.25.124.130:42323 every 8 connection(s)
I20250809 19:57:49.009012 27191 server_base.cc:1179] Dumped server information to /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250809 19:57:49.015076 26098 external_mini_cluster.cc:1428] Started /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu as pid 27191
I20250809 19:57:49.015551 26098 external_mini_cluster.cc:1442] Reading /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal/instance
I20250809 19:57:49.020787 26098 external_mini_cluster.cc:1366] Running /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
/tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.25.124.131:0
--local_ip_for_outbound_sockets=127.25.124.131
--webserver_interface=127.25.124.131
--webserver_port=0
--tserver_master_addrs=127.25.124.190:43259
--builtin_ntp_servers=127.25.124.148:33875
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false with env {}
I20250809 19:57:49.030058 27320 heartbeater.cc:344] Connected to a master server at 127.25.124.190:43259
I20250809 19:57:49.030463 27320 heartbeater.cc:461] Registering TS with master...
I20250809 19:57:49.031479 27320 heartbeater.cc:507] Master 127.25.124.190:43259 requested a full tablet report, sending...
I20250809 19:57:49.033435 26999 ts_manager.cc:194] Registered new tserver with Master: 97bb1b2006e44262ad66762a69a5338f (127.25.124.130:42323)
I20250809 19:57:49.034605 26999 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.25.124.130:36841
W20250809 19:57:49.302217 27324 flags.cc:425] Enabled unsafe flag: --enable_leader_failure_detection=false
W20250809 19:57:49.302757 27324 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 19:57:49.302995 27324 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250809 19:57:49.303484 27324 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 19:57:49.332628 27324 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250809 19:57:49.333379 27324 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.25.124.131
I20250809 19:57:49.364575 27324 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.25.124.148:33875
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.25.124.131:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.25.124.131
--webserver_port=0
--tserver_master_addrs=127.25.124.190:43259
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.25.124.131
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 09 Aug 2025 19:43:21 UTC on 5fd53c4cbb9d
build id 7489
TSAN enabled
I20250809 19:57:49.365706 27324 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250809 19:57:49.367117 27324 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250809 19:57:49.377306 27330 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250809 19:57:50.037497 27320 heartbeater.cc:499] Master 127.25.124.190:43259 was elected leader, sending a full tablet report...
W20250809 19:57:49.377921 27331 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:57:49.381103 27333 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:57:50.407603 27332 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1026 milliseconds
I20250809 19:57:50.407718 27324 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250809 19:57:50.408741 27324 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250809 19:57:50.410585 27324 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250809 19:57:50.411877 27324 hybrid_clock.cc:648] HybridClock initialized: now 1754769470411855 us; error 38 us; skew 500 ppm
I20250809 19:57:50.412520 27324 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250809 19:57:50.417768 27324 webserver.cc:489] Webserver started at http://127.25.124.131:42745/ using document root <none> and password file <none>
I20250809 19:57:50.418504 27324 fs_manager.cc:362] Metadata directory not provided
I20250809 19:57:50.418669 27324 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250809 19:57:50.419055 27324 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250809 19:57:50.422633 27324 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data/instance:
uuid: "7763497f5f804e21bbdcca51e1d78939"
format_stamp: "Formatted at 2025-08-09 19:57:50 on dist-test-slave-xzln"
I20250809 19:57:50.423589 27324 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal/instance:
uuid: "7763497f5f804e21bbdcca51e1d78939"
format_stamp: "Formatted at 2025-08-09 19:57:50 on dist-test-slave-xzln"
I20250809 19:57:50.429375 27324 fs_manager.cc:696] Time spent creating directory manager: real 0.005s user 0.005s sys 0.000s
I20250809 19:57:50.434017 27340 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250809 19:57:50.434830 27324 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.002s sys 0.000s
I20250809 19:57:50.435086 27324 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "7763497f5f804e21bbdcca51e1d78939"
format_stamp: "Formatted at 2025-08-09 19:57:50 on dist-test-slave-xzln"
I20250809 19:57:50.435351 27324 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250809 19:57:50.483923 27324 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250809 19:57:50.485283 27324 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250809 19:57:50.485630 27324 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250809 19:57:50.487746 27324 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250809 19:57:50.491094 27324 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250809 19:57:50.491338 27324 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.001s sys 0.000s
I20250809 19:57:50.491542 27324 ts_tablet_manager.cc:610] Registered 0 tablets
I20250809 19:57:50.491659 27324 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250809 19:57:50.602245 27324 rpc_server.cc:307] RPC server started. Bound to: 127.25.124.131:40437
I20250809 19:57:50.602322 27452 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.25.124.131:40437 every 8 connection(s)
I20250809 19:57:50.604456 27324 server_base.cc:1179] Dumped server information to /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250809 19:57:50.610523 26098 external_mini_cluster.cc:1428] Started /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu as pid 27324
I20250809 19:57:50.610839 26098 external_mini_cluster.cc:1442] Reading /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal/instance
I20250809 19:57:50.625932 27453 heartbeater.cc:344] Connected to a master server at 127.25.124.190:43259
I20250809 19:57:50.626261 27453 heartbeater.cc:461] Registering TS with master...
I20250809 19:57:50.627205 27453 heartbeater.cc:507] Master 127.25.124.190:43259 requested a full tablet report, sending...
I20250809 19:57:50.628959 26999 ts_manager.cc:194] Registered new tserver with Master: 7763497f5f804e21bbdcca51e1d78939 (127.25.124.131:40437)
I20250809 19:57:50.629998 26999 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.25.124.131:42157
I20250809 19:57:50.641364 26098 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250809 19:57:50.669967 26999 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:45340:
name: "TestTable"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
owner: "alice"
W20250809 19:57:50.685204 26999 catalog_manager.cc:6944] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table TestTable in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
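[editor's note] The warning above reads straight off the request: with num_replicas: 3 and only 3 registered tablet servers, there is no spare server to host a replacement replica if one fails. A minimal sketch (not Kudu code) of that arithmetic:

def tservers_needed_for_rereplication(num_replicas: int) -> int:
    # One spare tablet server beyond the replication factor is required
    # to re-replicate after losing a single replica.
    return num_replicas + 1

num_replicas = 3      # from the CreateTable request above
live_tservers = 3     # ts-0, ts-1, ts-2 registered earlier in this log
needed = tservers_needed_for_rereplication(num_replicas)
assert needed == 4 and live_tservers < needed  # hence the warning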
I20250809 19:57:50.729852 27255 tablet_service.cc:1468] Processing CreateTablet for tablet 3d2a3b45a3be4884af7a633ff569956b (DEFAULT_TABLE table=TestTable [id=858a0b20f1414d7982f199fda618e6c3]), partition=RANGE (key) PARTITION UNBOUNDED
I20250809 19:57:50.731460 27255 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 3d2a3b45a3be4884af7a633ff569956b. 1 dirs total, 0 dirs full, 0 dirs failed
I20250809 19:57:50.737704 27122 tablet_service.cc:1468] Processing CreateTablet for tablet 3d2a3b45a3be4884af7a633ff569956b (DEFAULT_TABLE table=TestTable [id=858a0b20f1414d7982f199fda618e6c3]), partition=RANGE (key) PARTITION UNBOUNDED
I20250809 19:57:50.737704 27388 tablet_service.cc:1468] Processing CreateTablet for tablet 3d2a3b45a3be4884af7a633ff569956b (DEFAULT_TABLE table=TestTable [id=858a0b20f1414d7982f199fda618e6c3]), partition=RANGE (key) PARTITION UNBOUNDED
I20250809 19:57:50.739277 27388 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 3d2a3b45a3be4884af7a633ff569956b. 1 dirs total, 0 dirs full, 0 dirs failed
I20250809 19:57:50.739933 27122 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 3d2a3b45a3be4884af7a633ff569956b. 1 dirs total, 0 dirs full, 0 dirs failed
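[editor's note] The "Could only allocate 1 dirs of requested 3" lines above report the per-tablet data-directory allocation against a single configured data dir. A minimal sketch (not Kudu code) of the counts as the log states them ("1 dirs total, 0 dirs full, 0 dirs failed"):

def dirs_allocated(requested: int, total: int, full: int, failed: int) -> int:
    usable = total - full - failed
    return min(requested, usable)

assert dirs_allocated(requested=3, total=1, full=0, failed=0) == 1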
I20250809 19:57:50.750382 27472 tablet_bootstrap.cc:492] T 3d2a3b45a3be4884af7a633ff569956b P 97bb1b2006e44262ad66762a69a5338f: Bootstrap starting.
I20250809 19:57:50.756018 27472 tablet_bootstrap.cc:654] T 3d2a3b45a3be4884af7a633ff569956b P 97bb1b2006e44262ad66762a69a5338f: Neither blocks nor log segments found. Creating new log.
I20250809 19:57:50.757550 27472 log.cc:826] T 3d2a3b45a3be4884af7a633ff569956b P 97bb1b2006e44262ad66762a69a5338f: Log is configured to *not* fsync() on all Append() calls
I20250809 19:57:50.761951 27472 tablet_bootstrap.cc:492] T 3d2a3b45a3be4884af7a633ff569956b P 97bb1b2006e44262ad66762a69a5338f: No bootstrap required, opened a new log
I20250809 19:57:50.762439 27472 ts_tablet_manager.cc:1397] T 3d2a3b45a3be4884af7a633ff569956b P 97bb1b2006e44262ad66762a69a5338f: Time spent bootstrapping tablet: real 0.012s user 0.012s sys 0.000s
I20250809 19:57:50.763754 27474 tablet_bootstrap.cc:492] T 3d2a3b45a3be4884af7a633ff569956b P 1ec5695a872d4e78ac757c39beab980c: Bootstrap starting.
I20250809 19:57:50.768606 27475 tablet_bootstrap.cc:492] T 3d2a3b45a3be4884af7a633ff569956b P 7763497f5f804e21bbdcca51e1d78939: Bootstrap starting.
I20250809 19:57:50.770092 27474 tablet_bootstrap.cc:654] T 3d2a3b45a3be4884af7a633ff569956b P 1ec5695a872d4e78ac757c39beab980c: Neither blocks nor log segments found. Creating new log.
I20250809 19:57:50.772066 27474 log.cc:826] T 3d2a3b45a3be4884af7a633ff569956b P 1ec5695a872d4e78ac757c39beab980c: Log is configured to *not* fsync() on all Append() calls
I20250809 19:57:50.773051 27475 tablet_bootstrap.cc:654] T 3d2a3b45a3be4884af7a633ff569956b P 7763497f5f804e21bbdcca51e1d78939: Neither blocks nor log segments found. Creating new log.
I20250809 19:57:50.775418 27475 log.cc:826] T 3d2a3b45a3be4884af7a633ff569956b P 7763497f5f804e21bbdcca51e1d78939: Log is configured to *not* fsync() on all Append() calls
I20250809 19:57:50.777088 27474 tablet_bootstrap.cc:492] T 3d2a3b45a3be4884af7a633ff569956b P 1ec5695a872d4e78ac757c39beab980c: No bootstrap required, opened a new log
I20250809 19:57:50.777554 27474 ts_tablet_manager.cc:1397] T 3d2a3b45a3be4884af7a633ff569956b P 1ec5695a872d4e78ac757c39beab980c: Time spent bootstrapping tablet: real 0.014s user 0.007s sys 0.006s
I20250809 19:57:50.779592 27475 tablet_bootstrap.cc:492] T 3d2a3b45a3be4884af7a633ff569956b P 7763497f5f804e21bbdcca51e1d78939: No bootstrap required, opened a new log
I20250809 19:57:50.779932 27475 ts_tablet_manager.cc:1397] T 3d2a3b45a3be4884af7a633ff569956b P 7763497f5f804e21bbdcca51e1d78939: Time spent bootstrapping tablet: real 0.012s user 0.001s sys 0.009s
I20250809 19:57:50.786538 27472 raft_consensus.cc:357] T 3d2a3b45a3be4884af7a633ff569956b P 97bb1b2006e44262ad66762a69a5338f [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "97bb1b2006e44262ad66762a69a5338f" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 42323 } } peers { permanent_uuid: "1ec5695a872d4e78ac757c39beab980c" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 33289 } } peers { permanent_uuid: "7763497f5f804e21bbdcca51e1d78939" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 40437 } }
I20250809 19:57:50.787300 27472 raft_consensus.cc:738] T 3d2a3b45a3be4884af7a633ff569956b P 97bb1b2006e44262ad66762a69a5338f [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 97bb1b2006e44262ad66762a69a5338f, State: Initialized, Role: FOLLOWER
I20250809 19:57:50.787938 27472 consensus_queue.cc:260] T 3d2a3b45a3be4884af7a633ff569956b P 97bb1b2006e44262ad66762a69a5338f [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "97bb1b2006e44262ad66762a69a5338f" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 42323 } } peers { permanent_uuid: "1ec5695a872d4e78ac757c39beab980c" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 33289 } } peers { permanent_uuid: "7763497f5f804e21bbdcca51e1d78939" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 40437 } }
I20250809 19:57:50.791559 27472 ts_tablet_manager.cc:1428] T 3d2a3b45a3be4884af7a633ff569956b P 97bb1b2006e44262ad66762a69a5338f: Time spent starting tablet: real 0.028s user 0.027s sys 0.000s
I20250809 19:57:50.796322 27475 raft_consensus.cc:357] T 3d2a3b45a3be4884af7a633ff569956b P 7763497f5f804e21bbdcca51e1d78939 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "97bb1b2006e44262ad66762a69a5338f" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 42323 } } peers { permanent_uuid: "1ec5695a872d4e78ac757c39beab980c" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 33289 } } peers { permanent_uuid: "7763497f5f804e21bbdcca51e1d78939" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 40437 } }
I20250809 19:57:50.796892 27475 raft_consensus.cc:738] T 3d2a3b45a3be4884af7a633ff569956b P 7763497f5f804e21bbdcca51e1d78939 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 7763497f5f804e21bbdcca51e1d78939, State: Initialized, Role: FOLLOWER
I20250809 19:57:50.797483 27475 consensus_queue.cc:260] T 3d2a3b45a3be4884af7a633ff569956b P 7763497f5f804e21bbdcca51e1d78939 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "97bb1b2006e44262ad66762a69a5338f" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 42323 } } peers { permanent_uuid: "1ec5695a872d4e78ac757c39beab980c" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 33289 } } peers { permanent_uuid: "7763497f5f804e21bbdcca51e1d78939" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 40437 } }
I20250809 19:57:50.798902 27474 raft_consensus.cc:357] T 3d2a3b45a3be4884af7a633ff569956b P 1ec5695a872d4e78ac757c39beab980c [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "97bb1b2006e44262ad66762a69a5338f" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 42323 } } peers { permanent_uuid: "1ec5695a872d4e78ac757c39beab980c" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 33289 } } peers { permanent_uuid: "7763497f5f804e21bbdcca51e1d78939" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 40437 } }
I20250809 19:57:50.799834 27474 raft_consensus.cc:738] T 3d2a3b45a3be4884af7a633ff569956b P 1ec5695a872d4e78ac757c39beab980c [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 1ec5695a872d4e78ac757c39beab980c, State: Initialized, Role: FOLLOWER
I20250809 19:57:50.800184 27453 heartbeater.cc:499] Master 127.25.124.190:43259 was elected leader, sending a full tablet report...
I20250809 19:57:50.800436 27474 consensus_queue.cc:260] T 3d2a3b45a3be4884af7a633ff569956b P 1ec5695a872d4e78ac757c39beab980c [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "97bb1b2006e44262ad66762a69a5338f" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 42323 } } peers { permanent_uuid: "1ec5695a872d4e78ac757c39beab980c" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 33289 } } peers { permanent_uuid: "7763497f5f804e21bbdcca51e1d78939" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 40437 } }
I20250809 19:57:50.802116 27475 ts_tablet_manager.cc:1428] T 3d2a3b45a3be4884af7a633ff569956b P 7763497f5f804e21bbdcca51e1d78939: Time spent starting tablet: real 0.022s user 0.014s sys 0.006s
I20250809 19:57:50.806229 27474 ts_tablet_manager.cc:1428] T 3d2a3b45a3be4884af7a633ff569956b P 1ec5695a872d4e78ac757c39beab980c: Time spent starting tablet: real 0.028s user 0.015s sys 0.012s
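[editor's note] Each replica above logs its bootstrap and startup cost as "Time spent ...: real Xs user Ys sys Zs". A minimal sketch (not Kudu code) for pulling those timings out of such lines:

import re

TIMING_RE = re.compile(
    r"Time spent (?P<what>.+?): real (?P<real>[\d.]+)s "
    r"user (?P<user>[\d.]+)s sys (?P<sys>[\d.]+)s")

line = ("T 3d2a3b45a3be4884af7a633ff569956b P 97bb1b2006e44262ad66762a69a5338f: "
        "Time spent bootstrapping tablet: real 0.012s user 0.012s sys 0.000s")
m = TIMING_RE.search(line)
assert m and m.group("what") == "bootstrapping tablet"
assert float(m.group("real")) == 0.012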
I20250809 19:57:50.815470 26098 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250809 19:57:50.818127 26098 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 1ec5695a872d4e78ac757c39beab980c to finish bootstrapping
I20250809 19:57:50.828603 26098 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 97bb1b2006e44262ad66762a69a5338f to finish bootstrapping
I20250809 19:57:50.836865 26098 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 7763497f5f804e21bbdcca51e1d78939 to finish bootstrapping
W20250809 19:57:50.858335 27454 tablet.cc:2378] T 3d2a3b45a3be4884af7a633ff569956b P 7763497f5f804e21bbdcca51e1d78939: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250809 19:57:50.869218 27142 tablet_service.cc:1940] Received Run Leader Election RPC: tablet_id: "3d2a3b45a3be4884af7a633ff569956b"
dest_uuid: "1ec5695a872d4e78ac757c39beab980c"
from {username='slave'} at 127.0.0.1:44354
I20250809 19:57:50.869678 27142 raft_consensus.cc:491] T 3d2a3b45a3be4884af7a633ff569956b P 1ec5695a872d4e78ac757c39beab980c [term 0 FOLLOWER]: Starting forced leader election (received explicit request)
I20250809 19:57:50.869942 27142 raft_consensus.cc:3058] T 3d2a3b45a3be4884af7a633ff569956b P 1ec5695a872d4e78ac757c39beab980c [term 0 FOLLOWER]: Advancing to term 1
I20250809 19:57:50.873687 27142 raft_consensus.cc:513] T 3d2a3b45a3be4884af7a633ff569956b P 1ec5695a872d4e78ac757c39beab980c [term 1 FOLLOWER]: Starting forced leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "97bb1b2006e44262ad66762a69a5338f" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 42323 } } peers { permanent_uuid: "1ec5695a872d4e78ac757c39beab980c" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 33289 } } peers { permanent_uuid: "7763497f5f804e21bbdcca51e1d78939" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 40437 } }
I20250809 19:57:50.875653 27142 leader_election.cc:290] T 3d2a3b45a3be4884af7a633ff569956b P 1ec5695a872d4e78ac757c39beab980c [CANDIDATE]: Term 1 election: Requested vote from peers 97bb1b2006e44262ad66762a69a5338f (127.25.124.130:42323), 7763497f5f804e21bbdcca51e1d78939 (127.25.124.131:40437)
I20250809 19:57:50.883399 26098 cluster_itest_util.cc:257] Not converged past 1 yet: 0.0 0.0 0.0
I20250809 19:57:50.886080 27408 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "3d2a3b45a3be4884af7a633ff569956b" candidate_uuid: "1ec5695a872d4e78ac757c39beab980c" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: true dest_uuid: "7763497f5f804e21bbdcca51e1d78939"
I20250809 19:57:50.886570 27408 raft_consensus.cc:3058] T 3d2a3b45a3be4884af7a633ff569956b P 7763497f5f804e21bbdcca51e1d78939 [term 0 FOLLOWER]: Advancing to term 1
I20250809 19:57:50.887369 27275 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "3d2a3b45a3be4884af7a633ff569956b" candidate_uuid: "1ec5695a872d4e78ac757c39beab980c" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: true dest_uuid: "97bb1b2006e44262ad66762a69a5338f"
I20250809 19:57:50.887858 27275 raft_consensus.cc:3058] T 3d2a3b45a3be4884af7a633ff569956b P 97bb1b2006e44262ad66762a69a5338f [term 0 FOLLOWER]: Advancing to term 1
I20250809 19:57:50.890359 27408 raft_consensus.cc:2466] T 3d2a3b45a3be4884af7a633ff569956b P 7763497f5f804e21bbdcca51e1d78939 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 1ec5695a872d4e78ac757c39beab980c in term 1.
I20250809 19:57:50.891178 27076 leader_election.cc:304] T 3d2a3b45a3be4884af7a633ff569956b P 1ec5695a872d4e78ac757c39beab980c [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 1ec5695a872d4e78ac757c39beab980c, 7763497f5f804e21bbdcca51e1d78939; no voters:
I20250809 19:57:50.891532 27275 raft_consensus.cc:2466] T 3d2a3b45a3be4884af7a633ff569956b P 97bb1b2006e44262ad66762a69a5338f [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 1ec5695a872d4e78ac757c39beab980c in term 1.
I20250809 19:57:50.891824 27480 raft_consensus.cc:2802] T 3d2a3b45a3be4884af7a633ff569956b P 1ec5695a872d4e78ac757c39beab980c [term 1 FOLLOWER]: Leader election won for term 1
I20250809 19:57:50.893343 27480 raft_consensus.cc:695] T 3d2a3b45a3be4884af7a633ff569956b P 1ec5695a872d4e78ac757c39beab980c [term 1 LEADER]: Becoming Leader. State: Replica: 1ec5695a872d4e78ac757c39beab980c, State: Running, Role: LEADER
I20250809 19:57:50.894040 27480 consensus_queue.cc:237] T 3d2a3b45a3be4884af7a633ff569956b P 1ec5695a872d4e78ac757c39beab980c [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "97bb1b2006e44262ad66762a69a5338f" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 42323 } } peers { permanent_uuid: "1ec5695a872d4e78ac757c39beab980c" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 33289 } } peers { permanent_uuid: "7763497f5f804e21bbdcca51e1d78939" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 40437 } }
I20250809 19:57:50.902802 26998 catalog_manager.cc:5582] T 3d2a3b45a3be4884af7a633ff569956b P 1ec5695a872d4e78ac757c39beab980c reported cstate change: term changed from 0 to 1, leader changed from <none> to 1ec5695a872d4e78ac757c39beab980c (127.25.124.129). New cstate: current_term: 1 leader_uuid: "1ec5695a872d4e78ac757c39beab980c" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "97bb1b2006e44262ad66762a69a5338f" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 42323 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "1ec5695a872d4e78ac757c39beab980c" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 33289 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "7763497f5f804e21bbdcca51e1d78939" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 40437 } health_report { overall_health: UNKNOWN } } }
W20250809 19:57:50.951403 27188 tablet.cc:2378] T 3d2a3b45a3be4884af7a633ff569956b P 1ec5695a872d4e78ac757c39beab980c: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250809 19:57:50.987540 26098 cluster_itest_util.cc:257] Not converged past 1 yet: 1.1 0.0 0.0
W20250809 19:57:51.014926 27321 tablet.cc:2378] T 3d2a3b45a3be4884af7a633ff569956b P 97bb1b2006e44262ad66762a69a5338f: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250809 19:57:51.191830 26098 cluster_itest_util.cc:257] Not converged past 1 yet: 1.1 0.0 0.0
I20250809 19:57:51.301446 27480 consensus_queue.cc:1035] T 3d2a3b45a3be4884af7a633ff569956b P 1ec5695a872d4e78ac757c39beab980c [LEADER]: Connected to new peer: Peer: permanent_uuid: "97bb1b2006e44262ad66762a69a5338f" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 42323 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250809 19:57:51.317519 27491 consensus_queue.cc:1035] T 3d2a3b45a3be4884af7a633ff569956b P 1ec5695a872d4e78ac757c39beab980c [LEADER]: Connected to new peer: Peer: permanent_uuid: "7763497f5f804e21bbdcca51e1d78939" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 40437 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250809 19:57:52.897006 27142 tablet_service.cc:1968] Received LeaderStepDown RPC: tablet_id: "3d2a3b45a3be4884af7a633ff569956b"
dest_uuid: "1ec5695a872d4e78ac757c39beab980c"
mode: GRACEFUL
from {username='slave'} at 127.0.0.1:44358
I20250809 19:57:52.897559 27142 raft_consensus.cc:604] T 3d2a3b45a3be4884af7a633ff569956b P 1ec5695a872d4e78ac757c39beab980c [term 1 LEADER]: Received request to transfer leadership
I20250809 19:57:53.253288 27515 raft_consensus.cc:991] T 3d2a3b45a3be4884af7a633ff569956b P 1ec5695a872d4e78ac757c39beab980c: : Instructing follower 7763497f5f804e21bbdcca51e1d78939 to start an election
I20250809 19:57:53.253636 27514 raft_consensus.cc:1079] T 3d2a3b45a3be4884af7a633ff569956b P 1ec5695a872d4e78ac757c39beab980c [term 1 LEADER]: Signalling peer 7763497f5f804e21bbdcca51e1d78939 to start an election
I20250809 19:57:53.254721 27408 tablet_service.cc:1940] Received Run Leader Election RPC: tablet_id: "3d2a3b45a3be4884af7a633ff569956b"
dest_uuid: "7763497f5f804e21bbdcca51e1d78939"
from {username='slave'} at 127.25.124.129:43047
I20250809 19:57:53.255123 27408 raft_consensus.cc:491] T 3d2a3b45a3be4884af7a633ff569956b P 7763497f5f804e21bbdcca51e1d78939 [term 1 FOLLOWER]: Starting forced leader election (received explicit request)
I20250809 19:57:53.255338 27408 raft_consensus.cc:3058] T 3d2a3b45a3be4884af7a633ff569956b P 7763497f5f804e21bbdcca51e1d78939 [term 1 FOLLOWER]: Advancing to term 2
I20250809 19:57:53.258806 27408 raft_consensus.cc:513] T 3d2a3b45a3be4884af7a633ff569956b P 7763497f5f804e21bbdcca51e1d78939 [term 2 FOLLOWER]: Starting forced leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "97bb1b2006e44262ad66762a69a5338f" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 42323 } } peers { permanent_uuid: "1ec5695a872d4e78ac757c39beab980c" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 33289 } } peers { permanent_uuid: "7763497f5f804e21bbdcca51e1d78939" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 40437 } }
I20250809 19:57:53.260607 27408 leader_election.cc:290] T 3d2a3b45a3be4884af7a633ff569956b P 7763497f5f804e21bbdcca51e1d78939 [CANDIDATE]: Term 2 election: Requested vote from peers 97bb1b2006e44262ad66762a69a5338f (127.25.124.130:42323), 1ec5695a872d4e78ac757c39beab980c (127.25.124.129:33289)
I20250809 19:57:53.269724 27275 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "3d2a3b45a3be4884af7a633ff569956b" candidate_uuid: "7763497f5f804e21bbdcca51e1d78939" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: true dest_uuid: "97bb1b2006e44262ad66762a69a5338f"
I20250809 19:57:53.270080 27275 raft_consensus.cc:3058] T 3d2a3b45a3be4884af7a633ff569956b P 97bb1b2006e44262ad66762a69a5338f [term 1 FOLLOWER]: Advancing to term 2
I20250809 19:57:53.270401 27142 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "3d2a3b45a3be4884af7a633ff569956b" candidate_uuid: "7763497f5f804e21bbdcca51e1d78939" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: true dest_uuid: "1ec5695a872d4e78ac757c39beab980c"
I20250809 19:57:53.270800 27142 raft_consensus.cc:3053] T 3d2a3b45a3be4884af7a633ff569956b P 1ec5695a872d4e78ac757c39beab980c [term 1 LEADER]: Stepping down as leader of term 1
I20250809 19:57:53.271003 27142 raft_consensus.cc:738] T 3d2a3b45a3be4884af7a633ff569956b P 1ec5695a872d4e78ac757c39beab980c [term 1 LEADER]: Becoming Follower/Learner. State: Replica: 1ec5695a872d4e78ac757c39beab980c, State: Running, Role: LEADER
I20250809 19:57:53.271437 27142 consensus_queue.cc:260] T 3d2a3b45a3be4884af7a633ff569956b P 1ec5695a872d4e78ac757c39beab980c [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 1, Committed index: 1, Last appended: 1.1, Last appended by leader: 1, Current term: 1, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "97bb1b2006e44262ad66762a69a5338f" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 42323 } } peers { permanent_uuid: "1ec5695a872d4e78ac757c39beab980c" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 33289 } } peers { permanent_uuid: "7763497f5f804e21bbdcca51e1d78939" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 40437 } }
I20250809 19:57:53.272181 27142 raft_consensus.cc:3058] T 3d2a3b45a3be4884af7a633ff569956b P 1ec5695a872d4e78ac757c39beab980c [term 1 FOLLOWER]: Advancing to term 2
I20250809 19:57:53.273701 27275 raft_consensus.cc:2466] T 3d2a3b45a3be4884af7a633ff569956b P 97bb1b2006e44262ad66762a69a5338f [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 7763497f5f804e21bbdcca51e1d78939 in term 2.
I20250809 19:57:53.274428 27341 leader_election.cc:304] T 3d2a3b45a3be4884af7a633ff569956b P 7763497f5f804e21bbdcca51e1d78939 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 7763497f5f804e21bbdcca51e1d78939, 97bb1b2006e44262ad66762a69a5338f; no voters:
I20250809 19:57:53.275599 27142 raft_consensus.cc:2466] T 3d2a3b45a3be4884af7a633ff569956b P 1ec5695a872d4e78ac757c39beab980c [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 7763497f5f804e21bbdcca51e1d78939 in term 2.
I20250809 19:57:53.276325 27519 raft_consensus.cc:2802] T 3d2a3b45a3be4884af7a633ff569956b P 7763497f5f804e21bbdcca51e1d78939 [term 2 FOLLOWER]: Leader election won for term 2
I20250809 19:57:53.277544 27519 raft_consensus.cc:695] T 3d2a3b45a3be4884af7a633ff569956b P 7763497f5f804e21bbdcca51e1d78939 [term 2 LEADER]: Becoming Leader. State: Replica: 7763497f5f804e21bbdcca51e1d78939, State: Running, Role: LEADER
I20250809 19:57:53.278175 27519 consensus_queue.cc:237] T 3d2a3b45a3be4884af7a633ff569956b P 7763497f5f804e21bbdcca51e1d78939 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 1, Committed index: 1, Last appended: 1.1, Last appended by leader: 1, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "97bb1b2006e44262ad66762a69a5338f" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 42323 } } peers { permanent_uuid: "1ec5695a872d4e78ac757c39beab980c" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 33289 } } peers { permanent_uuid: "7763497f5f804e21bbdcca51e1d78939" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 40437 } }
I20250809 19:57:53.285018 26996 catalog_manager.cc:5582] T 3d2a3b45a3be4884af7a633ff569956b P 7763497f5f804e21bbdcca51e1d78939 reported cstate change: term changed from 1 to 2, leader changed from 1ec5695a872d4e78ac757c39beab980c (127.25.124.129) to 7763497f5f804e21bbdcca51e1d78939 (127.25.124.131). New cstate: current_term: 2 leader_uuid: "7763497f5f804e21bbdcca51e1d78939" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "97bb1b2006e44262ad66762a69a5338f" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 42323 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "1ec5695a872d4e78ac757c39beab980c" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 33289 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "7763497f5f804e21bbdcca51e1d78939" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 40437 } health_report { overall_health: HEALTHY } } }
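(Aside: the election summary above, "received 2 responses out of 3 voters: 2 yes votes; 0 no votes", is plain majority counting: with 3 voters the required majority is 2, so the candidate wins as soon as its own vote plus one peer's yes vote arrive. A minimal C++ sketch of that decision rule, for illustration only and not Kudu's actual leader_election.cc logic:

#include <iostream>

// Returns true once yes votes (the candidate's own vote included)
// reach a strict majority of the voter set.
bool ElectionWon(int num_voters, int yes_votes) {
  const int majority = num_voters / 2 + 1;  // 3 voters -> majority of 2
  return yes_votes >= majority;
}

int main() {
  // Mirrors the log: 3 voters, 2 yes votes -> candidate won.
  std::cout << std::boolalpha << ElectionWon(3, 2) << "\n";  // prints "true"
  return 0;
}
)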
I20250809 19:57:53.722108 27275 raft_consensus.cc:1273] T 3d2a3b45a3be4884af7a633ff569956b P 97bb1b2006e44262ad66762a69a5338f [term 2 FOLLOWER]: Refusing update from remote peer 7763497f5f804e21bbdcca51e1d78939: Log matching property violated. Preceding OpId in replica: term: 1 index: 1. Preceding OpId from leader: term: 2 index: 2. (index mismatch)
I20250809 19:57:53.722957 27519 consensus_queue.cc:1035] T 3d2a3b45a3be4884af7a633ff569956b P 7763497f5f804e21bbdcca51e1d78939 [LEADER]: Connected to new peer: Peer: permanent_uuid: "97bb1b2006e44262ad66762a69a5338f" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 42323 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 1, Time since last communication: 0.000s
I20250809 19:57:53.731969 27142 raft_consensus.cc:1273] T 3d2a3b45a3be4884af7a633ff569956b P 1ec5695a872d4e78ac757c39beab980c [term 2 FOLLOWER]: Refusing update from remote peer 7763497f5f804e21bbdcca51e1d78939: Log matching property violated. Preceding OpId in replica: term: 1 index: 1. Preceding OpId from leader: term: 2 index: 2. (index mismatch)
I20250809 19:57:53.733373 27520 consensus_queue.cc:1035] T 3d2a3b45a3be4884af7a633ff569956b P 7763497f5f804e21bbdcca51e1d78939 [LEADER]: Connected to new peer: Peer: permanent_uuid: "1ec5695a872d4e78ac757c39beab980c" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 33289 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 1, Time since last communication: 0.000s
I20250809 19:57:55.380810 27142 tablet_service.cc:1968] Received LeaderStepDown RPC: tablet_id: "3d2a3b45a3be4884af7a633ff569956b"
dest_uuid: "1ec5695a872d4e78ac757c39beab980c"
mode: GRACEFUL
from {username='slave'} at 127.0.0.1:44374
I20250809 19:57:55.381188 27142 raft_consensus.cc:604] T 3d2a3b45a3be4884af7a633ff569956b P 1ec5695a872d4e78ac757c39beab980c [term 2 FOLLOWER]: Received request to transfer leadership
I20250809 19:57:55.381402 27142 raft_consensus.cc:612] T 3d2a3b45a3be4884af7a633ff569956b P 1ec5695a872d4e78ac757c39beab980c [term 2 FOLLOWER]: Rejecting request to transfer leadership while not leader
I20250809 19:57:56.414795 26098 external_mini_cluster.cc:1658] Killing /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu with pid 27058
I20250809 19:57:56.435029 26098 external_mini_cluster.cc:1658] Killing /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu with pid 27191
I20250809 19:57:56.456730 26098 external_mini_cluster.cc:1658] Killing /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu with pid 27324
I20250809 19:57:56.477686 26098 external_mini_cluster.cc:1658] Killing /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu with pid 26965
2025-08-09T19:57:56Z chronyd exiting
[ OK ] AdminCliTest.TestGracefulSpecificLeaderStepDown (12899 ms)
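(Aside: the GRACEFUL LeaderStepDown RPCs seen in this test are the kind produced by Kudu's leadership-transfer tooling. As a rough illustration only — the exact flag names and argument order are an assumption and may differ by version, so this is not the test's literal command — a graceful transfer of the tablet above to the replica that won the election could be requested with:

kudu tablet leader_step_down \
    --new_leader_uuid=7763497f5f804e21bbdcca51e1d78939 \
    <master_addresses> \
    3d2a3b45a3be4884af7a633ff569956b
)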
[ RUN ] AdminCliTest.TestDescribeTableColumnFlags
I20250809 19:57:56.526865 26098 test_util.cc:276] Using random seed: 458203898
I20250809 19:57:56.530318 26098 ts_itest-base.cc:115] Starting cluster with:
I20250809 19:57:56.530450 26098 ts_itest-base.cc:116] --------------
I20250809 19:57:56.530592 26098 ts_itest-base.cc:117] 3 tablet servers
I20250809 19:57:56.530712 26098 ts_itest-base.cc:118] 3 replicas per TS
I20250809 19:57:56.530830 26098 ts_itest-base.cc:119] --------------
2025-08-09T19:57:56Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-08-09T19:57:56Z Disabled control of system clock
I20250809 19:57:56.559687 26098 external_mini_cluster.cc:1366] Running /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
/tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.25.124.190:41025
--webserver_interface=127.25.124.190
--webserver_port=0
--builtin_ntp_servers=127.25.124.148:37927
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.25.124.190:41025 with env {}
W20250809 19:57:56.806602 27562 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 19:57:56.807044 27562 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250809 19:57:56.807390 27562 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 19:57:56.832975 27562 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250809 19:57:56.833233 27562 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250809 19:57:56.833469 27562 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250809 19:57:56.833700 27562 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250809 19:57:56.863124 27562 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.25.124.148:37927
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.25.124.190:41025
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.25.124.190:41025
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.25.124.190
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 09 Aug 2025 19:43:21 UTC on 5fd53c4cbb9d
build id 7489
TSAN enabled
I20250809 19:57:56.864266 27562 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250809 19:57:56.865967 27562 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250809 19:57:56.875815 27568 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:57:56.876161 27569 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:57:58.058482 27571 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:57:58.060567 27570 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1179 milliseconds
W20250809 19:57:58.060827 27562 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.185s user 0.443s sys 0.724s
W20250809 19:57:58.061168 27562 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.186s user 0.443s sys 0.724s
I20250809 19:57:58.061380 27562 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250809 19:57:58.062338 27562 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250809 19:57:58.064757 27562 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250809 19:57:58.066075 27562 hybrid_clock.cc:648] HybridClock initialized: now 1754769478066028 us; error 46 us; skew 500 ppm
I20250809 19:57:58.066807 27562 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250809 19:57:58.073794 27562 webserver.cc:489] Webserver started at http://127.25.124.190:44561/ using document root <none> and password file <none>
I20250809 19:57:58.074584 27562 fs_manager.cc:362] Metadata directory not provided
I20250809 19:57:58.074791 27562 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250809 19:57:58.075186 27562 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250809 19:57:58.079007 27562 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "934f0df7c3f049979e67e5aa266e9bbb"
format_stamp: "Formatted at 2025-08-09 19:57:58 on dist-test-slave-xzln"
I20250809 19:57:58.079985 27562 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "934f0df7c3f049979e67e5aa266e9bbb"
format_stamp: "Formatted at 2025-08-09 19:57:58 on dist-test-slave-xzln"
I20250809 19:57:58.086927 27562 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.005s sys 0.001s
I20250809 19:57:58.092224 27578 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250809 19:57:58.093204 27562 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.003s sys 0.002s
I20250809 19:57:58.093489 27562 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
uuid: "934f0df7c3f049979e67e5aa266e9bbb"
format_stamp: "Formatted at 2025-08-09 19:57:58 on dist-test-slave-xzln"
I20250809 19:57:58.093771 27562 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250809 19:57:58.161355 27562 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250809 19:57:58.162674 27562 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250809 19:57:58.163069 27562 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250809 19:57:58.224454 27562 rpc_server.cc:307] RPC server started. Bound to: 127.25.124.190:41025
I20250809 19:57:58.224509 27629 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.25.124.190:41025 every 8 connection(s)
I20250809 19:57:58.226886 27562 server_base.cc:1179] Dumped server information to /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250809 19:57:58.230854 26098 external_mini_cluster.cc:1428] Started /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu as pid 27562
I20250809 19:57:58.231160 26098 external_mini_cluster.cc:1442] Reading /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250809 19:57:58.231315 27630 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250809 19:57:58.251662 27630 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 934f0df7c3f049979e67e5aa266e9bbb: Bootstrap starting.
I20250809 19:57:58.256042 27630 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 934f0df7c3f049979e67e5aa266e9bbb: Neither blocks nor log segments found. Creating new log.
I20250809 19:57:58.257365 27630 log.cc:826] T 00000000000000000000000000000000 P 934f0df7c3f049979e67e5aa266e9bbb: Log is configured to *not* fsync() on all Append() calls
I20250809 19:57:58.261476 27630 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 934f0df7c3f049979e67e5aa266e9bbb: No bootstrap required, opened a new log
I20250809 19:57:58.275892 27630 raft_consensus.cc:357] T 00000000000000000000000000000000 P 934f0df7c3f049979e67e5aa266e9bbb [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "934f0df7c3f049979e67e5aa266e9bbb" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 41025 } }
I20250809 19:57:58.276424 27630 raft_consensus.cc:383] T 00000000000000000000000000000000 P 934f0df7c3f049979e67e5aa266e9bbb [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250809 19:57:58.276597 27630 raft_consensus.cc:738] T 00000000000000000000000000000000 P 934f0df7c3f049979e67e5aa266e9bbb [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 934f0df7c3f049979e67e5aa266e9bbb, State: Initialized, Role: FOLLOWER
I20250809 19:57:58.277067 27630 consensus_queue.cc:260] T 00000000000000000000000000000000 P 934f0df7c3f049979e67e5aa266e9bbb [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "934f0df7c3f049979e67e5aa266e9bbb" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 41025 } }
I20250809 19:57:58.277451 27630 raft_consensus.cc:397] T 00000000000000000000000000000000 P 934f0df7c3f049979e67e5aa266e9bbb [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250809 19:57:58.277634 27630 raft_consensus.cc:491] T 00000000000000000000000000000000 P 934f0df7c3f049979e67e5aa266e9bbb [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250809 19:57:58.277853 27630 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 934f0df7c3f049979e67e5aa266e9bbb [term 0 FOLLOWER]: Advancing to term 1
I20250809 19:57:58.281136 27630 raft_consensus.cc:513] T 00000000000000000000000000000000 P 934f0df7c3f049979e67e5aa266e9bbb [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "934f0df7c3f049979e67e5aa266e9bbb" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 41025 } }
I20250809 19:57:58.281630 27630 leader_election.cc:304] T 00000000000000000000000000000000 P 934f0df7c3f049979e67e5aa266e9bbb [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 934f0df7c3f049979e67e5aa266e9bbb; no voters:
I20250809 19:57:58.282953 27630 leader_election.cc:290] T 00000000000000000000000000000000 P 934f0df7c3f049979e67e5aa266e9bbb [CANDIDATE]: Term 1 election: Requested vote from peers
I20250809 19:57:58.283615 27635 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 934f0df7c3f049979e67e5aa266e9bbb [term 1 FOLLOWER]: Leader election won for term 1
I20250809 19:57:58.285800 27635 raft_consensus.cc:695] T 00000000000000000000000000000000 P 934f0df7c3f049979e67e5aa266e9bbb [term 1 LEADER]: Becoming Leader. State: Replica: 934f0df7c3f049979e67e5aa266e9bbb, State: Running, Role: LEADER
I20250809 19:57:58.286207 27630 sys_catalog.cc:564] T 00000000000000000000000000000000 P 934f0df7c3f049979e67e5aa266e9bbb [sys.catalog]: configured and running, proceeding with master startup.
I20250809 19:57:58.286448 27635 consensus_queue.cc:237] T 00000000000000000000000000000000 P 934f0df7c3f049979e67e5aa266e9bbb [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "934f0df7c3f049979e67e5aa266e9bbb" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 41025 } }
I20250809 19:57:58.292886 27637 sys_catalog.cc:455] T 00000000000000000000000000000000 P 934f0df7c3f049979e67e5aa266e9bbb [sys.catalog]: SysCatalogTable state changed. Reason: New leader 934f0df7c3f049979e67e5aa266e9bbb. Latest consensus state: current_term: 1 leader_uuid: "934f0df7c3f049979e67e5aa266e9bbb" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "934f0df7c3f049979e67e5aa266e9bbb" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 41025 } } }
I20250809 19:57:58.293381 27637 sys_catalog.cc:458] T 00000000000000000000000000000000 P 934f0df7c3f049979e67e5aa266e9bbb [sys.catalog]: This master's current role is: LEADER
I20250809 19:57:58.293300 27636 sys_catalog.cc:455] T 00000000000000000000000000000000 P 934f0df7c3f049979e67e5aa266e9bbb [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "934f0df7c3f049979e67e5aa266e9bbb" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "934f0df7c3f049979e67e5aa266e9bbb" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 41025 } } }
I20250809 19:57:58.293972 27636 sys_catalog.cc:458] T 00000000000000000000000000000000 P 934f0df7c3f049979e67e5aa266e9bbb [sys.catalog]: This master's current role is: LEADER
I20250809 19:57:58.296408 27643 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250809 19:57:58.305663 27643 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250809 19:57:58.321725 27643 catalog_manager.cc:1349] Generated new cluster ID: a2ec426e557a487e83362894231eaf70
I20250809 19:57:58.321914 27643 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250809 19:57:58.343138 27643 catalog_manager.cc:1372] Generated new certificate authority record
I20250809 19:57:58.344246 27643 catalog_manager.cc:1506] Loading token signing keys...
I20250809 19:57:58.354954 27643 catalog_manager.cc:5955] T 00000000000000000000000000000000 P 934f0df7c3f049979e67e5aa266e9bbb: Generated new TSK 0
I20250809 19:57:58.355801 27643 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250809 19:57:58.372592 26098 external_mini_cluster.cc:1366] Running /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
/tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.25.124.129:0
--local_ip_for_outbound_sockets=127.25.124.129
--webserver_interface=127.25.124.129
--webserver_port=0
--tserver_master_addrs=127.25.124.190:41025
--builtin_ntp_servers=127.25.124.148:37927
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
W20250809 19:57:58.631577 27654 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 19:57:58.631985 27654 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250809 19:57:58.632396 27654 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 19:57:58.658021 27654 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250809 19:57:58.658737 27654 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.25.124.129
I20250809 19:57:58.688201 27654 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.25.124.148:37927
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.25.124.129:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.25.124.129
--webserver_port=0
--tserver_master_addrs=127.25.124.190:41025
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.25.124.129
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 09 Aug 2025 19:43:21 UTC on 5fd53c4cbb9d
build id 7489
TSAN enabled
I20250809 19:57:58.689250 27654 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250809 19:57:58.690608 27654 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250809 19:57:58.701261 27660 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:57:58.702538 27661 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:58:00.034219 27654 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.332s user 0.470s sys 0.846s
W20250809 19:58:00.034636 27654 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.332s user 0.472s sys 0.848s
W20250809 19:58:00.035625 27662 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection timed out after 1332 milliseconds
W20250809 19:58:00.036629 27663 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250809 19:58:00.036695 27654 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250809 19:58:00.038256 27654 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250809 19:58:00.047677 27654 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250809 19:58:00.049194 27654 hybrid_clock.cc:648] HybridClock initialized: now 1754769480049122 us; error 70 us; skew 500 ppm
I20250809 19:58:00.050359 27654 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250809 19:58:00.059834 27654 webserver.cc:489] Webserver started at http://127.25.124.129:44355/ using document root <none> and password file <none>
I20250809 19:58:00.061035 27654 fs_manager.cc:362] Metadata directory not provided
I20250809 19:58:00.061282 27654 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250809 19:58:00.061841 27654 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250809 19:58:00.068106 27654 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "e4f0f700225d4f1e988ff2d20693e7cd"
format_stamp: "Formatted at 2025-08-09 19:58:00 on dist-test-slave-xzln"
I20250809 19:58:00.069554 27654 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "e4f0f700225d4f1e988ff2d20693e7cd"
format_stamp: "Formatted at 2025-08-09 19:58:00 on dist-test-slave-xzln"
I20250809 19:58:00.078007 27654 fs_manager.cc:696] Time spent creating directory manager: real 0.008s user 0.001s sys 0.009s
I20250809 19:58:00.085296 27670 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250809 19:58:00.086402 27654 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.006s sys 0.000s
I20250809 19:58:00.086758 27654 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "e4f0f700225d4f1e988ff2d20693e7cd"
format_stamp: "Formatted at 2025-08-09 19:58:00 on dist-test-slave-xzln"
I20250809 19:58:00.087205 27654 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250809 19:58:00.155097 27654 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250809 19:58:00.156963 27654 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250809 19:58:00.157480 27654 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250809 19:58:00.160077 27654 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250809 19:58:00.163669 27654 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250809 19:58:00.163853 27654 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250809 19:58:00.164085 27654 ts_tablet_manager.cc:610] Registered 0 tablets
I20250809 19:58:00.164222 27654 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250809 19:58:00.297065 27654 rpc_server.cc:307] RPC server started. Bound to: 127.25.124.129:39847
I20250809 19:58:00.297152 27782 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.25.124.129:39847 every 8 connection(s)
I20250809 19:58:00.299388 27654 server_base.cc:1179] Dumped server information to /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250809 19:58:00.305356 26098 external_mini_cluster.cc:1428] Started /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu as pid 27654
I20250809 19:58:00.305856 26098 external_mini_cluster.cc:1442] Reading /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250809 19:58:00.310642 26098 external_mini_cluster.cc:1366] Running /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
/tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.25.124.130:0
--local_ip_for_outbound_sockets=127.25.124.130
--webserver_interface=127.25.124.130
--webserver_port=0
--tserver_master_addrs=127.25.124.190:41025
--builtin_ntp_servers=127.25.124.148:37927
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250809 19:58:00.322319 27783 heartbeater.cc:344] Connected to a master server at 127.25.124.190:41025
I20250809 19:58:00.322778 27783 heartbeater.cc:461] Registering TS with master...
I20250809 19:58:00.323901 27783 heartbeater.cc:507] Master 127.25.124.190:41025 requested a full tablet report, sending...
I20250809 19:58:00.326542 27595 ts_manager.cc:194] Registered new tserver with Master: e4f0f700225d4f1e988ff2d20693e7cd (127.25.124.129:39847)
I20250809 19:58:00.328933 27595 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.25.124.129:52463
W20250809 19:58:00.564610 27787 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 19:58:00.565009 27787 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250809 19:58:00.565420 27787 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 19:58:00.591161 27787 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250809 19:58:00.591872 27787 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.25.124.130
I20250809 19:58:00.619343 27787 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.25.124.148:37927
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.25.124.130:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.25.124.130
--webserver_port=0
--tserver_master_addrs=127.25.124.190:41025
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.25.124.130
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 09 Aug 2025 19:43:21 UTC on 5fd53c4cbb9d
build id 7489
TSAN enabled
I20250809 19:58:00.620347 27787 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250809 19:58:00.621632 27787 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250809 19:58:00.631963 27793 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250809 19:58:01.332500 27783 heartbeater.cc:499] Master 127.25.124.190:41025 was elected leader, sending a full tablet report...
W20250809 19:58:02.034276 27792 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 27787
W20250809 19:58:02.096242 27792 kernel_stack_watchdog.cc:198] Thread 27787 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:642 for 400ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250809 19:58:00.632659 27794 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:58:02.096752 27787 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.465s user 0.602s sys 0.853s
W20250809 19:58:02.097082 27787 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.465s user 0.602s sys 0.853s
W20250809 19:58:02.098493 27796 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:58:02.100332 27795 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1463 milliseconds
I20250809 19:58:02.100342 27787 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250809 19:58:02.101336 27787 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250809 19:58:02.103052 27787 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250809 19:58:02.104363 27787 hybrid_clock.cc:648] HybridClock initialized: now 1754769482104329 us; error 33 us; skew 500 ppm
I20250809 19:58:02.105000 27787 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250809 19:58:02.109980 27787 webserver.cc:489] Webserver started at http://127.25.124.130:39129/ using document root <none> and password file <none>
I20250809 19:58:02.110738 27787 fs_manager.cc:362] Metadata directory not provided
I20250809 19:58:02.110916 27787 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250809 19:58:02.111341 27787 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250809 19:58:02.116766 27787 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data/instance:
uuid: "46509f54391e42618b4436627667487c"
format_stamp: "Formatted at 2025-08-09 19:58:02 on dist-test-slave-xzln"
I20250809 19:58:02.117709 27787 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal/instance:
uuid: "46509f54391e42618b4436627667487c"
format_stamp: "Formatted at 2025-08-09 19:58:02 on dist-test-slave-xzln"
I20250809 19:58:02.123476 27787 fs_manager.cc:696] Time spent creating directory manager: real 0.005s user 0.005s sys 0.002s
I20250809 19:58:02.128448 27803 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250809 19:58:02.129253 27787 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.000s sys 0.005s
I20250809 19:58:02.129505 27787 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "46509f54391e42618b4436627667487c"
format_stamp: "Formatted at 2025-08-09 19:58:02 on dist-test-slave-xzln"
I20250809 19:58:02.129765 27787 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250809 19:58:02.177246 27787 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250809 19:58:02.178484 27787 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250809 19:58:02.178857 27787 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250809 19:58:02.181087 27787 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250809 19:58:02.184350 27787 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250809 19:58:02.184525 27787 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250809 19:58:02.184733 27787 ts_tablet_manager.cc:610] Registered 0 tablets
I20250809 19:58:02.184872 27787 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250809 19:58:02.301602 27787 rpc_server.cc:307] RPC server started. Bound to: 127.25.124.130:33789
I20250809 19:58:02.301746 27915 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.25.124.130:33789 every 8 connection(s)
I20250809 19:58:02.303810 27787 server_base.cc:1179] Dumped server information to /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250809 19:58:02.312253 26098 external_mini_cluster.cc:1428] Started /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu as pid 27787
I20250809 19:58:02.312575 26098 external_mini_cluster.cc:1442] Reading /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal/instance
I20250809 19:58:02.318145 26098 external_mini_cluster.cc:1366] Running /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
/tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.25.124.131:0
--local_ip_for_outbound_sockets=127.25.124.131
--webserver_interface=127.25.124.131
--webserver_port=0
--tserver_master_addrs=127.25.124.190:41025
--builtin_ntp_servers=127.25.124.148:37927
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250809 19:58:02.322328 27916 heartbeater.cc:344] Connected to a master server at 127.25.124.190:41025
I20250809 19:58:02.322747 27916 heartbeater.cc:461] Registering TS with master...
I20250809 19:58:02.323905 27916 heartbeater.cc:507] Master 127.25.124.190:41025 requested a full tablet report, sending...
I20250809 19:58:02.325958 27595 ts_manager.cc:194] Registered new tserver with Master: 46509f54391e42618b4436627667487c (127.25.124.130:33789)
I20250809 19:58:02.327484 27595 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.25.124.130:43843
W20250809 19:58:02.570345 27920 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 19:58:02.570755 27920 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250809 19:58:02.571162 27920 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 19:58:02.596246 27920 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250809 19:58:02.596922 27920 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.25.124.131
I20250809 19:58:02.624155 27920 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.25.124.148:37927
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.25.124.131:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.25.124.131
--webserver_port=0
--tserver_master_addrs=127.25.124.190:41025
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.25.124.131
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 09 Aug 2025 19:43:21 UTC on 5fd53c4cbb9d
build id 7489
TSAN enabled
I20250809 19:58:02.625216 27920 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250809 19:58:02.626587 27920 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250809 19:58:02.636693 27926 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250809 19:58:03.330070 27916 heartbeater.cc:499] Master 127.25.124.190:41025 was elected leader, sending a full tablet report...
W20250809 19:58:02.636966 27927 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:58:03.758272 27929 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:58:03.760778 27928 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1121 milliseconds
W20250809 19:58:03.761551 27920 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.124s user 0.423s sys 0.689s
W20250809 19:58:03.761765 27920 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.125s user 0.423s sys 0.689s
I20250809 19:58:03.761952 27920 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250809 19:58:03.762828 27920 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250809 19:58:03.764737 27920 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250809 19:58:03.766038 27920 hybrid_clock.cc:648] HybridClock initialized: now 1754769483766007 us; error 32 us; skew 500 ppm
I20250809 19:58:03.766707 27920 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250809 19:58:03.772547 27920 webserver.cc:489] Webserver started at http://127.25.124.131:32961/ using document root <none> and password file <none>
I20250809 19:58:03.773386 27920 fs_manager.cc:362] Metadata directory not provided
I20250809 19:58:03.773589 27920 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250809 19:58:03.774015 27920 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250809 19:58:03.777699 27920 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data/instance:
uuid: "b95076872f574802a4ac48e1462efdb7"
format_stamp: "Formatted at 2025-08-09 19:58:03 on dist-test-slave-xzln"
I20250809 19:58:03.778625 27920 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal/instance:
uuid: "b95076872f574802a4ac48e1462efdb7"
format_stamp: "Formatted at 2025-08-09 19:58:03 on dist-test-slave-xzln"
I20250809 19:58:03.785085 27920 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.006s sys 0.003s
I20250809 19:58:03.790247 27936 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250809 19:58:03.791163 27920 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.002s sys 0.001s
I20250809 19:58:03.791467 27920 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "b95076872f574802a4ac48e1462efdb7"
format_stamp: "Formatted at 2025-08-09 19:58:03 on dist-test-slave-xzln"
I20250809 19:58:03.791725 27920 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250809 19:58:03.844156 27920 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250809 19:58:03.845325 27920 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250809 19:58:03.845697 27920 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250809 19:58:03.847954 27920 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250809 19:58:03.851295 27920 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250809 19:58:03.851473 27920 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250809 19:58:03.851689 27920 ts_tablet_manager.cc:610] Registered 0 tablets
I20250809 19:58:03.851824 27920 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250809 19:58:03.963837 27920 rpc_server.cc:307] RPC server started. Bound to: 127.25.124.131:41015
I20250809 19:58:03.963912 28048 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.25.124.131:41015 every 8 connection(s)
I20250809 19:58:03.966009 27920 server_base.cc:1179] Dumped server information to /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250809 19:58:03.968705 26098 external_mini_cluster.cc:1428] Started /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu as pid 27920
I20250809 19:58:03.969132 26098 external_mini_cluster.cc:1442] Reading /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal/instance
I20250809 19:58:03.984247 28049 heartbeater.cc:344] Connected to a master server at 127.25.124.190:41025
I20250809 19:58:03.984582 28049 heartbeater.cc:461] Registering TS with master...
I20250809 19:58:03.985366 28049 heartbeater.cc:507] Master 127.25.124.190:41025 requested a full tablet report, sending...
I20250809 19:58:03.987310 27595 ts_manager.cc:194] Registered new tserver with Master: b95076872f574802a4ac48e1462efdb7 (127.25.124.131:41015)
I20250809 19:58:03.988473 27595 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.25.124.131:38521
I20250809 19:58:04.000905 26098 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250809 19:58:04.024832 27595 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:37822:
name: "TestTable"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
owner: "alice"
W20250809 19:58:04.040488 27595 catalog_manager.cc:6944] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table TestTable in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250809 19:58:04.082641 27718 tablet_service.cc:1468] Processing CreateTablet for tablet f831532b9845433b9f72a20682eb4f14 (DEFAULT_TABLE table=TestTable [id=3a82d43c970041ba8d01dc012352475f]), partition=RANGE (key) PARTITION UNBOUNDED
I20250809 19:58:04.084569 27718 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet f831532b9845433b9f72a20682eb4f14. 1 dirs total, 0 dirs full, 0 dirs failed
I20250809 19:58:04.084270 27984 tablet_service.cc:1468] Processing CreateTablet for tablet f831532b9845433b9f72a20682eb4f14 (DEFAULT_TABLE table=TestTable [id=3a82d43c970041ba8d01dc012352475f]), partition=RANGE (key) PARTITION UNBOUNDED
I20250809 19:58:04.085870 27984 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet f831532b9845433b9f72a20682eb4f14. 1 dirs total, 0 dirs full, 0 dirs failed
I20250809 19:58:04.086028 27851 tablet_service.cc:1468] Processing CreateTablet for tablet f831532b9845433b9f72a20682eb4f14 (DEFAULT_TABLE table=TestTable [id=3a82d43c970041ba8d01dc012352475f]), partition=RANGE (key) PARTITION UNBOUNDED
I20250809 19:58:04.087663 27851 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet f831532b9845433b9f72a20682eb4f14. 1 dirs total, 0 dirs full, 0 dirs failed
I20250809 19:58:04.105172 28068 tablet_bootstrap.cc:492] T f831532b9845433b9f72a20682eb4f14 P 46509f54391e42618b4436627667487c: Bootstrap starting.
I20250809 19:58:04.108430 28069 tablet_bootstrap.cc:492] T f831532b9845433b9f72a20682eb4f14 P e4f0f700225d4f1e988ff2d20693e7cd: Bootstrap starting.
I20250809 19:58:04.110031 28070 tablet_bootstrap.cc:492] T f831532b9845433b9f72a20682eb4f14 P b95076872f574802a4ac48e1462efdb7: Bootstrap starting.
I20250809 19:58:04.111923 28068 tablet_bootstrap.cc:654] T f831532b9845433b9f72a20682eb4f14 P 46509f54391e42618b4436627667487c: Neither blocks nor log segments found. Creating new log.
I20250809 19:58:04.114017 28068 log.cc:826] T f831532b9845433b9f72a20682eb4f14 P 46509f54391e42618b4436627667487c: Log is configured to *not* fsync() on all Append() calls
I20250809 19:58:04.114473 28070 tablet_bootstrap.cc:654] T f831532b9845433b9f72a20682eb4f14 P b95076872f574802a4ac48e1462efdb7: Neither blocks nor log segments found. Creating new log.
I20250809 19:58:04.115198 28069 tablet_bootstrap.cc:654] T f831532b9845433b9f72a20682eb4f14 P e4f0f700225d4f1e988ff2d20693e7cd: Neither blocks nor log segments found. Creating new log.
I20250809 19:58:04.116176 28070 log.cc:826] T f831532b9845433b9f72a20682eb4f14 P b95076872f574802a4ac48e1462efdb7: Log is configured to *not* fsync() on all Append() calls
I20250809 19:58:04.118431 28069 log.cc:826] T f831532b9845433b9f72a20682eb4f14 P e4f0f700225d4f1e988ff2d20693e7cd: Log is configured to *not* fsync() on all Append() calls
I20250809 19:58:04.121373 28070 tablet_bootstrap.cc:492] T f831532b9845433b9f72a20682eb4f14 P b95076872f574802a4ac48e1462efdb7: No bootstrap required, opened a new log
I20250809 19:58:04.121740 28070 ts_tablet_manager.cc:1397] T f831532b9845433b9f72a20682eb4f14 P b95076872f574802a4ac48e1462efdb7: Time spent bootstrapping tablet: real 0.012s user 0.009s sys 0.000s
I20250809 19:58:04.122826 28069 tablet_bootstrap.cc:492] T f831532b9845433b9f72a20682eb4f14 P e4f0f700225d4f1e988ff2d20693e7cd: No bootstrap required, opened a new log
I20250809 19:58:04.123257 28069 ts_tablet_manager.cc:1397] T f831532b9845433b9f72a20682eb4f14 P e4f0f700225d4f1e988ff2d20693e7cd: Time spent bootstrapping tablet: real 0.015s user 0.010s sys 0.004s
I20250809 19:58:04.123299 28068 tablet_bootstrap.cc:492] T f831532b9845433b9f72a20682eb4f14 P 46509f54391e42618b4436627667487c: No bootstrap required, opened a new log
I20250809 19:58:04.123696 28068 ts_tablet_manager.cc:1397] T f831532b9845433b9f72a20682eb4f14 P 46509f54391e42618b4436627667487c: Time spent bootstrapping tablet: real 0.019s user 0.011s sys 0.007s
I20250809 19:58:04.136420 28070 raft_consensus.cc:357] T f831532b9845433b9f72a20682eb4f14 P b95076872f574802a4ac48e1462efdb7 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b95076872f574802a4ac48e1462efdb7" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41015 } } peers { permanent_uuid: "e4f0f700225d4f1e988ff2d20693e7cd" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 39847 } } peers { permanent_uuid: "46509f54391e42618b4436627667487c" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 33789 } }
I20250809 19:58:04.136972 28070 raft_consensus.cc:383] T f831532b9845433b9f72a20682eb4f14 P b95076872f574802a4ac48e1462efdb7 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250809 19:58:04.137185 28070 raft_consensus.cc:738] T f831532b9845433b9f72a20682eb4f14 P b95076872f574802a4ac48e1462efdb7 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: b95076872f574802a4ac48e1462efdb7, State: Initialized, Role: FOLLOWER
I20250809 19:58:04.137771 28070 consensus_queue.cc:260] T f831532b9845433b9f72a20682eb4f14 P b95076872f574802a4ac48e1462efdb7 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b95076872f574802a4ac48e1462efdb7" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41015 } } peers { permanent_uuid: "e4f0f700225d4f1e988ff2d20693e7cd" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 39847 } } peers { permanent_uuid: "46509f54391e42618b4436627667487c" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 33789 } }
I20250809 19:58:04.140552 28049 heartbeater.cc:499] Master 127.25.124.190:41025 was elected leader, sending a full tablet report...
I20250809 19:58:04.141677 28070 ts_tablet_manager.cc:1428] T f831532b9845433b9f72a20682eb4f14 P b95076872f574802a4ac48e1462efdb7: Time spent starting tablet: real 0.020s user 0.013s sys 0.008s
I20250809 19:58:04.145440 28069 raft_consensus.cc:357] T f831532b9845433b9f72a20682eb4f14 P e4f0f700225d4f1e988ff2d20693e7cd [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b95076872f574802a4ac48e1462efdb7" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41015 } } peers { permanent_uuid: "e4f0f700225d4f1e988ff2d20693e7cd" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 39847 } } peers { permanent_uuid: "46509f54391e42618b4436627667487c" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 33789 } }
I20250809 19:58:04.146212 28069 raft_consensus.cc:383] T f831532b9845433b9f72a20682eb4f14 P e4f0f700225d4f1e988ff2d20693e7cd [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250809 19:58:04.146462 28069 raft_consensus.cc:738] T f831532b9845433b9f72a20682eb4f14 P e4f0f700225d4f1e988ff2d20693e7cd [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: e4f0f700225d4f1e988ff2d20693e7cd, State: Initialized, Role: FOLLOWER
I20250809 19:58:04.146842 28068 raft_consensus.cc:357] T f831532b9845433b9f72a20682eb4f14 P 46509f54391e42618b4436627667487c [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b95076872f574802a4ac48e1462efdb7" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41015 } } peers { permanent_uuid: "e4f0f700225d4f1e988ff2d20693e7cd" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 39847 } } peers { permanent_uuid: "46509f54391e42618b4436627667487c" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 33789 } }
I20250809 19:58:04.147586 28068 raft_consensus.cc:383] T f831532b9845433b9f72a20682eb4f14 P 46509f54391e42618b4436627667487c [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250809 19:58:04.147229 28069 consensus_queue.cc:260] T f831532b9845433b9f72a20682eb4f14 P e4f0f700225d4f1e988ff2d20693e7cd [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b95076872f574802a4ac48e1462efdb7" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41015 } } peers { permanent_uuid: "e4f0f700225d4f1e988ff2d20693e7cd" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 39847 } } peers { permanent_uuid: "46509f54391e42618b4436627667487c" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 33789 } }
I20250809 19:58:04.147836 28068 raft_consensus.cc:738] T f831532b9845433b9f72a20682eb4f14 P 46509f54391e42618b4436627667487c [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 46509f54391e42618b4436627667487c, State: Initialized, Role: FOLLOWER
I20250809 19:58:04.148625 28068 consensus_queue.cc:260] T f831532b9845433b9f72a20682eb4f14 P 46509f54391e42618b4436627667487c [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b95076872f574802a4ac48e1462efdb7" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41015 } } peers { permanent_uuid: "e4f0f700225d4f1e988ff2d20693e7cd" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 39847 } } peers { permanent_uuid: "46509f54391e42618b4436627667487c" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 33789 } }
I20250809 19:58:04.152650 28068 ts_tablet_manager.cc:1428] T f831532b9845433b9f72a20682eb4f14 P 46509f54391e42618b4436627667487c: Time spent starting tablet: real 0.029s user 0.016s sys 0.011s
I20250809 19:58:04.153630 28069 ts_tablet_manager.cc:1428] T f831532b9845433b9f72a20682eb4f14 P e4f0f700225d4f1e988ff2d20693e7cd: Time spent starting tablet: real 0.030s user 0.024s sys 0.004s
W20250809 19:58:04.219772 28050 tablet.cc:2378] T f831532b9845433b9f72a20682eb4f14 P b95076872f574802a4ac48e1462efdb7: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250809 19:58:04.307271 27784 tablet.cc:2378] T f831532b9845433b9f72a20682eb4f14 P e4f0f700225d4f1e988ff2d20693e7cd: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250809 19:58:04.309316 27917 tablet.cc:2378] T f831532b9845433b9f72a20682eb4f14 P 46509f54391e42618b4436627667487c: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250809 19:58:04.399325 28075 raft_consensus.cc:491] T f831532b9845433b9f72a20682eb4f14 P e4f0f700225d4f1e988ff2d20693e7cd [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250809 19:58:04.399749 28075 raft_consensus.cc:513] T f831532b9845433b9f72a20682eb4f14 P e4f0f700225d4f1e988ff2d20693e7cd [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b95076872f574802a4ac48e1462efdb7" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41015 } } peers { permanent_uuid: "e4f0f700225d4f1e988ff2d20693e7cd" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 39847 } } peers { permanent_uuid: "46509f54391e42618b4436627667487c" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 33789 } }
I20250809 19:58:04.401844 28075 leader_election.cc:290] T f831532b9845433b9f72a20682eb4f14 P e4f0f700225d4f1e988ff2d20693e7cd [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers b95076872f574802a4ac48e1462efdb7 (127.25.124.131:41015), 46509f54391e42618b4436627667487c (127.25.124.130:33789)
I20250809 19:58:04.411473 28004 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "f831532b9845433b9f72a20682eb4f14" candidate_uuid: "e4f0f700225d4f1e988ff2d20693e7cd" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "b95076872f574802a4ac48e1462efdb7" is_pre_election: true
I20250809 19:58:04.411465 27871 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "f831532b9845433b9f72a20682eb4f14" candidate_uuid: "e4f0f700225d4f1e988ff2d20693e7cd" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "46509f54391e42618b4436627667487c" is_pre_election: true
I20250809 19:58:04.412022 28004 raft_consensus.cc:2466] T f831532b9845433b9f72a20682eb4f14 P b95076872f574802a4ac48e1462efdb7 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate e4f0f700225d4f1e988ff2d20693e7cd in term 0.
I20250809 19:58:04.412088 27871 raft_consensus.cc:2466] T f831532b9845433b9f72a20682eb4f14 P 46509f54391e42618b4436627667487c [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate e4f0f700225d4f1e988ff2d20693e7cd in term 0.
I20250809 19:58:04.412966 27673 leader_election.cc:304] T f831532b9845433b9f72a20682eb4f14 P e4f0f700225d4f1e988ff2d20693e7cd [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: b95076872f574802a4ac48e1462efdb7, e4f0f700225d4f1e988ff2d20693e7cd; no voters:
I20250809 19:58:04.413703 28075 raft_consensus.cc:2802] T f831532b9845433b9f72a20682eb4f14 P e4f0f700225d4f1e988ff2d20693e7cd [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250809 19:58:04.413933 28075 raft_consensus.cc:491] T f831532b9845433b9f72a20682eb4f14 P e4f0f700225d4f1e988ff2d20693e7cd [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250809 19:58:04.414183 28075 raft_consensus.cc:3058] T f831532b9845433b9f72a20682eb4f14 P e4f0f700225d4f1e988ff2d20693e7cd [term 0 FOLLOWER]: Advancing to term 1
I20250809 19:58:04.417976 28075 raft_consensus.cc:513] T f831532b9845433b9f72a20682eb4f14 P e4f0f700225d4f1e988ff2d20693e7cd [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b95076872f574802a4ac48e1462efdb7" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41015 } } peers { permanent_uuid: "e4f0f700225d4f1e988ff2d20693e7cd" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 39847 } } peers { permanent_uuid: "46509f54391e42618b4436627667487c" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 33789 } }
I20250809 19:58:04.419317 28075 leader_election.cc:290] T f831532b9845433b9f72a20682eb4f14 P e4f0f700225d4f1e988ff2d20693e7cd [CANDIDATE]: Term 1 election: Requested vote from peers b95076872f574802a4ac48e1462efdb7 (127.25.124.131:41015), 46509f54391e42618b4436627667487c (127.25.124.130:33789)
I20250809 19:58:04.420006 28004 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "f831532b9845433b9f72a20682eb4f14" candidate_uuid: "e4f0f700225d4f1e988ff2d20693e7cd" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "b95076872f574802a4ac48e1462efdb7"
I20250809 19:58:04.420080 27871 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "f831532b9845433b9f72a20682eb4f14" candidate_uuid: "e4f0f700225d4f1e988ff2d20693e7cd" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "46509f54391e42618b4436627667487c"
I20250809 19:58:04.420387 28004 raft_consensus.cc:3058] T f831532b9845433b9f72a20682eb4f14 P b95076872f574802a4ac48e1462efdb7 [term 0 FOLLOWER]: Advancing to term 1
I20250809 19:58:04.420428 27871 raft_consensus.cc:3058] T f831532b9845433b9f72a20682eb4f14 P 46509f54391e42618b4436627667487c [term 0 FOLLOWER]: Advancing to term 1
I20250809 19:58:04.424182 27871 raft_consensus.cc:2466] T f831532b9845433b9f72a20682eb4f14 P 46509f54391e42618b4436627667487c [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate e4f0f700225d4f1e988ff2d20693e7cd in term 1.
I20250809 19:58:04.424204 28004 raft_consensus.cc:2466] T f831532b9845433b9f72a20682eb4f14 P b95076872f574802a4ac48e1462efdb7 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate e4f0f700225d4f1e988ff2d20693e7cd in term 1.
I20250809 19:58:04.424969 27673 leader_election.cc:304] T f831532b9845433b9f72a20682eb4f14 P e4f0f700225d4f1e988ff2d20693e7cd [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: b95076872f574802a4ac48e1462efdb7, e4f0f700225d4f1e988ff2d20693e7cd; no voters:
I20250809 19:58:04.425460 28075 raft_consensus.cc:2802] T f831532b9845433b9f72a20682eb4f14 P e4f0f700225d4f1e988ff2d20693e7cd [term 1 FOLLOWER]: Leader election won for term 1
I20250809 19:58:04.426820 28075 raft_consensus.cc:695] T f831532b9845433b9f72a20682eb4f14 P e4f0f700225d4f1e988ff2d20693e7cd [term 1 LEADER]: Becoming Leader. State: Replica: e4f0f700225d4f1e988ff2d20693e7cd, State: Running, Role: LEADER
I20250809 19:58:04.427539 28075 consensus_queue.cc:237] T f831532b9845433b9f72a20682eb4f14 P e4f0f700225d4f1e988ff2d20693e7cd [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b95076872f574802a4ac48e1462efdb7" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41015 } } peers { permanent_uuid: "e4f0f700225d4f1e988ff2d20693e7cd" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 39847 } } peers { permanent_uuid: "46509f54391e42618b4436627667487c" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 33789 } }
I20250809 19:58:04.435767 27595 catalog_manager.cc:5582] T f831532b9845433b9f72a20682eb4f14 P e4f0f700225d4f1e988ff2d20693e7cd reported cstate change: term changed from 0 to 1, leader changed from <none> to e4f0f700225d4f1e988ff2d20693e7cd (127.25.124.129). New cstate: current_term: 1 leader_uuid: "e4f0f700225d4f1e988ff2d20693e7cd" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b95076872f574802a4ac48e1462efdb7" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41015 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "e4f0f700225d4f1e988ff2d20693e7cd" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 39847 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "46509f54391e42618b4436627667487c" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 33789 } health_report { overall_health: UNKNOWN } } }
I20250809 19:58:04.529973 26098 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250809 19:58:04.532776 26098 ts_itest-base.cc:246] Waiting for 1 tablets on tserver e4f0f700225d4f1e988ff2d20693e7cd to finish bootstrapping
I20250809 19:58:04.542814 26098 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 46509f54391e42618b4436627667487c to finish bootstrapping
I20250809 19:58:04.550766 26098 ts_itest-base.cc:246] Waiting for 1 tablets on tserver b95076872f574802a4ac48e1462efdb7 to finish bootstrapping
I20250809 19:58:04.560426 27595 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:37822:
name: "TestAnotherTable"
schema {
columns {
name: "foo"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "bar"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
comment: "comment for bar"
immutable: false
}
}
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "foo"
}
}
}
W20250809 19:58:04.561743 27595 catalog_manager.cc:6944] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table TestAnotherTable in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250809 19:58:04.574640 27718 tablet_service.cc:1468] Processing CreateTablet for tablet 71cbf5f65d064392ae530b0b37433a62 (DEFAULT_TABLE table=TestAnotherTable [id=14569bae4b9447ff90623028b22b2981]), partition=RANGE (foo) PARTITION UNBOUNDED
I20250809 19:58:04.575044 27851 tablet_service.cc:1468] Processing CreateTablet for tablet 71cbf5f65d064392ae530b0b37433a62 (DEFAULT_TABLE table=TestAnotherTable [id=14569bae4b9447ff90623028b22b2981]), partition=RANGE (foo) PARTITION UNBOUNDED
I20250809 19:58:04.575562 27718 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 71cbf5f65d064392ae530b0b37433a62. 1 dirs total, 0 dirs full, 0 dirs failed
I20250809 19:58:04.575939 27851 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 71cbf5f65d064392ae530b0b37433a62. 1 dirs total, 0 dirs full, 0 dirs failed
I20250809 19:58:04.575783 27984 tablet_service.cc:1468] Processing CreateTablet for tablet 71cbf5f65d064392ae530b0b37433a62 (DEFAULT_TABLE table=TestAnotherTable [id=14569bae4b9447ff90623028b22b2981]), partition=RANGE (foo) PARTITION UNBOUNDED
I20250809 19:58:04.576647 27984 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 71cbf5f65d064392ae530b0b37433a62. 1 dirs total, 0 dirs full, 0 dirs failed
I20250809 19:58:04.585855 28069 tablet_bootstrap.cc:492] T 71cbf5f65d064392ae530b0b37433a62 P e4f0f700225d4f1e988ff2d20693e7cd: Bootstrap starting.
I20250809 19:58:04.588194 28068 tablet_bootstrap.cc:492] T 71cbf5f65d064392ae530b0b37433a62 P 46509f54391e42618b4436627667487c: Bootstrap starting.
I20250809 19:58:04.588598 28070 tablet_bootstrap.cc:492] T 71cbf5f65d064392ae530b0b37433a62 P b95076872f574802a4ac48e1462efdb7: Bootstrap starting.
I20250809 19:58:04.590323 28069 tablet_bootstrap.cc:654] T 71cbf5f65d064392ae530b0b37433a62 P e4f0f700225d4f1e988ff2d20693e7cd: Neither blocks nor log segments found. Creating new log.
I20250809 19:58:04.592444 28070 tablet_bootstrap.cc:654] T 71cbf5f65d064392ae530b0b37433a62 P b95076872f574802a4ac48e1462efdb7: Neither blocks nor log segments found. Creating new log.
I20250809 19:58:04.593338 28068 tablet_bootstrap.cc:654] T 71cbf5f65d064392ae530b0b37433a62 P 46509f54391e42618b4436627667487c: Neither blocks nor log segments found. Creating new log.
I20250809 19:58:04.596042 28069 tablet_bootstrap.cc:492] T 71cbf5f65d064392ae530b0b37433a62 P e4f0f700225d4f1e988ff2d20693e7cd: No bootstrap required, opened a new log
I20250809 19:58:04.596437 28069 ts_tablet_manager.cc:1397] T 71cbf5f65d064392ae530b0b37433a62 P e4f0f700225d4f1e988ff2d20693e7cd: Time spent bootstrapping tablet: real 0.011s user 0.004s sys 0.006s
I20250809 19:58:04.597182 28070 tablet_bootstrap.cc:492] T 71cbf5f65d064392ae530b0b37433a62 P b95076872f574802a4ac48e1462efdb7: No bootstrap required, opened a new log
I20250809 19:58:04.597548 28070 ts_tablet_manager.cc:1397] T 71cbf5f65d064392ae530b0b37433a62 P b95076872f574802a4ac48e1462efdb7: Time spent bootstrapping tablet: real 0.009s user 0.004s sys 0.004s
I20250809 19:58:04.598330 28069 raft_consensus.cc:357] T 71cbf5f65d064392ae530b0b37433a62 P e4f0f700225d4f1e988ff2d20693e7cd [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e4f0f700225d4f1e988ff2d20693e7cd" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 39847 } } peers { permanent_uuid: "46509f54391e42618b4436627667487c" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 33789 } } peers { permanent_uuid: "b95076872f574802a4ac48e1462efdb7" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41015 } }
I20250809 19:58:04.598874 28069 raft_consensus.cc:383] T 71cbf5f65d064392ae530b0b37433a62 P e4f0f700225d4f1e988ff2d20693e7cd [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250809 19:58:04.599066 28069 raft_consensus.cc:738] T 71cbf5f65d064392ae530b0b37433a62 P e4f0f700225d4f1e988ff2d20693e7cd [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: e4f0f700225d4f1e988ff2d20693e7cd, State: Initialized, Role: FOLLOWER
I20250809 19:58:04.599272 28068 tablet_bootstrap.cc:492] T 71cbf5f65d064392ae530b0b37433a62 P 46509f54391e42618b4436627667487c: No bootstrap required, opened a new log
I20250809 19:58:04.599656 28068 ts_tablet_manager.cc:1397] T 71cbf5f65d064392ae530b0b37433a62 P 46509f54391e42618b4436627667487c: Time spent bootstrapping tablet: real 0.012s user 0.010s sys 0.000s
I20250809 19:58:04.599567 28069 consensus_queue.cc:260] T 71cbf5f65d064392ae530b0b37433a62 P e4f0f700225d4f1e988ff2d20693e7cd [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e4f0f700225d4f1e988ff2d20693e7cd" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 39847 } } peers { permanent_uuid: "46509f54391e42618b4436627667487c" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 33789 } } peers { permanent_uuid: "b95076872f574802a4ac48e1462efdb7" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41015 } }
I20250809 19:58:04.599812 28070 raft_consensus.cc:357] T 71cbf5f65d064392ae530b0b37433a62 P b95076872f574802a4ac48e1462efdb7 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e4f0f700225d4f1e988ff2d20693e7cd" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 39847 } } peers { permanent_uuid: "46509f54391e42618b4436627667487c" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 33789 } } peers { permanent_uuid: "b95076872f574802a4ac48e1462efdb7" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41015 } }
I20250809 19:58:04.600435 28070 raft_consensus.cc:383] T 71cbf5f65d064392ae530b0b37433a62 P b95076872f574802a4ac48e1462efdb7 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250809 19:58:04.600721 28070 raft_consensus.cc:738] T 71cbf5f65d064392ae530b0b37433a62 P b95076872f574802a4ac48e1462efdb7 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: b95076872f574802a4ac48e1462efdb7, State: Initialized, Role: FOLLOWER
I20250809 19:58:04.601187 28069 ts_tablet_manager.cc:1428] T 71cbf5f65d064392ae530b0b37433a62 P e4f0f700225d4f1e988ff2d20693e7cd: Time spent starting tablet: real 0.005s user 0.001s sys 0.001s
I20250809 19:58:04.601410 28070 consensus_queue.cc:260] T 71cbf5f65d064392ae530b0b37433a62 P b95076872f574802a4ac48e1462efdb7 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e4f0f700225d4f1e988ff2d20693e7cd" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 39847 } } peers { permanent_uuid: "46509f54391e42618b4436627667487c" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 33789 } } peers { permanent_uuid: "b95076872f574802a4ac48e1462efdb7" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41015 } }
I20250809 19:58:04.601776 28068 raft_consensus.cc:357] T 71cbf5f65d064392ae530b0b37433a62 P 46509f54391e42618b4436627667487c [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e4f0f700225d4f1e988ff2d20693e7cd" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 39847 } } peers { permanent_uuid: "46509f54391e42618b4436627667487c" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 33789 } } peers { permanent_uuid: "b95076872f574802a4ac48e1462efdb7" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41015 } }
I20250809 19:58:04.602365 28068 raft_consensus.cc:383] T 71cbf5f65d064392ae530b0b37433a62 P 46509f54391e42618b4436627667487c [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250809 19:58:04.602625 28068 raft_consensus.cc:738] T 71cbf5f65d064392ae530b0b37433a62 P 46509f54391e42618b4436627667487c [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 46509f54391e42618b4436627667487c, State: Initialized, Role: FOLLOWER
I20250809 19:58:04.603226 28070 ts_tablet_manager.cc:1428] T 71cbf5f65d064392ae530b0b37433a62 P b95076872f574802a4ac48e1462efdb7: Time spent starting tablet: real 0.005s user 0.004s sys 0.000s
I20250809 19:58:04.603178 28068 consensus_queue.cc:260] T 71cbf5f65d064392ae530b0b37433a62 P 46509f54391e42618b4436627667487c [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e4f0f700225d4f1e988ff2d20693e7cd" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 39847 } } peers { permanent_uuid: "46509f54391e42618b4436627667487c" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 33789 } } peers { permanent_uuid: "b95076872f574802a4ac48e1462efdb7" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41015 } }
I20250809 19:58:04.604943 28068 ts_tablet_manager.cc:1428] T 71cbf5f65d064392ae530b0b37433a62 P 46509f54391e42618b4436627667487c: Time spent starting tablet: real 0.005s user 0.004s sys 0.000s
I20250809 19:58:04.724546 28076 raft_consensus.cc:491] T 71cbf5f65d064392ae530b0b37433a62 P 46509f54391e42618b4436627667487c [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250809 19:58:04.724848 28076 raft_consensus.cc:513] T 71cbf5f65d064392ae530b0b37433a62 P 46509f54391e42618b4436627667487c [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e4f0f700225d4f1e988ff2d20693e7cd" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 39847 } } peers { permanent_uuid: "46509f54391e42618b4436627667487c" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 33789 } } peers { permanent_uuid: "b95076872f574802a4ac48e1462efdb7" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41015 } }
I20250809 19:58:04.726682 28076 leader_election.cc:290] T 71cbf5f65d064392ae530b0b37433a62 P 46509f54391e42618b4436627667487c [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers e4f0f700225d4f1e988ff2d20693e7cd (127.25.124.129:39847), b95076872f574802a4ac48e1462efdb7 (127.25.124.131:41015)
I20250809 19:58:04.737383 28004 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "71cbf5f65d064392ae530b0b37433a62" candidate_uuid: "46509f54391e42618b4436627667487c" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "b95076872f574802a4ac48e1462efdb7" is_pre_election: true
I20250809 19:58:04.737884 28004 raft_consensus.cc:2466] T 71cbf5f65d064392ae530b0b37433a62 P b95076872f574802a4ac48e1462efdb7 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 46509f54391e42618b4436627667487c in term 0.
I20250809 19:58:04.737857 27738 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "71cbf5f65d064392ae530b0b37433a62" candidate_uuid: "46509f54391e42618b4436627667487c" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "e4f0f700225d4f1e988ff2d20693e7cd" is_pre_election: true
I20250809 19:58:04.738495 27738 raft_consensus.cc:2466] T 71cbf5f65d064392ae530b0b37433a62 P e4f0f700225d4f1e988ff2d20693e7cd [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 46509f54391e42618b4436627667487c in term 0.
I20250809 19:58:04.738761 27806 leader_election.cc:304] T 71cbf5f65d064392ae530b0b37433a62 P 46509f54391e42618b4436627667487c [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 46509f54391e42618b4436627667487c, b95076872f574802a4ac48e1462efdb7; no voters:
I20250809 19:58:04.739442 28076 raft_consensus.cc:2802] T 71cbf5f65d064392ae530b0b37433a62 P 46509f54391e42618b4436627667487c [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250809 19:58:04.739725 28076 raft_consensus.cc:491] T 71cbf5f65d064392ae530b0b37433a62 P 46509f54391e42618b4436627667487c [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250809 19:58:04.739905 28076 raft_consensus.cc:3058] T 71cbf5f65d064392ae530b0b37433a62 P 46509f54391e42618b4436627667487c [term 0 FOLLOWER]: Advancing to term 1
I20250809 19:58:04.743422 28076 raft_consensus.cc:513] T 71cbf5f65d064392ae530b0b37433a62 P 46509f54391e42618b4436627667487c [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e4f0f700225d4f1e988ff2d20693e7cd" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 39847 } } peers { permanent_uuid: "46509f54391e42618b4436627667487c" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 33789 } } peers { permanent_uuid: "b95076872f574802a4ac48e1462efdb7" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41015 } }
I20250809 19:58:04.744573 28076 leader_election.cc:290] T 71cbf5f65d064392ae530b0b37433a62 P 46509f54391e42618b4436627667487c [CANDIDATE]: Term 1 election: Requested vote from peers e4f0f700225d4f1e988ff2d20693e7cd (127.25.124.129:39847), b95076872f574802a4ac48e1462efdb7 (127.25.124.131:41015)
I20250809 19:58:04.745085 27738 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "71cbf5f65d064392ae530b0b37433a62" candidate_uuid: "46509f54391e42618b4436627667487c" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "e4f0f700225d4f1e988ff2d20693e7cd"
I20250809 19:58:04.745265 28004 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "71cbf5f65d064392ae530b0b37433a62" candidate_uuid: "46509f54391e42618b4436627667487c" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "b95076872f574802a4ac48e1462efdb7"
I20250809 19:58:04.745397 27738 raft_consensus.cc:3058] T 71cbf5f65d064392ae530b0b37433a62 P e4f0f700225d4f1e988ff2d20693e7cd [term 0 FOLLOWER]: Advancing to term 1
I20250809 19:58:04.745617 28004 raft_consensus.cc:3058] T 71cbf5f65d064392ae530b0b37433a62 P b95076872f574802a4ac48e1462efdb7 [term 0 FOLLOWER]: Advancing to term 1
I20250809 19:58:04.748873 28004 raft_consensus.cc:2466] T 71cbf5f65d064392ae530b0b37433a62 P b95076872f574802a4ac48e1462efdb7 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 46509f54391e42618b4436627667487c in term 1.
I20250809 19:58:04.748879 27738 raft_consensus.cc:2466] T 71cbf5f65d064392ae530b0b37433a62 P e4f0f700225d4f1e988ff2d20693e7cd [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 46509f54391e42618b4436627667487c in term 1.
I20250809 19:58:04.749609 27805 leader_election.cc:304] T 71cbf5f65d064392ae530b0b37433a62 P 46509f54391e42618b4436627667487c [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 46509f54391e42618b4436627667487c, e4f0f700225d4f1e988ff2d20693e7cd; no voters:
I20250809 19:58:04.750144 28076 raft_consensus.cc:2802] T 71cbf5f65d064392ae530b0b37433a62 P 46509f54391e42618b4436627667487c [term 1 FOLLOWER]: Leader election won for term 1
I20250809 19:58:04.751396 28076 raft_consensus.cc:695] T 71cbf5f65d064392ae530b0b37433a62 P 46509f54391e42618b4436627667487c [term 1 LEADER]: Becoming Leader. State: Replica: 46509f54391e42618b4436627667487c, State: Running, Role: LEADER
I20250809 19:58:04.751963 28076 consensus_queue.cc:237] T 71cbf5f65d064392ae530b0b37433a62 P 46509f54391e42618b4436627667487c [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e4f0f700225d4f1e988ff2d20693e7cd" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 39847 } } peers { permanent_uuid: "46509f54391e42618b4436627667487c" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 33789 } } peers { permanent_uuid: "b95076872f574802a4ac48e1462efdb7" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41015 } }
I20250809 19:58:04.760418 27593 catalog_manager.cc:5582] T 71cbf5f65d064392ae530b0b37433a62 P 46509f54391e42618b4436627667487c reported cstate change: term changed from 0 to 1, leader changed from <none> to 46509f54391e42618b4436627667487c (127.25.124.130). New cstate: current_term: 1 leader_uuid: "46509f54391e42618b4436627667487c" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e4f0f700225d4f1e988ff2d20693e7cd" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 39847 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "46509f54391e42618b4436627667487c" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 33789 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "b95076872f574802a4ac48e1462efdb7" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41015 } health_report { overall_health: UNKNOWN } } }
I20250809 19:58:04.850549 28075 consensus_queue.cc:1035] T f831532b9845433b9f72a20682eb4f14 P e4f0f700225d4f1e988ff2d20693e7cd [LEADER]: Connected to new peer: Peer: permanent_uuid: "46509f54391e42618b4436627667487c" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 33789 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250809 19:58:04.867409 28079 consensus_queue.cc:1035] T f831532b9845433b9f72a20682eb4f14 P e4f0f700225d4f1e988ff2d20693e7cd [LEADER]: Connected to new peer: Peer: permanent_uuid: "b95076872f574802a4ac48e1462efdb7" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41015 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
W20250809 19:58:05.038968 28090 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 19:58:05.039461 28090 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 19:58:05.065219 28090 flags.cc:425] Enabled experimental flag: --enable_multi_tenancy=false
I20250809 19:58:05.135530 28076 consensus_queue.cc:1035] T 71cbf5f65d064392ae530b0b37433a62 P 46509f54391e42618b4436627667487c [LEADER]: Connected to new peer: Peer: permanent_uuid: "e4f0f700225d4f1e988ff2d20693e7cd" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 39847 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250809 19:58:05.156291 28076 consensus_queue.cc:1035] T 71cbf5f65d064392ae530b0b37433a62 P 46509f54391e42618b4436627667487c [LEADER]: Connected to new peer: Peer: permanent_uuid: "b95076872f574802a4ac48e1462efdb7" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41015 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
W20250809 19:58:06.185141 28090 thread.cc:641] rpc reactor (reactor) Time spent creating pthread: real 1.089s user 0.377s sys 0.702s
W20250809 19:58:06.185381 28090 thread.cc:608] rpc reactor (reactor) Time spent starting thread: real 1.089s user 0.377s sys 0.702s
W20250809 19:58:07.506155 28117 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 19:58:07.506603 28117 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 19:58:07.531769 28117 flags.cc:425] Enabled experimental flag: --enable_multi_tenancy=false
W20250809 19:58:08.633450 28117 thread.cc:641] rpc reactor (reactor) Time spent creating pthread: real 1.069s user 0.380s sys 0.687s
W20250809 19:58:08.633684 28117 thread.cc:608] rpc reactor (reactor) Time spent starting thread: real 1.070s user 0.380s sys 0.687s
W20250809 19:58:09.969985 28131 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 19:58:09.970432 28131 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 19:58:09.995888 28131 flags.cc:425] Enabled experimental flag: --enable_multi_tenancy=false
W20250809 19:58:11.066161 28131 thread.cc:641] rpc reactor (reactor) Time spent creating pthread: real 1.039s user 0.362s sys 0.673s
W20250809 19:58:11.066437 28131 thread.cc:608] rpc reactor (reactor) Time spent starting thread: real 1.039s user 0.362s sys 0.673s
W20250809 19:58:12.395045 28146 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 19:58:12.395563 28146 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 19:58:12.421357 28146 flags.cc:425] Enabled experimental flag: --enable_multi_tenancy=false
W20250809 19:58:13.554850 28146 thread.cc:641] rpc reactor (reactor) Time spent creating pthread: real 1.100s user 0.454s sys 0.642s
W20250809 19:58:13.555292 28146 thread.cc:608] rpc reactor (reactor) Time spent starting thread: real 1.100s user 0.458s sys 0.642s
I20250809 19:58:14.617651 26098 external_mini_cluster.cc:1658] Killing /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu with pid 27654
I20250809 19:58:14.639904 26098 external_mini_cluster.cc:1658] Killing /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu with pid 27787
I20250809 19:58:14.662140 26098 external_mini_cluster.cc:1658] Killing /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu with pid 27920
I20250809 19:58:14.682387 26098 external_mini_cluster.cc:1658] Killing /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu with pid 27562
2025-08-09T19:58:14Z chronyd exiting
[ OK ] AdminCliTest.TestDescribeTableColumnFlags (18206 ms)
[ RUN ] AdminCliTest.TestAuthzResetCacheNotAuthorized
I20250809 19:58:14.733304 26098 test_util.cc:276] Using random seed: 476410328
I20250809 19:58:14.736768 26098 ts_itest-base.cc:115] Starting cluster with:
I20250809 19:58:14.736912 26098 ts_itest-base.cc:116] --------------
I20250809 19:58:14.737048 26098 ts_itest-base.cc:117] 3 tablet servers
I20250809 19:58:14.737166 26098 ts_itest-base.cc:118] 3 replicas per TS
I20250809 19:58:14.737285 26098 ts_itest-base.cc:119] --------------
2025-08-09T19:58:14Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-08-09T19:58:14Z Disabled control of system clock
I20250809 19:58:14.766798 26098 external_mini_cluster.cc:1366] Running /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
/tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.25.124.190:43713
--webserver_interface=127.25.124.190
--webserver_port=0
--builtin_ntp_servers=127.25.124.148:36801
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.25.124.190:43713
--superuser_acl=no-such-user with env {}
W20250809 19:58:15.011842 28170 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 19:58:15.012321 28170 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250809 19:58:15.012741 28170 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 19:58:15.037818 28170 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250809 19:58:15.038084 28170 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250809 19:58:15.038303 28170 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250809 19:58:15.038514 28170 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250809 19:58:15.066618 28170 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.25.124.148:36801
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.25.124.190:43713
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.25.124.190:43713
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--superuser_acl=<redacted>
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.25.124.190
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 09 Aug 2025 19:43:21 UTC on 5fd53c4cbb9d
build id 7489
TSAN enabled
I20250809 19:58:15.067687 28170 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250809 19:58:15.069011 28170 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250809 19:58:15.077344 28177 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:58:16.481784 28176 debug-util.cc:398] Leaking SignalData structure 0x7b0800034ec0 after lost signal to thread 28170
W20250809 19:58:16.592800 28170 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.514s user 0.475s sys 1.038s
W20250809 19:58:15.078815 28178 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:58:16.593295 28170 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.515s user 0.475s sys 1.038s
W20250809 19:58:16.595162 28180 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:58:16.597323 28179 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1516 milliseconds
I20250809 19:58:16.597342 28170 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250809 19:58:16.598449 28170 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250809 19:58:16.600575 28170 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250809 19:58:16.601979 28170 hybrid_clock.cc:648] HybridClock initialized: now 1754769496601919 us; error 31 us; skew 500 ppm
I20250809 19:58:16.602835 28170 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250809 19:58:16.607995 28170 webserver.cc:489] Webserver started at http://127.25.124.190:41169/ using document root <none> and password file <none>
I20250809 19:58:16.608789 28170 fs_manager.cc:362] Metadata directory not provided
I20250809 19:58:16.608958 28170 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250809 19:58:16.609360 28170 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250809 19:58:16.612936 28170 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "dda80c31753249b38866f853b705e034"
format_stamp: "Formatted at 2025-08-09 19:58:16 on dist-test-slave-xzln"
I20250809 19:58:16.613829 28170 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "dda80c31753249b38866f853b705e034"
format_stamp: "Formatted at 2025-08-09 19:58:16 on dist-test-slave-xzln"
I20250809 19:58:16.619619 28170 fs_manager.cc:696] Time spent creating directory manager: real 0.005s user 0.005s sys 0.002s
I20250809 19:58:16.624064 28187 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250809 19:58:16.624891 28170 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.001s sys 0.003s
I20250809 19:58:16.625167 28170 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
uuid: "dda80c31753249b38866f853b705e034"
format_stamp: "Formatted at 2025-08-09 19:58:16 on dist-test-slave-xzln"
I20250809 19:58:16.625454 28170 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250809 19:58:16.673064 28170 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250809 19:58:16.674170 28170 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250809 19:58:16.674525 28170 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250809 19:58:16.731827 28170 rpc_server.cc:307] RPC server started. Bound to: 127.25.124.190:43713
I20250809 19:58:16.731886 28238 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.25.124.190:43713 every 8 connection(s)
I20250809 19:58:16.733982 28170 server_base.cc:1179] Dumped server information to /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250809 19:58:16.736467 26098 external_mini_cluster.cc:1428] Started /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu as pid 28170
I20250809 19:58:16.737076 26098 external_mini_cluster.cc:1442] Reading /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250809 19:58:16.739195 28239 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250809 19:58:16.758379 28239 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P dda80c31753249b38866f853b705e034: Bootstrap starting.
I20250809 19:58:16.762979 28239 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P dda80c31753249b38866f853b705e034: Neither blocks nor log segments found. Creating new log.
I20250809 19:58:16.764366 28239 log.cc:826] T 00000000000000000000000000000000 P dda80c31753249b38866f853b705e034: Log is configured to *not* fsync() on all Append() calls
I20250809 19:58:16.768077 28239 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P dda80c31753249b38866f853b705e034: No bootstrap required, opened a new log
I20250809 19:58:16.782301 28239 raft_consensus.cc:357] T 00000000000000000000000000000000 P dda80c31753249b38866f853b705e034 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "dda80c31753249b38866f853b705e034" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 43713 } }
I20250809 19:58:16.782805 28239 raft_consensus.cc:383] T 00000000000000000000000000000000 P dda80c31753249b38866f853b705e034 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250809 19:58:16.782996 28239 raft_consensus.cc:738] T 00000000000000000000000000000000 P dda80c31753249b38866f853b705e034 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: dda80c31753249b38866f853b705e034, State: Initialized, Role: FOLLOWER
I20250809 19:58:16.783592 28239 consensus_queue.cc:260] T 00000000000000000000000000000000 P dda80c31753249b38866f853b705e034 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "dda80c31753249b38866f853b705e034" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 43713 } }
I20250809 19:58:16.784013 28239 raft_consensus.cc:397] T 00000000000000000000000000000000 P dda80c31753249b38866f853b705e034 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250809 19:58:16.784235 28239 raft_consensus.cc:491] T 00000000000000000000000000000000 P dda80c31753249b38866f853b705e034 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250809 19:58:16.784503 28239 raft_consensus.cc:3058] T 00000000000000000000000000000000 P dda80c31753249b38866f853b705e034 [term 0 FOLLOWER]: Advancing to term 1
I20250809 19:58:16.787724 28239 raft_consensus.cc:513] T 00000000000000000000000000000000 P dda80c31753249b38866f853b705e034 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "dda80c31753249b38866f853b705e034" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 43713 } }
I20250809 19:58:16.788307 28239 leader_election.cc:304] T 00000000000000000000000000000000 P dda80c31753249b38866f853b705e034 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: dda80c31753249b38866f853b705e034; no voters:
I20250809 19:58:16.789896 28239 leader_election.cc:290] T 00000000000000000000000000000000 P dda80c31753249b38866f853b705e034 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250809 19:58:16.790509 28244 raft_consensus.cc:2802] T 00000000000000000000000000000000 P dda80c31753249b38866f853b705e034 [term 1 FOLLOWER]: Leader election won for term 1
I20250809 19:58:16.792418 28244 raft_consensus.cc:695] T 00000000000000000000000000000000 P dda80c31753249b38866f853b705e034 [term 1 LEADER]: Becoming Leader. State: Replica: dda80c31753249b38866f853b705e034, State: Running, Role: LEADER
I20250809 19:58:16.793079 28244 consensus_queue.cc:237] T 00000000000000000000000000000000 P dda80c31753249b38866f853b705e034 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "dda80c31753249b38866f853b705e034" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 43713 } }
I20250809 19:58:16.793802 28239 sys_catalog.cc:564] T 00000000000000000000000000000000 P dda80c31753249b38866f853b705e034 [sys.catalog]: configured and running, proceeding with master startup.
I20250809 19:58:16.800897 28246 sys_catalog.cc:455] T 00000000000000000000000000000000 P dda80c31753249b38866f853b705e034 [sys.catalog]: SysCatalogTable state changed. Reason: New leader dda80c31753249b38866f853b705e034. Latest consensus state: current_term: 1 leader_uuid: "dda80c31753249b38866f853b705e034" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "dda80c31753249b38866f853b705e034" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 43713 } } }
I20250809 19:58:16.801036 28245 sys_catalog.cc:455] T 00000000000000000000000000000000 P dda80c31753249b38866f853b705e034 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "dda80c31753249b38866f853b705e034" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "dda80c31753249b38866f853b705e034" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 43713 } } }
I20250809 19:58:16.801576 28245 sys_catalog.cc:458] T 00000000000000000000000000000000 P dda80c31753249b38866f853b705e034 [sys.catalog]: This master's current role is: LEADER
I20250809 19:58:16.801549 28246 sys_catalog.cc:458] T 00000000000000000000000000000000 P dda80c31753249b38866f853b705e034 [sys.catalog]: This master's current role is: LEADER
I20250809 19:58:16.805414 28251 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250809 19:58:16.814863 28251 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250809 19:58:16.831578 28251 catalog_manager.cc:1349] Generated new cluster ID: ffb539346f144802ae57fadd7f056445
I20250809 19:58:16.831771 28251 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250809 19:58:16.853845 28251 catalog_manager.cc:1372] Generated new certificate authority record
I20250809 19:58:16.855304 28251 catalog_manager.cc:1506] Loading token signing keys...
I20250809 19:58:16.864820 28251 catalog_manager.cc:5955] T 00000000000000000000000000000000 P dda80c31753249b38866f853b705e034: Generated new TSK 0
I20250809 19:58:16.865602 28251 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250809 19:58:16.873432 26098 external_mini_cluster.cc:1366] Running /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
/tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.25.124.129:0
--local_ip_for_outbound_sockets=127.25.124.129
--webserver_interface=127.25.124.129
--webserver_port=0
--tserver_master_addrs=127.25.124.190:43713
--builtin_ntp_servers=127.25.124.148:36801
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
W20250809 19:58:17.124483 28263 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 19:58:17.124892 28263 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250809 19:58:17.125324 28263 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 19:58:17.151299 28263 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250809 19:58:17.151968 28263 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.25.124.129
I20250809 19:58:17.181406 28263 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.25.124.148:36801
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.25.124.129:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.25.124.129
--webserver_port=0
--tserver_master_addrs=127.25.124.190:43713
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.25.124.129
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 09 Aug 2025 19:43:21 UTC on 5fd53c4cbb9d
build id 7489
TSAN enabled
I20250809 19:58:17.182462 28263 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250809 19:58:17.183806 28263 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250809 19:58:17.194490 28269 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:58:17.198320 28270 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:58:17.199193 28272 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250809 19:58:17.200101 28263 server_base.cc:1047] running on GCE node
I20250809 19:58:18.209606 28263 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250809 19:58:18.211936 28263 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250809 19:58:18.213387 28263 hybrid_clock.cc:648] HybridClock initialized: now 1754769498213352 us; error 37 us; skew 500 ppm
I20250809 19:58:18.214252 28263 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250809 19:58:18.220755 28263 webserver.cc:489] Webserver started at http://127.25.124.129:34741/ using document root <none> and password file <none>
I20250809 19:58:18.221781 28263 fs_manager.cc:362] Metadata directory not provided
I20250809 19:58:18.221997 28263 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250809 19:58:18.222486 28263 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250809 19:58:18.227716 28263 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "fc4527ac90c84f0dbd64471b5ac2b1a4"
format_stamp: "Formatted at 2025-08-09 19:58:18 on dist-test-slave-xzln"
I20250809 19:58:18.228911 28263 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "fc4527ac90c84f0dbd64471b5ac2b1a4"
format_stamp: "Formatted at 2025-08-09 19:58:18 on dist-test-slave-xzln"
I20250809 19:58:18.235184 28263 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.007s sys 0.000s
I20250809 19:58:18.239887 28279 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250809 19:58:18.240758 28263 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.003s sys 0.001s
I20250809 19:58:18.241006 28263 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "fc4527ac90c84f0dbd64471b5ac2b1a4"
format_stamp: "Formatted at 2025-08-09 19:58:18 on dist-test-slave-xzln"
I20250809 19:58:18.241325 28263 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250809 19:58:18.279826 28263 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250809 19:58:18.280938 28263 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250809 19:58:18.281352 28263 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250809 19:58:18.283387 28263 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250809 19:58:18.286655 28263 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250809 19:58:18.286830 28263 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250809 19:58:18.287043 28263 ts_tablet_manager.cc:610] Registered 0 tablets
I20250809 19:58:18.287178 28263 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250809 19:58:18.402495 28263 rpc_server.cc:307] RPC server started. Bound to: 127.25.124.129:39509
I20250809 19:58:18.402585 28391 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.25.124.129:39509 every 8 connection(s)
I20250809 19:58:18.404750 28263 server_base.cc:1179] Dumped server information to /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250809 19:58:18.411250 26098 external_mini_cluster.cc:1428] Started /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu as pid 28263
I20250809 19:58:18.411623 26098 external_mini_cluster.cc:1442] Reading /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250809 19:58:18.416756 26098 external_mini_cluster.cc:1366] Running /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
/tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.25.124.130:0
--local_ip_for_outbound_sockets=127.25.124.130
--webserver_interface=127.25.124.130
--webserver_port=0
--tserver_master_addrs=127.25.124.190:43713
--builtin_ntp_servers=127.25.124.148:36801
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250809 19:58:18.425275 28392 heartbeater.cc:344] Connected to a master server at 127.25.124.190:43713
I20250809 19:58:18.425591 28392 heartbeater.cc:461] Registering TS with master...
I20250809 19:58:18.426380 28392 heartbeater.cc:507] Master 127.25.124.190:43713 requested a full tablet report, sending...
I20250809 19:58:18.428444 28204 ts_manager.cc:194] Registered new tserver with Master: fc4527ac90c84f0dbd64471b5ac2b1a4 (127.25.124.129:39509)
I20250809 19:58:18.429987 28204 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.25.124.129:59091
W20250809 19:58:18.672740 28396 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 19:58:18.673118 28396 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250809 19:58:18.673601 28396 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 19:58:18.699522 28396 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250809 19:58:18.700213 28396 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.25.124.130
I20250809 19:58:18.727718 28396 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.25.124.148:36801
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.25.124.130:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.25.124.130
--webserver_port=0
--tserver_master_addrs=127.25.124.190:43713
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.25.124.130
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 09 Aug 2025 19:43:21 UTC on 5fd53c4cbb9d
build id 7489
TSAN enabled
I20250809 19:58:18.728821 28396 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250809 19:58:18.730276 28396 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250809 19:58:18.740494 28402 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250809 19:58:19.432657 28392 heartbeater.cc:499] Master 127.25.124.190:43713 was elected leader, sending a full tablet report...
W20250809 19:58:18.741539 28403 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:58:19.779001 28396 thread.cc:641] GCE (cloud detector) Time spent creating pthread: real 1.039s user 0.343s sys 0.693s
W20250809 19:58:19.779398 28396 thread.cc:608] GCE (cloud detector) Time spent starting thread: real 1.040s user 0.347s sys 0.693s
W20250809 19:58:19.782860 28407 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250809 19:58:19.786118 28396 server_base.cc:1047] running on GCE node
I20250809 19:58:19.787503 28396 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250809 19:58:19.790112 28396 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250809 19:58:19.791550 28396 hybrid_clock.cc:648] HybridClock initialized: now 1754769499791463 us; error 77 us; skew 500 ppm
I20250809 19:58:19.792554 28396 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250809 19:58:19.800693 28396 webserver.cc:489] Webserver started at http://127.25.124.130:35153/ using document root <none> and password file <none>
I20250809 19:58:19.801834 28396 fs_manager.cc:362] Metadata directory not provided
I20250809 19:58:19.802075 28396 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250809 19:58:19.802590 28396 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250809 19:58:19.808915 28396 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data/instance:
uuid: "cd66db8ebd804ad69e45ad6619c18f6d"
format_stamp: "Formatted at 2025-08-09 19:58:19 on dist-test-slave-xzln"
I20250809 19:58:19.810304 28396 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal/instance:
uuid: "cd66db8ebd804ad69e45ad6619c18f6d"
format_stamp: "Formatted at 2025-08-09 19:58:19 on dist-test-slave-xzln"
I20250809 19:58:19.819335 28396 fs_manager.cc:696] Time spent creating directory manager: real 0.008s user 0.006s sys 0.004s
I20250809 19:58:19.826182 28412 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250809 19:58:19.827371 28396 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.000s
I20250809 19:58:19.827785 28396 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "cd66db8ebd804ad69e45ad6619c18f6d"
format_stamp: "Formatted at 2025-08-09 19:58:19 on dist-test-slave-xzln"
I20250809 19:58:19.828208 28396 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250809 19:58:19.898916 28396 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250809 19:58:19.900656 28396 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250809 19:58:19.900983 28396 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250809 19:58:19.903043 28396 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250809 19:58:19.906400 28396 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250809 19:58:19.906567 28396 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250809 19:58:19.906729 28396 ts_tablet_manager.cc:610] Registered 0 tablets
I20250809 19:58:19.906841 28396 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250809 19:58:20.019131 28396 rpc_server.cc:307] RPC server started. Bound to: 127.25.124.130:39705
I20250809 19:58:20.019236 28524 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.25.124.130:39705 every 8 connection(s)
I20250809 19:58:20.021289 28396 server_base.cc:1179] Dumped server information to /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250809 19:58:20.026665 26098 external_mini_cluster.cc:1428] Started /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu as pid 28396
I20250809 19:58:20.027065 26098 external_mini_cluster.cc:1442] Reading /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal/instance
I20250809 19:58:20.032310 26098 external_mini_cluster.cc:1366] Running /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
/tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.25.124.131:0
--local_ip_for_outbound_sockets=127.25.124.131
--webserver_interface=127.25.124.131
--webserver_port=0
--tserver_master_addrs=127.25.124.190:43713
--builtin_ntp_servers=127.25.124.148:36801
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250809 19:58:20.039346 28525 heartbeater.cc:344] Connected to a master server at 127.25.124.190:43713
I20250809 19:58:20.039671 28525 heartbeater.cc:461] Registering TS with master...
I20250809 19:58:20.040468 28525 heartbeater.cc:507] Master 127.25.124.190:43713 requested a full tablet report, sending...
I20250809 19:58:20.042199 28204 ts_manager.cc:194] Registered new tserver with Master: cd66db8ebd804ad69e45ad6619c18f6d (127.25.124.130:39705)
I20250809 19:58:20.043232 28204 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.25.124.130:47771
W20250809 19:58:20.278118 28529 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 19:58:20.278543 28529 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250809 19:58:20.278923 28529 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 19:58:20.304212 28529 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250809 19:58:20.304832 28529 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.25.124.131
I20250809 19:58:20.332371 28529 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.25.124.148:36801
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.25.124.131:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.25.124.131
--webserver_port=0
--tserver_master_addrs=127.25.124.190:43713
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.25.124.131
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 09 Aug 2025 19:43:21 UTC on 5fd53c4cbb9d
build id 7489
TSAN enabled
I20250809 19:58:20.333317 28529 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250809 19:58:20.334585 28529 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250809 19:58:20.344292 28535 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250809 19:58:21.045660 28525 heartbeater.cc:499] Master 127.25.124.190:43713 was elected leader, sending a full tablet report...
W20250809 19:58:21.748008 28534 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 28529
W20250809 19:58:21.826925 28534 kernel_stack_watchdog.cc:198] Thread 28529 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:642 for 401ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250809 19:58:21.827890 28529 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.483s user 0.490s sys 0.992s
W20250809 19:58:21.828302 28529 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.484s user 0.490s sys 0.992s
W20250809 19:58:20.344722 28536 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:58:21.830039 28538 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:58:21.832634 28537 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1483 milliseconds
I20250809 19:58:21.832652 28529 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250809 19:58:21.833726 28529 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250809 19:58:21.835572 28529 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250809 19:58:21.836894 28529 hybrid_clock.cc:648] HybridClock initialized: now 1754769501836859 us; error 25 us; skew 500 ppm
I20250809 19:58:21.837890 28529 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250809 19:58:21.843194 28529 webserver.cc:489] Webserver started at http://127.25.124.131:44351/ using document root <none> and password file <none>
I20250809 19:58:21.843995 28529 fs_manager.cc:362] Metadata directory not provided
I20250809 19:58:21.844159 28529 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250809 19:58:21.844516 28529 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250809 19:58:21.848204 28529 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data/instance:
uuid: "91ba591f943a4dccbfd1b516495e17ef"
format_stamp: "Formatted at 2025-08-09 19:58:21 on dist-test-slave-xzln"
I20250809 19:58:21.849102 28529 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal/instance:
uuid: "91ba591f943a4dccbfd1b516495e17ef"
format_stamp: "Formatted at 2025-08-09 19:58:21 on dist-test-slave-xzln"
I20250809 19:58:21.855063 28529 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.005s sys 0.002s
I20250809 19:58:21.859699 28545 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250809 19:58:21.860527 28529 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.003s sys 0.001s
I20250809 19:58:21.860802 28529 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "91ba591f943a4dccbfd1b516495e17ef"
format_stamp: "Formatted at 2025-08-09 19:58:21 on dist-test-slave-xzln"
I20250809 19:58:21.861063 28529 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250809 19:58:21.905856 28529 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250809 19:58:21.907006 28529 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250809 19:58:21.907398 28529 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250809 19:58:21.909531 28529 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250809 19:58:21.913250 28529 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250809 19:58:21.913426 28529 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250809 19:58:21.913652 28529 ts_tablet_manager.cc:610] Registered 0 tablets
I20250809 19:58:21.913790 28529 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250809 19:58:22.032871 28529 rpc_server.cc:307] RPC server started. Bound to: 127.25.124.131:33973
I20250809 19:58:22.032913 28657 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.25.124.131:33973 every 8 connection(s)
I20250809 19:58:22.035827 28529 server_base.cc:1179] Dumped server information to /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250809 19:58:22.043162 26098 external_mini_cluster.cc:1428] Started /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu as pid 28529
I20250809 19:58:22.043592 26098 external_mini_cluster.cc:1442] Reading /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal/instance
I20250809 19:58:22.059764 28658 heartbeater.cc:344] Connected to a master server at 127.25.124.190:43713
I20250809 19:58:22.060140 28658 heartbeater.cc:461] Registering TS with master...
I20250809 19:58:22.061213 28658 heartbeater.cc:507] Master 127.25.124.190:43713 requested a full tablet report, sending...
I20250809 19:58:22.063077 28204 ts_manager.cc:194] Registered new tserver with Master: 91ba591f943a4dccbfd1b516495e17ef (127.25.124.131:33973)
I20250809 19:58:22.064201 28204 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.25.124.131:42221
I20250809 19:58:22.075012 26098 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250809 19:58:22.102293 28204 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:47300:
name: "TestTable"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
owner: "alice"
W20250809 19:58:22.117997 28204 catalog_manager.cc:6944] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table TestTable in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250809 19:58:22.166304 28593 tablet_service.cc:1468] Processing CreateTablet for tablet b01bff37891f42c08534ab32a54b0537 (DEFAULT_TABLE table=TestTable [id=d25c12eaa80e419f8fee78a90c5eea9f]), partition=RANGE (key) PARTITION UNBOUNDED
I20250809 19:58:22.169737 28593 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet b01bff37891f42c08534ab32a54b0537. 1 dirs total, 0 dirs full, 0 dirs failed
I20250809 19:58:22.169643 28460 tablet_service.cc:1468] Processing CreateTablet for tablet b01bff37891f42c08534ab32a54b0537 (DEFAULT_TABLE table=TestTable [id=d25c12eaa80e419f8fee78a90c5eea9f]), partition=RANGE (key) PARTITION UNBOUNDED
I20250809 19:58:22.171125 28460 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet b01bff37891f42c08534ab32a54b0537. 1 dirs total, 0 dirs full, 0 dirs failed
I20250809 19:58:22.173005 28327 tablet_service.cc:1468] Processing CreateTablet for tablet b01bff37891f42c08534ab32a54b0537 (DEFAULT_TABLE table=TestTable [id=d25c12eaa80e419f8fee78a90c5eea9f]), partition=RANGE (key) PARTITION UNBOUNDED
I20250809 19:58:22.174182 28327 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet b01bff37891f42c08534ab32a54b0537. 1 dirs total, 0 dirs full, 0 dirs failed
I20250809 19:58:22.191030 28677 tablet_bootstrap.cc:492] T b01bff37891f42c08534ab32a54b0537 P fc4527ac90c84f0dbd64471b5ac2b1a4: Bootstrap starting.
I20250809 19:58:22.192931 28678 tablet_bootstrap.cc:492] T b01bff37891f42c08534ab32a54b0537 P 91ba591f943a4dccbfd1b516495e17ef: Bootstrap starting.
I20250809 19:58:22.194622 28679 tablet_bootstrap.cc:492] T b01bff37891f42c08534ab32a54b0537 P cd66db8ebd804ad69e45ad6619c18f6d: Bootstrap starting.
I20250809 19:58:22.198344 28677 tablet_bootstrap.cc:654] T b01bff37891f42c08534ab32a54b0537 P fc4527ac90c84f0dbd64471b5ac2b1a4: Neither blocks nor log segments found. Creating new log.
I20250809 19:58:22.199326 28679 tablet_bootstrap.cc:654] T b01bff37891f42c08534ab32a54b0537 P cd66db8ebd804ad69e45ad6619c18f6d: Neither blocks nor log segments found. Creating new log.
I20250809 19:58:22.199499 28678 tablet_bootstrap.cc:654] T b01bff37891f42c08534ab32a54b0537 P 91ba591f943a4dccbfd1b516495e17ef: Neither blocks nor log segments found. Creating new log.
I20250809 19:58:22.200379 28677 log.cc:826] T b01bff37891f42c08534ab32a54b0537 P fc4527ac90c84f0dbd64471b5ac2b1a4: Log is configured to *not* fsync() on all Append() calls
I20250809 19:58:22.200773 28679 log.cc:826] T b01bff37891f42c08534ab32a54b0537 P cd66db8ebd804ad69e45ad6619c18f6d: Log is configured to *not* fsync() on all Append() calls
I20250809 19:58:22.201447 28678 log.cc:826] T b01bff37891f42c08534ab32a54b0537 P 91ba591f943a4dccbfd1b516495e17ef: Log is configured to *not* fsync() on all Append() calls
I20250809 19:58:22.205157 28677 tablet_bootstrap.cc:492] T b01bff37891f42c08534ab32a54b0537 P fc4527ac90c84f0dbd64471b5ac2b1a4: No bootstrap required, opened a new log
I20250809 19:58:22.205555 28677 ts_tablet_manager.cc:1397] T b01bff37891f42c08534ab32a54b0537 P fc4527ac90c84f0dbd64471b5ac2b1a4: Time spent bootstrapping tablet: real 0.015s user 0.005s sys 0.007s
I20250809 19:58:22.205765 28679 tablet_bootstrap.cc:492] T b01bff37891f42c08534ab32a54b0537 P cd66db8ebd804ad69e45ad6619c18f6d: No bootstrap required, opened a new log
I20250809 19:58:22.205894 28678 tablet_bootstrap.cc:492] T b01bff37891f42c08534ab32a54b0537 P 91ba591f943a4dccbfd1b516495e17ef: No bootstrap required, opened a new log
I20250809 19:58:22.206097 28679 ts_tablet_manager.cc:1397] T b01bff37891f42c08534ab32a54b0537 P cd66db8ebd804ad69e45ad6619c18f6d: Time spent bootstrapping tablet: real 0.012s user 0.000s sys 0.009s
I20250809 19:58:22.206305 28678 ts_tablet_manager.cc:1397] T b01bff37891f42c08534ab32a54b0537 P 91ba591f943a4dccbfd1b516495e17ef: Time spent bootstrapping tablet: real 0.014s user 0.005s sys 0.006s
I20250809 19:58:22.220475 28679 raft_consensus.cc:357] T b01bff37891f42c08534ab32a54b0537 P cd66db8ebd804ad69e45ad6619c18f6d [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "91ba591f943a4dccbfd1b516495e17ef" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 33973 } } peers { permanent_uuid: "fc4527ac90c84f0dbd64471b5ac2b1a4" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 39509 } } peers { permanent_uuid: "cd66db8ebd804ad69e45ad6619c18f6d" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 39705 } }
I20250809 19:58:22.221000 28679 raft_consensus.cc:383] T b01bff37891f42c08534ab32a54b0537 P cd66db8ebd804ad69e45ad6619c18f6d [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250809 19:58:22.221210 28679 raft_consensus.cc:738] T b01bff37891f42c08534ab32a54b0537 P cd66db8ebd804ad69e45ad6619c18f6d [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: cd66db8ebd804ad69e45ad6619c18f6d, State: Initialized, Role: FOLLOWER
I20250809 19:58:22.221796 28679 consensus_queue.cc:260] T b01bff37891f42c08534ab32a54b0537 P cd66db8ebd804ad69e45ad6619c18f6d [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "91ba591f943a4dccbfd1b516495e17ef" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 33973 } } peers { permanent_uuid: "fc4527ac90c84f0dbd64471b5ac2b1a4" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 39509 } } peers { permanent_uuid: "cd66db8ebd804ad69e45ad6619c18f6d" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 39705 } }
I20250809 19:58:22.225085 28679 ts_tablet_manager.cc:1428] T b01bff37891f42c08534ab32a54b0537 P cd66db8ebd804ad69e45ad6619c18f6d: Time spent starting tablet: real 0.019s user 0.016s sys 0.003s
I20250809 19:58:22.228600 28677 raft_consensus.cc:357] T b01bff37891f42c08534ab32a54b0537 P fc4527ac90c84f0dbd64471b5ac2b1a4 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "91ba591f943a4dccbfd1b516495e17ef" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 33973 } } peers { permanent_uuid: "fc4527ac90c84f0dbd64471b5ac2b1a4" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 39509 } } peers { permanent_uuid: "cd66db8ebd804ad69e45ad6619c18f6d" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 39705 } }
I20250809 19:58:22.229382 28677 raft_consensus.cc:383] T b01bff37891f42c08534ab32a54b0537 P fc4527ac90c84f0dbd64471b5ac2b1a4 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250809 19:58:22.230057 28678 raft_consensus.cc:357] T b01bff37891f42c08534ab32a54b0537 P 91ba591f943a4dccbfd1b516495e17ef [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "91ba591f943a4dccbfd1b516495e17ef" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 33973 } } peers { permanent_uuid: "fc4527ac90c84f0dbd64471b5ac2b1a4" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 39509 } } peers { permanent_uuid: "cd66db8ebd804ad69e45ad6619c18f6d" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 39705 } }
I20250809 19:58:22.230484 28677 raft_consensus.cc:738] T b01bff37891f42c08534ab32a54b0537 P fc4527ac90c84f0dbd64471b5ac2b1a4 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: fc4527ac90c84f0dbd64471b5ac2b1a4, State: Initialized, Role: FOLLOWER
I20250809 19:58:22.230824 28678 raft_consensus.cc:383] T b01bff37891f42c08534ab32a54b0537 P 91ba591f943a4dccbfd1b516495e17ef [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250809 19:58:22.231178 28678 raft_consensus.cc:738] T b01bff37891f42c08534ab32a54b0537 P 91ba591f943a4dccbfd1b516495e17ef [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 91ba591f943a4dccbfd1b516495e17ef, State: Initialized, Role: FOLLOWER
I20250809 19:58:22.231256 28677 consensus_queue.cc:260] T b01bff37891f42c08534ab32a54b0537 P fc4527ac90c84f0dbd64471b5ac2b1a4 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "91ba591f943a4dccbfd1b516495e17ef" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 33973 } } peers { permanent_uuid: "fc4527ac90c84f0dbd64471b5ac2b1a4" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 39509 } } peers { permanent_uuid: "cd66db8ebd804ad69e45ad6619c18f6d" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 39705 } }
I20250809 19:58:22.231930 28678 consensus_queue.cc:260] T b01bff37891f42c08534ab32a54b0537 P 91ba591f943a4dccbfd1b516495e17ef [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "91ba591f943a4dccbfd1b516495e17ef" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 33973 } } peers { permanent_uuid: "fc4527ac90c84f0dbd64471b5ac2b1a4" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 39509 } } peers { permanent_uuid: "cd66db8ebd804ad69e45ad6619c18f6d" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 39705 } }
I20250809 19:58:22.235054 28658 heartbeater.cc:499] Master 127.25.124.190:43713 was elected leader, sending a full tablet report...
I20250809 19:58:22.236246 28678 ts_tablet_manager.cc:1428] T b01bff37891f42c08534ab32a54b0537 P 91ba591f943a4dccbfd1b516495e17ef: Time spent starting tablet: real 0.030s user 0.028s sys 0.001s
I20250809 19:58:22.236835 28677 ts_tablet_manager.cc:1428] T b01bff37891f42c08534ab32a54b0537 P fc4527ac90c84f0dbd64471b5ac2b1a4: Time spent starting tablet: real 0.031s user 0.026s sys 0.003s
I20250809 19:58:22.268321 28685 raft_consensus.cc:491] T b01bff37891f42c08534ab32a54b0537 P 91ba591f943a4dccbfd1b516495e17ef [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250809 19:58:22.268683 28685 raft_consensus.cc:513] T b01bff37891f42c08534ab32a54b0537 P 91ba591f943a4dccbfd1b516495e17ef [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "91ba591f943a4dccbfd1b516495e17ef" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 33973 } } peers { permanent_uuid: "fc4527ac90c84f0dbd64471b5ac2b1a4" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 39509 } } peers { permanent_uuid: "cd66db8ebd804ad69e45ad6619c18f6d" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 39705 } }
I20250809 19:58:22.270538 28685 leader_election.cc:290] T b01bff37891f42c08534ab32a54b0537 P 91ba591f943a4dccbfd1b516495e17ef [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers fc4527ac90c84f0dbd64471b5ac2b1a4 (127.25.124.129:39509), cd66db8ebd804ad69e45ad6619c18f6d (127.25.124.130:39705)
W20250809 19:58:22.277159 28526 tablet.cc:2378] T b01bff37891f42c08534ab32a54b0537 P cd66db8ebd804ad69e45ad6619c18f6d: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250809 19:58:22.278887 28347 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "b01bff37891f42c08534ab32a54b0537" candidate_uuid: "91ba591f943a4dccbfd1b516495e17ef" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "fc4527ac90c84f0dbd64471b5ac2b1a4" is_pre_election: true
I20250809 19:58:22.279583 28347 raft_consensus.cc:2466] T b01bff37891f42c08534ab32a54b0537 P fc4527ac90c84f0dbd64471b5ac2b1a4 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 91ba591f943a4dccbfd1b516495e17ef in term 0.
I20250809 19:58:22.280567 28549 leader_election.cc:304] T b01bff37891f42c08534ab32a54b0537 P 91ba591f943a4dccbfd1b516495e17ef [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 91ba591f943a4dccbfd1b516495e17ef, fc4527ac90c84f0dbd64471b5ac2b1a4; no voters:
I20250809 19:58:22.281180 28685 raft_consensus.cc:2802] T b01bff37891f42c08534ab32a54b0537 P 91ba591f943a4dccbfd1b516495e17ef [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250809 19:58:22.281162 28480 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "b01bff37891f42c08534ab32a54b0537" candidate_uuid: "91ba591f943a4dccbfd1b516495e17ef" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "cd66db8ebd804ad69e45ad6619c18f6d" is_pre_election: true
I20250809 19:58:22.281463 28685 raft_consensus.cc:491] T b01bff37891f42c08534ab32a54b0537 P 91ba591f943a4dccbfd1b516495e17ef [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250809 19:58:22.281711 28685 raft_consensus.cc:3058] T b01bff37891f42c08534ab32a54b0537 P 91ba591f943a4dccbfd1b516495e17ef [term 0 FOLLOWER]: Advancing to term 1
I20250809 19:58:22.281805 28480 raft_consensus.cc:2466] T b01bff37891f42c08534ab32a54b0537 P cd66db8ebd804ad69e45ad6619c18f6d [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 91ba591f943a4dccbfd1b516495e17ef in term 0.
I20250809 19:58:22.285322 28685 raft_consensus.cc:513] T b01bff37891f42c08534ab32a54b0537 P 91ba591f943a4dccbfd1b516495e17ef [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "91ba591f943a4dccbfd1b516495e17ef" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 33973 } } peers { permanent_uuid: "fc4527ac90c84f0dbd64471b5ac2b1a4" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 39509 } } peers { permanent_uuid: "cd66db8ebd804ad69e45ad6619c18f6d" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 39705 } }
I20250809 19:58:22.286358 28685 leader_election.cc:290] T b01bff37891f42c08534ab32a54b0537 P 91ba591f943a4dccbfd1b516495e17ef [CANDIDATE]: Term 1 election: Requested vote from peers fc4527ac90c84f0dbd64471b5ac2b1a4 (127.25.124.129:39509), cd66db8ebd804ad69e45ad6619c18f6d (127.25.124.130:39705)
I20250809 19:58:22.286945 28347 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "b01bff37891f42c08534ab32a54b0537" candidate_uuid: "91ba591f943a4dccbfd1b516495e17ef" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "fc4527ac90c84f0dbd64471b5ac2b1a4"
I20250809 19:58:22.287053 28480 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "b01bff37891f42c08534ab32a54b0537" candidate_uuid: "91ba591f943a4dccbfd1b516495e17ef" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "cd66db8ebd804ad69e45ad6619c18f6d"
I20250809 19:58:22.287273 28347 raft_consensus.cc:3058] T b01bff37891f42c08534ab32a54b0537 P fc4527ac90c84f0dbd64471b5ac2b1a4 [term 0 FOLLOWER]: Advancing to term 1
I20250809 19:58:22.287397 28480 raft_consensus.cc:3058] T b01bff37891f42c08534ab32a54b0537 P cd66db8ebd804ad69e45ad6619c18f6d [term 0 FOLLOWER]: Advancing to term 1
W20250809 19:58:22.289608 28659 tablet.cc:2378] T b01bff37891f42c08534ab32a54b0537 P 91ba591f943a4dccbfd1b516495e17ef: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250809 19:58:22.290956 28347 raft_consensus.cc:2466] T b01bff37891f42c08534ab32a54b0537 P fc4527ac90c84f0dbd64471b5ac2b1a4 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 91ba591f943a4dccbfd1b516495e17ef in term 1.
I20250809 19:58:22.291585 28549 leader_election.cc:304] T b01bff37891f42c08534ab32a54b0537 P 91ba591f943a4dccbfd1b516495e17ef [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 91ba591f943a4dccbfd1b516495e17ef, fc4527ac90c84f0dbd64471b5ac2b1a4; no voters:
I20250809 19:58:22.291682 28480 raft_consensus.cc:2466] T b01bff37891f42c08534ab32a54b0537 P cd66db8ebd804ad69e45ad6619c18f6d [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 91ba591f943a4dccbfd1b516495e17ef in term 1.
I20250809 19:58:22.292038 28685 raft_consensus.cc:2802] T b01bff37891f42c08534ab32a54b0537 P 91ba591f943a4dccbfd1b516495e17ef [term 1 FOLLOWER]: Leader election won for term 1
I20250809 19:58:22.293391 28685 raft_consensus.cc:695] T b01bff37891f42c08534ab32a54b0537 P 91ba591f943a4dccbfd1b516495e17ef [term 1 LEADER]: Becoming Leader. State: Replica: 91ba591f943a4dccbfd1b516495e17ef, State: Running, Role: LEADER
I20250809 19:58:22.293989 28685 consensus_queue.cc:237] T b01bff37891f42c08534ab32a54b0537 P 91ba591f943a4dccbfd1b516495e17ef [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "91ba591f943a4dccbfd1b516495e17ef" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 33973 } } peers { permanent_uuid: "fc4527ac90c84f0dbd64471b5ac2b1a4" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 39509 } } peers { permanent_uuid: "cd66db8ebd804ad69e45ad6619c18f6d" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 39705 } }
I20250809 19:58:22.302853 28203 catalog_manager.cc:5582] T b01bff37891f42c08534ab32a54b0537 P 91ba591f943a4dccbfd1b516495e17ef reported cstate change: term changed from 0 to 1, leader changed from <none> to 91ba591f943a4dccbfd1b516495e17ef (127.25.124.131). New cstate: current_term: 1 leader_uuid: "91ba591f943a4dccbfd1b516495e17ef" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "91ba591f943a4dccbfd1b516495e17ef" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 33973 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "fc4527ac90c84f0dbd64471b5ac2b1a4" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 39509 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "cd66db8ebd804ad69e45ad6619c18f6d" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 39705 } health_report { overall_health: UNKNOWN } } }
I20250809 19:58:22.346804 26098 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250809 19:58:22.349368 26098 ts_itest-base.cc:246] Waiting for 1 tablets on tserver fc4527ac90c84f0dbd64471b5ac2b1a4 to finish bootstrapping
I20250809 19:58:22.359638 26098 ts_itest-base.cc:246] Waiting for 1 tablets on tserver cd66db8ebd804ad69e45ad6619c18f6d to finish bootstrapping
I20250809 19:58:22.368497 26098 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 91ba591f943a4dccbfd1b516495e17ef to finish bootstrapping
W20250809 19:58:22.419572 28393 tablet.cc:2378] T b01bff37891f42c08534ab32a54b0537 P fc4527ac90c84f0dbd64471b5ac2b1a4: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250809 19:58:22.780408 28685 consensus_queue.cc:1035] T b01bff37891f42c08534ab32a54b0537 P 91ba591f943a4dccbfd1b516495e17ef [LEADER]: Connected to new peer: Peer: permanent_uuid: "fc4527ac90c84f0dbd64471b5ac2b1a4" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 39509 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.001s
I20250809 19:58:22.869441 28702 consensus_queue.cc:1035] T b01bff37891f42c08534ab32a54b0537 P 91ba591f943a4dccbfd1b516495e17ef [LEADER]: Connected to new peer: Peer: permanent_uuid: "cd66db8ebd804ad69e45ad6619c18f6d" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 39705 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.001s
W20250809 19:58:23.786465 28203 server_base.cc:1129] Unauthorized access attempt to method kudu.master.MasterService.RefreshAuthzCache from {username='slave'} at 127.0.0.1:47310
I20250809 19:58:24.817819 26098 external_mini_cluster.cc:1658] Killing /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu with pid 28263
I20250809 19:58:24.838616 26098 external_mini_cluster.cc:1658] Killing /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu with pid 28396
I20250809 19:58:24.859027 26098 external_mini_cluster.cc:1658] Killing /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu with pid 28529
I20250809 19:58:24.879717 26098 external_mini_cluster.cc:1658] Killing /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu with pid 28170
2025-08-09T19:58:24Z chronyd exiting
[ OK ] AdminCliTest.TestAuthzResetCacheNotAuthorized (10197 ms)
[ RUN ] AdminCliTest.TestRebuildTables
I20250809 19:58:24.930351 26098 test_util.cc:276] Using random seed: 486607384
I20250809 19:58:24.933677 26098 ts_itest-base.cc:115] Starting cluster with:
I20250809 19:58:24.933804 26098 ts_itest-base.cc:116] --------------
I20250809 19:58:24.933941 26098 ts_itest-base.cc:117] 3 tablet servers
I20250809 19:58:24.934059 26098 ts_itest-base.cc:118] 3 replicas per TS
I20250809 19:58:24.934187 26098 ts_itest-base.cc:119] --------------
2025-08-09T19:58:24Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-08-09T19:58:24Z Disabled control of system clock
I20250809 19:58:24.961798 26098 external_mini_cluster.cc:1366] Running /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
/tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.25.124.190:34851
--webserver_interface=127.25.124.190
--webserver_port=0
--builtin_ntp_servers=127.25.124.148:46385
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.25.124.190:34851 with env {}
W20250809 19:58:25.205863 28729 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 19:58:25.206292 28729 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250809 19:58:25.206632 28729 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 19:58:25.231312 28729 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250809 19:58:25.231523 28729 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250809 19:58:25.231689 28729 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250809 19:58:25.231859 28729 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250809 19:58:25.259595 28729 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.25.124.148:46385
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.25.124.190:34851
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.25.124.190:34851
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.25.124.190
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 09 Aug 2025 19:43:21 UTC on 5fd53c4cbb9d
build id 7489
TSAN enabled
I20250809 19:58:25.260632 28729 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250809 19:58:25.261950 28729 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250809 19:58:25.271415 28735 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:58:25.274467 28738 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:58:25.272253 28736 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:58:26.286365 28737 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1010 milliseconds
I20250809 19:58:26.286471 28729 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250809 19:58:26.287611 28729 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250809 19:58:26.289809 28729 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250809 19:58:26.291093 28729 hybrid_clock.cc:648] HybridClock initialized: now 1754769506291073 us; error 38 us; skew 500 ppm
I20250809 19:58:26.291810 28729 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250809 19:58:26.297458 28729 webserver.cc:489] Webserver started at http://127.25.124.190:36449/ using document root <none> and password file <none>
I20250809 19:58:26.298218 28729 fs_manager.cc:362] Metadata directory not provided
I20250809 19:58:26.298395 28729 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250809 19:58:26.298789 28729 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250809 19:58:26.302650 28729 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "8066dbf4c38443dabf13535fbe0eb147"
format_stamp: "Formatted at 2025-08-09 19:58:26 on dist-test-slave-xzln"
I20250809 19:58:26.303838 28729 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "8066dbf4c38443dabf13535fbe0eb147"
format_stamp: "Formatted at 2025-08-09 19:58:26 on dist-test-slave-xzln"
I20250809 19:58:26.309752 28729 fs_manager.cc:696] Time spent creating directory manager: real 0.005s user 0.005s sys 0.000s
I20250809 19:58:26.314183 28745 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250809 19:58:26.315044 28729 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.002s sys 0.000s
I20250809 19:58:26.315349 28729 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
uuid: "8066dbf4c38443dabf13535fbe0eb147"
format_stamp: "Formatted at 2025-08-09 19:58:26 on dist-test-slave-xzln"
I20250809 19:58:26.315632 28729 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250809 19:58:26.357263 28729 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250809 19:58:26.358402 28729 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250809 19:58:26.358759 28729 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250809 19:58:26.416370 28729 rpc_server.cc:307] RPC server started. Bound to: 127.25.124.190:34851
I20250809 19:58:26.416432 28796 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.25.124.190:34851 every 8 connection(s)
I20250809 19:58:26.418622 28729 server_base.cc:1179] Dumped server information to /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250809 19:58:26.422092 26098 external_mini_cluster.cc:1428] Started /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu as pid 28729
I20250809 19:58:26.422492 26098 external_mini_cluster.cc:1442] Reading /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250809 19:58:26.423462 28797 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250809 19:58:26.443414 28797 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147: Bootstrap starting.
I20250809 19:58:26.447942 28797 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147: Neither blocks nor log segments found. Creating new log.
I20250809 19:58:26.449262 28797 log.cc:826] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147: Log is configured to *not* fsync() on all Append() calls
I20250809 19:58:26.453052 28797 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147: No bootstrap required, opened a new log
I20250809 19:58:26.467286 28797 raft_consensus.cc:357] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8066dbf4c38443dabf13535fbe0eb147" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 34851 } }
I20250809 19:58:26.467787 28797 raft_consensus.cc:383] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250809 19:58:26.467958 28797 raft_consensus.cc:738] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 8066dbf4c38443dabf13535fbe0eb147, State: Initialized, Role: FOLLOWER
I20250809 19:58:26.468490 28797 consensus_queue.cc:260] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8066dbf4c38443dabf13535fbe0eb147" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 34851 } }
I20250809 19:58:26.468899 28797 raft_consensus.cc:397] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250809 19:58:26.469087 28797 raft_consensus.cc:491] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250809 19:58:26.469302 28797 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [term 0 FOLLOWER]: Advancing to term 1
I20250809 19:58:26.472783 28797 raft_consensus.cc:513] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8066dbf4c38443dabf13535fbe0eb147" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 34851 } }
I20250809 19:58:26.473292 28797 leader_election.cc:304] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 8066dbf4c38443dabf13535fbe0eb147; no voters:
I20250809 19:58:26.474597 28797 leader_election.cc:290] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250809 19:58:26.475193 28802 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [term 1 FOLLOWER]: Leader election won for term 1
I20250809 19:58:26.477149 28802 raft_consensus.cc:695] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [term 1 LEADER]: Becoming Leader. State: Replica: 8066dbf4c38443dabf13535fbe0eb147, State: Running, Role: LEADER
I20250809 19:58:26.477675 28802 consensus_queue.cc:237] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8066dbf4c38443dabf13535fbe0eb147" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 34851 } }
I20250809 19:58:26.478088 28797 sys_catalog.cc:564] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [sys.catalog]: configured and running, proceeding with master startup.
I20250809 19:58:26.488224 28804 sys_catalog.cc:455] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 8066dbf4c38443dabf13535fbe0eb147. Latest consensus state: current_term: 1 leader_uuid: "8066dbf4c38443dabf13535fbe0eb147" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8066dbf4c38443dabf13535fbe0eb147" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 34851 } } }
I20250809 19:58:26.488920 28804 sys_catalog.cc:458] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [sys.catalog]: This master's current role is: LEADER
I20250809 19:58:26.492556 28803 sys_catalog.cc:455] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "8066dbf4c38443dabf13535fbe0eb147" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8066dbf4c38443dabf13535fbe0eb147" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 34851 } } }
I20250809 19:58:26.493175 28803 sys_catalog.cc:458] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [sys.catalog]: This master's current role is: LEADER
I20250809 19:58:26.495708 28810 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250809 19:58:26.505015 28810 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250809 19:58:26.518431 28810 catalog_manager.cc:1349] Generated new cluster ID: 2128a1420868429aa5b5d946fa91acde
I20250809 19:58:26.518663 28810 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250809 19:58:26.541165 28810 catalog_manager.cc:1372] Generated new certificate authority record
I20250809 19:58:26.542717 28810 catalog_manager.cc:1506] Loading token signing keys...
I20250809 19:58:26.558334 28810 catalog_manager.cc:5955] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147: Generated new TSK 0
I20250809 19:58:26.559166 28810 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250809 19:58:26.570811 26098 external_mini_cluster.cc:1366] Running /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
/tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.25.124.129:0
--local_ip_for_outbound_sockets=127.25.124.129
--webserver_interface=127.25.124.129
--webserver_port=0
--tserver_master_addrs=127.25.124.190:34851
--builtin_ntp_servers=127.25.124.148:46385
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
W20250809 19:58:26.820156 28821 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 19:58:26.820559 28821 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250809 19:58:26.820978 28821 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 19:58:26.846732 28821 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250809 19:58:26.847477 28821 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.25.124.129
I20250809 19:58:26.875528 28821 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.25.124.148:46385
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.25.124.129:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.25.124.129
--webserver_port=0
--tserver_master_addrs=127.25.124.190:34851
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.25.124.129
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 09 Aug 2025 19:43:21 UTC on 5fd53c4cbb9d
build id 7489
TSAN enabled
I20250809 19:58:26.876580 28821 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250809 19:58:26.877947 28821 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250809 19:58:26.888298 28827 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:58:26.889768 28828 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:58:28.127121 28830 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:58:28.127597 28829 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection timed out after 1236 milliseconds
W20250809 19:58:28.130180 28821 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.241s user 0.398s sys 0.839s
W20250809 19:58:28.130584 28821 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.242s user 0.399s sys 0.841s
I20250809 19:58:28.130788 28821 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250809 19:58:28.131791 28821 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250809 19:58:28.134116 28821 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250809 19:58:28.135522 28821 hybrid_clock.cc:648] HybridClock initialized: now 1754769508135451 us; error 70 us; skew 500 ppm
I20250809 19:58:28.136248 28821 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250809 19:58:28.143353 28821 webserver.cc:489] Webserver started at http://127.25.124.129:40961/ using document root <none> and password file <none>
I20250809 19:58:28.144309 28821 fs_manager.cc:362] Metadata directory not provided
I20250809 19:58:28.144528 28821 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250809 19:58:28.144981 28821 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250809 19:58:28.150225 28821 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "5206f9ca2f294510a3639afe13cba75a"
format_stamp: "Formatted at 2025-08-09 19:58:28 on dist-test-slave-xzln"
I20250809 19:58:28.151187 28821 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "5206f9ca2f294510a3639afe13cba75a"
format_stamp: "Formatted at 2025-08-09 19:58:28 on dist-test-slave-xzln"
I20250809 19:58:28.157960 28821 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.006s sys 0.000s
I20250809 19:58:28.163961 28837 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250809 19:58:28.165087 28821 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.000s sys 0.004s
I20250809 19:58:28.165460 28821 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "5206f9ca2f294510a3639afe13cba75a"
format_stamp: "Formatted at 2025-08-09 19:58:28 on dist-test-slave-xzln"
I20250809 19:58:28.165856 28821 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250809 19:58:28.228411 28821 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250809 19:58:28.229885 28821 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250809 19:58:28.230325 28821 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250809 19:58:28.233350 28821 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250809 19:58:28.237787 28821 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250809 19:58:28.237993 28821 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250809 19:58:28.238238 28821 ts_tablet_manager.cc:610] Registered 0 tablets
I20250809 19:58:28.238409 28821 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250809 19:58:28.389863 28821 rpc_server.cc:307] RPC server started. Bound to: 127.25.124.129:45679
I20250809 19:58:28.389968 28949 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.25.124.129:45679 every 8 connection(s)
I20250809 19:58:28.392297 28821 server_base.cc:1179] Dumped server information to /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250809 19:58:28.402819 26098 external_mini_cluster.cc:1428] Started /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu as pid 28821
I20250809 19:58:28.403184 26098 external_mini_cluster.cc:1442] Reading /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250809 19:58:28.412037 26098 external_mini_cluster.cc:1366] Running /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
/tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.25.124.130:0
--local_ip_for_outbound_sockets=127.25.124.130
--webserver_interface=127.25.124.130
--webserver_port=0
--tserver_master_addrs=127.25.124.190:34851
--builtin_ntp_servers=127.25.124.148:46385
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250809 19:58:28.419273 28950 heartbeater.cc:344] Connected to a master server at 127.25.124.190:34851
I20250809 19:58:28.419726 28950 heartbeater.cc:461] Registering TS with master...
I20250809 19:58:28.420864 28950 heartbeater.cc:507] Master 127.25.124.190:34851 requested a full tablet report, sending...
I20250809 19:58:28.423550 28762 ts_manager.cc:194] Registered new tserver with Master: 5206f9ca2f294510a3639afe13cba75a (127.25.124.129:45679)
I20250809 19:58:28.425856 28762 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.25.124.129:49765
W20250809 19:58:28.696892 28954 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 19:58:28.697288 28954 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250809 19:58:28.697708 28954 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 19:58:28.723505 28954 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250809 19:58:28.724225 28954 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.25.124.130
I20250809 19:58:28.752184 28954 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.25.124.148:46385
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.25.124.130:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.25.124.130
--webserver_port=0
--tserver_master_addrs=127.25.124.190:34851
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.25.124.130
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 09 Aug 2025 19:43:21 UTC on 5fd53c4cbb9d
build id 7489
TSAN enabled
I20250809 19:58:28.753250 28954 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250809 19:58:28.754570 28954 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250809 19:58:28.765156 28960 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250809 19:58:29.429379 28950 heartbeater.cc:499] Master 127.25.124.190:34851 was elected leader, sending a full tablet report...
W20250809 19:58:28.767917 28963 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:58:28.766593 28961 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:58:29.771179 28962 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1000 milliseconds
I20250809 19:58:29.771317 28954 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250809 19:58:29.772382 28954 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250809 19:58:29.774744 28954 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250809 19:58:29.776152 28954 hybrid_clock.cc:648] HybridClock initialized: now 1754769509776115 us; error 54 us; skew 500 ppm
I20250809 19:58:29.776830 28954 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250809 19:58:29.782223 28954 webserver.cc:489] Webserver started at http://127.25.124.130:38491/ using document root <none> and password file <none>
I20250809 19:58:29.782991 28954 fs_manager.cc:362] Metadata directory not provided
I20250809 19:58:29.783164 28954 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250809 19:58:29.783596 28954 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250809 19:58:29.787374 28954 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data/instance:
uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175"
format_stamp: "Formatted at 2025-08-09 19:58:29 on dist-test-slave-xzln"
I20250809 19:58:29.788293 28954 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal/instance:
uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175"
format_stamp: "Formatted at 2025-08-09 19:58:29 on dist-test-slave-xzln"
I20250809 19:58:29.794315 28954 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.006s sys 0.000s
I20250809 19:58:29.799852 28970 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250809 19:58:29.800671 28954 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.002s sys 0.002s
I20250809 19:58:29.800936 28954 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175"
format_stamp: "Formatted at 2025-08-09 19:58:29 on dist-test-slave-xzln"
I20250809 19:58:29.801219 28954 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250809 19:58:29.856956 28954 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250809 19:58:29.858481 28954 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250809 19:58:29.858942 28954 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250809 19:58:29.861156 28954 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250809 19:58:29.864550 28954 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250809 19:58:29.864799 28954 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250809 19:58:29.865061 28954 ts_tablet_manager.cc:610] Registered 0 tablets
I20250809 19:58:29.865226 28954 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250809 19:58:29.973265 28954 rpc_server.cc:307] RPC server started. Bound to: 127.25.124.130:46405
I20250809 19:58:29.973340 29082 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.25.124.130:46405 every 8 connection(s)
I20250809 19:58:29.975426 28954 server_base.cc:1179] Dumped server information to /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250809 19:58:29.983554 26098 external_mini_cluster.cc:1428] Started /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu as pid 28954
I20250809 19:58:29.983880 26098 external_mini_cluster.cc:1442] Reading /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal/instance
I20250809 19:58:29.988785 26098 external_mini_cluster.cc:1366] Running /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
/tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.25.124.131:0
--local_ip_for_outbound_sockets=127.25.124.131
--webserver_interface=127.25.124.131
--webserver_port=0
--tserver_master_addrs=127.25.124.190:34851
--builtin_ntp_servers=127.25.124.148:46385
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250809 19:58:29.993603 29083 heartbeater.cc:344] Connected to a master server at 127.25.124.190:34851
I20250809 19:58:29.994021 29083 heartbeater.cc:461] Registering TS with master...
I20250809 19:58:29.995116 29083 heartbeater.cc:507] Master 127.25.124.190:34851 requested a full tablet report, sending...
I20250809 19:58:29.997175 28762 ts_manager.cc:194] Registered new tserver with Master: ba551624a9fc4e14a2ffdc1d5a5e1175 (127.25.124.130:46405)
I20250809 19:58:29.998692 28762 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.25.124.130:48043
W20250809 19:58:30.238804 29087 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 19:58:30.239177 29087 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250809 19:58:30.239557 29087 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 19:58:30.264417 29087 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250809 19:58:30.265064 29087 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.25.124.131
I20250809 19:58:30.292789 29087 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.25.124.148:46385
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.25.124.131:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.25.124.131
--webserver_port=0
--tserver_master_addrs=127.25.124.190:34851
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.25.124.131
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 09 Aug 2025 19:43:21 UTC on 5fd53c4cbb9d
build id 7489
TSAN enabled
I20250809 19:58:30.293766 29087 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250809 19:58:30.295323 29087 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250809 19:58:30.305073 29093 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250809 19:58:31.001368 29083 heartbeater.cc:499] Master 127.25.124.190:34851 was elected leader, sending a full tablet report...
W20250809 19:58:31.708235 29092 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 29087
W20250809 19:58:31.779389 29092 kernel_stack_watchdog.cc:198] Thread 29087 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:642 for 400ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250809 19:58:30.305538 29094 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:58:31.780490 29087 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.475s user 0.463s sys 1.010s
W20250809 19:58:31.780862 29087 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.476s user 0.463s sys 1.010s
W20250809 19:58:31.782287 29096 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250809 19:58:31.784030 29087 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
W20250809 19:58:31.784083 29095 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1474 milliseconds
I20250809 19:58:31.785214 29087 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250809 19:58:31.786938 29087 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250809 19:58:31.788245 29087 hybrid_clock.cc:648] HybridClock initialized: now 1754769511788193 us; error 54 us; skew 500 ppm
I20250809 19:58:31.788908 29087 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250809 19:58:31.793998 29087 webserver.cc:489] Webserver started at http://127.25.124.131:40843/ using document root <none> and password file <none>
I20250809 19:58:31.794764 29087 fs_manager.cc:362] Metadata directory not provided
I20250809 19:58:31.794950 29087 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250809 19:58:31.795362 29087 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250809 19:58:31.798980 29087 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data/instance:
uuid: "6fa68d85f6924882be8d0d10d5c55b1a"
format_stamp: "Formatted at 2025-08-09 19:58:31 on dist-test-slave-xzln"
I20250809 19:58:31.799955 29087 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal/instance:
uuid: "6fa68d85f6924882be8d0d10d5c55b1a"
format_stamp: "Formatted at 2025-08-09 19:58:31 on dist-test-slave-xzln"
I20250809 19:58:31.806793 29087 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.005s sys 0.001s
I20250809 19:58:31.811425 29103 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250809 19:58:31.812234 29087 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.000s sys 0.004s
I20250809 19:58:31.812520 29087 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "6fa68d85f6924882be8d0d10d5c55b1a"
format_stamp: "Formatted at 2025-08-09 19:58:31 on dist-test-slave-xzln"
I20250809 19:58:31.812812 29087 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250809 19:58:31.855607 29087 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250809 19:58:31.856747 29087 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250809 19:58:31.857115 29087 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250809 19:58:31.859126 29087 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250809 19:58:31.862380 29087 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250809 19:58:31.862555 29087 ts_tablet_manager.cc:525] Time spent loading tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250809 19:58:31.862771 29087 ts_tablet_manager.cc:610] Registered 0 tablets
I20250809 19:58:31.862906 29087 ts_tablet_manager.cc:589] Time spent registering tablets: real 0.000s user 0.000s sys 0.000s
I20250809 19:58:31.976553 29087 rpc_server.cc:307] RPC server started. Bound to: 127.25.124.131:35781
I20250809 19:58:31.976694 29215 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.25.124.131:35781 every 8 connection(s)
I20250809 19:58:31.978747 29087 server_base.cc:1179] Dumped server information to /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250809 19:58:31.987112 26098 external_mini_cluster.cc:1428] Started /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu as pid 29087
I20250809 19:58:31.987548 26098 external_mini_cluster.cc:1442] Reading /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal/instance
I20250809 19:58:31.998577 29216 heartbeater.cc:344] Connected to a master server at 127.25.124.190:34851
I20250809 19:58:31.998876 29216 heartbeater.cc:461] Registering TS with master...
I20250809 19:58:31.999789 29216 heartbeater.cc:507] Master 127.25.124.190:34851 requested a full tablet report, sending...
I20250809 19:58:32.001405 28762 ts_manager.cc:194] Registered new tserver with Master: 6fa68d85f6924882be8d0d10d5c55b1a (127.25.124.131:35781)
I20250809 19:58:32.002425 28762 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.25.124.131:34795
I20250809 19:58:32.005952 26098 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250809 19:58:32.031006 26098 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250809 19:58:32.031291 26098 test_util.cc:276] Using random seed: 493708331
I20250809 19:58:32.063741 28762 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:47996:
name: "TestTable"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
I20250809 19:58:32.097460 29151 tablet_service.cc:1468] Processing CreateTablet for tablet b24467c3debc41148d89819b96bbd341 (DEFAULT_TABLE table=TestTable [id=3f4aca8bac554b99894e76c70119d08d]), partition=RANGE (key) PARTITION UNBOUNDED
I20250809 19:58:32.098805 29151 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet b24467c3debc41148d89819b96bbd341. 1 dirs total, 0 dirs full, 0 dirs failed
I20250809 19:58:32.115307 29236 tablet_bootstrap.cc:492] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a: Bootstrap starting.
I20250809 19:58:32.120009 29236 tablet_bootstrap.cc:654] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a: Neither blocks nor log segments found. Creating new log.
I20250809 19:58:32.121402 29236 log.cc:826] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a: Log is configured to *not* fsync() on all Append() calls
I20250809 19:58:32.124806 29236 tablet_bootstrap.cc:492] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a: No bootstrap required, opened a new log
I20250809 19:58:32.125126 29236 ts_tablet_manager.cc:1397] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a: Time spent bootstrapping tablet: real 0.010s user 0.000s sys 0.008s
I20250809 19:58:32.139195 29236 raft_consensus.cc:357] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } }
I20250809 19:58:32.139613 29236 raft_consensus.cc:383] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250809 19:58:32.139770 29236 raft_consensus.cc:738] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 6fa68d85f6924882be8d0d10d5c55b1a, State: Initialized, Role: FOLLOWER
I20250809 19:58:32.140246 29236 consensus_queue.cc:260] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } }
I20250809 19:58:32.140673 29236 raft_consensus.cc:397] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250809 19:58:32.140868 29236 raft_consensus.cc:491] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250809 19:58:32.141111 29236 raft_consensus.cc:3058] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a [term 0 FOLLOWER]: Advancing to term 1
I20250809 19:58:32.144500 29236 raft_consensus.cc:513] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } }
I20250809 19:58:32.145009 29236 leader_election.cc:304] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 6fa68d85f6924882be8d0d10d5c55b1a; no voters:
I20250809 19:58:32.146318 29236 leader_election.cc:290] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a [CANDIDATE]: Term 1 election: Requested vote from peers
I20250809 19:58:32.146978 29238 raft_consensus.cc:2802] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a [term 1 FOLLOWER]: Leader election won for term 1
I20250809 19:58:32.148573 29216 heartbeater.cc:499] Master 127.25.124.190:34851 was elected leader, sending a full tablet report...
I20250809 19:58:32.148932 29238 raft_consensus.cc:695] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a [term 1 LEADER]: Becoming Leader. State: Replica: 6fa68d85f6924882be8d0d10d5c55b1a, State: Running, Role: LEADER
I20250809 19:58:32.149665 29238 consensus_queue.cc:237] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } }
I20250809 19:58:32.150172 29236 ts_tablet_manager.cc:1428] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a: Time spent starting tablet: real 0.025s user 0.017s sys 0.008s
I20250809 19:58:32.159727 28762 catalog_manager.cc:5582] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a reported cstate change: term changed from 0 to 1, leader changed from <none> to 6fa68d85f6924882be8d0d10d5c55b1a (127.25.124.131). New cstate: current_term: 1 leader_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } health_report { overall_health: HEALTHY } } }
I20250809 19:58:32.339164 26098 test_util.cc:276] Using random seed: 494016195
I20250809 19:58:32.356770 28760 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:48008:
name: "TestTable1"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
I20250809 19:58:32.380568 29018 tablet_service.cc:1468] Processing CreateTablet for tablet 06a690aa4a264e50a47bcb3f31a0fdde (DEFAULT_TABLE table=TestTable1 [id=c26d8b95c7bb480cac2936fbf0b2a04e]), partition=RANGE (key) PARTITION UNBOUNDED
I20250809 19:58:32.381748 29018 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 06a690aa4a264e50a47bcb3f31a0fdde. 1 dirs total, 0 dirs full, 0 dirs failed
I20250809 19:58:32.397737 29258 tablet_bootstrap.cc:492] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175: Bootstrap starting.
I20250809 19:58:32.402105 29258 tablet_bootstrap.cc:654] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175: Neither blocks nor log segments found. Creating new log.
I20250809 19:58:32.403949 29258 log.cc:826] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175: Log is configured to *not* fsync() on all Append() calls
I20250809 19:58:32.407819 29258 tablet_bootstrap.cc:492] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175: No bootstrap required, opened a new log
I20250809 19:58:32.408160 29258 ts_tablet_manager.cc:1397] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175: Time spent bootstrapping tablet: real 0.011s user 0.009s sys 0.000s
I20250809 19:58:32.422785 29258 raft_consensus.cc:357] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } }
I20250809 19:58:32.423276 29258 raft_consensus.cc:383] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250809 19:58:32.423534 29258 raft_consensus.cc:738] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: ba551624a9fc4e14a2ffdc1d5a5e1175, State: Initialized, Role: FOLLOWER
I20250809 19:58:32.424238 29258 consensus_queue.cc:260] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } }
I20250809 19:58:32.424866 29258 raft_consensus.cc:397] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250809 19:58:32.425138 29258 raft_consensus.cc:491] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250809 19:58:32.425424 29258 raft_consensus.cc:3058] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 [term 0 FOLLOWER]: Advancing to term 1
I20250809 19:58:32.428838 29258 raft_consensus.cc:513] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } }
I20250809 19:58:32.429388 29258 leader_election.cc:304] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: ba551624a9fc4e14a2ffdc1d5a5e1175; no voters:
I20250809 19:58:32.430974 29258 leader_election.cc:290] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250809 19:58:32.431344 29260 raft_consensus.cc:2802] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 [term 1 FOLLOWER]: Leader election won for term 1
I20250809 19:58:32.434567 29260 raft_consensus.cc:695] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 [term 1 LEADER]: Becoming Leader. State: Replica: ba551624a9fc4e14a2ffdc1d5a5e1175, State: Running, Role: LEADER
I20250809 19:58:32.435286 29258 ts_tablet_manager.cc:1428] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175: Time spent starting tablet: real 0.027s user 0.023s sys 0.004s
I20250809 19:58:32.435238 29260 consensus_queue.cc:237] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } }
I20250809 19:58:32.446180 28760 catalog_manager.cc:5582] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 reported cstate change: term changed from 0 to 1, leader changed from <none> to ba551624a9fc4e14a2ffdc1d5a5e1175 (127.25.124.130). New cstate: current_term: 1 leader_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } health_report { overall_health: HEALTHY } } }
I20250809 19:58:32.603859 26098 test_util.cc:276] Using random seed: 494280885
I20250809 19:58:32.621745 28755 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:48010:
name: "TestTable2"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
I20250809 19:58:32.646476 28885 tablet_service.cc:1468] Processing CreateTablet for tablet 183d980bc8a9431695fca8e1057b20a8 (DEFAULT_TABLE table=TestTable2 [id=3f574a9c685242d69cc0da3709d8898c]), partition=RANGE (key) PARTITION UNBOUNDED
I20250809 19:58:32.647758 28885 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 183d980bc8a9431695fca8e1057b20a8. 1 dirs total, 0 dirs full, 0 dirs failed
I20250809 19:58:32.664203 29279 tablet_bootstrap.cc:492] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a: Bootstrap starting.
I20250809 19:58:32.668762 29279 tablet_bootstrap.cc:654] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a: Neither blocks nor log segments found. Creating new log.
I20250809 19:58:32.670426 29279 log.cc:826] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a: Log is configured to *not* fsync() on all Append() calls
I20250809 19:58:32.673746 29279 tablet_bootstrap.cc:492] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a: No bootstrap required, opened a new log
I20250809 19:58:32.674031 29279 ts_tablet_manager.cc:1397] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a: Time spent bootstrapping tablet: real 0.010s user 0.003s sys 0.005s
I20250809 19:58:32.688133 29279 raft_consensus.cc:357] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } }
I20250809 19:58:32.688649 29279 raft_consensus.cc:383] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250809 19:58:32.688845 29279 raft_consensus.cc:738] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 5206f9ca2f294510a3639afe13cba75a, State: Initialized, Role: FOLLOWER
I20250809 19:58:32.689455 29279 consensus_queue.cc:260] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } }
I20250809 19:58:32.690014 29279 raft_consensus.cc:397] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250809 19:58:32.690240 29279 raft_consensus.cc:491] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250809 19:58:32.690495 29279 raft_consensus.cc:3058] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a [term 0 FOLLOWER]: Advancing to term 1
I20250809 19:58:32.693706 29279 raft_consensus.cc:513] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } }
I20250809 19:58:32.694245 29279 leader_election.cc:304] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 5206f9ca2f294510a3639afe13cba75a; no voters:
I20250809 19:58:32.695809 29279 leader_election.cc:290] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a [CANDIDATE]: Term 1 election: Requested vote from peers
I20250809 19:58:32.696122 29281 raft_consensus.cc:2802] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a [term 1 FOLLOWER]: Leader election won for term 1
I20250809 19:58:32.698287 29281 raft_consensus.cc:695] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a [term 1 LEADER]: Becoming Leader. State: Replica: 5206f9ca2f294510a3639afe13cba75a, State: Running, Role: LEADER
I20250809 19:58:32.699293 29279 ts_tablet_manager.cc:1428] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a: Time spent starting tablet: real 0.025s user 0.020s sys 0.007s
I20250809 19:58:32.699050 29281 consensus_queue.cc:237] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } }
I20250809 19:58:32.707234 28755 catalog_manager.cc:5582] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a reported cstate change: term changed from 0 to 1, leader changed from <none> to 5206f9ca2f294510a3639afe13cba75a (127.25.124.129). New cstate: current_term: 1 leader_uuid: "5206f9ca2f294510a3639afe13cba75a" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } health_report { overall_health: HEALTHY } } }
I20250809 19:58:32.843533 26098 external_mini_cluster.cc:1658] Killing /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu with pid 28729
W20250809 19:58:33.175024 29216 heartbeater.cc:646] Failed to heartbeat to 127.25.124.190:34851 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.25.124.190:34851: connect: Connection refused (error 111)
W20250809 19:58:33.474968 29083 heartbeater.cc:646] Failed to heartbeat to 127.25.124.190:34851 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.25.124.190:34851: connect: Connection refused (error 111)
W20250809 19:58:33.721540 28950 heartbeater.cc:646] Failed to heartbeat to 127.25.124.190:34851 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.25.124.190:34851: connect: Connection refused (error 111)
I20250809 19:58:37.069705 26098 external_mini_cluster.cc:1658] Killing /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu with pid 28821
I20250809 19:58:37.089908 26098 external_mini_cluster.cc:1658] Killing /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu with pid 28954
I20250809 19:58:37.111304 26098 external_mini_cluster.cc:1658] Killing /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu with pid 29087
I20250809 19:58:37.134039 26098 external_mini_cluster.cc:1366] Running /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
/tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.25.124.190:34851
--webserver_interface=127.25.124.190
--webserver_port=36449
--builtin_ntp_servers=127.25.124.148:46385
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.25.124.190:34851 with env {}
W20250809 19:58:37.385963 29356 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 19:58:37.386505 29356 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250809 19:58:37.386924 29356 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 19:58:37.415526 29356 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250809 19:58:37.415879 29356 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250809 19:58:37.416162 29356 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250809 19:58:37.416410 29356 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250809 19:58:37.446936 29356 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.25.124.148:46385
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.25.124.190:34851
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.25.124.190:34851
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.25.124.190
--webserver_port=36449
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 09 Aug 2025 19:43:21 UTC on 5fd53c4cbb9d
build id 7489
TSAN enabled
I20250809 19:58:37.448314 29356 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250809 19:58:37.449795 29356 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250809 19:58:37.459070 29362 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:58:37.459779 29363 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:58:38.493868 29365 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:58:38.495939 29364 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1032 milliseconds
I20250809 19:58:38.496042 29356 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250809 19:58:38.497148 29356 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250809 19:58:38.499379 29356 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250809 19:58:38.500671 29356 hybrid_clock.cc:648] HybridClock initialized: now 1754769518500634 us; error 44 us; skew 500 ppm
I20250809 19:58:38.501376 29356 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250809 19:58:38.506419 29356 webserver.cc:489] Webserver started at http://127.25.124.190:36449/ using document root <none> and password file <none>
I20250809 19:58:38.507230 29356 fs_manager.cc:362] Metadata directory not provided
I20250809 19:58:38.507423 29356 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250809 19:58:38.513823 29356 fs_manager.cc:714] Time spent opening directory manager: real 0.004s user 0.005s sys 0.000s
I20250809 19:58:38.518791 29372 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250809 19:58:38.519702 29356 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.002s sys 0.001s
I20250809 19:58:38.519950 29356 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
uuid: "8066dbf4c38443dabf13535fbe0eb147"
format_stamp: "Formatted at 2025-08-09 19:58:26 on dist-test-slave-xzln"
I20250809 19:58:38.521809 29356 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250809 19:58:38.567466 29356 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250809 19:58:38.568583 29356 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250809 19:58:38.568908 29356 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250809 19:58:38.628226 29356 rpc_server.cc:307] RPC server started. Bound to: 127.25.124.190:34851
I20250809 19:58:38.628258 29423 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.25.124.190:34851 every 8 connection(s)
I20250809 19:58:38.630625 29356 server_base.cc:1179] Dumped server information to /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250809 19:58:38.638944 26098 external_mini_cluster.cc:1428] Started /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu as pid 29356
I20250809 19:58:38.640789 29424 sys_catalog.cc:263] Verifying existing consensus state
I20250809 19:58:38.640789 26098 external_mini_cluster.cc:1366] Running /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
/tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.25.124.129:45679
--local_ip_for_outbound_sockets=127.25.124.129
--tserver_master_addrs=127.25.124.190:34851
--webserver_port=40961
--webserver_interface=127.25.124.129
--builtin_ntp_servers=127.25.124.148:46385
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250809 19:58:38.647732 29424 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147: Bootstrap starting.
I20250809 19:58:38.656483 29424 log.cc:826] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147: Log is configured to *not* fsync() on all Append() calls
I20250809 19:58:38.696370 29424 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147: Bootstrap replayed 1/1 log segments. Stats: ops{read=18 overwritten=0 applied=18 ignored=0} inserts{seen=13 ignored=0} mutations{seen=10 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250809 19:58:38.697050 29424 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147: Bootstrap complete.
I20250809 19:58:38.713752 29424 raft_consensus.cc:357] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8066dbf4c38443dabf13535fbe0eb147" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 34851 } }
I20250809 19:58:38.715482 29424 raft_consensus.cc:738] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: 8066dbf4c38443dabf13535fbe0eb147, State: Initialized, Role: FOLLOWER
I20250809 19:58:38.716128 29424 consensus_queue.cc:260] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 18, Last appended: 2.18, Last appended by leader: 18, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8066dbf4c38443dabf13535fbe0eb147" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 34851 } }
I20250809 19:58:38.716547 29424 raft_consensus.cc:397] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [term 2 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250809 19:58:38.716763 29424 raft_consensus.cc:491] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [term 2 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250809 19:58:38.717029 29424 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [term 2 FOLLOWER]: Advancing to term 3
I20250809 19:58:38.721673 29424 raft_consensus.cc:513] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [term 3 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8066dbf4c38443dabf13535fbe0eb147" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 34851 } }
I20250809 19:58:38.722180 29424 leader_election.cc:304] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [CANDIDATE]: Term 3 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 8066dbf4c38443dabf13535fbe0eb147; no voters:
I20250809 19:58:38.724015 29424 leader_election.cc:290] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [CANDIDATE]: Term 3 election: Requested vote from peers
I20250809 19:58:38.724323 29428 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [term 3 FOLLOWER]: Leader election won for term 3
I20250809 19:58:38.726944 29428 raft_consensus.cc:695] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [term 3 LEADER]: Becoming Leader. State: Replica: 8066dbf4c38443dabf13535fbe0eb147, State: Running, Role: LEADER
I20250809 19:58:38.727624 29428 consensus_queue.cc:237] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 18, Committed index: 18, Last appended: 2.18, Last appended by leader: 18, Current term: 3, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8066dbf4c38443dabf13535fbe0eb147" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 34851 } }
I20250809 19:58:38.728080 29424 sys_catalog.cc:564] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [sys.catalog]: configured and running, proceeding with master startup.
I20250809 19:58:38.737093 29430 sys_catalog.cc:455] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 8066dbf4c38443dabf13535fbe0eb147. Latest consensus state: current_term: 3 leader_uuid: "8066dbf4c38443dabf13535fbe0eb147" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8066dbf4c38443dabf13535fbe0eb147" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 34851 } } }
I20250809 19:58:38.737640 29430 sys_catalog.cc:458] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [sys.catalog]: This master's current role is: LEADER
I20250809 19:58:38.736125 29429 sys_catalog.cc:455] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 3 leader_uuid: "8066dbf4c38443dabf13535fbe0eb147" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8066dbf4c38443dabf13535fbe0eb147" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 34851 } } }
I20250809 19:58:38.740288 29429 sys_catalog.cc:458] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [sys.catalog]: This master's current role is: LEADER
I20250809 19:58:38.750068 29435 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250809 19:58:38.760711 29435 catalog_manager.cc:671] Loaded metadata for table TestTable2 [id=3f574a9c685242d69cc0da3709d8898c]
I20250809 19:58:38.762235 29435 catalog_manager.cc:671] Loaded metadata for table TestTable [id=545cbfb642a54042b15f17f21e6356a3]
I20250809 19:58:38.763655 29435 catalog_manager.cc:671] Loaded metadata for table TestTable1 [id=827628a756584e8990ca4ccc98ae0281]
I20250809 19:58:38.769882 29435 tablet_loader.cc:96] loaded metadata for tablet 06a690aa4a264e50a47bcb3f31a0fdde (table TestTable1 [id=827628a756584e8990ca4ccc98ae0281])
I20250809 19:58:38.770974 29435 tablet_loader.cc:96] loaded metadata for tablet 183d980bc8a9431695fca8e1057b20a8 (table TestTable2 [id=3f574a9c685242d69cc0da3709d8898c])
I20250809 19:58:38.772058 29435 tablet_loader.cc:96] loaded metadata for tablet b24467c3debc41148d89819b96bbd341 (table TestTable [id=545cbfb642a54042b15f17f21e6356a3])
I20250809 19:58:38.773150 29435 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250809 19:58:38.777341 29435 catalog_manager.cc:1261] Loaded cluster ID: 2128a1420868429aa5b5d946fa91acde
I20250809 19:58:38.777592 29435 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250809 19:58:38.784202 29435 catalog_manager.cc:1506] Loading token signing keys...
I20250809 19:58:38.788666 29435 catalog_manager.cc:5966] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147: Loaded TSK: 0
I20250809 19:58:38.789984 29435 catalog_manager.cc:1516] Initializing in-progress tserver states...
W20250809 19:58:38.919498 29426 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 19:58:38.919898 29426 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250809 19:58:38.920312 29426 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 19:58:38.946038 29426 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250809 19:58:38.946707 29426 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.25.124.129
I20250809 19:58:38.974161 29426 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.25.124.148:46385
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.25.124.129:45679
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.25.124.129
--webserver_port=40961
--tserver_master_addrs=127.25.124.190:34851
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.25.124.129
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 09 Aug 2025 19:43:21 UTC on 5fd53c4cbb9d
build id 7489
TSAN enabled
I20250809 19:58:38.975229 29426 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250809 19:58:38.976531 29426 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250809 19:58:38.986958 29451 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:58:38.988336 29452 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:58:40.252104 29454 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:58:40.253707 29453 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1262 milliseconds
W20250809 19:58:40.255051 29426 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.267s user 0.400s sys 0.778s
W20250809 19:58:40.255386 29426 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.268s user 0.404s sys 0.778s
I20250809 19:58:40.255635 29426 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250809 19:58:40.256932 29426 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250809 19:58:40.259546 29426 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250809 19:58:40.260999 29426 hybrid_clock.cc:648] HybridClock initialized: now 1754769520260927 us; error 63 us; skew 500 ppm
I20250809 19:58:40.262090 29426 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250809 19:58:40.270632 29426 webserver.cc:489] Webserver started at http://127.25.124.129:40961/ using document root <none> and password file <none>
I20250809 19:58:40.271886 29426 fs_manager.cc:362] Metadata directory not provided
I20250809 19:58:40.272156 29426 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250809 19:58:40.284931 29426 fs_manager.cc:714] Time spent opening directory manager: real 0.008s user 0.008s sys 0.000s
I20250809 19:58:40.291177 29461 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250809 19:58:40.292368 29426 fs_manager.cc:730] Time spent opening block manager: real 0.005s user 0.001s sys 0.002s
I20250809 19:58:40.292718 29426 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "5206f9ca2f294510a3639afe13cba75a"
format_stamp: "Formatted at 2025-08-09 19:58:28 on dist-test-slave-xzln"
I20250809 19:58:40.295182 29426 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250809 19:58:40.365182 29426 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250809 19:58:40.367008 29426 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250809 19:58:40.367551 29426 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250809 19:58:40.370600 29426 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250809 19:58:40.377570 29468 ts_tablet_manager.cc:542] Loading tablet metadata (0/1 complete)
I20250809 19:58:40.383728 29426 ts_tablet_manager.cc:579] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20250809 19:58:40.383895 29426 ts_tablet_manager.cc:525] Time spent loading tablet metadata: real 0.008s user 0.001s sys 0.001s
I20250809 19:58:40.384122 29426 ts_tablet_manager.cc:594] Registering tablets (0/1 complete)
I20250809 19:58:40.387768 29426 ts_tablet_manager.cc:610] Registered 1 tablets
I20250809 19:58:40.387974 29426 ts_tablet_manager.cc:589] Time spent registering tablets: real 0.004s user 0.004s sys 0.000s
I20250809 19:58:40.388265 29468 tablet_bootstrap.cc:492] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a: Bootstrap starting.
I20250809 19:58:40.455103 29468 log.cc:826] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a: Log is configured to *not* fsync() on all Append() calls
I20250809 19:58:40.551965 29426 rpc_server.cc:307] RPC server started. Bound to: 127.25.124.129:45679
I20250809 19:58:40.552132 29575 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.25.124.129:45679 every 8 connection(s)
I20250809 19:58:40.555073 29426 server_base.cc:1179] Dumped server information to /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250809 19:58:40.564311 26098 external_mini_cluster.cc:1428] Started /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu as pid 29426
I20250809 19:58:40.565814 26098 external_mini_cluster.cc:1366] Running /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
/tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.25.124.130:46405
--local_ip_for_outbound_sockets=127.25.124.130
--tserver_master_addrs=127.25.124.190:34851
--webserver_port=38491
--webserver_interface=127.25.124.130
--builtin_ntp_servers=127.25.124.148:46385
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250809 19:58:40.597685 29468 tablet_bootstrap.cc:492] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a: Bootstrap replayed 1/1 log segments. Stats: ops{read=7 overwritten=0 applied=7 ignored=0} inserts{seen=300 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250809 19:58:40.598641 29468 tablet_bootstrap.cc:492] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a: Bootstrap complete.
I20250809 19:58:40.600199 29468 ts_tablet_manager.cc:1397] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a: Time spent bootstrapping tablet: real 0.212s user 0.137s sys 0.055s
I20250809 19:58:40.607049 29576 heartbeater.cc:344] Connected to a master server at 127.25.124.190:34851
I20250809 19:58:40.607499 29576 heartbeater.cc:461] Registering TS with master...
I20250809 19:58:40.608512 29576 heartbeater.cc:507] Master 127.25.124.190:34851 requested a full tablet report, sending...
I20250809 19:58:40.612212 29389 ts_manager.cc:194] Registered new tserver with Master: 5206f9ca2f294510a3639afe13cba75a (127.25.124.129:45679)
I20250809 19:58:40.615983 29468 raft_consensus.cc:357] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } }
I20250809 19:58:40.618654 29468 raft_consensus.cc:738] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 5206f9ca2f294510a3639afe13cba75a, State: Initialized, Role: FOLLOWER
I20250809 19:58:40.618814 29389 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.25.124.129:45685
I20250809 19:58:40.619503 29468 consensus_queue.cc:260] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 7, Last appended: 1.7, Last appended by leader: 7, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } }
I20250809 19:58:40.620086 29468 raft_consensus.cc:397] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250809 19:58:40.620406 29468 raft_consensus.cc:491] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250809 19:58:40.620755 29468 raft_consensus.cc:3058] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a [term 1 FOLLOWER]: Advancing to term 2
I20250809 19:58:40.626847 29468 raft_consensus.cc:513] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } }
I20250809 19:58:40.627667 29468 leader_election.cc:304] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 5206f9ca2f294510a3639afe13cba75a; no voters:
I20250809 19:58:40.639668 29468 leader_election.cc:290] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a [CANDIDATE]: Term 2 election: Requested vote from peers
I20250809 19:58:40.640039 29581 raft_consensus.cc:2802] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a [term 2 FOLLOWER]: Leader election won for term 2
I20250809 19:58:40.651104 29581 raft_consensus.cc:695] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a [term 2 LEADER]: Becoming Leader. State: Replica: 5206f9ca2f294510a3639afe13cba75a, State: Running, Role: LEADER
I20250809 19:58:40.651226 29576 heartbeater.cc:499] Master 127.25.124.190:34851 was elected leader, sending a full tablet report...
I20250809 19:58:40.651839 29581 consensus_queue.cc:237] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 7, Committed index: 7, Last appended: 1.7, Last appended by leader: 7, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } }
I20250809 19:58:40.655697 29468 ts_tablet_manager.cc:1428] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a: Time spent starting tablet: real 0.055s user 0.034s sys 0.018s
I20250809 19:58:40.665350 29389 catalog_manager.cc:5582] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a reported cstate change: term changed from 1 to 2. New cstate: current_term: 2 leader_uuid: "5206f9ca2f294510a3639afe13cba75a" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } health_report { overall_health: HEALTHY } } }
W20250809 19:58:40.881320 29580 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 19:58:40.881709 29580 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250809 19:58:40.882143 29580 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 19:58:40.907420 29580 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250809 19:58:40.908098 29580 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.25.124.130
I20250809 19:58:40.936169 29580 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.25.124.148:46385
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.25.124.130:46405
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.25.124.130
--webserver_port=38491
--tserver_master_addrs=127.25.124.190:34851
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.25.124.130
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 09 Aug 2025 19:43:21 UTC on 5fd53c4cbb9d
build id 7489
TSAN enabled
I20250809 19:58:40.937245 29580 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250809 19:58:40.938555 29580 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250809 19:58:40.949201 29595 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:58:40.949829 29596 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:58:42.026393 29598 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:58:42.028412 29580 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.078s user 0.005s sys 0.004s
W20250809 19:58:42.028734 29580 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.078s user 0.005s sys 0.004s
I20250809 19:58:42.028986 29580 server_base.cc:1047] running on GCE node
I20250809 19:58:42.030272 29580 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250809 19:58:42.032660 29580 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250809 19:58:42.034101 29580 hybrid_clock.cc:648] HybridClock initialized: now 1754769522034065 us; error 21 us; skew 500 ppm
I20250809 19:58:42.035108 29580 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250809 19:58:42.042801 29580 webserver.cc:489] Webserver started at http://127.25.124.130:38491/ using document root <none> and password file <none>
I20250809 19:58:42.044070 29580 fs_manager.cc:362] Metadata directory not provided
I20250809 19:58:42.044346 29580 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250809 19:58:42.054311 29580 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.006s sys 0.000s
I20250809 19:58:42.059692 29605 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250809 19:58:42.060796 29580 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.000s sys 0.004s
I20250809 19:58:42.061136 29580 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175"
format_stamp: "Formatted at 2025-08-09 19:58:29 on dist-test-slave-xzln"
I20250809 19:58:42.063625 29580 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250809 19:58:42.145684 29580 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250809 19:58:42.146854 29580 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250809 19:58:42.147186 29580 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250809 19:58:42.149250 29580 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250809 19:58:42.153766 29612 ts_tablet_manager.cc:542] Loading tablet metadata (0/1 complete)
I20250809 19:58:42.159798 29580 ts_tablet_manager.cc:579] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20250809 19:58:42.159993 29580 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.007s user 0.001s sys 0.000s
I20250809 19:58:42.160274 29580 ts_tablet_manager.cc:594] Registering tablets (0/1 complete)
I20250809 19:58:42.164115 29580 ts_tablet_manager.cc:610] Registered 1 tablets
I20250809 19:58:42.164296 29580 ts_tablet_manager.cc:589] Time spent register tablets: real 0.004s user 0.003s sys 0.000s
I20250809 19:58:42.164563 29612 tablet_bootstrap.cc:492] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175: Bootstrap starting.
I20250809 19:58:42.207729 29612 log.cc:826] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175: Log is configured to *not* fsync() on all Append() calls
I20250809 19:58:42.277179 29612 tablet_bootstrap.cc:492] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175: Bootstrap replayed 1/1 log segments. Stats: ops{read=7 overwritten=0 applied=7 ignored=0} inserts{seen=300 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250809 19:58:42.277860 29612 tablet_bootstrap.cc:492] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175: Bootstrap complete.
I20250809 19:58:42.278942 29612 ts_tablet_manager.cc:1397] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175: Time spent bootstrapping tablet: real 0.115s user 0.074s sys 0.036s
I20250809 19:58:42.290758 29612 raft_consensus.cc:357] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } }
I20250809 19:58:42.292546 29612 raft_consensus.cc:738] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: ba551624a9fc4e14a2ffdc1d5a5e1175, State: Initialized, Role: FOLLOWER
I20250809 19:58:42.293202 29612 consensus_queue.cc:260] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 7, Last appended: 1.7, Last appended by leader: 7, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } }
I20250809 19:58:42.293589 29612 raft_consensus.cc:397] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250809 19:58:42.293828 29612 raft_consensus.cc:491] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250809 19:58:42.294081 29612 raft_consensus.cc:3058] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 [term 1 FOLLOWER]: Advancing to term 2
I20250809 19:58:42.298843 29612 raft_consensus.cc:513] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } }
I20250809 19:58:42.299360 29612 leader_election.cc:304] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: ba551624a9fc4e14a2ffdc1d5a5e1175; no voters:
I20250809 19:58:42.300896 29612 leader_election.cc:290] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 [CANDIDATE]: Term 2 election: Requested vote from peers
I20250809 19:58:42.301867 29696 raft_consensus.cc:2802] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 [term 2 FOLLOWER]: Leader election won for term 2
I20250809 19:58:42.304411 29696 raft_consensus.cc:695] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 [term 2 LEADER]: Becoming Leader. State: Replica: ba551624a9fc4e14a2ffdc1d5a5e1175, State: Running, Role: LEADER
I20250809 19:58:42.305370 29696 consensus_queue.cc:237] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 7, Committed index: 7, Last appended: 1.7, Last appended by leader: 7, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } }
I20250809 19:58:42.307163 29612 ts_tablet_manager.cc:1428] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175: Time spent starting tablet: real 0.028s user 0.022s sys 0.009s
I20250809 19:58:42.331749 29580 rpc_server.cc:307] RPC server started. Bound to: 127.25.124.130:46405
I20250809 19:58:42.332229 29724 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.25.124.130:46405 every 8 connection(s)
I20250809 19:58:42.333875 29580 server_base.cc:1179] Dumped server information to /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250809 19:58:42.335198 26098 external_mini_cluster.cc:1428] Started /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu as pid 29580
I20250809 19:58:42.336515 26098 external_mini_cluster.cc:1366] Running /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
/tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.25.124.131:35781
--local_ip_for_outbound_sockets=127.25.124.131
--tserver_master_addrs=127.25.124.190:34851
--webserver_port=40843
--webserver_interface=127.25.124.131
--builtin_ntp_servers=127.25.124.148:46385
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250809 19:58:42.358224 29725 heartbeater.cc:344] Connected to a master server at 127.25.124.190:34851
I20250809 19:58:42.358723 29725 heartbeater.cc:461] Registering TS with master...
I20250809 19:58:42.360008 29725 heartbeater.cc:507] Master 127.25.124.190:34851 requested a full tablet report, sending...
I20250809 19:58:42.363674 29389 ts_manager.cc:194] Registered new tserver with Master: ba551624a9fc4e14a2ffdc1d5a5e1175 (127.25.124.130:46405)
I20250809 19:58:42.364527 29389 catalog_manager.cc:5582] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 reported cstate change: term changed from 0 to 2, leader changed from <none> to ba551624a9fc4e14a2ffdc1d5a5e1175 (127.25.124.130), VOTER ba551624a9fc4e14a2ffdc1d5a5e1175 (127.25.124.130) added. New cstate: current_term: 2 leader_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } health_report { overall_health: HEALTHY } } }
I20250809 19:58:42.375738 29389 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.25.124.130:35913
I20250809 19:58:42.379287 29725 heartbeater.cc:499] Master 127.25.124.190:34851 was elected leader, sending a full tablet report...
I20250809 19:58:42.390443 29675 consensus_queue.cc:237] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 8, Committed index: 8, Last appended: 2.8, Last appended by leader: 7, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: 9 OBSOLETE_local: true peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: NON_VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: true } }
I20250809 19:58:42.393815 29698 raft_consensus.cc:2953] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 [term 2 LEADER]: Committing config change with OpId 2.9: config changed from index -1 to 9, NON_VOTER 5206f9ca2f294510a3639afe13cba75a (127.25.124.129) added. New config: { opid_index: 9 OBSOLETE_local: true peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: NON_VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: true } } }
I20250809 19:58:42.402056 29375 catalog_manager.cc:5095] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet 06a690aa4a264e50a47bcb3f31a0fdde with cas_config_opid_index -1: ChangeConfig:ADD_PEER:NON_VOTER succeeded (attempt 1)
W20250809 19:58:42.404186 29606 consensus_peers.cc:489] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 -> Peer 5206f9ca2f294510a3639afe13cba75a (127.25.124.129:45679): Couldn't send request to peer 5206f9ca2f294510a3639afe13cba75a. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: 06a690aa4a264e50a47bcb3f31a0fdde. This is attempt 1: this message will repeat every 5th retry.
I20250809 19:58:42.404379 29389 catalog_manager.cc:5582] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 reported cstate change: config changed from index -1 to 9, NON_VOTER 5206f9ca2f294510a3639afe13cba75a (127.25.124.129) added. New cstate: current_term: 2 leader_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" committed_config { opid_index: 9 OBSOLETE_local: true peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: NON_VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: true } health_report { overall_health: UNKNOWN } } }
W20250809 19:58:42.410701 29389 catalog_manager.cc:5260] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet 06a690aa4a264e50a47bcb3f31a0fdde with cas_config_opid_index 9: no extra replica candidate found for tablet 06a690aa4a264e50a47bcb3f31a0fdde (table TestTable1 [id=827628a756584e8990ca4ccc98ae0281]): Not found: could not select location for extra replica: not enough tablet servers to satisfy replica placement policy: the total number of registered tablet servers (2) does not allow for adding an extra replica; consider bringing up more to have at least 4 tablet servers up and running
W20250809 19:58:42.614734 29729 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 19:58:42.615165 29729 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250809 19:58:42.615633 29729 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 19:58:42.655887 29729 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250809 19:58:42.656955 29729 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.25.124.131
I20250809 19:58:42.689147 29729 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.25.124.148:46385
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.25.124.131:35781
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.25.124.131
--webserver_port=40843
--tserver_master_addrs=127.25.124.190:34851
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.25.124.131
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 09 Aug 2025 19:43:21 UTC on 5fd53c4cbb9d
build id 7489
TSAN enabled
I20250809 19:58:42.690208 29729 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250809 19:58:42.691548 29729 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250809 19:58:42.701129 29742 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250809 19:58:42.818234 29748 ts_tablet_manager.cc:927] T 06a690aa4a264e50a47bcb3f31a0fdde P 5206f9ca2f294510a3639afe13cba75a: Initiating tablet copy from peer ba551624a9fc4e14a2ffdc1d5a5e1175 (127.25.124.130:46405)
I20250809 19:58:42.820365 29748 tablet_copy_client.cc:323] T 06a690aa4a264e50a47bcb3f31a0fdde P 5206f9ca2f294510a3639afe13cba75a: tablet copy: Beginning tablet copy session from remote peer at address 127.25.124.130:46405
I20250809 19:58:42.829389 29695 tablet_copy_service.cc:140] P ba551624a9fc4e14a2ffdc1d5a5e1175: Received BeginTabletCopySession request for tablet 06a690aa4a264e50a47bcb3f31a0fdde from peer 5206f9ca2f294510a3639afe13cba75a ({username='slave'} at 127.25.124.129:35995)
I20250809 19:58:42.829809 29695 tablet_copy_service.cc:161] P ba551624a9fc4e14a2ffdc1d5a5e1175: Beginning new tablet copy session on tablet 06a690aa4a264e50a47bcb3f31a0fdde from peer 5206f9ca2f294510a3639afe13cba75a at {username='slave'} at 127.25.124.129:35995: session id = 5206f9ca2f294510a3639afe13cba75a-06a690aa4a264e50a47bcb3f31a0fdde
I20250809 19:58:42.833829 29695 tablet_copy_source_session.cc:215] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175: Tablet Copy: opened 0 blocks and 1 log segments
I20250809 19:58:42.836692 29748 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 06a690aa4a264e50a47bcb3f31a0fdde. 1 dirs total, 0 dirs full, 0 dirs failed
I20250809 19:58:42.846351 29748 tablet_copy_client.cc:806] T 06a690aa4a264e50a47bcb3f31a0fdde P 5206f9ca2f294510a3639afe13cba75a: tablet copy: Starting download of 0 data blocks...
I20250809 19:58:42.846789 29748 tablet_copy_client.cc:670] T 06a690aa4a264e50a47bcb3f31a0fdde P 5206f9ca2f294510a3639afe13cba75a: tablet copy: Starting download of 1 WAL segments...
I20250809 19:58:42.850212 29748 tablet_copy_client.cc:538] T 06a690aa4a264e50a47bcb3f31a0fdde P 5206f9ca2f294510a3639afe13cba75a: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250809 19:58:42.854712 29748 tablet_bootstrap.cc:492] T 06a690aa4a264e50a47bcb3f31a0fdde P 5206f9ca2f294510a3639afe13cba75a: Bootstrap starting.
I20250809 19:58:42.921829 29748 tablet_bootstrap.cc:492] T 06a690aa4a264e50a47bcb3f31a0fdde P 5206f9ca2f294510a3639afe13cba75a: Bootstrap replayed 1/1 log segments. Stats: ops{read=9 overwritten=0 applied=9 ignored=0} inserts{seen=300 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250809 19:58:42.922330 29748 tablet_bootstrap.cc:492] T 06a690aa4a264e50a47bcb3f31a0fdde P 5206f9ca2f294510a3639afe13cba75a: Bootstrap complete.
I20250809 19:58:42.922686 29748 ts_tablet_manager.cc:1397] T 06a690aa4a264e50a47bcb3f31a0fdde P 5206f9ca2f294510a3639afe13cba75a: Time spent bootstrapping tablet: real 0.068s user 0.049s sys 0.015s
I20250809 19:58:42.923998 29748 raft_consensus.cc:357] T 06a690aa4a264e50a47bcb3f31a0fdde P 5206f9ca2f294510a3639afe13cba75a [term 2 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 9 OBSOLETE_local: true peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: NON_VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: true } }
I20250809 19:58:42.924353 29748 raft_consensus.cc:738] T 06a690aa4a264e50a47bcb3f31a0fdde P 5206f9ca2f294510a3639afe13cba75a [term 2 LEARNER]: Becoming Follower/Learner. State: Replica: 5206f9ca2f294510a3639afe13cba75a, State: Initialized, Role: LEARNER
I20250809 19:58:42.924692 29748 consensus_queue.cc:260] T 06a690aa4a264e50a47bcb3f31a0fdde P 5206f9ca2f294510a3639afe13cba75a [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 9, Last appended: 2.9, Last appended by leader: 9, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 9 OBSOLETE_local: true peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: NON_VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: true } }
I20250809 19:58:42.927268 29748 ts_tablet_manager.cc:1428] T 06a690aa4a264e50a47bcb3f31a0fdde P 5206f9ca2f294510a3639afe13cba75a: Time spent starting tablet: real 0.004s user 0.007s sys 0.001s
I20250809 19:58:42.928632 29695 tablet_copy_service.cc:342] P ba551624a9fc4e14a2ffdc1d5a5e1175: Request end of tablet copy session 5206f9ca2f294510a3639afe13cba75a-06a690aa4a264e50a47bcb3f31a0fdde received from {username='slave'} at 127.25.124.129:35995
I20250809 19:58:42.928997 29695 tablet_copy_service.cc:434] P ba551624a9fc4e14a2ffdc1d5a5e1175: ending tablet copy session 5206f9ca2f294510a3639afe13cba75a-06a690aa4a264e50a47bcb3f31a0fdde on tablet 06a690aa4a264e50a47bcb3f31a0fdde with peer 5206f9ca2f294510a3639afe13cba75a
I20250809 19:58:43.437181 29531 raft_consensus.cc:1215] T 06a690aa4a264e50a47bcb3f31a0fdde P 5206f9ca2f294510a3639afe13cba75a [term 2 LEARNER]: Deduplicated request from leader. Original: 2.8->[2.9-2.9] Dedup: 2.9->[]
W20250809 19:58:42.702812 29743 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:58:43.833832 29745 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:58:43.835341 29744 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1129 milliseconds
I20250809 19:58:43.835415 29729 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250809 19:58:43.836458 29729 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250809 19:58:43.838359 29729 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250809 19:58:43.839663 29729 hybrid_clock.cc:648] HybridClock initialized: now 1754769523839623 us; error 39 us; skew 500 ppm
I20250809 19:58:43.840356 29729 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250809 19:58:43.846443 29729 webserver.cc:489] Webserver started at http://127.25.124.131:40843/ using document root <none> and password file <none>
I20250809 19:58:43.847203 29729 fs_manager.cc:362] Metadata directory not provided
I20250809 19:58:43.847456 29729 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250809 19:58:43.853885 29729 fs_manager.cc:714] Time spent opening directory manager: real 0.004s user 0.006s sys 0.000s
I20250809 19:58:43.857750 29761 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250809 19:58:43.858558 29729 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.000s sys 0.002s
I20250809 19:58:43.858801 29729 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "6fa68d85f6924882be8d0d10d5c55b1a"
format_stamp: "Formatted at 2025-08-09 19:58:31 on dist-test-slave-xzln"
I20250809 19:58:43.860428 29729 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250809 19:58:43.916890 29729 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250809 19:58:43.918045 29729 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250809 19:58:43.918383 29729 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250809 19:58:43.920423 29729 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250809 19:58:43.924923 29768 ts_tablet_manager.cc:542] Loading tablet metadata (0/1 complete)
I20250809 19:58:43.931110 29729 ts_tablet_manager.cc:579] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20250809 19:58:43.931317 29729 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.008s user 0.002s sys 0.000s
I20250809 19:58:43.931532 29729 ts_tablet_manager.cc:594] Registering tablets (0/1 complete)
I20250809 19:58:43.935248 29729 ts_tablet_manager.cc:610] Registered 1 tablets
I20250809 19:58:43.935386 29729 ts_tablet_manager.cc:589] Time spent register tablets: real 0.004s user 0.004s sys 0.000s
I20250809 19:58:43.935774 29768 tablet_bootstrap.cc:492] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a: Bootstrap starting.
I20250809 19:58:43.984261 29768 log.cc:826] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a: Log is configured to *not* fsync() on all Append() calls
I20250809 19:58:43.992528 29801 raft_consensus.cc:1062] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175: attempting to promote NON_VOTER 5206f9ca2f294510a3639afe13cba75a to VOTER
I20250809 19:58:43.994469 29801 consensus_queue.cc:237] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 9, Committed index: 9, Last appended: 2.9, Last appended by leader: 7, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } }
I20250809 19:58:43.999869 29531 raft_consensus.cc:1273] T 06a690aa4a264e50a47bcb3f31a0fdde P 5206f9ca2f294510a3639afe13cba75a [term 2 LEARNER]: Refusing update from remote peer ba551624a9fc4e14a2ffdc1d5a5e1175: Log matching property violated. Preceding OpId in replica: term: 2 index: 9. Preceding OpId from leader: term: 2 index: 10. (index mismatch)
I20250809 19:58:44.001345 29801 consensus_queue.cc:1035] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 [LEADER]: Connected to new peer: Peer: permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 10, Last known committed idx: 9, Time since last communication: 0.000s
I20250809 19:58:44.009290 29805 raft_consensus.cc:2953] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 [term 2 LEADER]: Committing config change with OpId 2.10: config changed from index 9 to 10, 5206f9ca2f294510a3639afe13cba75a (127.25.124.129) changed from NON_VOTER to VOTER. New config: { opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } } }
I20250809 19:58:44.010747 29531 raft_consensus.cc:2953] T 06a690aa4a264e50a47bcb3f31a0fdde P 5206f9ca2f294510a3639afe13cba75a [term 2 FOLLOWER]: Committing config change with OpId 2.10: config changed from index 9 to 10, 5206f9ca2f294510a3639afe13cba75a (127.25.124.129) changed from NON_VOTER to VOTER. New config: { opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } } }
I20250809 19:58:44.020504 29389 catalog_manager.cc:5582] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 reported cstate change: config changed from index 9 to 10, 5206f9ca2f294510a3639afe13cba75a (127.25.124.129) changed from NON_VOTER to VOTER. New cstate: current_term: 2 leader_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" committed_config { opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } health_report { overall_health: HEALTHY } } }
I20250809 19:58:44.104132 29768 tablet_bootstrap.cc:492] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a: Bootstrap replayed 1/1 log segments. Stats: ops{read=9 overwritten=0 applied=9 ignored=0} inserts{seen=400 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250809 19:58:44.104866 29768 tablet_bootstrap.cc:492] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a: Bootstrap complete.
I20250809 19:58:44.106143 29768 ts_tablet_manager.cc:1397] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a: Time spent bootstrapping tablet: real 0.171s user 0.139s sys 0.028s
I20250809 19:58:44.107991 29729 rpc_server.cc:307] RPC server started. Bound to: 127.25.124.131:35781
I20250809 19:58:44.108079 29884 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.25.124.131:35781 every 8 connection(s)
I20250809 19:58:44.110862 29729 server_base.cc:1179] Dumped server information to /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250809 19:58:44.117803 26098 external_mini_cluster.cc:1428] Started /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu as pid 29729
I20250809 19:58:44.124781 29768 raft_consensus.cc:357] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } }
I20250809 19:58:44.126772 29768 raft_consensus.cc:738] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 6fa68d85f6924882be8d0d10d5c55b1a, State: Initialized, Role: FOLLOWER
I20250809 19:58:44.127372 29768 consensus_queue.cc:260] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 9, Last appended: 1.9, Last appended by leader: 9, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } }
I20250809 19:58:44.127856 29768 raft_consensus.cc:397] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250809 19:58:44.128144 29768 raft_consensus.cc:491] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250809 19:58:44.128515 29768 raft_consensus.cc:3058] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a [term 1 FOLLOWER]: Advancing to term 2
I20250809 19:58:44.133987 29885 heartbeater.cc:344] Connected to a master server at 127.25.124.190:34851
I20250809 19:58:44.134294 29885 heartbeater.cc:461] Registering TS with master...
I20250809 19:58:44.135030 29885 heartbeater.cc:507] Master 127.25.124.190:34851 requested a full tablet report, sending...
I20250809 19:58:44.134976 29768 raft_consensus.cc:513] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } }
I20250809 19:58:44.135519 29768 leader_election.cc:304] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 6fa68d85f6924882be8d0d10d5c55b1a; no voters:
I20250809 19:58:44.136922 29768 leader_election.cc:290] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a [CANDIDATE]: Term 2 election: Requested vote from peers
I20250809 19:58:44.137809 29389 ts_manager.cc:194] Registered new tserver with Master: 6fa68d85f6924882be8d0d10d5c55b1a (127.25.124.131:35781)
I20250809 19:58:44.138048 29891 raft_consensus.cc:2802] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a [term 2 FOLLOWER]: Leader election won for term 2
I20250809 19:58:44.140138 29891 raft_consensus.cc:695] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a [term 2 LEADER]: Becoming Leader. State: Replica: 6fa68d85f6924882be8d0d10d5c55b1a, State: Running, Role: LEADER
I20250809 19:58:44.140676 29389 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.25.124.131:37653
I20250809 19:58:44.140859 29891 consensus_queue.cc:237] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 9, Committed index: 9, Last appended: 1.9, Last appended by leader: 9, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } }
I20250809 19:58:44.142450 29768 ts_tablet_manager.cc:1428] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a: Time spent starting tablet: real 0.036s user 0.035s sys 0.000s
I20250809 19:58:44.143497 29885 heartbeater.cc:499] Master 127.25.124.190:34851 was elected leader, sending a full tablet report...
I20250809 19:58:44.147315 26098 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250809 19:58:44.146983 29388 catalog_manager.cc:5582] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a reported cstate change: term changed from 0 to 2, leader changed from <none> to 6fa68d85f6924882be8d0d10d5c55b1a (127.25.124.131), VOTER 6fa68d85f6924882be8d0d10d5c55b1a (127.25.124.131) added. New cstate: current_term: 2 leader_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } health_report { overall_health: HEALTHY } } }
I20250809 19:58:44.151048 26098 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
W20250809 19:58:44.156199 26098 ts_itest-base.cc:209] found only 0 out of 3 replicas of tablet b24467c3debc41148d89819b96bbd341: tablet_id: "b24467c3debc41148d89819b96bbd341" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" }
I20250809 19:58:44.166301 29840 consensus_queue.cc:237] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 10, Committed index: 10, Last appended: 2.10, Last appended by leader: 9, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } } peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: NON_VOTER last_known_addr { host: "127.25.124.130" port: 46405 } attrs { promote: true } }
I20250809 19:58:44.168780 29892 raft_consensus.cc:2953] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a [term 2 LEADER]: Committing config change with OpId 2.11: config changed from index -1 to 11, NON_VOTER ba551624a9fc4e14a2ffdc1d5a5e1175 (127.25.124.130) added. New config: { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } } peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: NON_VOTER last_known_addr { host: "127.25.124.130" port: 46405 } attrs { promote: true } } }
I20250809 19:58:44.174474 29376 catalog_manager.cc:5095] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet b24467c3debc41148d89819b96bbd341 with cas_config_opid_index -1: ChangeConfig:ADD_PEER:NON_VOTER succeeded (attempt 1)
W20250809 19:58:44.176877 29764 consensus_peers.cc:489] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a -> Peer ba551624a9fc4e14a2ffdc1d5a5e1175 (127.25.124.130:46405): Couldn't send request to peer ba551624a9fc4e14a2ffdc1d5a5e1175. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: b24467c3debc41148d89819b96bbd341. This is attempt 1: this message will repeat every 5th retry.
I20250809 19:58:44.176924 29388 catalog_manager.cc:5582] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a reported cstate change: config changed from index -1 to 11, NON_VOTER ba551624a9fc4e14a2ffdc1d5a5e1175 (127.25.124.130) added. New cstate: current_term: 2 leader_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" committed_config { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: NON_VOTER last_known_addr { host: "127.25.124.130" port: 46405 } attrs { promote: true } health_report { overall_health: UNKNOWN } } }
I20250809 19:58:44.183838 29840 consensus_queue.cc:237] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 11, Committed index: 11, Last appended: 2.11, Last appended by leader: 9, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } } peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: NON_VOTER last_known_addr { host: "127.25.124.130" port: 46405 } attrs { promote: true } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: NON_VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: true } }
I20250809 19:58:44.186364 29893 raft_consensus.cc:2953] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a [term 2 LEADER]: Committing config change with OpId 2.12: config changed from index 11 to 12, NON_VOTER 5206f9ca2f294510a3639afe13cba75a (127.25.124.129) added. New config: { opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } } peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: NON_VOTER last_known_addr { host: "127.25.124.130" port: 46405 } attrs { promote: true } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: NON_VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: true } } }
W20250809 19:58:44.187507 29764 consensus_peers.cc:489] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a -> Peer ba551624a9fc4e14a2ffdc1d5a5e1175 (127.25.124.130:46405): Couldn't send request to peer ba551624a9fc4e14a2ffdc1d5a5e1175. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: b24467c3debc41148d89819b96bbd341. This is attempt 1: this message will repeat every 5th retry.
I20250809 19:58:44.193964 29376 catalog_manager.cc:5095] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet b24467c3debc41148d89819b96bbd341 with cas_config_opid_index 11: ChangeConfig:ADD_PEER:NON_VOTER succeeded (attempt 1)
W20250809 19:58:44.195796 29762 consensus_peers.cc:489] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a -> Peer 5206f9ca2f294510a3639afe13cba75a (127.25.124.129:45679): Couldn't send request to peer 5206f9ca2f294510a3639afe13cba75a. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: b24467c3debc41148d89819b96bbd341. This is attempt 1: this message will repeat every 5th retry.
I20250809 19:58:44.196486 29388 catalog_manager.cc:5582] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a reported cstate change: config changed from index 11 to 12, NON_VOTER 5206f9ca2f294510a3639afe13cba75a (127.25.124.129) added. New cstate: current_term: 2 leader_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" committed_config { opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: NON_VOTER last_known_addr { host: "127.25.124.130" port: 46405 } attrs { promote: true } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: NON_VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: true } health_report { overall_health: UNKNOWN } } }
I20250809 19:58:44.245025 29675 consensus_queue.cc:237] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 10, Committed index: 10, Last appended: 2.10, Last appended by leader: 7, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } } peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: NON_VOTER last_known_addr { host: "127.25.124.131" port: 35781 } attrs { promote: true } }
I20250809 19:58:44.248557 29531 raft_consensus.cc:1273] T 06a690aa4a264e50a47bcb3f31a0fdde P 5206f9ca2f294510a3639afe13cba75a [term 2 FOLLOWER]: Refusing update from remote peer ba551624a9fc4e14a2ffdc1d5a5e1175: Log matching property violated. Preceding OpId in replica: term: 2 index: 10. Preceding OpId from leader: term: 2 index: 11. (index mismatch)
I20250809 19:58:44.249446 29801 consensus_queue.cc:1035] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 [LEADER]: Connected to new peer: Peer: permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 11, Last known committed idx: 10, Time since last communication: 0.000s
I20250809 19:58:44.253667 29805 raft_consensus.cc:2953] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 [term 2 LEADER]: Committing config change with OpId 2.11: config changed from index 10 to 11, NON_VOTER 6fa68d85f6924882be8d0d10d5c55b1a (127.25.124.131) added. New config: { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } } peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: NON_VOTER last_known_addr { host: "127.25.124.131" port: 35781 } attrs { promote: true } } }
I20250809 19:58:44.254974 29531 raft_consensus.cc:2953] T 06a690aa4a264e50a47bcb3f31a0fdde P 5206f9ca2f294510a3639afe13cba75a [term 2 FOLLOWER]: Committing config change with OpId 2.11: config changed from index 10 to 11, NON_VOTER 6fa68d85f6924882be8d0d10d5c55b1a (127.25.124.131) added. New config: { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } } peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: NON_VOTER last_known_addr { host: "127.25.124.131" port: 35781 } attrs { promote: true } } }
W20250809 19:58:44.257230 29609 consensus_peers.cc:489] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 -> Peer 6fa68d85f6924882be8d0d10d5c55b1a (127.25.124.131:35781): Couldn't send request to peer 6fa68d85f6924882be8d0d10d5c55b1a. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: 06a690aa4a264e50a47bcb3f31a0fdde. This is attempt 1: this message will repeat every 5th retry.
I20250809 19:58:44.259163 29375 catalog_manager.cc:5095] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet 06a690aa4a264e50a47bcb3f31a0fdde with cas_config_opid_index 10: ChangeConfig:ADD_PEER:NON_VOTER succeeded (attempt 4)
I20250809 19:58:44.262174 29388 catalog_manager.cc:5582] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 reported cstate change: config changed from index 10 to 11, NON_VOTER 6fa68d85f6924882be8d0d10d5c55b1a (127.25.124.131) added. New cstate: current_term: 2 leader_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" committed_config { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: NON_VOTER last_known_addr { host: "127.25.124.131" port: 35781 } attrs { promote: true } health_report { overall_health: UNKNOWN } } }
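Note: the ChangeConfig:ADD_PEER:NON_VOTER steps above are issued automatically by the master's catalog manager during re-replication. The same kind of membership change can be requested by hand through the kudu CLI that this test suite drives; a minimal sketch, assuming the stock CLI syntax and reusing the master RPC address, tablet ID, and tablet server UUID that appear in this log:
kudu tablet change_config add_replica 127.25.124.190:34851 06a690aa4a264e50a47bcb3f31a0fdde 6fa68d85f6924882be8d0d10d5c55b1a NON_VOTER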
I20250809 19:58:44.628598 29901 ts_tablet_manager.cc:927] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a: Initiating tablet copy from peer 6fa68d85f6924882be8d0d10d5c55b1a (127.25.124.131:35781)
I20250809 19:58:44.630303 29901 tablet_copy_client.cc:323] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a: tablet copy: Beginning tablet copy session from remote peer at address 127.25.124.131:35781
I20250809 19:58:44.640314 29860 tablet_copy_service.cc:140] P 6fa68d85f6924882be8d0d10d5c55b1a: Received BeginTabletCopySession request for tablet b24467c3debc41148d89819b96bbd341 from peer 5206f9ca2f294510a3639afe13cba75a ({username='slave'} at 127.25.124.129:53145)
I20250809 19:58:44.640762 29860 tablet_copy_service.cc:161] P 6fa68d85f6924882be8d0d10d5c55b1a: Beginning new tablet copy session on tablet b24467c3debc41148d89819b96bbd341 from peer 5206f9ca2f294510a3639afe13cba75a at {username='slave'} at 127.25.124.129:53145: session id = 5206f9ca2f294510a3639afe13cba75a-b24467c3debc41148d89819b96bbd341
I20250809 19:58:44.646067 29860 tablet_copy_source_session.cc:215] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a: Tablet Copy: opened 0 blocks and 1 log segments
I20250809 19:58:44.648844 29901 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet b24467c3debc41148d89819b96bbd341. 1 dirs total, 0 dirs full, 0 dirs failed
I20250809 19:58:44.657975 29901 tablet_copy_client.cc:806] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a: tablet copy: Starting download of 0 data blocks...
I20250809 19:58:44.658427 29901 tablet_copy_client.cc:670] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a: tablet copy: Starting download of 1 WAL segments...
I20250809 19:58:44.661772 29901 tablet_copy_client.cc:538] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250809 19:58:44.666461 29901 tablet_bootstrap.cc:492] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a: Bootstrap starting.
I20250809 19:58:44.709056 29374 catalog_manager.cc:5129] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet 06a690aa4a264e50a47bcb3f31a0fdde with cas_config_opid_index 9: aborting the task: latest config opid_index 11; task opid_index 9
I20250809 19:58:44.711930 29905 ts_tablet_manager.cc:927] T b24467c3debc41148d89819b96bbd341 P ba551624a9fc4e14a2ffdc1d5a5e1175: Initiating tablet copy from peer 6fa68d85f6924882be8d0d10d5c55b1a (127.25.124.131:35781)
I20250809 19:58:44.713956 29905 tablet_copy_client.cc:323] T b24467c3debc41148d89819b96bbd341 P ba551624a9fc4e14a2ffdc1d5a5e1175: tablet copy: Beginning tablet copy session from remote peer at address 127.25.124.131:35781
I20250809 19:58:44.715353 29860 tablet_copy_service.cc:140] P 6fa68d85f6924882be8d0d10d5c55b1a: Received BeginTabletCopySession request for tablet b24467c3debc41148d89819b96bbd341 from peer ba551624a9fc4e14a2ffdc1d5a5e1175 ({username='slave'} at 127.25.124.130:35313)
I20250809 19:58:44.715761 29860 tablet_copy_service.cc:161] P 6fa68d85f6924882be8d0d10d5c55b1a: Beginning new tablet copy session on tablet b24467c3debc41148d89819b96bbd341 from peer ba551624a9fc4e14a2ffdc1d5a5e1175 at {username='slave'} at 127.25.124.130:35313: session id = ba551624a9fc4e14a2ffdc1d5a5e1175-b24467c3debc41148d89819b96bbd341
I20250809 19:58:44.719594 29860 tablet_copy_source_session.cc:215] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a: Tablet Copy: opened 0 blocks and 1 log segments
I20250809 19:58:44.721769 29905 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet b24467c3debc41148d89819b96bbd341. 1 dirs total, 0 dirs full, 0 dirs failed
I20250809 19:58:44.730870 29905 tablet_copy_client.cc:806] T b24467c3debc41148d89819b96bbd341 P ba551624a9fc4e14a2ffdc1d5a5e1175: tablet copy: Starting download of 0 data blocks...
I20250809 19:58:44.731343 29905 tablet_copy_client.cc:670] T b24467c3debc41148d89819b96bbd341 P ba551624a9fc4e14a2ffdc1d5a5e1175: tablet copy: Starting download of 1 WAL segments...
I20250809 19:58:44.734620 29905 tablet_copy_client.cc:538] T b24467c3debc41148d89819b96bbd341 P ba551624a9fc4e14a2ffdc1d5a5e1175: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250809 19:58:44.739248 29905 tablet_bootstrap.cc:492] T b24467c3debc41148d89819b96bbd341 P ba551624a9fc4e14a2ffdc1d5a5e1175: Bootstrap starting.
I20250809 19:58:44.760650 29901 tablet_bootstrap.cc:492] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a: Bootstrap replayed 1/1 log segments. Stats: ops{read=12 overwritten=0 applied=12 ignored=0} inserts{seen=400 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250809 19:58:44.761127 29901 tablet_bootstrap.cc:492] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a: Bootstrap complete.
I20250809 19:58:44.761518 29901 ts_tablet_manager.cc:1397] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a: Time spent bootstrapping tablet: real 0.095s user 0.091s sys 0.005s
I20250809 19:58:44.762805 29901 raft_consensus.cc:357] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a [term 2 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } } peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: NON_VOTER last_known_addr { host: "127.25.124.130" port: 46405 } attrs { promote: true } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: NON_VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: true } }
I20250809 19:58:44.763195 29901 raft_consensus.cc:738] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a [term 2 LEARNER]: Becoming Follower/Learner. State: Replica: 5206f9ca2f294510a3639afe13cba75a, State: Initialized, Role: LEARNER
I20250809 19:58:44.763572 29901 consensus_queue.cc:260] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 12, Last appended: 2.12, Last appended by leader: 12, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } } peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: NON_VOTER last_known_addr { host: "127.25.124.130" port: 46405 } attrs { promote: true } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: NON_VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: true } }
I20250809 19:58:44.765594 29901 ts_tablet_manager.cc:1428] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a: Time spent starting tablet: real 0.004s user 0.000s sys 0.003s
I20250809 19:58:44.766939 29860 tablet_copy_service.cc:342] P 6fa68d85f6924882be8d0d10d5c55b1a: Request end of tablet copy session 5206f9ca2f294510a3639afe13cba75a-b24467c3debc41148d89819b96bbd341 received from {username='slave'} at 127.25.124.129:53145
I20250809 19:58:44.767289 29860 tablet_copy_service.cc:434] P 6fa68d85f6924882be8d0d10d5c55b1a: ending tablet copy session 5206f9ca2f294510a3639afe13cba75a-b24467c3debc41148d89819b96bbd341 on tablet b24467c3debc41148d89819b96bbd341 with peer 5206f9ca2f294510a3639afe13cba75a
I20250809 19:58:44.826597 29905 tablet_bootstrap.cc:492] T b24467c3debc41148d89819b96bbd341 P ba551624a9fc4e14a2ffdc1d5a5e1175: Bootstrap replayed 1/1 log segments. Stats: ops{read=12 overwritten=0 applied=12 ignored=0} inserts{seen=400 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250809 19:58:44.827136 29905 tablet_bootstrap.cc:492] T b24467c3debc41148d89819b96bbd341 P ba551624a9fc4e14a2ffdc1d5a5e1175: Bootstrap complete.
I20250809 19:58:44.827527 29905 ts_tablet_manager.cc:1397] T b24467c3debc41148d89819b96bbd341 P ba551624a9fc4e14a2ffdc1d5a5e1175: Time spent bootstrapping tablet: real 0.088s user 0.076s sys 0.016s
I20250809 19:58:44.828930 29905 raft_consensus.cc:357] T b24467c3debc41148d89819b96bbd341 P ba551624a9fc4e14a2ffdc1d5a5e1175 [term 2 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } } peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: NON_VOTER last_known_addr { host: "127.25.124.130" port: 46405 } attrs { promote: true } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: NON_VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: true } }
I20250809 19:58:44.829452 29905 raft_consensus.cc:738] T b24467c3debc41148d89819b96bbd341 P ba551624a9fc4e14a2ffdc1d5a5e1175 [term 2 LEARNER]: Becoming Follower/Learner. State: Replica: ba551624a9fc4e14a2ffdc1d5a5e1175, State: Initialized, Role: LEARNER
I20250809 19:58:44.829936 29910 ts_tablet_manager.cc:927] T 06a690aa4a264e50a47bcb3f31a0fdde P 6fa68d85f6924882be8d0d10d5c55b1a: Initiating tablet copy from peer ba551624a9fc4e14a2ffdc1d5a5e1175 (127.25.124.130:46405)
I20250809 19:58:44.829806 29905 consensus_queue.cc:260] T b24467c3debc41148d89819b96bbd341 P ba551624a9fc4e14a2ffdc1d5a5e1175 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 12, Last appended: 2.12, Last appended by leader: 12, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } } peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: NON_VOTER last_known_addr { host: "127.25.124.130" port: 46405 } attrs { promote: true } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: NON_VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: true } }
I20250809 19:58:44.831333 29905 ts_tablet_manager.cc:1428] T b24467c3debc41148d89819b96bbd341 P ba551624a9fc4e14a2ffdc1d5a5e1175: Time spent starting tablet: real 0.004s user 0.003s sys 0.001s
I20250809 19:58:44.831980 29910 tablet_copy_client.cc:323] T 06a690aa4a264e50a47bcb3f31a0fdde P 6fa68d85f6924882be8d0d10d5c55b1a: tablet copy: Beginning tablet copy session from remote peer at address 127.25.124.130:46405
I20250809 19:58:44.832779 29860 tablet_copy_service.cc:342] P 6fa68d85f6924882be8d0d10d5c55b1a: Request end of tablet copy session ba551624a9fc4e14a2ffdc1d5a5e1175-b24467c3debc41148d89819b96bbd341 received from {username='slave'} at 127.25.124.130:35313
I20250809 19:58:44.833107 29860 tablet_copy_service.cc:434] P 6fa68d85f6924882be8d0d10d5c55b1a: ending tablet copy session ba551624a9fc4e14a2ffdc1d5a5e1175-b24467c3debc41148d89819b96bbd341 on tablet b24467c3debc41148d89819b96bbd341 with peer ba551624a9fc4e14a2ffdc1d5a5e1175
I20250809 19:58:44.833276 29695 tablet_copy_service.cc:140] P ba551624a9fc4e14a2ffdc1d5a5e1175: Received BeginTabletCopySession request for tablet 06a690aa4a264e50a47bcb3f31a0fdde from peer 6fa68d85f6924882be8d0d10d5c55b1a ({username='slave'} at 127.25.124.131:45735)
I20250809 19:58:44.833653 29695 tablet_copy_service.cc:161] P ba551624a9fc4e14a2ffdc1d5a5e1175: Beginning new tablet copy session on tablet 06a690aa4a264e50a47bcb3f31a0fdde from peer 6fa68d85f6924882be8d0d10d5c55b1a at {username='slave'} at 127.25.124.131:45735: session id = 6fa68d85f6924882be8d0d10d5c55b1a-06a690aa4a264e50a47bcb3f31a0fdde
I20250809 19:58:44.840097 29695 tablet_copy_source_session.cc:215] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175: Tablet Copy: opened 0 blocks and 1 log segments
I20250809 19:58:44.842231 29910 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 06a690aa4a264e50a47bcb3f31a0fdde. 1 dirs total, 0 dirs full, 0 dirs failed
I20250809 19:58:44.853729 29910 tablet_copy_client.cc:806] T 06a690aa4a264e50a47bcb3f31a0fdde P 6fa68d85f6924882be8d0d10d5c55b1a: tablet copy: Starting download of 0 data blocks...
I20250809 19:58:44.854066 29910 tablet_copy_client.cc:670] T 06a690aa4a264e50a47bcb3f31a0fdde P 6fa68d85f6924882be8d0d10d5c55b1a: tablet copy: Starting download of 1 WAL segments...
I20250809 19:58:44.856683 29910 tablet_copy_client.cc:538] T 06a690aa4a264e50a47bcb3f31a0fdde P 6fa68d85f6924882be8d0d10d5c55b1a: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250809 19:58:44.860949 29910 tablet_bootstrap.cc:492] T 06a690aa4a264e50a47bcb3f31a0fdde P 6fa68d85f6924882be8d0d10d5c55b1a: Bootstrap starting.
I20250809 19:58:44.926388 29910 tablet_bootstrap.cc:492] T 06a690aa4a264e50a47bcb3f31a0fdde P 6fa68d85f6924882be8d0d10d5c55b1a: Bootstrap replayed 1/1 log segments. Stats: ops{read=11 overwritten=0 applied=11 ignored=0} inserts{seen=300 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250809 19:58:44.926868 29910 tablet_bootstrap.cc:492] T 06a690aa4a264e50a47bcb3f31a0fdde P 6fa68d85f6924882be8d0d10d5c55b1a: Bootstrap complete.
I20250809 19:58:44.927234 29910 ts_tablet_manager.cc:1397] T 06a690aa4a264e50a47bcb3f31a0fdde P 6fa68d85f6924882be8d0d10d5c55b1a: Time spent bootstrapping tablet: real 0.066s user 0.056s sys 0.008s
I20250809 19:58:44.928575 29910 raft_consensus.cc:357] T 06a690aa4a264e50a47bcb3f31a0fdde P 6fa68d85f6924882be8d0d10d5c55b1a [term 2 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } } peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: NON_VOTER last_known_addr { host: "127.25.124.131" port: 35781 } attrs { promote: true } }
I20250809 19:58:44.928961 29910 raft_consensus.cc:738] T 06a690aa4a264e50a47bcb3f31a0fdde P 6fa68d85f6924882be8d0d10d5c55b1a [term 2 LEARNER]: Becoming Follower/Learner. State: Replica: 6fa68d85f6924882be8d0d10d5c55b1a, State: Initialized, Role: LEARNER
I20250809 19:58:44.929364 29910 consensus_queue.cc:260] T 06a690aa4a264e50a47bcb3f31a0fdde P 6fa68d85f6924882be8d0d10d5c55b1a [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 11, Last appended: 2.11, Last appended by leader: 11, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } } peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: NON_VOTER last_known_addr { host: "127.25.124.131" port: 35781 } attrs { promote: true } }
I20250809 19:58:44.930444 29910 ts_tablet_manager.cc:1428] T 06a690aa4a264e50a47bcb3f31a0fdde P 6fa68d85f6924882be8d0d10d5c55b1a: Time spent starting tablet: real 0.003s user 0.000s sys 0.000s
I20250809 19:58:44.931689 29695 tablet_copy_service.cc:342] P ba551624a9fc4e14a2ffdc1d5a5e1175: Request end of tablet copy session 6fa68d85f6924882be8d0d10d5c55b1a-06a690aa4a264e50a47bcb3f31a0fdde received from {username='slave'} at 127.25.124.131:45735
I20250809 19:58:44.932036 29695 tablet_copy_service.cc:434] P ba551624a9fc4e14a2ffdc1d5a5e1175: ending tablet copy session 6fa68d85f6924882be8d0d10d5c55b1a-06a690aa4a264e50a47bcb3f31a0fdde on tablet 06a690aa4a264e50a47bcb3f31a0fdde with peer 6fa68d85f6924882be8d0d10d5c55b1a
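Note: the session above is the built-in tablet copy path: BeginTabletCopySession on the source peer, download of data blocks and WAL segments on the destination, superblock replacement, then local bootstrap. A copy of a single replica can also be triggered manually; a hedged sketch, assuming the usual argument order of tablet ID, source tserver address, destination tserver address (confirm with the CLI help output) and the peers from this log:
kudu remote_replica copy 06a690aa4a264e50a47bcb3f31a0fdde 127.25.124.130:46405 127.25.124.131:35781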
I20250809 19:58:45.077873 29531 raft_consensus.cc:1215] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a [term 2 LEARNER]: Deduplicated request from leader. Original: 2.11->[2.12-2.12] Dedup: 2.12->[]
I20250809 19:58:45.160614 26098 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 5206f9ca2f294510a3639afe13cba75a to finish bootstrapping
I20250809 19:58:45.176867 26098 ts_itest-base.cc:246] Waiting for 1 tablets on tserver ba551624a9fc4e14a2ffdc1d5a5e1175 to finish bootstrapping
I20250809 19:58:45.186033 26098 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 6fa68d85f6924882be8d0d10d5c55b1a to finish bootstrapping
I20250809 19:58:45.266839 29675 raft_consensus.cc:1215] T b24467c3debc41148d89819b96bbd341 P ba551624a9fc4e14a2ffdc1d5a5e1175 [term 2 LEARNER]: Deduplicated request from leader. Original: 2.11->[2.12-2.12] Dedup: 2.12->[]
I20250809 19:58:45.273497 29840 raft_consensus.cc:1215] T 06a690aa4a264e50a47bcb3f31a0fdde P 6fa68d85f6924882be8d0d10d5c55b1a [term 2 LEARNER]: Deduplicated request from leader. Original: 2.10->[2.11-2.11] Dedup: 2.11->[]
I20250809 19:58:45.416286 29511 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250809 19:58:45.417986 29814 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250809 19:58:45.419384 29655 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250809 19:58:45.535284 29915 raft_consensus.cc:1062] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a: attempting to promote NON_VOTER 5206f9ca2f294510a3639afe13cba75a to VOTER
I20250809 19:58:45.536571 29915 consensus_queue.cc:237] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 12, Committed index: 12, Last appended: 2.12, Last appended by leader: 9, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } } peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: NON_VOTER last_known_addr { host: "127.25.124.130" port: 46405 } attrs { promote: true } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } }
I20250809 19:58:45.540684 29675 raft_consensus.cc:1273] T b24467c3debc41148d89819b96bbd341 P ba551624a9fc4e14a2ffdc1d5a5e1175 [term 2 LEARNER]: Refusing update from remote peer 6fa68d85f6924882be8d0d10d5c55b1a: Log matching property violated. Preceding OpId in replica: term: 2 index: 12. Preceding OpId from leader: term: 2 index: 13. (index mismatch)
I20250809 19:58:45.540953 29531 raft_consensus.cc:1273] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a [term 2 LEARNER]: Refusing update from remote peer 6fa68d85f6924882be8d0d10d5c55b1a: Log matching property violated. Preceding OpId in replica: term: 2 index: 12. Preceding OpId from leader: term: 2 index: 13. (index mismatch)
I20250809 19:58:45.542343 29915 consensus_queue.cc:1035] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a [LEADER]: Connected to new peer: Peer: permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 13, Last known committed idx: 12, Time since last communication: 0.000s
I20250809 19:58:45.543115 29914 consensus_queue.cc:1035] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a [LEADER]: Connected to new peer: Peer: permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: NON_VOTER last_known_addr { host: "127.25.124.130" port: 46405 } attrs { promote: true }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 13, Last known committed idx: 12, Time since last communication: 0.000s
I20250809 19:58:45.560038 29893 raft_consensus.cc:2953] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a [term 2 LEADER]: Committing config change with OpId 2.13: config changed from index 12 to 13, 5206f9ca2f294510a3639afe13cba75a (127.25.124.129) changed from NON_VOTER to VOTER. New config: { opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } } peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: NON_VOTER last_known_addr { host: "127.25.124.130" port: 46405 } attrs { promote: true } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } } }
I20250809 19:58:45.561257 29531 raft_consensus.cc:2953] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a [term 2 FOLLOWER]: Committing config change with OpId 2.13: config changed from index 12 to 13, 5206f9ca2f294510a3639afe13cba75a (127.25.124.129) changed from NON_VOTER to VOTER. New config: { opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } } peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: NON_VOTER last_known_addr { host: "127.25.124.130" port: 46405 } attrs { promote: true } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } } }
I20250809 19:58:45.570658 29387 catalog_manager.cc:5582] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a reported cstate change: config changed from index 12 to 13, 5206f9ca2f294510a3639afe13cba75a (127.25.124.129) changed from NON_VOTER to VOTER. New cstate: current_term: 2 leader_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" committed_config { opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: NON_VOTER last_known_addr { host: "127.25.124.130" port: 46405 } attrs { promote: true } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } health_report { overall_health: HEALTHY } } }
I20250809 19:58:45.579613 29914 raft_consensus.cc:1062] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a: attempting to promote NON_VOTER ba551624a9fc4e14a2ffdc1d5a5e1175 to VOTER
I20250809 19:58:45.580991 29914 consensus_queue.cc:237] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 13, Committed index: 13, Last appended: 2.13, Last appended by leader: 9, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } } peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } attrs { promote: false } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } }
I20250809 19:58:45.581110 29674 raft_consensus.cc:2953] T b24467c3debc41148d89819b96bbd341 P ba551624a9fc4e14a2ffdc1d5a5e1175 [term 2 LEARNER]: Committing config change with OpId 2.13: config changed from index 12 to 13, 5206f9ca2f294510a3639afe13cba75a (127.25.124.129) changed from NON_VOTER to VOTER. New config: { opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } } peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: NON_VOTER last_known_addr { host: "127.25.124.130" port: 46405 } attrs { promote: true } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } } }
I20250809 19:58:45.585878 29531 raft_consensus.cc:1273] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a [term 2 FOLLOWER]: Refusing update from remote peer 6fa68d85f6924882be8d0d10d5c55b1a: Log matching property violated. Preceding OpId in replica: term: 2 index: 13. Preceding OpId from leader: term: 2 index: 14. (index mismatch)
I20250809 19:58:45.586966 29915 consensus_queue.cc:1035] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a [LEADER]: Connected to new peer: Peer: permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 14, Last known committed idx: 13, Time since last communication: 0.000s
I20250809 19:58:45.593951 29915 raft_consensus.cc:2953] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a [term 2 LEADER]: Committing config change with OpId 2.14: config changed from index 13 to 14, ba551624a9fc4e14a2ffdc1d5a5e1175 (127.25.124.130) changed from NON_VOTER to VOTER. New config: { opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } } peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } attrs { promote: false } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } } }
I20250809 19:58:45.596060 29531 raft_consensus.cc:2953] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a [term 2 FOLLOWER]: Committing config change with OpId 2.14: config changed from index 13 to 14, ba551624a9fc4e14a2ffdc1d5a5e1175 (127.25.124.130) changed from NON_VOTER to VOTER. New config: { opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } } peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } attrs { promote: false } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } } }
I20250809 19:58:45.604851 29674 raft_consensus.cc:1273] T b24467c3debc41148d89819b96bbd341 P ba551624a9fc4e14a2ffdc1d5a5e1175 [term 2 LEARNER]: Refusing update from remote peer 6fa68d85f6924882be8d0d10d5c55b1a: Log matching property violated. Preceding OpId in replica: term: 2 index: 13. Preceding OpId from leader: term: 2 index: 14. (index mismatch)
I20250809 19:58:45.605952 29914 consensus_queue.cc:1035] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a [LEADER]: Connected to new peer: Peer: permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 14, Last known committed idx: 13, Time since last communication: 0.000s
I20250809 19:58:45.608984 29674 raft_consensus.cc:2953] T b24467c3debc41148d89819b96bbd341 P ba551624a9fc4e14a2ffdc1d5a5e1175 [term 2 FOLLOWER]: Committing config change with OpId 2.14: config changed from index 13 to 14, ba551624a9fc4e14a2ffdc1d5a5e1175 (127.25.124.130) changed from NON_VOTER to VOTER. New config: { opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } } peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } attrs { promote: false } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } } }
I20250809 19:58:45.636786 29387 catalog_manager.cc:5582] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a reported cstate change: config changed from index 13 to 14, ba551624a9fc4e14a2ffdc1d5a5e1175 (127.25.124.130) changed from NON_VOTER to VOTER. New cstate: current_term: 2 leader_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" committed_config { opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } attrs { promote: false } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } health_report { overall_health: HEALTHY } } }
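Note: the block that follows appears to be the plain-text cluster health report of the kudu consistency check (ksck) run by the test. The same report can be reproduced against a live cluster with, roughly:
kudu cluster ksck 127.25.124.190:34851
The Unusual flags sections list the unsafe, experimental, and hidden flags the test harness passes to every daemon, and the trailing OK means all checks passed.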
Master Summary
UUID | Address | Status
----------------------------------+----------------------+---------
8066dbf4c38443dabf13535fbe0eb147 | 127.25.124.190:34851 | HEALTHY
Unusual flags for Master:
Flag | Value | Tags | Master
----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
ipki_ca_key_size | 768 | experimental | all 1 server(s) checked
ipki_server_key_size | 768 | experimental | all 1 server(s) checked
never_fsync | true | unsafe,advanced | all 1 server(s) checked
openssl_security_level_override | 0 | unsafe,hidden | all 1 server(s) checked
rpc_reuseport | true | experimental | all 1 server(s) checked
rpc_server_allow_ephemeral_ports | true | unsafe | all 1 server(s) checked
server_dump_info_format | pb | hidden | all 1 server(s) checked
server_dump_info_path | /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data/info.pb | hidden | all 1 server(s) checked
tsk_num_rsa_bits | 512 | experimental | all 1 server(s) checked
Flags of checked categories for Master:
Flag | Value | Master
---------------------+----------------------+-------------------------
builtin_ntp_servers | 127.25.124.148:46385 | all 1 server(s) checked
time_source | builtin | all 1 server(s) checked
Tablet Server Summary
UUID | Address | Status | Location | Tablet Leaders | Active Scanners
----------------------------------+----------------------+---------+----------+----------------+-----------------
5206f9ca2f294510a3639afe13cba75a | 127.25.124.129:45679 | HEALTHY | <none> | 1 | 0
6fa68d85f6924882be8d0d10d5c55b1a | 127.25.124.131:35781 | HEALTHY | <none> | 1 | 0
ba551624a9fc4e14a2ffdc1d5a5e1175 | 127.25.124.130:46405 | HEALTHY | <none> | 1 | 0
Tablet Server Location Summary
Location | Count
----------+---------
<none> | 3
Unusual flags for Tablet Server:
Flag | Value | Tags | Tablet Server
----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
ipki_server_key_size | 768 | experimental | all 3 server(s) checked
local_ip_for_outbound_sockets | 127.25.124.129 | experimental | 127.25.124.129:45679
local_ip_for_outbound_sockets | 127.25.124.130 | experimental | 127.25.124.130:46405
local_ip_for_outbound_sockets | 127.25.124.131 | experimental | 127.25.124.131:35781
never_fsync | true | unsafe,advanced | all 3 server(s) checked
openssl_security_level_override | 0 | unsafe,hidden | all 3 server(s) checked
rpc_server_allow_ephemeral_ports | true | unsafe | all 3 server(s) checked
server_dump_info_format | pb | hidden | all 3 server(s) checked
server_dump_info_path | /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data/info.pb | hidden | 127.25.124.129:45679
server_dump_info_path | /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data/info.pb | hidden | 127.25.124.130:46405
server_dump_info_path | /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data/info.pb | hidden | 127.25.124.131:35781
Flags of checked categories for Tablet Server:
Flag | Value | Tablet Server
---------------------+----------------------+-------------------------
builtin_ntp_servers | 127.25.124.148:46385 | all 3 server(s) checked
time_source | builtin | all 3 server(s) checked
Version Summary
Version | Servers
-----------------+-------------------------
1.19.0-SNAPSHOT | all 4 server(s) checked
Tablet Summary
The cluster doesn't have any matching system tables
Summary by table
Name | RF | Status | Total Tablets | Healthy | Recovering | Under-replicated | Unavailable
------------+----+---------+---------------+---------+------------+------------------+-------------
TestTable | 3 | HEALTHY | 1 | 1 | 0 | 0 | 0
TestTable1 | 3 | HEALTHY | 1 | 1 | 0 | 0 | 0
TestTable2 | 1 | HEALTHY | 1 | 1 | 0 | 0 | 0
Tablet Replica Count Summary
Statistic | Replica Count
----------------+---------------
Minimum | 2
First Quartile | 2
Median | 2
Third Quartile | 3
Maximum | 3
Total Count Summary
| Total Count
----------------+-------------
Masters | 1
Tablet Servers | 3
Tables | 3
Tablets | 3
Replicas | 7
==================
Warnings:
==================
Some masters have unsafe, experimental, or hidden flags set
Some tablet servers have unsafe, experimental, or hidden flags set
OK
I20250809 19:58:45.679314 26098 log_verifier.cc:126] Checking tablet 06a690aa4a264e50a47bcb3f31a0fdde
I20250809 19:58:45.753378 26098 log_verifier.cc:177] Verified matching terms for 11 ops in tablet 06a690aa4a264e50a47bcb3f31a0fdde
I20250809 19:58:45.753664 26098 log_verifier.cc:126] Checking tablet 183d980bc8a9431695fca8e1057b20a8
I20250809 19:58:45.776108 26098 log_verifier.cc:177] Verified matching terms for 8 ops in tablet 183d980bc8a9431695fca8e1057b20a8
I20250809 19:58:45.776291 26098 log_verifier.cc:126] Checking tablet b24467c3debc41148d89819b96bbd341
I20250809 19:58:45.817214 29928 raft_consensus.cc:1062] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175: attempting to promote NON_VOTER 6fa68d85f6924882be8d0d10d5c55b1a to VOTER
I20250809 19:58:45.818388 29928 consensus_queue.cc:237] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 11, Committed index: 11, Last appended: 2.11, Last appended by leader: 7, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } } peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } attrs { promote: false } }
I20250809 19:58:45.822602 29840 raft_consensus.cc:1273] T 06a690aa4a264e50a47bcb3f31a0fdde P 6fa68d85f6924882be8d0d10d5c55b1a [term 2 LEARNER]: Refusing update from remote peer ba551624a9fc4e14a2ffdc1d5a5e1175: Log matching property violated. Preceding OpId in replica: term: 2 index: 11. Preceding OpId from leader: term: 2 index: 12. (index mismatch)
I20250809 19:58:45.823707 29928 consensus_queue.cc:1035] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 [LEADER]: Connected to new peer: Peer: permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 12, Last known committed idx: 11, Time since last communication: 0.000s
I20250809 19:58:45.824146 29531 raft_consensus.cc:1273] T 06a690aa4a264e50a47bcb3f31a0fdde P 5206f9ca2f294510a3639afe13cba75a [term 2 FOLLOWER]: Refusing update from remote peer ba551624a9fc4e14a2ffdc1d5a5e1175: Log matching property violated. Preceding OpId in replica: term: 2 index: 11. Preceding OpId from leader: term: 2 index: 12. (index mismatch)
I20250809 19:58:45.825196 29928 consensus_queue.cc:1035] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 [LEADER]: Connected to new peer: Peer: permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 12, Last known committed idx: 11, Time since last communication: 0.000s
I20250809 19:58:45.829891 29953 raft_consensus.cc:2953] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 [term 2 LEADER]: Committing config change with OpId 2.12: config changed from index 11 to 12, 6fa68d85f6924882be8d0d10d5c55b1a (127.25.124.131) changed from NON_VOTER to VOTER. New config: { opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } } peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } attrs { promote: false } } }
I20250809 19:58:45.832927 29530 raft_consensus.cc:2953] T 06a690aa4a264e50a47bcb3f31a0fdde P 5206f9ca2f294510a3639afe13cba75a [term 2 FOLLOWER]: Committing config change with OpId 2.12: config changed from index 11 to 12, 6fa68d85f6924882be8d0d10d5c55b1a (127.25.124.131) changed from NON_VOTER to VOTER. New config: { opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } } peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } attrs { promote: false } } }
I20250809 19:58:45.831327 29840 raft_consensus.cc:2953] T 06a690aa4a264e50a47bcb3f31a0fdde P 6fa68d85f6924882be8d0d10d5c55b1a [term 2 FOLLOWER]: Committing config change with OpId 2.12: config changed from index 11 to 12, 6fa68d85f6924882be8d0d10d5c55b1a (127.25.124.131) changed from NON_VOTER to VOTER. New config: { opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } } peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } attrs { promote: false } } }
I20250809 19:58:45.840931 29388 catalog_manager.cc:5582] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 reported cstate change: config changed from index 11 to 12, 6fa68d85f6924882be8d0d10d5c55b1a (127.25.124.131) changed from NON_VOTER to VOTER. New cstate: current_term: 2 leader_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" committed_config { opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } attrs { promote: false } health_report { overall_health: HEALTHY } } }
I20250809 19:58:45.862454 26098 log_verifier.cc:177] Verified matching terms for 14 ops in tablet b24467c3debc41148d89819b96bbd341
I20250809 19:58:45.862811 26098 external_mini_cluster.cc:1658] Killing /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu with pid 29356
I20250809 19:58:45.888617 26098 minidump.cc:252] Setting minidump size limit to 20M
I20250809 19:58:45.889644 26098 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250809 19:58:45.890504 26098 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250809 19:58:45.899250 29959 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:58:45.899358 29960 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:58:45.900301 29962 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250809 19:58:45.900785 26098 server_base.cc:1047] running on GCE node
I20250809 19:58:45.981292 26098 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250809 19:58:45.981487 26098 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250809 19:58:45.981640 26098 hybrid_clock.cc:648] HybridClock initialized: now 1754769525981623 us; error 0 us; skew 500 ppm
I20250809 19:58:45.982208 26098 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250809 19:58:45.984679 26098 webserver.cc:489] Webserver started at http://0.0.0.0:44873/ using document root <none> and password file <none>
I20250809 19:58:45.985349 26098 fs_manager.cc:362] Metadata directory not provided
I20250809 19:58:45.985527 26098 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250809 19:58:45.989816 26098 fs_manager.cc:714] Time spent opening directory manager: real 0.003s user 0.005s sys 0.000s
I20250809 19:58:45.992682 29967 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250809 19:58:45.993409 26098 fs_manager.cc:730] Time spent opening block manager: real 0.002s user 0.001s sys 0.000s
I20250809 19:58:45.993655 26098 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
uuid: "8066dbf4c38443dabf13535fbe0eb147"
format_stamp: "Formatted at 2025-08-09 19:58:26 on dist-test-slave-xzln"
I20250809 19:58:45.995049 26098 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250809 19:58:46.020685 26098 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250809 19:58:46.021793 26098 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250809 19:58:46.022143 26098 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250809 19:58:46.029038 26098 sys_catalog.cc:263] Verifying existing consensus state
W20250809 19:58:46.031939 26098 sys_catalog.cc:243] For a single master config, on-disk Raft master: 127.25.124.190:34851 exists but no master address supplied!
I20250809 19:58:46.033326 26098 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147: Bootstrap starting.
I20250809 19:58:46.066212 26098 log.cc:826] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147: Log is configured to *not* fsync() on all Append() calls
I20250809 19:58:46.118041 26098 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147: Bootstrap replayed 1/1 log segments. Stats: ops{read=30 overwritten=0 applied=30 ignored=0} inserts{seen=13 ignored=0} mutations{seen=21 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250809 19:58:46.118650 26098 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147: Bootstrap complete.
I20250809 19:58:46.129514 26098 raft_consensus.cc:357] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [term 3 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8066dbf4c38443dabf13535fbe0eb147" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 34851 } }
I20250809 19:58:46.129976 26098 raft_consensus.cc:738] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [term 3 FOLLOWER]: Becoming Follower/Learner. State: Replica: 8066dbf4c38443dabf13535fbe0eb147, State: Initialized, Role: FOLLOWER
I20250809 19:58:46.130537 26098 consensus_queue.cc:260] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 30, Last appended: 3.30, Last appended by leader: 30, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8066dbf4c38443dabf13535fbe0eb147" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 34851 } }
I20250809 19:58:46.130932 26098 raft_consensus.cc:397] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [term 3 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250809 19:58:46.131134 26098 raft_consensus.cc:491] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [term 3 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250809 19:58:46.131510 26098 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [term 3 FOLLOWER]: Advancing to term 4
I20250809 19:58:46.135849 26098 raft_consensus.cc:513] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [term 4 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8066dbf4c38443dabf13535fbe0eb147" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 34851 } }
I20250809 19:58:46.136376 26098 leader_election.cc:304] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [CANDIDATE]: Term 4 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 8066dbf4c38443dabf13535fbe0eb147; no voters:
I20250809 19:58:46.137276 26098 leader_election.cc:290] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [CANDIDATE]: Term 4 election: Requested vote from peers
I20250809 19:58:46.137481 29974 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [term 4 FOLLOWER]: Leader election won for term 4
I20250809 19:58:46.138515 29974 raft_consensus.cc:695] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [term 4 LEADER]: Becoming Leader. State: Replica: 8066dbf4c38443dabf13535fbe0eb147, State: Running, Role: LEADER
I20250809 19:58:46.139098 29974 consensus_queue.cc:237] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 30, Committed index: 30, Last appended: 3.30, Last appended by leader: 30, Current term: 4, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8066dbf4c38443dabf13535fbe0eb147" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 34851 } }
I20250809 19:58:46.144326 29975 sys_catalog.cc:455] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 4 leader_uuid: "8066dbf4c38443dabf13535fbe0eb147" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8066dbf4c38443dabf13535fbe0eb147" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 34851 } } }
I20250809 19:58:46.144758 29975 sys_catalog.cc:458] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [sys.catalog]: This master's current role is: LEADER
I20250809 19:58:46.145458 29976 sys_catalog.cc:455] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 8066dbf4c38443dabf13535fbe0eb147. Latest consensus state: current_term: 4 leader_uuid: "8066dbf4c38443dabf13535fbe0eb147" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8066dbf4c38443dabf13535fbe0eb147" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 34851 } } }
I20250809 19:58:46.145895 29976 sys_catalog.cc:458] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [sys.catalog]: This master's current role is: LEADER
I20250809 19:58:46.165905 26098 tablet_replica.cc:331] stopping tablet replica
I20250809 19:58:46.166329 26098 raft_consensus.cc:2241] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [term 4 LEADER]: Raft consensus shutting down.
I20250809 19:58:46.166687 26098 raft_consensus.cc:2270] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [term 4 FOLLOWER]: Raft consensus is shut down!
I20250809 19:58:46.168728 26098 master.cc:561] Master@0.0.0.0:7051 shutting down...
W20250809 19:58:46.169128 26098 acceptor_pool.cc:196] Could not shut down acceptor socket on 0.0.0.0:7051: Network error: shutdown error: Transport endpoint is not connected (error 107)
I20250809 19:58:46.191092 26098 master.cc:583] Master@0.0.0.0:7051 shutdown complete.
W20250809 19:58:46.236073 29885 heartbeater.cc:646] Failed to heartbeat to 127.25.124.190:34851 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.25.124.190:34851: connect: Connection refused (error 111)
W20250809 19:58:46.885342 29725 heartbeater.cc:646] Failed to heartbeat to 127.25.124.190:34851 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.25.124.190:34851: connect: Connection refused (error 111)
W20250809 19:58:46.887527 29576 heartbeater.cc:646] Failed to heartbeat to 127.25.124.190:34851 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.25.124.190:34851: connect: Connection refused (error 111)
I20250809 19:58:50.928241 26098 external_mini_cluster.cc:1658] Killing /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu with pid 29426
I20250809 19:58:50.950217 26098 external_mini_cluster.cc:1658] Killing /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu with pid 29580
I20250809 19:58:50.973176 26098 external_mini_cluster.cc:1658] Killing /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu with pid 29729
I20250809 19:58:50.998778 26098 external_mini_cluster.cc:1366] Running /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
/tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.25.124.190:34851
--webserver_interface=127.25.124.190
--webserver_port=36449
--builtin_ntp_servers=127.25.124.148:46385
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.25.124.190:34851 with env {}
W20250809 19:58:51.262907 30048 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 19:58:51.263476 30048 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250809 19:58:51.263880 30048 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 19:58:51.289551 30048 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250809 19:58:51.289819 30048 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250809 19:58:51.290048 30048 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250809 19:58:51.290271 30048 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250809 19:58:51.319527 30048 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.25.124.148:46385
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.25.124.190:34851
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.25.124.190:34851
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.25.124.190
--webserver_port=36449
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 09 Aug 2025 19:43:21 UTC on 5fd53c4cbb9d
build id 7489
TSAN enabled
I20250809 19:58:51.320722 30048 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250809 19:58:51.322103 30048 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250809 19:58:51.332094 30054 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:58:52.734946 30053 debug-util.cc:398] Leaking SignalData structure 0x7b0800037cc0 after lost signal to thread 30048
W20250809 19:58:52.853075 30048 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.521s user 0.505s sys 1.016s
W20250809 19:58:52.853489 30048 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.522s user 0.505s sys 1.016s
W20250809 19:58:51.332243 30055 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:58:52.855020 30057 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:58:52.857771 30056 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1521 milliseconds
I20250809 19:58:52.857784 30048 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250809 19:58:52.858901 30048 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250809 19:58:52.860982 30048 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250809 19:58:52.862284 30048 hybrid_clock.cc:648] HybridClock initialized: now 1754769532862247 us; error 45 us; skew 500 ppm
I20250809 19:58:52.862946 30048 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250809 19:58:52.868006 30048 webserver.cc:489] Webserver started at http://127.25.124.190:36449/ using document root <none> and password file <none>
I20250809 19:58:52.868767 30048 fs_manager.cc:362] Metadata directory not provided
I20250809 19:58:52.868938 30048 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250809 19:58:52.875939 30048 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.001s sys 0.002s
I20250809 19:58:52.879720 30064 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250809 19:58:52.880501 30048 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.001s sys 0.003s
I20250809 19:58:52.880751 30048 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
uuid: "8066dbf4c38443dabf13535fbe0eb147"
format_stamp: "Formatted at 2025-08-09 19:58:26 on dist-test-slave-xzln"
I20250809 19:58:52.882390 30048 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250809 19:58:52.921749 30048 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250809 19:58:52.922883 30048 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250809 19:58:52.923257 30048 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250809 19:58:52.982106 30048 rpc_server.cc:307] RPC server started. Bound to: 127.25.124.190:34851
I20250809 19:58:52.982163 30115 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.25.124.190:34851 every 8 connection(s)
I20250809 19:58:52.984476 30048 server_base.cc:1179] Dumped server information to /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250809 19:58:52.991408 26098 external_mini_cluster.cc:1428] Started /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu as pid 30048
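A minimal sketch of how the master restart logged just above could be reproduced outside the test harness. It assumes the kudu binary, the master-0 directories, and the test-local NTP server shown in this run's log are still present, and it mirrors only a subset of the logged flags; it is an illustration, not part of the test's own code.

import subprocess

KUDU_BIN = "/tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu"  # binary path from the log above
MASTER_ROOT = ("/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest."
               "TestRebuildTables.1754769444608376-26098-0/"
               "raft_consensus-itest-cluster/master-0")

# Mirror the invocation logged above: fs flags first, then the "master run"
# action, then the master-specific flags (addresses/ports are specific to this run).
master = subprocess.Popen([
    KUDU_BIN,
    "--fs_wal_dir=%s/wal" % MASTER_ROOT,
    "--fs_data_dirs=%s/data" % MASTER_ROOT,
    "--logtostderr",
    "master", "run",
    "--rpc_bind_addresses=127.25.124.190:34851",
    "--webserver_interface=127.25.124.190",
    "--webserver_port=36449",
    "--builtin_ntp_servers=127.25.124.148:46385",
    "--time_source=builtin",
    "--master_addresses=127.25.124.190:34851",
])
# The child keeps running until stopped, as external_mini_cluster.cc does;
# master.terminate() would shut it down again.
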
I20250809 19:58:52.993072 26098 external_mini_cluster.cc:1366] Running /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
/tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.25.124.129:45679
--local_ip_for_outbound_sockets=127.25.124.129
--tserver_master_addrs=127.25.124.190:34851
--webserver_port=40961
--webserver_interface=127.25.124.129
--builtin_ntp_servers=127.25.124.148:46385
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250809 19:58:52.993769 30116 sys_catalog.cc:263] Verifying existing consensus state
I20250809 19:58:53.000607 30116 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147: Bootstrap starting.
I20250809 19:58:53.009749 30116 log.cc:826] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147: Log is configured to *not* fsync() on all Append() calls
I20250809 19:58:53.089005 30116 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147: Bootstrap replayed 1/1 log segments. Stats: ops{read=34 overwritten=0 applied=34 ignored=0} inserts{seen=15 ignored=0} mutations{seen=23 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250809 19:58:53.089670 30116 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147: Bootstrap complete.
I20250809 19:58:53.105054 30116 raft_consensus.cc:357] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [term 5 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8066dbf4c38443dabf13535fbe0eb147" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 34851 } }
I20250809 19:58:53.106719 30116 raft_consensus.cc:738] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [term 5 FOLLOWER]: Becoming Follower/Learner. State: Replica: 8066dbf4c38443dabf13535fbe0eb147, State: Initialized, Role: FOLLOWER
I20250809 19:58:53.107304 30116 consensus_queue.cc:260] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 34, Last appended: 5.34, Last appended by leader: 34, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8066dbf4c38443dabf13535fbe0eb147" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 34851 } }
I20250809 19:58:53.107710 30116 raft_consensus.cc:397] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [term 5 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250809 19:58:53.107918 30116 raft_consensus.cc:491] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [term 5 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250809 19:58:53.108165 30116 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [term 5 FOLLOWER]: Advancing to term 6
I20250809 19:58:53.112295 30116 raft_consensus.cc:513] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [term 6 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8066dbf4c38443dabf13535fbe0eb147" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 34851 } }
I20250809 19:58:53.112797 30116 leader_election.cc:304] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [CANDIDATE]: Term 6 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 8066dbf4c38443dabf13535fbe0eb147; no voters:
I20250809 19:58:53.114619 30116 leader_election.cc:290] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [CANDIDATE]: Term 6 election: Requested vote from peers
I20250809 19:58:53.114951 30120 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [term 6 FOLLOWER]: Leader election won for term 6
I20250809 19:58:53.117699 30120 raft_consensus.cc:695] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [term 6 LEADER]: Becoming Leader. State: Replica: 8066dbf4c38443dabf13535fbe0eb147, State: Running, Role: LEADER
I20250809 19:58:53.118399 30120 consensus_queue.cc:237] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 34, Committed index: 34, Last appended: 5.34, Last appended by leader: 34, Current term: 6, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8066dbf4c38443dabf13535fbe0eb147" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 34851 } }
I20250809 19:58:53.118788 30116 sys_catalog.cc:564] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [sys.catalog]: configured and running, proceeding with master startup.
I20250809 19:58:53.126976 30121 sys_catalog.cc:455] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 6 leader_uuid: "8066dbf4c38443dabf13535fbe0eb147" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8066dbf4c38443dabf13535fbe0eb147" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 34851 } } }
I20250809 19:58:53.128183 30122 sys_catalog.cc:455] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 8066dbf4c38443dabf13535fbe0eb147. Latest consensus state: current_term: 6 leader_uuid: "8066dbf4c38443dabf13535fbe0eb147" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8066dbf4c38443dabf13535fbe0eb147" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 34851 } } }
I20250809 19:58:53.128893 30122 sys_catalog.cc:458] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [sys.catalog]: This master's current role is: LEADER
I20250809 19:58:53.130971 30121 sys_catalog.cc:458] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147 [sys.catalog]: This master's current role is: LEADER
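The term-6 election above is decided locally because the committed config contains a single VOTER: the candidate's own vote is already a majority. A toy illustration of that counting rule (not Kudu's implementation), using the vote totals from the election summary logged above:

# Toy illustration only: why a single-replica config wins its election immediately.
def majority_size(num_voters: int) -> int:
    return num_voters // 2 + 1

def election_result(yes_votes: int, no_votes: int, num_voters: int) -> str:
    if yes_votes >= majority_size(num_voters):
        return "candidate won"
    if no_votes >= majority_size(num_voters):
        return "candidate lost"
    return "undecided"

# From the log: 1 voter, 1 yes vote, 0 no votes.
print(election_result(yes_votes=1, no_votes=0, num_voters=1))  # -> candidate won
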
I20250809 19:58:53.138863 30126 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250809 19:58:53.150617 30126 catalog_manager.cc:671] Loaded metadata for table TestTable2 [id=3f574a9c685242d69cc0da3709d8898c]
I20250809 19:58:53.152112 30126 catalog_manager.cc:671] Loaded metadata for table TestTable [id=776bea9d5f4a427d8291e03abba3717a]
I20250809 19:58:53.153493 30126 catalog_manager.cc:671] Loaded metadata for table TestTable1 [id=827628a756584e8990ca4ccc98ae0281]
I20250809 19:58:53.160635 30126 tablet_loader.cc:96] loaded metadata for tablet 06a690aa4a264e50a47bcb3f31a0fdde (table TestTable1 [id=827628a756584e8990ca4ccc98ae0281])
I20250809 19:58:53.161831 30126 tablet_loader.cc:96] loaded metadata for tablet 183d980bc8a9431695fca8e1057b20a8 (table TestTable2 [id=3f574a9c685242d69cc0da3709d8898c])
I20250809 19:58:53.162873 30126 tablet_loader.cc:96] loaded metadata for tablet b24467c3debc41148d89819b96bbd341 (table TestTable [id=776bea9d5f4a427d8291e03abba3717a])
I20250809 19:58:53.164019 30126 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250809 19:58:53.168622 30126 catalog_manager.cc:1261] Loaded cluster ID: 2128a1420868429aa5b5d946fa91acde
I20250809 19:58:53.168875 30126 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250809 19:58:53.176074 30126 catalog_manager.cc:1506] Loading token signing keys...
I20250809 19:58:53.180970 30126 catalog_manager.cc:5966] T 00000000000000000000000000000000 P 8066dbf4c38443dabf13535fbe0eb147: Loaded TSK: 0
I20250809 19:58:53.182282 30126 catalog_manager.cc:1516] Initializing in-progress tserver states...
W20250809 19:58:53.313627 30118 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 19:58:53.314072 30118 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250809 19:58:53.314581 30118 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 19:58:53.340631 30118 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250809 19:58:53.341344 30118 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.25.124.129
I20250809 19:58:53.369436 30118 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.25.124.148:46385
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.25.124.129:45679
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.25.124.129
--webserver_port=40961
--tserver_master_addrs=127.25.124.190:34851
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.25.124.129
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 09 Aug 2025 19:43:21 UTC on 5fd53c4cbb9d
build id 7489
TSAN enabled
I20250809 19:58:53.370558 30118 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250809 19:58:53.372061 30118 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250809 19:58:53.383139 30143 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:58:53.384244 30144 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:58:54.627666 30118 thread.cc:641] GCE (cloud detector) Time spent creating pthread: real 1.245s user 0.475s sys 0.770s
W20250809 19:58:54.628043 30118 thread.cc:608] GCE (cloud detector) Time spent starting thread: real 1.246s user 0.475s sys 0.771s
I20250809 19:58:54.629439 30118 server_base.cc:1047] running on GCE node
W20250809 19:58:54.630887 30148 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250809 19:58:54.632352 30118 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250809 19:58:54.634790 30118 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250809 19:58:54.636237 30118 hybrid_clock.cc:648] HybridClock initialized: now 1754769534636167 us; error 63 us; skew 500 ppm
I20250809 19:58:54.637302 30118 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250809 19:58:54.645345 30118 webserver.cc:489] Webserver started at http://127.25.124.129:40961/ using document root <none> and password file <none>
I20250809 19:58:54.646569 30118 fs_manager.cc:362] Metadata directory not provided
I20250809 19:58:54.646816 30118 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250809 19:58:54.656977 30118 fs_manager.cc:714] Time spent opening directory manager: real 0.006s user 0.008s sys 0.000s
I20250809 19:58:54.662982 30153 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250809 19:58:54.664175 30118 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.000s
I20250809 19:58:54.664532 30118 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "5206f9ca2f294510a3639afe13cba75a"
format_stamp: "Formatted at 2025-08-09 19:58:28 on dist-test-slave-xzln"
I20250809 19:58:54.667043 30118 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250809 19:58:54.747048 30118 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250809 19:58:54.748915 30118 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250809 19:58:54.749439 30118 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250809 19:58:54.752827 30118 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250809 19:58:54.759001 30160 ts_tablet_manager.cc:542] Loading tablet metadata (0/3 complete)
I20250809 19:58:54.773419 30118 ts_tablet_manager.cc:579] Loaded tablet metadata (3 total tablets, 3 live tablets)
I20250809 19:58:54.773628 30118 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.016s user 0.001s sys 0.001s
I20250809 19:58:54.773900 30118 ts_tablet_manager.cc:594] Registering tablets (0/3 complete)
I20250809 19:58:54.778725 30160 tablet_bootstrap.cc:492] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a: Bootstrap starting.
I20250809 19:58:54.788709 30118 ts_tablet_manager.cc:610] Registered 3 tablets
I20250809 19:58:54.788931 30118 ts_tablet_manager.cc:589] Time spent register tablets: real 0.015s user 0.011s sys 0.003s
I20250809 19:58:54.843576 30160 log.cc:826] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a: Log is configured to *not* fsync() on all Append() calls
I20250809 19:58:54.952557 30160 tablet_bootstrap.cc:492] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a: Bootstrap replayed 1/1 log segments. Stats: ops{read=8 overwritten=0 applied=8 ignored=0} inserts{seen=300 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250809 19:58:54.953599 30160 tablet_bootstrap.cc:492] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a: Bootstrap complete.
I20250809 19:58:54.955164 30160 ts_tablet_manager.cc:1397] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a: Time spent bootstrapping tablet: real 0.177s user 0.150s sys 0.020s
I20250809 19:58:54.974532 30160 raft_consensus.cc:357] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } }
I20250809 19:58:54.977165 30160 raft_consensus.cc:738] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: 5206f9ca2f294510a3639afe13cba75a, State: Initialized, Role: FOLLOWER
I20250809 19:58:54.978204 30160 consensus_queue.cc:260] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 8, Last appended: 2.8, Last appended by leader: 8, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } }
I20250809 19:58:54.978834 30160 raft_consensus.cc:397] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a [term 2 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250809 19:58:54.979166 30160 raft_consensus.cc:491] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a [term 2 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250809 19:58:54.979547 30160 raft_consensus.cc:3058] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a [term 2 FOLLOWER]: Advancing to term 3
I20250809 19:58:54.986546 30160 raft_consensus.cc:513] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a [term 3 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } }
I20250809 19:58:54.987316 30160 leader_election.cc:304] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a [CANDIDATE]: Term 3 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 5206f9ca2f294510a3639afe13cba75a; no voters:
I20250809 19:58:54.994995 30269 raft_consensus.cc:2802] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a [term 3 FOLLOWER]: Leader election won for term 3
I20250809 19:58:54.995342 30160 leader_election.cc:290] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a [CANDIDATE]: Term 3 election: Requested vote from peers
I20250809 19:58:54.996217 30118 rpc_server.cc:307] RPC server started. Bound to: 127.25.124.129:45679
I20250809 19:58:54.996943 30268 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.25.124.129:45679 every 8 connection(s)
I20250809 19:58:54.999328 30118 server_base.cc:1179] Dumped server information to /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250809 19:58:55.007627 26098 external_mini_cluster.cc:1428] Started /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu as pid 30118
I20250809 19:58:55.009971 26098 external_mini_cluster.cc:1366] Running /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
/tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.25.124.130:46405
--local_ip_for_outbound_sockets=127.25.124.130
--tserver_master_addrs=127.25.124.190:34851
--webserver_port=38491
--webserver_interface=127.25.124.130
--builtin_ntp_servers=127.25.124.148:46385
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250809 19:58:55.011554 30160 ts_tablet_manager.cc:1428] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a: Time spent starting tablet: real 0.056s user 0.034s sys 0.012s
I20250809 19:58:55.012298 30160 tablet_bootstrap.cc:492] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a: Bootstrap starting.
I20250809 19:58:55.031306 30269 raft_consensus.cc:695] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a [term 3 LEADER]: Becoming Leader. State: Replica: 5206f9ca2f294510a3639afe13cba75a, State: Running, Role: LEADER
I20250809 19:58:55.032181 30269 consensus_queue.cc:237] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 8, Committed index: 8, Last appended: 2.8, Last appended by leader: 8, Current term: 3, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } }
I20250809 19:58:55.055754 30272 heartbeater.cc:344] Connected to a master server at 127.25.124.190:34851
I20250809 19:58:55.056344 30272 heartbeater.cc:461] Registering TS with master...
I20250809 19:58:55.058022 30272 heartbeater.cc:507] Master 127.25.124.190:34851 requested a full tablet report, sending...
I20250809 19:58:55.063395 30081 ts_manager.cc:194] Registered new tserver with Master: 5206f9ca2f294510a3639afe13cba75a (127.25.124.129:45679)
I20250809 19:58:55.068029 30081 catalog_manager.cc:5582] T 183d980bc8a9431695fca8e1057b20a8 P 5206f9ca2f294510a3639afe13cba75a reported cstate change: term changed from 2 to 3. New cstate: current_term: 3 leader_uuid: "5206f9ca2f294510a3639afe13cba75a" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } health_report { overall_health: HEALTHY } } }
I20250809 19:58:55.069252 30081 catalog_manager.cc:5582] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a reported cstate change: config changed from index -1 to 14, term changed from 0 to 2, VOTER 5206f9ca2f294510a3639afe13cba75a (127.25.124.129) added, VOTER 6fa68d85f6924882be8d0d10d5c55b1a (127.25.124.131) added, VOTER ba551624a9fc4e14a2ffdc1d5a5e1175 (127.25.124.130) added. New cstate: current_term: 2 committed_config { opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } } peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } attrs { promote: false } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } } }
I20250809 19:58:55.119745 30081 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.25.124.129:54913
I20250809 19:58:55.123847 30272 heartbeater.cc:499] Master 127.25.124.190:34851 was elected leader, sending a full tablet report...
I20250809 19:58:55.168740 30160 tablet_bootstrap.cc:492] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a: Bootstrap replayed 1/1 log segments. Stats: ops{read=14 overwritten=0 applied=14 ignored=0} inserts{seen=400 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250809 19:58:55.169330 30160 tablet_bootstrap.cc:492] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a: Bootstrap complete.
I20250809 19:58:55.170295 30160 ts_tablet_manager.cc:1397] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a: Time spent bootstrapping tablet: real 0.158s user 0.113s sys 0.023s
I20250809 19:58:55.171726 30160 raft_consensus.cc:357] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } } peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } attrs { promote: false } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } }
I20250809 19:58:55.172164 30160 raft_consensus.cc:738] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: 5206f9ca2f294510a3639afe13cba75a, State: Initialized, Role: FOLLOWER
I20250809 19:58:55.172649 30160 consensus_queue.cc:260] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 14, Last appended: 2.14, Last appended by leader: 14, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } } peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } attrs { promote: false } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } }
I20250809 19:58:55.174134 30160 ts_tablet_manager.cc:1428] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a: Time spent starting tablet: real 0.004s user 0.004s sys 0.000s
I20250809 19:58:55.174715 30160 tablet_bootstrap.cc:492] T 06a690aa4a264e50a47bcb3f31a0fdde P 5206f9ca2f294510a3639afe13cba75a: Bootstrap starting.
I20250809 19:58:55.254752 30160 tablet_bootstrap.cc:492] T 06a690aa4a264e50a47bcb3f31a0fdde P 5206f9ca2f294510a3639afe13cba75a: Bootstrap replayed 1/1 log segments. Stats: ops{read=12 overwritten=0 applied=12 ignored=0} inserts{seen=300 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250809 19:58:55.255414 30160 tablet_bootstrap.cc:492] T 06a690aa4a264e50a47bcb3f31a0fdde P 5206f9ca2f294510a3639afe13cba75a: Bootstrap complete.
I20250809 19:58:55.256527 30160 ts_tablet_manager.cc:1397] T 06a690aa4a264e50a47bcb3f31a0fdde P 5206f9ca2f294510a3639afe13cba75a: Time spent bootstrapping tablet: real 0.082s user 0.073s sys 0.008s
I20250809 19:58:55.257859 30160 raft_consensus.cc:357] T 06a690aa4a264e50a47bcb3f31a0fdde P 5206f9ca2f294510a3639afe13cba75a [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } } peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } attrs { promote: false } }
I20250809 19:58:55.258250 30160 raft_consensus.cc:738] T 06a690aa4a264e50a47bcb3f31a0fdde P 5206f9ca2f294510a3639afe13cba75a [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: 5206f9ca2f294510a3639afe13cba75a, State: Initialized, Role: FOLLOWER
I20250809 19:58:55.258714 30160 consensus_queue.cc:260] T 06a690aa4a264e50a47bcb3f31a0fdde P 5206f9ca2f294510a3639afe13cba75a [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 12, Last appended: 2.12, Last appended by leader: 12, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } } peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } attrs { promote: false } }
I20250809 19:58:55.260170 30160 ts_tablet_manager.cc:1428] T 06a690aa4a264e50a47bcb3f31a0fdde P 5206f9ca2f294510a3639afe13cba75a: Time spent starting tablet: real 0.003s user 0.004s sys 0.000s
W20250809 19:58:55.376271 30276 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 19:58:55.376714 30276 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250809 19:58:55.377146 30276 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 19:58:55.403035 30276 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250809 19:58:55.403764 30276 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.25.124.130
I20250809 19:58:55.436041 30276 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.25.124.148:46385
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.25.124.130:46405
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.25.124.130
--webserver_port=38491
--tserver_master_addrs=127.25.124.190:34851
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.25.124.130
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 09 Aug 2025 19:43:21 UTC on 5fd53c4cbb9d
build id 7489
TSAN enabled
I20250809 19:58:55.437158 30276 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250809 19:58:55.438496 30276 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250809 19:58:55.449359 30290 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:58:55.453208 30293 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:58:55.452020 30291 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:58:56.457158 30292 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1003 milliseconds
I20250809 19:58:56.457257 30276 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250809 19:58:56.458259 30276 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250809 19:58:56.460597 30276 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250809 19:58:56.461983 30276 hybrid_clock.cc:648] HybridClock initialized: now 1754769536461947 us; error 41 us; skew 500 ppm
I20250809 19:58:56.462643 30276 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250809 19:58:56.468267 30276 webserver.cc:489] Webserver started at http://127.25.124.130:38491/ using document root <none> and password file <none>
I20250809 19:58:56.469005 30276 fs_manager.cc:362] Metadata directory not provided
I20250809 19:58:56.469166 30276 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250809 19:58:56.475826 30276 fs_manager.cc:714] Time spent opening directory manager: real 0.004s user 0.003s sys 0.001s
I20250809 19:58:56.479691 30300 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250809 19:58:56.480518 30276 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.003s sys 0.000s
I20250809 19:58:56.480785 30276 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175"
format_stamp: "Formatted at 2025-08-09 19:58:29 on dist-test-slave-xzln"
I20250809 19:58:56.482380 30276 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250809 19:58:56.519286 30276 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250809 19:58:56.520435 30276 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250809 19:58:56.520799 30276 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250809 19:58:56.522850 30276 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250809 19:58:56.527515 30307 ts_tablet_manager.cc:542] Loading tablet metadata (0/2 complete)
I20250809 19:58:56.537002 30276 ts_tablet_manager.cc:579] Loaded tablet metadata (2 total tablets, 2 live tablets)
I20250809 19:58:56.537195 30276 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.011s user 0.002s sys 0.000s
I20250809 19:58:56.537415 30276 ts_tablet_manager.cc:594] Registering tablets (0/2 complete)
I20250809 19:58:56.541616 30307 tablet_bootstrap.cc:492] T b24467c3debc41148d89819b96bbd341 P ba551624a9fc4e14a2ffdc1d5a5e1175: Bootstrap starting.
I20250809 19:58:56.543776 30276 ts_tablet_manager.cc:610] Registered 2 tablets
I20250809 19:58:56.543943 30276 ts_tablet_manager.cc:589] Time spent register tablets: real 0.007s user 0.006s sys 0.000s
I20250809 19:58:56.585098 30307 log.cc:826] T b24467c3debc41148d89819b96bbd341 P ba551624a9fc4e14a2ffdc1d5a5e1175: Log is configured to *not* fsync() on all Append() calls
I20250809 19:58:56.680996 30307 tablet_bootstrap.cc:492] T b24467c3debc41148d89819b96bbd341 P ba551624a9fc4e14a2ffdc1d5a5e1175: Bootstrap replayed 1/1 log segments. Stats: ops{read=14 overwritten=0 applied=14 ignored=0} inserts{seen=400 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250809 19:58:56.681699 30307 tablet_bootstrap.cc:492] T b24467c3debc41148d89819b96bbd341 P ba551624a9fc4e14a2ffdc1d5a5e1175: Bootstrap complete.
I20250809 19:58:56.683141 30307 ts_tablet_manager.cc:1397] T b24467c3debc41148d89819b96bbd341 P ba551624a9fc4e14a2ffdc1d5a5e1175: Time spent bootstrapping tablet: real 0.142s user 0.108s sys 0.029s
I20250809 19:58:56.697027 30276 rpc_server.cc:307] RPC server started. Bound to: 127.25.124.130:46405
I20250809 19:58:56.697238 30414 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.25.124.130:46405 every 8 connection(s)
I20250809 19:58:56.699059 30276 server_base.cc:1179] Dumped server information to /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250809 19:58:56.700059 26098 external_mini_cluster.cc:1428] Started /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu as pid 30276
I20250809 19:58:56.698515 30307 raft_consensus.cc:357] T b24467c3debc41148d89819b96bbd341 P ba551624a9fc4e14a2ffdc1d5a5e1175 [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } } peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } attrs { promote: false } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } }
I20250809 19:58:56.700690 30307 raft_consensus.cc:738] T b24467c3debc41148d89819b96bbd341 P ba551624a9fc4e14a2ffdc1d5a5e1175 [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: ba551624a9fc4e14a2ffdc1d5a5e1175, State: Initialized, Role: FOLLOWER
I20250809 19:58:56.701692 26098 external_mini_cluster.cc:1366] Running /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
/tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.25.124.131:35781
--local_ip_for_outbound_sockets=127.25.124.131
--tserver_master_addrs=127.25.124.190:34851
--webserver_port=40843
--webserver_interface=127.25.124.131
--builtin_ntp_servers=127.25.124.148:46385
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250809 19:58:56.701704 30307 consensus_queue.cc:260] T b24467c3debc41148d89819b96bbd341 P ba551624a9fc4e14a2ffdc1d5a5e1175 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 14, Last appended: 2.14, Last appended by leader: 14, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } } peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } attrs { promote: false } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } }
I20250809 19:58:56.708799 30307 ts_tablet_manager.cc:1428] T b24467c3debc41148d89819b96bbd341 P ba551624a9fc4e14a2ffdc1d5a5e1175: Time spent starting tablet: real 0.025s user 0.018s sys 0.003s
I20250809 19:58:56.709440 30307 tablet_bootstrap.cc:492] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175: Bootstrap starting.
I20250809 19:58:56.718343 30415 heartbeater.cc:344] Connected to a master server at 127.25.124.190:34851
I20250809 19:58:56.718665 30415 heartbeater.cc:461] Registering TS with master...
I20250809 19:58:56.719444 30415 heartbeater.cc:507] Master 127.25.124.190:34851 requested a full tablet report, sending...
I20250809 19:58:56.722232 30080 ts_manager.cc:194] Registered new tserver with Master: ba551624a9fc4e14a2ffdc1d5a5e1175 (127.25.124.130:46405)
I20250809 19:58:56.725267 30080 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.25.124.130:45319
I20250809 19:58:56.727880 30415 heartbeater.cc:499] Master 127.25.124.190:34851 was elected leader, sending a full tablet report...
I20250809 19:58:56.824258 30307 tablet_bootstrap.cc:492] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175: Bootstrap replayed 1/1 log segments. Stats: ops{read=12 overwritten=0 applied=12 ignored=0} inserts{seen=300 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250809 19:58:56.824823 30307 tablet_bootstrap.cc:492] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175: Bootstrap complete.
I20250809 19:58:56.825790 30307 ts_tablet_manager.cc:1397] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175: Time spent bootstrapping tablet: real 0.117s user 0.101s sys 0.012s
I20250809 19:58:56.827070 30307 raft_consensus.cc:357] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } } peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } attrs { promote: false } }
I20250809 19:58:56.827476 30307 raft_consensus.cc:738] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: ba551624a9fc4e14a2ffdc1d5a5e1175, State: Initialized, Role: FOLLOWER
I20250809 19:58:56.827855 30307 consensus_queue.cc:260] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 12, Last appended: 2.12, Last appended by leader: 12, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } } peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } attrs { promote: false } }
I20250809 19:58:56.829157 30307 ts_tablet_manager.cc:1428] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175: Time spent starting tablet: real 0.003s user 0.004s sys 0.000s
I20250809 19:58:56.863099 30422 raft_consensus.cc:491] T 06a690aa4a264e50a47bcb3f31a0fdde P 5206f9ca2f294510a3639afe13cba75a [term 2 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250809 19:58:56.863466 30422 raft_consensus.cc:513] T 06a690aa4a264e50a47bcb3f31a0fdde P 5206f9ca2f294510a3639afe13cba75a [term 2 FOLLOWER]: Starting pre-election with config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } } peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } attrs { promote: false } }
I20250809 19:58:56.865180 30422 leader_election.cc:290] T 06a690aa4a264e50a47bcb3f31a0fdde P 5206f9ca2f294510a3639afe13cba75a [CANDIDATE]: Term 3 pre-election: Requested pre-vote from peers ba551624a9fc4e14a2ffdc1d5a5e1175 (127.25.124.130:46405), 6fa68d85f6924882be8d0d10d5c55b1a (127.25.124.131:35781)
I20250809 19:58:56.876084 30422 raft_consensus.cc:491] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a [term 2 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250809 19:58:56.876489 30422 raft_consensus.cc:513] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a [term 2 FOLLOWER]: Starting pre-election with config: opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } } peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } attrs { promote: false } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } }
W20250809 19:58:56.876919 30157 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.25.124.131:35781: connect: Connection refused (error 111)
I20250809 19:58:56.878355 30422 leader_election.cc:290] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a [CANDIDATE]: Term 3 pre-election: Requested pre-vote from peers 6fa68d85f6924882be8d0d10d5c55b1a (127.25.124.131:35781), ba551624a9fc4e14a2ffdc1d5a5e1175 (127.25.124.130:46405)
W20250809 19:58:56.880757 30157 leader_election.cc:336] T 06a690aa4a264e50a47bcb3f31a0fdde P 5206f9ca2f294510a3639afe13cba75a [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer 6fa68d85f6924882be8d0d10d5c55b1a (127.25.124.131:35781): Network error: Client connection negotiation failed: client connection to 127.25.124.131:35781: connect: Connection refused (error 111)
W20250809 19:58:56.882742 30157 leader_election.cc:336] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer 6fa68d85f6924882be8d0d10d5c55b1a (127.25.124.131:35781): Network error: Client connection negotiation failed: client connection to 127.25.124.131:35781: connect: Connection refused (error 111)
I20250809 19:58:56.886076 30370 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "06a690aa4a264e50a47bcb3f31a0fdde" candidate_uuid: "5206f9ca2f294510a3639afe13cba75a" candidate_term: 3 candidate_status { last_received { term: 2 index: 12 } } ignore_live_leader: false dest_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" is_pre_election: true
I20250809 19:58:56.886473 30369 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "b24467c3debc41148d89819b96bbd341" candidate_uuid: "5206f9ca2f294510a3639afe13cba75a" candidate_term: 3 candidate_status { last_received { term: 2 index: 14 } } ignore_live_leader: false dest_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" is_pre_election: true
I20250809 19:58:56.886718 30370 raft_consensus.cc:2466] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 [term 2 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 5206f9ca2f294510a3639afe13cba75a in term 2.
I20250809 19:58:56.887068 30369 raft_consensus.cc:2466] T b24467c3debc41148d89819b96bbd341 P ba551624a9fc4e14a2ffdc1d5a5e1175 [term 2 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 5206f9ca2f294510a3639afe13cba75a in term 2.
I20250809 19:58:56.887948 30156 leader_election.cc:304] T 06a690aa4a264e50a47bcb3f31a0fdde P 5206f9ca2f294510a3639afe13cba75a [CANDIDATE]: Term 3 pre-election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 5206f9ca2f294510a3639afe13cba75a, ba551624a9fc4e14a2ffdc1d5a5e1175; no voters: 6fa68d85f6924882be8d0d10d5c55b1a
I20250809 19:58:56.888913 30156 leader_election.cc:304] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a [CANDIDATE]: Term 3 pre-election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 5206f9ca2f294510a3639afe13cba75a, ba551624a9fc4e14a2ffdc1d5a5e1175; no voters: 6fa68d85f6924882be8d0d10d5c55b1a
I20250809 19:58:56.889163 30422 raft_consensus.cc:2802] T 06a690aa4a264e50a47bcb3f31a0fdde P 5206f9ca2f294510a3639afe13cba75a [term 2 FOLLOWER]: Leader pre-election won for term 3
I20250809 19:58:56.890935 30422 raft_consensus.cc:491] T 06a690aa4a264e50a47bcb3f31a0fdde P 5206f9ca2f294510a3639afe13cba75a [term 2 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250809 19:58:56.891144 30427 raft_consensus.cc:2802] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a [term 2 FOLLOWER]: Leader pre-election won for term 3
I20250809 19:58:56.891234 30422 raft_consensus.cc:3058] T 06a690aa4a264e50a47bcb3f31a0fdde P 5206f9ca2f294510a3639afe13cba75a [term 2 FOLLOWER]: Advancing to term 3
I20250809 19:58:56.891427 30427 raft_consensus.cc:491] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a [term 2 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250809 19:58:56.891631 30427 raft_consensus.cc:3058] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a [term 2 FOLLOWER]: Advancing to term 3
I20250809 19:58:56.895184 30427 raft_consensus.cc:513] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a [term 3 FOLLOWER]: Starting leader election with config: opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } } peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } attrs { promote: false } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } }
I20250809 19:58:56.896463 30427 leader_election.cc:290] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a [CANDIDATE]: Term 3 election: Requested vote from peers 6fa68d85f6924882be8d0d10d5c55b1a (127.25.124.131:35781), ba551624a9fc4e14a2ffdc1d5a5e1175 (127.25.124.130:46405)
I20250809 19:58:56.897045 30422 raft_consensus.cc:513] T 06a690aa4a264e50a47bcb3f31a0fdde P 5206f9ca2f294510a3639afe13cba75a [term 3 FOLLOWER]: Starting leader election with config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } } peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } attrs { promote: false } }
I20250809 19:58:56.898226 30369 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "b24467c3debc41148d89819b96bbd341" candidate_uuid: "5206f9ca2f294510a3639afe13cba75a" candidate_term: 3 candidate_status { last_received { term: 2 index: 14 } } ignore_live_leader: false dest_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175"
I20250809 19:58:56.898742 30369 raft_consensus.cc:3058] T b24467c3debc41148d89819b96bbd341 P ba551624a9fc4e14a2ffdc1d5a5e1175 [term 2 FOLLOWER]: Advancing to term 3
W20250809 19:58:56.899102 30157 leader_election.cc:336] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a [CANDIDATE]: Term 3 election: RPC error from VoteRequest() call to peer 6fa68d85f6924882be8d0d10d5c55b1a (127.25.124.131:35781): Network error: Client connection negotiation failed: client connection to 127.25.124.131:35781: connect: Connection refused (error 111)
I20250809 19:58:56.899871 30422 leader_election.cc:290] T 06a690aa4a264e50a47bcb3f31a0fdde P 5206f9ca2f294510a3639afe13cba75a [CANDIDATE]: Term 3 election: Requested vote from peers ba551624a9fc4e14a2ffdc1d5a5e1175 (127.25.124.130:46405), 6fa68d85f6924882be8d0d10d5c55b1a (127.25.124.131:35781)
I20250809 19:58:56.900460 30370 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "06a690aa4a264e50a47bcb3f31a0fdde" candidate_uuid: "5206f9ca2f294510a3639afe13cba75a" candidate_term: 3 candidate_status { last_received { term: 2 index: 12 } } ignore_live_leader: false dest_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175"
I20250809 19:58:56.900903 30370 raft_consensus.cc:3058] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 [term 2 FOLLOWER]: Advancing to term 3
W20250809 19:58:56.903982 30157 leader_election.cc:336] T 06a690aa4a264e50a47bcb3f31a0fdde P 5206f9ca2f294510a3639afe13cba75a [CANDIDATE]: Term 3 election: RPC error from VoteRequest() call to peer 6fa68d85f6924882be8d0d10d5c55b1a (127.25.124.131:35781): Network error: Client connection negotiation failed: client connection to 127.25.124.131:35781: connect: Connection refused (error 111)
I20250809 19:58:56.906149 30370 raft_consensus.cc:2466] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 [term 3 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 5206f9ca2f294510a3639afe13cba75a in term 3.
I20250809 19:58:56.906777 30156 leader_election.cc:304] T 06a690aa4a264e50a47bcb3f31a0fdde P 5206f9ca2f294510a3639afe13cba75a [CANDIDATE]: Term 3 election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 5206f9ca2f294510a3639afe13cba75a, ba551624a9fc4e14a2ffdc1d5a5e1175; no voters: 6fa68d85f6924882be8d0d10d5c55b1a
I20250809 19:58:56.907240 30422 raft_consensus.cc:2802] T 06a690aa4a264e50a47bcb3f31a0fdde P 5206f9ca2f294510a3639afe13cba75a [term 3 FOLLOWER]: Leader election won for term 3
I20250809 19:58:56.907262 30369 raft_consensus.cc:2466] T b24467c3debc41148d89819b96bbd341 P ba551624a9fc4e14a2ffdc1d5a5e1175 [term 3 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 5206f9ca2f294510a3639afe13cba75a in term 3.
I20250809 19:58:56.907589 30422 raft_consensus.cc:695] T 06a690aa4a264e50a47bcb3f31a0fdde P 5206f9ca2f294510a3639afe13cba75a [term 3 LEADER]: Becoming Leader. State: Replica: 5206f9ca2f294510a3639afe13cba75a, State: Running, Role: LEADER
I20250809 19:58:56.908103 30156 leader_election.cc:304] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a [CANDIDATE]: Term 3 election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 5206f9ca2f294510a3639afe13cba75a, ba551624a9fc4e14a2ffdc1d5a5e1175; no voters: 6fa68d85f6924882be8d0d10d5c55b1a
I20250809 19:58:56.908241 30422 consensus_queue.cc:237] T 06a690aa4a264e50a47bcb3f31a0fdde P 5206f9ca2f294510a3639afe13cba75a [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 12, Committed index: 12, Last appended: 2.12, Last appended by leader: 12, Current term: 3, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } } peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } attrs { promote: false } }
I20250809 19:58:56.908703 30427 raft_consensus.cc:2802] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a [term 3 FOLLOWER]: Leader election won for term 3
I20250809 19:58:56.910619 30427 raft_consensus.cc:695] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a [term 3 LEADER]: Becoming Leader. State: Replica: 5206f9ca2f294510a3639afe13cba75a, State: Running, Role: LEADER
I20250809 19:58:56.911481 30427 consensus_queue.cc:237] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 14, Committed index: 14, Last appended: 2.14, Last appended by leader: 14, Current term: 3, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } } peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } attrs { promote: false } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } }
I20250809 19:58:56.916993 30080 catalog_manager.cc:5582] T 06a690aa4a264e50a47bcb3f31a0fdde P 5206f9ca2f294510a3639afe13cba75a reported cstate change: term changed from 2 to 3, leader changed from ba551624a9fc4e14a2ffdc1d5a5e1175 (127.25.124.130) to 5206f9ca2f294510a3639afe13cba75a (127.25.124.129). New cstate: current_term: 3 leader_uuid: "5206f9ca2f294510a3639afe13cba75a" committed_config { opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } attrs { promote: false } health_report { overall_health: UNKNOWN } } }
I20250809 19:58:56.929420 30080 catalog_manager.cc:5582] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a reported cstate change: term changed from 2 to 3, leader changed from <none> to 5206f9ca2f294510a3639afe13cba75a (127.25.124.129). New cstate: current_term: 3 leader_uuid: "5206f9ca2f294510a3639afe13cba75a" committed_config { opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } attrs { promote: false } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } health_report { overall_health: HEALTHY } } }
W20250809 19:58:57.028522 30420 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 19:58:57.028908 30420 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250809 19:58:57.029348 30420 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 19:58:57.055559 30420 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250809 19:58:57.056229 30420 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.25.124.131
I20250809 19:58:57.084036 30420 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.25.124.148:46385
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.25.124.131:35781
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.25.124.131
--webserver_port=40843
--tserver_master_addrs=127.25.124.190:34851
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.25.124.131
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 09 Aug 2025 19:43:21 UTC on 5fd53c4cbb9d
build id 7489
TSAN enabled
I20250809 19:58:57.085076 30420 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250809 19:58:57.086387 30420 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250809 19:58:57.096295 30440 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250809 19:58:57.309199 30369 raft_consensus.cc:1273] T 06a690aa4a264e50a47bcb3f31a0fdde P ba551624a9fc4e14a2ffdc1d5a5e1175 [term 3 FOLLOWER]: Refusing update from remote peer 5206f9ca2f294510a3639afe13cba75a: Log matching property violated. Preceding OpId in replica: term: 2 index: 12. Preceding OpId from leader: term: 3 index: 13. (index mismatch)
I20250809 19:58:57.311282 30427 consensus_queue.cc:1035] T 06a690aa4a264e50a47bcb3f31a0fdde P 5206f9ca2f294510a3639afe13cba75a [LEADER]: Connected to new peer: Peer: permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 13, Last known committed idx: 12, Time since last communication: 0.001s
W20250809 19:58:57.349977 30157 consensus_peers.cc:489] T 06a690aa4a264e50a47bcb3f31a0fdde P 5206f9ca2f294510a3639afe13cba75a -> Peer 6fa68d85f6924882be8d0d10d5c55b1a (127.25.124.131:35781): Couldn't send request to peer 6fa68d85f6924882be8d0d10d5c55b1a. Status: Network error: Client connection negotiation failed: client connection to 127.25.124.131:35781: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
W20250809 19:58:57.360076 30157 consensus_peers.cc:489] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a -> Peer 6fa68d85f6924882be8d0d10d5c55b1a (127.25.124.131:35781): Couldn't send request to peer 6fa68d85f6924882be8d0d10d5c55b1a. Status: Network error: Client connection negotiation failed: client connection to 127.25.124.131:35781: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20250809 19:58:57.364499 30369 raft_consensus.cc:1273] T b24467c3debc41148d89819b96bbd341 P ba551624a9fc4e14a2ffdc1d5a5e1175 [term 3 FOLLOWER]: Refusing update from remote peer 5206f9ca2f294510a3639afe13cba75a: Log matching property violated. Preceding OpId in replica: term: 2 index: 14. Preceding OpId from leader: term: 3 index: 15. (index mismatch)
I20250809 19:58:57.365587 30449 consensus_queue.cc:1035] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a [LEADER]: Connected to new peer: Peer: permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 15, Last known committed idx: 14, Time since last communication: 0.000s
I20250809 19:58:57.395138 30224 consensus_queue.cc:237] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 15, Committed index: 15, Last appended: 3.15, Last appended by leader: 14, Current term: 3, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 16 OBSOLETE_local: true peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } attrs { promote: false } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } }
I20250809 19:58:57.398032 30369 raft_consensus.cc:1273] T b24467c3debc41148d89819b96bbd341 P ba551624a9fc4e14a2ffdc1d5a5e1175 [term 3 FOLLOWER]: Refusing update from remote peer 5206f9ca2f294510a3639afe13cba75a: Log matching property violated. Preceding OpId in replica: term: 3 index: 15. Preceding OpId from leader: term: 3 index: 16. (index mismatch)
I20250809 19:58:57.398830 30449 consensus_queue.cc:1035] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a [LEADER]: Connected to new peer: Peer: permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 16, Last known committed idx: 15, Time since last communication: 0.000s
I20250809 19:58:57.402690 30428 raft_consensus.cc:2953] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a [term 3 LEADER]: Committing config change with OpId 3.16: config changed from index 14 to 16, VOTER 6fa68d85f6924882be8d0d10d5c55b1a (127.25.124.131) evicted. New config: { opid_index: 16 OBSOLETE_local: true peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } attrs { promote: false } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } } }
I20250809 19:58:57.403926 30369 raft_consensus.cc:2953] T b24467c3debc41148d89819b96bbd341 P ba551624a9fc4e14a2ffdc1d5a5e1175 [term 3 FOLLOWER]: Committing config change with OpId 3.16: config changed from index 14 to 16, VOTER 6fa68d85f6924882be8d0d10d5c55b1a (127.25.124.131) evicted. New config: { opid_index: 16 OBSOLETE_local: true peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } attrs { promote: false } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } } }
I20250809 19:58:57.415062 30080 catalog_manager.cc:5582] T b24467c3debc41148d89819b96bbd341 P ba551624a9fc4e14a2ffdc1d5a5e1175 reported cstate change: config changed from index 14 to 16, VOTER 6fa68d85f6924882be8d0d10d5c55b1a (127.25.124.131) evicted. New cstate: current_term: 3 leader_uuid: "5206f9ca2f294510a3639afe13cba75a" committed_config { opid_index: 16 OBSOLETE_local: true peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } attrs { promote: false } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } } }
I20250809 19:58:57.417094 30065 catalog_manager.cc:5095] ChangeConfig:REMOVE_PEER RPC for tablet b24467c3debc41148d89819b96bbd341 with cas_config_opid_index 14: ChangeConfig:REMOVE_PEER succeeded (attempt 1)
W20250809 19:58:57.424150 30080 catalog_manager.cc:5774] Failed to send DeleteTablet RPC for tablet b24467c3debc41148d89819b96bbd341 on TS 6fa68d85f6924882be8d0d10d5c55b1a: Not found: failed to reset TS proxy: Could not find TS for UUID 6fa68d85f6924882be8d0d10d5c55b1a
I20250809 19:58:57.428249 30224 consensus_queue.cc:237] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 16, Committed index: 16, Last appended: 3.16, Last appended by leader: 14, Current term: 3, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: 17 OBSOLETE_local: true peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } }
I20250809 19:58:57.430470 30422 raft_consensus.cc:2953] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a [term 3 LEADER]: Committing config change with OpId 3.17: config changed from index 16 to 17, VOTER ba551624a9fc4e14a2ffdc1d5a5e1175 (127.25.124.130) evicted. New config: { opid_index: 17 OBSOLETE_local: true peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } } }
I20250809 19:58:57.435009 30065 catalog_manager.cc:5095] ChangeConfig:REMOVE_PEER RPC for tablet b24467c3debc41148d89819b96bbd341 with cas_config_opid_index 16: ChangeConfig:REMOVE_PEER succeeded (attempt 1)
I20250809 19:58:57.437431 30080 catalog_manager.cc:5582] T b24467c3debc41148d89819b96bbd341 P 5206f9ca2f294510a3639afe13cba75a reported cstate change: config changed from index 16 to 17, VOTER ba551624a9fc4e14a2ffdc1d5a5e1175 (127.25.124.130) evicted. New cstate: current_term: 3 leader_uuid: "5206f9ca2f294510a3639afe13cba75a" committed_config { opid_index: 17 OBSOLETE_local: true peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } health_report { overall_health: HEALTHY } } }
I20250809 19:58:57.451815 30350 tablet_service.cc:1515] Processing DeleteTablet for tablet b24467c3debc41148d89819b96bbd341 with delete_type TABLET_DATA_TOMBSTONED (TS ba551624a9fc4e14a2ffdc1d5a5e1175 not found in new config with opid_index 17) from {username='slave'} at 127.0.0.1:38124
I20250809 19:58:57.455842 30454 tablet_replica.cc:331] stopping tablet replica
I20250809 19:58:57.456377 30454 raft_consensus.cc:2241] T b24467c3debc41148d89819b96bbd341 P ba551624a9fc4e14a2ffdc1d5a5e1175 [term 3 FOLLOWER]: Raft consensus shutting down.
I20250809 19:58:57.456866 30454 raft_consensus.cc:2270] T b24467c3debc41148d89819b96bbd341 P ba551624a9fc4e14a2ffdc1d5a5e1175 [term 3 FOLLOWER]: Raft consensus is shut down!
I20250809 19:58:57.459492 30454 ts_tablet_manager.cc:1905] T b24467c3debc41148d89819b96bbd341 P ba551624a9fc4e14a2ffdc1d5a5e1175: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20250809 19:58:57.467690 30454 ts_tablet_manager.cc:1918] T b24467c3debc41148d89819b96bbd341 P ba551624a9fc4e14a2ffdc1d5a5e1175: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 3.16
I20250809 19:58:57.467938 30454 log.cc:1199] T b24467c3debc41148d89819b96bbd341 P ba551624a9fc4e14a2ffdc1d5a5e1175: Deleting WAL directory at /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal/wals/b24467c3debc41148d89819b96bbd341
I20250809 19:58:57.469027 30067 catalog_manager.cc:4928] TS ba551624a9fc4e14a2ffdc1d5a5e1175 (127.25.124.130:46405): tablet b24467c3debc41148d89819b96bbd341 (table TestTable [id=776bea9d5f4a427d8291e03abba3717a]) successfully deleted
W20250809 19:58:57.476863 30066 catalog_manager.cc:4726] Async tablet task DeleteTablet RPC for tablet b24467c3debc41148d89819b96bbd341 on TS 6fa68d85f6924882be8d0d10d5c55b1a failed: Not found: failed to reset TS proxy: Could not find TS for UUID 6fa68d85f6924882be8d0d10d5c55b1a
W20250809 19:58:57.096844 30441 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:58:58.200840 30443 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:58:58.202600 30442 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1102 milliseconds
I20250809 19:58:58.202674 30420 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250809 19:58:58.203712 30420 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250809 19:58:58.205965 30420 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250809 19:58:58.207306 30420 hybrid_clock.cc:648] HybridClock initialized: now 1754769538207264 us; error 41 us; skew 500 ppm
I20250809 19:58:58.207998 30420 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250809 19:58:58.213337 30420 webserver.cc:489] Webserver started at http://127.25.124.131:40843/ using document root <none> and password file <none>
I20250809 19:58:58.214109 30420 fs_manager.cc:362] Metadata directory not provided
I20250809 19:58:58.214273 30420 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250809 19:58:58.220778 30420 fs_manager.cc:714] Time spent opening directory manager: real 0.004s user 0.005s sys 0.000s
I20250809 19:58:58.224711 30461 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250809 19:58:58.225575 30420 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.004s sys 0.000s
I20250809 19:58:58.225880 30420 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "6fa68d85f6924882be8d0d10d5c55b1a"
format_stamp: "Formatted at 2025-08-09 19:58:31 on dist-test-slave-xzln"
I20250809 19:58:58.227562 30420 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250809 19:58:58.274550 30420 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250809 19:58:58.275786 30420 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250809 19:58:58.276144 30420 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250809 19:58:58.278198 30420 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250809 19:58:58.283074 30468 ts_tablet_manager.cc:542] Loading tablet metadata (0/2 complete)
I20250809 19:58:58.292522 30420 ts_tablet_manager.cc:579] Loaded tablet metadata (2 total tablets, 2 live tablets)
I20250809 19:58:58.292740 30420 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.011s user 0.002s sys 0.000s
I20250809 19:58:58.292965 30420 ts_tablet_manager.cc:594] Registering tablets (0/2 complete)
I20250809 19:58:58.297087 30468 tablet_bootstrap.cc:492] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a: Bootstrap starting.
I20250809 19:58:58.299253 30420 ts_tablet_manager.cc:610] Registered 2 tablets
I20250809 19:58:58.299437 30420 ts_tablet_manager.cc:589] Time spent register tablets: real 0.006s user 0.007s sys 0.000s
I20250809 19:58:58.342754 30468 log.cc:826] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a: Log is configured to *not* fsync() on all Append() calls
I20250809 19:58:58.439015 30420 rpc_server.cc:307] RPC server started. Bound to: 127.25.124.131:35781
I20250809 19:58:58.439122 30575 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.25.124.131:35781 every 8 connection(s)
I20250809 19:58:58.441942 30420 server_base.cc:1179] Dumped server information to /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250809 19:58:58.446007 26098 external_mini_cluster.cc:1428] Started /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu as pid 30420
I20250809 19:58:58.485838 30576 heartbeater.cc:344] Connected to a master server at 127.25.124.190:34851
I20250809 19:58:58.486236 30576 heartbeater.cc:461] Registering TS with master...
I20250809 19:58:58.487275 30576 heartbeater.cc:507] Master 127.25.124.190:34851 requested a full tablet report, sending...
I20250809 19:58:58.490984 30080 ts_manager.cc:194] Registered new tserver with Master: 6fa68d85f6924882be8d0d10d5c55b1a (127.25.124.131:35781)
I20250809 19:58:58.492872 30468 tablet_bootstrap.cc:492] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a: Bootstrap replayed 1/1 log segments. Stats: ops{read=14 overwritten=0 applied=14 ignored=0} inserts{seen=400 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250809 19:58:58.493779 30468 tablet_bootstrap.cc:492] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a: Bootstrap complete.
I20250809 19:58:58.495289 30468 ts_tablet_manager.cc:1397] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a: Time spent bootstrapping tablet: real 0.198s user 0.161s sys 0.032s
I20250809 19:58:58.496802 30080 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.25.124.131:60289
I20250809 19:58:58.500113 26098 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250809 19:58:58.504483 26098 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
W20250809 19:58:58.507596 26098 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet b24467c3debc41148d89819b96bbd341: tablet_id: "b24467c3debc41148d89819b96bbd341" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
I20250809 19:58:58.511060 30468 raft_consensus.cc:357] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } } peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } attrs { promote: false } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } }
I20250809 19:58:58.512717 30468 raft_consensus.cc:738] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: 6fa68d85f6924882be8d0d10d5c55b1a, State: Initialized, Role: FOLLOWER
I20250809 19:58:58.513275 30468 consensus_queue.cc:260] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 14, Last appended: 2.14, Last appended by leader: 14, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } } peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } attrs { promote: false } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } }
I20250809 19:58:58.515230 30576 heartbeater.cc:499] Master 127.25.124.190:34851 was elected leader, sending a full tablet report...
I20250809 19:58:58.515959 30468 ts_tablet_manager.cc:1428] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a: Time spent starting tablet: real 0.020s user 0.017s sys 0.000s
I20250809 19:58:58.516580 30468 tablet_bootstrap.cc:492] T 06a690aa4a264e50a47bcb3f31a0fdde P 6fa68d85f6924882be8d0d10d5c55b1a: Bootstrap starting.
I20250809 19:58:58.591435 30468 tablet_bootstrap.cc:492] T 06a690aa4a264e50a47bcb3f31a0fdde P 6fa68d85f6924882be8d0d10d5c55b1a: Bootstrap replayed 1/1 log segments. Stats: ops{read=12 overwritten=0 applied=12 ignored=0} inserts{seen=300 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250809 19:58:58.591982 30468 tablet_bootstrap.cc:492] T 06a690aa4a264e50a47bcb3f31a0fdde P 6fa68d85f6924882be8d0d10d5c55b1a: Bootstrap complete.
I20250809 19:58:58.592926 30468 ts_tablet_manager.cc:1397] T 06a690aa4a264e50a47bcb3f31a0fdde P 6fa68d85f6924882be8d0d10d5c55b1a: Time spent bootstrapping tablet: real 0.076s user 0.067s sys 0.007s
I20250809 19:58:58.594180 30468 raft_consensus.cc:357] T 06a690aa4a264e50a47bcb3f31a0fdde P 6fa68d85f6924882be8d0d10d5c55b1a [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } } peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } attrs { promote: false } }
I20250809 19:58:58.594532 30468 raft_consensus.cc:738] T 06a690aa4a264e50a47bcb3f31a0fdde P 6fa68d85f6924882be8d0d10d5c55b1a [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: 6fa68d85f6924882be8d0d10d5c55b1a, State: Initialized, Role: FOLLOWER
I20250809 19:58:58.594941 30468 consensus_queue.cc:260] T 06a690aa4a264e50a47bcb3f31a0fdde P 6fa68d85f6924882be8d0d10d5c55b1a [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 12, Last appended: 2.12, Last appended by leader: 12, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "ba551624a9fc4e14a2ffdc1d5a5e1175" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46405 } } peers { permanent_uuid: "5206f9ca2f294510a3639afe13cba75a" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45679 } attrs { promote: false } } peers { permanent_uuid: "6fa68d85f6924882be8d0d10d5c55b1a" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35781 } attrs { promote: false } }
I20250809 19:58:58.595983 30468 ts_tablet_manager.cc:1428] T 06a690aa4a264e50a47bcb3f31a0fdde P 6fa68d85f6924882be8d0d10d5c55b1a: Time spent starting tablet: real 0.003s user 0.004s sys 0.000s
I20250809 19:58:58.655253 30511 tablet_service.cc:1515] Processing DeleteTablet for tablet b24467c3debc41148d89819b96bbd341 with delete_type TABLET_DATA_TOMBSTONED (TS 6fa68d85f6924882be8d0d10d5c55b1a not found in new config with opid_index 16) from {username='slave'} at 127.0.0.1:52606
I20250809 19:58:58.657711 30587 tablet_replica.cc:331] stopping tablet replica
I20250809 19:58:58.658232 30587 raft_consensus.cc:2241] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a [term 2 FOLLOWER]: Raft consensus shutting down.
I20250809 19:58:58.658581 30587 raft_consensus.cc:2270] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a [term 2 FOLLOWER]: Raft consensus is shut down!
I20250809 19:58:58.660566 30587 ts_tablet_manager.cc:1905] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20250809 19:58:58.670219 30587 ts_tablet_manager.cc:1918] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 2.14
I20250809 19:58:58.670471 30587 log.cc:1199] T b24467c3debc41148d89819b96bbd341 P 6fa68d85f6924882be8d0d10d5c55b1a: Deleting WAL directory at /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal/wals/b24467c3debc41148d89819b96bbd341
I20250809 19:58:58.671705 30068 catalog_manager.cc:4928] TS 6fa68d85f6924882be8d0d10d5c55b1a (127.25.124.131:35781): tablet b24467c3debc41148d89819b96bbd341 (table TestTable [id=776bea9d5f4a427d8291e03abba3717a]) successfully deleted
I20250809 19:58:59.097045 30531 raft_consensus.cc:3058] T 06a690aa4a264e50a47bcb3f31a0fdde P 6fa68d85f6924882be8d0d10d5c55b1a [term 2 FOLLOWER]: Advancing to term 3
W20250809 19:58:59.510643 26098 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet b24467c3debc41148d89819b96bbd341: tablet_id: "b24467c3debc41148d89819b96bbd341" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250809 19:59:00.513355 26098 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet b24467c3debc41148d89819b96bbd341: tablet_id: "b24467c3debc41148d89819b96bbd341" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250809 19:59:01.516637 26098 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet b24467c3debc41148d89819b96bbd341: tablet_id: "b24467c3debc41148d89819b96bbd341" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250809 19:59:02.519562 26098 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet b24467c3debc41148d89819b96bbd341: tablet_id: "b24467c3debc41148d89819b96bbd341" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250809 19:59:03.522264 26098 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet b24467c3debc41148d89819b96bbd341: tablet_id: "b24467c3debc41148d89819b96bbd341" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250809 19:59:04.264982 30572 debug-util.cc:398] Leaking SignalData structure 0x7b08000bf540 after lost signal to thread 30435
W20250809 19:59:04.265689 30572 debug-util.cc:398] Leaking SignalData structure 0x7b08000b5120 after lost signal to thread 30575
W20250809 19:59:04.525233 26098 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet b24467c3debc41148d89819b96bbd341: tablet_id: "b24467c3debc41148d89819b96bbd341" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250809 19:59:05.528656 26098 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet b24467c3debc41148d89819b96bbd341: tablet_id: "b24467c3debc41148d89819b96bbd341" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250809 19:59:06.531909 26098 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet b24467c3debc41148d89819b96bbd341: tablet_id: "b24467c3debc41148d89819b96bbd341" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250809 19:59:07.534811 26098 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet b24467c3debc41148d89819b96bbd341: tablet_id: "b24467c3debc41148d89819b96bbd341" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250809 19:59:08.537837 26098 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet b24467c3debc41148d89819b96bbd341: tablet_id: "b24467c3debc41148d89819b96bbd341" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250809 19:59:09.540545 26098 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet b24467c3debc41148d89819b96bbd341: tablet_id: "b24467c3debc41148d89819b96bbd341" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250809 19:59:10.543311 26098 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet b24467c3debc41148d89819b96bbd341: tablet_id: "b24467c3debc41148d89819b96bbd341" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250809 19:59:11.546284 26098 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet b24467c3debc41148d89819b96bbd341: tablet_id: "b24467c3debc41148d89819b96bbd341" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250809 19:59:12.549124 26098 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet b24467c3debc41148d89819b96bbd341: tablet_id: "b24467c3debc41148d89819b96bbd341" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250809 19:59:13.552206 26098 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet b24467c3debc41148d89819b96bbd341: tablet_id: "b24467c3debc41148d89819b96bbd341" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250809 19:59:14.554896 26098 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet b24467c3debc41148d89819b96bbd341: tablet_id: "b24467c3debc41148d89819b96bbd341" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250809 19:59:15.558197 26098 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet b24467c3debc41148d89819b96bbd341: tablet_id: "b24467c3debc41148d89819b96bbd341" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250809 19:59:16.551119 30572 debug-util.cc:398] Leaking SignalData structure 0x7b08000bf980 after lost signal to thread 30435
W20250809 19:59:16.551893 30572 debug-util.cc:398] Leaking SignalData structure 0x7b08000ca4a0 after lost signal to thread 30575
W20250809 19:59:16.561331 26098 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet b24467c3debc41148d89819b96bbd341: tablet_id: "b24467c3debc41148d89819b96bbd341" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250809 19:59:17.564546 26098 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet b24467c3debc41148d89819b96bbd341: tablet_id: "b24467c3debc41148d89819b96bbd341" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
/home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/tools/kudu-admin-test.cc:3914: Failure
Failed
Bad status: Not found: not all replicas of tablets comprising table TestTable are registered yet
I20250809 19:59:18.566931 26098 external_mini_cluster.cc:1658] Killing /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu with pid 30118
I20250809 19:59:18.593616 26098 external_mini_cluster.cc:1658] Killing /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu with pid 30276
I20250809 19:59:18.614609 26098 external_mini_cluster.cc:1658] Killing /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu with pid 30420
I20250809 19:59:18.638070 26098 external_mini_cluster.cc:1658] Killing /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu with pid 30048
2025-08-09T19:59:18Z chronyd exiting
I20250809 19:59:18.680265 26098 test_util.cc:183] -----------------------------------------------
I20250809 19:59:18.680431 26098 test_util.cc:184] Had failures, leaving test files at /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754769444608376-26098-0
[ FAILED ] AdminCliTest.TestRebuildTables (53752 ms)
[----------] 5 tests from AdminCliTest (114016 ms total)
[----------] 1 test from EnableKudu1097AndDownTS/MoveTabletParamTest
[ RUN ] EnableKudu1097AndDownTS/MoveTabletParamTest.Test/4
I20250809 19:59:18.683562 26098 test_util.cc:276] Using random seed: 540360599
I20250809 19:59:18.687042 26098 ts_itest-base.cc:115] Starting cluster with:
I20250809 19:59:18.687173 26098 ts_itest-base.cc:116] --------------
I20250809 19:59:18.687341 26098 ts_itest-base.cc:117] 5 tablet servers
I20250809 19:59:18.687460 26098 ts_itest-base.cc:118] 3 replicas per TS
I20250809 19:59:18.687580 26098 ts_itest-base.cc:119] --------------
2025-08-09T19:59:18Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-08-09T19:59:18Z Disabled control of system clock
I20250809 19:59:18.720630 26098 external_mini_cluster.cc:1366] Running /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
/tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.25.124.190:39147
--webserver_interface=127.25.124.190
--webserver_port=0
--builtin_ntp_servers=127.25.124.148:40731
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.25.124.190:39147
--raft_prepare_replacement_before_eviction=true with env {}
W20250809 19:59:18.967941 30611 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 19:59:18.968420 30611 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250809 19:59:18.968801 30611 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 19:59:18.993664 30611 flags.cc:425] Enabled experimental flag: --raft_prepare_replacement_before_eviction=true
W20250809 19:59:18.993993 30611 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250809 19:59:18.994220 30611 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250809 19:59:18.994432 30611 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250809 19:59:18.994637 30611 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250809 19:59:19.022472 30611 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.25.124.148:40731
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.25.124.190:39147
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.25.124.190:39147
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.25.124.190
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 09 Aug 2025 19:43:21 UTC on 5fd53c4cbb9d
build id 7489
TSAN enabled
I20250809 19:59:19.023591 30611 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250809 19:59:19.024936 30611 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250809 19:59:19.033450 30617 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:59:19.034597 30618 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:59:19.037604 30620 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:59:20.043548 30619 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection time-out
I20250809 19:59:20.043677 30611 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250809 19:59:20.047168 30611 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250809 19:59:20.049983 30611 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250809 19:59:20.051309 30611 hybrid_clock.cc:648] HybridClock initialized: now 1754769560051275 us; error 55 us; skew 500 ppm
I20250809 19:59:20.052124 30611 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250809 19:59:20.061569 30611 webserver.cc:489] Webserver started at http://127.25.124.190:42875/ using document root <none> and password file <none>
I20250809 19:59:20.062323 30611 fs_manager.cc:362] Metadata directory not provided
I20250809 19:59:20.062482 30611 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250809 19:59:20.062848 30611 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250809 19:59:20.066536 30611 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "df3bc247ebdc49c08138664311af57de"
format_stamp: "Formatted at 2025-08-09 19:59:20 on dist-test-slave-xzln"
I20250809 19:59:20.067457 30611 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "df3bc247ebdc49c08138664311af57de"
format_stamp: "Formatted at 2025-08-09 19:59:20 on dist-test-slave-xzln"
I20250809 19:59:20.073277 30611 fs_manager.cc:696] Time spent creating directory manager: real 0.005s user 0.005s sys 0.000s
I20250809 19:59:20.077725 30627 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250809 19:59:20.078549 30611 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.002s sys 0.000s
I20250809 19:59:20.078809 30611 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
uuid: "df3bc247ebdc49c08138664311af57de"
format_stamp: "Formatted at 2025-08-09 19:59:20 on dist-test-slave-xzln"
I20250809 19:59:20.079075 30611 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250809 19:59:20.127398 30611 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250809 19:59:20.128815 30611 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250809 19:59:20.129276 30611 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250809 19:59:20.197202 30611 rpc_server.cc:307] RPC server started. Bound to: 127.25.124.190:39147
I20250809 19:59:20.197264 30678 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.25.124.190:39147 every 8 connection(s)
I20250809 19:59:20.199457 30611 server_base.cc:1179] Dumped server information to /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250809 19:59:20.203647 30679 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250809 19:59:20.207242 26098 external_mini_cluster.cc:1428] Started /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu as pid 30611
I20250809 19:59:20.207695 26098 external_mini_cluster.cc:1442] Reading /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250809 19:59:20.223941 30679 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P df3bc247ebdc49c08138664311af57de: Bootstrap starting.
I20250809 19:59:20.229172 30679 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P df3bc247ebdc49c08138664311af57de: Neither blocks nor log segments found. Creating new log.
I20250809 19:59:20.230479 30679 log.cc:826] T 00000000000000000000000000000000 P df3bc247ebdc49c08138664311af57de: Log is configured to *not* fsync() on all Append() calls
I20250809 19:59:20.233867 30679 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P df3bc247ebdc49c08138664311af57de: No bootstrap required, opened a new log
I20250809 19:59:20.248426 30679 raft_consensus.cc:357] T 00000000000000000000000000000000 P df3bc247ebdc49c08138664311af57de [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "df3bc247ebdc49c08138664311af57de" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 39147 } }
I20250809 19:59:20.248927 30679 raft_consensus.cc:383] T 00000000000000000000000000000000 P df3bc247ebdc49c08138664311af57de [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250809 19:59:20.249126 30679 raft_consensus.cc:738] T 00000000000000000000000000000000 P df3bc247ebdc49c08138664311af57de [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: df3bc247ebdc49c08138664311af57de, State: Initialized, Role: FOLLOWER
I20250809 19:59:20.249660 30679 consensus_queue.cc:260] T 00000000000000000000000000000000 P df3bc247ebdc49c08138664311af57de [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "df3bc247ebdc49c08138664311af57de" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 39147 } }
I20250809 19:59:20.250088 30679 raft_consensus.cc:397] T 00000000000000000000000000000000 P df3bc247ebdc49c08138664311af57de [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250809 19:59:20.250303 30679 raft_consensus.cc:491] T 00000000000000000000000000000000 P df3bc247ebdc49c08138664311af57de [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250809 19:59:20.250555 30679 raft_consensus.cc:3058] T 00000000000000000000000000000000 P df3bc247ebdc49c08138664311af57de [term 0 FOLLOWER]: Advancing to term 1
I20250809 19:59:20.253942 30679 raft_consensus.cc:513] T 00000000000000000000000000000000 P df3bc247ebdc49c08138664311af57de [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "df3bc247ebdc49c08138664311af57de" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 39147 } }
I20250809 19:59:20.254644 30679 leader_election.cc:304] T 00000000000000000000000000000000 P df3bc247ebdc49c08138664311af57de [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: df3bc247ebdc49c08138664311af57de; no voters:
I20250809 19:59:20.256069 30679 leader_election.cc:290] T 00000000000000000000000000000000 P df3bc247ebdc49c08138664311af57de [CANDIDATE]: Term 1 election: Requested vote from peers
I20250809 19:59:20.256752 30684 raft_consensus.cc:2802] T 00000000000000000000000000000000 P df3bc247ebdc49c08138664311af57de [term 1 FOLLOWER]: Leader election won for term 1
I20250809 19:59:20.258574 30684 raft_consensus.cc:695] T 00000000000000000000000000000000 P df3bc247ebdc49c08138664311af57de [term 1 LEADER]: Becoming Leader. State: Replica: df3bc247ebdc49c08138664311af57de, State: Running, Role: LEADER
I20250809 19:59:20.259332 30679 sys_catalog.cc:564] T 00000000000000000000000000000000 P df3bc247ebdc49c08138664311af57de [sys.catalog]: configured and running, proceeding with master startup.
I20250809 19:59:20.259145 30684 consensus_queue.cc:237] T 00000000000000000000000000000000 P df3bc247ebdc49c08138664311af57de [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "df3bc247ebdc49c08138664311af57de" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 39147 } }
I20250809 19:59:20.269076 30685 sys_catalog.cc:455] T 00000000000000000000000000000000 P df3bc247ebdc49c08138664311af57de [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "df3bc247ebdc49c08138664311af57de" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "df3bc247ebdc49c08138664311af57de" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 39147 } } }
I20250809 19:59:20.270004 30686 sys_catalog.cc:455] T 00000000000000000000000000000000 P df3bc247ebdc49c08138664311af57de [sys.catalog]: SysCatalogTable state changed. Reason: New leader df3bc247ebdc49c08138664311af57de. Latest consensus state: current_term: 1 leader_uuid: "df3bc247ebdc49c08138664311af57de" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "df3bc247ebdc49c08138664311af57de" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 39147 } } }
I20250809 19:59:20.270364 30685 sys_catalog.cc:458] T 00000000000000000000000000000000 P df3bc247ebdc49c08138664311af57de [sys.catalog]: This master's current role is: LEADER
I20250809 19:59:20.270496 30686 sys_catalog.cc:458] T 00000000000000000000000000000000 P df3bc247ebdc49c08138664311af57de [sys.catalog]: This master's current role is: LEADER
I20250809 19:59:20.273052 30695 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250809 19:59:20.282277 30695 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250809 19:59:20.294857 30695 catalog_manager.cc:1349] Generated new cluster ID: ac34a1a504414ae182043e17e36df2da
I20250809 19:59:20.295090 30695 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250809 19:59:20.313165 30695 catalog_manager.cc:1372] Generated new certificate authority record
I20250809 19:59:20.314685 30695 catalog_manager.cc:1506] Loading token signing keys...
I20250809 19:59:20.332134 30695 catalog_manager.cc:5955] T 00000000000000000000000000000000 P df3bc247ebdc49c08138664311af57de: Generated new TSK 0
I20250809 19:59:20.332859 30695 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250809 19:59:20.352501 26098 external_mini_cluster.cc:1366] Running /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
/tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.25.124.129:0
--local_ip_for_outbound_sockets=127.25.124.129
--webserver_interface=127.25.124.129
--webserver_port=0
--tserver_master_addrs=127.25.124.190:39147
--builtin_ntp_servers=127.25.124.148:40731
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--raft_prepare_replacement_before_eviction=true with env {}
W20250809 19:59:20.606127 30703 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 19:59:20.606530 30703 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250809 19:59:20.606997 30703 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 19:59:20.632030 30703 flags.cc:425] Enabled experimental flag: --raft_prepare_replacement_before_eviction=true
W20250809 19:59:20.632342 30703 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250809 19:59:20.632998 30703 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.25.124.129
I20250809 19:59:20.660445 30703 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.25.124.148:40731
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.25.124.129:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.25.124.129
--webserver_port=0
--tserver_master_addrs=127.25.124.190:39147
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.25.124.129
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 09 Aug 2025 19:43:21 UTC on 5fd53c4cbb9d
build id 7489
TSAN enabled
I20250809 19:59:20.661525 30703 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250809 19:59:20.662992 30703 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250809 19:59:20.673980 30709 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:59:22.076411 30708 debug-util.cc:398] Leaking SignalData structure 0x7b0800034ea0 after lost signal to thread 30703
W20250809 19:59:22.180696 30708 kernel_stack_watchdog.cc:198] Thread 30703 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:642 for 399ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250809 19:59:20.674577 30710 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:59:22.186019 30703 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.511s user 0.000s sys 0.005s
W20250809 19:59:22.186446 30703 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.511s user 0.000s sys 0.006s
W20250809 19:59:22.187835 30712 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250809 19:59:22.187994 30703 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
W20250809 19:59:22.188061 30711 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1511 milliseconds
I20250809 19:59:22.189220 30703 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250809 19:59:22.191445 30703 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250809 19:59:22.192837 30703 hybrid_clock.cc:648] HybridClock initialized: now 1754769562192796 us; error 46 us; skew 500 ppm
I20250809 19:59:22.193490 30703 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250809 19:59:22.199252 30703 webserver.cc:489] Webserver started at http://127.25.124.129:37291/ using document root <none> and password file <none>
I20250809 19:59:22.200187 30703 fs_manager.cc:362] Metadata directory not provided
I20250809 19:59:22.200376 30703 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250809 19:59:22.200744 30703 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250809 19:59:22.204413 30703 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "77851c911f2a46ea8914cc2f64878c61"
format_stamp: "Formatted at 2025-08-09 19:59:22 on dist-test-slave-xzln"
I20250809 19:59:22.205331 30703 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "77851c911f2a46ea8914cc2f64878c61"
format_stamp: "Formatted at 2025-08-09 19:59:22 on dist-test-slave-xzln"
I20250809 19:59:22.211759 30703 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.007s sys 0.001s
I20250809 19:59:22.216817 30719 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250809 19:59:22.217667 30703 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.002s sys 0.003s
I20250809 19:59:22.217916 30703 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "77851c911f2a46ea8914cc2f64878c61"
format_stamp: "Formatted at 2025-08-09 19:59:22 on dist-test-slave-xzln"
I20250809 19:59:22.218160 30703 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250809 19:59:22.261421 30703 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250809 19:59:22.262568 30703 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250809 19:59:22.262933 30703 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250809 19:59:22.265551 30703 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250809 19:59:22.269392 30703 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250809 19:59:22.269577 30703 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250809 19:59:22.269786 30703 ts_tablet_manager.cc:610] Registered 0 tablets
I20250809 19:59:22.269973 30703 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.001s sys 0.000s
I20250809 19:59:22.423728 30703 rpc_server.cc:307] RPC server started. Bound to: 127.25.124.129:41883
I20250809 19:59:22.423820 30831 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.25.124.129:41883 every 8 connection(s)
I20250809 19:59:22.425993 30703 server_base.cc:1179] Dumped server information to /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250809 19:59:22.433804 26098 external_mini_cluster.cc:1428] Started /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu as pid 30703
I20250809 19:59:22.434249 26098 external_mini_cluster.cc:1442] Reading /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250809 19:59:22.440821 26098 external_mini_cluster.cc:1366] Running /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
/tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.25.124.130:0
--local_ip_for_outbound_sockets=127.25.124.130
--webserver_interface=127.25.124.130
--webserver_port=0
--tserver_master_addrs=127.25.124.190:39147
--builtin_ntp_servers=127.25.124.148:40731
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--raft_prepare_replacement_before_eviction=true with env {}
I20250809 19:59:22.453023 30832 heartbeater.cc:344] Connected to a master server at 127.25.124.190:39147
I20250809 19:59:22.453472 30832 heartbeater.cc:461] Registering TS with master...
I20250809 19:59:22.454516 30832 heartbeater.cc:507] Master 127.25.124.190:39147 requested a full tablet report, sending...
I20250809 19:59:22.457324 30644 ts_manager.cc:194] Registered new tserver with Master: 77851c911f2a46ea8914cc2f64878c61 (127.25.124.129:41883)
I20250809 19:59:22.460093 30644 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.25.124.129:50461
W20250809 19:59:22.707175 30836 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 19:59:22.707659 30836 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250809 19:59:22.708105 30836 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 19:59:22.733669 30836 flags.cc:425] Enabled experimental flag: --raft_prepare_replacement_before_eviction=true
W20250809 19:59:22.733992 30836 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250809 19:59:22.734633 30836 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.25.124.130
I20250809 19:59:22.762526 30836 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.25.124.148:40731
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.25.124.130:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.25.124.130
--webserver_port=0
--tserver_master_addrs=127.25.124.190:39147
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.25.124.130
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 09 Aug 2025 19:43:21 UTC on 5fd53c4cbb9d
build id 7489
TSAN enabled
I20250809 19:59:22.763665 30836 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250809 19:59:22.765131 30836 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250809 19:59:22.776072 30842 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250809 19:59:23.463627 30832 heartbeater.cc:499] Master 127.25.124.190:39147 was elected leader, sending a full tablet report...
W20250809 19:59:24.178807 30841 debug-util.cc:398] Leaking SignalData structure 0x7b0800034ea0 after lost signal to thread 30836
W20250809 19:59:24.264106 30836 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.488s user 0.542s sys 0.937s
W20250809 19:59:24.264528 30836 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.489s user 0.542s sys 0.937s
W20250809 19:59:22.776397 30843 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:59:24.266134 30845 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:59:24.268628 30844 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1489 milliseconds
I20250809 19:59:24.268682 30836 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250809 19:59:24.269727 30836 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250809 19:59:24.271528 30836 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250809 19:59:24.272841 30836 hybrid_clock.cc:648] HybridClock initialized: now 1754769564272790 us; error 40 us; skew 500 ppm
I20250809 19:59:24.273495 30836 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250809 19:59:24.278561 30836 webserver.cc:489] Webserver started at http://127.25.124.130:35079/ using document root <none> and password file <none>
I20250809 19:59:24.279379 30836 fs_manager.cc:362] Metadata directory not provided
I20250809 19:59:24.279546 30836 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250809 19:59:24.279882 30836 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250809 19:59:24.283458 30836 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data/instance:
uuid: "28af70b518b34b289177e053e9597e48"
format_stamp: "Formatted at 2025-08-09 19:59:24 on dist-test-slave-xzln"
I20250809 19:59:24.284322 30836 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal/instance:
uuid: "28af70b518b34b289177e053e9597e48"
format_stamp: "Formatted at 2025-08-09 19:59:24 on dist-test-slave-xzln"
I20250809 19:59:24.290089 30836 fs_manager.cc:696] Time spent creating directory manager: real 0.005s user 0.004s sys 0.000s
I20250809 19:59:24.294697 30852 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250809 19:59:24.295539 30836 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.003s sys 0.000s
I20250809 19:59:24.295794 30836 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "28af70b518b34b289177e053e9597e48"
format_stamp: "Formatted at 2025-08-09 19:59:24 on dist-test-slave-xzln"
I20250809 19:59:24.296052 30836 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250809 19:59:24.341163 30836 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250809 19:59:24.342496 30836 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250809 19:59:24.342839 30836 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250809 19:59:24.344857 30836 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250809 19:59:24.348311 30836 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250809 19:59:24.348482 30836 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250809 19:59:24.348645 30836 ts_tablet_manager.cc:610] Registered 0 tablets
I20250809 19:59:24.348759 30836 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250809 19:59:24.464008 30836 rpc_server.cc:307] RPC server started. Bound to: 127.25.124.130:46095
I20250809 19:59:24.464083 30964 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.25.124.130:46095 every 8 connection(s)
I20250809 19:59:24.466181 30836 server_base.cc:1179] Dumped server information to /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250809 19:59:24.467608 26098 external_mini_cluster.cc:1428] Started /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu as pid 30836
I20250809 19:59:24.467942 26098 external_mini_cluster.cc:1442] Reading /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-1/wal/instance
I20250809 19:59:24.475185 26098 external_mini_cluster.cc:1366] Running /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
/tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.25.124.131:0
--local_ip_for_outbound_sockets=127.25.124.131
--webserver_interface=127.25.124.131
--webserver_port=0
--tserver_master_addrs=127.25.124.190:39147
--builtin_ntp_servers=127.25.124.148:40731
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--raft_prepare_replacement_before_eviction=true with env {}
I20250809 19:59:24.484793 30965 heartbeater.cc:344] Connected to a master server at 127.25.124.190:39147
I20250809 19:59:24.485232 30965 heartbeater.cc:461] Registering TS with master...
I20250809 19:59:24.486053 30965 heartbeater.cc:507] Master 127.25.124.190:39147 requested a full tablet report, sending...
I20250809 19:59:24.487800 30644 ts_manager.cc:194] Registered new tserver with Master: 28af70b518b34b289177e053e9597e48 (127.25.124.130:46095)
I20250809 19:59:24.488873 30644 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.25.124.130:33329
W20250809 19:59:24.731619 30969 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 19:59:24.732067 30969 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250809 19:59:24.732528 30969 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 19:59:24.759984 30969 flags.cc:425] Enabled experimental flag: --raft_prepare_replacement_before_eviction=true
W20250809 19:59:24.760336 30969 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250809 19:59:24.761086 30969 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.25.124.131
I20250809 19:59:24.788986 30969 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.25.124.148:40731
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.25.124.131:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.25.124.131
--webserver_port=0
--tserver_master_addrs=127.25.124.190:39147
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.25.124.131
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 09 Aug 2025 19:43:21 UTC on 5fd53c4cbb9d
build id 7489
TSAN enabled
I20250809 19:59:24.790021 30969 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250809 19:59:24.791415 30969 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250809 19:59:24.801259 30975 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250809 19:59:25.491300 30965 heartbeater.cc:499] Master 127.25.124.190:39147 was elected leader, sending a full tablet report...
W20250809 19:59:24.801574 30976 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:59:26.205847 30974 debug-util.cc:398] Leaking SignalData structure 0x7b0800034ea0 after lost signal to thread 30969
W20250809 19:59:26.285384 30974 kernel_stack_watchdog.cc:198] Thread 30969 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:642 for 401ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250809 19:59:26.285840 30969 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.484s user 0.479s sys 1.000s
W20250809 19:59:26.286168 30969 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.485s user 0.479s sys 1.001s
W20250809 19:59:26.287807 30978 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:59:26.289886 30977 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection timed out after 1488 milliseconds
I20250809 19:59:26.289901 30969 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250809 19:59:26.290939 30969 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250809 19:59:26.292671 30969 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250809 19:59:26.294055 30969 hybrid_clock.cc:648] HybridClock initialized: now 1754769566294016 us; error 36 us; skew 500 ppm
I20250809 19:59:26.294791 30969 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250809 19:59:26.299823 30969 webserver.cc:489] Webserver started at http://127.25.124.131:41631/ using document root <none> and password file <none>
I20250809 19:59:26.300591 30969 fs_manager.cc:362] Metadata directory not provided
I20250809 19:59:26.300770 30969 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250809 19:59:26.301143 30969 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250809 19:59:26.304772 30969 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data/instance:
uuid: "9d5c6d70d1244ff4a42e846fce4cb479"
format_stamp: "Formatted at 2025-08-09 19:59:26 on dist-test-slave-xzln"
I20250809 19:59:26.305681 30969 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal/instance:
uuid: "9d5c6d70d1244ff4a42e846fce4cb479"
format_stamp: "Formatted at 2025-08-09 19:59:26 on dist-test-slave-xzln"
I20250809 19:59:26.312953 30969 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.003s sys 0.004s
I20250809 19:59:26.317620 30986 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250809 19:59:26.318431 30969 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.002s sys 0.000s
I20250809 19:59:26.318684 30969 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "9d5c6d70d1244ff4a42e846fce4cb479"
format_stamp: "Formatted at 2025-08-09 19:59:26 on dist-test-slave-xzln"
I20250809 19:59:26.318960 30969 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250809 19:59:26.369519 30969 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250809 19:59:26.370689 30969 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250809 19:59:26.371037 30969 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250809 19:59:26.373119 30969 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250809 19:59:26.376350 30969 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250809 19:59:26.376528 30969 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250809 19:59:26.376727 30969 ts_tablet_manager.cc:610] Registered 0 tablets
I20250809 19:59:26.376876 30969 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250809 19:59:26.497062 30969 rpc_server.cc:307] RPC server started. Bound to: 127.25.124.131:35637
I20250809 19:59:26.497116 31098 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.25.124.131:35637 every 8 connection(s)
I20250809 19:59:26.499552 30969 server_base.cc:1179] Dumped server information to /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250809 19:59:26.501331 26098 external_mini_cluster.cc:1428] Started /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu as pid 30969
I20250809 19:59:26.501957 26098 external_mini_cluster.cc:1442] Reading /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-2/wal/instance
I20250809 19:59:26.508599 26098 external_mini_cluster.cc:1366] Running /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
/tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-3/wal
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-3/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-3/logs
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-3/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.25.124.132:0
--local_ip_for_outbound_sockets=127.25.124.132
--webserver_interface=127.25.124.132
--webserver_port=0
--tserver_master_addrs=127.25.124.190:39147
--builtin_ntp_servers=127.25.124.148:40731
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--raft_prepare_replacement_before_eviction=true with env {}
I20250809 19:59:26.521288 31099 heartbeater.cc:344] Connected to a master server at 127.25.124.190:39147
I20250809 19:59:26.521615 31099 heartbeater.cc:461] Registering TS with master...
I20250809 19:59:26.522428 31099 heartbeater.cc:507] Master 127.25.124.190:39147 requested a full tablet report, sending...
I20250809 19:59:26.524168 30644 ts_manager.cc:194] Registered new tserver with Master: 9d5c6d70d1244ff4a42e846fce4cb479 (127.25.124.131:35637)
I20250809 19:59:26.525167 30644 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.25.124.131:47755
W20250809 19:59:26.758137 31103 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 19:59:26.758536 31103 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250809 19:59:26.758961 31103 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 19:59:26.784412 31103 flags.cc:425] Enabled experimental flag: --raft_prepare_replacement_before_eviction=true
W20250809 19:59:26.784718 31103 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250809 19:59:26.785405 31103 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.25.124.132
I20250809 19:59:26.813321 31103 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.25.124.148:40731
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-3/data
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-3/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.25.124.132:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-3/data/info.pb
--webserver_interface=127.25.124.132
--webserver_port=0
--tserver_master_addrs=127.25.124.190:39147
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.25.124.132
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-3/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 09 Aug 2025 19:43:21 UTC on 5fd53c4cbb9d
build id 7489
TSAN enabled
I20250809 19:59:26.814406 31103 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250809 19:59:26.815831 31103 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250809 19:59:26.825361 31109 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250809 19:59:27.527683 31099 heartbeater.cc:499] Master 127.25.124.190:39147 was elected leader, sending a full tablet report...
W20250809 19:59:26.826814 31110 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:59:28.007664 31112 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:59:28.010228 31111 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1178 milliseconds
W20250809 19:59:28.010242 31103 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.184s user 0.359s sys 0.818s
W20250809 19:59:28.010571 31103 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.184s user 0.361s sys 0.820s
I20250809 19:59:28.010814 31103 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250809 19:59:28.011694 31103 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250809 19:59:28.013558 31103 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250809 19:59:28.014884 31103 hybrid_clock.cc:648] HybridClock initialized: now 1754769568014833 us; error 50 us; skew 500 ppm
I20250809 19:59:28.015622 31103 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250809 19:59:28.021497 31103 webserver.cc:489] Webserver started at http://127.25.124.132:38357/ using document root <none> and password file <none>
I20250809 19:59:28.022325 31103 fs_manager.cc:362] Metadata directory not provided
I20250809 19:59:28.022521 31103 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250809 19:59:28.022949 31103 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250809 19:59:28.026633 31103 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-3/data/instance:
uuid: "3b480639d13f4b06af1c2437934692d0"
format_stamp: "Formatted at 2025-08-09 19:59:28 on dist-test-slave-xzln"
I20250809 19:59:28.027590 31103 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-3/wal/instance:
uuid: "3b480639d13f4b06af1c2437934692d0"
format_stamp: "Formatted at 2025-08-09 19:59:28 on dist-test-slave-xzln"
I20250809 19:59:28.033879 31103 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.001s sys 0.005s
I20250809 19:59:28.039098 31119 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250809 19:59:28.040076 31103 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.003s sys 0.001s
I20250809 19:59:28.040321 31103 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-3/data,/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-3/wal
uuid: "3b480639d13f4b06af1c2437934692d0"
format_stamp: "Formatted at 2025-08-09 19:59:28 on dist-test-slave-xzln"
I20250809 19:59:28.040576 31103 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-3/wal
metadata directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-3/wal
1 data directories: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-3/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250809 19:59:28.094883 31103 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250809 19:59:28.096091 31103 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250809 19:59:28.096453 31103 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250809 19:59:28.098582 31103 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250809 19:59:28.101847 31103 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250809 19:59:28.102022 31103 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250809 19:59:28.102233 31103 ts_tablet_manager.cc:610] Registered 0 tablets
I20250809 19:59:28.102368 31103 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250809 19:59:28.217962 31103 rpc_server.cc:307] RPC server started. Bound to: 127.25.124.132:38993
I20250809 19:59:28.218011 31231 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.25.124.132:38993 every 8 connection(s)
I20250809 19:59:28.220906 31103 server_base.cc:1179] Dumped server information to /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-3/data/info.pb
I20250809 19:59:28.224910 26098 external_mini_cluster.cc:1428] Started /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu as pid 31103
I20250809 19:59:28.225356 26098 external_mini_cluster.cc:1442] Reading /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-3/wal/instance
I20250809 19:59:28.231916 26098 external_mini_cluster.cc:1366] Running /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
/tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-4/wal
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-4/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-4/logs
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-4/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.25.124.133:0
--local_ip_for_outbound_sockets=127.25.124.133
--webserver_interface=127.25.124.133
--webserver_port=0
--tserver_master_addrs=127.25.124.190:39147
--builtin_ntp_servers=127.25.124.148:40731
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--raft_prepare_replacement_before_eviction=true with env {}
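The argument dump above is how the external mini cluster launches each daemon: the kudu binary, a subcommand ("tserver run" here), a flag list, and an empty environment ("with env {}"). A minimal sketch of spawning a process from such an argv list, assuming Python's subprocess module and placeholder paths/flags; this is not the C++ ExternalMiniCluster code:

    import subprocess
    from typing import Sequence

    def start_daemon(argv: Sequence[str]) -> subprocess.Popen:
        # The mini cluster logs each launch as "<argv...> with env {}", i.e. the
        # child is started with the given argument vector and an empty environment.
        return subprocess.Popen(list(argv), env={})

    # argv shaped like the dump above (binary path, flags, and ports are placeholders):
    example_argv = [
        '/path/to/bin/kudu',
        'tserver', 'run',
        '--rpc_bind_addresses=127.25.124.133:0',
        '--tserver_master_addrs=127.25.124.190:39147',
    ]
    # proc = start_daemon(example_argv)
    # The "Started ... as pid ..." line later in the log reports the child pid.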
I20250809 19:59:28.244300 31232 heartbeater.cc:344] Connected to a master server at 127.25.124.190:39147
I20250809 19:59:28.244638 31232 heartbeater.cc:461] Registering TS with master...
I20250809 19:59:28.245460 31232 heartbeater.cc:507] Master 127.25.124.190:39147 requested a full tablet report, sending...
I20250809 19:59:28.247368 30644 ts_manager.cc:194] Registered new tserver with Master: 3b480639d13f4b06af1c2437934692d0 (127.25.124.132:38993)
I20250809 19:59:28.248379 30644 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.25.124.132:45813
W20250809 19:59:28.493055 31236 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 19:59:28.493446 31236 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250809 19:59:28.493847 31236 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 19:59:28.520642 31236 flags.cc:425] Enabled experimental flag: --raft_prepare_replacement_before_eviction=true
W20250809 19:59:28.520969 31236 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250809 19:59:28.521615 31236 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.25.124.133
I20250809 19:59:28.554595 31236 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.25.124.148:40731
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-4/data
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-4/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.25.124.133:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-4/data/info.pb
--webserver_interface=127.25.124.133
--webserver_port=0
--tserver_master_addrs=127.25.124.190:39147
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.25.124.133
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-4/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 09 Aug 2025 19:43:21 UTC on 5fd53c4cbb9d
build id 7489
TSAN enabled
I20250809 19:59:28.555825 31236 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250809 19:59:28.557235 31236 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250809 19:59:28.567606 31242 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250809 19:59:29.250901 31232 heartbeater.cc:499] Master 127.25.124.190:39147 was elected leader, sending a full tablet report...
W20250809 19:59:28.568331 31243 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:59:29.613080 31244 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1043 milliseconds
W20250809 19:59:29.613449 31245 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250809 19:59:29.613469 31236 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250809 19:59:29.617291 31236 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250809 19:59:29.619585 31236 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250809 19:59:29.621004 31236 hybrid_clock.cc:648] HybridClock initialized: now 1754769569620961 us; error 42 us; skew 500 ppm
I20250809 19:59:29.621666 31236 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250809 19:59:29.626631 31236 webserver.cc:489] Webserver started at http://127.25.124.133:46465/ using document root <none> and password file <none>
I20250809 19:59:29.627430 31236 fs_manager.cc:362] Metadata directory not provided
I20250809 19:59:29.627607 31236 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250809 19:59:29.628007 31236 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250809 19:59:29.631644 31236 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-4/data/instance:
uuid: "9463cef1459b45e89c2a1c89fcc6f97c"
format_stamp: "Formatted at 2025-08-09 19:59:29 on dist-test-slave-xzln"
I20250809 19:59:29.632563 31236 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-4/wal/instance:
uuid: "9463cef1459b45e89c2a1c89fcc6f97c"
format_stamp: "Formatted at 2025-08-09 19:59:29 on dist-test-slave-xzln"
I20250809 19:59:29.638226 31236 fs_manager.cc:696] Time spent creating directory manager: real 0.005s user 0.006s sys 0.000s
I20250809 19:59:29.642838 31253 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250809 19:59:29.643656 31236 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.003s sys 0.000s
I20250809 19:59:29.643905 31236 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-4/data,/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-4/wal
uuid: "9463cef1459b45e89c2a1c89fcc6f97c"
format_stamp: "Formatted at 2025-08-09 19:59:29 on dist-test-slave-xzln"
I20250809 19:59:29.644179 31236 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-4/wal
metadata directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-4/wal
1 data directories: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-4/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250809 19:59:29.691327 31236 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250809 19:59:29.692636 31236 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250809 19:59:29.693034 31236 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250809 19:59:29.695353 31236 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250809 19:59:29.698750 31236 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250809 19:59:29.698935 31236 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250809 19:59:29.699154 31236 ts_tablet_manager.cc:610] Registered 0 tablets
I20250809 19:59:29.699349 31236 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.001s sys 0.000s
I20250809 19:59:29.821686 31236 rpc_server.cc:307] RPC server started. Bound to: 127.25.124.133:34963
I20250809 19:59:29.821763 31365 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.25.124.133:34963 every 8 connection(s)
I20250809 19:59:29.823872 31236 server_base.cc:1179] Dumped server information to /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-4/data/info.pb
I20250809 19:59:29.833832 26098 external_mini_cluster.cc:1428] Started /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu as pid 31236
I20250809 19:59:29.834266 26098 external_mini_cluster.cc:1442] Reading /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-4/wal/instance
I20250809 19:59:29.842507 31366 heartbeater.cc:344] Connected to a master server at 127.25.124.190:39147
I20250809 19:59:29.842835 31366 heartbeater.cc:461] Registering TS with master...
I20250809 19:59:29.843642 31366 heartbeater.cc:507] Master 127.25.124.190:39147 requested a full tablet report, sending...
I20250809 19:59:29.845299 30644 ts_manager.cc:194] Registered new tserver with Master: 9463cef1459b45e89c2a1c89fcc6f97c (127.25.124.133:34963)
I20250809 19:59:29.846880 30644 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.25.124.133:47421
I20250809 19:59:29.853538 26098 external_mini_cluster.cc:949] 5 TS(s) registered with all masters
I20250809 19:59:29.884477 30644 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:39062:
name: "TestTable"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
owner: "alice"
I20250809 19:59:29.958561 31034 tablet_service.cc:1468] Processing CreateTablet for tablet acda28ea36a24fd18e06a6aca213a59c (DEFAULT_TABLE table=TestTable [id=481f02969fb14b76a1bc2e65bae0607d]), partition=RANGE (key) PARTITION UNBOUNDED
I20250809 19:59:29.959141 30767 tablet_service.cc:1468] Processing CreateTablet for tablet acda28ea36a24fd18e06a6aca213a59c (DEFAULT_TABLE table=TestTable [id=481f02969fb14b76a1bc2e65bae0607d]), partition=RANGE (key) PARTITION UNBOUNDED
I20250809 19:59:29.960096 31034 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet acda28ea36a24fd18e06a6aca213a59c. 1 dirs total, 0 dirs full, 0 dirs failed
I20250809 19:59:29.960645 30767 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet acda28ea36a24fd18e06a6aca213a59c. 1 dirs total, 0 dirs full, 0 dirs failed
I20250809 19:59:29.961283 30900 tablet_service.cc:1468] Processing CreateTablet for tablet acda28ea36a24fd18e06a6aca213a59c (DEFAULT_TABLE table=TestTable [id=481f02969fb14b76a1bc2e65bae0607d]), partition=RANGE (key) PARTITION UNBOUNDED
I20250809 19:59:29.963264 30900 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet acda28ea36a24fd18e06a6aca213a59c. 1 dirs total, 0 dirs full, 0 dirs failed
I20250809 19:59:29.979933 31385 tablet_bootstrap.cc:492] T acda28ea36a24fd18e06a6aca213a59c P 77851c911f2a46ea8914cc2f64878c61: Bootstrap starting.
I20250809 19:59:29.987803 31386 tablet_bootstrap.cc:492] T acda28ea36a24fd18e06a6aca213a59c P 9d5c6d70d1244ff4a42e846fce4cb479: Bootstrap starting.
I20250809 19:59:29.988242 31385 tablet_bootstrap.cc:654] T acda28ea36a24fd18e06a6aca213a59c P 77851c911f2a46ea8914cc2f64878c61: Neither blocks nor log segments found. Creating new log.
I20250809 19:59:29.990334 31387 tablet_bootstrap.cc:492] T acda28ea36a24fd18e06a6aca213a59c P 28af70b518b34b289177e053e9597e48: Bootstrap starting.
I20250809 19:59:29.990628 31385 log.cc:826] T acda28ea36a24fd18e06a6aca213a59c P 77851c911f2a46ea8914cc2f64878c61: Log is configured to *not* fsync() on all Append() calls
I20250809 19:59:29.994843 31386 tablet_bootstrap.cc:654] T acda28ea36a24fd18e06a6aca213a59c P 9d5c6d70d1244ff4a42e846fce4cb479: Neither blocks nor log segments found. Creating new log.
I20250809 19:59:29.996666 31387 tablet_bootstrap.cc:654] T acda28ea36a24fd18e06a6aca213a59c P 28af70b518b34b289177e053e9597e48: Neither blocks nor log segments found. Creating new log.
I20250809 19:59:29.996938 31385 tablet_bootstrap.cc:492] T acda28ea36a24fd18e06a6aca213a59c P 77851c911f2a46ea8914cc2f64878c61: No bootstrap required, opened a new log
I20250809 19:59:29.997262 31385 ts_tablet_manager.cc:1397] T acda28ea36a24fd18e06a6aca213a59c P 77851c911f2a46ea8914cc2f64878c61: Time spent bootstrapping tablet: real 0.018s user 0.007s sys 0.007s
I20250809 19:59:29.997246 31386 log.cc:826] T acda28ea36a24fd18e06a6aca213a59c P 9d5c6d70d1244ff4a42e846fce4cb479: Log is configured to *not* fsync() on all Append() calls
I20250809 19:59:29.999440 31387 log.cc:826] T acda28ea36a24fd18e06a6aca213a59c P 28af70b518b34b289177e053e9597e48: Log is configured to *not* fsync() on all Append() calls
I20250809 19:59:30.004501 31387 tablet_bootstrap.cc:492] T acda28ea36a24fd18e06a6aca213a59c P 28af70b518b34b289177e053e9597e48: No bootstrap required, opened a new log
I20250809 19:59:30.004814 31387 ts_tablet_manager.cc:1397] T acda28ea36a24fd18e06a6aca213a59c P 28af70b518b34b289177e053e9597e48: Time spent bootstrapping tablet: real 0.016s user 0.009s sys 0.004s
I20250809 19:59:30.005303 31386 tablet_bootstrap.cc:492] T acda28ea36a24fd18e06a6aca213a59c P 9d5c6d70d1244ff4a42e846fce4cb479: No bootstrap required, opened a new log
I20250809 19:59:30.005688 31386 ts_tablet_manager.cc:1397] T acda28ea36a24fd18e06a6aca213a59c P 9d5c6d70d1244ff4a42e846fce4cb479: Time spent bootstrapping tablet: real 0.018s user 0.006s sys 0.009s
I20250809 19:59:30.019297 31385 raft_consensus.cc:357] T acda28ea36a24fd18e06a6aca213a59c P 77851c911f2a46ea8914cc2f64878c61 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "9d5c6d70d1244ff4a42e846fce4cb479" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35637 } } peers { permanent_uuid: "77851c911f2a46ea8914cc2f64878c61" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 41883 } } peers { permanent_uuid: "28af70b518b34b289177e053e9597e48" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46095 } }
I20250809 19:59:30.020026 31385 raft_consensus.cc:383] T acda28ea36a24fd18e06a6aca213a59c P 77851c911f2a46ea8914cc2f64878c61 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250809 19:59:30.020742 31385 raft_consensus.cc:738] T acda28ea36a24fd18e06a6aca213a59c P 77851c911f2a46ea8914cc2f64878c61 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 77851c911f2a46ea8914cc2f64878c61, State: Initialized, Role: FOLLOWER
I20250809 19:59:30.020887 31387 raft_consensus.cc:357] T acda28ea36a24fd18e06a6aca213a59c P 28af70b518b34b289177e053e9597e48 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "9d5c6d70d1244ff4a42e846fce4cb479" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35637 } } peers { permanent_uuid: "77851c911f2a46ea8914cc2f64878c61" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 41883 } } peers { permanent_uuid: "28af70b518b34b289177e053e9597e48" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46095 } }
I20250809 19:59:30.021493 31387 raft_consensus.cc:383] T acda28ea36a24fd18e06a6aca213a59c P 28af70b518b34b289177e053e9597e48 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250809 19:59:30.021696 31387 raft_consensus.cc:738] T acda28ea36a24fd18e06a6aca213a59c P 28af70b518b34b289177e053e9597e48 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 28af70b518b34b289177e053e9597e48, State: Initialized, Role: FOLLOWER
I20250809 19:59:30.021478 31385 consensus_queue.cc:260] T acda28ea36a24fd18e06a6aca213a59c P 77851c911f2a46ea8914cc2f64878c61 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "9d5c6d70d1244ff4a42e846fce4cb479" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35637 } } peers { permanent_uuid: "77851c911f2a46ea8914cc2f64878c61" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 41883 } } peers { permanent_uuid: "28af70b518b34b289177e053e9597e48" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46095 } }
I20250809 19:59:30.022581 31387 consensus_queue.cc:260] T acda28ea36a24fd18e06a6aca213a59c P 28af70b518b34b289177e053e9597e48 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "9d5c6d70d1244ff4a42e846fce4cb479" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35637 } } peers { permanent_uuid: "77851c911f2a46ea8914cc2f64878c61" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 41883 } } peers { permanent_uuid: "28af70b518b34b289177e053e9597e48" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46095 } }
I20250809 19:59:30.025658 31385 ts_tablet_manager.cc:1428] T acda28ea36a24fd18e06a6aca213a59c P 77851c911f2a46ea8914cc2f64878c61: Time spent starting tablet: real 0.028s user 0.022s sys 0.006s
I20250809 19:59:30.027895 31387 ts_tablet_manager.cc:1428] T acda28ea36a24fd18e06a6aca213a59c P 28af70b518b34b289177e053e9597e48: Time spent starting tablet: real 0.023s user 0.023s sys 0.000s
I20250809 19:59:30.029284 31386 raft_consensus.cc:357] T acda28ea36a24fd18e06a6aca213a59c P 9d5c6d70d1244ff4a42e846fce4cb479 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "9d5c6d70d1244ff4a42e846fce4cb479" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35637 } } peers { permanent_uuid: "77851c911f2a46ea8914cc2f64878c61" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 41883 } } peers { permanent_uuid: "28af70b518b34b289177e053e9597e48" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46095 } }
I20250809 19:59:30.029827 31386 raft_consensus.cc:383] T acda28ea36a24fd18e06a6aca213a59c P 9d5c6d70d1244ff4a42e846fce4cb479 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250809 19:59:30.030021 31386 raft_consensus.cc:738] T acda28ea36a24fd18e06a6aca213a59c P 9d5c6d70d1244ff4a42e846fce4cb479 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 9d5c6d70d1244ff4a42e846fce4cb479, State: Initialized, Role: FOLLOWER
I20250809 19:59:30.030562 31386 consensus_queue.cc:260] T acda28ea36a24fd18e06a6aca213a59c P 9d5c6d70d1244ff4a42e846fce4cb479 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "9d5c6d70d1244ff4a42e846fce4cb479" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35637 } } peers { permanent_uuid: "77851c911f2a46ea8914cc2f64878c61" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 41883 } } peers { permanent_uuid: "28af70b518b34b289177e053e9597e48" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46095 } }
I20250809 19:59:30.033785 31386 ts_tablet_manager.cc:1428] T acda28ea36a24fd18e06a6aca213a59c P 9d5c6d70d1244ff4a42e846fce4cb479: Time spent starting tablet: real 0.028s user 0.020s sys 0.009s
I20250809 19:59:30.041671 31392 raft_consensus.cc:491] T acda28ea36a24fd18e06a6aca213a59c P 28af70b518b34b289177e053e9597e48 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250809 19:59:30.042002 31392 raft_consensus.cc:513] T acda28ea36a24fd18e06a6aca213a59c P 28af70b518b34b289177e053e9597e48 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "9d5c6d70d1244ff4a42e846fce4cb479" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35637 } } peers { permanent_uuid: "77851c911f2a46ea8914cc2f64878c61" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 41883 } } peers { permanent_uuid: "28af70b518b34b289177e053e9597e48" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46095 } }
I20250809 19:59:30.043846 31392 leader_election.cc:290] T acda28ea36a24fd18e06a6aca213a59c P 28af70b518b34b289177e053e9597e48 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 9d5c6d70d1244ff4a42e846fce4cb479 (127.25.124.131:35637), 77851c911f2a46ea8914cc2f64878c61 (127.25.124.129:41883)
I20250809 19:59:30.054908 31054 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "acda28ea36a24fd18e06a6aca213a59c" candidate_uuid: "28af70b518b34b289177e053e9597e48" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "9d5c6d70d1244ff4a42e846fce4cb479" is_pre_election: true
I20250809 19:59:30.055226 30787 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "acda28ea36a24fd18e06a6aca213a59c" candidate_uuid: "28af70b518b34b289177e053e9597e48" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "77851c911f2a46ea8914cc2f64878c61" is_pre_election: true
I20250809 19:59:30.055503 31054 raft_consensus.cc:2466] T acda28ea36a24fd18e06a6aca213a59c P 9d5c6d70d1244ff4a42e846fce4cb479 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 28af70b518b34b289177e053e9597e48 in term 0.
I20250809 19:59:30.055799 30787 raft_consensus.cc:2466] T acda28ea36a24fd18e06a6aca213a59c P 77851c911f2a46ea8914cc2f64878c61 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 28af70b518b34b289177e053e9597e48 in term 0.
I20250809 19:59:30.056425 30854 leader_election.cc:304] T acda28ea36a24fd18e06a6aca213a59c P 28af70b518b34b289177e053e9597e48 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 28af70b518b34b289177e053e9597e48, 9d5c6d70d1244ff4a42e846fce4cb479; no voters:
I20250809 19:59:30.057014 31392 raft_consensus.cc:2802] T acda28ea36a24fd18e06a6aca213a59c P 28af70b518b34b289177e053e9597e48 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250809 19:59:30.057271 31392 raft_consensus.cc:491] T acda28ea36a24fd18e06a6aca213a59c P 28af70b518b34b289177e053e9597e48 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250809 19:59:30.057490 31392 raft_consensus.cc:3058] T acda28ea36a24fd18e06a6aca213a59c P 28af70b518b34b289177e053e9597e48 [term 0 FOLLOWER]: Advancing to term 1
I20250809 19:59:30.061615 31392 raft_consensus.cc:513] T acda28ea36a24fd18e06a6aca213a59c P 28af70b518b34b289177e053e9597e48 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "9d5c6d70d1244ff4a42e846fce4cb479" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35637 } } peers { permanent_uuid: "77851c911f2a46ea8914cc2f64878c61" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 41883 } } peers { permanent_uuid: "28af70b518b34b289177e053e9597e48" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46095 } }
I20250809 19:59:30.062731 31392 leader_election.cc:290] T acda28ea36a24fd18e06a6aca213a59c P 28af70b518b34b289177e053e9597e48 [CANDIDATE]: Term 1 election: Requested vote from peers 9d5c6d70d1244ff4a42e846fce4cb479 (127.25.124.131:35637), 77851c911f2a46ea8914cc2f64878c61 (127.25.124.129:41883)
I20250809 19:59:30.063283 31054 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "acda28ea36a24fd18e06a6aca213a59c" candidate_uuid: "28af70b518b34b289177e053e9597e48" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "9d5c6d70d1244ff4a42e846fce4cb479"
I20250809 19:59:30.063447 30787 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "acda28ea36a24fd18e06a6aca213a59c" candidate_uuid: "28af70b518b34b289177e053e9597e48" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "77851c911f2a46ea8914cc2f64878c61"
I20250809 19:59:30.063648 31054 raft_consensus.cc:3058] T acda28ea36a24fd18e06a6aca213a59c P 9d5c6d70d1244ff4a42e846fce4cb479 [term 0 FOLLOWER]: Advancing to term 1
I20250809 19:59:30.063787 30787 raft_consensus.cc:3058] T acda28ea36a24fd18e06a6aca213a59c P 77851c911f2a46ea8914cc2f64878c61 [term 0 FOLLOWER]: Advancing to term 1
I20250809 19:59:30.067943 31054 raft_consensus.cc:2466] T acda28ea36a24fd18e06a6aca213a59c P 9d5c6d70d1244ff4a42e846fce4cb479 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 28af70b518b34b289177e053e9597e48 in term 1.
I20250809 19:59:30.068041 30787 raft_consensus.cc:2466] T acda28ea36a24fd18e06a6aca213a59c P 77851c911f2a46ea8914cc2f64878c61 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 28af70b518b34b289177e053e9597e48 in term 1.
I20250809 19:59:30.068550 30854 leader_election.cc:304] T acda28ea36a24fd18e06a6aca213a59c P 28af70b518b34b289177e053e9597e48 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 28af70b518b34b289177e053e9597e48, 9d5c6d70d1244ff4a42e846fce4cb479; no voters:
I20250809 19:59:30.068998 31392 raft_consensus.cc:2802] T acda28ea36a24fd18e06a6aca213a59c P 28af70b518b34b289177e053e9597e48 [term 1 FOLLOWER]: Leader election won for term 1
I20250809 19:59:30.071697 31392 raft_consensus.cc:695] T acda28ea36a24fd18e06a6aca213a59c P 28af70b518b34b289177e053e9597e48 [term 1 LEADER]: Becoming Leader. State: Replica: 28af70b518b34b289177e053e9597e48, State: Running, Role: LEADER
I20250809 19:59:30.072453 31392 consensus_queue.cc:237] T acda28ea36a24fd18e06a6aca213a59c P 28af70b518b34b289177e053e9597e48 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "9d5c6d70d1244ff4a42e846fce4cb479" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35637 } } peers { permanent_uuid: "77851c911f2a46ea8914cc2f64878c61" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 41883 } } peers { permanent_uuid: "28af70b518b34b289177e053e9597e48" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46095 } }
I20250809 19:59:30.081704 30641 catalog_manager.cc:5582] T acda28ea36a24fd18e06a6aca213a59c P 28af70b518b34b289177e053e9597e48 reported cstate change: term changed from 0 to 1, leader changed from <none> to 28af70b518b34b289177e053e9597e48 (127.25.124.130). New cstate: current_term: 1 leader_uuid: "28af70b518b34b289177e053e9597e48" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "9d5c6d70d1244ff4a42e846fce4cb479" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 35637 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "77851c911f2a46ea8914cc2f64878c61" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 41883 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "28af70b518b34b289177e053e9597e48" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 46095 } health_report { overall_health: HEALTHY } } }
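The election above is decided by simple majority: with three voters the quorum is two, so the candidate's own vote plus one remote "yes" is enough ("received 2 responses out of 3 voters: 2 yes votes"), which also matches the queue's "Majority size: 2". A toy sketch of that quorum arithmetic, not Kudu's leader_election.cc:

    def majority_size(num_voters: int) -> int:
        # Smallest strict majority of the voter set.
        return num_voters // 2 + 1

    def election_result(yes_votes: int, no_votes: int, num_voters: int):
        quorum = majority_size(num_voters)
        if yes_votes >= quorum:
            return 'candidate won'
        if no_votes >= quorum:
            return 'candidate lost'
        return None  # still waiting for more responses

    assert majority_size(3) == 2                        # "Majority size: 2"
    assert election_result(2, 0, 3) == 'candidate won'  # matches the summary above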
I20250809 19:59:30.098066 26098 external_mini_cluster.cc:949] 5 TS(s) registered with all masters
I20250809 19:59:30.100600 26098 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 77851c911f2a46ea8914cc2f64878c61 to finish bootstrapping
I20250809 19:59:30.111996 26098 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 28af70b518b34b289177e053e9597e48 to finish bootstrapping
I20250809 19:59:30.120709 26098 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 9d5c6d70d1244ff4a42e846fce4cb479 to finish bootstrapping
I20250809 19:59:30.129752 26098 test_util.cc:276] Using random seed: 551806790
I20250809 19:59:30.149376 26098 test_workload.cc:405] TestWorkload: Skipping table creation because table TestTable already exists
I20250809 19:59:30.150045 26098 external_mini_cluster.cc:1658] Killing /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu with pid 30969
I20250809 19:59:30.179045 30787 raft_consensus.cc:1273] T acda28ea36a24fd18e06a6aca213a59c P 77851c911f2a46ea8914cc2f64878c61 [term 1 FOLLOWER]: Refusing update from remote peer 28af70b518b34b289177e053e9597e48: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
W20250809 19:59:30.179543 30854 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.25.124.131:35637: connect: Connection refused (error 111)
I20250809 19:59:30.181125 31396 consensus_queue.cc:1035] T acda28ea36a24fd18e06a6aca213a59c P 28af70b518b34b289177e053e9597e48 [LEADER]: Connected to new peer: Peer: permanent_uuid: "77851c911f2a46ea8914cc2f64878c61" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 41883 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
W20250809 19:59:30.191936 30854 consensus_peers.cc:489] T acda28ea36a24fd18e06a6aca213a59c P 28af70b518b34b289177e053e9597e48 -> Peer 9d5c6d70d1244ff4a42e846fce4cb479 (127.25.124.131:35637): Couldn't send request to peer 9d5c6d70d1244ff4a42e846fce4cb479. Status: Network error: Client connection negotiation failed: client connection to 127.25.124.131:35637: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
W20250809 19:59:30.193298 30833 tablet.cc:2378] T acda28ea36a24fd18e06a6aca213a59c P 77851c911f2a46ea8914cc2f64878c61: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250809 19:59:30.204923 31406 mvcc.cc:204] Tried to move back new op lower bound from 7187536159437369344 to 7187536159023861760. Current Snapshot: MvccSnapshot[applied={T|T < 7187536159437369344}]
I20250809 19:59:30.850337 31366 heartbeater.cc:499] Master 127.25.124.190:39147 was elected leader, sending a full tablet report...
I20250809 19:59:32.338851 30900 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250809 19:59:32.352118 31167 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250809 19:59:32.354724 31301 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250809 19:59:32.364643 30767 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
W20250809 19:59:32.439060 30854 consensus_peers.cc:489] T acda28ea36a24fd18e06a6aca213a59c P 28af70b518b34b289177e053e9597e48 -> Peer 9d5c6d70d1244ff4a42e846fce4cb479 (127.25.124.131:35637): Couldn't send request to peer 9d5c6d70d1244ff4a42e846fce4cb479. Status: Network error: Client connection negotiation failed: client connection to 127.25.124.131:35637: connect: Connection refused (error 111). This is attempt 6: this message will repeat every 5th retry.
I20250809 19:59:34.086123 31167 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250809 19:59:34.090597 30900 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250809 19:59:34.115576 30767 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250809 19:59:34.120568 31301 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
W20250809 19:59:34.757265 30854 consensus_peers.cc:489] T acda28ea36a24fd18e06a6aca213a59c P 28af70b518b34b289177e053e9597e48 -> Peer 9d5c6d70d1244ff4a42e846fce4cb479 (127.25.124.131:35637): Couldn't send request to peer 9d5c6d70d1244ff4a42e846fce4cb479. Status: Network error: Client connection negotiation failed: client connection to 127.25.124.131:35637: connect: Connection refused (error 111). This is attempt 11: this message will repeat every 5th retry.
W20250809 19:59:35.223202 30854 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.25.124.131:35637: connect: Connection refused (error 111) [suppressed 10 similar messages]
I20250809 19:59:35.848059 31167 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250809 19:59:35.854873 30900 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250809 19:59:35.880331 30767 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250809 19:59:35.914139 31301 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
W20250809 19:59:36.811767 31228 debug-util.cc:398] Leaking SignalData structure 0x7b08000b6a40 after lost signal to thread 31104
W20250809 19:59:36.813093 31228 debug-util.cc:398] Leaking SignalData structure 0x7b08000ac8a0 after lost signal to thread 31231
W20250809 19:59:37.240185 30828 debug-util.cc:398] Leaking SignalData structure 0x7b08000e4680 after lost signal to thread 30704
W20250809 19:59:37.241096 30828 debug-util.cc:398] Leaking SignalData structure 0x7b08000f3760 after lost signal to thread 30831
W20250809 19:59:37.387689 30854 consensus_peers.cc:489] T acda28ea36a24fd18e06a6aca213a59c P 28af70b518b34b289177e053e9597e48 -> Peer 9d5c6d70d1244ff4a42e846fce4cb479 (127.25.124.131:35637): Couldn't send request to peer 9d5c6d70d1244ff4a42e846fce4cb479. Status: Network error: Client connection negotiation failed: client connection to 127.25.124.131:35637: connect: Connection refused (error 111). This is attempt 16: this message will repeat every 5th retry.
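The repeated "Couldn't send request to peer ..." warnings are throttled, as the messages themselves state ("this message will repeat every 5th retry"); that is why only attempts 1, 6, 11 and 16 appear while the peer at 127.25.124.131:35637 stays unreachable. A small sketch of that throttling pattern (an assumption about the counter arithmetic, not the actual consensus_peers.cc code):

    def should_log_retry(attempt: int) -> bool:
        # Log the first attempt, then every 5th retry after it: 1, 6, 11, 16, ...
        return attempt % 5 == 1

    assert [a for a in range(1, 20) if should_log_retry(a)] == [1, 6, 11, 16]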
I20250809 19:59:38.279659 26098 external_mini_cluster.cc:1658] Killing /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu with pid 30703
I20250809 19:59:38.312037 26098 external_mini_cluster.cc:1658] Killing /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu with pid 30836
I20250809 19:59:38.346566 26098 external_mini_cluster.cc:1658] Killing /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu with pid 31103
I20250809 19:59:38.366575 26098 external_mini_cluster.cc:1658] Killing /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu with pid 31236
I20250809 19:59:38.388092 26098 external_mini_cluster.cc:1658] Killing /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu with pid 30611
2025-08-09T19:59:38Z chronyd exiting
[ OK ] EnableKudu1097AndDownTS/MoveTabletParamTest.Test/4 (19759 ms)
[----------] 1 test from EnableKudu1097AndDownTS/MoveTabletParamTest (19759 ms total)
[----------] 1 test from ListTableCliSimpleParamTest
[ RUN ] ListTableCliSimpleParamTest.TestListTables/2
I20250809 19:59:38.443181 26098 test_util.cc:276] Using random seed: 560120216
I20250809 19:59:38.446578 26098 ts_itest-base.cc:115] Starting cluster with:
I20250809 19:59:38.446717 26098 ts_itest-base.cc:116] --------------
I20250809 19:59:38.446846 26098 ts_itest-base.cc:117] 1 tablet servers
I20250809 19:59:38.446959 26098 ts_itest-base.cc:118] 1 replicas per TS
I20250809 19:59:38.447069 26098 ts_itest-base.cc:119] --------------
2025-08-09T19:59:38Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-08-09T19:59:38Z Disabled control of system clock
I20250809 19:59:38.479768 26098 external_mini_cluster.cc:1366] Running /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
/tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.25.124.190:40651
--webserver_interface=127.25.124.190
--webserver_port=0
--builtin_ntp_servers=127.25.124.148:44847
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.25.124.190:40651 with env {}
W20250809 19:59:38.735620 31539 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 19:59:38.736080 31539 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250809 19:59:38.736469 31539 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 19:59:38.761214 31539 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250809 19:59:38.761469 31539 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250809 19:59:38.761677 31539 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250809 19:59:38.761878 31539 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250809 19:59:38.789149 31539 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.25.124.148:44847
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.25.124.190:40651
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.25.124.190:40651
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.25.124.190
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 09 Aug 2025 19:43:21 UTC on 5fd53c4cbb9d
build id 7489
TSAN enabled
I20250809 19:59:38.790144 31539 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250809 19:59:38.791496 31539 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250809 19:59:38.800629 31545 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:59:38.800906 31546 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:59:39.865998 31548 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:59:39.867785 31547 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1063 milliseconds
W20250809 19:59:39.868961 31539 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.069s user 0.383s sys 0.676s
W20250809 19:59:39.869196 31539 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.069s user 0.383s sys 0.676s
I20250809 19:59:39.869396 31539 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250809 19:59:39.870329 31539 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250809 19:59:39.872617 31539 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250809 19:59:39.873971 31539 hybrid_clock.cc:648] HybridClock initialized: now 1754769579873938 us; error 38 us; skew 500 ppm
I20250809 19:59:39.874668 31539 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250809 19:59:39.881165 31539 webserver.cc:489] Webserver started at http://127.25.124.190:35395/ using document root <none> and password file <none>
I20250809 19:59:39.881956 31539 fs_manager.cc:362] Metadata directory not provided
I20250809 19:59:39.882149 31539 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250809 19:59:39.882535 31539 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250809 19:59:39.886471 31539 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "6f8a7a9b6db647448775ad541ee222d5"
format_stamp: "Formatted at 2025-08-09 19:59:39 on dist-test-slave-xzln"
I20250809 19:59:39.887439 31539 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "6f8a7a9b6db647448775ad541ee222d5"
format_stamp: "Formatted at 2025-08-09 19:59:39 on dist-test-slave-xzln"
I20250809 19:59:39.893887 31539 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.005s sys 0.001s
I20250809 19:59:39.898836 31555 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250809 19:59:39.899811 31539 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.003s sys 0.000s
I20250809 19:59:39.900074 31539 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
uuid: "6f8a7a9b6db647448775ad541ee222d5"
format_stamp: "Formatted at 2025-08-09 19:59:39 on dist-test-slave-xzln"
I20250809 19:59:39.900331 31539 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250809 19:59:39.955391 31539 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250809 19:59:39.956548 31539 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250809 19:59:39.956904 31539 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250809 19:59:40.015676 31539 rpc_server.cc:307] RPC server started. Bound to: 127.25.124.190:40651
I20250809 19:59:40.015734 31606 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.25.124.190:40651 every 8 connection(s)
I20250809 19:59:40.017879 31539 server_base.cc:1179] Dumped server information to /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250809 19:59:40.018842 26098 external_mini_cluster.cc:1428] Started /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu as pid 31539
I20250809 19:59:40.019356 26098 external_mini_cluster.cc:1442] Reading /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754769444608376-26098-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250809 19:59:40.023142 31607 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250809 19:59:40.043790 31607 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 6f8a7a9b6db647448775ad541ee222d5: Bootstrap starting.
I20250809 19:59:40.048800 31607 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 6f8a7a9b6db647448775ad541ee222d5: Neither blocks nor log segments found. Creating new log.
I20250809 19:59:40.050323 31607 log.cc:826] T 00000000000000000000000000000000 P 6f8a7a9b6db647448775ad541ee222d5: Log is configured to *not* fsync() on all Append() calls
I20250809 19:59:40.054098 31607 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 6f8a7a9b6db647448775ad541ee222d5: No bootstrap required, opened a new log
I20250809 19:59:40.068645 31607 raft_consensus.cc:357] T 00000000000000000000000000000000 P 6f8a7a9b6db647448775ad541ee222d5 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "6f8a7a9b6db647448775ad541ee222d5" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 40651 } }
I20250809 19:59:40.069146 31607 raft_consensus.cc:383] T 00000000000000000000000000000000 P 6f8a7a9b6db647448775ad541ee222d5 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250809 19:59:40.069340 31607 raft_consensus.cc:738] T 00000000000000000000000000000000 P 6f8a7a9b6db647448775ad541ee222d5 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 6f8a7a9b6db647448775ad541ee222d5, State: Initialized, Role: FOLLOWER
I20250809 19:59:40.069993 31607 consensus_queue.cc:260] T 00000000000000000000000000000000 P 6f8a7a9b6db647448775ad541ee222d5 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "6f8a7a9b6db647448775ad541ee222d5" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 40651 } }
I20250809 19:59:40.070544 31607 raft_consensus.cc:397] T 00000000000000000000000000000000 P 6f8a7a9b6db647448775ad541ee222d5 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250809 19:59:40.070843 31607 raft_consensus.cc:491] T 00000000000000000000000000000000 P 6f8a7a9b6db647448775ad541ee222d5 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250809 19:59:40.071117 31607 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 6f8a7a9b6db647448775ad541ee222d5 [term 0 FOLLOWER]: Advancing to term 1
I20250809 19:59:40.074297 31607 raft_consensus.cc:513] T 00000000000000000000000000000000 P 6f8a7a9b6db647448775ad541ee222d5 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "6f8a7a9b6db647448775ad541ee222d5" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 40651 } }
I20250809 19:59:40.074790 31607 leader_election.cc:304] T 00000000000000000000000000000000 P 6f8a7a9b6db647448775ad541ee222d5 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 6f8a7a9b6db647448775ad541ee222d5; no voters:
I20250809 19:59:40.076200 31607 leader_election.cc:290] T 00000000000000000000000000000000 P 6f8a7a9b6db647448775ad541ee222d5 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250809 19:59:40.076858 31612 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 6f8a7a9b6db647448775ad541ee222d5 [term 1 FOLLOWER]: Leader election won for term 1
I20250809 19:59:40.078701 31612 raft_consensus.cc:695] T 00000000000000000000000000000000 P 6f8a7a9b6db647448775ad541ee222d5 [term 1 LEADER]: Becoming Leader. State: Replica: 6f8a7a9b6db647448775ad541ee222d5, State: Running, Role: LEADER
I20250809 19:59:40.079646 31607 sys_catalog.cc:564] T 00000000000000000000000000000000 P 6f8a7a9b6db647448775ad541ee222d5 [sys.catalog]: configured and running, proceeding with master startup.
I20250809 19:59:40.079458 31612 consensus_queue.cc:237] T 00000000000000000000000000000000 P 6f8a7a9b6db647448775ad541ee222d5 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "6f8a7a9b6db647448775ad541ee222d5" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 40651 } }
I20250809 19:59:40.087602 31613 sys_catalog.cc:455] T 00000000000000000000000000000000 P 6f8a7a9b6db647448775ad541ee222d5 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "6f8a7a9b6db647448775ad541ee222d5" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "6f8a7a9b6db647448775ad541ee222d5" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 40651 } } }
I20250809 19:59:40.086711 31614 sys_catalog.cc:455] T 00000000000000000000000000000000 P 6f8a7a9b6db647448775ad541ee222d5 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 6f8a7a9b6db647448775ad541ee222d5. Latest consensus state: current_term: 1 leader_uuid: "6f8a7a9b6db647448775ad541ee222d5" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "6f8a7a9b6db647448775ad541ee222d5" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 40651 } } }
I20250809 19:59:40.088203 31613 sys_catalog.cc:458] T 00000000000000000000000000000000 P 6f8a7a9b6db647448775ad541ee222d5 [sys.catalog]: This master's current role is: LEADER
I20250809 19:59:40.088428 31614 sys_catalog.cc:458] T 00000000000000000000000000000000 P 6f8a7a9b6db647448775ad541ee222d5 [sys.catalog]: This master's current role is: LEADER
I20250809 19:59:40.091487 31621 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250809 19:59:40.100221 31621 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250809 19:59:40.112146 31621 catalog_manager.cc:1349] Generated new cluster ID: a555f5c81021401b93fc8fc1dcd8ff98
I20250809 19:59:40.112358 31621 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250809 19:59:40.129189 31621 catalog_manager.cc:1372] Generated new certificate authority record
I20250809 19:59:40.130278 31621 catalog_manager.cc:1506] Loading token signing keys...
I20250809 19:59:40.143603 31621 catalog_manager.cc:5955] T 00000000000000000000000000000000 P 6f8a7a9b6db647448775ad541ee222d5: Generated new TSK 0
I20250809 19:59:40.144251 31621 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250809 19:59:40.154641 26098 external_mini_cluster.cc:1366] Running /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
/tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.25.124.129:0
--local_ip_for_outbound_sockets=127.25.124.129
--webserver_interface=127.25.124.129
--webserver_port=0
--tserver_master_addrs=127.25.124.190:40651
--builtin_ntp_servers=127.25.124.148:44847
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
W20250809 19:59:40.438822 31631 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 19:59:40.439323 31631 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250809 19:59:40.439800 31631 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 19:59:40.466773 31631 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250809 19:59:40.467509 31631 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.25.124.129
I20250809 19:59:40.496049 31631 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.25.124.148:44847
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.25.124.129:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.25.124.129
--webserver_port=0
--tserver_master_addrs=127.25.124.190:40651
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.25.124.129
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 09 Aug 2025 19:43:21 UTC on 5fd53c4cbb9d
build id 7489
TSAN enabled
I20250809 19:59:40.497160 31631 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250809 19:59:40.498520 31631 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250809 19:59:40.509079 31637 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:59:41.912194 31636 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 31631
W20250809 19:59:42.094964 31636 kernel_stack_watchdog.cc:198] Thread 31631 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:642 for 399ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250809 19:59:40.509841 31638 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:59:42.100378 31631 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.590s user 0.000s sys 0.003s
W20250809 19:59:42.100847 31631 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.590s user 0.000s sys 0.004s
W20250809 19:59:42.101768 31639 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection timed out after 1591 milliseconds
W20250809 19:59:42.102547 31641 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250809 19:59:42.102623 31631 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250809 19:59:42.103916 31631 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250809 19:59:42.106194 31631 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250809 19:59:42.107605 31631 hybrid_clock.cc:648] HybridClock initialized: now 1754769582107557 us; error 43 us; skew 500 ppm
I20250809 19:59:42.108525 31631 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250809 19:59:42.114811 31631 webserver.cc:489] Webserver started at http://127.25.124.129:33829/ using document root <none> and password file <none>
I20250809 19:59:42.115911 31631 fs_manager.cc:362] Metadata directory not provided
I20250809 19:59:42.116166 31631 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250809 19:59:42.116652 31631 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250809 19:59:42.121989 31631 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "4501249cf91c448a812d984395e88dfa"
format_stamp: "Formatted at 2025-08-09 19:59:42 on dist-test-slave-xzln"
I20250809 19:59:42.123282 31631 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "4501249cf91c448a812d984395e88dfa"
format_stamp: "Formatted at 2025-08-09 19:59:42 on dist-test-slave-xzln"
I20250809 19:59:42.130990 31631 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.006s sys 0.001s
I20250809 19:59:42.137147 31647 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250809 19:59:42.138130 31631 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.002s sys 0.004s
I20250809 19:59:42.138453 31631 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "4501249cf91c448a812d984395e88dfa"
format_stamp: "Formatted at 2025-08-09 19:59:42 on dist-test-slave-xzln"
I20250809 19:59:42.138800 31631 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250809 19:59:42.193540 31631 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250809 19:59:42.194646 31631 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250809 19:59:42.194989 31631 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250809 19:59:42.197180 31631 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250809 19:59:42.200551 31631 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250809 19:59:42.200726 31631 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250809 19:59:42.200923 31631 ts_tablet_manager.cc:610] Registered 0 tablets
I20250809 19:59:42.201053 31631 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250809 19:59:42.332564 31631 rpc_server.cc:307] RPC server started. Bound to: 127.25.124.129:45603
I20250809 19:59:42.332718 31759 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.25.124.129:45603 every 8 connection(s)
I20250809 19:59:42.334678 31631 server_base.cc:1179] Dumped server information to /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250809 19:59:42.338061 26098 external_mini_cluster.cc:1428] Started /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu as pid 31631
I20250809 19:59:42.338481 26098 external_mini_cluster.cc:1442] Reading /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754769444608376-26098-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250809 19:59:42.357890 31760 heartbeater.cc:344] Connected to a master server at 127.25.124.190:40651
I20250809 19:59:42.358254 31760 heartbeater.cc:461] Registering TS with master...
I20250809 19:59:42.359059 31760 heartbeater.cc:507] Master 127.25.124.190:40651 requested a full tablet report, sending...
I20250809 19:59:42.361073 31572 ts_manager.cc:194] Registered new tserver with Master: 4501249cf91c448a812d984395e88dfa (127.25.124.129:45603)
I20250809 19:59:42.362735 31572 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.25.124.129:47225
I20250809 19:59:42.369959 26098 external_mini_cluster.cc:949] 1 TS(s) registered with all masters
I20250809 19:59:42.400362 31572 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:56694:
name: "TestTable"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
owner: "alice"
I20250809 19:59:42.453598 31695 tablet_service.cc:1468] Processing CreateTablet for tablet 0fb22d7c2d4243bb8d1aa9d88a4fbbb8 (DEFAULT_TABLE table=TestTable [id=01c8d2fb1bfe44e6af962fb7601f2dce]), partition=RANGE (key) PARTITION UNBOUNDED
I20250809 19:59:42.455469 31695 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 0fb22d7c2d4243bb8d1aa9d88a4fbbb8. 1 dirs total, 0 dirs full, 0 dirs failed
I20250809 19:59:42.479107 31775 tablet_bootstrap.cc:492] T 0fb22d7c2d4243bb8d1aa9d88a4fbbb8 P 4501249cf91c448a812d984395e88dfa: Bootstrap starting.
I20250809 19:59:42.483942 31775 tablet_bootstrap.cc:654] T 0fb22d7c2d4243bb8d1aa9d88a4fbbb8 P 4501249cf91c448a812d984395e88dfa: Neither blocks nor log segments found. Creating new log.
I20250809 19:59:42.485725 31775 log.cc:826] T 0fb22d7c2d4243bb8d1aa9d88a4fbbb8 P 4501249cf91c448a812d984395e88dfa: Log is configured to *not* fsync() on all Append() calls
I20250809 19:59:42.489565 31775 tablet_bootstrap.cc:492] T 0fb22d7c2d4243bb8d1aa9d88a4fbbb8 P 4501249cf91c448a812d984395e88dfa: No bootstrap required, opened a new log
I20250809 19:59:42.489872 31775 ts_tablet_manager.cc:1397] T 0fb22d7c2d4243bb8d1aa9d88a4fbbb8 P 4501249cf91c448a812d984395e88dfa: Time spent bootstrapping tablet: real 0.011s user 0.009s sys 0.000s
I20250809 19:59:42.504446 31775 raft_consensus.cc:357] T 0fb22d7c2d4243bb8d1aa9d88a4fbbb8 P 4501249cf91c448a812d984395e88dfa [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "4501249cf91c448a812d984395e88dfa" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45603 } }
I20250809 19:59:42.504863 31775 raft_consensus.cc:383] T 0fb22d7c2d4243bb8d1aa9d88a4fbbb8 P 4501249cf91c448a812d984395e88dfa [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250809 19:59:42.505051 31775 raft_consensus.cc:738] T 0fb22d7c2d4243bb8d1aa9d88a4fbbb8 P 4501249cf91c448a812d984395e88dfa [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 4501249cf91c448a812d984395e88dfa, State: Initialized, Role: FOLLOWER
I20250809 19:59:42.505610 31775 consensus_queue.cc:260] T 0fb22d7c2d4243bb8d1aa9d88a4fbbb8 P 4501249cf91c448a812d984395e88dfa [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "4501249cf91c448a812d984395e88dfa" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45603 } }
I20250809 19:59:42.506039 31775 raft_consensus.cc:397] T 0fb22d7c2d4243bb8d1aa9d88a4fbbb8 P 4501249cf91c448a812d984395e88dfa [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250809 19:59:42.506270 31775 raft_consensus.cc:491] T 0fb22d7c2d4243bb8d1aa9d88a4fbbb8 P 4501249cf91c448a812d984395e88dfa [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250809 19:59:42.506536 31775 raft_consensus.cc:3058] T 0fb22d7c2d4243bb8d1aa9d88a4fbbb8 P 4501249cf91c448a812d984395e88dfa [term 0 FOLLOWER]: Advancing to term 1
I20250809 19:59:42.510371 31775 raft_consensus.cc:513] T 0fb22d7c2d4243bb8d1aa9d88a4fbbb8 P 4501249cf91c448a812d984395e88dfa [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "4501249cf91c448a812d984395e88dfa" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45603 } }
I20250809 19:59:42.510895 31775 leader_election.cc:304] T 0fb22d7c2d4243bb8d1aa9d88a4fbbb8 P 4501249cf91c448a812d984395e88dfa [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 4501249cf91c448a812d984395e88dfa; no voters:
I20250809 19:59:42.512759 31775 leader_election.cc:290] T 0fb22d7c2d4243bb8d1aa9d88a4fbbb8 P 4501249cf91c448a812d984395e88dfa [CANDIDATE]: Term 1 election: Requested vote from peers
I20250809 19:59:42.513089 31777 raft_consensus.cc:2802] T 0fb22d7c2d4243bb8d1aa9d88a4fbbb8 P 4501249cf91c448a812d984395e88dfa [term 1 FOLLOWER]: Leader election won for term 1
I20250809 19:59:42.515947 31760 heartbeater.cc:499] Master 127.25.124.190:40651 was elected leader, sending a full tablet report...
I20250809 19:59:42.517030 31775 ts_tablet_manager.cc:1428] T 0fb22d7c2d4243bb8d1aa9d88a4fbbb8 P 4501249cf91c448a812d984395e88dfa: Time spent starting tablet: real 0.027s user 0.019s sys 0.008s
I20250809 19:59:42.518115 31777 raft_consensus.cc:695] T 0fb22d7c2d4243bb8d1aa9d88a4fbbb8 P 4501249cf91c448a812d984395e88dfa [term 1 LEADER]: Becoming Leader. State: Replica: 4501249cf91c448a812d984395e88dfa, State: Running, Role: LEADER
I20250809 19:59:42.518679 31777 consensus_queue.cc:237] T 0fb22d7c2d4243bb8d1aa9d88a4fbbb8 P 4501249cf91c448a812d984395e88dfa [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "4501249cf91c448a812d984395e88dfa" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45603 } }
I20250809 19:59:42.528961 31572 catalog_manager.cc:5582] T 0fb22d7c2d4243bb8d1aa9d88a4fbbb8 P 4501249cf91c448a812d984395e88dfa reported cstate change: term changed from 0 to 1, leader changed from <none> to 4501249cf91c448a812d984395e88dfa (127.25.124.129). New cstate: current_term: 1 leader_uuid: "4501249cf91c448a812d984395e88dfa" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "4501249cf91c448a812d984395e88dfa" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 45603 } health_report { overall_health: HEALTHY } } }
I20250809 19:59:42.553325 26098 external_mini_cluster.cc:949] 1 TS(s) registered with all masters
I20250809 19:59:42.555738 26098 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 4501249cf91c448a812d984395e88dfa to finish bootstrapping
I20250809 19:59:45.016127 26098 external_mini_cluster.cc:1658] Killing /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu with pid 31631
I20250809 19:59:45.050153 26098 external_mini_cluster.cc:1658] Killing /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu with pid 31539
2025-08-09T19:59:45Z chronyd exiting
[ OK ] ListTableCliSimpleParamTest.TestListTables/2 (6658 ms)
[----------] 1 test from ListTableCliSimpleParamTest (6658 ms total)
[----------] 1 test from ListTableCliParamTest
[ RUN ] ListTableCliParamTest.ListTabletWithPartitionInfo/4
I20250809 19:59:45.102296 26098 test_util.cc:276] Using random seed: 566779328
[ OK ] ListTableCliParamTest.ListTabletWithPartitionInfo/4 (11 ms)
[----------] 1 test from ListTableCliParamTest (11 ms total)
[----------] 1 test from IsSecure/SecureClusterAdminCliParamTest
[ RUN ] IsSecure/SecureClusterAdminCliParamTest.TestRebuildMaster/0
2025-08-09T19:59:45Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-08-09T19:59:45Z Disabled control of system clock
I20250809 19:59:45.146709 26098 external_mini_cluster.cc:1366] Running /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
/tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/master-0/wal
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/master-0/logs
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.25.124.190:44801
--webserver_interface=127.25.124.190
--webserver_port=0
--builtin_ntp_servers=127.25.124.148:36863
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.25.124.190:44801 with env {}
W20250809 19:59:45.404570 31804 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 19:59:45.405121 31804 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250809 19:59:45.405828 31804 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 19:59:45.436786 31804 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250809 19:59:45.437047 31804 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250809 19:59:45.437263 31804 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250809 19:59:45.437464 31804 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250809 19:59:45.471426 31804 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.25.124.148:36863
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/master-0/data
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.25.124.190:44801
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.25.124.190:44801
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/master-0/data/info.pb
--webserver_interface=127.25.124.190
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 09 Aug 2025 19:43:21 UTC on 5fd53c4cbb9d
build id 7489
TSAN enabled
I20250809 19:59:45.472502 31804 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250809 19:59:45.473956 31804 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250809 19:59:45.483577 31810 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:59:45.484130 31811 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:59:46.493230 31813 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:59:46.495285 31812 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1007 milliseconds
I20250809 19:59:46.495355 31804 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250809 19:59:46.496390 31804 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250809 19:59:46.498586 31804 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250809 19:59:46.499894 31804 hybrid_clock.cc:648] HybridClock initialized: now 1754769586499869 us; error 30 us; skew 500 ppm
I20250809 19:59:46.500573 31804 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250809 19:59:46.505587 31804 webserver.cc:489] Webserver started at http://127.25.124.190:43735/ using document root <none> and password file <none>
I20250809 19:59:46.506343 31804 fs_manager.cc:362] Metadata directory not provided
I20250809 19:59:46.506529 31804 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250809 19:59:46.506907 31804 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250809 19:59:46.510568 31804 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/master-0/data/instance:
uuid: "4eac797f9fe540cca7f72297855e6a1c"
format_stamp: "Formatted at 2025-08-09 19:59:46 on dist-test-slave-xzln"
I20250809 19:59:46.511529 31804 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/master-0/wal/instance:
uuid: "4eac797f9fe540cca7f72297855e6a1c"
format_stamp: "Formatted at 2025-08-09 19:59:46 on dist-test-slave-xzln"
I20250809 19:59:46.517370 31804 fs_manager.cc:696] Time spent creating directory manager: real 0.005s user 0.005s sys 0.000s
I20250809 19:59:46.521792 31820 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250809 19:59:46.522569 31804 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.002s sys 0.000s
I20250809 19:59:46.522827 31804 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/master-0/data,/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/master-0/wal
uuid: "4eac797f9fe540cca7f72297855e6a1c"
format_stamp: "Formatted at 2025-08-09 19:59:46 on dist-test-slave-xzln"
I20250809 19:59:46.523099 31804 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/master-0/wal
metadata directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/master-0/wal
1 data directories: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250809 19:59:46.569495 31804 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250809 19:59:46.570662 31804 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250809 19:59:46.571040 31804 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250809 19:59:46.628185 31804 rpc_server.cc:307] RPC server started. Bound to: 127.25.124.190:44801
I20250809 19:59:46.628242 31871 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.25.124.190:44801 every 8 connection(s)
I20250809 19:59:46.630425 31804 server_base.cc:1179] Dumped server information to /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/master-0/data/info.pb
I20250809 19:59:46.632426 26098 external_mini_cluster.cc:1428] Started /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu as pid 31804
I20250809 19:59:46.632901 26098 external_mini_cluster.cc:1442] Reading /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/master-0/wal/instance
I20250809 19:59:46.635807 31872 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250809 19:59:46.654891 31872 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 4eac797f9fe540cca7f72297855e6a1c: Bootstrap starting.
I20250809 19:59:46.659710 31872 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 4eac797f9fe540cca7f72297855e6a1c: Neither blocks nor log segments found. Creating new log.
I20250809 19:59:46.661450 31872 log.cc:826] T 00000000000000000000000000000000 P 4eac797f9fe540cca7f72297855e6a1c: Log is configured to *not* fsync() on all Append() calls
I20250809 19:59:46.664772 31872 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 4eac797f9fe540cca7f72297855e6a1c: No bootstrap required, opened a new log
I20250809 19:59:46.678686 31872 raft_consensus.cc:357] T 00000000000000000000000000000000 P 4eac797f9fe540cca7f72297855e6a1c [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "4eac797f9fe540cca7f72297855e6a1c" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 44801 } }
I20250809 19:59:46.679328 31872 raft_consensus.cc:383] T 00000000000000000000000000000000 P 4eac797f9fe540cca7f72297855e6a1c [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250809 19:59:46.679579 31872 raft_consensus.cc:738] T 00000000000000000000000000000000 P 4eac797f9fe540cca7f72297855e6a1c [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 4eac797f9fe540cca7f72297855e6a1c, State: Initialized, Role: FOLLOWER
I20250809 19:59:46.680227 31872 consensus_queue.cc:260] T 00000000000000000000000000000000 P 4eac797f9fe540cca7f72297855e6a1c [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "4eac797f9fe540cca7f72297855e6a1c" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 44801 } }
I20250809 19:59:46.680629 31872 raft_consensus.cc:397] T 00000000000000000000000000000000 P 4eac797f9fe540cca7f72297855e6a1c [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250809 19:59:46.680812 31872 raft_consensus.cc:491] T 00000000000000000000000000000000 P 4eac797f9fe540cca7f72297855e6a1c [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250809 19:59:46.681030 31872 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 4eac797f9fe540cca7f72297855e6a1c [term 0 FOLLOWER]: Advancing to term 1
I20250809 19:59:46.684266 31872 raft_consensus.cc:513] T 00000000000000000000000000000000 P 4eac797f9fe540cca7f72297855e6a1c [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "4eac797f9fe540cca7f72297855e6a1c" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 44801 } }
I20250809 19:59:46.684777 31872 leader_election.cc:304] T 00000000000000000000000000000000 P 4eac797f9fe540cca7f72297855e6a1c [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 4eac797f9fe540cca7f72297855e6a1c; no voters:
I20250809 19:59:46.686120 31872 leader_election.cc:290] T 00000000000000000000000000000000 P 4eac797f9fe540cca7f72297855e6a1c [CANDIDATE]: Term 1 election: Requested vote from peers
I20250809 19:59:46.687265 31877 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 4eac797f9fe540cca7f72297855e6a1c [term 1 FOLLOWER]: Leader election won for term 1
I20250809 19:59:46.689311 31877 raft_consensus.cc:695] T 00000000000000000000000000000000 P 4eac797f9fe540cca7f72297855e6a1c [term 1 LEADER]: Becoming Leader. State: Replica: 4eac797f9fe540cca7f72297855e6a1c, State: Running, Role: LEADER
I20250809 19:59:46.689589 31872 sys_catalog.cc:564] T 00000000000000000000000000000000 P 4eac797f9fe540cca7f72297855e6a1c [sys.catalog]: configured and running, proceeding with master startup.
I20250809 19:59:46.689960 31877 consensus_queue.cc:237] T 00000000000000000000000000000000 P 4eac797f9fe540cca7f72297855e6a1c [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "4eac797f9fe540cca7f72297855e6a1c" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 44801 } }
I20250809 19:59:46.699186 31878 sys_catalog.cc:455] T 00000000000000000000000000000000 P 4eac797f9fe540cca7f72297855e6a1c [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "4eac797f9fe540cca7f72297855e6a1c" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "4eac797f9fe540cca7f72297855e6a1c" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 44801 } } }
I20250809 19:59:46.699853 31878 sys_catalog.cc:458] T 00000000000000000000000000000000 P 4eac797f9fe540cca7f72297855e6a1c [sys.catalog]: This master's current role is: LEADER
I20250809 19:59:46.700976 31879 sys_catalog.cc:455] T 00000000000000000000000000000000 P 4eac797f9fe540cca7f72297855e6a1c [sys.catalog]: SysCatalogTable state changed. Reason: New leader 4eac797f9fe540cca7f72297855e6a1c. Latest consensus state: current_term: 1 leader_uuid: "4eac797f9fe540cca7f72297855e6a1c" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "4eac797f9fe540cca7f72297855e6a1c" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 44801 } } }
I20250809 19:59:46.701617 31879 sys_catalog.cc:458] T 00000000000000000000000000000000 P 4eac797f9fe540cca7f72297855e6a1c [sys.catalog]: This master's current role is: LEADER
I20250809 19:59:46.702179 31887 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250809 19:59:46.712975 31887 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250809 19:59:46.724745 31887 catalog_manager.cc:1349] Generated new cluster ID: 340992b62b114e47aa3dce7fbc244cf2
I20250809 19:59:46.724969 31887 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250809 19:59:46.757539 31887 catalog_manager.cc:1372] Generated new certificate authority record
I20250809 19:59:46.758680 31887 catalog_manager.cc:1506] Loading token signing keys...
I20250809 19:59:46.769775 31887 catalog_manager.cc:5955] T 00000000000000000000000000000000 P 4eac797f9fe540cca7f72297855e6a1c: Generated new TSK 0
I20250809 19:59:46.770502 31887 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250809 19:59:46.791754 26098 external_mini_cluster.cc:1366] Running /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
/tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-0/wal
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-0/logs
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.25.124.129:0
--local_ip_for_outbound_sockets=127.25.124.129
--webserver_interface=127.25.124.129
--webserver_port=0
--tserver_master_addrs=127.25.124.190:44801
--builtin_ntp_servers=127.25.124.148:36863
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
W20250809 19:59:47.043771 31896 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 19:59:47.044193 31896 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250809 19:59:47.044616 31896 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 19:59:47.071926 31896 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250809 19:59:47.072619 31896 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.25.124.129
I20250809 19:59:47.101158 31896 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.25.124.148:36863
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-0/data
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.25.124.129:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-0/data/info.pb
--webserver_interface=127.25.124.129
--webserver_port=0
--tserver_master_addrs=127.25.124.190:44801
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.25.124.129
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 09 Aug 2025 19:43:21 UTC on 5fd53c4cbb9d
build id 7489
TSAN enabled
I20250809 19:59:47.102252 31896 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250809 19:59:47.103657 31896 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250809 19:59:47.113791 31902 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:59:47.114833 31903 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:59:48.423990 31905 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:59:48.429025 31904 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1314 milliseconds
W20250809 19:59:48.432147 31896 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.317s user 0.474s sys 0.828s
W20250809 19:59:48.432479 31896 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.318s user 0.474s sys 0.828s
I20250809 19:59:48.432739 31896 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250809 19:59:48.434077 31896 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250809 19:59:48.436861 31896 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250809 19:59:48.438344 31896 hybrid_clock.cc:648] HybridClock initialized: now 1754769588438262 us; error 78 us; skew 500 ppm
I20250809 19:59:48.439589 31896 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250809 19:59:48.450671 31896 webserver.cc:489] Webserver started at http://127.25.124.129:37449/ using document root <none> and password file <none>
I20250809 19:59:48.451999 31896 fs_manager.cc:362] Metadata directory not provided
I20250809 19:59:48.452265 31896 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250809 19:59:48.452764 31896 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250809 19:59:48.458254 31896 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-0/data/instance:
uuid: "073eafa294ee49f99fea73c8b83d6cb0"
format_stamp: "Formatted at 2025-08-09 19:59:48 on dist-test-slave-xzln"
I20250809 19:59:48.459779 31896 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-0/wal/instance:
uuid: "073eafa294ee49f99fea73c8b83d6cb0"
format_stamp: "Formatted at 2025-08-09 19:59:48 on dist-test-slave-xzln"
I20250809 19:59:48.468657 31896 fs_manager.cc:696] Time spent creating directory manager: real 0.008s user 0.008s sys 0.000s
I20250809 19:59:48.475406 31912 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250809 19:59:48.476537 31896 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.002s sys 0.001s
I20250809 19:59:48.476862 31896 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-0/data,/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-0/wal
uuid: "073eafa294ee49f99fea73c8b83d6cb0"
format_stamp: "Formatted at 2025-08-09 19:59:48 on dist-test-slave-xzln"
I20250809 19:59:48.477244 31896 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-0/wal
metadata directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-0/wal
1 data directories: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250809 19:59:48.540901 31896 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250809 19:59:48.542500 31896 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250809 19:59:48.542956 31896 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250809 19:59:48.545822 31896 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250809 19:59:48.550562 31896 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250809 19:59:48.550796 31896 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250809 19:59:48.551049 31896 ts_tablet_manager.cc:610] Registered 0 tablets
I20250809 19:59:48.551304 31896 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.001s sys 0.001s
I20250809 19:59:48.698907 31896 rpc_server.cc:307] RPC server started. Bound to: 127.25.124.129:44645
I20250809 19:59:48.698959 32024 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.25.124.129:44645 every 8 connection(s)
I20250809 19:59:48.702107 31896 server_base.cc:1179] Dumped server information to /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-0/data/info.pb
I20250809 19:59:48.705242 26098 external_mini_cluster.cc:1428] Started /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu as pid 31896
I20250809 19:59:48.705687 26098 external_mini_cluster.cc:1442] Reading /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-0/wal/instance
I20250809 19:59:48.713513 26098 external_mini_cluster.cc:1366] Running /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
/tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-1/wal
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-1/logs
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.25.124.130:0
--local_ip_for_outbound_sockets=127.25.124.130
--webserver_interface=127.25.124.130
--webserver_port=0
--tserver_master_addrs=127.25.124.190:44801
--builtin_ntp_servers=127.25.124.148:36863
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
I20250809 19:59:48.743678 32025 heartbeater.cc:344] Connected to a master server at 127.25.124.190:44801
I20250809 19:59:48.744108 32025 heartbeater.cc:461] Registering TS with master...
I20250809 19:59:48.745155 32025 heartbeater.cc:507] Master 127.25.124.190:44801 requested a full tablet report, sending...
I20250809 19:59:48.747788 31837 ts_manager.cc:194] Registered new tserver with Master: 073eafa294ee49f99fea73c8b83d6cb0 (127.25.124.129:44645)
I20250809 19:59:48.750128 31837 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.25.124.129:52899
W20250809 19:59:48.978953 32029 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 19:59:48.979393 32029 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250809 19:59:48.979880 32029 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 19:59:49.005228 32029 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250809 19:59:49.005909 32029 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.25.124.130
I20250809 19:59:49.033910 32029 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.25.124.148:36863
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-1/data
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.25.124.130:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-1/data/info.pb
--webserver_interface=127.25.124.130
--webserver_port=0
--tserver_master_addrs=127.25.124.190:44801
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.25.124.130
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 09 Aug 2025 19:43:21 UTC on 5fd53c4cbb9d
build id 7489
TSAN enabled
I20250809 19:59:49.034996 32029 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250809 19:59:49.036381 32029 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250809 19:59:49.047266 32035 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250809 19:59:49.753587 32025 heartbeater.cc:499] Master 127.25.124.190:44801 was elected leader, sending a full tablet report...
W20250809 19:59:49.047547 32036 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:59:50.216818 32038 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:59:50.218328 32037 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1166 milliseconds
W20250809 19:59:50.219370 32029 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.172s user 0.412s sys 0.748s
W20250809 19:59:50.219612 32029 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.173s user 0.412s sys 0.748s
I20250809 19:59:50.219794 32029 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250809 19:59:50.220696 32029 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250809 19:59:50.222679 32029 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250809 19:59:50.223977 32029 hybrid_clock.cc:648] HybridClock initialized: now 1754769590223942 us; error 40 us; skew 500 ppm
I20250809 19:59:50.224670 32029 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250809 19:59:50.234527 32029 webserver.cc:489] Webserver started at http://127.25.124.130:46081/ using document root <none> and password file <none>
I20250809 19:59:50.235395 32029 fs_manager.cc:362] Metadata directory not provided
I20250809 19:59:50.235584 32029 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250809 19:59:50.235965 32029 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250809 19:59:50.239701 32029 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-1/data/instance:
uuid: "b4f96236fb9e4a93bfed44c6e495ded7"
format_stamp: "Formatted at 2025-08-09 19:59:50 on dist-test-slave-xzln"
I20250809 19:59:50.240613 32029 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-1/wal/instance:
uuid: "b4f96236fb9e4a93bfed44c6e495ded7"
format_stamp: "Formatted at 2025-08-09 19:59:50 on dist-test-slave-xzln"
I20250809 19:59:50.247241 32029 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.005s sys 0.002s
I20250809 19:59:50.252151 32045 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250809 19:59:50.253118 32029 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.003s sys 0.002s
I20250809 19:59:50.253386 32029 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-1/data,/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-1/wal
uuid: "b4f96236fb9e4a93bfed44c6e495ded7"
format_stamp: "Formatted at 2025-08-09 19:59:50 on dist-test-slave-xzln"
I20250809 19:59:50.253654 32029 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-1/wal
metadata directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-1/wal
1 data directories: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250809 19:59:50.314738 32029 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250809 19:59:50.316357 32029 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250809 19:59:50.316813 32029 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250809 19:59:50.319321 32029 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250809 19:59:50.323894 32029 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250809 19:59:50.324142 32029 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250809 19:59:50.324438 32029 ts_tablet_manager.cc:610] Registered 0 tablets
I20250809 19:59:50.324651 32029 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250809 19:59:50.448668 32029 rpc_server.cc:307] RPC server started. Bound to: 127.25.124.130:35769
I20250809 19:59:50.448763 32157 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.25.124.130:35769 every 8 connection(s)
I20250809 19:59:50.451104 32029 server_base.cc:1179] Dumped server information to /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-1/data/info.pb
I20250809 19:59:50.459164 26098 external_mini_cluster.cc:1428] Started /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu as pid 32029
I20250809 19:59:50.459666 26098 external_mini_cluster.cc:1442] Reading /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-1/wal/instance
I20250809 19:59:50.466609 26098 external_mini_cluster.cc:1366] Running /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
/tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-2/wal
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-2/logs
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.25.124.131:0
--local_ip_for_outbound_sockets=127.25.124.131
--webserver_interface=127.25.124.131
--webserver_port=0
--tserver_master_addrs=127.25.124.190:44801
--builtin_ntp_servers=127.25.124.148:36863
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
I20250809 19:59:50.472950 32158 heartbeater.cc:344] Connected to a master server at 127.25.124.190:44801
I20250809 19:59:50.473435 32158 heartbeater.cc:461] Registering TS with master...
I20250809 19:59:50.474609 32158 heartbeater.cc:507] Master 127.25.124.190:44801 requested a full tablet report, sending...
I20250809 19:59:50.476715 31837 ts_manager.cc:194] Registered new tserver with Master: b4f96236fb9e4a93bfed44c6e495ded7 (127.25.124.130:35769)
I20250809 19:59:50.477907 31837 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.25.124.130:36083
W20250809 19:59:50.733019 32162 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 19:59:50.733386 32162 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250809 19:59:50.733820 32162 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 19:59:50.759518 32162 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250809 19:59:50.760465 32162 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.25.124.131
I20250809 19:59:50.788928 32162 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.25.124.148:36863
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-2/data
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.25.124.131:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-2/data/info.pb
--webserver_interface=127.25.124.131
--webserver_port=0
--tserver_master_addrs=127.25.124.190:44801
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.25.124.131
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 09 Aug 2025 19:43:21 UTC on 5fd53c4cbb9d
build id 7489
TSAN enabled
I20250809 19:59:50.789994 32162 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250809 19:59:50.791492 32162 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250809 19:59:50.801288 32168 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250809 19:59:51.480638 32158 heartbeater.cc:499] Master 127.25.124.190:44801 was elected leader, sending a full tablet report...
W20250809 19:59:50.802040 32169 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:59:51.815833 32171 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:59:51.817375 32170 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1012 milliseconds
I20250809 19:59:51.817495 32162 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250809 19:59:51.818523 32162 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250809 19:59:51.820385 32162 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250809 19:59:51.821677 32162 hybrid_clock.cc:648] HybridClock initialized: now 1754769591821641 us; error 39 us; skew 500 ppm
I20250809 19:59:51.822340 32162 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250809 19:59:51.827451 32162 webserver.cc:489] Webserver started at http://127.25.124.131:33835/ using document root <none> and password file <none>
I20250809 19:59:51.828220 32162 fs_manager.cc:362] Metadata directory not provided
I20250809 19:59:51.828397 32162 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250809 19:59:51.828805 32162 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250809 19:59:51.832526 32162 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-2/data/instance:
uuid: "94180891930c4216b9293319c2974fb8"
format_stamp: "Formatted at 2025-08-09 19:59:51 on dist-test-slave-xzln"
I20250809 19:59:51.833441 32162 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-2/wal/instance:
uuid: "94180891930c4216b9293319c2974fb8"
format_stamp: "Formatted at 2025-08-09 19:59:51 on dist-test-slave-xzln"
I20250809 19:59:51.839380 32162 fs_manager.cc:696] Time spent creating directory manager: real 0.005s user 0.007s sys 0.001s
I20250809 19:59:51.843971 32178 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250809 19:59:51.844784 32162 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.004s sys 0.000s
I20250809 19:59:51.845047 32162 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-2/data,/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-2/wal
uuid: "94180891930c4216b9293319c2974fb8"
format_stamp: "Formatted at 2025-08-09 19:59:51 on dist-test-slave-xzln"
I20250809 19:59:51.845319 32162 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-2/wal
metadata directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-2/wal
1 data directories: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250809 19:59:51.887184 32162 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250809 19:59:51.888365 32162 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250809 19:59:51.888729 32162 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250809 19:59:51.890794 32162 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250809 19:59:51.894176 32162 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250809 19:59:51.894361 32162 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250809 19:59:51.894605 32162 ts_tablet_manager.cc:610] Registered 0 tablets
I20250809 19:59:51.894743 32162 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250809 19:59:52.008137 32162 rpc_server.cc:307] RPC server started. Bound to: 127.25.124.131:41893
I20250809 19:59:52.008219 32290 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.25.124.131:41893 every 8 connection(s)
I20250809 19:59:52.010264 32162 server_base.cc:1179] Dumped server information to /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-2/data/info.pb
I20250809 19:59:52.019240 26098 external_mini_cluster.cc:1428] Started /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu as pid 32162
I20250809 19:59:52.019661 26098 external_mini_cluster.cc:1442] Reading /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-2/wal/instance
I20250809 19:59:52.031059 32291 heartbeater.cc:344] Connected to a master server at 127.25.124.190:44801
I20250809 19:59:52.031404 32291 heartbeater.cc:461] Registering TS with master...
I20250809 19:59:52.032176 32291 heartbeater.cc:507] Master 127.25.124.190:44801 requested a full tablet report, sending...
I20250809 19:59:52.033798 31837 ts_manager.cc:194] Registered new tserver with Master: 94180891930c4216b9293319c2974fb8 (127.25.124.131:41893)
I20250809 19:59:52.034816 31837 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.25.124.131:60785
I20250809 19:59:52.038321 26098 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250809 19:59:52.059128 26098 test_util.cc:276] Using random seed: 573736166
I20250809 19:59:52.091418 31837 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:58294:
name: "pre_rebuild"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
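[editor's note] The CreateTable request above fully specifies the pre_rebuild table: three columns (key INT32 primary key, int_val INT32, string_val nullable STRING), a range partition on key, and three replicas. As a rough illustration only, a client-side call that would produce an equivalent request might look like the following kudu-python sketch. The master address is copied from this log; the specific client API details (type constants, Partitioning.set_range_partition_columns, the n_replicas keyword) are assumptions about the kudu-python bindings, not something taken from this test.

    import kudu
    from kudu.client import Partitioning

    # Connect to the leader master seen in this log (assumed reachable).
    client = kudu.connect(host='127.25.124.190', port=44801)

    # Rebuild the schema shown in the CreateTable request above.
    builder = kudu.schema_builder()
    builder.add_column('key').type(kudu.int32).nullable(False).primary_key()
    builder.add_column('int_val').type(kudu.int32).nullable(False)
    builder.add_column('string_val').type(kudu.string).nullable(True)
    schema = builder.build()

    # Single unbounded range partition on the key column,
    # matching "partition_schema { range_schema { columns { name: "key" } } }".
    partitioning = Partitioning().set_range_partition_columns(['key'])

    # num_replicas: 3 in the request; the n_replicas keyword is an assumption
    # about the client signature and may differ across client versions.
    client.create_table('pre_rebuild', schema, partitioning, n_replicas=3)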
W20250809 19:59:52.093410 31837 catalog_manager.cc:6944] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table pre_rebuild in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250809 19:59:52.131973 32226 tablet_service.cc:1468] Processing CreateTablet for tablet 5bc7b51263f8423d89931e8b5e732b32 (DEFAULT_TABLE table=pre_rebuild [id=02b30257ee714879b5090a9d9bad082e]), partition=RANGE (key) PARTITION UNBOUNDED
I20250809 19:59:52.136250 32226 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 5bc7b51263f8423d89931e8b5e732b32. 1 dirs total, 0 dirs full, 0 dirs failed
I20250809 19:59:52.135758 31960 tablet_service.cc:1468] Processing CreateTablet for tablet 5bc7b51263f8423d89931e8b5e732b32 (DEFAULT_TABLE table=pre_rebuild [id=02b30257ee714879b5090a9d9bad082e]), partition=RANGE (key) PARTITION UNBOUNDED
I20250809 19:59:52.137416 31960 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 5bc7b51263f8423d89931e8b5e732b32. 1 dirs total, 0 dirs full, 0 dirs failed
I20250809 19:59:52.137708 32093 tablet_service.cc:1468] Processing CreateTablet for tablet 5bc7b51263f8423d89931e8b5e732b32 (DEFAULT_TABLE table=pre_rebuild [id=02b30257ee714879b5090a9d9bad082e]), partition=RANGE (key) PARTITION UNBOUNDED
I20250809 19:59:52.139237 32093 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 5bc7b51263f8423d89931e8b5e732b32. 1 dirs total, 0 dirs full, 0 dirs failed
I20250809 19:59:52.154373 32315 tablet_bootstrap.cc:492] T 5bc7b51263f8423d89931e8b5e732b32 P 94180891930c4216b9293319c2974fb8: Bootstrap starting.
I20250809 19:59:52.160079 32315 tablet_bootstrap.cc:654] T 5bc7b51263f8423d89931e8b5e732b32 P 94180891930c4216b9293319c2974fb8: Neither blocks nor log segments found. Creating new log.
I20250809 19:59:52.160975 32316 tablet_bootstrap.cc:492] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0: Bootstrap starting.
I20250809 19:59:52.162216 32315 log.cc:826] T 5bc7b51263f8423d89931e8b5e732b32 P 94180891930c4216b9293319c2974fb8: Log is configured to *not* fsync() on all Append() calls
I20250809 19:59:52.163178 32317 tablet_bootstrap.cc:492] T 5bc7b51263f8423d89931e8b5e732b32 P b4f96236fb9e4a93bfed44c6e495ded7: Bootstrap starting.
I20250809 19:59:52.167783 32316 tablet_bootstrap.cc:654] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0: Neither blocks nor log segments found. Creating new log.
I20250809 19:59:52.167989 32317 tablet_bootstrap.cc:654] T 5bc7b51263f8423d89931e8b5e732b32 P b4f96236fb9e4a93bfed44c6e495ded7: Neither blocks nor log segments found. Creating new log.
I20250809 19:59:52.169361 32315 tablet_bootstrap.cc:492] T 5bc7b51263f8423d89931e8b5e732b32 P 94180891930c4216b9293319c2974fb8: No bootstrap required, opened a new log
I20250809 19:59:52.169831 32315 ts_tablet_manager.cc:1397] T 5bc7b51263f8423d89931e8b5e732b32 P 94180891930c4216b9293319c2974fb8: Time spent bootstrapping tablet: real 0.016s user 0.015s sys 0.000s
I20250809 19:59:52.169786 32316 log.cc:826] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0: Log is configured to *not* fsync() on all Append() calls
I20250809 19:59:52.169999 32317 log.cc:826] T 5bc7b51263f8423d89931e8b5e732b32 P b4f96236fb9e4a93bfed44c6e495ded7: Log is configured to *not* fsync() on all Append() calls
I20250809 19:59:52.174561 32316 tablet_bootstrap.cc:492] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0: No bootstrap required, opened a new log
I20250809 19:59:52.174630 32317 tablet_bootstrap.cc:492] T 5bc7b51263f8423d89931e8b5e732b32 P b4f96236fb9e4a93bfed44c6e495ded7: No bootstrap required, opened a new log
I20250809 19:59:52.174893 32316 ts_tablet_manager.cc:1397] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0: Time spent bootstrapping tablet: real 0.014s user 0.008s sys 0.004s
I20250809 19:59:52.175017 32317 ts_tablet_manager.cc:1397] T 5bc7b51263f8423d89931e8b5e732b32 P b4f96236fb9e4a93bfed44c6e495ded7: Time spent bootstrapping tablet: real 0.012s user 0.006s sys 0.004s
I20250809 19:59:52.189466 32316 raft_consensus.cc:357] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "94180891930c4216b9293319c2974fb8" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41893 } } peers { permanent_uuid: "073eafa294ee49f99fea73c8b83d6cb0" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 44645 } } peers { permanent_uuid: "b4f96236fb9e4a93bfed44c6e495ded7" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 35769 } }
I20250809 19:59:52.190021 32316 raft_consensus.cc:383] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250809 19:59:52.190230 32316 raft_consensus.cc:738] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 073eafa294ee49f99fea73c8b83d6cb0, State: Initialized, Role: FOLLOWER
I20250809 19:59:52.190888 32316 consensus_queue.cc:260] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "94180891930c4216b9293319c2974fb8" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41893 } } peers { permanent_uuid: "073eafa294ee49f99fea73c8b83d6cb0" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 44645 } } peers { permanent_uuid: "b4f96236fb9e4a93bfed44c6e495ded7" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 35769 } }
I20250809 19:59:52.192500 32315 raft_consensus.cc:357] T 5bc7b51263f8423d89931e8b5e732b32 P 94180891930c4216b9293319c2974fb8 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "94180891930c4216b9293319c2974fb8" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41893 } } peers { permanent_uuid: "073eafa294ee49f99fea73c8b83d6cb0" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 44645 } } peers { permanent_uuid: "b4f96236fb9e4a93bfed44c6e495ded7" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 35769 } }
I20250809 19:59:52.193288 32315 raft_consensus.cc:383] T 5bc7b51263f8423d89931e8b5e732b32 P 94180891930c4216b9293319c2974fb8 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250809 19:59:52.193567 32315 raft_consensus.cc:738] T 5bc7b51263f8423d89931e8b5e732b32 P 94180891930c4216b9293319c2974fb8 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 94180891930c4216b9293319c2974fb8, State: Initialized, Role: FOLLOWER
I20250809 19:59:52.194344 32316 ts_tablet_manager.cc:1428] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0: Time spent starting tablet: real 0.019s user 0.020s sys 0.000s
I20250809 19:59:52.194370 32315 consensus_queue.cc:260] T 5bc7b51263f8423d89931e8b5e732b32 P 94180891930c4216b9293319c2974fb8 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "94180891930c4216b9293319c2974fb8" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41893 } } peers { permanent_uuid: "073eafa294ee49f99fea73c8b83d6cb0" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 44645 } } peers { permanent_uuid: "b4f96236fb9e4a93bfed44c6e495ded7" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 35769 } }
I20250809 19:59:52.197104 32291 heartbeater.cc:499] Master 127.25.124.190:44801 was elected leader, sending a full tablet report...
I20250809 19:59:52.197386 32317 raft_consensus.cc:357] T 5bc7b51263f8423d89931e8b5e732b32 P b4f96236fb9e4a93bfed44c6e495ded7 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "94180891930c4216b9293319c2974fb8" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41893 } } peers { permanent_uuid: "073eafa294ee49f99fea73c8b83d6cb0" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 44645 } } peers { permanent_uuid: "b4f96236fb9e4a93bfed44c6e495ded7" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 35769 } }
I20250809 19:59:52.198127 32317 raft_consensus.cc:383] T 5bc7b51263f8423d89931e8b5e732b32 P b4f96236fb9e4a93bfed44c6e495ded7 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250809 19:59:52.198293 32315 ts_tablet_manager.cc:1428] T 5bc7b51263f8423d89931e8b5e732b32 P 94180891930c4216b9293319c2974fb8: Time spent starting tablet: real 0.028s user 0.022s sys 0.004s
I20250809 19:59:52.198372 32317 raft_consensus.cc:738] T 5bc7b51263f8423d89931e8b5e732b32 P b4f96236fb9e4a93bfed44c6e495ded7 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: b4f96236fb9e4a93bfed44c6e495ded7, State: Initialized, Role: FOLLOWER
I20250809 19:59:52.199158 32317 consensus_queue.cc:260] T 5bc7b51263f8423d89931e8b5e732b32 P b4f96236fb9e4a93bfed44c6e495ded7 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "94180891930c4216b9293319c2974fb8" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41893 } } peers { permanent_uuid: "073eafa294ee49f99fea73c8b83d6cb0" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 44645 } } peers { permanent_uuid: "b4f96236fb9e4a93bfed44c6e495ded7" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 35769 } }
I20250809 19:59:52.202768 32317 ts_tablet_manager.cc:1428] T 5bc7b51263f8423d89931e8b5e732b32 P b4f96236fb9e4a93bfed44c6e495ded7: Time spent starting tablet: real 0.027s user 0.023s sys 0.003s
W20250809 19:59:52.206892 32159 tablet.cc:2378] T 5bc7b51263f8423d89931e8b5e732b32 P b4f96236fb9e4a93bfed44c6e495ded7: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250809 19:59:52.215417 32321 raft_consensus.cc:491] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250809 19:59:52.215840 32321 raft_consensus.cc:513] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "94180891930c4216b9293319c2974fb8" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41893 } } peers { permanent_uuid: "073eafa294ee49f99fea73c8b83d6cb0" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 44645 } } peers { permanent_uuid: "b4f96236fb9e4a93bfed44c6e495ded7" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 35769 } }
I20250809 19:59:52.217970 32321 leader_election.cc:290] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 94180891930c4216b9293319c2974fb8 (127.25.124.131:41893), b4f96236fb9e4a93bfed44c6e495ded7 (127.25.124.130:35769)
I20250809 19:59:52.228521 32113 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "5bc7b51263f8423d89931e8b5e732b32" candidate_uuid: "073eafa294ee49f99fea73c8b83d6cb0" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "b4f96236fb9e4a93bfed44c6e495ded7" is_pre_election: true
I20250809 19:59:52.228533 32246 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "5bc7b51263f8423d89931e8b5e732b32" candidate_uuid: "073eafa294ee49f99fea73c8b83d6cb0" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "94180891930c4216b9293319c2974fb8" is_pre_election: true
I20250809 19:59:52.229074 32113 raft_consensus.cc:2466] T 5bc7b51263f8423d89931e8b5e732b32 P b4f96236fb9e4a93bfed44c6e495ded7 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 073eafa294ee49f99fea73c8b83d6cb0 in term 0.
W20250809 19:59:52.229163 32026 tablet.cc:2378] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250809 19:59:52.229133 32246 raft_consensus.cc:2466] T 5bc7b51263f8423d89931e8b5e732b32 P 94180891930c4216b9293319c2974fb8 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 073eafa294ee49f99fea73c8b83d6cb0 in term 0.
I20250809 19:59:52.230069 31915 leader_election.cc:304] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 073eafa294ee49f99fea73c8b83d6cb0, b4f96236fb9e4a93bfed44c6e495ded7; no voters:
I20250809 19:59:52.230638 32321 raft_consensus.cc:2802] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250809 19:59:52.230846 32321 raft_consensus.cc:491] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250809 19:59:52.231068 32321 raft_consensus.cc:3058] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0 [term 0 FOLLOWER]: Advancing to term 1
I20250809 19:59:52.234838 32321 raft_consensus.cc:513] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "94180891930c4216b9293319c2974fb8" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41893 } } peers { permanent_uuid: "073eafa294ee49f99fea73c8b83d6cb0" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 44645 } } peers { permanent_uuid: "b4f96236fb9e4a93bfed44c6e495ded7" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 35769 } }
I20250809 19:59:52.235908 32321 leader_election.cc:290] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0 [CANDIDATE]: Term 1 election: Requested vote from peers 94180891930c4216b9293319c2974fb8 (127.25.124.131:41893), b4f96236fb9e4a93bfed44c6e495ded7 (127.25.124.130:35769)
I20250809 19:59:52.236515 32246 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "5bc7b51263f8423d89931e8b5e732b32" candidate_uuid: "073eafa294ee49f99fea73c8b83d6cb0" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "94180891930c4216b9293319c2974fb8"
I20250809 19:59:52.236599 32113 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "5bc7b51263f8423d89931e8b5e732b32" candidate_uuid: "073eafa294ee49f99fea73c8b83d6cb0" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "b4f96236fb9e4a93bfed44c6e495ded7"
I20250809 19:59:52.236848 32246 raft_consensus.cc:3058] T 5bc7b51263f8423d89931e8b5e732b32 P 94180891930c4216b9293319c2974fb8 [term 0 FOLLOWER]: Advancing to term 1
I20250809 19:59:52.236933 32113 raft_consensus.cc:3058] T 5bc7b51263f8423d89931e8b5e732b32 P b4f96236fb9e4a93bfed44c6e495ded7 [term 0 FOLLOWER]: Advancing to term 1
I20250809 19:59:52.240418 32246 raft_consensus.cc:2466] T 5bc7b51263f8423d89931e8b5e732b32 P 94180891930c4216b9293319c2974fb8 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 073eafa294ee49f99fea73c8b83d6cb0 in term 1.
I20250809 19:59:52.240507 32113 raft_consensus.cc:2466] T 5bc7b51263f8423d89931e8b5e732b32 P b4f96236fb9e4a93bfed44c6e495ded7 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 073eafa294ee49f99fea73c8b83d6cb0 in term 1.
I20250809 19:59:52.241211 31916 leader_election.cc:304] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 073eafa294ee49f99fea73c8b83d6cb0, 94180891930c4216b9293319c2974fb8; no voters:
I20250809 19:59:52.241724 32321 raft_consensus.cc:2802] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0 [term 1 FOLLOWER]: Leader election won for term 1
I20250809 19:59:52.242928 32321 raft_consensus.cc:695] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0 [term 1 LEADER]: Becoming Leader. State: Replica: 073eafa294ee49f99fea73c8b83d6cb0, State: Running, Role: LEADER
I20250809 19:59:52.243532 32321 consensus_queue.cc:237] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "94180891930c4216b9293319c2974fb8" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41893 } } peers { permanent_uuid: "073eafa294ee49f99fea73c8b83d6cb0" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 44645 } } peers { permanent_uuid: "b4f96236fb9e4a93bfed44c6e495ded7" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 35769 } }
I20250809 19:59:52.251395 31837 catalog_manager.cc:5582] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0 reported cstate change: term changed from 0 to 1, leader changed from <none> to 073eafa294ee49f99fea73c8b83d6cb0 (127.25.124.129). New cstate: current_term: 1 leader_uuid: "073eafa294ee49f99fea73c8b83d6cb0" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "94180891930c4216b9293319c2974fb8" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41893 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "073eafa294ee49f99fea73c8b83d6cb0" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 44645 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "b4f96236fb9e4a93bfed44c6e495ded7" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 35769 } health_report { overall_health: UNKNOWN } } }
W20250809 19:59:52.264067 32292 tablet.cc:2378] T 5bc7b51263f8423d89931e8b5e732b32 P 94180891930c4216b9293319c2974fb8: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250809 19:59:52.396183 32113 raft_consensus.cc:1273] T 5bc7b51263f8423d89931e8b5e732b32 P b4f96236fb9e4a93bfed44c6e495ded7 [term 1 FOLLOWER]: Refusing update from remote peer 073eafa294ee49f99fea73c8b83d6cb0: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250809 19:59:52.396452 32246 raft_consensus.cc:1273] T 5bc7b51263f8423d89931e8b5e732b32 P 94180891930c4216b9293319c2974fb8 [term 1 FOLLOWER]: Refusing update from remote peer 073eafa294ee49f99fea73c8b83d6cb0: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250809 19:59:52.397531 32326 consensus_queue.cc:1035] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0 [LEADER]: Connected to new peer: Peer: permanent_uuid: "b4f96236fb9e4a93bfed44c6e495ded7" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 35769 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250809 19:59:52.398108 32321 consensus_queue.cc:1035] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0 [LEADER]: Connected to new peer: Peer: permanent_uuid: "94180891930c4216b9293319c2974fb8" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41893 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250809 19:59:56.539989 26098 external_mini_cluster.cc:1658] Killing /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu with pid 31804
W20250809 19:59:56.600594 32158 heartbeater.cc:646] Failed to heartbeat to 127.25.124.190:44801 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.25.124.190:44801: connect: Connection refused (error 111)
W20250809 19:59:56.600649 32291 heartbeater.cc:646] Failed to heartbeat to 127.25.124.190:44801 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.25.124.190:44801: connect: Connection refused (error 111)
W20250809 19:59:56.602778 32025 heartbeater.cc:646] Failed to heartbeat to 127.25.124.190:44801 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.25.124.190:44801: connect: Connection refused (error 111)
W20250809 19:59:56.847153 32364 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 19:59:56.847668 32364 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 19:59:56.872753 32364 flags.cc:425] Enabled experimental flag: --enable_multi_tenancy=false
W20250809 19:59:58.224018 32364 thread.cc:641] rpc reactor (reactor) Time spent creating pthread: real 1.317s user 0.443s sys 0.855s
W20250809 19:59:58.224288 32364 thread.cc:608] rpc reactor (reactor) Time spent starting thread: real 1.317s user 0.443s sys 0.855s
I20250809 19:59:58.308212 32364 minidump.cc:252] Setting minidump size limit to 20M
I20250809 19:59:58.309849 32364 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250809 19:59:58.310737 32364 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250809 19:59:58.319247 32398 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:59:58.320418 32399 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 19:59:58.322459 32401 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250809 19:59:58.388043 32364 server_base.cc:1047] running on GCE node
I20250809 19:59:58.388988 32364 hybrid_clock.cc:584] initializing the hybrid clock with 'system' time source
I20250809 19:59:58.389403 32364 hybrid_clock.cc:648] HybridClock initialized: now 1754769598389383 us; error 264016 us; skew 500 ppm
I20250809 19:59:58.390034 32364 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250809 19:59:58.394016 32364 webserver.cc:489] Webserver started at http://0.0.0.0:34557/ using document root <none> and password file <none>
I20250809 19:59:58.394852 32364 fs_manager.cc:362] Metadata directory not provided
I20250809 19:59:58.395051 32364 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250809 19:59:58.395522 32364 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250809 19:59:58.399044 32364 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/master-0/data/instance:
uuid: "9b21be77b1b74c4a8035721db223bb53"
format_stamp: "Formatted at 2025-08-09 19:59:58 on dist-test-slave-xzln"
I20250809 19:59:58.400020 32364 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/master-0/wal/instance:
uuid: "9b21be77b1b74c4a8035721db223bb53"
format_stamp: "Formatted at 2025-08-09 19:59:58 on dist-test-slave-xzln"
I20250809 19:59:58.405094 32364 fs_manager.cc:696] Time spent creating directory manager: real 0.005s user 0.004s sys 0.001s
I20250809 19:59:58.409060 32406 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250809 19:59:58.409771 32364 fs_manager.cc:730] Time spent opening block manager: real 0.002s user 0.002s sys 0.000s
I20250809 19:59:58.410024 32364 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/master-0/data,/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/master-0/wal
uuid: "9b21be77b1b74c4a8035721db223bb53"
format_stamp: "Formatted at 2025-08-09 19:59:58 on dist-test-slave-xzln"
I20250809 19:59:58.410329 32364 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/master-0/wal
metadata directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/master-0/wal
1 data directories: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250809 19:59:58.487893 32364 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250809 19:59:58.489058 32364 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250809 19:59:58.489392 32364 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250809 19:59:58.493234 32364 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250809 19:59:58.505512 32364 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 9b21be77b1b74c4a8035721db223bb53: Bootstrap starting.
I20250809 19:59:58.509649 32364 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 9b21be77b1b74c4a8035721db223bb53: Neither blocks nor log segments found. Creating new log.
I20250809 19:59:58.511034 32364 log.cc:826] T 00000000000000000000000000000000 P 9b21be77b1b74c4a8035721db223bb53: Log is configured to *not* fsync() on all Append() calls
I20250809 19:59:58.514205 32364 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 9b21be77b1b74c4a8035721db223bb53: No bootstrap required, opened a new log
I20250809 19:59:58.527087 32364 raft_consensus.cc:357] T 00000000000000000000000000000000 P 9b21be77b1b74c4a8035721db223bb53 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9b21be77b1b74c4a8035721db223bb53" member_type: VOTER }
I20250809 19:59:58.527529 32364 raft_consensus.cc:383] T 00000000000000000000000000000000 P 9b21be77b1b74c4a8035721db223bb53 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250809 19:59:58.527716 32364 raft_consensus.cc:738] T 00000000000000000000000000000000 P 9b21be77b1b74c4a8035721db223bb53 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 9b21be77b1b74c4a8035721db223bb53, State: Initialized, Role: FOLLOWER
I20250809 19:59:58.528307 32364 consensus_queue.cc:260] T 00000000000000000000000000000000 P 9b21be77b1b74c4a8035721db223bb53 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9b21be77b1b74c4a8035721db223bb53" member_type: VOTER }
I20250809 19:59:58.528697 32364 raft_consensus.cc:397] T 00000000000000000000000000000000 P 9b21be77b1b74c4a8035721db223bb53 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250809 19:59:58.528898 32364 raft_consensus.cc:491] T 00000000000000000000000000000000 P 9b21be77b1b74c4a8035721db223bb53 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250809 19:59:58.529142 32364 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 9b21be77b1b74c4a8035721db223bb53 [term 0 FOLLOWER]: Advancing to term 1
I20250809 19:59:58.532222 32364 raft_consensus.cc:513] T 00000000000000000000000000000000 P 9b21be77b1b74c4a8035721db223bb53 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9b21be77b1b74c4a8035721db223bb53" member_type: VOTER }
I20250809 19:59:58.532781 32364 leader_election.cc:304] T 00000000000000000000000000000000 P 9b21be77b1b74c4a8035721db223bb53 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 9b21be77b1b74c4a8035721db223bb53; no voters:
I20250809 19:59:58.534121 32364 leader_election.cc:290] T 00000000000000000000000000000000 P 9b21be77b1b74c4a8035721db223bb53 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250809 19:59:58.534313 32413 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 9b21be77b1b74c4a8035721db223bb53 [term 1 FOLLOWER]: Leader election won for term 1
I20250809 19:59:58.536885 32413 raft_consensus.cc:695] T 00000000000000000000000000000000 P 9b21be77b1b74c4a8035721db223bb53 [term 1 LEADER]: Becoming Leader. State: Replica: 9b21be77b1b74c4a8035721db223bb53, State: Running, Role: LEADER
I20250809 19:59:58.537616 32413 consensus_queue.cc:237] T 00000000000000000000000000000000 P 9b21be77b1b74c4a8035721db223bb53 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9b21be77b1b74c4a8035721db223bb53" member_type: VOTER }
I20250809 19:59:58.544157 32414 sys_catalog.cc:455] T 00000000000000000000000000000000 P 9b21be77b1b74c4a8035721db223bb53 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "9b21be77b1b74c4a8035721db223bb53" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9b21be77b1b74c4a8035721db223bb53" member_type: VOTER } }
I20250809 19:59:58.544292 32415 sys_catalog.cc:455] T 00000000000000000000000000000000 P 9b21be77b1b74c4a8035721db223bb53 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 9b21be77b1b74c4a8035721db223bb53. Latest consensus state: current_term: 1 leader_uuid: "9b21be77b1b74c4a8035721db223bb53" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9b21be77b1b74c4a8035721db223bb53" member_type: VOTER } }
I20250809 19:59:58.544730 32414 sys_catalog.cc:458] T 00000000000000000000000000000000 P 9b21be77b1b74c4a8035721db223bb53 [sys.catalog]: This master's current role is: LEADER
I20250809 19:59:58.544734 32415 sys_catalog.cc:458] T 00000000000000000000000000000000 P 9b21be77b1b74c4a8035721db223bb53 [sys.catalog]: This master's current role is: LEADER
I20250809 19:59:58.553526 32364 tablet_replica.cc:331] T 00000000000000000000000000000000 P 9b21be77b1b74c4a8035721db223bb53: stopping tablet replica
I20250809 19:59:58.553992 32364 raft_consensus.cc:2241] T 00000000000000000000000000000000 P 9b21be77b1b74c4a8035721db223bb53 [term 1 LEADER]: Raft consensus shutting down.
I20250809 19:59:58.554322 32364 raft_consensus.cc:2270] T 00000000000000000000000000000000 P 9b21be77b1b74c4a8035721db223bb53 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250809 19:59:58.555972 32364 master.cc:561] Master@0.0.0.0:7051 shutting down...
W20250809 19:59:58.556305 32364 acceptor_pool.cc:196] Could not shut down acceptor socket on 0.0.0.0:7051: Network error: shutdown error: Transport endpoint is not connected (error 107)
I20250809 19:59:58.578688 32364 master.cc:583] Master@0.0.0.0:7051 shutdown complete.
I20250809 19:59:59.603405 26098 external_mini_cluster.cc:1658] Killing /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu with pid 31896
I20250809 19:59:59.643918 26098 external_mini_cluster.cc:1658] Killing /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu with pid 32029
I20250809 19:59:59.673447 26098 external_mini_cluster.cc:1658] Killing /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu with pid 32162
I20250809 19:59:59.706202 26098 external_mini_cluster.cc:1366] Running /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
/tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/master-0/wal
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/master-0/logs
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.25.124.190:44801
--webserver_interface=127.25.124.190
--webserver_port=43735
--builtin_ntp_servers=127.25.124.148:36863
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.25.124.190:44801 with env {}
W20250809 19:59:59.968480 32425 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 19:59:59.968968 32425 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250809 19:59:59.969347 32425 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 19:59:59.994863 32425 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250809 19:59:59.995115 32425 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250809 19:59:59.995333 32425 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250809 19:59:59.995548 32425 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250809 20:00:00.023933 32425 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.25.124.148:36863
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/master-0/data
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.25.124.190:44801
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.25.124.190:44801
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/master-0/data/info.pb
--webserver_interface=127.25.124.190
--webserver_port=43735
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 09 Aug 2025 19:43:21 UTC on 5fd53c4cbb9d
build id 7489
TSAN enabled
I20250809 20:00:00.025054 32425 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250809 20:00:00.026405 32425 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250809 20:00:00.035904 32431 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 20:00:01.439007 32430 debug-util.cc:398] Leaking SignalData structure 0x7b0800037cc0 after lost signal to thread 32425
W20250809 20:00:01.572042 32425 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.536s user 0.581s sys 0.955s
W20250809 20:00:01.572383 32425 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.536s user 0.581s sys 0.955s
W20250809 20:00:00.036348 32432 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 20:00:01.573671 32433 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1537 milliseconds
W20250809 20:00:01.574232 32434 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250809 20:00:01.574184 32425 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250809 20:00:01.576958 32425 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250809 20:00:01.579166 32425 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250809 20:00:01.580518 32425 hybrid_clock.cc:648] HybridClock initialized: now 1754769601580472 us; error 37 us; skew 500 ppm
I20250809 20:00:01.581339 32425 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250809 20:00:01.586464 32425 webserver.cc:489] Webserver started at http://127.25.124.190:43735/ using document root <none> and password file <none>
I20250809 20:00:01.587262 32425 fs_manager.cc:362] Metadata directory not provided
I20250809 20:00:01.587466 32425 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250809 20:00:01.594156 32425 fs_manager.cc:714] Time spent opening directory manager: real 0.004s user 0.006s sys 0.000s
I20250809 20:00:01.598026 32441 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250809 20:00:01.598850 32425 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.002s sys 0.000s
I20250809 20:00:01.599125 32425 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/master-0/data,/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/master-0/wal
uuid: "9b21be77b1b74c4a8035721db223bb53"
format_stamp: "Formatted at 2025-08-09 19:59:58 on dist-test-slave-xzln"
I20250809 20:00:01.600771 32425 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/master-0/wal
metadata directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/master-0/wal
1 data directories: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250809 20:00:01.642807 32425 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250809 20:00:01.644099 32425 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250809 20:00:01.644503 32425 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250809 20:00:01.706647 32425 rpc_server.cc:307] RPC server started. Bound to: 127.25.124.190:44801
I20250809 20:00:01.706713 32492 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.25.124.190:44801 every 8 connection(s)
I20250809 20:00:01.709096 32425 server_base.cc:1179] Dumped server information to /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/master-0/data/info.pb
I20250809 20:00:01.715291 26098 external_mini_cluster.cc:1428] Started /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu as pid 32425
I20250809 20:00:01.717260 26098 external_mini_cluster.cc:1366] Running /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
/tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-0/wal
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-0/logs
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.25.124.129:44645
--local_ip_for_outbound_sockets=127.25.124.129
--tserver_master_addrs=127.25.124.190:44801
--webserver_port=37449
--webserver_interface=127.25.124.129
--builtin_ntp_servers=127.25.124.148:36863
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
I20250809 20:00:01.719784 32493 sys_catalog.cc:263] Verifying existing consensus state
I20250809 20:00:01.733912 32493 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 9b21be77b1b74c4a8035721db223bb53: Bootstrap starting.
I20250809 20:00:01.742893 32493 log.cc:826] T 00000000000000000000000000000000 P 9b21be77b1b74c4a8035721db223bb53: Log is configured to *not* fsync() on all Append() calls
I20250809 20:00:01.753377 32493 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 9b21be77b1b74c4a8035721db223bb53: Bootstrap replayed 1/1 log segments. Stats: ops{read=2 overwritten=0 applied=2 ignored=0} inserts{seen=2 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250809 20:00:01.754086 32493 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 9b21be77b1b74c4a8035721db223bb53: Bootstrap complete.
I20250809 20:00:01.771997 32493 raft_consensus.cc:357] T 00000000000000000000000000000000 P 9b21be77b1b74c4a8035721db223bb53 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9b21be77b1b74c4a8035721db223bb53" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 44801 } }
I20250809 20:00:01.772588 32493 raft_consensus.cc:738] T 00000000000000000000000000000000 P 9b21be77b1b74c4a8035721db223bb53 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 9b21be77b1b74c4a8035721db223bb53, State: Initialized, Role: FOLLOWER
I20250809 20:00:01.773275 32493 consensus_queue.cc:260] T 00000000000000000000000000000000 P 9b21be77b1b74c4a8035721db223bb53 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 2, Last appended: 1.2, Last appended by leader: 2, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9b21be77b1b74c4a8035721db223bb53" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 44801 } }
I20250809 20:00:01.773721 32493 raft_consensus.cc:397] T 00000000000000000000000000000000 P 9b21be77b1b74c4a8035721db223bb53 [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250809 20:00:01.773952 32493 raft_consensus.cc:491] T 00000000000000000000000000000000 P 9b21be77b1b74c4a8035721db223bb53 [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250809 20:00:01.774245 32493 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 9b21be77b1b74c4a8035721db223bb53 [term 1 FOLLOWER]: Advancing to term 2
I20250809 20:00:01.777714 32493 raft_consensus.cc:513] T 00000000000000000000000000000000 P 9b21be77b1b74c4a8035721db223bb53 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9b21be77b1b74c4a8035721db223bb53" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 44801 } }
I20250809 20:00:01.778260 32493 leader_election.cc:304] T 00000000000000000000000000000000 P 9b21be77b1b74c4a8035721db223bb53 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 9b21be77b1b74c4a8035721db223bb53; no voters:
I20250809 20:00:01.780185 32493 leader_election.cc:290] T 00000000000000000000000000000000 P 9b21be77b1b74c4a8035721db223bb53 [CANDIDATE]: Term 2 election: Requested vote from peers
I20250809 20:00:01.780563 32497 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 9b21be77b1b74c4a8035721db223bb53 [term 2 FOLLOWER]: Leader election won for term 2
I20250809 20:00:01.783288 32497 raft_consensus.cc:695] T 00000000000000000000000000000000 P 9b21be77b1b74c4a8035721db223bb53 [term 2 LEADER]: Becoming Leader. State: Replica: 9b21be77b1b74c4a8035721db223bb53, State: Running, Role: LEADER
I20250809 20:00:01.784111 32497 consensus_queue.cc:237] T 00000000000000000000000000000000 P 9b21be77b1b74c4a8035721db223bb53 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 2, Committed index: 2, Last appended: 1.2, Last appended by leader: 2, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9b21be77b1b74c4a8035721db223bb53" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 44801 } }
I20250809 20:00:01.785089 32493 sys_catalog.cc:564] T 00000000000000000000000000000000 P 9b21be77b1b74c4a8035721db223bb53 [sys.catalog]: configured and running, proceeding with master startup.
I20250809 20:00:01.791985 32498 sys_catalog.cc:455] T 00000000000000000000000000000000 P 9b21be77b1b74c4a8035721db223bb53 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 2 leader_uuid: "9b21be77b1b74c4a8035721db223bb53" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9b21be77b1b74c4a8035721db223bb53" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 44801 } } }
I20250809 20:00:01.792918 32499 sys_catalog.cc:455] T 00000000000000000000000000000000 P 9b21be77b1b74c4a8035721db223bb53 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 9b21be77b1b74c4a8035721db223bb53. Latest consensus state: current_term: 2 leader_uuid: "9b21be77b1b74c4a8035721db223bb53" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9b21be77b1b74c4a8035721db223bb53" member_type: VOTER last_known_addr { host: "127.25.124.190" port: 44801 } } }
I20250809 20:00:01.793478 32499 sys_catalog.cc:458] T 00000000000000000000000000000000 P 9b21be77b1b74c4a8035721db223bb53 [sys.catalog]: This master's current role is: LEADER
I20250809 20:00:01.795099 32498 sys_catalog.cc:458] T 00000000000000000000000000000000 P 9b21be77b1b74c4a8035721db223bb53 [sys.catalog]: This master's current role is: LEADER
I20250809 20:00:01.801257 32503 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250809 20:00:01.812168 32503 catalog_manager.cc:671] Loaded metadata for table pre_rebuild [id=0b8c2ed43a284b5a89caf340124bd5d8]
I20250809 20:00:01.818120 32503 tablet_loader.cc:96] loaded metadata for tablet 5bc7b51263f8423d89931e8b5e732b32 (table pre_rebuild [id=0b8c2ed43a284b5a89caf340124bd5d8])
I20250809 20:00:01.820015 32503 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250809 20:00:01.850379 32503 catalog_manager.cc:1349] Generated new cluster ID: 080ae2e90f204f558ceec38f633e2e36
I20250809 20:00:01.850711 32503 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250809 20:00:01.871369 32503 catalog_manager.cc:1372] Generated new certificate authority record
I20250809 20:00:01.872558 32503 catalog_manager.cc:1506] Loading token signing keys...
I20250809 20:00:01.886849 32503 catalog_manager.cc:5955] T 00000000000000000000000000000000 P 9b21be77b1b74c4a8035721db223bb53: Generated new TSK 0
I20250809 20:00:01.887588 32503 catalog_manager.cc:1516] Initializing in-progress tserver states...
W20250809 20:00:02.023790 32495 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 20:00:02.024216 32495 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250809 20:00:02.024638 32495 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 20:00:02.050351 32495 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250809 20:00:02.051043 32495 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.25.124.129
I20250809 20:00:02.079334 32495 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.25.124.148:36863
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-0/data
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.25.124.129:44645
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-0/data/info.pb
--webserver_interface=127.25.124.129
--webserver_port=37449
--tserver_master_addrs=127.25.124.190:44801
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.25.124.129
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 09 Aug 2025 19:43:21 UTC on 5fd53c4cbb9d
build id 7489
TSAN enabled
I20250809 20:00:02.080520 32495 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250809 20:00:02.081861 32495 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250809 20:00:02.093097 32521 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 20:00:03.497009 32520 debug-util.cc:398] Leaking SignalData structure 0x7b0800034ea0 after lost signal to thread 32495
W20250809 20:00:03.605293 32495 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.511s user 0.000s sys 0.002s
W20250809 20:00:03.605692 32495 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.511s user 0.000s sys 0.002s
W20250809 20:00:02.094233 32522 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 20:00:03.606153 32523 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1509 milliseconds
W20250809 20:00:03.607935 32524 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250809 20:00:03.607878 32495 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250809 20:00:03.610355 32495 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250809 20:00:03.612739 32495 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250809 20:00:03.614166 32495 hybrid_clock.cc:648] HybridClock initialized: now 1754769603614125 us; error 45 us; skew 500 ppm
I20250809 20:00:03.614862 32495 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250809 20:00:03.625470 32495 webserver.cc:489] Webserver started at http://127.25.124.129:37449/ using document root <none> and password file <none>
I20250809 20:00:03.626274 32495 fs_manager.cc:362] Metadata directory not provided
I20250809 20:00:03.626480 32495 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250809 20:00:03.633476 32495 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.005s sys 0.000s
I20250809 20:00:03.637665 32531 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250809 20:00:03.638608 32495 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.002s sys 0.000s
I20250809 20:00:03.638872 32495 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-0/data,/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-0/wal
uuid: "073eafa294ee49f99fea73c8b83d6cb0"
format_stamp: "Formatted at 2025-08-09 19:59:48 on dist-test-slave-xzln"
I20250809 20:00:03.640633 32495 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-0/wal
metadata directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-0/wal
1 data directories: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250809 20:00:03.688074 32495 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250809 20:00:03.689281 32495 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250809 20:00:03.689672 32495 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250809 20:00:03.692313 32495 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250809 20:00:03.697739 32538 ts_tablet_manager.cc:542] Loading tablet metadata (0/1 complete)
I20250809 20:00:03.704069 32495 ts_tablet_manager.cc:579] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20250809 20:00:03.704304 32495 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.008s user 0.001s sys 0.001s
I20250809 20:00:03.704594 32495 ts_tablet_manager.cc:594] Registering tablets (0/1 complete)
I20250809 20:00:03.710606 32495 ts_tablet_manager.cc:610] Registered 1 tablets
I20250809 20:00:03.710878 32495 ts_tablet_manager.cc:589] Time spent register tablets: real 0.006s user 0.002s sys 0.005s
I20250809 20:00:03.711134 32538 tablet_bootstrap.cc:492] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0: Bootstrap starting.
I20250809 20:00:03.896734 32495 rpc_server.cc:307] RPC server started. Bound to: 127.25.124.129:44645
I20250809 20:00:03.896836 32644 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.25.124.129:44645 every 8 connection(s)
I20250809 20:00:03.899085 32495 server_base.cc:1179] Dumped server information to /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-0/data/info.pb
I20250809 20:00:03.907537 26098 external_mini_cluster.cc:1428] Started /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu as pid 32495
I20250809 20:00:03.909200 26098 external_mini_cluster.cc:1366] Running /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
/tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-1/wal
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-1/logs
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.25.124.130:35769
--local_ip_for_outbound_sockets=127.25.124.130
--tserver_master_addrs=127.25.124.190:44801
--webserver_port=46081
--webserver_interface=127.25.124.130
--builtin_ntp_servers=127.25.124.148:36863
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
I20250809 20:00:03.958956 32645 heartbeater.cc:344] Connected to a master server at 127.25.124.190:44801
I20250809 20:00:03.959358 32645 heartbeater.cc:461] Registering TS with master...
I20250809 20:00:03.960230 32645 heartbeater.cc:507] Master 127.25.124.190:44801 requested a full tablet report, sending...
I20250809 20:00:03.963706 32458 ts_manager.cc:194] Registered new tserver with Master: 073eafa294ee49f99fea73c8b83d6cb0 (127.25.124.129:44645)
I20250809 20:00:03.970155 32538 log.cc:826] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0: Log is configured to *not* fsync() on all Append() calls
I20250809 20:00:03.970300 32458 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.25.124.129:34009
W20250809 20:00:04.203775 32649 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 20:00:04.204178 32649 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250809 20:00:04.204625 32649 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 20:00:04.230708 32649 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250809 20:00:04.231490 32649 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.25.124.130
I20250809 20:00:04.259898 32649 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.25.124.148:36863
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-1/data
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.25.124.130:35769
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-1/data/info.pb
--webserver_interface=127.25.124.130
--webserver_port=46081
--tserver_master_addrs=127.25.124.190:44801
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.25.124.130
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 09 Aug 2025 19:43:21 UTC on 5fd53c4cbb9d
build id 7489
TSAN enabled
I20250809 20:00:04.260993 32649 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250809 20:00:04.262337 32649 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250809 20:00:04.273756 32656 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250809 20:00:04.973551 32645 heartbeater.cc:499] Master 127.25.124.190:44801 was elected leader, sending a full tablet report...
W20250809 20:00:04.276459 32657 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 20:00:05.536386 32649 thread.cc:641] GCE (cloud detector) Time spent creating pthread: real 1.261s user 0.519s sys 0.741s
W20250809 20:00:05.536762 32649 thread.cc:608] GCE (cloud detector) Time spent starting thread: real 1.262s user 0.519s sys 0.741s
I20250809 20:00:05.543675 32649 server_base.cc:1047] running on GCE node
W20250809 20:00:05.545135 32662 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250809 20:00:05.546424 32649 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250809 20:00:05.548880 32649 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250809 20:00:05.550328 32649 hybrid_clock.cc:648] HybridClock initialized: now 1754769605550286 us; error 37 us; skew 500 ppm
I20250809 20:00:05.551326 32649 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250809 20:00:05.559145 32649 webserver.cc:489] Webserver started at http://127.25.124.130:46081/ using document root <none> and password file <none>
I20250809 20:00:05.560281 32649 fs_manager.cc:362] Metadata directory not provided
I20250809 20:00:05.560541 32649 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250809 20:00:05.570793 32649 fs_manager.cc:714] Time spent opening directory manager: real 0.006s user 0.005s sys 0.001s
I20250809 20:00:05.576335 32667 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250809 20:00:05.577479 32649 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.002s sys 0.002s
I20250809 20:00:05.577836 32649 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-1/data,/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-1/wal
uuid: "b4f96236fb9e4a93bfed44c6e495ded7"
format_stamp: "Formatted at 2025-08-09 19:59:50 on dist-test-slave-xzln"
I20250809 20:00:05.580379 32649 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-1/wal
metadata directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-1/wal
1 data directories: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250809 20:00:05.664563 32649 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250809 20:00:05.665742 32649 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250809 20:00:05.666092 32649 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250809 20:00:05.668354 32649 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250809 20:00:05.673671 32674 ts_tablet_manager.cc:542] Loading tablet metadata (0/1 complete)
I20250809 20:00:05.683049 32649 ts_tablet_manager.cc:579] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20250809 20:00:05.683296 32649 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.011s user 0.001s sys 0.001s
I20250809 20:00:05.683530 32649 ts_tablet_manager.cc:594] Registering tablets (0/1 complete)
I20250809 20:00:05.687397 32649 ts_tablet_manager.cc:610] Registered 1 tablets
I20250809 20:00:05.687552 32649 ts_tablet_manager.cc:589] Time spent register tablets: real 0.004s user 0.002s sys 0.002s
I20250809 20:00:05.687955 32674 tablet_bootstrap.cc:492] T 5bc7b51263f8423d89931e8b5e732b32 P b4f96236fb9e4a93bfed44c6e495ded7: Bootstrap starting.
I20250809 20:00:05.920712 32649 rpc_server.cc:307] RPC server started. Bound to: 127.25.124.130:35769
I20250809 20:00:05.920919 312 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.25.124.130:35769 every 8 connection(s)
I20250809 20:00:05.923413 32649 server_base.cc:1179] Dumped server information to /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-1/data/info.pb
I20250809 20:00:05.928906 26098 external_mini_cluster.cc:1428] Started /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu as pid 32649
I20250809 20:00:05.931202 26098 external_mini_cluster.cc:1366] Running /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
/tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-2/wal
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-2/logs
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.25.124.131:41893
--local_ip_for_outbound_sockets=127.25.124.131
--tserver_master_addrs=127.25.124.190:44801
--webserver_port=33835
--webserver_interface=127.25.124.131
--builtin_ntp_servers=127.25.124.148:36863
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
I20250809 20:00:05.963321 32674 log.cc:826] T 5bc7b51263f8423d89931e8b5e732b32 P b4f96236fb9e4a93bfed44c6e495ded7: Log is configured to *not* fsync() on all Append() calls
I20250809 20:00:05.964687 313 heartbeater.cc:344] Connected to a master server at 127.25.124.190:44801
I20250809 20:00:05.965126 313 heartbeater.cc:461] Registering TS with master...
I20250809 20:00:05.966171 313 heartbeater.cc:507] Master 127.25.124.190:44801 requested a full tablet report, sending...
I20250809 20:00:05.969631 32458 ts_manager.cc:194] Registered new tserver with Master: b4f96236fb9e4a93bfed44c6e495ded7 (127.25.124.130:35769)
I20250809 20:00:05.972257 32458 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.25.124.130:42601
W20250809 20:00:06.342963 317 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250809 20:00:06.343536 317 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250809 20:00:06.344174 317 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250809 20:00:06.390578 317 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250809 20:00:06.391748 317 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.25.124.131
I20250809 20:00:06.441502 317 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.25.124.148:36863
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-2/data
--fs_wal_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.25.124.131:41893
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-2/data/info.pb
--webserver_interface=127.25.124.131
--webserver_port=33835
--tserver_master_addrs=127.25.124.190:44801
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.25.124.131
--log_dir=/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision b92f16d1c86a753c597b46c7575bfa6a1479726a
build type FASTDEBUG
built by None at 09 Aug 2025 19:43:21 UTC on 5fd53c4cbb9d
build id 7489
TSAN enabled
I20250809 20:00:06.442888 317 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250809 20:00:06.444664 317 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250809 20:00:06.457314 324 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250809 20:00:06.612895 32538 tablet_bootstrap.cc:492] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0: Bootstrap replayed 1/1 log segments. Stats: ops{read=205 overwritten=0 applied=205 ignored=0} inserts{seen=10200 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250809 20:00:06.614090 32538 tablet_bootstrap.cc:492] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0: Bootstrap complete.
I20250809 20:00:06.616192 32538 ts_tablet_manager.cc:1397] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0: Time spent bootstrapping tablet: real 2.905s user 2.844s sys 0.048s
I20250809 20:00:06.635401 32538 raft_consensus.cc:357] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "94180891930c4216b9293319c2974fb8" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41893 } } peers { permanent_uuid: "073eafa294ee49f99fea73c8b83d6cb0" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 44645 } } peers { permanent_uuid: "b4f96236fb9e4a93bfed44c6e495ded7" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 35769 } }
I20250809 20:00:06.638885 32538 raft_consensus.cc:738] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 073eafa294ee49f99fea73c8b83d6cb0, State: Initialized, Role: FOLLOWER
I20250809 20:00:06.639971 32538 consensus_queue.cc:260] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 205, Last appended: 1.205, Last appended by leader: 205, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "94180891930c4216b9293319c2974fb8" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41893 } } peers { permanent_uuid: "073eafa294ee49f99fea73c8b83d6cb0" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 44645 } } peers { permanent_uuid: "b4f96236fb9e4a93bfed44c6e495ded7" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 35769 } }
I20250809 20:00:06.655308 32538 ts_tablet_manager.cc:1428] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0: Time spent starting tablet: real 0.039s user 0.030s sys 0.008s
I20250809 20:00:06.975597 313 heartbeater.cc:499] Master 127.25.124.190:44801 was elected leader, sending a full tablet report...
I20250809 20:00:07.841075 32674 tablet_bootstrap.cc:492] T 5bc7b51263f8423d89931e8b5e732b32 P b4f96236fb9e4a93bfed44c6e495ded7: Bootstrap replayed 1/1 log segments. Stats: ops{read=205 overwritten=0 applied=205 ignored=0} inserts{seen=10200 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250809 20:00:07.841730 32674 tablet_bootstrap.cc:492] T 5bc7b51263f8423d89931e8b5e732b32 P b4f96236fb9e4a93bfed44c6e495ded7: Bootstrap complete.
I20250809 20:00:07.842882 32674 ts_tablet_manager.cc:1397] T 5bc7b51263f8423d89931e8b5e732b32 P b4f96236fb9e4a93bfed44c6e495ded7: Time spent bootstrapping tablet: real 2.155s user 2.103s sys 0.040s
I20250809 20:00:07.851914 32674 raft_consensus.cc:357] T 5bc7b51263f8423d89931e8b5e732b32 P b4f96236fb9e4a93bfed44c6e495ded7 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "94180891930c4216b9293319c2974fb8" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41893 } } peers { permanent_uuid: "073eafa294ee49f99fea73c8b83d6cb0" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 44645 } } peers { permanent_uuid: "b4f96236fb9e4a93bfed44c6e495ded7" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 35769 } }
I20250809 20:00:07.853565 32674 raft_consensus.cc:738] T 5bc7b51263f8423d89931e8b5e732b32 P b4f96236fb9e4a93bfed44c6e495ded7 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: b4f96236fb9e4a93bfed44c6e495ded7, State: Initialized, Role: FOLLOWER
I20250809 20:00:07.854177 32674 consensus_queue.cc:260] T 5bc7b51263f8423d89931e8b5e732b32 P b4f96236fb9e4a93bfed44c6e495ded7 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 205, Last appended: 1.205, Last appended by leader: 205, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "94180891930c4216b9293319c2974fb8" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41893 } } peers { permanent_uuid: "073eafa294ee49f99fea73c8b83d6cb0" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 44645 } } peers { permanent_uuid: "b4f96236fb9e4a93bfed44c6e495ded7" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 35769 } }
I20250809 20:00:07.857036 32674 ts_tablet_manager.cc:1428] T 5bc7b51263f8423d89931e8b5e732b32 P b4f96236fb9e4a93bfed44c6e495ded7: Time spent starting tablet: real 0.014s user 0.009s sys 0.008s
W20250809 20:00:07.858807 323 debug-util.cc:398] Leaking SignalData structure 0x7b0800034ea0 after lost signal to thread 317
W20250809 20:00:07.902874 317 thread.cc:641] GCE (cloud detector) Time spent creating pthread: real 1.446s user 0.608s sys 0.833s
W20250809 20:00:06.458098 325 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250809 20:00:07.903282 317 thread.cc:608] GCE (cloud detector) Time spent starting thread: real 1.447s user 0.608s sys 0.833s
W20250809 20:00:07.911415 331 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250809 20:00:07.911468 317 server_base.cc:1047] running on GCE node
I20250809 20:00:07.912498 317 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250809 20:00:07.914301 317 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250809 20:00:07.915616 317 hybrid_clock.cc:648] HybridClock initialized: now 1754769607915584 us; error 30 us; skew 500 ppm
I20250809 20:00:07.916329 317 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250809 20:00:07.921573 317 webserver.cc:489] Webserver started at http://127.25.124.131:33835/ using document root <none> and password file <none>
I20250809 20:00:07.922380 317 fs_manager.cc:362] Metadata directory not provided
I20250809 20:00:07.922595 317 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250809 20:00:07.929071 317 fs_manager.cc:714] Time spent opening directory manager: real 0.004s user 0.000s sys 0.004s
I20250809 20:00:07.932940 336 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250809 20:00:07.933763 317 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.002s sys 0.002s
I20250809 20:00:07.934026 317 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-2/data,/tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-2/wal
uuid: "94180891930c4216b9293319c2974fb8"
format_stamp: "Formatted at 2025-08-09 19:59:51 on dist-test-slave-xzln"
I20250809 20:00:07.935686 317 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-2/wal
metadata directory: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-2/wal
1 data directories: /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250809 20:00:07.984329 317 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250809 20:00:07.985524 317 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250809 20:00:07.985896 317 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250809 20:00:07.988034 317 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250809 20:00:07.992749 343 ts_tablet_manager.cc:542] Loading tablet metadata (0/1 complete)
I20250809 20:00:07.998840 317 ts_tablet_manager.cc:579] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20250809 20:00:07.999035 317 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.007s user 0.000s sys 0.001s
I20250809 20:00:07.999300 317 ts_tablet_manager.cc:594] Registering tablets (0/1 complete)
I20250809 20:00:08.003070 317 ts_tablet_manager.cc:610] Registered 1 tablets
I20250809 20:00:08.003270 317 ts_tablet_manager.cc:589] Time spent register tablets: real 0.004s user 0.004s sys 0.000s
I20250809 20:00:08.003644 343 tablet_bootstrap.cc:492] T 5bc7b51263f8423d89931e8b5e732b32 P 94180891930c4216b9293319c2974fb8: Bootstrap starting.
I20250809 20:00:08.153458 317 rpc_server.cc:307] RPC server started. Bound to: 127.25.124.131:41893
I20250809 20:00:08.153708 449 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.25.124.131:41893 every 8 connection(s)
I20250809 20:00:08.155793 317 server_base.cc:1179] Dumped server information to /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-2/data/info.pb
I20250809 20:00:08.161386 26098 external_mini_cluster.cc:1428] Started /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu as pid 317
I20250809 20:00:08.180405 450 heartbeater.cc:344] Connected to a master server at 127.25.124.190:44801
I20250809 20:00:08.181015 450 heartbeater.cc:461] Registering TS with master...
I20250809 20:00:08.182499 450 heartbeater.cc:507] Master 127.25.124.190:44801 requested a full tablet report, sending...
I20250809 20:00:08.186542 32458 ts_manager.cc:194] Registered new tserver with Master: 94180891930c4216b9293319c2974fb8 (127.25.124.131:41893)
I20250809 20:00:08.188416 32458 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.25.124.131:50613
I20250809 20:00:08.194823 26098 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250809 20:00:08.203105 343 log.cc:826] T 5bc7b51263f8423d89931e8b5e732b32 P 94180891930c4216b9293319c2974fb8: Log is configured to *not* fsync() on all Append() calls
I20250809 20:00:08.280328 462 raft_consensus.cc:491] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0 [term 1 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250809 20:00:08.280709 462 raft_consensus.cc:513] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0 [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "94180891930c4216b9293319c2974fb8" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41893 } } peers { permanent_uuid: "073eafa294ee49f99fea73c8b83d6cb0" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 44645 } } peers { permanent_uuid: "b4f96236fb9e4a93bfed44c6e495ded7" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 35769 } }
I20250809 20:00:08.282435 462 leader_election.cc:290] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0 [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers 94180891930c4216b9293319c2974fb8 (127.25.124.131:41893), b4f96236fb9e4a93bfed44c6e495ded7 (127.25.124.130:35769)
I20250809 20:00:08.294874 32736 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "5bc7b51263f8423d89931e8b5e732b32" candidate_uuid: "073eafa294ee49f99fea73c8b83d6cb0" candidate_term: 2 candidate_status { last_received { term: 1 index: 205 } } ignore_live_leader: false dest_uuid: "b4f96236fb9e4a93bfed44c6e495ded7" is_pre_election: true
I20250809 20:00:08.295540 32736 raft_consensus.cc:2466] T 5bc7b51263f8423d89931e8b5e732b32 P b4f96236fb9e4a93bfed44c6e495ded7 [term 1 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 073eafa294ee49f99fea73c8b83d6cb0 in term 1.
I20250809 20:00:08.296841 32534 leader_election.cc:304] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0 [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 073eafa294ee49f99fea73c8b83d6cb0, b4f96236fb9e4a93bfed44c6e495ded7; no voters:
I20250809 20:00:08.297411 462 raft_consensus.cc:2802] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0 [term 1 FOLLOWER]: Leader pre-election won for term 2
I20250809 20:00:08.297632 462 raft_consensus.cc:491] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0 [term 1 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250809 20:00:08.297855 462 raft_consensus.cc:3058] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0 [term 1 FOLLOWER]: Advancing to term 2
I20250809 20:00:08.302747 462 raft_consensus.cc:513] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "94180891930c4216b9293319c2974fb8" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41893 } } peers { permanent_uuid: "073eafa294ee49f99fea73c8b83d6cb0" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 44645 } } peers { permanent_uuid: "b4f96236fb9e4a93bfed44c6e495ded7" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 35769 } }
I20250809 20:00:08.303974 462 leader_election.cc:290] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0 [CANDIDATE]: Term 2 election: Requested vote from peers 94180891930c4216b9293319c2974fb8 (127.25.124.131:41893), b4f96236fb9e4a93bfed44c6e495ded7 (127.25.124.130:35769)
I20250809 20:00:08.305399 32736 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "5bc7b51263f8423d89931e8b5e732b32" candidate_uuid: "073eafa294ee49f99fea73c8b83d6cb0" candidate_term: 2 candidate_status { last_received { term: 1 index: 205 } } ignore_live_leader: false dest_uuid: "b4f96236fb9e4a93bfed44c6e495ded7"
I20250809 20:00:08.305732 32736 raft_consensus.cc:3058] T 5bc7b51263f8423d89931e8b5e732b32 P b4f96236fb9e4a93bfed44c6e495ded7 [term 1 FOLLOWER]: Advancing to term 2
I20250809 20:00:08.297907 405 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "5bc7b51263f8423d89931e8b5e732b32" candidate_uuid: "073eafa294ee49f99fea73c8b83d6cb0" candidate_term: 2 candidate_status { last_received { term: 1 index: 205 } } ignore_live_leader: false dest_uuid: "94180891930c4216b9293319c2974fb8" is_pre_election: true
I20250809 20:00:08.305052 404 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "5bc7b51263f8423d89931e8b5e732b32" candidate_uuid: "073eafa294ee49f99fea73c8b83d6cb0" candidate_term: 2 candidate_status { last_received { term: 1 index: 205 } } ignore_live_leader: false dest_uuid: "94180891930c4216b9293319c2974fb8"
W20250809 20:00:08.309670 32535 leader_election.cc:343] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0 [CANDIDATE]: Term 2 pre-election: Tablet error from VoteRequest() call to peer 94180891930c4216b9293319c2974fb8 (127.25.124.131:41893): Illegal state: must be running to vote when last-logged opid is not known
I20250809 20:00:08.311702 32736 raft_consensus.cc:2466] T 5bc7b51263f8423d89931e8b5e732b32 P b4f96236fb9e4a93bfed44c6e495ded7 [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 073eafa294ee49f99fea73c8b83d6cb0 in term 2.
I20250809 20:00:08.312318 32534 leader_election.cc:304] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 073eafa294ee49f99fea73c8b83d6cb0, b4f96236fb9e4a93bfed44c6e495ded7; no voters: 94180891930c4216b9293319c2974fb8
I20250809 20:00:08.312803 462 raft_consensus.cc:2802] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0 [term 2 FOLLOWER]: Leader election won for term 2
I20250809 20:00:08.314301 462 raft_consensus.cc:695] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0 [term 2 LEADER]: Becoming Leader. State: Replica: 073eafa294ee49f99fea73c8b83d6cb0, State: Running, Role: LEADER
I20250809 20:00:08.314952 462 consensus_queue.cc:237] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 205, Committed index: 205, Last appended: 1.205, Last appended by leader: 205, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "94180891930c4216b9293319c2974fb8" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41893 } } peers { permanent_uuid: "073eafa294ee49f99fea73c8b83d6cb0" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 44645 } } peers { permanent_uuid: "b4f96236fb9e4a93bfed44c6e495ded7" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 35769 } }
I20250809 20:00:08.324872 32458 catalog_manager.cc:5582] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0 reported cstate change: term changed from 0 to 2, leader changed from <none> to 073eafa294ee49f99fea73c8b83d6cb0 (127.25.124.129), VOTER 073eafa294ee49f99fea73c8b83d6cb0 (127.25.124.129) added, VOTER 94180891930c4216b9293319c2974fb8 (127.25.124.131) added, VOTER b4f96236fb9e4a93bfed44c6e495ded7 (127.25.124.130) added. New cstate: current_term: 2 leader_uuid: "073eafa294ee49f99fea73c8b83d6cb0" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "94180891930c4216b9293319c2974fb8" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41893 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "073eafa294ee49f99fea73c8b83d6cb0" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 44645 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "b4f96236fb9e4a93bfed44c6e495ded7" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 35769 } health_report { overall_health: UNKNOWN } } }
W20250809 20:00:08.707094 32535 consensus_peers.cc:489] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0 -> Peer 94180891930c4216b9293319c2974fb8 (127.25.124.131:41893): Couldn't send request to peer 94180891930c4216b9293319c2974fb8. Error code: TABLET_NOT_RUNNING (12). Status: Illegal state: Tablet not RUNNING: BOOTSTRAPPING. This is attempt 1: this message will repeat every 5th retry.
W20250809 20:00:08.831086 26098 scanner-internal.cc:458] Time spent opening tablet: real 0.611s user 0.004s sys 0.001s
I20250809 20:00:08.833727 32736 raft_consensus.cc:1273] T 5bc7b51263f8423d89931e8b5e732b32 P b4f96236fb9e4a93bfed44c6e495ded7 [term 2 FOLLOWER]: Refusing update from remote peer 073eafa294ee49f99fea73c8b83d6cb0: Log matching property violated. Preceding OpId in replica: term: 1 index: 205. Preceding OpId from leader: term: 2 index: 206. (index mismatch)
I20250809 20:00:08.835665 462 consensus_queue.cc:1035] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0 [LEADER]: Connected to new peer: Peer: permanent_uuid: "b4f96236fb9e4a93bfed44c6e495ded7" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 35769 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 206, Last known committed idx: 205, Time since last communication: 0.000s
I20250809 20:00:08.943871 32600 consensus_queue.cc:237] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 206, Committed index: 206, Last appended: 2.206, Last appended by leader: 205, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 207 OBSOLETE_local: false peers { permanent_uuid: "073eafa294ee49f99fea73c8b83d6cb0" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 44645 } } peers { permanent_uuid: "b4f96236fb9e4a93bfed44c6e495ded7" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 35769 } }
I20250809 20:00:08.948323 32736 raft_consensus.cc:1273] T 5bc7b51263f8423d89931e8b5e732b32 P b4f96236fb9e4a93bfed44c6e495ded7 [term 2 FOLLOWER]: Refusing update from remote peer 073eafa294ee49f99fea73c8b83d6cb0: Log matching property violated. Preceding OpId in replica: term: 2 index: 206. Preceding OpId from leader: term: 2 index: 207. (index mismatch)
I20250809 20:00:08.949623 478 consensus_queue.cc:1035] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0 [LEADER]: Connected to new peer: Peer: permanent_uuid: "b4f96236fb9e4a93bfed44c6e495ded7" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 35769 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 207, Last known committed idx: 206, Time since last communication: 0.000s
I20250809 20:00:08.954816 478 raft_consensus.cc:2953] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0 [term 2 LEADER]: Committing config change with OpId 2.207: config changed from index -1 to 207, VOTER 94180891930c4216b9293319c2974fb8 (127.25.124.131) evicted. New config: { opid_index: 207 OBSOLETE_local: false peers { permanent_uuid: "073eafa294ee49f99fea73c8b83d6cb0" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 44645 } } peers { permanent_uuid: "b4f96236fb9e4a93bfed44c6e495ded7" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 35769 } } }
I20250809 20:00:08.956786 32736 raft_consensus.cc:2953] T 5bc7b51263f8423d89931e8b5e732b32 P b4f96236fb9e4a93bfed44c6e495ded7 [term 2 FOLLOWER]: Committing config change with OpId 2.207: config changed from index -1 to 207, VOTER 94180891930c4216b9293319c2974fb8 (127.25.124.131) evicted. New config: { opid_index: 207 OBSOLETE_local: false peers { permanent_uuid: "073eafa294ee49f99fea73c8b83d6cb0" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 44645 } } peers { permanent_uuid: "b4f96236fb9e4a93bfed44c6e495ded7" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 35769 } } }
I20250809 20:00:08.968116 32458 catalog_manager.cc:5582] T 5bc7b51263f8423d89931e8b5e732b32 P b4f96236fb9e4a93bfed44c6e495ded7 reported cstate change: config changed from index -1 to 207, VOTER 94180891930c4216b9293319c2974fb8 (127.25.124.131) evicted. New cstate: current_term: 2 leader_uuid: "073eafa294ee49f99fea73c8b83d6cb0" committed_config { opid_index: 207 OBSOLETE_local: false peers { permanent_uuid: "073eafa294ee49f99fea73c8b83d6cb0" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 44645 } } peers { permanent_uuid: "b4f96236fb9e4a93bfed44c6e495ded7" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 35769 } } }
I20250809 20:00:08.971412 32442 catalog_manager.cc:5095] ChangeConfig:REMOVE_PEER RPC for tablet 5bc7b51263f8423d89931e8b5e732b32 with cas_config_opid_index -1: ChangeConfig:REMOVE_PEER succeeded (attempt 1)
I20250809 20:00:09.017686 32600 consensus_queue.cc:237] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 207, Committed index: 207, Last appended: 2.207, Last appended by leader: 205, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: 208 OBSOLETE_local: false peers { permanent_uuid: "073eafa294ee49f99fea73c8b83d6cb0" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 44645 } }
I20250809 20:00:09.022044 478 raft_consensus.cc:2953] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0 [term 2 LEADER]: Committing config change with OpId 2.208: config changed from index 207 to 208, VOTER b4f96236fb9e4a93bfed44c6e495ded7 (127.25.124.130) evicted. New config: { opid_index: 208 OBSOLETE_local: false peers { permanent_uuid: "073eafa294ee49f99fea73c8b83d6cb0" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 44645 } } }
I20250809 20:00:09.030309 32442 catalog_manager.cc:5095] ChangeConfig:REMOVE_PEER RPC for tablet 5bc7b51263f8423d89931e8b5e732b32 with cas_config_opid_index 207: ChangeConfig:REMOVE_PEER succeeded (attempt 1)
I20250809 20:00:09.037747 32457 catalog_manager.cc:5582] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0 reported cstate change: config changed from index 207 to 208, VOTER b4f96236fb9e4a93bfed44c6e495ded7 (127.25.124.130) evicted. New cstate: current_term: 2 leader_uuid: "073eafa294ee49f99fea73c8b83d6cb0" committed_config { opid_index: 208 OBSOLETE_local: false peers { permanent_uuid: "073eafa294ee49f99fea73c8b83d6cb0" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 44645 } health_report { overall_health: HEALTHY } } }
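The two REMOVE_PEER steps above are each guarded by cas_config_opid_index: the master applies the change only if the tablet's committed config still carries the opid_index it observed (-1 for the first eviction, 207 for the second), so a concurrent config change fails the request instead of being silently overwritten. A minimal sketch of that compare-and-set idea; the struct and function names are illustrative, not Kudu's catalog manager API.

#include <cstdint>
#include <iostream>
#include <set>
#include <string>

struct RaftConfig {
  int64_t opid_index;            // opid_index of the op that committed this config
  std::set<std::string> voters;  // permanent UUIDs
};

// Remove `peer_uuid` only if the committed config still matches the opid_index
// the caller observed (compare-and-set); on success the new config is
// committed under the next opid_index.
bool CasRemovePeer(RaftConfig* config, const std::string& peer_uuid,
                   int64_t cas_config_opid_index, int64_t new_opid_index) {
  if (config->opid_index != cas_config_opid_index) return false;  // lost the race
  if (config->voters.erase(peer_uuid) == 0) return false;         // not a member
  config->opid_index = new_opid_index;
  return true;
}

int main() {
  RaftConfig cfg{-1, {"94180891930c4216b9293319c2974fb8",
                      "073eafa294ee49f99fea73c8b83d6cb0",
                      "b4f96236fb9e4a93bfed44c6e495ded7"}};
  // Mirrors the log above: evict one peer at cas index -1, then another at 207.
  std::cout << CasRemovePeer(&cfg, "94180891930c4216b9293319c2974fb8", -1, 207) << "\n";
  std::cout << CasRemovePeer(&cfg, "b4f96236fb9e4a93bfed44c6e495ded7", 207, 208) << "\n";
  std::cout << "voters left: " << cfg.voters.size() << std::endl;  // 1
  return 0;
}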
I20250809 20:00:09.056123 385 tablet_service.cc:1515] Processing DeleteTablet for tablet 5bc7b51263f8423d89931e8b5e732b32 with delete_type TABLET_DATA_TOMBSTONED (TS 94180891930c4216b9293319c2974fb8 not found in new config with opid_index 207) from {username='slave'} at 127.0.0.1:50920
W20250809 20:00:09.064849 32445 catalog_manager.cc:4908] TS 94180891930c4216b9293319c2974fb8 (127.25.124.131:41893): delete failed for tablet 5bc7b51263f8423d89931e8b5e732b32 because tablet deleting was already in progress. No further retry: Already present: State transition of tablet 5bc7b51263f8423d89931e8b5e732b32 already in progress: opening tablet
I20250809 20:00:09.069103 32716 tablet_service.cc:1515] Processing DeleteTablet for tablet 5bc7b51263f8423d89931e8b5e732b32 with delete_type TABLET_DATA_TOMBSTONED (TS b4f96236fb9e4a93bfed44c6e495ded7 not found in new config with opid_index 208) from {username='slave'} at 127.0.0.1:43450
I20250809 20:00:09.087256 488 tablet_replica.cc:331] stopping tablet replica
I20250809 20:00:09.088045 488 raft_consensus.cc:2241] T 5bc7b51263f8423d89931e8b5e732b32 P b4f96236fb9e4a93bfed44c6e495ded7 [term 2 FOLLOWER]: Raft consensus shutting down.
I20250809 20:00:09.088735 488 raft_consensus.cc:2270] T 5bc7b51263f8423d89931e8b5e732b32 P b4f96236fb9e4a93bfed44c6e495ded7 [term 2 FOLLOWER]: Raft consensus is shut down!
I20250809 20:00:09.113241 488 ts_tablet_manager.cc:1905] T 5bc7b51263f8423d89931e8b5e732b32 P b4f96236fb9e4a93bfed44c6e495ded7: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20250809 20:00:09.126901 488 ts_tablet_manager.cc:1918] T 5bc7b51263f8423d89931e8b5e732b32 P b4f96236fb9e4a93bfed44c6e495ded7: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 2.207
I20250809 20:00:09.127323 488 log.cc:1199] T 5bc7b51263f8423d89931e8b5e732b32 P b4f96236fb9e4a93bfed44c6e495ded7: Deleting WAL directory at /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-1/wal/wals/5bc7b51263f8423d89931e8b5e732b32
I20250809 20:00:09.128934 32444 catalog_manager.cc:4928] TS b4f96236fb9e4a93bfed44c6e495ded7 (127.25.124.130:35769): tablet 5bc7b51263f8423d89931e8b5e732b32 (table pre_rebuild [id=0b8c2ed43a284b5a89caf340124bd5d8]) successfully deleted
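Note the delete_type in the messages above: TABLET_DATA_TOMBSTONED removes the replica's data and WAL but records the last-logged OpId, while TABLET_DATA_DELETED (seen further below when the table is dropped, where the log also reports "Deleting consensus metadata") removes the consensus metadata as well. A small sketch of that distinction, using illustrative types rather than Kudu's ts_tablet_manager code:

#include <iostream>
#include <optional>
#include <string>

enum class TabletDataState { TOMBSTONED, DELETED };

struct ReplicaState {
  bool has_data = true;
  bool has_wal = true;
  bool has_consensus_metadata = true;
  std::optional<std::string> last_logged_opid;  // e.g. "2.207"
};

// Apply a DeleteTablet of the given type: a tombstone keeps the consensus
// metadata and remembers the last-logged OpId; a full delete removes both.
void DeleteTabletData(ReplicaState* r, TabletDataState type,
                      const std::string& last_opid) {
  r->has_data = false;
  r->has_wal = false;
  if (type == TabletDataState::TOMBSTONED) {
    r->last_logged_opid = last_opid;
  } else {
    r->has_consensus_metadata = false;
    r->last_logged_opid.reset();
  }
}

int main() {
  ReplicaState tombstoned, deleted;
  DeleteTabletData(&tombstoned, TabletDataState::TOMBSTONED, "2.207");
  DeleteTabletData(&deleted, TabletDataState::DELETED, "2.208");
  std::cout << "tombstoned keeps cmeta: " << tombstoned.has_consensus_metadata
            << ", deleted keeps cmeta: " << deleted.has_consensus_metadata << std::endl;
  return 0;
}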
I20250809 20:00:09.191444 450 heartbeater.cc:499] Master 127.25.124.190:44801 was elected leader, sending a full tablet report...
I20250809 20:00:09.197379 385 tablet_service.cc:1515] Processing DeleteTablet for tablet 5bc7b51263f8423d89931e8b5e732b32 with delete_type TABLET_DATA_TOMBSTONED (Replica has no consensus available (current committed config index is 208)) from {username='slave'} at 127.0.0.1:50920
W20250809 20:00:09.198714 32445 catalog_manager.cc:4908] TS 94180891930c4216b9293319c2974fb8 (127.25.124.131:41893): delete failed for tablet 5bc7b51263f8423d89931e8b5e732b32 because tablet deleting was already in progress. No further retry: Already present: State transition of tablet 5bc7b51263f8423d89931e8b5e732b32 already in progress: opening tablet
I20250809 20:00:09.631625 32580 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250809 20:00:09.660559 385 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250809 20:00:09.662072 32716 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
Master Summary
UUID | Address | Status
----------------------------------+----------------------+---------
9b21be77b1b74c4a8035721db223bb53 | 127.25.124.190:44801 | HEALTHY
Unusual flags for Master:
Flag | Value | Tags | Master
----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
ipki_ca_key_size | 768 | experimental | all 1 server(s) checked
ipki_server_key_size | 768 | experimental | all 1 server(s) checked
never_fsync | true | unsafe,advanced | all 1 server(s) checked
openssl_security_level_override | 0 | unsafe,hidden | all 1 server(s) checked
rpc_reuseport | true | experimental | all 1 server(s) checked
rpc_server_allow_ephemeral_ports | true | unsafe | all 1 server(s) checked
server_dump_info_format | pb | hidden | all 1 server(s) checked
server_dump_info_path | /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/master-0/data/info.pb | hidden | all 1 server(s) checked
tsk_num_rsa_bits | 512 | experimental | all 1 server(s) checked
Flags of checked categories for Master:
Flag | Value | Master
---------------------+----------------------+-------------------------
builtin_ntp_servers | 127.25.124.148:36863 | all 1 server(s) checked
time_source | builtin | all 1 server(s) checked
Tablet Server Summary
UUID | Address | Status | Location | Tablet Leaders | Active Scanners
----------------------------------+----------------------+---------+----------+----------------+-----------------
073eafa294ee49f99fea73c8b83d6cb0 | 127.25.124.129:44645 | HEALTHY | <none> | 1 | 0
94180891930c4216b9293319c2974fb8 | 127.25.124.131:41893 | HEALTHY | <none> | 0 | 0
b4f96236fb9e4a93bfed44c6e495ded7 | 127.25.124.130:35769 | HEALTHY | <none> | 0 | 0
Tablet Server Location Summary
Location | Count
----------+---------
<none> | 3
Unusual flags for Tablet Server:
Flag | Value | Tags | Tablet Server
----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
ipki_server_key_size | 768 | experimental | all 3 server(s) checked
local_ip_for_outbound_sockets | 127.25.124.129 | experimental | 127.25.124.129:44645
local_ip_for_outbound_sockets | 127.25.124.130 | experimental | 127.25.124.130:35769
local_ip_for_outbound_sockets | 127.25.124.131 | experimental | 127.25.124.131:41893
never_fsync | true | unsafe,advanced | all 3 server(s) checked
openssl_security_level_override | 0 | unsafe,hidden | all 3 server(s) checked
rpc_server_allow_ephemeral_ports | true | unsafe | all 3 server(s) checked
server_dump_info_format | pb | hidden | all 3 server(s) checked
server_dump_info_path | /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-0/data/info.pb | hidden | 127.25.124.129:44645
server_dump_info_path | /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-1/data/info.pb | hidden | 127.25.124.130:35769
server_dump_info_path | /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-2/data/info.pb | hidden | 127.25.124.131:41893
Flags of checked categories for Tablet Server:
Flag | Value | Tablet Server
---------------------+----------------------+-------------------------
builtin_ntp_servers | 127.25.124.148:36863 | all 3 server(s) checked
time_source | builtin | all 3 server(s) checked
Version Summary
Version | Servers
-----------------+-------------------------
1.19.0-SNAPSHOT | all 4 server(s) checked
Tablet Summary
The cluster doesn't have any matching system tables
Summary by table
Name | RF | Status | Total Tablets | Healthy | Recovering | Under-replicated | Unavailable
-------------+----+---------+---------------+---------+------------+------------------+-------------
pre_rebuild | 1 | HEALTHY | 1 | 1 | 0 | 0 | 0
Tablet Replica Count Summary
Statistic | Replica Count
----------------+---------------
Minimum | 0
First Quartile | 0
Median | 0
Third Quartile | 1
Maximum | 1
Total Count Summary
| Total Count
----------------+-------------
Masters | 1
Tablet Servers | 3
Tables | 1
Tablets | 1
Replicas | 1
==================
Warnings:
==================
Some masters have unsafe, experimental, or hidden flags set
Some tablet servers have unsafe, experimental, or hidden flags set
OK
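The Tablet Replica Count Summary in the report above is a distribution over per-tablet-server replica counts; with counts {1, 0, 0} (one replica of the RF=1 pre_rebuild tablet spread over three tablet servers), a nearest-rank quartile computation reproduces the reported Minimum 0, First Quartile 0, Median 0, Third Quartile 1, Maximum 1. The nearest-rank method here is an assumption for illustration and may differ from ksck's exact interpolation.

#include <algorithm>
#include <cmath>
#include <iostream>
#include <vector>

// Nearest-rank percentile: the value at rank ceil(p * n) in the sorted sample
// (1-based). Used here only to illustrate the replica-count summary.
int Percentile(std::vector<int> values, double p) {
  std::sort(values.begin(), values.end());
  size_t rank = static_cast<size_t>(std::ceil(p * values.size()));
  if (rank == 0) rank = 1;
  return values[rank - 1];
}

int main() {
  // Per-tablet-server replica counts: one server holds the single replica,
  // the other two hold none.
  std::vector<int> counts = {1, 0, 0};
  std::cout << "Minimum        | " << Percentile(counts, 0.0) << "\n"
            << "First Quartile | " << Percentile(counts, 0.25) << "\n"
            << "Median         | " << Percentile(counts, 0.50) << "\n"
            << "Third Quartile | " << Percentile(counts, 0.75) << "\n"
            << "Maximum        | " << Percentile(counts, 1.0) << std::endl;
  return 0;
}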
I20250809 20:00:10.013598 26098 log_verifier.cc:126] Checking tablet 5bc7b51263f8423d89931e8b5e732b32
I20250809 20:00:10.560206 343 tablet_bootstrap.cc:492] T 5bc7b51263f8423d89931e8b5e732b32 P 94180891930c4216b9293319c2974fb8: Bootstrap replayed 1/1 log segments. Stats: ops{read=205 overwritten=0 applied=205 ignored=0} inserts{seen=10200 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250809 20:00:10.561076 343 tablet_bootstrap.cc:492] T 5bc7b51263f8423d89931e8b5e732b32 P 94180891930c4216b9293319c2974fb8: Bootstrap complete.
I20250809 20:00:10.562570 343 ts_tablet_manager.cc:1397] T 5bc7b51263f8423d89931e8b5e732b32 P 94180891930c4216b9293319c2974fb8: Time spent bootstrapping tablet: real 2.559s user 2.387s sys 0.056s
I20250809 20:00:10.569118 343 raft_consensus.cc:357] T 5bc7b51263f8423d89931e8b5e732b32 P 94180891930c4216b9293319c2974fb8 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "94180891930c4216b9293319c2974fb8" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41893 } } peers { permanent_uuid: "073eafa294ee49f99fea73c8b83d6cb0" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 44645 } } peers { permanent_uuid: "b4f96236fb9e4a93bfed44c6e495ded7" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 35769 } }
I20250809 20:00:10.571550 343 raft_consensus.cc:738] T 5bc7b51263f8423d89931e8b5e732b32 P 94180891930c4216b9293319c2974fb8 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 94180891930c4216b9293319c2974fb8, State: Initialized, Role: FOLLOWER
I20250809 20:00:10.572255 343 consensus_queue.cc:260] T 5bc7b51263f8423d89931e8b5e732b32 P 94180891930c4216b9293319c2974fb8 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 205, Last appended: 1.205, Last appended by leader: 205, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "94180891930c4216b9293319c2974fb8" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41893 } } peers { permanent_uuid: "073eafa294ee49f99fea73c8b83d6cb0" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 44645 } } peers { permanent_uuid: "b4f96236fb9e4a93bfed44c6e495ded7" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 35769 } }
I20250809 20:00:10.575029 343 ts_tablet_manager.cc:1428] T 5bc7b51263f8423d89931e8b5e732b32 P 94180891930c4216b9293319c2974fb8: Time spent starting tablet: real 0.012s user 0.011s sys 0.004s
I20250809 20:00:10.579514 385 tablet_service.cc:1515] Processing DeleteTablet for tablet 5bc7b51263f8423d89931e8b5e732b32 with delete_type TABLET_DATA_TOMBSTONED (Replica has no consensus available (current committed config index is 208)) from {username='slave'} at 127.0.0.1:50920
I20250809 20:00:10.584892 517 tablet_replica.cc:331] stopping tablet replica
I20250809 20:00:10.585521 517 raft_consensus.cc:2241] T 5bc7b51263f8423d89931e8b5e732b32 P 94180891930c4216b9293319c2974fb8 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250809 20:00:10.585928 517 raft_consensus.cc:2270] T 5bc7b51263f8423d89931e8b5e732b32 P 94180891930c4216b9293319c2974fb8 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250809 20:00:10.607779 517 ts_tablet_manager.cc:1905] T 5bc7b51263f8423d89931e8b5e732b32 P 94180891930c4216b9293319c2974fb8: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20250809 20:00:10.621831 517 ts_tablet_manager.cc:1918] T 5bc7b51263f8423d89931e8b5e732b32 P 94180891930c4216b9293319c2974fb8: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 1.205
I20250809 20:00:10.622202 517 log.cc:1199] T 5bc7b51263f8423d89931e8b5e732b32 P 94180891930c4216b9293319c2974fb8: Deleting WAL directory at /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-2/wal/wals/5bc7b51263f8423d89931e8b5e732b32
I20250809 20:00:10.623670 32445 catalog_manager.cc:4928] TS 94180891930c4216b9293319c2974fb8 (127.25.124.131:41893): tablet 5bc7b51263f8423d89931e8b5e732b32 (table pre_rebuild [id=0b8c2ed43a284b5a89caf340124bd5d8]) successfully deleted
I20250809 20:00:10.676223 26098 log_verifier.cc:177] Verified matching terms for 208 ops in tablet 5bc7b51263f8423d89931e8b5e732b32
I20250809 20:00:10.678452 32458 catalog_manager.cc:2482] Servicing SoftDeleteTable request from {username='slave'} at 127.0.0.1:46860:
table { table_name: "pre_rebuild" } modify_external_catalogs: true
I20250809 20:00:10.679109 32458 catalog_manager.cc:2730] Servicing DeleteTable request from {username='slave'} at 127.0.0.1:46860:
table { table_name: "pre_rebuild" } modify_external_catalogs: true
I20250809 20:00:10.691807 32458 catalog_manager.cc:5869] T 00000000000000000000000000000000 P 9b21be77b1b74c4a8035721db223bb53: Sending DeleteTablet for 1 replicas of tablet 5bc7b51263f8423d89931e8b5e732b32
I20250809 20:00:10.693397 26098 test_util.cc:276] Using random seed: 592370428
I20250809 20:00:10.693290 32580 tablet_service.cc:1515] Processing DeleteTablet for tablet 5bc7b51263f8423d89931e8b5e732b32 with delete_type TABLET_DATA_DELETED (Table deleted at 2025-08-09 20:00:10 UTC) from {username='slave'} at 127.0.0.1:56788
I20250809 20:00:10.694912 522 tablet_replica.cc:331] stopping tablet replica
I20250809 20:00:10.695550 522 raft_consensus.cc:2241] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0 [term 2 LEADER]: Raft consensus shutting down.
I20250809 20:00:10.696369 522 raft_consensus.cc:2270] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0 [term 2 FOLLOWER]: Raft consensus is shut down!
I20250809 20:00:10.723594 522 ts_tablet_manager.cc:1905] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0: Deleting tablet data with delete state TABLET_DATA_DELETED
I20250809 20:00:10.735688 522 ts_tablet_manager.cc:1918] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0: tablet deleted with delete type TABLET_DATA_DELETED: last-logged OpId 2.208
I20250809 20:00:10.736019 522 log.cc:1199] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0: Deleting WAL directory at /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-0/wal/wals/5bc7b51263f8423d89931e8b5e732b32
I20250809 20:00:10.736783 522 ts_tablet_manager.cc:1939] T 5bc7b51263f8423d89931e8b5e732b32 P 073eafa294ee49f99fea73c8b83d6cb0: Deleting consensus metadata
I20250809 20:00:10.736842 32458 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:34492:
name: "post_rebuild"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
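The CreateTable request above describes a three-column schema (INT32 key, INT32 int_val, nullable STRING string_val), range-partitioned on key with three replicas. A client-side sketch that would produce an equivalent request, assuming the standard Kudu C++ client API (KuduClientBuilder, KuduSchemaBuilder, KuduTableCreator); the master address is a placeholder and error handling is abbreviated.

#include <cstdlib>
#include <iostream>
#include <memory>
#include <vector>

#include <kudu/client/client.h>

using kudu::Status;
using kudu::client::KuduClient;
using kudu::client::KuduClientBuilder;
using kudu::client::KuduColumnSchema;
using kudu::client::KuduSchema;
using kudu::client::KuduSchemaBuilder;
using kudu::client::KuduTableCreator;

namespace {
// Abort on error; a real client would propagate the Status instead.
void CheckOk(const Status& s) {
  if (!s.ok()) {
    std::cerr << s.ToString() << std::endl;
    std::exit(1);
  }
}
}  // namespace

int main() {
  kudu::client::sp::shared_ptr<KuduClient> client;
  // Placeholder master address; the test cluster above binds ephemeral ports.
  CheckOk(KuduClientBuilder().add_master_server_addr("127.0.0.1:7051").Build(&client));

  // Mirror the schema in the CreateTable request: INT32 key (primary key),
  // INT32 int_val, nullable STRING string_val.
  KuduSchema schema;
  KuduSchemaBuilder b;
  b.AddColumn("key")->Type(KuduColumnSchema::INT32)->NotNull()->PrimaryKey();
  b.AddColumn("int_val")->Type(KuduColumnSchema::INT32)->NotNull();
  b.AddColumn("string_val")->Type(KuduColumnSchema::STRING)->Nullable();
  CheckOk(b.Build(&schema));

  std::unique_ptr<KuduTableCreator> creator(client->NewTableCreator());
  CheckOk(creator->table_name("post_rebuild")
              .schema(&schema)
              .set_range_partition_columns({"key"})
              .num_replicas(3)
              .Create());
  return 0;
}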
I20250809 20:00:10.738907 32442 catalog_manager.cc:4928] TS 073eafa294ee49f99fea73c8b83d6cb0 (127.25.124.129:44645): tablet 5bc7b51263f8423d89931e8b5e732b32 (table pre_rebuild [id=0b8c2ed43a284b5a89caf340124bd5d8]) successfully deleted
W20250809 20:00:10.739923 32458 catalog_manager.cc:6944] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table post_rebuild in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250809 20:00:10.761576 32716 tablet_service.cc:1468] Processing CreateTablet for tablet b5e88b64091e4b29bc169d8d3594c6f9 (DEFAULT_TABLE table=post_rebuild [id=b413fbd5678646c9a47cc26e4dcc35fb]), partition=RANGE (key) PARTITION UNBOUNDED
I20250809 20:00:10.761991 32580 tablet_service.cc:1468] Processing CreateTablet for tablet b5e88b64091e4b29bc169d8d3594c6f9 (DEFAULT_TABLE table=post_rebuild [id=b413fbd5678646c9a47cc26e4dcc35fb]), partition=RANGE (key) PARTITION UNBOUNDED
I20250809 20:00:10.762745 32716 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet b5e88b64091e4b29bc169d8d3594c6f9. 1 dirs total, 0 dirs full, 0 dirs failed
I20250809 20:00:10.763046 32580 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet b5e88b64091e4b29bc169d8d3594c6f9. 1 dirs total, 0 dirs full, 0 dirs failed
I20250809 20:00:10.765885 385 tablet_service.cc:1468] Processing CreateTablet for tablet b5e88b64091e4b29bc169d8d3594c6f9 (DEFAULT_TABLE table=post_rebuild [id=b413fbd5678646c9a47cc26e4dcc35fb]), partition=RANGE (key) PARTITION UNBOUNDED
I20250809 20:00:10.766902 385 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet b5e88b64091e4b29bc169d8d3594c6f9. 1 dirs total, 0 dirs full, 0 dirs failed
I20250809 20:00:10.776903 343 tablet_bootstrap.cc:492] T b5e88b64091e4b29bc169d8d3594c6f9 P 94180891930c4216b9293319c2974fb8: Bootstrap starting.
I20250809 20:00:10.777491 530 tablet_bootstrap.cc:492] T b5e88b64091e4b29bc169d8d3594c6f9 P b4f96236fb9e4a93bfed44c6e495ded7: Bootstrap starting.
I20250809 20:00:10.780164 529 tablet_bootstrap.cc:492] T b5e88b64091e4b29bc169d8d3594c6f9 P 073eafa294ee49f99fea73c8b83d6cb0: Bootstrap starting.
I20250809 20:00:10.782282 343 tablet_bootstrap.cc:654] T b5e88b64091e4b29bc169d8d3594c6f9 P 94180891930c4216b9293319c2974fb8: Neither blocks nor log segments found. Creating new log.
I20250809 20:00:10.782855 530 tablet_bootstrap.cc:654] T b5e88b64091e4b29bc169d8d3594c6f9 P b4f96236fb9e4a93bfed44c6e495ded7: Neither blocks nor log segments found. Creating new log.
I20250809 20:00:10.784193 529 tablet_bootstrap.cc:654] T b5e88b64091e4b29bc169d8d3594c6f9 P 073eafa294ee49f99fea73c8b83d6cb0: Neither blocks nor log segments found. Creating new log.
I20250809 20:00:10.788378 343 tablet_bootstrap.cc:492] T b5e88b64091e4b29bc169d8d3594c6f9 P 94180891930c4216b9293319c2974fb8: No bootstrap required, opened a new log
I20250809 20:00:10.788641 343 ts_tablet_manager.cc:1397] T b5e88b64091e4b29bc169d8d3594c6f9 P 94180891930c4216b9293319c2974fb8: Time spent bootstrapping tablet: real 0.012s user 0.008s sys 0.000s
I20250809 20:00:10.789111 530 tablet_bootstrap.cc:492] T b5e88b64091e4b29bc169d8d3594c6f9 P b4f96236fb9e4a93bfed44c6e495ded7: No bootstrap required, opened a new log
I20250809 20:00:10.789348 529 tablet_bootstrap.cc:492] T b5e88b64091e4b29bc169d8d3594c6f9 P 073eafa294ee49f99fea73c8b83d6cb0: No bootstrap required, opened a new log
I20250809 20:00:10.789454 530 ts_tablet_manager.cc:1397] T b5e88b64091e4b29bc169d8d3594c6f9 P b4f96236fb9e4a93bfed44c6e495ded7: Time spent bootstrapping tablet: real 0.012s user 0.010s sys 0.000s
I20250809 20:00:10.789696 529 ts_tablet_manager.cc:1397] T b5e88b64091e4b29bc169d8d3594c6f9 P 073eafa294ee49f99fea73c8b83d6cb0: Time spent bootstrapping tablet: real 0.010s user 0.007s sys 0.000s
I20250809 20:00:10.790211 343 raft_consensus.cc:357] T b5e88b64091e4b29bc169d8d3594c6f9 P 94180891930c4216b9293319c2974fb8 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b4f96236fb9e4a93bfed44c6e495ded7" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 35769 } } peers { permanent_uuid: "073eafa294ee49f99fea73c8b83d6cb0" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 44645 } } peers { permanent_uuid: "94180891930c4216b9293319c2974fb8" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41893 } }
I20250809 20:00:10.790602 343 raft_consensus.cc:383] T b5e88b64091e4b29bc169d8d3594c6f9 P 94180891930c4216b9293319c2974fb8 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250809 20:00:10.790807 343 raft_consensus.cc:738] T b5e88b64091e4b29bc169d8d3594c6f9 P 94180891930c4216b9293319c2974fb8 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 94180891930c4216b9293319c2974fb8, State: Initialized, Role: FOLLOWER
I20250809 20:00:10.791307 343 consensus_queue.cc:260] T b5e88b64091e4b29bc169d8d3594c6f9 P 94180891930c4216b9293319c2974fb8 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b4f96236fb9e4a93bfed44c6e495ded7" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 35769 } } peers { permanent_uuid: "073eafa294ee49f99fea73c8b83d6cb0" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 44645 } } peers { permanent_uuid: "94180891930c4216b9293319c2974fb8" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41893 } }
I20250809 20:00:10.791857 530 raft_consensus.cc:357] T b5e88b64091e4b29bc169d8d3594c6f9 P b4f96236fb9e4a93bfed44c6e495ded7 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b4f96236fb9e4a93bfed44c6e495ded7" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 35769 } } peers { permanent_uuid: "073eafa294ee49f99fea73c8b83d6cb0" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 44645 } } peers { permanent_uuid: "94180891930c4216b9293319c2974fb8" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41893 } }
I20250809 20:00:10.792487 530 raft_consensus.cc:383] T b5e88b64091e4b29bc169d8d3594c6f9 P b4f96236fb9e4a93bfed44c6e495ded7 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250809 20:00:10.792178 529 raft_consensus.cc:357] T b5e88b64091e4b29bc169d8d3594c6f9 P 073eafa294ee49f99fea73c8b83d6cb0 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b4f96236fb9e4a93bfed44c6e495ded7" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 35769 } } peers { permanent_uuid: "073eafa294ee49f99fea73c8b83d6cb0" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 44645 } } peers { permanent_uuid: "94180891930c4216b9293319c2974fb8" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41893 } }
I20250809 20:00:10.792752 529 raft_consensus.cc:383] T b5e88b64091e4b29bc169d8d3594c6f9 P 073eafa294ee49f99fea73c8b83d6cb0 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250809 20:00:10.792739 530 raft_consensus.cc:738] T b5e88b64091e4b29bc169d8d3594c6f9 P b4f96236fb9e4a93bfed44c6e495ded7 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: b4f96236fb9e4a93bfed44c6e495ded7, State: Initialized, Role: FOLLOWER
I20250809 20:00:10.793001 529 raft_consensus.cc:738] T b5e88b64091e4b29bc169d8d3594c6f9 P 073eafa294ee49f99fea73c8b83d6cb0 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 073eafa294ee49f99fea73c8b83d6cb0, State: Initialized, Role: FOLLOWER
I20250809 20:00:10.793349 530 consensus_queue.cc:260] T b5e88b64091e4b29bc169d8d3594c6f9 P b4f96236fb9e4a93bfed44c6e495ded7 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b4f96236fb9e4a93bfed44c6e495ded7" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 35769 } } peers { permanent_uuid: "073eafa294ee49f99fea73c8b83d6cb0" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 44645 } } peers { permanent_uuid: "94180891930c4216b9293319c2974fb8" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41893 } }
I20250809 20:00:10.793686 529 consensus_queue.cc:260] T b5e88b64091e4b29bc169d8d3594c6f9 P 073eafa294ee49f99fea73c8b83d6cb0 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b4f96236fb9e4a93bfed44c6e495ded7" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 35769 } } peers { permanent_uuid: "073eafa294ee49f99fea73c8b83d6cb0" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 44645 } } peers { permanent_uuid: "94180891930c4216b9293319c2974fb8" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41893 } }
I20250809 20:00:10.799060 529 ts_tablet_manager.cc:1428] T b5e88b64091e4b29bc169d8d3594c6f9 P 073eafa294ee49f99fea73c8b83d6cb0: Time spent starting tablet: real 0.009s user 0.005s sys 0.003s
I20250809 20:00:10.800345 343 ts_tablet_manager.cc:1428] T b5e88b64091e4b29bc169d8d3594c6f9 P 94180891930c4216b9293319c2974fb8: Time spent starting tablet: real 0.012s user 0.006s sys 0.000s
I20250809 20:00:10.801348 530 ts_tablet_manager.cc:1428] T b5e88b64091e4b29bc169d8d3594c6f9 P b4f96236fb9e4a93bfed44c6e495ded7: Time spent starting tablet: real 0.012s user 0.005s sys 0.008s
I20250809 20:00:10.817831 534 raft_consensus.cc:491] T b5e88b64091e4b29bc169d8d3594c6f9 P b4f96236fb9e4a93bfed44c6e495ded7 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250809 20:00:10.818104 534 raft_consensus.cc:513] T b5e88b64091e4b29bc169d8d3594c6f9 P b4f96236fb9e4a93bfed44c6e495ded7 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b4f96236fb9e4a93bfed44c6e495ded7" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 35769 } } peers { permanent_uuid: "073eafa294ee49f99fea73c8b83d6cb0" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 44645 } } peers { permanent_uuid: "94180891930c4216b9293319c2974fb8" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41893 } }
I20250809 20:00:10.820084 534 leader_election.cc:290] T b5e88b64091e4b29bc169d8d3594c6f9 P b4f96236fb9e4a93bfed44c6e495ded7 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 073eafa294ee49f99fea73c8b83d6cb0 (127.25.124.129:44645), 94180891930c4216b9293319c2974fb8 (127.25.124.131:41893)
I20250809 20:00:10.831477 32600 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "b5e88b64091e4b29bc169d8d3594c6f9" candidate_uuid: "b4f96236fb9e4a93bfed44c6e495ded7" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "073eafa294ee49f99fea73c8b83d6cb0" is_pre_election: true
I20250809 20:00:10.832137 32600 raft_consensus.cc:2466] T b5e88b64091e4b29bc169d8d3594c6f9 P 073eafa294ee49f99fea73c8b83d6cb0 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate b4f96236fb9e4a93bfed44c6e495ded7 in term 0.
I20250809 20:00:10.833138 32668 leader_election.cc:304] T b5e88b64091e4b29bc169d8d3594c6f9 P b4f96236fb9e4a93bfed44c6e495ded7 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 073eafa294ee49f99fea73c8b83d6cb0, b4f96236fb9e4a93bfed44c6e495ded7; no voters:
I20250809 20:00:10.833583 405 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "b5e88b64091e4b29bc169d8d3594c6f9" candidate_uuid: "b4f96236fb9e4a93bfed44c6e495ded7" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "94180891930c4216b9293319c2974fb8" is_pre_election: true
I20250809 20:00:10.833730 534 raft_consensus.cc:2802] T b5e88b64091e4b29bc169d8d3594c6f9 P b4f96236fb9e4a93bfed44c6e495ded7 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250809 20:00:10.833961 534 raft_consensus.cc:491] T b5e88b64091e4b29bc169d8d3594c6f9 P b4f96236fb9e4a93bfed44c6e495ded7 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250809 20:00:10.834053 405 raft_consensus.cc:2466] T b5e88b64091e4b29bc169d8d3594c6f9 P 94180891930c4216b9293319c2974fb8 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate b4f96236fb9e4a93bfed44c6e495ded7 in term 0.
I20250809 20:00:10.834196 534 raft_consensus.cc:3058] T b5e88b64091e4b29bc169d8d3594c6f9 P b4f96236fb9e4a93bfed44c6e495ded7 [term 0 FOLLOWER]: Advancing to term 1
I20250809 20:00:10.838166 534 raft_consensus.cc:513] T b5e88b64091e4b29bc169d8d3594c6f9 P b4f96236fb9e4a93bfed44c6e495ded7 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b4f96236fb9e4a93bfed44c6e495ded7" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 35769 } } peers { permanent_uuid: "073eafa294ee49f99fea73c8b83d6cb0" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 44645 } } peers { permanent_uuid: "94180891930c4216b9293319c2974fb8" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41893 } }
I20250809 20:00:10.839599 534 leader_election.cc:290] T b5e88b64091e4b29bc169d8d3594c6f9 P b4f96236fb9e4a93bfed44c6e495ded7 [CANDIDATE]: Term 1 election: Requested vote from peers 073eafa294ee49f99fea73c8b83d6cb0 (127.25.124.129:44645), 94180891930c4216b9293319c2974fb8 (127.25.124.131:41893)
I20250809 20:00:10.840171 32600 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "b5e88b64091e4b29bc169d8d3594c6f9" candidate_uuid: "b4f96236fb9e4a93bfed44c6e495ded7" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "073eafa294ee49f99fea73c8b83d6cb0"
I20250809 20:00:10.840473 405 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "b5e88b64091e4b29bc169d8d3594c6f9" candidate_uuid: "b4f96236fb9e4a93bfed44c6e495ded7" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "94180891930c4216b9293319c2974fb8"
I20250809 20:00:10.840622 32600 raft_consensus.cc:3058] T b5e88b64091e4b29bc169d8d3594c6f9 P 073eafa294ee49f99fea73c8b83d6cb0 [term 0 FOLLOWER]: Advancing to term 1
I20250809 20:00:10.840917 405 raft_consensus.cc:3058] T b5e88b64091e4b29bc169d8d3594c6f9 P 94180891930c4216b9293319c2974fb8 [term 0 FOLLOWER]: Advancing to term 1
I20250809 20:00:10.846691 32600 raft_consensus.cc:2466] T b5e88b64091e4b29bc169d8d3594c6f9 P 073eafa294ee49f99fea73c8b83d6cb0 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate b4f96236fb9e4a93bfed44c6e495ded7 in term 1.
I20250809 20:00:10.846943 405 raft_consensus.cc:2466] T b5e88b64091e4b29bc169d8d3594c6f9 P 94180891930c4216b9293319c2974fb8 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate b4f96236fb9e4a93bfed44c6e495ded7 in term 1.
I20250809 20:00:10.847600 32668 leader_election.cc:304] T b5e88b64091e4b29bc169d8d3594c6f9 P b4f96236fb9e4a93bfed44c6e495ded7 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 073eafa294ee49f99fea73c8b83d6cb0, b4f96236fb9e4a93bfed44c6e495ded7; no voters:
I20250809 20:00:10.848135 534 raft_consensus.cc:2802] T b5e88b64091e4b29bc169d8d3594c6f9 P b4f96236fb9e4a93bfed44c6e495ded7 [term 1 FOLLOWER]: Leader election won for term 1
I20250809 20:00:10.849510 534 raft_consensus.cc:695] T b5e88b64091e4b29bc169d8d3594c6f9 P b4f96236fb9e4a93bfed44c6e495ded7 [term 1 LEADER]: Becoming Leader. State: Replica: b4f96236fb9e4a93bfed44c6e495ded7, State: Running, Role: LEADER
I20250809 20:00:10.850106 534 consensus_queue.cc:237] T b5e88b64091e4b29bc169d8d3594c6f9 P b4f96236fb9e4a93bfed44c6e495ded7 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b4f96236fb9e4a93bfed44c6e495ded7" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 35769 } } peers { permanent_uuid: "073eafa294ee49f99fea73c8b83d6cb0" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 44645 } } peers { permanent_uuid: "94180891930c4216b9293319c2974fb8" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41893 } }
I20250809 20:00:10.856109 32456 catalog_manager.cc:5582] T b5e88b64091e4b29bc169d8d3594c6f9 P b4f96236fb9e4a93bfed44c6e495ded7 reported cstate change: term changed from 0 to 1, leader changed from <none> to b4f96236fb9e4a93bfed44c6e495ded7 (127.25.124.130). New cstate: current_term: 1 leader_uuid: "b4f96236fb9e4a93bfed44c6e495ded7" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b4f96236fb9e4a93bfed44c6e495ded7" member_type: VOTER last_known_addr { host: "127.25.124.130" port: 35769 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "073eafa294ee49f99fea73c8b83d6cb0" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 44645 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "94180891930c4216b9293319c2974fb8" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41893 } health_report { overall_health: UNKNOWN } } }
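The sequence above is the two-round election Kudu runs for the new tablet: a pre-election whose votes are granted without touching any term (both peers answer "in term 0"), and only after a majority of pre-votes does the candidate advance to term 1 and run the real election. A compact sketch of the majority accounting shared by both rounds, as an illustration rather than Kudu's leader_election.cc:

#include <cstdint>
#include <iostream>
#include <vector>

// Count yes votes (the candidate always votes for itself) and compare against
// a strict majority of the voter set.
bool WinsElection(int num_voters, const std::vector<bool>& peer_votes) {
  int yes = 1;  // candidate's own vote
  for (bool v : peer_votes) yes += v ? 1 : 0;
  return yes > num_voters / 2;
}

int main() {
  const int kVoters = 3;
  int64_t term = 0;

  // Round 1: pre-election. Both peers answer yes; the term is untouched.
  if (!WinsElection(kVoters, {true, true})) return 0;

  // Round 2: only now advance the term and ask for real votes.
  ++term;
  bool won = WinsElection(kVoters, {true, true});
  std::cout << "term " << term << (won ? ": leader" : ": follower") << std::endl;
  return 0;
}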
W20250809 20:00:10.912427 451 tablet.cc:2378] T b5e88b64091e4b29bc169d8d3594c6f9 P 94180891930c4216b9293319c2974fb8: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250809 20:00:10.940089 314 tablet.cc:2378] T b5e88b64091e4b29bc169d8d3594c6f9 P b4f96236fb9e4a93bfed44c6e495ded7: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250809 20:00:10.954722 32646 tablet.cc:2378] T b5e88b64091e4b29bc169d8d3594c6f9 P 073eafa294ee49f99fea73c8b83d6cb0: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250809 20:00:11.016999 405 raft_consensus.cc:1273] T b5e88b64091e4b29bc169d8d3594c6f9 P 94180891930c4216b9293319c2974fb8 [term 1 FOLLOWER]: Refusing update from remote peer b4f96236fb9e4a93bfed44c6e495ded7: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250809 20:00:11.017488 32600 raft_consensus.cc:1273] T b5e88b64091e4b29bc169d8d3594c6f9 P 073eafa294ee49f99fea73c8b83d6cb0 [term 1 FOLLOWER]: Refusing update from remote peer b4f96236fb9e4a93bfed44c6e495ded7: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250809 20:00:11.018241 540 consensus_queue.cc:1035] T b5e88b64091e4b29bc169d8d3594c6f9 P b4f96236fb9e4a93bfed44c6e495ded7 [LEADER]: Connected to new peer: Peer: permanent_uuid: "94180891930c4216b9293319c2974fb8" member_type: VOTER last_known_addr { host: "127.25.124.131" port: 41893 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250809 20:00:11.018877 534 consensus_queue.cc:1035] T b5e88b64091e4b29bc169d8d3594c6f9 P b4f96236fb9e4a93bfed44c6e495ded7 [LEADER]: Connected to new peer: Peer: permanent_uuid: "073eafa294ee49f99fea73c8b83d6cb0" member_type: VOTER last_known_addr { host: "127.25.124.129" port: 44645 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250809 20:00:11.043905 549 mvcc.cc:204] Tried to move back new op lower bound from 7187536326714273792 to 7187536326048186368. Current Snapshot: MvccSnapshot[applied={T|T < 7187536326714273792}]
I20250809 20:00:11.044586 550 mvcc.cc:204] Tried to move back new op lower bound from 7187536326714273792 to 7187536326048186368. Current Snapshot: MvccSnapshot[applied={T|T < 7187536326714273792}]
I20250809 20:00:11.047974 548 mvcc.cc:204] Tried to move back new op lower bound from 7187536326714273792 to 7187536326048186368. Current Snapshot: MvccSnapshot[applied={T|T < 7187536326714273792}]
I20250809 20:00:15.012881 32580 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250809 20:00:15.013200 385 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250809 20:00:15.020730 32716 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
Master Summary
UUID | Address | Status
----------------------------------+----------------------+---------
9b21be77b1b74c4a8035721db223bb53 | 127.25.124.190:44801 | HEALTHY
Unusual flags for Master:
Flag | Value | Tags | Master
----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
ipki_ca_key_size | 768 | experimental | all 1 server(s) checked
ipki_server_key_size | 768 | experimental | all 1 server(s) checked
never_fsync | true | unsafe,advanced | all 1 server(s) checked
openssl_security_level_override | 0 | unsafe,hidden | all 1 server(s) checked
rpc_reuseport | true | experimental | all 1 server(s) checked
rpc_server_allow_ephemeral_ports | true | unsafe | all 1 server(s) checked
server_dump_info_format | pb | hidden | all 1 server(s) checked
server_dump_info_path | /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/master-0/data/info.pb | hidden | all 1 server(s) checked
tsk_num_rsa_bits | 512 | experimental | all 1 server(s) checked
Flags of checked categories for Master:
Flag | Value | Master
---------------------+----------------------+-------------------------
builtin_ntp_servers | 127.25.124.148:36863 | all 1 server(s) checked
time_source | builtin | all 1 server(s) checked
Tablet Server Summary
UUID | Address | Status | Location | Tablet Leaders | Active Scanners
----------------------------------+----------------------+---------+----------+----------------+-----------------
073eafa294ee49f99fea73c8b83d6cb0 | 127.25.124.129:44645 | HEALTHY | <none> | 0 | 0
94180891930c4216b9293319c2974fb8 | 127.25.124.131:41893 | HEALTHY | <none> | 0 | 0
b4f96236fb9e4a93bfed44c6e495ded7 | 127.25.124.130:35769 | HEALTHY | <none> | 1 | 0
Tablet Server Location Summary
Location | Count
----------+---------
<none> | 3
Unusual flags for Tablet Server:
Flag | Value | Tags | Tablet Server
----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
ipki_server_key_size | 768 | experimental | all 3 server(s) checked
local_ip_for_outbound_sockets | 127.25.124.129 | experimental | 127.25.124.129:44645
local_ip_for_outbound_sockets | 127.25.124.130 | experimental | 127.25.124.130:35769
local_ip_for_outbound_sockets | 127.25.124.131 | experimental | 127.25.124.131:41893
never_fsync | true | unsafe,advanced | all 3 server(s) checked
openssl_security_level_override | 0 | unsafe,hidden | all 3 server(s) checked
rpc_server_allow_ephemeral_ports | true | unsafe | all 3 server(s) checked
server_dump_info_format | pb | hidden | all 3 server(s) checked
server_dump_info_path | /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-0/data/info.pb | hidden | 127.25.124.129:44645
server_dump_info_path | /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-1/data/info.pb | hidden | 127.25.124.130:35769
server_dump_info_path | /tmp/dist-test-tasklhVJcm/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754769444608376-26098-0/minicluster-data/ts-2/data/info.pb | hidden | 127.25.124.131:41893
Flags of checked categories for Tablet Server:
Flag | Value | Tablet Server
---------------------+----------------------+-------------------------
builtin_ntp_servers | 127.25.124.148:36863 | all 3 server(s) checked
time_source | builtin | all 3 server(s) checked
Version Summary
Version | Servers
-----------------+-------------------------
1.19.0-SNAPSHOT | all 4 server(s) checked
Tablet Summary
The cluster doesn't have any matching system tables
Summary by table
Name | RF | Status | Total Tablets | Healthy | Recovering | Under-replicated | Unavailable
--------------+----+---------+---------------+---------+------------+------------------+-------------
post_rebuild | 3 | HEALTHY | 1 | 1 | 0 | 0 | 0
Tablet Replica Count Summary
Statistic | Replica Count
----------------+---------------
Minimum | 1
First Quartile | 1
Median | 1
Third Quartile | 1
Maximum | 1
Total Count Summary
| Total Count
----------------+-------------
Masters | 1
Tablet Servers | 3
Tables | 1
Tablets | 1
Replicas | 3
==================
Warnings:
==================
Some masters have unsafe, experimental, or hidden flags set
Some tablet servers have unsafe, experimental, or hidden flags set
OK
I20250809 20:00:15.201246 26098 log_verifier.cc:126] Checking tablet 5bc7b51263f8423d89931e8b5e732b32
I20250809 20:00:15.201834 26098 log_verifier.cc:177] Verified matching terms for 0 ops in tablet 5bc7b51263f8423d89931e8b5e732b32
I20250809 20:00:15.202107 26098 log_verifier.cc:126] Checking tablet b5e88b64091e4b29bc169d8d3594c6f9
I20250809 20:00:15.821946 26098 log_verifier.cc:177] Verified matching terms for 205 ops in tablet b5e88b64091e4b29bc169d8d3594c6f9
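The "Verified matching terms" lines come from the test's log verifier, which replays each replica's WAL and checks that no two replicas ever assigned different terms to the same log index, since such a divergence in committed history would indicate a consensus bug. A minimal sketch of that cross-replica check over index-to-term maps, as an illustration of the invariant rather than log_verifier.cc itself:

#include <cstdint>
#include <iostream>
#include <map>
#include <vector>

// Verify that every log index present in any replica's WAL carries the same
// term everywhere. Returns the number of distinct indexes checked, or -1 on a
// divergence.
int64_t VerifyMatchingTerms(const std::vector<std::map<int64_t, int64_t>>& replicas) {
  std::map<int64_t, int64_t> canonical;  // index -> term seen so far
  for (const auto& wal : replicas) {
    for (const auto& [index, term] : wal) {
      auto [it, inserted] = canonical.emplace(index, term);
      if (!inserted && it->second != term) return -1;  // same index, different terms
    }
  }
  return static_cast<int64_t>(canonical.size());
}

int main() {
  // Three replicas that agree: ops 1..205 in term 1, ops 206..208 in term 2,
  // plus one replica that stopped at 1.205 before being tombstoned.
  std::map<int64_t, int64_t> a, b, c;
  for (int64_t i = 1; i <= 208; ++i) a[i] = i <= 205 ? 1 : 2;
  b = a;
  for (int64_t i = 1; i <= 205; ++i) c[i] = 1;
  std::cout << "verified matching terms for "
            << VerifyMatchingTerms({a, b, c}) << " ops" << std::endl;
  return 0;
}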
I20250809 20:00:15.843694 26098 external_mini_cluster.cc:1658] Killing /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu with pid 32495
I20250809 20:00:15.877362 26098 external_mini_cluster.cc:1658] Killing /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu with pid 32649
I20250809 20:00:15.912675 26098 external_mini_cluster.cc:1658] Killing /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu with pid 317
I20250809 20:00:15.943528 26098 external_mini_cluster.cc:1658] Killing /tmp/dist-test-tasklhVJcm/build/tsan/bin/kudu with pid 32425
2025-08-09T20:00:15Z chronyd exiting
[ OK ] IsSecure/SecureClusterAdminCliParamTest.TestRebuildMaster/0 (30883 ms)
[----------] 1 test from IsSecure/SecureClusterAdminCliParamTest (30883 ms total)
[----------] Global test environment tear-down
[==========] 9 tests from 5 test suites ran. (171330 ms total)
[ PASSED ] 8 tests.
[ FAILED ] 1 test, listed below:
[ FAILED ] AdminCliTest.TestRebuildTables
1 FAILED TEST