Note: This is test shard 6 of 8.
[==========] Running 9 tests from 5 test suites.
[----------] Global test environment set-up.
[----------] 5 tests from AdminCliTest
[ RUN      ] AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes
WARNING: Logging before InitGoogleLogging() is written to STDERR
I20250629 01:57:14.289456 17741 test_util.cc:276] Using random seed: 988495039
W20250629 01:57:15.462601 17741 thread.cc:640] rpc reactor (reactor) Time spent creating pthread: real 1.133s user 0.434s sys 0.696s
W20250629 01:57:15.462893 17741 thread.cc:606] rpc reactor (reactor) Time spent starting thread: real 1.133s user 0.434s sys 0.696s
I20250629 01:57:15.480410 17741 ts_itest-base.cc:115] Starting cluster with:
I20250629 01:57:15.480592 17741 ts_itest-base.cc:116] --------------
I20250629 01:57:15.480741 17741 ts_itest-base.cc:117] 4 tablet servers
I20250629 01:57:15.480896 17741 ts_itest-base.cc:118] 3 replicas per TS
I20250629 01:57:15.481053 17741 ts_itest-base.cc:119] --------------
2025-06-29T01:57:15Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-06-29T01:57:15Z Disabled control of system clock
I20250629 01:57:15.514638 17741 external_mini_cluster.cc:1351] Running /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
/tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.17.83.126:46827
--webserver_interface=127.17.83.126
--webserver_port=0
--builtin_ntp_servers=127.17.83.84:37271
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.17.83.126:46827 with env {}
W20250629 01:57:15.796096 17755 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:57:15.796639 17755 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250629 01:57:15.797030 17755 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:57:15.824818 17755 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250629 01:57:15.825098 17755 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250629 01:57:15.825309 17755 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250629 01:57:15.825510 17755 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250629 01:57:15.857602 17755 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.17.83.84:37271
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.17.83.126:46827
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.17.83.126:46827
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.17.83.126
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.18.0-SNAPSHOT
revision f0776864a65c19ebdd909053f3e624e8bcc25cee
build type FASTDEBUG
built by None at 29 Jun 2025 01:43:20 UTC on 5fd53c4cbb9d
build id 6821
TSAN enabled
I20250629 01:57:15.858755 17755 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250629 01:57:15.860224 17755 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250629 01:57:15.873463 17761 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:57:15.873868 17762 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:57:15.874775 17764 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:57:15.875151 17755 server_base.cc:1048] running on GCE node
I20250629 01:57:17.002768 17755 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250629 01:57:17.005182 17755 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250629 01:57:17.006500 17755 hybrid_clock.cc:648] HybridClock initialized: now 1751162237006463 us; error 54 us; skew 500 ppm
I20250629 01:57:17.007263 17755 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250629 01:57:17.018157 17755 webserver.cc:469] Webserver started at http://127.17.83.126:43665/ using document root <none> and password file <none>
I20250629 01:57:17.019045 17755 fs_manager.cc:362] Metadata directory not provided
I20250629 01:57:17.019251 17755 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250629 01:57:17.019680 17755 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250629 01:57:17.028245 17755 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "71666a48a24845af945928297d288a83"
format_stamp: "Formatted at 2025-06-29 01:57:17 on dist-test-slave-v1mb"
I20250629 01:57:17.029263 17755 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "71666a48a24845af945928297d288a83"
format_stamp: "Formatted at 2025-06-29 01:57:17 on dist-test-slave-v1mb"
I20250629 01:57:17.035986 17755 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.007s sys 0.001s
I20250629 01:57:17.041208 17771 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250629 01:57:17.042150 17755 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.004s sys 0.001s
I20250629 01:57:17.042446 17755 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
uuid: "71666a48a24845af945928297d288a83"
format_stamp: "Formatted at 2025-06-29 01:57:17 on dist-test-slave-v1mb"
I20250629 01:57:17.042757 17755 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250629 01:57:17.091054 17755 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250629 01:57:17.092458 17755 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250629 01:57:17.092831 17755 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250629 01:57:17.159341 17755 rpc_server.cc:307] RPC server started. Bound to: 127.17.83.126:46827
I20250629 01:57:17.159477 17822 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.17.83.126:46827 every 8 connection(s)
I20250629 01:57:17.161824 17755 server_base.cc:1180] Dumped server information to /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250629 01:57:17.162719 17741 external_mini_cluster.cc:1413] Started /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu as pid 17755
I20250629 01:57:17.163117 17741 external_mini_cluster.cc:1427] Reading /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250629 01:57:17.167474 17823 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250629 01:57:17.184458 17823 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 71666a48a24845af945928297d288a83: Bootstrap starting.
I20250629 01:57:17.189304 17823 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 71666a48a24845af945928297d288a83: Neither blocks nor log segments found. Creating new log.
I20250629 01:57:17.190925 17823 log.cc:826] T 00000000000000000000000000000000 P 71666a48a24845af945928297d288a83: Log is configured to *not* fsync() on all Append() calls
I20250629 01:57:17.195858 17823 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 71666a48a24845af945928297d288a83: No bootstrap required, opened a new log
I20250629 01:57:17.213590 17823 raft_consensus.cc:357] T 00000000000000000000000000000000 P 71666a48a24845af945928297d288a83 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "71666a48a24845af945928297d288a83" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 46827 } }
I20250629 01:57:17.214190 17823 raft_consensus.cc:383] T 00000000000000000000000000000000 P 71666a48a24845af945928297d288a83 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250629 01:57:17.214368 17823 raft_consensus.cc:738] T 00000000000000000000000000000000 P 71666a48a24845af945928297d288a83 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 71666a48a24845af945928297d288a83, State: Initialized, Role: FOLLOWER
I20250629 01:57:17.214895 17823 consensus_queue.cc:260] T 00000000000000000000000000000000 P 71666a48a24845af945928297d288a83 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "71666a48a24845af945928297d288a83" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 46827 } }
I20250629 01:57:17.215381 17823 raft_consensus.cc:397] T 00000000000000000000000000000000 P 71666a48a24845af945928297d288a83 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250629 01:57:17.215704 17823 raft_consensus.cc:491] T 00000000000000000000000000000000 P 71666a48a24845af945928297d288a83 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250629 01:57:17.216054 17823 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 71666a48a24845af945928297d288a83 [term 0 FOLLOWER]: Advancing to term 1
I20250629 01:57:17.219985 17823 raft_consensus.cc:513] T 00000000000000000000000000000000 P 71666a48a24845af945928297d288a83 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "71666a48a24845af945928297d288a83" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 46827 } }
I20250629 01:57:17.220640 17823 leader_election.cc:304] T 00000000000000000000000000000000 P 71666a48a24845af945928297d288a83 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 71666a48a24845af945928297d288a83; no voters:
I20250629 01:57:17.222210 17823 leader_election.cc:290] T 00000000000000000000000000000000 P 71666a48a24845af945928297d288a83 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250629 01:57:17.222868 17828 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 71666a48a24845af945928297d288a83 [term 1 FOLLOWER]: Leader election won for term 1
I20250629 01:57:17.224854 17828 raft_consensus.cc:695] T 00000000000000000000000000000000 P 71666a48a24845af945928297d288a83 [term 1 LEADER]: Becoming Leader. State: Replica: 71666a48a24845af945928297d288a83, State: Running, Role: LEADER
I20250629 01:57:17.225586 17828 consensus_queue.cc:237] T 00000000000000000000000000000000 P 71666a48a24845af945928297d288a83 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "71666a48a24845af945928297d288a83" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 46827 } }
I20250629 01:57:17.226852 17823 sys_catalog.cc:564] T 00000000000000000000000000000000 P 71666a48a24845af945928297d288a83 [sys.catalog]: configured and running, proceeding with master startup.
I20250629 01:57:17.235910 17829 sys_catalog.cc:455] T 00000000000000000000000000000000 P 71666a48a24845af945928297d288a83 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "71666a48a24845af945928297d288a83" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "71666a48a24845af945928297d288a83" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 46827 } } }
I20250629 01:57:17.237548 17829 sys_catalog.cc:458] T 00000000000000000000000000000000 P 71666a48a24845af945928297d288a83 [sys.catalog]: This master's current role is: LEADER
I20250629 01:57:17.236918 17830 sys_catalog.cc:455] T 00000000000000000000000000000000 P 71666a48a24845af945928297d288a83 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 71666a48a24845af945928297d288a83. Latest consensus state: current_term: 1 leader_uuid: "71666a48a24845af945928297d288a83" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "71666a48a24845af945928297d288a83" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 46827 } } }
I20250629 01:57:17.238804 17830 sys_catalog.cc:458] T 00000000000000000000000000000000 P 71666a48a24845af945928297d288a83 [sys.catalog]: This master's current role is: LEADER
I20250629 01:57:17.240598 17838 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250629 01:57:17.252351 17838 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250629 01:57:17.269325 17838 catalog_manager.cc:1349] Generated new cluster ID: 9f29126063ae4a28a84c0c22cff9dbfc
I20250629 01:57:17.269542 17838 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250629 01:57:17.285701 17838 catalog_manager.cc:1372] Generated new certificate authority record
I20250629 01:57:17.287027 17838 catalog_manager.cc:1506] Loading token signing keys...
I20250629 01:57:17.299849 17838 catalog_manager.cc:5955] T 00000000000000000000000000000000 P 71666a48a24845af945928297d288a83: Generated new TSK 0
I20250629 01:57:17.300750 17838 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250629 01:57:17.320482 17741 external_mini_cluster.cc:1351] Running /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
/tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.17.83.65:0
--local_ip_for_outbound_sockets=127.17.83.65
--webserver_interface=127.17.83.65
--webserver_port=0
--tserver_master_addrs=127.17.83.126:46827
--builtin_ntp_servers=127.17.83.84:37271
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
W20250629 01:57:17.605207 17847 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:57:17.605703 17847 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250629 01:57:17.606148 17847 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:57:17.634564 17847 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250629 01:57:17.635388 17847 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.17.83.65
I20250629 01:57:17.672055 17847 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.17.83.84:37271
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.17.83.65:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.17.83.65
--webserver_port=0
--tserver_master_addrs=127.17.83.126:46827
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.17.83.65
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision f0776864a65c19ebdd909053f3e624e8bcc25cee
build type FASTDEBUG
built by None at 29 Jun 2025 01:43:20 UTC on 5fd53c4cbb9d
build id 6821
TSAN enabled
I20250629 01:57:17.673211 17847 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250629 01:57:17.675091 17847 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250629 01:57:17.692911 17854 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:57:17.693333 17853 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:57:17.698642 17856 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:57:18.844376 17855 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection time-out
I20250629 01:57:18.844756 17847 server_base.cc:1043] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250629 01:57:18.848948 17847 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250629 01:57:18.851614 17847 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250629 01:57:18.853085 17847 hybrid_clock.cc:648] HybridClock initialized: now 1751162238853033 us; error 50 us; skew 500 ppm
I20250629 01:57:18.854113 17847 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250629 01:57:18.861421 17847 webserver.cc:469] Webserver started at http://127.17.83.65:41479/ using document root <none> and password file <none>
I20250629 01:57:18.862578 17847 fs_manager.cc:362] Metadata directory not provided
I20250629 01:57:18.862838 17847 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250629 01:57:18.863390 17847 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250629 01:57:18.869777 17847 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "379ee11d280540bfa3a9fa5f1d63a4b7"
format_stamp: "Formatted at 2025-06-29 01:57:18 on dist-test-slave-v1mb"
I20250629 01:57:18.871240 17847 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "379ee11d280540bfa3a9fa5f1d63a4b7"
format_stamp: "Formatted at 2025-06-29 01:57:18 on dist-test-slave-v1mb"
I20250629 01:57:18.880169 17847 fs_manager.cc:696] Time spent creating directory manager: real 0.008s user 0.008s sys 0.000s
I20250629 01:57:18.886926 17863 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250629 01:57:18.887987 17847 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.003s sys 0.001s
I20250629 01:57:18.888307 17847 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "379ee11d280540bfa3a9fa5f1d63a4b7"
format_stamp: "Formatted at 2025-06-29 01:57:18 on dist-test-slave-v1mb"
I20250629 01:57:18.888619 17847 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250629 01:57:18.944036 17847 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250629 01:57:18.945469 17847 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250629 01:57:18.945896 17847 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250629 01:57:18.948660 17847 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250629 01:57:18.952360 17847 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250629 01:57:18.952553 17847 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250629 01:57:18.952759 17847 ts_tablet_manager.cc:610] Registered 0 tablets
I20250629 01:57:18.952893 17847 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250629 01:57:19.091183 17847 rpc_server.cc:307] RPC server started. Bound to: 127.17.83.65:45377
I20250629 01:57:19.091322 17975 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.17.83.65:45377 every 8 connection(s)
I20250629 01:57:19.093561 17847 server_base.cc:1180] Dumped server information to /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250629 01:57:19.097275 17741 external_mini_cluster.cc:1413] Started /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu as pid 17847
I20250629 01:57:19.097764 17741 external_mini_cluster.cc:1427] Reading /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250629 01:57:19.105147 17741 external_mini_cluster.cc:1351] Running /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
/tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.17.83.66:0
--local_ip_for_outbound_sockets=127.17.83.66
--webserver_interface=127.17.83.66
--webserver_port=0
--tserver_master_addrs=127.17.83.126:46827
--builtin_ntp_servers=127.17.83.84:37271
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250629 01:57:19.117750 17976 heartbeater.cc:344] Connected to a master server at 127.17.83.126:46827
I20250629 01:57:19.118148 17976 heartbeater.cc:461] Registering TS with master...
I20250629 01:57:19.119110 17976 heartbeater.cc:507] Master 127.17.83.126:46827 requested a full tablet report, sending...
I20250629 01:57:19.121809 17788 ts_manager.cc:194] Registered new tserver with Master: 379ee11d280540bfa3a9fa5f1d63a4b7 (127.17.83.65:45377)
I20250629 01:57:19.124691 17788 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.17.83.65:38935
W20250629 01:57:19.391090 17980 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:57:19.391579 17980 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250629 01:57:19.392004 17980 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:57:19.419858 17980 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250629 01:57:19.420575 17980 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.17.83.66
I20250629 01:57:19.452333 17980 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.17.83.84:37271
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.17.83.66:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.17.83.66
--webserver_port=0
--tserver_master_addrs=127.17.83.126:46827
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.17.83.66
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision f0776864a65c19ebdd909053f3e624e8bcc25cee
build type FASTDEBUG
built by None at 29 Jun 2025 01:43:20 UTC on 5fd53c4cbb9d
build id 6821
TSAN enabled
I20250629 01:57:19.453455 17980 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250629 01:57:19.454993 17980 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250629 01:57:19.471715 17987 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:57:20.128579 17976 heartbeater.cc:499] Master 127.17.83.126:46827 was elected leader, sending a full tablet report...
W20250629 01:57:19.472631 17986 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:57:19.473387 17980 server_base.cc:1048] running on GCE node
W20250629 01:57:19.472525 17989 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:57:20.598932 17980 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250629 01:57:20.601070 17980 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250629 01:57:20.602427 17980 hybrid_clock.cc:648] HybridClock initialized: now 1751162240602374 us; error 44 us; skew 500 ppm
I20250629 01:57:20.603124 17980 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250629 01:57:20.609025 17980 webserver.cc:469] Webserver started at http://127.17.83.66:42045/ using document root <none> and password file <none>
I20250629 01:57:20.609871 17980 fs_manager.cc:362] Metadata directory not provided
I20250629 01:57:20.610045 17980 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250629 01:57:20.610421 17980 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250629 01:57:20.614645 17980 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data/instance:
uuid: "d7b0107584e2409880bf436f68e36cc5"
format_stamp: "Formatted at 2025-06-29 01:57:20 on dist-test-slave-v1mb"
I20250629 01:57:20.615744 17980 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal/instance:
uuid: "d7b0107584e2409880bf436f68e36cc5"
format_stamp: "Formatted at 2025-06-29 01:57:20 on dist-test-slave-v1mb"
I20250629 01:57:20.622735 17980 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.006s sys 0.001s
I20250629 01:57:20.628098 17996 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250629 01:57:20.629096 17980 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.000s sys 0.004s
I20250629 01:57:20.629413 17980 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "d7b0107584e2409880bf436f68e36cc5"
format_stamp: "Formatted at 2025-06-29 01:57:20 on dist-test-slave-v1mb"
I20250629 01:57:20.629738 17980 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250629 01:57:20.682516 17980 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250629 01:57:20.683806 17980 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250629 01:57:20.684182 17980 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250629 01:57:20.686383 17980 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250629 01:57:20.690404 17980 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250629 01:57:20.690595 17980 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250629 01:57:20.690853 17980 ts_tablet_manager.cc:610] Registered 0 tablets
I20250629 01:57:20.690990 17980 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250629 01:57:20.832672 17980 rpc_server.cc:307] RPC server started. Bound to: 127.17.83.66:43881
I20250629 01:57:20.832777 18108 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.17.83.66:43881 every 8 connection(s)
I20250629 01:57:20.834915 17980 server_base.cc:1180] Dumped server information to /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250629 01:57:20.842798 17741 external_mini_cluster.cc:1413] Started /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu as pid 17980
I20250629 01:57:20.843108 17741 external_mini_cluster.cc:1427] Reading /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal/instance
I20250629 01:57:20.847879 17741 external_mini_cluster.cc:1351] Running /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
/tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.17.83.67:0
--local_ip_for_outbound_sockets=127.17.83.67
--webserver_interface=127.17.83.67
--webserver_port=0
--tserver_master_addrs=127.17.83.126:46827
--builtin_ntp_servers=127.17.83.84:37271
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250629 01:57:20.854516 18109 heartbeater.cc:344] Connected to a master server at 127.17.83.126:46827
I20250629 01:57:20.854950 18109 heartbeater.cc:461] Registering TS with master...
I20250629 01:57:20.856184 18109 heartbeater.cc:507] Master 127.17.83.126:46827 requested a full tablet report, sending...
I20250629 01:57:20.858361 17788 ts_manager.cc:194] Registered new tserver with Master: d7b0107584e2409880bf436f68e36cc5 (127.17.83.66:43881)
I20250629 01:57:20.859577 17788 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.17.83.66:49135
W20250629 01:57:21.142822 18113 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:57:21.143294 18113 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250629 01:57:21.143750 18113 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:57:21.184885 18113 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250629 01:57:21.185675 18113 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.17.83.67
I20250629 01:57:21.218719 18113 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.17.83.84:37271
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.17.83.67:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.17.83.67
--webserver_port=0
--tserver_master_addrs=127.17.83.126:46827
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.17.83.67
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision f0776864a65c19ebdd909053f3e624e8bcc25cee
build type FASTDEBUG
built by None at 29 Jun 2025 01:43:20 UTC on 5fd53c4cbb9d
build id 6821
TSAN enabled
I20250629 01:57:21.220100 18113 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250629 01:57:21.221984 18113 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250629 01:57:21.238826 18120 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:57:21.862452 18109 heartbeater.cc:499] Master 127.17.83.126:46827 was elected leader, sending a full tablet report...
W20250629 01:57:21.239238 18119 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:57:21.239926 18122 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:57:22.394001 18121 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1153 milliseconds
I20250629 01:57:22.394096 18113 server_base.cc:1043] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250629 01:57:22.395123 18113 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250629 01:57:22.397691 18113 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250629 01:57:22.399088 18113 hybrid_clock.cc:648] HybridClock initialized: now 1751162242399061 us; error 54 us; skew 500 ppm
I20250629 01:57:22.399842 18113 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250629 01:57:22.410563 18113 webserver.cc:469] Webserver started at http://127.17.83.67:36399/ using document root <none> and password file <none>
I20250629 01:57:22.411487 18113 fs_manager.cc:362] Metadata directory not provided
I20250629 01:57:22.411695 18113 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250629 01:57:22.412118 18113 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250629 01:57:22.416724 18113 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data/instance:
uuid: "49db8738eca7484d958caab97fc56240"
format_stamp: "Formatted at 2025-06-29 01:57:22 on dist-test-slave-v1mb"
I20250629 01:57:22.417776 18113 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal/instance:
uuid: "49db8738eca7484d958caab97fc56240"
format_stamp: "Formatted at 2025-06-29 01:57:22 on dist-test-slave-v1mb"
I20250629 01:57:22.424281 18113 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.007s sys 0.001s
I20250629 01:57:22.430061 18129 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250629 01:57:22.430958 18113 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.005s sys 0.000s
I20250629 01:57:22.431279 18113 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "49db8738eca7484d958caab97fc56240"
format_stamp: "Formatted at 2025-06-29 01:57:22 on dist-test-slave-v1mb"
I20250629 01:57:22.431603 18113 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250629 01:57:22.479887 18113 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250629 01:57:22.481199 18113 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250629 01:57:22.481582 18113 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250629 01:57:22.484282 18113 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250629 01:57:22.488019 18113 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250629 01:57:22.488221 18113 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250629 01:57:22.488467 18113 ts_tablet_manager.cc:610] Registered 0 tablets
I20250629 01:57:22.488615 18113 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250629 01:57:22.633860 18113 rpc_server.cc:307] RPC server started. Bound to: 127.17.83.67:41629
I20250629 01:57:22.633942 18241 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.17.83.67:41629 every 8 connection(s)
I20250629 01:57:22.636229 18113 server_base.cc:1180] Dumped server information to /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250629 01:57:22.643770 17741 external_mini_cluster.cc:1413] Started /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu as pid 18113
I20250629 01:57:22.644119 17741 external_mini_cluster.cc:1427] Reading /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal/instance
I20250629 01:57:22.649607 17741 external_mini_cluster.cc:1351] Running /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
/tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-3/wal
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-3/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-3/logs
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-3/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.17.83.68:0
--local_ip_for_outbound_sockets=127.17.83.68
--webserver_interface=127.17.83.68
--webserver_port=0
--tserver_master_addrs=127.17.83.126:46827
--builtin_ntp_servers=127.17.83.84:37271
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250629 01:57:22.657552 18242 heartbeater.cc:344] Connected to a master server at 127.17.83.126:46827
I20250629 01:57:22.657948 18242 heartbeater.cc:461] Registering TS with master...
I20250629 01:57:22.658874 18242 heartbeater.cc:507] Master 127.17.83.126:46827 requested a full tablet report, sending...
I20250629 01:57:22.661453 17788 ts_manager.cc:194] Registered new tserver with Master: 49db8738eca7484d958caab97fc56240 (127.17.83.67:41629)
I20250629 01:57:22.662701 17788 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.17.83.67:57833
W20250629 01:57:22.936650 18246 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:57:22.937114 18246 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250629 01:57:22.937592 18246 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:57:22.967511 18246 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250629 01:57:22.968289 18246 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.17.83.68
I20250629 01:57:22.999222 18246 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.17.83.84:37271
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-3/data
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-3/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.17.83.68:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-3/data/info.pb
--webserver_interface=127.17.83.68
--webserver_port=0
--tserver_master_addrs=127.17.83.126:46827
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.17.83.68
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-3/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision f0776864a65c19ebdd909053f3e624e8bcc25cee
build type FASTDEBUG
built by None at 29 Jun 2025 01:43:20 UTC on 5fd53c4cbb9d
build id 6821
TSAN enabled
I20250629 01:57:23.000416 18246 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250629 01:57:23.001987 18246 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250629 01:57:23.018553 18252 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:57:23.665571 18242 heartbeater.cc:499] Master 127.17.83.126:46827 was elected leader, sending a full tablet report...
W20250629 01:57:23.019846 18253 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:57:23.018709 18255 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:57:23.020762 18246 server_base.cc:1048] running on GCE node
I20250629 01:57:24.139359 18246 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250629 01:57:24.141449 18246 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250629 01:57:24.142793 18246 hybrid_clock.cc:648] HybridClock initialized: now 1751162244142755 us; error 55 us; skew 500 ppm
I20250629 01:57:24.143571 18246 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250629 01:57:24.149400 18246 webserver.cc:469] Webserver started at http://127.17.83.68:39401/ using document root <none> and password file <none>
I20250629 01:57:24.150164 18246 fs_manager.cc:362] Metadata directory not provided
I20250629 01:57:24.150344 18246 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250629 01:57:24.150768 18246 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250629 01:57:24.154829 18246 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-3/data/instance:
uuid: "ea5b9f2455844c48875ccbd1041189f5"
format_stamp: "Formatted at 2025-06-29 01:57:24 on dist-test-slave-v1mb"
I20250629 01:57:24.155813 18246 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-3/wal/instance:
uuid: "ea5b9f2455844c48875ccbd1041189f5"
format_stamp: "Formatted at 2025-06-29 01:57:24 on dist-test-slave-v1mb"
I20250629 01:57:24.163141 18246 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.006s sys 0.001s
I20250629 01:57:24.169137 18262 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250629 01:57:24.170071 18246 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.003s
I20250629 01:57:24.170336 18246 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-3/data,/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-3/wal
uuid: "ea5b9f2455844c48875ccbd1041189f5"
format_stamp: "Formatted at 2025-06-29 01:57:24 on dist-test-slave-v1mb"
I20250629 01:57:24.170595 18246 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-3/wal
metadata directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-3/wal
1 data directories: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-3/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250629 01:57:24.241670 18246 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250629 01:57:24.243057 18246 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250629 01:57:24.243485 18246 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250629 01:57:24.246225 18246 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250629 01:57:24.250361 18246 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250629 01:57:24.250545 18246 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250629 01:57:24.250823 18246 ts_tablet_manager.cc:610] Registered 0 tablets
I20250629 01:57:24.250988 18246 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250629 01:57:24.393422 18246 rpc_server.cc:307] RPC server started. Bound to: 127.17.83.68:42055
I20250629 01:57:24.393512 18375 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.17.83.68:42055 every 8 connection(s)
I20250629 01:57:24.395733 18246 server_base.cc:1180] Dumped server information to /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-3/data/info.pb
I20250629 01:57:24.402102 17741 external_mini_cluster.cc:1413] Started /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu as pid 18246
I20250629 01:57:24.402621 17741 external_mini_cluster.cc:1427] Reading /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-3/wal/instance
I20250629 01:57:24.417093 18376 heartbeater.cc:344] Connected to a master server at 127.17.83.126:46827
I20250629 01:57:24.417490 18376 heartbeater.cc:461] Registering TS with master...
I20250629 01:57:24.418367 18376 heartbeater.cc:507] Master 127.17.83.126:46827 requested a full tablet report, sending...
I20250629 01:57:24.420372 17787 ts_manager.cc:194] Registered new tserver with Master: ea5b9f2455844c48875ccbd1041189f5 (127.17.83.68:42055)
I20250629 01:57:24.422230 17741 external_mini_cluster.cc:934] 4 TS(s) registered with all masters
I20250629 01:57:24.422190 17787 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.17.83.68:45361
I20250629 01:57:24.462036 17787 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:56494:
name: "TestTable"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
owner: "alice"
I20250629 01:57:24.543269 18044 tablet_service.cc:1468] Processing CreateTablet for tablet 98766d41e998414f880e46d4eadf5364 (DEFAULT_TABLE table=TestTable [id=6b0edb946c85444c9655d91e375a3b32]), partition=RANGE (key) PARTITION UNBOUNDED
I20250629 01:57:24.543664 18177 tablet_service.cc:1468] Processing CreateTablet for tablet 98766d41e998414f880e46d4eadf5364 (DEFAULT_TABLE table=TestTable [id=6b0edb946c85444c9655d91e375a3b32]), partition=RANGE (key) PARTITION UNBOUNDED
I20250629 01:57:24.545805 18044 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 98766d41e998414f880e46d4eadf5364. 1 dirs total, 0 dirs full, 0 dirs failed
I20250629 01:57:24.546231 18177 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 98766d41e998414f880e46d4eadf5364. 1 dirs total, 0 dirs full, 0 dirs failed
I20250629 01:57:24.549608 18311 tablet_service.cc:1468] Processing CreateTablet for tablet 98766d41e998414f880e46d4eadf5364 (DEFAULT_TABLE table=TestTable [id=6b0edb946c85444c9655d91e375a3b32]), partition=RANGE (key) PARTITION UNBOUNDED
I20250629 01:57:24.551304 18311 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 98766d41e998414f880e46d4eadf5364. 1 dirs total, 0 dirs full, 0 dirs failed
I20250629 01:57:24.566911 18395 tablet_bootstrap.cc:492] T 98766d41e998414f880e46d4eadf5364 P 49db8738eca7484d958caab97fc56240: Bootstrap starting.
I20250629 01:57:24.573268 18396 tablet_bootstrap.cc:492] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5: Bootstrap starting.
I20250629 01:57:24.574798 18395 tablet_bootstrap.cc:654] T 98766d41e998414f880e46d4eadf5364 P 49db8738eca7484d958caab97fc56240: Neither blocks nor log segments found. Creating new log.
I20250629 01:57:24.577921 18395 log.cc:826] T 98766d41e998414f880e46d4eadf5364 P 49db8738eca7484d958caab97fc56240: Log is configured to *not* fsync() on all Append() calls
I20250629 01:57:24.579715 18397 tablet_bootstrap.cc:492] T 98766d41e998414f880e46d4eadf5364 P ea5b9f2455844c48875ccbd1041189f5: Bootstrap starting.
I20250629 01:57:24.582623 18396 tablet_bootstrap.cc:654] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5: Neither blocks nor log segments found. Creating new log.
I20250629 01:57:24.583815 18395 tablet_bootstrap.cc:492] T 98766d41e998414f880e46d4eadf5364 P 49db8738eca7484d958caab97fc56240: No bootstrap required, opened a new log
I20250629 01:57:24.584213 18395 ts_tablet_manager.cc:1397] T 98766d41e998414f880e46d4eadf5364 P 49db8738eca7484d958caab97fc56240: Time spent bootstrapping tablet: real 0.018s user 0.011s sys 0.004s
I20250629 01:57:24.584666 18396 log.cc:826] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5: Log is configured to *not* fsync() on all Append() calls
I20250629 01:57:24.587574 18397 tablet_bootstrap.cc:654] T 98766d41e998414f880e46d4eadf5364 P ea5b9f2455844c48875ccbd1041189f5: Neither blocks nor log segments found. Creating new log.
I20250629 01:57:24.590118 18396 tablet_bootstrap.cc:492] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5: No bootstrap required, opened a new log
I20250629 01:57:24.590162 18397 log.cc:826] T 98766d41e998414f880e46d4eadf5364 P ea5b9f2455844c48875ccbd1041189f5: Log is configured to *not* fsync() on all Append() calls
I20250629 01:57:24.590607 18396 ts_tablet_manager.cc:1397] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5: Time spent bootstrapping tablet: real 0.018s user 0.008s sys 0.009s
I20250629 01:57:24.595372 18397 tablet_bootstrap.cc:492] T 98766d41e998414f880e46d4eadf5364 P ea5b9f2455844c48875ccbd1041189f5: No bootstrap required, opened a new log
I20250629 01:57:24.595724 18397 ts_tablet_manager.cc:1397] T 98766d41e998414f880e46d4eadf5364 P ea5b9f2455844c48875ccbd1041189f5: Time spent bootstrapping tablet: real 0.017s user 0.002s sys 0.014s
I20250629 01:57:24.610893 18395 raft_consensus.cc:357] T 98766d41e998414f880e46d4eadf5364 P 49db8738eca7484d958caab97fc56240 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "49db8738eca7484d958caab97fc56240" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41629 } } peers { permanent_uuid: "d7b0107584e2409880bf436f68e36cc5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 43881 } } peers { permanent_uuid: "ea5b9f2455844c48875ccbd1041189f5" member_type: VOTER last_known_addr { host: "127.17.83.68" port: 42055 } }
I20250629 01:57:24.611824 18395 raft_consensus.cc:383] T 98766d41e998414f880e46d4eadf5364 P 49db8738eca7484d958caab97fc56240 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250629 01:57:24.612146 18395 raft_consensus.cc:738] T 98766d41e998414f880e46d4eadf5364 P 49db8738eca7484d958caab97fc56240 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 49db8738eca7484d958caab97fc56240, State: Initialized, Role: FOLLOWER
I20250629 01:57:24.613004 18395 consensus_queue.cc:260] T 98766d41e998414f880e46d4eadf5364 P 49db8738eca7484d958caab97fc56240 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "49db8738eca7484d958caab97fc56240" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41629 } } peers { permanent_uuid: "d7b0107584e2409880bf436f68e36cc5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 43881 } } peers { permanent_uuid: "ea5b9f2455844c48875ccbd1041189f5" member_type: VOTER last_known_addr { host: "127.17.83.68" port: 42055 } }
I20250629 01:57:24.613255 18397 raft_consensus.cc:357] T 98766d41e998414f880e46d4eadf5364 P ea5b9f2455844c48875ccbd1041189f5 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "49db8738eca7484d958caab97fc56240" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41629 } } peers { permanent_uuid: "d7b0107584e2409880bf436f68e36cc5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 43881 } } peers { permanent_uuid: "ea5b9f2455844c48875ccbd1041189f5" member_type: VOTER last_known_addr { host: "127.17.83.68" port: 42055 } }
I20250629 01:57:24.613850 18397 raft_consensus.cc:383] T 98766d41e998414f880e46d4eadf5364 P ea5b9f2455844c48875ccbd1041189f5 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250629 01:57:24.614107 18397 raft_consensus.cc:738] T 98766d41e998414f880e46d4eadf5364 P ea5b9f2455844c48875ccbd1041189f5 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: ea5b9f2455844c48875ccbd1041189f5, State: Initialized, Role: FOLLOWER
I20250629 01:57:24.615352 18396 raft_consensus.cc:357] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "49db8738eca7484d958caab97fc56240" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41629 } } peers { permanent_uuid: "d7b0107584e2409880bf436f68e36cc5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 43881 } } peers { permanent_uuid: "ea5b9f2455844c48875ccbd1041189f5" member_type: VOTER last_known_addr { host: "127.17.83.68" port: 42055 } }
I20250629 01:57:24.615655 18397 consensus_queue.cc:260] T 98766d41e998414f880e46d4eadf5364 P ea5b9f2455844c48875ccbd1041189f5 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "49db8738eca7484d958caab97fc56240" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41629 } } peers { permanent_uuid: "d7b0107584e2409880bf436f68e36cc5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 43881 } } peers { permanent_uuid: "ea5b9f2455844c48875ccbd1041189f5" member_type: VOTER last_known_addr { host: "127.17.83.68" port: 42055 } }
I20250629 01:57:24.616293 18396 raft_consensus.cc:383] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250629 01:57:24.616649 18396 raft_consensus.cc:738] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: d7b0107584e2409880bf436f68e36cc5, State: Initialized, Role: FOLLOWER
I20250629 01:57:24.617638 18395 ts_tablet_manager.cc:1428] T 98766d41e998414f880e46d4eadf5364 P 49db8738eca7484d958caab97fc56240: Time spent starting tablet: real 0.033s user 0.028s sys 0.004s
I20250629 01:57:24.617640 18396 consensus_queue.cc:260] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "49db8738eca7484d958caab97fc56240" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41629 } } peers { permanent_uuid: "d7b0107584e2409880bf436f68e36cc5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 43881 } } peers { permanent_uuid: "ea5b9f2455844c48875ccbd1041189f5" member_type: VOTER last_known_addr { host: "127.17.83.68" port: 42055 } }
I20250629 01:57:24.619354 18376 heartbeater.cc:499] Master 127.17.83.126:46827 was elected leader, sending a full tablet report...
I20250629 01:57:24.622658 18397 ts_tablet_manager.cc:1428] T 98766d41e998414f880e46d4eadf5364 P ea5b9f2455844c48875ccbd1041189f5: Time spent starting tablet: real 0.027s user 0.025s sys 0.000s
I20250629 01:57:24.623476 18396 ts_tablet_manager.cc:1428] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5: Time spent starting tablet: real 0.033s user 0.029s sys 0.003s
W20250629 01:57:24.642815 18243 tablet.cc:2378] T 98766d41e998414f880e46d4eadf5364 P 49db8738eca7484d958caab97fc56240: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250629 01:57:24.650008 18377 tablet.cc:2378] T 98766d41e998414f880e46d4eadf5364 P ea5b9f2455844c48875ccbd1041189f5: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250629 01:57:24.663066 18402 raft_consensus.cc:491] T 98766d41e998414f880e46d4eadf5364 P ea5b9f2455844c48875ccbd1041189f5 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250629 01:57:24.663630 18402 raft_consensus.cc:513] T 98766d41e998414f880e46d4eadf5364 P ea5b9f2455844c48875ccbd1041189f5 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "49db8738eca7484d958caab97fc56240" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41629 } } peers { permanent_uuid: "d7b0107584e2409880bf436f68e36cc5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 43881 } } peers { permanent_uuid: "ea5b9f2455844c48875ccbd1041189f5" member_type: VOTER last_known_addr { host: "127.17.83.68" port: 42055 } }
I20250629 01:57:24.666920 18402 leader_election.cc:290] T 98766d41e998414f880e46d4eadf5364 P ea5b9f2455844c48875ccbd1041189f5 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 49db8738eca7484d958caab97fc56240 (127.17.83.67:41629), d7b0107584e2409880bf436f68e36cc5 (127.17.83.66:43881)
I20250629 01:57:24.676204 18197 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "98766d41e998414f880e46d4eadf5364" candidate_uuid: "ea5b9f2455844c48875ccbd1041189f5" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "49db8738eca7484d958caab97fc56240" is_pre_election: true
I20250629 01:57:24.677415 18197 raft_consensus.cc:2466] T 98766d41e998414f880e46d4eadf5364 P 49db8738eca7484d958caab97fc56240 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate ea5b9f2455844c48875ccbd1041189f5 in term 0.
I20250629 01:57:24.678150 18064 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "98766d41e998414f880e46d4eadf5364" candidate_uuid: "ea5b9f2455844c48875ccbd1041189f5" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "d7b0107584e2409880bf436f68e36cc5" is_pre_election: true
I20250629 01:57:24.678637 18263 leader_election.cc:304] T 98766d41e998414f880e46d4eadf5364 P ea5b9f2455844c48875ccbd1041189f5 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 49db8738eca7484d958caab97fc56240, ea5b9f2455844c48875ccbd1041189f5; no voters:
I20250629 01:57:24.678776 18064 raft_consensus.cc:2466] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate ea5b9f2455844c48875ccbd1041189f5 in term 0.
I20250629 01:57:24.679317 18402 raft_consensus.cc:2802] T 98766d41e998414f880e46d4eadf5364 P ea5b9f2455844c48875ccbd1041189f5 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250629 01:57:24.679661 18402 raft_consensus.cc:491] T 98766d41e998414f880e46d4eadf5364 P ea5b9f2455844c48875ccbd1041189f5 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250629 01:57:24.679903 18402 raft_consensus.cc:3058] T 98766d41e998414f880e46d4eadf5364 P ea5b9f2455844c48875ccbd1041189f5 [term 0 FOLLOWER]: Advancing to term 1
I20250629 01:57:24.684900 18402 raft_consensus.cc:513] T 98766d41e998414f880e46d4eadf5364 P ea5b9f2455844c48875ccbd1041189f5 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "49db8738eca7484d958caab97fc56240" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41629 } } peers { permanent_uuid: "d7b0107584e2409880bf436f68e36cc5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 43881 } } peers { permanent_uuid: "ea5b9f2455844c48875ccbd1041189f5" member_type: VOTER last_known_addr { host: "127.17.83.68" port: 42055 } }
I20250629 01:57:24.686012 18402 leader_election.cc:290] T 98766d41e998414f880e46d4eadf5364 P ea5b9f2455844c48875ccbd1041189f5 [CANDIDATE]: Term 1 election: Requested vote from peers 49db8738eca7484d958caab97fc56240 (127.17.83.67:41629), d7b0107584e2409880bf436f68e36cc5 (127.17.83.66:43881)
I20250629 01:57:24.686718 18197 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "98766d41e998414f880e46d4eadf5364" candidate_uuid: "ea5b9f2455844c48875ccbd1041189f5" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "49db8738eca7484d958caab97fc56240"
I20250629 01:57:24.686848 18064 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "98766d41e998414f880e46d4eadf5364" candidate_uuid: "ea5b9f2455844c48875ccbd1041189f5" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "d7b0107584e2409880bf436f68e36cc5"
I20250629 01:57:24.687077 18197 raft_consensus.cc:3058] T 98766d41e998414f880e46d4eadf5364 P 49db8738eca7484d958caab97fc56240 [term 0 FOLLOWER]: Advancing to term 1
I20250629 01:57:24.687453 18064 raft_consensus.cc:3058] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [term 0 FOLLOWER]: Advancing to term 1
I20250629 01:57:24.691107 18197 raft_consensus.cc:2466] T 98766d41e998414f880e46d4eadf5364 P 49db8738eca7484d958caab97fc56240 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate ea5b9f2455844c48875ccbd1041189f5 in term 1.
I20250629 01:57:24.692039 18263 leader_election.cc:304] T 98766d41e998414f880e46d4eadf5364 P ea5b9f2455844c48875ccbd1041189f5 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 49db8738eca7484d958caab97fc56240, ea5b9f2455844c48875ccbd1041189f5; no voters:
I20250629 01:57:24.692242 18064 raft_consensus.cc:2466] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate ea5b9f2455844c48875ccbd1041189f5 in term 1.
I20250629 01:57:24.692737 18402 raft_consensus.cc:2802] T 98766d41e998414f880e46d4eadf5364 P ea5b9f2455844c48875ccbd1041189f5 [term 1 FOLLOWER]: Leader election won for term 1
I20250629 01:57:24.694173 18402 raft_consensus.cc:695] T 98766d41e998414f880e46d4eadf5364 P ea5b9f2455844c48875ccbd1041189f5 [term 1 LEADER]: Becoming Leader. State: Replica: ea5b9f2455844c48875ccbd1041189f5, State: Running, Role: LEADER
I20250629 01:57:24.695394 18402 consensus_queue.cc:237] T 98766d41e998414f880e46d4eadf5364 P ea5b9f2455844c48875ccbd1041189f5 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "49db8738eca7484d958caab97fc56240" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41629 } } peers { permanent_uuid: "d7b0107584e2409880bf436f68e36cc5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 43881 } } peers { permanent_uuid: "ea5b9f2455844c48875ccbd1041189f5" member_type: VOTER last_known_addr { host: "127.17.83.68" port: 42055 } }
I20250629 01:57:24.705149 17787 catalog_manager.cc:5582] T 98766d41e998414f880e46d4eadf5364 P ea5b9f2455844c48875ccbd1041189f5 reported cstate change: term changed from 0 to 1, leader changed from <none> to ea5b9f2455844c48875ccbd1041189f5 (127.17.83.68). New cstate: current_term: 1 leader_uuid: "ea5b9f2455844c48875ccbd1041189f5" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "49db8738eca7484d958caab97fc56240" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41629 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "d7b0107584e2409880bf436f68e36cc5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 43881 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "ea5b9f2455844c48875ccbd1041189f5" member_type: VOTER last_known_addr { host: "127.17.83.68" port: 42055 } health_report { overall_health: HEALTHY } } }
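
The election traces above follow the usual Raft pattern: with three voters the majority size is 2 (as the LEADER queue line also states), so the candidate's own vote plus one granted vote decides the election before the third response even arrives ("received 2 responses out of 3 voters: 2 yes votes"). A minimal sketch of that arithmetic, mirroring the logged summaries rather than Kudu's leader_election.cc:

# Sketch of the majority arithmetic behind the "Election decided" summaries above.
def election_decided(total_voters, yes_votes, no_votes):
    majority = total_voters // 2 + 1          # 3 voters -> 2
    if yes_votes >= majority:
        return "candidate won"
    if no_votes >= majority:
        return "candidate lost"
    return None                               # still waiting for responses

# Term 1 election above: the candidate's own vote plus one granted vote
# reaches the majority before the third response arrives.
assert election_decided(3, yes_votes=2, no_votes=0) == "candidate won"
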
I20250629 01:57:24.731153 17741 external_mini_cluster.cc:934] 4 TS(s) registered with all masters
I20250629 01:57:24.734510 17741 ts_itest-base.cc:246] Waiting for 1 tablets on tserver d7b0107584e2409880bf436f68e36cc5 to finish bootstrapping
I20250629 01:57:24.747330 17741 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 49db8738eca7484d958caab97fc56240 to finish bootstrapping
I20250629 01:57:24.756866 17741 ts_itest-base.cc:246] Waiting for 1 tablets on tserver ea5b9f2455844c48875ccbd1041189f5 to finish bootstrapping
I20250629 01:57:24.766341 17741 kudu-admin-test.cc:709] Waiting for Master to see the current replicas...
I20250629 01:57:24.769781 17741 kudu-admin-test.cc:716] Tablet locations:
tablet_locations {
tablet_id: "98766d41e998414f880e46d4eadf5364"
DEPRECATED_stale: false
partition {
partition_key_start: ""
partition_key_end: ""
}
interned_replicas {
ts_info_idx: 0
role: FOLLOWER
}
interned_replicas {
ts_info_idx: 1
role: FOLLOWER
}
interned_replicas {
ts_info_idx: 2
role: LEADER
}
}
ts_infos {
permanent_uuid: "49db8738eca7484d958caab97fc56240"
rpc_addresses {
host: "127.17.83.67"
port: 41629
}
}
ts_infos {
permanent_uuid: "d7b0107584e2409880bf436f68e36cc5"
rpc_addresses {
host: "127.17.83.66"
port: 43881
}
}
ts_infos {
permanent_uuid: "ea5b9f2455844c48875ccbd1041189f5"
rpc_addresses {
host: "127.17.83.68"
port: 42055
}
}
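
In the tablet locations dump above the tablet server descriptors are interned: each replica entry carries a ts_info_idx that points into the ts_infos list. The sketch below resolves that indirection using values copied from the dump; the resolve() helper is ad hoc, not part of any Kudu API.

# Resolve interned replica entries (ts_info_idx) against ts_infos, as dumped above.
TS_INFOS = [
    ("49db8738eca7484d958caab97fc56240", "127.17.83.67", 41629),
    ("d7b0107584e2409880bf436f68e36cc5", "127.17.83.66", 43881),
    ("ea5b9f2455844c48875ccbd1041189f5", "127.17.83.68", 42055),
]
REPLICAS = [(0, "FOLLOWER"), (1, "FOLLOWER"), (2, "LEADER")]

def resolve(replicas, ts_infos):
    for idx, role in replicas:
        uuid, host, port = ts_infos[idx]
        yield uuid, f"{host}:{port}", role

for uuid, addr, role in resolve(REPLICAS, TS_INFOS):
    print(role, uuid, addr)
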
W20250629 01:57:24.844699 18110 tablet.cc:2378] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250629 01:57:25.207918 18409 consensus_queue.cc:1035] T 98766d41e998414f880e46d4eadf5364 P ea5b9f2455844c48875ccbd1041189f5 [LEADER]: Connected to new peer: Peer: permanent_uuid: "d7b0107584e2409880bf436f68e36cc5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 43881 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250629 01:57:25.224694 18412 consensus_queue.cc:1035] T 98766d41e998414f880e46d4eadf5364 P ea5b9f2455844c48875ccbd1041189f5 [LEADER]: Connected to new peer: Peer: permanent_uuid: "49db8738eca7484d958caab97fc56240" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41629 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250629 01:57:25.230288 17741 external_mini_cluster.cc:1620] Killing /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu with pid 18246
W20250629 01:57:25.259001 18133 connection.cc:537] server connection from 127.17.83.68:56241 recv error: Network error: recv error from unknown peer: Transport endpoint is not connected (error 107)
I20250629 01:57:25.259634 17741 external_mini_cluster.cc:1620] Killing /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu with pid 17755
I20250629 01:57:25.288830 17741 external_mini_cluster.cc:1351] Running /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
/tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.17.83.126:46827
--webserver_interface=127.17.83.126
--webserver_port=43665
--builtin_ntp_servers=127.17.83.84:37271
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.17.83.126:46827 with env {}
W20250629 01:57:25.573318 18419 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:57:25.573930 18419 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250629 01:57:25.574370 18419 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:57:25.604259 18419 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250629 01:57:25.604570 18419 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250629 01:57:25.604806 18419 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250629 01:57:25.605033 18419 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250629 01:57:25.638129 18419 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.17.83.84:37271
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.17.83.126:46827
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.17.83.126:46827
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.17.83.126
--webserver_port=43665
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.18.0-SNAPSHOT
revision f0776864a65c19ebdd909053f3e624e8bcc25cee
build type FASTDEBUG
built by None at 29 Jun 2025 01:43:20 UTC on 5fd53c4cbb9d
build id 6821
TSAN enabled
I20250629 01:57:25.639369 18419 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250629 01:57:25.640821 18419 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250629 01:57:25.657289 18426 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:57:26.153952 17976 heartbeater.cc:646] Failed to heartbeat to 127.17.83.126:46827 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.17.83.126:46827: connect: Connection refused (error 111)
W20250629 01:57:26.239006 18109 heartbeater.cc:646] Failed to heartbeat to 127.17.83.126:46827 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.17.83.126:46827: connect: Connection refused (error 111)
W20250629 01:57:26.254400 18242 heartbeater.cc:646] Failed to heartbeat to 127.17.83.126:46827 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.17.83.126:46827: connect: Connection refused (error 111)
I20250629 01:57:26.722673 18434 raft_consensus.cc:491] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [term 1 FOLLOWER]: Starting pre-election (detected failure of leader ea5b9f2455844c48875ccbd1041189f5)
I20250629 01:57:26.723169 18434 raft_consensus.cc:513] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "49db8738eca7484d958caab97fc56240" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41629 } } peers { permanent_uuid: "d7b0107584e2409880bf436f68e36cc5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 43881 } } peers { permanent_uuid: "ea5b9f2455844c48875ccbd1041189f5" member_type: VOTER last_known_addr { host: "127.17.83.68" port: 42055 } }
I20250629 01:57:26.725579 18434 leader_election.cc:290] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers 49db8738eca7484d958caab97fc56240 (127.17.83.67:41629), ea5b9f2455844c48875ccbd1041189f5 (127.17.83.68:42055)
W20250629 01:57:26.729279 17998 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.17.83.68:42055: connect: Connection refused (error 111)
I20250629 01:57:26.734127 18438 raft_consensus.cc:491] T 98766d41e998414f880e46d4eadf5364 P 49db8738eca7484d958caab97fc56240 [term 1 FOLLOWER]: Starting pre-election (detected failure of leader ea5b9f2455844c48875ccbd1041189f5)
I20250629 01:57:26.735802 18438 raft_consensus.cc:513] T 98766d41e998414f880e46d4eadf5364 P 49db8738eca7484d958caab97fc56240 [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "49db8738eca7484d958caab97fc56240" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41629 } } peers { permanent_uuid: "d7b0107584e2409880bf436f68e36cc5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 43881 } } peers { permanent_uuid: "ea5b9f2455844c48875ccbd1041189f5" member_type: VOTER last_known_addr { host: "127.17.83.68" port: 42055 } }
W20250629 01:57:26.741622 17998 leader_election.cc:336] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [CANDIDATE]: Term 2 pre-election: RPC error from VoteRequest() call to peer ea5b9f2455844c48875ccbd1041189f5 (127.17.83.68:42055): Network error: Client connection negotiation failed: client connection to 127.17.83.68:42055: connect: Connection refused (error 111)
I20250629 01:57:26.749848 18197 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "98766d41e998414f880e46d4eadf5364" candidate_uuid: "d7b0107584e2409880bf436f68e36cc5" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: false dest_uuid: "49db8738eca7484d958caab97fc56240" is_pre_election: true
I20250629 01:57:26.750389 18197 raft_consensus.cc:2466] T 98766d41e998414f880e46d4eadf5364 P 49db8738eca7484d958caab97fc56240 [term 1 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate d7b0107584e2409880bf436f68e36cc5 in term 1.
I20250629 01:57:26.751716 17997 leader_election.cc:304] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 49db8738eca7484d958caab97fc56240, d7b0107584e2409880bf436f68e36cc5; no voters: ea5b9f2455844c48875ccbd1041189f5
I20250629 01:57:26.752830 18434 raft_consensus.cc:2802] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [term 1 FOLLOWER]: Leader pre-election won for term 2
I20250629 01:57:26.753177 18434 raft_consensus.cc:491] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [term 1 FOLLOWER]: Starting leader election (detected failure of leader ea5b9f2455844c48875ccbd1041189f5)
I20250629 01:57:26.753496 18434 raft_consensus.cc:3058] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [term 1 FOLLOWER]: Advancing to term 2
I20250629 01:57:26.757619 18438 leader_election.cc:290] T 98766d41e998414f880e46d4eadf5364 P 49db8738eca7484d958caab97fc56240 [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers d7b0107584e2409880bf436f68e36cc5 (127.17.83.66:43881), ea5b9f2455844c48875ccbd1041189f5 (127.17.83.68:42055)
I20250629 01:57:26.759935 18434 raft_consensus.cc:513] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "49db8738eca7484d958caab97fc56240" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41629 } } peers { permanent_uuid: "d7b0107584e2409880bf436f68e36cc5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 43881 } } peers { permanent_uuid: "ea5b9f2455844c48875ccbd1041189f5" member_type: VOTER last_known_addr { host: "127.17.83.68" port: 42055 } }
W20250629 01:57:26.760519 18131 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.17.83.68:42055: connect: Connection refused (error 111)
I20250629 01:57:26.762526 18197 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "98766d41e998414f880e46d4eadf5364" candidate_uuid: "d7b0107584e2409880bf436f68e36cc5" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: false dest_uuid: "49db8738eca7484d958caab97fc56240"
I20250629 01:57:26.763020 18197 raft_consensus.cc:3058] T 98766d41e998414f880e46d4eadf5364 P 49db8738eca7484d958caab97fc56240 [term 1 FOLLOWER]: Advancing to term 2
I20250629 01:57:26.769084 18197 raft_consensus.cc:2466] T 98766d41e998414f880e46d4eadf5364 P 49db8738eca7484d958caab97fc56240 [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate d7b0107584e2409880bf436f68e36cc5 in term 2.
I20250629 01:57:26.770059 17997 leader_election.cc:304] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 49db8738eca7484d958caab97fc56240, d7b0107584e2409880bf436f68e36cc5; no voters:
I20250629 01:57:26.773304 18064 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "98766d41e998414f880e46d4eadf5364" candidate_uuid: "49db8738eca7484d958caab97fc56240" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: false dest_uuid: "d7b0107584e2409880bf436f68e36cc5" is_pre_election: true
I20250629 01:57:26.774227 18064 raft_consensus.cc:2391] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [term 2 FOLLOWER]: Leader pre-election vote request: Denying vote to candidate 49db8738eca7484d958caab97fc56240 in current term 2: Already voted for candidate d7b0107584e2409880bf436f68e36cc5 in this term.
W20250629 01:57:26.775499 18131 leader_election.cc:336] T 98766d41e998414f880e46d4eadf5364 P 49db8738eca7484d958caab97fc56240 [CANDIDATE]: Term 2 pre-election: RPC error from VoteRequest() call to peer ea5b9f2455844c48875ccbd1041189f5 (127.17.83.68:42055): Network error: Client connection negotiation failed: client connection to 127.17.83.68:42055: connect: Connection refused (error 111)
I20250629 01:57:26.776739 18131 leader_election.cc:304] T 98766d41e998414f880e46d4eadf5364 P 49db8738eca7484d958caab97fc56240 [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: 49db8738eca7484d958caab97fc56240; no voters: d7b0107584e2409880bf436f68e36cc5, ea5b9f2455844c48875ccbd1041189f5
I20250629 01:57:26.777776 18438 raft_consensus.cc:2747] T 98766d41e998414f880e46d4eadf5364 P 49db8738eca7484d958caab97fc56240 [term 2 FOLLOWER]: Leader pre-election lost for term 2. Reason: could not achieve majority
W20250629 01:57:26.778430 17998 leader_election.cc:336] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [CANDIDATE]: Term 2 election: RPC error from VoteRequest() call to peer ea5b9f2455844c48875ccbd1041189f5 (127.17.83.68:42055): Network error: Client connection negotiation failed: client connection to 127.17.83.68:42055: connect: Connection refused (error 111)
I20250629 01:57:26.779022 18442 raft_consensus.cc:2802] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [term 2 FOLLOWER]: Leader election won for term 2
I20250629 01:57:26.779268 18434 leader_election.cc:290] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [CANDIDATE]: Term 2 election: Requested vote from peers 49db8738eca7484d958caab97fc56240 (127.17.83.67:41629), ea5b9f2455844c48875ccbd1041189f5 (127.17.83.68:42055)
I20250629 01:57:26.782824 18442 raft_consensus.cc:695] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [term 2 LEADER]: Becoming Leader. State: Replica: d7b0107584e2409880bf436f68e36cc5, State: Running, Role: LEADER
I20250629 01:57:26.784027 18442 consensus_queue.cc:237] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 1, Committed index: 1, Last appended: 1.1, Last appended by leader: 1, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "49db8738eca7484d958caab97fc56240" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41629 } } peers { permanent_uuid: "d7b0107584e2409880bf436f68e36cc5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 43881 } } peers { permanent_uuid: "ea5b9f2455844c48875ccbd1041189f5" member_type: VOTER last_known_addr { host: "127.17.83.68" port: 42055 } }
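
After the leader at 127.17.83.68:42055 is killed, both surviving replicas detect the failure and start pre-elections, but only one of them can take term 2: once this voter has cast its term-2 vote for d7b0107584e2409880bf436f68e36cc5, the pre-vote request from 49db8738eca7484d958caab97fc56240 for the same term is denied ("Already voted for candidate ... in this term"). A toy sketch of that one-vote-per-term bookkeeping, not Kudu's raft_consensus.cc:

# Toy illustration of the one-vote-per-term rule visible in the denial above.
class Voter:
    def __init__(self):
        self.current_term = 0
        self.voted_for = None

    def request_vote(self, candidate, term):
        if term > self.current_term:
            self.current_term, self.voted_for = term, None
        if term < self.current_term:
            return False                       # stale term
        if self.voted_for in (None, candidate):
            self.voted_for = candidate
            return True
        return False                           # already voted for someone else this term

v = Voter()
assert v.request_vote("d7b0107584e2409880bf436f68e36cc5", term=2) is True
assert v.request_vote("49db8738eca7484d958caab97fc56240", term=2) is False
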
W20250629 01:57:25.657893 18425 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:57:25.663456 18428 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:57:26.903012 18427 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1243 milliseconds
I20250629 01:57:26.903124 18419 server_base.cc:1043] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250629 01:57:26.904297 18419 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250629 01:57:26.906587 18419 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250629 01:57:26.907908 18419 hybrid_clock.cc:648] HybridClock initialized: now 1751162246907868 us; error 45 us; skew 500 ppm
I20250629 01:57:26.908670 18419 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250629 01:57:26.914906 18419 webserver.cc:469] Webserver started at http://127.17.83.126:43665/ using document root <none> and password file <none>
I20250629 01:57:26.915869 18419 fs_manager.cc:362] Metadata directory not provided
I20250629 01:57:26.916078 18419 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250629 01:57:26.923264 18419 fs_manager.cc:714] Time spent opening directory manager: real 0.004s user 0.006s sys 0.000s
I20250629 01:57:26.927392 18449 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250629 01:57:26.928390 18419 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.004s sys 0.000s
I20250629 01:57:26.928692 18419 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
uuid: "71666a48a24845af945928297d288a83"
format_stamp: "Formatted at 2025-06-29 01:57:17 on dist-test-slave-v1mb"
I20250629 01:57:26.930428 18419 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250629 01:57:26.978180 18419 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250629 01:57:26.979596 18419 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250629 01:57:26.979970 18419 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250629 01:57:27.048789 18419 rpc_server.cc:307] RPC server started. Bound to: 127.17.83.126:46827
I20250629 01:57:27.048877 18500 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.17.83.126:46827 every 8 connection(s)
I20250629 01:57:27.051391 18419 server_base.cc:1180] Dumped server information to /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250629 01:57:27.057578 17741 external_mini_cluster.cc:1413] Started /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu as pid 18419
I20250629 01:57:27.058048 17741 kudu-admin-test.cc:735] Forcing unsafe config change on tserver d7b0107584e2409880bf436f68e36cc5
I20250629 01:57:27.061034 18501 sys_catalog.cc:263] Verifying existing consensus state
I20250629 01:57:27.068565 18501 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 71666a48a24845af945928297d288a83: Bootstrap starting.
I20250629 01:57:27.119287 18501 log.cc:826] T 00000000000000000000000000000000 P 71666a48a24845af945928297d288a83: Log is configured to *not* fsync() on all Append() calls
I20250629 01:57:27.146863 18501 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 71666a48a24845af945928297d288a83: Bootstrap replayed 1/1 log segments. Stats: ops{read=7 overwritten=0 applied=7 ignored=0} inserts{seen=5 ignored=0} mutations{seen=2 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250629 01:57:27.147837 18501 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 71666a48a24845af945928297d288a83: Bootstrap complete.
I20250629 01:57:27.180459 18501 raft_consensus.cc:357] T 00000000000000000000000000000000 P 71666a48a24845af945928297d288a83 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "71666a48a24845af945928297d288a83" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 46827 } }
I20250629 01:57:27.183563 18501 raft_consensus.cc:738] T 00000000000000000000000000000000 P 71666a48a24845af945928297d288a83 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 71666a48a24845af945928297d288a83, State: Initialized, Role: FOLLOWER
I20250629 01:57:27.184445 18501 consensus_queue.cc:260] T 00000000000000000000000000000000 P 71666a48a24845af945928297d288a83 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 7, Last appended: 1.7, Last appended by leader: 7, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "71666a48a24845af945928297d288a83" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 46827 } }
I20250629 01:57:27.185207 18501 raft_consensus.cc:397] T 00000000000000000000000000000000 P 71666a48a24845af945928297d288a83 [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250629 01:57:27.185544 18501 raft_consensus.cc:491] T 00000000000000000000000000000000 P 71666a48a24845af945928297d288a83 [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250629 01:57:27.185966 18501 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 71666a48a24845af945928297d288a83 [term 1 FOLLOWER]: Advancing to term 2
W20250629 01:57:27.185765 17998 consensus_peers.cc:489] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 -> Peer ea5b9f2455844c48875ccbd1041189f5 (127.17.83.68:42055): Couldn't send request to peer ea5b9f2455844c48875ccbd1041189f5. Status: Network error: Client connection negotiation failed: client connection to 127.17.83.68:42055: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20250629 01:57:27.192878 17976 heartbeater.cc:344] Connected to a master server at 127.17.83.126:46827
I20250629 01:57:27.193454 18501 raft_consensus.cc:513] T 00000000000000000000000000000000 P 71666a48a24845af945928297d288a83 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "71666a48a24845af945928297d288a83" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 46827 } }
I20250629 01:57:27.194124 18501 leader_election.cc:304] T 00000000000000000000000000000000 P 71666a48a24845af945928297d288a83 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 71666a48a24845af945928297d288a83; no voters:
I20250629 01:57:27.196628 18501 leader_election.cc:290] T 00000000000000000000000000000000 P 71666a48a24845af945928297d288a83 [CANDIDATE]: Term 2 election: Requested vote from peers
I20250629 01:57:27.197281 18507 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 71666a48a24845af945928297d288a83 [term 2 FOLLOWER]: Leader election won for term 2
I20250629 01:57:27.200176 18507 raft_consensus.cc:695] T 00000000000000000000000000000000 P 71666a48a24845af945928297d288a83 [term 2 LEADER]: Becoming Leader. State: Replica: 71666a48a24845af945928297d288a83, State: Running, Role: LEADER
I20250629 01:57:27.201259 18501 sys_catalog.cc:564] T 00000000000000000000000000000000 P 71666a48a24845af945928297d288a83 [sys.catalog]: configured and running, proceeding with master startup.
I20250629 01:57:27.201020 18507 consensus_queue.cc:237] T 00000000000000000000000000000000 P 71666a48a24845af945928297d288a83 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 7, Committed index: 7, Last appended: 1.7, Last appended by leader: 7, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "71666a48a24845af945928297d288a83" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 46827 } }
I20250629 01:57:27.213090 18508 sys_catalog.cc:455] T 00000000000000000000000000000000 P 71666a48a24845af945928297d288a83 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 2 leader_uuid: "71666a48a24845af945928297d288a83" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "71666a48a24845af945928297d288a83" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 46827 } } }
I20250629 01:57:27.213232 18509 sys_catalog.cc:455] T 00000000000000000000000000000000 P 71666a48a24845af945928297d288a83 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 71666a48a24845af945928297d288a83. Latest consensus state: current_term: 2 leader_uuid: "71666a48a24845af945928297d288a83" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "71666a48a24845af945928297d288a83" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 46827 } } }
I20250629 01:57:27.213704 18508 sys_catalog.cc:458] T 00000000000000000000000000000000 P 71666a48a24845af945928297d288a83 [sys.catalog]: This master's current role is: LEADER
I20250629 01:57:27.213960 18509 sys_catalog.cc:458] T 00000000000000000000000000000000 P 71666a48a24845af945928297d288a83 [sys.catalog]: This master's current role is: LEADER
I20250629 01:57:27.228323 18514 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250629 01:57:27.239339 18514 catalog_manager.cc:671] Loaded metadata for table TestTable [id=6b0edb946c85444c9655d91e375a3b32]
I20250629 01:57:27.246376 18514 tablet_loader.cc:96] loaded metadata for tablet 98766d41e998414f880e46d4eadf5364 (table TestTable [id=6b0edb946c85444c9655d91e375a3b32])
I20250629 01:57:27.248890 18514 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250629 01:57:27.255577 18197 raft_consensus.cc:1273] T 98766d41e998414f880e46d4eadf5364 P 49db8738eca7484d958caab97fc56240 [term 2 FOLLOWER]: Refusing update from remote peer d7b0107584e2409880bf436f68e36cc5: Log matching property violated. Preceding OpId in replica: term: 1 index: 1. Preceding OpId from leader: term: 2 index: 2. (index mismatch)
I20250629 01:57:27.257452 18442 consensus_queue.cc:1035] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [LEADER]: Connected to new peer: Peer: permanent_uuid: "49db8738eca7484d958caab97fc56240" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41629 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 1, Time since last communication: 0.000s
I20250629 01:57:27.264552 18514 catalog_manager.cc:1261] Loaded cluster ID: 9f29126063ae4a28a84c0c22cff9dbfc
I20250629 01:57:27.265602 18514 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250629 01:57:27.270704 18242 heartbeater.cc:344] Connected to a master server at 127.17.83.126:46827
I20250629 01:57:27.299536 18514 catalog_manager.cc:1506] Loading token signing keys...
I20250629 01:57:27.301993 18109 heartbeater.cc:344] Connected to a master server at 127.17.83.126:46827
I20250629 01:57:27.310055 18514 catalog_manager.cc:5966] T 00000000000000000000000000000000 P 71666a48a24845af945928297d288a83: Loaded TSK: 0
I20250629 01:57:27.312108 18514 catalog_manager.cc:1516] Initializing in-progress tserver states...
W20250629 01:57:27.447650 18503 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:57:27.448261 18503 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:57:27.478155 18503 flags.cc:425] Enabled experimental flag: --enable_multi_tenancy=false
I20250629 01:57:28.204241 18466 master_service.cc:432] Got heartbeat from unknown tserver (permanent_uuid: "379ee11d280540bfa3a9fa5f1d63a4b7" instance_seqno: 1751162239058215) as {username='slave'} at 127.17.83.65:55097; Asking this server to re-register.
I20250629 01:57:28.206602 17976 heartbeater.cc:461] Registering TS with master...
I20250629 01:57:28.207548 17976 heartbeater.cc:507] Master 127.17.83.126:46827 requested a full tablet report, sending...
I20250629 01:57:28.210814 18465 ts_manager.cc:194] Registered new tserver with Master: 379ee11d280540bfa3a9fa5f1d63a4b7 (127.17.83.65:45377)
I20250629 01:57:28.310783 18466 master_service.cc:432] Got heartbeat from unknown tserver (permanent_uuid: "d7b0107584e2409880bf436f68e36cc5" instance_seqno: 1751162240799384) as {username='slave'} at 127.17.83.66:41105; Asking this server to re-register.
I20250629 01:57:28.313300 18109 heartbeater.cc:461] Registering TS with master...
I20250629 01:57:28.314186 18109 heartbeater.cc:507] Master 127.17.83.126:46827 requested a full tablet report, sending...
I20250629 01:57:28.318683 18466 ts_manager.cc:194] Registered new tserver with Master: d7b0107584e2409880bf436f68e36cc5 (127.17.83.66:43881)
I20250629 01:57:28.321419 18465 master_service.cc:432] Got heartbeat from unknown tserver (permanent_uuid: "49db8738eca7484d958caab97fc56240" instance_seqno: 1751162242600181) as {username='slave'} at 127.17.83.67:53507; Asking this server to re-register.
I20250629 01:57:28.323686 18242 heartbeater.cc:461] Registering TS with master...
I20250629 01:57:28.324476 18242 heartbeater.cc:507] Master 127.17.83.126:46827 requested a full tablet report, sending...
I20250629 01:57:28.328400 18465 ts_manager.cc:194] Registered new tserver with Master: 49db8738eca7484d958caab97fc56240 (127.17.83.67:41629)
I20250629 01:57:28.333667 18466 catalog_manager.cc:5582] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 reported cstate change: term changed from 1 to 2, leader changed from ea5b9f2455844c48875ccbd1041189f5 (127.17.83.68) to d7b0107584e2409880bf436f68e36cc5 (127.17.83.66). New cstate: current_term: 2 leader_uuid: "d7b0107584e2409880bf436f68e36cc5" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "49db8738eca7484d958caab97fc56240" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41629 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "d7b0107584e2409880bf436f68e36cc5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 43881 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "ea5b9f2455844c48875ccbd1041189f5" member_type: VOTER last_known_addr { host: "127.17.83.68" port: 42055 } health_report { overall_health: UNKNOWN } } }
W20250629 01:57:28.811102 18503 thread.cc:640] rpc reactor (reactor) Time spent creating pthread: real 1.292s user 0.380s sys 0.858s
W20250629 01:57:28.811556 18503 thread.cc:606] rpc reactor (reactor) Time spent starting thread: real 1.293s user 0.381s sys 0.862s
I20250629 01:57:28.876614 18064 tablet_service.cc:1905] Received UnsafeChangeConfig RPC: dest_uuid: "d7b0107584e2409880bf436f68e36cc5"
tablet_id: "98766d41e998414f880e46d4eadf5364"
caller_id: "kudu-tools"
new_config {
peers {
permanent_uuid: "49db8738eca7484d958caab97fc56240"
}
peers {
permanent_uuid: "d7b0107584e2409880bf436f68e36cc5"
}
}
from {username='slave'} at 127.0.0.1:50546
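The UnsafeChangeConfig RPC above carries caller_id "kudu-tools", i.e. it comes from the kudu CLI rather than from another replica. As a rough sketch (not copied from this log), the test presumably drives it with an invocation along these lines, where the tserver address, tablet id, and peer UUIDs are the ones visible above:

  kudu remote_replica unsafe_change_config 127.17.83.66:43881 98766d41e998414f880e46d4eadf5364 49db8738eca7484d958caab97fc56240 d7b0107584e2409880bf436f68e36cc5

The exact subcommand spelling and argument order are assumptions inferred from the caller_id and the test name; only the address, tablet id, and UUIDs are taken from the log.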
W20250629 01:57:28.877941 18064 raft_consensus.cc:2216] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [term 2 LEADER]: PROCEEDING WITH UNSAFE CONFIG CHANGE ON THIS SERVER, COMMITTED CONFIG: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "49db8738eca7484d958caab97fc56240" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41629 } } peers { permanent_uuid: "d7b0107584e2409880bf436f68e36cc5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 43881 } } peers { permanent_uuid: "ea5b9f2455844c48875ccbd1041189f5" member_type: VOTER last_known_addr { host: "127.17.83.68" port: 42055 } }NEW CONFIG: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "49db8738eca7484d958caab97fc56240" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41629 } } peers { permanent_uuid: "d7b0107584e2409880bf436f68e36cc5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 43881 } } unsafe_config_change: true
I20250629 01:57:28.879060 18064 raft_consensus.cc:3053] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [term 2 LEADER]: Stepping down as leader of term 2
I20250629 01:57:28.879405 18064 raft_consensus.cc:738] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [term 2 LEADER]: Becoming Follower/Learner. State: Replica: d7b0107584e2409880bf436f68e36cc5, State: Running, Role: LEADER
I20250629 01:57:28.880124 18064 consensus_queue.cc:260] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 2, Committed index: 2, Last appended: 2.2, Last appended by leader: 2, Current term: 2, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "49db8738eca7484d958caab97fc56240" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41629 } } peers { permanent_uuid: "d7b0107584e2409880bf436f68e36cc5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 43881 } } peers { permanent_uuid: "ea5b9f2455844c48875ccbd1041189f5" member_type: VOTER last_known_addr { host: "127.17.83.68" port: 42055 } }
I20250629 01:57:28.881297 18064 raft_consensus.cc:3058] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [term 2 FOLLOWER]: Advancing to term 3
I20250629 01:57:30.280154 18558 raft_consensus.cc:491] T 98766d41e998414f880e46d4eadf5364 P 49db8738eca7484d958caab97fc56240 [term 2 FOLLOWER]: Starting pre-election (detected failure of leader d7b0107584e2409880bf436f68e36cc5)
I20250629 01:57:30.280495 18558 raft_consensus.cc:513] T 98766d41e998414f880e46d4eadf5364 P 49db8738eca7484d958caab97fc56240 [term 2 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "49db8738eca7484d958caab97fc56240" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41629 } } peers { permanent_uuid: "d7b0107584e2409880bf436f68e36cc5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 43881 } } peers { permanent_uuid: "ea5b9f2455844c48875ccbd1041189f5" member_type: VOTER last_known_addr { host: "127.17.83.68" port: 42055 } }
I20250629 01:57:30.281936 18558 leader_election.cc:290] T 98766d41e998414f880e46d4eadf5364 P 49db8738eca7484d958caab97fc56240 [CANDIDATE]: Term 3 pre-election: Requested pre-vote from peers d7b0107584e2409880bf436f68e36cc5 (127.17.83.66:43881), ea5b9f2455844c48875ccbd1041189f5 (127.17.83.68:42055)
I20250629 01:57:30.283105 18064 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "98766d41e998414f880e46d4eadf5364" candidate_uuid: "49db8738eca7484d958caab97fc56240" candidate_term: 3 candidate_status { last_received { term: 2 index: 2 } } ignore_live_leader: false dest_uuid: "d7b0107584e2409880bf436f68e36cc5" is_pre_election: true
W20250629 01:57:30.286281 18131 leader_election.cc:336] T 98766d41e998414f880e46d4eadf5364 P 49db8738eca7484d958caab97fc56240 [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer ea5b9f2455844c48875ccbd1041189f5 (127.17.83.68:42055): Network error: Client connection negotiation failed: client connection to 127.17.83.68:42055: connect: Connection refused (error 111)
I20250629 01:57:30.286548 18131 leader_election.cc:304] T 98766d41e998414f880e46d4eadf5364 P 49db8738eca7484d958caab97fc56240 [CANDIDATE]: Term 3 pre-election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: 49db8738eca7484d958caab97fc56240; no voters: d7b0107584e2409880bf436f68e36cc5, ea5b9f2455844c48875ccbd1041189f5
I20250629 01:57:30.287034 18558 raft_consensus.cc:2747] T 98766d41e998414f880e46d4eadf5364 P 49db8738eca7484d958caab97fc56240 [term 2 FOLLOWER]: Leader pre-election lost for term 3. Reason: could not achieve majority
I20250629 01:57:30.388716 18561 raft_consensus.cc:491] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [term 3 FOLLOWER]: Starting pre-election (detected failure of leader kudu-tools)
I20250629 01:57:30.389159 18561 raft_consensus.cc:513] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [term 3 FOLLOWER]: Starting pre-election with config: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "49db8738eca7484d958caab97fc56240" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41629 } } peers { permanent_uuid: "d7b0107584e2409880bf436f68e36cc5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 43881 } } unsafe_config_change: true
I20250629 01:57:30.390266 18561 leader_election.cc:290] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [CANDIDATE]: Term 4 pre-election: Requested pre-vote from peers 49db8738eca7484d958caab97fc56240 (127.17.83.67:41629)
I20250629 01:57:30.391460 18196 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "98766d41e998414f880e46d4eadf5364" candidate_uuid: "d7b0107584e2409880bf436f68e36cc5" candidate_term: 4 candidate_status { last_received { term: 3 index: 3 } } ignore_live_leader: false dest_uuid: "49db8738eca7484d958caab97fc56240" is_pre_election: true
I20250629 01:57:30.391930 18196 raft_consensus.cc:2466] T 98766d41e998414f880e46d4eadf5364 P 49db8738eca7484d958caab97fc56240 [term 2 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate d7b0107584e2409880bf436f68e36cc5 in term 2.
I20250629 01:57:30.392870 17997 leader_election.cc:304] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [CANDIDATE]: Term 4 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 2 voters: 2 yes votes; 0 no votes. yes voters: 49db8738eca7484d958caab97fc56240, d7b0107584e2409880bf436f68e36cc5; no voters:
I20250629 01:57:30.393422 18561 raft_consensus.cc:2802] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [term 3 FOLLOWER]: Leader pre-election won for term 4
I20250629 01:57:30.393659 18561 raft_consensus.cc:491] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [term 3 FOLLOWER]: Starting leader election (detected failure of leader kudu-tools)
I20250629 01:57:30.393930 18561 raft_consensus.cc:3058] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [term 3 FOLLOWER]: Advancing to term 4
I20250629 01:57:30.398308 18561 raft_consensus.cc:513] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [term 4 FOLLOWER]: Starting leader election with config: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "49db8738eca7484d958caab97fc56240" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41629 } } peers { permanent_uuid: "d7b0107584e2409880bf436f68e36cc5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 43881 } } unsafe_config_change: true
I20250629 01:57:30.399149 18561 leader_election.cc:290] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [CANDIDATE]: Term 4 election: Requested vote from peers 49db8738eca7484d958caab97fc56240 (127.17.83.67:41629)
I20250629 01:57:30.399955 18196 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "98766d41e998414f880e46d4eadf5364" candidate_uuid: "d7b0107584e2409880bf436f68e36cc5" candidate_term: 4 candidate_status { last_received { term: 3 index: 3 } } ignore_live_leader: false dest_uuid: "49db8738eca7484d958caab97fc56240"
I20250629 01:57:30.400291 18196 raft_consensus.cc:3058] T 98766d41e998414f880e46d4eadf5364 P 49db8738eca7484d958caab97fc56240 [term 2 FOLLOWER]: Advancing to term 4
I20250629 01:57:30.404176 18196 raft_consensus.cc:2466] T 98766d41e998414f880e46d4eadf5364 P 49db8738eca7484d958caab97fc56240 [term 4 FOLLOWER]: Leader election vote request: Granting yes vote for candidate d7b0107584e2409880bf436f68e36cc5 in term 4.
I20250629 01:57:30.404928 17997 leader_election.cc:304] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [CANDIDATE]: Term 4 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 2 voters: 2 yes votes; 0 no votes. yes voters: 49db8738eca7484d958caab97fc56240, d7b0107584e2409880bf436f68e36cc5; no voters:
I20250629 01:57:30.405444 18561 raft_consensus.cc:2802] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [term 4 FOLLOWER]: Leader election won for term 4
I20250629 01:57:30.406200 18561 raft_consensus.cc:695] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [term 4 LEADER]: Becoming Leader. State: Replica: d7b0107584e2409880bf436f68e36cc5, State: Running, Role: LEADER
I20250629 01:57:30.406801 18561 consensus_queue.cc:237] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 2, Committed index: 2, Last appended: 3.3, Last appended by leader: 0, Current term: 4, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "49db8738eca7484d958caab97fc56240" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41629 } } peers { permanent_uuid: "d7b0107584e2409880bf436f68e36cc5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 43881 } } unsafe_config_change: true
I20250629 01:57:30.412456 18466 catalog_manager.cc:5582] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 reported cstate change: term changed from 2 to 4, now has a pending config: VOTER 49db8738eca7484d958caab97fc56240 (127.17.83.67), VOTER d7b0107584e2409880bf436f68e36cc5 (127.17.83.66). New cstate: current_term: 4 leader_uuid: "d7b0107584e2409880bf436f68e36cc5" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "49db8738eca7484d958caab97fc56240" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41629 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "d7b0107584e2409880bf436f68e36cc5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 43881 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "ea5b9f2455844c48875ccbd1041189f5" member_type: VOTER last_known_addr { host: "127.17.83.68" port: 42055 } } } pending_config { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "49db8738eca7484d958caab97fc56240" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41629 } } peers { permanent_uuid: "d7b0107584e2409880bf436f68e36cc5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 43881 } } unsafe_config_change: true }
I20250629 01:57:30.824760 18196 raft_consensus.cc:1273] T 98766d41e998414f880e46d4eadf5364 P 49db8738eca7484d958caab97fc56240 [term 4 FOLLOWER]: Refusing update from remote peer d7b0107584e2409880bf436f68e36cc5: Log matching property violated. Preceding OpId in replica: term: 2 index: 2. Preceding OpId from leader: term: 4 index: 4. (index mismatch)
I20250629 01:57:30.825765 18561 consensus_queue.cc:1035] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [LEADER]: Connected to new peer: Peer: permanent_uuid: "49db8738eca7484d958caab97fc56240" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41629 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 4, Last known committed idx: 2, Time since last communication: 0.000s
I20250629 01:57:30.832404 18562 raft_consensus.cc:2953] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [term 4 LEADER]: Committing config change with OpId 3.3: config changed from index -1 to 3, VOTER ea5b9f2455844c48875ccbd1041189f5 (127.17.83.68) evicted. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "49db8738eca7484d958caab97fc56240" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41629 } } peers { permanent_uuid: "d7b0107584e2409880bf436f68e36cc5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 43881 } } unsafe_config_change: true }
I20250629 01:57:30.835156 18196 raft_consensus.cc:2953] T 98766d41e998414f880e46d4eadf5364 P 49db8738eca7484d958caab97fc56240 [term 4 FOLLOWER]: Committing config change with OpId 3.3: config changed from index -1 to 3, VOTER ea5b9f2455844c48875ccbd1041189f5 (127.17.83.68) evicted. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "49db8738eca7484d958caab97fc56240" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41629 } } peers { permanent_uuid: "d7b0107584e2409880bf436f68e36cc5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 43881 } } unsafe_config_change: true }
I20250629 01:57:30.845069 18466 catalog_manager.cc:5582] T 98766d41e998414f880e46d4eadf5364 P 49db8738eca7484d958caab97fc56240 reported cstate change: config changed from index -1 to 3, VOTER ea5b9f2455844c48875ccbd1041189f5 (127.17.83.68) evicted, no longer has a pending config: VOTER 49db8738eca7484d958caab97fc56240 (127.17.83.67), VOTER d7b0107584e2409880bf436f68e36cc5 (127.17.83.66). New cstate: current_term: 4 leader_uuid: "d7b0107584e2409880bf436f68e36cc5" committed_config { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "49db8738eca7484d958caab97fc56240" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41629 } } peers { permanent_uuid: "d7b0107584e2409880bf436f68e36cc5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 43881 } } unsafe_config_change: true }
W20250629 01:57:30.852439 18466 catalog_manager.cc:5774] Failed to send DeleteTablet RPC for tablet 98766d41e998414f880e46d4eadf5364 on TS ea5b9f2455844c48875ccbd1041189f5: Not found: failed to reset TS proxy: Could not find TS for UUID ea5b9f2455844c48875ccbd1041189f5
I20250629 01:57:30.868413 18064 consensus_queue.cc:237] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 4, Committed index: 4, Last appended: 4.4, Last appended by leader: 0, Current term: 4, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "49db8738eca7484d958caab97fc56240" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41629 } } peers { permanent_uuid: "d7b0107584e2409880bf436f68e36cc5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 43881 } } peers { permanent_uuid: "379ee11d280540bfa3a9fa5f1d63a4b7" member_type: NON_VOTER last_known_addr { host: "127.17.83.65" port: 45377 } attrs { promote: true } } unsafe_config_change: true
I20250629 01:57:30.873896 18196 raft_consensus.cc:1273] T 98766d41e998414f880e46d4eadf5364 P 49db8738eca7484d958caab97fc56240 [term 4 FOLLOWER]: Refusing update from remote peer d7b0107584e2409880bf436f68e36cc5: Log matching property violated. Preceding OpId in replica: term: 4 index: 4. Preceding OpId from leader: term: 4 index: 5. (index mismatch)
I20250629 01:57:30.874796 18562 consensus_queue.cc:1035] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [LEADER]: Connected to new peer: Peer: permanent_uuid: "49db8738eca7484d958caab97fc56240" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41629 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 5, Last known committed idx: 4, Time since last communication: 0.000s
I20250629 01:57:30.878895 18561 raft_consensus.cc:2953] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [term 4 LEADER]: Committing config change with OpId 4.5: config changed from index 3 to 5, NON_VOTER 379ee11d280540bfa3a9fa5f1d63a4b7 (127.17.83.65) added. New config: { opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "49db8738eca7484d958caab97fc56240" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41629 } } peers { permanent_uuid: "d7b0107584e2409880bf436f68e36cc5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 43881 } } peers { permanent_uuid: "379ee11d280540bfa3a9fa5f1d63a4b7" member_type: NON_VOTER last_known_addr { host: "127.17.83.65" port: 45377 } attrs { promote: true } } unsafe_config_change: true }
I20250629 01:57:30.880242 18196 raft_consensus.cc:2953] T 98766d41e998414f880e46d4eadf5364 P 49db8738eca7484d958caab97fc56240 [term 4 FOLLOWER]: Committing config change with OpId 4.5: config changed from index 3 to 5, NON_VOTER 379ee11d280540bfa3a9fa5f1d63a4b7 (127.17.83.65) added. New config: { opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "49db8738eca7484d958caab97fc56240" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41629 } } peers { permanent_uuid: "d7b0107584e2409880bf436f68e36cc5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 43881 } } peers { permanent_uuid: "379ee11d280540bfa3a9fa5f1d63a4b7" member_type: NON_VOTER last_known_addr { host: "127.17.83.65" port: 45377 } attrs { promote: true } } unsafe_config_change: true }
W20250629 01:57:30.881974 17997 consensus_peers.cc:489] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 -> Peer 379ee11d280540bfa3a9fa5f1d63a4b7 (127.17.83.65:45377): Couldn't send request to peer 379ee11d280540bfa3a9fa5f1d63a4b7. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: 98766d41e998414f880e46d4eadf5364. This is attempt 1: this message will repeat every 5th retry.
I20250629 01:57:30.886018 18451 catalog_manager.cc:5095] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet 98766d41e998414f880e46d4eadf5364 with cas_config_opid_index 3: ChangeConfig:ADD_PEER:NON_VOTER succeeded (attempt 1)
I20250629 01:57:30.888711 18466 catalog_manager.cc:5582] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 reported cstate change: config changed from index 3 to 5, NON_VOTER 379ee11d280540bfa3a9fa5f1d63a4b7 (127.17.83.65) added. New cstate: current_term: 4 leader_uuid: "d7b0107584e2409880bf436f68e36cc5" committed_config { opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "49db8738eca7484d958caab97fc56240" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41629 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "d7b0107584e2409880bf436f68e36cc5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 43881 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "379ee11d280540bfa3a9fa5f1d63a4b7" member_type: NON_VOTER last_known_addr { host: "127.17.83.65" port: 45377 } attrs { promote: true } health_report { overall_health: UNKNOWN } } unsafe_config_change: true }
W20250629 01:57:30.904917 18451 catalog_manager.cc:4726] Async tablet task DeleteTablet RPC for tablet 98766d41e998414f880e46d4eadf5364 on TS ea5b9f2455844c48875ccbd1041189f5 failed: Not found: failed to reset TS proxy: Could not find TS for UUID ea5b9f2455844c48875ccbd1041189f5
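With the old voter ea5b9f2455844c48875ccbd1041189f5 evicted (its tablet server never registered with this master, hence the failing DeleteTablet RPCs above), the catalog manager re-replicates the tablet on its own: it adds 379ee11d280540bfa3a9fa5f1d63a4b7 as a NON_VOTER with promote=true, copies the tablet to it, and later promotes it to VOTER. A manual equivalent, sketched only for orientation and not something this test actually runs, might look like:

  kudu tablet change_config add_replica 127.17.83.126:46827 98766d41e998414f880e46d4eadf5364 379ee11d280540bfa3a9fa5f1d63a4b7 NON_VOTER

The subcommand and argument order here are assumptions; the master address, tablet id, and tserver UUID come from the log.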
I20250629 01:57:31.333053 18578 ts_tablet_manager.cc:927] T 98766d41e998414f880e46d4eadf5364 P 379ee11d280540bfa3a9fa5f1d63a4b7: Initiating tablet copy from peer d7b0107584e2409880bf436f68e36cc5 (127.17.83.66:43881)
I20250629 01:57:31.335445 18578 tablet_copy_client.cc:323] T 98766d41e998414f880e46d4eadf5364 P 379ee11d280540bfa3a9fa5f1d63a4b7: tablet copy: Beginning tablet copy session from remote peer at address 127.17.83.66:43881
I20250629 01:57:31.346217 18084 tablet_copy_service.cc:140] P d7b0107584e2409880bf436f68e36cc5: Received BeginTabletCopySession request for tablet 98766d41e998414f880e46d4eadf5364 from peer 379ee11d280540bfa3a9fa5f1d63a4b7 ({username='slave'} at 127.17.83.65:52583)
I20250629 01:57:31.346724 18084 tablet_copy_service.cc:161] P d7b0107584e2409880bf436f68e36cc5: Beginning new tablet copy session on tablet 98766d41e998414f880e46d4eadf5364 from peer 379ee11d280540bfa3a9fa5f1d63a4b7 at {username='slave'} at 127.17.83.65:52583: session id = 379ee11d280540bfa3a9fa5f1d63a4b7-98766d41e998414f880e46d4eadf5364
I20250629 01:57:31.351789 18084 tablet_copy_source_session.cc:215] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5: Tablet Copy: opened 0 blocks and 1 log segments
I20250629 01:57:31.356307 18578 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 98766d41e998414f880e46d4eadf5364. 1 dirs total, 0 dirs full, 0 dirs failed
I20250629 01:57:31.374719 18578 tablet_copy_client.cc:806] T 98766d41e998414f880e46d4eadf5364 P 379ee11d280540bfa3a9fa5f1d63a4b7: tablet copy: Starting download of 0 data blocks...
I20250629 01:57:31.375308 18578 tablet_copy_client.cc:670] T 98766d41e998414f880e46d4eadf5364 P 379ee11d280540bfa3a9fa5f1d63a4b7: tablet copy: Starting download of 1 WAL segments...
I20250629 01:57:31.378435 18578 tablet_copy_client.cc:538] T 98766d41e998414f880e46d4eadf5364 P 379ee11d280540bfa3a9fa5f1d63a4b7: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250629 01:57:31.383844 18578 tablet_bootstrap.cc:492] T 98766d41e998414f880e46d4eadf5364 P 379ee11d280540bfa3a9fa5f1d63a4b7: Bootstrap starting.
I20250629 01:57:31.394670 18578 log.cc:826] T 98766d41e998414f880e46d4eadf5364 P 379ee11d280540bfa3a9fa5f1d63a4b7: Log is configured to *not* fsync() on all Append() calls
I20250629 01:57:31.404106 18578 tablet_bootstrap.cc:492] T 98766d41e998414f880e46d4eadf5364 P 379ee11d280540bfa3a9fa5f1d63a4b7: Bootstrap replayed 1/1 log segments. Stats: ops{read=5 overwritten=0 applied=5 ignored=0} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250629 01:57:31.404712 18578 tablet_bootstrap.cc:492] T 98766d41e998414f880e46d4eadf5364 P 379ee11d280540bfa3a9fa5f1d63a4b7: Bootstrap complete.
I20250629 01:57:31.405161 18578 ts_tablet_manager.cc:1397] T 98766d41e998414f880e46d4eadf5364 P 379ee11d280540bfa3a9fa5f1d63a4b7: Time spent bootstrapping tablet: real 0.022s user 0.019s sys 0.001s
I20250629 01:57:31.420266 18578 raft_consensus.cc:357] T 98766d41e998414f880e46d4eadf5364 P 379ee11d280540bfa3a9fa5f1d63a4b7 [term 4 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "49db8738eca7484d958caab97fc56240" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41629 } } peers { permanent_uuid: "d7b0107584e2409880bf436f68e36cc5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 43881 } } peers { permanent_uuid: "379ee11d280540bfa3a9fa5f1d63a4b7" member_type: NON_VOTER last_known_addr { host: "127.17.83.65" port: 45377 } attrs { promote: true } } unsafe_config_change: true
I20250629 01:57:31.420994 18578 raft_consensus.cc:738] T 98766d41e998414f880e46d4eadf5364 P 379ee11d280540bfa3a9fa5f1d63a4b7 [term 4 LEARNER]: Becoming Follower/Learner. State: Replica: 379ee11d280540bfa3a9fa5f1d63a4b7, State: Initialized, Role: LEARNER
I20250629 01:57:31.421538 18578 consensus_queue.cc:260] T 98766d41e998414f880e46d4eadf5364 P 379ee11d280540bfa3a9fa5f1d63a4b7 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 5, Last appended: 4.5, Last appended by leader: 5, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "49db8738eca7484d958caab97fc56240" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41629 } } peers { permanent_uuid: "d7b0107584e2409880bf436f68e36cc5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 43881 } } peers { permanent_uuid: "379ee11d280540bfa3a9fa5f1d63a4b7" member_type: NON_VOTER last_known_addr { host: "127.17.83.65" port: 45377 } attrs { promote: true } } unsafe_config_change: true
I20250629 01:57:31.424975 18578 ts_tablet_manager.cc:1428] T 98766d41e998414f880e46d4eadf5364 P 379ee11d280540bfa3a9fa5f1d63a4b7: Time spent starting tablet: real 0.020s user 0.020s sys 0.001s
I20250629 01:57:31.426486 18084 tablet_copy_service.cc:342] P d7b0107584e2409880bf436f68e36cc5: Request end of tablet copy session 379ee11d280540bfa3a9fa5f1d63a4b7-98766d41e998414f880e46d4eadf5364 received from {username='slave'} at 127.17.83.65:52583
I20250629 01:57:31.426870 18084 tablet_copy_service.cc:434] P d7b0107584e2409880bf436f68e36cc5: ending tablet copy session 379ee11d280540bfa3a9fa5f1d63a4b7-98766d41e998414f880e46d4eadf5364 on tablet 98766d41e998414f880e46d4eadf5364 with peer 379ee11d280540bfa3a9fa5f1d63a4b7
I20250629 01:57:31.935262 17931 raft_consensus.cc:1215] T 98766d41e998414f880e46d4eadf5364 P 379ee11d280540bfa3a9fa5f1d63a4b7 [term 4 LEARNER]: Deduplicated request from leader. Original: 4.4->[4.5-4.5] Dedup: 4.5->[]
W20250629 01:57:32.071362 18451 catalog_manager.cc:4726] Async tablet task DeleteTablet RPC for tablet 98766d41e998414f880e46d4eadf5364 on TS ea5b9f2455844c48875ccbd1041189f5 failed: Not found: failed to reset TS proxy: Could not find TS for UUID ea5b9f2455844c48875ccbd1041189f5
I20250629 01:57:32.390869 18584 raft_consensus.cc:1062] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5: attempting to promote NON_VOTER 379ee11d280540bfa3a9fa5f1d63a4b7 to VOTER
I20250629 01:57:32.392239 18584 consensus_queue.cc:237] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 5, Committed index: 5, Last appended: 4.5, Last appended by leader: 0, Current term: 4, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "49db8738eca7484d958caab97fc56240" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41629 } } peers { permanent_uuid: "d7b0107584e2409880bf436f68e36cc5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 43881 } } peers { permanent_uuid: "379ee11d280540bfa3a9fa5f1d63a4b7" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 45377 } attrs { promote: false } } unsafe_config_change: true
I20250629 01:57:32.396363 17931 raft_consensus.cc:1273] T 98766d41e998414f880e46d4eadf5364 P 379ee11d280540bfa3a9fa5f1d63a4b7 [term 4 LEARNER]: Refusing update from remote peer d7b0107584e2409880bf436f68e36cc5: Log matching property violated. Preceding OpId in replica: term: 4 index: 5. Preceding OpId from leader: term: 4 index: 6. (index mismatch)
I20250629 01:57:32.396780 18196 raft_consensus.cc:1273] T 98766d41e998414f880e46d4eadf5364 P 49db8738eca7484d958caab97fc56240 [term 4 FOLLOWER]: Refusing update from remote peer d7b0107584e2409880bf436f68e36cc5: Log matching property violated. Preceding OpId in replica: term: 4 index: 5. Preceding OpId from leader: term: 4 index: 6. (index mismatch)
I20250629 01:57:32.397627 18583 consensus_queue.cc:1035] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [LEADER]: Connected to new peer: Peer: permanent_uuid: "379ee11d280540bfa3a9fa5f1d63a4b7" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 45377 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 6, Last known committed idx: 5, Time since last communication: 0.000s
I20250629 01:57:32.398342 18584 consensus_queue.cc:1035] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [LEADER]: Connected to new peer: Peer: permanent_uuid: "49db8738eca7484d958caab97fc56240" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41629 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 6, Last known committed idx: 5, Time since last communication: 0.000s
I20250629 01:57:32.404240 18583 raft_consensus.cc:2953] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 [term 4 LEADER]: Committing config change with OpId 4.6: config changed from index 5 to 6, 379ee11d280540bfa3a9fa5f1d63a4b7 (127.17.83.65) changed from NON_VOTER to VOTER. New config: { opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "49db8738eca7484d958caab97fc56240" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41629 } } peers { permanent_uuid: "d7b0107584e2409880bf436f68e36cc5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 43881 } } peers { permanent_uuid: "379ee11d280540bfa3a9fa5f1d63a4b7" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 45377 } attrs { promote: false } } unsafe_config_change: true }
I20250629 01:57:32.405553 18196 raft_consensus.cc:2953] T 98766d41e998414f880e46d4eadf5364 P 49db8738eca7484d958caab97fc56240 [term 4 FOLLOWER]: Committing config change with OpId 4.6: config changed from index 5 to 6, 379ee11d280540bfa3a9fa5f1d63a4b7 (127.17.83.65) changed from NON_VOTER to VOTER. New config: { opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "49db8738eca7484d958caab97fc56240" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41629 } } peers { permanent_uuid: "d7b0107584e2409880bf436f68e36cc5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 43881 } } peers { permanent_uuid: "379ee11d280540bfa3a9fa5f1d63a4b7" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 45377 } attrs { promote: false } } unsafe_config_change: true }
I20250629 01:57:32.407196 17931 raft_consensus.cc:2953] T 98766d41e998414f880e46d4eadf5364 P 379ee11d280540bfa3a9fa5f1d63a4b7 [term 4 FOLLOWER]: Committing config change with OpId 4.6: config changed from index 5 to 6, 379ee11d280540bfa3a9fa5f1d63a4b7 (127.17.83.65) changed from NON_VOTER to VOTER. New config: { opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "49db8738eca7484d958caab97fc56240" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41629 } } peers { permanent_uuid: "d7b0107584e2409880bf436f68e36cc5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 43881 } } peers { permanent_uuid: "379ee11d280540bfa3a9fa5f1d63a4b7" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 45377 } attrs { promote: false } } unsafe_config_change: true }
I20250629 01:57:32.414624 18466 catalog_manager.cc:5582] T 98766d41e998414f880e46d4eadf5364 P d7b0107584e2409880bf436f68e36cc5 reported cstate change: config changed from index 5 to 6, 379ee11d280540bfa3a9fa5f1d63a4b7 (127.17.83.65) changed from NON_VOTER to VOTER. New cstate: current_term: 4 leader_uuid: "d7b0107584e2409880bf436f68e36cc5" committed_config { opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "49db8738eca7484d958caab97fc56240" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41629 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "d7b0107584e2409880bf436f68e36cc5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 43881 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "379ee11d280540bfa3a9fa5f1d63a4b7" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 45377 } attrs { promote: false } health_report { overall_health: HEALTHY } } unsafe_config_change: true }
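At this point the tablet is back to three healthy voters (49db8738..., d7b01075..., 379ee11d...) and the unsafe_config_change flag in the committed config is the only remaining trace of the intervention. On a live cluster, a quick way to confirm this state would be to run ksck against the master, shown here as an assumed follow-up check rather than something taken from this log:

  kudu cluster ksck 127.17.83.126:46827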
I20250629 01:57:32.423013 17741 kudu-admin-test.cc:751] Waiting for Master to see new config...
I20250629 01:57:32.436542 17741 kudu-admin-test.cc:756] Tablet locations:
tablet_locations {
tablet_id: "98766d41e998414f880e46d4eadf5364"
DEPRECATED_stale: false
partition {
partition_key_start: ""
partition_key_end: ""
}
interned_replicas {
ts_info_idx: 0
role: FOLLOWER
}
interned_replicas {
ts_info_idx: 1
role: LEADER
}
interned_replicas {
ts_info_idx: 2
role: FOLLOWER
}
}
ts_infos {
permanent_uuid: "49db8738eca7484d958caab97fc56240"
rpc_addresses {
host: "127.17.83.67"
port: 41629
}
}
ts_infos {
permanent_uuid: "d7b0107584e2409880bf436f68e36cc5"
rpc_addresses {
host: "127.17.83.66"
port: 43881
}
}
ts_infos {
permanent_uuid: "379ee11d280540bfa3a9fa5f1d63a4b7"
rpc_addresses {
host: "127.17.83.65"
port: 45377
}
}
I20250629 01:57:32.438695 17741 external_mini_cluster.cc:1620] Killing /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu with pid 17847
I20250629 01:57:32.462168 17741 external_mini_cluster.cc:1620] Killing /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu with pid 17980
I20250629 01:57:32.490303 17741 external_mini_cluster.cc:1620] Killing /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu with pid 18113
I20250629 01:57:32.515825 17741 external_mini_cluster.cc:1620] Killing /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu with pid 18419
2025-06-29T01:57:32Z chronyd exiting
[ OK ] AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes (18283 ms)
[ RUN ] AdminCliTest.TestGracefulSpecificLeaderStepDown
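This next test brings up a fresh 3-tserver cluster and asks a specific leader replica to step down gracefully through the CLI. The command it exercises is presumably of the form below; the tablet id and target UUID are placeholders, not values from this shard's log, and the --new_leader_uuid flag is an assumption based on the test name:

  kudu tablet leader_step_down 127.17.83.126:33573 <tablet_id> --new_leader_uuid=<uuid>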
I20250629 01:57:32.571511 17741 test_util.cc:276] Using random seed: 1006777185
I20250629 01:57:32.577019 17741 ts_itest-base.cc:115] Starting cluster with:
I20250629 01:57:32.577183 17741 ts_itest-base.cc:116] --------------
I20250629 01:57:32.577335 17741 ts_itest-base.cc:117] 3 tablet servers
I20250629 01:57:32.577477 17741 ts_itest-base.cc:118] 3 replicas per TS
I20250629 01:57:32.577644 17741 ts_itest-base.cc:119] --------------
2025-06-29T01:57:32Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-06-29T01:57:32Z Disabled control of system clock
I20250629 01:57:32.609839 17741 external_mini_cluster.cc:1351] Running /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
/tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.17.83.126:33573
--webserver_interface=127.17.83.126
--webserver_port=0
--builtin_ntp_servers=127.17.83.84:33799
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.17.83.126:33573
--catalog_manager_wait_for_new_tablets_to_elect_leader=false with env {}
W20250629 01:57:32.884152 18603 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:57:32.884743 18603 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250629 01:57:32.885177 18603 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:57:32.913085 18603 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250629 01:57:32.913374 18603 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250629 01:57:32.913652 18603 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250629 01:57:32.913892 18603 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250629 01:57:32.945099 18603 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.17.83.84:33799
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
--catalog_manager_wait_for_new_tablets_to_elect_leader=false
--ipki_ca_key_size=768
--master_addresses=127.17.83.126:33573
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.17.83.126:33573
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.17.83.126
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.18.0-SNAPSHOT
revision f0776864a65c19ebdd909053f3e624e8bcc25cee
build type FASTDEBUG
built by None at 29 Jun 2025 01:43:20 UTC on 5fd53c4cbb9d
build id 6821
TSAN enabled
I20250629 01:57:32.946310 18603 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250629 01:57:32.947727 18603 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250629 01:57:32.960794 18612 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:57:32.961195 18610 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:57:32.961217 18609 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:57:32.963071 18603 server_base.cc:1048] running on GCE node
I20250629 01:57:34.094439 18603 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250629 01:57:34.096768 18603 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250629 01:57:34.098109 18603 hybrid_clock.cc:648] HybridClock initialized: now 1751162254098076 us; error 55 us; skew 500 ppm
I20250629 01:57:34.098846 18603 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250629 01:57:34.110597 18603 webserver.cc:469] Webserver started at http://127.17.83.126:39135/ using document root <none> and password file <none>
I20250629 01:57:34.111443 18603 fs_manager.cc:362] Metadata directory not provided
I20250629 01:57:34.111608 18603 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250629 01:57:34.111954 18603 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250629 01:57:34.115970 18603 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "d2467d431af24629a2b147163498629f"
format_stamp: "Formatted at 2025-06-29 01:57:34 on dist-test-slave-v1mb"
I20250629 01:57:34.116995 18603 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "d2467d431af24629a2b147163498629f"
format_stamp: "Formatted at 2025-06-29 01:57:34 on dist-test-slave-v1mb"
I20250629 01:57:34.123613 18603 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.007s sys 0.000s
I20250629 01:57:34.128561 18619 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250629 01:57:34.129459 18603 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.005s sys 0.000s
I20250629 01:57:34.129761 18603 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
uuid: "d2467d431af24629a2b147163498629f"
format_stamp: "Formatted at 2025-06-29 01:57:34 on dist-test-slave-v1mb"
I20250629 01:57:34.130039 18603 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250629 01:57:34.174286 18603 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250629 01:57:34.175647 18603 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250629 01:57:34.176036 18603 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250629 01:57:34.241712 18603 rpc_server.cc:307] RPC server started. Bound to: 127.17.83.126:33573
I20250629 01:57:34.241818 18670 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.17.83.126:33573 every 8 connection(s)
I20250629 01:57:34.244328 18603 server_base.cc:1180] Dumped server information to /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250629 01:57:34.249188 18671 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250629 01:57:34.250084 17741 external_mini_cluster.cc:1413] Started /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu as pid 18603
I20250629 01:57:34.250581 17741 external_mini_cluster.cc:1427] Reading /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250629 01:57:34.273221 18671 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P d2467d431af24629a2b147163498629f: Bootstrap starting.
I20250629 01:57:34.277979 18671 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P d2467d431af24629a2b147163498629f: Neither blocks nor log segments found. Creating new log.
I20250629 01:57:34.279374 18671 log.cc:826] T 00000000000000000000000000000000 P d2467d431af24629a2b147163498629f: Log is configured to *not* fsync() on all Append() calls
I20250629 01:57:34.283393 18671 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P d2467d431af24629a2b147163498629f: No bootstrap required, opened a new log
I20250629 01:57:34.298460 18671 raft_consensus.cc:357] T 00000000000000000000000000000000 P d2467d431af24629a2b147163498629f [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d2467d431af24629a2b147163498629f" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 33573 } }
I20250629 01:57:34.299002 18671 raft_consensus.cc:383] T 00000000000000000000000000000000 P d2467d431af24629a2b147163498629f [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250629 01:57:34.299206 18671 raft_consensus.cc:738] T 00000000000000000000000000000000 P d2467d431af24629a2b147163498629f [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: d2467d431af24629a2b147163498629f, State: Initialized, Role: FOLLOWER
I20250629 01:57:34.299857 18671 consensus_queue.cc:260] T 00000000000000000000000000000000 P d2467d431af24629a2b147163498629f [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d2467d431af24629a2b147163498629f" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 33573 } }
I20250629 01:57:34.300309 18671 raft_consensus.cc:397] T 00000000000000000000000000000000 P d2467d431af24629a2b147163498629f [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250629 01:57:34.300575 18671 raft_consensus.cc:491] T 00000000000000000000000000000000 P d2467d431af24629a2b147163498629f [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250629 01:57:34.300863 18671 raft_consensus.cc:3058] T 00000000000000000000000000000000 P d2467d431af24629a2b147163498629f [term 0 FOLLOWER]: Advancing to term 1
I20250629 01:57:34.304425 18671 raft_consensus.cc:513] T 00000000000000000000000000000000 P d2467d431af24629a2b147163498629f [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d2467d431af24629a2b147163498629f" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 33573 } }
I20250629 01:57:34.305030 18671 leader_election.cc:304] T 00000000000000000000000000000000 P d2467d431af24629a2b147163498629f [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: d2467d431af24629a2b147163498629f; no voters:
I20250629 01:57:34.306689 18671 leader_election.cc:290] T 00000000000000000000000000000000 P d2467d431af24629a2b147163498629f [CANDIDATE]: Term 1 election: Requested vote from peers
I20250629 01:57:34.307471 18676 raft_consensus.cc:2802] T 00000000000000000000000000000000 P d2467d431af24629a2b147163498629f [term 1 FOLLOWER]: Leader election won for term 1
I20250629 01:57:34.309414 18676 raft_consensus.cc:695] T 00000000000000000000000000000000 P d2467d431af24629a2b147163498629f [term 1 LEADER]: Becoming Leader. State: Replica: d2467d431af24629a2b147163498629f, State: Running, Role: LEADER
I20250629 01:57:34.310109 18676 consensus_queue.cc:237] T 00000000000000000000000000000000 P d2467d431af24629a2b147163498629f [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d2467d431af24629a2b147163498629f" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 33573 } }
I20250629 01:57:34.311345 18671 sys_catalog.cc:564] T 00000000000000000000000000000000 P d2467d431af24629a2b147163498629f [sys.catalog]: configured and running, proceeding with master startup.
I20250629 01:57:34.320767 18678 sys_catalog.cc:455] T 00000000000000000000000000000000 P d2467d431af24629a2b147163498629f [sys.catalog]: SysCatalogTable state changed. Reason: New leader d2467d431af24629a2b147163498629f. Latest consensus state: current_term: 1 leader_uuid: "d2467d431af24629a2b147163498629f" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d2467d431af24629a2b147163498629f" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 33573 } } }
I20250629 01:57:34.321468 18678 sys_catalog.cc:458] T 00000000000000000000000000000000 P d2467d431af24629a2b147163498629f [sys.catalog]: This master's current role is: LEADER
I20250629 01:57:34.321702 18677 sys_catalog.cc:455] T 00000000000000000000000000000000 P d2467d431af24629a2b147163498629f [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "d2467d431af24629a2b147163498629f" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d2467d431af24629a2b147163498629f" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 33573 } } }
I20250629 01:57:34.322237 18677 sys_catalog.cc:458] T 00000000000000000000000000000000 P d2467d431af24629a2b147163498629f [sys.catalog]: This master's current role is: LEADER
I20250629 01:57:34.324000 18685 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250629 01:57:34.335403 18685 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250629 01:57:34.349583 18685 catalog_manager.cc:1349] Generated new cluster ID: 2165a34fd9da4b4fb63ab837be942222
I20250629 01:57:34.349855 18685 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250629 01:57:34.378592 18685 catalog_manager.cc:1372] Generated new certificate authority record
I20250629 01:57:34.379953 18685 catalog_manager.cc:1506] Loading token signing keys...
I20250629 01:57:34.395571 18685 catalog_manager.cc:5955] T 00000000000000000000000000000000 P d2467d431af24629a2b147163498629f: Generated new TSK 0
I20250629 01:57:34.396323 18685 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250629 01:57:34.409415 17741 external_mini_cluster.cc:1351] Running /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
/tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.17.83.65:0
--local_ip_for_outbound_sockets=127.17.83.65
--webserver_interface=127.17.83.65
--webserver_port=0
--tserver_master_addrs=127.17.83.126:33573
--builtin_ntp_servers=127.17.83.84:33799
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false with env {}
W20250629 01:57:34.691654 18696 flags.cc:425] Enabled unsafe flag: --enable_leader_failure_detection=false
W20250629 01:57:34.692219 18696 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:57:34.692462 18696 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250629 01:57:34.692910 18696 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:57:34.722389 18696 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250629 01:57:34.723299 18696 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.17.83.65
I20250629 01:57:34.756167 18696 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.17.83.84:33799
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.17.83.65:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.17.83.65
--webserver_port=0
--tserver_master_addrs=127.17.83.126:33573
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.17.83.65
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision f0776864a65c19ebdd909053f3e624e8bcc25cee
build type FASTDEBUG
built by None at 29 Jun 2025 01:43:20 UTC on 5fd53c4cbb9d
build id 6821
TSAN enabled
I20250629 01:57:34.757570 18696 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250629 01:57:34.759184 18696 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250629 01:57:34.775398 18703 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:57:34.775918 18702 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:57:34.777999 18705 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:57:36.226935 18704 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1449 milliseconds
I20250629 01:57:36.227051 18696 server_base.cc:1043] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250629 01:57:36.228320 18696 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250629 01:57:36.230914 18696 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250629 01:57:36.232321 18696 hybrid_clock.cc:648] HybridClock initialized: now 1751162256232286 us; error 50 us; skew 500 ppm
I20250629 01:57:36.233057 18696 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250629 01:57:36.239459 18696 webserver.cc:469] Webserver started at http://127.17.83.65:44197/ using document root <none> and password file <none>
I20250629 01:57:36.240338 18696 fs_manager.cc:362] Metadata directory not provided
I20250629 01:57:36.240528 18696 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250629 01:57:36.240970 18696 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250629 01:57:36.245249 18696 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "8f8cc07b3f7942f7b309addefc821a26"
format_stamp: "Formatted at 2025-06-29 01:57:36 on dist-test-slave-v1mb"
I20250629 01:57:36.246258 18696 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "8f8cc07b3f7942f7b309addefc821a26"
format_stamp: "Formatted at 2025-06-29 01:57:36 on dist-test-slave-v1mb"
I20250629 01:57:36.252832 18696 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.004s sys 0.004s
I20250629 01:57:36.258265 18712 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250629 01:57:36.259254 18696 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.002s sys 0.001s
I20250629 01:57:36.259534 18696 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "8f8cc07b3f7942f7b309addefc821a26"
format_stamp: "Formatted at 2025-06-29 01:57:36 on dist-test-slave-v1mb"
I20250629 01:57:36.259838 18696 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250629 01:57:36.308530 18696 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250629 01:57:36.309896 18696 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250629 01:57:36.310299 18696 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250629 01:57:36.312731 18696 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250629 01:57:36.316629 18696 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250629 01:57:36.316818 18696 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250629 01:57:36.317075 18696 ts_tablet_manager.cc:610] Registered 0 tablets
I20250629 01:57:36.317231 18696 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250629 01:57:36.442615 18696 rpc_server.cc:307] RPC server started. Bound to: 127.17.83.65:35467
I20250629 01:57:36.442714 18824 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.17.83.65:35467 every 8 connection(s)
I20250629 01:57:36.445083 18696 server_base.cc:1180] Dumped server information to /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250629 01:57:36.448428 17741 external_mini_cluster.cc:1413] Started /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu as pid 18696
I20250629 01:57:36.448851 17741 external_mini_cluster.cc:1427] Reading /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250629 01:57:36.455827 17741 external_mini_cluster.cc:1351] Running /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
/tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.17.83.66:0
--local_ip_for_outbound_sockets=127.17.83.66
--webserver_interface=127.17.83.66
--webserver_port=0
--tserver_master_addrs=127.17.83.126:33573
--builtin_ntp_servers=127.17.83.84:33799
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false with env {}
I20250629 01:57:36.467921 18825 heartbeater.cc:344] Connected to a master server at 127.17.83.126:33573
I20250629 01:57:36.468284 18825 heartbeater.cc:461] Registering TS with master...
I20250629 01:57:36.469174 18825 heartbeater.cc:507] Master 127.17.83.126:33573 requested a full tablet report, sending...
I20250629 01:57:36.471628 18636 ts_manager.cc:194] Registered new tserver with Master: 8f8cc07b3f7942f7b309addefc821a26 (127.17.83.65:35467)
I20250629 01:57:36.474566 18636 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.17.83.65:41287
W20250629 01:57:36.731498 18829 flags.cc:425] Enabled unsafe flag: --enable_leader_failure_detection=false
W20250629 01:57:36.732064 18829 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:57:36.732267 18829 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250629 01:57:36.732683 18829 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:57:36.762888 18829 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250629 01:57:36.763720 18829 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.17.83.66
I20250629 01:57:36.796294 18829 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.17.83.84:33799
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.17.83.66:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.17.83.66
--webserver_port=0
--tserver_master_addrs=127.17.83.126:33573
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.17.83.66
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision f0776864a65c19ebdd909053f3e624e8bcc25cee
build type FASTDEBUG
built by None at 29 Jun 2025 01:43:20 UTC on 5fd53c4cbb9d
build id 6821
TSAN enabled
I20250629 01:57:36.797492 18829 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250629 01:57:36.798961 18829 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250629 01:57:36.812776 18835 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:57:36.813289 18836 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:57:36.814311 18838 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:57:36.815117 18829 server_base.cc:1048] running on GCE node
I20250629 01:57:37.477732 18825 heartbeater.cc:499] Master 127.17.83.126:33573 was elected leader, sending a full tablet report...
I20250629 01:57:37.927893 18829 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250629 01:57:37.930106 18829 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250629 01:57:37.931432 18829 hybrid_clock.cc:648] HybridClock initialized: now 1751162257931402 us; error 57 us; skew 500 ppm
I20250629 01:57:37.932260 18829 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250629 01:57:37.938261 18829 webserver.cc:469] Webserver started at http://127.17.83.66:34143/ using document root <none> and password file <none>
I20250629 01:57:37.939126 18829 fs_manager.cc:362] Metadata directory not provided
I20250629 01:57:37.939361 18829 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250629 01:57:37.939786 18829 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250629 01:57:37.944242 18829 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data/instance:
uuid: "509e22d57e234d0b927a30167daacb10"
format_stamp: "Formatted at 2025-06-29 01:57:37 on dist-test-slave-v1mb"
I20250629 01:57:37.945348 18829 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal/instance:
uuid: "509e22d57e234d0b927a30167daacb10"
format_stamp: "Formatted at 2025-06-29 01:57:37 on dist-test-slave-v1mb"
I20250629 01:57:37.951941 18829 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.007s sys 0.001s
I20250629 01:57:37.957253 18845 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250629 01:57:37.958204 18829 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.004s sys 0.001s
I20250629 01:57:37.958506 18829 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "509e22d57e234d0b927a30167daacb10"
format_stamp: "Formatted at 2025-06-29 01:57:37 on dist-test-slave-v1mb"
I20250629 01:57:37.958823 18829 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250629 01:57:38.002295 18829 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250629 01:57:38.003736 18829 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250629 01:57:38.004144 18829 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250629 01:57:38.006597 18829 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250629 01:57:38.010457 18829 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250629 01:57:38.010668 18829 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250629 01:57:38.010898 18829 ts_tablet_manager.cc:610] Registered 0 tablets
I20250629 01:57:38.011054 18829 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250629 01:57:38.137883 18829 rpc_server.cc:307] RPC server started. Bound to: 127.17.83.66:39125
I20250629 01:57:38.138075 18957 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.17.83.66:39125 every 8 connection(s)
I20250629 01:57:38.140364 18829 server_base.cc:1180] Dumped server information to /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250629 01:57:38.149191 17741 external_mini_cluster.cc:1413] Started /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu as pid 18829
I20250629 01:57:38.149650 17741 external_mini_cluster.cc:1427] Reading /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal/instance
I20250629 01:57:38.156387 17741 external_mini_cluster.cc:1351] Running /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
/tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.17.83.67:0
--local_ip_for_outbound_sockets=127.17.83.67
--webserver_interface=127.17.83.67
--webserver_port=0
--tserver_master_addrs=127.17.83.126:33573
--builtin_ntp_servers=127.17.83.84:33799
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false with env {}
I20250629 01:57:38.163286 18958 heartbeater.cc:344] Connected to a master server at 127.17.83.126:33573
I20250629 01:57:38.163697 18958 heartbeater.cc:461] Registering TS with master...
I20250629 01:57:38.164912 18958 heartbeater.cc:507] Master 127.17.83.126:33573 requested a full tablet report, sending...
I20250629 01:57:38.167322 18636 ts_manager.cc:194] Registered new tserver with Master: 509e22d57e234d0b927a30167daacb10 (127.17.83.66:39125)
I20250629 01:57:38.168468 18636 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.17.83.66:50825
W20250629 01:57:38.446769 18962 flags.cc:425] Enabled unsafe flag: --enable_leader_failure_detection=false
W20250629 01:57:38.447441 18962 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:57:38.447696 18962 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250629 01:57:38.448177 18962 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:57:38.479897 18962 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250629 01:57:38.480741 18962 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.17.83.67
I20250629 01:57:38.513751 18962 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.17.83.84:33799
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.17.83.67:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.17.83.67
--webserver_port=0
--tserver_master_addrs=127.17.83.126:33573
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.17.83.67
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision f0776864a65c19ebdd909053f3e624e8bcc25cee
build type FASTDEBUG
built by None at 29 Jun 2025 01:43:20 UTC on 5fd53c4cbb9d
build id 6821
TSAN enabled
I20250629 01:57:38.514993 18962 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250629 01:57:38.516626 18962 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250629 01:57:38.532915 18971 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:57:38.532936 18969 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:57:38.533106 18968 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:57:39.171895 18958 heartbeater.cc:499] Master 127.17.83.126:33573 was elected leader, sending a full tablet report...
W20250629 01:57:39.659304 18970 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection time-out
I20250629 01:57:39.661437 18962 server_base.cc:1043] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250629 01:57:39.665897 18962 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250629 01:57:39.668625 18962 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250629 01:57:39.670015 18962 hybrid_clock.cc:648] HybridClock initialized: now 1751162259669963 us; error 68 us; skew 500 ppm
I20250629 01:57:39.670717 18962 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250629 01:57:39.686376 18962 webserver.cc:469] Webserver started at http://127.17.83.67:38959/ using document root <none> and password file <none>
I20250629 01:57:39.687261 18962 fs_manager.cc:362] Metadata directory not provided
I20250629 01:57:39.687433 18962 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250629 01:57:39.687815 18962 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250629 01:57:39.691843 18962 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data/instance:
uuid: "876d88e50936493b8192f709d704be32"
format_stamp: "Formatted at 2025-06-29 01:57:39 on dist-test-slave-v1mb"
I20250629 01:57:39.692852 18962 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal/instance:
uuid: "876d88e50936493b8192f709d704be32"
format_stamp: "Formatted at 2025-06-29 01:57:39 on dist-test-slave-v1mb"
I20250629 01:57:39.700461 18962 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.005s sys 0.003s
I20250629 01:57:39.705726 18978 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250629 01:57:39.706681 18962 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.002s sys 0.000s
I20250629 01:57:39.707006 18962 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "876d88e50936493b8192f709d704be32"
format_stamp: "Formatted at 2025-06-29 01:57:39 on dist-test-slave-v1mb"
I20250629 01:57:39.707350 18962 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250629 01:57:39.763895 18962 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250629 01:57:39.765425 18962 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250629 01:57:39.765892 18962 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250629 01:57:39.768430 18962 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250629 01:57:39.772692 18962 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250629 01:57:39.772941 18962 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250629 01:57:39.773242 18962 ts_tablet_manager.cc:610] Registered 0 tablets
I20250629 01:57:39.773401 18962 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250629 01:57:39.913610 18962 rpc_server.cc:307] RPC server started. Bound to: 127.17.83.67:32771
I20250629 01:57:39.913699 19090 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.17.83.67:32771 every 8 connection(s)
I20250629 01:57:39.916010 18962 server_base.cc:1180] Dumped server information to /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250629 01:57:39.923084 17741 external_mini_cluster.cc:1413] Started /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu as pid 18962
I20250629 01:57:39.923432 17741 external_mini_cluster.cc:1427] Reading /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal/instance
I20250629 01:57:39.934295 19091 heartbeater.cc:344] Connected to a master server at 127.17.83.126:33573
I20250629 01:57:39.934672 19091 heartbeater.cc:461] Registering TS with master...
I20250629 01:57:39.935573 19091 heartbeater.cc:507] Master 127.17.83.126:33573 requested a full tablet report, sending...
I20250629 01:57:39.937386 18636 ts_manager.cc:194] Registered new tserver with Master: 876d88e50936493b8192f709d704be32 (127.17.83.67:32771)
I20250629 01:57:39.938484 18636 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.17.83.67:39859
I20250629 01:57:39.941991 17741 external_mini_cluster.cc:934] 3 TS(s) registered with all masters
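At this point all three tablet servers are registered with the master at 127.17.83.126:33573. For reference, the same registration state can be checked with standard kudu CLI subcommands, reusing the binary path and master address from this log; a minimal sketch, assuming the cluster is still running:

  /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu tserver list 127.17.83.126:33573
  /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu cluster ksck 127.17.83.126:33573

"tserver list" prints the registered tablet servers with their UUIDs and RPC addresses; "cluster ksck" additionally checks master, table, and tablet health.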
I20250629 01:57:39.972728 18636 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:39934:
name: "TestTable"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
owner: "alice"
W20250629 01:57:39.990188 18636 catalog_manager.cc:6944] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table TestTable in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
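The CreateTable request above is accepted despite the warning: with num_replicas=3 and exactly three live tablet servers there is no spare server to host a replacement replica if one server fails. Once the table exists, its schema and partitioning can be inspected from the same kudu binary; a minimal sketch using the master address and table name from this log (the subcommand spellings are standard kudu CLI usage, not something shown in this test output):

  /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu table list 127.17.83.126:33573
  /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu table describe 127.17.83.126:33573 TestTable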
I20250629 01:57:40.042006 18893 tablet_service.cc:1468] Processing CreateTablet for tablet f97f32b0d0ae4ca0be29c4254d468ad4 (DEFAULT_TABLE table=TestTable [id=0dc09da6e4a143a8b5488fe0e15f3b3d]), partition=RANGE (key) PARTITION UNBOUNDED
I20250629 01:57:40.043135 19026 tablet_service.cc:1468] Processing CreateTablet for tablet f97f32b0d0ae4ca0be29c4254d468ad4 (DEFAULT_TABLE table=TestTable [id=0dc09da6e4a143a8b5488fe0e15f3b3d]), partition=RANGE (key) PARTITION UNBOUNDED
I20250629 01:57:40.043746 18893 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet f97f32b0d0ae4ca0be29c4254d468ad4. 1 dirs total, 0 dirs full, 0 dirs failed
I20250629 01:57:40.043543 18760 tablet_service.cc:1468] Processing CreateTablet for tablet f97f32b0d0ae4ca0be29c4254d468ad4 (DEFAULT_TABLE table=TestTable [id=0dc09da6e4a143a8b5488fe0e15f3b3d]), partition=RANGE (key) PARTITION UNBOUNDED
I20250629 01:57:40.044554 19026 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet f97f32b0d0ae4ca0be29c4254d468ad4. 1 dirs total, 0 dirs full, 0 dirs failed
I20250629 01:57:40.045254 18760 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet f97f32b0d0ae4ca0be29c4254d468ad4. 1 dirs total, 0 dirs full, 0 dirs failed
I20250629 01:57:40.063848 19110 tablet_bootstrap.cc:492] T f97f32b0d0ae4ca0be29c4254d468ad4 P 876d88e50936493b8192f709d704be32: Bootstrap starting.
I20250629 01:57:40.069623 19110 tablet_bootstrap.cc:654] T f97f32b0d0ae4ca0be29c4254d468ad4 P 876d88e50936493b8192f709d704be32: Neither blocks nor log segments found. Creating new log.
I20250629 01:57:40.071092 19111 tablet_bootstrap.cc:492] T f97f32b0d0ae4ca0be29c4254d468ad4 P 8f8cc07b3f7942f7b309addefc821a26: Bootstrap starting.
I20250629 01:57:40.072098 19110 log.cc:826] T f97f32b0d0ae4ca0be29c4254d468ad4 P 876d88e50936493b8192f709d704be32: Log is configured to *not* fsync() on all Append() calls
I20250629 01:57:40.077733 19110 tablet_bootstrap.cc:492] T f97f32b0d0ae4ca0be29c4254d468ad4 P 876d88e50936493b8192f709d704be32: No bootstrap required, opened a new log
I20250629 01:57:40.078181 19110 ts_tablet_manager.cc:1397] T f97f32b0d0ae4ca0be29c4254d468ad4 P 876d88e50936493b8192f709d704be32: Time spent bootstrapping tablet: real 0.015s user 0.009s sys 0.005s
I20250629 01:57:40.078421 19112 tablet_bootstrap.cc:492] T f97f32b0d0ae4ca0be29c4254d468ad4 P 509e22d57e234d0b927a30167daacb10: Bootstrap starting.
I20250629 01:57:40.078497 19111 tablet_bootstrap.cc:654] T f97f32b0d0ae4ca0be29c4254d468ad4 P 8f8cc07b3f7942f7b309addefc821a26: Neither blocks nor log segments found. Creating new log.
I20250629 01:57:40.080710 19111 log.cc:826] T f97f32b0d0ae4ca0be29c4254d468ad4 P 8f8cc07b3f7942f7b309addefc821a26: Log is configured to *not* fsync() on all Append() calls
I20250629 01:57:40.084007 19112 tablet_bootstrap.cc:654] T f97f32b0d0ae4ca0be29c4254d468ad4 P 509e22d57e234d0b927a30167daacb10: Neither blocks nor log segments found. Creating new log.
I20250629 01:57:40.085923 19112 log.cc:826] T f97f32b0d0ae4ca0be29c4254d468ad4 P 509e22d57e234d0b927a30167daacb10: Log is configured to *not* fsync() on all Append() calls
I20250629 01:57:40.087121 19111 tablet_bootstrap.cc:492] T f97f32b0d0ae4ca0be29c4254d468ad4 P 8f8cc07b3f7942f7b309addefc821a26: No bootstrap required, opened a new log
I20250629 01:57:40.087673 19111 ts_tablet_manager.cc:1397] T f97f32b0d0ae4ca0be29c4254d468ad4 P 8f8cc07b3f7942f7b309addefc821a26: Time spent bootstrapping tablet: real 0.017s user 0.016s sys 0.000s
I20250629 01:57:40.090056 19112 tablet_bootstrap.cc:492] T f97f32b0d0ae4ca0be29c4254d468ad4 P 509e22d57e234d0b927a30167daacb10: No bootstrap required, opened a new log
I20250629 01:57:40.090380 19112 ts_tablet_manager.cc:1397] T f97f32b0d0ae4ca0be29c4254d468ad4 P 509e22d57e234d0b927a30167daacb10: Time spent bootstrapping tablet: real 0.012s user 0.007s sys 0.003s
I20250629 01:57:40.103389 19110 raft_consensus.cc:357] T f97f32b0d0ae4ca0be29c4254d468ad4 P 876d88e50936493b8192f709d704be32 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "509e22d57e234d0b927a30167daacb10" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 39125 } } peers { permanent_uuid: "876d88e50936493b8192f709d704be32" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 32771 } } peers { permanent_uuid: "8f8cc07b3f7942f7b309addefc821a26" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 35467 } }
I20250629 01:57:40.104251 19110 raft_consensus.cc:738] T f97f32b0d0ae4ca0be29c4254d468ad4 P 876d88e50936493b8192f709d704be32 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 876d88e50936493b8192f709d704be32, State: Initialized, Role: FOLLOWER
I20250629 01:57:40.104935 19110 consensus_queue.cc:260] T f97f32b0d0ae4ca0be29c4254d468ad4 P 876d88e50936493b8192f709d704be32 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "509e22d57e234d0b927a30167daacb10" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 39125 } } peers { permanent_uuid: "876d88e50936493b8192f709d704be32" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 32771 } } peers { permanent_uuid: "8f8cc07b3f7942f7b309addefc821a26" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 35467 } }
I20250629 01:57:40.107714 19091 heartbeater.cc:499] Master 127.17.83.126:33573 was elected leader, sending a full tablet report...
I20250629 01:57:40.107621 19112 raft_consensus.cc:357] T f97f32b0d0ae4ca0be29c4254d468ad4 P 509e22d57e234d0b927a30167daacb10 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "509e22d57e234d0b927a30167daacb10" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 39125 } } peers { permanent_uuid: "876d88e50936493b8192f709d704be32" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 32771 } } peers { permanent_uuid: "8f8cc07b3f7942f7b309addefc821a26" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 35467 } }
I20250629 01:57:40.108359 19112 raft_consensus.cc:738] T f97f32b0d0ae4ca0be29c4254d468ad4 P 509e22d57e234d0b927a30167daacb10 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 509e22d57e234d0b927a30167daacb10, State: Initialized, Role: FOLLOWER
I20250629 01:57:40.108934 19110 ts_tablet_manager.cc:1428] T f97f32b0d0ae4ca0be29c4254d468ad4 P 876d88e50936493b8192f709d704be32: Time spent starting tablet: real 0.030s user 0.026s sys 0.004s
I20250629 01:57:40.108974 19112 consensus_queue.cc:260] T f97f32b0d0ae4ca0be29c4254d468ad4 P 509e22d57e234d0b927a30167daacb10 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "509e22d57e234d0b927a30167daacb10" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 39125 } } peers { permanent_uuid: "876d88e50936493b8192f709d704be32" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 32771 } } peers { permanent_uuid: "8f8cc07b3f7942f7b309addefc821a26" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 35467 } }
I20250629 01:57:40.112591 19112 ts_tablet_manager.cc:1428] T f97f32b0d0ae4ca0be29c4254d468ad4 P 509e22d57e234d0b927a30167daacb10: Time spent starting tablet: real 0.022s user 0.023s sys 0.001s
I20250629 01:57:40.112412 19111 raft_consensus.cc:357] T f97f32b0d0ae4ca0be29c4254d468ad4 P 8f8cc07b3f7942f7b309addefc821a26 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "509e22d57e234d0b927a30167daacb10" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 39125 } } peers { permanent_uuid: "876d88e50936493b8192f709d704be32" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 32771 } } peers { permanent_uuid: "8f8cc07b3f7942f7b309addefc821a26" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 35467 } }
I20250629 01:57:40.113390 19111 raft_consensus.cc:738] T f97f32b0d0ae4ca0be29c4254d468ad4 P 8f8cc07b3f7942f7b309addefc821a26 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 8f8cc07b3f7942f7b309addefc821a26, State: Initialized, Role: FOLLOWER
I20250629 01:57:40.114017 19111 consensus_queue.cc:260] T f97f32b0d0ae4ca0be29c4254d468ad4 P 8f8cc07b3f7942f7b309addefc821a26 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "509e22d57e234d0b927a30167daacb10" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 39125 } } peers { permanent_uuid: "876d88e50936493b8192f709d704be32" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 32771 } } peers { permanent_uuid: "8f8cc07b3f7942f7b309addefc821a26" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 35467 } }
I20250629 01:57:40.117880 19111 ts_tablet_manager.cc:1428] T f97f32b0d0ae4ca0be29c4254d468ad4 P 8f8cc07b3f7942f7b309addefc821a26: Time spent starting tablet: real 0.030s user 0.025s sys 0.003s
I20250629 01:57:40.132619 17741 external_mini_cluster.cc:934] 3 TS(s) registered with all masters
I20250629 01:57:40.135627 17741 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 8f8cc07b3f7942f7b309addefc821a26 to finish bootstrapping
W20250629 01:57:40.146881 18959 tablet.cc:2378] T f97f32b0d0ae4ca0be29c4254d468ad4 P 509e22d57e234d0b927a30167daacb10: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250629 01:57:40.149962 17741 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 509e22d57e234d0b927a30167daacb10 to finish bootstrapping
I20250629 01:57:40.161262 17741 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 876d88e50936493b8192f709d704be32 to finish bootstrapping
W20250629 01:57:40.170260 19092 tablet.cc:2378] T f97f32b0d0ae4ca0be29c4254d468ad4 P 876d88e50936493b8192f709d704be32: Can't schedule compaction. Clean time has not been advanced past its initial value.
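Every server in this cluster was started with --enable_leader_failure_detection=false, so none of the three replicas starts an election on its own; the forced election below is requested explicitly over RPC. The per-server replica state seen here can also be listed from the CLI; a sketch using ts-0's bound RPC address from this log ("remote_replica list" is the standard subcommand for listing replicas hosted by a single tablet server):

  /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu remote_replica list 127.17.83.65:35467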
I20250629 01:57:40.200105 18780 tablet_service.cc:1940] Received Run Leader Election RPC: tablet_id: "f97f32b0d0ae4ca0be29c4254d468ad4"
dest_uuid: "8f8cc07b3f7942f7b309addefc821a26"
from {username='slave'} at 127.0.0.1:37742
I20250629 01:57:40.200639 18780 raft_consensus.cc:491] T f97f32b0d0ae4ca0be29c4254d468ad4 P 8f8cc07b3f7942f7b309addefc821a26 [term 0 FOLLOWER]: Starting forced leader election (received explicit request)
I20250629 01:57:40.200917 18780 raft_consensus.cc:3058] T f97f32b0d0ae4ca0be29c4254d468ad4 P 8f8cc07b3f7942f7b309addefc821a26 [term 0 FOLLOWER]: Advancing to term 1
W20250629 01:57:40.202684 18826 tablet.cc:2378] T f97f32b0d0ae4ca0be29c4254d468ad4 P 8f8cc07b3f7942f7b309addefc821a26: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250629 01:57:40.205634 18780 raft_consensus.cc:513] T f97f32b0d0ae4ca0be29c4254d468ad4 P 8f8cc07b3f7942f7b309addefc821a26 [term 1 FOLLOWER]: Starting forced leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "509e22d57e234d0b927a30167daacb10" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 39125 } } peers { permanent_uuid: "876d88e50936493b8192f709d704be32" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 32771 } } peers { permanent_uuid: "8f8cc07b3f7942f7b309addefc821a26" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 35467 } }
I20250629 01:57:40.207671 18780 leader_election.cc:290] T f97f32b0d0ae4ca0be29c4254d468ad4 P 8f8cc07b3f7942f7b309addefc821a26 [CANDIDATE]: Term 1 election: Requested vote from peers 509e22d57e234d0b927a30167daacb10 (127.17.83.66:39125), 876d88e50936493b8192f709d704be32 (127.17.83.67:32771)
I20250629 01:57:40.214145 17741 cluster_itest_util.cc:257] Not converged past 1 yet: 0.0 0.0 0.0
I20250629 01:57:40.218272 18913 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "f97f32b0d0ae4ca0be29c4254d468ad4" candidate_uuid: "8f8cc07b3f7942f7b309addefc821a26" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: true dest_uuid: "509e22d57e234d0b927a30167daacb10"
I20250629 01:57:40.218809 18913 raft_consensus.cc:3058] T f97f32b0d0ae4ca0be29c4254d468ad4 P 509e22d57e234d0b927a30167daacb10 [term 0 FOLLOWER]: Advancing to term 1
I20250629 01:57:40.219384 19046 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "f97f32b0d0ae4ca0be29c4254d468ad4" candidate_uuid: "8f8cc07b3f7942f7b309addefc821a26" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: true dest_uuid: "876d88e50936493b8192f709d704be32"
I20250629 01:57:40.219866 19046 raft_consensus.cc:3058] T f97f32b0d0ae4ca0be29c4254d468ad4 P 876d88e50936493b8192f709d704be32 [term 0 FOLLOWER]: Advancing to term 1
I20250629 01:57:40.222611 18913 raft_consensus.cc:2466] T f97f32b0d0ae4ca0be29c4254d468ad4 P 509e22d57e234d0b927a30167daacb10 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 8f8cc07b3f7942f7b309addefc821a26 in term 1.
I20250629 01:57:40.223479 18716 leader_election.cc:304] T f97f32b0d0ae4ca0be29c4254d468ad4 P 8f8cc07b3f7942f7b309addefc821a26 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 509e22d57e234d0b927a30167daacb10, 8f8cc07b3f7942f7b309addefc821a26; no voters:
I20250629 01:57:40.224030 19046 raft_consensus.cc:2466] T f97f32b0d0ae4ca0be29c4254d468ad4 P 876d88e50936493b8192f709d704be32 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 8f8cc07b3f7942f7b309addefc821a26 in term 1.
I20250629 01:57:40.224160 19118 raft_consensus.cc:2802] T f97f32b0d0ae4ca0be29c4254d468ad4 P 8f8cc07b3f7942f7b309addefc821a26 [term 1 FOLLOWER]: Leader election won for term 1
I20250629 01:57:40.225759 19118 raft_consensus.cc:695] T f97f32b0d0ae4ca0be29c4254d468ad4 P 8f8cc07b3f7942f7b309addefc821a26 [term 1 LEADER]: Becoming Leader. State: Replica: 8f8cc07b3f7942f7b309addefc821a26, State: Running, Role: LEADER
I20250629 01:57:40.226373 19118 consensus_queue.cc:237] T f97f32b0d0ae4ca0be29c4254d468ad4 P 8f8cc07b3f7942f7b309addefc821a26 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "509e22d57e234d0b927a30167daacb10" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 39125 } } peers { permanent_uuid: "876d88e50936493b8192f709d704be32" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 32771 } } peers { permanent_uuid: "8f8cc07b3f7942f7b309addefc821a26" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 35467 } }
I20250629 01:57:40.236824 18636 catalog_manager.cc:5582] T f97f32b0d0ae4ca0be29c4254d468ad4 P 8f8cc07b3f7942f7b309addefc821a26 reported cstate change: term changed from 0 to 1, leader changed from <none> to 8f8cc07b3f7942f7b309addefc821a26 (127.17.83.65). New cstate: current_term: 1 leader_uuid: "8f8cc07b3f7942f7b309addefc821a26" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "509e22d57e234d0b927a30167daacb10" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 39125 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "876d88e50936493b8192f709d704be32" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 32771 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "8f8cc07b3f7942f7b309addefc821a26" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 35467 } health_report { overall_health: HEALTHY } } }
I20250629 01:57:40.319010 17741 cluster_itest_util.cc:257] Not converged past 1 yet: 1.1 0.0 0.0
I20250629 01:57:40.523789 17741 cluster_itest_util.cc:257] Not converged past 1 yet: 1.1 0.0 0.0
I20250629 01:57:40.726665 19118 consensus_queue.cc:1035] T f97f32b0d0ae4ca0be29c4254d468ad4 P 8f8cc07b3f7942f7b309addefc821a26 [LEADER]: Connected to new peer: Peer: permanent_uuid: "876d88e50936493b8192f709d704be32" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 32771 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250629 01:57:40.742568 19118 consensus_queue.cc:1035] T f97f32b0d0ae4ca0be29c4254d468ad4 P 8f8cc07b3f7942f7b309addefc821a26 [LEADER]: Connected to new peer: Peer: permanent_uuid: "509e22d57e234d0b927a30167daacb10" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 39125 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
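The section that follows exercises a GRACEFUL leader step-down: the current leader 8f8cc07b3f7942f7b309addefc821a26 receives the LeaderStepDown RPC, signals the chosen follower 876d88e50936493b8192f709d704be32 to start an election in term 2, and then steps down once it sees the higher term. From the command line this kind of targeted transfer is normally issued with the kudu tablet leader_step_down tool; a minimal sketch with the master address and tablet id from this log (--new_leader_uuid is the usual flag for naming the desired successor, but treat the exact flag spelling as an assumption rather than something shown in this log):

  /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu tablet leader_step_down \
      --new_leader_uuid=876d88e50936493b8192f709d704be32 \
      127.17.83.126:33573 f97f32b0d0ae4ca0be29c4254d468ad4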
I20250629 01:57:42.465883 18780 tablet_service.cc:1968] Received LeaderStepDown RPC: tablet_id: "f97f32b0d0ae4ca0be29c4254d468ad4"
dest_uuid: "8f8cc07b3f7942f7b309addefc821a26"
mode: GRACEFUL
from {username='slave'} at 127.0.0.1:37754
I20250629 01:57:42.466495 18780 raft_consensus.cc:604] T f97f32b0d0ae4ca0be29c4254d468ad4 P 8f8cc07b3f7942f7b309addefc821a26 [term 1 LEADER]: Received request to transfer leadership
I20250629 01:57:42.715783 19157 raft_consensus.cc:991] T f97f32b0d0ae4ca0be29c4254d468ad4 P 8f8cc07b3f7942f7b309addefc821a26: : Instructing follower 876d88e50936493b8192f709d704be32 to start an election
I20250629 01:57:42.716189 19144 raft_consensus.cc:1079] T f97f32b0d0ae4ca0be29c4254d468ad4 P 8f8cc07b3f7942f7b309addefc821a26 [term 1 LEADER]: Signalling peer 876d88e50936493b8192f709d704be32 to start an election
I20250629 01:57:42.717406 19046 tablet_service.cc:1940] Received Run Leader Election RPC: tablet_id: "f97f32b0d0ae4ca0be29c4254d468ad4"
dest_uuid: "876d88e50936493b8192f709d704be32"
from {username='slave'} at 127.17.83.65:33009
I20250629 01:57:42.717882 19046 raft_consensus.cc:491] T f97f32b0d0ae4ca0be29c4254d468ad4 P 876d88e50936493b8192f709d704be32 [term 1 FOLLOWER]: Starting forced leader election (received explicit request)
I20250629 01:57:42.718132 19046 raft_consensus.cc:3058] T f97f32b0d0ae4ca0be29c4254d468ad4 P 876d88e50936493b8192f709d704be32 [term 1 FOLLOWER]: Advancing to term 2
I20250629 01:57:42.722074 19046 raft_consensus.cc:513] T f97f32b0d0ae4ca0be29c4254d468ad4 P 876d88e50936493b8192f709d704be32 [term 2 FOLLOWER]: Starting forced leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "509e22d57e234d0b927a30167daacb10" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 39125 } } peers { permanent_uuid: "876d88e50936493b8192f709d704be32" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 32771 } } peers { permanent_uuid: "8f8cc07b3f7942f7b309addefc821a26" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 35467 } }
I20250629 01:57:42.724006 19046 leader_election.cc:290] T f97f32b0d0ae4ca0be29c4254d468ad4 P 876d88e50936493b8192f709d704be32 [CANDIDATE]: Term 2 election: Requested vote from peers 509e22d57e234d0b927a30167daacb10 (127.17.83.66:39125), 8f8cc07b3f7942f7b309addefc821a26 (127.17.83.65:35467)
I20250629 01:57:42.734684 18780 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "f97f32b0d0ae4ca0be29c4254d468ad4" candidate_uuid: "876d88e50936493b8192f709d704be32" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: true dest_uuid: "8f8cc07b3f7942f7b309addefc821a26"
I20250629 01:57:42.735001 18913 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "f97f32b0d0ae4ca0be29c4254d468ad4" candidate_uuid: "876d88e50936493b8192f709d704be32" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: true dest_uuid: "509e22d57e234d0b927a30167daacb10"
I20250629 01:57:42.735147 18780 raft_consensus.cc:3053] T f97f32b0d0ae4ca0be29c4254d468ad4 P 8f8cc07b3f7942f7b309addefc821a26 [term 1 LEADER]: Stepping down as leader of term 1
I20250629 01:57:42.735471 18913 raft_consensus.cc:3058] T f97f32b0d0ae4ca0be29c4254d468ad4 P 509e22d57e234d0b927a30167daacb10 [term 1 FOLLOWER]: Advancing to term 2
I20250629 01:57:42.735471 18780 raft_consensus.cc:738] T f97f32b0d0ae4ca0be29c4254d468ad4 P 8f8cc07b3f7942f7b309addefc821a26 [term 1 LEADER]: Becoming Follower/Learner. State: Replica: 8f8cc07b3f7942f7b309addefc821a26, State: Running, Role: LEADER
I20250629 01:57:42.736042 18780 consensus_queue.cc:260] T f97f32b0d0ae4ca0be29c4254d468ad4 P 8f8cc07b3f7942f7b309addefc821a26 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 1, Committed index: 1, Last appended: 1.1, Last appended by leader: 1, Current term: 1, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "509e22d57e234d0b927a30167daacb10" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 39125 } } peers { permanent_uuid: "876d88e50936493b8192f709d704be32" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 32771 } } peers { permanent_uuid: "8f8cc07b3f7942f7b309addefc821a26" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 35467 } }
I20250629 01:57:42.736901 18780 raft_consensus.cc:3058] T f97f32b0d0ae4ca0be29c4254d468ad4 P 8f8cc07b3f7942f7b309addefc821a26 [term 1 FOLLOWER]: Advancing to term 2
I20250629 01:57:42.739388 18913 raft_consensus.cc:2466] T f97f32b0d0ae4ca0be29c4254d468ad4 P 509e22d57e234d0b927a30167daacb10 [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 876d88e50936493b8192f709d704be32 in term 2.
I20250629 01:57:42.740319 18982 leader_election.cc:304] T f97f32b0d0ae4ca0be29c4254d468ad4 P 876d88e50936493b8192f709d704be32 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 509e22d57e234d0b927a30167daacb10, 876d88e50936493b8192f709d704be32; no voters:
I20250629 01:57:42.741276 18780 raft_consensus.cc:2466] T f97f32b0d0ae4ca0be29c4254d468ad4 P 8f8cc07b3f7942f7b309addefc821a26 [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 876d88e50936493b8192f709d704be32 in term 2.
I20250629 01:57:42.742360 19161 raft_consensus.cc:2802] T f97f32b0d0ae4ca0be29c4254d468ad4 P 876d88e50936493b8192f709d704be32 [term 2 FOLLOWER]: Leader election won for term 2
I20250629 01:57:42.743599 19161 raft_consensus.cc:695] T f97f32b0d0ae4ca0be29c4254d468ad4 P 876d88e50936493b8192f709d704be32 [term 2 LEADER]: Becoming Leader. State: Replica: 876d88e50936493b8192f709d704be32, State: Running, Role: LEADER
I20250629 01:57:42.744297 19161 consensus_queue.cc:237] T f97f32b0d0ae4ca0be29c4254d468ad4 P 876d88e50936493b8192f709d704be32 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 1, Committed index: 1, Last appended: 1.1, Last appended by leader: 1, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "509e22d57e234d0b927a30167daacb10" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 39125 } } peers { permanent_uuid: "876d88e50936493b8192f709d704be32" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 32771 } } peers { permanent_uuid: "8f8cc07b3f7942f7b309addefc821a26" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 35467 } }
I20250629 01:57:42.750878 18634 catalog_manager.cc:5582] T f97f32b0d0ae4ca0be29c4254d468ad4 P 876d88e50936493b8192f709d704be32 reported cstate change: term changed from 1 to 2, leader changed from 8f8cc07b3f7942f7b309addefc821a26 (127.17.83.65) to 876d88e50936493b8192f709d704be32 (127.17.83.67). New cstate: current_term: 2 leader_uuid: "876d88e50936493b8192f709d704be32" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "509e22d57e234d0b927a30167daacb10" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 39125 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "876d88e50936493b8192f709d704be32" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 32771 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "8f8cc07b3f7942f7b309addefc821a26" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 35467 } health_report { overall_health: UNKNOWN } } }
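The election summary above ("received 2 responses out of 3 voters: 2 yes votes; 0 no votes") is standard Raft majority counting: a candidate needs floor(N/2)+1 yes votes and counts its own vote. A minimal sketch of that decision, with illustrative names rather than Kudu's actual code:

// Illustrative only: decide a Raft-style election from collected vote responses.
// With 3 voters a majority is 2, so the candidate's own vote plus one peer wins.
enum class ElectionResult { kUndecided, kWon, kLost };

ElectionResult DecideElection(int num_voters, int yes_votes, int no_votes) {
  const int majority = num_voters / 2 + 1;   // e.g. 2 when there are 3 voters
  if (yes_votes >= majority) return ElectionResult::kWon;
  if (no_votes >= majority)  return ElectionResult::kLost;
  return ElectionResult::kUndecided;         // keep waiting for more responses
}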
I20250629 01:57:43.125217 18780 raft_consensus.cc:1273] T f97f32b0d0ae4ca0be29c4254d468ad4 P 8f8cc07b3f7942f7b309addefc821a26 [term 2 FOLLOWER]: Refusing update from remote peer 876d88e50936493b8192f709d704be32: Log matching property violated. Preceding OpId in replica: term: 1 index: 1. Preceding OpId from leader: term: 2 index: 2. (index mismatch)
I20250629 01:57:43.126346 19161 consensus_queue.cc:1035] T f97f32b0d0ae4ca0be29c4254d468ad4 P 876d88e50936493b8192f709d704be32 [LEADER]: Connected to new peer: Peer: permanent_uuid: "8f8cc07b3f7942f7b309addefc821a26" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 35467 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 1, Time since last communication: 0.000s
I20250629 01:57:43.136116 18913 raft_consensus.cc:1273] T f97f32b0d0ae4ca0be29c4254d468ad4 P 509e22d57e234d0b927a30167daacb10 [term 2 FOLLOWER]: Refusing update from remote peer 876d88e50936493b8192f709d704be32: Log matching property violated. Preceding OpId in replica: term: 1 index: 1. Preceding OpId from leader: term: 2 index: 2. (index mismatch)
I20250629 01:57:43.138478 19162 consensus_queue.cc:1035] T f97f32b0d0ae4ca0be29c4254d468ad4 P 876d88e50936493b8192f709d704be32 [LEADER]: Connected to new peer: Peer: permanent_uuid: "509e22d57e234d0b927a30167daacb10" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 39125 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 1, Time since last communication: 0.001s
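The "Log matching property violated ... (index mismatch)" rejections above, followed by the leader reconnecting with Next index: 2, are the usual Raft convergence step after a term change: a follower refuses an append whose preceding OpId does not match what it already has, and the leader backs up until the logs line up. A rough follower-side sketch with illustrative types, not the actual implementation:

#include <cstdint>

// Illustrative check of the Raft log matching property on the follower side.
struct OpId { int64_t term; int64_t index; };

// Accept the append only if the OpId the leader says precedes the new batch
// matches the follower's last received OpId; otherwise the leader retries
// from an earlier index (the "Next index: 2" seen in the lines above).
bool PrecedingEntryMatches(const OpId& last_in_replica, const OpId& preceding_from_leader) {
  return last_in_replica.index == preceding_from_leader.index &&
         last_in_replica.term == preceding_from_leader.term;
}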
I20250629 01:57:45.083763 18780 tablet_service.cc:1968] Received LeaderStepDown RPC: tablet_id: "f97f32b0d0ae4ca0be29c4254d468ad4"
dest_uuid: "8f8cc07b3f7942f7b309addefc821a26"
mode: GRACEFUL
from {username='slave'} at 127.0.0.1:37764
I20250629 01:57:45.084357 18780 raft_consensus.cc:604] T f97f32b0d0ae4ca0be29c4254d468ad4 P 8f8cc07b3f7942f7b309addefc821a26 [term 2 FOLLOWER]: Received request to transfer leadership
I20250629 01:57:45.084695 18780 raft_consensus.cc:612] T f97f32b0d0ae4ca0be29c4254d468ad4 P 8f8cc07b3f7942f7b309addefc821a26 [term 2 FOLLOWER]: Rejecting request to transfer leadership while not leader
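The GRACEFUL LeaderStepDown above is rejected because the target replica already lost leadership when term 2 began; only the current leader can hand off leadership. A generic sketch of that guard (the Status type and function name here are illustrative, not Kudu's API):

#include <string>

enum class RaftRole { kLeader, kFollower, kLearner };

struct Status {
  bool ok;
  std::string message;
  static Status OK() { return {true, ""}; }
  static Status IllegalState(std::string m) { return {false, std::move(m)}; }
};

// Only the current leader may transfer leadership; any other role rejects.
Status TransferLeadership(RaftRole role, const std::string& new_leader_uuid) {
  if (role != RaftRole::kLeader) {
    return Status::IllegalState("Rejecting request to transfer leadership while not leader");
  }
  // A real implementation would stop accepting new writes and wait for an
  // up-to-date voter (optionally new_leader_uuid) before stepping down.
  (void)new_leader_uuid;
  return Status::OK();
}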
I20250629 01:57:46.119153 17741 external_mini_cluster.cc:1620] Killing /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu with pid 18696
I20250629 01:57:46.145150 17741 external_mini_cluster.cc:1620] Killing /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu with pid 18829
I20250629 01:57:46.170154 17741 external_mini_cluster.cc:1620] Killing /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu with pid 18962
I20250629 01:57:46.194525 17741 external_mini_cluster.cc:1620] Killing /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu with pid 18603
2025-06-29T01:57:46Z chronyd exiting
[ OK ] AdminCliTest.TestGracefulSpecificLeaderStepDown (13675 ms)
[ RUN ] AdminCliTest.TestDescribeTableColumnFlags
I20250629 01:57:46.246902 17741 test_util.cc:276] Using random seed: 1020452576
I20250629 01:57:46.250849 17741 ts_itest-base.cc:115] Starting cluster with:
I20250629 01:57:46.250990 17741 ts_itest-base.cc:116] --------------
I20250629 01:57:46.251108 17741 ts_itest-base.cc:117] 3 tablet servers
I20250629 01:57:46.251268 17741 ts_itest-base.cc:118] 3 replicas per TS
I20250629 01:57:46.251395 17741 ts_itest-base.cc:119] --------------
2025-06-29T01:57:46Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-06-29T01:57:46Z Disabled control of system clock
I20250629 01:57:46.285946 17741 external_mini_cluster.cc:1351] Running /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
/tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.17.83.126:33297
--webserver_interface=127.17.83.126
--webserver_port=0
--builtin_ntp_servers=127.17.83.84:46375
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.17.83.126:33297 with env {}
W20250629 01:57:46.561352 19206 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:57:46.561844 19206 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250629 01:57:46.562209 19206 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:57:46.589789 19206 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250629 01:57:46.590047 19206 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250629 01:57:46.590241 19206 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250629 01:57:46.590430 19206 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250629 01:57:46.622313 19206 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.17.83.84:46375
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.17.83.126:33297
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.17.83.126:33297
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.17.83.126
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.18.0-SNAPSHOT
revision f0776864a65c19ebdd909053f3e624e8bcc25cee
build type FASTDEBUG
built by None at 29 Jun 2025 01:43:20 UTC on 5fd53c4cbb9d
build id 6821
TSAN enabled
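The "Enabled unsafe flag" / "Enabled experimental flag" warnings earlier, together with --unlock_unsafe_flags and --unlock_experimental_flags in the flag dump above, show flags that are gated behind explicit unlock switches. A self-contained gflags sketch of that gating pattern, offered as an assumption about how such a check can be wired rather than Kudu's actual flag-tagging mechanism:

#include <cstdio>
#include <cstdlib>
#include <gflags/gflags.h>

// Unlock switch, mirroring --unlock_unsafe_flags in the command line above.
DEFINE_bool(unlock_unsafe_flags, false, "Allow unsafe flags to be changed");
// An example flag treated as "unsafe" for this sketch only.
DEFINE_bool(never_fsync, false, "Never fsync() to disk (unsafe; test-only)");

int main(int argc, char** argv) {
  gflags::ParseCommandLineFlags(&argc, &argv, /*remove_flags=*/true);
  if (FLAGS_never_fsync && !FLAGS_unlock_unsafe_flags) {
    std::fprintf(stderr, "flag --never_fsync is unsafe; pass --unlock_unsafe_flags to enable it\n");
    return EXIT_FAILURE;
  }
  if (FLAGS_never_fsync) {
    std::fprintf(stderr, "WARNING: Enabled unsafe flag: --never_fsync=true\n");
  }
  return EXIT_SUCCESS;
}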
I20250629 01:57:46.623561 19206 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250629 01:57:46.625514 19206 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250629 01:57:46.640426 19215 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:57:46.640633 19213 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:57:46.640666 19212 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:57:46.641649 19206 server_base.cc:1048] running on GCE node
I20250629 01:57:47.753715 19206 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250629 01:57:47.756242 19206 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250629 01:57:47.757685 19206 hybrid_clock.cc:648] HybridClock initialized: now 1751162267757667 us; error 56 us; skew 500 ppm
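The HybridClock line above reports a physical timestamp in microseconds together with an error bound and an assumed maximum skew of 500 ppm. One plausible way to derive such a bound is to grow the error measured at the last clock synchronization by the drift allowance; a toy sketch under that assumption, not the server's clock code:

#include <cstdint>

// Toy error-bound propagation: uncertainty grows with the assumed drift rate
// (e.g. 500 ppm) for the time elapsed since the last successful NTP sync.
struct ClockReading {
  int64_t now_micros;    // current physical time, e.g. 1751162267757667
  int64_t error_micros;  // bound on how far off now_micros could be
};

ClockReading Read(int64_t now_micros,
                  int64_t last_sync_micros,
                  int64_t error_at_sync_micros,
                  double skew_ppm /* e.g. 500 */) {
  const int64_t elapsed = now_micros - last_sync_micros;
  const int64_t drift = static_cast<int64_t>(elapsed * skew_ppm / 1e6);
  return ClockReading{now_micros, error_at_sync_micros + drift};
}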
I20250629 01:57:47.758489 19206 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250629 01:57:47.764546 19206 webserver.cc:469] Webserver started at http://127.17.83.126:40199/ using document root <none> and password file <none>
I20250629 01:57:47.765539 19206 fs_manager.cc:362] Metadata directory not provided
I20250629 01:57:47.765740 19206 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250629 01:57:47.766191 19206 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250629 01:57:47.770218 19206 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "a767e093ab06400a94d01804aa4fe2d1"
format_stamp: "Formatted at 2025-06-29 01:57:47 on dist-test-slave-v1mb"
I20250629 01:57:47.771255 19206 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "a767e093ab06400a94d01804aa4fe2d1"
format_stamp: "Formatted at 2025-06-29 01:57:47 on dist-test-slave-v1mb"
I20250629 01:57:47.777882 19206 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.005s sys 0.000s
I20250629 01:57:47.783293 19222 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250629 01:57:47.784235 19206 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.004s sys 0.000s
I20250629 01:57:47.784502 19206 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
uuid: "a767e093ab06400a94d01804aa4fe2d1"
format_stamp: "Formatted at 2025-06-29 01:57:47 on dist-test-slave-v1mb"
I20250629 01:57:47.784835 19206 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250629 01:57:47.838793 19206 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250629 01:57:47.840153 19206 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250629 01:57:47.840548 19206 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250629 01:57:47.904244 19206 rpc_server.cc:307] RPC server started. Bound to: 127.17.83.126:33297
I20250629 01:57:47.904310 19273 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.17.83.126:33297 every 8 connection(s)
I20250629 01:57:47.906761 19206 server_base.cc:1180] Dumped server information to /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250629 01:57:47.911322 19274 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250629 01:57:47.913883 17741 external_mini_cluster.cc:1413] Started /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu as pid 19206
I20250629 01:57:47.914328 17741 external_mini_cluster.cc:1427] Reading /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250629 01:57:47.931396 19274 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P a767e093ab06400a94d01804aa4fe2d1: Bootstrap starting.
I20250629 01:57:47.937223 19274 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P a767e093ab06400a94d01804aa4fe2d1: Neither blocks nor log segments found. Creating new log.
I20250629 01:57:47.939015 19274 log.cc:826] T 00000000000000000000000000000000 P a767e093ab06400a94d01804aa4fe2d1: Log is configured to *not* fsync() on all Append() calls
I20250629 01:57:47.942881 19274 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P a767e093ab06400a94d01804aa4fe2d1: No bootstrap required, opened a new log
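"Neither blocks nor log segments found. Creating new log." followed by "No bootstrap required, opened a new log" is the empty-replica path of tablet bootstrap: with no data blocks and no WAL segments on disk there is nothing to replay. A small sketch of that decision, with illustrative names:

// Illustrative bootstrap decision: replay the WAL only when there is prior
// on-disk state; otherwise start the replica with a fresh, empty log.
struct OnDiskState {
  bool has_data_blocks;
  bool has_log_segments;
};

enum class BootstrapAction { kCreateNewLog, kReplayLog };

BootstrapAction DecideBootstrap(const OnDiskState& state) {
  if (!state.has_data_blocks && !state.has_log_segments) {
    return BootstrapAction::kCreateNewLog;  // "No bootstrap required"
  }
  return BootstrapAction::kReplayLog;       // replay segments to rebuild state
}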
I20250629 01:57:47.958685 19274 raft_consensus.cc:357] T 00000000000000000000000000000000 P a767e093ab06400a94d01804aa4fe2d1 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a767e093ab06400a94d01804aa4fe2d1" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 33297 } }
I20250629 01:57:47.959280 19274 raft_consensus.cc:383] T 00000000000000000000000000000000 P a767e093ab06400a94d01804aa4fe2d1 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250629 01:57:47.959517 19274 raft_consensus.cc:738] T 00000000000000000000000000000000 P a767e093ab06400a94d01804aa4fe2d1 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: a767e093ab06400a94d01804aa4fe2d1, State: Initialized, Role: FOLLOWER
I20250629 01:57:47.960227 19274 consensus_queue.cc:260] T 00000000000000000000000000000000 P a767e093ab06400a94d01804aa4fe2d1 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a767e093ab06400a94d01804aa4fe2d1" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 33297 } }
I20250629 01:57:47.960798 19274 raft_consensus.cc:397] T 00000000000000000000000000000000 P a767e093ab06400a94d01804aa4fe2d1 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250629 01:57:47.961021 19274 raft_consensus.cc:491] T 00000000000000000000000000000000 P a767e093ab06400a94d01804aa4fe2d1 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250629 01:57:47.961277 19274 raft_consensus.cc:3058] T 00000000000000000000000000000000 P a767e093ab06400a94d01804aa4fe2d1 [term 0 FOLLOWER]: Advancing to term 1
I20250629 01:57:47.964839 19274 raft_consensus.cc:513] T 00000000000000000000000000000000 P a767e093ab06400a94d01804aa4fe2d1 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a767e093ab06400a94d01804aa4fe2d1" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 33297 } }
I20250629 01:57:47.965442 19274 leader_election.cc:304] T 00000000000000000000000000000000 P a767e093ab06400a94d01804aa4fe2d1 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: a767e093ab06400a94d01804aa4fe2d1; no voters:
I20250629 01:57:47.967073 19274 leader_election.cc:290] T 00000000000000000000000000000000 P a767e093ab06400a94d01804aa4fe2d1 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250629 01:57:47.967736 19279 raft_consensus.cc:2802] T 00000000000000000000000000000000 P a767e093ab06400a94d01804aa4fe2d1 [term 1 FOLLOWER]: Leader election won for term 1
I20250629 01:57:47.969627 19279 raft_consensus.cc:695] T 00000000000000000000000000000000 P a767e093ab06400a94d01804aa4fe2d1 [term 1 LEADER]: Becoming Leader. State: Replica: a767e093ab06400a94d01804aa4fe2d1, State: Running, Role: LEADER
I20250629 01:57:47.970317 19279 consensus_queue.cc:237] T 00000000000000000000000000000000 P a767e093ab06400a94d01804aa4fe2d1 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a767e093ab06400a94d01804aa4fe2d1" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 33297 } }
I20250629 01:57:47.971174 19274 sys_catalog.cc:564] T 00000000000000000000000000000000 P a767e093ab06400a94d01804aa4fe2d1 [sys.catalog]: configured and running, proceeding with master startup.
I20250629 01:57:47.980406 19281 sys_catalog.cc:455] T 00000000000000000000000000000000 P a767e093ab06400a94d01804aa4fe2d1 [sys.catalog]: SysCatalogTable state changed. Reason: New leader a767e093ab06400a94d01804aa4fe2d1. Latest consensus state: current_term: 1 leader_uuid: "a767e093ab06400a94d01804aa4fe2d1" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a767e093ab06400a94d01804aa4fe2d1" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 33297 } } }
I20250629 01:57:47.981165 19281 sys_catalog.cc:458] T 00000000000000000000000000000000 P a767e093ab06400a94d01804aa4fe2d1 [sys.catalog]: This master's current role is: LEADER
I20250629 01:57:47.982633 19280 sys_catalog.cc:455] T 00000000000000000000000000000000 P a767e093ab06400a94d01804aa4fe2d1 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "a767e093ab06400a94d01804aa4fe2d1" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a767e093ab06400a94d01804aa4fe2d1" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 33297 } } }
I20250629 01:57:47.983335 19280 sys_catalog.cc:458] T 00000000000000000000000000000000 P a767e093ab06400a94d01804aa4fe2d1 [sys.catalog]: This master's current role is: LEADER
I20250629 01:57:47.983952 19287 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250629 01:57:47.994513 19287 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250629 01:57:48.010417 19287 catalog_manager.cc:1349] Generated new cluster ID: d56bc98c826d41d3ac44f05ea927fcff
I20250629 01:57:48.010696 19287 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250629 01:57:48.033466 19287 catalog_manager.cc:1372] Generated new certificate authority record
I20250629 01:57:48.034941 19287 catalog_manager.cc:1506] Loading token signing keys...
I20250629 01:57:48.047773 19287 catalog_manager.cc:5955] T 00000000000000000000000000000000 P a767e093ab06400a94d01804aa4fe2d1: Generated new TSK 0
I20250629 01:57:48.048702 19287 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250629 01:57:48.063839 17741 external_mini_cluster.cc:1351] Running /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
/tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.17.83.65:0
--local_ip_for_outbound_sockets=127.17.83.65
--webserver_interface=127.17.83.65
--webserver_port=0
--tserver_master_addrs=127.17.83.126:33297
--builtin_ntp_servers=127.17.83.84:46375
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
W20250629 01:57:48.364275 19298 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:57:48.364791 19298 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250629 01:57:48.365317 19298 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:57:48.397221 19298 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250629 01:57:48.398002 19298 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.17.83.65
I20250629 01:57:48.430872 19298 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.17.83.84:46375
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.17.83.65:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.17.83.65
--webserver_port=0
--tserver_master_addrs=127.17.83.126:33297
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.17.83.65
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision f0776864a65c19ebdd909053f3e624e8bcc25cee
build type FASTDEBUG
built by None at 29 Jun 2025 01:43:20 UTC on 5fd53c4cbb9d
build id 6821
TSAN enabled
I20250629 01:57:48.432155 19298 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250629 01:57:48.433806 19298 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250629 01:57:48.450915 19304 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:57:48.451332 19305 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:57:48.453474 19298 server_base.cc:1048] running on GCE node
W20250629 01:57:48.451697 19307 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:57:49.582895 19298 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250629 01:57:49.585667 19298 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250629 01:57:49.587074 19298 hybrid_clock.cc:648] HybridClock initialized: now 1751162269587019 us; error 72 us; skew 500 ppm
I20250629 01:57:49.587898 19298 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250629 01:57:49.594915 19298 webserver.cc:469] Webserver started at http://127.17.83.65:32791/ using document root <none> and password file <none>
I20250629 01:57:49.595849 19298 fs_manager.cc:362] Metadata directory not provided
I20250629 01:57:49.596058 19298 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250629 01:57:49.596536 19298 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250629 01:57:49.600689 19298 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "d18569b65a0543dfa404a5e71d2d92cc"
format_stamp: "Formatted at 2025-06-29 01:57:49 on dist-test-slave-v1mb"
I20250629 01:57:49.601786 19298 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "d18569b65a0543dfa404a5e71d2d92cc"
format_stamp: "Formatted at 2025-06-29 01:57:49 on dist-test-slave-v1mb"
I20250629 01:57:49.608866 19298 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.009s sys 0.000s
I20250629 01:57:49.613910 19314 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250629 01:57:49.614929 19298 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.002s sys 0.000s
I20250629 01:57:49.615264 19298 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "d18569b65a0543dfa404a5e71d2d92cc"
format_stamp: "Formatted at 2025-06-29 01:57:49 on dist-test-slave-v1mb"
I20250629 01:57:49.615622 19298 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250629 01:57:49.672175 19298 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250629 01:57:49.673622 19298 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250629 01:57:49.674053 19298 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250629 01:57:49.676586 19298 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250629 01:57:49.680310 19298 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250629 01:57:49.680502 19298 ts_tablet_manager.cc:525] Time spent loading tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250629 01:57:49.680733 19298 ts_tablet_manager.cc:610] Registered 0 tablets
I20250629 01:57:49.680882 19298 ts_tablet_manager.cc:589] Time spent registering tablets: real 0.000s user 0.000s sys 0.000s
I20250629 01:57:49.805403 19298 rpc_server.cc:307] RPC server started. Bound to: 127.17.83.65:39751
I20250629 01:57:49.805490 19426 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.17.83.65:39751 every 8 connection(s)
I20250629 01:57:49.807976 19298 server_base.cc:1180] Dumped server information to /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250629 01:57:49.818168 17741 external_mini_cluster.cc:1413] Started /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu as pid 19298
I20250629 01:57:49.818512 17741 external_mini_cluster.cc:1427] Reading /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250629 01:57:49.823894 17741 external_mini_cluster.cc:1351] Running /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
/tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.17.83.66:0
--local_ip_for_outbound_sockets=127.17.83.66
--webserver_interface=127.17.83.66
--webserver_port=0
--tserver_master_addrs=127.17.83.126:33297
--builtin_ntp_servers=127.17.83.84:46375
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250629 01:57:49.828400 19427 heartbeater.cc:344] Connected to a master server at 127.17.83.126:33297
I20250629 01:57:49.828904 19427 heartbeater.cc:461] Registering TS with master...
I20250629 01:57:49.830120 19427 heartbeater.cc:507] Master 127.17.83.126:33297 requested a full tablet report, sending...
I20250629 01:57:49.832581 19239 ts_manager.cc:194] Registered new tserver with Master: d18569b65a0543dfa404a5e71d2d92cc (127.17.83.65:39751)
I20250629 01:57:49.834374 19239 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.17.83.65:49289
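The sequence above (connect to the master, register, send a full tablet report when asked, receive a signed certificate) is the tablet server's heartbeat handshake. A compressed, generic sketch of such a loop; the class and method names are illustrative, not the actual heartbeater interface:

#include <string>

// Illustrative heartbeat flow between a tablet server and the master.
struct HeartbeatResponse {
  bool needs_full_tablet_report;  // master asks for a full report on first contact
  bool needs_reregister;          // master does not (or no longer) knows this TS
};

class Heartbeater {
 public:
  explicit Heartbeater(std::string master_addr) : master_addr_(std::move(master_addr)) {}

  void Tick() {
    if (!registered_) {
      Register();                 // "Registering TS with master..."
      registered_ = true;
    }
    HeartbeatResponse resp = SendHeartbeat();
    if (resp.needs_reregister) registered_ = false;
    if (resp.needs_full_tablet_report) SendFullTabletReport();  // "requested a full tablet report, sending..."
  }

 private:
  void Register() {}
  HeartbeatResponse SendHeartbeat() { return {true, false}; }
  void SendFullTabletReport() {}

  std::string master_addr_;
  bool registered_ = false;
};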
W20250629 01:57:50.109694 19431 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:57:50.110153 19431 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250629 01:57:50.110618 19431 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:57:50.140583 19431 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250629 01:57:50.141412 19431 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.17.83.66
I20250629 01:57:50.174854 19431 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.17.83.84:46375
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.17.83.66:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.17.83.66
--webserver_port=0
--tserver_master_addrs=127.17.83.126:33297
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.17.83.66
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision f0776864a65c19ebdd909053f3e624e8bcc25cee
build type FASTDEBUG
built by None at 29 Jun 2025 01:43:20 UTC on 5fd53c4cbb9d
build id 6821
TSAN enabled
I20250629 01:57:50.176139 19431 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250629 01:57:50.177721 19431 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250629 01:57:50.192317 19440 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:57:50.837603 19427 heartbeater.cc:499] Master 127.17.83.126:33297 was elected leader, sending a full tablet report...
W20250629 01:57:50.192971 19437 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:57:50.192530 19438 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:57:51.325695 19439 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection time-out
I20250629 01:57:51.325803 19431 server_base.cc:1043] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
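The instance_detector warnings above come from probing the AWS, Azure, OpenStack, and GCE metadata endpoints concurrently; when none succeeds (here GCE times out and the others return 404), the server falls back to treating the node as non-cloud. A generic sketch of racing such detectors, using std::async purely for illustration:

#include <functional>
#include <future>
#include <optional>
#include <string>
#include <utility>
#include <vector>

// Illustrative: probe several cloud metadata services concurrently and take the
// first successful answer; if all fail, report a non-cloud environment.
std::optional<std::string> DetectCloudType(
    const std::vector<std::pair<std::string, std::function<bool()>>>& detectors) {
  std::vector<std::pair<std::string, std::future<bool>>> probes;
  probes.reserve(detectors.size());
  for (const auto& [name, probe] : detectors) {
    probes.emplace_back(name, std::async(std::launch::async, probe));
  }
  std::optional<std::string> detected;
  for (auto& [name, fut] : probes) {
    if (fut.get() && !detected) detected = name;  // first success wins
  }
  return detected;  // std::nullopt => "probably running in non-cloud environment"
}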
I20250629 01:57:51.329962 19431 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250629 01:57:51.332191 19431 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250629 01:57:51.333585 19431 hybrid_clock.cc:648] HybridClock initialized: now 1751162271333540 us; error 51 us; skew 500 ppm
I20250629 01:57:51.334336 19431 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250629 01:57:51.343082 19431 webserver.cc:469] Webserver started at http://127.17.83.66:38281/ using document root <none> and password file <none>
I20250629 01:57:51.344007 19431 fs_manager.cc:362] Metadata directory not provided
I20250629 01:57:51.344213 19431 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250629 01:57:51.344645 19431 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250629 01:57:51.348773 19431 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data/instance:
uuid: "5c242c94341d409ba83a81d23b0dbf69"
format_stamp: "Formatted at 2025-06-29 01:57:51 on dist-test-slave-v1mb"
I20250629 01:57:51.349705 19431 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal/instance:
uuid: "5c242c94341d409ba83a81d23b0dbf69"
format_stamp: "Formatted at 2025-06-29 01:57:51 on dist-test-slave-v1mb"
I20250629 01:57:51.356312 19431 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.007s sys 0.001s
I20250629 01:57:51.361138 19447 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250629 01:57:51.361907 19431 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.004s sys 0.001s
I20250629 01:57:51.362191 19431 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "5c242c94341d409ba83a81d23b0dbf69"
format_stamp: "Formatted at 2025-06-29 01:57:51 on dist-test-slave-v1mb"
I20250629 01:57:51.362483 19431 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250629 01:57:51.406546 19431 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250629 01:57:51.407800 19431 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250629 01:57:51.408178 19431 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250629 01:57:51.410290 19431 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250629 01:57:51.413803 19431 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250629 01:57:51.414007 19431 ts_tablet_manager.cc:525] Time spent loading tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250629 01:57:51.414244 19431 ts_tablet_manager.cc:610] Registered 0 tablets
I20250629 01:57:51.414383 19431 ts_tablet_manager.cc:589] Time spent registering tablets: real 0.000s user 0.000s sys 0.000s
I20250629 01:57:51.541783 19431 rpc_server.cc:307] RPC server started. Bound to: 127.17.83.66:33593
I20250629 01:57:51.541911 19559 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.17.83.66:33593 every 8 connection(s)
I20250629 01:57:51.544201 19431 server_base.cc:1180] Dumped server information to /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250629 01:57:51.547919 17741 external_mini_cluster.cc:1413] Started /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu as pid 19431
I20250629 01:57:51.548391 17741 external_mini_cluster.cc:1427] Reading /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal/instance
I20250629 01:57:51.554461 17741 external_mini_cluster.cc:1351] Running /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
/tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.17.83.67:0
--local_ip_for_outbound_sockets=127.17.83.67
--webserver_interface=127.17.83.67
--webserver_port=0
--tserver_master_addrs=127.17.83.126:33297
--builtin_ntp_servers=127.17.83.84:46375
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250629 01:57:51.564067 19560 heartbeater.cc:344] Connected to a master server at 127.17.83.126:33297
I20250629 01:57:51.564432 19560 heartbeater.cc:461] Registering TS with master...
I20250629 01:57:51.565328 19560 heartbeater.cc:507] Master 127.17.83.126:33297 requested a full tablet report, sending...
I20250629 01:57:51.567219 19239 ts_manager.cc:194] Registered new tserver with Master: 5c242c94341d409ba83a81d23b0dbf69 (127.17.83.66:33593)
I20250629 01:57:51.568219 19239 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.17.83.66:50405
W20250629 01:57:51.794183 19270 debug-util.cc:398] Leaking SignalData structure 0x7b08000a3d80 after lost signal to thread 19207
W20250629 01:57:51.843652 19564 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:57:51.844228 19564 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250629 01:57:51.844720 19564 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:57:51.875306 19564 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250629 01:57:51.876101 19564 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.17.83.67
I20250629 01:57:51.907866 19564 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.17.83.84:46375
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.17.83.67:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.17.83.67
--webserver_port=0
--tserver_master_addrs=127.17.83.126:33297
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.17.83.67
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision f0776864a65c19ebdd909053f3e624e8bcc25cee
build type FASTDEBUG
built by None at 29 Jun 2025 01:43:20 UTC on 5fd53c4cbb9d
build id 6821
TSAN enabled
I20250629 01:57:51.909077 19564 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250629 01:57:51.910439 19564 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250629 01:57:51.926632 19570 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:57:52.571063 19560 heartbeater.cc:499] Master 127.17.83.126:33297 was elected leader, sending a full tablet report...
I20250629 01:57:51.931294 19564 server_base.cc:1048] running on GCE node
W20250629 01:57:51.928946 19573 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:57:51.927075 19571 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:57:53.074759 19564 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250629 01:57:53.077487 19564 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250629 01:57:53.078946 19564 hybrid_clock.cc:648] HybridClock initialized: now 1751162273078890 us; error 79 us; skew 500 ppm
I20250629 01:57:53.079700 19564 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250629 01:57:53.086437 19564 webserver.cc:469] Webserver started at http://127.17.83.67:33087/ using document root <none> and password file <none>
I20250629 01:57:53.087412 19564 fs_manager.cc:362] Metadata directory not provided
I20250629 01:57:53.087648 19564 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250629 01:57:53.088128 19564 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250629 01:57:53.092897 19564 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data/instance:
uuid: "467c5e28d78046908817e54f008e9b09"
format_stamp: "Formatted at 2025-06-29 01:57:53 on dist-test-slave-v1mb"
I20250629 01:57:53.093912 19564 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal/instance:
uuid: "467c5e28d78046908817e54f008e9b09"
format_stamp: "Formatted at 2025-06-29 01:57:53 on dist-test-slave-v1mb"
I20250629 01:57:53.100790 19564 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.005s sys 0.003s
I20250629 01:57:53.106374 19580 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250629 01:57:53.107374 19564 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.000s
I20250629 01:57:53.107640 19564 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "467c5e28d78046908817e54f008e9b09"
format_stamp: "Formatted at 2025-06-29 01:57:53 on dist-test-slave-v1mb"
I20250629 01:57:53.107895 19564 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250629 01:57:53.171857 19564 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250629 01:57:53.173309 19564 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250629 01:57:53.173712 19564 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250629 01:57:53.176330 19564 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250629 01:57:53.180635 19564 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250629 01:57:53.180822 19564 ts_tablet_manager.cc:525] Time spent loading tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250629 01:57:53.181067 19564 ts_tablet_manager.cc:610] Registered 0 tablets
I20250629 01:57:53.181228 19564 ts_tablet_manager.cc:589] Time spent registering tablets: real 0.000s user 0.000s sys 0.000s
I20250629 01:57:53.342110 19564 rpc_server.cc:307] RPC server started. Bound to: 127.17.83.67:45325
I20250629 01:57:53.342207 19692 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.17.83.67:45325 every 8 connection(s)
I20250629 01:57:53.344630 19564 server_base.cc:1180] Dumped server information to /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250629 01:57:53.352787 17741 external_mini_cluster.cc:1413] Started /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu as pid 19564
I20250629 01:57:53.353232 17741 external_mini_cluster.cc:1427] Reading /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal/instance
I20250629 01:57:53.365156 19693 heartbeater.cc:344] Connected to a master server at 127.17.83.126:33297
I20250629 01:57:53.365657 19693 heartbeater.cc:461] Registering TS with master...
I20250629 01:57:53.366855 19693 heartbeater.cc:507] Master 127.17.83.126:33297 requested a full tablet report, sending...
I20250629 01:57:53.368932 19239 ts_manager.cc:194] Registered new tserver with Master: 467c5e28d78046908817e54f008e9b09 (127.17.83.67:45325)
I20250629 01:57:53.370041 19239 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.17.83.67:50089
I20250629 01:57:53.373653 17741 external_mini_cluster.cc:934] 3 TS(s) registered with all masters
I20250629 01:57:53.401641 19239 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:45918:
name: "TestTable"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
owner: "alice"
W20250629 01:57:53.444516 19239 catalog_manager.cc:6944] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table TestTable in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250629 01:57:53.488059 19495 tablet_service.cc:1468] Processing CreateTablet for tablet 9c7e14b48f624eb4a536071f3f93db77 (DEFAULT_TABLE table=TestTable [id=9397944c29a04177b45e14afa0f98609]), partition=RANGE (key) PARTITION UNBOUNDED
I20250629 01:57:53.489089 19628 tablet_service.cc:1468] Processing CreateTablet for tablet 9c7e14b48f624eb4a536071f3f93db77 (DEFAULT_TABLE table=TestTable [id=9397944c29a04177b45e14afa0f98609]), partition=RANGE (key) PARTITION UNBOUNDED
I20250629 01:57:53.489244 19362 tablet_service.cc:1468] Processing CreateTablet for tablet 9c7e14b48f624eb4a536071f3f93db77 (DEFAULT_TABLE table=TestTable [id=9397944c29a04177b45e14afa0f98609]), partition=RANGE (key) PARTITION UNBOUNDED
I20250629 01:57:53.490003 19495 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 9c7e14b48f624eb4a536071f3f93db77. 1 dirs total, 0 dirs full, 0 dirs failed
I20250629 01:57:53.490478 19628 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 9c7e14b48f624eb4a536071f3f93db77. 1 dirs total, 0 dirs full, 0 dirs failed
I20250629 01:57:53.490924 19362 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 9c7e14b48f624eb4a536071f3f93db77. 1 dirs total, 0 dirs full, 0 dirs failed
I20250629 01:57:53.509987 19712 tablet_bootstrap.cc:492] T 9c7e14b48f624eb4a536071f3f93db77 P 467c5e28d78046908817e54f008e9b09: Bootstrap starting.
I20250629 01:57:53.515928 19713 tablet_bootstrap.cc:492] T 9c7e14b48f624eb4a536071f3f93db77 P 5c242c94341d409ba83a81d23b0dbf69: Bootstrap starting.
I20250629 01:57:53.516855 19712 tablet_bootstrap.cc:654] T 9c7e14b48f624eb4a536071f3f93db77 P 467c5e28d78046908817e54f008e9b09: Neither blocks nor log segments found. Creating new log.
I20250629 01:57:53.518311 19714 tablet_bootstrap.cc:492] T 9c7e14b48f624eb4a536071f3f93db77 P d18569b65a0543dfa404a5e71d2d92cc: Bootstrap starting.
I20250629 01:57:53.519574 19712 log.cc:826] T 9c7e14b48f624eb4a536071f3f93db77 P 467c5e28d78046908817e54f008e9b09: Log is configured to *not* fsync() on all Append() calls
I20250629 01:57:53.523097 19713 tablet_bootstrap.cc:654] T 9c7e14b48f624eb4a536071f3f93db77 P 5c242c94341d409ba83a81d23b0dbf69: Neither blocks nor log segments found. Creating new log.
I20250629 01:57:53.523969 19714 tablet_bootstrap.cc:654] T 9c7e14b48f624eb4a536071f3f93db77 P d18569b65a0543dfa404a5e71d2d92cc: Neither blocks nor log segments found. Creating new log.
I20250629 01:57:53.525008 19712 tablet_bootstrap.cc:492] T 9c7e14b48f624eb4a536071f3f93db77 P 467c5e28d78046908817e54f008e9b09: No bootstrap required, opened a new log
I20250629 01:57:53.525322 19713 log.cc:826] T 9c7e14b48f624eb4a536071f3f93db77 P 5c242c94341d409ba83a81d23b0dbf69: Log is configured to *not* fsync() on all Append() calls
I20250629 01:57:53.525583 19712 ts_tablet_manager.cc:1397] T 9c7e14b48f624eb4a536071f3f93db77 P 467c5e28d78046908817e54f008e9b09: Time spent bootstrapping tablet: real 0.016s user 0.015s sys 0.000s
I20250629 01:57:53.526247 19714 log.cc:826] T 9c7e14b48f624eb4a536071f3f93db77 P d18569b65a0543dfa404a5e71d2d92cc: Log is configured to *not* fsync() on all Append() calls
I20250629 01:57:53.532176 19713 tablet_bootstrap.cc:492] T 9c7e14b48f624eb4a536071f3f93db77 P 5c242c94341d409ba83a81d23b0dbf69: No bootstrap required, opened a new log
I20250629 01:57:53.532610 19714 tablet_bootstrap.cc:492] T 9c7e14b48f624eb4a536071f3f93db77 P d18569b65a0543dfa404a5e71d2d92cc: No bootstrap required, opened a new log
I20250629 01:57:53.532653 19713 ts_tablet_manager.cc:1397] T 9c7e14b48f624eb4a536071f3f93db77 P 5c242c94341d409ba83a81d23b0dbf69: Time spent bootstrapping tablet: real 0.017s user 0.009s sys 0.007s
I20250629 01:57:53.533035 19714 ts_tablet_manager.cc:1397] T 9c7e14b48f624eb4a536071f3f93db77 P d18569b65a0543dfa404a5e71d2d92cc: Time spent bootstrapping tablet: real 0.015s user 0.013s sys 0.000s
I20250629 01:57:53.545321 19712 raft_consensus.cc:357] T 9c7e14b48f624eb4a536071f3f93db77 P 467c5e28d78046908817e54f008e9b09 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "467c5e28d78046908817e54f008e9b09" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 45325 } } peers { permanent_uuid: "d18569b65a0543dfa404a5e71d2d92cc" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 39751 } } peers { permanent_uuid: "5c242c94341d409ba83a81d23b0dbf69" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 33593 } }
I20250629 01:57:53.546279 19712 raft_consensus.cc:383] T 9c7e14b48f624eb4a536071f3f93db77 P 467c5e28d78046908817e54f008e9b09 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250629 01:57:53.546627 19712 raft_consensus.cc:738] T 9c7e14b48f624eb4a536071f3f93db77 P 467c5e28d78046908817e54f008e9b09 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 467c5e28d78046908817e54f008e9b09, State: Initialized, Role: FOLLOWER
I20250629 01:57:53.547691 19712 consensus_queue.cc:260] T 9c7e14b48f624eb4a536071f3f93db77 P 467c5e28d78046908817e54f008e9b09 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "467c5e28d78046908817e54f008e9b09" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 45325 } } peers { permanent_uuid: "d18569b65a0543dfa404a5e71d2d92cc" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 39751 } } peers { permanent_uuid: "5c242c94341d409ba83a81d23b0dbf69" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 33593 } }
I20250629 01:57:53.556397 19713 raft_consensus.cc:357] T 9c7e14b48f624eb4a536071f3f93db77 P 5c242c94341d409ba83a81d23b0dbf69 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "467c5e28d78046908817e54f008e9b09" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 45325 } } peers { permanent_uuid: "d18569b65a0543dfa404a5e71d2d92cc" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 39751 } } peers { permanent_uuid: "5c242c94341d409ba83a81d23b0dbf69" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 33593 } }
I20250629 01:57:53.557310 19713 raft_consensus.cc:383] T 9c7e14b48f624eb4a536071f3f93db77 P 5c242c94341d409ba83a81d23b0dbf69 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250629 01:57:53.557658 19713 raft_consensus.cc:738] T 9c7e14b48f624eb4a536071f3f93db77 P 5c242c94341d409ba83a81d23b0dbf69 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 5c242c94341d409ba83a81d23b0dbf69, State: Initialized, Role: FOLLOWER
I20250629 01:57:53.558615 19713 consensus_queue.cc:260] T 9c7e14b48f624eb4a536071f3f93db77 P 5c242c94341d409ba83a81d23b0dbf69 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "467c5e28d78046908817e54f008e9b09" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 45325 } } peers { permanent_uuid: "d18569b65a0543dfa404a5e71d2d92cc" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 39751 } } peers { permanent_uuid: "5c242c94341d409ba83a81d23b0dbf69" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 33593 } }
I20250629 01:57:53.560281 19693 heartbeater.cc:499] Master 127.17.83.126:33297 was elected leader, sending a full tablet report...
I20250629 01:57:53.561390 19712 ts_tablet_manager.cc:1428] T 9c7e14b48f624eb4a536071f3f93db77 P 467c5e28d78046908817e54f008e9b09: Time spent starting tablet: real 0.035s user 0.014s sys 0.019s
I20250629 01:57:53.562631 19714 raft_consensus.cc:357] T 9c7e14b48f624eb4a536071f3f93db77 P d18569b65a0543dfa404a5e71d2d92cc [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "467c5e28d78046908817e54f008e9b09" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 45325 } } peers { permanent_uuid: "d18569b65a0543dfa404a5e71d2d92cc" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 39751 } } peers { permanent_uuid: "5c242c94341d409ba83a81d23b0dbf69" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 33593 } }
I20250629 01:57:53.563474 19714 raft_consensus.cc:383] T 9c7e14b48f624eb4a536071f3f93db77 P d18569b65a0543dfa404a5e71d2d92cc [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250629 01:57:53.563773 19714 raft_consensus.cc:738] T 9c7e14b48f624eb4a536071f3f93db77 P d18569b65a0543dfa404a5e71d2d92cc [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: d18569b65a0543dfa404a5e71d2d92cc, State: Initialized, Role: FOLLOWER
I20250629 01:57:53.564656 19714 consensus_queue.cc:260] T 9c7e14b48f624eb4a536071f3f93db77 P d18569b65a0543dfa404a5e71d2d92cc [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "467c5e28d78046908817e54f008e9b09" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 45325 } } peers { permanent_uuid: "d18569b65a0543dfa404a5e71d2d92cc" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 39751 } } peers { permanent_uuid: "5c242c94341d409ba83a81d23b0dbf69" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 33593 } }
I20250629 01:57:53.581436 19720 raft_consensus.cc:491] T 9c7e14b48f624eb4a536071f3f93db77 P 5c242c94341d409ba83a81d23b0dbf69 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250629 01:57:53.584059 19714 ts_tablet_manager.cc:1428] T 9c7e14b48f624eb4a536071f3f93db77 P d18569b65a0543dfa404a5e71d2d92cc: Time spent starting tablet: real 0.051s user 0.038s sys 0.008s
I20250629 01:57:53.583885 19720 raft_consensus.cc:513] T 9c7e14b48f624eb4a536071f3f93db77 P 5c242c94341d409ba83a81d23b0dbf69 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "467c5e28d78046908817e54f008e9b09" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 45325 } } peers { permanent_uuid: "d18569b65a0543dfa404a5e71d2d92cc" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 39751 } } peers { permanent_uuid: "5c242c94341d409ba83a81d23b0dbf69" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 33593 } }
I20250629 01:57:53.583333 19713 ts_tablet_manager.cc:1428] T 9c7e14b48f624eb4a536071f3f93db77 P 5c242c94341d409ba83a81d23b0dbf69: Time spent starting tablet: real 0.050s user 0.029s sys 0.008s
I20250629 01:57:53.596827 19720 leader_election.cc:290] T 9c7e14b48f624eb4a536071f3f93db77 P 5c242c94341d409ba83a81d23b0dbf69 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 467c5e28d78046908817e54f008e9b09 (127.17.83.67:45325), d18569b65a0543dfa404a5e71d2d92cc (127.17.83.65:39751)
W20250629 01:57:53.599359 19694 tablet.cc:2378] T 9c7e14b48f624eb4a536071f3f93db77 P 467c5e28d78046908817e54f008e9b09: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250629 01:57:53.606158 19648 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "9c7e14b48f624eb4a536071f3f93db77" candidate_uuid: "5c242c94341d409ba83a81d23b0dbf69" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "467c5e28d78046908817e54f008e9b09" is_pre_election: true
I20250629 01:57:53.606796 19648 raft_consensus.cc:2466] T 9c7e14b48f624eb4a536071f3f93db77 P 467c5e28d78046908817e54f008e9b09 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 5c242c94341d409ba83a81d23b0dbf69 in term 0.
I20250629 01:57:53.607755 19451 leader_election.cc:304] T 9c7e14b48f624eb4a536071f3f93db77 P 5c242c94341d409ba83a81d23b0dbf69 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 467c5e28d78046908817e54f008e9b09, 5c242c94341d409ba83a81d23b0dbf69; no voters:
I20250629 01:57:53.608353 19720 raft_consensus.cc:2802] T 9c7e14b48f624eb4a536071f3f93db77 P 5c242c94341d409ba83a81d23b0dbf69 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250629 01:57:53.608644 19720 raft_consensus.cc:491] T 9c7e14b48f624eb4a536071f3f93db77 P 5c242c94341d409ba83a81d23b0dbf69 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250629 01:57:53.608884 19720 raft_consensus.cc:3058] T 9c7e14b48f624eb4a536071f3f93db77 P 5c242c94341d409ba83a81d23b0dbf69 [term 0 FOLLOWER]: Advancing to term 1
I20250629 01:57:53.608951 19382 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "9c7e14b48f624eb4a536071f3f93db77" candidate_uuid: "5c242c94341d409ba83a81d23b0dbf69" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "d18569b65a0543dfa404a5e71d2d92cc" is_pre_election: true
I20250629 01:57:53.609521 19382 raft_consensus.cc:2466] T 9c7e14b48f624eb4a536071f3f93db77 P d18569b65a0543dfa404a5e71d2d92cc [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 5c242c94341d409ba83a81d23b0dbf69 in term 0.
I20250629 01:57:53.613224 19720 raft_consensus.cc:513] T 9c7e14b48f624eb4a536071f3f93db77 P 5c242c94341d409ba83a81d23b0dbf69 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "467c5e28d78046908817e54f008e9b09" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 45325 } } peers { permanent_uuid: "d18569b65a0543dfa404a5e71d2d92cc" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 39751 } } peers { permanent_uuid: "5c242c94341d409ba83a81d23b0dbf69" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 33593 } }
I20250629 01:57:53.614533 19720 leader_election.cc:290] T 9c7e14b48f624eb4a536071f3f93db77 P 5c242c94341d409ba83a81d23b0dbf69 [CANDIDATE]: Term 1 election: Requested vote from peers 467c5e28d78046908817e54f008e9b09 (127.17.83.67:45325), d18569b65a0543dfa404a5e71d2d92cc (127.17.83.65:39751)
I20250629 01:57:53.615154 19648 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "9c7e14b48f624eb4a536071f3f93db77" candidate_uuid: "5c242c94341d409ba83a81d23b0dbf69" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "467c5e28d78046908817e54f008e9b09"
I20250629 01:57:53.615401 19382 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "9c7e14b48f624eb4a536071f3f93db77" candidate_uuid: "5c242c94341d409ba83a81d23b0dbf69" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "d18569b65a0543dfa404a5e71d2d92cc"
I20250629 01:57:53.615603 19648 raft_consensus.cc:3058] T 9c7e14b48f624eb4a536071f3f93db77 P 467c5e28d78046908817e54f008e9b09 [term 0 FOLLOWER]: Advancing to term 1
I20250629 01:57:53.615864 19382 raft_consensus.cc:3058] T 9c7e14b48f624eb4a536071f3f93db77 P d18569b65a0543dfa404a5e71d2d92cc [term 0 FOLLOWER]: Advancing to term 1
I20250629 01:57:53.620131 19382 raft_consensus.cc:2466] T 9c7e14b48f624eb4a536071f3f93db77 P d18569b65a0543dfa404a5e71d2d92cc [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 5c242c94341d409ba83a81d23b0dbf69 in term 1.
I20250629 01:57:53.620289 19648 raft_consensus.cc:2466] T 9c7e14b48f624eb4a536071f3f93db77 P 467c5e28d78046908817e54f008e9b09 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 5c242c94341d409ba83a81d23b0dbf69 in term 1.
I20250629 01:57:53.621089 19450 leader_election.cc:304] T 9c7e14b48f624eb4a536071f3f93db77 P 5c242c94341d409ba83a81d23b0dbf69 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 5c242c94341d409ba83a81d23b0dbf69, d18569b65a0543dfa404a5e71d2d92cc; no voters:
I20250629 01:57:53.621701 19720 raft_consensus.cc:2802] T 9c7e14b48f624eb4a536071f3f93db77 P 5c242c94341d409ba83a81d23b0dbf69 [term 1 FOLLOWER]: Leader election won for term 1
I20250629 01:57:53.622061 19720 raft_consensus.cc:695] T 9c7e14b48f624eb4a536071f3f93db77 P 5c242c94341d409ba83a81d23b0dbf69 [term 1 LEADER]: Becoming Leader. State: Replica: 5c242c94341d409ba83a81d23b0dbf69, State: Running, Role: LEADER
I20250629 01:57:53.622846 19720 consensus_queue.cc:237] T 9c7e14b48f624eb4a536071f3f93db77 P 5c242c94341d409ba83a81d23b0dbf69 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "467c5e28d78046908817e54f008e9b09" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 45325 } } peers { permanent_uuid: "d18569b65a0543dfa404a5e71d2d92cc" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 39751 } } peers { permanent_uuid: "5c242c94341d409ba83a81d23b0dbf69" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 33593 } }
I20250629 01:57:53.629850 19239 catalog_manager.cc:5582] T 9c7e14b48f624eb4a536071f3f93db77 P 5c242c94341d409ba83a81d23b0dbf69 reported cstate change: term changed from 0 to 1, leader changed from <none> to 5c242c94341d409ba83a81d23b0dbf69 (127.17.83.66). New cstate: current_term: 1 leader_uuid: "5c242c94341d409ba83a81d23b0dbf69" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "467c5e28d78046908817e54f008e9b09" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 45325 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "d18569b65a0543dfa404a5e71d2d92cc" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 39751 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "5c242c94341d409ba83a81d23b0dbf69" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 33593 } health_report { overall_health: HEALTHY } } }
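
[Editor's note] The election summaries above ("received 2 responses out of 3 voters: 2 yes votes ... candidate won") follow ordinary Raft majority counting: the candidate votes for itself, so a single granted peer vote already forms a majority of 2 out of 3. A minimal illustrative sketch of that decision rule (not Kudu's actual leader_election.cc logic):

    #include <iostream>

    // Illustrative only: a candidate wins once yes votes (its own included)
    // reach a strict majority of the voters in the active Raft config.
    bool ElectionWon(int num_voters, int yes_votes) {
      const int majority = num_voters / 2 + 1;  // 3 voters -> majority of 2
      return yes_votes >= majority;
    }

    int main() {
      // Matches the summary logged above: 3 voters, the candidate's own vote
      // plus one granted peer vote.
      std::cout << std::boolalpha << ElectionWon(3, 2) << "\n";  // prints: true
      return 0;
    }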
I20250629 01:57:53.679023 17741 external_mini_cluster.cc:934] 3 TS(s) registered with all masters
I20250629 01:57:53.681962 17741 ts_itest-base.cc:246] Waiting for 1 tablets on tserver d18569b65a0543dfa404a5e71d2d92cc to finish bootstrapping
I20250629 01:57:53.694386 17741 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 5c242c94341d409ba83a81d23b0dbf69 to finish bootstrapping
I20250629 01:57:53.703718 17741 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 467c5e28d78046908817e54f008e9b09 to finish bootstrapping
I20250629 01:57:53.715574 19239 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:45918:
name: "TestAnotherTable"
schema {
columns {
name: "foo"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "bar"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
comment: "comment for bar"
immutable: false
}
}
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "foo"
}
}
}
W20250629 01:57:53.717049 19239 catalog_manager.cc:6944] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table TestAnotherTable in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
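
[Editor's note] The warning above (repeated for each table created on this 3-tserver mini cluster) reflects a simple sizing rule: to re-replicate a tablet after losing one tablet server, the cluster needs at least one live server beyond the replication factor. A hedged sketch of that arithmetic (illustrative, not the catalog manager's actual check):

    // Illustrative only: with num_replicas = 3, re-replication after a single
    // tablet server failure needs 3 + 1 = 4 live tablet servers; this test
    // cluster has 3, hence the warning.
    int MinLiveTabletServersForReReplication(int num_replicas) {
      return num_replicas + 1;
    }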
I20250629 01:57:53.731828 19362 tablet_service.cc:1468] Processing CreateTablet for tablet 417a6be1c51c424aaa771b1331350fcf (DEFAULT_TABLE table=TestAnotherTable [id=b7f2711ff34a489688b016f653c088c5]), partition=RANGE (foo) PARTITION UNBOUNDED
I20250629 01:57:53.732424 19495 tablet_service.cc:1468] Processing CreateTablet for tablet 417a6be1c51c424aaa771b1331350fcf (DEFAULT_TABLE table=TestAnotherTable [id=b7f2711ff34a489688b016f653c088c5]), partition=RANGE (foo) PARTITION UNBOUNDED
I20250629 01:57:53.732721 19362 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 417a6be1c51c424aaa771b1331350fcf. 1 dirs total, 0 dirs full, 0 dirs failed
I20250629 01:57:53.733274 19495 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 417a6be1c51c424aaa771b1331350fcf. 1 dirs total, 0 dirs full, 0 dirs failed
I20250629 01:57:53.734167 19628 tablet_service.cc:1468] Processing CreateTablet for tablet 417a6be1c51c424aaa771b1331350fcf (DEFAULT_TABLE table=TestAnotherTable [id=b7f2711ff34a489688b016f653c088c5]), partition=RANGE (foo) PARTITION UNBOUNDED
I20250629 01:57:53.735167 19628 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 417a6be1c51c424aaa771b1331350fcf. 1 dirs total, 0 dirs full, 0 dirs failed
I20250629 01:57:53.743409 19714 tablet_bootstrap.cc:492] T 417a6be1c51c424aaa771b1331350fcf P d18569b65a0543dfa404a5e71d2d92cc: Bootstrap starting.
I20250629 01:57:53.746812 19713 tablet_bootstrap.cc:492] T 417a6be1c51c424aaa771b1331350fcf P 5c242c94341d409ba83a81d23b0dbf69: Bootstrap starting.
I20250629 01:57:53.748507 19714 tablet_bootstrap.cc:654] T 417a6be1c51c424aaa771b1331350fcf P d18569b65a0543dfa404a5e71d2d92cc: Neither blocks nor log segments found. Creating new log.
I20250629 01:57:53.748950 19712 tablet_bootstrap.cc:492] T 417a6be1c51c424aaa771b1331350fcf P 467c5e28d78046908817e54f008e9b09: Bootstrap starting.
I20250629 01:57:53.751791 19713 tablet_bootstrap.cc:654] T 417a6be1c51c424aaa771b1331350fcf P 5c242c94341d409ba83a81d23b0dbf69: Neither blocks nor log segments found. Creating new log.
I20250629 01:57:53.755120 19712 tablet_bootstrap.cc:654] T 417a6be1c51c424aaa771b1331350fcf P 467c5e28d78046908817e54f008e9b09: Neither blocks nor log segments found. Creating new log.
I20250629 01:57:53.756520 19714 tablet_bootstrap.cc:492] T 417a6be1c51c424aaa771b1331350fcf P d18569b65a0543dfa404a5e71d2d92cc: No bootstrap required, opened a new log
I20250629 01:57:53.756925 19714 ts_tablet_manager.cc:1397] T 417a6be1c51c424aaa771b1331350fcf P d18569b65a0543dfa404a5e71d2d92cc: Time spent bootstrapping tablet: real 0.014s user 0.009s sys 0.003s
I20250629 01:57:53.758988 19713 tablet_bootstrap.cc:492] T 417a6be1c51c424aaa771b1331350fcf P 5c242c94341d409ba83a81d23b0dbf69: No bootstrap required, opened a new log
I20250629 01:57:53.759435 19713 ts_tablet_manager.cc:1397] T 417a6be1c51c424aaa771b1331350fcf P 5c242c94341d409ba83a81d23b0dbf69: Time spent bootstrapping tablet: real 0.013s user 0.010s sys 0.002s
I20250629 01:57:53.759853 19714 raft_consensus.cc:357] T 417a6be1c51c424aaa771b1331350fcf P d18569b65a0543dfa404a5e71d2d92cc [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d18569b65a0543dfa404a5e71d2d92cc" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 39751 } } peers { permanent_uuid: "5c242c94341d409ba83a81d23b0dbf69" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 33593 } } peers { permanent_uuid: "467c5e28d78046908817e54f008e9b09" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 45325 } }
I20250629 01:57:53.760700 19714 raft_consensus.cc:383] T 417a6be1c51c424aaa771b1331350fcf P d18569b65a0543dfa404a5e71d2d92cc [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250629 01:57:53.760887 19714 raft_consensus.cc:738] T 417a6be1c51c424aaa771b1331350fcf P d18569b65a0543dfa404a5e71d2d92cc [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: d18569b65a0543dfa404a5e71d2d92cc, State: Initialized, Role: FOLLOWER
I20250629 01:57:53.761693 19713 raft_consensus.cc:357] T 417a6be1c51c424aaa771b1331350fcf P 5c242c94341d409ba83a81d23b0dbf69 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d18569b65a0543dfa404a5e71d2d92cc" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 39751 } } peers { permanent_uuid: "5c242c94341d409ba83a81d23b0dbf69" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 33593 } } peers { permanent_uuid: "467c5e28d78046908817e54f008e9b09" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 45325 } }
I20250629 01:57:53.761510 19714 consensus_queue.cc:260] T 417a6be1c51c424aaa771b1331350fcf P d18569b65a0543dfa404a5e71d2d92cc [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d18569b65a0543dfa404a5e71d2d92cc" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 39751 } } peers { permanent_uuid: "5c242c94341d409ba83a81d23b0dbf69" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 33593 } } peers { permanent_uuid: "467c5e28d78046908817e54f008e9b09" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 45325 } }
I20250629 01:57:53.762118 19713 raft_consensus.cc:383] T 417a6be1c51c424aaa771b1331350fcf P 5c242c94341d409ba83a81d23b0dbf69 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250629 01:57:53.762375 19712 tablet_bootstrap.cc:492] T 417a6be1c51c424aaa771b1331350fcf P 467c5e28d78046908817e54f008e9b09: No bootstrap required, opened a new log
I20250629 01:57:53.762369 19713 raft_consensus.cc:738] T 417a6be1c51c424aaa771b1331350fcf P 5c242c94341d409ba83a81d23b0dbf69 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 5c242c94341d409ba83a81d23b0dbf69, State: Initialized, Role: FOLLOWER
I20250629 01:57:53.762768 19712 ts_tablet_manager.cc:1397] T 417a6be1c51c424aaa771b1331350fcf P 467c5e28d78046908817e54f008e9b09: Time spent bootstrapping tablet: real 0.014s user 0.009s sys 0.005s
I20250629 01:57:53.763473 19714 ts_tablet_manager.cc:1428] T 417a6be1c51c424aaa771b1331350fcf P d18569b65a0543dfa404a5e71d2d92cc: Time spent starting tablet: real 0.006s user 0.006s sys 0.000s
I20250629 01:57:53.763078 19713 consensus_queue.cc:260] T 417a6be1c51c424aaa771b1331350fcf P 5c242c94341d409ba83a81d23b0dbf69 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d18569b65a0543dfa404a5e71d2d92cc" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 39751 } } peers { permanent_uuid: "5c242c94341d409ba83a81d23b0dbf69" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 33593 } } peers { permanent_uuid: "467c5e28d78046908817e54f008e9b09" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 45325 } }
I20250629 01:57:53.765277 19713 ts_tablet_manager.cc:1428] T 417a6be1c51c424aaa771b1331350fcf P 5c242c94341d409ba83a81d23b0dbf69: Time spent starting tablet: real 0.006s user 0.004s sys 0.000s
I20250629 01:57:53.765033 19712 raft_consensus.cc:357] T 417a6be1c51c424aaa771b1331350fcf P 467c5e28d78046908817e54f008e9b09 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d18569b65a0543dfa404a5e71d2d92cc" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 39751 } } peers { permanent_uuid: "5c242c94341d409ba83a81d23b0dbf69" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 33593 } } peers { permanent_uuid: "467c5e28d78046908817e54f008e9b09" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 45325 } }
I20250629 01:57:53.765715 19712 raft_consensus.cc:383] T 417a6be1c51c424aaa771b1331350fcf P 467c5e28d78046908817e54f008e9b09 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250629 01:57:53.766026 19712 raft_consensus.cc:738] T 417a6be1c51c424aaa771b1331350fcf P 467c5e28d78046908817e54f008e9b09 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 467c5e28d78046908817e54f008e9b09, State: Initialized, Role: FOLLOWER
I20250629 01:57:53.766685 19712 consensus_queue.cc:260] T 417a6be1c51c424aaa771b1331350fcf P 467c5e28d78046908817e54f008e9b09 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d18569b65a0543dfa404a5e71d2d92cc" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 39751 } } peers { permanent_uuid: "5c242c94341d409ba83a81d23b0dbf69" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 33593 } } peers { permanent_uuid: "467c5e28d78046908817e54f008e9b09" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 45325 } }
I20250629 01:57:53.768705 19712 ts_tablet_manager.cc:1428] T 417a6be1c51c424aaa771b1331350fcf P 467c5e28d78046908817e54f008e9b09: Time spent starting tablet: real 0.006s user 0.002s sys 0.003s
I20250629 01:57:53.789208 19720 raft_consensus.cc:491] T 417a6be1c51c424aaa771b1331350fcf P 5c242c94341d409ba83a81d23b0dbf69 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250629 01:57:53.789597 19720 raft_consensus.cc:513] T 417a6be1c51c424aaa771b1331350fcf P 5c242c94341d409ba83a81d23b0dbf69 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d18569b65a0543dfa404a5e71d2d92cc" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 39751 } } peers { permanent_uuid: "5c242c94341d409ba83a81d23b0dbf69" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 33593 } } peers { permanent_uuid: "467c5e28d78046908817e54f008e9b09" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 45325 } }
I20250629 01:57:53.791110 19720 leader_election.cc:290] T 417a6be1c51c424aaa771b1331350fcf P 5c242c94341d409ba83a81d23b0dbf69 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers d18569b65a0543dfa404a5e71d2d92cc (127.17.83.65:39751), 467c5e28d78046908817e54f008e9b09 (127.17.83.67:45325)
I20250629 01:57:53.791930 19382 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "417a6be1c51c424aaa771b1331350fcf" candidate_uuid: "5c242c94341d409ba83a81d23b0dbf69" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "d18569b65a0543dfa404a5e71d2d92cc" is_pre_election: true
I20250629 01:57:53.792129 19648 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "417a6be1c51c424aaa771b1331350fcf" candidate_uuid: "5c242c94341d409ba83a81d23b0dbf69" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "467c5e28d78046908817e54f008e9b09" is_pre_election: true
I20250629 01:57:53.792580 19382 raft_consensus.cc:2466] T 417a6be1c51c424aaa771b1331350fcf P d18569b65a0543dfa404a5e71d2d92cc [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 5c242c94341d409ba83a81d23b0dbf69 in term 0.
I20250629 01:57:53.792688 19648 raft_consensus.cc:2466] T 417a6be1c51c424aaa771b1331350fcf P 467c5e28d78046908817e54f008e9b09 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 5c242c94341d409ba83a81d23b0dbf69 in term 0.
I20250629 01:57:53.793491 19450 leader_election.cc:304] T 417a6be1c51c424aaa771b1331350fcf P 5c242c94341d409ba83a81d23b0dbf69 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 5c242c94341d409ba83a81d23b0dbf69, d18569b65a0543dfa404a5e71d2d92cc; no voters:
I20250629 01:57:53.794086 19720 raft_consensus.cc:2802] T 417a6be1c51c424aaa771b1331350fcf P 5c242c94341d409ba83a81d23b0dbf69 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250629 01:57:53.794359 19720 raft_consensus.cc:491] T 417a6be1c51c424aaa771b1331350fcf P 5c242c94341d409ba83a81d23b0dbf69 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250629 01:57:53.794608 19720 raft_consensus.cc:3058] T 417a6be1c51c424aaa771b1331350fcf P 5c242c94341d409ba83a81d23b0dbf69 [term 0 FOLLOWER]: Advancing to term 1
I20250629 01:57:53.798519 19720 raft_consensus.cc:513] T 417a6be1c51c424aaa771b1331350fcf P 5c242c94341d409ba83a81d23b0dbf69 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d18569b65a0543dfa404a5e71d2d92cc" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 39751 } } peers { permanent_uuid: "5c242c94341d409ba83a81d23b0dbf69" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 33593 } } peers { permanent_uuid: "467c5e28d78046908817e54f008e9b09" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 45325 } }
I20250629 01:57:53.799921 19720 leader_election.cc:290] T 417a6be1c51c424aaa771b1331350fcf P 5c242c94341d409ba83a81d23b0dbf69 [CANDIDATE]: Term 1 election: Requested vote from peers d18569b65a0543dfa404a5e71d2d92cc (127.17.83.65:39751), 467c5e28d78046908817e54f008e9b09 (127.17.83.67:45325)
W20250629 01:57:53.800519 19561 tablet.cc:2378] T 417a6be1c51c424aaa771b1331350fcf P 5c242c94341d409ba83a81d23b0dbf69: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250629 01:57:53.800750 19382 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "417a6be1c51c424aaa771b1331350fcf" candidate_uuid: "5c242c94341d409ba83a81d23b0dbf69" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "d18569b65a0543dfa404a5e71d2d92cc"
I20250629 01:57:53.800964 19648 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "417a6be1c51c424aaa771b1331350fcf" candidate_uuid: "5c242c94341d409ba83a81d23b0dbf69" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "467c5e28d78046908817e54f008e9b09"
I20250629 01:57:53.801247 19382 raft_consensus.cc:3058] T 417a6be1c51c424aaa771b1331350fcf P d18569b65a0543dfa404a5e71d2d92cc [term 0 FOLLOWER]: Advancing to term 1
I20250629 01:57:53.801452 19648 raft_consensus.cc:3058] T 417a6be1c51c424aaa771b1331350fcf P 467c5e28d78046908817e54f008e9b09 [term 0 FOLLOWER]: Advancing to term 1
I20250629 01:57:53.807318 19382 raft_consensus.cc:2466] T 417a6be1c51c424aaa771b1331350fcf P d18569b65a0543dfa404a5e71d2d92cc [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 5c242c94341d409ba83a81d23b0dbf69 in term 1.
I20250629 01:57:53.807318 19648 raft_consensus.cc:2466] T 417a6be1c51c424aaa771b1331350fcf P 467c5e28d78046908817e54f008e9b09 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 5c242c94341d409ba83a81d23b0dbf69 in term 1.
I20250629 01:57:53.808092 19450 leader_election.cc:304] T 417a6be1c51c424aaa771b1331350fcf P 5c242c94341d409ba83a81d23b0dbf69 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 5c242c94341d409ba83a81d23b0dbf69, d18569b65a0543dfa404a5e71d2d92cc; no voters:
I20250629 01:57:53.808854 19720 raft_consensus.cc:2802] T 417a6be1c51c424aaa771b1331350fcf P 5c242c94341d409ba83a81d23b0dbf69 [term 1 FOLLOWER]: Leader election won for term 1
I20250629 01:57:53.809224 19720 raft_consensus.cc:695] T 417a6be1c51c424aaa771b1331350fcf P 5c242c94341d409ba83a81d23b0dbf69 [term 1 LEADER]: Becoming Leader. State: Replica: 5c242c94341d409ba83a81d23b0dbf69, State: Running, Role: LEADER
I20250629 01:57:53.809783 19720 consensus_queue.cc:237] T 417a6be1c51c424aaa771b1331350fcf P 5c242c94341d409ba83a81d23b0dbf69 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d18569b65a0543dfa404a5e71d2d92cc" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 39751 } } peers { permanent_uuid: "5c242c94341d409ba83a81d23b0dbf69" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 33593 } } peers { permanent_uuid: "467c5e28d78046908817e54f008e9b09" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 45325 } }
W20250629 01:57:53.815357 19428 tablet.cc:2378] T 417a6be1c51c424aaa771b1331350fcf P d18569b65a0543dfa404a5e71d2d92cc: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250629 01:57:53.816031 19239 catalog_manager.cc:5582] T 417a6be1c51c424aaa771b1331350fcf P 5c242c94341d409ba83a81d23b0dbf69 reported cstate change: term changed from 0 to 1, leader changed from <none> to 5c242c94341d409ba83a81d23b0dbf69 (127.17.83.66). New cstate: current_term: 1 leader_uuid: "5c242c94341d409ba83a81d23b0dbf69" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d18569b65a0543dfa404a5e71d2d92cc" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 39751 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "5c242c94341d409ba83a81d23b0dbf69" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 33593 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "467c5e28d78046908817e54f008e9b09" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 45325 } health_report { overall_health: UNKNOWN } } }
W20250629 01:57:54.109745 19730 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:57:54.110248 19730 flags.cc:425] Enabled unsafe flag: --never_fsync=true
I20250629 01:57:54.121974 19720 consensus_queue.cc:1035] T 9c7e14b48f624eb4a536071f3f93db77 P 5c242c94341d409ba83a81d23b0dbf69 [LEADER]: Connected to new peer: Peer: permanent_uuid: "d18569b65a0543dfa404a5e71d2d92cc" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 39751 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250629 01:57:54.137373 19720 consensus_queue.cc:1035] T 9c7e14b48f624eb4a536071f3f93db77 P 5c242c94341d409ba83a81d23b0dbf69 [LEADER]: Connected to new peer: Peer: permanent_uuid: "467c5e28d78046908817e54f008e9b09" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 45325 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
W20250629 01:57:54.150182 19730 flags.cc:425] Enabled experimental flag: --enable_multi_tenancy=false
I20250629 01:57:54.229362 19720 consensus_queue.cc:1035] T 417a6be1c51c424aaa771b1331350fcf P 5c242c94341d409ba83a81d23b0dbf69 [LEADER]: Connected to new peer: Peer: permanent_uuid: "467c5e28d78046908817e54f008e9b09" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 45325 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.001s
I20250629 01:57:54.244820 19719 consensus_queue.cc:1035] T 417a6be1c51c424aaa771b1331350fcf P 5c242c94341d409ba83a81d23b0dbf69 [LEADER]: Connected to new peer: Peer: permanent_uuid: "d18569b65a0543dfa404a5e71d2d92cc" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 39751 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.001s
W20250629 01:57:54.417629 19270 debug-util.cc:398] Leaking SignalData structure 0x7b0800088b00 after lost signal to thread 19207
W20250629 01:57:54.418429 19270 debug-util.cc:398] Leaking SignalData structure 0x7b0800088bc0 after lost signal to thread 19273
W20250629 01:57:55.444701 19730 thread.cc:640] rpc reactor (reactor) Time spent creating pthread: real 1.256s user 0.460s sys 0.792s
W20250629 01:57:55.445111 19730 thread.cc:606] rpc reactor (reactor) Time spent starting thread: real 1.256s user 0.460s sys 0.792s
W20250629 01:57:56.800956 19751 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:57:56.801463 19751 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:57:56.831969 19751 flags.cc:425] Enabled experimental flag: --enable_multi_tenancy=false
W20250629 01:57:57.911104 19556 debug-util.cc:398] Leaking SignalData structure 0x7b08000cb900 after lost signal to thread 19432
W20250629 01:57:57.912135 19556 debug-util.cc:398] Leaking SignalData structure 0x7b08000c6c60 after lost signal to thread 19559
W20250629 01:57:58.213770 19751 thread.cc:640] rpc reactor (reactor) Time spent creating pthread: real 1.335s user 0.000s sys 0.003s
W20250629 01:57:58.214059 19751 thread.cc:606] rpc reactor (reactor) Time spent starting thread: real 1.335s user 0.000s sys 0.003s
W20250629 01:57:59.588487 19767 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:57:59.589005 19767 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:57:59.617980 19767 flags.cc:425] Enabled experimental flag: --enable_multi_tenancy=false
W20250629 01:58:00.919203 19767 thread.cc:640] rpc reactor (reactor) Time spent creating pthread: real 1.260s user 0.433s sys 0.812s
W20250629 01:58:00.919529 19767 thread.cc:606] rpc reactor (reactor) Time spent starting thread: real 1.260s user 0.436s sys 0.812s
W20250629 01:58:02.286407 19784 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:58:02.286947 19784 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:58:02.319998 19784 flags.cc:425] Enabled experimental flag: --enable_multi_tenancy=false
W20250629 01:58:03.556624 19784 thread.cc:640] rpc reactor (reactor) Time spent creating pthread: real 1.195s user 0.491s sys 0.703s
W20250629 01:58:03.556916 19784 thread.cc:606] rpc reactor (reactor) Time spent starting thread: real 1.195s user 0.491s sys 0.703s
I20250629 01:58:04.628721 17741 external_mini_cluster.cc:1620] Killing /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu with pid 19298
I20250629 01:58:04.653429 17741 external_mini_cluster.cc:1620] Killing /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu with pid 19431
I20250629 01:58:04.680004 17741 external_mini_cluster.cc:1620] Killing /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu with pid 19564
I20250629 01:58:04.709321 17741 external_mini_cluster.cc:1620] Killing /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu with pid 19206
2025-06-29T01:58:04Z chronyd exiting
[ OK ] AdminCliTest.TestDescribeTableColumnFlags (18518 ms)
[ RUN ] AdminCliTest.TestAuthzResetCacheNotAuthorized
I20250629 01:58:04.765267 17741 test_util.cc:276] Using random seed: 1038970934
I20250629 01:58:04.769281 17741 ts_itest-base.cc:115] Starting cluster with:
I20250629 01:58:04.769420 17741 ts_itest-base.cc:116] --------------
I20250629 01:58:04.769529 17741 ts_itest-base.cc:117] 3 tablet servers
I20250629 01:58:04.769640 17741 ts_itest-base.cc:118] 3 replicas per TS
I20250629 01:58:04.769748 17741 ts_itest-base.cc:119] --------------
2025-06-29T01:58:04Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-06-29T01:58:04Z Disabled control of system clock
I20250629 01:58:04.802314 17741 external_mini_cluster.cc:1351] Running /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
/tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.17.83.126:36043
--webserver_interface=127.17.83.126
--webserver_port=0
--builtin_ntp_servers=127.17.83.84:46063
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.17.83.126:36043
--superuser_acl=no-such-user with env {}
W20250629 01:58:05.069690 19806 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:58:05.070238 19806 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250629 01:58:05.070717 19806 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:58:05.100395 19806 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250629 01:58:05.100709 19806 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250629 01:58:05.101029 19806 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250629 01:58:05.101276 19806 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250629 01:58:05.134809 19806 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.17.83.84:46063
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.17.83.126:36043
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.17.83.126:36043
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--superuser_acl=<redacted>
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.17.83.126
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.18.0-SNAPSHOT
revision f0776864a65c19ebdd909053f3e624e8bcc25cee
build type FASTDEBUG
built by None at 29 Jun 2025 01:43:20 UTC on 5fd53c4cbb9d
build id 6821
TSAN enabled
I20250629 01:58:05.136119 19806 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250629 01:58:05.137691 19806 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250629 01:58:05.152748 19813 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:58:05.152786 19812 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:58:05.152806 19815 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:58:06.335011 19814 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1180 milliseconds
I20250629 01:58:06.335119 19806 server_base.cc:1043] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250629 01:58:06.336256 19806 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250629 01:58:06.338744 19806 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250629 01:58:06.340076 19806 hybrid_clock.cc:648] HybridClock initialized: now 1751162286340039 us; error 50 us; skew 500 ppm
I20250629 01:58:06.340821 19806 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250629 01:58:06.347151 19806 webserver.cc:469] Webserver started at http://127.17.83.126:39693/ using document root <none> and password file <none>
I20250629 01:58:06.348093 19806 fs_manager.cc:362] Metadata directory not provided
I20250629 01:58:06.348300 19806 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250629 01:58:06.348729 19806 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250629 01:58:06.352818 19806 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "13317dded07a48afa00b2e1772975560"
format_stamp: "Formatted at 2025-06-29 01:58:06 on dist-test-slave-v1mb"
I20250629 01:58:06.354120 19806 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "13317dded07a48afa00b2e1772975560"
format_stamp: "Formatted at 2025-06-29 01:58:06 on dist-test-slave-v1mb"
I20250629 01:58:06.360960 19806 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.001s sys 0.008s
I20250629 01:58:06.366000 19822 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250629 01:58:06.366941 19806 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.002s sys 0.000s
I20250629 01:58:06.367305 19806 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
uuid: "13317dded07a48afa00b2e1772975560"
format_stamp: "Formatted at 2025-06-29 01:58:06 on dist-test-slave-v1mb"
I20250629 01:58:06.367619 19806 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250629 01:58:06.417109 19806 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250629 01:58:06.418511 19806 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250629 01:58:06.418919 19806 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250629 01:58:06.486603 19806 rpc_server.cc:307] RPC server started. Bound to: 127.17.83.126:36043
I20250629 01:58:06.486683 19873 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.17.83.126:36043 every 8 connection(s)
I20250629 01:58:06.489190 19806 server_base.cc:1180] Dumped server information to /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250629 01:58:06.493949 19874 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250629 01:58:06.496618 17741 external_mini_cluster.cc:1413] Started /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu as pid 19806
I20250629 01:58:06.497014 17741 external_mini_cluster.cc:1427] Reading /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250629 01:58:06.515842 19874 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 13317dded07a48afa00b2e1772975560: Bootstrap starting.
I20250629 01:58:06.522131 19874 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 13317dded07a48afa00b2e1772975560: Neither blocks nor log segments found. Creating new log.
I20250629 01:58:06.523711 19874 log.cc:826] T 00000000000000000000000000000000 P 13317dded07a48afa00b2e1772975560: Log is configured to *not* fsync() on all Append() calls
I20250629 01:58:06.528126 19874 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 13317dded07a48afa00b2e1772975560: No bootstrap required, opened a new log
I20250629 01:58:06.545104 19874 raft_consensus.cc:357] T 00000000000000000000000000000000 P 13317dded07a48afa00b2e1772975560 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "13317dded07a48afa00b2e1772975560" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 36043 } }
I20250629 01:58:06.545683 19874 raft_consensus.cc:383] T 00000000000000000000000000000000 P 13317dded07a48afa00b2e1772975560 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250629 01:58:06.545866 19874 raft_consensus.cc:738] T 00000000000000000000000000000000 P 13317dded07a48afa00b2e1772975560 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 13317dded07a48afa00b2e1772975560, State: Initialized, Role: FOLLOWER
I20250629 01:58:06.546406 19874 consensus_queue.cc:260] T 00000000000000000000000000000000 P 13317dded07a48afa00b2e1772975560 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "13317dded07a48afa00b2e1772975560" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 36043 } }
I20250629 01:58:06.546897 19874 raft_consensus.cc:397] T 00000000000000000000000000000000 P 13317dded07a48afa00b2e1772975560 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250629 01:58:06.547119 19874 raft_consensus.cc:491] T 00000000000000000000000000000000 P 13317dded07a48afa00b2e1772975560 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250629 01:58:06.547390 19874 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 13317dded07a48afa00b2e1772975560 [term 0 FOLLOWER]: Advancing to term 1
I20250629 01:58:06.551870 19874 raft_consensus.cc:513] T 00000000000000000000000000000000 P 13317dded07a48afa00b2e1772975560 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "13317dded07a48afa00b2e1772975560" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 36043 } }
I20250629 01:58:06.552474 19874 leader_election.cc:304] T 00000000000000000000000000000000 P 13317dded07a48afa00b2e1772975560 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 13317dded07a48afa00b2e1772975560; no voters:
I20250629 01:58:06.553948 19874 leader_election.cc:290] T 00000000000000000000000000000000 P 13317dded07a48afa00b2e1772975560 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250629 01:58:06.554618 19879 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 13317dded07a48afa00b2e1772975560 [term 1 FOLLOWER]: Leader election won for term 1
I20250629 01:58:06.557014 19879 raft_consensus.cc:695] T 00000000000000000000000000000000 P 13317dded07a48afa00b2e1772975560 [term 1 LEADER]: Becoming Leader. State: Replica: 13317dded07a48afa00b2e1772975560, State: Running, Role: LEADER
I20250629 01:58:06.557771 19879 consensus_queue.cc:237] T 00000000000000000000000000000000 P 13317dded07a48afa00b2e1772975560 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "13317dded07a48afa00b2e1772975560" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 36043 } }
I20250629 01:58:06.558086 19874 sys_catalog.cc:564] T 00000000000000000000000000000000 P 13317dded07a48afa00b2e1772975560 [sys.catalog]: configured and running, proceeding with master startup.
I20250629 01:58:06.568707 19881 sys_catalog.cc:455] T 00000000000000000000000000000000 P 13317dded07a48afa00b2e1772975560 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 13317dded07a48afa00b2e1772975560. Latest consensus state: current_term: 1 leader_uuid: "13317dded07a48afa00b2e1772975560" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "13317dded07a48afa00b2e1772975560" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 36043 } } }
I20250629 01:58:06.568620 19880 sys_catalog.cc:455] T 00000000000000000000000000000000 P 13317dded07a48afa00b2e1772975560 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "13317dded07a48afa00b2e1772975560" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "13317dded07a48afa00b2e1772975560" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 36043 } } }
I20250629 01:58:06.569306 19880 sys_catalog.cc:458] T 00000000000000000000000000000000 P 13317dded07a48afa00b2e1772975560 [sys.catalog]: This master's current role is: LEADER
I20250629 01:58:06.569306 19881 sys_catalog.cc:458] T 00000000000000000000000000000000 P 13317dded07a48afa00b2e1772975560 [sys.catalog]: This master's current role is: LEADER
I20250629 01:58:06.572820 19888 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250629 01:58:06.584731 19888 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250629 01:58:06.599647 19888 catalog_manager.cc:1349] Generated new cluster ID: 0bc9ce80d47947d1a90e1a981661920d
I20250629 01:58:06.599877 19888 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250629 01:58:06.614377 19888 catalog_manager.cc:1372] Generated new certificate authority record
I20250629 01:58:06.615742 19888 catalog_manager.cc:1506] Loading token signing keys...
I20250629 01:58:06.626820 19888 catalog_manager.cc:5955] T 00000000000000000000000000000000 P 13317dded07a48afa00b2e1772975560: Generated new TSK 0
I20250629 01:58:06.627799 19888 catalog_manager.cc:1516] Initializing in-progress tserver states...
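At this point the master is fully initialized on 127.17.83.126:36043: the sys catalog has bootstrapped, and a cluster ID, CA record, and TSK 0 have been generated (lines above). A quick way to confirm a master is serving is the stock CLI; a minimal sketch using the address from this log (output format varies by version):

# List the masters of the cluster along with their Raft roles.
kudu master list 127.17.83.126:36043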
I20250629 01:58:06.643673 17741 external_mini_cluster.cc:1351] Running /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
/tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.17.83.65:0
--local_ip_for_outbound_sockets=127.17.83.65
--webserver_interface=127.17.83.65
--webserver_port=0
--tserver_master_addrs=127.17.83.126:36043
--builtin_ntp_servers=127.17.83.84:46063
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
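The invocation above is driven by the test's ExternalMiniCluster harness, so most of the flags are test-only overrides (--never_fsync, --rpc_server_allow_ephemeral_ports, reduced key sizes, unlocked unsafe/experimental flags). Outside of a test, a tablet server start-up needs little more than its storage directories and the master address; a minimal sketch reusing values from this log, with the long test paths replaced by hypothetical /data paths:

# Minimal tablet server start-up against the master from this log.
kudu tserver run \
  --fs_wal_dir=/data/ts-0/wal \
  --fs_data_dirs=/data/ts-0/data \
  --tserver_master_addrs=127.17.83.126:36043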
W20250629 01:58:06.924739 19898 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:58:06.925192 19898 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250629 01:58:06.925631 19898 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:58:06.953800 19898 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250629 01:58:06.954547 19898 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.17.83.65
I20250629 01:58:06.987562 19898 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.17.83.84:46063
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.17.83.65:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.17.83.65
--webserver_port=0
--tserver_master_addrs=127.17.83.126:36043
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.17.83.65
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision f0776864a65c19ebdd909053f3e624e8bcc25cee
build type FASTDEBUG
built by None at 29 Jun 2025 01:43:20 UTC on 5fd53c4cbb9d
build id 6821
TSAN enabled
I20250629 01:58:06.988808 19898 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250629 01:58:06.990363 19898 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250629 01:58:07.007679 19905 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:58:07.010443 19907 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:58:07.011045 19898 server_base.cc:1048] running on GCE node
W20250629 01:58:07.009519 19904 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:58:08.141350 19898 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250629 01:58:08.144097 19898 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250629 01:58:08.145499 19898 hybrid_clock.cc:648] HybridClock initialized: now 1751162288145448 us; error 69 us; skew 500 ppm
I20250629 01:58:08.146279 19898 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250629 01:58:08.153023 19898 webserver.cc:469] Webserver started at http://127.17.83.65:35891/ using document root <none> and password file <none>
I20250629 01:58:08.153831 19898 fs_manager.cc:362] Metadata directory not provided
I20250629 01:58:08.154063 19898 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250629 01:58:08.154470 19898 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250629 01:58:08.158665 19898 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "f3284a72ac914825b8e973e0a5211d16"
format_stamp: "Formatted at 2025-06-29 01:58:08 on dist-test-slave-v1mb"
I20250629 01:58:08.159736 19898 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "f3284a72ac914825b8e973e0a5211d16"
format_stamp: "Formatted at 2025-06-29 01:58:08 on dist-test-slave-v1mb"
I20250629 01:58:08.166513 19898 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.006s sys 0.001s
I20250629 01:58:08.172245 19914 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250629 01:58:08.173255 19898 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.005s sys 0.000s
I20250629 01:58:08.173555 19898 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "f3284a72ac914825b8e973e0a5211d16"
format_stamp: "Formatted at 2025-06-29 01:58:08 on dist-test-slave-v1mb"
I20250629 01:58:08.173841 19898 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
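The report above shows a freshly formatted layout: zero live blocks and zero log block manager (LBM) containers. A similar consistency report can be produced offline by the filesystem tool once the server is stopped; a sketch against the directories from this log:

# Offline check of ts-0's WAL and data directories (run only while the tablet server is down).
kudu fs check \
  --fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal \
  --fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data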
I20250629 01:58:08.242904 19898 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250629 01:58:08.244232 19898 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250629 01:58:08.244621 19898 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250629 01:58:08.247107 19898 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250629 01:58:08.251303 19898 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250629 01:58:08.251502 19898 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250629 01:58:08.251753 19898 ts_tablet_manager.cc:610] Registered 0 tablets
I20250629 01:58:08.251901 19898 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250629 01:58:08.379376 19898 rpc_server.cc:307] RPC server started. Bound to: 127.17.83.65:40405
I20250629 01:58:08.379516 20026 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.17.83.65:40405 every 8 connection(s)
I20250629 01:58:08.381770 19898 server_base.cc:1180] Dumped server information to /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250629 01:58:08.388233 17741 external_mini_cluster.cc:1413] Started /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu as pid 19898
I20250629 01:58:08.388756 17741 external_mini_cluster.cc:1427] Reading /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250629 01:58:08.395498 17741 external_mini_cluster.cc:1351] Running /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
/tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.17.83.66:0
--local_ip_for_outbound_sockets=127.17.83.66
--webserver_interface=127.17.83.66
--webserver_port=0
--tserver_master_addrs=127.17.83.126:36043
--builtin_ntp_servers=127.17.83.84:46063
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250629 01:58:08.403333 20027 heartbeater.cc:344] Connected to a master server at 127.17.83.126:36043
I20250629 01:58:08.403741 20027 heartbeater.cc:461] Registering TS with master...
I20250629 01:58:08.404949 20027 heartbeater.cc:507] Master 127.17.83.126:36043 requested a full tablet report, sending...
I20250629 01:58:08.407687 19839 ts_manager.cc:194] Registered new tserver with Master: f3284a72ac914825b8e973e0a5211d16 (127.17.83.65:40405)
I20250629 01:58:08.409454 19839 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.17.83.65:59203
W20250629 01:58:08.682406 20031 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:58:08.682808 20031 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250629 01:58:08.683286 20031 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:58:08.711687 20031 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250629 01:58:08.712347 20031 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.17.83.66
I20250629 01:58:08.743518 20031 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.17.83.84:46063
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.17.83.66:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.17.83.66
--webserver_port=0
--tserver_master_addrs=127.17.83.126:36043
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.17.83.66
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision f0776864a65c19ebdd909053f3e624e8bcc25cee
build type FASTDEBUG
built by None at 29 Jun 2025 01:43:20 UTC on 5fd53c4cbb9d
build id 6821
TSAN enabled
I20250629 01:58:08.744702 20031 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250629 01:58:08.746228 20031 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250629 01:58:08.759840 20037 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:58:09.412879 20027 heartbeater.cc:499] Master 127.17.83.126:36043 was elected leader, sending a full tablet report...
W20250629 01:58:08.760035 20038 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:58:09.876150 20040 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:58:09.877957 20039 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1116 milliseconds
I20250629 01:58:09.878047 20031 server_base.cc:1043] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250629 01:58:09.879184 20031 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250629 01:58:09.881803 20031 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250629 01:58:09.883234 20031 hybrid_clock.cc:648] HybridClock initialized: now 1751162289883173 us; error 42 us; skew 500 ppm
I20250629 01:58:09.883920 20031 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250629 01:58:09.895320 20031 webserver.cc:469] Webserver started at http://127.17.83.66:42355/ using document root <none> and password file <none>
I20250629 01:58:09.896241 20031 fs_manager.cc:362] Metadata directory not provided
I20250629 01:58:09.896467 20031 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250629 01:58:09.897051 20031 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250629 01:58:09.901588 20031 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data/instance:
uuid: "5a417b5991864f7a97671a9bdf9e27c4"
format_stamp: "Formatted at 2025-06-29 01:58:09 on dist-test-slave-v1mb"
I20250629 01:58:09.902577 20031 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal/instance:
uuid: "5a417b5991864f7a97671a9bdf9e27c4"
format_stamp: "Formatted at 2025-06-29 01:58:09 on dist-test-slave-v1mb"
I20250629 01:58:09.909716 20031 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.005s sys 0.004s
I20250629 01:58:09.914723 20047 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250629 01:58:09.915637 20031 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.001s sys 0.002s
I20250629 01:58:09.915930 20031 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "5a417b5991864f7a97671a9bdf9e27c4"
format_stamp: "Formatted at 2025-06-29 01:58:09 on dist-test-slave-v1mb"
I20250629 01:58:09.916218 20031 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250629 01:58:09.958446 20031 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250629 01:58:09.959872 20031 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250629 01:58:09.960271 20031 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250629 01:58:09.962523 20031 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250629 01:58:09.966208 20031 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250629 01:58:09.966389 20031 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250629 01:58:09.966626 20031 ts_tablet_manager.cc:610] Registered 0 tablets
I20250629 01:58:09.966831 20031 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250629 01:58:10.090128 20031 rpc_server.cc:307] RPC server started. Bound to: 127.17.83.66:46651
I20250629 01:58:10.090250 20159 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.17.83.66:46651 every 8 connection(s)
I20250629 01:58:10.092641 20031 server_base.cc:1180] Dumped server information to /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250629 01:58:10.101397 17741 external_mini_cluster.cc:1413] Started /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu as pid 20031
I20250629 01:58:10.101861 17741 external_mini_cluster.cc:1427] Reading /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal/instance
I20250629 01:58:10.107856 17741 external_mini_cluster.cc:1351] Running /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
/tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.17.83.67:0
--local_ip_for_outbound_sockets=127.17.83.67
--webserver_interface=127.17.83.67
--webserver_port=0
--tserver_master_addrs=127.17.83.126:36043
--builtin_ntp_servers=127.17.83.84:46063
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250629 01:58:10.113582 20160 heartbeater.cc:344] Connected to a master server at 127.17.83.126:36043
I20250629 01:58:10.114034 20160 heartbeater.cc:461] Registering TS with master...
I20250629 01:58:10.115087 20160 heartbeater.cc:507] Master 127.17.83.126:36043 requested a full tablet report, sending...
I20250629 01:58:10.117180 19839 ts_manager.cc:194] Registered new tserver with Master: 5a417b5991864f7a97671a9bdf9e27c4 (127.17.83.66:46651)
I20250629 01:58:10.118933 19839 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.17.83.66:57677
W20250629 01:58:10.399391 20164 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:58:10.399889 20164 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250629 01:58:10.400355 20164 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:58:10.431712 20164 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250629 01:58:10.432544 20164 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.17.83.67
I20250629 01:58:10.465533 20164 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.17.83.84:46063
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.17.83.67:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.17.83.67
--webserver_port=0
--tserver_master_addrs=127.17.83.126:36043
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.17.83.67
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision f0776864a65c19ebdd909053f3e624e8bcc25cee
build type FASTDEBUG
built by None at 29 Jun 2025 01:43:20 UTC on 5fd53c4cbb9d
build id 6821
TSAN enabled
I20250629 01:58:10.466804 20164 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250629 01:58:10.468714 20164 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250629 01:58:10.484069 20171 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:58:11.121861 20160 heartbeater.cc:499] Master 127.17.83.126:36043 was elected leader, sending a full tablet report...
W20250629 01:58:10.484105 20170 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:58:10.485744 20173 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:58:11.628346 20172 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection time-out
I20250629 01:58:11.628403 20164 server_base.cc:1043] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250629 01:58:11.632180 20164 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250629 01:58:11.634184 20164 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250629 01:58:11.635500 20164 hybrid_clock.cc:648] HybridClock initialized: now 1751162291635466 us; error 51 us; skew 500 ppm
I20250629 01:58:11.636230 20164 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250629 01:58:11.642112 20164 webserver.cc:469] Webserver started at http://127.17.83.67:46723/ using document root <none> and password file <none>
I20250629 01:58:11.642911 20164 fs_manager.cc:362] Metadata directory not provided
I20250629 01:58:11.643097 20164 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250629 01:58:11.643518 20164 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250629 01:58:11.647747 20164 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data/instance:
uuid: "869f23be9b5c4eb19683957de1abe462"
format_stamp: "Formatted at 2025-06-29 01:58:11 on dist-test-slave-v1mb"
I20250629 01:58:11.648761 20164 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal/instance:
uuid: "869f23be9b5c4eb19683957de1abe462"
format_stamp: "Formatted at 2025-06-29 01:58:11 on dist-test-slave-v1mb"
I20250629 01:58:11.655128 20164 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.006s sys 0.001s
I20250629 01:58:11.661190 20180 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250629 01:58:11.662460 20164 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.006s sys 0.000s
I20250629 01:58:11.662856 20164 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "869f23be9b5c4eb19683957de1abe462"
format_stamp: "Formatted at 2025-06-29 01:58:11 on dist-test-slave-v1mb"
I20250629 01:58:11.663334 20164 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250629 01:58:11.721257 20164 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250629 01:58:11.722621 20164 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250629 01:58:11.723029 20164 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250629 01:58:11.725381 20164 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250629 01:58:11.729174 20164 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250629 01:58:11.729368 20164 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250629 01:58:11.729590 20164 ts_tablet_manager.cc:610] Registered 0 tablets
I20250629 01:58:11.729734 20164 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250629 01:58:11.855043 20164 rpc_server.cc:307] RPC server started. Bound to: 127.17.83.67:41283
I20250629 01:58:11.855129 20292 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.17.83.67:41283 every 8 connection(s)
I20250629 01:58:11.857342 20164 server_base.cc:1180] Dumped server information to /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250629 01:58:11.865895 17741 external_mini_cluster.cc:1413] Started /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu as pid 20164
I20250629 01:58:11.866250 17741 external_mini_cluster.cc:1427] Reading /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal/instance
I20250629 01:58:11.876608 20293 heartbeater.cc:344] Connected to a master server at 127.17.83.126:36043
I20250629 01:58:11.877000 20293 heartbeater.cc:461] Registering TS with master...
I20250629 01:58:11.877882 20293 heartbeater.cc:507] Master 127.17.83.126:36043 requested a full tablet report, sending...
I20250629 01:58:11.879693 19839 ts_manager.cc:194] Registered new tserver with Master: 869f23be9b5c4eb19683957de1abe462 (127.17.83.67:41283)
I20250629 01:58:11.880774 19839 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.17.83.67:38505
I20250629 01:58:11.885246 17741 external_mini_cluster.cc:934] 3 TS(s) registered with all masters
I20250629 01:58:11.913070 19839 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:57326:
name: "TestTable"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
owner: "alice"
W20250629 01:58:11.930761 19839 catalog_manager.cc:6944] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table TestTable in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250629 01:58:11.976153 20095 tablet_service.cc:1468] Processing CreateTablet for tablet 6b9b5305aeed4d6aa4003640ed43cf4f (DEFAULT_TABLE table=TestTable [id=45d32f6365e4490493a48c60a937f89e]), partition=RANGE (key) PARTITION UNBOUNDED
I20250629 01:58:11.977986 20095 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 6b9b5305aeed4d6aa4003640ed43cf4f. 1 dirs total, 0 dirs full, 0 dirs failed
I20250629 01:58:11.981571 19962 tablet_service.cc:1468] Processing CreateTablet for tablet 6b9b5305aeed4d6aa4003640ed43cf4f (DEFAULT_TABLE table=TestTable [id=45d32f6365e4490493a48c60a937f89e]), partition=RANGE (key) PARTITION UNBOUNDED
I20250629 01:58:11.983182 19962 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 6b9b5305aeed4d6aa4003640ed43cf4f. 1 dirs total, 0 dirs full, 0 dirs failed
I20250629 01:58:11.985374 20228 tablet_service.cc:1468] Processing CreateTablet for tablet 6b9b5305aeed4d6aa4003640ed43cf4f (DEFAULT_TABLE table=TestTable [id=45d32f6365e4490493a48c60a937f89e]), partition=RANGE (key) PARTITION UNBOUNDED
I20250629 01:58:11.987941 20228 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 6b9b5305aeed4d6aa4003640ed43cf4f. 1 dirs total, 0 dirs full, 0 dirs failed
I20250629 01:58:11.999092 20312 tablet_bootstrap.cc:492] T 6b9b5305aeed4d6aa4003640ed43cf4f P 5a417b5991864f7a97671a9bdf9e27c4: Bootstrap starting.
I20250629 01:58:12.004741 20312 tablet_bootstrap.cc:654] T 6b9b5305aeed4d6aa4003640ed43cf4f P 5a417b5991864f7a97671a9bdf9e27c4: Neither blocks nor log segments found. Creating new log.
I20250629 01:58:12.006438 20312 log.cc:826] T 6b9b5305aeed4d6aa4003640ed43cf4f P 5a417b5991864f7a97671a9bdf9e27c4: Log is configured to *not* fsync() on all Append() calls
I20250629 01:58:12.011004 20312 tablet_bootstrap.cc:492] T 6b9b5305aeed4d6aa4003640ed43cf4f P 5a417b5991864f7a97671a9bdf9e27c4: No bootstrap required, opened a new log
I20250629 01:58:12.011447 20312 ts_tablet_manager.cc:1397] T 6b9b5305aeed4d6aa4003640ed43cf4f P 5a417b5991864f7a97671a9bdf9e27c4: Time spent bootstrapping tablet: real 0.013s user 0.012s sys 0.000s
I20250629 01:58:12.012269 20314 tablet_bootstrap.cc:492] T 6b9b5305aeed4d6aa4003640ed43cf4f P f3284a72ac914825b8e973e0a5211d16: Bootstrap starting.
I20250629 01:58:12.016173 20315 tablet_bootstrap.cc:492] T 6b9b5305aeed4d6aa4003640ed43cf4f P 869f23be9b5c4eb19683957de1abe462: Bootstrap starting.
I20250629 01:58:12.019515 20314 tablet_bootstrap.cc:654] T 6b9b5305aeed4d6aa4003640ed43cf4f P f3284a72ac914825b8e973e0a5211d16: Neither blocks nor log segments found. Creating new log.
I20250629 01:58:12.021165 20315 tablet_bootstrap.cc:654] T 6b9b5305aeed4d6aa4003640ed43cf4f P 869f23be9b5c4eb19683957de1abe462: Neither blocks nor log segments found. Creating new log.
I20250629 01:58:12.021584 20314 log.cc:826] T 6b9b5305aeed4d6aa4003640ed43cf4f P f3284a72ac914825b8e973e0a5211d16: Log is configured to *not* fsync() on all Append() calls
I20250629 01:58:12.022608 20315 log.cc:826] T 6b9b5305aeed4d6aa4003640ed43cf4f P 869f23be9b5c4eb19683957de1abe462: Log is configured to *not* fsync() on all Append() calls
I20250629 01:58:12.028179 20315 tablet_bootstrap.cc:492] T 6b9b5305aeed4d6aa4003640ed43cf4f P 869f23be9b5c4eb19683957de1abe462: No bootstrap required, opened a new log
I20250629 01:58:12.028541 20315 ts_tablet_manager.cc:1397] T 6b9b5305aeed4d6aa4003640ed43cf4f P 869f23be9b5c4eb19683957de1abe462: Time spent bootstrapping tablet: real 0.013s user 0.007s sys 0.004s
I20250629 01:58:12.029495 20314 tablet_bootstrap.cc:492] T 6b9b5305aeed4d6aa4003640ed43cf4f P f3284a72ac914825b8e973e0a5211d16: No bootstrap required, opened a new log
I20250629 01:58:12.029924 20314 ts_tablet_manager.cc:1397] T 6b9b5305aeed4d6aa4003640ed43cf4f P f3284a72ac914825b8e973e0a5211d16: Time spent bootstrapping tablet: real 0.019s user 0.015s sys 0.000s
I20250629 01:58:12.036847 20312 raft_consensus.cc:357] T 6b9b5305aeed4d6aa4003640ed43cf4f P 5a417b5991864f7a97671a9bdf9e27c4 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "5a417b5991864f7a97671a9bdf9e27c4" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46651 } } peers { permanent_uuid: "869f23be9b5c4eb19683957de1abe462" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41283 } } peers { permanent_uuid: "f3284a72ac914825b8e973e0a5211d16" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40405 } }
I20250629 01:58:12.037724 20312 raft_consensus.cc:383] T 6b9b5305aeed4d6aa4003640ed43cf4f P 5a417b5991864f7a97671a9bdf9e27c4 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250629 01:58:12.038004 20312 raft_consensus.cc:738] T 6b9b5305aeed4d6aa4003640ed43cf4f P 5a417b5991864f7a97671a9bdf9e27c4 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 5a417b5991864f7a97671a9bdf9e27c4, State: Initialized, Role: FOLLOWER
I20250629 01:58:12.038859 20312 consensus_queue.cc:260] T 6b9b5305aeed4d6aa4003640ed43cf4f P 5a417b5991864f7a97671a9bdf9e27c4 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "5a417b5991864f7a97671a9bdf9e27c4" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46651 } } peers { permanent_uuid: "869f23be9b5c4eb19683957de1abe462" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41283 } } peers { permanent_uuid: "f3284a72ac914825b8e973e0a5211d16" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40405 } }
I20250629 01:58:12.042737 20312 ts_tablet_manager.cc:1428] T 6b9b5305aeed4d6aa4003640ed43cf4f P 5a417b5991864f7a97671a9bdf9e27c4: Time spent starting tablet: real 0.031s user 0.030s sys 0.000s
I20250629 01:58:12.046263 20315 raft_consensus.cc:357] T 6b9b5305aeed4d6aa4003640ed43cf4f P 869f23be9b5c4eb19683957de1abe462 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "5a417b5991864f7a97671a9bdf9e27c4" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46651 } } peers { permanent_uuid: "869f23be9b5c4eb19683957de1abe462" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41283 } } peers { permanent_uuid: "f3284a72ac914825b8e973e0a5211d16" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40405 } }
I20250629 01:58:12.046921 20315 raft_consensus.cc:383] T 6b9b5305aeed4d6aa4003640ed43cf4f P 869f23be9b5c4eb19683957de1abe462 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250629 01:58:12.047128 20315 raft_consensus.cc:738] T 6b9b5305aeed4d6aa4003640ed43cf4f P 869f23be9b5c4eb19683957de1abe462 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 869f23be9b5c4eb19683957de1abe462, State: Initialized, Role: FOLLOWER
I20250629 01:58:12.047886 20315 consensus_queue.cc:260] T 6b9b5305aeed4d6aa4003640ed43cf4f P 869f23be9b5c4eb19683957de1abe462 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "5a417b5991864f7a97671a9bdf9e27c4" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46651 } } peers { permanent_uuid: "869f23be9b5c4eb19683957de1abe462" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41283 } } peers { permanent_uuid: "f3284a72ac914825b8e973e0a5211d16" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40405 } }
I20250629 01:58:12.052373 20293 heartbeater.cc:499] Master 127.17.83.126:36043 was elected leader, sending a full tablet report...
I20250629 01:58:12.053256 20315 ts_tablet_manager.cc:1428] T 6b9b5305aeed4d6aa4003640ed43cf4f P 869f23be9b5c4eb19683957de1abe462: Time spent starting tablet: real 0.024s user 0.022s sys 0.003s
I20250629 01:58:12.053577 20314 raft_consensus.cc:357] T 6b9b5305aeed4d6aa4003640ed43cf4f P f3284a72ac914825b8e973e0a5211d16 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "5a417b5991864f7a97671a9bdf9e27c4" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46651 } } peers { permanent_uuid: "869f23be9b5c4eb19683957de1abe462" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41283 } } peers { permanent_uuid: "f3284a72ac914825b8e973e0a5211d16" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40405 } }
I20250629 01:58:12.054454 20314 raft_consensus.cc:383] T 6b9b5305aeed4d6aa4003640ed43cf4f P f3284a72ac914825b8e973e0a5211d16 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250629 01:58:12.054740 20314 raft_consensus.cc:738] T 6b9b5305aeed4d6aa4003640ed43cf4f P f3284a72ac914825b8e973e0a5211d16 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: f3284a72ac914825b8e973e0a5211d16, State: Initialized, Role: FOLLOWER
I20250629 01:58:12.055675 20314 consensus_queue.cc:260] T 6b9b5305aeed4d6aa4003640ed43cf4f P f3284a72ac914825b8e973e0a5211d16 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "5a417b5991864f7a97671a9bdf9e27c4" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46651 } } peers { permanent_uuid: "869f23be9b5c4eb19683957de1abe462" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41283 } } peers { permanent_uuid: "f3284a72ac914825b8e973e0a5211d16" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40405 } }
I20250629 01:58:12.059454 20314 ts_tablet_manager.cc:1428] T 6b9b5305aeed4d6aa4003640ed43cf4f P f3284a72ac914825b8e973e0a5211d16: Time spent starting tablet: real 0.029s user 0.022s sys 0.009s
W20250629 01:58:12.098613 20161 tablet.cc:2378] T 6b9b5305aeed4d6aa4003640ed43cf4f P 5a417b5991864f7a97671a9bdf9e27c4: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250629 01:58:12.111773 20294 tablet.cc:2378] T 6b9b5305aeed4d6aa4003640ed43cf4f P 869f23be9b5c4eb19683957de1abe462: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250629 01:58:12.139539 20028 tablet.cc:2378] T 6b9b5305aeed4d6aa4003640ed43cf4f P f3284a72ac914825b8e973e0a5211d16: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250629 01:58:12.284330 20320 raft_consensus.cc:491] T 6b9b5305aeed4d6aa4003640ed43cf4f P f3284a72ac914825b8e973e0a5211d16 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250629 01:58:12.284910 20320 raft_consensus.cc:513] T 6b9b5305aeed4d6aa4003640ed43cf4f P f3284a72ac914825b8e973e0a5211d16 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "5a417b5991864f7a97671a9bdf9e27c4" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46651 } } peers { permanent_uuid: "869f23be9b5c4eb19683957de1abe462" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41283 } } peers { permanent_uuid: "f3284a72ac914825b8e973e0a5211d16" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40405 } }
I20250629 01:58:12.288023 20320 leader_election.cc:290] T 6b9b5305aeed4d6aa4003640ed43cf4f P f3284a72ac914825b8e973e0a5211d16 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 5a417b5991864f7a97671a9bdf9e27c4 (127.17.83.66:46651), 869f23be9b5c4eb19683957de1abe462 (127.17.83.67:41283)
I20250629 01:58:12.300786 20115 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "6b9b5305aeed4d6aa4003640ed43cf4f" candidate_uuid: "f3284a72ac914825b8e973e0a5211d16" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "5a417b5991864f7a97671a9bdf9e27c4" is_pre_election: true
I20250629 01:58:12.301097 20248 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "6b9b5305aeed4d6aa4003640ed43cf4f" candidate_uuid: "f3284a72ac914825b8e973e0a5211d16" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "869f23be9b5c4eb19683957de1abe462" is_pre_election: true
I20250629 01:58:12.301734 20115 raft_consensus.cc:2466] T 6b9b5305aeed4d6aa4003640ed43cf4f P 5a417b5991864f7a97671a9bdf9e27c4 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate f3284a72ac914825b8e973e0a5211d16 in term 0.
I20250629 01:58:12.301862 20248 raft_consensus.cc:2466] T 6b9b5305aeed4d6aa4003640ed43cf4f P 869f23be9b5c4eb19683957de1abe462 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate f3284a72ac914825b8e973e0a5211d16 in term 0.
I20250629 01:58:12.303049 19917 leader_election.cc:304] T 6b9b5305aeed4d6aa4003640ed43cf4f P f3284a72ac914825b8e973e0a5211d16 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 5a417b5991864f7a97671a9bdf9e27c4, f3284a72ac914825b8e973e0a5211d16; no voters:
I20250629 01:58:12.303957 20320 raft_consensus.cc:2802] T 6b9b5305aeed4d6aa4003640ed43cf4f P f3284a72ac914825b8e973e0a5211d16 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250629 01:58:12.304335 20320 raft_consensus.cc:491] T 6b9b5305aeed4d6aa4003640ed43cf4f P f3284a72ac914825b8e973e0a5211d16 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250629 01:58:12.304654 20320 raft_consensus.cc:3058] T 6b9b5305aeed4d6aa4003640ed43cf4f P f3284a72ac914825b8e973e0a5211d16 [term 0 FOLLOWER]: Advancing to term 1
I20250629 01:58:12.310606 20320 raft_consensus.cc:513] T 6b9b5305aeed4d6aa4003640ed43cf4f P f3284a72ac914825b8e973e0a5211d16 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "5a417b5991864f7a97671a9bdf9e27c4" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46651 } } peers { permanent_uuid: "869f23be9b5c4eb19683957de1abe462" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41283 } } peers { permanent_uuid: "f3284a72ac914825b8e973e0a5211d16" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40405 } }
I20250629 01:58:12.312868 20115 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "6b9b5305aeed4d6aa4003640ed43cf4f" candidate_uuid: "f3284a72ac914825b8e973e0a5211d16" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "5a417b5991864f7a97671a9bdf9e27c4"
I20250629 01:58:12.313102 20248 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "6b9b5305aeed4d6aa4003640ed43cf4f" candidate_uuid: "f3284a72ac914825b8e973e0a5211d16" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "869f23be9b5c4eb19683957de1abe462"
I20250629 01:58:12.313382 20115 raft_consensus.cc:3058] T 6b9b5305aeed4d6aa4003640ed43cf4f P 5a417b5991864f7a97671a9bdf9e27c4 [term 0 FOLLOWER]: Advancing to term 1
I20250629 01:58:12.313594 20248 raft_consensus.cc:3058] T 6b9b5305aeed4d6aa4003640ed43cf4f P 869f23be9b5c4eb19683957de1abe462 [term 0 FOLLOWER]: Advancing to term 1
I20250629 01:58:12.312546 20320 leader_election.cc:290] T 6b9b5305aeed4d6aa4003640ed43cf4f P f3284a72ac914825b8e973e0a5211d16 [CANDIDATE]: Term 1 election: Requested vote from peers 5a417b5991864f7a97671a9bdf9e27c4 (127.17.83.66:46651), 869f23be9b5c4eb19683957de1abe462 (127.17.83.67:41283)
I20250629 01:58:12.320454 20115 raft_consensus.cc:2466] T 6b9b5305aeed4d6aa4003640ed43cf4f P 5a417b5991864f7a97671a9bdf9e27c4 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate f3284a72ac914825b8e973e0a5211d16 in term 1.
I20250629 01:58:12.320541 20248 raft_consensus.cc:2466] T 6b9b5305aeed4d6aa4003640ed43cf4f P 869f23be9b5c4eb19683957de1abe462 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate f3284a72ac914825b8e973e0a5211d16 in term 1.
I20250629 01:58:12.321530 19917 leader_election.cc:304] T 6b9b5305aeed4d6aa4003640ed43cf4f P f3284a72ac914825b8e973e0a5211d16 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 5a417b5991864f7a97671a9bdf9e27c4, f3284a72ac914825b8e973e0a5211d16; no voters:
I20250629 01:58:12.322137 20320 raft_consensus.cc:2802] T 6b9b5305aeed4d6aa4003640ed43cf4f P f3284a72ac914825b8e973e0a5211d16 [term 1 FOLLOWER]: Leader election won for term 1
I20250629 01:58:12.324237 20320 raft_consensus.cc:695] T 6b9b5305aeed4d6aa4003640ed43cf4f P f3284a72ac914825b8e973e0a5211d16 [term 1 LEADER]: Becoming Leader. State: Replica: f3284a72ac914825b8e973e0a5211d16, State: Running, Role: LEADER
I20250629 01:58:12.325067 20320 consensus_queue.cc:237] T 6b9b5305aeed4d6aa4003640ed43cf4f P f3284a72ac914825b8e973e0a5211d16 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "5a417b5991864f7a97671a9bdf9e27c4" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46651 } } peers { permanent_uuid: "869f23be9b5c4eb19683957de1abe462" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41283 } } peers { permanent_uuid: "f3284a72ac914825b8e973e0a5211d16" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40405 } }
I20250629 01:58:12.336604 19839 catalog_manager.cc:5582] T 6b9b5305aeed4d6aa4003640ed43cf4f P f3284a72ac914825b8e973e0a5211d16 reported cstate change: term changed from 0 to 1, leader changed from <none> to f3284a72ac914825b8e973e0a5211d16 (127.17.83.65). New cstate: current_term: 1 leader_uuid: "f3284a72ac914825b8e973e0a5211d16" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "5a417b5991864f7a97671a9bdf9e27c4" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46651 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "869f23be9b5c4eb19683957de1abe462" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41283 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "f3284a72ac914825b8e973e0a5211d16" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40405 } health_report { overall_health: HEALTHY } } }
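The lines above trace a standard Raft leader election for tablet 6b9b5305aeed4d6aa4003640ed43cf4f: candidate f3284a72ac914825b8e973e0a5211d16 requests votes from its two peers, both grant them, and the election is decided as soon as a simple majority (2 of 3 voters, counting the candidate's own vote) is reached. As an illustrative sketch only (plain Python, not Kudu code), the majority check behind the "Election decided" summary amounts to:

    def election_decided(yes_votes, no_votes, num_voters):
        """Simple-majority outcome: 'won', 'lost', or None while still undecided."""
        majority = num_voters // 2 + 1  # 2 when there are 3 voters
        if yes_votes >= majority:
            return "won"
        if no_votes >= majority:
            return "lost"
        return None

    # Term 1 above: the candidate's own vote plus one granted vote -> candidate won.
    assert election_decided(yes_votes=2, no_votes=0, num_voters=3) == "won"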
I20250629 01:58:12.353291 17741 external_mini_cluster.cc:934] 3 TS(s) registered with all masters
I20250629 01:58:12.357149 17741 ts_itest-base.cc:246] Waiting for 1 tablets on tserver f3284a72ac914825b8e973e0a5211d16 to finish bootstrapping
I20250629 01:58:12.368748 17741 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 5a417b5991864f7a97671a9bdf9e27c4 to finish bootstrapping
I20250629 01:58:12.378588 17741 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 869f23be9b5c4eb19683957de1abe462 to finish bootstrapping
W20250629 01:58:12.532389 19870 debug-util.cc:398] Leaking SignalData structure 0x7b08000a8de0 after lost signal to thread 19807
I20250629 01:58:12.808110 20320 consensus_queue.cc:1035] T 6b9b5305aeed4d6aa4003640ed43cf4f P f3284a72ac914825b8e973e0a5211d16 [LEADER]: Connected to new peer: Peer: permanent_uuid: "869f23be9b5c4eb19683957de1abe462" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 41283 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.001s
I20250629 01:58:12.882135 20341 consensus_queue.cc:1035] T 6b9b5305aeed4d6aa4003640ed43cf4f P f3284a72ac914825b8e973e0a5211d16 [LEADER]: Connected to new peer: Peer: permanent_uuid: "5a417b5991864f7a97671a9bdf9e27c4" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46651 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.001s
W20250629 01:58:14.136445 19838 server_base.cc:1130] Unauthorized access attempt to method kudu.master.MasterService.RefreshAuthzCache from {username='slave'} at 127.0.0.1:57344
I20250629 01:58:15.169628 17741 external_mini_cluster.cc:1620] Killing /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu with pid 19898
I20250629 01:58:15.195003 17741 external_mini_cluster.cc:1620] Killing /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu with pid 20031
I20250629 01:58:15.219373 17741 external_mini_cluster.cc:1620] Killing /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu with pid 20164
I20250629 01:58:15.247831 17741 external_mini_cluster.cc:1620] Killing /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu with pid 19806
2025-06-29T01:58:15Z chronyd exiting
[ OK ] AdminCliTest.TestAuthzResetCacheNotAuthorized (10532 ms)
[ RUN ] AdminCliTest.TestRebuildTables
I20250629 01:58:15.298108 17741 test_util.cc:276] Using random seed: 1049503773
I20250629 01:58:15.302165 17741 ts_itest-base.cc:115] Starting cluster with:
I20250629 01:58:15.302313 17741 ts_itest-base.cc:116] --------------
I20250629 01:58:15.302428 17741 ts_itest-base.cc:117] 3 tablet servers
I20250629 01:58:15.302533 17741 ts_itest-base.cc:118] 3 replicas per TS
I20250629 01:58:15.302652 17741 ts_itest-base.cc:119] --------------
2025-06-29T01:58:15Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-06-29T01:58:15Z Disabled control of system clock
I20250629 01:58:15.335961 17741 external_mini_cluster.cc:1351] Running /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
/tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.17.83.126:44751
--webserver_interface=127.17.83.126
--webserver_port=0
--builtin_ntp_servers=127.17.83.84:42817
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.17.83.126:44751 with env {}
W20250629 01:58:15.616339 20364 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:58:15.616936 20364 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250629 01:58:15.617381 20364 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:58:15.646507 20364 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250629 01:58:15.646898 20364 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250629 01:58:15.647146 20364 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250629 01:58:15.647394 20364 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250629 01:58:15.683027 20364 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.17.83.84:42817
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.17.83.126:44751
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.17.83.126:44751
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.17.83.126
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.18.0-SNAPSHOT
revision f0776864a65c19ebdd909053f3e624e8bcc25cee
build type FASTDEBUG
built by None at 29 Jun 2025 01:43:20 UTC on 5fd53c4cbb9d
build id 6821
TSAN enabled
I20250629 01:58:15.684322 20364 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250629 01:58:15.685808 20364 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250629 01:58:15.701104 20370 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:58:15.701112 20371 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:58:15.702296 20364 server_base.cc:1048] running on GCE node
W20250629 01:58:15.701273 20373 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:58:16.820827 20364 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250629 01:58:16.823379 20364 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250629 01:58:16.824699 20364 hybrid_clock.cc:648] HybridClock initialized: now 1751162296824678 us; error 43 us; skew 500 ppm
I20250629 01:58:16.825430 20364 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250629 01:58:16.835752 20364 webserver.cc:469] Webserver started at http://127.17.83.126:41719/ using document root <none> and password file <none>
I20250629 01:58:16.836603 20364 fs_manager.cc:362] Metadata directory not provided
I20250629 01:58:16.836786 20364 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250629 01:58:16.837190 20364 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250629 01:58:16.841449 20364 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "e51a6e2ab5fe446482623814821c0960"
format_stamp: "Formatted at 2025-06-29 01:58:16 on dist-test-slave-v1mb"
I20250629 01:58:16.842372 20364 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "e51a6e2ab5fe446482623814821c0960"
format_stamp: "Formatted at 2025-06-29 01:58:16 on dist-test-slave-v1mb"
I20250629 01:58:16.848867 20364 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.000s sys 0.009s
I20250629 01:58:16.853837 20380 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250629 01:58:16.854857 20364 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.002s sys 0.001s
I20250629 01:58:16.855116 20364 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
uuid: "e51a6e2ab5fe446482623814821c0960"
format_stamp: "Formatted at 2025-06-29 01:58:16 on dist-test-slave-v1mb"
I20250629 01:58:16.855439 20364 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250629 01:58:16.907166 20364 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250629 01:58:16.908515 20364 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250629 01:58:16.908901 20364 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250629 01:58:16.973855 20364 rpc_server.cc:307] RPC server started. Bound to: 127.17.83.126:44751
I20250629 01:58:16.973975 20431 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.17.83.126:44751 every 8 connection(s)
I20250629 01:58:16.976389 20364 server_base.cc:1180] Dumped server information to /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250629 01:58:16.981452 20432 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250629 01:58:16.984370 17741 external_mini_cluster.cc:1413] Started /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu as pid 20364
I20250629 01:58:16.984757 17741 external_mini_cluster.cc:1427] Reading /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250629 01:58:17.002584 20432 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960: Bootstrap starting.
I20250629 01:58:17.008572 20432 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960: Neither blocks nor log segments found. Creating new log.
I20250629 01:58:17.010131 20432 log.cc:826] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960: Log is configured to *not* fsync() on all Append() calls
I20250629 01:58:17.014233 20432 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960: No bootstrap required, opened a new log
I20250629 01:58:17.030241 20432 raft_consensus.cc:357] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e51a6e2ab5fe446482623814821c0960" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 44751 } }
I20250629 01:58:17.030979 20432 raft_consensus.cc:383] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250629 01:58:17.031289 20432 raft_consensus.cc:738] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: e51a6e2ab5fe446482623814821c0960, State: Initialized, Role: FOLLOWER
I20250629 01:58:17.032017 20432 consensus_queue.cc:260] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e51a6e2ab5fe446482623814821c0960" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 44751 } }
I20250629 01:58:17.032526 20432 raft_consensus.cc:397] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250629 01:58:17.032792 20432 raft_consensus.cc:491] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250629 01:58:17.033072 20432 raft_consensus.cc:3058] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [term 0 FOLLOWER]: Advancing to term 1
I20250629 01:58:17.036686 20432 raft_consensus.cc:513] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e51a6e2ab5fe446482623814821c0960" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 44751 } }
I20250629 01:58:17.037377 20432 leader_election.cc:304] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: e51a6e2ab5fe446482623814821c0960; no voters:
I20250629 01:58:17.039182 20432 leader_election.cc:290] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250629 01:58:17.039882 20437 raft_consensus.cc:2802] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [term 1 FOLLOWER]: Leader election won for term 1
I20250629 01:58:17.041909 20437 raft_consensus.cc:695] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [term 1 LEADER]: Becoming Leader. State: Replica: e51a6e2ab5fe446482623814821c0960, State: Running, Role: LEADER
I20250629 01:58:17.042619 20437 consensus_queue.cc:237] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e51a6e2ab5fe446482623814821c0960" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 44751 } }
I20250629 01:58:17.043493 20432 sys_catalog.cc:564] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [sys.catalog]: configured and running, proceeding with master startup.
I20250629 01:58:17.052520 20439 sys_catalog.cc:455] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [sys.catalog]: SysCatalogTable state changed. Reason: New leader e51a6e2ab5fe446482623814821c0960. Latest consensus state: current_term: 1 leader_uuid: "e51a6e2ab5fe446482623814821c0960" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e51a6e2ab5fe446482623814821c0960" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 44751 } } }
I20250629 01:58:17.053015 20439 sys_catalog.cc:458] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [sys.catalog]: This master's current role is: LEADER
I20250629 01:58:17.053472 20438 sys_catalog.cc:455] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "e51a6e2ab5fe446482623814821c0960" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e51a6e2ab5fe446482623814821c0960" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 44751 } } }
I20250629 01:58:17.054255 20438 sys_catalog.cc:458] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [sys.catalog]: This master's current role is: LEADER
I20250629 01:58:17.057495 20446 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250629 01:58:17.069010 20446 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250629 01:58:17.083496 20446 catalog_manager.cc:1349] Generated new cluster ID: 900c6c66404c42b0bd9faf4c96645d7e
I20250629 01:58:17.083693 20446 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250629 01:58:17.116338 20446 catalog_manager.cc:1372] Generated new certificate authority record
I20250629 01:58:17.118276 20446 catalog_manager.cc:1506] Loading token signing keys...
I20250629 01:58:17.133927 20446 catalog_manager.cc:5955] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960: Generated new TSK 0
I20250629 01:58:17.134896 20446 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250629 01:58:17.156855 17741 external_mini_cluster.cc:1351] Running /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
/tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.17.83.65:0
--local_ip_for_outbound_sockets=127.17.83.65
--webserver_interface=127.17.83.65
--webserver_port=0
--tserver_master_addrs=127.17.83.126:44751
--builtin_ntp_servers=127.17.83.84:42817
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
W20250629 01:58:17.439711 20456 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:58:17.440137 20456 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250629 01:58:17.440562 20456 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:58:17.470211 20456 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250629 01:58:17.471009 20456 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.17.83.65
I20250629 01:58:17.503897 20456 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.17.83.84:42817
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.17.83.65:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.17.83.65
--webserver_port=0
--tserver_master_addrs=127.17.83.126:44751
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.17.83.65
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision f0776864a65c19ebdd909053f3e624e8bcc25cee
build type FASTDEBUG
built by None at 29 Jun 2025 01:43:20 UTC on 5fd53c4cbb9d
build id 6821
TSAN enabled
I20250629 01:58:17.505472 20456 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250629 01:58:17.507357 20456 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250629 01:58:17.523787 20463 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:58:17.526664 20465 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:58:17.528195 20456 server_base.cc:1048] running on GCE node
W20250629 01:58:17.527065 20462 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:58:18.685276 20456 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250629 01:58:18.687845 20456 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250629 01:58:18.689280 20456 hybrid_clock.cc:648] HybridClock initialized: now 1751162298689233 us; error 45 us; skew 500 ppm
I20250629 01:58:18.690237 20456 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250629 01:58:18.697214 20456 webserver.cc:469] Webserver started at http://127.17.83.65:33231/ using document root <none> and password file <none>
I20250629 01:58:18.698349 20456 fs_manager.cc:362] Metadata directory not provided
I20250629 01:58:18.698598 20456 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250629 01:58:18.699115 20456 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250629 01:58:18.704840 20456 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "a30df5c9a5634e1b9e6f378d3d3db88d"
format_stamp: "Formatted at 2025-06-29 01:58:18 on dist-test-slave-v1mb"
I20250629 01:58:18.706076 20456 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "a30df5c9a5634e1b9e6f378d3d3db88d"
format_stamp: "Formatted at 2025-06-29 01:58:18 on dist-test-slave-v1mb"
I20250629 01:58:18.714311 20456 fs_manager.cc:696] Time spent creating directory manager: real 0.008s user 0.005s sys 0.004s
I20250629 01:58:18.720788 20472 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250629 01:58:18.721638 20456 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.005s sys 0.000s
I20250629 01:58:18.721925 20456 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "a30df5c9a5634e1b9e6f378d3d3db88d"
format_stamp: "Formatted at 2025-06-29 01:58:18 on dist-test-slave-v1mb"
I20250629 01:58:18.722203 20456 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250629 01:58:18.792198 20456 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250629 01:58:18.793521 20456 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250629 01:58:18.793943 20456 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250629 01:58:18.796085 20456 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250629 01:58:18.800062 20456 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250629 01:58:18.800238 20456 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250629 01:58:18.800449 20456 ts_tablet_manager.cc:610] Registered 0 tablets
I20250629 01:58:18.800588 20456 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250629 01:58:18.923940 20456 rpc_server.cc:307] RPC server started. Bound to: 127.17.83.65:43367
I20250629 01:58:18.924032 20584 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.17.83.65:43367 every 8 connection(s)
I20250629 01:58:18.926678 20456 server_base.cc:1180] Dumped server information to /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250629 01:58:18.937096 17741 external_mini_cluster.cc:1413] Started /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu as pid 20456
I20250629 01:58:18.937577 17741 external_mini_cluster.cc:1427] Reading /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250629 01:58:18.943774 17741 external_mini_cluster.cc:1351] Running /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
/tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.17.83.66:0
--local_ip_for_outbound_sockets=127.17.83.66
--webserver_interface=127.17.83.66
--webserver_port=0
--tserver_master_addrs=127.17.83.126:44751
--builtin_ntp_servers=127.17.83.84:42817
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250629 01:58:18.948062 20585 heartbeater.cc:344] Connected to a master server at 127.17.83.126:44751
I20250629 01:58:18.948622 20585 heartbeater.cc:461] Registering TS with master...
I20250629 01:58:18.949604 20585 heartbeater.cc:507] Master 127.17.83.126:44751 requested a full tablet report, sending...
I20250629 01:58:18.951961 20397 ts_manager.cc:194] Registered new tserver with Master: a30df5c9a5634e1b9e6f378d3d3db88d (127.17.83.65:43367)
I20250629 01:58:18.953908 20397 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.17.83.65:40505
W20250629 01:58:19.230834 20589 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:58:19.231379 20589 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250629 01:58:19.231870 20589 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:58:19.261708 20589 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250629 01:58:19.262553 20589 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.17.83.66
I20250629 01:58:19.296299 20589 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.17.83.84:42817
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.17.83.66:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.17.83.66
--webserver_port=0
--tserver_master_addrs=127.17.83.126:44751
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.17.83.66
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision f0776864a65c19ebdd909053f3e624e8bcc25cee
build type FASTDEBUG
built by None at 29 Jun 2025 01:43:20 UTC on 5fd53c4cbb9d
build id 6821
TSAN enabled
I20250629 01:58:19.297616 20589 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250629 01:58:19.299161 20589 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250629 01:58:19.313231 20595 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:58:19.957441 20585 heartbeater.cc:499] Master 127.17.83.126:44751 was elected leader, sending a full tablet report...
W20250629 01:58:19.313741 20596 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:58:19.315862 20589 server_base.cc:1048] running on GCE node
W20250629 01:58:19.316810 20598 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:58:20.437398 20589 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250629 01:58:20.439736 20589 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250629 01:58:20.441084 20589 hybrid_clock.cc:648] HybridClock initialized: now 1751162300441048 us; error 53 us; skew 500 ppm
I20250629 01:58:20.441845 20589 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250629 01:58:20.447835 20589 webserver.cc:469] Webserver started at http://127.17.83.66:34819/ using document root <none> and password file <none>
I20250629 01:58:20.448810 20589 fs_manager.cc:362] Metadata directory not provided
I20250629 01:58:20.449016 20589 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250629 01:58:20.449412 20589 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250629 01:58:20.454170 20589 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data/instance:
uuid: "d2f2badc6b0341918964e73a7e3a0fe5"
format_stamp: "Formatted at 2025-06-29 01:58:20 on dist-test-slave-v1mb"
I20250629 01:58:20.455281 20589 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal/instance:
uuid: "d2f2badc6b0341918964e73a7e3a0fe5"
format_stamp: "Formatted at 2025-06-29 01:58:20 on dist-test-slave-v1mb"
I20250629 01:58:20.462253 20589 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.006s sys 0.000s
I20250629 01:58:20.467530 20605 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250629 01:58:20.468364 20589 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.004s sys 0.000s
I20250629 01:58:20.468652 20589 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "d2f2badc6b0341918964e73a7e3a0fe5"
format_stamp: "Formatted at 2025-06-29 01:58:20 on dist-test-slave-v1mb"
I20250629 01:58:20.468940 20589 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250629 01:58:20.520025 20589 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250629 01:58:20.521390 20589 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250629 01:58:20.521767 20589 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250629 01:58:20.524088 20589 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250629 01:58:20.527989 20589 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250629 01:58:20.528174 20589 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250629 01:58:20.528367 20589 ts_tablet_manager.cc:610] Registered 0 tablets
I20250629 01:58:20.528523 20589 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250629 01:58:20.656749 20589 rpc_server.cc:307] RPC server started. Bound to: 127.17.83.66:45511
I20250629 01:58:20.656903 20717 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.17.83.66:45511 every 8 connection(s)
I20250629 01:58:20.659148 20589 server_base.cc:1180] Dumped server information to /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250629 01:58:20.668980 17741 external_mini_cluster.cc:1413] Started /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu as pid 20589
I20250629 01:58:20.669366 17741 external_mini_cluster.cc:1427] Reading /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal/instance
I20250629 01:58:20.675465 17741 external_mini_cluster.cc:1351] Running /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
/tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.17.83.67:0
--local_ip_for_outbound_sockets=127.17.83.67
--webserver_interface=127.17.83.67
--webserver_port=0
--tserver_master_addrs=127.17.83.126:44751
--builtin_ntp_servers=127.17.83.84:42817
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250629 01:58:20.679100 20718 heartbeater.cc:344] Connected to a master server at 127.17.83.126:44751
I20250629 01:58:20.679646 20718 heartbeater.cc:461] Registering TS with master...
I20250629 01:58:20.680928 20718 heartbeater.cc:507] Master 127.17.83.126:44751 requested a full tablet report, sending...
I20250629 01:58:20.683422 20397 ts_manager.cc:194] Registered new tserver with Master: d2f2badc6b0341918964e73a7e3a0fe5 (127.17.83.66:45511)
I20250629 01:58:20.684599 20397 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.17.83.66:50811
W20250629 01:58:20.966552 20722 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:58:20.967041 20722 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250629 01:58:20.967535 20722 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:58:20.997963 20722 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250629 01:58:20.998817 20722 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.17.83.67
I20250629 01:58:21.033286 20722 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.17.83.84:42817
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.17.83.67:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.17.83.67
--webserver_port=0
--tserver_master_addrs=127.17.83.126:44751
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.17.83.67
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision f0776864a65c19ebdd909053f3e624e8bcc25cee
build type FASTDEBUG
built by None at 29 Jun 2025 01:43:20 UTC on 5fd53c4cbb9d
build id 6821
TSAN enabled
I20250629 01:58:21.034595 20722 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250629 01:58:21.036231 20722 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250629 01:58:21.053316 20729 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:58:21.688294 20718 heartbeater.cc:499] Master 127.17.83.126:44751 was elected leader, sending a full tablet report...
W20250629 01:58:21.053612 20731 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:58:21.055053 20722 server_base.cc:1048] running on GCE node
W20250629 01:58:21.053584 20728 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:58:22.263598 20722 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250629 01:58:22.273587 20722 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250629 01:58:22.275244 20722 hybrid_clock.cc:648] HybridClock initialized: now 1751162302275155 us; error 91 us; skew 500 ppm
I20250629 01:58:22.276634 20722 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250629 01:58:22.329365 20722 webserver.cc:469] Webserver started at http://127.17.83.67:35205/ using document root <none> and password file <none>
I20250629 01:58:22.330636 20722 fs_manager.cc:362] Metadata directory not provided
I20250629 01:58:22.330971 20722 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250629 01:58:22.331610 20722 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250629 01:58:22.338356 20722 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data/instance:
uuid: "8e0c882ed93e411495d3bc24bb61eb11"
format_stamp: "Formatted at 2025-06-29 01:58:22 on dist-test-slave-v1mb"
I20250629 01:58:22.340185 20722 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal/instance:
uuid: "8e0c882ed93e411495d3bc24bb61eb11"
format_stamp: "Formatted at 2025-06-29 01:58:22 on dist-test-slave-v1mb"
I20250629 01:58:22.349601 20722 fs_manager.cc:696] Time spent creating directory manager: real 0.009s user 0.000s sys 0.008s
I20250629 01:58:22.356870 20738 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250629 01:58:22.357970 20722 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.001s sys 0.004s
I20250629 01:58:22.358279 20722 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "8e0c882ed93e411495d3bc24bb61eb11"
format_stamp: "Formatted at 2025-06-29 01:58:22 on dist-test-slave-v1mb"
I20250629 01:58:22.358595 20722 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250629 01:58:22.415133 20722 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250629 01:58:22.416489 20722 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250629 01:58:22.416896 20722 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250629 01:58:22.419400 20722 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250629 01:58:22.423049 20722 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250629 01:58:22.423273 20722 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.001s sys 0.000s
I20250629 01:58:22.423511 20722 ts_tablet_manager.cc:610] Registered 0 tablets
I20250629 01:58:22.423664 20722 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250629 01:58:22.550438 20722 rpc_server.cc:307] RPC server started. Bound to: 127.17.83.67:37557
I20250629 01:58:22.550537 20850 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.17.83.67:37557 every 8 connection(s)
I20250629 01:58:22.552841 20722 server_base.cc:1180] Dumped server information to /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250629 01:58:22.558547 17741 external_mini_cluster.cc:1413] Started /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu as pid 20722
I20250629 01:58:22.558933 17741 external_mini_cluster.cc:1427] Reading /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal/instance
I20250629 01:58:22.571900 20851 heartbeater.cc:344] Connected to a master server at 127.17.83.126:44751
I20250629 01:58:22.572321 20851 heartbeater.cc:461] Registering TS with master...
I20250629 01:58:22.573246 20851 heartbeater.cc:507] Master 127.17.83.126:44751 requested a full tablet report, sending...
I20250629 01:58:22.575295 20397 ts_manager.cc:194] Registered new tserver with Master: 8e0c882ed93e411495d3bc24bb61eb11 (127.17.83.67:37557)
I20250629 01:58:22.576627 20397 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.17.83.67:38061
I20250629 01:58:22.577852 17741 external_mini_cluster.cc:934] 3 TS(s) registered with all masters
I20250629 01:58:22.606961 17741 external_mini_cluster.cc:934] 3 TS(s) registered with all masters
I20250629 01:58:22.607288 17741 test_util.cc:276] Using random seed: 1056812965
I20250629 01:58:22.645605 20396 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:47530:
name: "TestTable"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
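The request dump above fully specifies the table the test creates: three columns (key INT32 as the primary key, int_val INT32, nullable string_val STRING), a single replica, and range partitioning on key. For reference only, a minimal client-side sketch with the kudu-python client (an assumption; the test itself drives the request through the C++ client and the external mini cluster) could issue an equivalent CreateTable:

    import kudu
    from kudu.client import Partitioning

    # Master address taken from the log above; any reachable Kudu master works.
    client = kudu.connect(host='127.17.83.126', port=44751)

    builder = kudu.schema_builder()
    builder.add_column('key').type(kudu.int32).nullable(False).primary_key()
    builder.add_column('int_val').type(kudu.int32).nullable(False)
    builder.add_column('string_val').type(kudu.string).nullable(True)
    schema = builder.build()

    # Range-partition on 'key', matching partition_schema in the request.
    partitioning = Partitioning().set_range_partition_columns(['key'])

    # num_replicas: 1 in the request; the keyword name n_replicas is assumed here.
    client.create_table('TestTable', schema, partitioning, n_replicas=1)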
I20250629 01:58:22.686318 20653 tablet_service.cc:1468] Processing CreateTablet for tablet 63f8fd3146fb4036bf60a5a070659dc8 (DEFAULT_TABLE table=TestTable [id=26b1f0dde35c41a7a4d4f70c9e95856d]), partition=RANGE (key) PARTITION UNBOUNDED
I20250629 01:58:22.687776 20653 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 63f8fd3146fb4036bf60a5a070659dc8. 1 dirs total, 0 dirs full, 0 dirs failed
I20250629 01:58:22.706735 20871 tablet_bootstrap.cc:492] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5: Bootstrap starting.
I20250629 01:58:22.711722 20871 tablet_bootstrap.cc:654] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5: Neither blocks nor log segments found. Creating new log.
I20250629 01:58:22.713375 20871 log.cc:826] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5: Log is configured to *not* fsync() on all Append() calls
I20250629 01:58:22.717568 20871 tablet_bootstrap.cc:492] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5: No bootstrap required, opened a new log
I20250629 01:58:22.717912 20871 ts_tablet_manager.cc:1397] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5: Time spent bootstrapping tablet: real 0.012s user 0.003s sys 0.007s
I20250629 01:58:22.734601 20871 raft_consensus.cc:357] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } }
I20250629 01:58:22.735176 20871 raft_consensus.cc:383] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250629 01:58:22.735416 20871 raft_consensus.cc:738] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: d2f2badc6b0341918964e73a7e3a0fe5, State: Initialized, Role: FOLLOWER
I20250629 01:58:22.735996 20871 consensus_queue.cc:260] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } }
I20250629 01:58:22.736470 20871 raft_consensus.cc:397] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250629 01:58:22.736693 20871 raft_consensus.cc:491] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250629 01:58:22.736958 20871 raft_consensus.cc:3058] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 [term 0 FOLLOWER]: Advancing to term 1
I20250629 01:58:22.741498 20871 raft_consensus.cc:513] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } }
I20250629 01:58:22.742131 20871 leader_election.cc:304] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: d2f2badc6b0341918964e73a7e3a0fe5; no voters:
I20250629 01:58:22.743772 20871 leader_election.cc:290] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250629 01:58:22.744491 20873 raft_consensus.cc:2802] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 [term 1 FOLLOWER]: Leader election won for term 1
I20250629 01:58:22.746771 20873 raft_consensus.cc:695] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 [term 1 LEADER]: Becoming Leader. State: Replica: d2f2badc6b0341918964e73a7e3a0fe5, State: Running, Role: LEADER
I20250629 01:58:22.747611 20873 consensus_queue.cc:237] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } }
I20250629 01:58:22.747938 20871 ts_tablet_manager.cc:1428] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5: Time spent starting tablet: real 0.030s user 0.026s sys 0.005s
I20250629 01:58:22.756804 20396 catalog_manager.cc:5582] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 reported cstate change: term changed from 0 to 1, leader changed from <none> to d2f2badc6b0341918964e73a7e3a0fe5 (127.17.83.66). New cstate: current_term: 1 leader_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } health_report { overall_health: HEALTHY } } }
I20250629 01:58:22.927069 17741 test_util.cc:276] Using random seed: 1057132735
I20250629 01:58:22.947670 20396 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:47542:
name: "TestTable1"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
I20250629 01:58:22.973254 20786 tablet_service.cc:1468] Processing CreateTablet for tablet 35f5dbaa898b47369f784618572fd3c8 (DEFAULT_TABLE table=TestTable1 [id=5011e20c18c04b1e9259bf797c315b3a]), partition=RANGE (key) PARTITION UNBOUNDED
I20250629 01:58:22.974499 20786 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 35f5dbaa898b47369f784618572fd3c8. 1 dirs total, 0 dirs full, 0 dirs failed
I20250629 01:58:22.993029 20892 tablet_bootstrap.cc:492] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11: Bootstrap starting.
I20250629 01:58:22.998608 20892 tablet_bootstrap.cc:654] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11: Neither blocks nor log segments found. Creating new log.
I20250629 01:58:23.000347 20892 log.cc:826] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11: Log is configured to *not* fsync() on all Append() calls
I20250629 01:58:23.004434 20892 tablet_bootstrap.cc:492] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11: No bootstrap required, opened a new log
I20250629 01:58:23.004778 20892 ts_tablet_manager.cc:1397] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11: Time spent bootstrapping tablet: real 0.012s user 0.010s sys 0.000s
I20250629 01:58:23.021148 20892 raft_consensus.cc:357] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } }
I20250629 01:58:23.021603 20892 raft_consensus.cc:383] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250629 01:58:23.021831 20892 raft_consensus.cc:738] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 8e0c882ed93e411495d3bc24bb61eb11, State: Initialized, Role: FOLLOWER
I20250629 01:58:23.022445 20892 consensus_queue.cc:260] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } }
I20250629 01:58:23.022904 20892 raft_consensus.cc:397] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250629 01:58:23.023131 20892 raft_consensus.cc:491] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250629 01:58:23.023445 20892 raft_consensus.cc:3058] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 [term 0 FOLLOWER]: Advancing to term 1
I20250629 01:58:23.027132 20892 raft_consensus.cc:513] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } }
I20250629 01:58:23.027786 20892 leader_election.cc:304] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 8e0c882ed93e411495d3bc24bb61eb11; no voters:
I20250629 01:58:23.029469 20892 leader_election.cc:290] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250629 01:58:23.029868 20894 raft_consensus.cc:2802] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 [term 1 FOLLOWER]: Leader election won for term 1
I20250629 01:58:23.032400 20851 heartbeater.cc:499] Master 127.17.83.126:44751 was elected leader, sending a full tablet report...
I20250629 01:58:23.032611 20892 ts_tablet_manager.cc:1428] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11: Time spent starting tablet: real 0.028s user 0.028s sys 0.000s
I20250629 01:58:23.033102 20894 raft_consensus.cc:695] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 [term 1 LEADER]: Becoming Leader. State: Replica: 8e0c882ed93e411495d3bc24bb61eb11, State: Running, Role: LEADER
I20250629 01:58:23.033735 20894 consensus_queue.cc:237] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } }
I20250629 01:58:23.043869 20396 catalog_manager.cc:5582] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 reported cstate change: term changed from 0 to 1, leader changed from <none> to 8e0c882ed93e411495d3bc24bb61eb11 (127.17.83.67). New cstate: current_term: 1 leader_uuid: "8e0c882ed93e411495d3bc24bb61eb11" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } health_report { overall_health: HEALTHY } } }
I20250629 01:58:23.180015 17741 test_util.cc:276] Using random seed: 1057385675
I20250629 01:58:23.202226 20397 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:47558:
name: "TestTable2"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
I20250629 01:58:23.229701 20520 tablet_service.cc:1468] Processing CreateTablet for tablet aa66fb7daabf41dcab7d7e05bdd17b4a (DEFAULT_TABLE table=TestTable2 [id=2b2de49158564208bc7e05e7ff05a665]), partition=RANGE (key) PARTITION UNBOUNDED
I20250629 01:58:23.231029 20520 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet aa66fb7daabf41dcab7d7e05bdd17b4a. 1 dirs total, 0 dirs full, 0 dirs failed
I20250629 01:58:23.248912 20913 tablet_bootstrap.cc:492] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d: Bootstrap starting.
I20250629 01:58:23.254462 20913 tablet_bootstrap.cc:654] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d: Neither blocks nor log segments found. Creating new log.
I20250629 01:58:23.256500 20913 log.cc:826] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d: Log is configured to *not* fsync() on all Append() calls
I20250629 01:58:23.260473 20913 tablet_bootstrap.cc:492] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d: No bootstrap required, opened a new log
I20250629 01:58:23.260841 20913 ts_tablet_manager.cc:1397] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d: Time spent bootstrapping tablet: real 0.012s user 0.010s sys 0.000s
I20250629 01:58:23.276590 20913 raft_consensus.cc:357] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } }
I20250629 01:58:23.277097 20913 raft_consensus.cc:383] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250629 01:58:23.277314 20913 raft_consensus.cc:738] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: a30df5c9a5634e1b9e6f378d3d3db88d, State: Initialized, Role: FOLLOWER
I20250629 01:58:23.277928 20913 consensus_queue.cc:260] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } }
I20250629 01:58:23.278373 20913 raft_consensus.cc:397] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250629 01:58:23.278605 20913 raft_consensus.cc:491] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250629 01:58:23.278892 20913 raft_consensus.cc:3058] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d [term 0 FOLLOWER]: Advancing to term 1
I20250629 01:58:23.282639 20913 raft_consensus.cc:513] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } }
I20250629 01:58:23.283201 20913 leader_election.cc:304] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: a30df5c9a5634e1b9e6f378d3d3db88d; no voters:
I20250629 01:58:23.284830 20913 leader_election.cc:290] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d [CANDIDATE]: Term 1 election: Requested vote from peers
I20250629 01:58:23.285187 20915 raft_consensus.cc:2802] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d [term 1 FOLLOWER]: Leader election won for term 1
I20250629 01:58:23.288852 20915 raft_consensus.cc:695] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d [term 1 LEADER]: Becoming Leader. State: Replica: a30df5c9a5634e1b9e6f378d3d3db88d, State: Running, Role: LEADER
I20250629 01:58:23.289577 20915 consensus_queue.cc:237] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } }
I20250629 01:58:23.290012 20913 ts_tablet_manager.cc:1428] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d: Time spent starting tablet: real 0.029s user 0.026s sys 0.004s
I20250629 01:58:23.299669 20397 catalog_manager.cc:5582] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d reported cstate change: term changed from 0 to 1, leader changed from <none> to a30df5c9a5634e1b9e6f378d3d3db88d (127.17.83.65). New cstate: current_term: 1 leader_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } health_report { overall_health: HEALTHY } } }
W20250629 01:58:23.332907 20714 debug-util.cc:398] Leaking SignalData structure 0x7b08000ace80 after lost signal to thread 20590
I20250629 01:58:23.523977 17741 external_mini_cluster.cc:1620] Killing /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu with pid 20364
W20250629 01:58:23.773072 20718 heartbeater.cc:646] Failed to heartbeat to 127.17.83.126:44751 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.17.83.126:44751: connect: Connection refused (error 111)
W20250629 01:58:24.074434 20851 heartbeater.cc:646] Failed to heartbeat to 127.17.83.126:44751 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.17.83.126:44751: connect: Connection refused (error 111)
W20250629 01:58:24.328425 20585 heartbeater.cc:646] Failed to heartbeat to 127.17.83.126:44751 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.17.83.126:44751: connect: Connection refused (error 111)
I20250629 01:58:28.380343 17741 external_mini_cluster.cc:1620] Killing /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu with pid 20456
I20250629 01:58:28.403327 17741 external_mini_cluster.cc:1620] Killing /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu with pid 20589
I20250629 01:58:28.430737 17741 external_mini_cluster.cc:1620] Killing /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu with pid 20722
I20250629 01:58:28.456677 17741 external_mini_cluster.cc:1351] Running /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
/tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.17.83.126:44751
--webserver_interface=127.17.83.126
--webserver_port=41719
--builtin_ntp_servers=127.17.83.84:42817
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.17.83.126:44751 with env {}
W20250629 01:58:28.730700 20993 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:58:28.731226 20993 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250629 01:58:28.731611 20993 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:58:28.760485 20993 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250629 01:58:28.760746 20993 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250629 01:58:28.760960 20993 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250629 01:58:28.761147 20993 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250629 01:58:28.791463 20993 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.17.83.84:42817
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.17.83.126:44751
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.17.83.126:44751
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.17.83.126
--webserver_port=41719
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.18.0-SNAPSHOT
revision f0776864a65c19ebdd909053f3e624e8bcc25cee
build type FASTDEBUG
built by None at 29 Jun 2025 01:43:20 UTC on 5fd53c4cbb9d
build id 6821
TSAN enabled
I20250629 01:58:28.792589 20993 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250629 01:58:28.794061 20993 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250629 01:58:28.808842 20999 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:58:28.808887 21000 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:58:28.809767 21002 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:58:28.810979 20993 server_base.cc:1048] running on GCE node
I20250629 01:58:29.913362 20993 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250629 01:58:29.915966 20993 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250629 01:58:29.917331 20993 hybrid_clock.cc:648] HybridClock initialized: now 1751162309917291 us; error 49 us; skew 500 ppm
I20250629 01:58:29.918140 20993 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250629 01:58:29.928544 20993 webserver.cc:469] Webserver started at http://127.17.83.126:41719/ using document root <none> and password file <none>
I20250629 01:58:29.930038 20993 fs_manager.cc:362] Metadata directory not provided
I20250629 01:58:29.930403 20993 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250629 01:58:29.941844 20993 fs_manager.cc:714] Time spent opening directory manager: real 0.007s user 0.004s sys 0.004s
I20250629 01:58:29.947919 21009 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250629 01:58:29.949486 20993 fs_manager.cc:730] Time spent opening block manager: real 0.005s user 0.002s sys 0.002s
I20250629 01:58:29.949971 20993 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
uuid: "e51a6e2ab5fe446482623814821c0960"
format_stamp: "Formatted at 2025-06-29 01:58:16 on dist-test-slave-v1mb"
I20250629 01:58:29.952350 20993 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250629 01:58:30.003124 20993 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250629 01:58:30.004612 20993 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250629 01:58:30.005045 20993 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250629 01:58:30.070247 20993 rpc_server.cc:307] RPC server started. Bound to: 127.17.83.126:44751
I20250629 01:58:30.070318 21060 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.17.83.126:44751 every 8 connection(s)
I20250629 01:58:30.072830 20993 server_base.cc:1180] Dumped server information to /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250629 01:58:30.075904 17741 external_mini_cluster.cc:1413] Started /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu as pid 20993
I20250629 01:58:30.077283 17741 external_mini_cluster.cc:1351] Running /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
/tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.17.83.65:43367
--local_ip_for_outbound_sockets=127.17.83.65
--tserver_master_addrs=127.17.83.126:44751
--webserver_port=33231
--webserver_interface=127.17.83.65
--builtin_ntp_servers=127.17.83.84:42817
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250629 01:58:30.082670 21061 sys_catalog.cc:263] Verifying existing consensus state
I20250629 01:58:30.087067 21061 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960: Bootstrap starting.
I20250629 01:58:30.096493 21061 log.cc:826] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960: Log is configured to *not* fsync() on all Append() calls
I20250629 01:58:30.141223 21061 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960: Bootstrap replayed 1/1 log segments. Stats: ops{read=18 overwritten=0 applied=18 ignored=0} inserts{seen=13 ignored=0} mutations{seen=10 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250629 01:58:30.142006 21061 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960: Bootstrap complete.
I20250629 01:58:30.162173 21061 raft_consensus.cc:357] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e51a6e2ab5fe446482623814821c0960" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 44751 } }
I20250629 01:58:30.164197 21061 raft_consensus.cc:738] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: e51a6e2ab5fe446482623814821c0960, State: Initialized, Role: FOLLOWER
I20250629 01:58:30.164989 21061 consensus_queue.cc:260] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 18, Last appended: 2.18, Last appended by leader: 18, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e51a6e2ab5fe446482623814821c0960" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 44751 } }
I20250629 01:58:30.165462 21061 raft_consensus.cc:397] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [term 2 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250629 01:58:30.165714 21061 raft_consensus.cc:491] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [term 2 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250629 01:58:30.166011 21061 raft_consensus.cc:3058] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [term 2 FOLLOWER]: Advancing to term 3
I20250629 01:58:30.171087 21061 raft_consensus.cc:513] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [term 3 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e51a6e2ab5fe446482623814821c0960" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 44751 } }
I20250629 01:58:30.171701 21061 leader_election.cc:304] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [CANDIDATE]: Term 3 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: e51a6e2ab5fe446482623814821c0960; no voters:
I20250629 01:58:30.173799 21061 leader_election.cc:290] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [CANDIDATE]: Term 3 election: Requested vote from peers
I20250629 01:58:30.174304 21065 raft_consensus.cc:2802] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [term 3 FOLLOWER]: Leader election won for term 3
I20250629 01:58:30.177424 21065 raft_consensus.cc:695] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [term 3 LEADER]: Becoming Leader. State: Replica: e51a6e2ab5fe446482623814821c0960, State: Running, Role: LEADER
I20250629 01:58:30.178265 21065 consensus_queue.cc:237] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 18, Committed index: 18, Last appended: 2.18, Last appended by leader: 18, Current term: 3, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e51a6e2ab5fe446482623814821c0960" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 44751 } }
I20250629 01:58:30.178656 21061 sys_catalog.cc:564] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [sys.catalog]: configured and running, proceeding with master startup.
I20250629 01:58:30.189349 21066 sys_catalog.cc:455] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 3 leader_uuid: "e51a6e2ab5fe446482623814821c0960" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e51a6e2ab5fe446482623814821c0960" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 44751 } } }
I20250629 01:58:30.190150 21066 sys_catalog.cc:458] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [sys.catalog]: This master's current role is: LEADER
I20250629 01:58:30.192520 21067 sys_catalog.cc:455] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [sys.catalog]: SysCatalogTable state changed. Reason: New leader e51a6e2ab5fe446482623814821c0960. Latest consensus state: current_term: 3 leader_uuid: "e51a6e2ab5fe446482623814821c0960" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e51a6e2ab5fe446482623814821c0960" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 44751 } } }
I20250629 01:58:30.193341 21067 sys_catalog.cc:458] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [sys.catalog]: This master's current role is: LEADER
I20250629 01:58:30.199628 21072 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250629 01:58:30.211983 21072 catalog_manager.cc:671] Loaded metadata for table TestTable2 [id=2b2de49158564208bc7e05e7ff05a665]
I20250629 01:58:30.213496 21072 catalog_manager.cc:671] Loaded metadata for table TestTable [id=b41558b1fab6485a8927ac0695ac7d4f]
I20250629 01:58:30.214921 21072 catalog_manager.cc:671] Loaded metadata for table TestTable1 [id=ed12c0fe4a09403b8fef6746d56b567d]
I20250629 01:58:30.222190 21072 tablet_loader.cc:96] loaded metadata for tablet 35f5dbaa898b47369f784618572fd3c8 (table TestTable1 [id=ed12c0fe4a09403b8fef6746d56b567d])
I20250629 01:58:30.223301 21072 tablet_loader.cc:96] loaded metadata for tablet 63f8fd3146fb4036bf60a5a070659dc8 (table TestTable [id=b41558b1fab6485a8927ac0695ac7d4f])
I20250629 01:58:30.224531 21072 tablet_loader.cc:96] loaded metadata for tablet aa66fb7daabf41dcab7d7e05bdd17b4a (table TestTable2 [id=2b2de49158564208bc7e05e7ff05a665])
I20250629 01:58:30.227003 21072 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250629 01:58:30.231981 21072 catalog_manager.cc:1261] Loaded cluster ID: 900c6c66404c42b0bd9faf4c96645d7e
I20250629 01:58:30.232262 21072 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250629 01:58:30.240257 21072 catalog_manager.cc:1506] Loading token signing keys...
I20250629 01:58:30.245563 21072 catalog_manager.cc:5966] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960: Loaded TSK: 0
I20250629 01:58:30.247274 21072 catalog_manager.cc:1516] Initializing in-progress tserver states...
W20250629 01:58:30.403028 21063 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:58:30.403501 21063 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250629 01:58:30.403995 21063 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:58:30.437780 21063 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250629 01:58:30.438723 21063 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.17.83.65
I20250629 01:58:30.472663 21063 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.17.83.84:42817
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.17.83.65:43367
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.17.83.65
--webserver_port=33231
--tserver_master_addrs=127.17.83.126:44751
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.17.83.65
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision f0776864a65c19ebdd909053f3e624e8bcc25cee
build type FASTDEBUG
built by None at 29 Jun 2025 01:43:20 UTC on 5fd53c4cbb9d
build id 6821
TSAN enabled
I20250629 01:58:30.473919 21063 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250629 01:58:30.475373 21063 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250629 01:58:30.492759 21091 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:58:30.499508 21088 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:58:32.071122 21090 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1577 milliseconds
I20250629 01:58:32.071148 21063 server_base.cc:1043] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
W20250629 01:58:30.493275 21089 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:58:32.076125 21063 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250629 01:58:32.080606 21063 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250629 01:58:32.082103 21063 hybrid_clock.cc:648] HybridClock initialized: now 1751162312082035 us; error 78 us; skew 500 ppm
I20250629 01:58:32.083243 21063 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250629 01:58:32.090993 21063 webserver.cc:469] Webserver started at http://127.17.83.65:33231/ using document root <none> and password file <none>
I20250629 01:58:32.091871 21063 fs_manager.cc:362] Metadata directory not provided
I20250629 01:58:32.092104 21063 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250629 01:58:32.099993 21063 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.003s sys 0.005s
I20250629 01:58:32.104665 21098 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250629 01:58:32.105737 21063 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.002s sys 0.002s
I20250629 01:58:32.106035 21063 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "a30df5c9a5634e1b9e6f378d3d3db88d"
format_stamp: "Formatted at 2025-06-29 01:58:18 on dist-test-slave-v1mb"
I20250629 01:58:32.107914 21063 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250629 01:58:32.153378 21063 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250629 01:58:32.154806 21063 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250629 01:58:32.155272 21063 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250629 01:58:32.157661 21063 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250629 01:58:32.162879 21105 ts_tablet_manager.cc:542] Loading tablet metadata (0/1 complete)
I20250629 01:58:32.169809 21063 ts_tablet_manager.cc:579] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20250629 01:58:32.170055 21063 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.008s user 0.002s sys 0.000s
I20250629 01:58:32.170310 21063 ts_tablet_manager.cc:594] Registering tablets (0/1 complete)
I20250629 01:58:32.174811 21063 ts_tablet_manager.cc:610] Registered 1 tablets
I20250629 01:58:32.175014 21063 ts_tablet_manager.cc:589] Time spent register tablets: real 0.005s user 0.000s sys 0.002s
I20250629 01:58:32.175388 21105 tablet_bootstrap.cc:492] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d: Bootstrap starting.
I20250629 01:58:32.229187 21105 log.cc:826] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d: Log is configured to *not* fsync() on all Append() calls
I20250629 01:58:32.308131 21105 tablet_bootstrap.cc:492] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d: Bootstrap replayed 1/1 log segments. Stats: ops{read=7 overwritten=0 applied=7 ignored=0} inserts{seen=300 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250629 01:58:32.309000 21105 tablet_bootstrap.cc:492] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d: Bootstrap complete.
I20250629 01:58:32.310319 21105 ts_tablet_manager.cc:1397] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d: Time spent bootstrapping tablet: real 0.135s user 0.096s sys 0.035s
I20250629 01:58:32.325453 21105 raft_consensus.cc:357] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } }
I20250629 01:58:32.327805 21105 raft_consensus.cc:738] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: a30df5c9a5634e1b9e6f378d3d3db88d, State: Initialized, Role: FOLLOWER
I20250629 01:58:32.328747 21105 consensus_queue.cc:260] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 7, Last appended: 1.7, Last appended by leader: 7, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } }
I20250629 01:58:32.329511 21105 raft_consensus.cc:397] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250629 01:58:32.329854 21105 raft_consensus.cc:491] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250629 01:58:32.330302 21105 raft_consensus.cc:3058] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d [term 1 FOLLOWER]: Advancing to term 2
I20250629 01:58:32.336179 21105 raft_consensus.cc:513] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } }
I20250629 01:58:32.336884 21105 leader_election.cc:304] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: a30df5c9a5634e1b9e6f378d3d3db88d; no voters:
I20250629 01:58:32.339532 21105 leader_election.cc:290] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d [CANDIDATE]: Term 2 election: Requested vote from peers
I20250629 01:58:32.340209 21199 raft_consensus.cc:2802] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d [term 2 FOLLOWER]: Leader election won for term 2
I20250629 01:58:32.344476 21199 raft_consensus.cc:695] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d [term 2 LEADER]: Becoming Leader. State: Replica: a30df5c9a5634e1b9e6f378d3d3db88d, State: Running, Role: LEADER
I20250629 01:58:32.344650 21105 ts_tablet_manager.cc:1428] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d: Time spent starting tablet: real 0.034s user 0.035s sys 0.001s
I20250629 01:58:32.345561 21199 consensus_queue.cc:237] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 7, Committed index: 7, Last appended: 1.7, Last appended by leader: 7, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } }
I20250629 01:58:32.363191 21063 rpc_server.cc:307] RPC server started. Bound to: 127.17.83.65:43367
I20250629 01:58:32.363698 21217 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.17.83.65:43367 every 8 connection(s)
I20250629 01:58:32.365478 21063 server_base.cc:1180] Dumped server information to /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250629 01:58:32.368045 17741 external_mini_cluster.cc:1413] Started /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu as pid 21063
I20250629 01:58:32.369789 17741 external_mini_cluster.cc:1351] Running /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
/tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.17.83.66:45511
--local_ip_for_outbound_sockets=127.17.83.66
--tserver_master_addrs=127.17.83.126:44751
--webserver_port=34819
--webserver_interface=127.17.83.66
--builtin_ntp_servers=127.17.83.84:42817
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250629 01:58:32.389161 21218 heartbeater.cc:344] Connected to a master server at 127.17.83.126:44751
I20250629 01:58:32.389698 21218 heartbeater.cc:461] Registering TS with master...
I20250629 01:58:32.391454 21218 heartbeater.cc:507] Master 127.17.83.126:44751 requested a full tablet report, sending...
I20250629 01:58:32.397029 21026 ts_manager.cc:194] Registered new tserver with Master: a30df5c9a5634e1b9e6f378d3d3db88d (127.17.83.65:43367)
I20250629 01:58:32.399752 21026 catalog_manager.cc:5582] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d reported cstate change: term changed from 1 to 2. New cstate: current_term: 2 leader_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } health_report { overall_health: HEALTHY } } }
I20250629 01:58:32.443753 21026 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.17.83.65:56017
I20250629 01:58:32.447392 21218 heartbeater.cc:499] Master 127.17.83.126:44751 was elected leader, sending a full tablet report...
W20250629 01:58:32.678804 21222 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:58:32.679322 21222 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250629 01:58:32.679831 21222 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:58:32.709905 21222 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250629 01:58:32.710736 21222 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.17.83.66
I20250629 01:58:32.743048 21222 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.17.83.84:42817
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.17.83.66:45511
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.17.83.66
--webserver_port=34819
--tserver_master_addrs=127.17.83.126:44751
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.17.83.66
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision f0776864a65c19ebdd909053f3e624e8bcc25cee
build type FASTDEBUG
built by None at 29 Jun 2025 01:43:20 UTC on 5fd53c4cbb9d
build id 6821
TSAN enabled
I20250629 01:58:32.744275 21222 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250629 01:58:32.745852 21222 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250629 01:58:32.760228 21232 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:58:32.761096 21233 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:58:32.761220 21235 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:58:32.761868 21222 server_base.cc:1048] running on GCE node
I20250629 01:58:33.890429 21222 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250629 01:58:33.893112 21222 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250629 01:58:33.894500 21222 hybrid_clock.cc:648] HybridClock initialized: now 1751162313894446 us; error 73 us; skew 500 ppm
I20250629 01:58:33.895182 21222 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250629 01:58:33.901803 21222 webserver.cc:469] Webserver started at http://127.17.83.66:34819/ using document root <none> and password file <none>
I20250629 01:58:33.902604 21222 fs_manager.cc:362] Metadata directory not provided
I20250629 01:58:33.902833 21222 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250629 01:58:33.909569 21222 fs_manager.cc:714] Time spent opening directory manager: real 0.004s user 0.005s sys 0.000s
I20250629 01:58:33.914005 21242 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250629 01:58:33.914901 21222 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.002s sys 0.000s
I20250629 01:58:33.915215 21222 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "d2f2badc6b0341918964e73a7e3a0fe5"
format_stamp: "Formatted at 2025-06-29 01:58:20 on dist-test-slave-v1mb"
I20250629 01:58:33.917060 21222 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250629 01:58:33.973176 21222 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250629 01:58:33.974620 21222 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250629 01:58:33.975000 21222 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250629 01:58:33.977450 21222 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250629 01:58:33.982735 21249 ts_tablet_manager.cc:542] Loading tablet metadata (0/1 complete)
I20250629 01:58:33.990375 21222 ts_tablet_manager.cc:579] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20250629 01:58:33.990597 21222 ts_tablet_manager.cc:525] Time spent loading tablet metadata: real 0.009s	user 0.001s	sys 0.001s
I20250629 01:58:33.990862 21222 ts_tablet_manager.cc:594] Registering tablets (0/1 complete)
I20250629 01:58:33.994882 21222 ts_tablet_manager.cc:610] Registered 1 tablets
I20250629 01:58:33.995056 21222 ts_tablet_manager.cc:589] Time spent registering tablets: real 0.004s	user 0.001s	sys 0.000s
I20250629 01:58:33.995431 21249 tablet_bootstrap.cc:492] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5: Bootstrap starting.
I20250629 01:58:34.051652 21249 log.cc:826] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5: Log is configured to *not* fsync() on all Append() calls
I20250629 01:58:34.169137 21222 rpc_server.cc:307] RPC server started. Bound to: 127.17.83.66:45511
I20250629 01:58:34.169365 21356 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.17.83.66:45511 every 8 connection(s)
I20250629 01:58:34.169865 21249 tablet_bootstrap.cc:492] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5: Bootstrap replayed 1/1 log segments. Stats: ops{read=9 overwritten=0 applied=9 ignored=0} inserts{seen=400 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250629 01:58:34.170580 21249 tablet_bootstrap.cc:492] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5: Bootstrap complete.
I20250629 01:58:34.171614 21222 server_base.cc:1180] Dumped server information to /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250629 01:58:34.171902 21249 ts_tablet_manager.cc:1397] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5: Time spent bootstrapping tablet: real 0.177s user 0.128s sys 0.046s
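(Editor's note) Tablet bootstrap above replays the WAL segments and summarizes counters for operations read, overwritten, applied, and ignored ("ops{read=9 overwritten=0 applied=9 ignored=0}"). Purely as a hedged illustration, not Kudu's tablet_bootstrap.cc, a replay loop that tallies such counters might look like the sketch below; the ReplayedOp type and its fields are hypothetical.

    // Hedged sketch: tallying WAL replay statistics the way the
    // "Bootstrap replayed ..." log line summarizes them. ReplayedOp is a
    // hypothetical stand-in, not a Kudu type.
    #include <cstdint>
    #include <iostream>
    #include <vector>

    struct ReplayedOp {
      bool overwritten;  // superseded by a later op at the same index
      bool applied;      // successfully applied to the tablet
    };

    struct ReplayStats {
      int64_t read = 0, overwritten = 0, applied = 0, ignored = 0;
    };

    ReplayStats ReplaySegment(const std::vector<ReplayedOp>& ops) {
      ReplayStats s;
      for (const ReplayedOp& op : ops) {
        ++s.read;
        if (op.overwritten) { ++s.overwritten; continue; }
        if (op.applied) ++s.applied; else ++s.ignored;
      }
      return s;
    }

    int main() {
      // 9 ops, all applied: matches the bootstrap summary in the log above.
      std::vector<ReplayedOp> ops(9, ReplayedOp{false, true});
      ReplayStats s = ReplaySegment(ops);
      std::cout << "ops{read=" << s.read << " overwritten=" << s.overwritten
                << " applied=" << s.applied << " ignored=" << s.ignored << "}\n";
      return 0;
    }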
I20250629 01:58:34.178486 17741 external_mini_cluster.cc:1413] Started /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu as pid 21222
I20250629 01:58:34.180317 17741 external_mini_cluster.cc:1351] Running /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
/tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.17.83.67:37557
--local_ip_for_outbound_sockets=127.17.83.67
--tserver_master_addrs=127.17.83.126:44751
--webserver_port=35205
--webserver_interface=127.17.83.67
--builtin_ntp_servers=127.17.83.84:42817
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
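(Editor's note) The listing above is the full command line the external mini cluster passes when launching this tablet server (ts-2). As a hedged illustration only, assembling such an argument vector before spawning the child process could look like the sketch below; the flag strings are copied from the log, while the surrounding program and any launch step are hypothetical and not external_mini_cluster.cc.

    // Hedged sketch: building an argument vector like the tablet-server
    // command line logged above. Flags are taken verbatim from the log; how
    // the child process is actually spawned is not shown here.
    #include <iostream>
    #include <string>
    #include <vector>

    int main() {
      const std::vector<std::string> argv = {
          "/tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu",
          "tserver", "run",
          "--rpc_bind_addresses=127.17.83.67:37557",
          "--local_ip_for_outbound_sockets=127.17.83.67",
          "--tserver_master_addrs=127.17.83.126:44751",
          "--time_source=builtin",
          "--builtin_ntp_servers=127.17.83.84:42817",
          "--consensus_rpc_timeout_ms=30000",
      };
      for (const std::string& arg : argv) std::cout << arg << '\n';
      return 0;
    }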
I20250629 01:58:34.191107 21249 raft_consensus.cc:357] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } }
I20250629 01:58:34.193641 21249 raft_consensus.cc:738] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: d2f2badc6b0341918964e73a7e3a0fe5, State: Initialized, Role: FOLLOWER
I20250629 01:58:34.194494 21249 consensus_queue.cc:260] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 9, Last appended: 1.9, Last appended by leader: 9, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } }
I20250629 01:58:34.194944 21249 raft_consensus.cc:397] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250629 01:58:34.195261 21249 raft_consensus.cc:491] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250629 01:58:34.195428 21357 heartbeater.cc:344] Connected to a master server at 127.17.83.126:44751
I20250629 01:58:34.195595 21249 raft_consensus.cc:3058] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 [term 1 FOLLOWER]: Advancing to term 2
I20250629 01:58:34.195940 21357 heartbeater.cc:461] Registering TS with master...
I20250629 01:58:34.197227 21357 heartbeater.cc:507] Master 127.17.83.126:44751 requested a full tablet report, sending...
I20250629 01:58:34.200785 21249 raft_consensus.cc:513] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } }
I20250629 01:58:34.201395 21249 leader_election.cc:304] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: d2f2badc6b0341918964e73a7e3a0fe5; no voters:
I20250629 01:58:34.203882 21249 leader_election.cc:290] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 [CANDIDATE]: Term 2 election: Requested vote from peers
I20250629 01:58:34.204617 21362 raft_consensus.cc:2802] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 [term 2 FOLLOWER]: Leader election won for term 2
I20250629 01:58:34.210764 21026 ts_manager.cc:194] Registered new tserver with Master: d2f2badc6b0341918964e73a7e3a0fe5 (127.17.83.66:45511)
I20250629 01:58:34.213439 21362 raft_consensus.cc:695] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 [term 2 LEADER]: Becoming Leader. State: Replica: d2f2badc6b0341918964e73a7e3a0fe5, State: Running, Role: LEADER
I20250629 01:58:34.213824 21026 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.17.83.66:44571
I20250629 01:58:34.214521 21362 consensus_queue.cc:237] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 9, Committed index: 9, Last appended: 1.9, Last appended by leader: 9, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } }
I20250629 01:58:34.217630 21357 heartbeater.cc:499] Master 127.17.83.126:44751 was elected leader, sending a full tablet report...
I20250629 01:58:34.220501 21249 ts_tablet_manager.cc:1428] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5: Time spent starting tablet: real 0.048s user 0.035s sys 0.009s
I20250629 01:58:34.225767 21026 catalog_manager.cc:5582] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 reported cstate change: term changed from 0 to 2, leader changed from <none> to d2f2badc6b0341918964e73a7e3a0fe5 (127.17.83.66), VOTER d2f2badc6b0341918964e73a7e3a0fe5 (127.17.83.66) added. New cstate: current_term: 2 leader_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } health_report { overall_health: HEALTHY } } }
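(Editor's note) The lines above show the single-replica election path: with exactly one VOTER in the Raft config, the replica triggers an election immediately and wins with 1 of 1 votes, then reports the new cstate to the master. As a hedged, stand-alone illustration (not leader_election.cc), the majority test behind the "Election decided ... 1 yes votes; 0 no votes" summary reduces to the arithmetic below.

    // Toy model of the vote-counting arithmetic behind the election summary
    // above. Not Kudu's leader_election.cc.
    #include <iostream>

    // A candidate wins once 'yes_votes' reaches a strict majority of voters.
    bool ElectionWon(int num_voters, int yes_votes) {
      const int majority = num_voters / 2 + 1;
      return yes_votes >= majority;
    }

    int main() {
      // Single-replica config from the log: 1 voter, 1 yes vote -> win.
      std::cout << std::boolalpha << ElectionWon(1, 1) << '\n';  // true
      // A 3-voter config would need at least 2 yes votes.
      std::cout << ElectionWon(3, 1) << '\n';                    // false
      return 0;
    }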
I20250629 01:58:34.259140 21312 consensus_queue.cc:237] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 10, Committed index: 10, Last appended: 2.10, Last appended by leader: 9, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: NON_VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: true } }
I20250629 01:58:34.262375 21363 raft_consensus.cc:2953] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 [term 2 LEADER]: Committing config change with OpId 2.11: config changed from index -1 to 11, NON_VOTER a30df5c9a5634e1b9e6f378d3d3db88d (127.17.83.65) added. New config: { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: NON_VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: true } } }
I20250629 01:58:34.271315 21011 catalog_manager.cc:5095] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet 63f8fd3146fb4036bf60a5a070659dc8 with cas_config_opid_index -1: ChangeConfig:ADD_PEER:NON_VOTER succeeded (attempt 1)
I20250629 01:58:34.274338 21026 catalog_manager.cc:5582] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 reported cstate change: config changed from index -1 to 11, NON_VOTER a30df5c9a5634e1b9e6f378d3d3db88d (127.17.83.65) added. New cstate: current_term: 2 leader_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" committed_config { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: NON_VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: true } health_report { overall_health: UNKNOWN } } }
W20250629 01:58:34.275986 21243 consensus_peers.cc:489] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 -> Peer a30df5c9a5634e1b9e6f378d3d3db88d (127.17.83.65:43367): Couldn't send request to peer a30df5c9a5634e1b9e6f378d3d3db88d. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: 63f8fd3146fb4036bf60a5a070659dc8. This is attempt 1: this message will repeat every 5th retry.
W20250629 01:58:34.282251 21026 catalog_manager.cc:5260] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet 63f8fd3146fb4036bf60a5a070659dc8 with cas_config_opid_index 11: no extra replica candidate found for tablet 63f8fd3146fb4036bf60a5a070659dc8 (table TestTable [id=b41558b1fab6485a8927ac0695ac7d4f]): Not found: could not select location for extra replica: not enough tablet servers to satisfy replica placement policy: the total number of registered tablet servers (2) does not allow for adding an extra replica; consider bringing up more to have at least 4 tablet servers up and running
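(Editor's note) The warning above is a capacity check: with a replication factor of 3, placing an extra (fourth) replica needs at least four live tablet servers, but only two are registered at this point in the test. A minimal sketch of that arithmetic, assuming the simple one-replica-per-server rule the message implies (not the actual placement-policy code):

    // Toy restatement of the capacity check in the warning above, assuming at
    // most one replica of a tablet per tablet server.
    #include <iostream>

    bool CanAddExtraReplica(int replication_factor, int registered_tservers) {
      // An extra replica on top of the configured replication factor requires
      // replication_factor + 1 distinct servers.
      return registered_tservers >= replication_factor + 1;
    }

    int main() {
      std::cout << std::boolalpha
                << CanAddExtraReplica(3, 2) << '\n'   // false: as in the warning
                << CanAddExtraReplica(3, 4) << '\n';  // true: 4 servers suffice
      return 0;
    }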
W20250629 01:58:34.490115 21361 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:58:34.490556 21361 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250629 01:58:34.490989 21361 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:58:34.521616 21361 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250629 01:58:34.522393 21361 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.17.83.67
I20250629 01:58:34.553685 21361 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.17.83.84:42817
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.17.83.67:37557
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.17.83.67
--webserver_port=35205
--tserver_master_addrs=127.17.83.126:44751
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.17.83.67
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision f0776864a65c19ebdd909053f3e624e8bcc25cee
build type FASTDEBUG
built by None at 29 Jun 2025 01:43:20 UTC on 5fd53c4cbb9d
build id 6821
TSAN enabled
I20250629 01:58:34.554800 21361 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250629 01:58:34.556249 21361 file_cache.cc:492] Constructed file cache with capacity 419430
W20250629 01:58:34.572244 21381 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:58:34.815593 21387 ts_tablet_manager.cc:927] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d: Initiating tablet copy from peer d2f2badc6b0341918964e73a7e3a0fe5 (127.17.83.66:45511)
I20250629 01:58:34.818015 21387 tablet_copy_client.cc:323] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d: tablet copy: Beginning tablet copy session from remote peer at address 127.17.83.66:45511
I20250629 01:58:34.828039 21332 tablet_copy_service.cc:140] P d2f2badc6b0341918964e73a7e3a0fe5: Received BeginTabletCopySession request for tablet 63f8fd3146fb4036bf60a5a070659dc8 from peer a30df5c9a5634e1b9e6f378d3d3db88d ({username='slave'} at 127.17.83.65:41769)
I20250629 01:58:34.828855 21332 tablet_copy_service.cc:161] P d2f2badc6b0341918964e73a7e3a0fe5: Beginning new tablet copy session on tablet 63f8fd3146fb4036bf60a5a070659dc8 from peer a30df5c9a5634e1b9e6f378d3d3db88d at {username='slave'} at 127.17.83.65:41769: session id = a30df5c9a5634e1b9e6f378d3d3db88d-63f8fd3146fb4036bf60a5a070659dc8
I20250629 01:58:34.839351 21332 tablet_copy_source_session.cc:215] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5: Tablet Copy: opened 0 blocks and 1 log segments
I20250629 01:58:34.844046 21387 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 63f8fd3146fb4036bf60a5a070659dc8. 1 dirs total, 0 dirs full, 0 dirs failed
I20250629 01:58:34.853096 21387 tablet_copy_client.cc:806] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d: tablet copy: Starting download of 0 data blocks...
I20250629 01:58:34.853521 21387 tablet_copy_client.cc:670] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d: tablet copy: Starting download of 1 WAL segments...
I20250629 01:58:34.857430 21387 tablet_copy_client.cc:538] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250629 01:58:34.862425 21387 tablet_bootstrap.cc:492] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d: Bootstrap starting.
I20250629 01:58:34.956521 21387 tablet_bootstrap.cc:492] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d: Bootstrap replayed 1/1 log segments. Stats: ops{read=11 overwritten=0 applied=11 ignored=0} inserts{seen=400 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250629 01:58:34.957134 21387 tablet_bootstrap.cc:492] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d: Bootstrap complete.
I20250629 01:58:34.957538 21387 ts_tablet_manager.cc:1397] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d: Time spent bootstrapping tablet: real 0.095s user 0.091s sys 0.004s
I20250629 01:58:34.959159 21387 raft_consensus.cc:357] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d [term 2 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: NON_VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: true } }
I20250629 01:58:34.959596 21387 raft_consensus.cc:738] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d [term 2 LEARNER]: Becoming Follower/Learner. State: Replica: a30df5c9a5634e1b9e6f378d3d3db88d, State: Initialized, Role: LEARNER
I20250629 01:58:34.959983 21387 consensus_queue.cc:260] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 11, Last appended: 2.11, Last appended by leader: 11, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: NON_VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: true } }
I20250629 01:58:34.962721 21387 ts_tablet_manager.cc:1428] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d: Time spent starting tablet: real 0.005s user 0.006s sys 0.000s
I20250629 01:58:34.964213 21332 tablet_copy_service.cc:342] P d2f2badc6b0341918964e73a7e3a0fe5: Request end of tablet copy session a30df5c9a5634e1b9e6f378d3d3db88d-63f8fd3146fb4036bf60a5a070659dc8 received from {username='slave'} at 127.17.83.65:41769
I20250629 01:58:34.964582 21332 tablet_copy_service.cc:434] P d2f2badc6b0341918964e73a7e3a0fe5: ending tablet copy session a30df5c9a5634e1b9e6f378d3d3db88d-63f8fd3146fb4036bf60a5a070659dc8 on tablet 63f8fd3146fb4036bf60a5a070659dc8 with peer a30df5c9a5634e1b9e6f378d3d3db88d
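(Editor's note) The block above traces one complete tablet copy session end to end: the learner begins a session with the source peer, downloads data blocks and WAL segments, replaces its local superblock, bootstraps the copied tablet, and finally asks the source to end the session. The sketch below only lists those observed steps in order; it is a descriptive outline, not the tablet_copy_client.cc state machine, and the step names are informal.

    // Descriptive outline of the tablet-copy sequence visible in the log
    // above. Purely illustrative; these are not Kudu API calls.
    #include <iostream>
    #include <string>
    #include <vector>

    int main() {
      const std::vector<std::string> steps = {
          "BeginTabletCopySession against the source peer",
          "download data blocks (0 in this test)",
          "download WAL segments (1 in this test)",
          "replace the local tablet superblock",
          "bootstrap the copied tablet from its WAL",
          "EndTabletCopySession on the source peer",
      };
      for (size_t i = 0; i < steps.size(); ++i) {
        std::cout << i + 1 << ". " << steps[i] << '\n';
      }
      return 0;
    }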
I20250629 01:58:35.328362 21168 raft_consensus.cc:1215] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d [term 2 LEARNER]: Deduplicated request from leader. Original: 2.10->[2.11-2.11] Dedup: 2.11->[]
W20250629 01:58:34.572271 21383 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:58:34.574105 21361 server_base.cc:1048] running on GCE node
W20250629 01:58:34.572975 21380 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:58:35.696570 21361 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250629 01:58:35.699276 21361 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250629 01:58:35.700716 21361 hybrid_clock.cc:648] HybridClock initialized: now 1751162315700673 us; error 53 us; skew 500 ppm
I20250629 01:58:35.701491 21361 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250629 01:58:35.711463 21361 webserver.cc:469] Webserver started at http://127.17.83.67:35205/ using document root <none> and password file <none>
I20250629 01:58:35.712340 21361 fs_manager.cc:362] Metadata directory not provided
I20250629 01:58:35.712551 21361 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250629 01:58:35.717090 21395 raft_consensus.cc:1062] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5: attempting to promote NON_VOTER a30df5c9a5634e1b9e6f378d3d3db88d to VOTER
I20250629 01:58:35.718389 21395 consensus_queue.cc:237] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 11, Committed index: 11, Last appended: 2.11, Last appended by leader: 9, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } }
I20250629 01:58:35.722046 21361 fs_manager.cc:714] Time spent opening directory manager: real 0.006s user 0.002s sys 0.003s
I20250629 01:58:35.722563 21168 raft_consensus.cc:1273] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d [term 2 LEARNER]: Refusing update from remote peer d2f2badc6b0341918964e73a7e3a0fe5: Log matching property violated. Preceding OpId in replica: term: 2 index: 11. Preceding OpId from leader: term: 2 index: 12. (index mismatch)
I20250629 01:58:35.723793 21393 consensus_queue.cc:1035] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 [LEADER]: Connected to new peer: Peer: permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 12, Last known committed idx: 11, Time since last communication: 0.001s
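(Editor's note) The "Refusing update ... Log matching property violated" exchange above shows the follower-side consistency check: the request is rejected because the preceding OpId named by the leader (term 2, index 12) does not match the last entry in the replica's log (term 2, index 11), and the leader then reconnects at the agreed index. A hedged toy version of that check (not raft_consensus.cc):

    // Toy model of the log-matching check behind the LMP_MISMATCH lines above.
    // OpId here is a bare struct for illustration, not Kudu's protobuf type.
    #include <iostream>

    struct OpId {
      long term;
      long index;
    };

    // The follower accepts the update only if the leader's "preceding" OpId is
    // exactly the last OpId the follower has appended.
    bool PrecedingOpIdMatches(const OpId& last_in_replica,
                              const OpId& preceding_from_leader) {
      return last_in_replica.term == preceding_from_leader.term &&
             last_in_replica.index == preceding_from_leader.index;
    }

    int main() {
      OpId replica_last{2, 11};
      OpId leader_preceding{2, 12};
      // Mismatch, as in the log: the leader must retry from index 12.
      std::cout << std::boolalpha
                << PrecedingOpIdMatches(replica_last, leader_preceding) << '\n';
      return 0;
    }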
I20250629 01:58:35.727074 21401 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250629 01:58:35.728147 21361 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.000s
I20250629 01:58:35.728443 21361 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "8e0c882ed93e411495d3bc24bb61eb11"
format_stamp: "Formatted at 2025-06-29 01:58:22 on dist-test-slave-v1mb"
I20250629 01:58:35.730376 21361 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250629 01:58:35.730451 21395 raft_consensus.cc:2953] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 [term 2 LEADER]: Committing config change with OpId 2.12: config changed from index 11 to 12, a30df5c9a5634e1b9e6f378d3d3db88d (127.17.83.65) changed from NON_VOTER to VOTER. New config: { opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } } }
I20250629 01:58:35.732069 21168 raft_consensus.cc:2953] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d [term 2 FOLLOWER]: Committing config change with OpId 2.12: config changed from index 11 to 12, a30df5c9a5634e1b9e6f378d3d3db88d (127.17.83.65) changed from NON_VOTER to VOTER. New config: { opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } } }
I20250629 01:58:35.741854 21026 catalog_manager.cc:5582] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 reported cstate change: config changed from index 11 to 12, a30df5c9a5634e1b9e6f378d3d3db88d (127.17.83.65) changed from NON_VOTER to VOTER. New cstate: current_term: 2 leader_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" committed_config { opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } health_report { overall_health: HEALTHY } } }
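(Editor's note) The config-change lines above record the promotion step: once the NON_VOTER added with attrs { promote: true } has copied the tablet and caught up, the leader commits a config change (index 11 to 12) that flips it to VOTER and clears the promote flag. The sketch below mirrors that transformation on a hypothetical peer struct; it is an illustration, not consensus code.

    // Hedged sketch of the NON_VOTER -> VOTER promotion recorded above.
    // The Peer struct is hypothetical, not Kudu's RaftPeerPB protobuf.
    #include <iostream>
    #include <string>

    struct Peer {
      std::string uuid;
      bool voter;    // VOTER vs. NON_VOTER
      bool promote;  // attrs { promote: ... }
    };

    // Once a promotable learner has caught up, the leader proposes a config
    // change that makes it a voter and clears the promote attribute.
    void PromoteIfCaughtUp(Peer& peer, bool caught_up) {
      if (!peer.voter && peer.promote && caught_up) {
        peer.voter = true;
        peer.promote = false;
      }
    }

    int main() {
      Peer p{"a30df5c9a5634e1b9e6f378d3d3db88d", false, true};
      PromoteIfCaughtUp(p, /*caught_up=*/true);
      std::cout << p.uuid << (p.voter ? " VOTER" : " NON_VOTER")
                << " promote=" << std::boolalpha << p.promote << '\n';
      return 0;
    }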
I20250629 01:58:35.784819 21361 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250629 01:58:35.786188 21361 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250629 01:58:35.786513 21361 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250629 01:58:35.788625 21361 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250629 01:58:35.793919 21413 ts_tablet_manager.cc:542] Loading tablet metadata (0/1 complete)
I20250629 01:58:35.803345 21361 ts_tablet_manager.cc:579] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20250629 01:58:35.803524 21361 ts_tablet_manager.cc:525] Time spent loading tablet metadata: real 0.011s	user 0.001s	sys 0.000s
I20250629 01:58:35.803741 21361 ts_tablet_manager.cc:594] Registering tablets (0/1 complete)
I20250629 01:58:35.807765 21361 ts_tablet_manager.cc:610] Registered 1 tablets
I20250629 01:58:35.807905 21361 ts_tablet_manager.cc:589] Time spent registering tablets: real 0.004s	user 0.004s	sys 0.000s
I20250629 01:58:35.808271 21413 tablet_bootstrap.cc:492] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11: Bootstrap starting.
I20250629 01:58:35.858646 21413 log.cc:826] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11: Log is configured to *not* fsync() on all Append() calls
I20250629 01:58:35.925242 21413 tablet_bootstrap.cc:492] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11: Bootstrap replayed 1/1 log segments. Stats: ops{read=6 overwritten=0 applied=6 ignored=0} inserts{seen=250 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250629 01:58:35.926029 21413 tablet_bootstrap.cc:492] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11: Bootstrap complete.
I20250629 01:58:35.927385 21413 ts_tablet_manager.cc:1397] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11: Time spent bootstrapping tablet: real 0.119s user 0.098s sys 0.020s
I20250629 01:58:35.940328 21413 raft_consensus.cc:357] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } }
I20250629 01:58:35.942524 21413 raft_consensus.cc:738] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 8e0c882ed93e411495d3bc24bb61eb11, State: Initialized, Role: FOLLOWER
I20250629 01:58:35.943495 21413 consensus_queue.cc:260] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 6, Last appended: 1.6, Last appended by leader: 6, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } }
I20250629 01:58:35.944172 21413 raft_consensus.cc:397] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250629 01:58:35.944563 21413 raft_consensus.cc:491] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250629 01:58:35.944973 21413 raft_consensus.cc:3058] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 [term 1 FOLLOWER]: Advancing to term 2
I20250629 01:58:35.950317 21413 raft_consensus.cc:513] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } }
I20250629 01:58:35.950906 21413 leader_election.cc:304] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 8e0c882ed93e411495d3bc24bb61eb11; no voters:
I20250629 01:58:35.953111 21413 leader_election.cc:290] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 [CANDIDATE]: Term 2 election: Requested vote from peers
I20250629 01:58:35.954181 21499 raft_consensus.cc:2802] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 [term 2 FOLLOWER]: Leader election won for term 2
I20250629 01:58:35.960096 21499 raft_consensus.cc:695] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 [term 2 LEADER]: Becoming Leader. State: Replica: 8e0c882ed93e411495d3bc24bb61eb11, State: Running, Role: LEADER
I20250629 01:58:35.960703 21413 ts_tablet_manager.cc:1428] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11: Time spent starting tablet: real 0.033s user 0.029s sys 0.000s
I20250629 01:58:35.961073 21499 consensus_queue.cc:237] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 6, Committed index: 6, Last appended: 1.6, Last appended by leader: 6, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } }
I20250629 01:58:35.982754 21361 rpc_server.cc:307] RPC server started. Bound to: 127.17.83.67:37557
I20250629 01:58:35.983253 21525 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.17.83.67:37557 every 8 connection(s)
I20250629 01:58:35.984948 21361 server_base.cc:1180] Dumped server information to /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250629 01:58:35.995456 17741 external_mini_cluster.cc:1413] Started /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu as pid 21361
I20250629 01:58:36.003962 21526 heartbeater.cc:344] Connected to a master server at 127.17.83.126:44751
I20250629 01:58:36.004604 21526 heartbeater.cc:461] Registering TS with master...
I20250629 01:58:36.005995 21526 heartbeater.cc:507] Master 127.17.83.126:44751 requested a full tablet report, sending...
I20250629 01:58:36.009845 21026 ts_manager.cc:194] Registered new tserver with Master: 8e0c882ed93e411495d3bc24bb61eb11 (127.17.83.67:37557)
I20250629 01:58:36.010738 21026 catalog_manager.cc:5582] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 reported cstate change: term changed from 0 to 2, leader changed from <none> to 8e0c882ed93e411495d3bc24bb61eb11 (127.17.83.67), VOTER 8e0c882ed93e411495d3bc24bb61eb11 (127.17.83.67) added. New cstate: current_term: 2 leader_uuid: "8e0c882ed93e411495d3bc24bb61eb11" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } health_report { overall_health: HEALTHY } } }
I20250629 01:58:36.019335 21026 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.17.83.67:35143
I20250629 01:58:36.021601 17741 external_mini_cluster.cc:934] 3 TS(s) registered with all masters
I20250629 01:58:36.022818 21526 heartbeater.cc:499] Master 127.17.83.126:44751 was elected leader, sending a full tablet report...
I20250629 01:58:36.026979 17741 external_mini_cluster.cc:934] 3 TS(s) registered with all masters
W20250629 01:58:36.030675 17741 ts_itest-base.cc:209] found only 2 out of 3 replicas of tablet 63f8fd3146fb4036bf60a5a070659dc8: tablet_id: "63f8fd3146fb4036bf60a5a070659dc8" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER } interned_replicas { ts_info_idx: 1 role: FOLLOWER }
I20250629 01:58:36.034188 21476 consensus_queue.cc:237] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 7, Committed index: 7, Last appended: 2.7, Last appended by leader: 6, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: 8 OBSOLETE_local: true peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } } peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: NON_VOTER last_known_addr { host: "127.17.83.66" port: 45511 } attrs { promote: true } }
I20250629 01:58:36.037145 21499 raft_consensus.cc:2953] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 [term 2 LEADER]: Committing config change with OpId 2.8: config changed from index -1 to 8, NON_VOTER d2f2badc6b0341918964e73a7e3a0fe5 (127.17.83.66) added. New config: { opid_index: 8 OBSOLETE_local: true peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } } peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: NON_VOTER last_known_addr { host: "127.17.83.66" port: 45511 } attrs { promote: true } } }
I20250629 01:58:36.043948 21010 catalog_manager.cc:5095] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet 35f5dbaa898b47369f784618572fd3c8 with cas_config_opid_index -1: ChangeConfig:ADD_PEER:NON_VOTER succeeded (attempt 1)
I20250629 01:58:36.046331 21026 catalog_manager.cc:5582] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 reported cstate change: config changed from index -1 to 8, NON_VOTER d2f2badc6b0341918964e73a7e3a0fe5 (127.17.83.66) added. New cstate: current_term: 2 leader_uuid: "8e0c882ed93e411495d3bc24bb61eb11" committed_config { opid_index: 8 OBSOLETE_local: true peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: NON_VOTER last_known_addr { host: "127.17.83.66" port: 45511 } attrs { promote: true } health_report { overall_health: UNKNOWN } } }
W20250629 01:58:36.047705 21408 consensus_peers.cc:489] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 -> Peer d2f2badc6b0341918964e73a7e3a0fe5 (127.17.83.66:45511): Couldn't send request to peer d2f2badc6b0341918964e73a7e3a0fe5. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: 35f5dbaa898b47369f784618572fd3c8. This is attempt 1: this message will repeat every 5th retry.
I20250629 01:58:36.054755 21476 consensus_queue.cc:237] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 8, Committed index: 8, Last appended: 2.8, Last appended by leader: 6, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: 9 OBSOLETE_local: true peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } } peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: NON_VOTER last_known_addr { host: "127.17.83.66" port: 45511 } attrs { promote: true } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: NON_VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: true } }
I20250629 01:58:36.057605 21499 raft_consensus.cc:2953] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 [term 2 LEADER]: Committing config change with OpId 2.9: config changed from index 8 to 9, NON_VOTER a30df5c9a5634e1b9e6f378d3d3db88d (127.17.83.65) added. New config: { opid_index: 9 OBSOLETE_local: true peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } } peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: NON_VOTER last_known_addr { host: "127.17.83.66" port: 45511 } attrs { promote: true } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: NON_VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: true } } }
W20250629 01:58:36.059298 21408 consensus_peers.cc:489] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 -> Peer d2f2badc6b0341918964e73a7e3a0fe5 (127.17.83.66:45511): Couldn't send request to peer d2f2badc6b0341918964e73a7e3a0fe5. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: 35f5dbaa898b47369f784618572fd3c8. This is attempt 1: this message will repeat every 5th retry.
I20250629 01:58:36.064132 21010 catalog_manager.cc:5095] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet 35f5dbaa898b47369f784618572fd3c8 with cas_config_opid_index 8: ChangeConfig:ADD_PEER:NON_VOTER succeeded (attempt 1)
I20250629 01:58:36.067260 21026 catalog_manager.cc:5582] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 reported cstate change: config changed from index 8 to 9, NON_VOTER a30df5c9a5634e1b9e6f378d3d3db88d (127.17.83.65) added. New cstate: current_term: 2 leader_uuid: "8e0c882ed93e411495d3bc24bb61eb11" committed_config { opid_index: 9 OBSOLETE_local: true peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: NON_VOTER last_known_addr { host: "127.17.83.66" port: 45511 } attrs { promote: true } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: NON_VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: true } health_report { overall_health: UNKNOWN } } }
W20250629 01:58:36.068308 21407 consensus_peers.cc:489] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 -> Peer a30df5c9a5634e1b9e6f378d3d3db88d (127.17.83.65:43367): Couldn't send request to peer a30df5c9a5634e1b9e6f378d3d3db88d. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: 35f5dbaa898b47369f784618572fd3c8. This is attempt 1: this message will repeat every 5th retry.
I20250629 01:58:36.098759 21312 consensus_queue.cc:237] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 12, Committed index: 12, Last appended: 2.12, Last appended by leader: 9, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } } peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: NON_VOTER last_known_addr { host: "127.17.83.67" port: 37557 } attrs { promote: true } }
I20250629 01:58:36.106151 21168 raft_consensus.cc:1273] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d [term 2 FOLLOWER]: Refusing update from remote peer d2f2badc6b0341918964e73a7e3a0fe5: Log matching property violated. Preceding OpId in replica: term: 2 index: 12. Preceding OpId from leader: term: 2 index: 13. (index mismatch)
I20250629 01:58:36.107223 21393 consensus_queue.cc:1035] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 [LEADER]: Connected to new peer: Peer: permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 13, Last known committed idx: 12, Time since last communication: 0.000s
I20250629 01:58:36.112759 21394 raft_consensus.cc:2953] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 [term 2 LEADER]: Committing config change with OpId 2.13: config changed from index 12 to 13, NON_VOTER 8e0c882ed93e411495d3bc24bb61eb11 (127.17.83.67) added. New config: { opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } } peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: NON_VOTER last_known_addr { host: "127.17.83.67" port: 37557 } attrs { promote: true } } }
W20250629 01:58:36.114342 21243 consensus_peers.cc:489] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 -> Peer 8e0c882ed93e411495d3bc24bb61eb11 (127.17.83.67:37557): Couldn't send request to peer 8e0c882ed93e411495d3bc24bb61eb11. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: 63f8fd3146fb4036bf60a5a070659dc8. This is attempt 1: this message will repeat every 5th retry.
I20250629 01:58:36.114219 21168 raft_consensus.cc:2953] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d [term 2 FOLLOWER]: Committing config change with OpId 2.13: config changed from index 12 to 13, NON_VOTER 8e0c882ed93e411495d3bc24bb61eb11 (127.17.83.67) added. New config: { opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } } peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: NON_VOTER last_known_addr { host: "127.17.83.67" port: 37557 } attrs { promote: true } } }
I20250629 01:58:36.119113 21011 catalog_manager.cc:5095] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet 63f8fd3146fb4036bf60a5a070659dc8 with cas_config_opid_index 12: ChangeConfig:ADD_PEER:NON_VOTER succeeded (attempt 5)
I20250629 01:58:36.122079 21026 catalog_manager.cc:5582] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 reported cstate change: config changed from index 12 to 13, NON_VOTER 8e0c882ed93e411495d3bc24bb61eb11 (127.17.83.67) added. New cstate: current_term: 2 leader_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" committed_config { opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: NON_VOTER last_known_addr { host: "127.17.83.67" port: 37557 } attrs { promote: true } health_report { overall_health: UNKNOWN } } }
I20250629 01:58:36.557806 21537 ts_tablet_manager.cc:927] T 35f5dbaa898b47369f784618572fd3c8 P a30df5c9a5634e1b9e6f378d3d3db88d: Initiating tablet copy from peer 8e0c882ed93e411495d3bc24bb61eb11 (127.17.83.67:37557)
I20250629 01:58:36.558884 21537 tablet_copy_client.cc:323] T 35f5dbaa898b47369f784618572fd3c8 P a30df5c9a5634e1b9e6f378d3d3db88d: tablet copy: Beginning tablet copy session from remote peer at address 127.17.83.67:37557
I20250629 01:58:36.567510 21496 tablet_copy_service.cc:140] P 8e0c882ed93e411495d3bc24bb61eb11: Received BeginTabletCopySession request for tablet 35f5dbaa898b47369f784618572fd3c8 from peer a30df5c9a5634e1b9e6f378d3d3db88d ({username='slave'} at 127.17.83.65:36645)
I20250629 01:58:36.567907 21496 tablet_copy_service.cc:161] P 8e0c882ed93e411495d3bc24bb61eb11: Beginning new tablet copy session on tablet 35f5dbaa898b47369f784618572fd3c8 from peer a30df5c9a5634e1b9e6f378d3d3db88d at {username='slave'} at 127.17.83.65:36645: session id = a30df5c9a5634e1b9e6f378d3d3db88d-35f5dbaa898b47369f784618572fd3c8
I20250629 01:58:36.572373 21496 tablet_copy_source_session.cc:215] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11: Tablet Copy: opened 0 blocks and 1 log segments
I20250629 01:58:36.575341 21537 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 35f5dbaa898b47369f784618572fd3c8. 1 dirs total, 0 dirs full, 0 dirs failed
I20250629 01:58:36.582078 21011 catalog_manager.cc:5129] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet 63f8fd3146fb4036bf60a5a070659dc8 with cas_config_opid_index 11: aborting the task: latest config opid_index 13; task opid_index 11
I20250629 01:58:36.584677 21537 tablet_copy_client.cc:806] T 35f5dbaa898b47369f784618572fd3c8 P a30df5c9a5634e1b9e6f378d3d3db88d: tablet copy: Starting download of 0 data blocks...
I20250629 01:58:36.585217 21537 tablet_copy_client.cc:670] T 35f5dbaa898b47369f784618572fd3c8 P a30df5c9a5634e1b9e6f378d3d3db88d: tablet copy: Starting download of 1 WAL segments...
I20250629 01:58:36.588667 21537 tablet_copy_client.cc:538] T 35f5dbaa898b47369f784618572fd3c8 P a30df5c9a5634e1b9e6f378d3d3db88d: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250629 01:58:36.593561 21537 tablet_bootstrap.cc:492] T 35f5dbaa898b47369f784618572fd3c8 P a30df5c9a5634e1b9e6f378d3d3db88d: Bootstrap starting.
I20250629 01:58:36.663637 21537 tablet_bootstrap.cc:492] T 35f5dbaa898b47369f784618572fd3c8 P a30df5c9a5634e1b9e6f378d3d3db88d: Bootstrap replayed 1/1 log segments. Stats: ops{read=9 overwritten=0 applied=9 ignored=0} inserts{seen=250 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250629 01:58:36.664156 21537 tablet_bootstrap.cc:492] T 35f5dbaa898b47369f784618572fd3c8 P a30df5c9a5634e1b9e6f378d3d3db88d: Bootstrap complete.
I20250629 01:58:36.664532 21537 ts_tablet_manager.cc:1397] T 35f5dbaa898b47369f784618572fd3c8 P a30df5c9a5634e1b9e6f378d3d3db88d: Time spent bootstrapping tablet: real 0.071s user 0.067s sys 0.005s
I20250629 01:58:36.665844 21537 raft_consensus.cc:357] T 35f5dbaa898b47369f784618572fd3c8 P a30df5c9a5634e1b9e6f378d3d3db88d [term 2 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 9 OBSOLETE_local: true peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } } peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: NON_VOTER last_known_addr { host: "127.17.83.66" port: 45511 } attrs { promote: true } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: NON_VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: true } }
I20250629 01:58:36.666271 21537 raft_consensus.cc:738] T 35f5dbaa898b47369f784618572fd3c8 P a30df5c9a5634e1b9e6f378d3d3db88d [term 2 LEARNER]: Becoming Follower/Learner. State: Replica: a30df5c9a5634e1b9e6f378d3d3db88d, State: Initialized, Role: LEARNER
I20250629 01:58:36.666630 21537 consensus_queue.cc:260] T 35f5dbaa898b47369f784618572fd3c8 P a30df5c9a5634e1b9e6f378d3d3db88d [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 9, Last appended: 2.9, Last appended by leader: 9, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 9 OBSOLETE_local: true peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } } peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: NON_VOTER last_known_addr { host: "127.17.83.66" port: 45511 } attrs { promote: true } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: NON_VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: true } }
I20250629 01:58:36.668406 21537 ts_tablet_manager.cc:1428] T 35f5dbaa898b47369f784618572fd3c8 P a30df5c9a5634e1b9e6f378d3d3db88d: Time spent starting tablet: real 0.004s user 0.004s sys 0.000s
I20250629 01:58:36.669667 21496 tablet_copy_service.cc:342] P 8e0c882ed93e411495d3bc24bb61eb11: Request end of tablet copy session a30df5c9a5634e1b9e6f378d3d3db88d-35f5dbaa898b47369f784618572fd3c8 received from {username='slave'} at 127.17.83.65:36645
I20250629 01:58:36.669986 21496 tablet_copy_service.cc:434] P 8e0c882ed93e411495d3bc24bb61eb11: ending tablet copy session a30df5c9a5634e1b9e6f378d3d3db88d-35f5dbaa898b47369f784618572fd3c8 on tablet 35f5dbaa898b47369f784618572fd3c8 with peer a30df5c9a5634e1b9e6f378d3d3db88d
I20250629 01:58:36.681602 21542 ts_tablet_manager.cc:927] T 35f5dbaa898b47369f784618572fd3c8 P d2f2badc6b0341918964e73a7e3a0fe5: Initiating tablet copy from peer 8e0c882ed93e411495d3bc24bb61eb11 (127.17.83.67:37557)
I20250629 01:58:36.682978 21542 tablet_copy_client.cc:323] T 35f5dbaa898b47369f784618572fd3c8 P d2f2badc6b0341918964e73a7e3a0fe5: tablet copy: Beginning tablet copy session from remote peer at address 127.17.83.67:37557
I20250629 01:58:36.684341 21496 tablet_copy_service.cc:140] P 8e0c882ed93e411495d3bc24bb61eb11: Received BeginTabletCopySession request for tablet 35f5dbaa898b47369f784618572fd3c8 from peer d2f2badc6b0341918964e73a7e3a0fe5 ({username='slave'} at 127.17.83.66:38247)
I20250629 01:58:36.684676 21496 tablet_copy_service.cc:161] P 8e0c882ed93e411495d3bc24bb61eb11: Beginning new tablet copy session on tablet 35f5dbaa898b47369f784618572fd3c8 from peer d2f2badc6b0341918964e73a7e3a0fe5 at {username='slave'} at 127.17.83.66:38247: session id = d2f2badc6b0341918964e73a7e3a0fe5-35f5dbaa898b47369f784618572fd3c8
I20250629 01:58:36.688728 21496 tablet_copy_source_session.cc:215] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11: Tablet Copy: opened 0 blocks and 1 log segments
I20250629 01:58:36.691111 21542 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 35f5dbaa898b47369f784618572fd3c8. 1 dirs total, 0 dirs full, 0 dirs failed
I20250629 01:58:36.695622 21544 ts_tablet_manager.cc:927] T 63f8fd3146fb4036bf60a5a070659dc8 P 8e0c882ed93e411495d3bc24bb61eb11: Initiating tablet copy from peer d2f2badc6b0341918964e73a7e3a0fe5 (127.17.83.66:45511)
I20250629 01:58:36.697888 21544 tablet_copy_client.cc:323] T 63f8fd3146fb4036bf60a5a070659dc8 P 8e0c882ed93e411495d3bc24bb61eb11: tablet copy: Beginning tablet copy session from remote peer at address 127.17.83.66:45511
I20250629 01:58:36.699515 21332 tablet_copy_service.cc:140] P d2f2badc6b0341918964e73a7e3a0fe5: Received BeginTabletCopySession request for tablet 63f8fd3146fb4036bf60a5a070659dc8 from peer 8e0c882ed93e411495d3bc24bb61eb11 ({username='slave'} at 127.17.83.67:49289)
I20250629 01:58:36.699904 21332 tablet_copy_service.cc:161] P d2f2badc6b0341918964e73a7e3a0fe5: Beginning new tablet copy session on tablet 63f8fd3146fb4036bf60a5a070659dc8 from peer 8e0c882ed93e411495d3bc24bb61eb11 at {username='slave'} at 127.17.83.67:49289: session id = 8e0c882ed93e411495d3bc24bb61eb11-63f8fd3146fb4036bf60a5a070659dc8
I20250629 01:58:36.703014 21542 tablet_copy_client.cc:806] T 35f5dbaa898b47369f784618572fd3c8 P d2f2badc6b0341918964e73a7e3a0fe5: tablet copy: Starting download of 0 data blocks...
I20250629 01:58:36.703529 21542 tablet_copy_client.cc:670] T 35f5dbaa898b47369f784618572fd3c8 P d2f2badc6b0341918964e73a7e3a0fe5: tablet copy: Starting download of 1 WAL segments...
I20250629 01:58:36.704098 21332 tablet_copy_source_session.cc:215] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5: Tablet Copy: opened 0 blocks and 1 log segments
I20250629 01:58:36.706936 21544 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 63f8fd3146fb4036bf60a5a070659dc8. 1 dirs total, 0 dirs full, 0 dirs failed
I20250629 01:58:36.706918 21542 tablet_copy_client.cc:538] T 35f5dbaa898b47369f784618572fd3c8 P d2f2badc6b0341918964e73a7e3a0fe5: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250629 01:58:36.712361 21542 tablet_bootstrap.cc:492] T 35f5dbaa898b47369f784618572fd3c8 P d2f2badc6b0341918964e73a7e3a0fe5: Bootstrap starting.
I20250629 01:58:36.719060 21544 tablet_copy_client.cc:806] T 63f8fd3146fb4036bf60a5a070659dc8 P 8e0c882ed93e411495d3bc24bb61eb11: tablet copy: Starting download of 0 data blocks...
I20250629 01:58:36.719767 21544 tablet_copy_client.cc:670] T 63f8fd3146fb4036bf60a5a070659dc8 P 8e0c882ed93e411495d3bc24bb61eb11: tablet copy: Starting download of 1 WAL segments...
I20250629 01:58:36.723663 21544 tablet_copy_client.cc:538] T 63f8fd3146fb4036bf60a5a070659dc8 P 8e0c882ed93e411495d3bc24bb61eb11: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250629 01:58:36.729023 21544 tablet_bootstrap.cc:492] T 63f8fd3146fb4036bf60a5a070659dc8 P 8e0c882ed93e411495d3bc24bb61eb11: Bootstrap starting.
I20250629 01:58:36.782575 21542 tablet_bootstrap.cc:492] T 35f5dbaa898b47369f784618572fd3c8 P d2f2badc6b0341918964e73a7e3a0fe5: Bootstrap replayed 1/1 log segments. Stats: ops{read=9 overwritten=0 applied=9 ignored=0} inserts{seen=250 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250629 01:58:36.783175 21542 tablet_bootstrap.cc:492] T 35f5dbaa898b47369f784618572fd3c8 P d2f2badc6b0341918964e73a7e3a0fe5: Bootstrap complete.
I20250629 01:58:36.783623 21542 ts_tablet_manager.cc:1397] T 35f5dbaa898b47369f784618572fd3c8 P d2f2badc6b0341918964e73a7e3a0fe5: Time spent bootstrapping tablet: real 0.071s user 0.057s sys 0.015s
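[Editor's note] The "Bootstrap replayed ... Stats:" lines above pack the replay counters into one string. As a small aid for reading them, here is a hypothetical, self-contained C++ snippet that pulls the ops counters back out with std::regex and checks that every op read was applied; it is a log-reading convenience, not part of Kudu.

// Sketch: extracting the op counters from a "Bootstrap replayed" line.
#include <iostream>
#include <regex>
#include <string>

int main() {
  const std::string line =
      "Bootstrap replayed 1/1 log segments. Stats: "
      "ops{read=9 overwritten=0 applied=9 ignored=0} "
      "inserts{seen=250 ignored=0} mutations{seen=0 ignored=0} "
      "orphaned_commits=0. Pending: 0 replicates";
  const std::regex re(
      "ops\\{read=(\\d+) overwritten=(\\d+) applied=(\\d+) ignored=(\\d+)\\}");
  std::smatch m;
  if (std::regex_search(line, m, re)) {
    int read = std::stoi(m[1].str());
    int applied = std::stoi(m[3].str());
    std::cout << "read=" << read << " applied=" << applied
              << (read == applied ? " (all replayed ops applied)"
                                  : " (some ops not applied)")
              << "\n";
  }
  return 0;
}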
I20250629 01:58:36.785259 21542 raft_consensus.cc:357] T 35f5dbaa898b47369f784618572fd3c8 P d2f2badc6b0341918964e73a7e3a0fe5 [term 2 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 9 OBSOLETE_local: true peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } } peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: NON_VOTER last_known_addr { host: "127.17.83.66" port: 45511 } attrs { promote: true } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: NON_VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: true } }
I20250629 01:58:36.785732 21542 raft_consensus.cc:738] T 35f5dbaa898b47369f784618572fd3c8 P d2f2badc6b0341918964e73a7e3a0fe5 [term 2 LEARNER]: Becoming Follower/Learner. State: Replica: d2f2badc6b0341918964e73a7e3a0fe5, State: Initialized, Role: LEARNER
I20250629 01:58:36.786141 21542 consensus_queue.cc:260] T 35f5dbaa898b47369f784618572fd3c8 P d2f2badc6b0341918964e73a7e3a0fe5 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 9, Last appended: 2.9, Last appended by leader: 9, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 9 OBSOLETE_local: true peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } } peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: NON_VOTER last_known_addr { host: "127.17.83.66" port: 45511 } attrs { promote: true } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: NON_VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: true } }
I20250629 01:58:36.787542 21542 ts_tablet_manager.cc:1428] T 35f5dbaa898b47369f784618572fd3c8 P d2f2badc6b0341918964e73a7e3a0fe5: Time spent starting tablet: real 0.004s user 0.003s sys 0.001s
I20250629 01:58:36.789000 21496 tablet_copy_service.cc:342] P 8e0c882ed93e411495d3bc24bb61eb11: Request end of tablet copy session d2f2badc6b0341918964e73a7e3a0fe5-35f5dbaa898b47369f784618572fd3c8 received from {username='slave'} at 127.17.83.66:38247
I20250629 01:58:36.789342 21496 tablet_copy_service.cc:434] P 8e0c882ed93e411495d3bc24bb61eb11: ending tablet copy session d2f2badc6b0341918964e73a7e3a0fe5-35f5dbaa898b47369f784618572fd3c8 on tablet 35f5dbaa898b47369f784618572fd3c8 with peer d2f2badc6b0341918964e73a7e3a0fe5
I20250629 01:58:36.829946 21544 tablet_bootstrap.cc:492] T 63f8fd3146fb4036bf60a5a070659dc8 P 8e0c882ed93e411495d3bc24bb61eb11: Bootstrap replayed 1/1 log segments. Stats: ops{read=13 overwritten=0 applied=13 ignored=0} inserts{seen=400 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250629 01:58:36.830526 21544 tablet_bootstrap.cc:492] T 63f8fd3146fb4036bf60a5a070659dc8 P 8e0c882ed93e411495d3bc24bb61eb11: Bootstrap complete.
I20250629 01:58:36.830926 21544 ts_tablet_manager.cc:1397] T 63f8fd3146fb4036bf60a5a070659dc8 P 8e0c882ed93e411495d3bc24bb61eb11: Time spent bootstrapping tablet: real 0.102s user 0.087s sys 0.012s
I20250629 01:58:36.832544 21544 raft_consensus.cc:357] T 63f8fd3146fb4036bf60a5a070659dc8 P 8e0c882ed93e411495d3bc24bb61eb11 [term 2 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } } peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: NON_VOTER last_known_addr { host: "127.17.83.67" port: 37557 } attrs { promote: true } }
I20250629 01:58:36.832993 21544 raft_consensus.cc:738] T 63f8fd3146fb4036bf60a5a070659dc8 P 8e0c882ed93e411495d3bc24bb61eb11 [term 2 LEARNER]: Becoming Follower/Learner. State: Replica: 8e0c882ed93e411495d3bc24bb61eb11, State: Initialized, Role: LEARNER
I20250629 01:58:36.833321 21544 consensus_queue.cc:260] T 63f8fd3146fb4036bf60a5a070659dc8 P 8e0c882ed93e411495d3bc24bb61eb11 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 13, Last appended: 2.13, Last appended by leader: 13, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } } peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: NON_VOTER last_known_addr { host: "127.17.83.67" port: 37557 } attrs { promote: true } }
I20250629 01:58:36.834614 21544 ts_tablet_manager.cc:1428] T 63f8fd3146fb4036bf60a5a070659dc8 P 8e0c882ed93e411495d3bc24bb61eb11: Time spent starting tablet: real 0.004s user 0.000s sys 0.004s
I20250629 01:58:36.836148 21332 tablet_copy_service.cc:342] P d2f2badc6b0341918964e73a7e3a0fe5: Request end of tablet copy session 8e0c882ed93e411495d3bc24bb61eb11-63f8fd3146fb4036bf60a5a070659dc8 received from {username='slave'} at 127.17.83.67:49289
I20250629 01:58:36.836458 21332 tablet_copy_service.cc:434] P d2f2badc6b0341918964e73a7e3a0fe5: ending tablet copy session 8e0c882ed93e411495d3bc24bb61eb11-63f8fd3146fb4036bf60a5a070659dc8 on tablet 63f8fd3146fb4036bf60a5a070659dc8 with peer 8e0c882ed93e411495d3bc24bb61eb11
I20250629 01:58:37.034759 17741 ts_itest-base.cc:246] Waiting for 1 tablets on tserver a30df5c9a5634e1b9e6f378d3d3db88d to finish bootstrapping
I20250629 01:58:37.050140 17741 ts_itest-base.cc:246] Waiting for 1 tablets on tserver d2f2badc6b0341918964e73a7e3a0fe5 to finish bootstrapping
I20250629 01:58:37.060943 17741 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 8e0c882ed93e411495d3bc24bb61eb11 to finish bootstrapping
I20250629 01:58:37.103945 21168 raft_consensus.cc:1215] T 35f5dbaa898b47369f784618572fd3c8 P a30df5c9a5634e1b9e6f378d3d3db88d [term 2 LEARNER]: Deduplicated request from leader. Original: 2.8->[2.9-2.9] Dedup: 2.9->[]
I20250629 01:58:37.220227 21312 raft_consensus.cc:1215] T 35f5dbaa898b47369f784618572fd3c8 P d2f2badc6b0341918964e73a7e3a0fe5 [term 2 LEARNER]: Deduplicated request from leader. Original: 2.8->[2.9-2.9] Dedup: 2.9->[]
I20250629 01:58:37.258841 21476 raft_consensus.cc:1215] T 63f8fd3146fb4036bf60a5a070659dc8 P 8e0c882ed93e411495d3bc24bb61eb11 [term 2 LEARNER]: Deduplicated request from leader. Original: 2.12->[2.13-2.13] Dedup: 2.13->[]
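[Editor's note] The "Deduplicated request from leader" lines above show a follower dropping ops it already has: the leader sent [2.9-2.9] with preceding OpId 2.8, but the replica had already appended 2.9 during tablet copy, so nothing was left to append ("Dedup: 2.9->[]"). A minimal sketch of that arithmetic, using hypothetical types (OpId, FirstNeededIndex) rather than Kudu's:

// Sketch of the dedup step: ops with index <= last received are dropped.
#include <algorithm>
#include <cstdio>

struct OpId { int term; int index; };

// First index of the leader's batch that the follower still needs.
int FirstNeededIndex(const OpId& already_received, int batch_first, int batch_last) {
  return std::min(batch_last + 1, std::max(batch_first, already_received.index + 1));
}

int main() {
  OpId last_received{2, 9};   // follower already has the WAL up to 2.9
  int first = 9, last = 9;    // leader sent [2.9-2.9]
  int needed = FirstNeededIndex(last_received, first, last);
  if (needed > last) {
    std::printf("Dedup: %d.%d->[] (nothing new to append)\n",
                last_received.term, last_received.index);
  } else {
    std::printf("Dedup: append [%d-%d]\n", needed, last);
  }
  return 0;
}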
I20250629 01:58:37.330507 21456 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250629 01:58:37.334589 21148 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250629 01:58:37.336669 21292 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250629 01:58:37.500669 21558 raft_consensus.cc:1062] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11: attempting to promote NON_VOTER a30df5c9a5634e1b9e6f378d3d3db88d to VOTER
I20250629 01:58:37.502254 21558 consensus_queue.cc:237] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 9, Committed index: 9, Last appended: 2.9, Last appended by leader: 6, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } } peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: NON_VOTER last_known_addr { host: "127.17.83.66" port: 45511 } attrs { promote: true } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } }
I20250629 01:58:37.506875 21312 raft_consensus.cc:1273] T 35f5dbaa898b47369f784618572fd3c8 P d2f2badc6b0341918964e73a7e3a0fe5 [term 2 LEARNER]: Refusing update from remote peer 8e0c882ed93e411495d3bc24bb61eb11: Log matching property violated. Preceding OpId in replica: term: 2 index: 9. Preceding OpId from leader: term: 2 index: 10. (index mismatch)
I20250629 01:58:37.507323 21168 raft_consensus.cc:1273] T 35f5dbaa898b47369f784618572fd3c8 P a30df5c9a5634e1b9e6f378d3d3db88d [term 2 LEARNER]: Refusing update from remote peer 8e0c882ed93e411495d3bc24bb61eb11: Log matching property violated. Preceding OpId in replica: term: 2 index: 9. Preceding OpId from leader: term: 2 index: 10. (index mismatch)
I20250629 01:58:37.508750 21559 consensus_queue.cc:1035] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 [LEADER]: Connected to new peer: Peer: permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: NON_VOTER last_known_addr { host: "127.17.83.66" port: 45511 } attrs { promote: true }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 10, Last known committed idx: 9, Time since last communication: 0.001s
I20250629 01:58:37.509459 21558 consensus_queue.cc:1035] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 [LEADER]: Connected to new peer: Peer: permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 10, Last known committed idx: 9, Time since last communication: 0.000s
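[Editor's note] The "Log matching property violated ... (index mismatch)" refusals and the leader's LMP_MISMATCH status above are the standard Raft consistency check: a follower accepts a batch only if the leader's stated preceding OpId equals its own last entry, and on mismatch the leader retries from an earlier index (here it resumes at next index 10, one past the committed index 9). The sketch below uses the classic decrement-by-one retry purely for illustration; it does not claim to reproduce how Kudu's consensus queue picks the retry point.

// Sketch of the log-matching check behind LMP_MISMATCH. Hypothetical types.
#include <cstdio>

struct OpId { int term; int index; };

bool PrecedingOpMatches(const OpId& follower_last, const OpId& leader_preceding) {
  return follower_last.term == leader_preceding.term &&
         follower_last.index == leader_preceding.index;
}

int main() {
  OpId follower_last{2, 9};      // replica's last appended entry is 2.9
  OpId leader_preceding{2, 10};  // leader assumed the replica already had 2.10
  int next_index = 11;           // leader's optimistic starting point

  while (!PrecedingOpMatches(follower_last, leader_preceding)) {
    std::printf("LMP_MISMATCH: replica at %d.%d, leader preceding %d.%d\n",
                follower_last.term, follower_last.index,
                leader_preceding.term, leader_preceding.index);
    --next_index;                                // resend starting one entry earlier
    leader_preceding = OpId{2, next_index - 1};  // preceding op for the smaller batch
  }
  std::printf("Match at %d.%d; leader resumes replication from index %d\n",
              follower_last.term, follower_last.index, next_index);
  return 0;
}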
I20250629 01:58:37.525693 21559 raft_consensus.cc:2953] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 [term 2 LEADER]: Committing config change with OpId 2.10: config changed from index 9 to 10, a30df5c9a5634e1b9e6f378d3d3db88d (127.17.83.65) changed from NON_VOTER to VOTER. New config: { opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } } peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: NON_VOTER last_known_addr { host: "127.17.83.66" port: 45511 } attrs { promote: true } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } } }
I20250629 01:58:37.531404 21311 raft_consensus.cc:2953] T 35f5dbaa898b47369f784618572fd3c8 P d2f2badc6b0341918964e73a7e3a0fe5 [term 2 LEARNER]: Committing config change with OpId 2.10: config changed from index 9 to 10, a30df5c9a5634e1b9e6f378d3d3db88d (127.17.83.65) changed from NON_VOTER to VOTER. New config: { opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } } peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: NON_VOTER last_known_addr { host: "127.17.83.66" port: 45511 } attrs { promote: true } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } } }
I20250629 01:58:37.535534 21167 raft_consensus.cc:2953] T 35f5dbaa898b47369f784618572fd3c8 P a30df5c9a5634e1b9e6f378d3d3db88d [term 2 FOLLOWER]: Committing config change with OpId 2.10: config changed from index 9 to 10, a30df5c9a5634e1b9e6f378d3d3db88d (127.17.83.65) changed from NON_VOTER to VOTER. New config: { opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } } peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: NON_VOTER last_known_addr { host: "127.17.83.66" port: 45511 } attrs { promote: true } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } } }
I20250629 01:58:37.538105 21026 catalog_manager.cc:5582] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 reported cstate change: config changed from index 9 to 10, a30df5c9a5634e1b9e6f378d3d3db88d (127.17.83.65) changed from NON_VOTER to VOTER. New cstate: current_term: 2 leader_uuid: "8e0c882ed93e411495d3bc24bb61eb11" committed_config { opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: NON_VOTER last_known_addr { host: "127.17.83.66" port: 45511 } attrs { promote: true } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } health_report { overall_health: HEALTHY } } }
I20250629 01:58:37.544096 21559 raft_consensus.cc:1062] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11: attempting to promote NON_VOTER d2f2badc6b0341918964e73a7e3a0fe5 to VOTER
I20250629 01:58:37.545039 21499 raft_consensus.cc:1062] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11: attempting to promote NON_VOTER d2f2badc6b0341918964e73a7e3a0fe5 to VOTER
I20250629 01:58:37.548899 21499 consensus_queue.cc:237] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 10, Committed index: 10, Last appended: 2.10, Last appended by leader: 6, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } } peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } attrs { promote: false } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } }
I20250629 01:58:37.555524 21167 raft_consensus.cc:1273] T 35f5dbaa898b47369f784618572fd3c8 P a30df5c9a5634e1b9e6f378d3d3db88d [term 2 FOLLOWER]: Refusing update from remote peer 8e0c882ed93e411495d3bc24bb61eb11: Log matching property violated. Preceding OpId in replica: term: 2 index: 10. Preceding OpId from leader: term: 2 index: 11. (index mismatch)
I20250629 01:58:37.556059 21311 raft_consensus.cc:1273] T 35f5dbaa898b47369f784618572fd3c8 P d2f2badc6b0341918964e73a7e3a0fe5 [term 2 LEARNER]: Refusing update from remote peer 8e0c882ed93e411495d3bc24bb61eb11: Log matching property violated. Preceding OpId in replica: term: 2 index: 10. Preceding OpId from leader: term: 2 index: 11. (index mismatch)
I20250629 01:58:37.559142 21558 consensus_queue.cc:1035] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 [LEADER]: Connected to new peer: Peer: permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 11, Last known committed idx: 10, Time since last communication: 0.001s
W20250629 01:58:37.569034 21559 raft_consensus.cc:1066] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11: Unable to promote non-voter d2f2badc6b0341918964e73a7e3a0fe5: Illegal state: RaftConfig change currently pending. Only one is allowed at a time.
Committed config: opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } } peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: NON_VOTER last_known_addr { host: "127.17.83.66" port: 45511 } attrs { promote: true } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } }.
Pending config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } } peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } attrs { promote: false } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } }
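[Editor's note] The warning above is the guard that allows only one Raft config change in flight per tablet: the change promoting d2f2badc... (pending opid_index 11) had not committed yet, so the concurrent promotion attempt was rejected with "Illegal state". A self-contained C++ sketch of that gate, using hypothetical names (ConfigChangeGate, TryStart) rather than Kudu's RaftConsensus internals:

// Sketch: one pending config change at a time.
#include <cstdio>
#include <optional>
#include <string>

struct ConfigChange { int opid_index; std::string description; };

class ConfigChangeGate {
 public:
  // Rejects a new change while another is still pending, mirroring the
  // "Illegal state" warning in the log.
  bool TryStart(const ConfigChange& change) {
    if (pending_) {
      std::printf("Illegal state: change at opid_index %d still pending; "
                  "rejecting \"%s\"\n",
                  pending_->opid_index, change.description.c_str());
      return false;
    }
    pending_ = change;
    return true;
  }
  void Commit() { pending_.reset(); }  // called once a majority replicated it

 private:
  std::optional<ConfigChange> pending_;
};

int main() {
  ConfigChangeGate gate;
  // First promotion attempt wins the race and becomes the pending change (2.11).
  gate.TryStart({11, "promote d2f2badc6b0341918964e73a7e3a0fe5 to VOTER"});
  // The concurrent attempt is rejected while 2.11 is still pending.
  gate.TryStart({11, "promote d2f2badc6b0341918964e73a7e3a0fe5 to VOTER"});
  gate.Commit();  // after commit of 2.11 a later change could proceed
  return 0;
}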
I20250629 01:58:37.569597 21558 consensus_queue.cc:1035] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 [LEADER]: Connected to new peer: Peer: permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 11, Last known committed idx: 10, Time since last communication: 0.000s
I20250629 01:58:37.571604 21585 raft_consensus.cc:2953] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 [term 2 LEADER]: Committing config change with OpId 2.11: config changed from index 10 to 11, d2f2badc6b0341918964e73a7e3a0fe5 (127.17.83.66) changed from NON_VOTER to VOTER. New config: { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } } peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } attrs { promote: false } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } } }
I20250629 01:58:37.580427 21168 raft_consensus.cc:2953] T 35f5dbaa898b47369f784618572fd3c8 P a30df5c9a5634e1b9e6f378d3d3db88d [term 2 FOLLOWER]: Committing config change with OpId 2.11: config changed from index 10 to 11, d2f2badc6b0341918964e73a7e3a0fe5 (127.17.83.66) changed from NON_VOTER to VOTER. New config: { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } } peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } attrs { promote: false } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } } }
I20250629 01:58:37.583137 21311 raft_consensus.cc:2953] T 35f5dbaa898b47369f784618572fd3c8 P d2f2badc6b0341918964e73a7e3a0fe5 [term 2 FOLLOWER]: Committing config change with OpId 2.11: config changed from index 10 to 11, d2f2badc6b0341918964e73a7e3a0fe5 (127.17.83.66) changed from NON_VOTER to VOTER. New config: { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } } peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } attrs { promote: false } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } } }
Master Summary
UUID | Address | Status
----------------------------------+---------------------+---------
e51a6e2ab5fe446482623814821c0960 | 127.17.83.126:44751 | HEALTHY
Unusual flags for Master:
Flag | Value | Tags | Master
----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
ipki_ca_key_size | 768 | experimental | all 1 server(s) checked
ipki_server_key_size | 768 | experimental | all 1 server(s) checked
never_fsync | true | unsafe,advanced | all 1 server(s) checked
openssl_security_level_override | 0 | unsafe,hidden | all 1 server(s) checked
rpc_reuseport | true | experimental | all 1 server(s) checked
rpc_server_allow_ephemeral_ports | true | unsafe | all 1 server(s) checked
server_dump_info_format | pb | hidden | all 1 server(s) checked
server_dump_info_path | /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data/info.pb | hidden | all 1 server(s) checked
tsk_num_rsa_bits | 512 | experimental | all 1 server(s) checked
Flags of checked categories for Master:
Flag | Value | Master
---------------------+--------------------+-------------------------
builtin_ntp_servers | 127.17.83.84:42817 | all 1 server(s) checked
time_source | builtin | all 1 server(s) checked
Tablet Server Summary
UUID | Address | Status | Location | Tablet Leaders | Active Scanners
----------------------------------+--------------------+---------+----------+----------------+-----------------
8e0c882ed93e411495d3bc24bb61eb11 | 127.17.83.67:37557 | HEALTHY | <none> | 1 | 0
a30df5c9a5634e1b9e6f378d3d3db88d | 127.17.83.65:43367 | HEALTHY | <none> | 1 | 0
d2f2badc6b0341918964e73a7e3a0fe5 | 127.17.83.66:45511 | HEALTHY | <none> | 1 | 0
Tablet Server Location Summary
Location | Count
----------+---------
<none> | 3
Unusual flags for Tablet Server:
Flag | Value | Tags | Tablet Server
----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
ipki_server_key_size | 768 | experimental | all 3 server(s) checked
local_ip_for_outbound_sockets | 127.17.83.65 | experimental | 127.17.83.65:43367
local_ip_for_outbound_sockets | 127.17.83.66 | experimental | 127.17.83.66:45511
local_ip_for_outbound_sockets | 127.17.83.67 | experimental | 127.17.83.67:37557
never_fsync | true | unsafe,advanced | all 3 server(s) checked
openssl_security_level_override | 0 | unsafe,hidden | all 3 server(s) checked
rpc_server_allow_ephemeral_ports | true | unsafe | all 3 server(s) checked
server_dump_info_format | pb | hidden | all 3 server(s) checked
server_dump_info_path | /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data/info.pb | hidden | 127.17.83.65:43367
server_dump_info_path | /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data/info.pb | hidden | 127.17.83.66:45511
server_dump_info_path | /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data/info.pb | hidden | 127.17.83.67:37557
Flags of checked categories for Tablet Server:
Flag | Value | Tablet Server
---------------------+--------------------+-------------------------
builtin_ntp_servers | 127.17.83.84:42817 | all 3 server(s) checked
time_source | builtin | all 3 server(s) checked
Version Summary
Version | Servers
-----------------+-------------------------
1.18.0-SNAPSHOT | all 4 server(s) checked
Tablet Summary
The cluster doesn't have any matching system tables
Summary by table
Name | RF | Status | Total Tablets | Healthy | Recovering | Under-replicated | Unavailable
------------+----+---------+---------------+---------+------------+------------------+-------------
TestTable | 3 | HEALTHY | 1 | 1 | 0 | 0 | 0
TestTable1 | 3 | HEALTHY | 1 | 1 | 0 | 0 | 0
TestTable2 | 1 | HEALTHY | 1 | 1 | 0 | 0 | 0
Tablet Replica Count Summary
Statistic | Replica Count
----------------+---------------
Minimum | 2
First Quartile | 2
Median | 2
Third Quartile | 3
Maximum | 3
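[Editor's note] These quartiles follow from the per-table summary above: TestTable and TestTable1 each place 3 replicas and TestTable2 places 1, i.e. 3 + 3 + 1 = 7 replicas spread across the 3 tablet servers as {3, 2, 2}. The sketch below reproduces the summary with a simple nearest-rank percentile, which happens to match these numbers; it does not assert which quantile method ksck actually uses.

// Sketch: min/quartiles/median/max over per-server replica counts {3, 2, 2}.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

int NearestRank(std::vector<int> v, double p) {   // p in (0, 1]
  std::sort(v.begin(), v.end());
  size_t rank = static_cast<size_t>(std::max(1.0, std::ceil(p * v.size())));
  return v[rank - 1];
}

int main() {
  std::vector<int> replicas_per_ts = {3, 2, 2};
  std::printf("Minimum        | %d\n",
              *std::min_element(replicas_per_ts.begin(), replicas_per_ts.end()));
  std::printf("First Quartile | %d\n", NearestRank(replicas_per_ts, 0.25));
  std::printf("Median         | %d\n", NearestRank(replicas_per_ts, 0.50));
  std::printf("Third Quartile | %d\n", NearestRank(replicas_per_ts, 0.75));
  std::printf("Maximum        | %d\n",
              *std::max_element(replicas_per_ts.begin(), replicas_per_ts.end()));
  return 0;
}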
Total Count Summary
| Total Count
----------------+-------------
Masters | 1
Tablet Servers | 3
Tables | 3
Tablets | 3
Replicas | 7
==================
Warnings:
==================
Some masters have unsafe, experimental, or hidden flags set
Some tablet servers have unsafe, experimental, or hidden flags set
OK
I20250629 01:58:37.604034 17741 log_verifier.cc:126] Checking tablet 35f5dbaa898b47369f784618572fd3c8
I20250629 01:58:37.603372 21024 catalog_manager.cc:5582] T 35f5dbaa898b47369f784618572fd3c8 P a30df5c9a5634e1b9e6f378d3d3db88d reported cstate change: config changed from index 10 to 11, d2f2badc6b0341918964e73a7e3a0fe5 (127.17.83.66) changed from NON_VOTER to VOTER. New cstate: current_term: 2 leader_uuid: "8e0c882ed93e411495d3bc24bb61eb11" committed_config { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } } peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } attrs { promote: false } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } } }
I20250629 01:58:37.695112 17741 log_verifier.cc:177] Verified matching terms for 11 ops in tablet 35f5dbaa898b47369f784618572fd3c8
I20250629 01:58:37.695423 17741 log_verifier.cc:126] Checking tablet 63f8fd3146fb4036bf60a5a070659dc8
I20250629 01:58:37.783675 17741 log_verifier.cc:177] Verified matching terms for 13 ops in tablet 63f8fd3146fb4036bf60a5a070659dc8
I20250629 01:58:37.783923 17741 log_verifier.cc:126] Checking tablet aa66fb7daabf41dcab7d7e05bdd17b4a
I20250629 01:58:37.808307 21566 raft_consensus.cc:1062] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5: attempting to promote NON_VOTER 8e0c882ed93e411495d3bc24bb61eb11 to VOTER
I20250629 01:58:37.810137 21566 consensus_queue.cc:237] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 13, Committed index: 13, Last appended: 2.13, Last appended by leader: 9, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } } peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } attrs { promote: false } }
I20250629 01:58:37.814102 17741 log_verifier.cc:177] Verified matching terms for 8 ops in tablet aa66fb7daabf41dcab7d7e05bdd17b4a
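[Editor's note] The log_verifier lines above check one invariant per tablet: every replica's WAL must record the same term for each op index. Below is a compact, self-contained C++ sketch of that check over hypothetical index-to-term maps and made-up sample data; the real verifier in log_verifier.cc reads the actual WAL segments.

// Sketch: verify that all replicas agree on the term of every op index.
#include <cstdio>
#include <map>
#include <vector>

using IndexToTerm = std::map<int, int>;

bool TermsMatch(const std::vector<IndexToTerm>& replicas) {
  IndexToTerm agreed;
  for (const auto& wal : replicas) {
    for (const auto& [index, term] : wal) {
      auto [it, inserted] = agreed.emplace(index, term);
      if (!inserted && it->second != term) {
        std::printf("divergence at index %d: term %d vs %d\n",
                    index, it->second, term);
        return false;
      }
    }
  }
  std::printf("Verified matching terms for %zu ops\n", agreed.size());
  return true;
}

int main() {
  // Hypothetical: three replicas whose indexes 1..11 were all written in term 2.
  IndexToTerm wal;
  for (int i = 1; i <= 11; ++i) wal[i] = 2;
  return TermsMatch({wal, wal, wal}) ? 0 : 1;
}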
I20250629 01:58:37.814678 17741 external_mini_cluster.cc:1620] Killing /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu with pid 20993
I20250629 01:58:37.815261 21476 raft_consensus.cc:1273] T 63f8fd3146fb4036bf60a5a070659dc8 P 8e0c882ed93e411495d3bc24bb61eb11 [term 2 LEARNER]: Refusing update from remote peer d2f2badc6b0341918964e73a7e3a0fe5: Log matching property violated. Preceding OpId in replica: term: 2 index: 13. Preceding OpId from leader: term: 2 index: 14. (index mismatch)
I20250629 01:58:37.816473 21168 raft_consensus.cc:1273] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d [term 2 FOLLOWER]: Refusing update from remote peer d2f2badc6b0341918964e73a7e3a0fe5: Log matching property violated. Preceding OpId in replica: term: 2 index: 13. Preceding OpId from leader: term: 2 index: 14. (index mismatch)
I20250629 01:58:37.816996 21591 consensus_queue.cc:1035] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 [LEADER]: Connected to new peer: Peer: permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 14, Last known committed idx: 13, Time since last communication: 0.000s
I20250629 01:58:37.817718 21566 consensus_queue.cc:1035] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 [LEADER]: Connected to new peer: Peer: permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 14, Last known committed idx: 13, Time since last communication: 0.000s
I20250629 01:58:37.824774 21591 raft_consensus.cc:2953] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 [term 2 LEADER]: Committing config change with OpId 2.14: config changed from index 13 to 14, 8e0c882ed93e411495d3bc24bb61eb11 (127.17.83.67) changed from NON_VOTER to VOTER. New config: { opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } } peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } attrs { promote: false } } }
I20250629 01:58:37.827672 21476 raft_consensus.cc:2953] T 63f8fd3146fb4036bf60a5a070659dc8 P 8e0c882ed93e411495d3bc24bb61eb11 [term 2 FOLLOWER]: Committing config change with OpId 2.14: config changed from index 13 to 14, 8e0c882ed93e411495d3bc24bb61eb11 (127.17.83.67) changed from NON_VOTER to VOTER. New config: { opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } } peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } attrs { promote: false } } }
I20250629 01:58:37.828826 21167 raft_consensus.cc:2953] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d [term 2 FOLLOWER]: Committing config change with OpId 2.14: config changed from index 13 to 14, 8e0c882ed93e411495d3bc24bb61eb11 (127.17.83.67) changed from NON_VOTER to VOTER. New config: { opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } } peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } attrs { promote: false } } }
W20250629 01:58:37.849802 21408 connection.cc:537] client connection to 127.17.83.126:44751 recv error: Network error: recv error from unknown peer: Transport endpoint is not connected (error 107)
W20250629 01:58:37.850241 21100 connection.cc:537] client connection to 127.17.83.126:44751 recv error: Network error: recv error from unknown peer: Transport endpoint is not connected (error 107)
W20250629 01:58:37.850729 21526 heartbeater.cc:646] Failed to heartbeat to 127.17.83.126:44751 (0 consecutive failures): Network error: Failed to send heartbeat to master: recv error from unknown peer: Transport endpoint is not connected (error 107)
I20250629 01:58:37.851099 17741 minidump.cc:252] Setting minidump size limit to 20M
W20250629 01:58:37.851312 21244 connection.cc:537] client connection to 127.17.83.126:44751 recv error: Network error: recv error from unknown peer: Transport endpoint is not connected (error 107)
W20250629 01:58:37.851511 21218 heartbeater.cc:646] Failed to heartbeat to 127.17.83.126:44751 (0 consecutive failures): Network error: Failed to send heartbeat to master: recv error from unknown peer: Transport endpoint is not connected (error 107)
W20250629 01:58:37.852200 21357 heartbeater.cc:646] Failed to heartbeat to 127.17.83.126:44751 (0 consecutive failures): Network error: Failed to send heartbeat to master: recv error from unknown peer: Transport endpoint is not connected (error 107)
I20250629 01:58:37.852874 17741 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250629 01:58:37.854216 17741 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250629 01:58:37.863971 21597 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:58:37.864187 21596 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:58:37.865310 21599 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:58:37.865397 17741 server_base.cc:1048] running on GCE node
I20250629 01:58:37.954445 17741 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250629 01:58:37.954653 17741 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250629 01:58:37.954813 17741 hybrid_clock.cc:648] HybridClock initialized: now 1751162317954790 us; error 0 us; skew 500 ppm
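[Editor's note] A note on the "skew 500 ppm" figure above: under the usual reading of a ppm drift bound, the maximum clock error grows by up to 500 microseconds per second of wall time since the last reference reading; with the 'system_unsync' source the reported 0 us is a nominal starting point rather than a measured bound, which is what the preceding warning cautions about. A tiny illustrative calculation:

// Sketch: how a ppm skew bound inflates the maximum clock error over time.
#include <cstdio>

long long MaxErrorUs(long long initial_error_us, int skew_ppm, long long elapsed_us) {
  return initial_error_us + (elapsed_us * skew_ppm) / 1000000;
}

int main() {
  // With a 500 ppm bound, each second of elapsed time adds up to 500 us of
  // possible error on top of the initial estimate.
  std::printf("after 1s:  %lld us\n", MaxErrorUs(0, 500, 1000000));    // 500
  std::printf("after 10s: %lld us\n", MaxErrorUs(0, 500, 10000000));   // 5000
  return 0;
}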
I20250629 01:58:37.955497 17741 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250629 01:58:37.958702 17741 webserver.cc:469] Webserver started at http://0.0.0.0:35881/ using document root <none> and password file <none>
I20250629 01:58:37.959443 17741 fs_manager.cc:362] Metadata directory not provided
I20250629 01:58:37.959609 17741 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250629 01:58:37.964260 17741 fs_manager.cc:714] Time spent opening directory manager: real 0.003s user 0.004s sys 0.000s
I20250629 01:58:37.967620 21604 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250629 01:58:37.968559 17741 fs_manager.cc:730] Time spent opening block manager: real 0.002s user 0.002s sys 0.001s
I20250629 01:58:37.968868 17741 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
uuid: "e51a6e2ab5fe446482623814821c0960"
format_stamp: "Formatted at 2025-06-29 01:58:16 on dist-test-slave-v1mb"
I20250629 01:58:37.970319 17741 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250629 01:58:37.992750 17741 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250629 01:58:37.994129 17741 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250629 01:58:37.994517 17741 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250629 01:58:38.002733 17741 sys_catalog.cc:263] Verifying existing consensus state
W20250629 01:58:38.005985 17741 sys_catalog.cc:243] For a single master config, on-disk Raft master: 127.17.83.126:44751 exists but no master address supplied!
I20250629 01:58:38.007805 17741 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960: Bootstrap starting.
I20250629 01:58:38.047263 17741 log.cc:826] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960: Log is configured to *not* fsync() on all Append() calls
I20250629 01:58:38.102900 17741 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960: Bootstrap replayed 1/1 log segments. Stats: ops{read=29 overwritten=0 applied=29 ignored=0} inserts{seen=13 ignored=0} mutations{seen=20 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250629 01:58:38.103641 17741 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960: Bootstrap complete.
I20250629 01:58:38.116027 17741 raft_consensus.cc:357] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [term 3 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e51a6e2ab5fe446482623814821c0960" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 44751 } }
I20250629 01:58:38.116593 17741 raft_consensus.cc:738] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [term 3 FOLLOWER]: Becoming Follower/Learner. State: Replica: e51a6e2ab5fe446482623814821c0960, State: Initialized, Role: FOLLOWER
I20250629 01:58:38.117236 17741 consensus_queue.cc:260] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 29, Last appended: 3.29, Last appended by leader: 29, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e51a6e2ab5fe446482623814821c0960" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 44751 } }
I20250629 01:58:38.117679 17741 raft_consensus.cc:397] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [term 3 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250629 01:58:38.117929 17741 raft_consensus.cc:491] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [term 3 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250629 01:58:38.118265 17741 raft_consensus.cc:3058] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [term 3 FOLLOWER]: Advancing to term 4
I20250629 01:58:38.123492 17741 raft_consensus.cc:513] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [term 4 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e51a6e2ab5fe446482623814821c0960" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 44751 } }
I20250629 01:58:38.124089 17741 leader_election.cc:304] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [CANDIDATE]: Term 4 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: e51a6e2ab5fe446482623814821c0960; no voters:
I20250629 01:58:38.125082 17741 leader_election.cc:290] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [CANDIDATE]: Term 4 election: Requested vote from peers
I20250629 01:58:38.125305 21611 raft_consensus.cc:2802] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [term 4 FOLLOWER]: Leader election won for term 4
I20250629 01:58:38.126430 21611 raft_consensus.cc:695] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [term 4 LEADER]: Becoming Leader. State: Replica: e51a6e2ab5fe446482623814821c0960, State: Running, Role: LEADER
I20250629 01:58:38.127053 21611 consensus_queue.cc:237] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 29, Committed index: 29, Last appended: 3.29, Last appended by leader: 29, Current term: 4, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e51a6e2ab5fe446482623814821c0960" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 44751 } }
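[Editor's note] The single-replica election above ("Only one voter in the Raft config. Triggering election immediately", then a 1-of-1 "yes" vote and "Majority size: 1") is just the Raft majority rule at N = 1: a candidate needs floor(N/2) + 1 votes, so its own vote already suffices. A minimal illustration:

// Sketch: majority size and why one voter elects itself immediately.
#include <cstdio>

int MajoritySize(int num_voters) { return num_voters / 2 + 1; }

int main() {
  for (int n : {1, 3, 5}) {
    std::printf("voters=%d majority=%d\n", n, MajoritySize(n));
  }
  int yes_votes = 1;  // the candidate votes for itself
  if (yes_votes >= MajoritySize(1)) {
    std::printf("candidate won: received 1 responses out of 1 voters\n");
  }
  return 0;
}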
I20250629 01:58:38.133831 21612 sys_catalog.cc:455] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 4 leader_uuid: "e51a6e2ab5fe446482623814821c0960" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e51a6e2ab5fe446482623814821c0960" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 44751 } } }
I20250629 01:58:38.134346 21612 sys_catalog.cc:458] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [sys.catalog]: This master's current role is: LEADER
I20250629 01:58:38.134357 21613 sys_catalog.cc:455] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [sys.catalog]: SysCatalogTable state changed. Reason: New leader e51a6e2ab5fe446482623814821c0960. Latest consensus state: current_term: 4 leader_uuid: "e51a6e2ab5fe446482623814821c0960" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e51a6e2ab5fe446482623814821c0960" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 44751 } } }
I20250629 01:58:38.134959 21613 sys_catalog.cc:458] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [sys.catalog]: This master's current role is: LEADER
I20250629 01:58:38.158680 17741 tablet_replica.cc:331] stopping tablet replica
I20250629 01:58:38.159271 17741 raft_consensus.cc:2241] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [term 4 LEADER]: Raft consensus shutting down.
I20250629 01:58:38.159662 17741 raft_consensus.cc:2270] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [term 4 FOLLOWER]: Raft consensus is shut down!
I20250629 01:58:38.161748 17741 master.cc:561] Master@0.0.0.0:7051 shutting down...
W20250629 01:58:38.162216 17741 acceptor_pool.cc:196] Could not shut down acceptor socket on 0.0.0.0:7051: Network error: shutdown error: Transport endpoint is not connected (error 107)
I20250629 01:58:38.197876 17741 master.cc:583] Master@0.0.0.0:7051 shutdown complete.
I20250629 01:58:43.164287 17741 external_mini_cluster.cc:1620] Killing /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu with pid 21063
I20250629 01:58:43.197635 17741 external_mini_cluster.cc:1620] Killing /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu with pid 21222
I20250629 01:58:43.221468 17741 external_mini_cluster.cc:1620] Killing /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu with pid 21361
I20250629 01:58:43.250404 17741 external_mini_cluster.cc:1351] Running /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
/tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.17.83.126:44751
--webserver_interface=127.17.83.126
--webserver_port=41719
--builtin_ntp_servers=127.17.83.84:42817
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.17.83.126:44751 with env {}
W20250629 01:58:43.539525 21686 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:58:43.540091 21686 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250629 01:58:43.540490 21686 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:58:43.567987 21686 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250629 01:58:43.568250 21686 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250629 01:58:43.568475 21686 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250629 01:58:43.568686 21686 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250629 01:58:43.600608 21686 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.17.83.84:42817
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.17.83.126:44751
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.17.83.126:44751
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.17.83.126
--webserver_port=41719
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.18.0-SNAPSHOT
revision f0776864a65c19ebdd909053f3e624e8bcc25cee
build type FASTDEBUG
built by None at 29 Jun 2025 01:43:20 UTC on 5fd53c4cbb9d
build id 6821
TSAN enabled
I20250629 01:58:43.601848 21686 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250629 01:58:43.603389 21686 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250629 01:58:43.616118 21695 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:58:43.616195 21692 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:58:43.616356 21693 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:58:44.716557 21694 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection time-out
I20250629 01:58:44.716727 21686 server_base.cc:1043] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250629 01:58:44.720381 21686 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250629 01:58:44.722826 21686 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250629 01:58:44.724187 21686 hybrid_clock.cc:648] HybridClock initialized: now 1751162324724137 us; error 49 us; skew 500 ppm
I20250629 01:58:44.724884 21686 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250629 01:58:44.730746 21686 webserver.cc:469] Webserver started at http://127.17.83.126:41719/ using document root <none> and password file <none>
I20250629 01:58:44.731560 21686 fs_manager.cc:362] Metadata directory not provided
I20250629 01:58:44.731804 21686 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250629 01:58:44.739169 21686 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.006s sys 0.001s
I20250629 01:58:44.743330 21703 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250629 01:58:44.744158 21686 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.000s sys 0.004s
I20250629 01:58:44.744419 21686 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
uuid: "e51a6e2ab5fe446482623814821c0960"
format_stamp: "Formatted at 2025-06-29 01:58:16 on dist-test-slave-v1mb"
I20250629 01:58:44.746054 21686 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250629 01:58:44.802557 21686 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250629 01:58:44.803959 21686 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250629 01:58:44.804383 21686 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250629 01:58:44.867098 21686 rpc_server.cc:307] RPC server started. Bound to: 127.17.83.126:44751
I20250629 01:58:44.867177 21754 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.17.83.126:44751 every 8 connection(s)
I20250629 01:58:44.869560 21686 server_base.cc:1180] Dumped server information to /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250629 01:58:44.872188 17741 external_mini_cluster.cc:1413] Started /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu as pid 21686
I20250629 01:58:44.874493 17741 external_mini_cluster.cc:1351] Running /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
/tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.17.83.65:43367
--local_ip_for_outbound_sockets=127.17.83.65
--tserver_master_addrs=127.17.83.126:44751
--webserver_port=33231
--webserver_interface=127.17.83.65
--builtin_ntp_servers=127.17.83.84:42817
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250629 01:58:44.883772 21755 sys_catalog.cc:263] Verifying existing consensus state
I20250629 01:58:44.888584 21755 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960: Bootstrap starting.
I20250629 01:58:44.898097 21755 log.cc:826] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960: Log is configured to *not* fsync() on all Append() calls
I20250629 01:58:44.965698 21755 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960: Bootstrap replayed 1/1 log segments. Stats: ops{read=33 overwritten=0 applied=33 ignored=0} inserts{seen=15 ignored=0} mutations{seen=22 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250629 01:58:44.966408 21755 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960: Bootstrap complete.
I20250629 01:58:44.983855 21755 raft_consensus.cc:357] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [term 5 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e51a6e2ab5fe446482623814821c0960" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 44751 } }
I20250629 01:58:44.985591 21755 raft_consensus.cc:738] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [term 5 FOLLOWER]: Becoming Follower/Learner. State: Replica: e51a6e2ab5fe446482623814821c0960, State: Initialized, Role: FOLLOWER
I20250629 01:58:44.986299 21755 consensus_queue.cc:260] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 33, Last appended: 5.33, Last appended by leader: 33, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e51a6e2ab5fe446482623814821c0960" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 44751 } }
I20250629 01:58:44.986722 21755 raft_consensus.cc:397] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [term 5 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250629 01:58:44.986956 21755 raft_consensus.cc:491] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [term 5 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250629 01:58:44.987257 21755 raft_consensus.cc:3058] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [term 5 FOLLOWER]: Advancing to term 6
I20250629 01:58:44.991989 21755 raft_consensus.cc:513] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [term 6 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e51a6e2ab5fe446482623814821c0960" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 44751 } }
I20250629 01:58:44.992532 21755 leader_election.cc:304] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [CANDIDATE]: Term 6 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: e51a6e2ab5fe446482623814821c0960; no voters:
I20250629 01:58:44.994565 21755 leader_election.cc:290] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [CANDIDATE]: Term 6 election: Requested vote from peers
I20250629 01:58:44.994988 21759 raft_consensus.cc:2802] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [term 6 FOLLOWER]: Leader election won for term 6
I20250629 01:58:44.998227 21759 raft_consensus.cc:695] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [term 6 LEADER]: Becoming Leader. State: Replica: e51a6e2ab5fe446482623814821c0960, State: Running, Role: LEADER
I20250629 01:58:44.998989 21759 consensus_queue.cc:237] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 33, Committed index: 33, Last appended: 5.33, Last appended by leader: 33, Current term: 6, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e51a6e2ab5fe446482623814821c0960" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 44751 } }
I20250629 01:58:44.999439 21755 sys_catalog.cc:564] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [sys.catalog]: configured and running, proceeding with master startup.
I20250629 01:58:45.008491 21760 sys_catalog.cc:455] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 6 leader_uuid: "e51a6e2ab5fe446482623814821c0960" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e51a6e2ab5fe446482623814821c0960" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 44751 } } }
I20250629 01:58:45.010100 21760 sys_catalog.cc:458] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [sys.catalog]: This master's current role is: LEADER
I20250629 01:58:45.009544 21761 sys_catalog.cc:455] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [sys.catalog]: SysCatalogTable state changed. Reason: New leader e51a6e2ab5fe446482623814821c0960. Latest consensus state: current_term: 6 leader_uuid: "e51a6e2ab5fe446482623814821c0960" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e51a6e2ab5fe446482623814821c0960" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 44751 } } }
I20250629 01:58:45.011649 21761 sys_catalog.cc:458] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960 [sys.catalog]: This master's current role is: LEADER
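The sys catalog election above is the single-replica fast path: because the committed config contains exactly one voter, the replica does not wait for an election timeout; it advances its term, votes for itself, wins 1-of-1, and becomes leader. A minimal C++ sketch of that decision follows, using hypothetical Peer/RaftConfig stand-ins rather than Kudu's actual consensus classes.

    #include <string>
    #include <vector>

    // Hypothetical, simplified stand-ins for illustration only.
    struct Peer {
      std::string uuid;
      bool is_voter;
    };
    struct RaftConfig {
      std::vector<Peer> peers;
    };

    int CountVoters(const RaftConfig& config) {
      int voters = 0;
      for (const auto& p : config.peers) {
        if (p.is_voter) ++voters;
      }
      return voters;
    }

    // Mirrors the logged behavior: a single-voter config triggers an
    // immediate election, and the lone voter wins with its own vote.
    bool ShouldTriggerImmediateElection(const RaftConfig& config) {
      return CountVoters(config) == 1;
    }

    int main() {
      RaftConfig sys_catalog{{{"e51a6e2ab5fe446482623814821c0960", true}}};
      return ShouldTriggerImmediateElection(sys_catalog) ? 0 : 1;
    }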
I20250629 01:58:45.020259 21766 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250629 01:58:45.033456 21766 catalog_manager.cc:671] Loaded metadata for table TestTable [id=1d3100c4c10a44869af35d12503d40d6]
I20250629 01:58:45.035089 21766 catalog_manager.cc:671] Loaded metadata for table TestTable2 [id=2b2de49158564208bc7e05e7ff05a665]
I20250629 01:58:45.036814 21766 catalog_manager.cc:671] Loaded metadata for table TestTable1 [id=ed12c0fe4a09403b8fef6746d56b567d]
I20250629 01:58:45.045116 21766 tablet_loader.cc:96] loaded metadata for tablet 35f5dbaa898b47369f784618572fd3c8 (table TestTable1 [id=ed12c0fe4a09403b8fef6746d56b567d])
I20250629 01:58:45.047045 21766 tablet_loader.cc:96] loaded metadata for tablet 63f8fd3146fb4036bf60a5a070659dc8 (table TestTable [id=1d3100c4c10a44869af35d12503d40d6])
I20250629 01:58:45.048291 21766 tablet_loader.cc:96] loaded metadata for tablet aa66fb7daabf41dcab7d7e05bdd17b4a (table TestTable2 [id=2b2de49158564208bc7e05e7ff05a665])
I20250629 01:58:45.049666 21766 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250629 01:58:45.054957 21766 catalog_manager.cc:1261] Loaded cluster ID: 900c6c66404c42b0bd9faf4c96645d7e
I20250629 01:58:45.055235 21766 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250629 01:58:45.063284 21766 catalog_manager.cc:1506] Loading token signing keys...
I20250629 01:58:45.068539 21766 catalog_manager.cc:5966] T 00000000000000000000000000000000 P e51a6e2ab5fe446482623814821c0960: Loaded TSK: 0
I20250629 01:58:45.069885 21766 catalog_manager.cc:1516] Initializing in-progress tserver states...
W20250629 01:58:45.190399 21757 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:58:45.190898 21757 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250629 01:58:45.191422 21757 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:58:45.223074 21757 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250629 01:58:45.223938 21757 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.17.83.65
I20250629 01:58:45.257685 21757 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.17.83.84:42817
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.17.83.65:43367
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.17.83.65
--webserver_port=33231
--tserver_master_addrs=127.17.83.126:44751
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.17.83.65
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision f0776864a65c19ebdd909053f3e624e8bcc25cee
build type FASTDEBUG
built by None at 29 Jun 2025 01:43:20 UTC on 5fd53c4cbb9d
build id 6821
TSAN enabled
I20250629 01:58:45.258924 21757 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250629 01:58:45.260448 21757 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250629 01:58:45.277143 21783 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:58:45.281754 21782 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:58:45.282431 21757 server_base.cc:1048] running on GCE node
W20250629 01:58:45.282141 21785 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:58:46.422861 21757 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250629 01:58:46.425855 21757 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250629 01:58:46.427341 21757 hybrid_clock.cc:648] HybridClock initialized: now 1751162326427294 us; error 66 us; skew 500 ppm
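The HybridClock line above reports the current time in microseconds together with an error bound (66 us) and an assumed maximum drift rate of 500 ppm. As an illustrative sketch only, not Kudu's actual hybrid_clock implementation, such a bound can be maintained by growing the error at the drift rate for the time elapsed since the last synchronization:

    #include <cstdint>

    // Illustrative only: grow a clock error bound at an assumed maximum
    // drift rate (in ppm) for the microseconds elapsed since the last sync.
    int64_t MaxErrorAfterUs(int64_t error_at_sync_us,
                            int64_t elapsed_us,
                            int64_t skew_ppm) {
      return error_at_sync_us + (elapsed_us * skew_ppm) / 1000000;
    }

    int main() {
      // With the logged values (66 us error, 500 ppm skew), the bound one
      // second after the sync would be at most 66 + 500 = 566 us.
      return MaxErrorAfterUs(66, 1000000, 500) == 566 ? 0 : 1;
    }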
I20250629 01:58:46.428189 21757 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250629 01:58:46.435738 21757 webserver.cc:469] Webserver started at http://127.17.83.65:33231/ using document root <none> and password file <none>
I20250629 01:58:46.436686 21757 fs_manager.cc:362] Metadata directory not provided
I20250629 01:58:46.436909 21757 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250629 01:58:46.445228 21757 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.001s sys 0.004s
I20250629 01:58:46.450873 21792 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250629 01:58:46.452005 21757 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.000s
I20250629 01:58:46.452325 21757 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "a30df5c9a5634e1b9e6f378d3d3db88d"
format_stamp: "Formatted at 2025-06-29 01:58:18 on dist-test-slave-v1mb"
I20250629 01:58:46.454300 21757 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250629 01:58:46.512852 21757 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250629 01:58:46.514267 21757 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250629 01:58:46.514694 21757 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250629 01:58:46.517306 21757 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250629 01:58:46.523943 21799 ts_tablet_manager.cc:542] Loading tablet metadata (0/3 complete)
I20250629 01:58:46.538899 21757 ts_tablet_manager.cc:579] Loaded tablet metadata (3 total tablets, 3 live tablets)
I20250629 01:58:46.539112 21757 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.017s user 0.000s sys 0.002s
I20250629 01:58:46.539420 21757 ts_tablet_manager.cc:594] Registering tablets (0/3 complete)
I20250629 01:58:46.544497 21799 tablet_bootstrap.cc:492] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d: Bootstrap starting.
I20250629 01:58:46.554603 21757 ts_tablet_manager.cc:610] Registered 3 tablets
I20250629 01:58:46.554872 21757 ts_tablet_manager.cc:589] Time spent register tablets: real 0.015s user 0.006s sys 0.006s
I20250629 01:58:46.623586 21799 log.cc:826] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d: Log is configured to *not* fsync() on all Append() calls
I20250629 01:58:46.748528 21757 rpc_server.cc:307] RPC server started. Bound to: 127.17.83.65:43367
I20250629 01:58:46.748728 21906 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.17.83.65:43367 every 8 connection(s)
I20250629 01:58:46.751843 21757 server_base.cc:1180] Dumped server information to /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250629 01:58:46.756186 17741 external_mini_cluster.cc:1413] Started /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu as pid 21757
I20250629 01:58:46.758126 17741 external_mini_cluster.cc:1351] Running /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
/tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.17.83.66:45511
--local_ip_for_outbound_sockets=127.17.83.66
--tserver_master_addrs=127.17.83.126:44751
--webserver_port=34819
--webserver_interface=127.17.83.66
--builtin_ntp_servers=127.17.83.84:42817
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250629 01:58:46.789194 21799 tablet_bootstrap.cc:492] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d: Bootstrap replayed 1/1 log segments. Stats: ops{read=14 overwritten=0 applied=14 ignored=0} inserts{seen=400 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250629 01:58:46.790319 21799 tablet_bootstrap.cc:492] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d: Bootstrap complete.
I20250629 01:58:46.792105 21799 ts_tablet_manager.cc:1397] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d: Time spent bootstrapping tablet: real 0.248s user 0.202s sys 0.040s
I20250629 01:58:46.805452 21907 heartbeater.cc:344] Connected to a master server at 127.17.83.126:44751
I20250629 01:58:46.805979 21907 heartbeater.cc:461] Registering TS with master...
I20250629 01:58:46.807148 21907 heartbeater.cc:507] Master 127.17.83.126:44751 requested a full tablet report, sending...
I20250629 01:58:46.809293 21799 raft_consensus.cc:357] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } } peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } attrs { promote: false } }
I20250629 01:58:46.812192 21720 ts_manager.cc:194] Registered new tserver with Master: a30df5c9a5634e1b9e6f378d3d3db88d (127.17.83.65:43367)
I20250629 01:58:46.812413 21799 raft_consensus.cc:738] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: a30df5c9a5634e1b9e6f378d3d3db88d, State: Initialized, Role: FOLLOWER
I20250629 01:58:46.813472 21799 consensus_queue.cc:260] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 14, Last appended: 2.14, Last appended by leader: 14, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } } peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } attrs { promote: false } }
I20250629 01:58:46.819633 21720 catalog_manager.cc:5582] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d reported cstate change: config changed from index -1 to 14, term changed from 0 to 2, VOTER 8e0c882ed93e411495d3bc24bb61eb11 (127.17.83.67) added, VOTER a30df5c9a5634e1b9e6f378d3d3db88d (127.17.83.65) added, VOTER d2f2badc6b0341918964e73a7e3a0fe5 (127.17.83.66) added. New cstate: current_term: 2 committed_config { opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } } peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } attrs { promote: false } } }
I20250629 01:58:46.823158 21799 ts_tablet_manager.cc:1428] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d: Time spent starting tablet: real 0.031s user 0.024s sys 0.004s
I20250629 01:58:46.823951 21799 tablet_bootstrap.cc:492] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d: Bootstrap starting.
I20250629 01:58:46.880950 21720 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.17.83.65:58383
I20250629 01:58:46.885114 21907 heartbeater.cc:499] Master 127.17.83.126:44751 was elected leader, sending a full tablet report...
I20250629 01:58:46.945170 21799 tablet_bootstrap.cc:492] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d: Bootstrap replayed 1/1 log segments. Stats: ops{read=8 overwritten=0 applied=8 ignored=0} inserts{seen=300 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250629 01:58:46.945806 21799 tablet_bootstrap.cc:492] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d: Bootstrap complete.
I20250629 01:58:46.946910 21799 ts_tablet_manager.cc:1397] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d: Time spent bootstrapping tablet: real 0.123s user 0.094s sys 0.024s
I20250629 01:58:46.948339 21799 raft_consensus.cc:357] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } }
I20250629 01:58:46.948726 21799 raft_consensus.cc:738] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: a30df5c9a5634e1b9e6f378d3d3db88d, State: Initialized, Role: FOLLOWER
I20250629 01:58:46.949151 21799 consensus_queue.cc:260] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 8, Last appended: 2.8, Last appended by leader: 8, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } }
I20250629 01:58:46.949463 21799 raft_consensus.cc:397] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d [term 2 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250629 01:58:46.949697 21799 raft_consensus.cc:491] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d [term 2 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250629 01:58:46.949976 21799 raft_consensus.cc:3058] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d [term 2 FOLLOWER]: Advancing to term 3
I20250629 01:58:46.954783 21799 raft_consensus.cc:513] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d [term 3 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } }
I20250629 01:58:46.955403 21799 leader_election.cc:304] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d [CANDIDATE]: Term 3 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: a30df5c9a5634e1b9e6f378d3d3db88d; no voters:
I20250629 01:58:46.955808 21799 leader_election.cc:290] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d [CANDIDATE]: Term 3 election: Requested vote from peers
I20250629 01:58:46.956076 21912 raft_consensus.cc:2802] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d [term 3 FOLLOWER]: Leader election won for term 3
I20250629 01:58:46.958489 21912 raft_consensus.cc:695] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d [term 3 LEADER]: Becoming Leader. State: Replica: a30df5c9a5634e1b9e6f378d3d3db88d, State: Running, Role: LEADER
I20250629 01:58:46.959005 21799 ts_tablet_manager.cc:1428] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d: Time spent starting tablet: real 0.012s user 0.006s sys 0.008s
I20250629 01:58:46.959628 21799 tablet_bootstrap.cc:492] T 35f5dbaa898b47369f784618572fd3c8 P a30df5c9a5634e1b9e6f378d3d3db88d: Bootstrap starting.
I20250629 01:58:46.959322 21912 consensus_queue.cc:237] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 8, Committed index: 8, Last appended: 2.8, Last appended by leader: 8, Current term: 3, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } }
I20250629 01:58:46.972688 21719 catalog_manager.cc:5582] T aa66fb7daabf41dcab7d7e05bdd17b4a P a30df5c9a5634e1b9e6f378d3d3db88d reported cstate change: term changed from 2 to 3. New cstate: current_term: 3 leader_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } health_report { overall_health: HEALTHY } } }
I20250629 01:58:47.053576 21799 tablet_bootstrap.cc:492] T 35f5dbaa898b47369f784618572fd3c8 P a30df5c9a5634e1b9e6f378d3d3db88d: Bootstrap replayed 1/1 log segments. Stats: ops{read=11 overwritten=0 applied=11 ignored=0} inserts{seen=250 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250629 01:58:47.054235 21799 tablet_bootstrap.cc:492] T 35f5dbaa898b47369f784618572fd3c8 P a30df5c9a5634e1b9e6f378d3d3db88d: Bootstrap complete.
I20250629 01:58:47.055413 21799 ts_tablet_manager.cc:1397] T 35f5dbaa898b47369f784618572fd3c8 P a30df5c9a5634e1b9e6f378d3d3db88d: Time spent bootstrapping tablet: real 0.096s user 0.076s sys 0.019s
I20250629 01:58:47.056905 21799 raft_consensus.cc:357] T 35f5dbaa898b47369f784618572fd3c8 P a30df5c9a5634e1b9e6f378d3d3db88d [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } } peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } attrs { promote: false } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } }
I20250629 01:58:47.057330 21799 raft_consensus.cc:738] T 35f5dbaa898b47369f784618572fd3c8 P a30df5c9a5634e1b9e6f378d3d3db88d [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: a30df5c9a5634e1b9e6f378d3d3db88d, State: Initialized, Role: FOLLOWER
I20250629 01:58:47.057819 21799 consensus_queue.cc:260] T 35f5dbaa898b47369f784618572fd3c8 P a30df5c9a5634e1b9e6f378d3d3db88d [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 11, Last appended: 2.11, Last appended by leader: 11, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } } peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } attrs { promote: false } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } }
I20250629 01:58:47.059412 21799 ts_tablet_manager.cc:1428] T 35f5dbaa898b47369f784618572fd3c8 P a30df5c9a5634e1b9e6f378d3d3db88d: Time spent starting tablet: real 0.004s user 0.004s sys 0.000s
W20250629 01:58:47.115043 21911 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:58:47.115512 21911 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250629 01:58:47.115959 21911 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:58:47.145877 21911 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250629 01:58:47.146725 21911 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.17.83.66
I20250629 01:58:47.180030 21911 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.17.83.84:42817
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.17.83.66:45511
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.17.83.66
--webserver_port=34819
--tserver_master_addrs=127.17.83.126:44751
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.17.83.66
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision f0776864a65c19ebdd909053f3e624e8bcc25cee
build type FASTDEBUG
built by None at 29 Jun 2025 01:43:20 UTC on 5fd53c4cbb9d
build id 6821
TSAN enabled
I20250629 01:58:47.181288 21911 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250629 01:58:47.182843 21911 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250629 01:58:47.197978 21928 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:58:47.198019 21929 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:58:47.198587 21931 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:58:48.337095 21930 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1136 milliseconds
I20250629 01:58:48.337200 21911 server_base.cc:1043] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250629 01:58:48.338302 21911 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250629 01:58:48.340652 21911 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250629 01:58:48.342033 21911 hybrid_clock.cc:648] HybridClock initialized: now 1751162328341976 us; error 50 us; skew 500 ppm
I20250629 01:58:48.342768 21911 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250629 01:58:48.353297 21911 webserver.cc:469] Webserver started at http://127.17.83.66:34819/ using document root <none> and password file <none>
I20250629 01:58:48.354046 21911 fs_manager.cc:362] Metadata directory not provided
I20250629 01:58:48.354202 21911 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250629 01:58:48.361115 21911 fs_manager.cc:714] Time spent opening directory manager: real 0.004s user 0.005s sys 0.000s
I20250629 01:58:48.366214 21938 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250629 01:58:48.367048 21911 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.003s sys 0.000s
I20250629 01:58:48.367343 21911 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "d2f2badc6b0341918964e73a7e3a0fe5"
format_stamp: "Formatted at 2025-06-29 01:58:20 on dist-test-slave-v1mb"
I20250629 01:58:48.369154 21911 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250629 01:58:48.426694 21911 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250629 01:58:48.428071 21911 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250629 01:58:48.428501 21911 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250629 01:58:48.430989 21911 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250629 01:58:48.436389 21945 ts_tablet_manager.cc:542] Loading tablet metadata (0/2 complete)
I20250629 01:58:48.447120 21911 ts_tablet_manager.cc:579] Loaded tablet metadata (2 total tablets, 2 live tablets)
I20250629 01:58:48.447381 21911 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.012s user 0.000s sys 0.002s
I20250629 01:58:48.447645 21911 ts_tablet_manager.cc:594] Registering tablets (0/2 complete)
I20250629 01:58:48.452497 21945 tablet_bootstrap.cc:492] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5: Bootstrap starting.
I20250629 01:58:48.454942 21911 ts_tablet_manager.cc:610] Registered 2 tablets
I20250629 01:58:48.455128 21911 ts_tablet_manager.cc:589] Time spent register tablets: real 0.008s user 0.000s sys 0.004s
I20250629 01:58:48.506306 21945 log.cc:826] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5: Log is configured to *not* fsync() on all Append() calls
I20250629 01:58:48.555737 22002 raft_consensus.cc:491] T 35f5dbaa898b47369f784618572fd3c8 P a30df5c9a5634e1b9e6f378d3d3db88d [term 2 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250629 01:58:48.556306 22002 raft_consensus.cc:513] T 35f5dbaa898b47369f784618572fd3c8 P a30df5c9a5634e1b9e6f378d3d3db88d [term 2 FOLLOWER]: Starting pre-election with config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } } peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } attrs { promote: false } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } }
W20250629 01:58:48.562971 21793 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.17.83.67:37557: connect: Connection refused (error 111)
I20250629 01:58:48.564692 22002 leader_election.cc:290] T 35f5dbaa898b47369f784618572fd3c8 P a30df5c9a5634e1b9e6f378d3d3db88d [CANDIDATE]: Term 3 pre-election: Requested pre-vote from peers 8e0c882ed93e411495d3bc24bb61eb11 (127.17.83.67:37557), d2f2badc6b0341918964e73a7e3a0fe5 (127.17.83.66:45511)
W20250629 01:58:48.570238 21793 leader_election.cc:336] T 35f5dbaa898b47369f784618572fd3c8 P a30df5c9a5634e1b9e6f378d3d3db88d [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer 8e0c882ed93e411495d3bc24bb61eb11 (127.17.83.67:37557): Network error: Client connection negotiation failed: client connection to 127.17.83.67:37557: connect: Connection refused (error 111)
W20250629 01:58:48.571033 21794 leader_election.cc:336] T 35f5dbaa898b47369f784618572fd3c8 P a30df5c9a5634e1b9e6f378d3d3db88d [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer d2f2badc6b0341918964e73a7e3a0fe5 (127.17.83.66:45511): Network error: Client connection negotiation failed: client connection to 127.17.83.66:45511: connect: Connection refused (error 111)
I20250629 01:58:48.571579 21794 leader_election.cc:304] T 35f5dbaa898b47369f784618572fd3c8 P a30df5c9a5634e1b9e6f378d3d3db88d [CANDIDATE]: Term 3 pre-election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: a30df5c9a5634e1b9e6f378d3d3db88d; no voters: 8e0c882ed93e411495d3bc24bb61eb11, d2f2badc6b0341918964e73a7e3a0fe5
I20250629 01:58:48.572316 22002 raft_consensus.cc:2747] T 35f5dbaa898b47369f784618572fd3c8 P a30df5c9a5634e1b9e6f378d3d3db88d [term 2 FOLLOWER]: Leader pre-election lost for term 3. Reason: could not achieve majority
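This pre-election is lost because only the candidate itself votes yes: ts-1 and ts-2 are not serving RPCs yet, and the failed vote requests are tallied as no votes in the logged summary (1 yes, 2 no out of 3 voters). A hedged sketch of that tally, with a hypothetical VoteCount type rather than Kudu's leader_election classes:

    #include <iostream>

    // Hypothetical tally type for illustration; not Kudu's LeaderElection.
    struct VoteCount {
      int voters;     // voters in the active config
      int yes_votes;  // includes the candidate's vote for itself
      int no_votes;   // explicit denials plus unreachable peers
    };

    int MajoritySize(int voters) { return voters / 2 + 1; }

    bool ElectionWon(const VoteCount& v) {
      return v.yes_votes >= MajoritySize(v.voters);
    }

    int main() {
      // Matches the logged summary: 3 voters, 1 yes, 2 no -> candidate lost.
      VoteCount pre_election{3, 1, 2};
      std::cout << (ElectionWon(pre_election) ? "won" : "lost") << '\n';
      return 0;
    }

The same arithmetic explains why the term 3 election for tablet 63f8fd3146fb4036bf60a5a070659dc8 succeeds shortly afterwards: once ts-1 has registered, 2 of the 3 voters answer yes, which meets the majority size of 2 even though ts-2 is still unreachable.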
I20250629 01:58:48.624696 21945 tablet_bootstrap.cc:492] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5: Bootstrap replayed 1/1 log segments. Stats: ops{read=14 overwritten=0 applied=14 ignored=0} inserts{seen=400 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250629 01:58:48.625414 21945 tablet_bootstrap.cc:492] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5: Bootstrap complete.
I20250629 01:58:48.626606 21945 ts_tablet_manager.cc:1397] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5: Time spent bootstrapping tablet: real 0.174s user 0.133s sys 0.036s
I20250629 01:58:48.638800 21911 rpc_server.cc:307] RPC server started. Bound to: 127.17.83.66:45511
I20250629 01:58:48.638957 22056 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.17.83.66:45511 every 8 connection(s)
I20250629 01:58:48.641247 21911 server_base.cc:1180] Dumped server information to /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250629 01:58:48.641141 21945 raft_consensus.cc:357] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } } peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } attrs { promote: false } }
I20250629 01:58:48.643239 21945 raft_consensus.cc:738] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: d2f2badc6b0341918964e73a7e3a0fe5, State: Initialized, Role: FOLLOWER
I20250629 01:58:48.643985 21945 consensus_queue.cc:260] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 14, Last appended: 2.14, Last appended by leader: 14, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } } peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } attrs { promote: false } }
I20250629 01:58:48.648294 21945 ts_tablet_manager.cc:1428] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5: Time spent starting tablet: real 0.021s user 0.023s sys 0.001s
I20250629 01:58:48.649078 21945 tablet_bootstrap.cc:492] T 35f5dbaa898b47369f784618572fd3c8 P d2f2badc6b0341918964e73a7e3a0fe5: Bootstrap starting.
I20250629 01:58:48.650002 17741 external_mini_cluster.cc:1413] Started /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu as pid 21911
I20250629 01:58:48.651914 17741 external_mini_cluster.cc:1351] Running /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
/tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.17.83.67:37557
--local_ip_for_outbound_sockets=127.17.83.67
--tserver_master_addrs=127.17.83.126:44751
--webserver_port=35205
--webserver_interface=127.17.83.67
--builtin_ntp_servers=127.17.83.84:42817
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250629 01:58:48.659526 22057 heartbeater.cc:344] Connected to a master server at 127.17.83.126:44751
I20250629 01:58:48.659919 22057 heartbeater.cc:461] Registering TS with master...
I20250629 01:58:48.660370 22002 raft_consensus.cc:491] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d [term 2 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250629 01:58:48.661067 22057 heartbeater.cc:507] Master 127.17.83.126:44751 requested a full tablet report, sending...
I20250629 01:58:48.660782 22002 raft_consensus.cc:513] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d [term 2 FOLLOWER]: Starting pre-election with config: opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } } peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } attrs { promote: false } }
I20250629 01:58:48.662799 22002 leader_election.cc:290] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d [CANDIDATE]: Term 3 pre-election: Requested pre-vote from peers d2f2badc6b0341918964e73a7e3a0fe5 (127.17.83.66:45511), 8e0c882ed93e411495d3bc24bb61eb11 (127.17.83.67:37557)
I20250629 01:58:48.670279 21719 ts_manager.cc:194] Registered new tserver with Master: d2f2badc6b0341918964e73a7e3a0fe5 (127.17.83.66:45511)
W20250629 01:58:48.676666 21793 leader_election.cc:336] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer 8e0c882ed93e411495d3bc24bb61eb11 (127.17.83.67:37557): Network error: Client connection negotiation failed: client connection to 127.17.83.67:37557: connect: Connection refused (error 111)
I20250629 01:58:48.679407 21719 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.17.83.66:37211
I20250629 01:58:48.684623 22057 heartbeater.cc:499] Master 127.17.83.126:44751 was elected leader, sending a full tablet report...
I20250629 01:58:48.695514 22012 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "63f8fd3146fb4036bf60a5a070659dc8" candidate_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" candidate_term: 3 candidate_status { last_received { term: 2 index: 14 } } ignore_live_leader: false dest_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" is_pre_election: true
I20250629 01:58:48.696237 22012 raft_consensus.cc:2466] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 [term 2 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate a30df5c9a5634e1b9e6f378d3d3db88d in term 2.
I20250629 01:58:48.697387 21794 leader_election.cc:304] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d [CANDIDATE]: Term 3 pre-election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: a30df5c9a5634e1b9e6f378d3d3db88d, d2f2badc6b0341918964e73a7e3a0fe5; no voters: 8e0c882ed93e411495d3bc24bb61eb11
I20250629 01:58:48.698024 22002 raft_consensus.cc:2802] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d [term 2 FOLLOWER]: Leader pre-election won for term 3
I20250629 01:58:48.698333 22002 raft_consensus.cc:491] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d [term 2 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250629 01:58:48.698602 22002 raft_consensus.cc:3058] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d [term 2 FOLLOWER]: Advancing to term 3
I20250629 01:58:48.703177 22002 raft_consensus.cc:513] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d [term 3 FOLLOWER]: Starting leader election with config: opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } } peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } attrs { promote: false } }
I20250629 01:58:48.704603 22002 leader_election.cc:290] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d [CANDIDATE]: Term 3 election: Requested vote from peers d2f2badc6b0341918964e73a7e3a0fe5 (127.17.83.66:45511), 8e0c882ed93e411495d3bc24bb61eb11 (127.17.83.67:37557)
I20250629 01:58:48.706194 22012 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "63f8fd3146fb4036bf60a5a070659dc8" candidate_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" candidate_term: 3 candidate_status { last_received { term: 2 index: 14 } } ignore_live_leader: false dest_uuid: "d2f2badc6b0341918964e73a7e3a0fe5"
I20250629 01:58:48.706620 22012 raft_consensus.cc:3058] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 [term 2 FOLLOWER]: Advancing to term 3
W20250629 01:58:48.713285 21793 leader_election.cc:336] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d [CANDIDATE]: Term 3 election: RPC error from VoteRequest() call to peer 8e0c882ed93e411495d3bc24bb61eb11 (127.17.83.67:37557): Network error: Client connection negotiation failed: client connection to 127.17.83.67:37557: connect: Connection refused (error 111)
I20250629 01:58:48.714403 22012 raft_consensus.cc:2466] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 [term 3 FOLLOWER]: Leader election vote request: Granting yes vote for candidate a30df5c9a5634e1b9e6f378d3d3db88d in term 3.
I20250629 01:58:48.715178 21794 leader_election.cc:304] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d [CANDIDATE]: Term 3 election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: a30df5c9a5634e1b9e6f378d3d3db88d, d2f2badc6b0341918964e73a7e3a0fe5; no voters: 8e0c882ed93e411495d3bc24bb61eb11
I20250629 01:58:48.715787 22002 raft_consensus.cc:2802] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d [term 3 FOLLOWER]: Leader election won for term 3
I20250629 01:58:48.717705 22002 raft_consensus.cc:695] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d [term 3 LEADER]: Becoming Leader. State: Replica: a30df5c9a5634e1b9e6f378d3d3db88d, State: Running, Role: LEADER
I20250629 01:58:48.718444 22002 consensus_queue.cc:237] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 14, Committed index: 14, Last appended: 2.14, Last appended by leader: 14, Current term: 3, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } } peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } attrs { promote: false } }
I20250629 01:58:48.725688 21719 catalog_manager.cc:5582] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d reported cstate change: term changed from 2 to 3, leader changed from <none> to a30df5c9a5634e1b9e6f378d3d3db88d (127.17.83.65). New cstate: current_term: 3 leader_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" committed_config { opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } attrs { promote: false } health_report { overall_health: UNKNOWN } } }
I20250629 01:58:48.795859 21945 tablet_bootstrap.cc:492] T 35f5dbaa898b47369f784618572fd3c8 P d2f2badc6b0341918964e73a7e3a0fe5: Bootstrap replayed 1/1 log segments. Stats: ops{read=11 overwritten=0 applied=11 ignored=0} inserts{seen=250 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250629 01:58:48.796818 21945 tablet_bootstrap.cc:492] T 35f5dbaa898b47369f784618572fd3c8 P d2f2badc6b0341918964e73a7e3a0fe5: Bootstrap complete.
I20250629 01:58:48.798411 21945 ts_tablet_manager.cc:1397] T 35f5dbaa898b47369f784618572fd3c8 P d2f2badc6b0341918964e73a7e3a0fe5: Time spent bootstrapping tablet: real 0.149s user 0.121s sys 0.024s
I20250629 01:58:48.800730 21945 raft_consensus.cc:357] T 35f5dbaa898b47369f784618572fd3c8 P d2f2badc6b0341918964e73a7e3a0fe5 [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } } peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } attrs { promote: false } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } }
I20250629 01:58:48.801487 21945 raft_consensus.cc:738] T 35f5dbaa898b47369f784618572fd3c8 P d2f2badc6b0341918964e73a7e3a0fe5 [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: d2f2badc6b0341918964e73a7e3a0fe5, State: Initialized, Role: FOLLOWER
I20250629 01:58:48.802126 21945 consensus_queue.cc:260] T 35f5dbaa898b47369f784618572fd3c8 P d2f2badc6b0341918964e73a7e3a0fe5 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 11, Last appended: 2.11, Last appended by leader: 11, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } } peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } attrs { promote: false } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } }
I20250629 01:58:48.804117 21945 ts_tablet_manager.cc:1428] T 35f5dbaa898b47369f784618572fd3c8 P d2f2badc6b0341918964e73a7e3a0fe5: Time spent starting tablet: real 0.005s user 0.008s sys 0.000s
W20250629 01:58:48.989660 22062 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:58:48.990109 22062 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250629 01:58:48.990657 22062 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:58:49.021219 22062 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250629 01:58:49.021978 22062 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.17.83.67
I20250629 01:58:49.055132 22062 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.17.83.84:42817
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.17.83.67:37557
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.17.83.67
--webserver_port=35205
--tserver_master_addrs=127.17.83.126:44751
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.17.83.67
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision f0776864a65c19ebdd909053f3e624e8bcc25cee
build type FASTDEBUG
built by None at 29 Jun 2025 01:43:20 UTC on 5fd53c4cbb9d
build id 6821
TSAN enabled
I20250629 01:58:49.056324 22062 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250629 01:58:49.057862 22062 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250629 01:58:49.072733 22079 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:58:49.115152 22012 raft_consensus.cc:1273] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 [term 3 FOLLOWER]: Refusing update from remote peer a30df5c9a5634e1b9e6f378d3d3db88d: Log matching property violated. Preceding OpId in replica: term: 2 index: 14. Preceding OpId from leader: term: 3 index: 15. (index mismatch)
I20250629 01:58:49.117200 22002 consensus_queue.cc:1035] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d [LEADER]: Connected to new peer: Peer: permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 15, Last known committed idx: 14, Time since last communication: 0.000s
W20250629 01:58:49.142596 21793 consensus_peers.cc:489] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d -> Peer 8e0c882ed93e411495d3bc24bb61eb11 (127.17.83.67:37557): Couldn't send request to peer 8e0c882ed93e411495d3bc24bb61eb11. Status: Network error: Client connection negotiation failed: client connection to 127.17.83.67:37557: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20250629 01:58:49.164629 21862 consensus_queue.cc:237] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 15, Committed index: 15, Last appended: 3.15, Last appended by leader: 14, Current term: 3, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 16 OBSOLETE_local: true peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } }
I20250629 01:58:49.168682 22012 raft_consensus.cc:1273] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 [term 3 FOLLOWER]: Refusing update from remote peer a30df5c9a5634e1b9e6f378d3d3db88d: Log matching property violated. Preceding OpId in replica: term: 3 index: 15. Preceding OpId from leader: term: 3 index: 16. (index mismatch)
I20250629 01:58:49.169798 22085 consensus_queue.cc:1035] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d [LEADER]: Connected to new peer: Peer: permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 16, Last known committed idx: 15, Time since last communication: 0.000s
I20250629 01:58:49.174860 22065 raft_consensus.cc:2953] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d [term 3 LEADER]: Committing config change with OpId 3.16: config changed from index 14 to 16, VOTER 8e0c882ed93e411495d3bc24bb61eb11 (127.17.83.67) evicted. New config: { opid_index: 16 OBSOLETE_local: true peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } } }
I20250629 01:58:49.176018 22012 raft_consensus.cc:2953] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 [term 3 FOLLOWER]: Committing config change with OpId 3.16: config changed from index 14 to 16, VOTER 8e0c882ed93e411495d3bc24bb61eb11 (127.17.83.67) evicted. New config: { opid_index: 16 OBSOLETE_local: true peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } } }
I20250629 01:58:49.185282 21720 catalog_manager.cc:5582] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 reported cstate change: config changed from index 14 to 16, VOTER 8e0c882ed93e411495d3bc24bb61eb11 (127.17.83.67) evicted. New cstate: current_term: 3 leader_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" committed_config { opid_index: 16 OBSOLETE_local: true peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } } }
I20250629 01:58:49.187888 21704 catalog_manager.cc:5095] ChangeConfig:REMOVE_PEER RPC for tablet 63f8fd3146fb4036bf60a5a070659dc8 with cas_config_opid_index 14: ChangeConfig:REMOVE_PEER succeeded (attempt 1)
W20250629 01:58:49.193406 21720 catalog_manager.cc:5774] Failed to send DeleteTablet RPC for tablet 63f8fd3146fb4036bf60a5a070659dc8 on TS 8e0c882ed93e411495d3bc24bb61eb11: Not found: failed to reset TS proxy: Could not find TS for UUID 8e0c882ed93e411495d3bc24bb61eb11
I20250629 01:58:49.196669 21862 consensus_queue.cc:237] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 16, Committed index: 16, Last appended: 3.16, Last appended by leader: 14, Current term: 3, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: 17 OBSOLETE_local: true peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } }
I20250629 01:58:49.198401 22065 raft_consensus.cc:2953] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d [term 3 LEADER]: Committing config change with OpId 3.17: config changed from index 16 to 17, VOTER d2f2badc6b0341918964e73a7e3a0fe5 (127.17.83.66) evicted. New config: { opid_index: 17 OBSOLETE_local: true peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } } }
I20250629 01:58:49.204360 21704 catalog_manager.cc:5095] ChangeConfig:REMOVE_PEER RPC for tablet 63f8fd3146fb4036bf60a5a070659dc8 with cas_config_opid_index 16: ChangeConfig:REMOVE_PEER succeeded (attempt 1)
I20250629 01:58:49.210275 21719 catalog_manager.cc:5582] T 63f8fd3146fb4036bf60a5a070659dc8 P a30df5c9a5634e1b9e6f378d3d3db88d reported cstate change: config changed from index 16 to 17, VOTER d2f2badc6b0341918964e73a7e3a0fe5 (127.17.83.66) evicted. New cstate: current_term: 3 leader_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" committed_config { opid_index: 17 OBSOLETE_local: true peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } health_report { overall_health: HEALTHY } } }
I20250629 01:58:49.228874 21988 tablet_service.cc:1515] Processing DeleteTablet for tablet 63f8fd3146fb4036bf60a5a070659dc8 with delete_type TABLET_DATA_TOMBSTONED (TS d2f2badc6b0341918964e73a7e3a0fe5 not found in new config with opid_index 17) from {username='slave'} at 127.0.0.1:60666
I20250629 01:58:49.232997 22089 tablet_replica.cc:331] stopping tablet replica
I20250629 01:58:49.233552 22089 raft_consensus.cc:2241] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 [term 3 FOLLOWER]: Raft consensus shutting down.
I20250629 01:58:49.233954 22089 raft_consensus.cc:2270] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5 [term 3 FOLLOWER]: Raft consensus is shut down!
I20250629 01:58:49.236291 22089 ts_tablet_manager.cc:1905] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20250629 01:58:49.245450 22089 ts_tablet_manager.cc:1918] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 3.16
I20250629 01:58:49.245707 22089 log.cc:1199] T 63f8fd3146fb4036bf60a5a070659dc8 P d2f2badc6b0341918964e73a7e3a0fe5: Deleting WAL directory at /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal/wals/63f8fd3146fb4036bf60a5a070659dc8
W20250629 01:58:49.247032 21705 catalog_manager.cc:4726] Async tablet task DeleteTablet RPC for tablet 63f8fd3146fb4036bf60a5a070659dc8 on TS 8e0c882ed93e411495d3bc24bb61eb11 failed: Not found: failed to reset TS proxy: Could not find TS for UUID 8e0c882ed93e411495d3bc24bb61eb11
I20250629 01:58:49.247699 21705 catalog_manager.cc:4928] TS d2f2badc6b0341918964e73a7e3a0fe5 (127.17.83.66:45511): tablet 63f8fd3146fb4036bf60a5a070659dc8 (table TestTable [id=1d3100c4c10a44869af35d12503d40d6]) successfully deleted
W20250629 01:58:49.072860 22076 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:58:49.073531 22077 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:58:49.074414 22062 server_base.cc:1048] running on GCE node
I20250629 01:58:50.181589 22062 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250629 01:58:50.183836 22062 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250629 01:58:50.185174 22062 hybrid_clock.cc:648] HybridClock initialized: now 1751162330185117 us; error 55 us; skew 500 ppm
I20250629 01:58:50.185971 22062 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250629 01:58:50.191833 22062 webserver.cc:469] Webserver started at http://127.17.83.67:35205/ using document root <none> and password file <none>
I20250629 01:58:50.192737 22062 fs_manager.cc:362] Metadata directory not provided
I20250629 01:58:50.192938 22062 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250629 01:58:50.201014 22062 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.001s sys 0.004s
I20250629 01:58:50.206060 22094 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250629 01:58:50.207055 22062 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.005s sys 0.000s
I20250629 01:58:50.207369 22062 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "8e0c882ed93e411495d3bc24bb61eb11"
format_stamp: "Formatted at 2025-06-29 01:58:22 on dist-test-slave-v1mb"
I20250629 01:58:50.209033 22062 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250629 01:58:50.277415 22062 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250629 01:58:50.278831 22062 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250629 01:58:50.279265 22062 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250629 01:58:50.281581 22062 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250629 01:58:50.287335 22101 ts_tablet_manager.cc:542] Loading tablet metadata (0/2 complete)
I20250629 01:58:50.298849 22062 ts_tablet_manager.cc:579] Loaded tablet metadata (2 total tablets, 2 live tablets)
I20250629 01:58:50.299080 22062 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.013s user 0.002s sys 0.000s
I20250629 01:58:50.299419 22062 ts_tablet_manager.cc:594] Registering tablets (0/2 complete)
I20250629 01:58:50.304317 22101 tablet_bootstrap.cc:492] T 63f8fd3146fb4036bf60a5a070659dc8 P 8e0c882ed93e411495d3bc24bb61eb11: Bootstrap starting.
I20250629 01:58:50.306664 22062 ts_tablet_manager.cc:610] Registered 2 tablets
I20250629 01:58:50.306826 22062 ts_tablet_manager.cc:589] Time spent register tablets: real 0.007s user 0.004s sys 0.000s
I20250629 01:58:50.354089 22101 log.cc:826] T 63f8fd3146fb4036bf60a5a070659dc8 P 8e0c882ed93e411495d3bc24bb61eb11: Log is configured to *not* fsync() on all Append() calls
W20250629 01:58:50.415515 21705 catalog_manager.cc:4726] Async tablet task DeleteTablet RPC for tablet 63f8fd3146fb4036bf60a5a070659dc8 on TS 8e0c882ed93e411495d3bc24bb61eb11 failed: Not found: failed to reset TS proxy: Could not find TS for UUID 8e0c882ed93e411495d3bc24bb61eb11
I20250629 01:58:50.462668 22062 rpc_server.cc:307] RPC server started. Bound to: 127.17.83.67:37557
I20250629 01:58:50.462792 22208 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.17.83.67:37557 every 8 connection(s)
I20250629 01:58:50.465873 22062 server_base.cc:1180] Dumped server information to /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250629 01:58:50.470026 17741 external_mini_cluster.cc:1413] Started /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu as pid 22062
I20250629 01:58:50.504012 22209 heartbeater.cc:344] Connected to a master server at 127.17.83.126:44751
I20250629 01:58:50.504384 22209 heartbeater.cc:461] Registering TS with master...
I20250629 01:58:50.505251 22209 heartbeater.cc:507] Master 127.17.83.126:44751 requested a full tablet report, sending...
I20250629 01:58:50.508596 21719 ts_manager.cc:194] Registered new tserver with Master: 8e0c882ed93e411495d3bc24bb61eb11 (127.17.83.67:37557)
I20250629 01:58:50.512434 21719 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.17.83.67:33221
I20250629 01:58:50.518514 22101 tablet_bootstrap.cc:492] T 63f8fd3146fb4036bf60a5a070659dc8 P 8e0c882ed93e411495d3bc24bb61eb11: Bootstrap replayed 1/1 log segments. Stats: ops{read=14 overwritten=0 applied=14 ignored=0} inserts{seen=400 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250629 01:58:50.519346 22101 tablet_bootstrap.cc:492] T 63f8fd3146fb4036bf60a5a070659dc8 P 8e0c882ed93e411495d3bc24bb61eb11: Bootstrap complete.
I20250629 01:58:50.520866 22101 ts_tablet_manager.cc:1397] T 63f8fd3146fb4036bf60a5a070659dc8 P 8e0c882ed93e411495d3bc24bb61eb11: Time spent bootstrapping tablet: real 0.217s user 0.181s sys 0.032s
I20250629 01:58:50.521435 17741 external_mini_cluster.cc:934] 3 TS(s) registered with all masters
I20250629 01:58:50.525537 17741 external_mini_cluster.cc:934] 3 TS(s) registered with all masters
W20250629 01:58:50.528532 17741 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 63f8fd3146fb4036bf60a5a070659dc8: tablet_id: "63f8fd3146fb4036bf60a5a070659dc8" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
I20250629 01:58:50.532825 22101 raft_consensus.cc:357] T 63f8fd3146fb4036bf60a5a070659dc8 P 8e0c882ed93e411495d3bc24bb61eb11 [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } } peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } attrs { promote: false } }
I20250629 01:58:50.535138 22101 raft_consensus.cc:738] T 63f8fd3146fb4036bf60a5a070659dc8 P 8e0c882ed93e411495d3bc24bb61eb11 [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: 8e0c882ed93e411495d3bc24bb61eb11, State: Initialized, Role: FOLLOWER
I20250629 01:58:50.535907 22101 consensus_queue.cc:260] T 63f8fd3146fb4036bf60a5a070659dc8 P 8e0c882ed93e411495d3bc24bb61eb11 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 14, Last appended: 2.14, Last appended by leader: 14, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 14 OBSOLETE_local: true peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } } peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } attrs { promote: false } }
I20250629 01:58:50.538095 22209 heartbeater.cc:499] Master 127.17.83.126:44751 was elected leader, sending a full tablet report...
I20250629 01:58:50.538514 22101 ts_tablet_manager.cc:1428] T 63f8fd3146fb4036bf60a5a070659dc8 P 8e0c882ed93e411495d3bc24bb61eb11: Time spent starting tablet: real 0.017s user 0.018s sys 0.000s
I20250629 01:58:50.539114 22101 tablet_bootstrap.cc:492] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11: Bootstrap starting.
I20250629 01:58:50.608748 22217 raft_consensus.cc:491] T 35f5dbaa898b47369f784618572fd3c8 P d2f2badc6b0341918964e73a7e3a0fe5 [term 2 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250629 01:58:50.609146 22217 raft_consensus.cc:513] T 35f5dbaa898b47369f784618572fd3c8 P d2f2badc6b0341918964e73a7e3a0fe5 [term 2 FOLLOWER]: Starting pre-election with config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } } peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } attrs { promote: false } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } }
I20250629 01:58:50.611179 22217 leader_election.cc:290] T 35f5dbaa898b47369f784618572fd3c8 P d2f2badc6b0341918964e73a7e3a0fe5 [CANDIDATE]: Term 3 pre-election: Requested pre-vote from peers 8e0c882ed93e411495d3bc24bb61eb11 (127.17.83.67:37557), a30df5c9a5634e1b9e6f378d3d3db88d (127.17.83.65:43367)
I20250629 01:58:50.628778 22101 tablet_bootstrap.cc:492] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11: Bootstrap replayed 1/1 log segments. Stats: ops{read=11 overwritten=0 applied=11 ignored=0} inserts{seen=250 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250629 01:58:50.629628 22101 tablet_bootstrap.cc:492] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11: Bootstrap complete.
I20250629 01:58:50.630998 22101 ts_tablet_manager.cc:1397] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11: Time spent bootstrapping tablet: real 0.092s user 0.082s sys 0.008s
I20250629 01:58:50.630892 21862 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "35f5dbaa898b47369f784618572fd3c8" candidate_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" candidate_term: 3 candidate_status { last_received { term: 2 index: 11 } } ignore_live_leader: false dest_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" is_pre_election: true
I20250629 01:58:50.631572 21862 raft_consensus.cc:2466] T 35f5dbaa898b47369f784618572fd3c8 P a30df5c9a5634e1b9e6f378d3d3db88d [term 2 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate d2f2badc6b0341918964e73a7e3a0fe5 in term 2.
I20250629 01:58:50.632679 21939 leader_election.cc:304] T 35f5dbaa898b47369f784618572fd3c8 P d2f2badc6b0341918964e73a7e3a0fe5 [CANDIDATE]: Term 3 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: a30df5c9a5634e1b9e6f378d3d3db88d, d2f2badc6b0341918964e73a7e3a0fe5; no voters:
I20250629 01:58:50.632843 22101 raft_consensus.cc:357] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } } peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } attrs { promote: false } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } }
I20250629 01:58:50.633395 22217 raft_consensus.cc:2802] T 35f5dbaa898b47369f784618572fd3c8 P d2f2badc6b0341918964e73a7e3a0fe5 [term 2 FOLLOWER]: Leader pre-election won for term 3
I20250629 01:58:50.633424 22101 raft_consensus.cc:738] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: 8e0c882ed93e411495d3bc24bb61eb11, State: Initialized, Role: FOLLOWER
I20250629 01:58:50.633742 22217 raft_consensus.cc:491] T 35f5dbaa898b47369f784618572fd3c8 P d2f2badc6b0341918964e73a7e3a0fe5 [term 2 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250629 01:58:50.633553 22164 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "35f5dbaa898b47369f784618572fd3c8" candidate_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" candidate_term: 3 candidate_status { last_received { term: 2 index: 11 } } ignore_live_leader: false dest_uuid: "8e0c882ed93e411495d3bc24bb61eb11" is_pre_election: true
I20250629 01:58:50.634027 22217 raft_consensus.cc:3058] T 35f5dbaa898b47369f784618572fd3c8 P d2f2badc6b0341918964e73a7e3a0fe5 [term 2 FOLLOWER]: Advancing to term 3
I20250629 01:58:50.634007 22101 consensus_queue.cc:260] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 11, Last appended: 2.11, Last appended by leader: 11, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } } peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } attrs { promote: false } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } }
I20250629 01:58:50.634940 22164 raft_consensus.cc:2466] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 [term 2 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate d2f2badc6b0341918964e73a7e3a0fe5 in term 2.
I20250629 01:58:50.635999 22101 ts_tablet_manager.cc:1428] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11: Time spent starting tablet: real 0.005s user 0.004s sys 0.000s
I20250629 01:58:50.639529 22217 raft_consensus.cc:513] T 35f5dbaa898b47369f784618572fd3c8 P d2f2badc6b0341918964e73a7e3a0fe5 [term 3 FOLLOWER]: Starting leader election with config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } } peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } attrs { promote: false } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } }
I20250629 01:58:50.641264 22217 leader_election.cc:290] T 35f5dbaa898b47369f784618572fd3c8 P d2f2badc6b0341918964e73a7e3a0fe5 [CANDIDATE]: Term 3 election: Requested vote from peers 8e0c882ed93e411495d3bc24bb61eb11 (127.17.83.67:37557), a30df5c9a5634e1b9e6f378d3d3db88d (127.17.83.65:43367)
I20250629 01:58:50.642285 21862 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "35f5dbaa898b47369f784618572fd3c8" candidate_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" candidate_term: 3 candidate_status { last_received { term: 2 index: 11 } } ignore_live_leader: false dest_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d"
I20250629 01:58:50.642707 22164 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "35f5dbaa898b47369f784618572fd3c8" candidate_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" candidate_term: 3 candidate_status { last_received { term: 2 index: 11 } } ignore_live_leader: false dest_uuid: "8e0c882ed93e411495d3bc24bb61eb11"
I20250629 01:58:50.642840 21862 raft_consensus.cc:3058] T 35f5dbaa898b47369f784618572fd3c8 P a30df5c9a5634e1b9e6f378d3d3db88d [term 2 FOLLOWER]: Advancing to term 3
I20250629 01:58:50.643193 22164 raft_consensus.cc:3058] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 [term 2 FOLLOWER]: Advancing to term 3
I20250629 01:58:50.649362 21862 raft_consensus.cc:2466] T 35f5dbaa898b47369f784618572fd3c8 P a30df5c9a5634e1b9e6f378d3d3db88d [term 3 FOLLOWER]: Leader election vote request: Granting yes vote for candidate d2f2badc6b0341918964e73a7e3a0fe5 in term 3.
I20250629 01:58:50.650286 21939 leader_election.cc:304] T 35f5dbaa898b47369f784618572fd3c8 P d2f2badc6b0341918964e73a7e3a0fe5 [CANDIDATE]: Term 3 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: a30df5c9a5634e1b9e6f378d3d3db88d, d2f2badc6b0341918964e73a7e3a0fe5; no voters:
I20250629 01:58:50.650976 22217 raft_consensus.cc:2802] T 35f5dbaa898b47369f784618572fd3c8 P d2f2badc6b0341918964e73a7e3a0fe5 [term 3 FOLLOWER]: Leader election won for term 3
I20250629 01:58:50.651818 22164 raft_consensus.cc:2466] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 [term 3 FOLLOWER]: Leader election vote request: Granting yes vote for candidate d2f2badc6b0341918964e73a7e3a0fe5 in term 3.
I20250629 01:58:50.652778 22217 raft_consensus.cc:695] T 35f5dbaa898b47369f784618572fd3c8 P d2f2badc6b0341918964e73a7e3a0fe5 [term 3 LEADER]: Becoming Leader. State: Replica: d2f2badc6b0341918964e73a7e3a0fe5, State: Running, Role: LEADER
I20250629 01:58:50.653566 22217 consensus_queue.cc:237] T 35f5dbaa898b47369f784618572fd3c8 P d2f2badc6b0341918964e73a7e3a0fe5 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 11, Committed index: 11, Last appended: 2.11, Last appended by leader: 11, Current term: 3, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } } peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } attrs { promote: false } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } }
I20250629 01:58:50.659982 21719 catalog_manager.cc:5582] T 35f5dbaa898b47369f784618572fd3c8 P d2f2badc6b0341918964e73a7e3a0fe5 reported cstate change: term changed from 2 to 3, leader changed from 8e0c882ed93e411495d3bc24bb61eb11 (127.17.83.67) to d2f2badc6b0341918964e73a7e3a0fe5 (127.17.83.66). New cstate: current_term: 3 leader_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" committed_config { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "d2f2badc6b0341918964e73a7e3a0fe5" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 45511 } attrs { promote: false } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false } health_report { overall_health: UNKNOWN } } }
I20250629 01:58:51.238957 22164 raft_consensus.cc:1273] T 35f5dbaa898b47369f784618572fd3c8 P 8e0c882ed93e411495d3bc24bb61eb11 [term 3 FOLLOWER]: Refusing update from remote peer d2f2badc6b0341918964e73a7e3a0fe5: Log matching property violated. Preceding OpId in replica: term: 2 index: 11. Preceding OpId from leader: term: 3 index: 12. (index mismatch)
I20250629 01:58:51.240160 22228 consensus_queue.cc:1035] T 35f5dbaa898b47369f784618572fd3c8 P d2f2badc6b0341918964e73a7e3a0fe5 [LEADER]: Connected to new peer: Peer: permanent_uuid: "8e0c882ed93e411495d3bc24bb61eb11" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 37557 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 12, Last known committed idx: 11, Time since last communication: 0.000s
I20250629 01:58:51.254362 21862 raft_consensus.cc:1273] T 35f5dbaa898b47369f784618572fd3c8 P a30df5c9a5634e1b9e6f378d3d3db88d [term 3 FOLLOWER]: Refusing update from remote peer d2f2badc6b0341918964e73a7e3a0fe5: Log matching property violated. Preceding OpId in replica: term: 2 index: 11. Preceding OpId from leader: term: 3 index: 12. (index mismatch)
I20250629 01:58:51.255981 22233 consensus_queue.cc:1035] T 35f5dbaa898b47369f784618572fd3c8 P d2f2badc6b0341918964e73a7e3a0fe5 [LEADER]: Connected to new peer: Peer: permanent_uuid: "a30df5c9a5634e1b9e6f378d3d3db88d" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 43367 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 12, Last known committed idx: 11, Time since last communication: 0.001s
I20250629 01:58:51.500871 22144 tablet_service.cc:1515] Processing DeleteTablet for tablet 63f8fd3146fb4036bf60a5a070659dc8 with delete_type TABLET_DATA_TOMBSTONED (TS 8e0c882ed93e411495d3bc24bb61eb11 not found in new config with opid_index 16) from {username='slave'} at 127.0.0.1:57086
I20250629 01:58:51.503249 22240 tablet_replica.cc:331] stopping tablet replica
I20250629 01:58:51.504004 22240 raft_consensus.cc:2241] T 63f8fd3146fb4036bf60a5a070659dc8 P 8e0c882ed93e411495d3bc24bb61eb11 [term 2 FOLLOWER]: Raft consensus shutting down.
I20250629 01:58:51.504462 22240 raft_consensus.cc:2270] T 63f8fd3146fb4036bf60a5a070659dc8 P 8e0c882ed93e411495d3bc24bb61eb11 [term 2 FOLLOWER]: Raft consensus is shut down!
I20250629 01:58:51.507586 22240 ts_tablet_manager.cc:1905] T 63f8fd3146fb4036bf60a5a070659dc8 P 8e0c882ed93e411495d3bc24bb61eb11: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20250629 01:58:51.519187 22240 ts_tablet_manager.cc:1918] T 63f8fd3146fb4036bf60a5a070659dc8 P 8e0c882ed93e411495d3bc24bb61eb11: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 2.14
I20250629 01:58:51.519495 22240 log.cc:1199] T 63f8fd3146fb4036bf60a5a070659dc8 P 8e0c882ed93e411495d3bc24bb61eb11: Deleting WAL directory at /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal/wals/63f8fd3146fb4036bf60a5a070659dc8
I20250629 01:58:51.520579 21704 catalog_manager.cc:4928] TS 8e0c882ed93e411495d3bc24bb61eb11 (127.17.83.67:37557): tablet 63f8fd3146fb4036bf60a5a070659dc8 (table TestTable [id=1d3100c4c10a44869af35d12503d40d6]) successfully deleted
W20250629 01:58:51.531786 17741 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 63f8fd3146fb4036bf60a5a070659dc8: tablet_id: "63f8fd3146fb4036bf60a5a070659dc8" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250629 01:58:52.535274 17741 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 63f8fd3146fb4036bf60a5a070659dc8: tablet_id: "63f8fd3146fb4036bf60a5a070659dc8" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250629 01:58:53.539717 17741 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 63f8fd3146fb4036bf60a5a070659dc8: tablet_id: "63f8fd3146fb4036bf60a5a070659dc8" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250629 01:58:54.543896 17741 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 63f8fd3146fb4036bf60a5a070659dc8: tablet_id: "63f8fd3146fb4036bf60a5a070659dc8" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250629 01:58:55.547605 17741 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 63f8fd3146fb4036bf60a5a070659dc8: tablet_id: "63f8fd3146fb4036bf60a5a070659dc8" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250629 01:58:56.551152 17741 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 63f8fd3146fb4036bf60a5a070659dc8: tablet_id: "63f8fd3146fb4036bf60a5a070659dc8" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250629 01:58:57.554818 17741 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 63f8fd3146fb4036bf60a5a070659dc8: tablet_id: "63f8fd3146fb4036bf60a5a070659dc8" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250629 01:58:58.558153 17741 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 63f8fd3146fb4036bf60a5a070659dc8: tablet_id: "63f8fd3146fb4036bf60a5a070659dc8" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250629 01:58:59.561451 17741 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 63f8fd3146fb4036bf60a5a070659dc8: tablet_id: "63f8fd3146fb4036bf60a5a070659dc8" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250629 01:59:00.565100 17741 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 63f8fd3146fb4036bf60a5a070659dc8: tablet_id: "63f8fd3146fb4036bf60a5a070659dc8" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250629 01:59:00.835913 21751 debug-util.cc:398] Leaking SignalData structure 0x7b08000969e0 after lost signal to thread 21687
W20250629 01:59:00.836668 21751 debug-util.cc:398] Leaking SignalData structure 0x7b08000920a0 after lost signal to thread 21754
W20250629 01:59:01.568481 17741 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 63f8fd3146fb4036bf60a5a070659dc8: tablet_id: "63f8fd3146fb4036bf60a5a070659dc8" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250629 01:59:02.572525 17741 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 63f8fd3146fb4036bf60a5a070659dc8: tablet_id: "63f8fd3146fb4036bf60a5a070659dc8" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250629 01:59:03.577172 17741 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 63f8fd3146fb4036bf60a5a070659dc8: tablet_id: "63f8fd3146fb4036bf60a5a070659dc8" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250629 01:59:04.581007 17741 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 63f8fd3146fb4036bf60a5a070659dc8: tablet_id: "63f8fd3146fb4036bf60a5a070659dc8" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250629 01:59:05.584628 17741 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 63f8fd3146fb4036bf60a5a070659dc8: tablet_id: "63f8fd3146fb4036bf60a5a070659dc8" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250629 01:59:06.587896 17741 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 63f8fd3146fb4036bf60a5a070659dc8: tablet_id: "63f8fd3146fb4036bf60a5a070659dc8" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250629 01:59:07.590992 17741 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 63f8fd3146fb4036bf60a5a070659dc8: tablet_id: "63f8fd3146fb4036bf60a5a070659dc8" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250629 01:59:08.594291 17741 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 63f8fd3146fb4036bf60a5a070659dc8: tablet_id: "63f8fd3146fb4036bf60a5a070659dc8" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250629 01:59:09.597347 17741 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet 63f8fd3146fb4036bf60a5a070659dc8: tablet_id: "63f8fd3146fb4036bf60a5a070659dc8" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
/home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/tools/kudu-admin-test.cc:3914: Failure
Failed
Bad status: Not found: not all replicas of tablets comprising table TestTable are registered yet
I20250629 01:59:10.600415 17741 external_mini_cluster.cc:1620] Killing /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu with pid 21757
W20250629 01:59:10.626186 21939 connection.cc:537] client connection to 127.17.83.65:43367 recv error: Network error: recv error from unknown peer: Transport endpoint is not connected (error 107)
I20250629 01:59:10.626739 17741 external_mini_cluster.cc:1620] Killing /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu with pid 21911
W20250629 01:59:10.626685 21939 proxy.cc:239] Call had error, refreshing address and retrying: Network error: recv error from unknown peer: Transport endpoint is not connected (error 107)
I20250629 01:59:10.652781 17741 external_mini_cluster.cc:1620] Killing /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu with pid 22062
I20250629 01:59:10.678150 17741 external_mini_cluster.cc:1620] Killing /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu with pid 21686
2025-06-29T01:59:10Z chronyd exiting
I20250629 01:59:10.724846 17741 test_util.cc:183] -----------------------------------------------
I20250629 01:59:10.725035 17741 test_util.cc:184] Had failures, leaving test files at /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1751162234221453-17741-0
[ FAILED ] AdminCliTest.TestRebuildTables (55430 ms)
[----------] 5 tests from AdminCliTest (116440 ms total)
[----------] 1 test from EnableKudu1097AndDownTS/MoveTabletParamTest
[ RUN ] EnableKudu1097AndDownTS/MoveTabletParamTest.Test/4
I20250629 01:59:10.728444 17741 test_util.cc:276] Using random seed: 1104934118
I20250629 01:59:10.732307 17741 ts_itest-base.cc:115] Starting cluster with:
I20250629 01:59:10.732435 17741 ts_itest-base.cc:116] --------------
I20250629 01:59:10.732532 17741 ts_itest-base.cc:117] 5 tablet servers
I20250629 01:59:10.732656 17741 ts_itest-base.cc:118] 3 replicas per TS
I20250629 01:59:10.732760 17741 ts_itest-base.cc:119] --------------
2025-06-29T01:59:10Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-06-29T01:59:10Z Disabled control of system clock
I20250629 01:59:10.771239 17741 external_mini_cluster.cc:1351] Running /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
/tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.17.83.126:45605
--webserver_interface=127.17.83.126
--webserver_port=0
--builtin_ntp_servers=127.17.83.84:37979
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.17.83.126:45605
--raft_prepare_replacement_before_eviction=true with env {}
W20250629 01:59:11.056947 22263 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:59:11.057483 22263 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250629 01:59:11.057890 22263 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:59:11.086558 22263 flags.cc:425] Enabled experimental flag: --raft_prepare_replacement_before_eviction=true
W20250629 01:59:11.086879 22263 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250629 01:59:11.087127 22263 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250629 01:59:11.087483 22263 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250629 01:59:11.087797 22263 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250629 01:59:11.120529 22263 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.17.83.84:37979
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.17.83.126:45605
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.17.83.126:45605
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.17.83.126
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.18.0-SNAPSHOT
revision f0776864a65c19ebdd909053f3e624e8bcc25cee
build type FASTDEBUG
built by None at 29 Jun 2025 01:43:20 UTC on 5fd53c4cbb9d
build id 6821
TSAN enabled
I20250629 01:59:11.121776 22263 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250629 01:59:11.123376 22263 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250629 01:59:11.137208 22270 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:59:11.137200 22269 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:59:11.139014 22272 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:59:11.139539 22263 server_base.cc:1048] running on GCE node
I20250629 01:59:12.270685 22263 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250629 01:59:12.273236 22263 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250629 01:59:12.274559 22263 hybrid_clock.cc:648] HybridClock initialized: now 1751162352274544 us; error 39 us; skew 500 ppm
I20250629 01:59:12.275388 22263 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250629 01:59:12.287338 22263 webserver.cc:469] Webserver started at http://127.17.83.126:39833/ using document root <none> and password file <none>
I20250629 01:59:12.288218 22263 fs_manager.cc:362] Metadata directory not provided
I20250629 01:59:12.288425 22263 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250629 01:59:12.288888 22263 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250629 01:59:12.293535 22263 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "92231ee9cfd04543a8463123a4506dea"
format_stamp: "Formatted at 2025-06-29 01:59:12 on dist-test-slave-v1mb"
I20250629 01:59:12.294574 22263 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "92231ee9cfd04543a8463123a4506dea"
format_stamp: "Formatted at 2025-06-29 01:59:12 on dist-test-slave-v1mb"
I20250629 01:59:12.301297 22263 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.005s sys 0.004s
I20250629 01:59:12.306406 22279 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250629 01:59:12.307343 22263 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.003s sys 0.000s
I20250629 01:59:12.307626 22263 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
uuid: "92231ee9cfd04543a8463123a4506dea"
format_stamp: "Formatted at 2025-06-29 01:59:12 on dist-test-slave-v1mb"
I20250629 01:59:12.307925 22263 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250629 01:59:12.363075 22263 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250629 01:59:12.364545 22263 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250629 01:59:12.364980 22263 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250629 01:59:12.430671 22263 rpc_server.cc:307] RPC server started. Bound to: 127.17.83.126:45605
I20250629 01:59:12.430750 22330 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.17.83.126:45605 every 8 connection(s)
I20250629 01:59:12.433109 22263 server_base.cc:1180] Dumped server information to /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250629 01:59:12.436187 17741 external_mini_cluster.cc:1413] Started /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu as pid 22263
I20250629 01:59:12.436756 17741 external_mini_cluster.cc:1427] Reading /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250629 01:59:12.438788 22331 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250629 01:59:12.462711 22331 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 92231ee9cfd04543a8463123a4506dea: Bootstrap starting.
I20250629 01:59:12.468241 22331 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 92231ee9cfd04543a8463123a4506dea: Neither blocks nor log segments found. Creating new log.
I20250629 01:59:12.470045 22331 log.cc:826] T 00000000000000000000000000000000 P 92231ee9cfd04543a8463123a4506dea: Log is configured to *not* fsync() on all Append() calls
I20250629 01:59:12.474470 22331 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 92231ee9cfd04543a8463123a4506dea: No bootstrap required, opened a new log
I20250629 01:59:12.491143 22331 raft_consensus.cc:357] T 00000000000000000000000000000000 P 92231ee9cfd04543a8463123a4506dea [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "92231ee9cfd04543a8463123a4506dea" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 45605 } }
I20250629 01:59:12.491729 22331 raft_consensus.cc:383] T 00000000000000000000000000000000 P 92231ee9cfd04543a8463123a4506dea [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250629 01:59:12.491931 22331 raft_consensus.cc:738] T 00000000000000000000000000000000 P 92231ee9cfd04543a8463123a4506dea [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 92231ee9cfd04543a8463123a4506dea, State: Initialized, Role: FOLLOWER
I20250629 01:59:12.492463 22331 consensus_queue.cc:260] T 00000000000000000000000000000000 P 92231ee9cfd04543a8463123a4506dea [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "92231ee9cfd04543a8463123a4506dea" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 45605 } }
I20250629 01:59:12.492888 22331 raft_consensus.cc:397] T 00000000000000000000000000000000 P 92231ee9cfd04543a8463123a4506dea [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250629 01:59:12.493115 22331 raft_consensus.cc:491] T 00000000000000000000000000000000 P 92231ee9cfd04543a8463123a4506dea [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250629 01:59:12.493358 22331 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 92231ee9cfd04543a8463123a4506dea [term 0 FOLLOWER]: Advancing to term 1
I20250629 01:59:12.497278 22331 raft_consensus.cc:513] T 00000000000000000000000000000000 P 92231ee9cfd04543a8463123a4506dea [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "92231ee9cfd04543a8463123a4506dea" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 45605 } }
I20250629 01:59:12.498008 22331 leader_election.cc:304] T 00000000000000000000000000000000 P 92231ee9cfd04543a8463123a4506dea [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 92231ee9cfd04543a8463123a4506dea; no voters:
I20250629 01:59:12.499522 22331 leader_election.cc:290] T 00000000000000000000000000000000 P 92231ee9cfd04543a8463123a4506dea [CANDIDATE]: Term 1 election: Requested vote from peers
I20250629 01:59:12.500191 22336 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 92231ee9cfd04543a8463123a4506dea [term 1 FOLLOWER]: Leader election won for term 1
I20250629 01:59:12.502254 22336 raft_consensus.cc:695] T 00000000000000000000000000000000 P 92231ee9cfd04543a8463123a4506dea [term 1 LEADER]: Becoming Leader. State: Replica: 92231ee9cfd04543a8463123a4506dea, State: Running, Role: LEADER
I20250629 01:59:12.502960 22336 consensus_queue.cc:237] T 00000000000000000000000000000000 P 92231ee9cfd04543a8463123a4506dea [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "92231ee9cfd04543a8463123a4506dea" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 45605 } }
I20250629 01:59:12.503945 22331 sys_catalog.cc:564] T 00000000000000000000000000000000 P 92231ee9cfd04543a8463123a4506dea [sys.catalog]: configured and running, proceeding with master startup.
I20250629 01:59:12.513605 22338 sys_catalog.cc:455] T 00000000000000000000000000000000 P 92231ee9cfd04543a8463123a4506dea [sys.catalog]: SysCatalogTable state changed. Reason: New leader 92231ee9cfd04543a8463123a4506dea. Latest consensus state: current_term: 1 leader_uuid: "92231ee9cfd04543a8463123a4506dea" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "92231ee9cfd04543a8463123a4506dea" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 45605 } } }
I20250629 01:59:12.513787 22337 sys_catalog.cc:455] T 00000000000000000000000000000000 P 92231ee9cfd04543a8463123a4506dea [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "92231ee9cfd04543a8463123a4506dea" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "92231ee9cfd04543a8463123a4506dea" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 45605 } } }
I20250629 01:59:12.514511 22338 sys_catalog.cc:458] T 00000000000000000000000000000000 P 92231ee9cfd04543a8463123a4506dea [sys.catalog]: This master's current role is: LEADER
I20250629 01:59:12.514614 22337 sys_catalog.cc:458] T 00000000000000000000000000000000 P 92231ee9cfd04543a8463123a4506dea [sys.catalog]: This master's current role is: LEADER
I20250629 01:59:12.519685 22344 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250629 01:59:12.529874 22344 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250629 01:59:12.544361 22344 catalog_manager.cc:1349] Generated new cluster ID: 1b5ab4625c264ca4ae22177a89bbdfb4
I20250629 01:59:12.544610 22344 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250629 01:59:12.562754 22344 catalog_manager.cc:1372] Generated new certificate authority record
I20250629 01:59:12.564373 22344 catalog_manager.cc:1506] Loading token signing keys...
I20250629 01:59:12.577193 22344 catalog_manager.cc:5955] T 00000000000000000000000000000000 P 92231ee9cfd04543a8463123a4506dea: Generated new TSK 0
I20250629 01:59:12.578176 22344 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250629 01:59:12.598589 17741 external_mini_cluster.cc:1351] Running /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
/tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.17.83.65:0
--local_ip_for_outbound_sockets=127.17.83.65
--webserver_interface=127.17.83.65
--webserver_port=0
--tserver_master_addrs=127.17.83.126:45605
--builtin_ntp_servers=127.17.83.84:37979
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--raft_prepare_replacement_before_eviction=true with env {}
W20250629 01:59:12.896054 22355 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:59:12.896502 22355 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250629 01:59:12.897015 22355 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:59:12.925875 22355 flags.cc:425] Enabled experimental flag: --raft_prepare_replacement_before_eviction=true
W20250629 01:59:12.926262 22355 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250629 01:59:12.926970 22355 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.17.83.65
I20250629 01:59:12.960202 22355 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.17.83.84:37979
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.17.83.65:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.17.83.65
--webserver_port=0
--tserver_master_addrs=127.17.83.126:45605
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.17.83.65
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision f0776864a65c19ebdd909053f3e624e8bcc25cee
build type FASTDEBUG
built by None at 29 Jun 2025 01:43:20 UTC on 5fd53c4cbb9d
build id 6821
TSAN enabled
I20250629 01:59:12.961447 22355 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250629 01:59:12.962901 22355 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250629 01:59:12.979804 22362 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:59:12.980804 22361 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:59:12.983309 22364 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:59:12.981495 22355 server_base.cc:1048] running on GCE node
I20250629 01:59:14.102795 22355 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250629 01:59:14.105551 22355 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250629 01:59:14.107098 22355 hybrid_clock.cc:648] HybridClock initialized: now 1751162354107065 us; error 44 us; skew 500 ppm
I20250629 01:59:14.108093 22355 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250629 01:59:14.115129 22355 webserver.cc:469] Webserver started at http://127.17.83.65:41349/ using document root <none> and password file <none>
I20250629 01:59:14.116183 22355 fs_manager.cc:362] Metadata directory not provided
I20250629 01:59:14.116401 22355 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250629 01:59:14.116844 22355 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250629 01:59:14.121224 22355 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "3abbd98378944e2ba0b41ce85520a7e9"
format_stamp: "Formatted at 2025-06-29 01:59:14 on dist-test-slave-v1mb"
I20250629 01:59:14.122272 22355 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "3abbd98378944e2ba0b41ce85520a7e9"
format_stamp: "Formatted at 2025-06-29 01:59:14 on dist-test-slave-v1mb"
I20250629 01:59:14.129182 22355 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.005s sys 0.004s
I20250629 01:59:14.134614 22371 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250629 01:59:14.135703 22355 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.000s
I20250629 01:59:14.136044 22355 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "3abbd98378944e2ba0b41ce85520a7e9"
format_stamp: "Formatted at 2025-06-29 01:59:14 on dist-test-slave-v1mb"
I20250629 01:59:14.136364 22355 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250629 01:59:14.183481 22355 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250629 01:59:14.184764 22355 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250629 01:59:14.185205 22355 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250629 01:59:14.188170 22355 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250629 01:59:14.191977 22355 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250629 01:59:14.192180 22355 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250629 01:59:14.192427 22355 ts_tablet_manager.cc:610] Registered 0 tablets
I20250629 01:59:14.192562 22355 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250629 01:59:14.316643 22355 rpc_server.cc:307] RPC server started. Bound to: 127.17.83.65:37643
I20250629 01:59:14.316751 22483 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.17.83.65:37643 every 8 connection(s)
I20250629 01:59:14.319036 22355 server_base.cc:1180] Dumped server information to /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250629 01:59:14.323904 17741 external_mini_cluster.cc:1413] Started /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu as pid 22355
I20250629 01:59:14.324297 17741 external_mini_cluster.cc:1427] Reading /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250629 01:59:14.330142 17741 external_mini_cluster.cc:1351] Running /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
/tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.17.83.66:0
--local_ip_for_outbound_sockets=127.17.83.66
--webserver_interface=127.17.83.66
--webserver_port=0
--tserver_master_addrs=127.17.83.126:45605
--builtin_ntp_servers=127.17.83.84:37979
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--raft_prepare_replacement_before_eviction=true with env {}
I20250629 01:59:14.338760 22484 heartbeater.cc:344] Connected to a master server at 127.17.83.126:45605
I20250629 01:59:14.339251 22484 heartbeater.cc:461] Registering TS with master...
I20250629 01:59:14.340099 22484 heartbeater.cc:507] Master 127.17.83.126:45605 requested a full tablet report, sending...
I20250629 01:59:14.342267 22296 ts_manager.cc:194] Registered new tserver with Master: 3abbd98378944e2ba0b41ce85520a7e9 (127.17.83.65:37643)
I20250629 01:59:14.344149 22296 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.17.83.65:46599
W20250629 01:59:14.618862 22488 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:59:14.619349 22488 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250629 01:59:14.619817 22488 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:59:14.649415 22488 flags.cc:425] Enabled experimental flag: --raft_prepare_replacement_before_eviction=true
W20250629 01:59:14.649760 22488 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250629 01:59:14.650555 22488 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.17.83.66
I20250629 01:59:14.682552 22488 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.17.83.84:37979
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.17.83.66:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.17.83.66
--webserver_port=0
--tserver_master_addrs=127.17.83.126:45605
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.17.83.66
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision f0776864a65c19ebdd909053f3e624e8bcc25cee
build type FASTDEBUG
built by None at 29 Jun 2025 01:43:20 UTC on 5fd53c4cbb9d
build id 6821
TSAN enabled
I20250629 01:59:14.683724 22488 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250629 01:59:14.685134 22488 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250629 01:59:14.699959 22496 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:59:15.347198 22484 heartbeater.cc:499] Master 127.17.83.126:45605 was elected leader, sending a full tablet report...
W20250629 01:59:14.700395 22498 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:59:14.701898 22488 server_base.cc:1048] running on GCE node
W20250629 01:59:14.700529 22495 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:59:15.810220 22488 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250629 01:59:15.812212 22488 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250629 01:59:15.813553 22488 hybrid_clock.cc:648] HybridClock initialized: now 1751162355813522 us; error 47 us; skew 500 ppm
I20250629 01:59:15.814257 22488 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250629 01:59:15.819835 22488 webserver.cc:469] Webserver started at http://127.17.83.66:39001/ using document root <none> and password file <none>
I20250629 01:59:15.820580 22488 fs_manager.cc:362] Metadata directory not provided
I20250629 01:59:15.820765 22488 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250629 01:59:15.821182 22488 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250629 01:59:15.824841 22488 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data/instance:
uuid: "5d2e4e68ff1948ea8051573917572746"
format_stamp: "Formatted at 2025-06-29 01:59:15 on dist-test-slave-v1mb"
I20250629 01:59:15.825802 22488 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal/instance:
uuid: "5d2e4e68ff1948ea8051573917572746"
format_stamp: "Formatted at 2025-06-29 01:59:15 on dist-test-slave-v1mb"
I20250629 01:59:15.832132 22488 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.008s sys 0.000s
I20250629 01:59:15.837076 22505 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250629 01:59:15.838001 22488 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.004s sys 0.001s
I20250629 01:59:15.838289 22488 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "5d2e4e68ff1948ea8051573917572746"
format_stamp: "Formatted at 2025-06-29 01:59:15 on dist-test-slave-v1mb"
I20250629 01:59:15.838552 22488 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250629 01:59:15.900626 22488 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250629 01:59:15.901966 22488 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250629 01:59:15.902340 22488 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250629 01:59:15.904587 22488 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250629 01:59:15.908532 22488 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250629 01:59:15.908744 22488 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250629 01:59:15.908967 22488 ts_tablet_manager.cc:610] Registered 0 tablets
I20250629 01:59:15.909106 22488 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250629 01:59:16.032609 22488 rpc_server.cc:307] RPC server started. Bound to: 127.17.83.66:36445
I20250629 01:59:16.032717 22617 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.17.83.66:36445 every 8 connection(s)
I20250629 01:59:16.034946 22488 server_base.cc:1180] Dumped server information to /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250629 01:59:16.040900 17741 external_mini_cluster.cc:1413] Started /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu as pid 22488
I20250629 01:59:16.041483 17741 external_mini_cluster.cc:1427] Reading /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-1/wal/instance
I20250629 01:59:16.047797 17741 external_mini_cluster.cc:1351] Running /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
/tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.17.83.67:0
--local_ip_for_outbound_sockets=127.17.83.67
--webserver_interface=127.17.83.67
--webserver_port=0
--tserver_master_addrs=127.17.83.126:45605
--builtin_ntp_servers=127.17.83.84:37979
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--raft_prepare_replacement_before_eviction=true with env {}
I20250629 01:59:16.055111 22618 heartbeater.cc:344] Connected to a master server at 127.17.83.126:45605
I20250629 01:59:16.055565 22618 heartbeater.cc:461] Registering TS with master...
I20250629 01:59:16.056511 22618 heartbeater.cc:507] Master 127.17.83.126:45605 requested a full tablet report, sending...
I20250629 01:59:16.058589 22296 ts_manager.cc:194] Registered new tserver with Master: 5d2e4e68ff1948ea8051573917572746 (127.17.83.66:36445)
I20250629 01:59:16.059826 22296 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.17.83.66:40411
W20250629 01:59:16.324110 22622 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:59:16.324529 22622 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250629 01:59:16.324956 22622 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:59:16.353924 22622 flags.cc:425] Enabled experimental flag: --raft_prepare_replacement_before_eviction=true
W20250629 01:59:16.354267 22622 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250629 01:59:16.355016 22622 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.17.83.67
I20250629 01:59:16.388024 22622 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.17.83.84:37979
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.17.83.67:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.17.83.67
--webserver_port=0
--tserver_master_addrs=127.17.83.126:45605
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.17.83.67
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision f0776864a65c19ebdd909053f3e624e8bcc25cee
build type FASTDEBUG
built by None at 29 Jun 2025 01:43:20 UTC on 5fd53c4cbb9d
build id 6821
TSAN enabled
I20250629 01:59:16.389261 22622 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250629 01:59:16.390784 22622 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250629 01:59:16.405455 22628 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:59:17.062463 22618 heartbeater.cc:499] Master 127.17.83.126:45605 was elected leader, sending a full tablet report...
W20250629 01:59:16.405640 22629 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:59:16.407691 22622 server_base.cc:1048] running on GCE node
W20250629 01:59:16.405846 22631 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:59:17.493557 22622 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250629 01:59:17.495627 22622 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250629 01:59:17.496954 22622 hybrid_clock.cc:648] HybridClock initialized: now 1751162357496918 us; error 51 us; skew 500 ppm
I20250629 01:59:17.497711 22622 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250629 01:59:17.507683 22622 webserver.cc:469] Webserver started at http://127.17.83.67:46079/ using document root <none> and password file <none>
I20250629 01:59:17.508566 22622 fs_manager.cc:362] Metadata directory not provided
I20250629 01:59:17.508816 22622 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250629 01:59:17.509238 22622 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250629 01:59:17.513391 22622 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data/instance:
uuid: "f24e9e9b70634ad88a735e6c2baf24be"
format_stamp: "Formatted at 2025-06-29 01:59:17 on dist-test-slave-v1mb"
I20250629 01:59:17.514322 22622 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal/instance:
uuid: "f24e9e9b70634ad88a735e6c2baf24be"
format_stamp: "Formatted at 2025-06-29 01:59:17 on dist-test-slave-v1mb"
I20250629 01:59:17.521073 22622 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.005s sys 0.004s
I20250629 01:59:17.525931 22638 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250629 01:59:17.526816 22622 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.002s sys 0.000s
I20250629 01:59:17.527076 22622 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "f24e9e9b70634ad88a735e6c2baf24be"
format_stamp: "Formatted at 2025-06-29 01:59:17 on dist-test-slave-v1mb"
I20250629 01:59:17.527374 22622 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250629 01:59:17.571928 22622 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250629 01:59:17.573086 22622 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250629 01:59:17.573511 22622 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250629 01:59:17.575848 22622 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250629 01:59:17.579582 22622 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250629 01:59:17.579804 22622 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250629 01:59:17.580037 22622 ts_tablet_manager.cc:610] Registered 0 tablets
I20250629 01:59:17.580192 22622 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250629 01:59:17.709054 22622 rpc_server.cc:307] RPC server started. Bound to: 127.17.83.67:36173
I20250629 01:59:17.709177 22750 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.17.83.67:36173 every 8 connection(s)
I20250629 01:59:17.711275 22622 server_base.cc:1180] Dumped server information to /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250629 01:59:17.712296 17741 external_mini_cluster.cc:1413] Started /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu as pid 22622
I20250629 01:59:17.712730 17741 external_mini_cluster.cc:1427] Reading /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-2/wal/instance
I20250629 01:59:17.720477 17741 external_mini_cluster.cc:1351] Running /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
/tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-3/wal
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-3/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-3/logs
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-3/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.17.83.68:0
--local_ip_for_outbound_sockets=127.17.83.68
--webserver_interface=127.17.83.68
--webserver_port=0
--tserver_master_addrs=127.17.83.126:45605
--builtin_ntp_servers=127.17.83.84:37979
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--raft_prepare_replacement_before_eviction=true with env {}
I20250629 01:59:17.735954 22751 heartbeater.cc:344] Connected to a master server at 127.17.83.126:45605
I20250629 01:59:17.736397 22751 heartbeater.cc:461] Registering TS with master...
I20250629 01:59:17.737318 22751 heartbeater.cc:507] Master 127.17.83.126:45605 requested a full tablet report, sending...
I20250629 01:59:17.739102 22296 ts_manager.cc:194] Registered new tserver with Master: f24e9e9b70634ad88a735e6c2baf24be (127.17.83.67:36173)
I20250629 01:59:17.740432 22296 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.17.83.67:43499
W20250629 01:59:18.004843 22755 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:59:18.005367 22755 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250629 01:59:18.005857 22755 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:59:18.039743 22755 flags.cc:425] Enabled experimental flag: --raft_prepare_replacement_before_eviction=true
W20250629 01:59:18.040344 22755 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250629 01:59:18.041682 22755 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.17.83.68
I20250629 01:59:18.076340 22755 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.17.83.84:37979
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-3/data
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-3/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.17.83.68:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-3/data/info.pb
--webserver_interface=127.17.83.68
--webserver_port=0
--tserver_master_addrs=127.17.83.126:45605
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.17.83.68
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-3/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision f0776864a65c19ebdd909053f3e624e8bcc25cee
build type FASTDEBUG
built by None at 29 Jun 2025 01:43:20 UTC on 5fd53c4cbb9d
build id 6821
TSAN enabled
I20250629 01:59:18.077503 22755 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250629 01:59:18.079013 22755 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250629 01:59:18.092741 22762 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:59:18.743083 22751 heartbeater.cc:499] Master 127.17.83.126:45605 was elected leader, sending a full tablet report...
W20250629 01:59:18.093479 22761 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:59:18.095273 22755 server_base.cc:1048] running on GCE node
W20250629 01:59:18.094494 22764 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:59:19.231889 22755 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250629 01:59:19.233817 22755 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250629 01:59:19.235129 22755 hybrid_clock.cc:648] HybridClock initialized: now 1751162359235096 us; error 48 us; skew 500 ppm
I20250629 01:59:19.235957 22755 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250629 01:59:19.241569 22755 webserver.cc:469] Webserver started at http://127.17.83.68:41105/ using document root <none> and password file <none>
I20250629 01:59:19.242413 22755 fs_manager.cc:362] Metadata directory not provided
I20250629 01:59:19.242602 22755 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250629 01:59:19.243011 22755 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250629 01:59:19.247058 22755 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-3/data/instance:
uuid: "e6a8572fc9fa4225b11994f02e85c28d"
format_stamp: "Formatted at 2025-06-29 01:59:19 on dist-test-slave-v1mb"
I20250629 01:59:19.248093 22755 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-3/wal/instance:
uuid: "e6a8572fc9fa4225b11994f02e85c28d"
format_stamp: "Formatted at 2025-06-29 01:59:19 on dist-test-slave-v1mb"
I20250629 01:59:19.254531 22755 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.002s sys 0.004s
I20250629 01:59:19.259729 22771 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250629 01:59:19.260622 22755 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.004s sys 0.000s
I20250629 01:59:19.260891 22755 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-3/data,/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-3/wal
uuid: "e6a8572fc9fa4225b11994f02e85c28d"
format_stamp: "Formatted at 2025-06-29 01:59:19 on dist-test-slave-v1mb"
I20250629 01:59:19.261155 22755 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-3/wal
metadata directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-3/wal
1 data directories: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-3/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250629 01:59:19.307940 22755 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250629 01:59:19.309155 22755 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250629 01:59:19.309494 22755 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250629 01:59:19.311933 22755 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250629 01:59:19.315359 22755 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250629 01:59:19.315515 22755 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250629 01:59:19.315788 22755 ts_tablet_manager.cc:610] Registered 0 tablets
I20250629 01:59:19.315925 22755 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250629 01:59:19.442915 22755 rpc_server.cc:307] RPC server started. Bound to: 127.17.83.68:39955
I20250629 01:59:19.443032 22883 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.17.83.68:39955 every 8 connection(s)
I20250629 01:59:19.445394 22755 server_base.cc:1180] Dumped server information to /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-3/data/info.pb
I20250629 01:59:19.450225 17741 external_mini_cluster.cc:1413] Started /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu as pid 22755
I20250629 01:59:19.450594 17741 external_mini_cluster.cc:1427] Reading /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-3/wal/instance
I20250629 01:59:19.456772 17741 external_mini_cluster.cc:1351] Running /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
/tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-4/wal
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-4/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-4/logs
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-4/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.17.83.69:0
--local_ip_for_outbound_sockets=127.17.83.69
--webserver_interface=127.17.83.69
--webserver_port=0
--tserver_master_addrs=127.17.83.126:45605
--builtin_ntp_servers=127.17.83.84:37979
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--raft_prepare_replacement_before_eviction=true with env {}
I20250629 01:59:19.465214 22884 heartbeater.cc:344] Connected to a master server at 127.17.83.126:45605
I20250629 01:59:19.465587 22884 heartbeater.cc:461] Registering TS with master...
I20250629 01:59:19.466826 22884 heartbeater.cc:507] Master 127.17.83.126:45605 requested a full tablet report, sending...
I20250629 01:59:19.469084 22296 ts_manager.cc:194] Registered new tserver with Master: e6a8572fc9fa4225b11994f02e85c28d (127.17.83.68:39955)
I20250629 01:59:19.470279 22296 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.17.83.68:54801
W20250629 01:59:19.746764 22888 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:59:19.747364 22888 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250629 01:59:19.747870 22888 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:59:19.777922 22888 flags.cc:425] Enabled experimental flag: --raft_prepare_replacement_before_eviction=true
W20250629 01:59:19.778313 22888 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250629 01:59:19.779089 22888 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.17.83.69
I20250629 01:59:19.811794 22888 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.17.83.84:37979
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-4/data
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-4/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.17.83.69:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-4/data/info.pb
--webserver_interface=127.17.83.69
--webserver_port=0
--tserver_master_addrs=127.17.83.126:45605
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.17.83.69
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-4/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision f0776864a65c19ebdd909053f3e624e8bcc25cee
build type FASTDEBUG
built by None at 29 Jun 2025 01:43:20 UTC on 5fd53c4cbb9d
build id 6821
TSAN enabled
I20250629 01:59:19.813020 22888 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250629 01:59:19.814405 22888 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250629 01:59:19.828737 22894 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:59:20.214934 22327 debug-util.cc:398] Leaking SignalData structure 0x7b08000a4520 after lost signal to thread 22264
I20250629 01:59:20.473528 22884 heartbeater.cc:499] Master 127.17.83.126:45605 was elected leader, sending a full tablet report...
W20250629 01:59:19.830070 22895 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:59:19.830829 22888 server_base.cc:1048] running on GCE node
W20250629 01:59:19.831233 22897 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:59:20.982864 22888 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250629 01:59:20.985219 22888 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250629 01:59:20.986618 22888 hybrid_clock.cc:648] HybridClock initialized: now 1751162360986561 us; error 75 us; skew 500 ppm
I20250629 01:59:20.987411 22888 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250629 01:59:20.993854 22888 webserver.cc:469] Webserver started at http://127.17.83.69:40385/ using document root <none> and password file <none>
I20250629 01:59:20.994659 22888 fs_manager.cc:362] Metadata directory not provided
I20250629 01:59:20.994860 22888 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250629 01:59:20.995486 22888 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250629 01:59:20.999635 22888 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-4/data/instance:
uuid: "95a5afce40c943de932b00325d5055c2"
format_stamp: "Formatted at 2025-06-29 01:59:20 on dist-test-slave-v1mb"
I20250629 01:59:21.000625 22888 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-4/wal/instance:
uuid: "95a5afce40c943de932b00325d5055c2"
format_stamp: "Formatted at 2025-06-29 01:59:20 on dist-test-slave-v1mb"
I20250629 01:59:21.007992 22888 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.007s sys 0.000s
I20250629 01:59:21.013753 22904 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250629 01:59:21.014680 22888 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.004s sys 0.001s
I20250629 01:59:21.014951 22888 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-4/data,/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-4/wal
uuid: "95a5afce40c943de932b00325d5055c2"
format_stamp: "Formatted at 2025-06-29 01:59:20 on dist-test-slave-v1mb"
I20250629 01:59:21.015235 22888 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-4/wal
metadata directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-4/wal
1 data directories: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-4/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250629 01:59:21.072417 22888 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250629 01:59:21.073725 22888 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250629 01:59:21.074134 22888 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250629 01:59:21.076723 22888 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250629 01:59:21.081669 22888 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250629 01:59:21.081856 22888 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250629 01:59:21.082098 22888 ts_tablet_manager.cc:610] Registered 0 tablets
I20250629 01:59:21.082242 22888 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250629 01:59:21.256012 22888 rpc_server.cc:307] RPC server started. Bound to: 127.17.83.69:44555
I20250629 01:59:21.256193 23017 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.17.83.69:44555 every 8 connection(s)
I20250629 01:59:21.258414 22888 server_base.cc:1180] Dumped server information to /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-4/data/info.pb
I20250629 01:59:21.265826 17741 external_mini_cluster.cc:1413] Started /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu as pid 22888
I20250629 01:59:21.266461 17741 external_mini_cluster.cc:1427] Reading /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-4/wal/instance
I20250629 01:59:21.284895 23018 heartbeater.cc:344] Connected to a master server at 127.17.83.126:45605
I20250629 01:59:21.285310 23018 heartbeater.cc:461] Registering TS with master...
I20250629 01:59:21.286216 23018 heartbeater.cc:507] Master 127.17.83.126:45605 requested a full tablet report, sending...
I20250629 01:59:21.288311 22295 ts_manager.cc:194] Registered new tserver with Master: 95a5afce40c943de932b00325d5055c2 (127.17.83.69:44555)
I20250629 01:59:21.289721 22295 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.17.83.69:47171
I20250629 01:59:21.301548 17741 external_mini_cluster.cc:934] 5 TS(s) registered with all masters
I20250629 01:59:21.341320 22295 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:48096:
name: "TestTable"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
owner: "alice"
I20250629 01:59:21.439693 22819 tablet_service.cc:1468] Processing CreateTablet for tablet 5207b70370f64149b8cd3efabf9228c2 (DEFAULT_TABLE table=TestTable [id=db64b34bfc62451a9c63da893b82f7c3]), partition=RANGE (key) PARTITION UNBOUNDED
I20250629 01:59:21.441430 22819 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 5207b70370f64149b8cd3efabf9228c2. 1 dirs total, 0 dirs full, 0 dirs failed
I20250629 01:59:21.441484 22419 tablet_service.cc:1468] Processing CreateTablet for tablet 5207b70370f64149b8cd3efabf9228c2 (DEFAULT_TABLE table=TestTable [id=db64b34bfc62451a9c63da893b82f7c3]), partition=RANGE (key) PARTITION UNBOUNDED
I20250629 01:59:21.442736 22419 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 5207b70370f64149b8cd3efabf9228c2. 1 dirs total, 0 dirs full, 0 dirs failed
I20250629 01:59:21.447337 22953 tablet_service.cc:1468] Processing CreateTablet for tablet 5207b70370f64149b8cd3efabf9228c2 (DEFAULT_TABLE table=TestTable [id=db64b34bfc62451a9c63da893b82f7c3]), partition=RANGE (key) PARTITION UNBOUNDED
I20250629 01:59:21.449568 22953 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 5207b70370f64149b8cd3efabf9228c2. 1 dirs total, 0 dirs full, 0 dirs failed
I20250629 01:59:21.477252 23038 tablet_bootstrap.cc:492] T 5207b70370f64149b8cd3efabf9228c2 P 95a5afce40c943de932b00325d5055c2: Bootstrap starting.
I20250629 01:59:21.485658 23037 tablet_bootstrap.cc:492] T 5207b70370f64149b8cd3efabf9228c2 P e6a8572fc9fa4225b11994f02e85c28d: Bootstrap starting.
I20250629 01:59:21.486568 23039 tablet_bootstrap.cc:492] T 5207b70370f64149b8cd3efabf9228c2 P 3abbd98378944e2ba0b41ce85520a7e9: Bootstrap starting.
I20250629 01:59:21.492154 23038 tablet_bootstrap.cc:654] T 5207b70370f64149b8cd3efabf9228c2 P 95a5afce40c943de932b00325d5055c2: Neither blocks nor log segments found. Creating new log.
I20250629 01:59:21.494503 23037 tablet_bootstrap.cc:654] T 5207b70370f64149b8cd3efabf9228c2 P e6a8572fc9fa4225b11994f02e85c28d: Neither blocks nor log segments found. Creating new log.
I20250629 01:59:21.494632 23038 log.cc:826] T 5207b70370f64149b8cd3efabf9228c2 P 95a5afce40c943de932b00325d5055c2: Log is configured to *not* fsync() on all Append() calls
I20250629 01:59:21.495249 23039 tablet_bootstrap.cc:654] T 5207b70370f64149b8cd3efabf9228c2 P 3abbd98378944e2ba0b41ce85520a7e9: Neither blocks nor log segments found. Creating new log.
I20250629 01:59:21.498050 23039 log.cc:826] T 5207b70370f64149b8cd3efabf9228c2 P 3abbd98378944e2ba0b41ce85520a7e9: Log is configured to *not* fsync() on all Append() calls
I20250629 01:59:21.501981 23037 log.cc:826] T 5207b70370f64149b8cd3efabf9228c2 P e6a8572fc9fa4225b11994f02e85c28d: Log is configured to *not* fsync() on all Append() calls
I20250629 01:59:21.510586 23039 tablet_bootstrap.cc:492] T 5207b70370f64149b8cd3efabf9228c2 P 3abbd98378944e2ba0b41ce85520a7e9: No bootstrap required, opened a new log
I20250629 01:59:21.511139 23039 ts_tablet_manager.cc:1397] T 5207b70370f64149b8cd3efabf9228c2 P 3abbd98378944e2ba0b41ce85520a7e9: Time spent bootstrapping tablet: real 0.025s user 0.011s sys 0.008s
I20250629 01:59:21.515509 23038 tablet_bootstrap.cc:492] T 5207b70370f64149b8cd3efabf9228c2 P 95a5afce40c943de932b00325d5055c2: No bootstrap required, opened a new log
I20250629 01:59:21.515873 23037 tablet_bootstrap.cc:492] T 5207b70370f64149b8cd3efabf9228c2 P e6a8572fc9fa4225b11994f02e85c28d: No bootstrap required, opened a new log
I20250629 01:59:21.516010 23038 ts_tablet_manager.cc:1397] T 5207b70370f64149b8cd3efabf9228c2 P 95a5afce40c943de932b00325d5055c2: Time spent bootstrapping tablet: real 0.040s user 0.017s sys 0.008s
I20250629 01:59:21.516345 23037 ts_tablet_manager.cc:1397] T 5207b70370f64149b8cd3efabf9228c2 P e6a8572fc9fa4225b11994f02e85c28d: Time spent bootstrapping tablet: real 0.031s user 0.013s sys 0.010s
I20250629 01:59:21.541905 23039 raft_consensus.cc:357] T 5207b70370f64149b8cd3efabf9228c2 P 3abbd98378944e2ba0b41ce85520a7e9 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e6a8572fc9fa4225b11994f02e85c28d" member_type: VOTER last_known_addr { host: "127.17.83.68" port: 39955 } } peers { permanent_uuid: "95a5afce40c943de932b00325d5055c2" member_type: VOTER last_known_addr { host: "127.17.83.69" port: 44555 } } peers { permanent_uuid: "3abbd98378944e2ba0b41ce85520a7e9" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 37643 } }
I20250629 01:59:21.542959 23039 raft_consensus.cc:383] T 5207b70370f64149b8cd3efabf9228c2 P 3abbd98378944e2ba0b41ce85520a7e9 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250629 01:59:21.542745 23037 raft_consensus.cc:357] T 5207b70370f64149b8cd3efabf9228c2 P e6a8572fc9fa4225b11994f02e85c28d [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e6a8572fc9fa4225b11994f02e85c28d" member_type: VOTER last_known_addr { host: "127.17.83.68" port: 39955 } } peers { permanent_uuid: "95a5afce40c943de932b00325d5055c2" member_type: VOTER last_known_addr { host: "127.17.83.69" port: 44555 } } peers { permanent_uuid: "3abbd98378944e2ba0b41ce85520a7e9" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 37643 } }
I20250629 01:59:21.543357 23039 raft_consensus.cc:738] T 5207b70370f64149b8cd3efabf9228c2 P 3abbd98378944e2ba0b41ce85520a7e9 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 3abbd98378944e2ba0b41ce85520a7e9, State: Initialized, Role: FOLLOWER
I20250629 01:59:21.543812 23037 raft_consensus.cc:383] T 5207b70370f64149b8cd3efabf9228c2 P e6a8572fc9fa4225b11994f02e85c28d [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250629 01:59:21.544148 23037 raft_consensus.cc:738] T 5207b70370f64149b8cd3efabf9228c2 P e6a8572fc9fa4225b11994f02e85c28d [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: e6a8572fc9fa4225b11994f02e85c28d, State: Initialized, Role: FOLLOWER
I20250629 01:59:21.544576 23039 consensus_queue.cc:260] T 5207b70370f64149b8cd3efabf9228c2 P 3abbd98378944e2ba0b41ce85520a7e9 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e6a8572fc9fa4225b11994f02e85c28d" member_type: VOTER last_known_addr { host: "127.17.83.68" port: 39955 } } peers { permanent_uuid: "95a5afce40c943de932b00325d5055c2" member_type: VOTER last_known_addr { host: "127.17.83.69" port: 44555 } } peers { permanent_uuid: "3abbd98378944e2ba0b41ce85520a7e9" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 37643 } }
I20250629 01:59:21.545049 23037 consensus_queue.cc:260] T 5207b70370f64149b8cd3efabf9228c2 P e6a8572fc9fa4225b11994f02e85c28d [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e6a8572fc9fa4225b11994f02e85c28d" member_type: VOTER last_known_addr { host: "127.17.83.68" port: 39955 } } peers { permanent_uuid: "95a5afce40c943de932b00325d5055c2" member_type: VOTER last_known_addr { host: "127.17.83.69" port: 44555 } } peers { permanent_uuid: "3abbd98378944e2ba0b41ce85520a7e9" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 37643 } }
I20250629 01:59:21.550056 23038 raft_consensus.cc:357] T 5207b70370f64149b8cd3efabf9228c2 P 95a5afce40c943de932b00325d5055c2 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e6a8572fc9fa4225b11994f02e85c28d" member_type: VOTER last_known_addr { host: "127.17.83.68" port: 39955 } } peers { permanent_uuid: "95a5afce40c943de932b00325d5055c2" member_type: VOTER last_known_addr { host: "127.17.83.69" port: 44555 } } peers { permanent_uuid: "3abbd98378944e2ba0b41ce85520a7e9" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 37643 } }
I20250629 01:59:21.550938 23038 raft_consensus.cc:383] T 5207b70370f64149b8cd3efabf9228c2 P 95a5afce40c943de932b00325d5055c2 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250629 01:59:21.551270 23038 raft_consensus.cc:738] T 5207b70370f64149b8cd3efabf9228c2 P 95a5afce40c943de932b00325d5055c2 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 95a5afce40c943de932b00325d5055c2, State: Initialized, Role: FOLLOWER
I20250629 01:59:21.552165 23038 consensus_queue.cc:260] T 5207b70370f64149b8cd3efabf9228c2 P 95a5afce40c943de932b00325d5055c2 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e6a8572fc9fa4225b11994f02e85c28d" member_type: VOTER last_known_addr { host: "127.17.83.68" port: 39955 } } peers { permanent_uuid: "95a5afce40c943de932b00325d5055c2" member_type: VOTER last_known_addr { host: "127.17.83.69" port: 44555 } } peers { permanent_uuid: "3abbd98378944e2ba0b41ce85520a7e9" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 37643 } }
I20250629 01:59:21.557160 23039 ts_tablet_manager.cc:1428] T 5207b70370f64149b8cd3efabf9228c2 P 3abbd98378944e2ba0b41ce85520a7e9: Time spent starting tablet: real 0.046s user 0.018s sys 0.021s
I20250629 01:59:21.560036 23037 ts_tablet_manager.cc:1428] T 5207b70370f64149b8cd3efabf9228c2 P e6a8572fc9fa4225b11994f02e85c28d: Time spent starting tablet: real 0.043s user 0.031s sys 0.012s
I20250629 01:59:21.562158 23038 ts_tablet_manager.cc:1428] T 5207b70370f64149b8cd3efabf9228c2 P 95a5afce40c943de932b00325d5055c2: Time spent starting tablet: real 0.046s user 0.031s sys 0.005s
I20250629 01:59:21.562479 23018 heartbeater.cc:499] Master 127.17.83.126:45605 was elected leader, sending a full tablet report...
W20250629 01:59:21.581835 22485 tablet.cc:2378] T 5207b70370f64149b8cd3efabf9228c2 P 3abbd98378944e2ba0b41ce85520a7e9: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250629 01:59:21.705116 22885 tablet.cc:2378] T 5207b70370f64149b8cd3efabf9228c2 P e6a8572fc9fa4225b11994f02e85c28d: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250629 01:59:21.732717 23044 raft_consensus.cc:491] T 5207b70370f64149b8cd3efabf9228c2 P e6a8572fc9fa4225b11994f02e85c28d [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250629 01:59:21.733592 23044 raft_consensus.cc:513] T 5207b70370f64149b8cd3efabf9228c2 P e6a8572fc9fa4225b11994f02e85c28d [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e6a8572fc9fa4225b11994f02e85c28d" member_type: VOTER last_known_addr { host: "127.17.83.68" port: 39955 } } peers { permanent_uuid: "95a5afce40c943de932b00325d5055c2" member_type: VOTER last_known_addr { host: "127.17.83.69" port: 44555 } } peers { permanent_uuid: "3abbd98378944e2ba0b41ce85520a7e9" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 37643 } }
I20250629 01:59:21.735589 23044 leader_election.cc:290] T 5207b70370f64149b8cd3efabf9228c2 P e6a8572fc9fa4225b11994f02e85c28d [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 95a5afce40c943de932b00325d5055c2 (127.17.83.69:44555), 3abbd98378944e2ba0b41ce85520a7e9 (127.17.83.65:37643)
I20250629 01:59:21.748153 22973 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "5207b70370f64149b8cd3efabf9228c2" candidate_uuid: "e6a8572fc9fa4225b11994f02e85c28d" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "95a5afce40c943de932b00325d5055c2" is_pre_election: true
I20250629 01:59:21.749305 22973 raft_consensus.cc:2466] T 5207b70370f64149b8cd3efabf9228c2 P 95a5afce40c943de932b00325d5055c2 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate e6a8572fc9fa4225b11994f02e85c28d in term 0.
I20250629 01:59:21.750385 22775 leader_election.cc:304] T 5207b70370f64149b8cd3efabf9228c2 P e6a8572fc9fa4225b11994f02e85c28d [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 95a5afce40c943de932b00325d5055c2, e6a8572fc9fa4225b11994f02e85c28d; no voters:
I20250629 01:59:21.751103 23044 raft_consensus.cc:2802] T 5207b70370f64149b8cd3efabf9228c2 P e6a8572fc9fa4225b11994f02e85c28d [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250629 01:59:21.751485 23044 raft_consensus.cc:491] T 5207b70370f64149b8cd3efabf9228c2 P e6a8572fc9fa4225b11994f02e85c28d [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250629 01:59:21.751590 22439 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "5207b70370f64149b8cd3efabf9228c2" candidate_uuid: "e6a8572fc9fa4225b11994f02e85c28d" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "3abbd98378944e2ba0b41ce85520a7e9" is_pre_election: true
I20250629 01:59:21.751835 23044 raft_consensus.cc:3058] T 5207b70370f64149b8cd3efabf9228c2 P e6a8572fc9fa4225b11994f02e85c28d [term 0 FOLLOWER]: Advancing to term 1
I20250629 01:59:21.752269 22439 raft_consensus.cc:2466] T 5207b70370f64149b8cd3efabf9228c2 P 3abbd98378944e2ba0b41ce85520a7e9 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate e6a8572fc9fa4225b11994f02e85c28d in term 0.
I20250629 01:59:21.759352 23044 raft_consensus.cc:513] T 5207b70370f64149b8cd3efabf9228c2 P e6a8572fc9fa4225b11994f02e85c28d [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e6a8572fc9fa4225b11994f02e85c28d" member_type: VOTER last_known_addr { host: "127.17.83.68" port: 39955 } } peers { permanent_uuid: "95a5afce40c943de932b00325d5055c2" member_type: VOTER last_known_addr { host: "127.17.83.69" port: 44555 } } peers { permanent_uuid: "3abbd98378944e2ba0b41ce85520a7e9" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 37643 } }
I20250629 01:59:21.761240 23044 leader_election.cc:290] T 5207b70370f64149b8cd3efabf9228c2 P e6a8572fc9fa4225b11994f02e85c28d [CANDIDATE]: Term 1 election: Requested vote from peers 95a5afce40c943de932b00325d5055c2 (127.17.83.69:44555), 3abbd98378944e2ba0b41ce85520a7e9 (127.17.83.65:37643)
I20250629 01:59:21.761831 22973 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "5207b70370f64149b8cd3efabf9228c2" candidate_uuid: "e6a8572fc9fa4225b11994f02e85c28d" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "95a5afce40c943de932b00325d5055c2"
I20250629 01:59:21.762320 22973 raft_consensus.cc:3058] T 5207b70370f64149b8cd3efabf9228c2 P 95a5afce40c943de932b00325d5055c2 [term 0 FOLLOWER]: Advancing to term 1
I20250629 01:59:21.762198 22439 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "5207b70370f64149b8cd3efabf9228c2" candidate_uuid: "e6a8572fc9fa4225b11994f02e85c28d" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "3abbd98378944e2ba0b41ce85520a7e9"
I20250629 01:59:21.762715 22439 raft_consensus.cc:3058] T 5207b70370f64149b8cd3efabf9228c2 P 3abbd98378944e2ba0b41ce85520a7e9 [term 0 FOLLOWER]: Advancing to term 1
W20250629 01:59:21.764451 23019 tablet.cc:2378] T 5207b70370f64149b8cd3efabf9228c2 P 95a5afce40c943de932b00325d5055c2: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250629 01:59:21.767416 22973 raft_consensus.cc:2466] T 5207b70370f64149b8cd3efabf9228c2 P 95a5afce40c943de932b00325d5055c2 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate e6a8572fc9fa4225b11994f02e85c28d in term 1.
I20250629 01:59:21.768733 22775 leader_election.cc:304] T 5207b70370f64149b8cd3efabf9228c2 P e6a8572fc9fa4225b11994f02e85c28d [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 95a5afce40c943de932b00325d5055c2, e6a8572fc9fa4225b11994f02e85c28d; no voters:
I20250629 01:59:21.769255 23044 raft_consensus.cc:2802] T 5207b70370f64149b8cd3efabf9228c2 P e6a8572fc9fa4225b11994f02e85c28d [term 1 FOLLOWER]: Leader election won for term 1
I20250629 01:59:21.770300 22439 raft_consensus.cc:2466] T 5207b70370f64149b8cd3efabf9228c2 P 3abbd98378944e2ba0b41ce85520a7e9 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate e6a8572fc9fa4225b11994f02e85c28d in term 1.
I20250629 01:59:21.771310 23044 raft_consensus.cc:695] T 5207b70370f64149b8cd3efabf9228c2 P e6a8572fc9fa4225b11994f02e85c28d [term 1 LEADER]: Becoming Leader. State: Replica: e6a8572fc9fa4225b11994f02e85c28d, State: Running, Role: LEADER
I20250629 01:59:21.772930 23044 consensus_queue.cc:237] T 5207b70370f64149b8cd3efabf9228c2 P e6a8572fc9fa4225b11994f02e85c28d [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e6a8572fc9fa4225b11994f02e85c28d" member_type: VOTER last_known_addr { host: "127.17.83.68" port: 39955 } } peers { permanent_uuid: "95a5afce40c943de932b00325d5055c2" member_type: VOTER last_known_addr { host: "127.17.83.69" port: 44555 } } peers { permanent_uuid: "3abbd98378944e2ba0b41ce85520a7e9" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 37643 } }
I20250629 01:59:21.784188 22294 catalog_manager.cc:5582] T 5207b70370f64149b8cd3efabf9228c2 P e6a8572fc9fa4225b11994f02e85c28d reported cstate change: term changed from 0 to 1, leader changed from <none> to e6a8572fc9fa4225b11994f02e85c28d (127.17.83.68). New cstate: current_term: 1 leader_uuid: "e6a8572fc9fa4225b11994f02e85c28d" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e6a8572fc9fa4225b11994f02e85c28d" member_type: VOTER last_known_addr { host: "127.17.83.68" port: 39955 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "95a5afce40c943de932b00325d5055c2" member_type: VOTER last_known_addr { host: "127.17.83.69" port: 44555 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "3abbd98378944e2ba0b41ce85520a7e9" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 37643 } health_report { overall_health: UNKNOWN } } }
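The election summaries above ("received 2 responses out of 3 voters: 2 yes votes; 0 no votes") come down to strict-majority counting: the candidate's own vote plus one granted vote already forms a quorum of three. A minimal sketch of that arithmetic, assuming nothing about leader_election.cc beyond what the log shows:

```python
# Strict-majority decision, as summarized in the election log lines above.
def majority(num_voters: int) -> int:
    return num_voters // 2 + 1

def decide(yes_votes: int, no_votes: int, num_voters: int) -> str:
    if yes_votes >= majority(num_voters):
        return "candidate won"
    if no_votes >= majority(num_voters):
        return "candidate lost"
    return "undecided"

# Term 1 above: e6a8572f... counts its own vote plus the one granted by
# 95a5afce..., so the election is decided before the third response arrives.
print(decide(yes_votes=2, no_votes=0, num_voters=3))   # -> candidate won
```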
I20250629 01:59:21.801951 17741 external_mini_cluster.cc:934] 5 TS(s) registered with all masters
I20250629 01:59:21.805145 17741 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 3abbd98378944e2ba0b41ce85520a7e9 to finish bootstrapping
I20250629 01:59:21.819442 17741 ts_itest-base.cc:246] Waiting for 1 tablets on tserver e6a8572fc9fa4225b11994f02e85c28d to finish bootstrapping
I20250629 01:59:21.831781 17741 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 95a5afce40c943de932b00325d5055c2 to finish bootstrapping
I20250629 01:59:21.844054 17741 test_util.cc:276] Using random seed: 1116049732
I20250629 01:59:21.867540 17741 test_workload.cc:405] TestWorkload: Skipping table creation because table TestTable already exists
I20250629 01:59:21.868908 17741 external_mini_cluster.cc:1620] Killing /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu with pid 22755
W20250629 01:59:21.901432 23055 negotiation.cc:337] Failed RPC negotiation. Trace:
0629 01:59:21.878212 (+ 0us) reactor.cc:625] Submitting negotiation task for client connection to 127.17.83.68:39955 (local address 127.0.0.1:40458)
0629 01:59:21.878708 (+ 496us) negotiation.cc:107] Waiting for socket to connect
0629 01:59:21.878736 (+ 28us) client_negotiation.cc:174] Beginning negotiation
0629 01:59:21.878889 (+ 153us) client_negotiation.cc:252] Sending NEGOTIATE NegotiatePB request
0629 01:59:21.900314 (+ 21425us) negotiation.cc:327] Negotiation complete: Network error: Client connection negotiation failed: client connection to 127.17.83.68:39955: BlockingRecv error: recv error from unknown peer: Transport endpoint is not connected (error 107)
Metrics: {"client-negotiator.queue_time_us":62}
W20250629 01:59:21.911015 23053 meta_cache.cc:302] tablet 5207b70370f64149b8cd3efabf9228c2: replica e6a8572fc9fa4225b11994f02e85c28d (127.17.83.68:39955) has failed: Network error: Client connection negotiation failed: client connection to 127.17.83.68:39955: BlockingRecv error: recv error from unknown peer: Transport endpoint is not connected (error 107)
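Once the killed leader's connection is refused, the client marks that replica as failed and keeps retrying the remaining replicas; until a new leader is elected, every attempt bounces off a follower, which is what produces the run of "not leader of this config" warnings below. A rough sketch of such a retry loop, as a simplified stand-in rather than Kudu's meta_cache or batcher code:

```python
# Simplified client-side retry loop after a replica failure (illustrative only).
import itertools

# Replica states after the leader's tablet server is killed (see log above).
replicas = {
    "e6a8572f...": "DOWN",
    "95a5afce...": "FOLLOWER",
    "3abbd983...": "FOLLOWER",
}

def try_write(state: str) -> bool:
    # Followers reject writes with "not leader of this config".
    return state == "LEADER"

for attempt, uuid in enumerate(itertools.islice(itertools.cycle(replicas), 6)):
    state = replicas[uuid]
    if state == "DOWN":
        continue                      # skip the replica known to be unreachable
    if try_write(state):
        print(f"write accepted by {uuid}")
        break
    print(f"attempt {attempt}: {uuid} rejected write (role {state}); retrying")
```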
W20250629 01:59:21.934576 22933 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:55458: Illegal state: replica 95a5afce40c943de932b00325d5055c2 is not leader of this config: current role FOLLOWER
W20250629 01:59:21.955971 22399 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:50716: Illegal state: replica 3abbd98378944e2ba0b41ce85520a7e9 is not leader of this config: current role FOLLOWER
W20250629 01:59:21.978384 22933 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:55458: Illegal state: replica 95a5afce40c943de932b00325d5055c2 is not leader of this config: current role FOLLOWER
W20250629 01:59:21.990200 22399 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:50716: Illegal state: replica 3abbd98378944e2ba0b41ce85520a7e9 is not leader of this config: current role FOLLOWER
W20250629 01:59:22.042968 22933 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:55458: Illegal state: replica 95a5afce40c943de932b00325d5055c2 is not leader of this config: current role FOLLOWER
W20250629 01:59:22.058641 22399 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:50716: Illegal state: replica 3abbd98378944e2ba0b41ce85520a7e9 is not leader of this config: current role FOLLOWER
W20250629 01:59:22.093016 22933 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:55458: Illegal state: replica 95a5afce40c943de932b00325d5055c2 is not leader of this config: current role FOLLOWER
W20250629 01:59:22.107455 22399 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:50716: Illegal state: replica 3abbd98378944e2ba0b41ce85520a7e9 is not leader of this config: current role FOLLOWER
W20250629 01:59:22.141323 22933 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:55458: Illegal state: replica 95a5afce40c943de932b00325d5055c2 is not leader of this config: current role FOLLOWER
W20250629 01:59:22.160769 22399 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:50716: Illegal state: replica 3abbd98378944e2ba0b41ce85520a7e9 is not leader of this config: current role FOLLOWER
W20250629 01:59:22.200888 22933 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:55458: Illegal state: replica 95a5afce40c943de932b00325d5055c2 is not leader of this config: current role FOLLOWER
W20250629 01:59:22.221654 22399 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:50716: Illegal state: replica 3abbd98378944e2ba0b41ce85520a7e9 is not leader of this config: current role FOLLOWER
W20250629 01:59:22.271770 22933 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:55458: Illegal state: replica 95a5afce40c943de932b00325d5055c2 is not leader of this config: current role FOLLOWER
W20250629 01:59:22.298486 22399 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:50716: Illegal state: replica 3abbd98378944e2ba0b41ce85520a7e9 is not leader of this config: current role FOLLOWER
W20250629 01:59:22.358316 22933 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:55458: Illegal state: replica 95a5afce40c943de932b00325d5055c2 is not leader of this config: current role FOLLOWER
W20250629 01:59:22.385262 22399 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:50716: Illegal state: replica 3abbd98378944e2ba0b41ce85520a7e9 is not leader of this config: current role FOLLOWER
W20250629 01:59:22.449458 22933 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:55458: Illegal state: replica 95a5afce40c943de932b00325d5055c2 is not leader of this config: current role FOLLOWER
W20250629 01:59:22.480687 22399 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:50716: Illegal state: replica 3abbd98378944e2ba0b41ce85520a7e9 is not leader of this config: current role FOLLOWER
W20250629 01:59:22.547989 22933 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:55458: Illegal state: replica 95a5afce40c943de932b00325d5055c2 is not leader of this config: current role FOLLOWER
W20250629 01:59:22.583123 22399 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:50716: Illegal state: replica 3abbd98378944e2ba0b41ce85520a7e9 is not leader of this config: current role FOLLOWER
W20250629 01:59:22.661006 22933 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:55458: Illegal state: replica 95a5afce40c943de932b00325d5055c2 is not leader of this config: current role FOLLOWER
W20250629 01:59:22.699936 22399 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:50716: Illegal state: replica 3abbd98378944e2ba0b41ce85520a7e9 is not leader of this config: current role FOLLOWER
W20250629 01:59:22.784633 22933 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:55458: Illegal state: replica 95a5afce40c943de932b00325d5055c2 is not leader of this config: current role FOLLOWER
W20250629 01:59:22.826375 22399 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:50716: Illegal state: replica 3abbd98378944e2ba0b41ce85520a7e9 is not leader of this config: current role FOLLOWER
W20250629 01:59:22.916842 22933 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:55458: Illegal state: replica 95a5afce40c943de932b00325d5055c2 is not leader of this config: current role FOLLOWER
W20250629 01:59:22.963703 22399 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:50716: Illegal state: replica 3abbd98378944e2ba0b41ce85520a7e9 is not leader of this config: current role FOLLOWER
W20250629 01:59:23.017127 23053 meta_cache.cc:302] tablet 5207b70370f64149b8cd3efabf9228c2: replica e6a8572fc9fa4225b11994f02e85c28d (127.17.83.68:39955) has failed: Network error: Client connection negotiation failed: client connection to 127.17.83.68:39955: connect: Connection refused (error 111)
W20250629 01:59:23.028580 22327 debug-util.cc:398] Leaking SignalData structure 0x7b0800088720 after lost signal to thread 22264
W20250629 01:59:23.029456 22327 debug-util.cc:398] Leaking SignalData structure 0x7b080007a920 after lost signal to thread 22330
W20250629 01:59:23.061190 22933 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:55458: Illegal state: replica 95a5afce40c943de932b00325d5055c2 is not leader of this config: current role FOLLOWER
W20250629 01:59:23.110781 22399 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:50716: Illegal state: replica 3abbd98378944e2ba0b41ce85520a7e9 is not leader of this config: current role FOLLOWER
W20250629 01:59:23.215909 22933 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:55458: Illegal state: replica 95a5afce40c943de932b00325d5055c2 is not leader of this config: current role FOLLOWER
W20250629 01:59:23.272181 22399 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:50716: Illegal state: replica 3abbd98378944e2ba0b41ce85520a7e9 is not leader of this config: current role FOLLOWER
W20250629 01:59:23.383170 22933 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:55458: Illegal state: replica 95a5afce40c943de932b00325d5055c2 is not leader of this config: current role FOLLOWER
W20250629 01:59:23.434371 22399 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:50716: Illegal state: replica 3abbd98378944e2ba0b41ce85520a7e9 is not leader of this config: current role FOLLOWER
W20250629 01:59:23.545895 22933 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:55458: Illegal state: replica 95a5afce40c943de932b00325d5055c2 is not leader of this config: current role FOLLOWER
W20250629 01:59:23.602952 22399 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:50716: Illegal state: replica 3abbd98378944e2ba0b41ce85520a7e9 is not leader of this config: current role FOLLOWER
W20250629 01:59:23.729259 22933 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:55458: Illegal state: replica 95a5afce40c943de932b00325d5055c2 is not leader of this config: current role FOLLOWER
I20250629 01:59:23.733304 23084 raft_consensus.cc:491] T 5207b70370f64149b8cd3efabf9228c2 P 3abbd98378944e2ba0b41ce85520a7e9 [term 1 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250629 01:59:23.733871 23084 raft_consensus.cc:513] T 5207b70370f64149b8cd3efabf9228c2 P 3abbd98378944e2ba0b41ce85520a7e9 [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e6a8572fc9fa4225b11994f02e85c28d" member_type: VOTER last_known_addr { host: "127.17.83.68" port: 39955 } } peers { permanent_uuid: "95a5afce40c943de932b00325d5055c2" member_type: VOTER last_known_addr { host: "127.17.83.69" port: 44555 } } peers { permanent_uuid: "3abbd98378944e2ba0b41ce85520a7e9" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 37643 } }
I20250629 01:59:23.736655 23084 leader_election.cc:290] T 5207b70370f64149b8cd3efabf9228c2 P 3abbd98378944e2ba0b41ce85520a7e9 [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers e6a8572fc9fa4225b11994f02e85c28d (127.17.83.68:39955), 95a5afce40c943de932b00325d5055c2 (127.17.83.69:44555)
W20250629 01:59:23.744299 22374 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.17.83.68:39955: connect: Connection refused (error 111)
I20250629 01:59:23.750283 23090 raft_consensus.cc:491] T 5207b70370f64149b8cd3efabf9228c2 P 95a5afce40c943de932b00325d5055c2 [term 1 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250629 01:59:23.750809 23090 raft_consensus.cc:513] T 5207b70370f64149b8cd3efabf9228c2 P 95a5afce40c943de932b00325d5055c2 [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e6a8572fc9fa4225b11994f02e85c28d" member_type: VOTER last_known_addr { host: "127.17.83.68" port: 39955 } } peers { permanent_uuid: "95a5afce40c943de932b00325d5055c2" member_type: VOTER last_known_addr { host: "127.17.83.69" port: 44555 } } peers { permanent_uuid: "3abbd98378944e2ba0b41ce85520a7e9" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 37643 } }
I20250629 01:59:23.753854 23090 leader_election.cc:290] T 5207b70370f64149b8cd3efabf9228c2 P 95a5afce40c943de932b00325d5055c2 [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers e6a8572fc9fa4225b11994f02e85c28d (127.17.83.68:39955), 3abbd98378944e2ba0b41ce85520a7e9 (127.17.83.65:37643)
W20250629 01:59:23.765344 22374 leader_election.cc:336] T 5207b70370f64149b8cd3efabf9228c2 P 3abbd98378944e2ba0b41ce85520a7e9 [CANDIDATE]: Term 2 pre-election: RPC error from VoteRequest() call to peer e6a8572fc9fa4225b11994f02e85c28d (127.17.83.68:39955): Network error: Client connection negotiation failed: client connection to 127.17.83.68:39955: connect: Connection refused (error 111)
I20250629 01:59:23.767738 22973 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "5207b70370f64149b8cd3efabf9228c2" candidate_uuid: "3abbd98378944e2ba0b41ce85520a7e9" candidate_term: 2 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "95a5afce40c943de932b00325d5055c2" is_pre_election: true
I20250629 01:59:23.768471 22973 raft_consensus.cc:2466] T 5207b70370f64149b8cd3efabf9228c2 P 95a5afce40c943de932b00325d5055c2 [term 1 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 3abbd98378944e2ba0b41ce85520a7e9 in term 1.
I20250629 01:59:23.769752 22375 leader_election.cc:304] T 5207b70370f64149b8cd3efabf9228c2 P 3abbd98378944e2ba0b41ce85520a7e9 [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 3abbd98378944e2ba0b41ce85520a7e9, 95a5afce40c943de932b00325d5055c2; no voters: e6a8572fc9fa4225b11994f02e85c28d
I20250629 01:59:23.770658 23084 raft_consensus.cc:2802] T 5207b70370f64149b8cd3efabf9228c2 P 3abbd98378944e2ba0b41ce85520a7e9 [term 1 FOLLOWER]: Leader pre-election won for term 2
I20250629 01:59:23.771028 23084 raft_consensus.cc:491] T 5207b70370f64149b8cd3efabf9228c2 P 3abbd98378944e2ba0b41ce85520a7e9 [term 1 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250629 01:59:23.771575 23084 raft_consensus.cc:3058] T 5207b70370f64149b8cd3efabf9228c2 P 3abbd98378944e2ba0b41ce85520a7e9 [term 1 FOLLOWER]: Advancing to term 2
W20250629 01:59:23.773088 22908 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.17.83.68:39955: connect: Connection refused (error 111)
I20250629 01:59:23.778807 22439 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "5207b70370f64149b8cd3efabf9228c2" candidate_uuid: "95a5afce40c943de932b00325d5055c2" candidate_term: 2 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "3abbd98378944e2ba0b41ce85520a7e9" is_pre_election: true
I20250629 01:59:23.780438 23084 raft_consensus.cc:513] T 5207b70370f64149b8cd3efabf9228c2 P 3abbd98378944e2ba0b41ce85520a7e9 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e6a8572fc9fa4225b11994f02e85c28d" member_type: VOTER last_known_addr { host: "127.17.83.68" port: 39955 } } peers { permanent_uuid: "95a5afce40c943de932b00325d5055c2" member_type: VOTER last_known_addr { host: "127.17.83.69" port: 44555 } } peers { permanent_uuid: "3abbd98378944e2ba0b41ce85520a7e9" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 37643 } }
I20250629 01:59:23.781642 22439 raft_consensus.cc:2391] T 5207b70370f64149b8cd3efabf9228c2 P 3abbd98378944e2ba0b41ce85520a7e9 [term 2 FOLLOWER]: Leader pre-election vote request: Denying vote to candidate 95a5afce40c943de932b00325d5055c2 in current term 2: Already voted for candidate 3abbd98378944e2ba0b41ce85520a7e9 in this term.
W20250629 01:59:23.783586 22908 leader_election.cc:336] T 5207b70370f64149b8cd3efabf9228c2 P 95a5afce40c943de932b00325d5055c2 [CANDIDATE]: Term 2 pre-election: RPC error from VoteRequest() call to peer e6a8572fc9fa4225b11994f02e85c28d (127.17.83.68:39955): Network error: Client connection negotiation failed: client connection to 127.17.83.68:39955: connect: Connection refused (error 111)
I20250629 01:59:23.785127 23084 leader_election.cc:290] T 5207b70370f64149b8cd3efabf9228c2 P 3abbd98378944e2ba0b41ce85520a7e9 [CANDIDATE]: Term 2 election: Requested vote from peers e6a8572fc9fa4225b11994f02e85c28d (127.17.83.68:39955), 95a5afce40c943de932b00325d5055c2 (127.17.83.69:44555)
I20250629 01:59:23.786540 22973 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "5207b70370f64149b8cd3efabf9228c2" candidate_uuid: "3abbd98378944e2ba0b41ce85520a7e9" candidate_term: 2 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "95a5afce40c943de932b00325d5055c2"
W20250629 01:59:23.786890 22374 leader_election.cc:336] T 5207b70370f64149b8cd3efabf9228c2 P 3abbd98378944e2ba0b41ce85520a7e9 [CANDIDATE]: Term 2 election: RPC error from VoteRequest() call to peer e6a8572fc9fa4225b11994f02e85c28d (127.17.83.68:39955): Network error: Client connection negotiation failed: client connection to 127.17.83.68:39955: connect: Connection refused (error 111)
I20250629 01:59:23.787070 22973 raft_consensus.cc:3058] T 5207b70370f64149b8cd3efabf9228c2 P 95a5afce40c943de932b00325d5055c2 [term 1 FOLLOWER]: Advancing to term 2
W20250629 01:59:23.788755 22399 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:50716: Illegal state: replica 3abbd98378944e2ba0b41ce85520a7e9 is not leader of this config: current role FOLLOWER
I20250629 01:59:23.791649 22973 raft_consensus.cc:2466] T 5207b70370f64149b8cd3efabf9228c2 P 95a5afce40c943de932b00325d5055c2 [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 3abbd98378944e2ba0b41ce85520a7e9 in term 2.
I20250629 01:59:23.792516 22375 leader_election.cc:304] T 5207b70370f64149b8cd3efabf9228c2 P 3abbd98378944e2ba0b41ce85520a7e9 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 3abbd98378944e2ba0b41ce85520a7e9, 95a5afce40c943de932b00325d5055c2; no voters: e6a8572fc9fa4225b11994f02e85c28d
I20250629 01:59:23.793679 23084 raft_consensus.cc:2802] T 5207b70370f64149b8cd3efabf9228c2 P 3abbd98378944e2ba0b41ce85520a7e9 [term 2 FOLLOWER]: Leader election won for term 2
I20250629 01:59:23.794696 22907 leader_election.cc:304] T 5207b70370f64149b8cd3efabf9228c2 P 95a5afce40c943de932b00325d5055c2 [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: 95a5afce40c943de932b00325d5055c2; no voters: 3abbd98378944e2ba0b41ce85520a7e9, e6a8572fc9fa4225b11994f02e85c28d
I20250629 01:59:23.796309 23090 raft_consensus.cc:2747] T 5207b70370f64149b8cd3efabf9228c2 P 95a5afce40c943de932b00325d5055c2 [term 2 FOLLOWER]: Leader pre-election lost for term 2. Reason: could not achieve majority
I20250629 01:59:23.813688 23084 raft_consensus.cc:695] T 5207b70370f64149b8cd3efabf9228c2 P 3abbd98378944e2ba0b41ce85520a7e9 [term 2 LEADER]: Becoming Leader. State: Replica: 3abbd98378944e2ba0b41ce85520a7e9, State: Running, Role: LEADER
I20250629 01:59:23.814565 23084 consensus_queue.cc:237] T 5207b70370f64149b8cd3efabf9228c2 P 3abbd98378944e2ba0b41ce85520a7e9 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e6a8572fc9fa4225b11994f02e85c28d" member_type: VOTER last_known_addr { host: "127.17.83.68" port: 39955 } } peers { permanent_uuid: "95a5afce40c943de932b00325d5055c2" member_type: VOTER last_known_addr { host: "127.17.83.69" port: 44555 } } peers { permanent_uuid: "3abbd98378944e2ba0b41ce85520a7e9" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 37643 } }
I20250629 01:59:23.835283 22553 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250629 01:59:23.844229 22296 catalog_manager.cc:5582] T 5207b70370f64149b8cd3efabf9228c2 P 3abbd98378944e2ba0b41ce85520a7e9 reported cstate change: term changed from 1 to 2, leader changed from e6a8572fc9fa4225b11994f02e85c28d (127.17.83.68) to 3abbd98378944e2ba0b41ce85520a7e9 (127.17.83.65). New cstate: current_term: 2 leader_uuid: "3abbd98378944e2ba0b41ce85520a7e9" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "e6a8572fc9fa4225b11994f02e85c28d" member_type: VOTER last_known_addr { host: "127.17.83.68" port: 39955 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "95a5afce40c943de932b00325d5055c2" member_type: VOTER last_known_addr { host: "127.17.83.69" port: 44555 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "3abbd98378944e2ba0b41ce85520a7e9" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 37643 } health_report { overall_health: HEALTHY } } }
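The denial at 01:59:23.781 ("Already voted for candidate 3abbd983... in this term") reflects the one-vote-per-term rule: a replica records at most one granted vote per term and refuses other candidates until the term advances. A small illustrative model of that rule (not raft_consensus.cc):

```python
# One-vote-per-term bookkeeping, as implied by the vote-denial log line above.
class VoteTracker:
    def __init__(self) -> None:
        self.current_term = 0
        self.voted_for = None            # candidate UUID voted for in current_term

    def request_vote(self, term: int, candidate: str) -> bool:
        if term > self.current_term:
            # Advancing to a newer term clears the previous vote.
            self.current_term, self.voted_for = term, None
        if term < self.current_term:
            return False
        if self.voted_for in (None, candidate):
            self.voted_for = candidate
            return True
        return False

replica = VoteTracker()
print(replica.request_vote(2, "3abbd983..."))   # True: first vote granted in term 2
print(replica.request_vote(2, "95a5afce..."))   # False: already voted this term
```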
I20250629 01:59:23.848809 22419 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250629 01:59:23.862097 22953 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250629 01:59:23.878986 22686 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
W20250629 01:59:23.914980 22933 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:55458: Illegal state: replica 95a5afce40c943de932b00325d5055c2 is not leader of this config: current role FOLLOWER
I20250629 01:59:24.006103 22973 raft_consensus.cc:1273] T 5207b70370f64149b8cd3efabf9228c2 P 95a5afce40c943de932b00325d5055c2 [term 2 FOLLOWER]: Refusing update from remote peer 3abbd98378944e2ba0b41ce85520a7e9: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 2 index: 2. (index mismatch)
I20250629 01:59:24.009877 23084 consensus_queue.cc:1035] T 5207b70370f64149b8cd3efabf9228c2 P 3abbd98378944e2ba0b41ce85520a7e9 [LEADER]: Connected to new peer: Peer: permanent_uuid: "95a5afce40c943de932b00325d5055c2" member_type: VOTER last_known_addr { host: "127.17.83.69" port: 44555 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
W20250629 01:59:24.013998 22374 consensus_peers.cc:489] T 5207b70370f64149b8cd3efabf9228c2 P 3abbd98378944e2ba0b41ce85520a7e9 -> Peer e6a8572fc9fa4225b11994f02e85c28d (127.17.83.68:39955): Couldn't send request to peer e6a8572fc9fa4225b11994f02e85c28d. Status: Network error: Client connection negotiation failed: client connection to 127.17.83.68:39955: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20250629 01:59:24.079701 23112 mvcc.cc:204] Tried to move back new op lower bound from 7172761042948980736 to 7172761042196455424. Current Snapshot: MvccSnapshot[applied={T|T < 7172761042948980736}]
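The MVCC lower-bound message above carries raw hybrid timestamps. Decoding them as physical microseconds in the high bits plus a 12-bit logical counter in the low bits (an assumption inferred from the values in this log, not taken from Kudu's headers) lands exactly on the wall-clock second of the surrounding log lines:

```python
# Decoding sketch for the hybrid timestamps in the MVCC message above.
from datetime import datetime, timedelta, timezone

LOGICAL_BITS = 12
EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def decode(ts: int):
    physical_us = ts >> LOGICAL_BITS
    logical = ts & ((1 << LOGICAL_BITS) - 1)
    return EPOCH + timedelta(microseconds=physical_us), logical

print(decode(7172761042948980736))
# -> 2025-06-29 01:59:24.001216 UTC with logical component 0, consistent with
#    the timestamps of the log lines around the MVCC message.
```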
I20250629 01:59:25.531122 22953 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250629 01:59:25.535765 22419 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250629 01:59:25.543946 22553 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250629 01:59:25.566320 22686 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
W20250629 01:59:26.377828 22374 consensus_peers.cc:489] T 5207b70370f64149b8cd3efabf9228c2 P 3abbd98378944e2ba0b41ce85520a7e9 -> Peer e6a8572fc9fa4225b11994f02e85c28d (127.17.83.68:39955): Couldn't send request to peer e6a8572fc9fa4225b11994f02e85c28d. Status: Network error: Client connection negotiation failed: client connection to 127.17.83.68:39955: connect: Connection refused (error 111). This is attempt 6: this message will repeat every 5th retry.
I20250629 01:59:27.236310 22553 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250629 01:59:27.242290 22419 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250629 01:59:27.256453 22953 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250629 01:59:27.288924 22686 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
W20250629 01:59:28.892000 22374 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.17.83.68:39955: connect: Connection refused (error 111) [suppressed 11 similar messages]
W20250629 01:59:28.894469 22374 consensus_peers.cc:489] T 5207b70370f64149b8cd3efabf9228c2 P 3abbd98378944e2ba0b41ce85520a7e9 -> Peer e6a8572fc9fa4225b11994f02e85c28d (127.17.83.68:39955): Couldn't send request to peer e6a8572fc9fa4225b11994f02e85c28d. Status: Network error: Client connection negotiation failed: client connection to 127.17.83.68:39955: connect: Connection refused (error 111). This is attempt 11: this message will repeat every 5th retry.
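The "[suppressed 11 similar messages]" and "will repeat every 5th retry" annotations show these warnings are rate-limited rather than missing. A toy version of that kind of every-Nth suppression, illustrative only and not Kudu's actual logging macros:

```python
# Emit the first occurrence of a repeated message, then only every Nth repeat,
# matching the "attempt 1 ... attempt 6 ... attempt 11" cadence in the log.
from collections import Counter

EVERY_N = 5
seen: Counter = Counter()

def log_every_n(msg: str) -> None:
    seen[msg] += 1
    n = seen[msg]
    if (n - 1) % EVERY_N == 0:
        print(f"{msg} (occurrence {n})")

for _ in range(11):
    log_every_n("Couldn't send request to peer e6a8572f...: connection refused")
```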
I20250629 01:59:29.734288 17741 external_mini_cluster.cc:1620] Killing /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu with pid 22355
I20250629 01:59:29.773169 17741 external_mini_cluster.cc:1620] Killing /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu with pid 22488
I20250629 01:59:29.797863 17741 external_mini_cluster.cc:1620] Killing /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu with pid 22622
I20250629 01:59:29.823318 17741 external_mini_cluster.cc:1620] Killing /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu with pid 22888
I20250629 01:59:29.856747 17741 external_mini_cluster.cc:1620] Killing /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu with pid 22263
2025-06-29T01:59:29Z chronyd exiting
[ OK ] EnableKudu1097AndDownTS/MoveTabletParamTest.Test/4 (19188 ms)
[----------] 1 test from EnableKudu1097AndDownTS/MoveTabletParamTest (19189 ms total)
[----------] 1 test from ListTableCliSimpleParamTest
[ RUN ] ListTableCliSimpleParamTest.TestListTables/2
I20250629 01:59:29.917625 17741 test_util.cc:276] Using random seed: 1124123297
I20250629 01:59:29.921545 17741 ts_itest-base.cc:115] Starting cluster with:
I20250629 01:59:29.921696 17741 ts_itest-base.cc:116] --------------
I20250629 01:59:29.921867 17741 ts_itest-base.cc:117] 1 tablet servers
I20250629 01:59:29.922014 17741 ts_itest-base.cc:118] 1 replicas per TS
I20250629 01:59:29.922191 17741 ts_itest-base.cc:119] --------------
2025-06-29T01:59:29Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-06-29T01:59:29Z Disabled control of system clock
I20250629 01:59:29.963572 17741 external_mini_cluster.cc:1351] Running /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
/tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.17.83.126:37933
--webserver_interface=127.17.83.126
--webserver_port=0
--builtin_ntp_servers=127.17.83.84:43845
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.17.83.126:37933 with env {}
W20250629 01:59:30.247282 23205 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:59:30.247852 23205 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250629 01:59:30.248301 23205 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:59:30.277077 23205 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250629 01:59:30.277385 23205 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250629 01:59:30.277640 23205 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250629 01:59:30.277884 23205 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250629 01:59:30.310077 23205 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.17.83.84:43845
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.17.83.126:37933
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.17.83.126:37933
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.17.83.126
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.18.0-SNAPSHOT
revision f0776864a65c19ebdd909053f3e624e8bcc25cee
build type FASTDEBUG
built by None at 29 Jun 2025 01:43:20 UTC on 5fd53c4cbb9d
build id 6821
TSAN enabled
I20250629 01:59:30.311296 23205 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250629 01:59:30.312788 23205 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250629 01:59:30.325755 23211 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:59:30.325798 23212 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:59:30.326860 23214 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:59:31.510896 23213 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1182 milliseconds
I20250629 01:59:31.510999 23205 server_base.cc:1043] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250629 01:59:31.512180 23205 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250629 01:59:31.515003 23205 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250629 01:59:31.516403 23205 hybrid_clock.cc:648] HybridClock initialized: now 1751162371516370 us; error 59 us; skew 500 ppm
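The clock line above reports a 59 us error bound and a 500 ppm maximum skew, so between synchronizations the uncertainty can grow by at most 500 us per elapsed second. A back-of-envelope sketch of that growth, assuming a simple linear model rather than Kudu's actual clock-error tracking:

```python
# Linear upper bound on clock uncertainty between NTP samples (illustrative).
def max_error_us(initial_error_us: float, skew_ppm: float, elapsed_s: float) -> float:
    # 1 ppm of drift is 1 microsecond of extra uncertainty per elapsed second.
    return initial_error_us + skew_ppm * elapsed_s

print(max_error_us(59, 500, 0.1))   # ~109 us after the 100 ms builtin NTP poll interval
```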
I20250629 01:59:31.517207 23205 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250629 01:59:31.522588 23205 webserver.cc:469] Webserver started at http://127.17.83.126:35957/ using document root <none> and password file <none>
I20250629 01:59:31.523461 23205 fs_manager.cc:362] Metadata directory not provided
I20250629 01:59:31.523643 23205 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250629 01:59:31.524083 23205 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250629 01:59:31.528224 23205 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "705cb57640c84f289ac7ed3d7e8e4291"
format_stamp: "Formatted at 2025-06-29 01:59:31 on dist-test-slave-v1mb"
I20250629 01:59:31.529188 23205 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "705cb57640c84f289ac7ed3d7e8e4291"
format_stamp: "Formatted at 2025-06-29 01:59:31 on dist-test-slave-v1mb"
I20250629 01:59:31.535604 23205 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.003s sys 0.005s
I20250629 01:59:31.540688 23221 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250629 01:59:31.541563 23205 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.005s sys 0.000s
I20250629 01:59:31.541870 23205 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
uuid: "705cb57640c84f289ac7ed3d7e8e4291"
format_stamp: "Formatted at 2025-06-29 01:59:31 on dist-test-slave-v1mb"
I20250629 01:59:31.542166 23205 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250629 01:59:31.598393 23205 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250629 01:59:31.599738 23205 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250629 01:59:31.600137 23205 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250629 01:59:31.667379 23205 rpc_server.cc:307] RPC server started. Bound to: 127.17.83.126:37933
I20250629 01:59:31.667436 23272 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.17.83.126:37933 every 8 connection(s)
I20250629 01:59:31.670017 23205 server_base.cc:1180] Dumped server information to /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250629 01:59:31.674793 23273 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250629 01:59:31.677942 17741 external_mini_cluster.cc:1413] Started /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu as pid 23205
I20250629 01:59:31.678336 17741 external_mini_cluster.cc:1427] Reading /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1751162234221453-17741-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250629 01:59:31.695106 23273 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 705cb57640c84f289ac7ed3d7e8e4291: Bootstrap starting.
I20250629 01:59:31.701041 23273 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 705cb57640c84f289ac7ed3d7e8e4291: Neither blocks nor log segments found. Creating new log.
I20250629 01:59:31.702574 23273 log.cc:826] T 00000000000000000000000000000000 P 705cb57640c84f289ac7ed3d7e8e4291: Log is configured to *not* fsync() on all Append() calls
I20250629 01:59:31.706552 23273 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 705cb57640c84f289ac7ed3d7e8e4291: No bootstrap required, opened a new log
I20250629 01:59:31.723543 23273 raft_consensus.cc:357] T 00000000000000000000000000000000 P 705cb57640c84f289ac7ed3d7e8e4291 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "705cb57640c84f289ac7ed3d7e8e4291" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 37933 } }
I20250629 01:59:31.724148 23273 raft_consensus.cc:383] T 00000000000000000000000000000000 P 705cb57640c84f289ac7ed3d7e8e4291 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250629 01:59:31.724402 23273 raft_consensus.cc:738] T 00000000000000000000000000000000 P 705cb57640c84f289ac7ed3d7e8e4291 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 705cb57640c84f289ac7ed3d7e8e4291, State: Initialized, Role: FOLLOWER
I20250629 01:59:31.725099 23273 consensus_queue.cc:260] T 00000000000000000000000000000000 P 705cb57640c84f289ac7ed3d7e8e4291 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "705cb57640c84f289ac7ed3d7e8e4291" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 37933 } }
I20250629 01:59:31.725570 23273 raft_consensus.cc:397] T 00000000000000000000000000000000 P 705cb57640c84f289ac7ed3d7e8e4291 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250629 01:59:31.725831 23273 raft_consensus.cc:491] T 00000000000000000000000000000000 P 705cb57640c84f289ac7ed3d7e8e4291 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250629 01:59:31.726174 23273 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 705cb57640c84f289ac7ed3d7e8e4291 [term 0 FOLLOWER]: Advancing to term 1
I20250629 01:59:31.730294 23273 raft_consensus.cc:513] T 00000000000000000000000000000000 P 705cb57640c84f289ac7ed3d7e8e4291 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "705cb57640c84f289ac7ed3d7e8e4291" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 37933 } }
I20250629 01:59:31.730876 23273 leader_election.cc:304] T 00000000000000000000000000000000 P 705cb57640c84f289ac7ed3d7e8e4291 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 705cb57640c84f289ac7ed3d7e8e4291; no voters:
I20250629 01:59:31.732339 23273 leader_election.cc:290] T 00000000000000000000000000000000 P 705cb57640c84f289ac7ed3d7e8e4291 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250629 01:59:31.732976 23278 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 705cb57640c84f289ac7ed3d7e8e4291 [term 1 FOLLOWER]: Leader election won for term 1
I20250629 01:59:31.734848 23278 raft_consensus.cc:695] T 00000000000000000000000000000000 P 705cb57640c84f289ac7ed3d7e8e4291 [term 1 LEADER]: Becoming Leader. State: Replica: 705cb57640c84f289ac7ed3d7e8e4291, State: Running, Role: LEADER
I20250629 01:59:31.735567 23278 consensus_queue.cc:237] T 00000000000000000000000000000000 P 705cb57640c84f289ac7ed3d7e8e4291 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "705cb57640c84f289ac7ed3d7e8e4291" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 37933 } }
I20250629 01:59:31.736439 23273 sys_catalog.cc:564] T 00000000000000000000000000000000 P 705cb57640c84f289ac7ed3d7e8e4291 [sys.catalog]: configured and running, proceeding with master startup.
I20250629 01:59:31.746650 23280 sys_catalog.cc:455] T 00000000000000000000000000000000 P 705cb57640c84f289ac7ed3d7e8e4291 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 705cb57640c84f289ac7ed3d7e8e4291. Latest consensus state: current_term: 1 leader_uuid: "705cb57640c84f289ac7ed3d7e8e4291" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "705cb57640c84f289ac7ed3d7e8e4291" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 37933 } } }
I20250629 01:59:31.747360 23280 sys_catalog.cc:458] T 00000000000000000000000000000000 P 705cb57640c84f289ac7ed3d7e8e4291 [sys.catalog]: This master's current role is: LEADER
I20250629 01:59:31.746479 23279 sys_catalog.cc:455] T 00000000000000000000000000000000 P 705cb57640c84f289ac7ed3d7e8e4291 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "705cb57640c84f289ac7ed3d7e8e4291" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "705cb57640c84f289ac7ed3d7e8e4291" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 37933 } } }
I20250629 01:59:31.748163 23279 sys_catalog.cc:458] T 00000000000000000000000000000000 P 705cb57640c84f289ac7ed3d7e8e4291 [sys.catalog]: This master's current role is: LEADER
I20250629 01:59:31.752889 23288 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250629 01:59:31.762856 23288 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250629 01:59:31.776106 23288 catalog_manager.cc:1349] Generated new cluster ID: 05672cef36914cbd8627b965981b227b
I20250629 01:59:31.776328 23288 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250629 01:59:31.795156 23288 catalog_manager.cc:1372] Generated new certificate authority record
I20250629 01:59:31.796996 23288 catalog_manager.cc:1506] Loading token signing keys...
I20250629 01:59:31.811686 23288 catalog_manager.cc:5955] T 00000000000000000000000000000000 P 705cb57640c84f289ac7ed3d7e8e4291: Generated new TSK 0
I20250629 01:59:31.812618 23288 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250629 01:59:31.821389 17741 external_mini_cluster.cc:1351] Running /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
/tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.17.83.65:0
--local_ip_for_outbound_sockets=127.17.83.65
--webserver_interface=127.17.83.65
--webserver_port=0
--tserver_master_addrs=127.17.83.126:37933
--builtin_ntp_servers=127.17.83.84:43845
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
W20250629 01:59:32.111297 23297 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:59:32.111773 23297 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250629 01:59:32.112246 23297 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:59:32.141455 23297 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250629 01:59:32.142256 23297 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.17.83.65
I20250629 01:59:32.174201 23297 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.17.83.84:43845
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.17.83.65:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.17.83.65
--webserver_port=0
--tserver_master_addrs=127.17.83.126:37933
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.17.83.65
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision f0776864a65c19ebdd909053f3e624e8bcc25cee
build type FASTDEBUG
built by None at 29 Jun 2025 01:43:20 UTC on 5fd53c4cbb9d
build id 6821
TSAN enabled
I20250629 01:59:32.175460 23297 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250629 01:59:32.176931 23297 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250629 01:59:32.193322 23304 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:59:32.195107 23297 server_base.cc:1048] running on GCE node
W20250629 01:59:32.194654 23306 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:59:32.194445 23303 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:59:33.318006 23297 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250629 01:59:33.320951 23297 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250629 01:59:33.322458 23297 hybrid_clock.cc:648] HybridClock initialized: now 1751162373322411 us; error 49 us; skew 500 ppm
I20250629 01:59:33.323434 23297 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250629 01:59:33.330296 23297 webserver.cc:469] Webserver started at http://127.17.83.65:44155/ using document root <none> and password file <none>
I20250629 01:59:33.331430 23297 fs_manager.cc:362] Metadata directory not provided
I20250629 01:59:33.331677 23297 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250629 01:59:33.332206 23297 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250629 01:59:33.338392 23297 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "ddef629eeb5c4116b24d431e87f1d07a"
format_stamp: "Formatted at 2025-06-29 01:59:33 on dist-test-slave-v1mb"
I20250629 01:59:33.340018 23297 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "ddef629eeb5c4116b24d431e87f1d07a"
format_stamp: "Formatted at 2025-06-29 01:59:33 on dist-test-slave-v1mb"
I20250629 01:59:33.347918 23297 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.006s sys 0.001s
I20250629 01:59:33.353595 23313 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250629 01:59:33.354619 23297 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.004s sys 0.002s
I20250629 01:59:33.354919 23297 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "ddef629eeb5c4116b24d431e87f1d07a"
format_stamp: "Formatted at 2025-06-29 01:59:33 on dist-test-slave-v1mb"
I20250629 01:59:33.355263 23297 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250629 01:59:33.404762 23297 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250629 01:59:33.405968 23297 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250629 01:59:33.406347 23297 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250629 01:59:33.409202 23297 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250629 01:59:33.412796 23297 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250629 01:59:33.412971 23297 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250629 01:59:33.413172 23297 ts_tablet_manager.cc:610] Registered 0 tablets
I20250629 01:59:33.413303 23297 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250629 01:59:33.541677 23297 rpc_server.cc:307] RPC server started. Bound to: 127.17.83.65:34617
I20250629 01:59:33.541754 23425 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.17.83.65:34617 every 8 connection(s)
I20250629 01:59:33.543955 23297 server_base.cc:1180] Dumped server information to /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250629 01:59:33.547638 17741 external_mini_cluster.cc:1413] Started /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu as pid 23297
I20250629 01:59:33.548151 17741 external_mini_cluster.cc:1427] Reading /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1751162234221453-17741-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250629 01:59:33.562638 23426 heartbeater.cc:344] Connected to a master server at 127.17.83.126:37933
I20250629 01:59:33.562983 23426 heartbeater.cc:461] Registering TS with master...
I20250629 01:59:33.563889 23426 heartbeater.cc:507] Master 127.17.83.126:37933 requested a full tablet report, sending...
I20250629 01:59:33.566284 23238 ts_manager.cc:194] Registered new tserver with Master: ddef629eeb5c4116b24d431e87f1d07a (127.17.83.65:34617)
I20250629 01:59:33.567263 17741 external_mini_cluster.cc:934] 1 TS(s) registered with all masters
I20250629 01:59:33.568308 23238 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.17.83.65:46299
I20250629 01:59:33.595734 23238 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:52462:
name: "TestTable"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
owner: "alice"
I20250629 01:59:33.648316 23361 tablet_service.cc:1468] Processing CreateTablet for tablet 53a5209f6faf4e318debec8b496e90b4 (DEFAULT_TABLE table=TestTable [id=fa76feb3a65f4767b524f80b11aba3c6]), partition=RANGE (key) PARTITION UNBOUNDED
I20250629 01:59:33.649751 23361 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 53a5209f6faf4e318debec8b496e90b4. 1 dirs total, 0 dirs full, 0 dirs failed
I20250629 01:59:33.677884 23441 tablet_bootstrap.cc:492] T 53a5209f6faf4e318debec8b496e90b4 P ddef629eeb5c4116b24d431e87f1d07a: Bootstrap starting.
I20250629 01:59:33.683286 23441 tablet_bootstrap.cc:654] T 53a5209f6faf4e318debec8b496e90b4 P ddef629eeb5c4116b24d431e87f1d07a: Neither blocks nor log segments found. Creating new log.
I20250629 01:59:33.684967 23441 log.cc:826] T 53a5209f6faf4e318debec8b496e90b4 P ddef629eeb5c4116b24d431e87f1d07a: Log is configured to *not* fsync() on all Append() calls
I20250629 01:59:33.689719 23441 tablet_bootstrap.cc:492] T 53a5209f6faf4e318debec8b496e90b4 P ddef629eeb5c4116b24d431e87f1d07a: No bootstrap required, opened a new log
I20250629 01:59:33.690116 23441 ts_tablet_manager.cc:1397] T 53a5209f6faf4e318debec8b496e90b4 P ddef629eeb5c4116b24d431e87f1d07a: Time spent bootstrapping tablet: real 0.013s user 0.006s sys 0.004s
I20250629 01:59:33.707150 23441 raft_consensus.cc:357] T 53a5209f6faf4e318debec8b496e90b4 P ddef629eeb5c4116b24d431e87f1d07a [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ddef629eeb5c4116b24d431e87f1d07a" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 34617 } }
I20250629 01:59:33.707719 23441 raft_consensus.cc:383] T 53a5209f6faf4e318debec8b496e90b4 P ddef629eeb5c4116b24d431e87f1d07a [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250629 01:59:33.707949 23441 raft_consensus.cc:738] T 53a5209f6faf4e318debec8b496e90b4 P ddef629eeb5c4116b24d431e87f1d07a [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: ddef629eeb5c4116b24d431e87f1d07a, State: Initialized, Role: FOLLOWER
I20250629 01:59:33.708621 23441 consensus_queue.cc:260] T 53a5209f6faf4e318debec8b496e90b4 P ddef629eeb5c4116b24d431e87f1d07a [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ddef629eeb5c4116b24d431e87f1d07a" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 34617 } }
I20250629 01:59:33.709131 23441 raft_consensus.cc:397] T 53a5209f6faf4e318debec8b496e90b4 P ddef629eeb5c4116b24d431e87f1d07a [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250629 01:59:33.709395 23441 raft_consensus.cc:491] T 53a5209f6faf4e318debec8b496e90b4 P ddef629eeb5c4116b24d431e87f1d07a [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250629 01:59:33.709672 23441 raft_consensus.cc:3058] T 53a5209f6faf4e318debec8b496e90b4 P ddef629eeb5c4116b24d431e87f1d07a [term 0 FOLLOWER]: Advancing to term 1
I20250629 01:59:33.714229 23441 raft_consensus.cc:513] T 53a5209f6faf4e318debec8b496e90b4 P ddef629eeb5c4116b24d431e87f1d07a [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ddef629eeb5c4116b24d431e87f1d07a" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 34617 } }
I20250629 01:59:33.714854 23441 leader_election.cc:304] T 53a5209f6faf4e318debec8b496e90b4 P ddef629eeb5c4116b24d431e87f1d07a [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: ddef629eeb5c4116b24d431e87f1d07a; no voters:
I20250629 01:59:33.716486 23441 leader_election.cc:290] T 53a5209f6faf4e318debec8b496e90b4 P ddef629eeb5c4116b24d431e87f1d07a [CANDIDATE]: Term 1 election: Requested vote from peers
I20250629 01:59:33.716883 23443 raft_consensus.cc:2802] T 53a5209f6faf4e318debec8b496e90b4 P ddef629eeb5c4116b24d431e87f1d07a [term 1 FOLLOWER]: Leader election won for term 1
I20250629 01:59:33.720270 23426 heartbeater.cc:499] Master 127.17.83.126:37933 was elected leader, sending a full tablet report...
I20250629 01:59:33.719739 23443 raft_consensus.cc:695] T 53a5209f6faf4e318debec8b496e90b4 P ddef629eeb5c4116b24d431e87f1d07a [term 1 LEADER]: Becoming Leader. State: Replica: ddef629eeb5c4116b24d431e87f1d07a, State: Running, Role: LEADER
I20250629 01:59:33.721141 23441 ts_tablet_manager.cc:1428] T 53a5209f6faf4e318debec8b496e90b4 P ddef629eeb5c4116b24d431e87f1d07a: Time spent starting tablet: real 0.031s user 0.026s sys 0.005s
I20250629 01:59:33.721101 23443 consensus_queue.cc:237] T 53a5209f6faf4e318debec8b496e90b4 P ddef629eeb5c4116b24d431e87f1d07a [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ddef629eeb5c4116b24d431e87f1d07a" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 34617 } }
I20250629 01:59:33.733100 23238 catalog_manager.cc:5582] T 53a5209f6faf4e318debec8b496e90b4 P ddef629eeb5c4116b24d431e87f1d07a reported cstate change: term changed from 0 to 1, leader changed from <none> to ddef629eeb5c4116b24d431e87f1d07a (127.17.83.65). New cstate: current_term: 1 leader_uuid: "ddef629eeb5c4116b24d431e87f1d07a" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ddef629eeb5c4116b24d431e87f1d07a" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 34617 } health_report { overall_health: HEALTHY } } }
I20250629 01:59:33.756999 17741 external_mini_cluster.cc:934] 1 TS(s) registered with all masters
I20250629 01:59:33.759747 17741 ts_itest-base.cc:246] Waiting for 1 tablets on tserver ddef629eeb5c4116b24d431e87f1d07a to finish bootstrapping
I20250629 01:59:36.391531 17741 external_mini_cluster.cc:1620] Killing /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu with pid 23297
I20250629 01:59:36.414301 17741 external_mini_cluster.cc:1620] Killing /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu with pid 23205
2025-06-29T01:59:36Z chronyd exiting
[ OK ] ListTableCliSimpleParamTest.TestListTables/2 (6553 ms)
[----------] 1 test from ListTableCliSimpleParamTest (6553 ms total)
[----------] 1 test from ListTableCliParamTest
[ RUN ] ListTableCliParamTest.ListTabletWithPartitionInfo/4
I20250629 01:59:36.471094 17741 test_util.cc:276] Using random seed: 1130676765
[ OK ] ListTableCliParamTest.ListTabletWithPartitionInfo/4 (11 ms)
[----------] 1 test from ListTableCliParamTest (11 ms total)
[----------] 1 test from IsSecure/SecureClusterAdminCliParamTest
[ RUN ] IsSecure/SecureClusterAdminCliParamTest.TestRebuildMaster/0
2025-06-29T01:59:36Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-06-29T01:59:36Z Disabled control of system clock
I20250629 01:59:36.519940 17741 external_mini_cluster.cc:1351] Running /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
/tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.17.83.126:39373
--webserver_interface=127.17.83.126
--webserver_port=0
--builtin_ntp_servers=127.17.83.84:40585
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.17.83.126:39373 with env {}
W20250629 01:59:36.794169 23470 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:59:36.794657 23470 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250629 01:59:36.795028 23470 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:59:36.824867 23470 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250629 01:59:36.825169 23470 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250629 01:59:36.825369 23470 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250629 01:59:36.825562 23470 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250629 01:59:36.858418 23470 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.17.83.84:40585
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/master-0/data
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.17.83.126:39373
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.17.83.126:39373
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/master-0/data/info.pb
--webserver_interface=127.17.83.126
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.18.0-SNAPSHOT
revision f0776864a65c19ebdd909053f3e624e8bcc25cee
build type FASTDEBUG
built by None at 29 Jun 2025 01:43:20 UTC on 5fd53c4cbb9d
build id 6821
TSAN enabled
I20250629 01:59:36.859654 23470 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250629 01:59:36.861088 23470 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250629 01:59:36.875671 23477 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:59:36.875721 23479 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:59:36.877918 23470 server_base.cc:1048] running on GCE node
W20250629 01:59:36.876017 23476 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:59:37.985996 23470 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250629 01:59:37.988461 23470 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250629 01:59:37.989791 23470 hybrid_clock.cc:648] HybridClock initialized: now 1751162377989763 us; error 39 us; skew 500 ppm
I20250629 01:59:37.990509 23470 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250629 01:59:37.999804 23470 webserver.cc:469] Webserver started at http://127.17.83.126:37935/ using document root <none> and password file <none>
I20250629 01:59:38.000736 23470 fs_manager.cc:362] Metadata directory not provided
I20250629 01:59:38.000950 23470 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250629 01:59:38.001400 23470 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250629 01:59:38.005437 23470 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/master-0/data/instance:
uuid: "858712214f9a492db3af5fa4d178c004"
format_stamp: "Formatted at 2025-06-29 01:59:37 on dist-test-slave-v1mb"
I20250629 01:59:38.006385 23470 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/master-0/wal/instance:
uuid: "858712214f9a492db3af5fa4d178c004"
format_stamp: "Formatted at 2025-06-29 01:59:37 on dist-test-slave-v1mb"
I20250629 01:59:38.012818 23470 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.005s sys 0.004s
I20250629 01:59:38.018358 23486 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250629 01:59:38.019487 23470 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.001s sys 0.001s
I20250629 01:59:38.019766 23470 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/master-0/data,/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/master-0/wal
uuid: "858712214f9a492db3af5fa4d178c004"
format_stamp: "Formatted at 2025-06-29 01:59:37 on dist-test-slave-v1mb"
I20250629 01:59:38.020054 23470 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/master-0/wal
metadata directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/master-0/wal
1 data directories: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250629 01:59:38.068768 23470 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250629 01:59:38.070119 23470 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250629 01:59:38.070510 23470 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250629 01:59:38.137204 23470 rpc_server.cc:307] RPC server started. Bound to: 127.17.83.126:39373
I20250629 01:59:38.137284 23537 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.17.83.126:39373 every 8 connection(s)
I20250629 01:59:38.139809 23470 server_base.cc:1180] Dumped server information to /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/master-0/data/info.pb
I20250629 01:59:38.144374 23538 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250629 01:59:38.150354 17741 external_mini_cluster.cc:1413] Started /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu as pid 23470
I20250629 01:59:38.150770 17741 external_mini_cluster.cc:1427] Reading /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/master-0/wal/instance
I20250629 01:59:38.164167 23538 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 858712214f9a492db3af5fa4d178c004: Bootstrap starting.
I20250629 01:59:38.170390 23538 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 858712214f9a492db3af5fa4d178c004: Neither blocks nor log segments found. Creating new log.
I20250629 01:59:38.172575 23538 log.cc:826] T 00000000000000000000000000000000 P 858712214f9a492db3af5fa4d178c004: Log is configured to *not* fsync() on all Append() calls
I20250629 01:59:38.177088 23538 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 858712214f9a492db3af5fa4d178c004: No bootstrap required, opened a new log
I20250629 01:59:38.194444 23538 raft_consensus.cc:357] T 00000000000000000000000000000000 P 858712214f9a492db3af5fa4d178c004 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "858712214f9a492db3af5fa4d178c004" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 39373 } }
I20250629 01:59:38.195046 23538 raft_consensus.cc:383] T 00000000000000000000000000000000 P 858712214f9a492db3af5fa4d178c004 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250629 01:59:38.195276 23538 raft_consensus.cc:738] T 00000000000000000000000000000000 P 858712214f9a492db3af5fa4d178c004 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 858712214f9a492db3af5fa4d178c004, State: Initialized, Role: FOLLOWER
I20250629 01:59:38.195883 23538 consensus_queue.cc:260] T 00000000000000000000000000000000 P 858712214f9a492db3af5fa4d178c004 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "858712214f9a492db3af5fa4d178c004" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 39373 } }
I20250629 01:59:38.196377 23538 raft_consensus.cc:397] T 00000000000000000000000000000000 P 858712214f9a492db3af5fa4d178c004 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250629 01:59:38.196669 23538 raft_consensus.cc:491] T 00000000000000000000000000000000 P 858712214f9a492db3af5fa4d178c004 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250629 01:59:38.196974 23538 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 858712214f9a492db3af5fa4d178c004 [term 0 FOLLOWER]: Advancing to term 1
I20250629 01:59:38.201252 23538 raft_consensus.cc:513] T 00000000000000000000000000000000 P 858712214f9a492db3af5fa4d178c004 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "858712214f9a492db3af5fa4d178c004" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 39373 } }
I20250629 01:59:38.201894 23538 leader_election.cc:304] T 00000000000000000000000000000000 P 858712214f9a492db3af5fa4d178c004 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 858712214f9a492db3af5fa4d178c004; no voters:
I20250629 01:59:38.203480 23538 leader_election.cc:290] T 00000000000000000000000000000000 P 858712214f9a492db3af5fa4d178c004 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250629 01:59:38.204162 23543 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 858712214f9a492db3af5fa4d178c004 [term 1 FOLLOWER]: Leader election won for term 1
I20250629 01:59:38.206285 23543 raft_consensus.cc:695] T 00000000000000000000000000000000 P 858712214f9a492db3af5fa4d178c004 [term 1 LEADER]: Becoming Leader. State: Replica: 858712214f9a492db3af5fa4d178c004, State: Running, Role: LEADER
I20250629 01:59:38.206981 23543 consensus_queue.cc:237] T 00000000000000000000000000000000 P 858712214f9a492db3af5fa4d178c004 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "858712214f9a492db3af5fa4d178c004" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 39373 } }
I20250629 01:59:38.207937 23538 sys_catalog.cc:564] T 00000000000000000000000000000000 P 858712214f9a492db3af5fa4d178c004 [sys.catalog]: configured and running, proceeding with master startup.
I20250629 01:59:38.217347 23545 sys_catalog.cc:455] T 00000000000000000000000000000000 P 858712214f9a492db3af5fa4d178c004 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 858712214f9a492db3af5fa4d178c004. Latest consensus state: current_term: 1 leader_uuid: "858712214f9a492db3af5fa4d178c004" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "858712214f9a492db3af5fa4d178c004" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 39373 } } }
I20250629 01:59:38.218079 23545 sys_catalog.cc:458] T 00000000000000000000000000000000 P 858712214f9a492db3af5fa4d178c004 [sys.catalog]: This master's current role is: LEADER
I20250629 01:59:38.218569 23544 sys_catalog.cc:455] T 00000000000000000000000000000000 P 858712214f9a492db3af5fa4d178c004 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "858712214f9a492db3af5fa4d178c004" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "858712214f9a492db3af5fa4d178c004" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 39373 } } }
I20250629 01:59:38.219341 23544 sys_catalog.cc:458] T 00000000000000000000000000000000 P 858712214f9a492db3af5fa4d178c004 [sys.catalog]: This master's current role is: LEADER
I20250629 01:59:38.221176 23552 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250629 01:59:38.233125 23552 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250629 01:59:38.247793 23552 catalog_manager.cc:1349] Generated new cluster ID: 7ef128a96bab4b8d9bbdcacd28cc5396
I20250629 01:59:38.248096 23552 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250629 01:59:38.269946 23552 catalog_manager.cc:1372] Generated new certificate authority record
I20250629 01:59:38.271422 23552 catalog_manager.cc:1506] Loading token signing keys...
I20250629 01:59:38.286854 23552 catalog_manager.cc:5955] T 00000000000000000000000000000000 P 858712214f9a492db3af5fa4d178c004: Generated new TSK 0
I20250629 01:59:38.287649 23552 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250629 01:59:38.311677 17741 external_mini_cluster.cc:1351] Running /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
/tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.17.83.65:0
--local_ip_for_outbound_sockets=127.17.83.65
--webserver_interface=127.17.83.65
--webserver_port=0
--tserver_master_addrs=127.17.83.126:39373
--builtin_ntp_servers=127.17.83.84:40585
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
W20250629 01:59:38.584394 23562 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:59:38.584911 23562 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250629 01:59:38.585430 23562 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:59:38.614040 23562 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250629 01:59:38.614779 23562 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.17.83.65
I20250629 01:59:38.645826 23562 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.17.83.84:40585
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.17.83.65:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-0/data/info.pb
--webserver_interface=127.17.83.65
--webserver_port=0
--tserver_master_addrs=127.17.83.126:39373
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.17.83.65
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision f0776864a65c19ebdd909053f3e624e8bcc25cee
build type FASTDEBUG
built by None at 29 Jun 2025 01:43:20 UTC on 5fd53c4cbb9d
build id 6821
TSAN enabled
I20250629 01:59:38.647042 23562 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250629 01:59:38.648557 23562 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250629 01:59:38.665016 23568 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:59:38.668749 23571 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:59:38.667464 23569 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:59:40.023141 23570 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1355 milliseconds
I20250629 01:59:40.023291 23562 server_base.cc:1043] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250629 01:59:40.024469 23562 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250629 01:59:40.026872 23562 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250629 01:59:40.028353 23562 hybrid_clock.cc:648] HybridClock initialized: now 1751162380028288 us; error 79 us; skew 500 ppm
I20250629 01:59:40.029129 23562 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250629 01:59:40.035934 23562 webserver.cc:469] Webserver started at http://127.17.83.65:45447/ using document root <none> and password file <none>
I20250629 01:59:40.036836 23562 fs_manager.cc:362] Metadata directory not provided
I20250629 01:59:40.037051 23562 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250629 01:59:40.037485 23562 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250629 01:59:40.043284 23562 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-0/data/instance:
uuid: "960815965ed44e50b5dccaca6f132dba"
format_stamp: "Formatted at 2025-06-29 01:59:40 on dist-test-slave-v1mb"
I20250629 01:59:40.044288 23562 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-0/wal/instance:
uuid: "960815965ed44e50b5dccaca6f132dba"
format_stamp: "Formatted at 2025-06-29 01:59:40 on dist-test-slave-v1mb"
I20250629 01:59:40.051179 23562 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.002s sys 0.005s
I20250629 01:59:40.056821 23578 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250629 01:59:40.057787 23562 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.002s
I20250629 01:59:40.058079 23562 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-0/data,/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-0/wal
uuid: "960815965ed44e50b5dccaca6f132dba"
format_stamp: "Formatted at 2025-06-29 01:59:40 on dist-test-slave-v1mb"
I20250629 01:59:40.058367 23562 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-0/wal
metadata directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-0/wal
1 data directories: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250629 01:59:40.107632 23562 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250629 01:59:40.108989 23562 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250629 01:59:40.109436 23562 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250629 01:59:40.112203 23562 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250629 01:59:40.116071 23562 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250629 01:59:40.116240 23562 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250629 01:59:40.116478 23562 ts_tablet_manager.cc:610] Registered 0 tablets
I20250629 01:59:40.116647 23562 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250629 01:59:40.270644 23562 rpc_server.cc:307] RPC server started. Bound to: 127.17.83.65:40853
I20250629 01:59:40.270756 23690 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.17.83.65:40853 every 8 connection(s)
I20250629 01:59:40.273042 23562 server_base.cc:1180] Dumped server information to /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-0/data/info.pb
I20250629 01:59:40.275733 17741 external_mini_cluster.cc:1413] Started /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu as pid 23562
I20250629 01:59:40.276255 17741 external_mini_cluster.cc:1427] Reading /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-0/wal/instance
I20250629 01:59:40.283197 17741 external_mini_cluster.cc:1351] Running /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
/tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-1/wal
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-1/logs
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.17.83.66:0
--local_ip_for_outbound_sockets=127.17.83.66
--webserver_interface=127.17.83.66
--webserver_port=0
--tserver_master_addrs=127.17.83.126:39373
--builtin_ntp_servers=127.17.83.84:40585
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
I20250629 01:59:40.293704 23691 heartbeater.cc:344] Connected to a master server at 127.17.83.126:39373
I20250629 01:59:40.294081 23691 heartbeater.cc:461] Registering TS with master...
I20250629 01:59:40.295014 23691 heartbeater.cc:507] Master 127.17.83.126:39373 requested a full tablet report, sending...
I20250629 01:59:40.297305 23503 ts_manager.cc:194] Registered new tserver with Master: 960815965ed44e50b5dccaca6f132dba (127.17.83.65:40853)
I20250629 01:59:40.299104 23503 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.17.83.65:51143
W20250629 01:59:40.568837 23695 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:59:40.569309 23695 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250629 01:59:40.569861 23695 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:59:40.598750 23695 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250629 01:59:40.599597 23695 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.17.83.66
I20250629 01:59:40.632818 23695 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.17.83.84:40585
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-1/data
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.17.83.66:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-1/data/info.pb
--webserver_interface=127.17.83.66
--webserver_port=0
--tserver_master_addrs=127.17.83.126:39373
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.17.83.66
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision f0776864a65c19ebdd909053f3e624e8bcc25cee
build type FASTDEBUG
built by None at 29 Jun 2025 01:43:20 UTC on 5fd53c4cbb9d
build id 6821
TSAN enabled
I20250629 01:59:40.634085 23695 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250629 01:59:40.635588 23695 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250629 01:59:40.649681 23702 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:59:41.302131 23691 heartbeater.cc:499] Master 127.17.83.126:39373 was elected leader, sending a full tablet report...
W20250629 01:59:40.649874 23701 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:59:42.296692 23695 thread.cc:640] OpenStack (cloud detector) Time spent creating pthread: real 1.650s user 0.634s sys 1.008s
W20250629 01:59:42.297127 23695 thread.cc:606] OpenStack (cloud detector) Time spent starting thread: real 1.650s user 0.635s sys 1.009s
W20250629 01:59:42.050068 23700 debug-util.cc:398] Leaking SignalData structure 0x7b0800000a80 after lost signal to thread 23695
W20250629 01:59:42.298928 23704 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:59:42.302385 23703 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1654 milliseconds
I20250629 01:59:42.302423 23695 server_base.cc:1043] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250629 01:59:42.303907 23695 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250629 01:59:42.305832 23695 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250629 01:59:42.307178 23695 hybrid_clock.cc:648] HybridClock initialized: now 1751162382307105 us; error 69 us; skew 500 ppm
I20250629 01:59:42.308158 23695 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250629 01:59:42.314131 23695 webserver.cc:469] Webserver started at http://127.17.83.66:38495/ using document root <none> and password file <none>
I20250629 01:59:42.314968 23695 fs_manager.cc:362] Metadata directory not provided
I20250629 01:59:42.315140 23695 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250629 01:59:42.315563 23695 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250629 01:59:42.319686 23695 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-1/data/instance:
uuid: "4ec069a1079d413a9c88c502917576f9"
format_stamp: "Formatted at 2025-06-29 01:59:42 on dist-test-slave-v1mb"
I20250629 01:59:42.320648 23695 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-1/wal/instance:
uuid: "4ec069a1079d413a9c88c502917576f9"
format_stamp: "Formatted at 2025-06-29 01:59:42 on dist-test-slave-v1mb"
I20250629 01:59:42.326982 23695 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.005s sys 0.001s
I20250629 01:59:42.332016 23711 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250629 01:59:42.332824 23695 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.003s sys 0.001s
I20250629 01:59:42.333060 23695 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-1/data,/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-1/wal
uuid: "4ec069a1079d413a9c88c502917576f9"
format_stamp: "Formatted at 2025-06-29 01:59:42 on dist-test-slave-v1mb"
I20250629 01:59:42.333313 23695 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-1/wal
metadata directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-1/wal
1 data directories: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250629 01:59:42.378060 23695 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250629 01:59:42.379405 23695 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250629 01:59:42.379788 23695 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250629 01:59:42.381968 23695 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250629 01:59:42.385517 23695 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250629 01:59:42.385723 23695 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250629 01:59:42.385905 23695 ts_tablet_manager.cc:610] Registered 0 tablets
I20250629 01:59:42.386031 23695 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250629 01:59:42.513361 23695 rpc_server.cc:307] RPC server started. Bound to: 127.17.83.66:46293
I20250629 01:59:42.513465 23823 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.17.83.66:46293 every 8 connection(s)
I20250629 01:59:42.515750 23695 server_base.cc:1180] Dumped server information to /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-1/data/info.pb
I20250629 01:59:42.520186 17741 external_mini_cluster.cc:1413] Started /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu as pid 23695
I20250629 01:59:42.520694 17741 external_mini_cluster.cc:1427] Reading /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-1/wal/instance
I20250629 01:59:42.527228 17741 external_mini_cluster.cc:1351] Running /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
/tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-2/wal
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-2/logs
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.17.83.67:0
--local_ip_for_outbound_sockets=127.17.83.67
--webserver_interface=127.17.83.67
--webserver_port=0
--tserver_master_addrs=127.17.83.126:39373
--builtin_ntp_servers=127.17.83.84:40585
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
I20250629 01:59:42.536027 23824 heartbeater.cc:344] Connected to a master server at 127.17.83.126:39373
I20250629 01:59:42.536404 23824 heartbeater.cc:461] Registering TS with master...
I20250629 01:59:42.537348 23824 heartbeater.cc:507] Master 127.17.83.126:39373 requested a full tablet report, sending...
I20250629 01:59:42.539330 23503 ts_manager.cc:194] Registered new tserver with Master: 4ec069a1079d413a9c88c502917576f9 (127.17.83.66:46293)
I20250629 01:59:42.540390 23503 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.17.83.66:34967
W20250629 01:59:42.826125 23828 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:59:42.826579 23828 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250629 01:59:42.827046 23828 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:59:42.855667 23828 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250629 01:59:42.856319 23828 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.17.83.67
I20250629 01:59:42.886463 23828 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.17.83.84:40585
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-2/data
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.17.83.67:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-2/data/info.pb
--webserver_interface=127.17.83.67
--webserver_port=0
--tserver_master_addrs=127.17.83.126:39373
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.17.83.67
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision f0776864a65c19ebdd909053f3e624e8bcc25cee
build type FASTDEBUG
built by None at 29 Jun 2025 01:43:20 UTC on 5fd53c4cbb9d
build id 6821
TSAN enabled
I20250629 01:59:42.887544 23828 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250629 01:59:42.888967 23828 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250629 01:59:42.904314 23835 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:59:43.542893 23824 heartbeater.cc:499] Master 127.17.83.126:39373 was elected leader, sending a full tablet report...
W20250629 01:59:42.904458 23834 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:59:42.904318 23837 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:59:44.030941 23836 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1124 milliseconds
I20250629 01:59:44.031066 23828 server_base.cc:1043] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250629 01:59:44.032088 23828 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250629 01:59:44.034379 23828 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250629 01:59:44.035799 23828 hybrid_clock.cc:648] HybridClock initialized: now 1751162384035751 us; error 59 us; skew 500 ppm
I20250629 01:59:44.036523 23828 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250629 01:59:44.046972 23828 webserver.cc:469] Webserver started at http://127.17.83.67:36413/ using document root <none> and password file <none>
I20250629 01:59:44.047823 23828 fs_manager.cc:362] Metadata directory not provided
I20250629 01:59:44.048003 23828 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250629 01:59:44.048398 23828 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250629 01:59:44.052690 23828 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-2/data/instance:
uuid: "362dbed412b844bd8694d82670dd3b72"
format_stamp: "Formatted at 2025-06-29 01:59:44 on dist-test-slave-v1mb"
I20250629 01:59:44.053632 23828 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-2/wal/instance:
uuid: "362dbed412b844bd8694d82670dd3b72"
format_stamp: "Formatted at 2025-06-29 01:59:44 on dist-test-slave-v1mb"
I20250629 01:59:44.060040 23828 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.007s sys 0.001s
I20250629 01:59:44.065191 23844 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250629 01:59:44.066076 23828 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.004s sys 0.001s
I20250629 01:59:44.066366 23828 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-2/data,/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-2/wal
uuid: "362dbed412b844bd8694d82670dd3b72"
format_stamp: "Formatted at 2025-06-29 01:59:44 on dist-test-slave-v1mb"
I20250629 01:59:44.066641 23828 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-2/wal
metadata directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-2/wal
1 data directories: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250629 01:59:44.133337 23828 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250629 01:59:44.134590 23828 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250629 01:59:44.134965 23828 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250629 01:59:44.137480 23828 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250629 01:59:44.141537 23828 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250629 01:59:44.141737 23828 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250629 01:59:44.141992 23828 ts_tablet_manager.cc:610] Registered 0 tablets
I20250629 01:59:44.142143 23828 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250629 01:59:44.272637 23828 rpc_server.cc:307] RPC server started. Bound to: 127.17.83.67:46531
I20250629 01:59:44.272747 23956 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.17.83.67:46531 every 8 connection(s)
I20250629 01:59:44.275041 23828 server_base.cc:1180] Dumped server information to /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-2/data/info.pb
I20250629 01:59:44.283699 17741 external_mini_cluster.cc:1413] Started /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu as pid 23828
I20250629 01:59:44.284041 17741 external_mini_cluster.cc:1427] Reading /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-2/wal/instance
I20250629 01:59:44.293609 23957 heartbeater.cc:344] Connected to a master server at 127.17.83.126:39373
I20250629 01:59:44.293931 23957 heartbeater.cc:461] Registering TS with master...
I20250629 01:59:44.294749 23957 heartbeater.cc:507] Master 127.17.83.126:39373 requested a full tablet report, sending...
I20250629 01:59:44.296430 23503 ts_manager.cc:194] Registered new tserver with Master: 362dbed412b844bd8694d82670dd3b72 (127.17.83.67:46531)
I20250629 01:59:44.297559 23503 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.17.83.67:56523
I20250629 01:59:44.302508 17741 external_mini_cluster.cc:934] 3 TS(s) registered with all masters
I20250629 01:59:44.326023 17741 test_util.cc:276] Using random seed: 1138531698
I20250629 01:59:44.362185 23503 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:38460:
name: "pre_rebuild"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
W20250629 01:59:44.364435 23503 catalog_manager.cc:6944] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table pre_rebuild in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250629 01:59:44.407189 23759 tablet_service.cc:1468] Processing CreateTablet for tablet 84a92da9c4e347c896253e1a7e77e4ff (DEFAULT_TABLE table=pre_rebuild [id=d2dcaaa008fc457987fa528a12f04bb0]), partition=RANGE (key) PARTITION UNBOUNDED
I20250629 01:59:44.409018 23759 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 84a92da9c4e347c896253e1a7e77e4ff. 1 dirs total, 0 dirs full, 0 dirs failed
I20250629 01:59:44.409379 23892 tablet_service.cc:1468] Processing CreateTablet for tablet 84a92da9c4e347c896253e1a7e77e4ff (DEFAULT_TABLE table=pre_rebuild [id=d2dcaaa008fc457987fa528a12f04bb0]), partition=RANGE (key) PARTITION UNBOUNDED
I20250629 01:59:44.411339 23892 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 84a92da9c4e347c896253e1a7e77e4ff. 1 dirs total, 0 dirs full, 0 dirs failed
I20250629 01:59:44.411934 23626 tablet_service.cc:1468] Processing CreateTablet for tablet 84a92da9c4e347c896253e1a7e77e4ff (DEFAULT_TABLE table=pre_rebuild [id=d2dcaaa008fc457987fa528a12f04bb0]), partition=RANGE (key) PARTITION UNBOUNDED
I20250629 01:59:44.413539 23626 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 84a92da9c4e347c896253e1a7e77e4ff. 1 dirs total, 0 dirs full, 0 dirs failed
I20250629 01:59:44.428787 23981 tablet_bootstrap.cc:492] T 84a92da9c4e347c896253e1a7e77e4ff P 4ec069a1079d413a9c88c502917576f9: Bootstrap starting.
I20250629 01:59:44.434864 23981 tablet_bootstrap.cc:654] T 84a92da9c4e347c896253e1a7e77e4ff P 4ec069a1079d413a9c88c502917576f9: Neither blocks nor log segments found. Creating new log.
I20250629 01:59:44.437208 23981 log.cc:826] T 84a92da9c4e347c896253e1a7e77e4ff P 4ec069a1079d413a9c88c502917576f9: Log is configured to *not* fsync() on all Append() calls
I20250629 01:59:44.437356 23982 tablet_bootstrap.cc:492] T 84a92da9c4e347c896253e1a7e77e4ff P 362dbed412b844bd8694d82670dd3b72: Bootstrap starting.
I20250629 01:59:44.439558 23983 tablet_bootstrap.cc:492] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba: Bootstrap starting.
I20250629 01:59:44.444121 23981 tablet_bootstrap.cc:492] T 84a92da9c4e347c896253e1a7e77e4ff P 4ec069a1079d413a9c88c502917576f9: No bootstrap required, opened a new log
I20250629 01:59:44.444633 23981 ts_tablet_manager.cc:1397] T 84a92da9c4e347c896253e1a7e77e4ff P 4ec069a1079d413a9c88c502917576f9: Time spent bootstrapping tablet: real 0.016s user 0.011s sys 0.002s
I20250629 01:59:44.445077 23982 tablet_bootstrap.cc:654] T 84a92da9c4e347c896253e1a7e77e4ff P 362dbed412b844bd8694d82670dd3b72: Neither blocks nor log segments found. Creating new log.
I20250629 01:59:44.445479 23983 tablet_bootstrap.cc:654] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba: Neither blocks nor log segments found. Creating new log.
I20250629 01:59:44.447139 23983 log.cc:826] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba: Log is configured to *not* fsync() on all Append() calls
I20250629 01:59:44.447422 23982 log.cc:826] T 84a92da9c4e347c896253e1a7e77e4ff P 362dbed412b844bd8694d82670dd3b72: Log is configured to *not* fsync() on all Append() calls
I20250629 01:59:44.452032 23983 tablet_bootstrap.cc:492] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba: No bootstrap required, opened a new log
I20250629 01:59:44.452404 23982 tablet_bootstrap.cc:492] T 84a92da9c4e347c896253e1a7e77e4ff P 362dbed412b844bd8694d82670dd3b72: No bootstrap required, opened a new log
I20250629 01:59:44.452500 23983 ts_tablet_manager.cc:1397] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba: Time spent bootstrapping tablet: real 0.013s user 0.008s sys 0.004s
I20250629 01:59:44.452855 23982 ts_tablet_manager.cc:1397] T 84a92da9c4e347c896253e1a7e77e4ff P 362dbed412b844bd8694d82670dd3b72: Time spent bootstrapping tablet: real 0.016s user 0.001s sys 0.013s
I20250629 01:59:44.469715 23983 raft_consensus.cc:357] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "960815965ed44e50b5dccaca6f132dba" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40853 } } peers { permanent_uuid: "4ec069a1079d413a9c88c502917576f9" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46293 } } peers { permanent_uuid: "362dbed412b844bd8694d82670dd3b72" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 46531 } }
I20250629 01:59:44.470381 23983 raft_consensus.cc:383] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250629 01:59:44.470610 23983 raft_consensus.cc:738] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 960815965ed44e50b5dccaca6f132dba, State: Initialized, Role: FOLLOWER
I20250629 01:59:44.471359 23983 consensus_queue.cc:260] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "960815965ed44e50b5dccaca6f132dba" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40853 } } peers { permanent_uuid: "4ec069a1079d413a9c88c502917576f9" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46293 } } peers { permanent_uuid: "362dbed412b844bd8694d82670dd3b72" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 46531 } }
I20250629 01:59:44.471508 23981 raft_consensus.cc:357] T 84a92da9c4e347c896253e1a7e77e4ff P 4ec069a1079d413a9c88c502917576f9 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "960815965ed44e50b5dccaca6f132dba" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40853 } } peers { permanent_uuid: "4ec069a1079d413a9c88c502917576f9" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46293 } } peers { permanent_uuid: "362dbed412b844bd8694d82670dd3b72" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 46531 } }
I20250629 01:59:44.472309 23981 raft_consensus.cc:383] T 84a92da9c4e347c896253e1a7e77e4ff P 4ec069a1079d413a9c88c502917576f9 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250629 01:59:44.472627 23981 raft_consensus.cc:738] T 84a92da9c4e347c896253e1a7e77e4ff P 4ec069a1079d413a9c88c502917576f9 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 4ec069a1079d413a9c88c502917576f9, State: Initialized, Role: FOLLOWER
I20250629 01:59:44.473415 23981 consensus_queue.cc:260] T 84a92da9c4e347c896253e1a7e77e4ff P 4ec069a1079d413a9c88c502917576f9 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "960815965ed44e50b5dccaca6f132dba" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40853 } } peers { permanent_uuid: "4ec069a1079d413a9c88c502917576f9" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46293 } } peers { permanent_uuid: "362dbed412b844bd8694d82670dd3b72" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 46531 } }
I20250629 01:59:44.478482 23983 ts_tablet_manager.cc:1428] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba: Time spent starting tablet: real 0.026s user 0.017s sys 0.008s
I20250629 01:59:44.479439 23981 ts_tablet_manager.cc:1428] T 84a92da9c4e347c896253e1a7e77e4ff P 4ec069a1079d413a9c88c502917576f9: Time spent starting tablet: real 0.035s user 0.021s sys 0.013s
I20250629 01:59:44.479799 23982 raft_consensus.cc:357] T 84a92da9c4e347c896253e1a7e77e4ff P 362dbed412b844bd8694d82670dd3b72 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "960815965ed44e50b5dccaca6f132dba" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40853 } } peers { permanent_uuid: "4ec069a1079d413a9c88c502917576f9" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46293 } } peers { permanent_uuid: "362dbed412b844bd8694d82670dd3b72" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 46531 } }
I20250629 01:59:44.480669 23982 raft_consensus.cc:383] T 84a92da9c4e347c896253e1a7e77e4ff P 362dbed412b844bd8694d82670dd3b72 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250629 01:59:44.480962 23982 raft_consensus.cc:738] T 84a92da9c4e347c896253e1a7e77e4ff P 362dbed412b844bd8694d82670dd3b72 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 362dbed412b844bd8694d82670dd3b72, State: Initialized, Role: FOLLOWER
I20250629 01:59:44.481784 23982 consensus_queue.cc:260] T 84a92da9c4e347c896253e1a7e77e4ff P 362dbed412b844bd8694d82670dd3b72 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "960815965ed44e50b5dccaca6f132dba" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40853 } } peers { permanent_uuid: "4ec069a1079d413a9c88c502917576f9" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46293 } } peers { permanent_uuid: "362dbed412b844bd8694d82670dd3b72" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 46531 } }
I20250629 01:59:44.484170 23957 heartbeater.cc:499] Master 127.17.83.126:39373 was elected leader, sending a full tablet report...
I20250629 01:59:44.485249 23982 ts_tablet_manager.cc:1428] T 84a92da9c4e347c896253e1a7e77e4ff P 362dbed412b844bd8694d82670dd3b72: Time spent starting tablet: real 0.032s user 0.029s sys 0.002s
W20250629 01:59:44.521723 23825 tablet.cc:2378] T 84a92da9c4e347c896253e1a7e77e4ff P 4ec069a1079d413a9c88c502917576f9: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250629 01:59:44.530627 23958 tablet.cc:2378] T 84a92da9c4e347c896253e1a7e77e4ff P 362dbed412b844bd8694d82670dd3b72: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250629 01:59:44.531673 23692 tablet.cc:2378] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250629 01:59:44.688627 23989 raft_consensus.cc:491] T 84a92da9c4e347c896253e1a7e77e4ff P 362dbed412b844bd8694d82670dd3b72 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250629 01:59:44.689113 23989 raft_consensus.cc:513] T 84a92da9c4e347c896253e1a7e77e4ff P 362dbed412b844bd8694d82670dd3b72 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "960815965ed44e50b5dccaca6f132dba" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40853 } } peers { permanent_uuid: "4ec069a1079d413a9c88c502917576f9" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46293 } } peers { permanent_uuid: "362dbed412b844bd8694d82670dd3b72" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 46531 } }
I20250629 01:59:44.691318 23989 leader_election.cc:290] T 84a92da9c4e347c896253e1a7e77e4ff P 362dbed412b844bd8694d82670dd3b72 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 960815965ed44e50b5dccaca6f132dba (127.17.83.65:40853), 4ec069a1079d413a9c88c502917576f9 (127.17.83.66:46293)
I20250629 01:59:44.701273 23646 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "84a92da9c4e347c896253e1a7e77e4ff" candidate_uuid: "362dbed412b844bd8694d82670dd3b72" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "960815965ed44e50b5dccaca6f132dba" is_pre_election: true
I20250629 01:59:44.701884 23646 raft_consensus.cc:2466] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 362dbed412b844bd8694d82670dd3b72 in term 0.
I20250629 01:59:44.702442 23779 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "84a92da9c4e347c896253e1a7e77e4ff" candidate_uuid: "362dbed412b844bd8694d82670dd3b72" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "4ec069a1079d413a9c88c502917576f9" is_pre_election: true
I20250629 01:59:44.702950 23848 leader_election.cc:304] T 84a92da9c4e347c896253e1a7e77e4ff P 362dbed412b844bd8694d82670dd3b72 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 362dbed412b844bd8694d82670dd3b72, 960815965ed44e50b5dccaca6f132dba; no voters:
I20250629 01:59:44.703068 23779 raft_consensus.cc:2466] T 84a92da9c4e347c896253e1a7e77e4ff P 4ec069a1079d413a9c88c502917576f9 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 362dbed412b844bd8694d82670dd3b72 in term 0.
I20250629 01:59:44.703629 23989 raft_consensus.cc:2802] T 84a92da9c4e347c896253e1a7e77e4ff P 362dbed412b844bd8694d82670dd3b72 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250629 01:59:44.703945 23989 raft_consensus.cc:491] T 84a92da9c4e347c896253e1a7e77e4ff P 362dbed412b844bd8694d82670dd3b72 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250629 01:59:44.704267 23989 raft_consensus.cc:3058] T 84a92da9c4e347c896253e1a7e77e4ff P 362dbed412b844bd8694d82670dd3b72 [term 0 FOLLOWER]: Advancing to term 1
I20250629 01:59:44.708266 23989 raft_consensus.cc:513] T 84a92da9c4e347c896253e1a7e77e4ff P 362dbed412b844bd8694d82670dd3b72 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "960815965ed44e50b5dccaca6f132dba" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40853 } } peers { permanent_uuid: "4ec069a1079d413a9c88c502917576f9" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46293 } } peers { permanent_uuid: "362dbed412b844bd8694d82670dd3b72" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 46531 } }
I20250629 01:59:44.709545 23989 leader_election.cc:290] T 84a92da9c4e347c896253e1a7e77e4ff P 362dbed412b844bd8694d82670dd3b72 [CANDIDATE]: Term 1 election: Requested vote from peers 960815965ed44e50b5dccaca6f132dba (127.17.83.65:40853), 4ec069a1079d413a9c88c502917576f9 (127.17.83.66:46293)
I20250629 01:59:44.710247 23646 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "84a92da9c4e347c896253e1a7e77e4ff" candidate_uuid: "362dbed412b844bd8694d82670dd3b72" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "960815965ed44e50b5dccaca6f132dba"
I20250629 01:59:44.710422 23779 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "84a92da9c4e347c896253e1a7e77e4ff" candidate_uuid: "362dbed412b844bd8694d82670dd3b72" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "4ec069a1079d413a9c88c502917576f9"
I20250629 01:59:44.710606 23646 raft_consensus.cc:3058] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba [term 0 FOLLOWER]: Advancing to term 1
I20250629 01:59:44.710865 23779 raft_consensus.cc:3058] T 84a92da9c4e347c896253e1a7e77e4ff P 4ec069a1079d413a9c88c502917576f9 [term 0 FOLLOWER]: Advancing to term 1
I20250629 01:59:44.714390 23646 raft_consensus.cc:2466] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 362dbed412b844bd8694d82670dd3b72 in term 1.
I20250629 01:59:44.714673 23779 raft_consensus.cc:2466] T 84a92da9c4e347c896253e1a7e77e4ff P 4ec069a1079d413a9c88c502917576f9 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 362dbed412b844bd8694d82670dd3b72 in term 1.
I20250629 01:59:44.715080 23848 leader_election.cc:304] T 84a92da9c4e347c896253e1a7e77e4ff P 362dbed412b844bd8694d82670dd3b72 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 362dbed412b844bd8694d82670dd3b72, 960815965ed44e50b5dccaca6f132dba; no voters:
I20250629 01:59:44.715721 23989 raft_consensus.cc:2802] T 84a92da9c4e347c896253e1a7e77e4ff P 362dbed412b844bd8694d82670dd3b72 [term 1 FOLLOWER]: Leader election won for term 1
I20250629 01:59:44.717085 23989 raft_consensus.cc:695] T 84a92da9c4e347c896253e1a7e77e4ff P 362dbed412b844bd8694d82670dd3b72 [term 1 LEADER]: Becoming Leader. State: Replica: 362dbed412b844bd8694d82670dd3b72, State: Running, Role: LEADER
I20250629 01:59:44.717762 23989 consensus_queue.cc:237] T 84a92da9c4e347c896253e1a7e77e4ff P 362dbed412b844bd8694d82670dd3b72 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "960815965ed44e50b5dccaca6f132dba" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40853 } } peers { permanent_uuid: "4ec069a1079d413a9c88c502917576f9" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46293 } } peers { permanent_uuid: "362dbed412b844bd8694d82670dd3b72" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 46531 } }
I20250629 01:59:44.728252 23502 catalog_manager.cc:5582] T 84a92da9c4e347c896253e1a7e77e4ff P 362dbed412b844bd8694d82670dd3b72 reported cstate change: term changed from 0 to 1, leader changed from <none> to 362dbed412b844bd8694d82670dd3b72 (127.17.83.67). New cstate: current_term: 1 leader_uuid: "362dbed412b844bd8694d82670dd3b72" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "960815965ed44e50b5dccaca6f132dba" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40853 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "4ec069a1079d413a9c88c502917576f9" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46293 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "362dbed412b844bd8694d82670dd3b72" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 46531 } health_report { overall_health: HEALTHY } } }
I20250629 01:59:44.905848 23779 raft_consensus.cc:1273] T 84a92da9c4e347c896253e1a7e77e4ff P 4ec069a1079d413a9c88c502917576f9 [term 1 FOLLOWER]: Refusing update from remote peer 362dbed412b844bd8694d82670dd3b72: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250629 01:59:44.906024 23646 raft_consensus.cc:1273] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba [term 1 FOLLOWER]: Refusing update from remote peer 362dbed412b844bd8694d82670dd3b72: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250629 01:59:44.907701 23992 consensus_queue.cc:1035] T 84a92da9c4e347c896253e1a7e77e4ff P 362dbed412b844bd8694d82670dd3b72 [LEADER]: Connected to new peer: Peer: permanent_uuid: "960815965ed44e50b5dccaca6f132dba" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40853 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.001s
I20250629 01:59:44.908355 23989 consensus_queue.cc:1035] T 84a92da9c4e347c896253e1a7e77e4ff P 362dbed412b844bd8694d82670dd3b72 [LEADER]: Connected to new peer: Peer: permanent_uuid: "4ec069a1079d413a9c88c502917576f9" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46293 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250629 01:59:44.938041 24000 mvcc.cc:204] Tried to move back new op lower bound from 7172761128559845376 to 7172761127810600960. Current Snapshot: MvccSnapshot[applied={T|T < 7172761128559845376}]
I20250629 01:59:44.936738 24001 mvcc.cc:204] Tried to move back new op lower bound from 7172761128559845376 to 7172761127810600960. Current Snapshot: MvccSnapshot[applied={T|T < 7172761128559845376}]
I20250629 01:59:44.948999 24002 mvcc.cc:204] Tried to move back new op lower bound from 7172761128559845376 to 7172761127810600960. Current Snapshot: MvccSnapshot[applied={T|T < 7172761128559845376}]
I20250629 01:59:49.634267 17741 external_mini_cluster.cc:1620] Killing /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu with pid 23470
W20250629 01:59:49.964109 24029 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:59:49.964675 24029 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:59:49.994114 24029 flags.cc:425] Enabled experimental flag: --enable_multi_tenancy=false
W20250629 01:59:50.018821 23824 heartbeater.cc:646] Failed to heartbeat to 127.17.83.126:39373 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.17.83.126:39373: connect: Connection refused (error 111)
W20250629 01:59:50.020071 23957 heartbeater.cc:646] Failed to heartbeat to 127.17.83.126:39373 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.17.83.126:39373: connect: Connection refused (error 111)
W20250629 01:59:50.029119 23691 heartbeater.cc:646] Failed to heartbeat to 127.17.83.126:39373 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.17.83.126:39373: connect: Connection refused (error 111)
W20250629 01:59:51.451936 24037 debug-util.cc:398] Leaking SignalData structure 0x7b0800036040 after lost signal to thread 24029
W20250629 01:59:51.452514 24037 kernel_stack_watchdog.cc:198] Thread 24029 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:641 for 401ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250629 01:59:51.454234 24029 thread.cc:640] rpc reactor (reactor) Time spent creating pthread: real 1.406s user 0.479s sys 0.888s
W20250629 01:59:51.549191 24029 thread.cc:606] rpc reactor (reactor) Time spent starting thread: real 1.502s user 0.481s sys 0.898s
I20250629 01:59:51.620922 24029 minidump.cc:252] Setting minidump size limit to 20M
I20250629 01:59:51.622640 24029 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250629 01:59:51.623616 24029 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250629 01:59:51.632907 24062 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:59:51.634006 24063 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:59:51.636312 24065 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:59:51.636785 24029 server_base.cc:1048] running on GCE node
I20250629 01:59:51.637813 24029 hybrid_clock.cc:584] initializing the hybrid clock with 'system' time source
I20250629 01:59:51.638255 24029 hybrid_clock.cc:648] HybridClock initialized: now 1751162391638235 us; error 318016 us; skew 500 ppm
I20250629 01:59:51.638859 24029 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250629 01:59:51.642881 24029 webserver.cc:469] Webserver started at http://0.0.0.0:36365/ using document root <none> and password file <none>
I20250629 01:59:51.643663 24029 fs_manager.cc:362] Metadata directory not provided
I20250629 01:59:51.643874 24029 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250629 01:59:51.644269 24029 server_base.cc:896] This appears to be a new deployment of Kudu; creating new FS layout
I20250629 01:59:51.648387 24029 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/master-0/data/instance:
uuid: "55ea2bf53ce24d32939774f2c339f476"
format_stamp: "Formatted at 2025-06-29 01:59:51 on dist-test-slave-v1mb"
I20250629 01:59:51.649410 24029 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/master-0/wal/instance:
uuid: "55ea2bf53ce24d32939774f2c339f476"
format_stamp: "Formatted at 2025-06-29 01:59:51 on dist-test-slave-v1mb"
I20250629 01:59:51.654889 24029 fs_manager.cc:696] Time spent creating directory manager: real 0.005s user 0.006s sys 0.000s
I20250629 01:59:51.660115 24070 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250629 01:59:51.661223 24029 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.003s sys 0.001s
I20250629 01:59:51.661608 24029 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/master-0/data,/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/master-0/wal
uuid: "55ea2bf53ce24d32939774f2c339f476"
format_stamp: "Formatted at 2025-06-29 01:59:51 on dist-test-slave-v1mb"
I20250629 01:59:51.662019 24029 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/master-0/wal
metadata directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/master-0/wal
1 data directories: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250629 01:59:51.818621 24029 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250629 01:59:51.820008 24029 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250629 01:59:51.820391 24029 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250629 01:59:51.824841 24029 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250629 01:59:51.837567 24029 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 55ea2bf53ce24d32939774f2c339f476: Bootstrap starting.
I20250629 01:59:51.842257 24029 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 55ea2bf53ce24d32939774f2c339f476: Neither blocks nor log segments found. Creating new log.
I20250629 01:59:51.843775 24029 log.cc:826] T 00000000000000000000000000000000 P 55ea2bf53ce24d32939774f2c339f476: Log is configured to *not* fsync() on all Append() calls
I20250629 01:59:51.847230 24029 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 55ea2bf53ce24d32939774f2c339f476: No bootstrap required, opened a new log
I20250629 01:59:51.861507 24029 raft_consensus.cc:357] T 00000000000000000000000000000000 P 55ea2bf53ce24d32939774f2c339f476 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "55ea2bf53ce24d32939774f2c339f476" member_type: VOTER }
I20250629 01:59:51.861930 24029 raft_consensus.cc:383] T 00000000000000000000000000000000 P 55ea2bf53ce24d32939774f2c339f476 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250629 01:59:51.862162 24029 raft_consensus.cc:738] T 00000000000000000000000000000000 P 55ea2bf53ce24d32939774f2c339f476 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 55ea2bf53ce24d32939774f2c339f476, State: Initialized, Role: FOLLOWER
I20250629 01:59:51.862762 24029 consensus_queue.cc:260] T 00000000000000000000000000000000 P 55ea2bf53ce24d32939774f2c339f476 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "55ea2bf53ce24d32939774f2c339f476" member_type: VOTER }
I20250629 01:59:51.863198 24029 raft_consensus.cc:397] T 00000000000000000000000000000000 P 55ea2bf53ce24d32939774f2c339f476 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250629 01:59:51.863477 24029 raft_consensus.cc:491] T 00000000000000000000000000000000 P 55ea2bf53ce24d32939774f2c339f476 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250629 01:59:51.863771 24029 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 55ea2bf53ce24d32939774f2c339f476 [term 0 FOLLOWER]: Advancing to term 1
I20250629 01:59:51.867167 24029 raft_consensus.cc:513] T 00000000000000000000000000000000 P 55ea2bf53ce24d32939774f2c339f476 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "55ea2bf53ce24d32939774f2c339f476" member_type: VOTER }
I20250629 01:59:51.867820 24029 leader_election.cc:304] T 00000000000000000000000000000000 P 55ea2bf53ce24d32939774f2c339f476 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 55ea2bf53ce24d32939774f2c339f476; no voters:
I20250629 01:59:51.869339 24029 leader_election.cc:290] T 00000000000000000000000000000000 P 55ea2bf53ce24d32939774f2c339f476 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250629 01:59:51.869563 24077 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 55ea2bf53ce24d32939774f2c339f476 [term 1 FOLLOWER]: Leader election won for term 1
I20250629 01:59:51.871655 24077 raft_consensus.cc:695] T 00000000000000000000000000000000 P 55ea2bf53ce24d32939774f2c339f476 [term 1 LEADER]: Becoming Leader. State: Replica: 55ea2bf53ce24d32939774f2c339f476, State: Running, Role: LEADER
I20250629 01:59:51.872443 24077 consensus_queue.cc:237] T 00000000000000000000000000000000 P 55ea2bf53ce24d32939774f2c339f476 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "55ea2bf53ce24d32939774f2c339f476" member_type: VOTER }
I20250629 01:59:51.879806 24079 sys_catalog.cc:455] T 00000000000000000000000000000000 P 55ea2bf53ce24d32939774f2c339f476 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 55ea2bf53ce24d32939774f2c339f476. Latest consensus state: current_term: 1 leader_uuid: "55ea2bf53ce24d32939774f2c339f476" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "55ea2bf53ce24d32939774f2c339f476" member_type: VOTER } }
I20250629 01:59:51.879995 24078 sys_catalog.cc:455] T 00000000000000000000000000000000 P 55ea2bf53ce24d32939774f2c339f476 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "55ea2bf53ce24d32939774f2c339f476" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "55ea2bf53ce24d32939774f2c339f476" member_type: VOTER } }
I20250629 01:59:51.880421 24079 sys_catalog.cc:458] T 00000000000000000000000000000000 P 55ea2bf53ce24d32939774f2c339f476 [sys.catalog]: This master's current role is: LEADER
I20250629 01:59:51.880553 24078 sys_catalog.cc:458] T 00000000000000000000000000000000 P 55ea2bf53ce24d32939774f2c339f476 [sys.catalog]: This master's current role is: LEADER
I20250629 01:59:51.890125 24029 tablet_replica.cc:331] T 00000000000000000000000000000000 P 55ea2bf53ce24d32939774f2c339f476: stopping tablet replica
I20250629 01:59:51.890650 24029 raft_consensus.cc:2241] T 00000000000000000000000000000000 P 55ea2bf53ce24d32939774f2c339f476 [term 1 LEADER]: Raft consensus shutting down.
I20250629 01:59:51.891044 24029 raft_consensus.cc:2270] T 00000000000000000000000000000000 P 55ea2bf53ce24d32939774f2c339f476 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250629 01:59:51.892835 24029 master.cc:561] Master@0.0.0.0:7051 shutting down...
W20250629 01:59:51.893173 24029 acceptor_pool.cc:196] Could not shut down acceptor socket on 0.0.0.0:7051: Network error: shutdown error: Transport endpoint is not connected (error 107)
I20250629 01:59:51.943833 24029 master.cc:583] Master@0.0.0.0:7051 shutdown complete.
I20250629 01:59:52.970407 17741 external_mini_cluster.cc:1620] Killing /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu with pid 23562
I20250629 01:59:53.001443 17741 external_mini_cluster.cc:1620] Killing /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu with pid 23695
I20250629 01:59:53.034404 17741 external_mini_cluster.cc:1620] Killing /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu with pid 23828
I20250629 01:59:53.074466 17741 external_mini_cluster.cc:1351] Running /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
/tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.17.83.126:39373
--webserver_interface=127.17.83.126
--webserver_port=37935
--builtin_ntp_servers=127.17.83.84:40585
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.17.83.126:39373 with env {}
W20250629 01:59:53.354074 24087 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:59:53.354576 24087 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250629 01:59:53.354966 24087 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:59:53.382272 24087 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250629 01:59:53.382542 24087 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250629 01:59:53.382781 24087 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250629 01:59:53.383009 24087 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250629 01:59:53.413923 24087 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.17.83.84:40585
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/master-0/data
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.17.83.126:39373
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.17.83.126:39373
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/master-0/data/info.pb
--webserver_interface=127.17.83.126
--webserver_port=37935
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.18.0-SNAPSHOT
revision f0776864a65c19ebdd909053f3e624e8bcc25cee
build type FASTDEBUG
built by None at 29 Jun 2025 01:43:20 UTC on 5fd53c4cbb9d
build id 6821
TSAN enabled
I20250629 01:59:53.415068 24087 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250629 01:59:53.416489 24087 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250629 01:59:53.428987 24094 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:59:53.429895 24093 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:59:53.436442 24096 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:59:54.572783 24095 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1141 milliseconds
I20250629 01:59:54.572945 24087 server_base.cc:1043] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250629 01:59:54.574182 24087 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250629 01:59:54.576632 24087 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250629 01:59:54.577955 24087 hybrid_clock.cc:648] HybridClock initialized: now 1751162394577931 us; error 43 us; skew 500 ppm
I20250629 01:59:54.578661 24087 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250629 01:59:54.585386 24087 webserver.cc:469] Webserver started at http://127.17.83.126:37935/ using document root <none> and password file <none>
I20250629 01:59:54.586227 24087 fs_manager.cc:362] Metadata directory not provided
I20250629 01:59:54.586413 24087 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250629 01:59:54.593745 24087 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.005s sys 0.000s
I20250629 01:59:54.597932 24103 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250629 01:59:54.598920 24087 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.002s sys 0.000s
I20250629 01:59:54.599268 24087 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/master-0/data,/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/master-0/wal
uuid: "55ea2bf53ce24d32939774f2c339f476"
format_stamp: "Formatted at 2025-06-29 01:59:51 on dist-test-slave-v1mb"
I20250629 01:59:54.601116 24087 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/master-0/wal
metadata directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/master-0/wal
1 data directories: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250629 01:59:54.647337 24087 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250629 01:59:54.648718 24087 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250629 01:59:54.649121 24087 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250629 01:59:54.715399 24087 rpc_server.cc:307] RPC server started. Bound to: 127.17.83.126:39373
I20250629 01:59:54.715476 24155 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.17.83.126:39373 every 8 connection(s)
I20250629 01:59:54.718140 24087 server_base.cc:1180] Dumped server information to /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/master-0/data/info.pb
I20250629 01:59:54.727768 24156 sys_catalog.cc:263] Verifying existing consensus state
I20250629 01:59:54.727771 17741 external_mini_cluster.cc:1413] Started /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu as pid 24087
I20250629 01:59:54.730084 17741 external_mini_cluster.cc:1351] Running /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
/tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.17.83.65:40853
--local_ip_for_outbound_sockets=127.17.83.65
--tserver_master_addrs=127.17.83.126:39373
--webserver_port=45447
--webserver_interface=127.17.83.65
--builtin_ntp_servers=127.17.83.84:40585
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
I20250629 01:59:54.744088 24156 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 55ea2bf53ce24d32939774f2c339f476: Bootstrap starting.
I20250629 01:59:54.752800 24156 log.cc:826] T 00000000000000000000000000000000 P 55ea2bf53ce24d32939774f2c339f476: Log is configured to *not* fsync() on all Append() calls
I20250629 01:59:54.763801 24156 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 55ea2bf53ce24d32939774f2c339f476: Bootstrap replayed 1/1 log segments. Stats: ops{read=2 overwritten=0 applied=2 ignored=0} inserts{seen=2 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250629 01:59:54.764449 24156 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 55ea2bf53ce24d32939774f2c339f476: Bootstrap complete.
I20250629 01:59:54.783435 24156 raft_consensus.cc:357] T 00000000000000000000000000000000 P 55ea2bf53ce24d32939774f2c339f476 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "55ea2bf53ce24d32939774f2c339f476" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 39373 } }
I20250629 01:59:54.784034 24156 raft_consensus.cc:738] T 00000000000000000000000000000000 P 55ea2bf53ce24d32939774f2c339f476 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 55ea2bf53ce24d32939774f2c339f476, State: Initialized, Role: FOLLOWER
I20250629 01:59:54.784725 24156 consensus_queue.cc:260] T 00000000000000000000000000000000 P 55ea2bf53ce24d32939774f2c339f476 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 2, Last appended: 1.2, Last appended by leader: 2, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "55ea2bf53ce24d32939774f2c339f476" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 39373 } }
I20250629 01:59:54.785198 24156 raft_consensus.cc:397] T 00000000000000000000000000000000 P 55ea2bf53ce24d32939774f2c339f476 [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250629 01:59:54.785501 24156 raft_consensus.cc:491] T 00000000000000000000000000000000 P 55ea2bf53ce24d32939774f2c339f476 [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250629 01:59:54.785785 24156 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 55ea2bf53ce24d32939774f2c339f476 [term 1 FOLLOWER]: Advancing to term 2
I20250629 01:59:54.789505 24156 raft_consensus.cc:513] T 00000000000000000000000000000000 P 55ea2bf53ce24d32939774f2c339f476 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "55ea2bf53ce24d32939774f2c339f476" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 39373 } }
I20250629 01:59:54.790081 24156 leader_election.cc:304] T 00000000000000000000000000000000 P 55ea2bf53ce24d32939774f2c339f476 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 55ea2bf53ce24d32939774f2c339f476; no voters:
I20250629 01:59:54.792181 24156 leader_election.cc:290] T 00000000000000000000000000000000 P 55ea2bf53ce24d32939774f2c339f476 [CANDIDATE]: Term 2 election: Requested vote from peers
I20250629 01:59:54.792534 24160 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 55ea2bf53ce24d32939774f2c339f476 [term 2 FOLLOWER]: Leader election won for term 2
I20250629 01:59:54.795519 24160 raft_consensus.cc:695] T 00000000000000000000000000000000 P 55ea2bf53ce24d32939774f2c339f476 [term 2 LEADER]: Becoming Leader. State: Replica: 55ea2bf53ce24d32939774f2c339f476, State: Running, Role: LEADER
I20250629 01:59:54.796231 24160 consensus_queue.cc:237] T 00000000000000000000000000000000 P 55ea2bf53ce24d32939774f2c339f476 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 2, Committed index: 2, Last appended: 1.2, Last appended by leader: 2, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "55ea2bf53ce24d32939774f2c339f476" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 39373 } }
I20250629 01:59:54.796937 24156 sys_catalog.cc:564] T 00000000000000000000000000000000 P 55ea2bf53ce24d32939774f2c339f476 [sys.catalog]: configured and running, proceeding with master startup.
I20250629 01:59:54.805286 24162 sys_catalog.cc:455] T 00000000000000000000000000000000 P 55ea2bf53ce24d32939774f2c339f476 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 55ea2bf53ce24d32939774f2c339f476. Latest consensus state: current_term: 2 leader_uuid: "55ea2bf53ce24d32939774f2c339f476" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "55ea2bf53ce24d32939774f2c339f476" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 39373 } } }
I20250629 01:59:54.806012 24162 sys_catalog.cc:458] T 00000000000000000000000000000000 P 55ea2bf53ce24d32939774f2c339f476 [sys.catalog]: This master's current role is: LEADER
I20250629 01:59:54.806239 24161 sys_catalog.cc:455] T 00000000000000000000000000000000 P 55ea2bf53ce24d32939774f2c339f476 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 2 leader_uuid: "55ea2bf53ce24d32939774f2c339f476" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "55ea2bf53ce24d32939774f2c339f476" member_type: VOTER last_known_addr { host: "127.17.83.126" port: 39373 } } }
I20250629 01:59:54.806773 24161 sys_catalog.cc:458] T 00000000000000000000000000000000 P 55ea2bf53ce24d32939774f2c339f476 [sys.catalog]: This master's current role is: LEADER
I20250629 01:59:54.818918 24168 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250629 01:59:54.829512 24168 catalog_manager.cc:671] Loaded metadata for table pre_rebuild [id=34bef625ec00465c8fe357a8c6b2ae14]
I20250629 01:59:54.839182 24168 tablet_loader.cc:96] loaded metadata for tablet 84a92da9c4e347c896253e1a7e77e4ff (table pre_rebuild [id=34bef625ec00465c8fe357a8c6b2ae14])
I20250629 01:59:54.841205 24168 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250629 01:59:54.861745 24168 catalog_manager.cc:1349] Generated new cluster ID: 341a1cc2ebb94762b8bd047040218f15
I20250629 01:59:54.862078 24168 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250629 01:59:54.899341 24178 catalog_manager.cc:797] Waiting for catalog manager background task thread to start: Service unavailable: Catalog manager is not initialized. State: Starting
I20250629 01:59:54.904881 24168 catalog_manager.cc:1372] Generated new certificate authority record
I20250629 01:59:54.906090 24168 catalog_manager.cc:1506] Loading token signing keys...
I20250629 01:59:54.922783 24168 catalog_manager.cc:5955] T 00000000000000000000000000000000 P 55ea2bf53ce24d32939774f2c339f476: Generated new TSK 0
I20250629 01:59:54.923537 24168 catalog_manager.cc:1516] Initializing in-progress tserver states...
W20250629 01:59:55.093559 24158 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:59:55.094036 24158 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250629 01:59:55.094504 24158 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:59:55.123327 24158 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250629 01:59:55.124110 24158 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.17.83.65
I20250629 01:59:55.157776 24158 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.17.83.84:40585
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.17.83.65:40853
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-0/data/info.pb
--webserver_interface=127.17.83.65
--webserver_port=45447
--tserver_master_addrs=127.17.83.126:39373
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.17.83.65
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision f0776864a65c19ebdd909053f3e624e8bcc25cee
build type FASTDEBUG
built by None at 29 Jun 2025 01:43:20 UTC on 5fd53c4cbb9d
build id 6821
TSAN enabled
I20250629 01:59:55.159029 24158 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250629 01:59:55.160641 24158 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250629 01:59:55.177389 24185 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:59:55.177646 24184 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:59:55.179958 24187 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:59:56.349987 24186 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection time-out
I20250629 01:59:56.350061 24158 server_base.cc:1043] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250629 01:59:56.354678 24158 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250629 01:59:56.357239 24158 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250629 01:59:56.358677 24158 hybrid_clock.cc:648] HybridClock initialized: now 1751162396358636 us; error 57 us; skew 500 ppm
I20250629 01:59:56.359437 24158 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250629 01:59:56.366369 24158 webserver.cc:469] Webserver started at http://127.17.83.65:45447/ using document root <none> and password file <none>
I20250629 01:59:56.367297 24158 fs_manager.cc:362] Metadata directory not provided
I20250629 01:59:56.367523 24158 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250629 01:59:56.375455 24158 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.006s sys 0.001s
I20250629 01:59:56.380264 24194 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250629 01:59:56.381314 24158 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.000s sys 0.005s
I20250629 01:59:56.381618 24158 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-0/data,/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-0/wal
uuid: "960815965ed44e50b5dccaca6f132dba"
format_stamp: "Formatted at 2025-06-29 01:59:40 on dist-test-slave-v1mb"
I20250629 01:59:56.383482 24158 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-0/wal
metadata directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-0/wal
1 data directories: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250629 01:59:56.432525 24158 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250629 01:59:56.433974 24158 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250629 01:59:56.434401 24158 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250629 01:59:56.437436 24158 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250629 01:59:56.443472 24201 ts_tablet_manager.cc:542] Loading tablet metadata (0/1 complete)
I20250629 01:59:56.450541 24158 ts_tablet_manager.cc:579] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20250629 01:59:56.450752 24158 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.009s user 0.002s sys 0.001s
I20250629 01:59:56.451030 24158 ts_tablet_manager.cc:594] Registering tablets (0/1 complete)
I20250629 01:59:56.455469 24158 ts_tablet_manager.cc:610] Registered 1 tablets
I20250629 01:59:56.455704 24158 ts_tablet_manager.cc:589] Time spent register tablets: real 0.005s user 0.003s sys 0.002s
I20250629 01:59:56.456058 24201 tablet_bootstrap.cc:492] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba: Bootstrap starting.
I20250629 01:59:56.625372 24158 rpc_server.cc:307] RPC server started. Bound to: 127.17.83.65:40853
I20250629 01:59:56.625597 24307 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.17.83.65:40853 every 8 connection(s)
I20250629 01:59:56.628799 24158 server_base.cc:1180] Dumped server information to /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-0/data/info.pb
I20250629 01:59:56.638067 17741 external_mini_cluster.cc:1413] Started /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu as pid 24158
I20250629 01:59:56.639679 17741 external_mini_cluster.cc:1351] Running /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
/tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-1/wal
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-1/logs
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.17.83.66:46293
--local_ip_for_outbound_sockets=127.17.83.66
--tserver_master_addrs=127.17.83.126:39373
--webserver_port=38495
--webserver_interface=127.17.83.66
--builtin_ntp_servers=127.17.83.84:40585
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
I20250629 01:59:56.687414 24308 heartbeater.cc:344] Connected to a master server at 127.17.83.126:39373
I20250629 01:59:56.687836 24308 heartbeater.cc:461] Registering TS with master...
I20250629 01:59:56.688884 24308 heartbeater.cc:507] Master 127.17.83.126:39373 requested a full tablet report, sending...
I20250629 01:59:56.693045 24121 ts_manager.cc:194] Registered new tserver with Master: 960815965ed44e50b5dccaca6f132dba (127.17.83.65:40853)
I20250629 01:59:56.699628 24121 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.17.83.65:44951
I20250629 01:59:56.805629 24201 log.cc:826] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba: Log is configured to *not* fsync() on all Append() calls
W20250629 01:59:57.020712 24312 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:59:57.021193 24312 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250629 01:59:57.021644 24312 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:59:57.050585 24312 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250629 01:59:57.051427 24312 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.17.83.66
I20250629 01:59:57.083182 24312 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.17.83.84:40585
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-1/data
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.17.83.66:46293
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-1/data/info.pb
--webserver_interface=127.17.83.66
--webserver_port=38495
--tserver_master_addrs=127.17.83.126:39373
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.17.83.66
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision f0776864a65c19ebdd909053f3e624e8bcc25cee
build type FASTDEBUG
built by None at 29 Jun 2025 01:43:20 UTC on 5fd53c4cbb9d
build id 6821
TSAN enabled
I20250629 01:59:57.084337 24312 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250629 01:59:57.085844 24312 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250629 01:59:57.101949 24319 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:59:57.703579 24308 heartbeater.cc:499] Master 127.17.83.126:39373 was elected leader, sending a full tablet report...
W20250629 01:59:57.102383 24320 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:59:58.221475 24322 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:59:58.223224 24321 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection time-out
I20250629 01:59:58.223357 24312 server_base.cc:1043] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250629 01:59:58.224467 24312 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250629 01:59:58.229708 24312 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250629 01:59:58.231117 24312 hybrid_clock.cc:648] HybridClock initialized: now 1751162398231079 us; error 39 us; skew 500 ppm
I20250629 01:59:58.231873 24312 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250629 01:59:58.238655 24312 webserver.cc:469] Webserver started at http://127.17.83.66:38495/ using document root <none> and password file <none>
I20250629 01:59:58.239594 24312 fs_manager.cc:362] Metadata directory not provided
I20250629 01:59:58.239818 24312 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250629 01:59:58.247275 24312 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.007s sys 0.001s
I20250629 01:59:58.251896 24329 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250629 01:59:58.252815 24312 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.002s sys 0.002s
I20250629 01:59:58.253101 24312 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-1/data,/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-1/wal
uuid: "4ec069a1079d413a9c88c502917576f9"
format_stamp: "Formatted at 2025-06-29 01:59:42 on dist-test-slave-v1mb"
I20250629 01:59:58.254742 24312 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-1/wal
metadata directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-1/wal
1 data directories: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250629 01:59:58.304144 24312 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250629 01:59:58.305426 24312 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250629 01:59:58.305826 24312 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250629 01:59:58.308194 24312 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250629 01:59:58.314204 24336 ts_tablet_manager.cc:542] Loading tablet metadata (0/1 complete)
I20250629 01:59:58.324602 24312 ts_tablet_manager.cc:579] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20250629 01:59:58.324822 24312 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.012s user 0.000s sys 0.002s
I20250629 01:59:58.325160 24312 ts_tablet_manager.cc:594] Registering tablets (0/1 complete)
I20250629 01:59:58.329466 24312 ts_tablet_manager.cc:610] Registered 1 tablets
I20250629 01:59:58.329659 24312 ts_tablet_manager.cc:589] Time spent register tablets: real 0.005s user 0.003s sys 0.000s
I20250629 01:59:58.330065 24336 tablet_bootstrap.cc:492] T 84a92da9c4e347c896253e1a7e77e4ff P 4ec069a1079d413a9c88c502917576f9: Bootstrap starting.
I20250629 01:59:58.518756 24312 rpc_server.cc:307] RPC server started. Bound to: 127.17.83.66:46293
I20250629 01:59:58.518925 24442 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.17.83.66:46293 every 8 connection(s)
I20250629 01:59:58.521396 24312 server_base.cc:1180] Dumped server information to /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-1/data/info.pb
I20250629 01:59:58.524794 17741 external_mini_cluster.cc:1413] Started /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu as pid 24312
I20250629 01:59:58.527379 17741 external_mini_cluster.cc:1351] Running /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
/tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-2/wal
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-2/logs
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.17.83.67:46531
--local_ip_for_outbound_sockets=127.17.83.67
--tserver_master_addrs=127.17.83.126:39373
--webserver_port=36413
--webserver_interface=127.17.83.67
--builtin_ntp_servers=127.17.83.84:40585
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
I20250629 01:59:58.571322 24443 heartbeater.cc:344] Connected to a master server at 127.17.83.126:39373
I20250629 01:59:58.571745 24443 heartbeater.cc:461] Registering TS with master...
I20250629 01:59:58.572988 24443 heartbeater.cc:507] Master 127.17.83.126:39373 requested a full tablet report, sending...
I20250629 01:59:58.577045 24121 ts_manager.cc:194] Registered new tserver with Master: 4ec069a1079d413a9c88c502917576f9 (127.17.83.66:46293)
I20250629 01:59:58.579871 24121 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.17.83.66:46529
I20250629 01:59:58.618549 24336 log.cc:826] T 84a92da9c4e347c896253e1a7e77e4ff P 4ec069a1079d413a9c88c502917576f9: Log is configured to *not* fsync() on all Append() calls
I20250629 01:59:58.758133 24201 tablet_bootstrap.cc:492] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba: Bootstrap replayed 1/1 log segments. Stats: ops{read=205 overwritten=0 applied=205 ignored=0} inserts{seen=10200 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250629 01:59:58.758867 24201 tablet_bootstrap.cc:492] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba: Bootstrap complete.
I20250629 01:59:58.760170 24201 ts_tablet_manager.cc:1397] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba: Time spent bootstrapping tablet: real 2.305s user 2.210s sys 0.080s
I20250629 01:59:58.769656 24201 raft_consensus.cc:357] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "960815965ed44e50b5dccaca6f132dba" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40853 } } peers { permanent_uuid: "4ec069a1079d413a9c88c502917576f9" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46293 } } peers { permanent_uuid: "362dbed412b844bd8694d82670dd3b72" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 46531 } }
I20250629 01:59:58.771519 24201 raft_consensus.cc:738] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 960815965ed44e50b5dccaca6f132dba, State: Initialized, Role: FOLLOWER
I20250629 01:59:58.772188 24201 consensus_queue.cc:260] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 205, Last appended: 1.205, Last appended by leader: 205, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "960815965ed44e50b5dccaca6f132dba" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40853 } } peers { permanent_uuid: "4ec069a1079d413a9c88c502917576f9" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46293 } } peers { permanent_uuid: "362dbed412b844bd8694d82670dd3b72" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 46531 } }
I20250629 01:59:58.775343 24201 ts_tablet_manager.cc:1428] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba: Time spent starting tablet: real 0.015s user 0.016s sys 0.000s
W20250629 01:59:58.892726 24447 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250629 01:59:58.893170 24447 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250629 01:59:58.893617 24447 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250629 01:59:58.922470 24447 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250629 01:59:58.923341 24447 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.17.83.67
I20250629 01:59:58.956096 24447 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.17.83.84:40585
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-2/data
--fs_wal_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.17.83.67:46531
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-2/data/info.pb
--webserver_interface=127.17.83.67
--webserver_port=36413
--tserver_master_addrs=127.17.83.126:39373
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.17.83.67
--log_dir=/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.18.0-SNAPSHOT
revision f0776864a65c19ebdd909053f3e624e8bcc25cee
build type FASTDEBUG
built by None at 29 Jun 2025 01:43:20 UTC on 5fd53c4cbb9d
build id 6821
TSAN enabled
I20250629 01:59:58.957341 24447 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250629 01:59:58.958885 24447 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250629 01:59:58.974956 24456 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:59:59.583485 24443 heartbeater.cc:499] Master 127.17.83.126:39373 was elected leader, sending a full tablet report...
I20250629 01:59:59.899806 24461 raft_consensus.cc:491] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba [term 1 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250629 01:59:59.900360 24461 raft_consensus.cc:513] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "960815965ed44e50b5dccaca6f132dba" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40853 } } peers { permanent_uuid: "4ec069a1079d413a9c88c502917576f9" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46293 } } peers { permanent_uuid: "362dbed412b844bd8694d82670dd3b72" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 46531 } }
I20250629 01:59:59.904070 24461 leader_election.cc:290] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers 4ec069a1079d413a9c88c502917576f9 (127.17.83.66:46293), 362dbed412b844bd8694d82670dd3b72 (127.17.83.67:46531)
W20250629 01:59:59.926712 24195 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.17.83.67:46531: connect: Connection refused (error 111)
W20250629 01:59:59.934496 24195 leader_election.cc:336] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba [CANDIDATE]: Term 2 pre-election: RPC error from VoteRequest() call to peer 362dbed412b844bd8694d82670dd3b72 (127.17.83.67:46531): Network error: Client connection negotiation failed: client connection to 127.17.83.67:46531: connect: Connection refused (error 111)
I20250629 01:59:59.941843 24398 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "84a92da9c4e347c896253e1a7e77e4ff" candidate_uuid: "960815965ed44e50b5dccaca6f132dba" candidate_term: 2 candidate_status { last_received { term: 1 index: 205 } } ignore_live_leader: false dest_uuid: "4ec069a1079d413a9c88c502917576f9" is_pre_election: true
W20250629 01:59:59.953792 24196 leader_election.cc:343] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba [CANDIDATE]: Term 2 pre-election: Tablet error from VoteRequest() call to peer 4ec069a1079d413a9c88c502917576f9 (127.17.83.66:46293): Illegal state: must be running to vote when last-logged opid is not known
I20250629 01:59:59.954279 24196 leader_election.cc:304] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: 960815965ed44e50b5dccaca6f132dba; no voters: 362dbed412b844bd8694d82670dd3b72, 4ec069a1079d413a9c88c502917576f9
I20250629 01:59:59.955011 24461 raft_consensus.cc:2747] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba [term 1 FOLLOWER]: Leader pre-election lost for term 2. Reason: could not achieve majority
W20250629 01:59:58.976049 24455 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250629 01:59:58.976332 24458 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250629 01:59:58.978224 24447 server_base.cc:1048] running on GCE node
I20250629 02:00:00.131534 24447 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250629 02:00:00.134356 24447 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250629 02:00:00.135874 24447 hybrid_clock.cc:648] HybridClock initialized: now 1751162400135840 us; error 40 us; skew 500 ppm
I20250629 02:00:00.136940 24447 server_base.cc:848] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250629 02:00:00.144443 24447 webserver.cc:469] Webserver started at http://127.17.83.67:36413/ using document root <none> and password file <none>
I20250629 02:00:00.145674 24447 fs_manager.cc:362] Metadata directory not provided
I20250629 02:00:00.145967 24447 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250629 02:00:00.154218 24447 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.005s sys 0.000s
I20250629 02:00:00.159111 24470 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250629 02:00:00.160157 24447 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.000s
I20250629 02:00:00.160470 24447 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-2/data,/tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-2/wal
uuid: "362dbed412b844bd8694d82670dd3b72"
format_stamp: "Formatted at 2025-06-29 01:59:44 on dist-test-slave-v1mb"
I20250629 02:00:00.162313 24447 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-2/wal
metadata directory: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-2/wal
1 data directories: /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250629 02:00:00.221247 24447 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250629 02:00:00.222754 24447 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250629 02:00:00.223222 24447 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250629 02:00:00.226233 24447 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250629 02:00:00.232537 24477 ts_tablet_manager.cc:542] Loading tablet metadata (0/1 complete)
I20250629 02:00:00.240233 24447 ts_tablet_manager.cc:579] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20250629 02:00:00.240453 24447 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.010s user 0.000s sys 0.002s
I20250629 02:00:00.240744 24447 ts_tablet_manager.cc:594] Registering tablets (0/1 complete)
I20250629 02:00:00.245354 24447 ts_tablet_manager.cc:610] Registered 1 tablets
I20250629 02:00:00.245604 24447 ts_tablet_manager.cc:589] Time spent register tablets: real 0.005s user 0.002s sys 0.001s
I20250629 02:00:00.246003 24477 tablet_bootstrap.cc:492] T 84a92da9c4e347c896253e1a7e77e4ff P 362dbed412b844bd8694d82670dd3b72: Bootstrap starting.
I20250629 02:00:00.424803 24447 rpc_server.cc:307] RPC server started. Bound to: 127.17.83.67:46531
I20250629 02:00:00.425005 24583 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.17.83.67:46531 every 8 connection(s)
I20250629 02:00:00.428423 24447 server_base.cc:1180] Dumped server information to /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-2/data/info.pb
I20250629 02:00:00.430480 17741 external_mini_cluster.cc:1413] Started /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu as pid 24447
I20250629 02:00:00.486079 24584 heartbeater.cc:344] Connected to a master server at 127.17.83.126:39373
I20250629 02:00:00.486611 24584 heartbeater.cc:461] Registering TS with master...
I20250629 02:00:00.487886 24584 heartbeater.cc:507] Master 127.17.83.126:39373 requested a full tablet report, sending...
I20250629 02:00:00.493356 24121 ts_manager.cc:194] Registered new tserver with Master: 362dbed412b844bd8694d82670dd3b72 (127.17.83.67:46531)
I20250629 02:00:00.496812 17741 external_mini_cluster.cc:934] 3 TS(s) registered with all masters
I20250629 02:00:00.496868 24121 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.17.83.67:43283
I20250629 02:00:00.596068 24477 log.cc:826] T 84a92da9c4e347c896253e1a7e77e4ff P 362dbed412b844bd8694d82670dd3b72: Log is configured to *not* fsync() on all Append() calls
I20250629 02:00:00.682595 24336 tablet_bootstrap.cc:492] T 84a92da9c4e347c896253e1a7e77e4ff P 4ec069a1079d413a9c88c502917576f9: Bootstrap replayed 1/1 log segments. Stats: ops{read=205 overwritten=0 applied=205 ignored=0} inserts{seen=10200 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250629 02:00:00.683419 24336 tablet_bootstrap.cc:492] T 84a92da9c4e347c896253e1a7e77e4ff P 4ec069a1079d413a9c88c502917576f9: Bootstrap complete.
I20250629 02:00:00.684718 24336 ts_tablet_manager.cc:1397] T 84a92da9c4e347c896253e1a7e77e4ff P 4ec069a1079d413a9c88c502917576f9: Time spent bootstrapping tablet: real 2.355s user 2.262s sys 0.068s
I20250629 02:00:00.690111 24336 raft_consensus.cc:357] T 84a92da9c4e347c896253e1a7e77e4ff P 4ec069a1079d413a9c88c502917576f9 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "960815965ed44e50b5dccaca6f132dba" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40853 } } peers { permanent_uuid: "4ec069a1079d413a9c88c502917576f9" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46293 } } peers { permanent_uuid: "362dbed412b844bd8694d82670dd3b72" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 46531 } }
I20250629 02:00:00.692106 24336 raft_consensus.cc:738] T 84a92da9c4e347c896253e1a7e77e4ff P 4ec069a1079d413a9c88c502917576f9 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 4ec069a1079d413a9c88c502917576f9, State: Initialized, Role: FOLLOWER
I20250629 02:00:00.692982 24336 consensus_queue.cc:260] T 84a92da9c4e347c896253e1a7e77e4ff P 4ec069a1079d413a9c88c502917576f9 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 205, Last appended: 1.205, Last appended by leader: 205, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "960815965ed44e50b5dccaca6f132dba" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40853 } } peers { permanent_uuid: "4ec069a1079d413a9c88c502917576f9" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46293 } } peers { permanent_uuid: "362dbed412b844bd8694d82670dd3b72" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 46531 } }
I20250629 02:00:00.696185 24336 ts_tablet_manager.cc:1428] T 84a92da9c4e347c896253e1a7e77e4ff P 4ec069a1079d413a9c88c502917576f9: Time spent starting tablet: real 0.011s user 0.012s sys 0.000s
I20250629 02:00:01.500793 24584 heartbeater.cc:499] Master 127.17.83.126:39373 was elected leader, sending a full tablet report...
I20250629 02:00:01.959508 24597 raft_consensus.cc:491] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba [term 1 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250629 02:00:01.959923 24597 raft_consensus.cc:513] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "960815965ed44e50b5dccaca6f132dba" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40853 } } peers { permanent_uuid: "4ec069a1079d413a9c88c502917576f9" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46293 } } peers { permanent_uuid: "362dbed412b844bd8694d82670dd3b72" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 46531 } }
I20250629 02:00:01.961376 24597 leader_election.cc:290] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers 4ec069a1079d413a9c88c502917576f9 (127.17.83.66:46293), 362dbed412b844bd8694d82670dd3b72 (127.17.83.67:46531)
I20250629 02:00:01.962322 24398 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "84a92da9c4e347c896253e1a7e77e4ff" candidate_uuid: "960815965ed44e50b5dccaca6f132dba" candidate_term: 2 candidate_status { last_received { term: 1 index: 205 } } ignore_live_leader: false dest_uuid: "4ec069a1079d413a9c88c502917576f9" is_pre_election: true
I20250629 02:00:01.963119 24398 raft_consensus.cc:2466] T 84a92da9c4e347c896253e1a7e77e4ff P 4ec069a1079d413a9c88c502917576f9 [term 1 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 960815965ed44e50b5dccaca6f132dba in term 1.
I20250629 02:00:01.964838 24196 leader_election.cc:304] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 4ec069a1079d413a9c88c502917576f9, 960815965ed44e50b5dccaca6f132dba; no voters:
I20250629 02:00:01.966544 24597 raft_consensus.cc:2802] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba [term 1 FOLLOWER]: Leader pre-election won for term 2
I20250629 02:00:01.966984 24597 raft_consensus.cc:491] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba [term 1 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250629 02:00:01.967438 24597 raft_consensus.cc:3058] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba [term 1 FOLLOWER]: Advancing to term 2
I20250629 02:00:01.975340 24597 raft_consensus.cc:513] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "960815965ed44e50b5dccaca6f132dba" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40853 } } peers { permanent_uuid: "4ec069a1079d413a9c88c502917576f9" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46293 } } peers { permanent_uuid: "362dbed412b844bd8694d82670dd3b72" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 46531 } }
I20250629 02:00:01.978021 24597 leader_election.cc:290] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba [CANDIDATE]: Term 2 election: Requested vote from peers 4ec069a1079d413a9c88c502917576f9 (127.17.83.66:46293), 362dbed412b844bd8694d82670dd3b72 (127.17.83.67:46531)
I20250629 02:00:01.978251 24398 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "84a92da9c4e347c896253e1a7e77e4ff" candidate_uuid: "960815965ed44e50b5dccaca6f132dba" candidate_term: 2 candidate_status { last_received { term: 1 index: 205 } } ignore_live_leader: false dest_uuid: "4ec069a1079d413a9c88c502917576f9"
I20250629 02:00:01.978787 24398 raft_consensus.cc:3058] T 84a92da9c4e347c896253e1a7e77e4ff P 4ec069a1079d413a9c88c502917576f9 [term 1 FOLLOWER]: Advancing to term 2
I20250629 02:00:01.985482 24398 raft_consensus.cc:2466] T 84a92da9c4e347c896253e1a7e77e4ff P 4ec069a1079d413a9c88c502917576f9 [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 960815965ed44e50b5dccaca6f132dba in term 2.
I20250629 02:00:01.986374 24196 leader_election.cc:304] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 4ec069a1079d413a9c88c502917576f9, 960815965ed44e50b5dccaca6f132dba; no voters:
I20250629 02:00:01.986969 24597 raft_consensus.cc:2802] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba [term 2 FOLLOWER]: Leader election won for term 2
I20250629 02:00:01.988718 24597 raft_consensus.cc:695] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba [term 2 LEADER]: Becoming Leader. State: Replica: 960815965ed44e50b5dccaca6f132dba, State: Running, Role: LEADER
I20250629 02:00:01.989974 24597 consensus_queue.cc:237] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 205, Committed index: 205, Last appended: 1.205, Last appended by leader: 205, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "960815965ed44e50b5dccaca6f132dba" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40853 } } peers { permanent_uuid: "4ec069a1079d413a9c88c502917576f9" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46293 } } peers { permanent_uuid: "362dbed412b844bd8694d82670dd3b72" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 46531 } }
I20250629 02:00:01.979897 24538 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "84a92da9c4e347c896253e1a7e77e4ff" candidate_uuid: "960815965ed44e50b5dccaca6f132dba" candidate_term: 2 candidate_status { last_received { term: 1 index: 205 } } ignore_live_leader: false dest_uuid: "362dbed412b844bd8694d82670dd3b72"
I20250629 02:00:01.979507 24539 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "84a92da9c4e347c896253e1a7e77e4ff" candidate_uuid: "960815965ed44e50b5dccaca6f132dba" candidate_term: 2 candidate_status { last_received { term: 1 index: 205 } } ignore_live_leader: false dest_uuid: "362dbed412b844bd8694d82670dd3b72" is_pre_election: true
W20250629 02:00:01.992282 24195 leader_election.cc:343] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba [CANDIDATE]: Term 2 election: Tablet error from VoteRequest() call to peer 362dbed412b844bd8694d82670dd3b72 (127.17.83.67:46531): Illegal state: must be running to vote when last-logged opid is not known
I20250629 02:00:02.000737 24121 catalog_manager.cc:5582] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba reported cstate change: term changed from 0 to 2, leader changed from <none> to 960815965ed44e50b5dccaca6f132dba (127.17.83.65), VOTER 362dbed412b844bd8694d82670dd3b72 (127.17.83.67) added, VOTER 4ec069a1079d413a9c88c502917576f9 (127.17.83.66) added, VOTER 960815965ed44e50b5dccaca6f132dba (127.17.83.65) added. New cstate: current_term: 2 leader_uuid: "960815965ed44e50b5dccaca6f132dba" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "960815965ed44e50b5dccaca6f132dba" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40853 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "4ec069a1079d413a9c88c502917576f9" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46293 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "362dbed412b844bd8694d82670dd3b72" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 46531 } health_report { overall_health: UNKNOWN } } }
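The election summary and queue state above follow the usual Raft quorum rule: with 3 voters the reported "Majority size" is 2, it stays 2 after the first eviction later in this run, and it drops to 1 once only the leader remains. A minimal, self-contained sketch of that arithmetic (illustrative only, not Kudu source):

    #include <iostream>

    // Raft-style majority for a voter set of size n: floor(n/2) + 1.
    int MajoritySize(int num_voters) {
      return num_voters / 2 + 1;
    }

    int main() {
      // Matches the "Majority size" values reported in the consensus queue
      // state as the config shrinks from 3 voters to 2 and then 1.
      for (int n : {3, 2, 1}) {
        std::cout << n << " voters -> majority " << MajoritySize(n) << "\n";
      }
      return 0;
    }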
W20250629 02:00:02.433847 17741 scanner-internal.cc:458] Time spent opening tablet: real 1.870s user 0.006s sys 0.001s
I20250629 02:00:02.442358 24398 raft_consensus.cc:1273] T 84a92da9c4e347c896253e1a7e77e4ff P 4ec069a1079d413a9c88c502917576f9 [term 2 FOLLOWER]: Refusing update from remote peer 960815965ed44e50b5dccaca6f132dba: Log matching property violated. Preceding OpId in replica: term: 1 index: 205. Preceding OpId from leader: term: 2 index: 206. (index mismatch)
I20250629 02:00:02.443913 24597 consensus_queue.cc:1035] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba [LEADER]: Connected to new peer: Peer: permanent_uuid: "4ec069a1079d413a9c88c502917576f9" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46293 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 206, Last known committed idx: 205, Time since last communication: 0.000s
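The "Refusing update ... Log matching property violated" / LMP_MISMATCH exchange above is the follower-side consistency check from Raft: an append is accepted only if the leader's stated preceding OpId matches what the follower last appended, and on a mismatch the leader steps the peer's next index back and retries. A simplified sketch of that check (illustrative, not the Kudu consensus code):

    #include <cstdint>

    struct OpId {
      int64_t term;
      int64_t index;
    };

    // Follower-side log-matching check: accept the append only if the leader's
    // preceding OpId equals the follower's last appended OpId; otherwise the
    // follower reports a mismatch and the leader retries from an earlier index.
    bool AcceptsAppend(const OpId& follower_last_appended,
                       const OpId& leader_preceding) {
      return follower_last_appended.term == leader_preceding.term &&
             follower_last_appended.index == leader_preceding.index;
    }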
W20250629 02:00:02.554237 24195 consensus_peers.cc:489] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba -> Peer 362dbed412b844bd8694d82670dd3b72 (127.17.83.67:46531): Couldn't send request to peer 362dbed412b844bd8694d82670dd3b72. Error code: TABLET_NOT_RUNNING (12). Status: Illegal state: Tablet not RUNNING: BOOTSTRAPPING. This is attempt 1: this message will repeat every 5th retry.
I20250629 02:00:02.604791 24263 consensus_queue.cc:237] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 206, Committed index: 206, Last appended: 2.206, Last appended by leader: 205, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 207 OBSOLETE_local: false peers { permanent_uuid: "960815965ed44e50b5dccaca6f132dba" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40853 } } peers { permanent_uuid: "4ec069a1079d413a9c88c502917576f9" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46293 } }
I20250629 02:00:02.610329 24398 raft_consensus.cc:1273] T 84a92da9c4e347c896253e1a7e77e4ff P 4ec069a1079d413a9c88c502917576f9 [term 2 FOLLOWER]: Refusing update from remote peer 960815965ed44e50b5dccaca6f132dba: Log matching property violated. Preceding OpId in replica: term: 2 index: 206. Preceding OpId from leader: term: 2 index: 207. (index mismatch)
I20250629 02:00:02.611850 24612 consensus_queue.cc:1035] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba [LEADER]: Connected to new peer: Peer: permanent_uuid: "4ec069a1079d413a9c88c502917576f9" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46293 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 207, Last known committed idx: 206, Time since last communication: 0.000s
I20250629 02:00:02.617827 24612 raft_consensus.cc:2953] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba [term 2 LEADER]: Committing config change with OpId 2.207: config changed from index -1 to 207, VOTER 362dbed412b844bd8694d82670dd3b72 (127.17.83.67) evicted. New config: { opid_index: 207 OBSOLETE_local: false peers { permanent_uuid: "960815965ed44e50b5dccaca6f132dba" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40853 } } peers { permanent_uuid: "4ec069a1079d413a9c88c502917576f9" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46293 } } }
I20250629 02:00:02.620177 24398 raft_consensus.cc:2953] T 84a92da9c4e347c896253e1a7e77e4ff P 4ec069a1079d413a9c88c502917576f9 [term 2 FOLLOWER]: Committing config change with OpId 2.207: config changed from index -1 to 207, VOTER 362dbed412b844bd8694d82670dd3b72 (127.17.83.67) evicted. New config: { opid_index: 207 OBSOLETE_local: false peers { permanent_uuid: "960815965ed44e50b5dccaca6f132dba" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40853 } } peers { permanent_uuid: "4ec069a1079d413a9c88c502917576f9" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46293 } } }
I20250629 02:00:02.667295 24108 catalog_manager.cc:5095] ChangeConfig:REMOVE_PEER RPC for tablet 84a92da9c4e347c896253e1a7e77e4ff with cas_config_opid_index -1: ChangeConfig:REMOVE_PEER succeeded (attempt 1)
I20250629 02:00:02.673981 24121 catalog_manager.cc:5582] T 84a92da9c4e347c896253e1a7e77e4ff P 4ec069a1079d413a9c88c502917576f9 reported cstate change: config changed from index -1 to 207, VOTER 362dbed412b844bd8694d82670dd3b72 (127.17.83.67) evicted. New cstate: current_term: 2 leader_uuid: "960815965ed44e50b5dccaca6f132dba" committed_config { opid_index: 207 OBSOLETE_local: false peers { permanent_uuid: "960815965ed44e50b5dccaca6f132dba" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40853 } } peers { permanent_uuid: "4ec069a1079d413a9c88c502917576f9" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46293 } } }
I20250629 02:00:02.707476 24477 tablet_bootstrap.cc:492] T 84a92da9c4e347c896253e1a7e77e4ff P 362dbed412b844bd8694d82670dd3b72: Bootstrap replayed 1/1 log segments. Stats: ops{read=205 overwritten=0 applied=205 ignored=0} inserts{seen=10200 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250629 02:00:02.708446 24477 tablet_bootstrap.cc:492] T 84a92da9c4e347c896253e1a7e77e4ff P 362dbed412b844bd8694d82670dd3b72: Bootstrap complete.
I20250629 02:00:02.710204 24477 ts_tablet_manager.cc:1397] T 84a92da9c4e347c896253e1a7e77e4ff P 362dbed412b844bd8694d82670dd3b72: Time spent bootstrapping tablet: real 2.465s user 2.399s sys 0.053s
I20250629 02:00:02.717633 24477 raft_consensus.cc:357] T 84a92da9c4e347c896253e1a7e77e4ff P 362dbed412b844bd8694d82670dd3b72 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "960815965ed44e50b5dccaca6f132dba" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40853 } } peers { permanent_uuid: "4ec069a1079d413a9c88c502917576f9" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46293 } } peers { permanent_uuid: "362dbed412b844bd8694d82670dd3b72" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 46531 } }
I20250629 02:00:02.720585 24477 raft_consensus.cc:738] T 84a92da9c4e347c896253e1a7e77e4ff P 362dbed412b844bd8694d82670dd3b72 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 362dbed412b844bd8694d82670dd3b72, State: Initialized, Role: FOLLOWER
I20250629 02:00:02.721486 24477 consensus_queue.cc:260] T 84a92da9c4e347c896253e1a7e77e4ff P 362dbed412b844bd8694d82670dd3b72 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 205, Last appended: 1.205, Last appended by leader: 205, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "960815965ed44e50b5dccaca6f132dba" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40853 } } peers { permanent_uuid: "4ec069a1079d413a9c88c502917576f9" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46293 } } peers { permanent_uuid: "362dbed412b844bd8694d82670dd3b72" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 46531 } }
I20250629 02:00:02.727043 24263 consensus_queue.cc:237] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 207, Committed index: 207, Last appended: 2.207, Last appended by leader: 205, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: 208 OBSOLETE_local: false peers { permanent_uuid: "960815965ed44e50b5dccaca6f132dba" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40853 } }
I20250629 02:00:02.741178 24612 raft_consensus.cc:2953] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba [term 2 LEADER]: Committing config change with OpId 2.208: config changed from index 207 to 208, VOTER 4ec069a1079d413a9c88c502917576f9 (127.17.83.66) evicted. New config: { opid_index: 208 OBSOLETE_local: false peers { permanent_uuid: "960815965ed44e50b5dccaca6f132dba" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40853 } } }
I20250629 02:00:02.750581 24108 catalog_manager.cc:5095] ChangeConfig:REMOVE_PEER RPC for tablet 84a92da9c4e347c896253e1a7e77e4ff with cas_config_opid_index 207: ChangeConfig:REMOVE_PEER succeeded (attempt 1)
I20250629 02:00:02.754638 24120 catalog_manager.cc:5582] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba reported cstate change: config changed from index 207 to 208, VOTER 4ec069a1079d413a9c88c502917576f9 (127.17.83.66) evicted. New cstate: current_term: 2 leader_uuid: "960815965ed44e50b5dccaca6f132dba" committed_config { opid_index: 208 OBSOLETE_local: false peers { permanent_uuid: "960815965ed44e50b5dccaca6f132dba" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40853 } health_report { overall_health: HEALTHY } } }
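Both REMOVE_PEER calls above carry a cas_config_opid_index (-1 and then 207) that matches the opid_index of the config committed at the time, which is why each succeeds on attempt 1; the index acts as a compare-and-swap guard, so a request built against a stale config would be rejected rather than applied. Roughly (a sketch of the guard only, not the actual RPC handling):

    #include <cstdint>

    // Compare-and-swap guard on Raft config changes: apply the change only if
    // the caller's expected config opid_index still matches the committed one.
    bool ConfigCasMatches(int64_t cas_config_opid_index,
                          int64_t committed_config_opid_index) {
      return cas_config_opid_index == committed_config_opid_index;
    }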
I20250629 02:00:02.782033 24378 tablet_service.cc:1515] Processing DeleteTablet for tablet 84a92da9c4e347c896253e1a7e77e4ff with delete_type TABLET_DATA_TOMBSTONED (TS 4ec069a1079d413a9c88c502917576f9 not found in new config with opid_index 208) from {username='slave'} at 127.0.0.1:37894
I20250629 02:00:02.796391 24623 tablet_replica.cc:331] stopping tablet replica
I20250629 02:00:02.797345 24623 raft_consensus.cc:2241] T 84a92da9c4e347c896253e1a7e77e4ff P 4ec069a1079d413a9c88c502917576f9 [term 2 FOLLOWER]: Raft consensus shutting down.
I20250629 02:00:02.798122 24623 raft_consensus.cc:2270] T 84a92da9c4e347c896253e1a7e77e4ff P 4ec069a1079d413a9c88c502917576f9 [term 2 FOLLOWER]: Raft consensus is shut down!
I20250629 02:00:02.801651 24519 tablet_service.cc:1515] Processing DeleteTablet for tablet 84a92da9c4e347c896253e1a7e77e4ff with delete_type TABLET_DATA_TOMBSTONED (TS 362dbed412b844bd8694d82670dd3b72 not found in new config with opid_index 207) from {username='slave'} at 127.0.0.1:55986
I20250629 02:00:02.809186 24477 ts_tablet_manager.cc:1428] T 84a92da9c4e347c896253e1a7e77e4ff P 362dbed412b844bd8694d82670dd3b72: Time spent starting tablet: real 0.099s user 0.027s sys 0.044s
I20250629 02:00:02.813794 24624 tablet_replica.cc:331] stopping tablet replica
I20250629 02:00:02.814592 24624 raft_consensus.cc:2241] T 84a92da9c4e347c896253e1a7e77e4ff P 362dbed412b844bd8694d82670dd3b72 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250629 02:00:02.815236 24624 raft_consensus.cc:2270] T 84a92da9c4e347c896253e1a7e77e4ff P 362dbed412b844bd8694d82670dd3b72 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250629 02:00:02.833204 24623 ts_tablet_manager.cc:1905] T 84a92da9c4e347c896253e1a7e77e4ff P 4ec069a1079d413a9c88c502917576f9: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20250629 02:00:02.847357 24623 ts_tablet_manager.cc:1918] T 84a92da9c4e347c896253e1a7e77e4ff P 4ec069a1079d413a9c88c502917576f9: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 2.207
I20250629 02:00:02.847707 24623 log.cc:1199] T 84a92da9c4e347c896253e1a7e77e4ff P 4ec069a1079d413a9c88c502917576f9: Deleting WAL directory at /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-1/wal/wals/84a92da9c4e347c896253e1a7e77e4ff
I20250629 02:00:02.849220 24106 catalog_manager.cc:4928] TS 4ec069a1079d413a9c88c502917576f9 (127.17.83.66:46293): tablet 84a92da9c4e347c896253e1a7e77e4ff (table pre_rebuild [id=34bef625ec00465c8fe357a8c6b2ae14]) successfully deleted
I20250629 02:00:02.862332 24624 ts_tablet_manager.cc:1905] T 84a92da9c4e347c896253e1a7e77e4ff P 362dbed412b844bd8694d82670dd3b72: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20250629 02:00:02.881069 24624 ts_tablet_manager.cc:1918] T 84a92da9c4e347c896253e1a7e77e4ff P 362dbed412b844bd8694d82670dd3b72: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 1.205
I20250629 02:00:02.881448 24624 log.cc:1199] T 84a92da9c4e347c896253e1a7e77e4ff P 362dbed412b844bd8694d82670dd3b72: Deleting WAL directory at /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-2/wal/wals/84a92da9c4e347c896253e1a7e77e4ff
I20250629 02:00:02.883361 24105 catalog_manager.cc:4928] TS 362dbed412b844bd8694d82670dd3b72 (127.17.83.67:46531): tablet 84a92da9c4e347c896253e1a7e77e4ff (table pre_rebuild [id=34bef625ec00465c8fe357a8c6b2ae14]) successfully deleted
I20250629 02:00:03.275827 24519 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250629 02:00:03.276880 24378 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250629 02:00:03.292301 24243 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
Master Summary
UUID | Address | Status
----------------------------------+---------------------+---------
55ea2bf53ce24d32939774f2c339f476 | 127.17.83.126:39373 | HEALTHY
Unusual flags for Master:
Flag | Value | Tags | Master
----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
ipki_ca_key_size | 768 | experimental | all 1 server(s) checked
ipki_server_key_size | 768 | experimental | all 1 server(s) checked
never_fsync | true | unsafe,advanced | all 1 server(s) checked
openssl_security_level_override | 0 | unsafe,hidden | all 1 server(s) checked
rpc_reuseport | true | experimental | all 1 server(s) checked
rpc_server_allow_ephemeral_ports | true | unsafe | all 1 server(s) checked
server_dump_info_format | pb | hidden | all 1 server(s) checked
server_dump_info_path | /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/master-0/data/info.pb | hidden | all 1 server(s) checked
tsk_num_rsa_bits | 512 | experimental | all 1 server(s) checked
Flags of checked categories for Master:
Flag | Value | Master
---------------------+--------------------+-------------------------
builtin_ntp_servers | 127.17.83.84:40585 | all 1 server(s) checked
time_source | builtin | all 1 server(s) checked
Tablet Server Summary
UUID | Address | Status | Location | Tablet Leaders | Active Scanners
----------------------------------+--------------------+---------+----------+----------------+-----------------
362dbed412b844bd8694d82670dd3b72 | 127.17.83.67:46531 | HEALTHY | <none> | 0 | 0
4ec069a1079d413a9c88c502917576f9 | 127.17.83.66:46293 | HEALTHY | <none> | 0 | 0
960815965ed44e50b5dccaca6f132dba | 127.17.83.65:40853 | HEALTHY | <none> | 1 | 0
Tablet Server Location Summary
Location | Count
----------+---------
<none> | 3
Unusual flags for Tablet Server:
Flag | Value | Tags | Tablet Server
----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
ipki_server_key_size | 768 | experimental | all 3 server(s) checked
local_ip_for_outbound_sockets | 127.17.83.65 | experimental | 127.17.83.65:40853
local_ip_for_outbound_sockets | 127.17.83.66 | experimental | 127.17.83.66:46293
local_ip_for_outbound_sockets | 127.17.83.67 | experimental | 127.17.83.67:46531
never_fsync | true | unsafe,advanced | all 3 server(s) checked
openssl_security_level_override | 0 | unsafe,hidden | all 3 server(s) checked
rpc_server_allow_ephemeral_ports | true | unsafe | all 3 server(s) checked
server_dump_info_format | pb | hidden | all 3 server(s) checked
server_dump_info_path | /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-0/data/info.pb | hidden | 127.17.83.65:40853
server_dump_info_path | /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-1/data/info.pb | hidden | 127.17.83.66:46293
server_dump_info_path | /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-2/data/info.pb | hidden | 127.17.83.67:46531
Flags of checked categories for Tablet Server:
Flag | Value | Tablet Server
---------------------+--------------------+-------------------------
builtin_ntp_servers | 127.17.83.84:40585 | all 3 server(s) checked
time_source | builtin | all 3 server(s) checked
Version Summary
Version | Servers
-----------------+-------------------------
1.18.0-SNAPSHOT | all 4 server(s) checked
Tablet Summary
The cluster doesn't have any matching system tables
Summary by table
Name | RF | Status | Total Tablets | Healthy | Recovering | Under-replicated | Unavailable
-------------+----+---------+---------------+---------+------------+------------------+-------------
pre_rebuild | 1 | HEALTHY | 1 | 1 | 0 | 0 | 0
Tablet Replica Count Summary
Statistic | Replica Count
----------------+---------------
Minimum | 0
First Quartile | 0
Median | 0
Third Quartile | 1
Maximum | 1
Total Count Summary
| Total Count
----------------+-------------
Masters | 1
Tablet Servers | 3
Tables | 1
Tablets | 1
Replicas | 1
==================
Warnings:
==================
Some masters have unsafe, experimental, or hidden flags set
Some tablet servers have unsafe, experimental, or hidden flags set
OK
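The block from "Master Summary" down to the "OK" line above is a cluster health report in the format printed by Kudu's ksck check; an operator could produce a similar report against this mini-cluster with something like "kudu cluster ksck 127.17.83.126:39373" (the master address shown above), though the exact arguments the test harness passes are not visible in this log.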
I20250629 02:00:03.599359 17741 log_verifier.cc:126] Checking tablet 84a92da9c4e347c896253e1a7e77e4ff
I20250629 02:00:03.899417 17741 log_verifier.cc:177] Verified matching terms for 208 ops in tablet 84a92da9c4e347c896253e1a7e77e4ff
I20250629 02:00:03.901774 24121 catalog_manager.cc:2482] Servicing SoftDeleteTable request from {username='slave'} at 127.0.0.1:44174:
table { table_name: "pre_rebuild" } modify_external_catalogs: true
I20250629 02:00:03.902452 24121 catalog_manager.cc:2730] Servicing DeleteTable request from {username='slave'} at 127.0.0.1:44174:
table { table_name: "pre_rebuild" } modify_external_catalogs: true
I20250629 02:00:03.915141 24121 catalog_manager.cc:5869] T 00000000000000000000000000000000 P 55ea2bf53ce24d32939774f2c339f476: Sending DeleteTablet for 1 replicas of tablet 84a92da9c4e347c896253e1a7e77e4ff
I20250629 02:00:03.917196 17741 test_util.cc:276] Using random seed: 1158122867
I20250629 02:00:03.917026 24243 tablet_service.cc:1515] Processing DeleteTablet for tablet 84a92da9c4e347c896253e1a7e77e4ff with delete_type TABLET_DATA_DELETED (Table deleted at 2025-06-29 02:00:03 UTC) from {username='slave'} at 127.0.0.1:54428
I20250629 02:00:03.918637 24655 tablet_replica.cc:331] stopping tablet replica
I20250629 02:00:03.919436 24655 raft_consensus.cc:2241] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba [term 2 LEADER]: Raft consensus shutting down.
I20250629 02:00:03.920007 24655 raft_consensus.cc:2270] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba [term 2 FOLLOWER]: Raft consensus is shut down!
I20250629 02:00:03.950444 24121 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:44224:
name: "post_rebuild"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
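The CreateTable request above (table "post_rebuild": an INT32 key, an INT32 value, a nullable STRING value, range-partitioned on "key", 3 replicas) corresponds to a schema that could be declared through the public Kudu C++ client API roughly as follows. This is a hedged sketch for orientation, not the code the test itself runs; the helper name and error handling are placeholders.

    #include <memory>
    #include <string>
    #include <vector>

    #include "kudu/client/client.h"

    using kudu::Status;
    using kudu::client::KuduClient;
    using kudu::client::KuduColumnSchema;
    using kudu::client::KuduSchema;
    using kudu::client::KuduSchemaBuilder;
    using kudu::client::KuduTableCreator;
    using kudu::client::sp::shared_ptr;

    // Placeholder helper: builds the same column layout as the logged request
    // and asks for RF=3 with range partitioning on "key".
    Status CreatePostRebuildTable(const shared_ptr<KuduClient>& client) {
      KuduSchemaBuilder b;
      b.AddColumn("key")->Type(KuduColumnSchema::INT32)->NotNull()->PrimaryKey();
      b.AddColumn("int_val")->Type(KuduColumnSchema::INT32)->NotNull();
      b.AddColumn("string_val")->Type(KuduColumnSchema::STRING)->Nullable();
      KuduSchema schema;
      Status s = b.Build(&schema);
      if (!s.ok()) {
        return s;
      }

      std::unique_ptr<KuduTableCreator> creator(client->NewTableCreator());
      return creator->table_name("post_rebuild")
          .schema(&schema)
          .set_range_partition_columns({"key"})
          .num_replicas(3)
          .Create();
    }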
I20250629 02:00:03.954077 24655 ts_tablet_manager.cc:1905] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba: Deleting tablet data with delete state TABLET_DATA_DELETED
W20250629 02:00:03.953997 24121 catalog_manager.cc:6944] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table post_rebuild in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250629 02:00:03.965623 24655 ts_tablet_manager.cc:1918] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba: tablet deleted with delete type TABLET_DATA_DELETED: last-logged OpId 2.208
I20250629 02:00:03.966158 24655 log.cc:1199] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba: Deleting WAL directory at /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-0/wal/wals/84a92da9c4e347c896253e1a7e77e4ff
I20250629 02:00:03.967412 24655 ts_tablet_manager.cc:1939] T 84a92da9c4e347c896253e1a7e77e4ff P 960815965ed44e50b5dccaca6f132dba: Deleting consensus metadata
I20250629 02:00:03.970304 24108 catalog_manager.cc:4928] TS 960815965ed44e50b5dccaca6f132dba (127.17.83.65:40853): tablet 84a92da9c4e347c896253e1a7e77e4ff (table pre_rebuild [id=34bef625ec00465c8fe357a8c6b2ae14]) successfully deleted
I20250629 02:00:03.980866 24519 tablet_service.cc:1468] Processing CreateTablet for tablet c188b75d445a4d7585b1097bedab06f9 (DEFAULT_TABLE table=post_rebuild [id=08abe6c948794ff7b0b695541cddedc4]), partition=RANGE (key) PARTITION UNBOUNDED
I20250629 02:00:03.981323 24243 tablet_service.cc:1468] Processing CreateTablet for tablet c188b75d445a4d7585b1097bedab06f9 (DEFAULT_TABLE table=post_rebuild [id=08abe6c948794ff7b0b695541cddedc4]), partition=RANGE (key) PARTITION UNBOUNDED
I20250629 02:00:03.982161 24519 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet c188b75d445a4d7585b1097bedab06f9. 1 dirs total, 0 dirs full, 0 dirs failed
I20250629 02:00:03.982371 24243 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet c188b75d445a4d7585b1097bedab06f9. 1 dirs total, 0 dirs full, 0 dirs failed
I20250629 02:00:03.985021 24378 tablet_service.cc:1468] Processing CreateTablet for tablet c188b75d445a4d7585b1097bedab06f9 (DEFAULT_TABLE table=post_rebuild [id=08abe6c948794ff7b0b695541cddedc4]), partition=RANGE (key) PARTITION UNBOUNDED
I20250629 02:00:03.986246 24378 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet c188b75d445a4d7585b1097bedab06f9. 1 dirs total, 0 dirs full, 0 dirs failed
I20250629 02:00:04.001397 24663 tablet_bootstrap.cc:492] T c188b75d445a4d7585b1097bedab06f9 P 362dbed412b844bd8694d82670dd3b72: Bootstrap starting.
I20250629 02:00:04.008618 24663 tablet_bootstrap.cc:654] T c188b75d445a4d7585b1097bedab06f9 P 362dbed412b844bd8694d82670dd3b72: Neither blocks nor log segments found. Creating new log.
I20250629 02:00:04.009927 24662 tablet_bootstrap.cc:492] T c188b75d445a4d7585b1097bedab06f9 P 960815965ed44e50b5dccaca6f132dba: Bootstrap starting.
I20250629 02:00:04.015820 24662 tablet_bootstrap.cc:654] T c188b75d445a4d7585b1097bedab06f9 P 960815965ed44e50b5dccaca6f132dba: Neither blocks nor log segments found. Creating new log.
I20250629 02:00:04.019706 24664 tablet_bootstrap.cc:492] T c188b75d445a4d7585b1097bedab06f9 P 4ec069a1079d413a9c88c502917576f9: Bootstrap starting.
I20250629 02:00:04.022416 24662 tablet_bootstrap.cc:492] T c188b75d445a4d7585b1097bedab06f9 P 960815965ed44e50b5dccaca6f132dba: No bootstrap required, opened a new log
I20250629 02:00:04.022914 24662 ts_tablet_manager.cc:1397] T c188b75d445a4d7585b1097bedab06f9 P 960815965ed44e50b5dccaca6f132dba: Time spent bootstrapping tablet: real 0.013s user 0.005s sys 0.006s
I20250629 02:00:04.025465 24662 raft_consensus.cc:357] T c188b75d445a4d7585b1097bedab06f9 P 960815965ed44e50b5dccaca6f132dba [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "362dbed412b844bd8694d82670dd3b72" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 46531 } } peers { permanent_uuid: "4ec069a1079d413a9c88c502917576f9" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46293 } } peers { permanent_uuid: "960815965ed44e50b5dccaca6f132dba" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40853 } }
I20250629 02:00:04.026161 24662 raft_consensus.cc:383] T c188b75d445a4d7585b1097bedab06f9 P 960815965ed44e50b5dccaca6f132dba [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250629 02:00:04.026531 24662 raft_consensus.cc:738] T c188b75d445a4d7585b1097bedab06f9 P 960815965ed44e50b5dccaca6f132dba [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 960815965ed44e50b5dccaca6f132dba, State: Initialized, Role: FOLLOWER
I20250629 02:00:04.027292 24662 consensus_queue.cc:260] T c188b75d445a4d7585b1097bedab06f9 P 960815965ed44e50b5dccaca6f132dba [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "362dbed412b844bd8694d82670dd3b72" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 46531 } } peers { permanent_uuid: "4ec069a1079d413a9c88c502917576f9" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46293 } } peers { permanent_uuid: "960815965ed44e50b5dccaca6f132dba" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40853 } }
I20250629 02:00:04.026484 24664 tablet_bootstrap.cc:654] T c188b75d445a4d7585b1097bedab06f9 P 4ec069a1079d413a9c88c502917576f9: Neither blocks nor log segments found. Creating new log.
I20250629 02:00:04.030489 24663 tablet_bootstrap.cc:492] T c188b75d445a4d7585b1097bedab06f9 P 362dbed412b844bd8694d82670dd3b72: No bootstrap required, opened a new log
I20250629 02:00:04.030939 24663 ts_tablet_manager.cc:1397] T c188b75d445a4d7585b1097bedab06f9 P 362dbed412b844bd8694d82670dd3b72: Time spent bootstrapping tablet: real 0.030s user 0.012s sys 0.004s
I20250629 02:00:04.033545 24663 raft_consensus.cc:357] T c188b75d445a4d7585b1097bedab06f9 P 362dbed412b844bd8694d82670dd3b72 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "362dbed412b844bd8694d82670dd3b72" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 46531 } } peers { permanent_uuid: "4ec069a1079d413a9c88c502917576f9" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46293 } } peers { permanent_uuid: "960815965ed44e50b5dccaca6f132dba" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40853 } }
I20250629 02:00:04.034392 24663 raft_consensus.cc:383] T c188b75d445a4d7585b1097bedab06f9 P 362dbed412b844bd8694d82670dd3b72 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250629 02:00:04.034709 24663 raft_consensus.cc:738] T c188b75d445a4d7585b1097bedab06f9 P 362dbed412b844bd8694d82670dd3b72 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 362dbed412b844bd8694d82670dd3b72, State: Initialized, Role: FOLLOWER
I20250629 02:00:04.035523 24663 consensus_queue.cc:260] T c188b75d445a4d7585b1097bedab06f9 P 362dbed412b844bd8694d82670dd3b72 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "362dbed412b844bd8694d82670dd3b72" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 46531 } } peers { permanent_uuid: "4ec069a1079d413a9c88c502917576f9" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46293 } } peers { permanent_uuid: "960815965ed44e50b5dccaca6f132dba" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40853 } }
I20250629 02:00:04.040782 24662 ts_tablet_manager.cc:1428] T c188b75d445a4d7585b1097bedab06f9 P 960815965ed44e50b5dccaca6f132dba: Time spent starting tablet: real 0.018s user 0.010s sys 0.006s
I20250629 02:00:04.042045 24663 ts_tablet_manager.cc:1428] T c188b75d445a4d7585b1097bedab06f9 P 362dbed412b844bd8694d82670dd3b72: Time spent starting tablet: real 0.011s user 0.004s sys 0.004s
I20250629 02:00:04.044611 24664 tablet_bootstrap.cc:492] T c188b75d445a4d7585b1097bedab06f9 P 4ec069a1079d413a9c88c502917576f9: No bootstrap required, opened a new log
I20250629 02:00:04.045085 24664 ts_tablet_manager.cc:1397] T c188b75d445a4d7585b1097bedab06f9 P 4ec069a1079d413a9c88c502917576f9: Time spent bootstrapping tablet: real 0.026s user 0.016s sys 0.000s
I20250629 02:00:04.047876 24664 raft_consensus.cc:357] T c188b75d445a4d7585b1097bedab06f9 P 4ec069a1079d413a9c88c502917576f9 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "362dbed412b844bd8694d82670dd3b72" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 46531 } } peers { permanent_uuid: "4ec069a1079d413a9c88c502917576f9" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46293 } } peers { permanent_uuid: "960815965ed44e50b5dccaca6f132dba" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40853 } }
I20250629 02:00:04.048696 24664 raft_consensus.cc:383] T c188b75d445a4d7585b1097bedab06f9 P 4ec069a1079d413a9c88c502917576f9 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250629 02:00:04.048992 24664 raft_consensus.cc:738] T c188b75d445a4d7585b1097bedab06f9 P 4ec069a1079d413a9c88c502917576f9 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 4ec069a1079d413a9c88c502917576f9, State: Initialized, Role: FOLLOWER
I20250629 02:00:04.049871 24664 consensus_queue.cc:260] T c188b75d445a4d7585b1097bedab06f9 P 4ec069a1079d413a9c88c502917576f9 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "362dbed412b844bd8694d82670dd3b72" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 46531 } } peers { permanent_uuid: "4ec069a1079d413a9c88c502917576f9" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46293 } } peers { permanent_uuid: "960815965ed44e50b5dccaca6f132dba" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40853 } }
I20250629 02:00:04.056413 24664 ts_tablet_manager.cc:1428] T c188b75d445a4d7585b1097bedab06f9 P 4ec069a1079d413a9c88c502917576f9: Time spent starting tablet: real 0.011s user 0.006s sys 0.003s
W20250629 02:00:04.163558 24309 tablet.cc:2378] T c188b75d445a4d7585b1097bedab06f9 P 960815965ed44e50b5dccaca6f132dba: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250629 02:00:04.223246 24587 tablet.cc:2378] T c188b75d445a4d7585b1097bedab06f9 P 362dbed412b844bd8694d82670dd3b72: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250629 02:00:04.300112 24444 tablet.cc:2378] T c188b75d445a4d7585b1097bedab06f9 P 4ec069a1079d413a9c88c502917576f9: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250629 02:00:04.366842 24669 raft_consensus.cc:491] T c188b75d445a4d7585b1097bedab06f9 P 362dbed412b844bd8694d82670dd3b72 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250629 02:00:04.367370 24669 raft_consensus.cc:513] T c188b75d445a4d7585b1097bedab06f9 P 362dbed412b844bd8694d82670dd3b72 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "362dbed412b844bd8694d82670dd3b72" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 46531 } } peers { permanent_uuid: "4ec069a1079d413a9c88c502917576f9" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46293 } } peers { permanent_uuid: "960815965ed44e50b5dccaca6f132dba" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40853 } }
I20250629 02:00:04.369642 24669 leader_election.cc:290] T c188b75d445a4d7585b1097bedab06f9 P 362dbed412b844bd8694d82670dd3b72 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 4ec069a1079d413a9c88c502917576f9 (127.17.83.66:46293), 960815965ed44e50b5dccaca6f132dba (127.17.83.65:40853)
I20250629 02:00:04.391645 24398 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "c188b75d445a4d7585b1097bedab06f9" candidate_uuid: "362dbed412b844bd8694d82670dd3b72" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "4ec069a1079d413a9c88c502917576f9" is_pre_election: true
I20250629 02:00:04.391877 24263 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "c188b75d445a4d7585b1097bedab06f9" candidate_uuid: "362dbed412b844bd8694d82670dd3b72" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "960815965ed44e50b5dccaca6f132dba" is_pre_election: true
I20250629 02:00:04.392123 24398 raft_consensus.cc:2466] T c188b75d445a4d7585b1097bedab06f9 P 4ec069a1079d413a9c88c502917576f9 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 362dbed412b844bd8694d82670dd3b72 in term 0.
I20250629 02:00:04.392499 24263 raft_consensus.cc:2466] T c188b75d445a4d7585b1097bedab06f9 P 960815965ed44e50b5dccaca6f132dba [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 362dbed412b844bd8694d82670dd3b72 in term 0.
I20250629 02:00:04.393198 24472 leader_election.cc:304] T c188b75d445a4d7585b1097bedab06f9 P 362dbed412b844bd8694d82670dd3b72 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 362dbed412b844bd8694d82670dd3b72, 4ec069a1079d413a9c88c502917576f9; no voters:
I20250629 02:00:04.393855 24669 raft_consensus.cc:2802] T c188b75d445a4d7585b1097bedab06f9 P 362dbed412b844bd8694d82670dd3b72 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250629 02:00:04.394102 24669 raft_consensus.cc:491] T c188b75d445a4d7585b1097bedab06f9 P 362dbed412b844bd8694d82670dd3b72 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250629 02:00:04.394340 24669 raft_consensus.cc:3058] T c188b75d445a4d7585b1097bedab06f9 P 362dbed412b844bd8694d82670dd3b72 [term 0 FOLLOWER]: Advancing to term 1
I20250629 02:00:04.399474 24669 raft_consensus.cc:513] T c188b75d445a4d7585b1097bedab06f9 P 362dbed412b844bd8694d82670dd3b72 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "362dbed412b844bd8694d82670dd3b72" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 46531 } } peers { permanent_uuid: "4ec069a1079d413a9c88c502917576f9" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46293 } } peers { permanent_uuid: "960815965ed44e50b5dccaca6f132dba" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40853 } }
I20250629 02:00:04.400797 24669 leader_election.cc:290] T c188b75d445a4d7585b1097bedab06f9 P 362dbed412b844bd8694d82670dd3b72 [CANDIDATE]: Term 1 election: Requested vote from peers 4ec069a1079d413a9c88c502917576f9 (127.17.83.66:46293), 960815965ed44e50b5dccaca6f132dba (127.17.83.65:40853)
I20250629 02:00:04.401466 24398 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "c188b75d445a4d7585b1097bedab06f9" candidate_uuid: "362dbed412b844bd8694d82670dd3b72" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "4ec069a1079d413a9c88c502917576f9"
I20250629 02:00:04.401639 24263 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "c188b75d445a4d7585b1097bedab06f9" candidate_uuid: "362dbed412b844bd8694d82670dd3b72" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "960815965ed44e50b5dccaca6f132dba"
I20250629 02:00:04.401851 24398 raft_consensus.cc:3058] T c188b75d445a4d7585b1097bedab06f9 P 4ec069a1079d413a9c88c502917576f9 [term 0 FOLLOWER]: Advancing to term 1
I20250629 02:00:04.402060 24263 raft_consensus.cc:3058] T c188b75d445a4d7585b1097bedab06f9 P 960815965ed44e50b5dccaca6f132dba [term 0 FOLLOWER]: Advancing to term 1
I20250629 02:00:04.405862 24398 raft_consensus.cc:2466] T c188b75d445a4d7585b1097bedab06f9 P 4ec069a1079d413a9c88c502917576f9 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 362dbed412b844bd8694d82670dd3b72 in term 1.
I20250629 02:00:04.406314 24263 raft_consensus.cc:2466] T c188b75d445a4d7585b1097bedab06f9 P 960815965ed44e50b5dccaca6f132dba [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 362dbed412b844bd8694d82670dd3b72 in term 1.
I20250629 02:00:04.406626 24472 leader_election.cc:304] T c188b75d445a4d7585b1097bedab06f9 P 362dbed412b844bd8694d82670dd3b72 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 362dbed412b844bd8694d82670dd3b72, 4ec069a1079d413a9c88c502917576f9; no voters:
I20250629 02:00:04.407186 24669 raft_consensus.cc:2802] T c188b75d445a4d7585b1097bedab06f9 P 362dbed412b844bd8694d82670dd3b72 [term 1 FOLLOWER]: Leader election won for term 1
I20250629 02:00:04.408663 24669 raft_consensus.cc:695] T c188b75d445a4d7585b1097bedab06f9 P 362dbed412b844bd8694d82670dd3b72 [term 1 LEADER]: Becoming Leader. State: Replica: 362dbed412b844bd8694d82670dd3b72, State: Running, Role: LEADER
I20250629 02:00:04.409430 24669 consensus_queue.cc:237] T c188b75d445a4d7585b1097bedab06f9 P 362dbed412b844bd8694d82670dd3b72 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "362dbed412b844bd8694d82670dd3b72" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 46531 } } peers { permanent_uuid: "4ec069a1079d413a9c88c502917576f9" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46293 } } peers { permanent_uuid: "960815965ed44e50b5dccaca6f132dba" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40853 } }
I20250629 02:00:04.418969 24121 catalog_manager.cc:5582] T c188b75d445a4d7585b1097bedab06f9 P 362dbed412b844bd8694d82670dd3b72 reported cstate change: term changed from 0 to 1, leader changed from <none> to 362dbed412b844bd8694d82670dd3b72 (127.17.83.67). New cstate: current_term: 1 leader_uuid: "362dbed412b844bd8694d82670dd3b72" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "362dbed412b844bd8694d82670dd3b72" member_type: VOTER last_known_addr { host: "127.17.83.67" port: 46531 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "4ec069a1079d413a9c88c502917576f9" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46293 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "960815965ed44e50b5dccaca6f132dba" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40853 } health_report { overall_health: UNKNOWN } } }
I20250629 02:00:04.609532 24398 raft_consensus.cc:1273] T c188b75d445a4d7585b1097bedab06f9 P 4ec069a1079d413a9c88c502917576f9 [term 1 FOLLOWER]: Refusing update from remote peer 362dbed412b844bd8694d82670dd3b72: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250629 02:00:04.609701 24263 raft_consensus.cc:1273] T c188b75d445a4d7585b1097bedab06f9 P 960815965ed44e50b5dccaca6f132dba [term 1 FOLLOWER]: Refusing update from remote peer 362dbed412b844bd8694d82670dd3b72: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250629 02:00:04.610740 24675 consensus_queue.cc:1035] T c188b75d445a4d7585b1097bedab06f9 P 362dbed412b844bd8694d82670dd3b72 [LEADER]: Connected to new peer: Peer: permanent_uuid: "4ec069a1079d413a9c88c502917576f9" member_type: VOTER last_known_addr { host: "127.17.83.66" port: 46293 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250629 02:00:04.611486 24669 consensus_queue.cc:1035] T c188b75d445a4d7585b1097bedab06f9 P 362dbed412b844bd8694d82670dd3b72 [LEADER]: Connected to new peer: Peer: permanent_uuid: "960815965ed44e50b5dccaca6f132dba" member_type: VOTER last_known_addr { host: "127.17.83.65" port: 40853 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250629 02:00:04.632418 24687 mvcc.cc:204] Tried to move back new op lower bound from 7172761209267625984 to 7172761208467628032. Current Snapshot: MvccSnapshot[applied={T|T < 7172761209267625984}]
I20250629 02:00:04.657850 24684 mvcc.cc:204] Tried to move back new op lower bound from 7172761209267625984 to 7172761208467628032. Current Snapshot: MvccSnapshot[applied={T|T < 7172761209267625984}]
I20250629 02:00:04.696233 24689 mvcc.cc:204] Tried to move back new op lower bound from 7172761209267625984 to 7172761208467628032. Current Snapshot: MvccSnapshot[applied={T|T < 7172761209267625984}]
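The three mvcc.cc messages above report attempts to lower the new-op lower bound below a value the snapshot has already advanced past. Under the assumption that the bound is kept monotonic (the attempt is logged and the larger value retained), a stripped-down illustration of that kind of watermark looks like this (simplified, not Kudu's MVCC implementation):

    #include <cstdint>
    #include <iostream>

    // Simplified watermark: advances only forward, and logs (but ignores)
    // attempts to move it backwards, mirroring the messages above.
    class MonotonicLowerBound {
     public:
      uint64_t Advance(uint64_t proposed) {
        if (proposed < bound_) {
          std::cout << "Tried to move back lower bound from " << bound_
                    << " to " << proposed << "; keeping " << bound_ << "\n";
          return bound_;
        }
        bound_ = proposed;
        return bound_;
      }

     private:
      uint64_t bound_ = 0;
    };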
I20250629 02:00:09.259148 24378 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250629 02:00:09.259359 24519 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250629 02:00:09.273789 24243 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
Master Summary
UUID | Address | Status
----------------------------------+---------------------+---------
55ea2bf53ce24d32939774f2c339f476 | 127.17.83.126:39373 | HEALTHY
Unusual flags for Master:
Flag | Value | Tags | Master
----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
ipki_ca_key_size | 768 | experimental | all 1 server(s) checked
ipki_server_key_size | 768 | experimental | all 1 server(s) checked
never_fsync | true | unsafe,advanced | all 1 server(s) checked
openssl_security_level_override | 0 | unsafe,hidden | all 1 server(s) checked
rpc_reuseport | true | experimental | all 1 server(s) checked
rpc_server_allow_ephemeral_ports | true | unsafe | all 1 server(s) checked
server_dump_info_format | pb | hidden | all 1 server(s) checked
server_dump_info_path | /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/master-0/data/info.pb | hidden | all 1 server(s) checked
tsk_num_rsa_bits | 512 | experimental | all 1 server(s) checked
Flags of checked categories for Master:
Flag | Value | Master
---------------------+--------------------+-------------------------
builtin_ntp_servers | 127.17.83.84:40585 | all 1 server(s) checked
time_source | builtin | all 1 server(s) checked
Tablet Server Summary
UUID | Address | Status | Location | Tablet Leaders | Active Scanners
----------------------------------+--------------------+---------+----------+----------------+-----------------
362dbed412b844bd8694d82670dd3b72 | 127.17.83.67:46531 | HEALTHY | <none> | 1 | 0
4ec069a1079d413a9c88c502917576f9 | 127.17.83.66:46293 | HEALTHY | <none> | 0 | 0
960815965ed44e50b5dccaca6f132dba | 127.17.83.65:40853 | HEALTHY | <none> | 0 | 0
Tablet Server Location Summary
Location | Count
----------+---------
<none> | 3
Unusual flags for Tablet Server:
Flag | Value | Tags | Tablet Server
----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
ipki_server_key_size | 768 | experimental | all 3 server(s) checked
local_ip_for_outbound_sockets | 127.17.83.65 | experimental | 127.17.83.65:40853
local_ip_for_outbound_sockets | 127.17.83.66 | experimental | 127.17.83.66:46293
local_ip_for_outbound_sockets | 127.17.83.67 | experimental | 127.17.83.67:46531
never_fsync | true | unsafe,advanced | all 3 server(s) checked
openssl_security_level_override | 0 | unsafe,hidden | all 3 server(s) checked
rpc_server_allow_ephemeral_ports | true | unsafe | all 3 server(s) checked
server_dump_info_format | pb | hidden | all 3 server(s) checked
server_dump_info_path | /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-0/data/info.pb | hidden | 127.17.83.65:40853
server_dump_info_path | /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-1/data/info.pb | hidden | 127.17.83.66:46293
server_dump_info_path | /tmp/dist-test-taskWB9DNc/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1751162234221453-17741-0/minicluster-data/ts-2/data/info.pb | hidden | 127.17.83.67:46531
Flags of checked categories for Tablet Server:
Flag | Value | Tablet Server
---------------------+--------------------+-------------------------
builtin_ntp_servers | 127.17.83.84:40585 | all 3 server(s) checked
time_source | builtin | all 3 server(s) checked
Version Summary
Version | Servers
-----------------+-------------------------
1.18.0-SNAPSHOT | all 4 server(s) checked
Tablet Summary
The cluster doesn't have any matching system tables
Summary by table
Name | RF | Status | Total Tablets | Healthy | Recovering | Under-replicated | Unavailable
--------------+----+---------+---------------+---------+------------+------------------+-------------
post_rebuild | 3 | HEALTHY | 1 | 1 | 0 | 0 | 0
Tablet Replica Count Summary
Statistic | Replica Count
----------------+---------------
Minimum | 1
First Quartile | 1
Median | 1
Third Quartile | 1
Maximum | 1
Total Count Summary
| Total Count
----------------+-------------
Masters | 1
Tablet Servers | 3
Tables | 1
Tablets | 1
Replicas | 3
==================
Warnings:
==================
Some masters have unsafe, experimental, or hidden flags set
Some tablet servers have unsafe, experimental, or hidden flags set
OK
I20250629 02:00:09.455989 17741 log_verifier.cc:126] Checking tablet 84a92da9c4e347c896253e1a7e77e4ff
I20250629 02:00:09.456566 17741 log_verifier.cc:177] Verified matching terms for 0 ops in tablet 84a92da9c4e347c896253e1a7e77e4ff
I20250629 02:00:09.456810 17741 log_verifier.cc:126] Checking tablet c188b75d445a4d7585b1097bedab06f9
I20250629 02:00:10.234181 17741 log_verifier.cc:177] Verified matching terms for 205 ops in tablet c188b75d445a4d7585b1097bedab06f9
I20250629 02:00:10.257959 17741 external_mini_cluster.cc:1620] Killing /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu with pid 24158
I20250629 02:00:10.294174 17741 external_mini_cluster.cc:1620] Killing /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu with pid 24312
I20250629 02:00:10.331604 17741 external_mini_cluster.cc:1620] Killing /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu with pid 24447
I20250629 02:00:10.370342 17741 external_mini_cluster.cc:1620] Killing /tmp/dist-test-taskWB9DNc/build/tsan/bin/kudu with pid 24087
2025-06-29T02:00:10Z chronyd exiting
[ OK ] IsSecure/SecureClusterAdminCliParamTest.TestRebuildMaster/0 (33945 ms)
[----------] 1 test from IsSecure/SecureClusterAdminCliParamTest (33945 ms total)
[----------] Global test environment tear-down
[==========] 9 tests from 5 test suites ran. (176141 ms total)
[ PASSED ] 8 tests.
[ FAILED ] 1 test, listed below:
[ FAILED ] AdminCliTest.TestRebuildTables
1 FAILED TEST
I20250629 02:00:10.433598 17741 logging.cc:424] LogThrottler /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/client/meta_cache.cc:302: suppressed but not reported on 5 messages since previous log ~47 seconds ago