Note: This is test shard 6 of 8.
[==========] Running 9 tests from 5 test suites.
[----------] Global test environment set-up.
[----------] 5 tests from AdminCliTest
[ RUN      ] AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes
WARNING: Logging before InitGoogleLogging() is written to STDERR
I20250814 01:52:49.326443 426 test_util.cc:276] Using random seed: -2016184082
W20250814 01:52:50.458420 426 thread.cc:641] rpc reactor (reactor) Time spent creating pthread: real 1.089s user 0.452s sys 0.636s
W20250814 01:52:50.458842 426 thread.cc:608] rpc reactor (reactor) Time spent starting thread: real 1.090s user 0.452s sys 0.636s
I20250814 01:52:50.460844 426 ts_itest-base.cc:115] Starting cluster with:
I20250814 01:52:50.461092 426 ts_itest-base.cc:116] --------------
I20250814 01:52:50.461287 426 ts_itest-base.cc:117] 4 tablet servers
I20250814 01:52:50.461469 426 ts_itest-base.cc:118] 3 replicas per TS
I20250814 01:52:50.461634 426 ts_itest-base.cc:119] --------------
2025-08-14T01:52:50Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-08-14T01:52:50Z Disabled control of system clock
I20250814 01:52:50.502144 426 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
/tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.106.190:42253
--webserver_interface=127.0.106.190
--webserver_port=0
--builtin_ntp_servers=127.0.106.148:43623
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.0.106.190:42253 with env {}
W20250814 01:52:50.808465 440 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:52:50.809139 440 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250814 01:52:50.809600 440 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:52:50.841429 440 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250814 01:52:50.841777 440 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250814 01:52:50.842036 440 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250814 01:52:50.842264 440 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250814 01:52:50.878961 440 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:43623
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.0.106.190:42253
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.106.190:42253
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.0.106.190
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision 6307ab29b78ebc064971e01ca1da31590298dcda
build type FASTDEBUG
built by None at 14 Aug 2025 01:43:21 UTC on 5fd53c4cbb9d
build id 7557
TSAN enabled
I20250814 01:52:50.880254 440 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250814 01:52:50.881876 440 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250814 01:52:50.893404 446 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:52:50.895558 447 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:52:50.898274 449 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:52:52.000078 448 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1102 milliseconds
I20250814 01:52:52.000209 440 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250814 01:52:52.001379 440 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250814 01:52:52.003839 440 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250814 01:52:52.005157 440 hybrid_clock.cc:648] HybridClock initialized: now 1755136372005105 us; error 83 us; skew 500 ppm
I20250814 01:52:52.005951 440 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250814 01:52:52.012318 440 webserver.cc:480] Webserver started at http://127.0.106.190:41721/ using document root <none> and password file <none>
I20250814 01:52:52.013182 440 fs_manager.cc:362] Metadata directory not provided
I20250814 01:52:52.013394 440 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250814 01:52:52.013872 440 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250814 01:52:52.018599 440 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "a0050609a72d43e68397636e306e0877"
format_stamp: "Formatted at 2025-08-14 01:52:52 on dist-test-slave-30wj"
I20250814 01:52:52.019652 440 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "a0050609a72d43e68397636e306e0877"
format_stamp: "Formatted at 2025-08-14 01:52:52 on dist-test-slave-30wj"
I20250814 01:52:52.026432 440 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.007s sys 0.001s
I20250814 01:52:52.032099 456 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250814 01:52:52.033130 440 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.005s sys 0.001s
I20250814 01:52:52.033443 440 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
uuid: "a0050609a72d43e68397636e306e0877"
format_stamp: "Formatted at 2025-08-14 01:52:52 on dist-test-slave-30wj"
I20250814 01:52:52.033784 440 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250814 01:52:52.087145 440 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250814 01:52:52.089016 440 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250814 01:52:52.089424 440 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250814 01:52:52.164148 440 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.190:42253
I20250814 01:52:52.164217 507 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.190:42253 every 8 connection(s)
I20250814 01:52:52.166822 440 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250814 01:52:52.172314 508 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250814 01:52:52.177850 426 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu as pid 440
I20250814 01:52:52.178390 426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250814 01:52:52.195195 508 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P a0050609a72d43e68397636e306e0877: Bootstrap starting.
I20250814 01:52:52.200883 508 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P a0050609a72d43e68397636e306e0877: Neither blocks nor log segments found. Creating new log.
I20250814 01:52:52.203047 508 log.cc:826] T 00000000000000000000000000000000 P a0050609a72d43e68397636e306e0877: Log is configured to *not* fsync() on all Append() calls
I20250814 01:52:52.208243 508 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P a0050609a72d43e68397636e306e0877: No bootstrap required, opened a new log
I20250814 01:52:52.227836 508 raft_consensus.cc:357] T 00000000000000000000000000000000 P a0050609a72d43e68397636e306e0877 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a0050609a72d43e68397636e306e0877" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 42253 } }
I20250814 01:52:52.228503 508 raft_consensus.cc:383] T 00000000000000000000000000000000 P a0050609a72d43e68397636e306e0877 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250814 01:52:52.228729 508 raft_consensus.cc:738] T 00000000000000000000000000000000 P a0050609a72d43e68397636e306e0877 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: a0050609a72d43e68397636e306e0877, State: Initialized, Role: FOLLOWER
I20250814 01:52:52.229362 508 consensus_queue.cc:260] T 00000000000000000000000000000000 P a0050609a72d43e68397636e306e0877 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a0050609a72d43e68397636e306e0877" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 42253 } }
I20250814 01:52:52.229902 508 raft_consensus.cc:397] T 00000000000000000000000000000000 P a0050609a72d43e68397636e306e0877 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250814 01:52:52.230161 508 raft_consensus.cc:491] T 00000000000000000000000000000000 P a0050609a72d43e68397636e306e0877 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250814 01:52:52.230472 508 raft_consensus.cc:3058] T 00000000000000000000000000000000 P a0050609a72d43e68397636e306e0877 [term 0 FOLLOWER]: Advancing to term 1
I20250814 01:52:52.235452 508 raft_consensus.cc:513] T 00000000000000000000000000000000 P a0050609a72d43e68397636e306e0877 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a0050609a72d43e68397636e306e0877" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 42253 } }
I20250814 01:52:52.236203 508 leader_election.cc:304] T 00000000000000000000000000000000 P a0050609a72d43e68397636e306e0877 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: a0050609a72d43e68397636e306e0877; no voters:
I20250814 01:52:52.238323 508 leader_election.cc:290] T 00000000000000000000000000000000 P a0050609a72d43e68397636e306e0877 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250814 01:52:52.238946 513 raft_consensus.cc:2802] T 00000000000000000000000000000000 P a0050609a72d43e68397636e306e0877 [term 1 FOLLOWER]: Leader election won for term 1
I20250814 01:52:52.242236 513 raft_consensus.cc:695] T 00000000000000000000000000000000 P a0050609a72d43e68397636e306e0877 [term 1 LEADER]: Becoming Leader. State: Replica: a0050609a72d43e68397636e306e0877, State: Running, Role: LEADER
I20250814 01:52:52.242916 513 consensus_queue.cc:237] T 00000000000000000000000000000000 P a0050609a72d43e68397636e306e0877 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a0050609a72d43e68397636e306e0877" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 42253 } }
I20250814 01:52:52.244062 508 sys_catalog.cc:564] T 00000000000000000000000000000000 P a0050609a72d43e68397636e306e0877 [sys.catalog]: configured and running, proceeding with master startup.
I20250814 01:52:52.255424 515 sys_catalog.cc:455] T 00000000000000000000000000000000 P a0050609a72d43e68397636e306e0877 [sys.catalog]: SysCatalogTable state changed. Reason: New leader a0050609a72d43e68397636e306e0877. Latest consensus state: current_term: 1 leader_uuid: "a0050609a72d43e68397636e306e0877" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a0050609a72d43e68397636e306e0877" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 42253 } } }
I20250814 01:52:52.257283 514 sys_catalog.cc:455] T 00000000000000000000000000000000 P a0050609a72d43e68397636e306e0877 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "a0050609a72d43e68397636e306e0877" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a0050609a72d43e68397636e306e0877" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 42253 } } }
I20250814 01:52:52.259238 514 sys_catalog.cc:458] T 00000000000000000000000000000000 P a0050609a72d43e68397636e306e0877 [sys.catalog]: This master's current role is: LEADER
I20250814 01:52:52.261849 515 sys_catalog.cc:458] T 00000000000000000000000000000000 P a0050609a72d43e68397636e306e0877 [sys.catalog]: This master's current role is: LEADER
I20250814 01:52:52.279062 522 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250814 01:52:52.293318 522 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250814 01:52:52.310213 522 catalog_manager.cc:1349] Generated new cluster ID: 3e304aac4995474681802d53bb4a1695
I20250814 01:52:52.310525 522 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250814 01:52:52.341140 522 catalog_manager.cc:1372] Generated new certificate authority record
I20250814 01:52:52.342646 522 catalog_manager.cc:1506] Loading token signing keys...
I20250814 01:52:52.359089 522 catalog_manager.cc:5955] T 00000000000000000000000000000000 P a0050609a72d43e68397636e306e0877: Generated new TSK 0
I20250814 01:52:52.360011 522 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250814 01:52:52.374898 426 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
/tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.129:0
--local_ip_for_outbound_sockets=127.0.106.129
--webserver_interface=127.0.106.129
--webserver_port=0
--tserver_master_addrs=127.0.106.190:42253
--builtin_ntp_servers=127.0.106.148:43623
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
W20250814 01:52:52.685195 532 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:52:52.685688 532 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250814 01:52:52.686244 532 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:52:52.716691 532 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250814 01:52:52.717537 532 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.129
I20250814 01:52:52.751597 532 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:43623
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.129:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.0.106.129
--webserver_port=0
--tserver_master_addrs=127.0.106.190:42253
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.129
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 6307ab29b78ebc064971e01ca1da31590298dcda
build type FASTDEBUG
built by None at 14 Aug 2025 01:43:21 UTC on 5fd53c4cbb9d
build id 7557
TSAN enabled
I20250814 01:52:52.752874 532 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250814 01:52:52.754446 532 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250814 01:52:52.768060 538 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:52:52.768721 539 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:52:53.906801 532 thread.cc:641] GCE (cloud detector) Time spent creating pthread: real 1.140s user 0.001s sys 0.005s
W20250814 01:52:53.908185 532 thread.cc:608] GCE (cloud detector) Time spent starting thread: real 1.142s user 0.002s sys 0.005s
W20250814 01:52:53.914752 543 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250814 01:52:53.924787 532 server_base.cc:1047] running on GCE node
I20250814 01:52:53.926234 532 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250814 01:52:53.932541 532 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250814 01:52:53.934063 532 hybrid_clock.cc:648] HybridClock initialized: now 1755136373934024 us; error 46 us; skew 500 ppm
I20250814 01:52:53.935132 532 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250814 01:52:53.945274 532 webserver.cc:480] Webserver started at http://127.0.106.129:38257/ using document root <none> and password file <none>
I20250814 01:52:53.946619 532 fs_manager.cc:362] Metadata directory not provided
I20250814 01:52:53.946890 532 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250814 01:52:53.947530 532 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250814 01:52:53.954008 532 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "df09d9cf326b44f0baadfd078061c402"
format_stamp: "Formatted at 2025-08-14 01:52:53 on dist-test-slave-30wj"
I20250814 01:52:53.955511 532 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "df09d9cf326b44f0baadfd078061c402"
format_stamp: "Formatted at 2025-08-14 01:52:53 on dist-test-slave-30wj"
I20250814 01:52:53.965508 532 fs_manager.cc:696] Time spent creating directory manager: real 0.009s user 0.006s sys 0.005s
I20250814 01:52:53.973789 548 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250814 01:52:53.974854 532 fs_manager.cc:730] Time spent opening block manager: real 0.005s user 0.003s sys 0.000s
I20250814 01:52:53.975178 532 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "df09d9cf326b44f0baadfd078061c402"
format_stamp: "Formatted at 2025-08-14 01:52:53 on dist-test-slave-30wj"
I20250814 01:52:53.975503 532 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250814 01:52:54.053118 532 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250814 01:52:54.055140 532 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250814 01:52:54.055555 532 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250814 01:52:54.058296 532 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250814 01:52:54.062810 532 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250814 01:52:54.063019 532 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250814 01:52:54.063261 532 ts_tablet_manager.cc:610] Registered 0 tablets
I20250814 01:52:54.063413 532 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250814 01:52:54.237473 532 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.129:37707
I20250814 01:52:54.237634 660 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.129:37707 every 8 connection(s)
I20250814 01:52:54.239969 532 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250814 01:52:54.248950 426 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu as pid 532
I20250814 01:52:54.249418 426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250814 01:52:54.256325 426 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
/tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.130:0
--local_ip_for_outbound_sockets=127.0.106.130
--webserver_interface=127.0.106.130
--webserver_port=0
--tserver_master_addrs=127.0.106.190:42253
--builtin_ntp_servers=127.0.106.148:43623
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250814 01:52:54.264832 661 heartbeater.cc:344] Connected to a master server at 127.0.106.190:42253
I20250814 01:52:54.265288 661 heartbeater.cc:461] Registering TS with master...
I20250814 01:52:54.266319 661 heartbeater.cc:507] Master 127.0.106.190:42253 requested a full tablet report, sending...
I20250814 01:52:54.269073 473 ts_manager.cc:194] Registered new tserver with Master: df09d9cf326b44f0baadfd078061c402 (127.0.106.129:37707)
I20250814 01:52:54.270938 473 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.129:39223
W20250814 01:52:54.579041 665 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:52:54.579486 665 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250814 01:52:54.579917 665 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:52:54.610530 665 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250814 01:52:54.611404 665 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.130
I20250814 01:52:54.646104 665 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:43623
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.130:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.0.106.130
--webserver_port=0
--tserver_master_addrs=127.0.106.190:42253
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.130
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 6307ab29b78ebc064971e01ca1da31590298dcda
build type FASTDEBUG
built by None at 14 Aug 2025 01:43:21 UTC on 5fd53c4cbb9d
build id 7557
TSAN enabled
I20250814 01:52:54.647313 665 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250814 01:52:54.648809 665 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250814 01:52:54.660250 671 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:52:54.661638 672 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250814 01:52:55.274158 661 heartbeater.cc:499] Master 127.0.106.190:42253 was elected leader, sending a full tablet report...
W20250814 01:52:55.944396 673 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1279 milliseconds
W20250814 01:52:55.946038 674 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:52:55.948189 665 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.287s user 0.000s sys 0.007s
W20250814 01:52:55.948439 665 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.287s user 0.000s sys 0.007s
I20250814 01:52:55.948652 665 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250814 01:52:55.949635 665 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250814 01:52:55.952376 665 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250814 01:52:55.953790 665 hybrid_clock.cc:648] HybridClock initialized: now 1755136375953758 us; error 46 us; skew 500 ppm
I20250814 01:52:55.954562 665 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250814 01:52:55.962240 665 webserver.cc:480] Webserver started at http://127.0.106.130:39939/ using document root <none> and password file <none>
I20250814 01:52:55.963304 665 fs_manager.cc:362] Metadata directory not provided
I20250814 01:52:55.963527 665 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250814 01:52:55.963963 665 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250814 01:52:55.968395 665 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data/instance:
uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2"
format_stamp: "Formatted at 2025-08-14 01:52:55 on dist-test-slave-30wj"
I20250814 01:52:55.969513 665 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal/instance:
uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2"
format_stamp: "Formatted at 2025-08-14 01:52:55 on dist-test-slave-30wj"
I20250814 01:52:55.977753 665 fs_manager.cc:696] Time spent creating directory manager: real 0.008s user 0.007s sys 0.000s
I20250814 01:52:55.984127 681 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250814 01:52:55.985348 665 fs_manager.cc:730] Time spent opening block manager: real 0.005s user 0.006s sys 0.000s
I20250814 01:52:55.985677 665 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2"
format_stamp: "Formatted at 2025-08-14 01:52:55 on dist-test-slave-30wj"
I20250814 01:52:55.986054 665 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250814 01:52:56.052697 665 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250814 01:52:56.054137 665 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250814 01:52:56.054553 665 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250814 01:52:56.057353 665 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250814 01:52:56.061218 665 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250814 01:52:56.061471 665 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250814 01:52:56.061738 665 ts_tablet_manager.cc:610] Registered 0 tablets
I20250814 01:52:56.061905 665 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.001s sys 0.000s
I20250814 01:52:56.214107 665 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.130:36919
I20250814 01:52:56.214216 793 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.130:36919 every 8 connection(s)
I20250814 01:52:56.216763 665 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250814 01:52:56.220557 426 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu as pid 665
I20250814 01:52:56.221045 426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal/instance
I20250814 01:52:56.227497 426 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
/tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.131:0
--local_ip_for_outbound_sockets=127.0.106.131
--webserver_interface=127.0.106.131
--webserver_port=0
--tserver_master_addrs=127.0.106.190:42253
--builtin_ntp_servers=127.0.106.148:43623
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250814 01:52:56.242974 794 heartbeater.cc:344] Connected to a master server at 127.0.106.190:42253
I20250814 01:52:56.243584 794 heartbeater.cc:461] Registering TS with master...
I20250814 01:52:56.244866 794 heartbeater.cc:507] Master 127.0.106.190:42253 requested a full tablet report, sending...
I20250814 01:52:56.248965 473 ts_manager.cc:194] Registered new tserver with Master: db7fe2b48b6641b6ad0e7bfce8e7bee2 (127.0.106.130:36919)
I20250814 01:52:56.250646 473 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.130:59149
W20250814 01:52:56.530637 798 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:52:56.531144 798 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250814 01:52:56.531679 798 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:52:56.562776 798 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250814 01:52:56.563625 798 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.131
I20250814 01:52:56.598083 798 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:43623
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.131:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.0.106.131
--webserver_port=0
--tserver_master_addrs=127.0.106.190:42253
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.131
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 6307ab29b78ebc064971e01ca1da31590298dcda
build type FASTDEBUG
built by None at 14 Aug 2025 01:43:21 UTC on 5fd53c4cbb9d
build id 7557
TSAN enabled
I20250814 01:52:56.599344 798 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250814 01:52:56.600946 798 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250814 01:52:56.612798 804 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:52:56.614434 805 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:52:56.624118 807 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250814 01:52:57.255084 794 heartbeater.cc:499] Master 127.0.106.190:42253 was elected leader, sending a full tablet report...
W20250814 01:52:57.700904 806 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection time-out
I20250814 01:52:57.701030 798 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250814 01:52:57.704787 798 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250814 01:52:57.707991 798 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250814 01:52:57.709403 798 hybrid_clock.cc:648] HybridClock initialized: now 1755136377709382 us; error 45 us; skew 500 ppm
I20250814 01:52:57.710180 798 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250814 01:52:57.716650 798 webserver.cc:480] Webserver started at http://127.0.106.131:38491/ using document root <none> and password file <none>
I20250814 01:52:57.717602 798 fs_manager.cc:362] Metadata directory not provided
I20250814 01:52:57.717833 798 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250814 01:52:57.718302 798 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250814 01:52:57.722563 798 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data/instance:
uuid: "c40ce4fb30da4caab5adbbf50ed6d921"
format_stamp: "Formatted at 2025-08-14 01:52:57 on dist-test-slave-30wj"
I20250814 01:52:57.723623 798 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal/instance:
uuid: "c40ce4fb30da4caab5adbbf50ed6d921"
format_stamp: "Formatted at 2025-08-14 01:52:57 on dist-test-slave-30wj"
I20250814 01:52:57.731173 798 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.003s sys 0.005s
I20250814 01:52:57.736696 814 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250814 01:52:57.737684 798 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.002s sys 0.000s
I20250814 01:52:57.738021 798 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "c40ce4fb30da4caab5adbbf50ed6d921"
format_stamp: "Formatted at 2025-08-14 01:52:57 on dist-test-slave-30wj"
I20250814 01:52:57.738389 798 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250814 01:52:57.809002 798 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250814 01:52:57.810545 798 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250814 01:52:57.810953 798 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250814 01:52:57.813817 798 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250814 01:52:57.818207 798 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250814 01:52:57.818413 798 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250814 01:52:57.818641 798 ts_tablet_manager.cc:610] Registered 0 tablets
I20250814 01:52:57.818794 798 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250814 01:52:57.964008 798 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.131:43857
I20250814 01:52:57.964114 926 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.131:43857 every 8 connection(s)
I20250814 01:52:57.966569 798 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250814 01:52:57.973423 426 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu as pid 798
I20250814 01:52:57.973915 426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal/instance
I20250814 01:52:57.980566 426 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
/tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-3/wal
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-3/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-3/logs
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-3/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.132:0
--local_ip_for_outbound_sockets=127.0.106.132
--webserver_interface=127.0.106.132
--webserver_port=0
--tserver_master_addrs=127.0.106.190:42253
--builtin_ntp_servers=127.0.106.148:43623
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250814 01:52:57.993662 927 heartbeater.cc:344] Connected to a master server at 127.0.106.190:42253
I20250814 01:52:57.994437 927 heartbeater.cc:461] Registering TS with master...
I20250814 01:52:57.996352 927 heartbeater.cc:507] Master 127.0.106.190:42253 requested a full tablet report, sending...
I20250814 01:52:58.002964 473 ts_manager.cc:194] Registered new tserver with Master: c40ce4fb30da4caab5adbbf50ed6d921 (127.0.106.131:43857)
I20250814 01:52:58.004693 473 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.131:34725
W20250814 01:52:58.299906 931 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:52:58.300410 931 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250814 01:52:58.300854 931 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:52:58.335362 931 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250814 01:52:58.336153 931 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.132
I20250814 01:52:58.370929 931 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:43623
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-3/data
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-3/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.132:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-3/data/info.pb
--webserver_interface=127.0.106.132
--webserver_port=0
--tserver_master_addrs=127.0.106.190:42253
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.132
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-3/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 6307ab29b78ebc064971e01ca1da31590298dcda
build type FASTDEBUG
built by None at 14 Aug 2025 01:43:21 UTC on 5fd53c4cbb9d
build id 7557
TSAN enabled
I20250814 01:52:58.372211 931 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250814 01:52:58.373720 931 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250814 01:52:58.385577 937 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250814 01:52:59.008898 927 heartbeater.cc:499] Master 127.0.106.190:42253 was elected leader, sending a full tablet report...
W20250814 01:52:58.386533 938 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:52:59.555593 940 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:52:59.558192 939 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1166 milliseconds
W20250814 01:52:59.558573 931 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.173s user 0.394s sys 0.778s
W20250814 01:52:59.558831 931 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.173s user 0.394s sys 0.778s
I20250814 01:52:59.559021 931 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250814 01:52:59.560009 931 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250814 01:52:59.562263 931 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250814 01:52:59.563632 931 hybrid_clock.cc:648] HybridClock initialized: now 1755136379563604 us; error 42 us; skew 500 ppm
I20250814 01:52:59.564409 931 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250814 01:52:59.572515 931 webserver.cc:480] Webserver started at http://127.0.106.132:42957/ using document root <none> and password file <none>
I20250814 01:52:59.573527 931 fs_manager.cc:362] Metadata directory not provided
I20250814 01:52:59.573789 931 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250814 01:52:59.574232 931 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250814 01:52:59.578599 931 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-3/data/instance:
uuid: "45f71bd6110a48cd87243e52fa96f1b4"
format_stamp: "Formatted at 2025-08-14 01:52:59 on dist-test-slave-30wj"
I20250814 01:52:59.579699 931 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-3/wal/instance:
uuid: "45f71bd6110a48cd87243e52fa96f1b4"
format_stamp: "Formatted at 2025-08-14 01:52:59 on dist-test-slave-30wj"
I20250814 01:52:59.587989 931 fs_manager.cc:696] Time spent creating directory manager: real 0.008s user 0.006s sys 0.002s
I20250814 01:52:59.594173 948 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250814 01:52:59.595335 931 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.001s sys 0.003s
I20250814 01:52:59.595655 931 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-3/data,/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-3/wal
uuid: "45f71bd6110a48cd87243e52fa96f1b4"
format_stamp: "Formatted at 2025-08-14 01:52:59 on dist-test-slave-30wj"
I20250814 01:52:59.595978 931 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-3/wal
metadata directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-3/wal
1 data directories: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-3/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250814 01:52:59.660194 931 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250814 01:52:59.661893 931 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250814 01:52:59.662302 931 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250814 01:52:59.665105 931 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250814 01:52:59.669359 931 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250814 01:52:59.669623 931 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250814 01:52:59.669902 931 ts_tablet_manager.cc:610] Registered 0 tablets
I20250814 01:52:59.670055 931 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250814 01:52:59.816197 931 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.132:42825
I20250814 01:52:59.816308 1060 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.132:42825 every 8 connection(s)
I20250814 01:52:59.818667 931 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-3/data/info.pb
I20250814 01:52:59.823344 426 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu as pid 931
I20250814 01:52:59.824637 426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/ts-3/wal/instance
I20250814 01:52:59.839951 1061 heartbeater.cc:344] Connected to a master server at 127.0.106.190:42253
I20250814 01:52:59.840378 1061 heartbeater.cc:461] Registering TS with master...
I20250814 01:52:59.841372 1061 heartbeater.cc:507] Master 127.0.106.190:42253 requested a full tablet report, sending...
I20250814 01:52:59.843600 472 ts_manager.cc:194] Registered new tserver with Master: 45f71bd6110a48cd87243e52fa96f1b4 (127.0.106.132:42825)
I20250814 01:52:59.845541 472 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.132:42599
I20250814 01:52:59.845594 426 external_mini_cluster.cc:949] 4 TS(s) registered with all masters
I20250814 01:52:59.886600 472 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:44892:
name: "TestTable"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
owner: "alice"
I20250814 01:52:59.952600 729 tablet_service.cc:1468] Processing CreateTablet for tablet ec20f1804cb241318d260d38f749de22 (DEFAULT_TABLE table=TestTable [id=476b1696194344b0b67946c86f572b9a]), partition=RANGE (key) PARTITION UNBOUNDED
I20250814 01:52:59.954205 729 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet ec20f1804cb241318d260d38f749de22. 1 dirs total, 0 dirs full, 0 dirs failed
I20250814 01:52:59.961932 596 tablet_service.cc:1468] Processing CreateTablet for tablet ec20f1804cb241318d260d38f749de22 (DEFAULT_TABLE table=TestTable [id=476b1696194344b0b67946c86f572b9a]), partition=RANGE (key) PARTITION UNBOUNDED
I20250814 01:52:59.962541 862 tablet_service.cc:1468] Processing CreateTablet for tablet ec20f1804cb241318d260d38f749de22 (DEFAULT_TABLE table=TestTable [id=476b1696194344b0b67946c86f572b9a]), partition=RANGE (key) PARTITION UNBOUNDED
I20250814 01:52:59.964062 596 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet ec20f1804cb241318d260d38f749de22. 1 dirs total, 0 dirs full, 0 dirs failed
I20250814 01:52:59.964221 862 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet ec20f1804cb241318d260d38f749de22. 1 dirs total, 0 dirs full, 0 dirs failed
I20250814 01:52:59.993301 1080 tablet_bootstrap.cc:492] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2: Bootstrap starting.
I20250814 01:52:59.996218 1081 tablet_bootstrap.cc:492] T ec20f1804cb241318d260d38f749de22 P df09d9cf326b44f0baadfd078061c402: Bootstrap starting.
I20250814 01:53:00.001332 1080 tablet_bootstrap.cc:654] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2: Neither blocks nor log segments found. Creating new log.
I20250814 01:53:00.002977 1082 tablet_bootstrap.cc:492] T ec20f1804cb241318d260d38f749de22 P c40ce4fb30da4caab5adbbf50ed6d921: Bootstrap starting.
I20250814 01:53:00.003618 1081 tablet_bootstrap.cc:654] T ec20f1804cb241318d260d38f749de22 P df09d9cf326b44f0baadfd078061c402: Neither blocks nor log segments found. Creating new log.
I20250814 01:53:00.004268 1080 log.cc:826] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2: Log is configured to *not* fsync() on all Append() calls
I20250814 01:53:00.006124 1081 log.cc:826] T ec20f1804cb241318d260d38f749de22 P df09d9cf326b44f0baadfd078061c402: Log is configured to *not* fsync() on all Append() calls
I20250814 01:53:00.010831 1080 tablet_bootstrap.cc:492] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2: No bootstrap required, opened a new log
I20250814 01:53:00.011245 1080 ts_tablet_manager.cc:1397] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2: Time spent bootstrapping tablet: real 0.018s user 0.005s sys 0.011s
I20250814 01:53:00.011317 1082 tablet_bootstrap.cc:654] T ec20f1804cb241318d260d38f749de22 P c40ce4fb30da4caab5adbbf50ed6d921: Neither blocks nor log segments found. Creating new log.
I20250814 01:53:00.013417 1082 log.cc:826] T ec20f1804cb241318d260d38f749de22 P c40ce4fb30da4caab5adbbf50ed6d921: Log is configured to *not* fsync() on all Append() calls
I20250814 01:53:00.013748 1081 tablet_bootstrap.cc:492] T ec20f1804cb241318d260d38f749de22 P df09d9cf326b44f0baadfd078061c402: No bootstrap required, opened a new log
I20250814 01:53:00.014276 1081 ts_tablet_manager.cc:1397] T ec20f1804cb241318d260d38f749de22 P df09d9cf326b44f0baadfd078061c402: Time spent bootstrapping tablet: real 0.019s user 0.010s sys 0.008s
I20250814 01:53:00.024272 1082 tablet_bootstrap.cc:492] T ec20f1804cb241318d260d38f749de22 P c40ce4fb30da4caab5adbbf50ed6d921: No bootstrap required, opened a new log
I20250814 01:53:00.024724 1082 ts_tablet_manager.cc:1397] T ec20f1804cb241318d260d38f749de22 P c40ce4fb30da4caab5adbbf50ed6d921: Time spent bootstrapping tablet: real 0.022s user 0.008s sys 0.011s
I20250814 01:53:00.038261 1080 raft_consensus.cc:357] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36919 } } peers { permanent_uuid: "df09d9cf326b44f0baadfd078061c402" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 37707 } } peers { permanent_uuid: "c40ce4fb30da4caab5adbbf50ed6d921" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43857 } }
I20250814 01:53:00.039260 1080 raft_consensus.cc:383] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250814 01:53:00.039603 1080 raft_consensus.cc:738] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: db7fe2b48b6641b6ad0e7bfce8e7bee2, State: Initialized, Role: FOLLOWER
I20250814 01:53:00.040573 1080 consensus_queue.cc:260] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36919 } } peers { permanent_uuid: "df09d9cf326b44f0baadfd078061c402" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 37707 } } peers { permanent_uuid: "c40ce4fb30da4caab5adbbf50ed6d921" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43857 } }
I20250814 01:53:00.043562 1081 raft_consensus.cc:357] T ec20f1804cb241318d260d38f749de22 P df09d9cf326b44f0baadfd078061c402 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36919 } } peers { permanent_uuid: "df09d9cf326b44f0baadfd078061c402" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 37707 } } peers { permanent_uuid: "c40ce4fb30da4caab5adbbf50ed6d921" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43857 } }
I20250814 01:53:00.044524 1081 raft_consensus.cc:383] T ec20f1804cb241318d260d38f749de22 P df09d9cf326b44f0baadfd078061c402 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250814 01:53:00.044886 1081 raft_consensus.cc:738] T ec20f1804cb241318d260d38f749de22 P df09d9cf326b44f0baadfd078061c402 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: df09d9cf326b44f0baadfd078061c402, State: Initialized, Role: FOLLOWER
I20250814 01:53:00.044423 1082 raft_consensus.cc:357] T ec20f1804cb241318d260d38f749de22 P c40ce4fb30da4caab5adbbf50ed6d921 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36919 } } peers { permanent_uuid: "df09d9cf326b44f0baadfd078061c402" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 37707 } } peers { permanent_uuid: "c40ce4fb30da4caab5adbbf50ed6d921" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43857 } }
I20250814 01:53:00.045274 1082 raft_consensus.cc:383] T ec20f1804cb241318d260d38f749de22 P c40ce4fb30da4caab5adbbf50ed6d921 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250814 01:53:00.045583 1082 raft_consensus.cc:738] T ec20f1804cb241318d260d38f749de22 P c40ce4fb30da4caab5adbbf50ed6d921 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: c40ce4fb30da4caab5adbbf50ed6d921, State: Initialized, Role: FOLLOWER
I20250814 01:53:00.046455 1082 consensus_queue.cc:260] T ec20f1804cb241318d260d38f749de22 P c40ce4fb30da4caab5adbbf50ed6d921 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36919 } } peers { permanent_uuid: "df09d9cf326b44f0baadfd078061c402" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 37707 } } peers { permanent_uuid: "c40ce4fb30da4caab5adbbf50ed6d921" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43857 } }
I20250814 01:53:00.046772 1081 consensus_queue.cc:260] T ec20f1804cb241318d260d38f749de22 P df09d9cf326b44f0baadfd078061c402 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36919 } } peers { permanent_uuid: "df09d9cf326b44f0baadfd078061c402" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 37707 } } peers { permanent_uuid: "c40ce4fb30da4caab5adbbf50ed6d921" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43857 } }
I20250814 01:53:00.047336 1080 ts_tablet_manager.cc:1428] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2: Time spent starting tablet: real 0.036s user 0.033s sys 0.002s
I20250814 01:53:00.051219 1082 ts_tablet_manager.cc:1428] T ec20f1804cb241318d260d38f749de22 P c40ce4fb30da4caab5adbbf50ed6d921: Time spent starting tablet: real 0.026s user 0.027s sys 0.001s
I20250814 01:53:00.054240 1081 ts_tablet_manager.cc:1428] T ec20f1804cb241318d260d38f749de22 P df09d9cf326b44f0baadfd078061c402: Time spent starting tablet: real 0.040s user 0.030s sys 0.008s
W20250814 01:53:00.223351 928 tablet.cc:2378] T ec20f1804cb241318d260d38f749de22 P c40ce4fb30da4caab5adbbf50ed6d921: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250814 01:53:00.225522 795 tablet.cc:2378] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250814 01:53:00.256626 662 tablet.cc:2378] T ec20f1804cb241318d260d38f749de22 P df09d9cf326b44f0baadfd078061c402: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250814 01:53:00.267455 1088 raft_consensus.cc:491] T ec20f1804cb241318d260d38f749de22 P df09d9cf326b44f0baadfd078061c402 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250814 01:53:00.267901 1088 raft_consensus.cc:513] T ec20f1804cb241318d260d38f749de22 P df09d9cf326b44f0baadfd078061c402 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36919 } } peers { permanent_uuid: "df09d9cf326b44f0baadfd078061c402" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 37707 } } peers { permanent_uuid: "c40ce4fb30da4caab5adbbf50ed6d921" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43857 } }
I20250814 01:53:00.270434 1088 leader_election.cc:290] T ec20f1804cb241318d260d38f749de22 P df09d9cf326b44f0baadfd078061c402 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers db7fe2b48b6641b6ad0e7bfce8e7bee2 (127.0.106.130:36919), c40ce4fb30da4caab5adbbf50ed6d921 (127.0.106.131:43857)
I20250814 01:53:00.281865 749 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "ec20f1804cb241318d260d38f749de22" candidate_uuid: "df09d9cf326b44f0baadfd078061c402" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" is_pre_election: true
I20250814 01:53:00.282397 882 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "ec20f1804cb241318d260d38f749de22" candidate_uuid: "df09d9cf326b44f0baadfd078061c402" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "c40ce4fb30da4caab5adbbf50ed6d921" is_pre_election: true
I20250814 01:53:00.282829 749 raft_consensus.cc:2466] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate df09d9cf326b44f0baadfd078061c402 in term 0.
I20250814 01:53:00.283270 882 raft_consensus.cc:2466] T ec20f1804cb241318d260d38f749de22 P c40ce4fb30da4caab5adbbf50ed6d921 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate df09d9cf326b44f0baadfd078061c402 in term 0.
I20250814 01:53:00.284616 550 leader_election.cc:304] T ec20f1804cb241318d260d38f749de22 P df09d9cf326b44f0baadfd078061c402 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: db7fe2b48b6641b6ad0e7bfce8e7bee2, df09d9cf326b44f0baadfd078061c402; no voters:
I20250814 01:53:00.285310 1088 raft_consensus.cc:2802] T ec20f1804cb241318d260d38f749de22 P df09d9cf326b44f0baadfd078061c402 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250814 01:53:00.285624 1088 raft_consensus.cc:491] T ec20f1804cb241318d260d38f749de22 P df09d9cf326b44f0baadfd078061c402 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250814 01:53:00.285866 1088 raft_consensus.cc:3058] T ec20f1804cb241318d260d38f749de22 P df09d9cf326b44f0baadfd078061c402 [term 0 FOLLOWER]: Advancing to term 1
I20250814 01:53:00.289883 1088 raft_consensus.cc:513] T ec20f1804cb241318d260d38f749de22 P df09d9cf326b44f0baadfd078061c402 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36919 } } peers { permanent_uuid: "df09d9cf326b44f0baadfd078061c402" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 37707 } } peers { permanent_uuid: "c40ce4fb30da4caab5adbbf50ed6d921" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43857 } }
I20250814 01:53:00.291726 1088 leader_election.cc:290] T ec20f1804cb241318d260d38f749de22 P df09d9cf326b44f0baadfd078061c402 [CANDIDATE]: Term 1 election: Requested vote from peers db7fe2b48b6641b6ad0e7bfce8e7bee2 (127.0.106.130:36919), c40ce4fb30da4caab5adbbf50ed6d921 (127.0.106.131:43857)
I20250814 01:53:00.291971 749 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "ec20f1804cb241318d260d38f749de22" candidate_uuid: "df09d9cf326b44f0baadfd078061c402" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2"
I20250814 01:53:00.292527 749 raft_consensus.cc:3058] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [term 0 FOLLOWER]: Advancing to term 1
I20250814 01:53:00.292680 882 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "ec20f1804cb241318d260d38f749de22" candidate_uuid: "df09d9cf326b44f0baadfd078061c402" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "c40ce4fb30da4caab5adbbf50ed6d921"
I20250814 01:53:00.293107 882 raft_consensus.cc:3058] T ec20f1804cb241318d260d38f749de22 P c40ce4fb30da4caab5adbbf50ed6d921 [term 0 FOLLOWER]: Advancing to term 1
I20250814 01:53:00.297459 749 raft_consensus.cc:2466] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate df09d9cf326b44f0baadfd078061c402 in term 1.
I20250814 01:53:00.297677 882 raft_consensus.cc:2466] T ec20f1804cb241318d260d38f749de22 P c40ce4fb30da4caab5adbbf50ed6d921 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate df09d9cf326b44f0baadfd078061c402 in term 1.
I20250814 01:53:00.298461 550 leader_election.cc:304] T ec20f1804cb241318d260d38f749de22 P df09d9cf326b44f0baadfd078061c402 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: db7fe2b48b6641b6ad0e7bfce8e7bee2, df09d9cf326b44f0baadfd078061c402; no voters:
I20250814 01:53:00.299105 1088 raft_consensus.cc:2802] T ec20f1804cb241318d260d38f749de22 P df09d9cf326b44f0baadfd078061c402 [term 1 FOLLOWER]: Leader election won for term 1
I20250814 01:53:00.300992 1088 raft_consensus.cc:695] T ec20f1804cb241318d260d38f749de22 P df09d9cf326b44f0baadfd078061c402 [term 1 LEADER]: Becoming Leader. State: Replica: df09d9cf326b44f0baadfd078061c402, State: Running, Role: LEADER
I20250814 01:53:00.301764 1088 consensus_queue.cc:237] T ec20f1804cb241318d260d38f749de22 P df09d9cf326b44f0baadfd078061c402 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36919 } } peers { permanent_uuid: "df09d9cf326b44f0baadfd078061c402" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 37707 } } peers { permanent_uuid: "c40ce4fb30da4caab5adbbf50ed6d921" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43857 } }
I20250814 01:53:00.311113 473 catalog_manager.cc:5582] T ec20f1804cb241318d260d38f749de22 P df09d9cf326b44f0baadfd078061c402 reported cstate change: term changed from 0 to 1, leader changed from <none> to df09d9cf326b44f0baadfd078061c402 (127.0.106.129). New cstate: current_term: 1 leader_uuid: "df09d9cf326b44f0baadfd078061c402" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36919 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "df09d9cf326b44f0baadfd078061c402" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 37707 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "c40ce4fb30da4caab5adbbf50ed6d921" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43857 } health_report { overall_health: UNKNOWN } } }
I20250814 01:53:00.341930 426 external_mini_cluster.cc:949] 4 TS(s) registered with all masters
I20250814 01:53:00.345414 426 ts_itest-base.cc:246] Waiting for 1 tablets on tserver df09d9cf326b44f0baadfd078061c402 to finish bootstrapping
I20250814 01:53:00.359333 426 ts_itest-base.cc:246] Waiting for 1 tablets on tserver db7fe2b48b6641b6ad0e7bfce8e7bee2 to finish bootstrapping
I20250814 01:53:00.370581 426 ts_itest-base.cc:246] Waiting for 1 tablets on tserver c40ce4fb30da4caab5adbbf50ed6d921 to finish bootstrapping
I20250814 01:53:00.381072 426 kudu-admin-test.cc:709] Waiting for Master to see the current replicas...
I20250814 01:53:00.384114 426 kudu-admin-test.cc:716] Tablet locations:
tablet_locations {
tablet_id: "ec20f1804cb241318d260d38f749de22"
DEPRECATED_stale: false
partition {
partition_key_start: ""
partition_key_end: ""
}
interned_replicas {
ts_info_idx: 0
role: FOLLOWER
}
interned_replicas {
ts_info_idx: 1
role: LEADER
}
interned_replicas {
ts_info_idx: 2
role: FOLLOWER
}
}
ts_infos {
permanent_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2"
rpc_addresses {
host: "127.0.106.130"
port: 36919
}
}
ts_infos {
permanent_uuid: "df09d9cf326b44f0baadfd078061c402"
rpc_addresses {
host: "127.0.106.129"
port: 37707
}
}
ts_infos {
permanent_uuid: "c40ce4fb30da4caab5adbbf50ed6d921"
rpc_addresses {
host: "127.0.106.131"
port: 43857
}
}
I20250814 01:53:00.823060 1096 consensus_queue.cc:1035] T ec20f1804cb241318d260d38f749de22 P df09d9cf326b44f0baadfd078061c402 [LEADER]: Connected to new peer: Peer: permanent_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36919 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250814 01:53:00.839668 1099 consensus_queue.cc:1035] T ec20f1804cb241318d260d38f749de22 P df09d9cf326b44f0baadfd078061c402 [LEADER]: Connected to new peer: Peer: permanent_uuid: "c40ce4fb30da4caab5adbbf50ed6d921" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43857 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250814 01:53:00.840384 426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu with pid 532
I20250814 01:53:00.848816 1061 heartbeater.cc:499] Master 127.0.106.190:42253 was elected leader, sending a full tablet report...
W20250814 01:53:00.863866 457 connection.cc:537] server connection from 127.0.106.129:39223 recv error: Network error: recv error from unknown peer: Transport endpoint is not connected (error 107)
I20250814 01:53:00.864521 426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu with pid 440
I20250814 01:53:00.888571 426 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
/tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.106.190:42253
--webserver_interface=127.0.106.190
--webserver_port=41721
--builtin_ntp_servers=127.0.106.148:43623
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.0.106.190:42253 with env {}
W20250814 01:53:01.066431 927 heartbeater.cc:646] Failed to heartbeat to 127.0.106.190:42253 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.0.106.190:42253: connect: Connection refused (error 111)
W20250814 01:53:01.192247 1102 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:53:01.192832 1102 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250814 01:53:01.193305 1102 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:53:01.224825 1102 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250814 01:53:01.225155 1102 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250814 01:53:01.225396 1102 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250814 01:53:01.225623 1102 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250814 01:53:01.262642 1102 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:43623
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.0.106.190:42253
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.106.190:42253
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.0.106.190
--webserver_port=41721
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision 6307ab29b78ebc064971e01ca1da31590298dcda
build type FASTDEBUG
built by None at 14 Aug 2025 01:43:21 UTC on 5fd53c4cbb9d
build id 7557
TSAN enabled
I20250814 01:53:01.263916 1102 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250814 01:53:01.265472 1102 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250814 01:53:01.276435 1109 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:53:01.858484 1061 heartbeater.cc:646] Failed to heartbeat to 127.0.106.190:42253 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.0.106.190:42253: connect: Connection refused (error 111)
W20250814 01:53:01.866575 794 heartbeater.cc:646] Failed to heartbeat to 127.0.106.190:42253 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.0.106.190:42253: connect: Connection refused (error 111)
I20250814 01:53:01.962265 1117 raft_consensus.cc:491] T ec20f1804cb241318d260d38f749de22 P c40ce4fb30da4caab5adbbf50ed6d921 [term 1 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250814 01:53:01.963249 1117 raft_consensus.cc:513] T ec20f1804cb241318d260d38f749de22 P c40ce4fb30da4caab5adbbf50ed6d921 [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36919 } } peers { permanent_uuid: "df09d9cf326b44f0baadfd078061c402" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 37707 } } peers { permanent_uuid: "c40ce4fb30da4caab5adbbf50ed6d921" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43857 } }
I20250814 01:53:01.986696 1117 leader_election.cc:290] T ec20f1804cb241318d260d38f749de22 P c40ce4fb30da4caab5adbbf50ed6d921 [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers db7fe2b48b6641b6ad0e7bfce8e7bee2 (127.0.106.130:36919), df09d9cf326b44f0baadfd078061c402 (127.0.106.129:37707)
W20250814 01:53:02.013242 816 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.0.106.129:37707: connect: Connection refused (error 111)
W20250814 01:53:02.026557 816 leader_election.cc:336] T ec20f1804cb241318d260d38f749de22 P c40ce4fb30da4caab5adbbf50ed6d921 [CANDIDATE]: Term 2 pre-election: RPC error from VoteRequest() call to peer df09d9cf326b44f0baadfd078061c402 (127.0.106.129:37707): Network error: Client connection negotiation failed: client connection to 127.0.106.129:37707: connect: Connection refused (error 111)
I20250814 01:53:02.026633 749 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "ec20f1804cb241318d260d38f749de22" candidate_uuid: "c40ce4fb30da4caab5adbbf50ed6d921" candidate_term: 2 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" is_pre_election: true
I20250814 01:53:02.028373 816 leader_election.cc:304] T ec20f1804cb241318d260d38f749de22 P c40ce4fb30da4caab5adbbf50ed6d921 [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: c40ce4fb30da4caab5adbbf50ed6d921; no voters: db7fe2b48b6641b6ad0e7bfce8e7bee2, df09d9cf326b44f0baadfd078061c402
I20250814 01:53:02.029340 1117 raft_consensus.cc:2747] T ec20f1804cb241318d260d38f749de22 P c40ce4fb30da4caab5adbbf50ed6d921 [term 1 FOLLOWER]: Leader pre-election lost for term 2. Reason: could not achieve majority
I20250814 01:53:02.348973 1122 raft_consensus.cc:491] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [term 1 FOLLOWER]: Starting pre-election (detected failure of leader df09d9cf326b44f0baadfd078061c402)
I20250814 01:53:02.349406 1122 raft_consensus.cc:513] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36919 } } peers { permanent_uuid: "df09d9cf326b44f0baadfd078061c402" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 37707 } } peers { permanent_uuid: "c40ce4fb30da4caab5adbbf50ed6d921" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43857 } }
W20250814 01:53:02.354229 683 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.0.106.129:37707: connect: Connection refused (error 111)
I20250814 01:53:02.364287 1122 leader_election.cc:290] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers df09d9cf326b44f0baadfd078061c402 (127.0.106.129:37707), c40ce4fb30da4caab5adbbf50ed6d921 (127.0.106.131:43857)
W20250814 01:53:02.378417 683 leader_election.cc:336] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [CANDIDATE]: Term 2 pre-election: RPC error from VoteRequest() call to peer df09d9cf326b44f0baadfd078061c402 (127.0.106.129:37707): Network error: Client connection negotiation failed: client connection to 127.0.106.129:37707: connect: Connection refused (error 111)
I20250814 01:53:02.381136 882 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "ec20f1804cb241318d260d38f749de22" candidate_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: false dest_uuid: "c40ce4fb30da4caab5adbbf50ed6d921" is_pre_election: true
I20250814 01:53:02.381814 882 raft_consensus.cc:2466] T ec20f1804cb241318d260d38f749de22 P c40ce4fb30da4caab5adbbf50ed6d921 [term 1 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate db7fe2b48b6641b6ad0e7bfce8e7bee2 in term 1.
I20250814 01:53:02.383486 685 leader_election.cc:304] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: c40ce4fb30da4caab5adbbf50ed6d921, db7fe2b48b6641b6ad0e7bfce8e7bee2; no voters: df09d9cf326b44f0baadfd078061c402
I20250814 01:53:02.384727 1122 raft_consensus.cc:2802] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [term 1 FOLLOWER]: Leader pre-election won for term 2
I20250814 01:53:02.385068 1122 raft_consensus.cc:491] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [term 1 FOLLOWER]: Starting leader election (detected failure of leader df09d9cf326b44f0baadfd078061c402)
I20250814 01:53:02.385387 1122 raft_consensus.cc:3058] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [term 1 FOLLOWER]: Advancing to term 2
I20250814 01:53:02.392324 1122 raft_consensus.cc:513] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36919 } } peers { permanent_uuid: "df09d9cf326b44f0baadfd078061c402" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 37707 } } peers { permanent_uuid: "c40ce4fb30da4caab5adbbf50ed6d921" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43857 } }
I20250814 01:53:02.395748 882 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "ec20f1804cb241318d260d38f749de22" candidate_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: false dest_uuid: "c40ce4fb30da4caab5adbbf50ed6d921"
I20250814 01:53:02.396342 882 raft_consensus.cc:3058] T ec20f1804cb241318d260d38f749de22 P c40ce4fb30da4caab5adbbf50ed6d921 [term 1 FOLLOWER]: Advancing to term 2
I20250814 01:53:02.403023 882 raft_consensus.cc:2466] T ec20f1804cb241318d260d38f749de22 P c40ce4fb30da4caab5adbbf50ed6d921 [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate db7fe2b48b6641b6ad0e7bfce8e7bee2 in term 2.
W20250814 01:53:02.408380 683 leader_election.cc:336] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [CANDIDATE]: Term 2 election: RPC error from VoteRequest() call to peer df09d9cf326b44f0baadfd078061c402 (127.0.106.129:37707): Network error: Client connection negotiation failed: client connection to 127.0.106.129:37707: connect: Connection refused (error 111)
I20250814 01:53:02.410462 685 leader_election.cc:304] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: c40ce4fb30da4caab5adbbf50ed6d921, db7fe2b48b6641b6ad0e7bfce8e7bee2; no voters: df09d9cf326b44f0baadfd078061c402
I20250814 01:53:02.409833 1122 leader_election.cc:290] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [CANDIDATE]: Term 2 election: Requested vote from peers df09d9cf326b44f0baadfd078061c402 (127.0.106.129:37707), c40ce4fb30da4caab5adbbf50ed6d921 (127.0.106.131:43857)
I20250814 01:53:02.415606 1122 raft_consensus.cc:2802] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [term 2 FOLLOWER]: Leader election won for term 2
I20250814 01:53:02.417006 1122 raft_consensus.cc:695] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [term 2 LEADER]: Becoming Leader. State: Replica: db7fe2b48b6641b6ad0e7bfce8e7bee2, State: Running, Role: LEADER
I20250814 01:53:02.418566 1122 consensus_queue.cc:237] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 1, Committed index: 1, Last appended: 1.1, Last appended by leader: 1, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36919 } } peers { permanent_uuid: "df09d9cf326b44f0baadfd078061c402" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 37707 } } peers { permanent_uuid: "c40ce4fb30da4caab5adbbf50ed6d921" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43857 } }
W20250814 01:53:01.276535 1110 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:53:02.509754 1112 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:53:02.512390 1102 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.237s user 0.452s sys 0.754s
W20250814 01:53:02.512745 1102 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.237s user 0.452s sys 0.754s
W20250814 01:53:02.512899 1111 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1231 milliseconds
I20250814 01:53:02.513016 1102 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250814 01:53:02.514235 1102 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250814 01:53:02.516774 1102 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250814 01:53:02.518132 1102 hybrid_clock.cc:648] HybridClock initialized: now 1755136382518085 us; error 69 us; skew 500 ppm
I20250814 01:53:02.518930 1102 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250814 01:53:02.526280 1102 webserver.cc:480] Webserver started at http://127.0.106.190:41721/ using document root <none> and password file <none>
I20250814 01:53:02.527196 1102 fs_manager.cc:362] Metadata directory not provided
I20250814 01:53:02.527424 1102 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250814 01:53:02.535703 1102 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.000s sys 0.004s
I20250814 01:53:02.540979 1132 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250814 01:53:02.542266 1102 fs_manager.cc:730] Time spent opening block manager: real 0.005s user 0.005s sys 0.001s
I20250814 01:53:02.542616 1102 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
uuid: "a0050609a72d43e68397636e306e0877"
format_stamp: "Formatted at 2025-08-14 01:52:52 on dist-test-slave-30wj"
I20250814 01:53:02.544569 1102 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250814 01:53:02.609619 1102 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250814 01:53:02.611172 1102 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250814 01:53:02.611618 1102 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250814 01:53:02.677722 1102 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.190:42253
I20250814 01:53:02.677803 1183 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.190:42253 every 8 connection(s)
I20250814 01:53:02.680474 1102 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250814 01:53:02.688551 426 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu as pid 1102
I20250814 01:53:02.689011 426 kudu-admin-test.cc:735] Forcing unsafe config change on tserver db7fe2b48b6641b6ad0e7bfce8e7bee2
I20250814 01:53:02.690062 1184 sys_catalog.cc:263] Verifying existing consensus state
I20250814 01:53:02.694674 1184 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P a0050609a72d43e68397636e306e0877: Bootstrap starting.
I20250814 01:53:02.730476 1184 log.cc:826] T 00000000000000000000000000000000 P a0050609a72d43e68397636e306e0877: Log is configured to *not* fsync() on all Append() calls
I20250814 01:53:02.759292 1184 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P a0050609a72d43e68397636e306e0877: Bootstrap replayed 1/1 log segments. Stats: ops{read=7 overwritten=0 applied=7 ignored=0} inserts{seen=5 ignored=0} mutations{seen=2 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250814 01:53:02.760465 1184 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P a0050609a72d43e68397636e306e0877: Bootstrap complete.
I20250814 01:53:02.791700 1184 raft_consensus.cc:357] T 00000000000000000000000000000000 P a0050609a72d43e68397636e306e0877 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a0050609a72d43e68397636e306e0877" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 42253 } }
I20250814 01:53:02.794014 1184 raft_consensus.cc:738] T 00000000000000000000000000000000 P a0050609a72d43e68397636e306e0877 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: a0050609a72d43e68397636e306e0877, State: Initialized, Role: FOLLOWER
I20250814 01:53:02.794848 1184 consensus_queue.cc:260] T 00000000000000000000000000000000 P a0050609a72d43e68397636e306e0877 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 7, Last appended: 1.7, Last appended by leader: 7, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a0050609a72d43e68397636e306e0877" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 42253 } }
I20250814 01:53:02.795360 1184 raft_consensus.cc:397] T 00000000000000000000000000000000 P a0050609a72d43e68397636e306e0877 [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250814 01:53:02.795640 1184 raft_consensus.cc:491] T 00000000000000000000000000000000 P a0050609a72d43e68397636e306e0877 [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250814 01:53:02.795943 1184 raft_consensus.cc:3058] T 00000000000000000000000000000000 P a0050609a72d43e68397636e306e0877 [term 1 FOLLOWER]: Advancing to term 2
I20250814 01:53:02.801797 1184 raft_consensus.cc:513] T 00000000000000000000000000000000 P a0050609a72d43e68397636e306e0877 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a0050609a72d43e68397636e306e0877" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 42253 } }
I20250814 01:53:02.802508 1184 leader_election.cc:304] T 00000000000000000000000000000000 P a0050609a72d43e68397636e306e0877 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: a0050609a72d43e68397636e306e0877; no voters:
I20250814 01:53:02.804242 1184 leader_election.cc:290] T 00000000000000000000000000000000 P a0050609a72d43e68397636e306e0877 [CANDIDATE]: Term 2 election: Requested vote from peers
I20250814 01:53:02.804880 1188 raft_consensus.cc:2802] T 00000000000000000000000000000000 P a0050609a72d43e68397636e306e0877 [term 2 FOLLOWER]: Leader election won for term 2
I20250814 01:53:02.807971 1188 raft_consensus.cc:695] T 00000000000000000000000000000000 P a0050609a72d43e68397636e306e0877 [term 2 LEADER]: Becoming Leader. State: Replica: a0050609a72d43e68397636e306e0877, State: Running, Role: LEADER
I20250814 01:53:02.809216 1184 sys_catalog.cc:564] T 00000000000000000000000000000000 P a0050609a72d43e68397636e306e0877 [sys.catalog]: configured and running, proceeding with master startup.
I20250814 01:53:02.808884 1188 consensus_queue.cc:237] T 00000000000000000000000000000000 P a0050609a72d43e68397636e306e0877 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 7, Committed index: 7, Last appended: 1.7, Last appended by leader: 7, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a0050609a72d43e68397636e306e0877" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 42253 } }
I20250814 01:53:02.817008 1190 sys_catalog.cc:455] T 00000000000000000000000000000000 P a0050609a72d43e68397636e306e0877 [sys.catalog]: SysCatalogTable state changed. Reason: New leader a0050609a72d43e68397636e306e0877. Latest consensus state: current_term: 2 leader_uuid: "a0050609a72d43e68397636e306e0877" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a0050609a72d43e68397636e306e0877" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 42253 } } }
I20250814 01:53:02.819170 1190 sys_catalog.cc:458] T 00000000000000000000000000000000 P a0050609a72d43e68397636e306e0877 [sys.catalog]: This master's current role is: LEADER
I20250814 01:53:02.818423 1189 sys_catalog.cc:455] T 00000000000000000000000000000000 P a0050609a72d43e68397636e306e0877 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 2 leader_uuid: "a0050609a72d43e68397636e306e0877" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a0050609a72d43e68397636e306e0877" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 42253 } } }
I20250814 01:53:02.820773 1189 sys_catalog.cc:458] T 00000000000000000000000000000000 P a0050609a72d43e68397636e306e0877 [sys.catalog]: This master's current role is: LEADER
I20250814 01:53:02.831672 1195 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250814 01:53:02.849049 1195 catalog_manager.cc:671] Loaded metadata for table TestTable [id=476b1696194344b0b67946c86f572b9a]
I20250814 01:53:02.856968 1195 tablet_loader.cc:96] loaded metadata for tablet ec20f1804cb241318d260d38f749de22 (table TestTable [id=476b1696194344b0b67946c86f572b9a])
I20250814 01:53:02.858798 1195 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250814 01:53:02.866703 1195 catalog_manager.cc:1261] Loaded cluster ID: 3e304aac4995474681802d53bb4a1695
I20250814 01:53:02.867138 1195 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250814 01:53:02.875721 1195 catalog_manager.cc:1506] Loading token signing keys...
I20250814 01:53:02.880713 1195 catalog_manager.cc:5966] T 00000000000000000000000000000000 P a0050609a72d43e68397636e306e0877: Loaded TSK: 0
I20250814 01:53:02.883087 1195 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250814 01:53:02.895175 1061 heartbeater.cc:344] Connected to a master server at 127.0.106.190:42253
W20250814 01:53:02.955250 683 consensus_peers.cc:489] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 -> Peer df09d9cf326b44f0baadfd078061c402 (127.0.106.129:37707): Couldn't send request to peer df09d9cf326b44f0baadfd078061c402. Status: Network error: Client connection negotiation failed: client connection to 127.0.106.129:37707: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20250814 01:53:02.970216 882 raft_consensus.cc:1273] T ec20f1804cb241318d260d38f749de22 P c40ce4fb30da4caab5adbbf50ed6d921 [term 2 FOLLOWER]: Refusing update from remote peer db7fe2b48b6641b6ad0e7bfce8e7bee2: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 2 index: 2. (index mismatch)
I20250814 01:53:02.971876 1209 consensus_queue.cc:1035] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [LEADER]: Connected to new peer: Peer: permanent_uuid: "c40ce4fb30da4caab5adbbf50ed6d921" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43857 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 0, Time since last communication: 0.000s
I20250814 01:53:03.035687 927 heartbeater.cc:344] Connected to a master server at 127.0.106.190:42253
I20250814 01:53:03.042651 1149 master_service.cc:432] Got heartbeat from unknown tserver (permanent_uuid: "c40ce4fb30da4caab5adbbf50ed6d921" instance_seqno: 1755136377930876) as {username='slave'} at 127.0.106.131:39599; Asking this server to re-register.
I20250814 01:53:03.044405 927 heartbeater.cc:461] Registering TS with master...
I20250814 01:53:03.045060 927 heartbeater.cc:507] Master 127.0.106.190:42253 requested a full tablet report, sending...
I20250814 01:53:03.048840 1148 ts_manager.cc:194] Registered new tserver with Master: c40ce4fb30da4caab5adbbf50ed6d921 (127.0.106.131:43857)
I20250814 01:53:03.053416 794 heartbeater.cc:344] Connected to a master server at 127.0.106.190:42253
I20250814 01:53:03.054329 1148 catalog_manager.cc:5582] T ec20f1804cb241318d260d38f749de22 P c40ce4fb30da4caab5adbbf50ed6d921 reported cstate change: term changed from 1 to 2, leader changed from df09d9cf326b44f0baadfd078061c402 (127.0.106.129) to db7fe2b48b6641b6ad0e7bfce8e7bee2 (127.0.106.130). New cstate: current_term: 2 leader_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36919 } } peers { permanent_uuid: "df09d9cf326b44f0baadfd078061c402" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 37707 } } peers { permanent_uuid: "c40ce4fb30da4caab5adbbf50ed6d921" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43857 } } }
I20250814 01:53:03.057189 1147 master_service.cc:432] Got heartbeat from unknown tserver (permanent_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" instance_seqno: 1755136376179477) as {username='slave'} at 127.0.106.130:33415; Asking this server to re-register.
I20250814 01:53:03.058771 794 heartbeater.cc:461] Registering TS with master...
I20250814 01:53:03.059376 794 heartbeater.cc:507] Master 127.0.106.190:42253 requested a full tablet report, sending...
I20250814 01:53:03.062572 1147 ts_manager.cc:194] Registered new tserver with Master: db7fe2b48b6641b6ad0e7bfce8e7bee2 (127.0.106.130:36919)
W20250814 01:53:03.101976 1186 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:53:03.102563 1186 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:53:03.133764 1186 flags.cc:425] Enabled experimental flag: --enable_multi_tenancy=false
I20250814 01:53:03.901424 1147 master_service.cc:432] Got heartbeat from unknown tserver (permanent_uuid: "45f71bd6110a48cd87243e52fa96f1b4" instance_seqno: 1755136379782122) as {username='slave'} at 127.0.106.132:40781; Asking this server to re-register.
I20250814 01:53:03.903494 1061 heartbeater.cc:461] Registering TS with master...
I20250814 01:53:03.904243 1061 heartbeater.cc:507] Master 127.0.106.190:42253 requested a full tablet report, sending...
I20250814 01:53:03.906852 1147 ts_manager.cc:194] Registered new tserver with Master: 45f71bd6110a48cd87243e52fa96f1b4 (127.0.106.132:42825)
W20250814 01:53:04.383525 1186 thread.cc:641] rpc reactor (reactor) Time spent creating pthread: real 1.211s user 0.521s sys 0.686s
W20250814 01:53:04.383895 1186 thread.cc:608] rpc reactor (reactor) Time spent starting thread: real 1.212s user 0.521s sys 0.686s
I20250814 01:53:04.433849 749 tablet_service.cc:1905] Received UnsafeChangeConfig RPC: dest_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2"
tablet_id: "ec20f1804cb241318d260d38f749de22"
caller_id: "kudu-tools"
new_config {
peers {
permanent_uuid: "c40ce4fb30da4caab5adbbf50ed6d921"
}
peers {
permanent_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2"
}
}
from {username='slave'} at 127.0.0.1:57736
W20250814 01:53:04.435161 749 raft_consensus.cc:2216] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [term 2 LEADER]: PROCEEDING WITH UNSAFE CONFIG CHANGE ON THIS SERVER, COMMITTED CONFIG: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36919 } } peers { permanent_uuid: "df09d9cf326b44f0baadfd078061c402" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 37707 } } peers { permanent_uuid: "c40ce4fb30da4caab5adbbf50ed6d921" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43857 } } NEW CONFIG: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36919 } } peers { permanent_uuid: "c40ce4fb30da4caab5adbbf50ed6d921" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43857 } } unsafe_config_change: true
I20250814 01:53:04.436275 749 raft_consensus.cc:3053] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [term 2 LEADER]: Stepping down as leader of term 2
I20250814 01:53:04.436555 749 raft_consensus.cc:738] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [term 2 LEADER]: Becoming Follower/Learner. State: Replica: db7fe2b48b6641b6ad0e7bfce8e7bee2, State: Running, Role: LEADER
I20250814 01:53:04.438393 749 consensus_queue.cc:260] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 2, Committed index: 2, Last appended: 2.2, Last appended by leader: 2, Current term: 2, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36919 } } peers { permanent_uuid: "df09d9cf326b44f0baadfd078061c402" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 37707 } } peers { permanent_uuid: "c40ce4fb30da4caab5adbbf50ed6d921" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43857 } }
I20250814 01:53:04.439592 749 raft_consensus.cc:3058] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [term 2 FOLLOWER]: Advancing to term 3
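The UnsafeChangeConfig RPC above (caller_id "kudu-tools") is the kind of request the Kudu CLI's unsafe config-change recovery tool issues. A minimal sketch of the equivalent operator invocation, assuming the tool syntax kudu remote_replica unsafe_change_config <tserver_address> <tablet_id> <peer_uuid>... and reusing the addresses and UUIDs printed in this log:

kudu remote_replica unsafe_change_config 127.0.106.130:36919 \
    ec20f1804cb241318d260d38f749de22 \
    c40ce4fb30da4caab5adbbf50ed6d921 db7fe2b48b6641b6ad0e7bfce8e7bee2

This forces a new two-voter Raft config onto the replica at 127.0.106.130:36919; that replica steps down and re-elects a leader under the forced config, as seen in the lines that follow.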
I20250814 01:53:05.519765 1242 raft_consensus.cc:491] T ec20f1804cb241318d260d38f749de22 P c40ce4fb30da4caab5adbbf50ed6d921 [term 2 FOLLOWER]: Starting pre-election (detected failure of leader db7fe2b48b6641b6ad0e7bfce8e7bee2)
I20250814 01:53:05.520157 1242 raft_consensus.cc:513] T ec20f1804cb241318d260d38f749de22 P c40ce4fb30da4caab5adbbf50ed6d921 [term 2 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36919 } } peers { permanent_uuid: "df09d9cf326b44f0baadfd078061c402" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 37707 } } peers { permanent_uuid: "c40ce4fb30da4caab5adbbf50ed6d921" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43857 } }
I20250814 01:53:05.521538 1242 leader_election.cc:290] T ec20f1804cb241318d260d38f749de22 P c40ce4fb30da4caab5adbbf50ed6d921 [CANDIDATE]: Term 3 pre-election: Requested pre-vote from peers db7fe2b48b6641b6ad0e7bfce8e7bee2 (127.0.106.130:36919), df09d9cf326b44f0baadfd078061c402 (127.0.106.129:37707)
I20250814 01:53:05.522737 749 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "ec20f1804cb241318d260d38f749de22" candidate_uuid: "c40ce4fb30da4caab5adbbf50ed6d921" candidate_term: 3 candidate_status { last_received { term: 2 index: 2 } } ignore_live_leader: false dest_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" is_pre_election: true
W20250814 01:53:05.526278 816 leader_election.cc:336] T ec20f1804cb241318d260d38f749de22 P c40ce4fb30da4caab5adbbf50ed6d921 [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer df09d9cf326b44f0baadfd078061c402 (127.0.106.129:37707): Network error: Client connection negotiation failed: client connection to 127.0.106.129:37707: connect: Connection refused (error 111)
I20250814 01:53:05.526594 816 leader_election.cc:304] T ec20f1804cb241318d260d38f749de22 P c40ce4fb30da4caab5adbbf50ed6d921 [CANDIDATE]: Term 3 pre-election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: c40ce4fb30da4caab5adbbf50ed6d921; no voters: db7fe2b48b6641b6ad0e7bfce8e7bee2, df09d9cf326b44f0baadfd078061c402
I20250814 01:53:05.527083 1242 raft_consensus.cc:2747] T ec20f1804cb241318d260d38f749de22 P c40ce4fb30da4caab5adbbf50ed6d921 [term 2 FOLLOWER]: Leader pre-election lost for term 3. Reason: could not achieve majority
I20250814 01:53:05.949033 1245 raft_consensus.cc:491] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [term 3 FOLLOWER]: Starting pre-election (detected failure of leader kudu-tools)
I20250814 01:53:05.949532 1245 raft_consensus.cc:513] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [term 3 FOLLOWER]: Starting pre-election with config: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36919 } } peers { permanent_uuid: "c40ce4fb30da4caab5adbbf50ed6d921" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43857 } } unsafe_config_change: true
I20250814 01:53:05.950665 1245 leader_election.cc:290] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [CANDIDATE]: Term 4 pre-election: Requested pre-vote from peers c40ce4fb30da4caab5adbbf50ed6d921 (127.0.106.131:43857)
I20250814 01:53:05.951633 882 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "ec20f1804cb241318d260d38f749de22" candidate_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" candidate_term: 4 candidate_status { last_received { term: 3 index: 3 } } ignore_live_leader: false dest_uuid: "c40ce4fb30da4caab5adbbf50ed6d921" is_pre_election: true
I20250814 01:53:05.952090 882 raft_consensus.cc:2466] T ec20f1804cb241318d260d38f749de22 P c40ce4fb30da4caab5adbbf50ed6d921 [term 2 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate db7fe2b48b6641b6ad0e7bfce8e7bee2 in term 2.
I20250814 01:53:05.952984 685 leader_election.cc:304] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [CANDIDATE]: Term 4 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 2 voters: 2 yes votes; 0 no votes. yes voters: c40ce4fb30da4caab5adbbf50ed6d921, db7fe2b48b6641b6ad0e7bfce8e7bee2; no voters:
I20250814 01:53:05.953569 1245 raft_consensus.cc:2802] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [term 3 FOLLOWER]: Leader pre-election won for term 4
I20250814 01:53:05.953851 1245 raft_consensus.cc:491] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [term 3 FOLLOWER]: Starting leader election (detected failure of leader kudu-tools)
I20250814 01:53:05.954087 1245 raft_consensus.cc:3058] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [term 3 FOLLOWER]: Advancing to term 4
I20250814 01:53:05.958582 1245 raft_consensus.cc:513] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [term 4 FOLLOWER]: Starting leader election with config: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36919 } } peers { permanent_uuid: "c40ce4fb30da4caab5adbbf50ed6d921" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43857 } } unsafe_config_change: true
I20250814 01:53:05.959578 1245 leader_election.cc:290] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [CANDIDATE]: Term 4 election: Requested vote from peers c40ce4fb30da4caab5adbbf50ed6d921 (127.0.106.131:43857)
I20250814 01:53:05.960589 882 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "ec20f1804cb241318d260d38f749de22" candidate_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" candidate_term: 4 candidate_status { last_received { term: 3 index: 3 } } ignore_live_leader: false dest_uuid: "c40ce4fb30da4caab5adbbf50ed6d921"
I20250814 01:53:05.960990 882 raft_consensus.cc:3058] T ec20f1804cb241318d260d38f749de22 P c40ce4fb30da4caab5adbbf50ed6d921 [term 2 FOLLOWER]: Advancing to term 4
I20250814 01:53:05.964967 882 raft_consensus.cc:2466] T ec20f1804cb241318d260d38f749de22 P c40ce4fb30da4caab5adbbf50ed6d921 [term 4 FOLLOWER]: Leader election vote request: Granting yes vote for candidate db7fe2b48b6641b6ad0e7bfce8e7bee2 in term 4.
I20250814 01:53:05.965878 685 leader_election.cc:304] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [CANDIDATE]: Term 4 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 2 voters: 2 yes votes; 0 no votes. yes voters: c40ce4fb30da4caab5adbbf50ed6d921, db7fe2b48b6641b6ad0e7bfce8e7bee2; no voters:
I20250814 01:53:05.966586 1245 raft_consensus.cc:2802] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [term 4 FOLLOWER]: Leader election won for term 4
I20250814 01:53:05.967443 1245 raft_consensus.cc:695] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [term 4 LEADER]: Becoming Leader. State: Replica: db7fe2b48b6641b6ad0e7bfce8e7bee2, State: Running, Role: LEADER
I20250814 01:53:05.968253 1245 consensus_queue.cc:237] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 2, Committed index: 2, Last appended: 3.3, Last appended by leader: 0, Current term: 4, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36919 } } peers { permanent_uuid: "c40ce4fb30da4caab5adbbf50ed6d921" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43857 } } unsafe_config_change: true
I20250814 01:53:05.974896 1147 catalog_manager.cc:5582] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 reported cstate change: term changed from 2 to 4, now has a pending config: VOTER db7fe2b48b6641b6ad0e7bfce8e7bee2 (127.0.106.130), VOTER c40ce4fb30da4caab5adbbf50ed6d921 (127.0.106.131). New cstate: current_term: 4 leader_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36919 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "df09d9cf326b44f0baadfd078061c402" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 37707 } } peers { permanent_uuid: "c40ce4fb30da4caab5adbbf50ed6d921" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43857 } health_report { overall_health: UNKNOWN } } } pending_config { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36919 } } peers { permanent_uuid: "c40ce4fb30da4caab5adbbf50ed6d921" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43857 } } unsafe_config_change: true }
W20250814 01:53:06.064024 790 debug-util.cc:398] Leaking SignalData structure 0x7b08000c8d40 after lost signal to thread 666
W20250814 01:53:06.064841 790 debug-util.cc:398] Leaking SignalData structure 0x7b08000c90c0 after lost signal to thread 793
I20250814 01:53:06.485738 882 raft_consensus.cc:1273] T ec20f1804cb241318d260d38f749de22 P c40ce4fb30da4caab5adbbf50ed6d921 [term 4 FOLLOWER]: Refusing update from remote peer db7fe2b48b6641b6ad0e7bfce8e7bee2: Log matching property violated. Preceding OpId in replica: term: 2 index: 2. Preceding OpId from leader: term: 4 index: 4. (index mismatch)
I20250814 01:53:06.487136 1252 consensus_queue.cc:1035] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [LEADER]: Connected to new peer: Peer: permanent_uuid: "c40ce4fb30da4caab5adbbf50ed6d921" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43857 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 4, Last known committed idx: 2, Time since last communication: 0.000s
I20250814 01:53:06.498468 1255 raft_consensus.cc:2953] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [term 4 LEADER]: Committing config change with OpId 3.3: config changed from index -1 to 3, VOTER df09d9cf326b44f0baadfd078061c402 (127.0.106.129) evicted. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36919 } } peers { permanent_uuid: "c40ce4fb30da4caab5adbbf50ed6d921" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43857 } } unsafe_config_change: true }
I20250814 01:53:06.499691 882 raft_consensus.cc:2953] T ec20f1804cb241318d260d38f749de22 P c40ce4fb30da4caab5adbbf50ed6d921 [term 4 FOLLOWER]: Committing config change with OpId 3.3: config changed from index -1 to 3, VOTER df09d9cf326b44f0baadfd078061c402 (127.0.106.129) evicted. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36919 } } peers { permanent_uuid: "c40ce4fb30da4caab5adbbf50ed6d921" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43857 } } unsafe_config_change: true }
I20250814 01:53:06.510854 1147 catalog_manager.cc:5582] T ec20f1804cb241318d260d38f749de22 P c40ce4fb30da4caab5adbbf50ed6d921 reported cstate change: config changed from index -1 to 3, VOTER df09d9cf326b44f0baadfd078061c402 (127.0.106.129) evicted, no longer has a pending config: VOTER db7fe2b48b6641b6ad0e7bfce8e7bee2 (127.0.106.130), VOTER c40ce4fb30da4caab5adbbf50ed6d921 (127.0.106.131). New cstate: current_term: 4 leader_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" committed_config { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36919 } } peers { permanent_uuid: "c40ce4fb30da4caab5adbbf50ed6d921" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43857 } } unsafe_config_change: true }
W20250814 01:53:06.530365 1147 catalog_manager.cc:5774] Failed to send DeleteTablet RPC for tablet ec20f1804cb241318d260d38f749de22 on TS df09d9cf326b44f0baadfd078061c402: Not found: failed to reset TS proxy: Could not find TS for UUID df09d9cf326b44f0baadfd078061c402
I20250814 01:53:06.549386 749 consensus_queue.cc:237] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 4, Committed index: 4, Last appended: 4.4, Last appended by leader: 0, Current term: 4, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36919 } } peers { permanent_uuid: "c40ce4fb30da4caab5adbbf50ed6d921" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43857 } } peers { permanent_uuid: "45f71bd6110a48cd87243e52fa96f1b4" member_type: NON_VOTER last_known_addr { host: "127.0.106.132" port: 42825 } attrs { promote: true } } unsafe_config_change: true
I20250814 01:53:06.554663 882 raft_consensus.cc:1273] T ec20f1804cb241318d260d38f749de22 P c40ce4fb30da4caab5adbbf50ed6d921 [term 4 FOLLOWER]: Refusing update from remote peer db7fe2b48b6641b6ad0e7bfce8e7bee2: Log matching property violated. Preceding OpId in replica: term: 4 index: 4. Preceding OpId from leader: term: 4 index: 5. (index mismatch)
I20250814 01:53:06.556707 1255 consensus_queue.cc:1035] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [LEADER]: Connected to new peer: Peer: permanent_uuid: "c40ce4fb30da4caab5adbbf50ed6d921" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43857 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 5, Last known committed idx: 4, Time since last communication: 0.000s
I20250814 01:53:06.564761 1252 raft_consensus.cc:2953] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [term 4 LEADER]: Committing config change with OpId 4.5: config changed from index 3 to 5, NON_VOTER 45f71bd6110a48cd87243e52fa96f1b4 (127.0.106.132) added. New config: { opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36919 } } peers { permanent_uuid: "c40ce4fb30da4caab5adbbf50ed6d921" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43857 } } peers { permanent_uuid: "45f71bd6110a48cd87243e52fa96f1b4" member_type: NON_VOTER last_known_addr { host: "127.0.106.132" port: 42825 } attrs { promote: true } } unsafe_config_change: true }
I20250814 01:53:06.566073 882 raft_consensus.cc:2953] T ec20f1804cb241318d260d38f749de22 P c40ce4fb30da4caab5adbbf50ed6d921 [term 4 FOLLOWER]: Committing config change with OpId 4.5: config changed from index 3 to 5, NON_VOTER 45f71bd6110a48cd87243e52fa96f1b4 (127.0.106.132) added. New config: { opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36919 } } peers { permanent_uuid: "c40ce4fb30da4caab5adbbf50ed6d921" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43857 } } peers { permanent_uuid: "45f71bd6110a48cd87243e52fa96f1b4" member_type: NON_VOTER last_known_addr { host: "127.0.106.132" port: 42825 } attrs { promote: true } } unsafe_config_change: true }
I20250814 01:53:06.577885 1134 catalog_manager.cc:5095] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet ec20f1804cb241318d260d38f749de22 with cas_config_opid_index 3: ChangeConfig:ADD_PEER:NON_VOTER succeeded (attempt 1)
I20250814 01:53:06.578722 1147 catalog_manager.cc:5582] T ec20f1804cb241318d260d38f749de22 P c40ce4fb30da4caab5adbbf50ed6d921 reported cstate change: config changed from index 3 to 5, NON_VOTER 45f71bd6110a48cd87243e52fa96f1b4 (127.0.106.132) added. New cstate: current_term: 4 leader_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" committed_config { opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36919 } } peers { permanent_uuid: "c40ce4fb30da4caab5adbbf50ed6d921" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43857 } } peers { permanent_uuid: "45f71bd6110a48cd87243e52fa96f1b4" member_type: NON_VOTER last_known_addr { host: "127.0.106.132" port: 42825 } attrs { promote: true } } unsafe_config_change: true }
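Once the forced two-voter config is committed, the master's automatic re-replication restores the tablet to three replicas: it adds 45f71bd6110a48cd87243e52fa96f1b4 as a NON_VOTER (the ChangeConfig:ADD_PEER above), tablet-copies the data to it, and later promotes it to VOTER. A comparable manual step, assuming the CLI subcommand kudu tablet change_config add_replica <master_addresses> <tablet_id> <ts_uuid> <replica_type> and this cluster's master address (the test instead relies on the master doing this automatically), would look like:

kudu tablet change_config add_replica 127.0.106.190:42253 \
    ec20f1804cb241318d260d38f749de22 \
    45f71bd6110a48cd87243e52fa96f1b4 NON_VOTER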
W20250814 01:53:06.579701 683 consensus_peers.cc:489] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 -> Peer 45f71bd6110a48cd87243e52fa96f1b4 (127.0.106.132:42825): Couldn't send request to peer 45f71bd6110a48cd87243e52fa96f1b4. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: ec20f1804cb241318d260d38f749de22. This is attempt 1: this message will repeat every 5th retry.
W20250814 01:53:06.582888 1134 catalog_manager.cc:4726] Async tablet task DeleteTablet RPC for tablet ec20f1804cb241318d260d38f749de22 on TS df09d9cf326b44f0baadfd078061c402 failed: Not found: failed to reset TS proxy: Could not find TS for UUID df09d9cf326b44f0baadfd078061c402
I20250814 01:53:06.959358 1266 ts_tablet_manager.cc:927] T ec20f1804cb241318d260d38f749de22 P 45f71bd6110a48cd87243e52fa96f1b4: Initiating tablet copy from peer db7fe2b48b6641b6ad0e7bfce8e7bee2 (127.0.106.130:36919)
I20250814 01:53:06.962023 1266 tablet_copy_client.cc:323] T ec20f1804cb241318d260d38f749de22 P 45f71bd6110a48cd87243e52fa96f1b4: tablet copy: Beginning tablet copy session from remote peer at address 127.0.106.130:36919
I20250814 01:53:06.972965 769 tablet_copy_service.cc:140] P db7fe2b48b6641b6ad0e7bfce8e7bee2: Received BeginTabletCopySession request for tablet ec20f1804cb241318d260d38f749de22 from peer 45f71bd6110a48cd87243e52fa96f1b4 ({username='slave'} at 127.0.106.132:34689)
I20250814 01:53:06.973454 769 tablet_copy_service.cc:161] P db7fe2b48b6641b6ad0e7bfce8e7bee2: Beginning new tablet copy session on tablet ec20f1804cb241318d260d38f749de22 from peer 45f71bd6110a48cd87243e52fa96f1b4 at {username='slave'} at 127.0.106.132:34689: session id = 45f71bd6110a48cd87243e52fa96f1b4-ec20f1804cb241318d260d38f749de22
I20250814 01:53:06.978526 769 tablet_copy_source_session.cc:215] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2: Tablet Copy: opened 0 blocks and 1 log segments
I20250814 01:53:06.983350 1266 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet ec20f1804cb241318d260d38f749de22. 1 dirs total, 0 dirs full, 0 dirs failed
I20250814 01:53:07.004524 1266 tablet_copy_client.cc:806] T ec20f1804cb241318d260d38f749de22 P 45f71bd6110a48cd87243e52fa96f1b4: tablet copy: Starting download of 0 data blocks...
I20250814 01:53:07.005143 1266 tablet_copy_client.cc:670] T ec20f1804cb241318d260d38f749de22 P 45f71bd6110a48cd87243e52fa96f1b4: tablet copy: Starting download of 1 WAL segments...
I20250814 01:53:07.012200 1266 tablet_copy_client.cc:538] T ec20f1804cb241318d260d38f749de22 P 45f71bd6110a48cd87243e52fa96f1b4: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250814 01:53:07.020756 1266 tablet_bootstrap.cc:492] T ec20f1804cb241318d260d38f749de22 P 45f71bd6110a48cd87243e52fa96f1b4: Bootstrap starting.
I20250814 01:53:07.032728 1266 log.cc:826] T ec20f1804cb241318d260d38f749de22 P 45f71bd6110a48cd87243e52fa96f1b4: Log is configured to *not* fsync() on all Append() calls
I20250814 01:53:07.044068 1266 tablet_bootstrap.cc:492] T ec20f1804cb241318d260d38f749de22 P 45f71bd6110a48cd87243e52fa96f1b4: Bootstrap replayed 1/1 log segments. Stats: ops{read=5 overwritten=0 applied=5 ignored=0} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250814 01:53:07.044763 1266 tablet_bootstrap.cc:492] T ec20f1804cb241318d260d38f749de22 P 45f71bd6110a48cd87243e52fa96f1b4: Bootstrap complete.
I20250814 01:53:07.045351 1266 ts_tablet_manager.cc:1397] T ec20f1804cb241318d260d38f749de22 P 45f71bd6110a48cd87243e52fa96f1b4: Time spent bootstrapping tablet: real 0.025s user 0.016s sys 0.008s
I20250814 01:53:07.063401 1266 raft_consensus.cc:357] T ec20f1804cb241318d260d38f749de22 P 45f71bd6110a48cd87243e52fa96f1b4 [term 4 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36919 } } peers { permanent_uuid: "c40ce4fb30da4caab5adbbf50ed6d921" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43857 } } peers { permanent_uuid: "45f71bd6110a48cd87243e52fa96f1b4" member_type: NON_VOTER last_known_addr { host: "127.0.106.132" port: 42825 } attrs { promote: true } } unsafe_config_change: true
I20250814 01:53:07.064442 1266 raft_consensus.cc:738] T ec20f1804cb241318d260d38f749de22 P 45f71bd6110a48cd87243e52fa96f1b4 [term 4 LEARNER]: Becoming Follower/Learner. State: Replica: 45f71bd6110a48cd87243e52fa96f1b4, State: Initialized, Role: LEARNER
I20250814 01:53:07.065032 1266 consensus_queue.cc:260] T ec20f1804cb241318d260d38f749de22 P 45f71bd6110a48cd87243e52fa96f1b4 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 5, Last appended: 4.5, Last appended by leader: 5, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36919 } } peers { permanent_uuid: "c40ce4fb30da4caab5adbbf50ed6d921" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43857 } } peers { permanent_uuid: "45f71bd6110a48cd87243e52fa96f1b4" member_type: NON_VOTER last_known_addr { host: "127.0.106.132" port: 42825 } attrs { promote: true } } unsafe_config_change: true
I20250814 01:53:07.068533 1266 ts_tablet_manager.cc:1428] T ec20f1804cb241318d260d38f749de22 P 45f71bd6110a48cd87243e52fa96f1b4: Time spent starting tablet: real 0.023s user 0.018s sys 0.008s
I20250814 01:53:07.070431 769 tablet_copy_service.cc:342] P db7fe2b48b6641b6ad0e7bfce8e7bee2: Request end of tablet copy session 45f71bd6110a48cd87243e52fa96f1b4-ec20f1804cb241318d260d38f749de22 received from {username='slave'} at 127.0.106.132:34689
I20250814 01:53:07.070883 769 tablet_copy_service.cc:434] P db7fe2b48b6641b6ad0e7bfce8e7bee2: ending tablet copy session 45f71bd6110a48cd87243e52fa96f1b4-ec20f1804cb241318d260d38f749de22 on tablet ec20f1804cb241318d260d38f749de22 with peer 45f71bd6110a48cd87243e52fa96f1b4
I20250814 01:53:07.434612 1016 raft_consensus.cc:1215] T ec20f1804cb241318d260d38f749de22 P 45f71bd6110a48cd87243e52fa96f1b4 [term 4 LEARNER]: Deduplicated request from leader. Original: 4.4->[4.5-4.5] Dedup: 4.5->[]
W20250814 01:53:07.750204 1134 catalog_manager.cc:4726] Async tablet task DeleteTablet RPC for tablet ec20f1804cb241318d260d38f749de22 on TS df09d9cf326b44f0baadfd078061c402 failed: Not found: failed to reset TS proxy: Could not find TS for UUID df09d9cf326b44f0baadfd078061c402
I20250814 01:53:07.891817 1271 raft_consensus.cc:1062] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2: attempting to promote NON_VOTER 45f71bd6110a48cd87243e52fa96f1b4 to VOTER
I20250814 01:53:07.893349 1271 consensus_queue.cc:237] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 5, Committed index: 5, Last appended: 4.5, Last appended by leader: 0, Current term: 4, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36919 } } peers { permanent_uuid: "c40ce4fb30da4caab5adbbf50ed6d921" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43857 } } peers { permanent_uuid: "45f71bd6110a48cd87243e52fa96f1b4" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 42825 } attrs { promote: false } } unsafe_config_change: true
I20250814 01:53:07.897861 882 raft_consensus.cc:1273] T ec20f1804cb241318d260d38f749de22 P c40ce4fb30da4caab5adbbf50ed6d921 [term 4 FOLLOWER]: Refusing update from remote peer db7fe2b48b6641b6ad0e7bfce8e7bee2: Log matching property violated. Preceding OpId in replica: term: 4 index: 5. Preceding OpId from leader: term: 4 index: 6. (index mismatch)
I20250814 01:53:07.897925 1016 raft_consensus.cc:1273] T ec20f1804cb241318d260d38f749de22 P 45f71bd6110a48cd87243e52fa96f1b4 [term 4 LEARNER]: Refusing update from remote peer db7fe2b48b6641b6ad0e7bfce8e7bee2: Log matching property violated. Preceding OpId in replica: term: 4 index: 5. Preceding OpId from leader: term: 4 index: 6. (index mismatch)
I20250814 01:53:07.899134 1252 consensus_queue.cc:1035] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [LEADER]: Connected to new peer: Peer: permanent_uuid: "45f71bd6110a48cd87243e52fa96f1b4" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 42825 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 6, Last known committed idx: 5, Time since last communication: 0.000s
I20250814 01:53:07.899798 1271 consensus_queue.cc:1035] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [LEADER]: Connected to new peer: Peer: permanent_uuid: "c40ce4fb30da4caab5adbbf50ed6d921" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43857 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 6, Last known committed idx: 5, Time since last communication: 0.000s
I20250814 01:53:07.906055 1271 raft_consensus.cc:2953] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 [term 4 LEADER]: Committing config change with OpId 4.6: config changed from index 5 to 6, 45f71bd6110a48cd87243e52fa96f1b4 (127.0.106.132) changed from NON_VOTER to VOTER. New config: { opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36919 } } peers { permanent_uuid: "c40ce4fb30da4caab5adbbf50ed6d921" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43857 } } peers { permanent_uuid: "45f71bd6110a48cd87243e52fa96f1b4" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 42825 } attrs { promote: false } } unsafe_config_change: true }
I20250814 01:53:07.907444 882 raft_consensus.cc:2953] T ec20f1804cb241318d260d38f749de22 P c40ce4fb30da4caab5adbbf50ed6d921 [term 4 FOLLOWER]: Committing config change with OpId 4.6: config changed from index 5 to 6, 45f71bd6110a48cd87243e52fa96f1b4 (127.0.106.132) changed from NON_VOTER to VOTER. New config: { opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36919 } } peers { permanent_uuid: "c40ce4fb30da4caab5adbbf50ed6d921" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43857 } } peers { permanent_uuid: "45f71bd6110a48cd87243e52fa96f1b4" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 42825 } attrs { promote: false } } unsafe_config_change: true }
I20250814 01:53:07.910518 1016 raft_consensus.cc:2953] T ec20f1804cb241318d260d38f749de22 P 45f71bd6110a48cd87243e52fa96f1b4 [term 4 FOLLOWER]: Committing config change with OpId 4.6: config changed from index 5 to 6, 45f71bd6110a48cd87243e52fa96f1b4 (127.0.106.132) changed from NON_VOTER to VOTER. New config: { opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36919 } } peers { permanent_uuid: "c40ce4fb30da4caab5adbbf50ed6d921" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43857 } } peers { permanent_uuid: "45f71bd6110a48cd87243e52fa96f1b4" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 42825 } attrs { promote: false } } unsafe_config_change: true }
I20250814 01:53:07.915689 1147 catalog_manager.cc:5582] T ec20f1804cb241318d260d38f749de22 P db7fe2b48b6641b6ad0e7bfce8e7bee2 reported cstate change: config changed from index 5 to 6, 45f71bd6110a48cd87243e52fa96f1b4 (127.0.106.132) changed from NON_VOTER to VOTER. New cstate: current_term: 4 leader_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" committed_config { opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 36919 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "c40ce4fb30da4caab5adbbf50ed6d921" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 43857 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "45f71bd6110a48cd87243e52fa96f1b4" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 42825 } attrs { promote: false } health_report { overall_health: HEALTHY } } unsafe_config_change: true }
I20250814 01:53:07.977276 426 kudu-admin-test.cc:751] Waiting for Master to see new config...
I20250814 01:53:07.990855 426 kudu-admin-test.cc:756] Tablet locations:
tablet_locations {
tablet_id: "ec20f1804cb241318d260d38f749de22"
DEPRECATED_stale: false
partition {
partition_key_start: ""
partition_key_end: ""
}
interned_replicas {
ts_info_idx: 0
role: LEADER
}
interned_replicas {
ts_info_idx: 1
role: FOLLOWER
}
interned_replicas {
ts_info_idx: 2
role: FOLLOWER
}
}
ts_infos {
permanent_uuid: "db7fe2b48b6641b6ad0e7bfce8e7bee2"
rpc_addresses {
host: "127.0.106.130"
port: 36919
}
}
ts_infos {
permanent_uuid: "c40ce4fb30da4caab5adbbf50ed6d921"
rpc_addresses {
host: "127.0.106.131"
port: 43857
}
}
ts_infos {
permanent_uuid: "45f71bd6110a48cd87243e52fa96f1b4"
rpc_addresses {
host: "127.0.106.132"
port: 42825
}
}
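The tablet location dump above shows the recovered three-replica config (leader db7fe2b48b6641b6ad0e7bfce8e7bee2 plus two followers), with the evicted replica df09d9cf326b44f0baadfd078061c402 no longer present. Outside a test, an operator would typically confirm the same state with the cluster checker, e.g. (using this cluster's master address from the log):

kudu cluster ksck 127.0.106.190:42253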
I20250814 01:53:07.994863 426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu with pid 665
I20250814 01:53:08.016394 426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu with pid 798
I20250814 01:53:08.035363 426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu with pid 931
I20250814 01:53:08.057260 426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu with pid 1102
2025-08-14T01:53:08Z chronyd exiting
[ OK ] AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes (18787 ms)
[ RUN ] AdminCliTest.TestGracefulSpecificLeaderStepDown
I20250814 01:53:08.112179 426 test_util.cc:276] Using random seed: -1997398247
I20250814 01:53:08.117846 426 ts_itest-base.cc:115] Starting cluster with:
I20250814 01:53:08.118012 426 ts_itest-base.cc:116] --------------
I20250814 01:53:08.118135 426 ts_itest-base.cc:117] 3 tablet servers
I20250814 01:53:08.118243 426 ts_itest-base.cc:118] 3 replicas per TS
I20250814 01:53:08.118346 426 ts_itest-base.cc:119] --------------
2025-08-14T01:53:08Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-08-14T01:53:08Z Disabled control of system clock
I20250814 01:53:08.152161 426 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
/tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.106.190:32843
--webserver_interface=127.0.106.190
--webserver_port=0
--builtin_ntp_servers=127.0.106.148:44667
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.0.106.190:32843
--catalog_manager_wait_for_new_tablets_to_elect_leader=false with env {}
W20250814 01:53:08.442775 1290 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:53:08.443338 1290 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250814 01:53:08.443743 1290 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:53:08.474792 1290 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250814 01:53:08.475081 1290 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250814 01:53:08.475292 1290 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250814 01:53:08.475483 1290 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250814 01:53:08.510543 1290 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:44667
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
--catalog_manager_wait_for_new_tablets_to_elect_leader=false
--ipki_ca_key_size=768
--master_addresses=127.0.106.190:32843
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.106.190:32843
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.0.106.190
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision 6307ab29b78ebc064971e01ca1da31590298dcda
build type FASTDEBUG
built by None at 14 Aug 2025 01:43:21 UTC on 5fd53c4cbb9d
build id 7557
TSAN enabled
I20250814 01:53:08.511943 1290 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250814 01:53:08.513684 1290 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250814 01:53:08.524197 1296 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:53:08.524595 1297 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:53:09.634866 1299 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:53:09.638322 1298 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1112 milliseconds
W20250814 01:53:09.641920 1290 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.118s user 0.419s sys 0.692s
W20250814 01:53:09.642253 1290 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.118s user 0.419s sys 0.692s
I20250814 01:53:09.642480 1290 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250814 01:53:09.643551 1290 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250814 01:53:09.646196 1290 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250814 01:53:09.647569 1290 hybrid_clock.cc:648] HybridClock initialized: now 1755136389647540 us; error 45 us; skew 500 ppm
I20250814 01:53:09.648381 1290 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250814 01:53:09.655730 1290 webserver.cc:480] Webserver started at http://127.0.106.190:46535/ using document root <none> and password file <none>
I20250814 01:53:09.656672 1290 fs_manager.cc:362] Metadata directory not provided
I20250814 01:53:09.656893 1290 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250814 01:53:09.657330 1290 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250814 01:53:09.663925 1290 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "f45a93d90e654a129fc91ec3fbfdc6d7"
format_stamp: "Formatted at 2025-08-14 01:53:09 on dist-test-slave-30wj"
I20250814 01:53:09.665241 1290 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "f45a93d90e654a129fc91ec3fbfdc6d7"
format_stamp: "Formatted at 2025-08-14 01:53:09 on dist-test-slave-30wj"
I20250814 01:53:09.672914 1290 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.001s sys 0.005s
I20250814 01:53:09.679127 1307 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250814 01:53:09.680347 1290 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.002s
I20250814 01:53:09.680711 1290 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
uuid: "f45a93d90e654a129fc91ec3fbfdc6d7"
format_stamp: "Formatted at 2025-08-14 01:53:09 on dist-test-slave-30wj"
I20250814 01:53:09.681052 1290 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250814 01:53:09.766958 1290 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250814 01:53:09.768467 1290 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250814 01:53:09.768918 1290 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250814 01:53:09.836942 1290 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.190:32843
I20250814 01:53:09.837023 1358 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.190:32843 every 8 connection(s)
I20250814 01:53:09.839700 1290 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250814 01:53:09.844583 1359 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250814 01:53:09.846800 426 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu as pid 1290
I20250814 01:53:09.847463 426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250814 01:53:09.865749 1359 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P f45a93d90e654a129fc91ec3fbfdc6d7: Bootstrap starting.
I20250814 01:53:09.871153 1359 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P f45a93d90e654a129fc91ec3fbfdc6d7: Neither blocks nor log segments found. Creating new log.
I20250814 01:53:09.873384 1359 log.cc:826] T 00000000000000000000000000000000 P f45a93d90e654a129fc91ec3fbfdc6d7: Log is configured to *not* fsync() on all Append() calls
I20250814 01:53:09.877861 1359 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P f45a93d90e654a129fc91ec3fbfdc6d7: No bootstrap required, opened a new log
I20250814 01:53:09.895339 1359 raft_consensus.cc:357] T 00000000000000000000000000000000 P f45a93d90e654a129fc91ec3fbfdc6d7 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f45a93d90e654a129fc91ec3fbfdc6d7" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 32843 } }
I20250814 01:53:09.895990 1359 raft_consensus.cc:383] T 00000000000000000000000000000000 P f45a93d90e654a129fc91ec3fbfdc6d7 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250814 01:53:09.896229 1359 raft_consensus.cc:738] T 00000000000000000000000000000000 P f45a93d90e654a129fc91ec3fbfdc6d7 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: f45a93d90e654a129fc91ec3fbfdc6d7, State: Initialized, Role: FOLLOWER
I20250814 01:53:09.896888 1359 consensus_queue.cc:260] T 00000000000000000000000000000000 P f45a93d90e654a129fc91ec3fbfdc6d7 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f45a93d90e654a129fc91ec3fbfdc6d7" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 32843 } }
I20250814 01:53:09.897377 1359 raft_consensus.cc:397] T 00000000000000000000000000000000 P f45a93d90e654a129fc91ec3fbfdc6d7 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250814 01:53:09.897663 1359 raft_consensus.cc:491] T 00000000000000000000000000000000 P f45a93d90e654a129fc91ec3fbfdc6d7 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250814 01:53:09.898013 1359 raft_consensus.cc:3058] T 00000000000000000000000000000000 P f45a93d90e654a129fc91ec3fbfdc6d7 [term 0 FOLLOWER]: Advancing to term 1
I20250814 01:53:09.902369 1359 raft_consensus.cc:513] T 00000000000000000000000000000000 P f45a93d90e654a129fc91ec3fbfdc6d7 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f45a93d90e654a129fc91ec3fbfdc6d7" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 32843 } }
I20250814 01:53:09.903070 1359 leader_election.cc:304] T 00000000000000000000000000000000 P f45a93d90e654a129fc91ec3fbfdc6d7 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: f45a93d90e654a129fc91ec3fbfdc6d7; no voters:
I20250814 01:53:09.904668 1359 leader_election.cc:290] T 00000000000000000000000000000000 P f45a93d90e654a129fc91ec3fbfdc6d7 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250814 01:53:09.905392 1364 raft_consensus.cc:2802] T 00000000000000000000000000000000 P f45a93d90e654a129fc91ec3fbfdc6d7 [term 1 FOLLOWER]: Leader election won for term 1
I20250814 01:53:09.907435 1364 raft_consensus.cc:695] T 00000000000000000000000000000000 P f45a93d90e654a129fc91ec3fbfdc6d7 [term 1 LEADER]: Becoming Leader. State: Replica: f45a93d90e654a129fc91ec3fbfdc6d7, State: Running, Role: LEADER
I20250814 01:53:09.908252 1364 consensus_queue.cc:237] T 00000000000000000000000000000000 P f45a93d90e654a129fc91ec3fbfdc6d7 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f45a93d90e654a129fc91ec3fbfdc6d7" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 32843 } }
I20250814 01:53:09.908743 1359 sys_catalog.cc:564] T 00000000000000000000000000000000 P f45a93d90e654a129fc91ec3fbfdc6d7 [sys.catalog]: configured and running, proceeding with master startup.
I20250814 01:53:09.915802 1366 sys_catalog.cc:455] T 00000000000000000000000000000000 P f45a93d90e654a129fc91ec3fbfdc6d7 [sys.catalog]: SysCatalogTable state changed. Reason: New leader f45a93d90e654a129fc91ec3fbfdc6d7. Latest consensus state: current_term: 1 leader_uuid: "f45a93d90e654a129fc91ec3fbfdc6d7" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f45a93d90e654a129fc91ec3fbfdc6d7" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 32843 } } }
I20250814 01:53:09.916631 1366 sys_catalog.cc:458] T 00000000000000000000000000000000 P f45a93d90e654a129fc91ec3fbfdc6d7 [sys.catalog]: This master's current role is: LEADER
I20250814 01:53:09.916895 1365 sys_catalog.cc:455] T 00000000000000000000000000000000 P f45a93d90e654a129fc91ec3fbfdc6d7 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "f45a93d90e654a129fc91ec3fbfdc6d7" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f45a93d90e654a129fc91ec3fbfdc6d7" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 32843 } } }
I20250814 01:53:09.917560 1365 sys_catalog.cc:458] T 00000000000000000000000000000000 P f45a93d90e654a129fc91ec3fbfdc6d7 [sys.catalog]: This master's current role is: LEADER
I20250814 01:53:09.922407 1373 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250814 01:53:09.934659 1373 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250814 01:53:09.950604 1373 catalog_manager.cc:1349] Generated new cluster ID: 3a4465b0bf6d4fdc8895c678e89549b4
I20250814 01:53:09.950932 1373 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250814 01:53:09.976061 1373 catalog_manager.cc:1372] Generated new certificate authority record
I20250814 01:53:09.978260 1373 catalog_manager.cc:1506] Loading token signing keys...
I20250814 01:53:10.000244 1373 catalog_manager.cc:5955] T 00000000000000000000000000000000 P f45a93d90e654a129fc91ec3fbfdc6d7: Generated new TSK 0
I20250814 01:53:10.001370 1373 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250814 01:53:10.023042 426 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
/tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.129:0
--local_ip_for_outbound_sockets=127.0.106.129
--webserver_interface=127.0.106.129
--webserver_port=0
--tserver_master_addrs=127.0.106.190:32843
--builtin_ntp_servers=127.0.106.148:44667
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false with env {}
W20250814 01:53:10.322898 1383 flags.cc:425] Enabled unsafe flag: --enable_leader_failure_detection=false
W20250814 01:53:10.323537 1383 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:53:10.323781 1383 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250814 01:53:10.324256 1383 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:53:10.356199 1383 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250814 01:53:10.357069 1383 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.129
I20250814 01:53:10.392572 1383 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:44667
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.129:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.0.106.129
--webserver_port=0
--tserver_master_addrs=127.0.106.190:32843
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.129
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 6307ab29b78ebc064971e01ca1da31590298dcda
build type FASTDEBUG
built by None at 14 Aug 2025 01:43:21 UTC on 5fd53c4cbb9d
build id 7557
TSAN enabled
I20250814 01:53:10.393963 1383 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250814 01:53:10.395522 1383 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250814 01:53:10.407938 1389 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:53:10.412396 1392 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:53:10.409607 1390 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:53:11.745463 1391 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1333 milliseconds
I20250814 01:53:11.745553 1383 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250814 01:53:11.746706 1383 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250814 01:53:11.749202 1383 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250814 01:53:11.750610 1383 hybrid_clock.cc:648] HybridClock initialized: now 1755136391750563 us; error 76 us; skew 500 ppm
I20250814 01:53:11.751387 1383 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250814 01:53:11.757666 1383 webserver.cc:480] Webserver started at http://127.0.106.129:34715/ using document root <none> and password file <none>
I20250814 01:53:11.758626 1383 fs_manager.cc:362] Metadata directory not provided
I20250814 01:53:11.758813 1383 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250814 01:53:11.759282 1383 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250814 01:53:11.763652 1383 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "de74e77289884f52a08c3259599822c8"
format_stamp: "Formatted at 2025-08-14 01:53:11 on dist-test-slave-30wj"
I20250814 01:53:11.764748 1383 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "de74e77289884f52a08c3259599822c8"
format_stamp: "Formatted at 2025-08-14 01:53:11 on dist-test-slave-30wj"
I20250814 01:53:11.772303 1383 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.008s sys 0.001s
I20250814 01:53:11.778119 1399 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250814 01:53:11.779330 1383 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.000s
I20250814 01:53:11.779688 1383 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "de74e77289884f52a08c3259599822c8"
format_stamp: "Formatted at 2025-08-14 01:53:11 on dist-test-slave-30wj"
I20250814 01:53:11.780130 1383 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250814 01:53:11.832394 1383 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250814 01:53:11.833894 1383 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250814 01:53:11.834350 1383 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250814 01:53:11.837160 1383 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250814 01:53:11.841605 1383 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250814 01:53:11.841868 1383 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.001s sys 0.000s
I20250814 01:53:11.842119 1383 ts_tablet_manager.cc:610] Registered 0 tablets
I20250814 01:53:11.842283 1383 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250814 01:53:11.996567 1383 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.129:34259
I20250814 01:53:11.996722 1511 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.129:34259 every 8 connection(s)
I20250814 01:53:11.999131 1383 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250814 01:53:12.003978 426 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu as pid 1383
I20250814 01:53:12.004426 426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250814 01:53:12.012735 426 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
/tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.130:0
--local_ip_for_outbound_sockets=127.0.106.130
--webserver_interface=127.0.106.130
--webserver_port=0
--tserver_master_addrs=127.0.106.190:32843
--builtin_ntp_servers=127.0.106.148:44667
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false with env {}
I20250814 01:53:12.025205 1512 heartbeater.cc:344] Connected to a master server at 127.0.106.190:32843
I20250814 01:53:12.025635 1512 heartbeater.cc:461] Registering TS with master...
I20250814 01:53:12.026677 1512 heartbeater.cc:507] Master 127.0.106.190:32843 requested a full tablet report, sending...
I20250814 01:53:12.029314 1324 ts_manager.cc:194] Registered new tserver with Master: de74e77289884f52a08c3259599822c8 (127.0.106.129:34259)
I20250814 01:53:12.031551 1324 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.129:58437
W20250814 01:53:12.337561 1516 flags.cc:425] Enabled unsafe flag: --enable_leader_failure_detection=false
W20250814 01:53:12.338203 1516 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:53:12.338418 1516 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250814 01:53:12.338835 1516 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:53:12.371402 1516 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250814 01:53:12.372229 1516 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.130
I20250814 01:53:12.408211 1516 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:44667
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.130:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.0.106.130
--webserver_port=0
--tserver_master_addrs=127.0.106.190:32843
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.130
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 6307ab29b78ebc064971e01ca1da31590298dcda
build type FASTDEBUG
built by None at 14 Aug 2025 01:43:21 UTC on 5fd53c4cbb9d
build id 7557
TSAN enabled
I20250814 01:53:12.409477 1516 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250814 01:53:12.411082 1516 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250814 01:53:12.423389 1522 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250814 01:53:13.034989 1512 heartbeater.cc:499] Master 127.0.106.190:32843 was elected leader, sending a full tablet report...
W20250814 01:53:12.424364 1523 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:53:13.769843 1525 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:53:13.771771 1524 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1342 milliseconds
W20250814 01:53:13.772598 1516 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.349s user 0.563s sys 0.778s
W20250814 01:53:13.772841 1516 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.349s user 0.563s sys 0.778s
I20250814 01:53:13.773051 1516 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250814 01:53:13.774088 1516 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250814 01:53:13.776273 1516 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250814 01:53:13.777621 1516 hybrid_clock.cc:648] HybridClock initialized: now 1755136393777596 us; error 42 us; skew 500 ppm
I20250814 01:53:13.778448 1516 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250814 01:53:13.785574 1516 webserver.cc:480] Webserver started at http://127.0.106.130:33597/ using document root <none> and password file <none>
I20250814 01:53:13.786794 1516 fs_manager.cc:362] Metadata directory not provided
I20250814 01:53:13.787000 1516 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250814 01:53:13.787449 1516 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250814 01:53:13.791805 1516 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data/instance:
uuid: "3aada10a84784e8ea14fa831572dd83c"
format_stamp: "Formatted at 2025-08-14 01:53:13 on dist-test-slave-30wj"
I20250814 01:53:13.792920 1516 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal/instance:
uuid: "3aada10a84784e8ea14fa831572dd83c"
format_stamp: "Formatted at 2025-08-14 01:53:13 on dist-test-slave-30wj"
I20250814 01:53:13.800568 1516 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.010s sys 0.000s
I20250814 01:53:13.806483 1532 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250814 01:53:13.807694 1516 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.004s sys 0.000s
I20250814 01:53:13.808029 1516 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "3aada10a84784e8ea14fa831572dd83c"
format_stamp: "Formatted at 2025-08-14 01:53:13 on dist-test-slave-30wj"
I20250814 01:53:13.808383 1516 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250814 01:53:13.881573 1516 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250814 01:53:13.883045 1516 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250814 01:53:13.883482 1516 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250814 01:53:13.885924 1516 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250814 01:53:13.889981 1516 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250814 01:53:13.890209 1516 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250814 01:53:13.890450 1516 ts_tablet_manager.cc:610] Registered 0 tablets
I20250814 01:53:13.890611 1516 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250814 01:53:14.020011 1516 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.130:33279
I20250814 01:53:14.020109 1644 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.130:33279 every 8 connection(s)
I20250814 01:53:14.022583 1516 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250814 01:53:14.024744 426 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu as pid 1516
I20250814 01:53:14.025213 426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal/instance
I20250814 01:53:14.032279 426 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
/tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.131:0
--local_ip_for_outbound_sockets=127.0.106.131
--webserver_interface=127.0.106.131
--webserver_port=0
--tserver_master_addrs=127.0.106.190:32843
--builtin_ntp_servers=127.0.106.148:44667
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false with env {}
I20250814 01:53:14.053504 1645 heartbeater.cc:344] Connected to a master server at 127.0.106.190:32843
I20250814 01:53:14.054096 1645 heartbeater.cc:461] Registering TS with master...
I20250814 01:53:14.055493 1645 heartbeater.cc:507] Master 127.0.106.190:32843 requested a full tablet report, sending...
I20250814 01:53:14.058427 1324 ts_manager.cc:194] Registered new tserver with Master: 3aada10a84784e8ea14fa831572dd83c (127.0.106.130:33279)
I20250814 01:53:14.060204 1324 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.130:46639
W20250814 01:53:14.335536 1649 flags.cc:425] Enabled unsafe flag: --enable_leader_failure_detection=false
W20250814 01:53:14.336159 1649 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:53:14.336422 1649 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250814 01:53:14.336890 1649 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:53:14.367866 1649 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250814 01:53:14.368753 1649 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.131
I20250814 01:53:14.402961 1649 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:44667
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.131:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.0.106.131
--webserver_port=0
--tserver_master_addrs=127.0.106.190:32843
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.131
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 6307ab29b78ebc064971e01ca1da31590298dcda
build type FASTDEBUG
built by None at 14 Aug 2025 01:43:21 UTC on 5fd53c4cbb9d
build id 7557
TSAN enabled
I20250814 01:53:14.404284 1649 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250814 01:53:14.405896 1649 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250814 01:53:14.417801 1655 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250814 01:53:15.064389 1645 heartbeater.cc:499] Master 127.0.106.190:32843 was elected leader, sending a full tablet report...
W20250814 01:53:14.418313 1656 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:53:14.421196 1658 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:53:15.491825 1657 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection time-out
I20250814 01:53:15.491854 1649 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250814 01:53:15.495589 1649 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250814 01:53:15.497663 1649 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250814 01:53:15.499091 1649 hybrid_clock.cc:648] HybridClock initialized: now 1755136395499060 us; error 43 us; skew 500 ppm
I20250814 01:53:15.499958 1649 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250814 01:53:15.506181 1649 webserver.cc:480] Webserver started at http://127.0.106.131:37093/ using document root <none> and password file <none>
I20250814 01:53:15.507150 1649 fs_manager.cc:362] Metadata directory not provided
I20250814 01:53:15.507352 1649 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250814 01:53:15.507853 1649 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250814 01:53:15.512490 1649 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data/instance:
uuid: "961e8d68c7284606a2b5d7480a2cd3c3"
format_stamp: "Formatted at 2025-08-14 01:53:15 on dist-test-slave-30wj"
I20250814 01:53:15.513692 1649 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal/instance:
uuid: "961e8d68c7284606a2b5d7480a2cd3c3"
format_stamp: "Formatted at 2025-08-14 01:53:15 on dist-test-slave-30wj"
I20250814 01:53:15.520874 1649 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.006s sys 0.000s
I20250814 01:53:15.526707 1665 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250814 01:53:15.527769 1649 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.000s sys 0.004s
I20250814 01:53:15.528101 1649 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "961e8d68c7284606a2b5d7480a2cd3c3"
format_stamp: "Formatted at 2025-08-14 01:53:15 on dist-test-slave-30wj"
I20250814 01:53:15.528455 1649 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250814 01:53:15.584355 1649 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250814 01:53:15.585846 1649 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250814 01:53:15.586280 1649 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250814 01:53:15.588717 1649 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250814 01:53:15.592669 1649 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250814 01:53:15.592881 1649 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250814 01:53:15.593118 1649 ts_tablet_manager.cc:610] Registered 0 tablets
I20250814 01:53:15.593277 1649 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250814 01:53:15.718719 1649 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.131:33377
I20250814 01:53:15.718815 1777 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.131:33377 every 8 connection(s)
I20250814 01:53:15.721185 1649 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250814 01:53:15.729622 426 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu as pid 1649
I20250814 01:53:15.730036 426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal/instance
I20250814 01:53:15.743808 1778 heartbeater.cc:344] Connected to a master server at 127.0.106.190:32843
I20250814 01:53:15.744239 1778 heartbeater.cc:461] Registering TS with master...
I20250814 01:53:15.745254 1778 heartbeater.cc:507] Master 127.0.106.190:32843 requested a full tablet report, sending...
I20250814 01:53:15.747545 1323 ts_manager.cc:194] Registered new tserver with Master: 961e8d68c7284606a2b5d7480a2cd3c3 (127.0.106.131:33377)
I20250814 01:53:15.749207 1323 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.131:56261
I20250814 01:53:15.749943 426 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250814 01:53:15.784659 1323 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:46752:
name: "TestTable"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
owner: "alice"
W20250814 01:53:15.802942 1323 catalog_manager.cc:6944] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table TestTable in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250814 01:53:15.857475 1580 tablet_service.cc:1468] Processing CreateTablet for tablet 5392d641a774477e8bb45d1090bcded4 (DEFAULT_TABLE table=TestTable [id=76a51f0eb72444c1b4cb0a8c8bb633e6]), partition=RANGE (key) PARTITION UNBOUNDED
I20250814 01:53:15.859158 1580 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 5392d641a774477e8bb45d1090bcded4. 1 dirs total, 0 dirs full, 0 dirs failed
I20250814 01:53:15.860605 1447 tablet_service.cc:1468] Processing CreateTablet for tablet 5392d641a774477e8bb45d1090bcded4 (DEFAULT_TABLE table=TestTable [id=76a51f0eb72444c1b4cb0a8c8bb633e6]), partition=RANGE (key) PARTITION UNBOUNDED
I20250814 01:53:15.862428 1447 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 5392d641a774477e8bb45d1090bcded4. 1 dirs total, 0 dirs full, 0 dirs failed
I20250814 01:53:15.874089 1713 tablet_service.cc:1468] Processing CreateTablet for tablet 5392d641a774477e8bb45d1090bcded4 (DEFAULT_TABLE table=TestTable [id=76a51f0eb72444c1b4cb0a8c8bb633e6]), partition=RANGE (key) PARTITION UNBOUNDED
I20250814 01:53:15.876000 1713 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 5392d641a774477e8bb45d1090bcded4. 1 dirs total, 0 dirs full, 0 dirs failed
I20250814 01:53:15.892834 1797 tablet_bootstrap.cc:492] T 5392d641a774477e8bb45d1090bcded4 P 3aada10a84784e8ea14fa831572dd83c: Bootstrap starting.
I20250814 01:53:15.895004 1798 tablet_bootstrap.cc:492] T 5392d641a774477e8bb45d1090bcded4 P de74e77289884f52a08c3259599822c8: Bootstrap starting.
I20250814 01:53:15.902729 1797 tablet_bootstrap.cc:654] T 5392d641a774477e8bb45d1090bcded4 P 3aada10a84784e8ea14fa831572dd83c: Neither blocks nor log segments found. Creating new log.
I20250814 01:53:15.905097 1797 log.cc:826] T 5392d641a774477e8bb45d1090bcded4 P 3aada10a84784e8ea14fa831572dd83c: Log is configured to *not* fsync() on all Append() calls
I20250814 01:53:15.905880 1798 tablet_bootstrap.cc:654] T 5392d641a774477e8bb45d1090bcded4 P de74e77289884f52a08c3259599822c8: Neither blocks nor log segments found. Creating new log.
I20250814 01:53:15.908224 1798 log.cc:826] T 5392d641a774477e8bb45d1090bcded4 P de74e77289884f52a08c3259599822c8: Log is configured to *not* fsync() on all Append() calls
I20250814 01:53:15.910590 1797 tablet_bootstrap.cc:492] T 5392d641a774477e8bb45d1090bcded4 P 3aada10a84784e8ea14fa831572dd83c: No bootstrap required, opened a new log
I20250814 01:53:15.910935 1800 tablet_bootstrap.cc:492] T 5392d641a774477e8bb45d1090bcded4 P 961e8d68c7284606a2b5d7480a2cd3c3: Bootstrap starting.
I20250814 01:53:15.911088 1797 ts_tablet_manager.cc:1397] T 5392d641a774477e8bb45d1090bcded4 P 3aada10a84784e8ea14fa831572dd83c: Time spent bootstrapping tablet: real 0.019s user 0.010s sys 0.004s
I20250814 01:53:15.914485 1798 tablet_bootstrap.cc:492] T 5392d641a774477e8bb45d1090bcded4 P de74e77289884f52a08c3259599822c8: No bootstrap required, opened a new log
I20250814 01:53:15.914884 1798 ts_tablet_manager.cc:1397] T 5392d641a774477e8bb45d1090bcded4 P de74e77289884f52a08c3259599822c8: Time spent bootstrapping tablet: real 0.020s user 0.006s sys 0.009s
I20250814 01:53:15.918084 1800 tablet_bootstrap.cc:654] T 5392d641a774477e8bb45d1090bcded4 P 961e8d68c7284606a2b5d7480a2cd3c3: Neither blocks nor log segments found. Creating new log.
I20250814 01:53:15.920351 1800 log.cc:826] T 5392d641a774477e8bb45d1090bcded4 P 961e8d68c7284606a2b5d7480a2cd3c3: Log is configured to *not* fsync() on all Append() calls
I20250814 01:53:15.925981 1800 tablet_bootstrap.cc:492] T 5392d641a774477e8bb45d1090bcded4 P 961e8d68c7284606a2b5d7480a2cd3c3: No bootstrap required, opened a new log
I20250814 01:53:15.926491 1800 ts_tablet_manager.cc:1397] T 5392d641a774477e8bb45d1090bcded4 P 961e8d68c7284606a2b5d7480a2cd3c3: Time spent bootstrapping tablet: real 0.016s user 0.000s sys 0.015s
I20250814 01:53:15.933319 1798 raft_consensus.cc:357] T 5392d641a774477e8bb45d1090bcded4 P de74e77289884f52a08c3259599822c8 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "3aada10a84784e8ea14fa831572dd83c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 33279 } } peers { permanent_uuid: "961e8d68c7284606a2b5d7480a2cd3c3" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 33377 } } peers { permanent_uuid: "de74e77289884f52a08c3259599822c8" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 34259 } }
I20250814 01:53:15.934288 1798 raft_consensus.cc:738] T 5392d641a774477e8bb45d1090bcded4 P de74e77289884f52a08c3259599822c8 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: de74e77289884f52a08c3259599822c8, State: Initialized, Role: FOLLOWER
I20250814 01:53:15.935016 1798 consensus_queue.cc:260] T 5392d641a774477e8bb45d1090bcded4 P de74e77289884f52a08c3259599822c8 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "3aada10a84784e8ea14fa831572dd83c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 33279 } } peers { permanent_uuid: "961e8d68c7284606a2b5d7480a2cd3c3" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 33377 } } peers { permanent_uuid: "de74e77289884f52a08c3259599822c8" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 34259 } }
I20250814 01:53:15.936403 1797 raft_consensus.cc:357] T 5392d641a774477e8bb45d1090bcded4 P 3aada10a84784e8ea14fa831572dd83c [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "3aada10a84784e8ea14fa831572dd83c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 33279 } } peers { permanent_uuid: "961e8d68c7284606a2b5d7480a2cd3c3" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 33377 } } peers { permanent_uuid: "de74e77289884f52a08c3259599822c8" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 34259 } }
I20250814 01:53:15.937331 1797 raft_consensus.cc:738] T 5392d641a774477e8bb45d1090bcded4 P 3aada10a84784e8ea14fa831572dd83c [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 3aada10a84784e8ea14fa831572dd83c, State: Initialized, Role: FOLLOWER
I20250814 01:53:15.938625 1798 ts_tablet_manager.cc:1428] T 5392d641a774477e8bb45d1090bcded4 P de74e77289884f52a08c3259599822c8: Time spent starting tablet: real 0.023s user 0.023s sys 0.000s
I20250814 01:53:15.939209 1797 consensus_queue.cc:260] T 5392d641a774477e8bb45d1090bcded4 P 3aada10a84784e8ea14fa831572dd83c [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "3aada10a84784e8ea14fa831572dd83c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 33279 } } peers { permanent_uuid: "961e8d68c7284606a2b5d7480a2cd3c3" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 33377 } } peers { permanent_uuid: "de74e77289884f52a08c3259599822c8" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 34259 } }
I20250814 01:53:15.942986 1797 ts_tablet_manager.cc:1428] T 5392d641a774477e8bb45d1090bcded4 P 3aada10a84784e8ea14fa831572dd83c: Time spent starting tablet: real 0.032s user 0.029s sys 0.000s
I20250814 01:53:15.950016 1800 raft_consensus.cc:357] T 5392d641a774477e8bb45d1090bcded4 P 961e8d68c7284606a2b5d7480a2cd3c3 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "3aada10a84784e8ea14fa831572dd83c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 33279 } } peers { permanent_uuid: "961e8d68c7284606a2b5d7480a2cd3c3" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 33377 } } peers { permanent_uuid: "de74e77289884f52a08c3259599822c8" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 34259 } }
I20250814 01:53:15.950975 1800 raft_consensus.cc:738] T 5392d641a774477e8bb45d1090bcded4 P 961e8d68c7284606a2b5d7480a2cd3c3 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 961e8d68c7284606a2b5d7480a2cd3c3, State: Initialized, Role: FOLLOWER
I20250814 01:53:15.951730 1800 consensus_queue.cc:260] T 5392d641a774477e8bb45d1090bcded4 P 961e8d68c7284606a2b5d7480a2cd3c3 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "3aada10a84784e8ea14fa831572dd83c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 33279 } } peers { permanent_uuid: "961e8d68c7284606a2b5d7480a2cd3c3" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 33377 } } peers { permanent_uuid: "de74e77289884f52a08c3259599822c8" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 34259 } }
I20250814 01:53:15.954871 1778 heartbeater.cc:499] Master 127.0.106.190:32843 was elected leader, sending a full tablet report...
I20250814 01:53:15.956183 1800 ts_tablet_manager.cc:1428] T 5392d641a774477e8bb45d1090bcded4 P 961e8d68c7284606a2b5d7480a2cd3c3: Time spent starting tablet: real 0.029s user 0.020s sys 0.009s
I20250814 01:53:15.965485 426 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250814 01:53:15.968751 426 ts_itest-base.cc:246] Waiting for 1 tablets on tserver de74e77289884f52a08c3259599822c8 to finish bootstrapping
W20250814 01:53:15.975512 1779 tablet.cc:2378] T 5392d641a774477e8bb45d1090bcded4 P 961e8d68c7284606a2b5d7480a2cd3c3: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250814 01:53:15.981762 426 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 3aada10a84784e8ea14fa831572dd83c to finish bootstrapping
I20250814 01:53:15.991878 426 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 961e8d68c7284606a2b5d7480a2cd3c3 to finish bootstrapping
W20250814 01:53:16.008230 1513 tablet.cc:2378] T 5392d641a774477e8bb45d1090bcded4 P de74e77289884f52a08c3259599822c8: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250814 01:53:16.029732 1646 tablet.cc:2378] T 5392d641a774477e8bb45d1090bcded4 P 3aada10a84784e8ea14fa831572dd83c: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250814 01:53:16.037921 1467 tablet_service.cc:1940] Received Run Leader Election RPC: tablet_id: "5392d641a774477e8bb45d1090bcded4"
dest_uuid: "de74e77289884f52a08c3259599822c8"
from {username='slave'} at 127.0.0.1:46040
I20250814 01:53:16.038648 1467 raft_consensus.cc:491] T 5392d641a774477e8bb45d1090bcded4 P de74e77289884f52a08c3259599822c8 [term 0 FOLLOWER]: Starting forced leader election (received explicit request)
I20250814 01:53:16.039029 1467 raft_consensus.cc:3058] T 5392d641a774477e8bb45d1090bcded4 P de74e77289884f52a08c3259599822c8 [term 0 FOLLOWER]: Advancing to term 1
I20250814 01:53:16.045732 1467 raft_consensus.cc:513] T 5392d641a774477e8bb45d1090bcded4 P de74e77289884f52a08c3259599822c8 [term 1 FOLLOWER]: Starting forced leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "3aada10a84784e8ea14fa831572dd83c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 33279 } } peers { permanent_uuid: "961e8d68c7284606a2b5d7480a2cd3c3" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 33377 } } peers { permanent_uuid: "de74e77289884f52a08c3259599822c8" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 34259 } }
I20250814 01:53:16.048684 1467 leader_election.cc:290] T 5392d641a774477e8bb45d1090bcded4 P de74e77289884f52a08c3259599822c8 [CANDIDATE]: Term 1 election: Requested vote from peers 3aada10a84784e8ea14fa831572dd83c (127.0.106.130:33279), 961e8d68c7284606a2b5d7480a2cd3c3 (127.0.106.131:33377)
I20250814 01:53:16.056886 426 cluster_itest_util.cc:257] Not converged past 1 yet: 0.0 0.0 0.0
I20250814 01:53:16.060891 1600 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "5392d641a774477e8bb45d1090bcded4" candidate_uuid: "de74e77289884f52a08c3259599822c8" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: true dest_uuid: "3aada10a84784e8ea14fa831572dd83c"
I20250814 01:53:16.061693 1600 raft_consensus.cc:3058] T 5392d641a774477e8bb45d1090bcded4 P 3aada10a84784e8ea14fa831572dd83c [term 0 FOLLOWER]: Advancing to term 1
I20250814 01:53:16.064584 1733 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "5392d641a774477e8bb45d1090bcded4" candidate_uuid: "de74e77289884f52a08c3259599822c8" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: true dest_uuid: "961e8d68c7284606a2b5d7480a2cd3c3"
I20250814 01:53:16.065316 1733 raft_consensus.cc:3058] T 5392d641a774477e8bb45d1090bcded4 P 961e8d68c7284606a2b5d7480a2cd3c3 [term 0 FOLLOWER]: Advancing to term 1
I20250814 01:53:16.066713 1600 raft_consensus.cc:2466] T 5392d641a774477e8bb45d1090bcded4 P 3aada10a84784e8ea14fa831572dd83c [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate de74e77289884f52a08c3259599822c8 in term 1.
I20250814 01:53:16.067807 1402 leader_election.cc:304] T 5392d641a774477e8bb45d1090bcded4 P de74e77289884f52a08c3259599822c8 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 3aada10a84784e8ea14fa831572dd83c, de74e77289884f52a08c3259599822c8; no voters:
I20250814 01:53:16.068634 1803 raft_consensus.cc:2802] T 5392d641a774477e8bb45d1090bcded4 P de74e77289884f52a08c3259599822c8 [term 1 FOLLOWER]: Leader election won for term 1
I20250814 01:53:16.070559 1803 raft_consensus.cc:695] T 5392d641a774477e8bb45d1090bcded4 P de74e77289884f52a08c3259599822c8 [term 1 LEADER]: Becoming Leader. State: Replica: de74e77289884f52a08c3259599822c8, State: Running, Role: LEADER
I20250814 01:53:16.071363 1733 raft_consensus.cc:2466] T 5392d641a774477e8bb45d1090bcded4 P 961e8d68c7284606a2b5d7480a2cd3c3 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate de74e77289884f52a08c3259599822c8 in term 1.
I20250814 01:53:16.071555 1803 consensus_queue.cc:237] T 5392d641a774477e8bb45d1090bcded4 P de74e77289884f52a08c3259599822c8 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "3aada10a84784e8ea14fa831572dd83c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 33279 } } peers { permanent_uuid: "961e8d68c7284606a2b5d7480a2cd3c3" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 33377 } } peers { permanent_uuid: "de74e77289884f52a08c3259599822c8" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 34259 } }
I20250814 01:53:16.082619 1321 catalog_manager.cc:5582] T 5392d641a774477e8bb45d1090bcded4 P de74e77289884f52a08c3259599822c8 reported cstate change: term changed from 0 to 1, leader changed from <none> to de74e77289884f52a08c3259599822c8 (127.0.106.129). New cstate: current_term: 1 leader_uuid: "de74e77289884f52a08c3259599822c8" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "3aada10a84784e8ea14fa831572dd83c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 33279 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "961e8d68c7284606a2b5d7480a2cd3c3" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 33377 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "de74e77289884f52a08c3259599822c8" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 34259 } health_report { overall_health: HEALTHY } } }
I20250814 01:53:16.162429 426 cluster_itest_util.cc:257] Not converged past 1 yet: 1.1 0.0 0.0
I20250814 01:53:16.367450 426 cluster_itest_util.cc:257] Not converged past 1 yet: 1.1 0.0 0.0
I20250814 01:53:16.585942 1812 consensus_queue.cc:1035] T 5392d641a774477e8bb45d1090bcded4 P de74e77289884f52a08c3259599822c8 [LEADER]: Connected to new peer: Peer: permanent_uuid: "961e8d68c7284606a2b5d7480a2cd3c3" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 33377 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250814 01:53:16.601971 1812 consensus_queue.cc:1035] T 5392d641a774477e8bb45d1090bcded4 P de74e77289884f52a08c3259599822c8 [LEADER]: Connected to new peer: Peer: permanent_uuid: "3aada10a84784e8ea14fa831572dd83c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 33279 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250814 01:53:18.229328 1467 tablet_service.cc:1968] Received LeaderStepDown RPC: tablet_id: "5392d641a774477e8bb45d1090bcded4"
dest_uuid: "de74e77289884f52a08c3259599822c8"
mode: GRACEFUL
from {username='slave'} at 127.0.0.1:34394
I20250814 01:53:18.229961 1467 raft_consensus.cc:604] T 5392d641a774477e8bb45d1090bcded4 P de74e77289884f52a08c3259599822c8 [term 1 LEADER]: Received request to transfer leadership
I20250814 01:53:18.666265 1845 raft_consensus.cc:991] T 5392d641a774477e8bb45d1090bcded4 P de74e77289884f52a08c3259599822c8: : Instructing follower 961e8d68c7284606a2b5d7480a2cd3c3 to start an election
I20250814 01:53:18.666620 1845 raft_consensus.cc:1079] T 5392d641a774477e8bb45d1090bcded4 P de74e77289884f52a08c3259599822c8 [term 1 LEADER]: Signalling peer 961e8d68c7284606a2b5d7480a2cd3c3 to start an election
I20250814 01:53:18.667933 1733 tablet_service.cc:1940] Received Run Leader Election RPC: tablet_id: "5392d641a774477e8bb45d1090bcded4"
dest_uuid: "961e8d68c7284606a2b5d7480a2cd3c3"
from {username='slave'} at 127.0.106.129:37683
I20250814 01:53:18.668409 1733 raft_consensus.cc:491] T 5392d641a774477e8bb45d1090bcded4 P 961e8d68c7284606a2b5d7480a2cd3c3 [term 1 FOLLOWER]: Starting forced leader election (received explicit request)
I20250814 01:53:18.668642 1733 raft_consensus.cc:3058] T 5392d641a774477e8bb45d1090bcded4 P 961e8d68c7284606a2b5d7480a2cd3c3 [term 1 FOLLOWER]: Advancing to term 2
I20250814 01:53:18.672664 1733 raft_consensus.cc:513] T 5392d641a774477e8bb45d1090bcded4 P 961e8d68c7284606a2b5d7480a2cd3c3 [term 2 FOLLOWER]: Starting forced leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "3aada10a84784e8ea14fa831572dd83c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 33279 } } peers { permanent_uuid: "961e8d68c7284606a2b5d7480a2cd3c3" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 33377 } } peers { permanent_uuid: "de74e77289884f52a08c3259599822c8" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 34259 } }
I20250814 01:53:18.674762 1733 leader_election.cc:290] T 5392d641a774477e8bb45d1090bcded4 P 961e8d68c7284606a2b5d7480a2cd3c3 [CANDIDATE]: Term 2 election: Requested vote from peers 3aada10a84784e8ea14fa831572dd83c (127.0.106.130:33279), de74e77289884f52a08c3259599822c8 (127.0.106.129:34259)
I20250814 01:53:18.685016 1600 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "5392d641a774477e8bb45d1090bcded4" candidate_uuid: "961e8d68c7284606a2b5d7480a2cd3c3" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: true dest_uuid: "3aada10a84784e8ea14fa831572dd83c"
I20250814 01:53:18.685482 1600 raft_consensus.cc:3058] T 5392d641a774477e8bb45d1090bcded4 P 3aada10a84784e8ea14fa831572dd83c [term 1 FOLLOWER]: Advancing to term 2
I20250814 01:53:18.686630 1467 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "5392d641a774477e8bb45d1090bcded4" candidate_uuid: "961e8d68c7284606a2b5d7480a2cd3c3" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: true dest_uuid: "de74e77289884f52a08c3259599822c8"
I20250814 01:53:18.687122 1467 raft_consensus.cc:3053] T 5392d641a774477e8bb45d1090bcded4 P de74e77289884f52a08c3259599822c8 [term 1 LEADER]: Stepping down as leader of term 1
I20250814 01:53:18.687346 1467 raft_consensus.cc:738] T 5392d641a774477e8bb45d1090bcded4 P de74e77289884f52a08c3259599822c8 [term 1 LEADER]: Becoming Follower/Learner. State: Replica: de74e77289884f52a08c3259599822c8, State: Running, Role: LEADER
I20250814 01:53:18.687809 1467 consensus_queue.cc:260] T 5392d641a774477e8bb45d1090bcded4 P de74e77289884f52a08c3259599822c8 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 1, Committed index: 1, Last appended: 1.1, Last appended by leader: 1, Current term: 1, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "3aada10a84784e8ea14fa831572dd83c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 33279 } } peers { permanent_uuid: "961e8d68c7284606a2b5d7480a2cd3c3" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 33377 } } peers { permanent_uuid: "de74e77289884f52a08c3259599822c8" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 34259 } }
I20250814 01:53:18.688678 1467 raft_consensus.cc:3058] T 5392d641a774477e8bb45d1090bcded4 P de74e77289884f52a08c3259599822c8 [term 1 FOLLOWER]: Advancing to term 2
I20250814 01:53:18.690148 1600 raft_consensus.cc:2466] T 5392d641a774477e8bb45d1090bcded4 P 3aada10a84784e8ea14fa831572dd83c [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 961e8d68c7284606a2b5d7480a2cd3c3 in term 2.
I20250814 01:53:18.691074 1668 leader_election.cc:304] T 5392d641a774477e8bb45d1090bcded4 P 961e8d68c7284606a2b5d7480a2cd3c3 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 3aada10a84784e8ea14fa831572dd83c, 961e8d68c7284606a2b5d7480a2cd3c3; no voters:
I20250814 01:53:18.692752 1467 raft_consensus.cc:2466] T 5392d641a774477e8bb45d1090bcded4 P de74e77289884f52a08c3259599822c8 [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 961e8d68c7284606a2b5d7480a2cd3c3 in term 2.
I20250814 01:53:18.693081 1849 raft_consensus.cc:2802] T 5392d641a774477e8bb45d1090bcded4 P 961e8d68c7284606a2b5d7480a2cd3c3 [term 2 FOLLOWER]: Leader election won for term 2
I20250814 01:53:18.694494 1849 raft_consensus.cc:695] T 5392d641a774477e8bb45d1090bcded4 P 961e8d68c7284606a2b5d7480a2cd3c3 [term 2 LEADER]: Becoming Leader. State: Replica: 961e8d68c7284606a2b5d7480a2cd3c3, State: Running, Role: LEADER
I20250814 01:53:18.695322 1849 consensus_queue.cc:237] T 5392d641a774477e8bb45d1090bcded4 P 961e8d68c7284606a2b5d7480a2cd3c3 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 1, Committed index: 1, Last appended: 1.1, Last appended by leader: 1, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "3aada10a84784e8ea14fa831572dd83c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 33279 } } peers { permanent_uuid: "961e8d68c7284606a2b5d7480a2cd3c3" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 33377 } } peers { permanent_uuid: "de74e77289884f52a08c3259599822c8" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 34259 } }
I20250814 01:53:18.702836 1321 catalog_manager.cc:5582] T 5392d641a774477e8bb45d1090bcded4 P 961e8d68c7284606a2b5d7480a2cd3c3 reported cstate change: term changed from 1 to 2, leader changed from de74e77289884f52a08c3259599822c8 (127.0.106.129) to 961e8d68c7284606a2b5d7480a2cd3c3 (127.0.106.131). New cstate: current_term: 2 leader_uuid: "961e8d68c7284606a2b5d7480a2cd3c3" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "3aada10a84784e8ea14fa831572dd83c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 33279 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "961e8d68c7284606a2b5d7480a2cd3c3" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 33377 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "de74e77289884f52a08c3259599822c8" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 34259 } health_report { overall_health: UNKNOWN } } }
I20250814 01:53:19.147845 1467 raft_consensus.cc:1273] T 5392d641a774477e8bb45d1090bcded4 P de74e77289884f52a08c3259599822c8 [term 2 FOLLOWER]: Refusing update from remote peer 961e8d68c7284606a2b5d7480a2cd3c3: Log matching property violated. Preceding OpId in replica: term: 1 index: 1. Preceding OpId from leader: term: 2 index: 2. (index mismatch)
I20250814 01:53:19.149181 1849 consensus_queue.cc:1035] T 5392d641a774477e8bb45d1090bcded4 P 961e8d68c7284606a2b5d7480a2cd3c3 [LEADER]: Connected to new peer: Peer: permanent_uuid: "de74e77289884f52a08c3259599822c8" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 34259 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 1, Time since last communication: 0.000s
I20250814 01:53:19.158227 1600 raft_consensus.cc:1273] T 5392d641a774477e8bb45d1090bcded4 P 3aada10a84784e8ea14fa831572dd83c [term 2 FOLLOWER]: Refusing update from remote peer 961e8d68c7284606a2b5d7480a2cd3c3: Log matching property violated. Preceding OpId in replica: term: 1 index: 1. Preceding OpId from leader: term: 2 index: 2. (index mismatch)
I20250814 01:53:19.159669 1849 consensus_queue.cc:1035] T 5392d641a774477e8bb45d1090bcded4 P 961e8d68c7284606a2b5d7480a2cd3c3 [LEADER]: Connected to new peer: Peer: permanent_uuid: "3aada10a84784e8ea14fa831572dd83c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 33279 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 1, Time since last communication: 0.000s
I20250814 01:53:20.874490 1467 tablet_service.cc:1968] Received LeaderStepDown RPC: tablet_id: "5392d641a774477e8bb45d1090bcded4"
dest_uuid: "de74e77289884f52a08c3259599822c8"
mode: GRACEFUL
from {username='slave'} at 127.0.0.1:34402
I20250814 01:53:20.875108 1467 raft_consensus.cc:604] T 5392d641a774477e8bb45d1090bcded4 P de74e77289884f52a08c3259599822c8 [term 2 FOLLOWER]: Received request to transfer leadership
I20250814 01:53:20.875428 1467 raft_consensus.cc:612] T 5392d641a774477e8bb45d1090bcded4 P de74e77289884f52a08c3259599822c8 [term 2 FOLLOWER]: Rejecting request to transfer leadership while not leader
I20250814 01:53:21.956187 426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu with pid 1383
I20250814 01:53:21.978351 426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu with pid 1516
I20250814 01:53:22.000032 426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu with pid 1649
I20250814 01:53:22.021950 426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu with pid 1290
2025-08-14T01:53:22Z chronyd exiting
[ OK ] AdminCliTest.TestGracefulSpecificLeaderStepDown (13959 ms)
[ RUN ] AdminCliTest.TestDescribeTableColumnFlags
I20250814 01:53:22.072160 426 test_util.cc:276] Using random seed: -1983438265
I20250814 01:53:22.076102 426 ts_itest-base.cc:115] Starting cluster with:
I20250814 01:53:22.076277 426 ts_itest-base.cc:116] --------------
I20250814 01:53:22.076424 426 ts_itest-base.cc:117] 3 tablet servers
I20250814 01:53:22.076570 426 ts_itest-base.cc:118] 3 replicas per TS
I20250814 01:53:22.076696 426 ts_itest-base.cc:119] --------------
2025-08-14T01:53:22Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-08-14T01:53:22Z Disabled control of system clock
I20250814 01:53:22.110275 426 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
/tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.106.190:33835
--webserver_interface=127.0.106.190
--webserver_port=0
--builtin_ntp_servers=127.0.106.148:37101
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.0.106.190:33835 with env {}
W20250814 01:53:22.397874 1892 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:53:22.398435 1892 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250814 01:53:22.398849 1892 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:53:22.429847 1892 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250814 01:53:22.430126 1892 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250814 01:53:22.430332 1892 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250814 01:53:22.430526 1892 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250814 01:53:22.465332 1892 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:37101
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.0.106.190:33835
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.106.190:33835
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.0.106.190
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision 6307ab29b78ebc064971e01ca1da31590298dcda
build type FASTDEBUG
built by None at 14 Aug 2025 01:43:21 UTC on 5fd53c4cbb9d
build id 7557
TSAN enabled
I20250814 01:53:22.466635 1892 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250814 01:53:22.468273 1892 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250814 01:53:22.478912 1898 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:53:22.479573 1899 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:53:23.576397 1900 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1095 milliseconds
W20250814 01:53:23.577848 1901 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:53:23.580797 1892 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.102s user 0.000s sys 0.006s
W20250814 01:53:23.581043 1892 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.103s user 0.000s sys 0.006s
I20250814 01:53:23.581279 1892 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250814 01:53:23.582495 1892 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250814 01:53:23.585387 1892 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250814 01:53:23.586781 1892 hybrid_clock.cc:648] HybridClock initialized: now 1755136403586739 us; error 48 us; skew 500 ppm
I20250814 01:53:23.587574 1892 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250814 01:53:23.594777 1892 webserver.cc:480] Webserver started at http://127.0.106.190:38317/ using document root <none> and password file <none>
I20250814 01:53:23.595667 1892 fs_manager.cc:362] Metadata directory not provided
I20250814 01:53:23.595865 1892 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250814 01:53:23.596309 1892 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250814 01:53:23.600704 1892 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "181d042d219042dbb7fe80cfc07b3fbc"
format_stamp: "Formatted at 2025-08-14 01:53:23 on dist-test-slave-30wj"
I20250814 01:53:23.601809 1892 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "181d042d219042dbb7fe80cfc07b3fbc"
format_stamp: "Formatted at 2025-08-14 01:53:23 on dist-test-slave-30wj"
I20250814 01:53:23.609757 1892 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.003s sys 0.004s
I20250814 01:53:23.615656 1908 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250814 01:53:23.616698 1892 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.002s sys 0.004s
I20250814 01:53:23.616993 1892 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
uuid: "181d042d219042dbb7fe80cfc07b3fbc"
format_stamp: "Formatted at 2025-08-14 01:53:23 on dist-test-slave-30wj"
I20250814 01:53:23.617311 1892 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250814 01:53:23.677559 1892 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250814 01:53:23.679028 1892 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250814 01:53:23.679451 1892 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250814 01:53:23.756423 1892 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.190:33835
I20250814 01:53:23.756585 1959 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.190:33835 every 8 connection(s)
I20250814 01:53:23.759109 1892 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250814 01:53:23.763132 426 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu as pid 1892
I20250814 01:53:23.763592 426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250814 01:53:23.764454 1960 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250814 01:53:23.786760 1960 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 181d042d219042dbb7fe80cfc07b3fbc: Bootstrap starting.
I20250814 01:53:23.792794 1960 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 181d042d219042dbb7fe80cfc07b3fbc: Neither blocks nor log segments found. Creating new log.
I20250814 01:53:23.794631 1960 log.cc:826] T 00000000000000000000000000000000 P 181d042d219042dbb7fe80cfc07b3fbc: Log is configured to *not* fsync() on all Append() calls
I20250814 01:53:23.799189 1960 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 181d042d219042dbb7fe80cfc07b3fbc: No bootstrap required, opened a new log
I20250814 01:53:23.816437 1960 raft_consensus.cc:357] T 00000000000000000000000000000000 P 181d042d219042dbb7fe80cfc07b3fbc [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "181d042d219042dbb7fe80cfc07b3fbc" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 33835 } }
I20250814 01:53:23.817085 1960 raft_consensus.cc:383] T 00000000000000000000000000000000 P 181d042d219042dbb7fe80cfc07b3fbc [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250814 01:53:23.817329 1960 raft_consensus.cc:738] T 00000000000000000000000000000000 P 181d042d219042dbb7fe80cfc07b3fbc [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 181d042d219042dbb7fe80cfc07b3fbc, State: Initialized, Role: FOLLOWER
I20250814 01:53:23.817992 1960 consensus_queue.cc:260] T 00000000000000000000000000000000 P 181d042d219042dbb7fe80cfc07b3fbc [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "181d042d219042dbb7fe80cfc07b3fbc" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 33835 } }
I20250814 01:53:23.818482 1960 raft_consensus.cc:397] T 00000000000000000000000000000000 P 181d042d219042dbb7fe80cfc07b3fbc [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250814 01:53:23.818744 1960 raft_consensus.cc:491] T 00000000000000000000000000000000 P 181d042d219042dbb7fe80cfc07b3fbc [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250814 01:53:23.819028 1960 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 181d042d219042dbb7fe80cfc07b3fbc [term 0 FOLLOWER]: Advancing to term 1
I20250814 01:53:23.822932 1960 raft_consensus.cc:513] T 00000000000000000000000000000000 P 181d042d219042dbb7fe80cfc07b3fbc [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "181d042d219042dbb7fe80cfc07b3fbc" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 33835 } }
I20250814 01:53:23.823589 1960 leader_election.cc:304] T 00000000000000000000000000000000 P 181d042d219042dbb7fe80cfc07b3fbc [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 181d042d219042dbb7fe80cfc07b3fbc; no voters:
I20250814 01:53:23.825398 1960 leader_election.cc:290] T 00000000000000000000000000000000 P 181d042d219042dbb7fe80cfc07b3fbc [CANDIDATE]: Term 1 election: Requested vote from peers
I20250814 01:53:23.826025 1965 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 181d042d219042dbb7fe80cfc07b3fbc [term 1 FOLLOWER]: Leader election won for term 1
I20250814 01:53:23.828047 1965 raft_consensus.cc:695] T 00000000000000000000000000000000 P 181d042d219042dbb7fe80cfc07b3fbc [term 1 LEADER]: Becoming Leader. State: Replica: 181d042d219042dbb7fe80cfc07b3fbc, State: Running, Role: LEADER
I20250814 01:53:23.828765 1965 consensus_queue.cc:237] T 00000000000000000000000000000000 P 181d042d219042dbb7fe80cfc07b3fbc [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "181d042d219042dbb7fe80cfc07b3fbc" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 33835 } }
I20250814 01:53:23.829766 1960 sys_catalog.cc:564] T 00000000000000000000000000000000 P 181d042d219042dbb7fe80cfc07b3fbc [sys.catalog]: configured and running, proceeding with master startup.
I20250814 01:53:23.834964 1967 sys_catalog.cc:455] T 00000000000000000000000000000000 P 181d042d219042dbb7fe80cfc07b3fbc [sys.catalog]: SysCatalogTable state changed. Reason: New leader 181d042d219042dbb7fe80cfc07b3fbc. Latest consensus state: current_term: 1 leader_uuid: "181d042d219042dbb7fe80cfc07b3fbc" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "181d042d219042dbb7fe80cfc07b3fbc" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 33835 } } }
I20250814 01:53:23.835580 1967 sys_catalog.cc:458] T 00000000000000000000000000000000 P 181d042d219042dbb7fe80cfc07b3fbc [sys.catalog]: This master's current role is: LEADER
I20250814 01:53:23.836269 1966 sys_catalog.cc:455] T 00000000000000000000000000000000 P 181d042d219042dbb7fe80cfc07b3fbc [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "181d042d219042dbb7fe80cfc07b3fbc" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "181d042d219042dbb7fe80cfc07b3fbc" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 33835 } } }
I20250814 01:53:23.836822 1966 sys_catalog.cc:458] T 00000000000000000000000000000000 P 181d042d219042dbb7fe80cfc07b3fbc [sys.catalog]: This master's current role is: LEADER
I20250814 01:53:23.841084 1971 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250814 01:53:23.851814 1971 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250814 01:53:23.866746 1971 catalog_manager.cc:1349] Generated new cluster ID: 44fc16e02a1c42e38f90c235ff610b9a
I20250814 01:53:23.866998 1971 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250814 01:53:23.896075 1971 catalog_manager.cc:1372] Generated new certificate authority record
I20250814 01:53:23.897439 1971 catalog_manager.cc:1506] Loading token signing keys...
I20250814 01:53:23.909927 1971 catalog_manager.cc:5955] T 00000000000000000000000000000000 P 181d042d219042dbb7fe80cfc07b3fbc: Generated new TSK 0
I20250814 01:53:23.910787 1971 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250814 01:53:23.926013 426 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
/tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.129:0
--local_ip_for_outbound_sockets=127.0.106.129
--webserver_interface=127.0.106.129
--webserver_port=0
--tserver_master_addrs=127.0.106.190:33835
--builtin_ntp_servers=127.0.106.148:37101
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
W20250814 01:53:24.220705 1984 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:53:24.221202 1984 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250814 01:53:24.221693 1984 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:53:24.252728 1984 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250814 01:53:24.253585 1984 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.129
I20250814 01:53:24.288683 1984 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:37101
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.129:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.0.106.129
--webserver_port=0
--tserver_master_addrs=127.0.106.190:33835
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.129
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 6307ab29b78ebc064971e01ca1da31590298dcda
build type FASTDEBUG
built by None at 14 Aug 2025 01:43:21 UTC on 5fd53c4cbb9d
build id 7557
TSAN enabled
I20250814 01:53:24.290038 1984 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250814 01:53:24.291559 1984 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250814 01:53:24.303460 1990 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:53:24.305757 1991 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:53:25.702975 1993 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:53:25.705094 1992 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1396 milliseconds
I20250814 01:53:25.705178 1984 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250814 01:53:25.706333 1984 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250814 01:53:25.710251 1984 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250814 01:53:25.711668 1984 hybrid_clock.cc:648] HybridClock initialized: now 1755136405711644 us; error 39 us; skew 500 ppm
I20250814 01:53:25.712445 1984 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250814 01:53:25.718650 1984 webserver.cc:480] Webserver started at http://127.0.106.129:44555/ using document root <none> and password file <none>
I20250814 01:53:25.719528 1984 fs_manager.cc:362] Metadata directory not provided
I20250814 01:53:25.719730 1984 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250814 01:53:25.720172 1984 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250814 01:53:25.725653 1984 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "945f9b952ae247a492dd13de5c826ab8"
format_stamp: "Formatted at 2025-08-14 01:53:25 on dist-test-slave-30wj"
I20250814 01:53:25.726799 1984 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "945f9b952ae247a492dd13de5c826ab8"
format_stamp: "Formatted at 2025-08-14 01:53:25 on dist-test-slave-30wj"
I20250814 01:53:25.733692 1984 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.006s sys 0.001s
I20250814 01:53:25.739362 2000 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250814 01:53:25.740350 1984 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.005s sys 0.000s
I20250814 01:53:25.740662 1984 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "945f9b952ae247a492dd13de5c826ab8"
format_stamp: "Formatted at 2025-08-14 01:53:25 on dist-test-slave-30wj"
I20250814 01:53:25.740971 1984 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250814 01:53:25.794167 1984 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250814 01:53:25.795598 1984 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250814 01:53:25.796026 1984 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250814 01:53:25.798480 1984 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250814 01:53:25.802435 1984 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250814 01:53:25.802642 1984 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250814 01:53:25.802876 1984 ts_tablet_manager.cc:610] Registered 0 tablets
I20250814 01:53:25.803025 1984 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250814 01:53:25.926826 1984 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.129:42465
I20250814 01:53:25.926927 2112 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.129:42465 every 8 connection(s)
I20250814 01:53:25.929278 1984 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250814 01:53:25.931596 426 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu as pid 1984
I20250814 01:53:25.932078 426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250814 01:53:25.938411 426 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
/tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.130:0
--local_ip_for_outbound_sockets=127.0.106.130
--webserver_interface=127.0.106.130
--webserver_port=0
--tserver_master_addrs=127.0.106.190:33835
--builtin_ntp_servers=127.0.106.148:37101
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250814 01:53:25.952548 2113 heartbeater.cc:344] Connected to a master server at 127.0.106.190:33835
I20250814 01:53:25.952957 2113 heartbeater.cc:461] Registering TS with master...
I20250814 01:53:25.953984 2113 heartbeater.cc:507] Master 127.0.106.190:33835 requested a full tablet report, sending...
I20250814 01:53:25.956370 1925 ts_manager.cc:194] Registered new tserver with Master: 945f9b952ae247a492dd13de5c826ab8 (127.0.106.129:42465)
I20250814 01:53:25.958253 1925 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.129:59945
W20250814 01:53:26.225437 2117 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:53:26.225955 2117 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250814 01:53:26.226415 2117 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:53:26.257208 2117 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250814 01:53:26.258128 2117 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.130
I20250814 01:53:26.292742 2117 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:37101
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.130:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.0.106.130
--webserver_port=0
--tserver_master_addrs=127.0.106.190:33835
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.130
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 6307ab29b78ebc064971e01ca1da31590298dcda
build type FASTDEBUG
built by None at 14 Aug 2025 01:43:21 UTC on 5fd53c4cbb9d
build id 7557
TSAN enabled
I20250814 01:53:26.293995 2117 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250814 01:53:26.295483 2117 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250814 01:53:26.306751 2123 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250814 01:53:26.961468 2113 heartbeater.cc:499] Master 127.0.106.190:33835 was elected leader, sending a full tablet report...
W20250814 01:53:27.709971 2122 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 2117
W20250814 01:53:27.796386 2122 kernel_stack_watchdog.cc:198] Thread 2117 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:642 for 401ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250814 01:53:26.307487 2124 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:53:27.797472 2125 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1489 milliseconds
W20250814 01:53:27.797607 2117 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.491s user 0.556s sys 0.934s
W20250814 01:53:27.798041 2117 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.492s user 0.556s sys 0.934s
I20250814 01:53:27.798705 2117 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
W20250814 01:53:27.798749 2126 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250814 01:53:27.801465 2117 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250814 01:53:27.803403 2117 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250814 01:53:27.804725 2117 hybrid_clock.cc:648] HybridClock initialized: now 1755136407804697 us; error 32 us; skew 500 ppm
I20250814 01:53:27.805446 2117 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250814 01:53:27.811012 2117 webserver.cc:480] Webserver started at http://127.0.106.130:44753/ using document root <none> and password file <none>
I20250814 01:53:27.811882 2117 fs_manager.cc:362] Metadata directory not provided
I20250814 01:53:27.812091 2117 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250814 01:53:27.812510 2117 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250814 01:53:27.816677 2117 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data/instance:
uuid: "a24b3eab881c436c90a7b9431f7a3ff3"
format_stamp: "Formatted at 2025-08-14 01:53:27 on dist-test-slave-30wj"
I20250814 01:53:27.817761 2117 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal/instance:
uuid: "a24b3eab881c436c90a7b9431f7a3ff3"
format_stamp: "Formatted at 2025-08-14 01:53:27 on dist-test-slave-30wj"
I20250814 01:53:27.824337 2117 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.005s sys 0.001s
I20250814 01:53:27.829641 2133 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250814 01:53:27.830601 2117 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.003s sys 0.001s
I20250814 01:53:27.830891 2117 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "a24b3eab881c436c90a7b9431f7a3ff3"
format_stamp: "Formatted at 2025-08-14 01:53:27 on dist-test-slave-30wj"
I20250814 01:53:27.831223 2117 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250814 01:53:27.884492 2117 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250814 01:53:27.886051 2117 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250814 01:53:27.886507 2117 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250814 01:53:27.888983 2117 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250814 01:53:27.893292 2117 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250814 01:53:27.893577 2117 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250814 01:53:27.893865 2117 ts_tablet_manager.cc:610] Registered 0 tablets
I20250814 01:53:27.894011 2117 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250814 01:53:28.021554 2117 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.130:46771
I20250814 01:53:28.021678 2245 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.130:46771 every 8 connection(s)
I20250814 01:53:28.023986 2117 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250814 01:53:28.029054 426 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu as pid 2117
I20250814 01:53:28.029567 426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal/instance
I20250814 01:53:28.036275 426 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
/tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.131:0
--local_ip_for_outbound_sockets=127.0.106.131
--webserver_interface=127.0.106.131
--webserver_port=0
--tserver_master_addrs=127.0.106.190:33835
--builtin_ntp_servers=127.0.106.148:37101
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250814 01:53:28.045503 2246 heartbeater.cc:344] Connected to a master server at 127.0.106.190:33835
I20250814 01:53:28.045951 2246 heartbeater.cc:461] Registering TS with master...
I20250814 01:53:28.046921 2246 heartbeater.cc:507] Master 127.0.106.190:33835 requested a full tablet report, sending...
I20250814 01:53:28.048933 1925 ts_manager.cc:194] Registered new tserver with Master: a24b3eab881c436c90a7b9431f7a3ff3 (127.0.106.130:46771)
I20250814 01:53:28.050146 1925 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.130:45241
W20250814 01:53:28.322853 2250 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:53:28.323325 2250 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250814 01:53:28.323814 2250 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:53:28.356273 2250 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250814 01:53:28.357102 2250 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.131
I20250814 01:53:28.394722 2250 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:37101
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.131:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.0.106.131
--webserver_port=0
--tserver_master_addrs=127.0.106.190:33835
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.131
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 6307ab29b78ebc064971e01ca1da31590298dcda
build type FASTDEBUG
built by None at 14 Aug 2025 01:43:21 UTC on 5fd53c4cbb9d
build id 7557
TSAN enabled
I20250814 01:53:28.395973 2250 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250814 01:53:28.397459 2250 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250814 01:53:28.408303 2256 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250814 01:53:29.053067 2246 heartbeater.cc:499] Master 127.0.106.190:33835 was elected leader, sending a full tablet report...
W20250814 01:53:28.409401 2257 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:53:28.412400 2259 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:53:29.477768 2258 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection time-out
I20250814 01:53:29.477828 2250 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250814 01:53:29.481637 2250 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250814 01:53:29.483742 2250 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250814 01:53:29.485062 2250 hybrid_clock.cc:648] HybridClock initialized: now 1755136409485028 us; error 54 us; skew 500 ppm
I20250814 01:53:29.485852 2250 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250814 01:53:29.492015 2250 webserver.cc:480] Webserver started at http://127.0.106.131:38171/ using document root <none> and password file <none>
I20250814 01:53:29.492890 2250 fs_manager.cc:362] Metadata directory not provided
I20250814 01:53:29.493098 2250 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250814 01:53:29.493744 2250 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250814 01:53:29.498220 2250 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data/instance:
uuid: "1cc725bdacf144889313514dc9d298ae"
format_stamp: "Formatted at 2025-08-14 01:53:29 on dist-test-slave-30wj"
I20250814 01:53:29.499291 2250 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal/instance:
uuid: "1cc725bdacf144889313514dc9d298ae"
format_stamp: "Formatted at 2025-08-14 01:53:29 on dist-test-slave-30wj"
I20250814 01:53:29.505919 2250 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.007s sys 0.001s
I20250814 01:53:29.511161 2267 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250814 01:53:29.512077 2250 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.005s sys 0.000s
I20250814 01:53:29.512394 2250 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "1cc725bdacf144889313514dc9d298ae"
format_stamp: "Formatted at 2025-08-14 01:53:29 on dist-test-slave-30wj"
I20250814 01:53:29.512701 2250 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250814 01:53:29.554603 2250 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250814 01:53:29.555970 2250 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250814 01:53:29.556380 2250 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250814 01:53:29.558806 2250 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250814 01:53:29.562770 2250 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250814 01:53:29.562973 2250 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250814 01:53:29.563199 2250 ts_tablet_manager.cc:610] Registered 0 tablets
I20250814 01:53:29.563356 2250 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250814 01:53:29.686362 2250 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.131:45139
I20250814 01:53:29.686426 2379 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.131:45139 every 8 connection(s)
I20250814 01:53:29.689045 2250 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250814 01:53:29.689932 426 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu as pid 2250
I20250814 01:53:29.690444 426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal/instance
I20250814 01:53:29.709471 2380 heartbeater.cc:344] Connected to a master server at 127.0.106.190:33835
I20250814 01:53:29.709977 2380 heartbeater.cc:461] Registering TS with master...
I20250814 01:53:29.711014 2380 heartbeater.cc:507] Master 127.0.106.190:33835 requested a full tablet report, sending...
I20250814 01:53:29.712994 1924 ts_manager.cc:194] Registered new tserver with Master: 1cc725bdacf144889313514dc9d298ae (127.0.106.131:45139)
I20250814 01:53:29.714406 1924 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.131:43249
I20250814 01:53:29.724730 426 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250814 01:53:29.757933 1924 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:54290:
name: "TestTable"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
owner: "alice"
W20250814 01:53:29.778414 1924 catalog_manager.cc:6944] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table TestTable in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250814 01:53:29.822822 2181 tablet_service.cc:1468] Processing CreateTablet for tablet 4c1053cf23a94d46a83d6fe99a538032 (DEFAULT_TABLE table=TestTable [id=a546dbfc726d4a4cb3c2a3e351b2bf37]), partition=RANGE (key) PARTITION UNBOUNDED
I20250814 01:53:29.824692 2181 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 4c1053cf23a94d46a83d6fe99a538032. 1 dirs total, 0 dirs full, 0 dirs failed
I20250814 01:53:29.829886 2315 tablet_service.cc:1468] Processing CreateTablet for tablet 4c1053cf23a94d46a83d6fe99a538032 (DEFAULT_TABLE table=TestTable [id=a546dbfc726d4a4cb3c2a3e351b2bf37]), partition=RANGE (key) PARTITION UNBOUNDED
I20250814 01:53:29.831375 2315 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 4c1053cf23a94d46a83d6fe99a538032. 1 dirs total, 0 dirs full, 0 dirs failed
I20250814 01:53:29.830749 2048 tablet_service.cc:1468] Processing CreateTablet for tablet 4c1053cf23a94d46a83d6fe99a538032 (DEFAULT_TABLE table=TestTable [id=a546dbfc726d4a4cb3c2a3e351b2bf37]), partition=RANGE (key) PARTITION UNBOUNDED
I20250814 01:53:29.832582 2048 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 4c1053cf23a94d46a83d6fe99a538032. 1 dirs total, 0 dirs full, 0 dirs failed
I20250814 01:53:29.851053 2399 tablet_bootstrap.cc:492] T 4c1053cf23a94d46a83d6fe99a538032 P 1cc725bdacf144889313514dc9d298ae: Bootstrap starting.
I20250814 01:53:29.853103 2400 tablet_bootstrap.cc:492] T 4c1053cf23a94d46a83d6fe99a538032 P a24b3eab881c436c90a7b9431f7a3ff3: Bootstrap starting.
I20250814 01:53:29.858248 2401 tablet_bootstrap.cc:492] T 4c1053cf23a94d46a83d6fe99a538032 P 945f9b952ae247a492dd13de5c826ab8: Bootstrap starting.
I20250814 01:53:29.859145 2399 tablet_bootstrap.cc:654] T 4c1053cf23a94d46a83d6fe99a538032 P 1cc725bdacf144889313514dc9d298ae: Neither blocks nor log segments found. Creating new log.
I20250814 01:53:29.860049 2400 tablet_bootstrap.cc:654] T 4c1053cf23a94d46a83d6fe99a538032 P a24b3eab881c436c90a7b9431f7a3ff3: Neither blocks nor log segments found. Creating new log.
I20250814 01:53:29.860823 2399 log.cc:826] T 4c1053cf23a94d46a83d6fe99a538032 P 1cc725bdacf144889313514dc9d298ae: Log is configured to *not* fsync() on all Append() calls
I20250814 01:53:29.862429 2400 log.cc:826] T 4c1053cf23a94d46a83d6fe99a538032 P a24b3eab881c436c90a7b9431f7a3ff3: Log is configured to *not* fsync() on all Append() calls
I20250814 01:53:29.866123 2399 tablet_bootstrap.cc:492] T 4c1053cf23a94d46a83d6fe99a538032 P 1cc725bdacf144889313514dc9d298ae: No bootstrap required, opened a new log
I20250814 01:53:29.866441 2401 tablet_bootstrap.cc:654] T 4c1053cf23a94d46a83d6fe99a538032 P 945f9b952ae247a492dd13de5c826ab8: Neither blocks nor log segments found. Creating new log.
I20250814 01:53:29.866751 2399 ts_tablet_manager.cc:1397] T 4c1053cf23a94d46a83d6fe99a538032 P 1cc725bdacf144889313514dc9d298ae: Time spent bootstrapping tablet: real 0.016s user 0.013s sys 0.002s
I20250814 01:53:29.867774 2400 tablet_bootstrap.cc:492] T 4c1053cf23a94d46a83d6fe99a538032 P a24b3eab881c436c90a7b9431f7a3ff3: No bootstrap required, opened a new log
I20250814 01:53:29.868240 2400 ts_tablet_manager.cc:1397] T 4c1053cf23a94d46a83d6fe99a538032 P a24b3eab881c436c90a7b9431f7a3ff3: Time spent bootstrapping tablet: real 0.016s user 0.006s sys 0.006s
I20250814 01:53:29.868264 2401 log.cc:826] T 4c1053cf23a94d46a83d6fe99a538032 P 945f9b952ae247a492dd13de5c826ab8: Log is configured to *not* fsync() on all Append() calls
I20250814 01:53:29.873061 2401 tablet_bootstrap.cc:492] T 4c1053cf23a94d46a83d6fe99a538032 P 945f9b952ae247a492dd13de5c826ab8: No bootstrap required, opened a new log
I20250814 01:53:29.873446 2401 ts_tablet_manager.cc:1397] T 4c1053cf23a94d46a83d6fe99a538032 P 945f9b952ae247a492dd13de5c826ab8: Time spent bootstrapping tablet: real 0.016s user 0.000s sys 0.012s
I20250814 01:53:29.890901 2401 raft_consensus.cc:357] T 4c1053cf23a94d46a83d6fe99a538032 P 945f9b952ae247a492dd13de5c826ab8 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a24b3eab881c436c90a7b9431f7a3ff3" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 46771 } } peers { permanent_uuid: "1cc725bdacf144889313514dc9d298ae" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 45139 } } peers { permanent_uuid: "945f9b952ae247a492dd13de5c826ab8" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42465 } }
I20250814 01:53:29.891572 2401 raft_consensus.cc:383] T 4c1053cf23a94d46a83d6fe99a538032 P 945f9b952ae247a492dd13de5c826ab8 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250814 01:53:29.891826 2401 raft_consensus.cc:738] T 4c1053cf23a94d46a83d6fe99a538032 P 945f9b952ae247a492dd13de5c826ab8 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 945f9b952ae247a492dd13de5c826ab8, State: Initialized, Role: FOLLOWER
I20250814 01:53:29.892575 2401 consensus_queue.cc:260] T 4c1053cf23a94d46a83d6fe99a538032 P 945f9b952ae247a492dd13de5c826ab8 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a24b3eab881c436c90a7b9431f7a3ff3" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 46771 } } peers { permanent_uuid: "1cc725bdacf144889313514dc9d298ae" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 45139 } } peers { permanent_uuid: "945f9b952ae247a492dd13de5c826ab8" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42465 } }
I20250814 01:53:29.892347 2399 raft_consensus.cc:357] T 4c1053cf23a94d46a83d6fe99a538032 P 1cc725bdacf144889313514dc9d298ae [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a24b3eab881c436c90a7b9431f7a3ff3" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 46771 } } peers { permanent_uuid: "1cc725bdacf144889313514dc9d298ae" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 45139 } } peers { permanent_uuid: "945f9b952ae247a492dd13de5c826ab8" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42465 } }
I20250814 01:53:29.893169 2399 raft_consensus.cc:383] T 4c1053cf23a94d46a83d6fe99a538032 P 1cc725bdacf144889313514dc9d298ae [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250814 01:53:29.893450 2399 raft_consensus.cc:738] T 4c1053cf23a94d46a83d6fe99a538032 P 1cc725bdacf144889313514dc9d298ae [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 1cc725bdacf144889313514dc9d298ae, State: Initialized, Role: FOLLOWER
I20250814 01:53:29.893903 2400 raft_consensus.cc:357] T 4c1053cf23a94d46a83d6fe99a538032 P a24b3eab881c436c90a7b9431f7a3ff3 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a24b3eab881c436c90a7b9431f7a3ff3" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 46771 } } peers { permanent_uuid: "1cc725bdacf144889313514dc9d298ae" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 45139 } } peers { permanent_uuid: "945f9b952ae247a492dd13de5c826ab8" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42465 } }
I20250814 01:53:29.894752 2400 raft_consensus.cc:383] T 4c1053cf23a94d46a83d6fe99a538032 P a24b3eab881c436c90a7b9431f7a3ff3 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250814 01:53:29.894412 2399 consensus_queue.cc:260] T 4c1053cf23a94d46a83d6fe99a538032 P 1cc725bdacf144889313514dc9d298ae [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a24b3eab881c436c90a7b9431f7a3ff3" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 46771 } } peers { permanent_uuid: "1cc725bdacf144889313514dc9d298ae" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 45139 } } peers { permanent_uuid: "945f9b952ae247a492dd13de5c826ab8" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42465 } }
I20250814 01:53:29.895062 2400 raft_consensus.cc:738] T 4c1053cf23a94d46a83d6fe99a538032 P a24b3eab881c436c90a7b9431f7a3ff3 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: a24b3eab881c436c90a7b9431f7a3ff3, State: Initialized, Role: FOLLOWER
I20250814 01:53:29.895866 2400 consensus_queue.cc:260] T 4c1053cf23a94d46a83d6fe99a538032 P a24b3eab881c436c90a7b9431f7a3ff3 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a24b3eab881c436c90a7b9431f7a3ff3" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 46771 } } peers { permanent_uuid: "1cc725bdacf144889313514dc9d298ae" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 45139 } } peers { permanent_uuid: "945f9b952ae247a492dd13de5c826ab8" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42465 } }
I20250814 01:53:29.901585 2399 ts_tablet_manager.cc:1428] T 4c1053cf23a94d46a83d6fe99a538032 P 1cc725bdacf144889313514dc9d298ae: Time spent starting tablet: real 0.034s user 0.028s sys 0.004s
I20250814 01:53:29.902606 2380 heartbeater.cc:499] Master 127.0.106.190:33835 was elected leader, sending a full tablet report...
I20250814 01:53:29.903564 2401 ts_tablet_manager.cc:1428] T 4c1053cf23a94d46a83d6fe99a538032 P 945f9b952ae247a492dd13de5c826ab8: Time spent starting tablet: real 0.030s user 0.019s sys 0.005s
I20250814 01:53:29.906219 2400 ts_tablet_manager.cc:1428] T 4c1053cf23a94d46a83d6fe99a538032 P a24b3eab881c436c90a7b9431f7a3ff3: Time spent starting tablet: real 0.038s user 0.037s sys 0.002s
W20250814 01:53:29.937469 2114 tablet.cc:2378] T 4c1053cf23a94d46a83d6fe99a538032 P 945f9b952ae247a492dd13de5c826ab8: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250814 01:53:29.943562 2381 tablet.cc:2378] T 4c1053cf23a94d46a83d6fe99a538032 P 1cc725bdacf144889313514dc9d298ae: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250814 01:53:30.029934 2247 tablet.cc:2378] T 4c1053cf23a94d46a83d6fe99a538032 P a24b3eab881c436c90a7b9431f7a3ff3: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250814 01:53:30.045023 2407 raft_consensus.cc:491] T 4c1053cf23a94d46a83d6fe99a538032 P a24b3eab881c436c90a7b9431f7a3ff3 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250814 01:53:30.045532 2407 raft_consensus.cc:513] T 4c1053cf23a94d46a83d6fe99a538032 P a24b3eab881c436c90a7b9431f7a3ff3 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a24b3eab881c436c90a7b9431f7a3ff3" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 46771 } } peers { permanent_uuid: "1cc725bdacf144889313514dc9d298ae" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 45139 } } peers { permanent_uuid: "945f9b952ae247a492dd13de5c826ab8" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42465 } }
I20250814 01:53:30.047832 2407 leader_election.cc:290] T 4c1053cf23a94d46a83d6fe99a538032 P a24b3eab881c436c90a7b9431f7a3ff3 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 1cc725bdacf144889313514dc9d298ae (127.0.106.131:45139), 945f9b952ae247a492dd13de5c826ab8 (127.0.106.129:42465)
I20250814 01:53:30.059391 2335 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "4c1053cf23a94d46a83d6fe99a538032" candidate_uuid: "a24b3eab881c436c90a7b9431f7a3ff3" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "1cc725bdacf144889313514dc9d298ae" is_pre_election: true
I20250814 01:53:30.060115 2335 raft_consensus.cc:2466] T 4c1053cf23a94d46a83d6fe99a538032 P 1cc725bdacf144889313514dc9d298ae [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate a24b3eab881c436c90a7b9431f7a3ff3 in term 0.
I20250814 01:53:30.060154 2068 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "4c1053cf23a94d46a83d6fe99a538032" candidate_uuid: "a24b3eab881c436c90a7b9431f7a3ff3" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "945f9b952ae247a492dd13de5c826ab8" is_pre_election: true
I20250814 01:53:30.060842 2068 raft_consensus.cc:2466] T 4c1053cf23a94d46a83d6fe99a538032 P 945f9b952ae247a492dd13de5c826ab8 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate a24b3eab881c436c90a7b9431f7a3ff3 in term 0.
I20250814 01:53:30.061209 2137 leader_election.cc:304] T 4c1053cf23a94d46a83d6fe99a538032 P a24b3eab881c436c90a7b9431f7a3ff3 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 1cc725bdacf144889313514dc9d298ae, a24b3eab881c436c90a7b9431f7a3ff3; no voters:
I20250814 01:53:30.062034 2407 raft_consensus.cc:2802] T 4c1053cf23a94d46a83d6fe99a538032 P a24b3eab881c436c90a7b9431f7a3ff3 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250814 01:53:30.062338 2407 raft_consensus.cc:491] T 4c1053cf23a94d46a83d6fe99a538032 P a24b3eab881c436c90a7b9431f7a3ff3 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250814 01:53:30.062577 2407 raft_consensus.cc:3058] T 4c1053cf23a94d46a83d6fe99a538032 P a24b3eab881c436c90a7b9431f7a3ff3 [term 0 FOLLOWER]: Advancing to term 1
I20250814 01:53:30.067416 2407 raft_consensus.cc:513] T 4c1053cf23a94d46a83d6fe99a538032 P a24b3eab881c436c90a7b9431f7a3ff3 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a24b3eab881c436c90a7b9431f7a3ff3" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 46771 } } peers { permanent_uuid: "1cc725bdacf144889313514dc9d298ae" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 45139 } } peers { permanent_uuid: "945f9b952ae247a492dd13de5c826ab8" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42465 } }
I20250814 01:53:30.068909 2407 leader_election.cc:290] T 4c1053cf23a94d46a83d6fe99a538032 P a24b3eab881c436c90a7b9431f7a3ff3 [CANDIDATE]: Term 1 election: Requested vote from peers 1cc725bdacf144889313514dc9d298ae (127.0.106.131:45139), 945f9b952ae247a492dd13de5c826ab8 (127.0.106.129:42465)
I20250814 01:53:30.069638 2335 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "4c1053cf23a94d46a83d6fe99a538032" candidate_uuid: "a24b3eab881c436c90a7b9431f7a3ff3" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "1cc725bdacf144889313514dc9d298ae"
I20250814 01:53:30.069742 2068 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "4c1053cf23a94d46a83d6fe99a538032" candidate_uuid: "a24b3eab881c436c90a7b9431f7a3ff3" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "945f9b952ae247a492dd13de5c826ab8"
I20250814 01:53:30.070060 2335 raft_consensus.cc:3058] T 4c1053cf23a94d46a83d6fe99a538032 P 1cc725bdacf144889313514dc9d298ae [term 0 FOLLOWER]: Advancing to term 1
I20250814 01:53:30.070163 2068 raft_consensus.cc:3058] T 4c1053cf23a94d46a83d6fe99a538032 P 945f9b952ae247a492dd13de5c826ab8 [term 0 FOLLOWER]: Advancing to term 1
I20250814 01:53:30.074417 2335 raft_consensus.cc:2466] T 4c1053cf23a94d46a83d6fe99a538032 P 1cc725bdacf144889313514dc9d298ae [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate a24b3eab881c436c90a7b9431f7a3ff3 in term 1.
I20250814 01:53:30.074476 2068 raft_consensus.cc:2466] T 4c1053cf23a94d46a83d6fe99a538032 P 945f9b952ae247a492dd13de5c826ab8 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate a24b3eab881c436c90a7b9431f7a3ff3 in term 1.
I20250814 01:53:30.075260 2137 leader_election.cc:304] T 4c1053cf23a94d46a83d6fe99a538032 P a24b3eab881c436c90a7b9431f7a3ff3 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 1cc725bdacf144889313514dc9d298ae, a24b3eab881c436c90a7b9431f7a3ff3; no voters:
I20250814 01:53:30.075896 2407 raft_consensus.cc:2802] T 4c1053cf23a94d46a83d6fe99a538032 P a24b3eab881c436c90a7b9431f7a3ff3 [term 1 FOLLOWER]: Leader election won for term 1
I20250814 01:53:30.077307 2407 raft_consensus.cc:695] T 4c1053cf23a94d46a83d6fe99a538032 P a24b3eab881c436c90a7b9431f7a3ff3 [term 1 LEADER]: Becoming Leader. State: Replica: a24b3eab881c436c90a7b9431f7a3ff3, State: Running, Role: LEADER
I20250814 01:53:30.078133 2407 consensus_queue.cc:237] T 4c1053cf23a94d46a83d6fe99a538032 P a24b3eab881c436c90a7b9431f7a3ff3 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a24b3eab881c436c90a7b9431f7a3ff3" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 46771 } } peers { permanent_uuid: "1cc725bdacf144889313514dc9d298ae" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 45139 } } peers { permanent_uuid: "945f9b952ae247a492dd13de5c826ab8" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42465 } }
I20250814 01:53:30.087530 1923 catalog_manager.cc:5582] T 4c1053cf23a94d46a83d6fe99a538032 P a24b3eab881c436c90a7b9431f7a3ff3 reported cstate change: term changed from 0 to 1, leader changed from <none> to a24b3eab881c436c90a7b9431f7a3ff3 (127.0.106.130). New cstate: current_term: 1 leader_uuid: "a24b3eab881c436c90a7b9431f7a3ff3" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a24b3eab881c436c90a7b9431f7a3ff3" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 46771 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "1cc725bdacf144889313514dc9d298ae" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 45139 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "945f9b952ae247a492dd13de5c826ab8" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42465 } health_report { overall_health: UNKNOWN } } }
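The election summary lines above treat 2 yes votes out of 3 voters as a decided election. A tiny illustrative helper, not Kudu's implementation, showing the majority arithmetic those lines reflect:

    // Illustrative only: a Raft candidate wins once yes votes reach a strict
    // majority of the voter set. For 3 voters the majority size is 3/2 + 1 = 2,
    // which is why "2 yes votes" out of 3 decides the elections in this log.
    #include <cassert>

    int MajoritySize(int num_voters) { return num_voters / 2 + 1; }

    bool ElectionWon(int yes_votes, int num_voters) {
      return yes_votes >= MajoritySize(num_voters);
    }

    int main() {
      assert(MajoritySize(3) == 2);
      assert(ElectionWon(/*yes_votes=*/2, /*num_voters=*/3));   // the case above
      assert(!ElectionWon(/*yes_votes=*/1, /*num_voters=*/3));  // one vote is not enough
      return 0;
    }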
I20250814 01:53:30.124594 426 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250814 01:53:30.127631 426 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 945f9b952ae247a492dd13de5c826ab8 to finish bootstrapping
I20250814 01:53:30.139663 426 ts_itest-base.cc:246] Waiting for 1 tablets on tserver a24b3eab881c436c90a7b9431f7a3ff3 to finish bootstrapping
I20250814 01:53:30.149791 426 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 1cc725bdacf144889313514dc9d298ae to finish bootstrapping
I20250814 01:53:30.162067 1923 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:54290:
name: "TestAnotherTable"
schema {
columns {
name: "foo"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "bar"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
comment: "comment for bar"
immutable: false
}
}
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "foo"
}
}
}
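A shorter sketch for this second request, assuming the KuduColumnSpec::Comment() setter available in recent Kudu C++ clients; as with the earlier sketch, the surrounding code is illustrative rather than the test's own:

    // Illustrative sketch only: this request differs from the first mainly in
    // that column "bar" carries a comment and no explicit num_replicas is set,
    // so the cluster default applies.
    #include <kudu/client/client.h>
    #include <kudu/client/schema.h>

    using kudu::client::KuduColumnSchema;
    using kudu::client::KuduSchema;
    using kudu::client::KuduSchemaBuilder;

    kudu::Status BuildTestAnotherTableSchema(KuduSchema* schema) {
      KuduSchemaBuilder b;
      b.AddColumn("foo")->Type(KuduColumnSchema::INT32)->NotNull()->PrimaryKey();
      b.AddColumn("bar")->Type(KuduColumnSchema::INT32)->NotNull()
          ->Comment("comment for bar");  // the comment shown in the request above
      return b.Build(schema);
    }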
W20250814 01:53:30.163566 1923 catalog_manager.cc:6944] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table TestAnotherTable in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250814 01:53:30.178699 2048 tablet_service.cc:1468] Processing CreateTablet for tablet a2fc48049713423b9fb96c07f5f59fac (DEFAULT_TABLE table=TestAnotherTable [id=5a0fd9acce884e83b9300b7588f0798e]), partition=RANGE (foo) PARTITION UNBOUNDED
I20250814 01:53:30.179340 2181 tablet_service.cc:1468] Processing CreateTablet for tablet a2fc48049713423b9fb96c07f5f59fac (DEFAULT_TABLE table=TestAnotherTable [id=5a0fd9acce884e83b9300b7588f0798e]), partition=RANGE (foo) PARTITION UNBOUNDED
I20250814 01:53:30.179762 2048 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet a2fc48049713423b9fb96c07f5f59fac. 1 dirs total, 0 dirs full, 0 dirs failed
I20250814 01:53:30.180255 2181 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet a2fc48049713423b9fb96c07f5f59fac. 1 dirs total, 0 dirs full, 0 dirs failed
I20250814 01:53:30.180104 2315 tablet_service.cc:1468] Processing CreateTablet for tablet a2fc48049713423b9fb96c07f5f59fac (DEFAULT_TABLE table=TestAnotherTable [id=5a0fd9acce884e83b9300b7588f0798e]), partition=RANGE (foo) PARTITION UNBOUNDED
I20250814 01:53:30.181078 2315 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet a2fc48049713423b9fb96c07f5f59fac. 1 dirs total, 0 dirs full, 0 dirs failed
I20250814 01:53:30.190414 2400 tablet_bootstrap.cc:492] T a2fc48049713423b9fb96c07f5f59fac P a24b3eab881c436c90a7b9431f7a3ff3: Bootstrap starting.
I20250814 01:53:30.193792 2401 tablet_bootstrap.cc:492] T a2fc48049713423b9fb96c07f5f59fac P 945f9b952ae247a492dd13de5c826ab8: Bootstrap starting.
I20250814 01:53:30.194384 2399 tablet_bootstrap.cc:492] T a2fc48049713423b9fb96c07f5f59fac P 1cc725bdacf144889313514dc9d298ae: Bootstrap starting.
I20250814 01:53:30.195773 2400 tablet_bootstrap.cc:654] T a2fc48049713423b9fb96c07f5f59fac P a24b3eab881c436c90a7b9431f7a3ff3: Neither blocks nor log segments found. Creating new log.
I20250814 01:53:30.198992 2401 tablet_bootstrap.cc:654] T a2fc48049713423b9fb96c07f5f59fac P 945f9b952ae247a492dd13de5c826ab8: Neither blocks nor log segments found. Creating new log.
I20250814 01:53:30.199951 2399 tablet_bootstrap.cc:654] T a2fc48049713423b9fb96c07f5f59fac P 1cc725bdacf144889313514dc9d298ae: Neither blocks nor log segments found. Creating new log.
I20250814 01:53:30.201498 2400 tablet_bootstrap.cc:492] T a2fc48049713423b9fb96c07f5f59fac P a24b3eab881c436c90a7b9431f7a3ff3: No bootstrap required, opened a new log
I20250814 01:53:30.201942 2400 ts_tablet_manager.cc:1397] T a2fc48049713423b9fb96c07f5f59fac P a24b3eab881c436c90a7b9431f7a3ff3: Time spent bootstrapping tablet: real 0.012s user 0.011s sys 0.000s
I20250814 01:53:30.204557 2400 raft_consensus.cc:357] T a2fc48049713423b9fb96c07f5f59fac P a24b3eab881c436c90a7b9431f7a3ff3 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "945f9b952ae247a492dd13de5c826ab8" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42465 } } peers { permanent_uuid: "a24b3eab881c436c90a7b9431f7a3ff3" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 46771 } } peers { permanent_uuid: "1cc725bdacf144889313514dc9d298ae" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 45139 } }
I20250814 01:53:30.205256 2400 raft_consensus.cc:383] T a2fc48049713423b9fb96c07f5f59fac P a24b3eab881c436c90a7b9431f7a3ff3 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250814 01:53:30.205529 2400 raft_consensus.cc:738] T a2fc48049713423b9fb96c07f5f59fac P a24b3eab881c436c90a7b9431f7a3ff3 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: a24b3eab881c436c90a7b9431f7a3ff3, State: Initialized, Role: FOLLOWER
I20250814 01:53:30.206184 2400 consensus_queue.cc:260] T a2fc48049713423b9fb96c07f5f59fac P a24b3eab881c436c90a7b9431f7a3ff3 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "945f9b952ae247a492dd13de5c826ab8" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42465 } } peers { permanent_uuid: "a24b3eab881c436c90a7b9431f7a3ff3" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 46771 } } peers { permanent_uuid: "1cc725bdacf144889313514dc9d298ae" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 45139 } }
I20250814 01:53:30.208266 2400 ts_tablet_manager.cc:1428] T a2fc48049713423b9fb96c07f5f59fac P a24b3eab881c436c90a7b9431f7a3ff3: Time spent starting tablet: real 0.006s user 0.004s sys 0.000s
I20250814 01:53:30.211100 2399 tablet_bootstrap.cc:492] T a2fc48049713423b9fb96c07f5f59fac P 1cc725bdacf144889313514dc9d298ae: No bootstrap required, opened a new log
I20250814 01:53:30.211539 2399 ts_tablet_manager.cc:1397] T a2fc48049713423b9fb96c07f5f59fac P 1cc725bdacf144889313514dc9d298ae: Time spent bootstrapping tablet: real 0.017s user 0.015s sys 0.000s
I20250814 01:53:30.212177 2401 tablet_bootstrap.cc:492] T a2fc48049713423b9fb96c07f5f59fac P 945f9b952ae247a492dd13de5c826ab8: No bootstrap required, opened a new log
I20250814 01:53:30.212591 2401 ts_tablet_manager.cc:1397] T a2fc48049713423b9fb96c07f5f59fac P 945f9b952ae247a492dd13de5c826ab8: Time spent bootstrapping tablet: real 0.019s user 0.008s sys 0.008s
I20250814 01:53:30.213665 2399 raft_consensus.cc:357] T a2fc48049713423b9fb96c07f5f59fac P 1cc725bdacf144889313514dc9d298ae [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "945f9b952ae247a492dd13de5c826ab8" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42465 } } peers { permanent_uuid: "a24b3eab881c436c90a7b9431f7a3ff3" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 46771 } } peers { permanent_uuid: "1cc725bdacf144889313514dc9d298ae" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 45139 } }
I20250814 01:53:30.214193 2399 raft_consensus.cc:383] T a2fc48049713423b9fb96c07f5f59fac P 1cc725bdacf144889313514dc9d298ae [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250814 01:53:30.214496 2399 raft_consensus.cc:738] T a2fc48049713423b9fb96c07f5f59fac P 1cc725bdacf144889313514dc9d298ae [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 1cc725bdacf144889313514dc9d298ae, State: Initialized, Role: FOLLOWER
I20250814 01:53:30.214978 2401 raft_consensus.cc:357] T a2fc48049713423b9fb96c07f5f59fac P 945f9b952ae247a492dd13de5c826ab8 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "945f9b952ae247a492dd13de5c826ab8" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42465 } } peers { permanent_uuid: "a24b3eab881c436c90a7b9431f7a3ff3" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 46771 } } peers { permanent_uuid: "1cc725bdacf144889313514dc9d298ae" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 45139 } }
I20250814 01:53:30.215477 2401 raft_consensus.cc:383] T a2fc48049713423b9fb96c07f5f59fac P 945f9b952ae247a492dd13de5c826ab8 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250814 01:53:30.215142 2399 consensus_queue.cc:260] T a2fc48049713423b9fb96c07f5f59fac P 1cc725bdacf144889313514dc9d298ae [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "945f9b952ae247a492dd13de5c826ab8" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42465 } } peers { permanent_uuid: "a24b3eab881c436c90a7b9431f7a3ff3" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 46771 } } peers { permanent_uuid: "1cc725bdacf144889313514dc9d298ae" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 45139 } }
I20250814 01:53:30.215687 2401 raft_consensus.cc:738] T a2fc48049713423b9fb96c07f5f59fac P 945f9b952ae247a492dd13de5c826ab8 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 945f9b952ae247a492dd13de5c826ab8, State: Initialized, Role: FOLLOWER
I20250814 01:53:30.216228 2401 consensus_queue.cc:260] T a2fc48049713423b9fb96c07f5f59fac P 945f9b952ae247a492dd13de5c826ab8 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "945f9b952ae247a492dd13de5c826ab8" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42465 } } peers { permanent_uuid: "a24b3eab881c436c90a7b9431f7a3ff3" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 46771 } } peers { permanent_uuid: "1cc725bdacf144889313514dc9d298ae" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 45139 } }
I20250814 01:53:30.218564 2399 ts_tablet_manager.cc:1428] T a2fc48049713423b9fb96c07f5f59fac P 1cc725bdacf144889313514dc9d298ae: Time spent starting tablet: real 0.007s user 0.004s sys 0.000s
I20250814 01:53:30.219184 2401 ts_tablet_manager.cc:1428] T a2fc48049713423b9fb96c07f5f59fac P 945f9b952ae247a492dd13de5c826ab8: Time spent starting tablet: real 0.006s user 0.000s sys 0.004s
I20250814 01:53:30.253724 2405 raft_consensus.cc:491] T a2fc48049713423b9fb96c07f5f59fac P 945f9b952ae247a492dd13de5c826ab8 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250814 01:53:30.254199 2405 raft_consensus.cc:513] T a2fc48049713423b9fb96c07f5f59fac P 945f9b952ae247a492dd13de5c826ab8 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "945f9b952ae247a492dd13de5c826ab8" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42465 } } peers { permanent_uuid: "a24b3eab881c436c90a7b9431f7a3ff3" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 46771 } } peers { permanent_uuid: "1cc725bdacf144889313514dc9d298ae" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 45139 } }
I20250814 01:53:30.256793 2405 leader_election.cc:290] T a2fc48049713423b9fb96c07f5f59fac P 945f9b952ae247a492dd13de5c826ab8 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers a24b3eab881c436c90a7b9431f7a3ff3 (127.0.106.130:46771), 1cc725bdacf144889313514dc9d298ae (127.0.106.131:45139)
I20250814 01:53:30.267262 2335 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "a2fc48049713423b9fb96c07f5f59fac" candidate_uuid: "945f9b952ae247a492dd13de5c826ab8" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "1cc725bdacf144889313514dc9d298ae" is_pre_election: true
I20250814 01:53:30.267730 2335 raft_consensus.cc:2466] T a2fc48049713423b9fb96c07f5f59fac P 1cc725bdacf144889313514dc9d298ae [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 945f9b952ae247a492dd13de5c826ab8 in term 0.
I20250814 01:53:30.267630 2201 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "a2fc48049713423b9fb96c07f5f59fac" candidate_uuid: "945f9b952ae247a492dd13de5c826ab8" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "a24b3eab881c436c90a7b9431f7a3ff3" is_pre_election: true
I20250814 01:53:30.268266 2201 raft_consensus.cc:2466] T a2fc48049713423b9fb96c07f5f59fac P a24b3eab881c436c90a7b9431f7a3ff3 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 945f9b952ae247a492dd13de5c826ab8 in term 0.
I20250814 01:53:30.268568 2004 leader_election.cc:304] T a2fc48049713423b9fb96c07f5f59fac P 945f9b952ae247a492dd13de5c826ab8 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 1cc725bdacf144889313514dc9d298ae, 945f9b952ae247a492dd13de5c826ab8; no voters:
I20250814 01:53:30.269275 2405 raft_consensus.cc:2802] T a2fc48049713423b9fb96c07f5f59fac P 945f9b952ae247a492dd13de5c826ab8 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250814 01:53:30.269587 2405 raft_consensus.cc:491] T a2fc48049713423b9fb96c07f5f59fac P 945f9b952ae247a492dd13de5c826ab8 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250814 01:53:30.269881 2405 raft_consensus.cc:3058] T a2fc48049713423b9fb96c07f5f59fac P 945f9b952ae247a492dd13de5c826ab8 [term 0 FOLLOWER]: Advancing to term 1
I20250814 01:53:30.274335 2405 raft_consensus.cc:513] T a2fc48049713423b9fb96c07f5f59fac P 945f9b952ae247a492dd13de5c826ab8 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "945f9b952ae247a492dd13de5c826ab8" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42465 } } peers { permanent_uuid: "a24b3eab881c436c90a7b9431f7a3ff3" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 46771 } } peers { permanent_uuid: "1cc725bdacf144889313514dc9d298ae" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 45139 } }
I20250814 01:53:30.275624 2405 leader_election.cc:290] T a2fc48049713423b9fb96c07f5f59fac P 945f9b952ae247a492dd13de5c826ab8 [CANDIDATE]: Term 1 election: Requested vote from peers a24b3eab881c436c90a7b9431f7a3ff3 (127.0.106.130:46771), 1cc725bdacf144889313514dc9d298ae (127.0.106.131:45139)
I20250814 01:53:30.276239 2201 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "a2fc48049713423b9fb96c07f5f59fac" candidate_uuid: "945f9b952ae247a492dd13de5c826ab8" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "a24b3eab881c436c90a7b9431f7a3ff3"
I20250814 01:53:30.276522 2335 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "a2fc48049713423b9fb96c07f5f59fac" candidate_uuid: "945f9b952ae247a492dd13de5c826ab8" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "1cc725bdacf144889313514dc9d298ae"
I20250814 01:53:30.276733 2201 raft_consensus.cc:3058] T a2fc48049713423b9fb96c07f5f59fac P a24b3eab881c436c90a7b9431f7a3ff3 [term 0 FOLLOWER]: Advancing to term 1
I20250814 01:53:30.276996 2335 raft_consensus.cc:3058] T a2fc48049713423b9fb96c07f5f59fac P 1cc725bdacf144889313514dc9d298ae [term 0 FOLLOWER]: Advancing to term 1
I20250814 01:53:30.282956 2335 raft_consensus.cc:2466] T a2fc48049713423b9fb96c07f5f59fac P 1cc725bdacf144889313514dc9d298ae [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 945f9b952ae247a492dd13de5c826ab8 in term 1.
I20250814 01:53:30.282956 2201 raft_consensus.cc:2466] T a2fc48049713423b9fb96c07f5f59fac P a24b3eab881c436c90a7b9431f7a3ff3 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 945f9b952ae247a492dd13de5c826ab8 in term 1.
I20250814 01:53:30.283958 2003 leader_election.cc:304] T a2fc48049713423b9fb96c07f5f59fac P 945f9b952ae247a492dd13de5c826ab8 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 945f9b952ae247a492dd13de5c826ab8, a24b3eab881c436c90a7b9431f7a3ff3; no voters:
I20250814 01:53:30.284560 2405 raft_consensus.cc:2802] T a2fc48049713423b9fb96c07f5f59fac P 945f9b952ae247a492dd13de5c826ab8 [term 1 FOLLOWER]: Leader election won for term 1
I20250814 01:53:30.285984 2405 raft_consensus.cc:695] T a2fc48049713423b9fb96c07f5f59fac P 945f9b952ae247a492dd13de5c826ab8 [term 1 LEADER]: Becoming Leader. State: Replica: 945f9b952ae247a492dd13de5c826ab8, State: Running, Role: LEADER
I20250814 01:53:30.286674 2405 consensus_queue.cc:237] T a2fc48049713423b9fb96c07f5f59fac P 945f9b952ae247a492dd13de5c826ab8 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "945f9b952ae247a492dd13de5c826ab8" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42465 } } peers { permanent_uuid: "a24b3eab881c436c90a7b9431f7a3ff3" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 46771 } } peers { permanent_uuid: "1cc725bdacf144889313514dc9d298ae" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 45139 } }
I20250814 01:53:30.296900 1925 catalog_manager.cc:5582] T a2fc48049713423b9fb96c07f5f59fac P 945f9b952ae247a492dd13de5c826ab8 reported cstate change: term changed from 0 to 1, leader changed from <none> to 945f9b952ae247a492dd13de5c826ab8 (127.0.106.129). New cstate: current_term: 1 leader_uuid: "945f9b952ae247a492dd13de5c826ab8" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "945f9b952ae247a492dd13de5c826ab8" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42465 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "a24b3eab881c436c90a7b9431f7a3ff3" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 46771 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "1cc725bdacf144889313514dc9d298ae" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 45139 } health_report { overall_health: UNKNOWN } } }
I20250814 01:53:30.479784 2407 consensus_queue.cc:1035] T 4c1053cf23a94d46a83d6fe99a538032 P a24b3eab881c436c90a7b9431f7a3ff3 [LEADER]: Connected to new peer: Peer: permanent_uuid: "945f9b952ae247a492dd13de5c826ab8" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42465 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250814 01:53:30.493850 2410 consensus_queue.cc:1035] T 4c1053cf23a94d46a83d6fe99a538032 P a24b3eab881c436c90a7b9431f7a3ff3 [LEADER]: Connected to new peer: Peer: permanent_uuid: "1cc725bdacf144889313514dc9d298ae" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 45139 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.001s
W20250814 01:53:30.615653 2420 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:53:30.616218 2420 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:53:30.646581 2420 flags.cc:425] Enabled experimental flag: --enable_multi_tenancy=false
I20250814 01:53:30.759984 2405 consensus_queue.cc:1035] T a2fc48049713423b9fb96c07f5f59fac P 945f9b952ae247a492dd13de5c826ab8 [LEADER]: Connected to new peer: Peer: permanent_uuid: "a24b3eab881c436c90a7b9431f7a3ff3" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 46771 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250814 01:53:30.791370 2405 consensus_queue.cc:1035] T a2fc48049713423b9fb96c07f5f59fac P 945f9b952ae247a492dd13de5c826ab8 [LEADER]: Connected to new peer: Peer: permanent_uuid: "1cc725bdacf144889313514dc9d298ae" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 45139 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
W20250814 01:53:31.927989 2420 thread.cc:641] rpc reactor (reactor) Time spent creating pthread: real 1.245s user 0.449s sys 0.790s
W20250814 01:53:31.928282 2420 thread.cc:608] rpc reactor (reactor) Time spent starting thread: real 1.245s user 0.449s sys 0.790s
W20250814 01:53:33.300019 2446 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:53:33.300618 2446 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:53:33.331596 2446 flags.cc:425] Enabled experimental flag: --enable_multi_tenancy=false
W20250814 01:53:34.503399 2446 thread.cc:641] rpc reactor (reactor) Time spent creating pthread: real 1.133s user 0.498s sys 0.633s
W20250814 01:53:34.503697 2446 thread.cc:608] rpc reactor (reactor) Time spent starting thread: real 1.134s user 0.498s sys 0.633s
W20250814 01:53:35.871515 2460 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:53:35.872057 2460 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:53:35.903354 2460 flags.cc:425] Enabled experimental flag: --enable_multi_tenancy=false
W20250814 01:53:37.109050 2460 thread.cc:641] rpc reactor (reactor) Time spent creating pthread: real 1.168s user 0.547s sys 0.618s
W20250814 01:53:37.109335 2460 thread.cc:608] rpc reactor (reactor) Time spent starting thread: real 1.169s user 0.547s sys 0.618s
W20250814 01:53:38.469341 2475 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:53:38.469949 2475 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:53:38.501452 2475 flags.cc:425] Enabled experimental flag: --enable_multi_tenancy=false
W20250814 01:53:39.674578 2475 thread.cc:641] rpc reactor (reactor) Time spent creating pthread: real 1.137s user 0.461s sys 0.675s
W20250814 01:53:39.675071 2475 thread.cc:608] rpc reactor (reactor) Time spent starting thread: real 1.138s user 0.461s sys 0.675s
I20250814 01:53:40.747058 426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu with pid 1984
W20250814 01:53:40.769359 2136 connection.cc:537] client connection to 127.0.106.129:42465 recv error: Network error: recv error from unknown peer: Transport endpoint is not connected (error 107)
W20250814 01:53:40.769984 2136 proxy.cc:239] Call had error, refreshing address and retrying: Network error: recv error from unknown peer: Transport endpoint is not connected (error 107)
I20250814 01:53:40.770488 426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu with pid 2117
I20250814 01:53:40.794157 426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu with pid 2250
I20250814 01:53:40.813485 426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu with pid 1892
2025-08-14T01:53:40Z chronyd exiting
[ OK ] AdminCliTest.TestDescribeTableColumnFlags (18793 ms)
[ RUN ] AdminCliTest.TestAuthzResetCacheNotAuthorized
I20250814 01:53:40.865810 426 test_util.cc:276] Using random seed: -1964644617
I20250814 01:53:40.869863 426 ts_itest-base.cc:115] Starting cluster with:
I20250814 01:53:40.870038 426 ts_itest-base.cc:116] --------------
I20250814 01:53:40.870150 426 ts_itest-base.cc:117] 3 tablet servers
I20250814 01:53:40.870252 426 ts_itest-base.cc:118] 3 replicas per TS
I20250814 01:53:40.870360 426 ts_itest-base.cc:119] --------------
2025-08-14T01:53:40Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-08-14T01:53:40Z Disabled control of system clock
I20250814 01:53:40.903230 426 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
/tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.106.190:45771
--webserver_interface=127.0.106.190
--webserver_port=0
--builtin_ntp_servers=127.0.106.148:35925
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.0.106.190:45771
--superuser_acl=no-such-user with env {}
W20250814 01:53:41.190225 2499 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:53:41.190770 2499 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250814 01:53:41.191286 2499 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:53:41.222043 2499 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250814 01:53:41.222322 2499 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250814 01:53:41.222523 2499 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250814 01:53:41.222723 2499 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250814 01:53:41.257860 2499 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:35925
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.0.106.190:45771
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.106.190:45771
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--superuser_acl=<redacted>
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.0.106.190
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision 6307ab29b78ebc064971e01ca1da31590298dcda
build type FASTDEBUG
built by None at 14 Aug 2025 01:43:21 UTC on 5fd53c4cbb9d
build id 7557
TSAN enabled
I20250814 01:53:41.259061 2499 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250814 01:53:41.260571 2499 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250814 01:53:41.270891 2505 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:53:41.271538 2506 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:53:42.380784 2508 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:53:42.383208 2507 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1107 milliseconds
I20250814 01:53:42.383343 2499 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250814 01:53:42.384552 2499 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250814 01:53:42.387542 2499 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250814 01:53:42.388962 2499 hybrid_clock.cc:648] HybridClock initialized: now 1755136422388921 us; error 58 us; skew 500 ppm
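The hybrid clock line above reports a 58 us error bound and an assumed 500 ppm skew. A small illustrative calculation, not Kudu's code, of how such a bound would grow between clock synchronizations:

    // Illustrative only: with a 500 ppm assumed drift rate, the maximum clock
    // error grows by 500 microseconds per second of elapsed time since the last
    // successful synchronization, on top of the reported base error.
    #include <cstdint>
    #include <cstdio>

    int64_t MaxErrorUs(int64_t base_error_us, int64_t elapsed_us, int64_t skew_ppm) {
      return base_error_us + elapsed_us * skew_ppm / 1000000;
    }

    int main() {
      // Values from the log line: error 58 us, skew 500 ppm. One second later
      // the bound would be 58 + 1000000 * 500 / 1000000 = 558 us.
      std::printf("%lld\n", static_cast<long long>(MaxErrorUs(58, 1000000, 500)));
      return 0;
    }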
I20250814 01:53:42.389811 2499 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250814 01:53:42.396277 2499 webserver.cc:480] Webserver started at http://127.0.106.190:42773/ using document root <none> and password file <none>
I20250814 01:53:42.397531 2499 fs_manager.cc:362] Metadata directory not provided
I20250814 01:53:42.397850 2499 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250814 01:53:42.398447 2499 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250814 01:53:42.402956 2499 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "9cc9fae9d9eb4e769fad38393e2ed0d7"
format_stamp: "Formatted at 2025-08-14 01:53:42 on dist-test-slave-30wj"
I20250814 01:53:42.404078 2499 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "9cc9fae9d9eb4e769fad38393e2ed0d7"
format_stamp: "Formatted at 2025-08-14 01:53:42 on dist-test-slave-30wj"
I20250814 01:53:42.411589 2499 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.005s sys 0.003s
I20250814 01:53:42.417272 2515 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250814 01:53:42.418324 2499 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.000s
I20250814 01:53:42.418660 2499 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
uuid: "9cc9fae9d9eb4e769fad38393e2ed0d7"
format_stamp: "Formatted at 2025-08-14 01:53:42 on dist-test-slave-30wj"
I20250814 01:53:42.418993 2499 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250814 01:53:42.476881 2499 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250814 01:53:42.478413 2499 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250814 01:53:42.478865 2499 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250814 01:53:42.545197 2499 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.190:45771
I20250814 01:53:42.545279 2566 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.190:45771 every 8 connection(s)
I20250814 01:53:42.547842 2499 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250814 01:53:42.552876 2567 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250814 01:53:42.554838 426 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu as pid 2499
I20250814 01:53:42.555320 426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250814 01:53:42.573683 2567 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 9cc9fae9d9eb4e769fad38393e2ed0d7: Bootstrap starting.
I20250814 01:53:42.578926 2567 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 9cc9fae9d9eb4e769fad38393e2ed0d7: Neither blocks nor log segments found. Creating new log.
I20250814 01:53:42.580559 2567 log.cc:826] T 00000000000000000000000000000000 P 9cc9fae9d9eb4e769fad38393e2ed0d7: Log is configured to *not* fsync() on all Append() calls
I20250814 01:53:42.584811 2567 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 9cc9fae9d9eb4e769fad38393e2ed0d7: No bootstrap required, opened a new log
I20250814 01:53:42.602893 2567 raft_consensus.cc:357] T 00000000000000000000000000000000 P 9cc9fae9d9eb4e769fad38393e2ed0d7 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9cc9fae9d9eb4e769fad38393e2ed0d7" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 45771 } }
I20250814 01:53:42.603559 2567 raft_consensus.cc:383] T 00000000000000000000000000000000 P 9cc9fae9d9eb4e769fad38393e2ed0d7 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250814 01:53:42.603770 2567 raft_consensus.cc:738] T 00000000000000000000000000000000 P 9cc9fae9d9eb4e769fad38393e2ed0d7 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 9cc9fae9d9eb4e769fad38393e2ed0d7, State: Initialized, Role: FOLLOWER
I20250814 01:53:42.604377 2567 consensus_queue.cc:260] T 00000000000000000000000000000000 P 9cc9fae9d9eb4e769fad38393e2ed0d7 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9cc9fae9d9eb4e769fad38393e2ed0d7" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 45771 } }
I20250814 01:53:42.604867 2567 raft_consensus.cc:397] T 00000000000000000000000000000000 P 9cc9fae9d9eb4e769fad38393e2ed0d7 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250814 01:53:42.605091 2567 raft_consensus.cc:491] T 00000000000000000000000000000000 P 9cc9fae9d9eb4e769fad38393e2ed0d7 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250814 01:53:42.605427 2567 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 9cc9fae9d9eb4e769fad38393e2ed0d7 [term 0 FOLLOWER]: Advancing to term 1
I20250814 01:53:42.609431 2567 raft_consensus.cc:513] T 00000000000000000000000000000000 P 9cc9fae9d9eb4e769fad38393e2ed0d7 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9cc9fae9d9eb4e769fad38393e2ed0d7" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 45771 } }
I20250814 01:53:42.610208 2567 leader_election.cc:304] T 00000000000000000000000000000000 P 9cc9fae9d9eb4e769fad38393e2ed0d7 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 9cc9fae9d9eb4e769fad38393e2ed0d7; no voters:
I20250814 01:53:42.611820 2567 leader_election.cc:290] T 00000000000000000000000000000000 P 9cc9fae9d9eb4e769fad38393e2ed0d7 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250814 01:53:42.612504 2572 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 9cc9fae9d9eb4e769fad38393e2ed0d7 [term 1 FOLLOWER]: Leader election won for term 1
I20250814 01:53:42.614493 2572 raft_consensus.cc:695] T 00000000000000000000000000000000 P 9cc9fae9d9eb4e769fad38393e2ed0d7 [term 1 LEADER]: Becoming Leader. State: Replica: 9cc9fae9d9eb4e769fad38393e2ed0d7, State: Running, Role: LEADER
I20250814 01:53:42.615231 2572 consensus_queue.cc:237] T 00000000000000000000000000000000 P 9cc9fae9d9eb4e769fad38393e2ed0d7 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9cc9fae9d9eb4e769fad38393e2ed0d7" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 45771 } }
I20250814 01:53:42.616259 2567 sys_catalog.cc:564] T 00000000000000000000000000000000 P 9cc9fae9d9eb4e769fad38393e2ed0d7 [sys.catalog]: configured and running, proceeding with master startup.
I20250814 01:53:42.624675 2573 sys_catalog.cc:455] T 00000000000000000000000000000000 P 9cc9fae9d9eb4e769fad38393e2ed0d7 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "9cc9fae9d9eb4e769fad38393e2ed0d7" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9cc9fae9d9eb4e769fad38393e2ed0d7" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 45771 } } }
I20250814 01:53:42.624850 2574 sys_catalog.cc:455] T 00000000000000000000000000000000 P 9cc9fae9d9eb4e769fad38393e2ed0d7 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 9cc9fae9d9eb4e769fad38393e2ed0d7. Latest consensus state: current_term: 1 leader_uuid: "9cc9fae9d9eb4e769fad38393e2ed0d7" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9cc9fae9d9eb4e769fad38393e2ed0d7" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 45771 } } }
I20250814 01:53:42.625738 2574 sys_catalog.cc:458] T 00000000000000000000000000000000 P 9cc9fae9d9eb4e769fad38393e2ed0d7 [sys.catalog]: This master's current role is: LEADER
I20250814 01:53:42.625746 2573 sys_catalog.cc:458] T 00000000000000000000000000000000 P 9cc9fae9d9eb4e769fad38393e2ed0d7 [sys.catalog]: This master's current role is: LEADER
I20250814 01:53:42.632304 2580 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250814 01:53:42.643304 2580 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250814 01:53:42.660620 2580 catalog_manager.cc:1349] Generated new cluster ID: edb8bcda65314859bf0e3ac5284e7ff8
I20250814 01:53:42.660887 2580 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250814 01:53:42.683766 2580 catalog_manager.cc:1372] Generated new certificate authority record
I20250814 01:53:42.685165 2580 catalog_manager.cc:1506] Loading token signing keys...
I20250814 01:53:42.698299 2580 catalog_manager.cc:5955] T 00000000000000000000000000000000 P 9cc9fae9d9eb4e769fad38393e2ed0d7: Generated new TSK 0
I20250814 01:53:42.699167 2580 catalog_manager.cc:1516] Initializing in-progress tserver states...
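Every prefixed line in this run follows the glog format: severity letter, date, time with microsecond precision, thread id, source file and line number, then the message (continuation output such as flag dumps and the FS layout reports carries no prefix). Below is a minimal Python sketch for splitting that prefix apart when post-processing a log like this one; the regex, the function name, and the field names are assumptions of this sketch, not part of the test harness.

    import re
    from datetime import datetime

    # glog prefix: I20250814 01:53:42.616259 2567 sys_catalog.cc:564] message...
    GLOG_RE = re.compile(
        r'^(?P<sev>[IWEF])(?P<date>\d{8}) (?P<time>\d{2}:\d{2}:\d{2}\.\d{6}) +'
        r'(?P<tid>\d+) (?P<file>[^:]+):(?P<line>\d+)\] (?P<msg>.*)$')

    def parse_glog_line(line):
        """Return a dict of glog fields, or None for continuation lines."""
        m = GLOG_RE.match(line)
        if not m:
            return None  # e.g. flag dumps, FS layout report bodies
        d = m.groupdict()
        d['timestamp'] = datetime.strptime(d['date'] + d['time'],
                                           '%Y%m%d%H:%M:%S.%f')
        return d

    example = ('I20250814 01:53:42.616259 2567 sys_catalog.cc:564] '
               'configured and running, proceeding with master startup.')
    print(parse_glog_line(example)['msg'])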
I20250814 01:53:42.710713 426 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
/tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.129:0
--local_ip_for_outbound_sockets=127.0.106.129
--webserver_interface=127.0.106.129
--webserver_port=0
--tserver_master_addrs=127.0.106.190:45771
--builtin_ntp_servers=127.0.106.148:35925
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
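The invocation above is a single argv: the kudu binary, a block of gflags shared by every daemon in the mini cluster, the subcommand pair tserver run, then the per-daemon flags. The following Python sketch shows how an equivalent argv could be assembled from flag dictionaries; build_argv and the dictionaries are hypothetical illustrations, not the external_mini_cluster API, though the flag names and values are copied from the log.

    def build_argv(binary, shared_flags, subcommand, daemon_flags):
        # gflags are passed as --name=value; boolean flags may be passed bare,
        # e.g. --never_fsync in the command dump above.
        argv = [binary]
        for name, value in shared_flags.items():
            argv.append('--%s' % name if value is True else '--%s=%s' % (name, value))
        argv.extend(subcommand)
        for name, value in daemon_flags.items():
            argv.append('--%s=%s' % (name, value))
        return argv

    shared_flags = {'block_manager': 'log', 'never_fsync': True,
                    'enable_minidumps': 'false', 'redact': 'none',
                    'metrics_log_interval_ms': 1000}
    daemon_flags = {'rpc_bind_addresses': '127.0.106.129:0',
                    'tserver_master_addrs': '127.0.106.190:45771',
                    'time_source': 'builtin'}
    argv = build_argv('/tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu',
                      shared_flags, ['tserver', 'run'], daemon_flags)
    print('\n'.join(argv))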
W20250814 01:53:43.029074 2591 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:53:43.029610 2591 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250814 01:53:43.030125 2591 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:53:43.061770 2591 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250814 01:53:43.062621 2591 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.129
I20250814 01:53:43.098373 2591 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:35925
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.129:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.0.106.129
--webserver_port=0
--tserver_master_addrs=127.0.106.190:45771
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.129
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 6307ab29b78ebc064971e01ca1da31590298dcda
build type FASTDEBUG
built by None at 14 Aug 2025 01:43:21 UTC on 5fd53c4cbb9d
build id 7557
TSAN enabled
I20250814 01:53:43.099655 2591 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250814 01:53:43.101250 2591 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250814 01:53:43.113487 2597 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:53:44.516139 2596 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 2591
W20250814 01:53:43.114413 2598 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:53:44.718022 2596 kernel_stack_watchdog.cc:198] Thread 2591 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:642 for 398ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250814 01:53:44.719698 2599 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1603 milliseconds
W20250814 01:53:44.719784 2591 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.604s user 0.001s sys 0.000s
W20250814 01:53:44.720710 2591 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.605s user 0.002s sys 0.001s
I20250814 01:53:44.722002 2591 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
W20250814 01:53:44.722075 2601 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250814 01:53:44.724843 2591 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250814 01:53:44.727265 2591 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250814 01:53:44.728715 2591 hybrid_clock.cc:648] HybridClock initialized: now 1755136424728652 us; error 74 us; skew 500 ppm
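Reading the line above: "error 74 us" is the clock's current maximum error bound, and "skew 500 ppm" bounds how quickly that error can grow between synchronizations (500 ppm is 500 microseconds of possible drift per second of wall time). The back-of-the-envelope Python sketch below restates that interpretation; it is an editor's reading of the message, not harness code, and the function name is made up.

    def max_error_after(initial_error_us, skew_ppm, elapsed_s):
        """Worst-case clock error bound after elapsed_s seconds without a sync.

        Assumes error grows linearly at the skew rate: 500 ppm means up to
        500 us of drift per second of elapsed wall time.
        """
        return initial_error_us + skew_ppm * elapsed_s  # ppm * seconds == us

    # Values from the ts-0 log line above: error 74 us, skew 500 ppm.
    print(max_error_after(74, 500, 1.0))   # -> 574.0 us after one second
    print(max_error_after(74, 500, 0.1))   # -> 124.0 us after 100 ms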
I20250814 01:53:44.729496 2591 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250814 01:53:44.736052 2591 webserver.cc:480] Webserver started at http://127.0.106.129:46123/ using document root <none> and password file <none>
I20250814 01:53:44.736959 2591 fs_manager.cc:362] Metadata directory not provided
I20250814 01:53:44.737154 2591 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250814 01:53:44.737596 2591 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250814 01:53:44.741994 2591 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "0f0abf8129f949eda89695abe08bb177"
format_stamp: "Formatted at 2025-08-14 01:53:44 on dist-test-slave-30wj"
I20250814 01:53:44.743059 2591 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "0f0abf8129f949eda89695abe08bb177"
format_stamp: "Formatted at 2025-08-14 01:53:44 on dist-test-slave-30wj"
I20250814 01:53:44.750326 2591 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.007s sys 0.000s
I20250814 01:53:44.756181 2607 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250814 01:53:44.757246 2591 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.005s sys 0.001s
I20250814 01:53:44.757563 2591 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "0f0abf8129f949eda89695abe08bb177"
format_stamp: "Formatted at 2025-08-14 01:53:44 on dist-test-slave-30wj"
I20250814 01:53:44.757908 2591 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250814 01:53:44.813548 2591 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250814 01:53:44.815076 2591 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250814 01:53:44.815496 2591 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250814 01:53:44.818267 2591 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250814 01:53:44.822733 2591 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250814 01:53:44.822950 2591 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250814 01:53:44.823186 2591 ts_tablet_manager.cc:610] Registered 0 tablets
I20250814 01:53:44.823341 2591 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250814 01:53:44.981918 2591 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.129:40589
I20250814 01:53:44.982023 2719 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.129:40589 every 8 connection(s)
I20250814 01:53:44.984514 2591 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250814 01:53:44.986271 426 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu as pid 2591
I20250814 01:53:44.986733 426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal/instance
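This server was started with --rpc_bind_addresses=127.0.106.129:0 and --rpc_server_allow_ephemeral_ports: binding to port 0 asks the kernel for a free ephemeral port, which is why the startup log reports a concrete port (40589) that never appeared in the flags. The stdlib-only Python sketch below demonstrates the same bind-then-read-back pattern at the OS level; it is not the Kudu RPC server.

    import socket

    # Bind to port 0 (as the tablet server does with --rpc_bind_addresses=...:0)
    # and ask the kernel which ephemeral port it actually assigned.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(('127.0.0.1', 0))   # plain loopback here; the test uses 127.0.106.129
    sock.listen(128)              # arbitrary backlog for the sketch
    host, port = sock.getsockname()
    print('RPC server started. Bound to: %s:%d' % (host, port))
    sock.close()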
I20250814 01:53:45.001516 426 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
/tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.130:0
--local_ip_for_outbound_sockets=127.0.106.130
--webserver_interface=127.0.106.130
--webserver_port=0
--tserver_master_addrs=127.0.106.190:45771
--builtin_ntp_servers=127.0.106.148:35925
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250814 01:53:45.023265 2720 heartbeater.cc:344] Connected to a master server at 127.0.106.190:45771
I20250814 01:53:45.023828 2720 heartbeater.cc:461] Registering TS with master...
I20250814 01:53:45.025213 2720 heartbeater.cc:507] Master 127.0.106.190:45771 requested a full tablet report, sending...
I20250814 01:53:45.028523 2532 ts_manager.cc:194] Registered new tserver with Master: 0f0abf8129f949eda89695abe08bb177 (127.0.106.129:40589)
I20250814 01:53:45.031332 2532 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.129:60201
W20250814 01:53:45.304189 2724 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:53:45.304687 2724 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250814 01:53:45.305163 2724 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:53:45.336179 2724 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250814 01:53:45.337023 2724 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.130
I20250814 01:53:45.371376 2724 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:35925
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.130:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.0.106.130
--webserver_port=0
--tserver_master_addrs=127.0.106.190:45771
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.130
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 6307ab29b78ebc064971e01ca1da31590298dcda
build type FASTDEBUG
built by None at 14 Aug 2025 01:43:21 UTC on 5fd53c4cbb9d
build id 7557
TSAN enabled
I20250814 01:53:45.372670 2724 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250814 01:53:45.374315 2724 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250814 01:53:45.386308 2730 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250814 01:53:46.035562 2720 heartbeater.cc:499] Master 127.0.106.190:45771 was elected leader, sending a full tablet report...
W20250814 01:53:45.386762 2731 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:53:46.635157 2733 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:53:46.637205 2732 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1245 milliseconds
W20250814 01:53:46.638497 2724 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.252s user 0.414s sys 0.828s
W20250814 01:53:46.638849 2724 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.252s user 0.414s sys 0.828s
I20250814 01:53:46.639113 2724 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250814 01:53:46.640519 2724 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250814 01:53:46.643175 2724 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250814 01:53:46.644685 2724 hybrid_clock.cc:648] HybridClock initialized: now 1755136426644644 us; error 32 us; skew 500 ppm
I20250814 01:53:46.645869 2724 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250814 01:53:46.653792 2724 webserver.cc:480] Webserver started at http://127.0.106.130:42301/ using document root <none> and password file <none>
I20250814 01:53:46.654786 2724 fs_manager.cc:362] Metadata directory not provided
I20250814 01:53:46.654995 2724 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250814 01:53:46.655426 2724 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250814 01:53:46.659765 2724 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data/instance:
uuid: "418386e7370d48febe9ad892cf65e076"
format_stamp: "Formatted at 2025-08-14 01:53:46 on dist-test-slave-30wj"
I20250814 01:53:46.660813 2724 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal/instance:
uuid: "418386e7370d48febe9ad892cf65e076"
format_stamp: "Formatted at 2025-08-14 01:53:46 on dist-test-slave-30wj"
I20250814 01:53:46.668242 2724 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.005s sys 0.005s
I20250814 01:53:46.674031 2740 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250814 01:53:46.675143 2724 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.001s
I20250814 01:53:46.675463 2724 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "418386e7370d48febe9ad892cf65e076"
format_stamp: "Formatted at 2025-08-14 01:53:46 on dist-test-slave-30wj"
I20250814 01:53:46.675771 2724 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250814 01:53:46.733314 2724 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250814 01:53:46.734863 2724 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250814 01:53:46.735267 2724 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250814 01:53:46.737648 2724 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250814 01:53:46.741590 2724 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250814 01:53:46.741827 2724 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.001s sys 0.000s
I20250814 01:53:46.742075 2724 ts_tablet_manager.cc:610] Registered 0 tablets
I20250814 01:53:46.742220 2724 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250814 01:53:46.866164 2724 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.130:42331
I20250814 01:53:46.866266 2852 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.130:42331 every 8 connection(s)
I20250814 01:53:46.868782 2724 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250814 01:53:46.877733 426 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu as pid 2724
I20250814 01:53:46.878110 426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal/instance
I20250814 01:53:46.884431 426 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
/tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.131:0
--local_ip_for_outbound_sockets=127.0.106.131
--webserver_interface=127.0.106.131
--webserver_port=0
--tserver_master_addrs=127.0.106.190:45771
--builtin_ntp_servers=127.0.106.148:35925
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250814 01:53:46.889307 2853 heartbeater.cc:344] Connected to a master server at 127.0.106.190:45771
I20250814 01:53:46.889783 2853 heartbeater.cc:461] Registering TS with master...
I20250814 01:53:46.890745 2853 heartbeater.cc:507] Master 127.0.106.190:45771 requested a full tablet report, sending...
I20250814 01:53:46.892876 2532 ts_manager.cc:194] Registered new tserver with Master: 418386e7370d48febe9ad892cf65e076 (127.0.106.130:42331)
I20250814 01:53:46.894466 2532 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.130:56037
W20250814 01:53:47.178326 2857 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:53:47.178787 2857 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250814 01:53:47.179229 2857 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:53:47.210305 2857 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250814 01:53:47.211109 2857 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.131
I20250814 01:53:47.245246 2857 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:35925
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.131:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.0.106.131
--webserver_port=0
--tserver_master_addrs=127.0.106.190:45771
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.131
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 6307ab29b78ebc064971e01ca1da31590298dcda
build type FASTDEBUG
built by None at 14 Aug 2025 01:43:21 UTC on 5fd53c4cbb9d
build id 7557
TSAN enabled
I20250814 01:53:47.246483 2857 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250814 01:53:47.248169 2857 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250814 01:53:47.259415 2863 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250814 01:53:47.898059 2853 heartbeater.cc:499] Master 127.0.106.190:45771 was elected leader, sending a full tablet report...
W20250814 01:53:48.662411 2862 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 2857
W20250814 01:53:48.746996 2857 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.487s user 0.616s sys 0.871s
W20250814 01:53:48.747306 2857 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.488s user 0.616s sys 0.871s
W20250814 01:53:47.260399 2864 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:53:48.749305 2866 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:53:48.752146 2865 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1487 milliseconds
I20250814 01:53:48.752198 2857 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250814 01:53:48.753446 2857 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250814 01:53:48.755425 2857 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250814 01:53:48.756752 2857 hybrid_clock.cc:648] HybridClock initialized: now 1755136428756713 us; error 47 us; skew 500 ppm
I20250814 01:53:48.757510 2857 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250814 01:53:48.763168 2857 webserver.cc:480] Webserver started at http://127.0.106.131:42921/ using document root <none> and password file <none>
I20250814 01:53:48.764071 2857 fs_manager.cc:362] Metadata directory not provided
I20250814 01:53:48.764293 2857 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250814 01:53:48.764735 2857 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250814 01:53:48.769212 2857 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data/instance:
uuid: "9adfee0b03004223b9042639f0cde681"
format_stamp: "Formatted at 2025-08-14 01:53:48 on dist-test-slave-30wj"
I20250814 01:53:48.770349 2857 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal/instance:
uuid: "9adfee0b03004223b9042639f0cde681"
format_stamp: "Formatted at 2025-08-14 01:53:48 on dist-test-slave-30wj"
I20250814 01:53:48.777338 2857 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.005s sys 0.001s
I20250814 01:53:48.782744 2873 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250814 01:53:48.783780 2857 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.003s sys 0.002s
I20250814 01:53:48.784090 2857 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "9adfee0b03004223b9042639f0cde681"
format_stamp: "Formatted at 2025-08-14 01:53:48 on dist-test-slave-30wj"
I20250814 01:53:48.784428 2857 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250814 01:53:48.836449 2857 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250814 01:53:48.837904 2857 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250814 01:53:48.838315 2857 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250814 01:53:48.840723 2857 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250814 01:53:48.844635 2857 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250814 01:53:48.844834 2857 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250814 01:53:48.845060 2857 ts_tablet_manager.cc:610] Registered 0 tablets
I20250814 01:53:48.845212 2857 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250814 01:53:48.972072 2857 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.131:42743
I20250814 01:53:48.972172 2985 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.131:42743 every 8 connection(s)
I20250814 01:53:48.974540 2857 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250814 01:53:48.979404 426 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu as pid 2857
I20250814 01:53:48.979883 426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal/instance
I20250814 01:53:49.009214 2986 heartbeater.cc:344] Connected to a master server at 127.0.106.190:45771
I20250814 01:53:49.009647 2986 heartbeater.cc:461] Registering TS with master...
I20250814 01:53:49.010661 2986 heartbeater.cc:507] Master 127.0.106.190:45771 requested a full tablet report, sending...
I20250814 01:53:49.012581 2532 ts_manager.cc:194] Registered new tserver with Master: 9adfee0b03004223b9042639f0cde681 (127.0.106.131:42743)
I20250814 01:53:49.013800 2532 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.131:59743
I20250814 01:53:49.014915 426 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250814 01:53:49.042840 2532 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:54864:
name: "TestTable"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
owner: "alice"
W20250814 01:53:49.061767 2532 catalog_manager.cc:6944] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table TestTable in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
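The warning above encodes a simple invariant: to re-replicate a replica of an N-replica tablet after one tablet server fails, the cluster needs at least N + 1 live tablet servers. A one-line Python check restating that arithmetic with the values from this request (num_replicas: 3, three registered tablet servers); the function name is made up for illustration.

    def can_rereplicate(num_replicas, live_tservers):
        """True if a replica lost to a single server failure could be re-replicated."""
        return live_tservers >= num_replicas + 1

    num_replicas, live_tservers = 3, 3
    needed = num_replicas + 1
    print(can_rereplicate(num_replicas, live_tservers))   # False
    print('%d tablet servers would be needed, %d are available'
          % (needed, live_tservers))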
I20250814 01:53:49.117831 2655 tablet_service.cc:1468] Processing CreateTablet for tablet 9d2aa96cdc7548dab5c0ba44482ed3c3 (DEFAULT_TABLE table=TestTable [id=18ac2ff0f5dc426eaa68416092d0cbca]), partition=RANGE (key) PARTITION UNBOUNDED
I20250814 01:53:49.119823 2655 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 9d2aa96cdc7548dab5c0ba44482ed3c3. 1 dirs total, 0 dirs full, 0 dirs failed
I20250814 01:53:49.125422 2788 tablet_service.cc:1468] Processing CreateTablet for tablet 9d2aa96cdc7548dab5c0ba44482ed3c3 (DEFAULT_TABLE table=TestTable [id=18ac2ff0f5dc426eaa68416092d0cbca]), partition=RANGE (key) PARTITION UNBOUNDED
I20250814 01:53:49.127270 2788 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 9d2aa96cdc7548dab5c0ba44482ed3c3. 1 dirs total, 0 dirs full, 0 dirs failed
I20250814 01:53:49.127425 2921 tablet_service.cc:1468] Processing CreateTablet for tablet 9d2aa96cdc7548dab5c0ba44482ed3c3 (DEFAULT_TABLE table=TestTable [id=18ac2ff0f5dc426eaa68416092d0cbca]), partition=RANGE (key) PARTITION UNBOUNDED
I20250814 01:53:49.129284 2921 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 9d2aa96cdc7548dab5c0ba44482ed3c3. 1 dirs total, 0 dirs full, 0 dirs failed
I20250814 01:53:49.147353 3005 tablet_bootstrap.cc:492] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 0f0abf8129f949eda89695abe08bb177: Bootstrap starting.
I20250814 01:53:49.149252 3006 tablet_bootstrap.cc:492] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 418386e7370d48febe9ad892cf65e076: Bootstrap starting.
I20250814 01:53:49.155709 3005 tablet_bootstrap.cc:654] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 0f0abf8129f949eda89695abe08bb177: Neither blocks nor log segments found. Creating new log.
I20250814 01:53:49.156342 3006 tablet_bootstrap.cc:654] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 418386e7370d48febe9ad892cf65e076: Neither blocks nor log segments found. Creating new log.
I20250814 01:53:49.157022 3007 tablet_bootstrap.cc:492] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 9adfee0b03004223b9042639f0cde681: Bootstrap starting.
I20250814 01:53:49.158100 3005 log.cc:826] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 0f0abf8129f949eda89695abe08bb177: Log is configured to *not* fsync() on all Append() calls
I20250814 01:53:49.158695 3006 log.cc:826] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 418386e7370d48febe9ad892cf65e076: Log is configured to *not* fsync() on all Append() calls
I20250814 01:53:49.163919 3007 tablet_bootstrap.cc:654] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 9adfee0b03004223b9042639f0cde681: Neither blocks nor log segments found. Creating new log.
I20250814 01:53:49.164508 3005 tablet_bootstrap.cc:492] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 0f0abf8129f949eda89695abe08bb177: No bootstrap required, opened a new log
I20250814 01:53:49.164506 3006 tablet_bootstrap.cc:492] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 418386e7370d48febe9ad892cf65e076: No bootstrap required, opened a new log
I20250814 01:53:49.164994 3006 ts_tablet_manager.cc:1397] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 418386e7370d48febe9ad892cf65e076: Time spent bootstrapping tablet: real 0.016s user 0.013s sys 0.000s
I20250814 01:53:49.165006 3005 ts_tablet_manager.cc:1397] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 0f0abf8129f949eda89695abe08bb177: Time spent bootstrapping tablet: real 0.019s user 0.008s sys 0.007s
I20250814 01:53:49.166182 3007 log.cc:826] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 9adfee0b03004223b9042639f0cde681: Log is configured to *not* fsync() on all Append() calls
I20250814 01:53:49.170739 3007 tablet_bootstrap.cc:492] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 9adfee0b03004223b9042639f0cde681: No bootstrap required, opened a new log
I20250814 01:53:49.171133 3007 ts_tablet_manager.cc:1397] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 9adfee0b03004223b9042639f0cde681: Time spent bootstrapping tablet: real 0.015s user 0.008s sys 0.004s
I20250814 01:53:49.188324 3007 raft_consensus.cc:357] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 9adfee0b03004223b9042639f0cde681 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0f0abf8129f949eda89695abe08bb177" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 40589 } } peers { permanent_uuid: "9adfee0b03004223b9042639f0cde681" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 42743 } } peers { permanent_uuid: "418386e7370d48febe9ad892cf65e076" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 42331 } }
I20250814 01:53:49.189010 3007 raft_consensus.cc:383] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 9adfee0b03004223b9042639f0cde681 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250814 01:53:49.189384 3007 raft_consensus.cc:738] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 9adfee0b03004223b9042639f0cde681 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 9adfee0b03004223b9042639f0cde681, State: Initialized, Role: FOLLOWER
I20250814 01:53:49.190228 3007 consensus_queue.cc:260] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 9adfee0b03004223b9042639f0cde681 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0f0abf8129f949eda89695abe08bb177" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 40589 } } peers { permanent_uuid: "9adfee0b03004223b9042639f0cde681" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 42743 } } peers { permanent_uuid: "418386e7370d48febe9ad892cf65e076" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 42331 } }
I20250814 01:53:49.190636 3006 raft_consensus.cc:357] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 418386e7370d48febe9ad892cf65e076 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0f0abf8129f949eda89695abe08bb177" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 40589 } } peers { permanent_uuid: "9adfee0b03004223b9042639f0cde681" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 42743 } } peers { permanent_uuid: "418386e7370d48febe9ad892cf65e076" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 42331 } }
I20250814 01:53:49.190636 3005 raft_consensus.cc:357] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 0f0abf8129f949eda89695abe08bb177 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0f0abf8129f949eda89695abe08bb177" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 40589 } } peers { permanent_uuid: "9adfee0b03004223b9042639f0cde681" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 42743 } } peers { permanent_uuid: "418386e7370d48febe9ad892cf65e076" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 42331 } }
I20250814 01:53:49.191510 3005 raft_consensus.cc:383] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 0f0abf8129f949eda89695abe08bb177 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250814 01:53:49.191589 3006 raft_consensus.cc:383] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 418386e7370d48febe9ad892cf65e076 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250814 01:53:49.191884 3005 raft_consensus.cc:738] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 0f0abf8129f949eda89695abe08bb177 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 0f0abf8129f949eda89695abe08bb177, State: Initialized, Role: FOLLOWER
I20250814 01:53:49.191893 3006 raft_consensus.cc:738] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 418386e7370d48febe9ad892cf65e076 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 418386e7370d48febe9ad892cf65e076, State: Initialized, Role: FOLLOWER
I20250814 01:53:49.193094 2986 heartbeater.cc:499] Master 127.0.106.190:45771 was elected leader, sending a full tablet report...
I20250814 01:53:49.192879 3006 consensus_queue.cc:260] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 418386e7370d48febe9ad892cf65e076 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0f0abf8129f949eda89695abe08bb177" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 40589 } } peers { permanent_uuid: "9adfee0b03004223b9042639f0cde681" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 42743 } } peers { permanent_uuid: "418386e7370d48febe9ad892cf65e076" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 42331 } }
I20250814 01:53:49.194300 3007 ts_tablet_manager.cc:1428] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 9adfee0b03004223b9042639f0cde681: Time spent starting tablet: real 0.023s user 0.024s sys 0.000s
I20250814 01:53:49.193818 3005 consensus_queue.cc:260] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 0f0abf8129f949eda89695abe08bb177 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0f0abf8129f949eda89695abe08bb177" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 40589 } } peers { permanent_uuid: "9adfee0b03004223b9042639f0cde681" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 42743 } } peers { permanent_uuid: "418386e7370d48febe9ad892cf65e076" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 42331 } }
I20250814 01:53:49.196975 3006 ts_tablet_manager.cc:1428] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 418386e7370d48febe9ad892cf65e076: Time spent starting tablet: real 0.032s user 0.026s sys 0.004s
I20250814 01:53:49.201067 3005 ts_tablet_manager.cc:1428] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 0f0abf8129f949eda89695abe08bb177: Time spent starting tablet: real 0.036s user 0.024s sys 0.008s
W20250814 01:53:49.228538 2987 tablet.cc:2378] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 9adfee0b03004223b9042639f0cde681: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250814 01:53:49.269389 2721 tablet.cc:2378] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 0f0abf8129f949eda89695abe08bb177: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250814 01:53:49.375211 2854 tablet.cc:2378] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 418386e7370d48febe9ad892cf65e076: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250814 01:53:49.533830 3013 raft_consensus.cc:491] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 0f0abf8129f949eda89695abe08bb177 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250814 01:53:49.534478 3013 raft_consensus.cc:513] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 0f0abf8129f949eda89695abe08bb177 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0f0abf8129f949eda89695abe08bb177" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 40589 } } peers { permanent_uuid: "9adfee0b03004223b9042639f0cde681" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 42743 } } peers { permanent_uuid: "418386e7370d48febe9ad892cf65e076" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 42331 } }
I20250814 01:53:49.536749 3013 leader_election.cc:290] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 0f0abf8129f949eda89695abe08bb177 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 9adfee0b03004223b9042639f0cde681 (127.0.106.131:42743), 418386e7370d48febe9ad892cf65e076 (127.0.106.130:42331)
I20250814 01:53:49.548707 2808 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "9d2aa96cdc7548dab5c0ba44482ed3c3" candidate_uuid: "0f0abf8129f949eda89695abe08bb177" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "418386e7370d48febe9ad892cf65e076" is_pre_election: true
I20250814 01:53:49.549233 2941 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "9d2aa96cdc7548dab5c0ba44482ed3c3" candidate_uuid: "0f0abf8129f949eda89695abe08bb177" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "9adfee0b03004223b9042639f0cde681" is_pre_election: true
I20250814 01:53:49.549592 2808 raft_consensus.cc:2466] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 418386e7370d48febe9ad892cf65e076 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 0f0abf8129f949eda89695abe08bb177 in term 0.
I20250814 01:53:49.550010 2941 raft_consensus.cc:2466] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 9adfee0b03004223b9042639f0cde681 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 0f0abf8129f949eda89695abe08bb177 in term 0.
I20250814 01:53:49.551066 2608 leader_election.cc:304] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 0f0abf8129f949eda89695abe08bb177 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 0f0abf8129f949eda89695abe08bb177, 418386e7370d48febe9ad892cf65e076; no voters:
I20250814 01:53:49.551765 3013 raft_consensus.cc:2802] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 0f0abf8129f949eda89695abe08bb177 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250814 01:53:49.552101 3013 raft_consensus.cc:491] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 0f0abf8129f949eda89695abe08bb177 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250814 01:53:49.552362 3013 raft_consensus.cc:3058] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 0f0abf8129f949eda89695abe08bb177 [term 0 FOLLOWER]: Advancing to term 1
I20250814 01:53:49.556677 3013 raft_consensus.cc:513] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 0f0abf8129f949eda89695abe08bb177 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0f0abf8129f949eda89695abe08bb177" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 40589 } } peers { permanent_uuid: "9adfee0b03004223b9042639f0cde681" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 42743 } } peers { permanent_uuid: "418386e7370d48febe9ad892cf65e076" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 42331 } }
I20250814 01:53:49.557998 3013 leader_election.cc:290] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 0f0abf8129f949eda89695abe08bb177 [CANDIDATE]: Term 1 election: Requested vote from peers 9adfee0b03004223b9042639f0cde681 (127.0.106.131:42743), 418386e7370d48febe9ad892cf65e076 (127.0.106.130:42331)
I20250814 01:53:49.558821 2941 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "9d2aa96cdc7548dab5c0ba44482ed3c3" candidate_uuid: "0f0abf8129f949eda89695abe08bb177" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "9adfee0b03004223b9042639f0cde681"
I20250814 01:53:49.559028 2808 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "9d2aa96cdc7548dab5c0ba44482ed3c3" candidate_uuid: "0f0abf8129f949eda89695abe08bb177" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "418386e7370d48febe9ad892cf65e076"
I20250814 01:53:49.559211 2941 raft_consensus.cc:3058] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 9adfee0b03004223b9042639f0cde681 [term 0 FOLLOWER]: Advancing to term 1
I20250814 01:53:49.559531 2808 raft_consensus.cc:3058] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 418386e7370d48febe9ad892cf65e076 [term 0 FOLLOWER]: Advancing to term 1
I20250814 01:53:49.563335 2941 raft_consensus.cc:2466] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 9adfee0b03004223b9042639f0cde681 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 0f0abf8129f949eda89695abe08bb177 in term 1.
I20250814 01:53:49.564240 2610 leader_election.cc:304] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 0f0abf8129f949eda89695abe08bb177 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 0f0abf8129f949eda89695abe08bb177, 9adfee0b03004223b9042639f0cde681; no voters:
I20250814 01:53:49.564828 3013 raft_consensus.cc:2802] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 0f0abf8129f949eda89695abe08bb177 [term 1 FOLLOWER]: Leader election won for term 1
I20250814 01:53:49.565900 2808 raft_consensus.cc:2466] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 418386e7370d48febe9ad892cf65e076 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 0f0abf8129f949eda89695abe08bb177 in term 1.
I20250814 01:53:49.566915 3013 raft_consensus.cc:695] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 0f0abf8129f949eda89695abe08bb177 [term 1 LEADER]: Becoming Leader. State: Replica: 0f0abf8129f949eda89695abe08bb177, State: Running, Role: LEADER
I20250814 01:53:49.567777 3013 consensus_queue.cc:237] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 0f0abf8129f949eda89695abe08bb177 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0f0abf8129f949eda89695abe08bb177" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 40589 } } peers { permanent_uuid: "9adfee0b03004223b9042639f0cde681" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 42743 } } peers { permanent_uuid: "418386e7370d48febe9ad892cf65e076" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 42331 } }
I20250814 01:53:49.578390 2532 catalog_manager.cc:5582] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 0f0abf8129f949eda89695abe08bb177 reported cstate change: term changed from 0 to 1, leader changed from <none> to 0f0abf8129f949eda89695abe08bb177 (127.0.106.129). New cstate: current_term: 1 leader_uuid: "0f0abf8129f949eda89695abe08bb177" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "0f0abf8129f949eda89695abe08bb177" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 40589 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "9adfee0b03004223b9042639f0cde681" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 42743 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "418386e7370d48febe9ad892cf65e076" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 42331 } health_report { overall_health: UNKNOWN } } }
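The election summaries above ("received 2 responses out of 3 voters: 2 yes votes; 0 no votes") reflect simple majority-quorum arithmetic: with 3 voters the majority size is 2, so the candidate can declare victory as soon as two yes votes (including its own) are in hand. The following is only an illustrative sketch of that counting rule, not Kudu's leader_election.cc:

    #include <iostream>

    // Illustrative only: the majority-quorum rule implied by the election
    // summary lines above (3 voters, majority size 2).
    enum class ElectionResult { kUndecided, kWon, kLost };

    struct VoteCounter {
      int num_voters;     // total voters in the active Raft config
      int yes_votes = 0;  // includes the candidate's own vote
      int no_votes = 0;

      int MajoritySize() const { return num_voters / 2 + 1; }

      ElectionResult Decision() const {
        if (yes_votes >= MajoritySize()) return ElectionResult::kWon;
        if (no_votes >= MajoritySize()) return ElectionResult::kLost;
        return ElectionResult::kUndecided;  // keep waiting for responses
      }
    };

    int main() {
      VoteCounter counter{/*num_voters=*/3};
      counter.yes_votes = 2;  // candidate plus one granted vote
      std::cout << "majority size: " << counter.MajoritySize() << "\n";
      std::cout << "won: " << (counter.Decision() == ElectionResult::kWon) << "\n";
      return 0;
    }

With 3 voters this reports a majority size of 2 and a won election, matching the decisions logged for both the pre-election and the election above.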
I20250814 01:53:49.698632 426 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250814 01:53:49.701862 426 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 0f0abf8129f949eda89695abe08bb177 to finish bootstrapping
I20250814 01:53:49.714697 426 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 418386e7370d48febe9ad892cf65e076 to finish bootstrapping
I20250814 01:53:49.724493 426 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 9adfee0b03004223b9042639f0cde681 to finish bootstrapping
I20250814 01:53:50.155748 3034 consensus_queue.cc:1035] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 0f0abf8129f949eda89695abe08bb177 [LEADER]: Connected to new peer: Peer: permanent_uuid: "418386e7370d48febe9ad892cf65e076" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 42331 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250814 01:53:50.186524 3034 consensus_queue.cc:1035] T 9d2aa96cdc7548dab5c0ba44482ed3c3 P 0f0abf8129f949eda89695abe08bb177 [LEADER]: Connected to new peer: Peer: permanent_uuid: "9adfee0b03004223b9042639f0cde681" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 42743 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
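The "Status: LMP_MISMATCH, Next index: 1" entries above show the leader probing each new peer to find where their logs agree before streaming operations. As a rough, generic Raft-style sketch (an assumption about the technique, not Kudu's consensus_queue.cc), the leader backs the probe index off on each mismatch until the follower's log prefix matches:

    #include <cstdint>
    #include <iostream>
    #include <vector>

    // Generic Raft-style catch-up sketch: after a log-matching (LMP) mismatch
    // the leader lowers the follower's next index and probes again until the
    // prefixes match, then streams entries from that point.
    struct FollowerLog {
      std::vector<int64_t> term_at;  // term of each entry; index 0 holds entry 1

      bool PrefixMatches(int64_t prev_index, int64_t prev_term) const {
        if (prev_index == 0) return true;  // an empty prefix always matches
        return prev_index <= static_cast<int64_t>(term_at.size()) &&
               term_at[prev_index - 1] == prev_term;
      }
    };

    int64_t ResolveNextIndex(const std::vector<int64_t>& leader_term_at,
                             const FollowerLog& follower, int64_t next_index) {
      while (next_index > 1 &&
             !follower.PrefixMatches(next_index - 1,
                                     leader_term_at[next_index - 2])) {
        --next_index;  // each mismatch backs the probe off by one entry
      }
      return next_index;
    }

    int main() {
      std::vector<int64_t> leader = {1, 1, 2, 2};  // leader log terms, entries 1..4
      FollowerLog follower{{1, 1}};                // follower only has entries 1..2
      std::cout << ResolveNextIndex(leader, follower, 5) << "\n";  // prints 3
      return 0;
    }

In the log above both followers have empty logs (Last received: 0.0), so the probe settles immediately at next index 1 and replication proceeds from the first entry.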
W20250814 01:53:51.275595 2532 server_base.cc:1129] Unauthorized access attempt to method kudu.master.MasterService.RefreshAuthzCache from {username='slave'} at 127.0.0.1:54894
I20250814 01:53:52.306847 426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu with pid 2591
I20250814 01:53:52.327991 426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu with pid 2724
I20250814 01:53:52.348301 426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu with pid 2857
I20250814 01:53:52.367568 426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu with pid 2499
2025-08-14T01:53:52Z chronyd exiting
[ OK ] AdminCliTest.TestAuthzResetCacheNotAuthorized (11551 ms)
[ RUN ] AdminCliTest.TestRebuildTables
I20250814 01:53:52.417104 426 test_util.cc:276] Using random seed: -1953093323
I20250814 01:53:52.420945 426 ts_itest-base.cc:115] Starting cluster with:
I20250814 01:53:52.421105 426 ts_itest-base.cc:116] --------------
I20250814 01:53:52.421257 426 ts_itest-base.cc:117] 3 tablet servers
I20250814 01:53:52.421382 426 ts_itest-base.cc:118] 3 replicas per TS
I20250814 01:53:52.421515 426 ts_itest-base.cc:119] --------------
2025-08-14T01:53:52Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-08-14T01:53:52Z Disabled control of system clock
I20250814 01:53:52.454860 426 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
/tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.106.190:41981
--webserver_interface=127.0.106.190
--webserver_port=0
--builtin_ntp_servers=127.0.106.148:35399
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.0.106.190:41981 with env {}
W20250814 01:53:52.748009 3060 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:53:52.748560 3060 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250814 01:53:52.748975 3060 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:53:52.779822 3060 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250814 01:53:52.780123 3060 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250814 01:53:52.780325 3060 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250814 01:53:52.780527 3060 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250814 01:53:52.815421 3060 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:35399
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.0.106.190:41981
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.106.190:41981
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.0.106.190
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision 6307ab29b78ebc064971e01ca1da31590298dcda
build type FASTDEBUG
built by None at 14 Aug 2025 01:43:21 UTC on 5fd53c4cbb9d
build id 7557
TSAN enabled
I20250814 01:53:52.816644 3060 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250814 01:53:52.818217 3060 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250814 01:53:52.827983 3066 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:53:52.829406 3067 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:53:54.231778 3065 debug-util.cc:398] Leaking SignalData structure 0x7b0800037cc0 after lost signal to thread 3060
W20250814 01:53:54.373222 3060 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.544s user 0.000s sys 0.002s
W20250814 01:53:54.373942 3068 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1544 milliseconds
W20250814 01:53:54.374044 3060 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.545s user 0.000s sys 0.002s
I20250814 01:53:54.375290 3060 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
W20250814 01:53:54.375360 3069 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250814 01:53:54.378510 3060 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250814 01:53:54.380802 3060 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250814 01:53:54.382138 3060 hybrid_clock.cc:648] HybridClock initialized: now 1755136434382092 us; error 48 us; skew 500 ppm
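The HybridClock line above reports the current wall-clock reading together with a maximum error of 48 us and an assumed skew of 500 ppm. As a rough illustration of how such a bound is typically propagated between synchronizations with the time source (an assumption about the general technique, not Kudu's hybrid_clock.cc), the usable error grows linearly with the assumed skew:

    #include <cstdint>
    #include <iostream>

    // Illustrative error-bound propagation: starting from the error reported
    // at the last sync, the bound grows by skew_ppm microseconds per second
    // of elapsed time (500 ppm == 500 us of possible drift per second).
    int64_t MaxErrorUs(int64_t error_at_sync_us, int64_t elapsed_us,
                       int64_t skew_ppm) {
      return error_at_sync_us + elapsed_us * skew_ppm / 1000000;
    }

    int main() {
      // Values from the log line above: error 48 us, skew 500 ppm,
      // evaluated two seconds after the last synchronization.
      std::cout << MaxErrorUs(48, /*elapsed_us=*/2000000, 500) << " us\n";  // 1048
      return 0;
    }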
I20250814 01:53:54.382933 3060 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250814 01:53:54.388996 3060 webserver.cc:480] Webserver started at http://127.0.106.190:38807/ using document root <none> and password file <none>
I20250814 01:53:54.389902 3060 fs_manager.cc:362] Metadata directory not provided
I20250814 01:53:54.390094 3060 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250814 01:53:54.390497 3060 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250814 01:53:54.394838 3060 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "7f0da5f36e8940ea919b6eabe2ddbc00"
format_stamp: "Formatted at 2025-08-14 01:53:54 on dist-test-slave-30wj"
I20250814 01:53:54.395874 3060 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "7f0da5f36e8940ea919b6eabe2ddbc00"
format_stamp: "Formatted at 2025-08-14 01:53:54 on dist-test-slave-30wj"
I20250814 01:53:54.402740 3060 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.007s sys 0.001s
I20250814 01:53:54.408056 3076 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250814 01:53:54.409046 3060 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.005s sys 0.000s
I20250814 01:53:54.409329 3060 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
uuid: "7f0da5f36e8940ea919b6eabe2ddbc00"
format_stamp: "Formatted at 2025-08-14 01:53:54 on dist-test-slave-30wj"
I20250814 01:53:54.409631 3060 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250814 01:53:54.457297 3060 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250814 01:53:54.458829 3060 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250814 01:53:54.459230 3060 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250814 01:53:54.525856 3060 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.190:41981
I20250814 01:53:54.525938 3127 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.190:41981 every 8 connection(s)
I20250814 01:53:54.528561 3060 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250814 01:53:54.533579 3128 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250814 01:53:54.536123 426 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu as pid 3060
I20250814 01:53:54.536612 426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250814 01:53:54.554860 3128 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00: Bootstrap starting.
I20250814 01:53:54.560214 3128 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00: Neither blocks nor log segments found. Creating new log.
I20250814 01:53:54.562208 3128 log.cc:826] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00: Log is configured to *not* fsync() on all Append() calls
I20250814 01:53:54.566689 3128 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00: No bootstrap required, opened a new log
I20250814 01:53:54.583511 3128 raft_consensus.cc:357] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7f0da5f36e8940ea919b6eabe2ddbc00" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 41981 } }
I20250814 01:53:54.584173 3128 raft_consensus.cc:383] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250814 01:53:54.584410 3128 raft_consensus.cc:738] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 7f0da5f36e8940ea919b6eabe2ddbc00, State: Initialized, Role: FOLLOWER
I20250814 01:53:54.585068 3128 consensus_queue.cc:260] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7f0da5f36e8940ea919b6eabe2ddbc00" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 41981 } }
I20250814 01:53:54.585557 3128 raft_consensus.cc:397] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250814 01:53:54.585856 3128 raft_consensus.cc:491] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250814 01:53:54.586161 3128 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [term 0 FOLLOWER]: Advancing to term 1
I20250814 01:53:54.590389 3128 raft_consensus.cc:513] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7f0da5f36e8940ea919b6eabe2ddbc00" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 41981 } }
I20250814 01:53:54.591068 3128 leader_election.cc:304] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 7f0da5f36e8940ea919b6eabe2ddbc00; no voters:
I20250814 01:53:54.592650 3128 leader_election.cc:290] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250814 01:53:54.593252 3133 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [term 1 FOLLOWER]: Leader election won for term 1
I20250814 01:53:54.595230 3133 raft_consensus.cc:695] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [term 1 LEADER]: Becoming Leader. State: Replica: 7f0da5f36e8940ea919b6eabe2ddbc00, State: Running, Role: LEADER
I20250814 01:53:54.596029 3133 consensus_queue.cc:237] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7f0da5f36e8940ea919b6eabe2ddbc00" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 41981 } }
I20250814 01:53:54.596943 3128 sys_catalog.cc:564] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [sys.catalog]: configured and running, proceeding with master startup.
I20250814 01:53:54.604935 3134 sys_catalog.cc:455] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "7f0da5f36e8940ea919b6eabe2ddbc00" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7f0da5f36e8940ea919b6eabe2ddbc00" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 41981 } } }
I20250814 01:53:54.605378 3135 sys_catalog.cc:455] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 7f0da5f36e8940ea919b6eabe2ddbc00. Latest consensus state: current_term: 1 leader_uuid: "7f0da5f36e8940ea919b6eabe2ddbc00" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7f0da5f36e8940ea919b6eabe2ddbc00" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 41981 } } }
I20250814 01:53:54.605813 3134 sys_catalog.cc:458] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [sys.catalog]: This master's current role is: LEADER
I20250814 01:53:54.606307 3135 sys_catalog.cc:458] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [sys.catalog]: This master's current role is: LEADER
I20250814 01:53:54.609095 3142 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250814 01:53:54.620050 3142 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250814 01:53:54.634150 3142 catalog_manager.cc:1349] Generated new cluster ID: 51839d06f43f4f4d8312b038947fb808
I20250814 01:53:54.634477 3142 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250814 01:53:54.676149 3142 catalog_manager.cc:1372] Generated new certificate authority record
I20250814 01:53:54.678277 3142 catalog_manager.cc:1506] Loading token signing keys...
I20250814 01:53:54.700091 3142 catalog_manager.cc:5955] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00: Generated new TSK 0
I20250814 01:53:54.700963 3142 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250814 01:53:54.722898 426 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
/tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.129:0
--local_ip_for_outbound_sockets=127.0.106.129
--webserver_interface=127.0.106.129
--webserver_port=0
--tserver_master_addrs=127.0.106.190:41981
--builtin_ntp_servers=127.0.106.148:35399
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
W20250814 01:53:55.025960 3152 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:53:55.026463 3152 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250814 01:53:55.026960 3152 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:53:55.058537 3152 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250814 01:53:55.059412 3152 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.129
I20250814 01:53:55.096071 3152 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:35399
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.129:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.0.106.129
--webserver_port=0
--tserver_master_addrs=127.0.106.190:41981
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.129
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 6307ab29b78ebc064971e01ca1da31590298dcda
build type FASTDEBUG
built by None at 14 Aug 2025 01:43:21 UTC on 5fd53c4cbb9d
build id 7557
TSAN enabled
I20250814 01:53:55.097329 3152 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250814 01:53:55.098978 3152 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250814 01:53:55.111346 3158 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:53:55.116549 3161 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:53:55.113148 3159 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:53:56.437602 3160 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1320 milliseconds
I20250814 01:53:56.437777 3152 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250814 01:53:56.439241 3152 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250814 01:53:56.441970 3152 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250814 01:53:56.443488 3152 hybrid_clock.cc:648] HybridClock initialized: now 1755136436443442 us; error 41 us; skew 500 ppm
I20250814 01:53:56.444548 3152 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250814 01:53:56.451972 3152 webserver.cc:480] Webserver started at http://127.0.106.129:44955/ using document root <none> and password file <none>
I20250814 01:53:56.453229 3152 fs_manager.cc:362] Metadata directory not provided
I20250814 01:53:56.453521 3152 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250814 01:53:56.454123 3152 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250814 01:53:56.461560 3152 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "9133c463c51a41d1bcda681cad9e6d9b"
format_stamp: "Formatted at 2025-08-14 01:53:56 on dist-test-slave-30wj"
I20250814 01:53:56.463073 3152 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "9133c463c51a41d1bcda681cad9e6d9b"
format_stamp: "Formatted at 2025-08-14 01:53:56 on dist-test-slave-30wj"
I20250814 01:53:56.472193 3152 fs_manager.cc:696] Time spent creating directory manager: real 0.008s user 0.006s sys 0.003s
I20250814 01:53:56.479526 3168 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250814 01:53:56.480719 3152 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.005s sys 0.000s
I20250814 01:53:56.481122 3152 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "9133c463c51a41d1bcda681cad9e6d9b"
format_stamp: "Formatted at 2025-08-14 01:53:56 on dist-test-slave-30wj"
I20250814 01:53:56.481564 3152 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250814 01:53:56.540064 3152 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250814 01:53:56.541649 3152 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250814 01:53:56.542101 3152 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250814 01:53:56.544664 3152 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250814 01:53:56.548928 3152 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250814 01:53:56.549146 3152 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250814 01:53:56.549387 3152 ts_tablet_manager.cc:610] Registered 0 tablets
I20250814 01:53:56.549548 3152 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250814 01:53:56.697947 3152 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.129:33085
I20250814 01:53:56.698076 3280 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.129:33085 every 8 connection(s)
I20250814 01:53:56.700649 3152 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250814 01:53:56.702410 426 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu as pid 3152
I20250814 01:53:56.702870 426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250814 01:53:56.713239 426 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
/tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.130:0
--local_ip_for_outbound_sockets=127.0.106.130
--webserver_interface=127.0.106.130
--webserver_port=0
--tserver_master_addrs=127.0.106.190:41981
--builtin_ntp_servers=127.0.106.148:35399
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250814 01:53:56.740913 3281 heartbeater.cc:344] Connected to a master server at 127.0.106.190:41981
I20250814 01:53:56.741372 3281 heartbeater.cc:461] Registering TS with master...
I20250814 01:53:56.742728 3281 heartbeater.cc:507] Master 127.0.106.190:41981 requested a full tablet report, sending...
I20250814 01:53:56.745916 3093 ts_manager.cc:194] Registered new tserver with Master: 9133c463c51a41d1bcda681cad9e6d9b (127.0.106.129:33085)
I20250814 01:53:56.748792 3093 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.129:54421
W20250814 01:53:57.019642 3285 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:53:57.020107 3285 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250814 01:53:57.020560 3285 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:53:57.052345 3285 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250814 01:53:57.053246 3285 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.130
I20250814 01:53:57.088429 3285 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:35399
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.130:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.0.106.130
--webserver_port=0
--tserver_master_addrs=127.0.106.190:41981
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.130
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 6307ab29b78ebc064971e01ca1da31590298dcda
build type FASTDEBUG
built by None at 14 Aug 2025 01:43:21 UTC on 5fd53c4cbb9d
build id 7557
TSAN enabled
I20250814 01:53:57.089670 3285 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250814 01:53:57.091326 3285 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250814 01:53:57.103019 3291 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250814 01:53:57.753135 3281 heartbeater.cc:499] Master 127.0.106.190:41981 was elected leader, sending a full tablet report...
W20250814 01:53:58.505759 3290 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 3285
W20250814 01:53:58.607970 3290 kernel_stack_watchdog.cc:198] Thread 3285 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:642 for 400ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250814 01:53:58.608668 3285 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.505s user 0.578s sys 0.925s
W20250814 01:53:58.608956 3285 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.506s user 0.578s sys 0.925s
W20250814 01:53:57.103641 3292 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:53:58.609338 3293 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1504 milliseconds
W20250814 01:53:58.610692 3294 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250814 01:53:58.610663 3285 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250814 01:53:58.613651 3285 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250814 01:53:58.615680 3285 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250814 01:53:58.617033 3285 hybrid_clock.cc:648] HybridClock initialized: now 1755136438616995 us; error 44 us; skew 500 ppm
I20250814 01:53:58.617843 3285 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250814 01:53:58.623796 3285 webserver.cc:480] Webserver started at http://127.0.106.130:34373/ using document root <none> and password file <none>
I20250814 01:53:58.624738 3285 fs_manager.cc:362] Metadata directory not provided
I20250814 01:53:58.624959 3285 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250814 01:53:58.625393 3285 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250814 01:53:58.629650 3285 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data/instance:
uuid: "e1ba29edf7c9461ca140735ae3609839"
format_stamp: "Formatted at 2025-08-14 01:53:58 on dist-test-slave-30wj"
I20250814 01:53:58.630782 3285 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal/instance:
uuid: "e1ba29edf7c9461ca140735ae3609839"
format_stamp: "Formatted at 2025-08-14 01:53:58 on dist-test-slave-30wj"
I20250814 01:53:58.637571 3285 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.005s sys 0.001s
I20250814 01:53:58.643134 3301 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250814 01:53:58.644277 3285 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.005s sys 0.000s
I20250814 01:53:58.644622 3285 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "e1ba29edf7c9461ca140735ae3609839"
format_stamp: "Formatted at 2025-08-14 01:53:58 on dist-test-slave-30wj"
I20250814 01:53:58.644935 3285 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250814 01:53:58.710186 3285 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250814 01:53:58.711673 3285 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250814 01:53:58.712085 3285 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250814 01:53:58.714494 3285 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250814 01:53:58.718410 3285 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250814 01:53:58.718658 3285 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250814 01:53:58.718892 3285 ts_tablet_manager.cc:610] Registered 0 tablets
I20250814 01:53:58.719046 3285 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250814 01:53:58.845988 3285 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.130:34695
I20250814 01:53:58.846084 3413 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.130:34695 every 8 connection(s)
I20250814 01:53:58.848464 3285 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250814 01:53:58.854619 426 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu as pid 3285
I20250814 01:53:58.855104 426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal/instance
I20250814 01:53:58.860805 426 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
/tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.131:0
--local_ip_for_outbound_sockets=127.0.106.131
--webserver_interface=127.0.106.131
--webserver_port=0
--tserver_master_addrs=127.0.106.190:41981
--builtin_ntp_servers=127.0.106.148:35399
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250814 01:53:58.868830 3414 heartbeater.cc:344] Connected to a master server at 127.0.106.190:41981
I20250814 01:53:58.869268 3414 heartbeater.cc:461] Registering TS with master...
I20250814 01:53:58.870340 3414 heartbeater.cc:507] Master 127.0.106.190:41981 requested a full tablet report, sending...
I20250814 01:53:58.872500 3093 ts_manager.cc:194] Registered new tserver with Master: e1ba29edf7c9461ca140735ae3609839 (127.0.106.130:34695)
I20250814 01:53:58.873780 3093 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.130:53019
W20250814 01:53:59.153790 3418 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:53:59.154290 3418 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250814 01:53:59.154790 3418 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:53:59.185820 3418 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250814 01:53:59.186666 3418 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.131
I20250814 01:53:59.222262 3418 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:35399
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.131:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.0.106.131
--webserver_port=0
--tserver_master_addrs=127.0.106.190:41981
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.131
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 6307ab29b78ebc064971e01ca1da31590298dcda
build type FASTDEBUG
built by None at 14 Aug 2025 01:43:21 UTC on 5fd53c4cbb9d
build id 7557
TSAN enabled
I20250814 01:53:59.223613 3418 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250814 01:53:59.225207 3418 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250814 01:53:59.236264 3424 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250814 01:53:59.876953 3414 heartbeater.cc:499] Master 127.0.106.190:41981 was elected leader, sending a full tablet report...
W20250814 01:53:59.237841 3425 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:53:59.240563 3427 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:54:00.314306 3426 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection time-out
I20250814 01:54:00.314349 3418 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250814 01:54:00.318089 3418 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250814 01:54:00.320607 3418 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250814 01:54:00.322055 3418 hybrid_clock.cc:648] HybridClock initialized: now 1755136440322005 us; error 59 us; skew 500 ppm
I20250814 01:54:00.322940 3418 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250814 01:54:00.329313 3418 webserver.cc:480] Webserver started at http://127.0.106.131:43431/ using document root <none> and password file <none>
I20250814 01:54:00.330261 3418 fs_manager.cc:362] Metadata directory not provided
I20250814 01:54:00.330479 3418 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250814 01:54:00.330930 3418 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250814 01:54:00.335296 3418 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data/instance:
uuid: "136ed1bf01a24d0db8541678c6fed252"
format_stamp: "Formatted at 2025-08-14 01:54:00 on dist-test-slave-30wj"
I20250814 01:54:00.336364 3418 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal/instance:
uuid: "136ed1bf01a24d0db8541678c6fed252"
format_stamp: "Formatted at 2025-08-14 01:54:00 on dist-test-slave-30wj"
I20250814 01:54:00.343279 3418 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.004s sys 0.004s
I20250814 01:54:00.348728 3435 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250814 01:54:00.349784 3418 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.002s sys 0.001s
I20250814 01:54:00.350082 3418 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "136ed1bf01a24d0db8541678c6fed252"
format_stamp: "Formatted at 2025-08-14 01:54:00 on dist-test-slave-30wj"
I20250814 01:54:00.350407 3418 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250814 01:54:00.403447 3418 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250814 01:54:00.404875 3418 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250814 01:54:00.405285 3418 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250814 01:54:00.407706 3418 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250814 01:54:00.411756 3418 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250814 01:54:00.411950 3418 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250814 01:54:00.412190 3418 ts_tablet_manager.cc:610] Registered 0 tablets
I20250814 01:54:00.412348 3418 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250814 01:54:00.540215 3418 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.131:40049
I20250814 01:54:00.540375 3547 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.131:40049 every 8 connection(s)
I20250814 01:54:00.542837 3418 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250814 01:54:00.547433 426 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu as pid 3418
I20250814 01:54:00.548126 426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal/instance
I20250814 01:54:00.566816 3548 heartbeater.cc:344] Connected to a master server at 127.0.106.190:41981
I20250814 01:54:00.567255 3548 heartbeater.cc:461] Registering TS with master...
I20250814 01:54:00.568420 3548 heartbeater.cc:507] Master 127.0.106.190:41981 requested a full tablet report, sending...
I20250814 01:54:00.570546 3093 ts_manager.cc:194] Registered new tserver with Master: 136ed1bf01a24d0db8541678c6fed252 (127.0.106.131:40049)
I20250814 01:54:00.571697 3093 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.131:54353
I20250814 01:54:00.582231 426 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250814 01:54:00.615795 426 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250814 01:54:00.616122 426 test_util.cc:276] Using random seed: -1944894297
I20250814 01:54:00.655612 3093 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:55292:
name: "TestTable"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
I20250814 01:54:00.695032 3483 tablet_service.cc:1468] Processing CreateTablet for tablet ea532542ffb34d05bafdc9cdf0dbf89a (DEFAULT_TABLE table=TestTable [id=03aa1e8d49154058baf2642c150e878a]), partition=RANGE (key) PARTITION UNBOUNDED
I20250814 01:54:00.696677 3483 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet ea532542ffb34d05bafdc9cdf0dbf89a. 1 dirs total, 0 dirs full, 0 dirs failed
I20250814 01:54:00.716377 3568 tablet_bootstrap.cc:492] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252: Bootstrap starting.
I20250814 01:54:00.721879 3568 tablet_bootstrap.cc:654] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252: Neither blocks nor log segments found. Creating new log.
I20250814 01:54:00.723628 3568 log.cc:826] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252: Log is configured to *not* fsync() on all Append() calls
I20250814 01:54:00.728282 3568 tablet_bootstrap.cc:492] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252: No bootstrap required, opened a new log
I20250814 01:54:00.728696 3568 ts_tablet_manager.cc:1397] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252: Time spent bootstrapping tablet: real 0.013s user 0.004s sys 0.005s
I20250814 01:54:00.745640 3568 raft_consensus.cc:357] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } }
I20250814 01:54:00.746232 3568 raft_consensus.cc:383] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250814 01:54:00.746474 3568 raft_consensus.cc:738] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 136ed1bf01a24d0db8541678c6fed252, State: Initialized, Role: FOLLOWER
I20250814 01:54:00.747136 3568 consensus_queue.cc:260] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } }
I20250814 01:54:00.747659 3568 raft_consensus.cc:397] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250814 01:54:00.747931 3568 raft_consensus.cc:491] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250814 01:54:00.748229 3568 raft_consensus.cc:3058] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 [term 0 FOLLOWER]: Advancing to term 1
I20250814 01:54:00.752342 3568 raft_consensus.cc:513] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } }
I20250814 01:54:00.753098 3568 leader_election.cc:304] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 136ed1bf01a24d0db8541678c6fed252; no voters:
I20250814 01:54:00.754853 3568 leader_election.cc:290] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250814 01:54:00.755609 3570 raft_consensus.cc:2802] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 [term 1 FOLLOWER]: Leader election won for term 1
I20250814 01:54:00.757233 3548 heartbeater.cc:499] Master 127.0.106.190:41981 was elected leader, sending a full tablet report...
I20250814 01:54:00.757645 3570 raft_consensus.cc:695] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 [term 1 LEADER]: Becoming Leader. State: Replica: 136ed1bf01a24d0db8541678c6fed252, State: Running, Role: LEADER
I20250814 01:54:00.758173 3568 ts_tablet_manager.cc:1428] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252: Time spent starting tablet: real 0.029s user 0.031s sys 0.000s
I20250814 01:54:00.758467 3570 consensus_queue.cc:237] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } }
I20250814 01:54:00.770807 3093 catalog_manager.cc:5582] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 reported cstate change: term changed from 0 to 1, leader changed from <none> to 136ed1bf01a24d0db8541678c6fed252 (127.0.106.131). New cstate: current_term: 1 leader_uuid: "136ed1bf01a24d0db8541678c6fed252" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } health_report { overall_health: HEALTHY } } }
I20250814 01:54:00.994443 426 test_util.cc:276] Using random seed: -1944515989
I20250814 01:54:01.016039 3084 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:55296:
name: "TestTable1"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
I20250814 01:54:01.044067 3349 tablet_service.cc:1468] Processing CreateTablet for tablet b3acc7639edd406eb75d2d8662b9fc63 (DEFAULT_TABLE table=TestTable1 [id=cc4d15ddce0a4a7c8022d91f1d434d97]), partition=RANGE (key) PARTITION UNBOUNDED
I20250814 01:54:01.045477 3349 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet b3acc7639edd406eb75d2d8662b9fc63. 1 dirs total, 0 dirs full, 0 dirs failed
I20250814 01:54:01.064352 3589 tablet_bootstrap.cc:492] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839: Bootstrap starting.
I20250814 01:54:01.070039 3589 tablet_bootstrap.cc:654] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839: Neither blocks nor log segments found. Creating new log.
I20250814 01:54:01.071760 3589 log.cc:826] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839: Log is configured to *not* fsync() on all Append() calls
I20250814 01:54:01.075980 3589 tablet_bootstrap.cc:492] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839: No bootstrap required, opened a new log
I20250814 01:54:01.076381 3589 ts_tablet_manager.cc:1397] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839: Time spent bootstrapping tablet: real 0.012s user 0.004s sys 0.005s
I20250814 01:54:01.094027 3589 raft_consensus.cc:357] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } }
I20250814 01:54:01.094594 3589 raft_consensus.cc:383] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250814 01:54:01.094830 3589 raft_consensus.cc:738] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: e1ba29edf7c9461ca140735ae3609839, State: Initialized, Role: FOLLOWER
I20250814 01:54:01.095448 3589 consensus_queue.cc:260] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } }
I20250814 01:54:01.095970 3589 raft_consensus.cc:397] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250814 01:54:01.096272 3589 raft_consensus.cc:491] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250814 01:54:01.096661 3589 raft_consensus.cc:3058] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [term 0 FOLLOWER]: Advancing to term 1
I20250814 01:54:01.100907 3589 raft_consensus.cc:513] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } }
I20250814 01:54:01.101591 3589 leader_election.cc:304] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: e1ba29edf7c9461ca140735ae3609839; no voters:
I20250814 01:54:01.103361 3589 leader_election.cc:290] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250814 01:54:01.103770 3591 raft_consensus.cc:2802] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [term 1 FOLLOWER]: Leader election won for term 1
I20250814 01:54:01.106540 3591 raft_consensus.cc:695] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [term 1 LEADER]: Becoming Leader. State: Replica: e1ba29edf7c9461ca140735ae3609839, State: Running, Role: LEADER
I20250814 01:54:01.107388 3591 consensus_queue.cc:237] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } }
I20250814 01:54:01.107697 3589 ts_tablet_manager.cc:1428] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839: Time spent starting tablet: real 0.031s user 0.031s sys 0.000s
I20250814 01:54:01.116922 3084 catalog_manager.cc:5582] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 reported cstate change: term changed from 0 to 1, leader changed from <none> to e1ba29edf7c9461ca140735ae3609839 (127.0.106.130). New cstate: current_term: 1 leader_uuid: "e1ba29edf7c9461ca140735ae3609839" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } health_report { overall_health: HEALTHY } } }
I20250814 01:54:01.323714 426 test_util.cc:276] Using random seed: -1944186720
I20250814 01:54:01.342922 3084 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:55302:
name: "TestTable2"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
I20250814 01:54:01.370940 3216 tablet_service.cc:1468] Processing CreateTablet for tablet 9184dfddca454231b2eabe3e05851953 (DEFAULT_TABLE table=TestTable2 [id=256e11a38719467d94d49047e335b05d]), partition=RANGE (key) PARTITION UNBOUNDED
I20250814 01:54:01.372403 3216 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 9184dfddca454231b2eabe3e05851953. 1 dirs total, 0 dirs full, 0 dirs failed
I20250814 01:54:01.391137 3610 tablet_bootstrap.cc:492] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b: Bootstrap starting.
I20250814 01:54:01.396643 3610 tablet_bootstrap.cc:654] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b: Neither blocks nor log segments found. Creating new log.
I20250814 01:54:01.398398 3610 log.cc:826] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b: Log is configured to *not* fsync() on all Append() calls
I20250814 01:54:01.402663 3610 tablet_bootstrap.cc:492] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b: No bootstrap required, opened a new log
I20250814 01:54:01.403081 3610 ts_tablet_manager.cc:1397] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b: Time spent bootstrapping tablet: real 0.012s user 0.005s sys 0.005s
I20250814 01:54:01.419910 3610 raft_consensus.cc:357] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } }
I20250814 01:54:01.420485 3610 raft_consensus.cc:383] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250814 01:54:01.420709 3610 raft_consensus.cc:738] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 9133c463c51a41d1bcda681cad9e6d9b, State: Initialized, Role: FOLLOWER
I20250814 01:54:01.421351 3610 consensus_queue.cc:260] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } }
I20250814 01:54:01.421916 3610 raft_consensus.cc:397] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250814 01:54:01.422180 3610 raft_consensus.cc:491] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250814 01:54:01.422538 3610 raft_consensus.cc:3058] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b [term 0 FOLLOWER]: Advancing to term 1
I20250814 01:54:01.426609 3610 raft_consensus.cc:513] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } }
I20250814 01:54:01.427284 3610 leader_election.cc:304] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 9133c463c51a41d1bcda681cad9e6d9b; no voters:
I20250814 01:54:01.428927 3610 leader_election.cc:290] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b [CANDIDATE]: Term 1 election: Requested vote from peers
I20250814 01:54:01.429368 3612 raft_consensus.cc:2802] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b [term 1 FOLLOWER]: Leader election won for term 1
I20250814 01:54:01.432278 3612 raft_consensus.cc:695] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b [term 1 LEADER]: Becoming Leader. State: Replica: 9133c463c51a41d1bcda681cad9e6d9b, State: Running, Role: LEADER
I20250814 01:54:01.432911 3610 ts_tablet_manager.cc:1428] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b: Time spent starting tablet: real 0.030s user 0.026s sys 0.003s
I20250814 01:54:01.433051 3612 consensus_queue.cc:237] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } }
I20250814 01:54:01.443849 3084 catalog_manager.cc:5582] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b reported cstate change: term changed from 0 to 1, leader changed from <none> to 9133c463c51a41d1bcda681cad9e6d9b (127.0.106.129). New cstate: current_term: 1 leader_uuid: "9133c463c51a41d1bcda681cad9e6d9b" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } health_report { overall_health: HEALTHY } } }
I20250814 01:54:01.645656 426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu with pid 3060
W20250814 01:54:01.786006 3548 heartbeater.cc:646] Failed to heartbeat to 127.0.106.190:41981 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.0.106.190:41981: connect: Connection refused (error 111)
W20250814 01:54:02.132360 3414 heartbeater.cc:646] Failed to heartbeat to 127.0.106.190:41981 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.0.106.190:41981: connect: Connection refused (error 111)
W20250814 01:54:02.459458 3281 heartbeater.cc:646] Failed to heartbeat to 127.0.106.190:41981 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.0.106.190:41981: connect: Connection refused (error 111)
I20250814 01:54:06.321414 426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu with pid 3152
I20250814 01:54:06.340596 426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu with pid 3285
I20250814 01:54:06.362782 426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu with pid 3418
I20250814 01:54:06.385387 426 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
/tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.106.190:41981
--webserver_interface=127.0.106.190
--webserver_port=38807
--builtin_ntp_servers=127.0.106.148:35399
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.0.106.190:41981 with env {}
W20250814 01:54:06.675233 3687 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:54:06.675796 3687 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250814 01:54:06.676239 3687 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:54:06.706609 3687 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250814 01:54:06.706914 3687 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250814 01:54:06.707162 3687 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250814 01:54:06.707388 3687 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250814 01:54:06.742206 3687 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:35399
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.0.106.190:41981
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.106.190:41981
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.0.106.190
--webserver_port=38807
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision 6307ab29b78ebc064971e01ca1da31590298dcda
build type FASTDEBUG
built by None at 14 Aug 2025 01:43:21 UTC on 5fd53c4cbb9d
build id 7557
TSAN enabled
I20250814 01:54:06.743463 3687 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250814 01:54:06.744999 3687 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250814 01:54:06.755007 3693 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:54:06.755720 3694 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:54:07.847872 3696 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:54:07.850081 3695 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1090 milliseconds
I20250814 01:54:07.850178 3687 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250814 01:54:07.851372 3687 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250814 01:54:07.853850 3687 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250814 01:54:07.855219 3687 hybrid_clock.cc:648] HybridClock initialized: now 1755136447855190 us; error 34 us; skew 500 ppm
I20250814 01:54:07.856007 3687 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250814 01:54:07.861791 3687 webserver.cc:480] Webserver started at http://127.0.106.190:38807/ using document root <none> and password file <none>
I20250814 01:54:07.862668 3687 fs_manager.cc:362] Metadata directory not provided
I20250814 01:54:07.862875 3687 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250814 01:54:07.870535 3687 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.003s sys 0.001s
I20250814 01:54:07.874836 3703 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250814 01:54:07.875820 3687 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.002s sys 0.002s
I20250814 01:54:07.876147 3687 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
uuid: "7f0da5f36e8940ea919b6eabe2ddbc00"
format_stamp: "Formatted at 2025-08-14 01:53:54 on dist-test-slave-30wj"
I20250814 01:54:07.878067 3687 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250814 01:54:07.921501 3687 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250814 01:54:07.922921 3687 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250814 01:54:07.923334 3687 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250814 01:54:07.989292 3687 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.190:41981
I20250814 01:54:07.989348 3754 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.190:41981 every 8 connection(s)
I20250814 01:54:07.992029 3687 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250814 01:54:07.997484 426 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu as pid 3687
I20250814 01:54:07.998898 426 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
/tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.129:33085
--local_ip_for_outbound_sockets=127.0.106.129
--tserver_master_addrs=127.0.106.190:41981
--webserver_port=44955
--webserver_interface=127.0.106.129
--builtin_ntp_servers=127.0.106.148:35399
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250814 01:54:08.001624 3755 sys_catalog.cc:263] Verifying existing consensus state
I20250814 01:54:08.006220 3755 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00: Bootstrap starting.
I20250814 01:54:08.018899 3755 log.cc:826] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00: Log is configured to *not* fsync() on all Append() calls
I20250814 01:54:08.080533 3755 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00: Bootstrap replayed 1/1 log segments. Stats: ops{read=18 overwritten=0 applied=18 ignored=0} inserts{seen=13 ignored=0} mutations{seen=10 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250814 01:54:08.081533 3755 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00: Bootstrap complete.
I20250814 01:54:08.112449 3755 raft_consensus.cc:357] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7f0da5f36e8940ea919b6eabe2ddbc00" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 41981 } }
I20250814 01:54:08.115638 3755 raft_consensus.cc:738] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: 7f0da5f36e8940ea919b6eabe2ddbc00, State: Initialized, Role: FOLLOWER
I20250814 01:54:08.116616 3755 consensus_queue.cc:260] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 18, Last appended: 2.18, Last appended by leader: 18, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7f0da5f36e8940ea919b6eabe2ddbc00" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 41981 } }
I20250814 01:54:08.117329 3755 raft_consensus.cc:397] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [term 2 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250814 01:54:08.117668 3755 raft_consensus.cc:491] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [term 2 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250814 01:54:08.118129 3755 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [term 2 FOLLOWER]: Advancing to term 3
I20250814 01:54:08.127049 3755 raft_consensus.cc:513] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [term 3 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7f0da5f36e8940ea919b6eabe2ddbc00" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 41981 } }
I20250814 01:54:08.127897 3755 leader_election.cc:304] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [CANDIDATE]: Term 3 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 7f0da5f36e8940ea919b6eabe2ddbc00; no voters:
I20250814 01:54:08.129827 3755 leader_election.cc:290] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [CANDIDATE]: Term 3 election: Requested vote from peers
I20250814 01:54:08.130154 3759 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [term 3 FOLLOWER]: Leader election won for term 3
I20250814 01:54:08.133004 3759 raft_consensus.cc:695] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [term 3 LEADER]: Becoming Leader. State: Replica: 7f0da5f36e8940ea919b6eabe2ddbc00, State: Running, Role: LEADER
I20250814 01:54:08.133806 3759 consensus_queue.cc:237] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 18, Committed index: 18, Last appended: 2.18, Last appended by leader: 18, Current term: 3, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7f0da5f36e8940ea919b6eabe2ddbc00" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 41981 } }
I20250814 01:54:08.134195 3755 sys_catalog.cc:564] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [sys.catalog]: configured and running, proceeding with master startup.
I20250814 01:54:08.143241 3761 sys_catalog.cc:455] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 7f0da5f36e8940ea919b6eabe2ddbc00. Latest consensus state: current_term: 3 leader_uuid: "7f0da5f36e8940ea919b6eabe2ddbc00" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7f0da5f36e8940ea919b6eabe2ddbc00" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 41981 } } }
I20250814 01:54:08.143898 3761 sys_catalog.cc:458] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [sys.catalog]: This master's current role is: LEADER
I20250814 01:54:08.143177 3760 sys_catalog.cc:455] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 3 leader_uuid: "7f0da5f36e8940ea919b6eabe2ddbc00" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7f0da5f36e8940ea919b6eabe2ddbc00" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 41981 } } }
I20250814 01:54:08.144593 3760 sys_catalog.cc:458] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [sys.catalog]: This master's current role is: LEADER
I20250814 01:54:08.157907 3766 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250814 01:54:08.169972 3766 catalog_manager.cc:671] Loaded metadata for table TestTable2 [id=256e11a38719467d94d49047e335b05d]
I20250814 01:54:08.171658 3766 catalog_manager.cc:671] Loaded metadata for table TestTable1 [id=70bded3494f1460e95248323a9e95ba7]
I20250814 01:54:08.173183 3766 catalog_manager.cc:671] Loaded metadata for table TestTable [id=ab7a9854ea3c43b3b3ca7586a1e568a7]
I20250814 01:54:08.180779 3766 tablet_loader.cc:96] loaded metadata for tablet 9184dfddca454231b2eabe3e05851953 (table TestTable2 [id=256e11a38719467d94d49047e335b05d])
I20250814 01:54:08.182049 3766 tablet_loader.cc:96] loaded metadata for tablet b3acc7639edd406eb75d2d8662b9fc63 (table TestTable1 [id=70bded3494f1460e95248323a9e95ba7])
I20250814 01:54:08.183178 3766 tablet_loader.cc:96] loaded metadata for tablet ea532542ffb34d05bafdc9cdf0dbf89a (table TestTable [id=ab7a9854ea3c43b3b3ca7586a1e568a7])
I20250814 01:54:08.184617 3766 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250814 01:54:08.189587 3766 catalog_manager.cc:1261] Loaded cluster ID: 51839d06f43f4f4d8312b038947fb808
I20250814 01:54:08.189893 3766 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250814 01:54:08.197913 3766 catalog_manager.cc:1506] Loading token signing keys...
I20250814 01:54:08.203063 3766 catalog_manager.cc:5966] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00: Loaded TSK: 0
I20250814 01:54:08.204641 3766 catalog_manager.cc:1516] Initializing in-progress tserver states...
W20250814 01:54:08.361670 3757 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:54:08.362195 3757 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250814 01:54:08.362694 3757 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:54:08.393898 3757 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250814 01:54:08.394740 3757 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.129
I20250814 01:54:08.428908 3757 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:35399
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.129:33085
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.0.106.129
--webserver_port=44955
--tserver_master_addrs=127.0.106.190:41981
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.129
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 6307ab29b78ebc064971e01ca1da31590298dcda
build type FASTDEBUG
built by None at 14 Aug 2025 01:43:21 UTC on 5fd53c4cbb9d
build id 7557
TSAN enabled
I20250814 01:54:08.430711 3757 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250814 01:54:08.432281 3757 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250814 01:54:08.444475 3783 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:54:08.445420 3784 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:54:09.559466 3757 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.113s user 0.000s sys 0.008s
W20250814 01:54:09.560353 3757 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.114s user 0.000s sys 0.008s
I20250814 01:54:09.560652 3757 server_base.cc:1047] running on GCE node
W20250814 01:54:09.559974 3786 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250814 01:54:09.562294 3757 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250814 01:54:09.566841 3757 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250814 01:54:09.568318 3757 hybrid_clock.cc:648] HybridClock initialized: now 1755136449568248 us; error 71 us; skew 500 ppm
I20250814 01:54:09.569428 3757 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250814 01:54:09.578001 3757 webserver.cc:480] Webserver started at http://127.0.106.129:44955/ using document root <none> and password file <none>
I20250814 01:54:09.579197 3757 fs_manager.cc:362] Metadata directory not provided
I20250814 01:54:09.579476 3757 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250814 01:54:09.590581 3757 fs_manager.cc:714] Time spent opening directory manager: real 0.007s user 0.007s sys 0.000s
I20250814 01:54:09.596764 3794 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250814 01:54:09.598094 3757 fs_manager.cc:730] Time spent opening block manager: real 0.005s user 0.005s sys 0.001s
I20250814 01:54:09.598482 3757 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "9133c463c51a41d1bcda681cad9e6d9b"
format_stamp: "Formatted at 2025-08-14 01:53:56 on dist-test-slave-30wj"
I20250814 01:54:09.601181 3757 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250814 01:54:09.674182 3757 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250814 01:54:09.676071 3757 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250814 01:54:09.676607 3757 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250814 01:54:09.679803 3757 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250814 01:54:09.686960 3801 ts_tablet_manager.cc:542] Loading tablet metadata (0/1 complete)
I20250814 01:54:09.694224 3757 ts_tablet_manager.cc:579] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20250814 01:54:09.694521 3757 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.009s user 0.000s sys 0.002s
I20250814 01:54:09.694841 3757 ts_tablet_manager.cc:594] Registering tablets (0/1 complete)
I20250814 01:54:09.701781 3757 ts_tablet_manager.cc:610] Registered 1 tablets
I20250814 01:54:09.702028 3757 ts_tablet_manager.cc:589] Time spent register tablets: real 0.007s user 0.002s sys 0.005s
I20250814 01:54:09.702306 3801 tablet_bootstrap.cc:492] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b: Bootstrap starting.
I20250814 01:54:09.769598 3801 log.cc:826] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b: Log is configured to *not* fsync() on all Append() calls
I20250814 01:54:09.877242 3801 tablet_bootstrap.cc:492] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b: Bootstrap replayed 1/1 log segments. Stats: ops{read=6 overwritten=0 applied=6 ignored=0} inserts{seen=250 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250814 01:54:09.878320 3801 tablet_bootstrap.cc:492] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b: Bootstrap complete.
I20250814 01:54:09.879990 3801 ts_tablet_manager.cc:1397] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b: Time spent bootstrapping tablet: real 0.178s user 0.126s sys 0.045s
I20250814 01:54:09.881055 3757 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.129:33085
I20250814 01:54:09.881232 3908 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.129:33085 every 8 connection(s)
I20250814 01:54:09.883752 3757 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250814 01:54:09.891074 426 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu as pid 3757
I20250814 01:54:09.893177 426 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
/tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.130:34695
--local_ip_for_outbound_sockets=127.0.106.130
--tserver_master_addrs=127.0.106.190:41981
--webserver_port=34373
--webserver_interface=127.0.106.130
--builtin_ntp_servers=127.0.106.148:35399
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250814 01:54:09.901549 3801 raft_consensus.cc:357] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } }
I20250814 01:54:09.904539 3801 raft_consensus.cc:738] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 9133c463c51a41d1bcda681cad9e6d9b, State: Initialized, Role: FOLLOWER
I20250814 01:54:09.905532 3801 consensus_queue.cc:260] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 6, Last appended: 1.6, Last appended by leader: 6, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } }
I20250814 01:54:09.906198 3801 raft_consensus.cc:397] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250814 01:54:09.906550 3801 raft_consensus.cc:491] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250814 01:54:09.906960 3801 raft_consensus.cc:3058] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b [term 1 FOLLOWER]: Advancing to term 2
I20250814 01:54:09.914691 3801 raft_consensus.cc:513] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } }
I20250814 01:54:09.915563 3801 leader_election.cc:304] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 9133c463c51a41d1bcda681cad9e6d9b; no voters:
I20250814 01:54:09.918097 3801 leader_election.cc:290] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b [CANDIDATE]: Term 2 election: Requested vote from peers
I20250814 01:54:09.918948 3914 raft_consensus.cc:2802] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b [term 2 FOLLOWER]: Leader election won for term 2
I20250814 01:54:09.927632 3909 heartbeater.cc:344] Connected to a master server at 127.0.106.190:41981
I20250814 01:54:09.928179 3909 heartbeater.cc:461] Registering TS with master...
I20250814 01:54:09.929502 3909 heartbeater.cc:507] Master 127.0.106.190:41981 requested a full tablet report, sending...
I20250814 01:54:09.935665 3801 ts_tablet_manager.cc:1428] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b: Time spent starting tablet: real 0.055s user 0.038s sys 0.015s
I20250814 01:54:09.936825 3914 raft_consensus.cc:695] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b [term 2 LEADER]: Becoming Leader. State: Replica: 9133c463c51a41d1bcda681cad9e6d9b, State: Running, Role: LEADER
I20250814 01:54:09.937759 3914 consensus_queue.cc:237] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 6, Committed index: 6, Last appended: 1.6, Last appended by leader: 6, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } }
I20250814 01:54:09.949026 3720 ts_manager.cc:194] Registered new tserver with Master: 9133c463c51a41d1bcda681cad9e6d9b (127.0.106.129:33085)
I20250814 01:54:09.952140 3720 catalog_manager.cc:5582] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b reported cstate change: term changed from 1 to 2. New cstate: current_term: 2 leader_uuid: "9133c463c51a41d1bcda681cad9e6d9b" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } health_report { overall_health: HEALTHY } } }
I20250814 01:54:09.985152 3720 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.129:42147
I20250814 01:54:09.989091 3909 heartbeater.cc:499] Master 127.0.106.190:41981 was elected leader, sending a full tablet report...
W20250814 01:54:10.249195 3913 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:54:10.249676 3913 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250814 01:54:10.250209 3913 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:54:10.282107 3913 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250814 01:54:10.282963 3913 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.130
I20250814 01:54:10.317862 3913 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:35399
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.130:34695
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.0.106.130
--webserver_port=34373
--tserver_master_addrs=127.0.106.190:41981
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.130
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 6307ab29b78ebc064971e01ca1da31590298dcda
build type FASTDEBUG
built by None at 14 Aug 2025 01:43:21 UTC on 5fd53c4cbb9d
build id 7557
TSAN enabled
I20250814 01:54:10.319238 3913 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250814 01:54:10.320808 3913 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250814 01:54:10.333002 3928 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:54:10.335523 3929 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:54:11.735522 3927 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 3913
W20250814 01:54:11.863762 3913 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.531s user 0.562s sys 0.963s
W20250814 01:54:11.865887 3913 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.533s user 0.563s sys 0.963s
W20250814 01:54:11.866039 3931 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:54:11.866160 3930 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1530 milliseconds
I20250814 01:54:11.866381 3913 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250814 01:54:11.869481 3913 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250814 01:54:11.871516 3913 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250814 01:54:11.872864 3913 hybrid_clock.cc:648] HybridClock initialized: now 1755136451872823 us; error 46 us; skew 500 ppm
I20250814 01:54:11.873616 3913 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250814 01:54:11.879984 3913 webserver.cc:480] Webserver started at http://127.0.106.130:34373/ using document root <none> and password file <none>
I20250814 01:54:11.881201 3913 fs_manager.cc:362] Metadata directory not provided
I20250814 01:54:11.881495 3913 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250814 01:54:11.891840 3913 fs_manager.cc:714] Time spent opening directory manager: real 0.006s user 0.004s sys 0.001s
I20250814 01:54:11.897327 3938 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250814 01:54:11.898380 3913 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.001s sys 0.002s
I20250814 01:54:11.898684 3913 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "e1ba29edf7c9461ca140735ae3609839"
format_stamp: "Formatted at 2025-08-14 01:53:58 on dist-test-slave-30wj"
I20250814 01:54:11.900538 3913 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250814 01:54:11.953471 3913 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250814 01:54:11.954943 3913 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250814 01:54:11.955367 3913 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250814 01:54:11.957954 3913 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250814 01:54:11.963480 3945 ts_tablet_manager.cc:542] Loading tablet metadata (0/1 complete)
I20250814 01:54:11.970963 3913 ts_tablet_manager.cc:579] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20250814 01:54:11.971204 3913 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.009s user 0.000s sys 0.002s
I20250814 01:54:11.971474 3913 ts_tablet_manager.cc:594] Registering tablets (0/1 complete)
I20250814 01:54:11.976091 3913 ts_tablet_manager.cc:610] Registered 1 tablets
I20250814 01:54:11.976282 3913 ts_tablet_manager.cc:589] Time spent register tablets: real 0.005s user 0.002s sys 0.000s
I20250814 01:54:11.976648 3945 tablet_bootstrap.cc:492] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839: Bootstrap starting.
I20250814 01:54:12.045601 3945 log.cc:826] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839: Log is configured to *not* fsync() on all Append() calls
I20250814 01:54:12.130980 3913 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.130:34695
I20250814 01:54:12.131098 4052 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.130:34695 every 8 connection(s)
I20250814 01:54:12.134449 3913 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250814 01:54:12.143292 426 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu as pid 3913
I20250814 01:54:12.145098 426 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
/tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.131:40049
--local_ip_for_outbound_sockets=127.0.106.131
--tserver_master_addrs=127.0.106.190:41981
--webserver_port=43431
--webserver_interface=127.0.106.131
--builtin_ntp_servers=127.0.106.148:35399
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250814 01:54:12.155134 4053 heartbeater.cc:344] Connected to a master server at 127.0.106.190:41981
I20250814 01:54:12.155618 4053 heartbeater.cc:461] Registering TS with master...
I20250814 01:54:12.156574 4053 heartbeater.cc:507] Master 127.0.106.190:41981 requested a full tablet report, sending...
I20250814 01:54:12.160107 3720 ts_manager.cc:194] Registered new tserver with Master: e1ba29edf7c9461ca140735ae3609839 (127.0.106.130:34695)
I20250814 01:54:12.163051 3720 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.130:41977
I20250814 01:54:12.169236 3945 tablet_bootstrap.cc:492] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839: Bootstrap replayed 1/1 log segments. Stats: ops{read=8 overwritten=0 applied=8 ignored=0} inserts{seen=350 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250814 01:54:12.169998 3945 tablet_bootstrap.cc:492] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839: Bootstrap complete.
I20250814 01:54:12.171058 3945 ts_tablet_manager.cc:1397] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839: Time spent bootstrapping tablet: real 0.195s user 0.167s sys 0.025s
I20250814 01:54:12.181679 3945 raft_consensus.cc:357] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } }
I20250814 01:54:12.183599 3945 raft_consensus.cc:738] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: e1ba29edf7c9461ca140735ae3609839, State: Initialized, Role: FOLLOWER
I20250814 01:54:12.184218 3945 consensus_queue.cc:260] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 8, Last appended: 1.8, Last appended by leader: 8, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } }
I20250814 01:54:12.184659 3945 raft_consensus.cc:397] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250814 01:54:12.184898 3945 raft_consensus.cc:491] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250814 01:54:12.185163 3945 raft_consensus.cc:3058] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [term 1 FOLLOWER]: Advancing to term 2
I20250814 01:54:12.190107 3945 raft_consensus.cc:513] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } }
I20250814 01:54:12.190692 3945 leader_election.cc:304] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: e1ba29edf7c9461ca140735ae3609839; no voters:
I20250814 01:54:12.192597 3945 leader_election.cc:290] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [CANDIDATE]: Term 2 election: Requested vote from peers
I20250814 01:54:12.192984 4058 raft_consensus.cc:2802] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [term 2 FOLLOWER]: Leader election won for term 2
I20250814 01:54:12.195529 4053 heartbeater.cc:499] Master 127.0.106.190:41981 was elected leader, sending a full tablet report...
I20250814 01:54:12.196761 4058 raft_consensus.cc:695] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [term 2 LEADER]: Becoming Leader. State: Replica: e1ba29edf7c9461ca140735ae3609839, State: Running, Role: LEADER
I20250814 01:54:12.197253 3945 ts_tablet_manager.cc:1428] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839: Time spent starting tablet: real 0.026s user 0.022s sys 0.004s
I20250814 01:54:12.197477 4058 consensus_queue.cc:237] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 8, Committed index: 8, Last appended: 1.8, Last appended by leader: 8, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } }
I20250814 01:54:12.208230 3720 catalog_manager.cc:5582] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 reported cstate change: term changed from 0 to 2, leader changed from <none> to e1ba29edf7c9461ca140735ae3609839 (127.0.106.130), VOTER e1ba29edf7c9461ca140735ae3609839 (127.0.106.130) added. New cstate: current_term: 2 leader_uuid: "e1ba29edf7c9461ca140735ae3609839" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } health_report { overall_health: HEALTHY } } }
I20250814 01:54:12.240425 4008 consensus_queue.cc:237] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 9, Committed index: 9, Last appended: 2.9, Last appended by leader: 8, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: NON_VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: true } }
I20250814 01:54:12.243585 4060 raft_consensus.cc:2953] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [term 2 LEADER]: Committing config change with OpId 2.10: config changed from index -1 to 10, NON_VOTER 9133c463c51a41d1bcda681cad9e6d9b (127.0.106.129) added. New config: { opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: NON_VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: true } } }
I20250814 01:54:12.252794 3707 catalog_manager.cc:5095] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet b3acc7639edd406eb75d2d8662b9fc63 with cas_config_opid_index -1: ChangeConfig:ADD_PEER:NON_VOTER succeeded (attempt 1)
I20250814 01:54:12.255182 3720 catalog_manager.cc:5582] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 reported cstate change: config changed from index -1 to 10, NON_VOTER 9133c463c51a41d1bcda681cad9e6d9b (127.0.106.129) added. New cstate: current_term: 2 leader_uuid: "e1ba29edf7c9461ca140735ae3609839" committed_config { opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: NON_VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: true } health_report { overall_health: UNKNOWN } } }
W20250814 01:54:12.258914 3942 consensus_peers.cc:489] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 -> Peer 9133c463c51a41d1bcda681cad9e6d9b (127.0.106.129:33085): Couldn't send request to peer 9133c463c51a41d1bcda681cad9e6d9b. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: b3acc7639edd406eb75d2d8662b9fc63. This is attempt 1: this message will repeat every 5th retry.
W20250814 01:54:12.262634 3720 catalog_manager.cc:5260] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet b3acc7639edd406eb75d2d8662b9fc63 with cas_config_opid_index 10: no extra replica candidate found for tablet b3acc7639edd406eb75d2d8662b9fc63 (table TestTable1 [id=70bded3494f1460e95248323a9e95ba7]): Not found: could not select location for extra replica: not enough tablet servers to satisfy replica placement policy: the total number of registered tablet servers (2) does not allow for adding an extra replica; consider bringing up more to have at least 4 tablet servers up and running
W20250814 01:54:12.462091 4057 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:54:12.462641 4057 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250814 01:54:12.463116 4057 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:54:12.496145 4057 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250814 01:54:12.497014 4057 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.131
I20250814 01:54:12.533176 4057 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:35399
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.131:40049
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.0.106.131
--webserver_port=43431
--tserver_master_addrs=127.0.106.190:41981
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.131
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 6307ab29b78ebc064971e01ca1da31590298dcda
build type FASTDEBUG
built by None at 14 Aug 2025 01:43:21 UTC on 5fd53c4cbb9d
build id 7557
TSAN enabled
I20250814 01:54:12.534459 4057 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250814 01:54:12.536062 4057 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250814 01:54:12.546900 4075 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250814 01:54:12.832494 4082 ts_tablet_manager.cc:927] T b3acc7639edd406eb75d2d8662b9fc63 P 9133c463c51a41d1bcda681cad9e6d9b: Initiating tablet copy from peer e1ba29edf7c9461ca140735ae3609839 (127.0.106.130:34695)
I20250814 01:54:12.846603 4082 tablet_copy_client.cc:323] T b3acc7639edd406eb75d2d8662b9fc63 P 9133c463c51a41d1bcda681cad9e6d9b: tablet copy: Beginning tablet copy session from remote peer at address 127.0.106.130:34695
I20250814 01:54:12.870846 4028 tablet_copy_service.cc:140] P e1ba29edf7c9461ca140735ae3609839: Received BeginTabletCopySession request for tablet b3acc7639edd406eb75d2d8662b9fc63 from peer 9133c463c51a41d1bcda681cad9e6d9b ({username='slave'} at 127.0.106.129:58751)
I20250814 01:54:12.871554 4028 tablet_copy_service.cc:161] P e1ba29edf7c9461ca140735ae3609839: Beginning new tablet copy session on tablet b3acc7639edd406eb75d2d8662b9fc63 from peer 9133c463c51a41d1bcda681cad9e6d9b at {username='slave'} at 127.0.106.129:58751: session id = 9133c463c51a41d1bcda681cad9e6d9b-b3acc7639edd406eb75d2d8662b9fc63
I20250814 01:54:12.893007 4028 tablet_copy_source_session.cc:215] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839: Tablet Copy: opened 0 blocks and 1 log segments
I20250814 01:54:12.898511 4082 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet b3acc7639edd406eb75d2d8662b9fc63. 1 dirs total, 0 dirs full, 0 dirs failed
I20250814 01:54:12.931020 4082 tablet_copy_client.cc:806] T b3acc7639edd406eb75d2d8662b9fc63 P 9133c463c51a41d1bcda681cad9e6d9b: tablet copy: Starting download of 0 data blocks...
I20250814 01:54:12.932039 4082 tablet_copy_client.cc:670] T b3acc7639edd406eb75d2d8662b9fc63 P 9133c463c51a41d1bcda681cad9e6d9b: tablet copy: Starting download of 1 WAL segments...
I20250814 01:54:12.940948 4082 tablet_copy_client.cc:538] T b3acc7639edd406eb75d2d8662b9fc63 P 9133c463c51a41d1bcda681cad9e6d9b: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250814 01:54:12.950551 4082 tablet_bootstrap.cc:492] T b3acc7639edd406eb75d2d8662b9fc63 P 9133c463c51a41d1bcda681cad9e6d9b: Bootstrap starting.
I20250814 01:54:13.129041 4082 tablet_bootstrap.cc:492] T b3acc7639edd406eb75d2d8662b9fc63 P 9133c463c51a41d1bcda681cad9e6d9b: Bootstrap replayed 1/1 log segments. Stats: ops{read=10 overwritten=0 applied=10 ignored=0} inserts{seen=350 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250814 01:54:13.130213 4082 tablet_bootstrap.cc:492] T b3acc7639edd406eb75d2d8662b9fc63 P 9133c463c51a41d1bcda681cad9e6d9b: Bootstrap complete.
I20250814 01:54:13.131129 4082 ts_tablet_manager.cc:1397] T b3acc7639edd406eb75d2d8662b9fc63 P 9133c463c51a41d1bcda681cad9e6d9b: Time spent bootstrapping tablet: real 0.181s user 0.125s sys 0.035s
I20250814 01:54:13.134137 4082 raft_consensus.cc:357] T b3acc7639edd406eb75d2d8662b9fc63 P 9133c463c51a41d1bcda681cad9e6d9b [term 2 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: NON_VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: true } }
I20250814 01:54:13.134989 4082 raft_consensus.cc:738] T b3acc7639edd406eb75d2d8662b9fc63 P 9133c463c51a41d1bcda681cad9e6d9b [term 2 LEARNER]: Becoming Follower/Learner. State: Replica: 9133c463c51a41d1bcda681cad9e6d9b, State: Initialized, Role: LEARNER
I20250814 01:54:13.135813 4082 consensus_queue.cc:260] T b3acc7639edd406eb75d2d8662b9fc63 P 9133c463c51a41d1bcda681cad9e6d9b [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 10, Last appended: 2.10, Last appended by leader: 10, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: NON_VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: true } }
I20250814 01:54:13.155489 4082 ts_tablet_manager.cc:1428] T b3acc7639edd406eb75d2d8662b9fc63 P 9133c463c51a41d1bcda681cad9e6d9b: Time spent starting tablet: real 0.024s user 0.014s sys 0.007s
I20250814 01:54:13.157676 4028 tablet_copy_service.cc:342] P e1ba29edf7c9461ca140735ae3609839: Request end of tablet copy session 9133c463c51a41d1bcda681cad9e6d9b-b3acc7639edd406eb75d2d8662b9fc63 received from {username='slave'} at 127.0.106.129:58751
I20250814 01:54:13.158248 4028 tablet_copy_service.cc:434] P e1ba29edf7c9461ca140735ae3609839: ending tablet copy session 9133c463c51a41d1bcda681cad9e6d9b-b3acc7639edd406eb75d2d8662b9fc63 on tablet b3acc7639edd406eb75d2d8662b9fc63 with peer 9133c463c51a41d1bcda681cad9e6d9b
I20250814 01:54:13.314949 3864 raft_consensus.cc:1215] T b3acc7639edd406eb75d2d8662b9fc63 P 9133c463c51a41d1bcda681cad9e6d9b [term 2 LEARNER]: Deduplicated request from leader. Original: 2.9->[2.10-2.10] Dedup: 2.10->[]
W20250814 01:54:12.547894 4076 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:54:13.787173 4077 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1237 milliseconds
W20250814 01:54:13.788249 4078 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:54:13.790762 4057 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.244s user 0.444s sys 0.697s
W20250814 01:54:13.791096 4057 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.244s user 0.444s sys 0.697s
I20250814 01:54:13.791319 4057 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250814 01:54:13.793296 4057 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250814 01:54:13.796005 4057 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250814 01:54:13.797495 4057 hybrid_clock.cc:648] HybridClock initialized: now 1755136453797422 us; error 69 us; skew 500 ppm
I20250814 01:54:13.798588 4057 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250814 01:54:13.807293 4057 webserver.cc:480] Webserver started at http://127.0.106.131:43431/ using document root <none> and password file <none>
I20250814 01:54:13.808641 4057 fs_manager.cc:362] Metadata directory not provided
I20250814 01:54:13.808959 4057 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250814 01:54:13.819846 4057 fs_manager.cc:714] Time spent opening directory manager: real 0.006s user 0.004s sys 0.004s
I20250814 01:54:13.826009 4094 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250814 01:54:13.827235 4057 fs_manager.cc:730] Time spent opening block manager: real 0.005s user 0.003s sys 0.001s
I20250814 01:54:13.827641 4057 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "136ed1bf01a24d0db8541678c6fed252"
format_stamp: "Formatted at 2025-08-14 01:54:00 on dist-test-slave-30wj"
I20250814 01:54:13.830376 4057 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250814 01:54:13.858824 4095 raft_consensus.cc:1062] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839: attempting to promote NON_VOTER 9133c463c51a41d1bcda681cad9e6d9b to VOTER
I20250814 01:54:13.860209 4095 consensus_queue.cc:237] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 10, Committed index: 10, Last appended: 2.10, Last appended by leader: 8, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } }
I20250814 01:54:13.864537 3864 raft_consensus.cc:1273] T b3acc7639edd406eb75d2d8662b9fc63 P 9133c463c51a41d1bcda681cad9e6d9b [term 2 LEARNER]: Refusing update from remote peer e1ba29edf7c9461ca140735ae3609839: Log matching property violated. Preceding OpId in replica: term: 2 index: 10. Preceding OpId from leader: term: 2 index: 11. (index mismatch)
I20250814 01:54:13.865836 4096 consensus_queue.cc:1035] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [LEADER]: Connected to new peer: Peer: permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 11, Last known committed idx: 10, Time since last communication: 0.000s
I20250814 01:54:13.874146 3864 raft_consensus.cc:2953] T b3acc7639edd406eb75d2d8662b9fc63 P 9133c463c51a41d1bcda681cad9e6d9b [term 2 FOLLOWER]: Committing config change with OpId 2.11: config changed from index 10 to 11, 9133c463c51a41d1bcda681cad9e6d9b (127.0.106.129) changed from NON_VOTER to VOTER. New config: { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } } }
I20250814 01:54:13.872426 4095 raft_consensus.cc:2953] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [term 2 LEADER]: Committing config change with OpId 2.11: config changed from index 10 to 11, 9133c463c51a41d1bcda681cad9e6d9b (127.0.106.129) changed from NON_VOTER to VOTER. New config: { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } } }
I20250814 01:54:13.893478 3719 catalog_manager.cc:5582] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 reported cstate change: config changed from index 10 to 11, 9133c463c51a41d1bcda681cad9e6d9b (127.0.106.129) changed from NON_VOTER to VOTER. New cstate: current_term: 2 leader_uuid: "e1ba29edf7c9461ca140735ae3609839" committed_config { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } health_report { overall_health: HEALTHY } } }
I20250814 01:54:13.942911 4057 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250814 01:54:13.944347 4057 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250814 01:54:13.944772 4057 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250814 01:54:13.947325 4057 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250814 01:54:13.953796 4112 ts_tablet_manager.cc:542] Loading tablet metadata (0/1 complete)
I20250814 01:54:13.961361 4057 ts_tablet_manager.cc:579] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20250814 01:54:13.961613 4057 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.009s user 0.001s sys 0.001s
I20250814 01:54:13.961913 4057 ts_tablet_manager.cc:594] Registering tablets (0/1 complete)
I20250814 01:54:13.966553 4057 ts_tablet_manager.cc:610] Registered 1 tablets
I20250814 01:54:13.966744 4057 ts_tablet_manager.cc:589] Time spent register tablets: real 0.005s user 0.004s sys 0.000s
I20250814 01:54:13.967137 4112 tablet_bootstrap.cc:492] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252: Bootstrap starting.
I20250814 01:54:14.033063 4112 log.cc:826] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252: Log is configured to *not* fsync() on all Append() calls
I20250814 01:54:14.147460 4057 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.131:40049
I20250814 01:54:14.147612 4219 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.131:40049 every 8 connection(s)
I20250814 01:54:14.151044 4057 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250814 01:54:14.153475 426 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu as pid 4057
I20250814 01:54:14.191529 4112 tablet_bootstrap.cc:492] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252: Bootstrap replayed 1/1 log segments. Stats: ops{read=8 overwritten=0 applied=8 ignored=0} inserts{seen=350 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250814 01:54:14.192654 4112 tablet_bootstrap.cc:492] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252: Bootstrap complete.
I20250814 01:54:14.194337 4112 ts_tablet_manager.cc:1397] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252: Time spent bootstrapping tablet: real 0.228s user 0.138s sys 0.047s
I20250814 01:54:14.201601 4220 heartbeater.cc:344] Connected to a master server at 127.0.106.190:41981
I20250814 01:54:14.202230 4220 heartbeater.cc:461] Registering TS with master...
I20250814 01:54:14.203558 4220 heartbeater.cc:507] Master 127.0.106.190:41981 requested a full tablet report, sending...
I20250814 01:54:14.207402 3719 ts_manager.cc:194] Registered new tserver with Master: 136ed1bf01a24d0db8541678c6fed252 (127.0.106.131:40049)
I20250814 01:54:14.210462 3719 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.131:49923
I20250814 01:54:14.211359 4112 raft_consensus.cc:357] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } }
I20250814 01:54:14.214363 4112 raft_consensus.cc:738] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 136ed1bf01a24d0db8541678c6fed252, State: Initialized, Role: FOLLOWER
I20250814 01:54:14.214569 426 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250814 01:54:14.215365 4112 consensus_queue.cc:260] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 8, Last appended: 1.8, Last appended by leader: 8, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } }
I20250814 01:54:14.215953 4112 raft_consensus.cc:397] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250814 01:54:14.216226 4112 raft_consensus.cc:491] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250814 01:54:14.216534 4112 raft_consensus.cc:3058] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 [term 1 FOLLOWER]: Advancing to term 2
I20250814 01:54:14.219264 426 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
W20250814 01:54:14.222018 426 ts_itest-base.cc:209] found only 0 out of 3 replicas of tablet ea532542ffb34d05bafdc9cdf0dbf89a: tablet_id: "ea532542ffb34d05bafdc9cdf0dbf89a" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" }
I20250814 01:54:14.222967 4112 raft_consensus.cc:513] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } }
I20250814 01:54:14.223541 4112 leader_election.cc:304] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 136ed1bf01a24d0db8541678c6fed252; no voters:
I20250814 01:54:14.224993 4112 leader_election.cc:290] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 [CANDIDATE]: Term 2 election: Requested vote from peers
I20250814 01:54:14.225368 4226 raft_consensus.cc:2802] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 [term 2 FOLLOWER]: Leader election won for term 2
I20250814 01:54:14.227442 4226 raft_consensus.cc:695] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 [term 2 LEADER]: Becoming Leader. State: Replica: 136ed1bf01a24d0db8541678c6fed252, State: Running, Role: LEADER
I20250814 01:54:14.227780 4220 heartbeater.cc:499] Master 127.0.106.190:41981 was elected leader, sending a full tablet report...
I20250814 01:54:14.228228 4226 consensus_queue.cc:237] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 8, Committed index: 8, Last appended: 1.8, Last appended by leader: 8, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } }
I20250814 01:54:14.228581 4112 ts_tablet_manager.cc:1428] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252: Time spent starting tablet: real 0.034s user 0.030s sys 0.003s
I20250814 01:54:14.236277 3719 catalog_manager.cc:5582] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 reported cstate change: term changed from 0 to 2, leader changed from <none> to 136ed1bf01a24d0db8541678c6fed252 (127.0.106.131), VOTER 136ed1bf01a24d0db8541678c6fed252 (127.0.106.131) added. New cstate: current_term: 2 leader_uuid: "136ed1bf01a24d0db8541678c6fed252" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } health_report { overall_health: HEALTHY } } }
I20250814 01:54:14.256338 4175 consensus_queue.cc:237] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 9, Committed index: 9, Last appended: 2.9, Last appended by leader: 8, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } } peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: NON_VOTER last_known_addr { host: "127.0.106.130" port: 34695 } attrs { promote: true } }
I20250814 01:54:14.259450 4227 raft_consensus.cc:2953] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 [term 2 LEADER]: Committing config change with OpId 2.10: config changed from index -1 to 10, NON_VOTER e1ba29edf7c9461ca140735ae3609839 (127.0.106.130) added. New config: { opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } } peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: NON_VOTER last_known_addr { host: "127.0.106.130" port: 34695 } attrs { promote: true } } }
I20250814 01:54:14.266175 3706 catalog_manager.cc:5095] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet ea532542ffb34d05bafdc9cdf0dbf89a with cas_config_opid_index -1: ChangeConfig:ADD_PEER:NON_VOTER succeeded (attempt 1)
W20250814 01:54:14.268862 4106 consensus_peers.cc:489] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 -> Peer e1ba29edf7c9461ca140735ae3609839 (127.0.106.130:34695): Couldn't send request to peer e1ba29edf7c9461ca140735ae3609839. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: ea532542ffb34d05bafdc9cdf0dbf89a. This is attempt 1: this message will repeat every 5th retry.
I20250814 01:54:14.269040 3719 catalog_manager.cc:5582] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 reported cstate change: config changed from index -1 to 10, NON_VOTER e1ba29edf7c9461ca140735ae3609839 (127.0.106.130) added. New cstate: current_term: 2 leader_uuid: "136ed1bf01a24d0db8541678c6fed252" committed_config { opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: NON_VOTER last_known_addr { host: "127.0.106.130" port: 34695 } attrs { promote: true } health_report { overall_health: UNKNOWN } } }
I20250814 01:54:14.275835 4008 consensus_queue.cc:237] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 11, Committed index: 11, Last appended: 2.11, Last appended by leader: 8, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } } peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: NON_VOTER last_known_addr { host: "127.0.106.131" port: 40049 } attrs { promote: true } }
I20250814 01:54:14.278275 4175 consensus_queue.cc:237] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 10, Committed index: 10, Last appended: 2.10, Last appended by leader: 8, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } } peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: NON_VOTER last_known_addr { host: "127.0.106.130" port: 34695 } attrs { promote: true } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: NON_VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: true } }
I20250814 01:54:14.280632 3864 raft_consensus.cc:1273] T b3acc7639edd406eb75d2d8662b9fc63 P 9133c463c51a41d1bcda681cad9e6d9b [term 2 FOLLOWER]: Refusing update from remote peer e1ba29edf7c9461ca140735ae3609839: Log matching property violated. Preceding OpId in replica: term: 2 index: 11. Preceding OpId from leader: term: 2 index: 12. (index mismatch)
I20250814 01:54:14.282254 4100 consensus_queue.cc:1035] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [LEADER]: Connected to new peer: Peer: permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 12, Last known committed idx: 11, Time since last communication: 0.001s
I20250814 01:54:14.283445 4227 raft_consensus.cc:2953] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 [term 2 LEADER]: Committing config change with OpId 2.11: config changed from index 10 to 11, NON_VOTER 9133c463c51a41d1bcda681cad9e6d9b (127.0.106.129) added. New config: { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } } peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: NON_VOTER last_known_addr { host: "127.0.106.130" port: 34695 } attrs { promote: true } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: NON_VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: true } } }
W20250814 01:54:14.286288 4106 consensus_peers.cc:489] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 -> Peer e1ba29edf7c9461ca140735ae3609839 (127.0.106.130:34695): Couldn't send request to peer e1ba29edf7c9461ca140735ae3609839. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: ea532542ffb34d05bafdc9cdf0dbf89a. This is attempt 1: this message will repeat every 5th retry.
I20250814 01:54:14.291707 4101 raft_consensus.cc:2953] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [term 2 LEADER]: Committing config change with OpId 2.12: config changed from index 11 to 12, NON_VOTER 136ed1bf01a24d0db8541678c6fed252 (127.0.106.131) added. New config: { opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } } peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: NON_VOTER last_known_addr { host: "127.0.106.131" port: 40049 } attrs { promote: true } } }
I20250814 01:54:14.292958 3706 catalog_manager.cc:5095] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet ea532542ffb34d05bafdc9cdf0dbf89a with cas_config_opid_index 10: ChangeConfig:ADD_PEER:NON_VOTER succeeded (attempt 1)
W20250814 01:54:14.294981 3941 consensus_peers.cc:489] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 -> Peer 136ed1bf01a24d0db8541678c6fed252 (127.0.106.131:40049): Couldn't send request to peer 136ed1bf01a24d0db8541678c6fed252. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: b3acc7639edd406eb75d2d8662b9fc63. This is attempt 1: this message will repeat every 5th retry.
I20250814 01:54:14.294266 3864 raft_consensus.cc:2953] T b3acc7639edd406eb75d2d8662b9fc63 P 9133c463c51a41d1bcda681cad9e6d9b [term 2 FOLLOWER]: Committing config change with OpId 2.12: config changed from index 11 to 12, NON_VOTER 136ed1bf01a24d0db8541678c6fed252 (127.0.106.131) added. New config: { opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } } peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: NON_VOTER last_known_addr { host: "127.0.106.131" port: 40049 } attrs { promote: true } } }
W20250814 01:54:14.295753 4106 consensus_peers.cc:489] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 -> Peer 9133c463c51a41d1bcda681cad9e6d9b (127.0.106.129:33085): Couldn't send request to peer 9133c463c51a41d1bcda681cad9e6d9b. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: ea532542ffb34d05bafdc9cdf0dbf89a. This is attempt 1: this message will repeat every 5th retry.
I20250814 01:54:14.299225 3719 catalog_manager.cc:5582] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 reported cstate change: config changed from index 10 to 11, NON_VOTER 9133c463c51a41d1bcda681cad9e6d9b (127.0.106.129) added. New cstate: current_term: 2 leader_uuid: "136ed1bf01a24d0db8541678c6fed252" committed_config { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: NON_VOTER last_known_addr { host: "127.0.106.130" port: 34695 } attrs { promote: true } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: NON_VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: true } health_report { overall_health: UNKNOWN } } }
I20250814 01:54:14.301132 3707 catalog_manager.cc:5095] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet b3acc7639edd406eb75d2d8662b9fc63 with cas_config_opid_index 11: ChangeConfig:ADD_PEER:NON_VOTER succeeded (attempt 5)
I20250814 01:54:14.305164 3720 catalog_manager.cc:5582] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 reported cstate change: config changed from index 11 to 12, NON_VOTER 136ed1bf01a24d0db8541678c6fed252 (127.0.106.131) added. New cstate: current_term: 2 leader_uuid: "e1ba29edf7c9461ca140735ae3609839" committed_config { opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: NON_VOTER last_known_addr { host: "127.0.106.131" port: 40049 } attrs { promote: true } health_report { overall_health: UNKNOWN } } }
I20250814 01:54:14.563246 3705 catalog_manager.cc:5129] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet b3acc7639edd406eb75d2d8662b9fc63 with cas_config_opid_index 10: aborting the task: latest config opid_index 12; task opid_index 10
I20250814 01:54:14.733489 4236 ts_tablet_manager.cc:927] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b: Initiating tablet copy from peer 136ed1bf01a24d0db8541678c6fed252 (127.0.106.131:40049)
I20250814 01:54:14.734927 4236 tablet_copy_client.cc:323] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b: tablet copy: Beginning tablet copy session from remote peer at address 127.0.106.131:40049
I20250814 01:54:14.744268 4195 tablet_copy_service.cc:140] P 136ed1bf01a24d0db8541678c6fed252: Received BeginTabletCopySession request for tablet ea532542ffb34d05bafdc9cdf0dbf89a from peer 9133c463c51a41d1bcda681cad9e6d9b ({username='slave'} at 127.0.106.129:33635)
I20250814 01:54:14.744716 4195 tablet_copy_service.cc:161] P 136ed1bf01a24d0db8541678c6fed252: Beginning new tablet copy session on tablet ea532542ffb34d05bafdc9cdf0dbf89a from peer 9133c463c51a41d1bcda681cad9e6d9b at {username='slave'} at 127.0.106.129:33635: session id = 9133c463c51a41d1bcda681cad9e6d9b-ea532542ffb34d05bafdc9cdf0dbf89a
I20250814 01:54:14.749135 4195 tablet_copy_source_session.cc:215] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252: Tablet Copy: opened 0 blocks and 1 log segments
I20250814 01:54:14.752074 4236 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet ea532542ffb34d05bafdc9cdf0dbf89a. 1 dirs total, 0 dirs full, 0 dirs failed
I20250814 01:54:14.761391 4236 tablet_copy_client.cc:806] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b: tablet copy: Starting download of 0 data blocks...
I20250814 01:54:14.761868 4236 tablet_copy_client.cc:670] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b: tablet copy: Starting download of 1 WAL segments...
I20250814 01:54:14.765159 4236 tablet_copy_client.cc:538] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250814 01:54:14.770221 4236 tablet_bootstrap.cc:492] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b: Bootstrap starting.
I20250814 01:54:14.802035 4240 ts_tablet_manager.cc:927] T ea532542ffb34d05bafdc9cdf0dbf89a P e1ba29edf7c9461ca140735ae3609839: Initiating tablet copy from peer 136ed1bf01a24d0db8541678c6fed252 (127.0.106.131:40049)
I20250814 01:54:14.803771 4240 tablet_copy_client.cc:323] T ea532542ffb34d05bafdc9cdf0dbf89a P e1ba29edf7c9461ca140735ae3609839: tablet copy: Beginning tablet copy session from remote peer at address 127.0.106.131:40049
I20250814 01:54:14.805580 4195 tablet_copy_service.cc:140] P 136ed1bf01a24d0db8541678c6fed252: Received BeginTabletCopySession request for tablet ea532542ffb34d05bafdc9cdf0dbf89a from peer e1ba29edf7c9461ca140735ae3609839 ({username='slave'} at 127.0.106.130:50637)
I20250814 01:54:14.806012 4195 tablet_copy_service.cc:161] P 136ed1bf01a24d0db8541678c6fed252: Beginning new tablet copy session on tablet ea532542ffb34d05bafdc9cdf0dbf89a from peer e1ba29edf7c9461ca140735ae3609839 at {username='slave'} at 127.0.106.130:50637: session id = e1ba29edf7c9461ca140735ae3609839-ea532542ffb34d05bafdc9cdf0dbf89a
I20250814 01:54:14.810432 4195 tablet_copy_source_session.cc:215] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252: Tablet Copy: opened 0 blocks and 1 log segments
I20250814 01:54:14.813506 4240 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet ea532542ffb34d05bafdc9cdf0dbf89a. 1 dirs total, 0 dirs full, 0 dirs failed
I20250814 01:54:14.828298 4240 tablet_copy_client.cc:806] T ea532542ffb34d05bafdc9cdf0dbf89a P e1ba29edf7c9461ca140735ae3609839: tablet copy: Starting download of 0 data blocks...
I20250814 01:54:14.828873 4240 tablet_copy_client.cc:670] T ea532542ffb34d05bafdc9cdf0dbf89a P e1ba29edf7c9461ca140735ae3609839: tablet copy: Starting download of 1 WAL segments...
I20250814 01:54:14.832079 4240 tablet_copy_client.cc:538] T ea532542ffb34d05bafdc9cdf0dbf89a P e1ba29edf7c9461ca140735ae3609839: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250814 01:54:14.840065 4240 tablet_bootstrap.cc:492] T ea532542ffb34d05bafdc9cdf0dbf89a P e1ba29edf7c9461ca140735ae3609839: Bootstrap starting.
I20250814 01:54:14.875875 4244 ts_tablet_manager.cc:927] T b3acc7639edd406eb75d2d8662b9fc63 P 136ed1bf01a24d0db8541678c6fed252: Initiating tablet copy from peer e1ba29edf7c9461ca140735ae3609839 (127.0.106.130:34695)
I20250814 01:54:14.878120 4244 tablet_copy_client.cc:323] T b3acc7639edd406eb75d2d8662b9fc63 P 136ed1bf01a24d0db8541678c6fed252: tablet copy: Beginning tablet copy session from remote peer at address 127.0.106.130:34695
I20250814 01:54:14.881260 4028 tablet_copy_service.cc:140] P e1ba29edf7c9461ca140735ae3609839: Received BeginTabletCopySession request for tablet b3acc7639edd406eb75d2d8662b9fc63 from peer 136ed1bf01a24d0db8541678c6fed252 ({username='slave'} at 127.0.106.131:42311)
I20250814 01:54:14.881763 4028 tablet_copy_service.cc:161] P e1ba29edf7c9461ca140735ae3609839: Beginning new tablet copy session on tablet b3acc7639edd406eb75d2d8662b9fc63 from peer 136ed1bf01a24d0db8541678c6fed252 at {username='slave'} at 127.0.106.131:42311: session id = 136ed1bf01a24d0db8541678c6fed252-b3acc7639edd406eb75d2d8662b9fc63
I20250814 01:54:14.888357 4236 tablet_bootstrap.cc:492] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b: Bootstrap replayed 1/1 log segments. Stats: ops{read=11 overwritten=0 applied=11 ignored=0} inserts{seen=350 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250814 01:54:14.888429 4028 tablet_copy_source_session.cc:215] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839: Tablet Copy: opened 0 blocks and 1 log segments
I20250814 01:54:14.889142 4236 tablet_bootstrap.cc:492] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b: Bootstrap complete.
I20250814 01:54:14.889835 4236 ts_tablet_manager.cc:1397] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b: Time spent bootstrapping tablet: real 0.120s user 0.105s sys 0.013s
I20250814 01:54:14.891889 4244 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet b3acc7639edd406eb75d2d8662b9fc63. 1 dirs total, 0 dirs full, 0 dirs failed
I20250814 01:54:14.891817 4236 raft_consensus.cc:357] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b [term 2 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } } peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: NON_VOTER last_known_addr { host: "127.0.106.130" port: 34695 } attrs { promote: true } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: NON_VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: true } }
I20250814 01:54:14.892470 4236 raft_consensus.cc:738] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b [term 2 LEARNER]: Becoming Follower/Learner. State: Replica: 9133c463c51a41d1bcda681cad9e6d9b, State: Initialized, Role: LEARNER
I20250814 01:54:14.892959 4236 consensus_queue.cc:260] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 11, Last appended: 2.11, Last appended by leader: 11, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } } peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: NON_VOTER last_known_addr { host: "127.0.106.130" port: 34695 } attrs { promote: true } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: NON_VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: true } }
I20250814 01:54:14.910133 4236 ts_tablet_manager.cc:1428] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b: Time spent starting tablet: real 0.020s user 0.013s sys 0.004s
I20250814 01:54:14.911935 4195 tablet_copy_service.cc:342] P 136ed1bf01a24d0db8541678c6fed252: Request end of tablet copy session 9133c463c51a41d1bcda681cad9e6d9b-ea532542ffb34d05bafdc9cdf0dbf89a received from {username='slave'} at 127.0.106.129:33635
I20250814 01:54:14.912395 4195 tablet_copy_service.cc:434] P 136ed1bf01a24d0db8541678c6fed252: ending tablet copy session 9133c463c51a41d1bcda681cad9e6d9b-ea532542ffb34d05bafdc9cdf0dbf89a on tablet ea532542ffb34d05bafdc9cdf0dbf89a with peer 9133c463c51a41d1bcda681cad9e6d9b
I20250814 01:54:14.919035 4244 tablet_copy_client.cc:806] T b3acc7639edd406eb75d2d8662b9fc63 P 136ed1bf01a24d0db8541678c6fed252: tablet copy: Starting download of 0 data blocks...
I20250814 01:54:14.919612 4244 tablet_copy_client.cc:670] T b3acc7639edd406eb75d2d8662b9fc63 P 136ed1bf01a24d0db8541678c6fed252: tablet copy: Starting download of 1 WAL segments...
I20250814 01:54:14.923722 4244 tablet_copy_client.cc:538] T b3acc7639edd406eb75d2d8662b9fc63 P 136ed1bf01a24d0db8541678c6fed252: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250814 01:54:14.931829 4244 tablet_bootstrap.cc:492] T b3acc7639edd406eb75d2d8662b9fc63 P 136ed1bf01a24d0db8541678c6fed252: Bootstrap starting.
I20250814 01:54:14.975589 4240 tablet_bootstrap.cc:492] T ea532542ffb34d05bafdc9cdf0dbf89a P e1ba29edf7c9461ca140735ae3609839: Bootstrap replayed 1/1 log segments. Stats: ops{read=11 overwritten=0 applied=11 ignored=0} inserts{seen=350 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250814 01:54:14.976357 4240 tablet_bootstrap.cc:492] T ea532542ffb34d05bafdc9cdf0dbf89a P e1ba29edf7c9461ca140735ae3609839: Bootstrap complete.
I20250814 01:54:14.976907 4240 ts_tablet_manager.cc:1397] T ea532542ffb34d05bafdc9cdf0dbf89a P e1ba29edf7c9461ca140735ae3609839: Time spent bootstrapping tablet: real 0.137s user 0.126s sys 0.008s
I20250814 01:54:14.979187 4240 raft_consensus.cc:357] T ea532542ffb34d05bafdc9cdf0dbf89a P e1ba29edf7c9461ca140735ae3609839 [term 2 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } } peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: NON_VOTER last_known_addr { host: "127.0.106.130" port: 34695 } attrs { promote: true } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: NON_VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: true } }
I20250814 01:54:14.979825 4240 raft_consensus.cc:738] T ea532542ffb34d05bafdc9cdf0dbf89a P e1ba29edf7c9461ca140735ae3609839 [term 2 LEARNER]: Becoming Follower/Learner. State: Replica: e1ba29edf7c9461ca140735ae3609839, State: Initialized, Role: LEARNER
I20250814 01:54:14.980324 4240 consensus_queue.cc:260] T ea532542ffb34d05bafdc9cdf0dbf89a P e1ba29edf7c9461ca140735ae3609839 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 11, Last appended: 2.11, Last appended by leader: 11, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } } peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: NON_VOTER last_known_addr { host: "127.0.106.130" port: 34695 } attrs { promote: true } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: NON_VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: true } }
I20250814 01:54:14.982221 4240 ts_tablet_manager.cc:1428] T ea532542ffb34d05bafdc9cdf0dbf89a P e1ba29edf7c9461ca140735ae3609839: Time spent starting tablet: real 0.005s user 0.008s sys 0.000s
I20250814 01:54:14.983608 4195 tablet_copy_service.cc:342] P 136ed1bf01a24d0db8541678c6fed252: Request end of tablet copy session e1ba29edf7c9461ca140735ae3609839-ea532542ffb34d05bafdc9cdf0dbf89a received from {username='slave'} at 127.0.106.130:50637
I20250814 01:54:14.983919 4195 tablet_copy_service.cc:434] P 136ed1bf01a24d0db8541678c6fed252: ending tablet copy session e1ba29edf7c9461ca140735ae3609839-ea532542ffb34d05bafdc9cdf0dbf89a on tablet ea532542ffb34d05bafdc9cdf0dbf89a with peer e1ba29edf7c9461ca140735ae3609839
I20250814 01:54:15.040894 4244 tablet_bootstrap.cc:492] T b3acc7639edd406eb75d2d8662b9fc63 P 136ed1bf01a24d0db8541678c6fed252: Bootstrap replayed 1/1 log segments. Stats: ops{read=12 overwritten=0 applied=12 ignored=0} inserts{seen=350 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250814 01:54:15.041441 4244 tablet_bootstrap.cc:492] T b3acc7639edd406eb75d2d8662b9fc63 P 136ed1bf01a24d0db8541678c6fed252: Bootstrap complete.
I20250814 01:54:15.041887 4244 ts_tablet_manager.cc:1397] T b3acc7639edd406eb75d2d8662b9fc63 P 136ed1bf01a24d0db8541678c6fed252: Time spent bootstrapping tablet: real 0.110s user 0.100s sys 0.011s
I20250814 01:54:15.043489 4244 raft_consensus.cc:357] T b3acc7639edd406eb75d2d8662b9fc63 P 136ed1bf01a24d0db8541678c6fed252 [term 2 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } } peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: NON_VOTER last_known_addr { host: "127.0.106.131" port: 40049 } attrs { promote: true } }
I20250814 01:54:15.043939 4244 raft_consensus.cc:738] T b3acc7639edd406eb75d2d8662b9fc63 P 136ed1bf01a24d0db8541678c6fed252 [term 2 LEARNER]: Becoming Follower/Learner. State: Replica: 136ed1bf01a24d0db8541678c6fed252, State: Initialized, Role: LEARNER
I20250814 01:54:15.044287 4244 consensus_queue.cc:260] T b3acc7639edd406eb75d2d8662b9fc63 P 136ed1bf01a24d0db8541678c6fed252 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 12, Last appended: 2.12, Last appended by leader: 12, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } } peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: NON_VOTER last_known_addr { host: "127.0.106.131" port: 40049 } attrs { promote: true } }
I20250814 01:54:15.045585 4244 ts_tablet_manager.cc:1428] T b3acc7639edd406eb75d2d8662b9fc63 P 136ed1bf01a24d0db8541678c6fed252: Time spent starting tablet: real 0.004s user 0.000s sys 0.000s
I20250814 01:54:15.046922 4028 tablet_copy_service.cc:342] P e1ba29edf7c9461ca140735ae3609839: Request end of tablet copy session 136ed1bf01a24d0db8541678c6fed252-b3acc7639edd406eb75d2d8662b9fc63 received from {username='slave'} at 127.0.106.131:42311
I20250814 01:54:15.047206 4028 tablet_copy_service.cc:434] P e1ba29edf7c9461ca140735ae3609839: ending tablet copy session 136ed1bf01a24d0db8541678c6fed252-b3acc7639edd406eb75d2d8662b9fc63 on tablet b3acc7639edd406eb75d2d8662b9fc63 with peer 136ed1bf01a24d0db8541678c6fed252
I20250814 01:54:15.223183 4008 raft_consensus.cc:1215] T ea532542ffb34d05bafdc9cdf0dbf89a P e1ba29edf7c9461ca140735ae3609839 [term 2 LEARNER]: Deduplicated request from leader. Original: 2.10->[2.11-2.11] Dedup: 2.11->[]
I20250814 01:54:15.226898 426 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 9133c463c51a41d1bcda681cad9e6d9b to finish bootstrapping
I20250814 01:54:15.242327 426 ts_itest-base.cc:246] Waiting for 1 tablets on tserver e1ba29edf7c9461ca140735ae3609839 to finish bootstrapping
I20250814 01:54:15.252888 426 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 136ed1bf01a24d0db8541678c6fed252 to finish bootstrapping
I20250814 01:54:15.275825 4175 raft_consensus.cc:1215] T b3acc7639edd406eb75d2d8662b9fc63 P 136ed1bf01a24d0db8541678c6fed252 [term 2 LEARNER]: Deduplicated request from leader. Original: 2.11->[2.12-2.12] Dedup: 2.12->[]
I20250814 01:54:15.328778 3864 raft_consensus.cc:1215] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b [term 2 LEARNER]: Deduplicated request from leader. Original: 2.10->[2.11-2.11] Dedup: 2.11->[]
I20250814 01:54:15.501377 4155 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250814 01:54:15.505681 3844 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250814 01:54:15.506083 3987 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
Master Summary
UUID | Address | Status
----------------------------------+---------------------+---------
7f0da5f36e8940ea919b6eabe2ddbc00 | 127.0.106.190:41981 | HEALTHY
Unusual flags for Master:
Flag | Value | Tags | Master
----------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
ipki_ca_key_size | 768 | experimental | all 1 server(s) checked
ipki_server_key_size | 768 | experimental | all 1 server(s) checked
never_fsync | true | unsafe,advanced | all 1 server(s) checked
openssl_security_level_override | 0 | unsafe,hidden | all 1 server(s) checked
rpc_reuseport | true | experimental | all 1 server(s) checked
rpc_server_allow_ephemeral_ports | true | unsafe | all 1 server(s) checked
server_dump_info_format | pb | hidden | all 1 server(s) checked
server_dump_info_path | /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data/info.pb | hidden | all 1 server(s) checked
tsk_num_rsa_bits | 512 | experimental | all 1 server(s) checked
Flags of checked categories for Master:
Flag | Value | Master
---------------------+---------------------+-------------------------
builtin_ntp_servers | 127.0.106.148:35399 | all 1 server(s) checked
time_source | builtin | all 1 server(s) checked
Tablet Server Summary
UUID | Address | Status | Location | Tablet Leaders | Active Scanners
----------------------------------+---------------------+---------+----------+----------------+-----------------
136ed1bf01a24d0db8541678c6fed252 | 127.0.106.131:40049 | HEALTHY | <none> | 1 | 0
9133c463c51a41d1bcda681cad9e6d9b | 127.0.106.129:33085 | HEALTHY | <none> | 1 | 0
e1ba29edf7c9461ca140735ae3609839 | 127.0.106.130:34695 | HEALTHY | <none> | 1 | 0
Tablet Server Location Summary
Location | Count
----------+---------
<none> | 3
Unusual flags for Tablet Server:
Flag | Value | Tags | Tablet Server
----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
ipki_server_key_size | 768 | experimental | all 3 server(s) checked
local_ip_for_outbound_sockets | 127.0.106.129 | experimental | 127.0.106.129:33085
local_ip_for_outbound_sockets | 127.0.106.130 | experimental | 127.0.106.130:34695
local_ip_for_outbound_sockets | 127.0.106.131 | experimental | 127.0.106.131:40049
never_fsync | true | unsafe,advanced | all 3 server(s) checked
openssl_security_level_override | 0 | unsafe,hidden | all 3 server(s) checked
rpc_server_allow_ephemeral_ports | true | unsafe | all 3 server(s) checked
server_dump_info_format | pb | hidden | all 3 server(s) checked
server_dump_info_path | /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb | hidden | 127.0.106.129:33085
server_dump_info_path | /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb | hidden | 127.0.106.130:34695
server_dump_info_path | /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb | hidden | 127.0.106.131:40049
Flags of checked categories for Tablet Server:
Flag | Value | Tablet Server
---------------------+---------------------+-------------------------
builtin_ntp_servers | 127.0.106.148:35399 | all 3 server(s) checked
time_source | builtin | all 3 server(s) checked
Version Summary
Version | Servers
-----------------+-------------------------
1.19.0-SNAPSHOT | all 4 server(s) checked
Tablet Summary
The cluster doesn't have any matching system tables
Summary by table
Name | RF | Status | Total Tablets | Healthy | Recovering | Under-replicated | Unavailable
------------+----+---------+---------------+---------+------------+------------------+-------------
TestTable | 3 | HEALTHY | 1 | 1 | 0 | 0 | 0
TestTable1 | 3 | HEALTHY | 1 | 1 | 0 | 0 | 0
TestTable2 | 1 | HEALTHY | 1 | 1 | 0 | 0 | 0
Tablet Replica Count Summary
Statistic | Replica Count
----------------+---------------
Minimum | 2
First Quartile | 2
Median | 2
Third Quartile | 3
Maximum | 3
Total Count Summary
| Total Count
----------------+-------------
Masters | 1
Tablet Servers | 3
Tables | 3
Tablets | 3
Replicas | 7
==================
Warnings:
==================
Some masters have unsafe, experimental, or hidden flags set
Some tablet servers have unsafe, experimental, or hidden flags set
OK
I20250814 01:54:15.709751 426 log_verifier.cc:126] Checking tablet 9184dfddca454231b2eabe3e05851953
I20250814 01:54:15.726585 4261 raft_consensus.cc:1062] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839: attempting to promote NON_VOTER 136ed1bf01a24d0db8541678c6fed252 to VOTER
I20250814 01:54:15.728502 4261 consensus_queue.cc:237] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 12, Committed index: 12, Last appended: 2.12, Last appended by leader: 8, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } } peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } attrs { promote: false } }
I20250814 01:54:15.734155 4175 raft_consensus.cc:1273] T b3acc7639edd406eb75d2d8662b9fc63 P 136ed1bf01a24d0db8541678c6fed252 [term 2 LEARNER]: Refusing update from remote peer e1ba29edf7c9461ca140735ae3609839: Log matching property violated. Preceding OpId in replica: term: 2 index: 12. Preceding OpId from leader: term: 2 index: 13. (index mismatch)
I20250814 01:54:15.734158 3864 raft_consensus.cc:1273] T b3acc7639edd406eb75d2d8662b9fc63 P 9133c463c51a41d1bcda681cad9e6d9b [term 2 FOLLOWER]: Refusing update from remote peer e1ba29edf7c9461ca140735ae3609839: Log matching property violated. Preceding OpId in replica: term: 2 index: 12. Preceding OpId from leader: term: 2 index: 13. (index mismatch)
I20250814 01:54:15.735411 4261 consensus_queue.cc:1035] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [LEADER]: Connected to new peer: Peer: permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 13, Last known committed idx: 12, Time since last communication: 0.000s
I20250814 01:54:15.736346 4262 consensus_queue.cc:1035] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [LEADER]: Connected to new peer: Peer: permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 13, Last known committed idx: 12, Time since last communication: 0.000s
I20250814 01:54:15.743142 4243 raft_consensus.cc:2953] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [term 2 LEADER]: Committing config change with OpId 2.13: config changed from index 12 to 13, 136ed1bf01a24d0db8541678c6fed252 (127.0.106.131) changed from NON_VOTER to VOTER. New config: { opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } } peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } attrs { promote: false } } }
I20250814 01:54:15.745116 426 log_verifier.cc:177] Verified matching terms for 7 ops in tablet 9184dfddca454231b2eabe3e05851953
I20250814 01:54:15.744447 3864 raft_consensus.cc:2953] T b3acc7639edd406eb75d2d8662b9fc63 P 9133c463c51a41d1bcda681cad9e6d9b [term 2 FOLLOWER]: Committing config change with OpId 2.13: config changed from index 12 to 13, 136ed1bf01a24d0db8541678c6fed252 (127.0.106.131) changed from NON_VOTER to VOTER. New config: { opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } } peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } attrs { promote: false } } }
I20250814 01:54:15.745405 426 log_verifier.cc:126] Checking tablet b3acc7639edd406eb75d2d8662b9fc63
I20250814 01:54:15.751346 4175 raft_consensus.cc:2953] T b3acc7639edd406eb75d2d8662b9fc63 P 136ed1bf01a24d0db8541678c6fed252 [term 2 FOLLOWER]: Committing config change with OpId 2.13: config changed from index 12 to 13, 136ed1bf01a24d0db8541678c6fed252 (127.0.106.131) changed from NON_VOTER to VOTER. New config: { opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } } peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } attrs { promote: false } } }
I20250814 01:54:15.752957 3718 catalog_manager.cc:5582] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 reported cstate change: config changed from index 12 to 13, 136ed1bf01a24d0db8541678c6fed252 (127.0.106.131) changed from NON_VOTER to VOTER. New cstate: current_term: 2 leader_uuid: "e1ba29edf7c9461ca140735ae3609839" committed_config { opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } attrs { promote: false } health_report { overall_health: HEALTHY } } }
I20250814 01:54:15.766934 4249 raft_consensus.cc:1062] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252: attempting to promote NON_VOTER e1ba29edf7c9461ca140735ae3609839 to VOTER
I20250814 01:54:15.769080 4249 consensus_queue.cc:237] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 11, Committed index: 11, Last appended: 2.11, Last appended by leader: 8, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } } peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } attrs { promote: false } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: NON_VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: true } }
I20250814 01:54:15.777957 3864 raft_consensus.cc:1273] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b [term 2 LEARNER]: Refusing update from remote peer 136ed1bf01a24d0db8541678c6fed252: Log matching property violated. Preceding OpId in replica: term: 2 index: 11. Preceding OpId from leader: term: 2 index: 12. (index mismatch)
I20250814 01:54:15.778437 4008 raft_consensus.cc:1273] T ea532542ffb34d05bafdc9cdf0dbf89a P e1ba29edf7c9461ca140735ae3609839 [term 2 LEARNER]: Refusing update from remote peer 136ed1bf01a24d0db8541678c6fed252: Log matching property violated. Preceding OpId in replica: term: 2 index: 11. Preceding OpId from leader: term: 2 index: 12. (index mismatch)
I20250814 01:54:15.779467 4249 consensus_queue.cc:1035] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 [LEADER]: Connected to new peer: Peer: permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: NON_VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: true }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 12, Last known committed idx: 11, Time since last communication: 0.000s
I20250814 01:54:15.780191 4248 consensus_queue.cc:1035] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 [LEADER]: Connected to new peer: Peer: permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 12, Last known committed idx: 11, Time since last communication: 0.000s
I20250814 01:54:15.786940 4248 raft_consensus.cc:2953] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 [term 2 LEADER]: Committing config change with OpId 2.12: config changed from index 11 to 12, e1ba29edf7c9461ca140735ae3609839 (127.0.106.130) changed from NON_VOTER to VOTER. New config: { opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } } peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } attrs { promote: false } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: NON_VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: true } } }
I20250814 01:54:15.788683 4008 raft_consensus.cc:2953] T ea532542ffb34d05bafdc9cdf0dbf89a P e1ba29edf7c9461ca140735ae3609839 [term 2 FOLLOWER]: Committing config change with OpId 2.12: config changed from index 11 to 12, e1ba29edf7c9461ca140735ae3609839 (127.0.106.130) changed from NON_VOTER to VOTER. New config: { opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } } peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } attrs { promote: false } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: NON_VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: true } } }
I20250814 01:54:15.790232 3864 raft_consensus.cc:2953] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b [term 2 LEARNER]: Committing config change with OpId 2.12: config changed from index 11 to 12, e1ba29edf7c9461ca140735ae3609839 (127.0.106.130) changed from NON_VOTER to VOTER. New config: { opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } } peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } attrs { promote: false } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: NON_VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: true } } }
I20250814 01:54:15.800729 4227 raft_consensus.cc:1062] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252: attempting to promote NON_VOTER 9133c463c51a41d1bcda681cad9e6d9b to VOTER
I20250814 01:54:15.802373 4227 consensus_queue.cc:237] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 12, Committed index: 12, Last appended: 2.12, Last appended by leader: 8, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } } peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } attrs { promote: false } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } }
I20250814 01:54:15.804330 4249 raft_consensus.cc:1025] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 [term 2 LEADER]: attempt to promote peer 9133c463c51a41d1bcda681cad9e6d9b: there is already a config change operation in progress. Unable to promote follower until it completes. Doing nothing.
I20250814 01:54:15.807861 3864 raft_consensus.cc:1273] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b [term 2 LEARNER]: Refusing update from remote peer 136ed1bf01a24d0db8541678c6fed252: Log matching property violated. Preceding OpId in replica: term: 2 index: 12. Preceding OpId from leader: term: 2 index: 13. (index mismatch)
I20250814 01:54:15.808825 4007 raft_consensus.cc:1273] T ea532542ffb34d05bafdc9cdf0dbf89a P e1ba29edf7c9461ca140735ae3609839 [term 2 FOLLOWER]: Refusing update from remote peer 136ed1bf01a24d0db8541678c6fed252: Log matching property violated. Preceding OpId in replica: term: 2 index: 12. Preceding OpId from leader: term: 2 index: 13. (index mismatch)
I20250814 01:54:15.808358 3719 catalog_manager.cc:5582] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 reported cstate change: config changed from index 11 to 12, e1ba29edf7c9461ca140735ae3609839 (127.0.106.130) changed from NON_VOTER to VOTER. New cstate: current_term: 2 leader_uuid: "136ed1bf01a24d0db8541678c6fed252" committed_config { opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } attrs { promote: false } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: NON_VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: true } health_report { overall_health: HEALTHY } } }
I20250814 01:54:15.810824 4291 consensus_queue.cc:1035] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 [LEADER]: Connected to new peer: Peer: permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 13, Last known committed idx: 12, Time since last communication: 0.001s
I20250814 01:54:15.814422 4291 consensus_queue.cc:1035] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 [LEADER]: Connected to new peer: Peer: permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 13, Last known committed idx: 12, Time since last communication: 0.000s
I20250814 01:54:15.821283 4248 raft_consensus.cc:2953] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 [term 2 LEADER]: Committing config change with OpId 2.13: config changed from index 12 to 13, 9133c463c51a41d1bcda681cad9e6d9b (127.0.106.129) changed from NON_VOTER to VOTER. New config: { opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } } peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } attrs { promote: false } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } } }
I20250814 01:54:15.823639 3863 raft_consensus.cc:2953] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b [term 2 FOLLOWER]: Committing config change with OpId 2.13: config changed from index 12 to 13, 9133c463c51a41d1bcda681cad9e6d9b (127.0.106.129) changed from NON_VOTER to VOTER. New config: { opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } } peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } attrs { promote: false } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } } }
I20250814 01:54:15.832190 4007 raft_consensus.cc:2953] T ea532542ffb34d05bafdc9cdf0dbf89a P e1ba29edf7c9461ca140735ae3609839 [term 2 FOLLOWER]: Committing config change with OpId 2.13: config changed from index 12 to 13, 9133c463c51a41d1bcda681cad9e6d9b (127.0.106.129) changed from NON_VOTER to VOTER. New config: { opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } } peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } attrs { promote: false } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } } }
I20250814 01:54:15.834004 3718 catalog_manager.cc:5582] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 reported cstate change: config changed from index 12 to 13, 9133c463c51a41d1bcda681cad9e6d9b (127.0.106.129) changed from NON_VOTER to VOTER. New cstate: current_term: 2 leader_uuid: "136ed1bf01a24d0db8541678c6fed252" committed_config { opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } attrs { promote: false } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } health_report { overall_health: HEALTHY } } }
I20250814 01:54:15.864879 426 log_verifier.cc:177] Verified matching terms for 13 ops in tablet b3acc7639edd406eb75d2d8662b9fc63
I20250814 01:54:15.865110 426 log_verifier.cc:126] Checking tablet ea532542ffb34d05bafdc9cdf0dbf89a
I20250814 01:54:15.952519 426 log_verifier.cc:177] Verified matching terms for 13 ops in tablet ea532542ffb34d05bafdc9cdf0dbf89a
I20250814 01:54:15.952927 426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu with pid 3687
I20250814 01:54:15.977990 426 minidump.cc:252] Setting minidump size limit to 20M
I20250814 01:54:15.979372 426 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250814 01:54:15.980311 426 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250814 01:54:15.990237 4294 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:54:15.990507 4293 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250814 01:54:15.991604 426 server_base.cc:1047] running on GCE node
W20250814 01:54:16.085894 4296 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250814 01:54:16.086915 426 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250814 01:54:16.087148 426 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250814 01:54:16.087314 426 hybrid_clock.cc:648] HybridClock initialized: now 1755136456087294 us; error 0 us; skew 500 ppm
I20250814 01:54:16.087931 426 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250814 01:54:16.090849 426 webserver.cc:480] Webserver started at http://0.0.0.0:39701/ using document root <none> and password file <none>
I20250814 01:54:16.091610 426 fs_manager.cc:362] Metadata directory not provided
I20250814 01:54:16.091804 426 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250814 01:54:16.096678 426 fs_manager.cc:714] Time spent opening directory manager: real 0.003s user 0.005s sys 0.000s
I20250814 01:54:16.100111 4301 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250814 01:54:16.100978 426 fs_manager.cc:730] Time spent opening block manager: real 0.002s user 0.002s sys 0.000s
I20250814 01:54:16.101262 426 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
uuid: "7f0da5f36e8940ea919b6eabe2ddbc00"
format_stamp: "Formatted at 2025-08-14 01:53:54 on dist-test-slave-30wj"
I20250814 01:54:16.102885 426 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250814 01:54:16.125145 426 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250814 01:54:16.126525 426 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250814 01:54:16.126952 426 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250814 01:54:16.135807 426 sys_catalog.cc:263] Verifying existing consensus state
W20250814 01:54:16.139163 426 sys_catalog.cc:243] For a single master config, on-disk Raft master: 127.0.106.190:41981 exists but no master address supplied!
I20250814 01:54:16.140949 426 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00: Bootstrap starting.
I20250814 01:54:16.200063 426 log.cc:826] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00: Log is configured to *not* fsync() on all Append() calls
I20250814 01:54:16.263300 426 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00: Bootstrap replayed 1/1 log segments. Stats: ops{read=30 overwritten=0 applied=30 ignored=0} inserts{seen=13 ignored=0} mutations{seen=21 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250814 01:54:16.263978 426 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00: Bootstrap complete.
I20250814 01:54:16.276396 426 raft_consensus.cc:357] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [term 3 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7f0da5f36e8940ea919b6eabe2ddbc00" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 41981 } }
I20250814 01:54:16.276925 426 raft_consensus.cc:738] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [term 3 FOLLOWER]: Becoming Follower/Learner. State: Replica: 7f0da5f36e8940ea919b6eabe2ddbc00, State: Initialized, Role: FOLLOWER
I20250814 01:54:16.277626 426 consensus_queue.cc:260] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 30, Last appended: 3.30, Last appended by leader: 30, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7f0da5f36e8940ea919b6eabe2ddbc00" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 41981 } }
I20250814 01:54:16.278105 426 raft_consensus.cc:397] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [term 3 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250814 01:54:16.278312 426 raft_consensus.cc:491] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [term 3 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250814 01:54:16.278573 426 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [term 3 FOLLOWER]: Advancing to term 4
I20250814 01:54:16.283528 426 raft_consensus.cc:513] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [term 4 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7f0da5f36e8940ea919b6eabe2ddbc00" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 41981 } }
I20250814 01:54:16.284143 426 leader_election.cc:304] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [CANDIDATE]: Term 4 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 7f0da5f36e8940ea919b6eabe2ddbc00; no voters:
I20250814 01:54:16.285176 426 leader_election.cc:290] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [CANDIDATE]: Term 4 election: Requested vote from peers
I20250814 01:54:16.285422 4308 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [term 4 FOLLOWER]: Leader election won for term 4
I20250814 01:54:16.288213 4308 raft_consensus.cc:695] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [term 4 LEADER]: Becoming Leader. State: Replica: 7f0da5f36e8940ea919b6eabe2ddbc00, State: Running, Role: LEADER
I20250814 01:54:16.288946 4308 consensus_queue.cc:237] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 30, Committed index: 30, Last appended: 3.30, Last appended by leader: 30, Current term: 4, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7f0da5f36e8940ea919b6eabe2ddbc00" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 41981 } }
I20250814 01:54:16.294817 4310 sys_catalog.cc:455] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 7f0da5f36e8940ea919b6eabe2ddbc00. Latest consensus state: current_term: 4 leader_uuid: "7f0da5f36e8940ea919b6eabe2ddbc00" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7f0da5f36e8940ea919b6eabe2ddbc00" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 41981 } } }
I20250814 01:54:16.295238 4310 sys_catalog.cc:458] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [sys.catalog]: This master's current role is: LEADER
I20250814 01:54:16.296732 4309 sys_catalog.cc:455] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 4 leader_uuid: "7f0da5f36e8940ea919b6eabe2ddbc00" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7f0da5f36e8940ea919b6eabe2ddbc00" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 41981 } } }
I20250814 01:54:16.297230 4309 sys_catalog.cc:458] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [sys.catalog]: This master's current role is: LEADER
I20250814 01:54:16.321686 426 tablet_replica.cc:331] stopping tablet replica
I20250814 01:54:16.322273 426 raft_consensus.cc:2241] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [term 4 LEADER]: Raft consensus shutting down.
I20250814 01:54:16.322664 426 raft_consensus.cc:2270] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [term 4 FOLLOWER]: Raft consensus is shut down!
I20250814 01:54:16.324754 426 master.cc:561] Master@0.0.0.0:7051 shutting down...
W20250814 01:54:16.325202 426 acceptor_pool.cc:196] Could not shut down acceptor socket on 0.0.0.0:7051: Network error: shutdown error: Transport endpoint is not connected (error 107)
I20250814 01:54:16.354321 426 master.cc:583] Master@0.0.0.0:7051 shutdown complete.
W20250814 01:54:16.861423 4220 heartbeater.cc:646] Failed to heartbeat to 127.0.106.190:41981 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.0.106.190:41981: connect: Connection refused (error 111)
W20250814 01:54:16.866350 4053 heartbeater.cc:646] Failed to heartbeat to 127.0.106.190:41981 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.0.106.190:41981: connect: Connection refused (error 111)
W20250814 01:54:16.874405 3909 heartbeater.cc:646] Failed to heartbeat to 127.0.106.190:41981 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.0.106.190:41981: connect: Connection refused (error 111)
I20250814 01:54:21.326092 426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu with pid 3757
I20250814 01:54:21.347584 426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu with pid 3913
I20250814 01:54:21.371429 426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu with pid 4057
I20250814 01:54:21.395793 426 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
/tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.106.190:41981
--webserver_interface=127.0.106.190
--webserver_port=38807
--builtin_ntp_servers=127.0.106.148:35399
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.0.106.190:41981 with env {}
W20250814 01:54:21.685604 4382 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:54:21.686221 4382 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250814 01:54:21.686668 4382 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:54:21.717288 4382 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250814 01:54:21.717593 4382 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250814 01:54:21.717871 4382 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250814 01:54:21.718101 4382 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250814 01:54:21.752715 4382 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:35399
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.0.106.190:41981
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.106.190:41981
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.0.106.190
--webserver_port=38807
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision 6307ab29b78ebc064971e01ca1da31590298dcda
build type FASTDEBUG
built by None at 14 Aug 2025 01:43:21 UTC on 5fd53c4cbb9d
build id 7557
TSAN enabled
I20250814 01:54:21.754024 4382 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250814 01:54:21.755587 4382 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250814 01:54:21.766150 4388 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:54:21.766824 4389 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:54:21.770115 4391 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:54:22.850900 4390 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection time-out
I20250814 01:54:22.850966 4382 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250814 01:54:22.854586 4382 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250814 01:54:22.857127 4382 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250814 01:54:22.858451 4382 hybrid_clock.cc:648] HybridClock initialized: now 1755136462858426 us; error 42 us; skew 500 ppm
I20250814 01:54:22.859222 4382 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250814 01:54:22.865510 4382 webserver.cc:480] Webserver started at http://127.0.106.190:38807/ using document root <none> and password file <none>
I20250814 01:54:22.866420 4382 fs_manager.cc:362] Metadata directory not provided
I20250814 01:54:22.866621 4382 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250814 01:54:22.874047 4382 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.003s sys 0.004s
I20250814 01:54:22.878223 4398 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250814 01:54:22.879201 4382 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.002s sys 0.002s
I20250814 01:54:22.879501 4382 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
uuid: "7f0da5f36e8940ea919b6eabe2ddbc00"
format_stamp: "Formatted at 2025-08-14 01:53:54 on dist-test-slave-30wj"
I20250814 01:54:22.881354 4382 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250814 01:54:22.938407 4382 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250814 01:54:22.939855 4382 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250814 01:54:22.940271 4382 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250814 01:54:23.008612 4382 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.190:41981
I20250814 01:54:23.008680 4449 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.190:41981 every 8 connection(s)
I20250814 01:54:23.011430 4382 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250814 01:54:23.020304 426 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu as pid 4382
I20250814 01:54:23.021816 426 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
/tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.129:33085
--local_ip_for_outbound_sockets=127.0.106.129
--tserver_master_addrs=127.0.106.190:41981
--webserver_port=44955
--webserver_interface=127.0.106.129
--builtin_ntp_servers=127.0.106.148:35399
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
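[editor's note] The flag dump above is the full command line the external mini cluster uses to launch ts-0 as a child process. An illustrative sketch of reproducing such an invocation by hand; the binary path and addresses are the test-specific values from this run, and the filesystem directories are placeholders:

    import subprocess

    # Sketch only: launching a tablet server the way the test harness does,
    # with a reduced subset of the flags shown above. Paths are placeholders.
    KUDU_BIN = "/tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu"

    argv = [
        KUDU_BIN, "tserver", "run",
        "--fs_wal_dir=/path/to/ts-0/wal",          # placeholder
        "--fs_data_dirs=/path/to/ts-0/data",       # placeholder
        "--rpc_bind_addresses=127.0.106.129:33085",
        "--tserver_master_addrs=127.0.106.190:41981",
        "--time_source=builtin",
        "--logtostderr",
    ]

    if __name__ == "__main__":
        # Runs until killed, as in the test harness.
        subprocess.Popen(argv)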
I20250814 01:54:23.022166 4450 sys_catalog.cc:263] Verifying existing consensus state
I20250814 01:54:23.027437 4450 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00: Bootstrap starting.
I20250814 01:54:23.037490 4450 log.cc:826] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00: Log is configured to *not* fsync() on all Append() calls
I20250814 01:54:23.113451 4450 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00: Bootstrap replayed 1/1 log segments. Stats: ops{read=34 overwritten=0 applied=34 ignored=0} inserts{seen=15 ignored=0} mutations{seen=23 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250814 01:54:23.114230 4450 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00: Bootstrap complete.
I20250814 01:54:23.132494 4450 raft_consensus.cc:357] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [term 5 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7f0da5f36e8940ea919b6eabe2ddbc00" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 41981 } }
I20250814 01:54:23.134531 4450 raft_consensus.cc:738] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [term 5 FOLLOWER]: Becoming Follower/Learner. State: Replica: 7f0da5f36e8940ea919b6eabe2ddbc00, State: Initialized, Role: FOLLOWER
I20250814 01:54:23.135273 4450 consensus_queue.cc:260] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 34, Last appended: 5.34, Last appended by leader: 34, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7f0da5f36e8940ea919b6eabe2ddbc00" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 41981 } }
I20250814 01:54:23.135737 4450 raft_consensus.cc:397] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [term 5 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250814 01:54:23.135979 4450 raft_consensus.cc:491] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [term 5 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250814 01:54:23.136253 4450 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [term 5 FOLLOWER]: Advancing to term 6
I20250814 01:54:23.141117 4450 raft_consensus.cc:513] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [term 6 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7f0da5f36e8940ea919b6eabe2ddbc00" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 41981 } }
I20250814 01:54:23.141726 4450 leader_election.cc:304] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [CANDIDATE]: Term 6 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 7f0da5f36e8940ea919b6eabe2ddbc00; no voters:
I20250814 01:54:23.143620 4450 leader_election.cc:290] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [CANDIDATE]: Term 6 election: Requested vote from peers
I20250814 01:54:23.143944 4454 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [term 6 FOLLOWER]: Leader election won for term 6
I20250814 01:54:23.146788 4454 raft_consensus.cc:695] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [term 6 LEADER]: Becoming Leader. State: Replica: 7f0da5f36e8940ea919b6eabe2ddbc00, State: Running, Role: LEADER
I20250814 01:54:23.147578 4454 consensus_queue.cc:237] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 34, Committed index: 34, Last appended: 5.34, Last appended by leader: 34, Current term: 6, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7f0da5f36e8940ea919b6eabe2ddbc00" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 41981 } }
I20250814 01:54:23.148118 4450 sys_catalog.cc:564] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [sys.catalog]: configured and running, proceeding with master startup.
I20250814 01:54:23.157332 4455 sys_catalog.cc:455] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 6 leader_uuid: "7f0da5f36e8940ea919b6eabe2ddbc00" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7f0da5f36e8940ea919b6eabe2ddbc00" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 41981 } } }
I20250814 01:54:23.158110 4455 sys_catalog.cc:458] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [sys.catalog]: This master's current role is: LEADER
I20250814 01:54:23.157943 4456 sys_catalog.cc:455] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 7f0da5f36e8940ea919b6eabe2ddbc00. Latest consensus state: current_term: 6 leader_uuid: "7f0da5f36e8940ea919b6eabe2ddbc00" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7f0da5f36e8940ea919b6eabe2ddbc00" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 41981 } } }
I20250814 01:54:23.158665 4456 sys_catalog.cc:458] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00 [sys.catalog]: This master's current role is: LEADER
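[editor's note] The election summary above ("received 1 responses out of 1 voters: 1 yes votes", "Majority size: 1") follows the standard Raft quorum rule: a candidate wins once its yes votes reach floor(voters / 2) + 1. A small sketch of that arithmetic, offered as an illustration rather than Kudu source:

    # Sketch of the quorum arithmetic behind the election summaries in this log.
    def majority_size(num_voters):
        return num_voters // 2 + 1

    def election_won(yes_votes, num_voters):
        return yes_votes >= majority_size(num_voters)

    # Single-replica sys catalog config: 1 voter, majority size 1, so the
    # lone "yes" vote decides the election immediately.
    assert majority_size(1) == 1 and election_won(1, 1)
    # The three-replica tablet configs later in the log need 2 of 3 votes,
    # which is why a lone "yes" loses the pre-elections while peers are down.
    assert majority_size(3) == 2 and not election_won(1, 3)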
I20250814 01:54:23.171439 4462 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250814 01:54:23.185305 4462 catalog_manager.cc:671] Loaded metadata for table TestTable2 [id=256e11a38719467d94d49047e335b05d]
I20250814 01:54:23.187027 4462 catalog_manager.cc:671] Loaded metadata for table TestTable1 [id=70bded3494f1460e95248323a9e95ba7]
I20250814 01:54:23.188598 4462 catalog_manager.cc:671] Loaded metadata for table TestTable [id=e3d5dc91a30e4feeb8bc61c7491a429d]
I20250814 01:54:23.196759 4462 tablet_loader.cc:96] loaded metadata for tablet 9184dfddca454231b2eabe3e05851953 (table TestTable2 [id=256e11a38719467d94d49047e335b05d])
I20250814 01:54:23.199025 4462 tablet_loader.cc:96] loaded metadata for tablet b3acc7639edd406eb75d2d8662b9fc63 (table TestTable1 [id=70bded3494f1460e95248323a9e95ba7])
I20250814 01:54:23.200392 4462 tablet_loader.cc:96] loaded metadata for tablet ea532542ffb34d05bafdc9cdf0dbf89a (table TestTable [id=e3d5dc91a30e4feeb8bc61c7491a429d])
I20250814 01:54:23.201826 4462 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250814 01:54:23.207293 4462 catalog_manager.cc:1261] Loaded cluster ID: 51839d06f43f4f4d8312b038947fb808
I20250814 01:54:23.207614 4462 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250814 01:54:23.216197 4462 catalog_manager.cc:1506] Loading token signing keys...
I20250814 01:54:23.222110 4462 catalog_manager.cc:5966] T 00000000000000000000000000000000 P 7f0da5f36e8940ea919b6eabe2ddbc00: Loaded TSK: 0
I20250814 01:54:23.223651 4462 catalog_manager.cc:1516] Initializing in-progress tserver states...
W20250814 01:54:23.345777 4452 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:54:23.346259 4452 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250814 01:54:23.346727 4452 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:54:23.377985 4452 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250814 01:54:23.378796 4452 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.129
I20250814 01:54:23.412963 4452 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:35399
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.129:33085
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.0.106.129
--webserver_port=44955
--tserver_master_addrs=127.0.106.190:41981
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.129
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 6307ab29b78ebc064971e01ca1da31590298dcda
build type FASTDEBUG
built by None at 14 Aug 2025 01:43:21 UTC on 5fd53c4cbb9d
build id 7557
TSAN enabled
I20250814 01:54:23.414258 4452 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250814 01:54:23.415771 4452 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250814 01:54:23.427668 4477 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:54:23.430330 4478 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:54:24.543258 4480 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:54:24.546710 4479 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1116 milliseconds
I20250814 01:54:24.546783 4452 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250814 01:54:24.547940 4452 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250814 01:54:24.550602 4452 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250814 01:54:24.552042 4452 hybrid_clock.cc:648] HybridClock initialized: now 1755136464551983 us; error 79 us; skew 500 ppm
I20250814 01:54:24.552923 4452 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250814 01:54:24.559505 4452 webserver.cc:480] Webserver started at http://127.0.106.129:44955/ using document root <none> and password file <none>
I20250814 01:54:24.560640 4452 fs_manager.cc:362] Metadata directory not provided
I20250814 01:54:24.560910 4452 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250814 01:54:24.569295 4452 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.006s sys 0.000s
I20250814 01:54:24.574179 4487 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250814 01:54:24.575302 4452 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.001s sys 0.003s
I20250814 01:54:24.575661 4452 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "9133c463c51a41d1bcda681cad9e6d9b"
format_stamp: "Formatted at 2025-08-14 01:53:56 on dist-test-slave-30wj"
I20250814 01:54:24.577826 4452 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250814 01:54:24.625234 4452 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250814 01:54:24.626727 4452 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250814 01:54:24.627138 4452 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250814 01:54:24.630030 4452 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250814 01:54:24.636061 4494 ts_tablet_manager.cc:542] Loading tablet metadata (0/3 complete)
I20250814 01:54:24.651849 4452 ts_tablet_manager.cc:579] Loaded tablet metadata (3 total tablets, 3 live tablets)
I20250814 01:54:24.652091 4452 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.018s user 0.002s sys 0.000s
I20250814 01:54:24.652372 4452 ts_tablet_manager.cc:594] Registering tablets (0/3 complete)
I20250814 01:54:24.657636 4494 tablet_bootstrap.cc:492] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b: Bootstrap starting.
I20250814 01:54:24.667307 4452 ts_tablet_manager.cc:610] Registered 3 tablets
I20250814 01:54:24.667657 4452 ts_tablet_manager.cc:589] Time spent register tablets: real 0.015s user 0.016s sys 0.000s
I20250814 01:54:24.728926 4494 log.cc:826] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b: Log is configured to *not* fsync() on all Append() calls
I20250814 01:54:24.843001 4452 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.129:33085
I20250814 01:54:24.843144 4601 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.129:33085 every 8 connection(s)
I20250814 01:54:24.846521 4452 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250814 01:54:24.848866 426 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu as pid 4452
I20250814 01:54:24.850670 426 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
/tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.130:34695
--local_ip_for_outbound_sockets=127.0.106.130
--tserver_master_addrs=127.0.106.190:41981
--webserver_port=34373
--webserver_interface=127.0.106.130
--builtin_ntp_servers=127.0.106.148:35399
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250814 01:54:24.868568 4494 tablet_bootstrap.cc:492] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b: Bootstrap replayed 1/1 log segments. Stats: ops{read=13 overwritten=0 applied=13 ignored=0} inserts{seen=350 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250814 01:54:24.869760 4494 tablet_bootstrap.cc:492] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b: Bootstrap complete.
I20250814 01:54:24.871467 4494 ts_tablet_manager.cc:1397] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b: Time spent bootstrapping tablet: real 0.214s user 0.169s sys 0.039s
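[editor's note] Lines of the form "Bootstrap replayed 1/1 log segments. Stats: ops{...}" repeat for every tablet replica in this log. A hypothetical helper (not part of Kudu) for pulling the replay counters out of such lines when comparing tablets:

    import re

    # Sketch only: extract the ops{} counters from "Bootstrap replayed" lines.
    STATS_RE = re.compile(
        r"Bootstrap replayed (?P<segs>\d+)/(?P<total>\d+) log segments\. "
        r"Stats: ops\{read=(?P<read>\d+) overwritten=(?P<overwritten>\d+) "
        r"applied=(?P<applied>\d+) ignored=(?P<ignored>\d+)\}"
    )

    def parse_bootstrap_stats(line):
        """Return the replay counters as a dict of ints, or None if absent."""
        m = STATS_RE.search(line)
        return {k: int(v) for k, v in m.groupdict().items()} if m else None

    if __name__ == "__main__":
        example = ("Bootstrap replayed 1/1 log segments. Stats: ops{read=13 "
                   "overwritten=0 applied=13 ignored=0} inserts{seen=350 ignored=0} "
                   "mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates")
        print(parse_bootstrap_stats(example))
        # {'segs': 1, 'total': 1, 'read': 13, 'overwritten': 0, 'applied': 13, 'ignored': 0}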
I20250814 01:54:24.889946 4494 raft_consensus.cc:357] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } } peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } attrs { promote: false } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } }
I20250814 01:54:24.893106 4494 raft_consensus.cc:738] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: 9133c463c51a41d1bcda681cad9e6d9b, State: Initialized, Role: FOLLOWER
I20250814 01:54:24.894097 4494 consensus_queue.cc:260] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 13, Last appended: 2.13, Last appended by leader: 13, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } } peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } attrs { promote: false } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } }
I20250814 01:54:24.907222 4494 ts_tablet_manager.cc:1428] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b: Time spent starting tablet: real 0.035s user 0.023s sys 0.007s
I20250814 01:54:24.907976 4602 heartbeater.cc:344] Connected to a master server at 127.0.106.190:41981
I20250814 01:54:24.908296 4494 tablet_bootstrap.cc:492] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b: Bootstrap starting.
I20250814 01:54:24.908406 4602 heartbeater.cc:461] Registering TS with master...
I20250814 01:54:24.909560 4602 heartbeater.cc:507] Master 127.0.106.190:41981 requested a full tablet report, sending...
I20250814 01:54:24.915130 4415 ts_manager.cc:194] Registered new tserver with Master: 9133c463c51a41d1bcda681cad9e6d9b (127.0.106.129:33085)
I20250814 01:54:24.919879 4415 catalog_manager.cc:5582] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b reported cstate change: config changed from index -1 to 13, term changed from 0 to 2, VOTER 136ed1bf01a24d0db8541678c6fed252 (127.0.106.131) added, VOTER 9133c463c51a41d1bcda681cad9e6d9b (127.0.106.129) added, VOTER e1ba29edf7c9461ca140735ae3609839 (127.0.106.130) added. New cstate: current_term: 2 committed_config { opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } } peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } attrs { promote: false } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } } }
I20250814 01:54:24.972247 4415 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.129:48461
I20250814 01:54:24.976648 4602 heartbeater.cc:499] Master 127.0.106.190:41981 was elected leader, sending a full tablet report...
I20250814 01:54:25.041105 4494 tablet_bootstrap.cc:492] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b: Bootstrap replayed 1/1 log segments. Stats: ops{read=7 overwritten=0 applied=7 ignored=0} inserts{seen=250 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250814 01:54:25.042109 4494 tablet_bootstrap.cc:492] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b: Bootstrap complete.
I20250814 01:54:25.043610 4494 ts_tablet_manager.cc:1397] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b: Time spent bootstrapping tablet: real 0.136s user 0.099s sys 0.032s
I20250814 01:54:25.045796 4494 raft_consensus.cc:357] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } }
I20250814 01:54:25.046329 4494 raft_consensus.cc:738] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: 9133c463c51a41d1bcda681cad9e6d9b, State: Initialized, Role: FOLLOWER
I20250814 01:54:25.047034 4494 consensus_queue.cc:260] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 7, Last appended: 2.7, Last appended by leader: 7, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } }
I20250814 01:54:25.047619 4494 raft_consensus.cc:397] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b [term 2 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250814 01:54:25.047986 4494 raft_consensus.cc:491] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b [term 2 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250814 01:54:25.048410 4494 raft_consensus.cc:3058] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b [term 2 FOLLOWER]: Advancing to term 3
I20250814 01:54:25.056300 4494 raft_consensus.cc:513] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b [term 3 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } }
I20250814 01:54:25.057169 4494 leader_election.cc:304] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b [CANDIDATE]: Term 3 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 9133c463c51a41d1bcda681cad9e6d9b; no voters:
I20250814 01:54:25.057870 4494 leader_election.cc:290] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b [CANDIDATE]: Term 3 election: Requested vote from peers
I20250814 01:54:25.058099 4607 raft_consensus.cc:2802] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b [term 3 FOLLOWER]: Leader election won for term 3
I20250814 01:54:25.070317 4607 raft_consensus.cc:695] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b [term 3 LEADER]: Becoming Leader. State: Replica: 9133c463c51a41d1bcda681cad9e6d9b, State: Running, Role: LEADER
I20250814 01:54:25.071278 4607 consensus_queue.cc:237] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 7, Committed index: 7, Last appended: 2.7, Last appended by leader: 7, Current term: 3, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } }
I20250814 01:54:25.085465 4414 catalog_manager.cc:5582] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b reported cstate change: term changed from 2 to 3. New cstate: current_term: 3 leader_uuid: "9133c463c51a41d1bcda681cad9e6d9b" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } health_report { overall_health: HEALTHY } } }
I20250814 01:54:25.091449 4494 ts_tablet_manager.cc:1428] T 9184dfddca454231b2eabe3e05851953 P 9133c463c51a41d1bcda681cad9e6d9b: Time spent starting tablet: real 0.047s user 0.016s sys 0.003s
I20250814 01:54:25.092198 4494 tablet_bootstrap.cc:492] T b3acc7639edd406eb75d2d8662b9fc63 P 9133c463c51a41d1bcda681cad9e6d9b: Bootstrap starting.
W20250814 01:54:25.236254 4603 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:54:25.236732 4603 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250814 01:54:25.237221 4603 flags.cc:425] Enabled unsafe flag: --never_fsync=true
I20250814 01:54:25.252259 4494 tablet_bootstrap.cc:492] T b3acc7639edd406eb75d2d8662b9fc63 P 9133c463c51a41d1bcda681cad9e6d9b: Bootstrap replayed 1/1 log segments. Stats: ops{read=13 overwritten=0 applied=13 ignored=0} inserts{seen=350 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250814 01:54:25.253142 4494 tablet_bootstrap.cc:492] T b3acc7639edd406eb75d2d8662b9fc63 P 9133c463c51a41d1bcda681cad9e6d9b: Bootstrap complete.
I20250814 01:54:25.254558 4494 ts_tablet_manager.cc:1397] T b3acc7639edd406eb75d2d8662b9fc63 P 9133c463c51a41d1bcda681cad9e6d9b: Time spent bootstrapping tablet: real 0.163s user 0.153s sys 0.008s
I20250814 01:54:25.256610 4494 raft_consensus.cc:357] T b3acc7639edd406eb75d2d8662b9fc63 P 9133c463c51a41d1bcda681cad9e6d9b [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } } peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } attrs { promote: false } }
I20250814 01:54:25.257248 4494 raft_consensus.cc:738] T b3acc7639edd406eb75d2d8662b9fc63 P 9133c463c51a41d1bcda681cad9e6d9b [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: 9133c463c51a41d1bcda681cad9e6d9b, State: Initialized, Role: FOLLOWER
I20250814 01:54:25.257874 4494 consensus_queue.cc:260] T b3acc7639edd406eb75d2d8662b9fc63 P 9133c463c51a41d1bcda681cad9e6d9b [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 13, Last appended: 2.13, Last appended by leader: 13, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } } peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } attrs { promote: false } }
I20250814 01:54:25.259743 4494 ts_tablet_manager.cc:1428] T b3acc7639edd406eb75d2d8662b9fc63 P 9133c463c51a41d1bcda681cad9e6d9b: Time spent starting tablet: real 0.005s user 0.004s sys 0.000s
W20250814 01:54:25.272822 4603 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250814 01:54:25.273661 4603 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.130
I20250814 01:54:25.308099 4603 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:35399
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.130:34695
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.0.106.130
--webserver_port=34373
--tserver_master_addrs=127.0.106.190:41981
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.130
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 6307ab29b78ebc064971e01ca1da31590298dcda
build type FASTDEBUG
built by None at 14 Aug 2025 01:43:21 UTC on 5fd53c4cbb9d
build id 7557
TSAN enabled
I20250814 01:54:25.309479 4603 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250814 01:54:25.311014 4603 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250814 01:54:25.322932 4623 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250814 01:54:26.051882 4629 raft_consensus.cc:491] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b [term 2 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250814 01:54:26.052682 4629 raft_consensus.cc:513] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b [term 2 FOLLOWER]: Starting pre-election with config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } } peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } attrs { promote: false } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } }
W20250814 01:54:26.062364 4490 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.0.106.131:40049: connect: Connection refused (error 111)
I20250814 01:54:26.069947 4629 leader_election.cc:290] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b [CANDIDATE]: Term 3 pre-election: Requested pre-vote from peers 136ed1bf01a24d0db8541678c6fed252 (127.0.106.131:40049), e1ba29edf7c9461ca140735ae3609839 (127.0.106.130:34695)
W20250814 01:54:26.070293 4490 leader_election.cc:336] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer 136ed1bf01a24d0db8541678c6fed252 (127.0.106.131:40049): Network error: Client connection negotiation failed: client connection to 127.0.106.131:40049: connect: Connection refused (error 111)
W20250814 01:54:26.081665 4491 leader_election.cc:336] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer e1ba29edf7c9461ca140735ae3609839 (127.0.106.130:34695): Network error: Client connection negotiation failed: client connection to 127.0.106.130:34695: connect: Connection refused (error 111)
I20250814 01:54:26.082273 4491 leader_election.cc:304] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b [CANDIDATE]: Term 3 pre-election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: 9133c463c51a41d1bcda681cad9e6d9b; no voters: 136ed1bf01a24d0db8541678c6fed252, e1ba29edf7c9461ca140735ae3609839
I20250814 01:54:26.083122 4629 raft_consensus.cc:2747] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b [term 2 FOLLOWER]: Leader pre-election lost for term 3. Reason: could not achieve majority
W20250814 01:54:26.725502 4622 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 4603
W20250814 01:54:26.817636 4603 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.494s user 0.640s sys 0.837s
W20250814 01:54:25.324334 4624 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:54:26.818104 4603 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.495s user 0.641s sys 0.837s
W20250814 01:54:26.820418 4626 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:54:26.822590 4625 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1494 milliseconds
I20250814 01:54:26.822609 4603 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250814 01:54:26.823719 4603 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250814 01:54:26.826704 4603 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250814 01:54:26.828042 4603 hybrid_clock.cc:648] HybridClock initialized: now 1755136466828011 us; error 40 us; skew 500 ppm
I20250814 01:54:26.828778 4603 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250814 01:54:26.834699 4603 webserver.cc:480] Webserver started at http://127.0.106.130:34373/ using document root <none> and password file <none>
I20250814 01:54:26.835618 4603 fs_manager.cc:362] Metadata directory not provided
I20250814 01:54:26.835840 4603 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250814 01:54:26.843755 4603 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.005s sys 0.000s
I20250814 01:54:26.848435 4637 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250814 01:54:26.849551 4603 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.002s sys 0.000s
I20250814 01:54:26.849864 4603 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "e1ba29edf7c9461ca140735ae3609839"
format_stamp: "Formatted at 2025-08-14 01:53:58 on dist-test-slave-30wj"
I20250814 01:54:26.851748 4603 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250814 01:54:26.903182 4603 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250814 01:54:26.904579 4603 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250814 01:54:26.904995 4603 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250814 01:54:26.907395 4603 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250814 01:54:26.913067 4644 ts_tablet_manager.cc:542] Loading tablet metadata (0/2 complete)
I20250814 01:54:26.918648 4645 raft_consensus.cc:491] T b3acc7639edd406eb75d2d8662b9fc63 P 9133c463c51a41d1bcda681cad9e6d9b [term 2 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250814 01:54:26.919060 4645 raft_consensus.cc:513] T b3acc7639edd406eb75d2d8662b9fc63 P 9133c463c51a41d1bcda681cad9e6d9b [term 2 FOLLOWER]: Starting pre-election with config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } } peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } attrs { promote: false } }
I20250814 01:54:26.920504 4645 leader_election.cc:290] T b3acc7639edd406eb75d2d8662b9fc63 P 9133c463c51a41d1bcda681cad9e6d9b [CANDIDATE]: Term 3 pre-election: Requested pre-vote from peers e1ba29edf7c9461ca140735ae3609839 (127.0.106.130:34695), 136ed1bf01a24d0db8541678c6fed252 (127.0.106.131:40049)
W20250814 01:54:26.925580 4491 leader_election.cc:336] T b3acc7639edd406eb75d2d8662b9fc63 P 9133c463c51a41d1bcda681cad9e6d9b [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer e1ba29edf7c9461ca140735ae3609839 (127.0.106.130:34695): Network error: Client connection negotiation failed: client connection to 127.0.106.130:34695: connect: Connection refused (error 111)
W20250814 01:54:26.926012 4490 leader_election.cc:336] T b3acc7639edd406eb75d2d8662b9fc63 P 9133c463c51a41d1bcda681cad9e6d9b [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer 136ed1bf01a24d0db8541678c6fed252 (127.0.106.131:40049): Network error: Client connection negotiation failed: client connection to 127.0.106.131:40049: connect: Connection refused (error 111)
I20250814 01:54:26.926218 4603 ts_tablet_manager.cc:579] Loaded tablet metadata (2 total tablets, 2 live tablets)
I20250814 01:54:26.926450 4603 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.015s user 0.002s sys 0.000s
I20250814 01:54:26.926365 4490 leader_election.cc:304] T b3acc7639edd406eb75d2d8662b9fc63 P 9133c463c51a41d1bcda681cad9e6d9b [CANDIDATE]: Term 3 pre-election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: 9133c463c51a41d1bcda681cad9e6d9b; no voters: 136ed1bf01a24d0db8541678c6fed252, e1ba29edf7c9461ca140735ae3609839
I20250814 01:54:26.926710 4603 ts_tablet_manager.cc:594] Registering tablets (0/2 complete)
I20250814 01:54:26.926962 4645 raft_consensus.cc:2747] T b3acc7639edd406eb75d2d8662b9fc63 P 9133c463c51a41d1bcda681cad9e6d9b [term 2 FOLLOWER]: Leader pre-election lost for term 3. Reason: could not achieve majority
I20250814 01:54:26.931813 4644 tablet_bootstrap.cc:492] T ea532542ffb34d05bafdc9cdf0dbf89a P e1ba29edf7c9461ca140735ae3609839: Bootstrap starting.
I20250814 01:54:26.934437 4603 ts_tablet_manager.cc:610] Registered 2 tablets
I20250814 01:54:26.934638 4603 ts_tablet_manager.cc:589] Time spent register tablets: real 0.008s user 0.008s sys 0.000s
I20250814 01:54:26.983844 4644 log.cc:826] T ea532542ffb34d05bafdc9cdf0dbf89a P e1ba29edf7c9461ca140735ae3609839: Log is configured to *not* fsync() on all Append() calls
I20250814 01:54:27.099093 4644 tablet_bootstrap.cc:492] T ea532542ffb34d05bafdc9cdf0dbf89a P e1ba29edf7c9461ca140735ae3609839: Bootstrap replayed 1/1 log segments. Stats: ops{read=13 overwritten=0 applied=13 ignored=0} inserts{seen=350 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250814 01:54:27.100155 4644 tablet_bootstrap.cc:492] T ea532542ffb34d05bafdc9cdf0dbf89a P e1ba29edf7c9461ca140735ae3609839: Bootstrap complete.
I20250814 01:54:27.101352 4644 ts_tablet_manager.cc:1397] T ea532542ffb34d05bafdc9cdf0dbf89a P e1ba29edf7c9461ca140735ae3609839: Time spent bootstrapping tablet: real 0.170s user 0.136s sys 0.032s
I20250814 01:54:27.104130 4603 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.130:34695
I20250814 01:54:27.104249 4755 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.130:34695 every 8 connection(s)
I20250814 01:54:27.106611 4603 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250814 01:54:27.113355 426 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu as pid 4603
I20250814 01:54:27.115190 426 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
/tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.131:40049
--local_ip_for_outbound_sockets=127.0.106.131
--tserver_master_addrs=127.0.106.190:41981
--webserver_port=43431
--webserver_interface=127.0.106.131
--builtin_ntp_servers=127.0.106.148:35399
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250814 01:54:27.119992 4644 raft_consensus.cc:357] T ea532542ffb34d05bafdc9cdf0dbf89a P e1ba29edf7c9461ca140735ae3609839 [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } } peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } attrs { promote: false } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } }
I20250814 01:54:27.122973 4644 raft_consensus.cc:738] T ea532542ffb34d05bafdc9cdf0dbf89a P e1ba29edf7c9461ca140735ae3609839 [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: e1ba29edf7c9461ca140735ae3609839, State: Initialized, Role: FOLLOWER
I20250814 01:54:27.123979 4644 consensus_queue.cc:260] T ea532542ffb34d05bafdc9cdf0dbf89a P e1ba29edf7c9461ca140735ae3609839 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 13, Last appended: 2.13, Last appended by leader: 13, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } } peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } attrs { promote: false } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } }
I20250814 01:54:27.128728 4644 ts_tablet_manager.cc:1428] T ea532542ffb34d05bafdc9cdf0dbf89a P e1ba29edf7c9461ca140735ae3609839: Time spent starting tablet: real 0.027s user 0.026s sys 0.000s
I20250814 01:54:27.129474 4644 tablet_bootstrap.cc:492] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839: Bootstrap starting.
I20250814 01:54:27.135893 4756 heartbeater.cc:344] Connected to a master server at 127.0.106.190:41981
I20250814 01:54:27.136387 4756 heartbeater.cc:461] Registering TS with master...
I20250814 01:54:27.137571 4756 heartbeater.cc:507] Master 127.0.106.190:41981 requested a full tablet report, sending...
I20250814 01:54:27.141942 4414 ts_manager.cc:194] Registered new tserver with Master: e1ba29edf7c9461ca140735ae3609839 (127.0.106.130:34695)
I20250814 01:54:27.146081 4414 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.130:41291
I20250814 01:54:27.154486 4756 heartbeater.cc:499] Master 127.0.106.190:41981 was elected leader, sending a full tablet report...
I20250814 01:54:27.284134 4644 tablet_bootstrap.cc:492] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839: Bootstrap replayed 1/1 log segments. Stats: ops{read=13 overwritten=0 applied=13 ignored=0} inserts{seen=350 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250814 01:54:27.284780 4644 tablet_bootstrap.cc:492] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839: Bootstrap complete.
I20250814 01:54:27.285737 4644 ts_tablet_manager.cc:1397] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839: Time spent bootstrapping tablet: real 0.156s user 0.111s sys 0.013s
I20250814 01:54:27.287240 4644 raft_consensus.cc:357] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } } peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } attrs { promote: false } }
I20250814 01:54:27.287664 4644 raft_consensus.cc:738] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: e1ba29edf7c9461ca140735ae3609839, State: Initialized, Role: FOLLOWER
I20250814 01:54:27.288095 4644 consensus_queue.cc:260] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 13, Last appended: 2.13, Last appended by leader: 13, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } } peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } attrs { promote: false } }
I20250814 01:54:27.289567 4644 ts_tablet_manager.cc:1428] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839: Time spent starting tablet: real 0.004s user 0.000s sys 0.000s
W20250814 01:54:27.444542 4761 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:54:27.445019 4761 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250814 01:54:27.445492 4761 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:54:27.476496 4761 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250814 01:54:27.477317 4761 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.131
I20250814 01:54:27.511549 4761 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:35399
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.131:40049
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.0.106.131
--webserver_port=43431
--tserver_master_addrs=127.0.106.190:41981
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.131
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 6307ab29b78ebc064971e01ca1da31590298dcda
build type FASTDEBUG
built by None at 14 Aug 2025 01:43:21 UTC on 5fd53c4cbb9d
build id 7557
TSAN enabled
I20250814 01:54:27.512962 4761 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250814 01:54:27.514524 4761 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250814 01:54:27.526193 4768 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250814 01:54:27.743818 4774 raft_consensus.cc:491] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b [term 2 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250814 01:54:27.744577 4774 raft_consensus.cc:513] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b [term 2 FOLLOWER]: Starting pre-election with config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } } peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } attrs { promote: false } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } }
I20250814 01:54:27.775009 4774 leader_election.cc:290] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b [CANDIDATE]: Term 3 pre-election: Requested pre-vote from peers 136ed1bf01a24d0db8541678c6fed252 (127.0.106.131:40049), e1ba29edf7c9461ca140735ae3609839 (127.0.106.130:34695)
W20250814 01:54:27.790658 4490 leader_election.cc:336] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer 136ed1bf01a24d0db8541678c6fed252 (127.0.106.131:40049): Network error: Client connection negotiation failed: client connection to 127.0.106.131:40049: connect: Connection refused (error 111)
I20250814 01:54:27.804826 4711 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "ea532542ffb34d05bafdc9cdf0dbf89a" candidate_uuid: "9133c463c51a41d1bcda681cad9e6d9b" candidate_term: 3 candidate_status { last_received { term: 2 index: 13 } } ignore_live_leader: false dest_uuid: "e1ba29edf7c9461ca140735ae3609839" is_pre_election: true
I20250814 01:54:27.805871 4711 raft_consensus.cc:2466] T ea532542ffb34d05bafdc9cdf0dbf89a P e1ba29edf7c9461ca140735ae3609839 [term 2 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 9133c463c51a41d1bcda681cad9e6d9b in term 2.
I20250814 01:54:27.807972 4491 leader_election.cc:304] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b [CANDIDATE]: Term 3 pre-election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 9133c463c51a41d1bcda681cad9e6d9b, e1ba29edf7c9461ca140735ae3609839; no voters: 136ed1bf01a24d0db8541678c6fed252
I20250814 01:54:27.809293 4774 raft_consensus.cc:2802] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b [term 2 FOLLOWER]: Leader pre-election won for term 3
I20250814 01:54:27.809672 4774 raft_consensus.cc:491] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b [term 2 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250814 01:54:27.810022 4774 raft_consensus.cc:3058] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b [term 2 FOLLOWER]: Advancing to term 3
I20250814 01:54:27.818950 4774 raft_consensus.cc:513] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b [term 3 FOLLOWER]: Starting leader election with config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } } peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } attrs { promote: false } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } }
W20250814 01:54:27.827915 4490 leader_election.cc:336] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b [CANDIDATE]: Term 3 election: RPC error from VoteRequest() call to peer 136ed1bf01a24d0db8541678c6fed252 (127.0.106.131:40049): Network error: Client connection negotiation failed: client connection to 127.0.106.131:40049: connect: Connection refused (error 111)
I20250814 01:54:27.830407 4711 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "ea532542ffb34d05bafdc9cdf0dbf89a" candidate_uuid: "9133c463c51a41d1bcda681cad9e6d9b" candidate_term: 3 candidate_status { last_received { term: 2 index: 13 } } ignore_live_leader: false dest_uuid: "e1ba29edf7c9461ca140735ae3609839"
I20250814 01:54:27.831041 4711 raft_consensus.cc:3058] T ea532542ffb34d05bafdc9cdf0dbf89a P e1ba29edf7c9461ca140735ae3609839 [term 2 FOLLOWER]: Advancing to term 3
I20250814 01:54:27.840166 4774 leader_election.cc:290] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b [CANDIDATE]: Term 3 election: Requested vote from peers 136ed1bf01a24d0db8541678c6fed252 (127.0.106.131:40049), e1ba29edf7c9461ca140735ae3609839 (127.0.106.130:34695)
I20250814 01:54:27.842245 4711 raft_consensus.cc:2466] T ea532542ffb34d05bafdc9cdf0dbf89a P e1ba29edf7c9461ca140735ae3609839 [term 3 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 9133c463c51a41d1bcda681cad9e6d9b in term 3.
I20250814 01:54:27.843449 4491 leader_election.cc:304] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b [CANDIDATE]: Term 3 election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 9133c463c51a41d1bcda681cad9e6d9b, e1ba29edf7c9461ca140735ae3609839; no voters: 136ed1bf01a24d0db8541678c6fed252
I20250814 01:54:27.845594 4774 raft_consensus.cc:2802] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b [term 3 FOLLOWER]: Leader election won for term 3
I20250814 01:54:27.852365 4774 raft_consensus.cc:695] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b [term 3 LEADER]: Becoming Leader. State: Replica: 9133c463c51a41d1bcda681cad9e6d9b, State: Running, Role: LEADER
I20250814 01:54:27.853292 4774 consensus_queue.cc:237] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 13, Committed index: 13, Last appended: 2.13, Last appended by leader: 13, Current term: 3, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } } peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } attrs { promote: false } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } }
I20250814 01:54:27.868146 4414 catalog_manager.cc:5582] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b reported cstate change: term changed from 2 to 3, leader changed from <none> to 9133c463c51a41d1bcda681cad9e6d9b (127.0.106.129). New cstate: current_term: 3 leader_uuid: "9133c463c51a41d1bcda681cad9e6d9b" committed_config { opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } attrs { promote: false } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } health_report { overall_health: HEALTHY } } }
W20250814 01:54:28.211382 4752 debug-util.cc:398] Leaking SignalData structure 0x7b08000b5060 after lost signal to thread 4618
I20250814 01:54:28.252629 4711 raft_consensus.cc:1273] T ea532542ffb34d05bafdc9cdf0dbf89a P e1ba29edf7c9461ca140735ae3609839 [term 3 FOLLOWER]: Refusing update from remote peer 9133c463c51a41d1bcda681cad9e6d9b: Log matching property violated. Preceding OpId in replica: term: 2 index: 13. Preceding OpId from leader: term: 3 index: 14. (index mismatch)
I20250814 01:54:28.254567 4774 consensus_queue.cc:1035] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b [LEADER]: Connected to new peer: Peer: permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 14, Last known committed idx: 13, Time since last communication: 0.000s
W20250814 01:54:28.328583 4490 consensus_peers.cc:489] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b -> Peer 136ed1bf01a24d0db8541678c6fed252 (127.0.106.131:40049): Couldn't send request to peer 136ed1bf01a24d0db8541678c6fed252. Status: Network error: Client connection negotiation failed: client connection to 127.0.106.131:40049: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20250814 01:54:28.393121 4557 consensus_queue.cc:237] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 14, Committed index: 14, Last appended: 3.14, Last appended by leader: 13, Current term: 3, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 15 OBSOLETE_local: true peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } attrs { promote: false } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } }
I20250814 01:54:28.399823 4711 raft_consensus.cc:1273] T ea532542ffb34d05bafdc9cdf0dbf89a P e1ba29edf7c9461ca140735ae3609839 [term 3 FOLLOWER]: Refusing update from remote peer 9133c463c51a41d1bcda681cad9e6d9b: Log matching property violated. Preceding OpId in replica: term: 3 index: 14. Preceding OpId from leader: term: 3 index: 15. (index mismatch)
I20250814 01:54:28.401630 4779 consensus_queue.cc:1035] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b [LEADER]: Connected to new peer: Peer: permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 15, Last known committed idx: 14, Time since last communication: 0.001s
I20250814 01:54:28.410104 4779 raft_consensus.cc:2953] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b [term 3 LEADER]: Committing config change with OpId 3.15: config changed from index 13 to 15, VOTER 136ed1bf01a24d0db8541678c6fed252 (127.0.106.131) evicted. New config: { opid_index: 15 OBSOLETE_local: true peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } attrs { promote: false } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } } }
I20250814 01:54:28.412637 4711 raft_consensus.cc:2953] T ea532542ffb34d05bafdc9cdf0dbf89a P e1ba29edf7c9461ca140735ae3609839 [term 3 FOLLOWER]: Committing config change with OpId 3.15: config changed from index 13 to 15, VOTER 136ed1bf01a24d0db8541678c6fed252 (127.0.106.131) evicted. New config: { opid_index: 15 OBSOLETE_local: true peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } attrs { promote: false } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } } }
I20250814 01:54:28.438508 4402 catalog_manager.cc:5095] ChangeConfig:REMOVE_PEER RPC for tablet ea532542ffb34d05bafdc9cdf0dbf89a with cas_config_opid_index 13: ChangeConfig:REMOVE_PEER succeeded (attempt 1)
I20250814 01:54:28.445199 4414 catalog_manager.cc:5582] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b reported cstate change: config changed from index 13 to 15, VOTER 136ed1bf01a24d0db8541678c6fed252 (127.0.106.131) evicted. New cstate: current_term: 3 leader_uuid: "9133c463c51a41d1bcda681cad9e6d9b" committed_config { opid_index: 15 OBSOLETE_local: true peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } attrs { promote: false } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } health_report { overall_health: HEALTHY } } }
W20250814 01:54:28.472033 4414 catalog_manager.cc:5774] Failed to send DeleteTablet RPC for tablet ea532542ffb34d05bafdc9cdf0dbf89a on TS 136ed1bf01a24d0db8541678c6fed252: Not found: failed to reset TS proxy: Could not find TS for UUID 136ed1bf01a24d0db8541678c6fed252
I20250814 01:54:28.479310 4557 consensus_queue.cc:237] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 15, Committed index: 15, Last appended: 3.15, Last appended by leader: 13, Current term: 3, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: 16 OBSOLETE_local: true peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } }
I20250814 01:54:28.481950 4779 raft_consensus.cc:2953] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b [term 3 LEADER]: Committing config change with OpId 3.16: config changed from index 15 to 16, VOTER e1ba29edf7c9461ca140735ae3609839 (127.0.106.130) evicted. New config: { opid_index: 16 OBSOLETE_local: true peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } } }
I20250814 01:54:28.492600 4402 catalog_manager.cc:5095] ChangeConfig:REMOVE_PEER RPC for tablet ea532542ffb34d05bafdc9cdf0dbf89a with cas_config_opid_index 15: ChangeConfig:REMOVE_PEER succeeded (attempt 1)
I20250814 01:54:28.496176 4415 catalog_manager.cc:5582] T ea532542ffb34d05bafdc9cdf0dbf89a P 9133c463c51a41d1bcda681cad9e6d9b reported cstate change: config changed from index 15 to 16, VOTER e1ba29edf7c9461ca140735ae3609839 (127.0.106.130) evicted. New cstate: current_term: 3 leader_uuid: "9133c463c51a41d1bcda681cad9e6d9b" committed_config { opid_index: 16 OBSOLETE_local: true peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } health_report { overall_health: HEALTHY } } }
W20250814 01:54:28.528735 4400 catalog_manager.cc:4726] Async tablet task DeleteTablet RPC for tablet ea532542ffb34d05bafdc9cdf0dbf89a on TS 136ed1bf01a24d0db8541678c6fed252 failed: Not found: failed to reset TS proxy: Could not find TS for UUID 136ed1bf01a24d0db8541678c6fed252
I20250814 01:54:28.548141 4691 tablet_service.cc:1515] Processing DeleteTablet for tablet ea532542ffb34d05bafdc9cdf0dbf89a with delete_type TABLET_DATA_TOMBSTONED (TS e1ba29edf7c9461ca140735ae3609839 not found in new config with opid_index 16) from {username='slave'} at 127.0.0.1:58776
I20250814 01:54:28.560067 4795 tablet_replica.cc:331] stopping tablet replica
I20250814 01:54:28.561040 4795 raft_consensus.cc:2241] T ea532542ffb34d05bafdc9cdf0dbf89a P e1ba29edf7c9461ca140735ae3609839 [term 3 FOLLOWER]: Raft consensus shutting down.
I20250814 01:54:28.561837 4795 raft_consensus.cc:2270] T ea532542ffb34d05bafdc9cdf0dbf89a P e1ba29edf7c9461ca140735ae3609839 [term 3 FOLLOWER]: Raft consensus is shut down!
I20250814 01:54:28.581012 4795 ts_tablet_manager.cc:1905] T ea532542ffb34d05bafdc9cdf0dbf89a P e1ba29edf7c9461ca140735ae3609839: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20250814 01:54:28.616993 4795 ts_tablet_manager.cc:1918] T ea532542ffb34d05bafdc9cdf0dbf89a P e1ba29edf7c9461ca140735ae3609839: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 3.15
I20250814 01:54:28.625114 4795 log.cc:1199] T ea532542ffb34d05bafdc9cdf0dbf89a P e1ba29edf7c9461ca140735ae3609839: Deleting WAL directory at /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal/wals/ea532542ffb34d05bafdc9cdf0dbf89a
I20250814 01:54:28.627136 4402 catalog_manager.cc:4928] TS e1ba29edf7c9461ca140735ae3609839 (127.0.106.130:34695): tablet ea532542ffb34d05bafdc9cdf0dbf89a (table TestTable [id=e3d5dc91a30e4feeb8bc61c7491a429d]) successfully deleted
W20250814 01:54:28.928773 4767 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 4761
W20250814 01:54:29.123818 4770 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1596 milliseconds
W20250814 01:54:29.124718 4771 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:54:27.526813 4769 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:54:29.123278 4761 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.597s user 0.644s sys 0.937s
W20250814 01:54:29.125738 4761 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.600s user 0.644s sys 0.937s
I20250814 01:54:29.125983 4761 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250814 01:54:29.127687 4796 raft_consensus.cc:491] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [term 2 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250814 01:54:29.128136 4796 raft_consensus.cc:513] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [term 2 FOLLOWER]: Starting pre-election with config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } } peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } attrs { promote: false } }
I20250814 01:54:29.130266 4761 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250814 01:54:29.136157 4796 leader_election.cc:290] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [CANDIDATE]: Term 3 pre-election: Requested pre-vote from peers 9133c463c51a41d1bcda681cad9e6d9b (127.0.106.129:33085), 136ed1bf01a24d0db8541678c6fed252 (127.0.106.131:40049)
I20250814 01:54:29.158390 4761 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
W20250814 01:54:29.160866 4640 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.0.106.131:40049: connect: Connection refused (error 111)
I20250814 01:54:29.163749 4761 hybrid_clock.cc:648] HybridClock initialized: now 1755136469163687 us; error 50 us; skew 500 ppm
I20250814 01:54:29.165096 4761 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
W20250814 01:54:29.165498 4640 leader_election.cc:336] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer 136ed1bf01a24d0db8541678c6fed252 (127.0.106.131:40049): Network error: Client connection negotiation failed: client connection to 127.0.106.131:40049: connect: Connection refused (error 111)
I20250814 01:54:29.171897 4557 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "b3acc7639edd406eb75d2d8662b9fc63" candidate_uuid: "e1ba29edf7c9461ca140735ae3609839" candidate_term: 3 candidate_status { last_received { term: 2 index: 13 } } ignore_live_leader: false dest_uuid: "9133c463c51a41d1bcda681cad9e6d9b" is_pre_election: true
I20250814 01:54:29.172647 4557 raft_consensus.cc:2466] T b3acc7639edd406eb75d2d8662b9fc63 P 9133c463c51a41d1bcda681cad9e6d9b [term 2 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate e1ba29edf7c9461ca140735ae3609839 in term 2.
I20250814 01:54:29.173695 4802 raft_consensus.cc:491] T b3acc7639edd406eb75d2d8662b9fc63 P 9133c463c51a41d1bcda681cad9e6d9b [term 2 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250814 01:54:29.174006 4641 leader_election.cc:304] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [CANDIDATE]: Term 3 pre-election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 9133c463c51a41d1bcda681cad9e6d9b, e1ba29edf7c9461ca140735ae3609839; no voters: 136ed1bf01a24d0db8541678c6fed252
I20250814 01:54:29.174216 4802 raft_consensus.cc:513] T b3acc7639edd406eb75d2d8662b9fc63 P 9133c463c51a41d1bcda681cad9e6d9b [term 2 FOLLOWER]: Starting pre-election with config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } } peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } attrs { promote: false } }
I20250814 01:54:29.175109 4796 raft_consensus.cc:2802] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [term 2 FOLLOWER]: Leader pre-election won for term 3
I20250814 01:54:29.175422 4796 raft_consensus.cc:491] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [term 2 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250814 01:54:29.175724 4796 raft_consensus.cc:3058] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [term 2 FOLLOWER]: Advancing to term 3
I20250814 01:54:29.178756 4802 leader_election.cc:290] T b3acc7639edd406eb75d2d8662b9fc63 P 9133c463c51a41d1bcda681cad9e6d9b [CANDIDATE]: Term 3 pre-election: Requested pre-vote from peers e1ba29edf7c9461ca140735ae3609839 (127.0.106.130:34695), 136ed1bf01a24d0db8541678c6fed252 (127.0.106.131:40049)
I20250814 01:54:29.179931 4761 webserver.cc:480] Webserver started at http://127.0.106.131:43431/ using document root <none> and password file <none>
I20250814 01:54:29.180176 4711 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "b3acc7639edd406eb75d2d8662b9fc63" candidate_uuid: "9133c463c51a41d1bcda681cad9e6d9b" candidate_term: 3 candidate_status { last_received { term: 2 index: 13 } } ignore_live_leader: false dest_uuid: "e1ba29edf7c9461ca140735ae3609839" is_pre_election: true
I20250814 01:54:29.181147 4761 fs_manager.cc:362] Metadata directory not provided
I20250814 01:54:29.181428 4761 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250814 01:54:29.183180 4796 raft_consensus.cc:513] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [term 3 FOLLOWER]: Starting leader election with config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } } peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } attrs { promote: false } }
I20250814 01:54:29.184398 4711 raft_consensus.cc:2391] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [term 3 FOLLOWER]: Leader pre-election vote request: Denying vote to candidate 9133c463c51a41d1bcda681cad9e6d9b in current term 3: Already voted for candidate e1ba29edf7c9461ca140735ae3609839 in this term.
W20250814 01:54:29.189548 4640 leader_election.cc:336] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [CANDIDATE]: Term 3 election: RPC error from VoteRequest() call to peer 136ed1bf01a24d0db8541678c6fed252 (127.0.106.131:40049): Network error: Client connection negotiation failed: client connection to 127.0.106.131:40049: connect: Connection refused (error 111)
I20250814 01:54:29.190258 4796 leader_election.cc:290] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [CANDIDATE]: Term 3 election: Requested vote from peers 9133c463c51a41d1bcda681cad9e6d9b (127.0.106.129:33085), 136ed1bf01a24d0db8541678c6fed252 (127.0.106.131:40049)
I20250814 01:54:29.192144 4557 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "b3acc7639edd406eb75d2d8662b9fc63" candidate_uuid: "e1ba29edf7c9461ca140735ae3609839" candidate_term: 3 candidate_status { last_received { term: 2 index: 13 } } ignore_live_leader: false dest_uuid: "9133c463c51a41d1bcda681cad9e6d9b"
I20250814 01:54:29.192710 4557 raft_consensus.cc:3058] T b3acc7639edd406eb75d2d8662b9fc63 P 9133c463c51a41d1bcda681cad9e6d9b [term 2 FOLLOWER]: Advancing to term 3
I20250814 01:54:29.199060 4557 raft_consensus.cc:2466] T b3acc7639edd406eb75d2d8662b9fc63 P 9133c463c51a41d1bcda681cad9e6d9b [term 3 FOLLOWER]: Leader election vote request: Granting yes vote for candidate e1ba29edf7c9461ca140735ae3609839 in term 3.
I20250814 01:54:29.201367 4761 fs_manager.cc:714] Time spent opening directory manager: real 0.016s user 0.005s sys 0.009s
I20250814 01:54:29.201396 4641 leader_election.cc:304] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [CANDIDATE]: Term 3 election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 9133c463c51a41d1bcda681cad9e6d9b, e1ba29edf7c9461ca140735ae3609839; no voters: 136ed1bf01a24d0db8541678c6fed252
I20250814 01:54:29.202410 4796 raft_consensus.cc:2802] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [term 3 FOLLOWER]: Leader election won for term 3
W20250814 01:54:29.203135 4490 leader_election.cc:336] T b3acc7639edd406eb75d2d8662b9fc63 P 9133c463c51a41d1bcda681cad9e6d9b [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer 136ed1bf01a24d0db8541678c6fed252 (127.0.106.131:40049): Network error: Client connection negotiation failed: client connection to 127.0.106.131:40049: connect: Connection refused (error 111)
I20250814 01:54:29.203572 4490 leader_election.cc:304] T b3acc7639edd406eb75d2d8662b9fc63 P 9133c463c51a41d1bcda681cad9e6d9b [CANDIDATE]: Term 3 pre-election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: 9133c463c51a41d1bcda681cad9e6d9b; no voters: 136ed1bf01a24d0db8541678c6fed252, e1ba29edf7c9461ca140735ae3609839
I20250814 01:54:29.204398 4802 raft_consensus.cc:2747] T b3acc7639edd406eb75d2d8662b9fc63 P 9133c463c51a41d1bcda681cad9e6d9b [term 3 FOLLOWER]: Leader pre-election lost for term 3. Reason: could not achieve majority
I20250814 01:54:29.205778 4796 raft_consensus.cc:695] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [term 3 LEADER]: Becoming Leader. State: Replica: e1ba29edf7c9461ca140735ae3609839, State: Running, Role: LEADER
I20250814 01:54:29.206702 4796 consensus_queue.cc:237] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 13, Committed index: 13, Last appended: 2.13, Last appended by leader: 13, Current term: 3, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } } peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } attrs { promote: false } }
I20250814 01:54:29.207648 4809 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250814 01:54:29.208796 4761 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.002s sys 0.003s
I20250814 01:54:29.209174 4761 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "136ed1bf01a24d0db8541678c6fed252"
format_stamp: "Formatted at 2025-08-14 01:54:00 on dist-test-slave-30wj"
I20250814 01:54:29.211725 4761 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250814 01:54:29.216171 4415 catalog_manager.cc:5582] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 reported cstate change: term changed from 2 to 3. New cstate: current_term: 3 leader_uuid: "e1ba29edf7c9461ca140735ae3609839" committed_config { opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } attrs { promote: false } health_report { overall_health: UNKNOWN } } }
I20250814 01:54:29.273000 4761 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250814 01:54:29.274526 4761 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250814 01:54:29.274952 4761 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250814 01:54:29.277755 4761 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250814 01:54:29.283792 4820 ts_tablet_manager.cc:542] Loading tablet metadata (0/2 complete)
I20250814 01:54:29.294857 4761 ts_tablet_manager.cc:579] Loaded tablet metadata (2 total tablets, 2 live tablets)
I20250814 01:54:29.295076 4761 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.013s user 0.002s sys 0.000s
I20250814 01:54:29.295357 4761 ts_tablet_manager.cc:594] Registering tablets (0/2 complete)
I20250814 01:54:29.300428 4820 tablet_bootstrap.cc:492] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252: Bootstrap starting.
I20250814 01:54:29.304734 4761 ts_tablet_manager.cc:610] Registered 2 tablets
I20250814 01:54:29.304991 4761 ts_tablet_manager.cc:589] Time spent register tablets: real 0.010s user 0.007s sys 0.000s
I20250814 01:54:29.372412 4820 log.cc:826] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252: Log is configured to *not* fsync() on all Append() calls
I20250814 01:54:29.491420 4761 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.131:40049
I20250814 01:54:29.491580 4927 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.131:40049 every 8 connection(s)
I20250814 01:54:29.494099 4761 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250814 01:54:29.501134 426 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu as pid 4761
I20250814 01:54:29.521917 4820 tablet_bootstrap.cc:492] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252: Bootstrap replayed 1/1 log segments. Stats: ops{read=13 overwritten=0 applied=13 ignored=0} inserts{seen=350 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250814 01:54:29.522970 4820 tablet_bootstrap.cc:492] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252: Bootstrap complete.
I20250814 01:54:29.524554 4820 ts_tablet_manager.cc:1397] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252: Time spent bootstrapping tablet: real 0.225s user 0.166s sys 0.044s
I20250814 01:54:29.533972 4928 heartbeater.cc:344] Connected to a master server at 127.0.106.190:41981
I20250814 01:54:29.534444 4928 heartbeater.cc:461] Registering TS with master...
I20250814 01:54:29.535573 4928 heartbeater.cc:507] Master 127.0.106.190:41981 requested a full tablet report, sending...
I20250814 01:54:29.539619 4415 ts_manager.cc:194] Registered new tserver with Master: 136ed1bf01a24d0db8541678c6fed252 (127.0.106.131:40049)
I20250814 01:54:29.538502 4820 raft_consensus.cc:357] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } } peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } attrs { promote: false } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } }
I20250814 01:54:29.541390 4820 raft_consensus.cc:738] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: 136ed1bf01a24d0db8541678c6fed252, State: Initialized, Role: FOLLOWER
I20250814 01:54:29.542273 4820 consensus_queue.cc:260] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 13, Last appended: 2.13, Last appended by leader: 13, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } } peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } attrs { promote: false } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } }
I20250814 01:54:29.544481 4415 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.131:47827
I20250814 01:54:29.546701 426 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250814 01:54:29.546974 4820 ts_tablet_manager.cc:1428] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252: Time spent starting tablet: real 0.022s user 0.021s sys 0.000s
I20250814 01:54:29.547865 4820 tablet_bootstrap.cc:492] T b3acc7639edd406eb75d2d8662b9fc63 P 136ed1bf01a24d0db8541678c6fed252: Bootstrap starting.
I20250814 01:54:29.551748 4928 heartbeater.cc:499] Master 127.0.106.190:41981 was elected leader, sending a full tablet report...
I20250814 01:54:29.564361 426 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
W20250814 01:54:29.569200 426 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet ea532542ffb34d05bafdc9cdf0dbf89a: tablet_id: "ea532542ffb34d05bafdc9cdf0dbf89a" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
I20250814 01:54:29.660701 4820 tablet_bootstrap.cc:492] T b3acc7639edd406eb75d2d8662b9fc63 P 136ed1bf01a24d0db8541678c6fed252: Bootstrap replayed 1/1 log segments. Stats: ops{read=13 overwritten=0 applied=13 ignored=0} inserts{seen=350 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250814 01:54:29.661343 4820 tablet_bootstrap.cc:492] T b3acc7639edd406eb75d2d8662b9fc63 P 136ed1bf01a24d0db8541678c6fed252: Bootstrap complete.
I20250814 01:54:29.662330 4820 ts_tablet_manager.cc:1397] T b3acc7639edd406eb75d2d8662b9fc63 P 136ed1bf01a24d0db8541678c6fed252: Time spent bootstrapping tablet: real 0.115s user 0.098s sys 0.015s
I20250814 01:54:29.663877 4820 raft_consensus.cc:357] T b3acc7639edd406eb75d2d8662b9fc63 P 136ed1bf01a24d0db8541678c6fed252 [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } } peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } attrs { promote: false } }
I20250814 01:54:29.664352 4820 raft_consensus.cc:738] T b3acc7639edd406eb75d2d8662b9fc63 P 136ed1bf01a24d0db8541678c6fed252 [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: 136ed1bf01a24d0db8541678c6fed252, State: Initialized, Role: FOLLOWER
I20250814 01:54:29.664875 4820 consensus_queue.cc:260] T b3acc7639edd406eb75d2d8662b9fc63 P 136ed1bf01a24d0db8541678c6fed252 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 13, Last appended: 2.13, Last appended by leader: 13, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "e1ba29edf7c9461ca140735ae3609839" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34695 } } peers { permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false } } peers { permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } attrs { promote: false } }
I20250814 01:54:29.666254 4820 ts_tablet_manager.cc:1428] T b3acc7639edd406eb75d2d8662b9fc63 P 136ed1bf01a24d0db8541678c6fed252: Time spent starting tablet: real 0.004s user 0.004s sys 0.000s
I20250814 01:54:29.709817 4863 tablet_service.cc:1515] Processing DeleteTablet for tablet ea532542ffb34d05bafdc9cdf0dbf89a with delete_type TABLET_DATA_TOMBSTONED (TS 136ed1bf01a24d0db8541678c6fed252 not found in new config with opid_index 15) from {username='slave'} at 127.0.0.1:56666
I20250814 01:54:29.714540 4939 tablet_replica.cc:331] stopping tablet replica
I20250814 01:54:29.715212 4939 raft_consensus.cc:2241] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 [term 2 FOLLOWER]: Raft consensus shutting down.
I20250814 01:54:29.715719 4939 raft_consensus.cc:2270] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252 [term 2 FOLLOWER]: Raft consensus is shut down!
I20250814 01:54:29.718160 4939 ts_tablet_manager.cc:1905] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20250814 01:54:29.726473 4557 raft_consensus.cc:1273] T b3acc7639edd406eb75d2d8662b9fc63 P 9133c463c51a41d1bcda681cad9e6d9b [term 3 FOLLOWER]: Refusing update from remote peer e1ba29edf7c9461ca140735ae3609839: Log matching property violated. Preceding OpId in replica: term: 2 index: 13. Preceding OpId from leader: term: 3 index: 14. (index mismatch)
I20250814 01:54:29.727926 4940 consensus_queue.cc:1035] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [LEADER]: Connected to new peer: Peer: permanent_uuid: "9133c463c51a41d1bcda681cad9e6d9b" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33085 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 14, Last known committed idx: 13, Time since last communication: 0.000s
I20250814 01:54:29.731279 4939 ts_tablet_manager.cc:1918] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 2.13
I20250814 01:54:29.731707 4939 log.cc:1199] T ea532542ffb34d05bafdc9cdf0dbf89a P 136ed1bf01a24d0db8541678c6fed252: Deleting WAL directory at /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal/wals/ea532542ffb34d05bafdc9cdf0dbf89a
I20250814 01:54:29.733465 4401 catalog_manager.cc:4928] TS 136ed1bf01a24d0db8541678c6fed252 (127.0.106.131:40049): tablet ea532542ffb34d05bafdc9cdf0dbf89a (table TestTable [id=e3d5dc91a30e4feeb8bc61c7491a429d]) successfully deleted
I20250814 01:54:29.752228 4883 raft_consensus.cc:3058] T b3acc7639edd406eb75d2d8662b9fc63 P 136ed1bf01a24d0db8541678c6fed252 [term 2 FOLLOWER]: Advancing to term 3
I20250814 01:54:29.756767 4883 raft_consensus.cc:1273] T b3acc7639edd406eb75d2d8662b9fc63 P 136ed1bf01a24d0db8541678c6fed252 [term 3 FOLLOWER]: Refusing update from remote peer e1ba29edf7c9461ca140735ae3609839: Log matching property violated. Preceding OpId in replica: term: 2 index: 13. Preceding OpId from leader: term: 3 index: 14. (index mismatch)
I20250814 01:54:29.758133 4943 consensus_queue.cc:1035] T b3acc7639edd406eb75d2d8662b9fc63 P e1ba29edf7c9461ca140735ae3609839 [LEADER]: Connected to new peer: Peer: permanent_uuid: "136ed1bf01a24d0db8541678c6fed252" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 40049 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 14, Last known committed idx: 13, Time since last communication: 0.000s
W20250814 01:54:30.573060 426 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet ea532542ffb34d05bafdc9cdf0dbf89a: tablet_id: "ea532542ffb34d05bafdc9cdf0dbf89a" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250814 01:54:31.576511 426 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet ea532542ffb34d05bafdc9cdf0dbf89a: tablet_id: "ea532542ffb34d05bafdc9cdf0dbf89a" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250814 01:54:32.579989 426 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet ea532542ffb34d05bafdc9cdf0dbf89a: tablet_id: "ea532542ffb34d05bafdc9cdf0dbf89a" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250814 01:54:33.583312 426 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet ea532542ffb34d05bafdc9cdf0dbf89a: tablet_id: "ea532542ffb34d05bafdc9cdf0dbf89a" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250814 01:54:34.586591 426 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet ea532542ffb34d05bafdc9cdf0dbf89a: tablet_id: "ea532542ffb34d05bafdc9cdf0dbf89a" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250814 01:54:35.590018 426 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet ea532542ffb34d05bafdc9cdf0dbf89a: tablet_id: "ea532542ffb34d05bafdc9cdf0dbf89a" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250814 01:54:36.593425 426 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet ea532542ffb34d05bafdc9cdf0dbf89a: tablet_id: "ea532542ffb34d05bafdc9cdf0dbf89a" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250814 01:54:37.596863 426 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet ea532542ffb34d05bafdc9cdf0dbf89a: tablet_id: "ea532542ffb34d05bafdc9cdf0dbf89a" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250814 01:54:38.600159 426 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet ea532542ffb34d05bafdc9cdf0dbf89a: tablet_id: "ea532542ffb34d05bafdc9cdf0dbf89a" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250814 01:54:39.604305 426 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet ea532542ffb34d05bafdc9cdf0dbf89a: tablet_id: "ea532542ffb34d05bafdc9cdf0dbf89a" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250814 01:54:40.607947 426 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet ea532542ffb34d05bafdc9cdf0dbf89a: tablet_id: "ea532542ffb34d05bafdc9cdf0dbf89a" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250814 01:54:41.611083 426 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet ea532542ffb34d05bafdc9cdf0dbf89a: tablet_id: "ea532542ffb34d05bafdc9cdf0dbf89a" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250814 01:54:42.614804 426 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet ea532542ffb34d05bafdc9cdf0dbf89a: tablet_id: "ea532542ffb34d05bafdc9cdf0dbf89a" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250814 01:54:43.618134 426 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet ea532542ffb34d05bafdc9cdf0dbf89a: tablet_id: "ea532542ffb34d05bafdc9cdf0dbf89a" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250814 01:54:44.621492 426 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet ea532542ffb34d05bafdc9cdf0dbf89a: tablet_id: "ea532542ffb34d05bafdc9cdf0dbf89a" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250814 01:54:45.624644 426 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet ea532542ffb34d05bafdc9cdf0dbf89a: tablet_id: "ea532542ffb34d05bafdc9cdf0dbf89a" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250814 01:54:46.627878 426 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet ea532542ffb34d05bafdc9cdf0dbf89a: tablet_id: "ea532542ffb34d05bafdc9cdf0dbf89a" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250814 01:54:47.631052 426 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet ea532542ffb34d05bafdc9cdf0dbf89a: tablet_id: "ea532542ffb34d05bafdc9cdf0dbf89a" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250814 01:54:48.634312 426 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet ea532542ffb34d05bafdc9cdf0dbf89a: tablet_id: "ea532542ffb34d05bafdc9cdf0dbf89a" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
/home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/tools/kudu-admin-test.cc:3914: Failure
Failed
Bad status: Not found: not all replicas of tablets comprising table TestTable are registered yet
I20250814 01:54:49.639134 426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu with pid 4452
I20250814 01:54:49.661528 426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu with pid 4603
I20250814 01:54:49.685494 426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu with pid 4761
I20250814 01:54:49.708108 426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu with pid 4382
2025-08-14T01:54:49Z chronyd exiting
I20250814 01:54:49.754398 426 test_util.cc:183] -----------------------------------------------
I20250814 01:54:49.754623 426 test_util.cc:184] Had failures, leaving test files at /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1755136369252803-426-0
[ FAILED ] AdminCliTest.TestRebuildTables (57340 ms)
[----------] 5 tests from AdminCliTest (120433 ms total)
[----------] 1 test from EnableKudu1097AndDownTS/MoveTabletParamTest
[ RUN ] EnableKudu1097AndDownTS/MoveTabletParamTest.Test/4
I20250814 01:54:49.758606 426 test_util.cc:276] Using random seed: -1895751823
I20250814 01:54:49.762862 426 ts_itest-base.cc:115] Starting cluster with:
I20250814 01:54:49.763042 426 ts_itest-base.cc:116] --------------
I20250814 01:54:49.763192 426 ts_itest-base.cc:117] 5 tablet servers
I20250814 01:54:49.763324 426 ts_itest-base.cc:118] 3 replicas per TS
I20250814 01:54:49.763447 426 ts_itest-base.cc:119] --------------
2025-08-14T01:54:49Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-08-14T01:54:49Z Disabled control of system clock
I20250814 01:54:49.804476 426 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
/tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.106.190:37351
--webserver_interface=127.0.106.190
--webserver_port=0
--builtin_ntp_servers=127.0.106.148:45387
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.0.106.190:37351
--raft_prepare_replacement_before_eviction=true with env {}
W20250814 01:54:50.105834 4971 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:54:50.106374 4971 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250814 01:54:50.106778 4971 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:54:50.137303 4971 flags.cc:425] Enabled experimental flag: --raft_prepare_replacement_before_eviction=true
W20250814 01:54:50.137645 4971 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250814 01:54:50.137923 4971 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250814 01:54:50.138131 4971 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250814 01:54:50.138325 4971 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250814 01:54:50.173100 4971 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:45387
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.0.106.190:37351
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.106.190:37351
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.0.106.190
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision 6307ab29b78ebc064971e01ca1da31590298dcda
build type FASTDEBUG
built by None at 14 Aug 2025 01:43:21 UTC on 5fd53c4cbb9d
build id 7557
TSAN enabled
I20250814 01:54:50.174335 4971 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250814 01:54:50.175858 4971 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250814 01:54:50.186602 4977 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:54:51.589043 4976 debug-util.cc:398] Leaking SignalData structure 0x7b0800037cc0 after lost signal to thread 4971
W20250814 01:54:51.734263 4971 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.548s user 0.649s sys 0.899s
W20250814 01:54:51.734635 4971 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.548s user 0.649s sys 0.899s
W20250814 01:54:50.187036 4978 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:54:51.735987 4979 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1547 milliseconds
W20250814 01:54:51.736413 4980 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250814 01:54:51.736763 4971 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250814 01:54:51.740226 4971 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250814 01:54:51.742583 4971 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250814 01:54:51.743958 4971 hybrid_clock.cc:648] HybridClock initialized: now 1755136491743910 us; error 61 us; skew 500 ppm
I20250814 01:54:51.744729 4971 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250814 01:54:51.750617 4971 webserver.cc:480] Webserver started at http://127.0.106.190:36597/ using document root <none> and password file <none>
I20250814 01:54:51.751505 4971 fs_manager.cc:362] Metadata directory not provided
I20250814 01:54:51.751699 4971 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250814 01:54:51.752101 4971 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250814 01:54:51.756470 4971 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "7275c0531bb54e9b8019bb2d13b15531"
format_stamp: "Formatted at 2025-08-14 01:54:51 on dist-test-slave-30wj"
I20250814 01:54:51.757561 4971 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "7275c0531bb54e9b8019bb2d13b15531"
format_stamp: "Formatted at 2025-08-14 01:54:51 on dist-test-slave-30wj"
I20250814 01:54:51.764894 4971 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.005s sys 0.001s
I20250814 01:54:51.770327 4987 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250814 01:54:51.771350 4971 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.003s sys 0.002s
I20250814 01:54:51.771673 4971 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
uuid: "7275c0531bb54e9b8019bb2d13b15531"
format_stamp: "Formatted at 2025-08-14 01:54:51 on dist-test-slave-30wj"
I20250814 01:54:51.772007 4971 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250814 01:54:51.837684 4971 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250814 01:54:51.839218 4971 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250814 01:54:51.839651 4971 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250814 01:54:51.907534 4971 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.190:37351
I20250814 01:54:51.907593 5038 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.190:37351 every 8 connection(s)
I20250814 01:54:51.910212 4971 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250814 01:54:51.915071 5039 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250814 01:54:51.919135 426 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu as pid 4971
I20250814 01:54:51.919505 426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250814 01:54:51.937992 5039 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 7275c0531bb54e9b8019bb2d13b15531: Bootstrap starting.
I20250814 01:54:51.943219 5039 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 7275c0531bb54e9b8019bb2d13b15531: Neither blocks nor log segments found. Creating new log.
I20250814 01:54:51.944922 5039 log.cc:826] T 00000000000000000000000000000000 P 7275c0531bb54e9b8019bb2d13b15531: Log is configured to *not* fsync() on all Append() calls
I20250814 01:54:51.949601 5039 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 7275c0531bb54e9b8019bb2d13b15531: No bootstrap required, opened a new log
I20250814 01:54:51.966269 5039 raft_consensus.cc:357] T 00000000000000000000000000000000 P 7275c0531bb54e9b8019bb2d13b15531 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7275c0531bb54e9b8019bb2d13b15531" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 37351 } }
I20250814 01:54:51.966951 5039 raft_consensus.cc:383] T 00000000000000000000000000000000 P 7275c0531bb54e9b8019bb2d13b15531 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250814 01:54:51.967238 5039 raft_consensus.cc:738] T 00000000000000000000000000000000 P 7275c0531bb54e9b8019bb2d13b15531 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 7275c0531bb54e9b8019bb2d13b15531, State: Initialized, Role: FOLLOWER
I20250814 01:54:51.968050 5039 consensus_queue.cc:260] T 00000000000000000000000000000000 P 7275c0531bb54e9b8019bb2d13b15531 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7275c0531bb54e9b8019bb2d13b15531" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 37351 } }
I20250814 01:54:51.968537 5039 raft_consensus.cc:397] T 00000000000000000000000000000000 P 7275c0531bb54e9b8019bb2d13b15531 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250814 01:54:51.968806 5039 raft_consensus.cc:491] T 00000000000000000000000000000000 P 7275c0531bb54e9b8019bb2d13b15531 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250814 01:54:51.969113 5039 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 7275c0531bb54e9b8019bb2d13b15531 [term 0 FOLLOWER]: Advancing to term 1
I20250814 01:54:51.973069 5039 raft_consensus.cc:513] T 00000000000000000000000000000000 P 7275c0531bb54e9b8019bb2d13b15531 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7275c0531bb54e9b8019bb2d13b15531" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 37351 } }
I20250814 01:54:51.973783 5039 leader_election.cc:304] T 00000000000000000000000000000000 P 7275c0531bb54e9b8019bb2d13b15531 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 7275c0531bb54e9b8019bb2d13b15531; no voters:
I20250814 01:54:51.975906 5039 leader_election.cc:290] T 00000000000000000000000000000000 P 7275c0531bb54e9b8019bb2d13b15531 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250814 01:54:51.976583 5044 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 7275c0531bb54e9b8019bb2d13b15531 [term 1 FOLLOWER]: Leader election won for term 1
I20250814 01:54:51.978773 5044 raft_consensus.cc:695] T 00000000000000000000000000000000 P 7275c0531bb54e9b8019bb2d13b15531 [term 1 LEADER]: Becoming Leader. State: Replica: 7275c0531bb54e9b8019bb2d13b15531, State: Running, Role: LEADER
I20250814 01:54:51.979580 5044 consensus_queue.cc:237] T 00000000000000000000000000000000 P 7275c0531bb54e9b8019bb2d13b15531 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7275c0531bb54e9b8019bb2d13b15531" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 37351 } }
I20250814 01:54:51.980294 5039 sys_catalog.cc:564] T 00000000000000000000000000000000 P 7275c0531bb54e9b8019bb2d13b15531 [sys.catalog]: configured and running, proceeding with master startup.
I20250814 01:54:51.988499 5046 sys_catalog.cc:455] T 00000000000000000000000000000000 P 7275c0531bb54e9b8019bb2d13b15531 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 7275c0531bb54e9b8019bb2d13b15531. Latest consensus state: current_term: 1 leader_uuid: "7275c0531bb54e9b8019bb2d13b15531" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7275c0531bb54e9b8019bb2d13b15531" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 37351 } } }
I20250814 01:54:51.988858 5045 sys_catalog.cc:455] T 00000000000000000000000000000000 P 7275c0531bb54e9b8019bb2d13b15531 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "7275c0531bb54e9b8019bb2d13b15531" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7275c0531bb54e9b8019bb2d13b15531" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 37351 } } }
I20250814 01:54:51.989404 5046 sys_catalog.cc:458] T 00000000000000000000000000000000 P 7275c0531bb54e9b8019bb2d13b15531 [sys.catalog]: This master's current role is: LEADER
I20250814 01:54:51.989398 5045 sys_catalog.cc:458] T 00000000000000000000000000000000 P 7275c0531bb54e9b8019bb2d13b15531 [sys.catalog]: This master's current role is: LEADER
I20250814 01:54:51.993992 5053 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250814 01:54:52.010769 5053 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250814 01:54:52.026818 5053 catalog_manager.cc:1349] Generated new cluster ID: 38638be606b1477187ecc03bb4f489b4
I20250814 01:54:52.027118 5053 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250814 01:54:52.063737 5053 catalog_manager.cc:1372] Generated new certificate authority record
I20250814 01:54:52.065189 5053 catalog_manager.cc:1506] Loading token signing keys...
I20250814 01:54:52.089958 5053 catalog_manager.cc:5955] T 00000000000000000000000000000000 P 7275c0531bb54e9b8019bb2d13b15531: Generated new TSK 0
I20250814 01:54:52.090902 5053 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250814 01:54:52.116381 426 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
/tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.129:0
--local_ip_for_outbound_sockets=127.0.106.129
--webserver_interface=127.0.106.129
--webserver_port=0
--tserver_master_addrs=127.0.106.190:37351
--builtin_ntp_servers=127.0.106.148:45387
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--raft_prepare_replacement_before_eviction=true with env {}
W20250814 01:54:52.411365 5063 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:54:52.411854 5063 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250814 01:54:52.412340 5063 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:54:52.443022 5063 flags.cc:425] Enabled experimental flag: --raft_prepare_replacement_before_eviction=true
W20250814 01:54:52.443420 5063 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250814 01:54:52.444198 5063 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.129
I20250814 01:54:52.479084 5063 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:45387
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.129:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.0.106.129
--webserver_port=0
--tserver_master_addrs=127.0.106.190:37351
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.129
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 6307ab29b78ebc064971e01ca1da31590298dcda
build type FASTDEBUG
built by None at 14 Aug 2025 01:43:21 UTC on 5fd53c4cbb9d
build id 7557
TSAN enabled
I20250814 01:54:52.480320 5063 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250814 01:54:52.481946 5063 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250814 01:54:52.494267 5069 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:54:52.500172 5072 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:54:53.829597 5071 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1332 milliseconds
I20250814 01:54:53.829667 5063 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
W20250814 01:54:52.496753 5070 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250814 01:54:53.834168 5063 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250814 01:54:53.848004 5063 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250814 01:54:53.849529 5063 hybrid_clock.cc:648] HybridClock initialized: now 1755136493849481 us; error 65 us; skew 500 ppm
I20250814 01:54:53.850569 5063 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250814 01:54:53.857115 5063 webserver.cc:480] Webserver started at http://127.0.106.129:38243/ using document root <none> and password file <none>
I20250814 01:54:53.858048 5063 fs_manager.cc:362] Metadata directory not provided
I20250814 01:54:53.858273 5063 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250814 01:54:53.858728 5063 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250814 01:54:53.863142 5063 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "588b5c2641bf46c5a4c9408ed32193d0"
format_stamp: "Formatted at 2025-08-14 01:54:53 on dist-test-slave-30wj"
I20250814 01:54:53.864192 5063 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "588b5c2641bf46c5a4c9408ed32193d0"
format_stamp: "Formatted at 2025-08-14 01:54:53 on dist-test-slave-30wj"
I20250814 01:54:53.871454 5063 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.005s sys 0.003s
I20250814 01:54:53.877509 5079 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250814 01:54:53.878594 5063 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.000s
I20250814 01:54:53.878904 5063 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "588b5c2641bf46c5a4c9408ed32193d0"
format_stamp: "Formatted at 2025-08-14 01:54:53 on dist-test-slave-30wj"
I20250814 01:54:53.879231 5063 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250814 01:54:53.943795 5063 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250814 01:54:53.945466 5063 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250814 01:54:53.945915 5063 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250814 01:54:53.949304 5063 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250814 01:54:53.953752 5063 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250814 01:54:53.953943 5063 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.001s sys 0.000s
I20250814 01:54:53.954195 5063 ts_tablet_manager.cc:610] Registered 0 tablets
I20250814 01:54:53.954339 5063 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250814 01:54:54.111620 5063 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.129:33103
I20250814 01:54:54.111714 5191 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.129:33103 every 8 connection(s)
I20250814 01:54:54.114323 5063 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250814 01:54:54.124632 426 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu as pid 5063
I20250814 01:54:54.125137 426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250814 01:54:54.133019 426 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
/tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.130:0
--local_ip_for_outbound_sockets=127.0.106.130
--webserver_interface=127.0.106.130
--webserver_port=0
--tserver_master_addrs=127.0.106.190:37351
--builtin_ntp_servers=127.0.106.148:45387
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--raft_prepare_replacement_before_eviction=true with env {}
I20250814 01:54:54.136711 5192 heartbeater.cc:344] Connected to a master server at 127.0.106.190:37351
I20250814 01:54:54.137142 5192 heartbeater.cc:461] Registering TS with master...
I20250814 01:54:54.138208 5192 heartbeater.cc:507] Master 127.0.106.190:37351 requested a full tablet report, sending...
I20250814 01:54:54.140633 5004 ts_manager.cc:194] Registered new tserver with Master: 588b5c2641bf46c5a4c9408ed32193d0 (127.0.106.129:33103)
I20250814 01:54:54.142639 5004 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.129:49119
W20250814 01:54:54.469283 5196 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:54:54.469781 5196 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250814 01:54:54.470238 5196 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:54:54.501348 5196 flags.cc:425] Enabled experimental flag: --raft_prepare_replacement_before_eviction=true
W20250814 01:54:54.501845 5196 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250814 01:54:54.502940 5196 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.130
I20250814 01:54:54.537401 5196 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:45387
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.130:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.0.106.130
--webserver_port=0
--tserver_master_addrs=127.0.106.190:37351
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.130
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 6307ab29b78ebc064971e01ca1da31590298dcda
build type FASTDEBUG
built by None at 14 Aug 2025 01:43:21 UTC on 5fd53c4cbb9d
build id 7557
TSAN enabled
I20250814 01:54:54.538645 5196 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250814 01:54:54.540153 5196 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250814 01:54:54.551357 5202 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250814 01:54:55.146071 5192 heartbeater.cc:499] Master 127.0.106.190:37351 was elected leader, sending a full tablet report...
W20250814 01:54:54.552156 5203 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:54:55.648950 5204 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1094 milliseconds
W20250814 01:54:55.649569 5205 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:54:55.652621 5196 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.101s user 0.370s sys 0.728s
W20250814 01:54:55.652980 5196 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.101s user 0.370s sys 0.728s
I20250814 01:54:55.653275 5196 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250814 01:54:55.654466 5196 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250814 01:54:55.656610 5196 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250814 01:54:55.657958 5196 hybrid_clock.cc:648] HybridClock initialized: now 1755136495657918 us; error 52 us; skew 500 ppm
I20250814 01:54:55.658718 5196 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250814 01:54:55.665532 5196 webserver.cc:480] Webserver started at http://127.0.106.130:45113/ using document root <none> and password file <none>
I20250814 01:54:55.666554 5196 fs_manager.cc:362] Metadata directory not provided
I20250814 01:54:55.666760 5196 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250814 01:54:55.667348 5196 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250814 01:54:55.671736 5196 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data/instance:
uuid: "b6be6be6e5e0461892b59c2712773abe"
format_stamp: "Formatted at 2025-08-14 01:54:55 on dist-test-slave-30wj"
I20250814 01:54:55.672881 5196 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal/instance:
uuid: "b6be6be6e5e0461892b59c2712773abe"
format_stamp: "Formatted at 2025-08-14 01:54:55 on dist-test-slave-30wj"
I20250814 01:54:55.680846 5196 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.009s sys 0.001s
I20250814 01:54:55.687736 5212 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250814 01:54:55.688863 5196 fs_manager.cc:730] Time spent opening block manager: real 0.005s user 0.000s sys 0.005s
I20250814 01:54:55.689188 5196 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "b6be6be6e5e0461892b59c2712773abe"
format_stamp: "Formatted at 2025-08-14 01:54:55 on dist-test-slave-30wj"
I20250814 01:54:55.689523 5196 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250814 01:54:55.749394 5196 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250814 01:54:55.750803 5196 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250814 01:54:55.751224 5196 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250814 01:54:55.753682 5196 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250814 01:54:55.757606 5196 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250814 01:54:55.757829 5196 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.001s sys 0.000s
I20250814 01:54:55.758074 5196 ts_tablet_manager.cc:610] Registered 0 tablets
I20250814 01:54:55.758229 5196 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250814 01:54:55.883596 5196 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.130:46481
I20250814 01:54:55.883702 5324 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.130:46481 every 8 connection(s)
I20250814 01:54:55.886138 5196 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250814 01:54:55.888919 426 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu as pid 5196
I20250814 01:54:55.889429 426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-1/wal/instance
I20250814 01:54:55.895913 426 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
/tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.131:0
--local_ip_for_outbound_sockets=127.0.106.131
--webserver_interface=127.0.106.131
--webserver_port=0
--tserver_master_addrs=127.0.106.190:37351
--builtin_ntp_servers=127.0.106.148:45387
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--raft_prepare_replacement_before_eviction=true with env {}
I20250814 01:54:55.906241 5325 heartbeater.cc:344] Connected to a master server at 127.0.106.190:37351
I20250814 01:54:55.906711 5325 heartbeater.cc:461] Registering TS with master...
I20250814 01:54:55.907917 5325 heartbeater.cc:507] Master 127.0.106.190:37351 requested a full tablet report, sending...
I20250814 01:54:55.910322 5004 ts_manager.cc:194] Registered new tserver with Master: b6be6be6e5e0461892b59c2712773abe (127.0.106.130:46481)
I20250814 01:54:55.911903 5004 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.130:40235
W20250814 01:54:56.190227 5329 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:54:56.190706 5329 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250814 01:54:56.191197 5329 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:54:56.222720 5329 flags.cc:425] Enabled experimental flag: --raft_prepare_replacement_before_eviction=true
W20250814 01:54:56.223115 5329 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250814 01:54:56.223886 5329 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.131
I20250814 01:54:56.258749 5329 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:45387
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.131:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.0.106.131
--webserver_port=0
--tserver_master_addrs=127.0.106.190:37351
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.131
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 6307ab29b78ebc064971e01ca1da31590298dcda
build type FASTDEBUG
built by None at 14 Aug 2025 01:43:21 UTC on 5fd53c4cbb9d
build id 7557
TSAN enabled
I20250814 01:54:56.260149 5329 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250814 01:54:56.261682 5329 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250814 01:54:56.272490 5335 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250814 01:54:56.915449 5325 heartbeater.cc:499] Master 127.0.106.190:37351 was elected leader, sending a full tablet report...
W20250814 01:54:56.273766 5336 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:54:57.676148 5334 debug-util.cc:398] Leaking SignalData structure 0x7b0800034ea0 after lost signal to thread 5329
W20250814 01:54:57.708958 5329 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.435s user 0.546s sys 0.879s
W20250814 01:54:57.709488 5329 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.436s user 0.546s sys 0.879s
W20250814 01:54:57.711371 5338 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:54:57.712746 5337 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1436 milliseconds
I20250814 01:54:57.712842 5329 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250814 01:54:57.714015 5329 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250814 01:54:57.716073 5329 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250814 01:54:57.717437 5329 hybrid_clock.cc:648] HybridClock initialized: now 1755136497717397 us; error 45 us; skew 500 ppm
I20250814 01:54:57.718288 5329 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250814 01:54:57.724366 5329 webserver.cc:480] Webserver started at http://127.0.106.131:41097/ using document root <none> and password file <none>
I20250814 01:54:57.725314 5329 fs_manager.cc:362] Metadata directory not provided
I20250814 01:54:57.725538 5329 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250814 01:54:57.726022 5329 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250814 01:54:57.730456 5329 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data/instance:
uuid: "7b17f2ac5ceb480ca587e062a10699fa"
format_stamp: "Formatted at 2025-08-14 01:54:57 on dist-test-slave-30wj"
I20250814 01:54:57.731578 5329 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal/instance:
uuid: "7b17f2ac5ceb480ca587e062a10699fa"
format_stamp: "Formatted at 2025-08-14 01:54:57 on dist-test-slave-30wj"
I20250814 01:54:57.738860 5329 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.008s sys 0.000s
I20250814 01:54:57.744513 5345 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250814 01:54:57.745507 5329 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.001s sys 0.001s
I20250814 01:54:57.745885 5329 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "7b17f2ac5ceb480ca587e062a10699fa"
format_stamp: "Formatted at 2025-08-14 01:54:57 on dist-test-slave-30wj"
I20250814 01:54:57.746223 5329 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250814 01:54:57.809336 5329 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250814 01:54:57.810758 5329 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250814 01:54:57.811195 5329 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250814 01:54:57.813691 5329 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250814 01:54:57.817572 5329 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250814 01:54:57.817798 5329 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.001s
I20250814 01:54:57.818048 5329 ts_tablet_manager.cc:610] Registered 0 tablets
I20250814 01:54:57.818205 5329 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250814 01:54:57.949334 5329 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.131:41957
I20250814 01:54:57.949437 5457 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.131:41957 every 8 connection(s)
I20250814 01:54:57.951830 5329 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250814 01:54:57.960412 426 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu as pid 5329
I20250814 01:54:57.960788 426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-2/wal/instance
I20250814 01:54:57.966935 426 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
/tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-3/wal
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-3/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-3/logs
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-3/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.132:0
--local_ip_for_outbound_sockets=127.0.106.132
--webserver_interface=127.0.106.132
--webserver_port=0
--tserver_master_addrs=127.0.106.190:37351
--builtin_ntp_servers=127.0.106.148:45387
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--raft_prepare_replacement_before_eviction=true with env {}
I20250814 01:54:57.975626 5458 heartbeater.cc:344] Connected to a master server at 127.0.106.190:37351
I20250814 01:54:57.976116 5458 heartbeater.cc:461] Registering TS with master...
I20250814 01:54:57.977098 5458 heartbeater.cc:507] Master 127.0.106.190:37351 requested a full tablet report, sending...
I20250814 01:54:57.979138 5004 ts_manager.cc:194] Registered new tserver with Master: 7b17f2ac5ceb480ca587e062a10699fa (127.0.106.131:41957)
I20250814 01:54:57.980355 5004 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.131:52943
W20250814 01:54:58.264356 5462 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:54:58.264856 5462 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250814 01:54:58.265337 5462 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:54:58.296029 5462 flags.cc:425] Enabled experimental flag: --raft_prepare_replacement_before_eviction=true
W20250814 01:54:58.296418 5462 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250814 01:54:58.297188 5462 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.132
I20250814 01:54:58.331596 5462 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:45387
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-3/data
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-3/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.132:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-3/data/info.pb
--webserver_interface=127.0.106.132
--webserver_port=0
--tserver_master_addrs=127.0.106.190:37351
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.132
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-3/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 6307ab29b78ebc064971e01ca1da31590298dcda
build type FASTDEBUG
built by None at 14 Aug 2025 01:43:21 UTC on 5fd53c4cbb9d
build id 7557
TSAN enabled
I20250814 01:54:58.332877 5462 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250814 01:54:58.334434 5462 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250814 01:54:58.345883 5468 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250814 01:54:58.983392 5458 heartbeater.cc:499] Master 127.0.106.190:37351 was elected leader, sending a full tablet report...
W20250814 01:54:58.346774 5469 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:54:59.598563 5471 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:54:59.601595 5470 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1253 milliseconds
W20250814 01:54:59.602434 5462 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.256s user 0.515s sys 0.733s
W20250814 01:54:59.602771 5462 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.257s user 0.515s sys 0.733s
I20250814 01:54:59.603034 5462 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250814 01:54:59.604418 5462 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250814 01:54:59.607040 5462 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250814 01:54:59.608537 5462 hybrid_clock.cc:648] HybridClock initialized: now 1755136499608486 us; error 45 us; skew 500 ppm
I20250814 01:54:59.609589 5462 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250814 01:54:59.618196 5462 webserver.cc:480] Webserver started at http://127.0.106.132:37861/ using document root <none> and password file <none>
I20250814 01:54:59.619550 5462 fs_manager.cc:362] Metadata directory not provided
I20250814 01:54:59.619829 5462 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250814 01:54:59.620402 5462 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250814 01:54:59.628131 5462 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-3/data/instance:
uuid: "e940aefaa8c94c4e94f8b98e003c9d27"
format_stamp: "Formatted at 2025-08-14 01:54:59 on dist-test-slave-30wj"
I20250814 01:54:59.629616 5462 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-3/wal/instance:
uuid: "e940aefaa8c94c4e94f8b98e003c9d27"
format_stamp: "Formatted at 2025-08-14 01:54:59 on dist-test-slave-30wj"
I20250814 01:54:59.638967 5462 fs_manager.cc:696] Time spent creating directory manager: real 0.009s user 0.006s sys 0.002s
I20250814 01:54:59.646274 5479 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250814 01:54:59.647413 5462 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.001s sys 0.003s
I20250814 01:54:59.647805 5462 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-3/data,/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-3/wal
uuid: "e940aefaa8c94c4e94f8b98e003c9d27"
format_stamp: "Formatted at 2025-08-14 01:54:59 on dist-test-slave-30wj"
I20250814 01:54:59.648248 5462 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-3/wal
metadata directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-3/wal
1 data directories: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-3/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250814 01:54:59.728232 5462 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250814 01:54:59.729621 5462 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250814 01:54:59.730052 5462 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250814 01:54:59.732380 5462 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250814 01:54:59.736380 5462 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250814 01:54:59.736570 5462 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250814 01:54:59.736760 5462 ts_tablet_manager.cc:610] Registered 0 tablets
I20250814 01:54:59.736893 5462 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250814 01:54:59.860420 5462 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.132:45483
I20250814 01:54:59.860569 5591 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.132:45483 every 8 connection(s)
I20250814 01:54:59.862895 5462 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-3/data/info.pb
I20250814 01:54:59.871218 426 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu as pid 5462
I20250814 01:54:59.871774 426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-3/wal/instance
I20250814 01:54:59.878044 426 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
/tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-4/wal
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-4/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-4/logs
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-4/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.133:0
--local_ip_for_outbound_sockets=127.0.106.133
--webserver_interface=127.0.106.133
--webserver_port=0
--tserver_master_addrs=127.0.106.190:37351
--builtin_ntp_servers=127.0.106.148:45387
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--raft_prepare_replacement_before_eviction=true with env {}
I20250814 01:54:59.882599 5592 heartbeater.cc:344] Connected to a master server at 127.0.106.190:37351
I20250814 01:54:59.883049 5592 heartbeater.cc:461] Registering TS with master...
I20250814 01:54:59.884018 5592 heartbeater.cc:507] Master 127.0.106.190:37351 requested a full tablet report, sending...
I20250814 01:54:59.885993 5004 ts_manager.cc:194] Registered new tserver with Master: e940aefaa8c94c4e94f8b98e003c9d27 (127.0.106.132:45483)
I20250814 01:54:59.887174 5004 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.132:49403
W20250814 01:55:00.177484 5596 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:55:00.177995 5596 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250814 01:55:00.178475 5596 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:55:00.209632 5596 flags.cc:425] Enabled experimental flag: --raft_prepare_replacement_before_eviction=true
W20250814 01:55:00.210047 5596 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250814 01:55:00.210824 5596 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.133
I20250814 01:55:00.244731 5596 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:45387
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-4/data
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-4/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.133:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-4/data/info.pb
--webserver_interface=127.0.106.133
--webserver_port=0
--tserver_master_addrs=127.0.106.190:37351
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.133
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-4/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 6307ab29b78ebc064971e01ca1da31590298dcda
build type FASTDEBUG
built by None at 14 Aug 2025 01:43:21 UTC on 5fd53c4cbb9d
build id 7557
TSAN enabled
I20250814 01:55:00.246027 5596 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250814 01:55:00.247611 5596 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250814 01:55:00.258986 5603 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250814 01:55:00.890185 5592 heartbeater.cc:499] Master 127.0.106.190:37351 was elected leader, sending a full tablet report...
W20250814 01:55:00.259413 5604 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:55:01.452389 5606 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:55:01.455147 5605 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1190 milliseconds
W20250814 01:55:01.455780 5596 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.197s user 0.496s sys 0.685s
W20250814 01:55:01.456023 5596 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.197s user 0.496s sys 0.685s
I20250814 01:55:01.456233 5596 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250814 01:55:01.457228 5596 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250814 01:55:01.459414 5596 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250814 01:55:01.460747 5596 hybrid_clock.cc:648] HybridClock initialized: now 1755136501460722 us; error 42 us; skew 500 ppm
I20250814 01:55:01.461531 5596 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250814 01:55:01.468947 5596 webserver.cc:480] Webserver started at http://127.0.106.133:39523/ using document root <none> and password file <none>
I20250814 01:55:01.469954 5596 fs_manager.cc:362] Metadata directory not provided
I20250814 01:55:01.470172 5596 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250814 01:55:01.470604 5596 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250814 01:55:01.475025 5596 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-4/data/instance:
uuid: "1f82d3f68fe1425eabc95099b3e11fa4"
format_stamp: "Formatted at 2025-08-14 01:55:01 on dist-test-slave-30wj"
I20250814 01:55:01.476181 5596 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-4/wal/instance:
uuid: "1f82d3f68fe1425eabc95099b3e11fa4"
format_stamp: "Formatted at 2025-08-14 01:55:01 on dist-test-slave-30wj"
I20250814 01:55:01.483923 5596 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.005s sys 0.004s
I20250814 01:55:01.489740 5613 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250814 01:55:01.490898 5596 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.001s sys 0.003s
I20250814 01:55:01.491230 5596 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-4/data,/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-4/wal
uuid: "1f82d3f68fe1425eabc95099b3e11fa4"
format_stamp: "Formatted at 2025-08-14 01:55:01 on dist-test-slave-30wj"
I20250814 01:55:01.491561 5596 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-4/wal
metadata directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-4/wal
1 data directories: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-4/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250814 01:55:01.551793 5596 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250814 01:55:01.553261 5596 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250814 01:55:01.553689 5596 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250814 01:55:01.556169 5596 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250814 01:55:01.560202 5596 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250814 01:55:01.560418 5596 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250814 01:55:01.560653 5596 ts_tablet_manager.cc:610] Registered 0 tablets
I20250814 01:55:01.560806 5596 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250814 01:55:01.706053 5596 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.133:38109
I20250814 01:55:01.706158 5725 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.133:38109 every 8 connection(s)
I20250814 01:55:01.708593 5596 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-4/data/info.pb
I20250814 01:55:01.709939 426 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu as pid 5596
I20250814 01:55:01.710470 426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1755136369252803-426-0/raft_consensus-itest-cluster/ts-4/wal/instance
I20250814 01:55:01.733157 5726 heartbeater.cc:344] Connected to a master server at 127.0.106.190:37351
I20250814 01:55:01.733659 5726 heartbeater.cc:461] Registering TS with master...
I20250814 01:55:01.734863 5726 heartbeater.cc:507] Master 127.0.106.190:37351 requested a full tablet report, sending...
I20250814 01:55:01.737157 5004 ts_manager.cc:194] Registered new tserver with Master: 1f82d3f68fe1425eabc95099b3e11fa4 (127.0.106.133:38109)
I20250814 01:55:01.738618 5004 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.133:50663
I20250814 01:55:01.748106 426 external_mini_cluster.cc:949] 5 TS(s) registered with all masters
I20250814 01:55:01.787348 5004 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:34166:
name: "TestTable"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
owner: "alice"
I20250814 01:55:01.869032 5260 tablet_service.cc:1468] Processing CreateTablet for tablet 076af2d144254f5a9c5375a0402a36d4 (DEFAULT_TABLE table=TestTable [id=504c0218ec6a4a2bbc510b0aa5cdb5e3]), partition=RANGE (key) PARTITION UNBOUNDED
I20250814 01:55:01.870146 5127 tablet_service.cc:1468] Processing CreateTablet for tablet 076af2d144254f5a9c5375a0402a36d4 (DEFAULT_TABLE table=TestTable [id=504c0218ec6a4a2bbc510b0aa5cdb5e3]), partition=RANGE (key) PARTITION UNBOUNDED
I20250814 01:55:01.871699 5260 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 076af2d144254f5a9c5375a0402a36d4. 1 dirs total, 0 dirs full, 0 dirs failed
I20250814 01:55:01.872090 5127 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 076af2d144254f5a9c5375a0402a36d4. 1 dirs total, 0 dirs full, 0 dirs failed
I20250814 01:55:01.883605 5527 tablet_service.cc:1468] Processing CreateTablet for tablet 076af2d144254f5a9c5375a0402a36d4 (DEFAULT_TABLE table=TestTable [id=504c0218ec6a4a2bbc510b0aa5cdb5e3]), partition=RANGE (key) PARTITION UNBOUNDED
I20250814 01:55:01.885820 5527 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 076af2d144254f5a9c5375a0402a36d4. 1 dirs total, 0 dirs full, 0 dirs failed
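[editor's note] The "Could only allocate 1 dirs of requested 3" messages are benign in this setup: each tablet server was started with a single --fs_data_dirs path, while tablet data is striped across multiple data directories by default (three per tablet, governed by --fs_target_data_dirs_per_tablet, if memory serves), so allocation falls back to the one available directory. A multi-directory layout would avoid the message, e.g. with hypothetical paths: --fs_data_dirs=/data/1,/data/2,/data/3.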
I20250814 01:55:01.906973 5745 tablet_bootstrap.cc:492] T 076af2d144254f5a9c5375a0402a36d4 P 588b5c2641bf46c5a4c9408ed32193d0: Bootstrap starting.
I20250814 01:55:01.914860 5745 tablet_bootstrap.cc:654] T 076af2d144254f5a9c5375a0402a36d4 P 588b5c2641bf46c5a4c9408ed32193d0: Neither blocks nor log segments found. Creating new log.
I20250814 01:55:01.917855 5745 log.cc:826] T 076af2d144254f5a9c5375a0402a36d4 P 588b5c2641bf46c5a4c9408ed32193d0: Log is configured to *not* fsync() on all Append() calls
I20250814 01:55:01.919993 5746 tablet_bootstrap.cc:492] T 076af2d144254f5a9c5375a0402a36d4 P b6be6be6e5e0461892b59c2712773abe: Bootstrap starting.
I20250814 01:55:01.929044 5746 tablet_bootstrap.cc:654] T 076af2d144254f5a9c5375a0402a36d4 P b6be6be6e5e0461892b59c2712773abe: Neither blocks nor log segments found. Creating new log.
I20250814 01:55:01.930123 5748 tablet_bootstrap.cc:492] T 076af2d144254f5a9c5375a0402a36d4 P e940aefaa8c94c4e94f8b98e003c9d27: Bootstrap starting.
I20250814 01:55:01.932448 5746 log.cc:826] T 076af2d144254f5a9c5375a0402a36d4 P b6be6be6e5e0461892b59c2712773abe: Log is configured to *not* fsync() on all Append() calls
I20250814 01:55:01.943672 5748 tablet_bootstrap.cc:654] T 076af2d144254f5a9c5375a0402a36d4 P e940aefaa8c94c4e94f8b98e003c9d27: Neither blocks nor log segments found. Creating new log.
I20250814 01:55:01.943835 5745 tablet_bootstrap.cc:492] T 076af2d144254f5a9c5375a0402a36d4 P 588b5c2641bf46c5a4c9408ed32193d0: No bootstrap required, opened a new log
I20250814 01:55:01.944343 5745 ts_tablet_manager.cc:1397] T 076af2d144254f5a9c5375a0402a36d4 P 588b5c2641bf46c5a4c9408ed32193d0: Time spent bootstrapping tablet: real 0.039s user 0.014s sys 0.011s
I20250814 01:55:01.946209 5748 log.cc:826] T 076af2d144254f5a9c5375a0402a36d4 P e940aefaa8c94c4e94f8b98e003c9d27: Log is configured to *not* fsync() on all Append() calls
I20250814 01:55:01.962399 5748 tablet_bootstrap.cc:492] T 076af2d144254f5a9c5375a0402a36d4 P e940aefaa8c94c4e94f8b98e003c9d27: No bootstrap required, opened a new log
I20250814 01:55:01.963268 5748 ts_tablet_manager.cc:1397] T 076af2d144254f5a9c5375a0402a36d4 P e940aefaa8c94c4e94f8b98e003c9d27: Time spent bootstrapping tablet: real 0.034s user 0.019s sys 0.003s
I20250814 01:55:01.972565 5745 raft_consensus.cc:357] T 076af2d144254f5a9c5375a0402a36d4 P 588b5c2641bf46c5a4c9408ed32193d0 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b6be6be6e5e0461892b59c2712773abe" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 46481 } } peers { permanent_uuid: "588b5c2641bf46c5a4c9408ed32193d0" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33103 } } peers { permanent_uuid: "e940aefaa8c94c4e94f8b98e003c9d27" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 45483 } }
I20250814 01:55:01.973779 5745 raft_consensus.cc:383] T 076af2d144254f5a9c5375a0402a36d4 P 588b5c2641bf46c5a4c9408ed32193d0 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250814 01:55:01.974151 5745 raft_consensus.cc:738] T 076af2d144254f5a9c5375a0402a36d4 P 588b5c2641bf46c5a4c9408ed32193d0 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 588b5c2641bf46c5a4c9408ed32193d0, State: Initialized, Role: FOLLOWER
I20250814 01:55:01.975139 5745 consensus_queue.cc:260] T 076af2d144254f5a9c5375a0402a36d4 P 588b5c2641bf46c5a4c9408ed32193d0 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b6be6be6e5e0461892b59c2712773abe" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 46481 } } peers { permanent_uuid: "588b5c2641bf46c5a4c9408ed32193d0" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33103 } } peers { permanent_uuid: "e940aefaa8c94c4e94f8b98e003c9d27" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 45483 } }
I20250814 01:55:01.986255 5745 ts_tablet_manager.cc:1428] T 076af2d144254f5a9c5375a0402a36d4 P 588b5c2641bf46c5a4c9408ed32193d0: Time spent starting tablet: real 0.042s user 0.031s sys 0.011s
I20250814 01:55:01.987179 5746 tablet_bootstrap.cc:492] T 076af2d144254f5a9c5375a0402a36d4 P b6be6be6e5e0461892b59c2712773abe: No bootstrap required, opened a new log
I20250814 01:55:01.987800 5746 ts_tablet_manager.cc:1397] T 076af2d144254f5a9c5375a0402a36d4 P b6be6be6e5e0461892b59c2712773abe: Time spent bootstrapping tablet: real 0.069s user 0.016s sys 0.026s
I20250814 01:55:01.993258 5748 raft_consensus.cc:357] T 076af2d144254f5a9c5375a0402a36d4 P e940aefaa8c94c4e94f8b98e003c9d27 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b6be6be6e5e0461892b59c2712773abe" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 46481 } } peers { permanent_uuid: "588b5c2641bf46c5a4c9408ed32193d0" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33103 } } peers { permanent_uuid: "e940aefaa8c94c4e94f8b98e003c9d27" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 45483 } }
I20250814 01:55:01.994112 5748 raft_consensus.cc:383] T 076af2d144254f5a9c5375a0402a36d4 P e940aefaa8c94c4e94f8b98e003c9d27 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250814 01:55:01.994537 5748 raft_consensus.cc:738] T 076af2d144254f5a9c5375a0402a36d4 P e940aefaa8c94c4e94f8b98e003c9d27 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: e940aefaa8c94c4e94f8b98e003c9d27, State: Initialized, Role: FOLLOWER
I20250814 01:55:01.995469 5748 consensus_queue.cc:260] T 076af2d144254f5a9c5375a0402a36d4 P e940aefaa8c94c4e94f8b98e003c9d27 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b6be6be6e5e0461892b59c2712773abe" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 46481 } } peers { permanent_uuid: "588b5c2641bf46c5a4c9408ed32193d0" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33103 } } peers { permanent_uuid: "e940aefaa8c94c4e94f8b98e003c9d27" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 45483 } }
I20250814 01:55:02.000525 5748 ts_tablet_manager.cc:1428] T 076af2d144254f5a9c5375a0402a36d4 P e940aefaa8c94c4e94f8b98e003c9d27: Time spent starting tablet: real 0.037s user 0.022s sys 0.010s
I20250814 01:55:02.011420 5746 raft_consensus.cc:357] T 076af2d144254f5a9c5375a0402a36d4 P b6be6be6e5e0461892b59c2712773abe [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b6be6be6e5e0461892b59c2712773abe" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 46481 } } peers { permanent_uuid: "588b5c2641bf46c5a4c9408ed32193d0" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33103 } } peers { permanent_uuid: "e940aefaa8c94c4e94f8b98e003c9d27" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 45483 } }
I20250814 01:55:02.012257 5746 raft_consensus.cc:383] T 076af2d144254f5a9c5375a0402a36d4 P b6be6be6e5e0461892b59c2712773abe [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250814 01:55:02.012527 5746 raft_consensus.cc:738] T 076af2d144254f5a9c5375a0402a36d4 P b6be6be6e5e0461892b59c2712773abe [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: b6be6be6e5e0461892b59c2712773abe, State: Initialized, Role: FOLLOWER
I20250814 01:55:02.013307 5746 consensus_queue.cc:260] T 076af2d144254f5a9c5375a0402a36d4 P b6be6be6e5e0461892b59c2712773abe [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b6be6be6e5e0461892b59c2712773abe" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 46481 } } peers { permanent_uuid: "588b5c2641bf46c5a4c9408ed32193d0" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33103 } } peers { permanent_uuid: "e940aefaa8c94c4e94f8b98e003c9d27" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 45483 } }
I20250814 01:55:02.017171 5746 ts_tablet_manager.cc:1428] T 076af2d144254f5a9c5375a0402a36d4 P b6be6be6e5e0461892b59c2712773abe: Time spent starting tablet: real 0.029s user 0.024s sys 0.003s
W20250814 01:55:02.119208 5593 tablet.cc:2378] T 076af2d144254f5a9c5375a0402a36d4 P e940aefaa8c94c4e94f8b98e003c9d27: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250814 01:55:02.126904 5193 tablet.cc:2378] T 076af2d144254f5a9c5375a0402a36d4 P 588b5c2641bf46c5a4c9408ed32193d0: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250814 01:55:02.146765 5326 tablet.cc:2378] T 076af2d144254f5a9c5375a0402a36d4 P b6be6be6e5e0461892b59c2712773abe: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250814 01:55:02.205193 5751 raft_consensus.cc:491] T 076af2d144254f5a9c5375a0402a36d4 P 588b5c2641bf46c5a4c9408ed32193d0 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250814 01:55:02.205742 5751 raft_consensus.cc:513] T 076af2d144254f5a9c5375a0402a36d4 P 588b5c2641bf46c5a4c9408ed32193d0 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b6be6be6e5e0461892b59c2712773abe" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 46481 } } peers { permanent_uuid: "588b5c2641bf46c5a4c9408ed32193d0" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33103 } } peers { permanent_uuid: "e940aefaa8c94c4e94f8b98e003c9d27" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 45483 } }
I20250814 01:55:02.208101 5751 leader_election.cc:290] T 076af2d144254f5a9c5375a0402a36d4 P 588b5c2641bf46c5a4c9408ed32193d0 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers b6be6be6e5e0461892b59c2712773abe (127.0.106.130:46481), e940aefaa8c94c4e94f8b98e003c9d27 (127.0.106.132:45483)
I20250814 01:55:02.219405 5280 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "076af2d144254f5a9c5375a0402a36d4" candidate_uuid: "588b5c2641bf46c5a4c9408ed32193d0" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "b6be6be6e5e0461892b59c2712773abe" is_pre_election: true
I20250814 01:55:02.220329 5280 raft_consensus.cc:2466] T 076af2d144254f5a9c5375a0402a36d4 P b6be6be6e5e0461892b59c2712773abe [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 588b5c2641bf46c5a4c9408ed32193d0 in term 0.
I20250814 01:55:02.221500 5083 leader_election.cc:304] T 076af2d144254f5a9c5375a0402a36d4 P 588b5c2641bf46c5a4c9408ed32193d0 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 588b5c2641bf46c5a4c9408ed32193d0, b6be6be6e5e0461892b59c2712773abe; no voters:
I20250814 01:55:02.221453 5547 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "076af2d144254f5a9c5375a0402a36d4" candidate_uuid: "588b5c2641bf46c5a4c9408ed32193d0" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "e940aefaa8c94c4e94f8b98e003c9d27" is_pre_election: true
I20250814 01:55:02.222246 5751 raft_consensus.cc:2802] T 076af2d144254f5a9c5375a0402a36d4 P 588b5c2641bf46c5a4c9408ed32193d0 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250814 01:55:02.222163 5547 raft_consensus.cc:2466] T 076af2d144254f5a9c5375a0402a36d4 P e940aefaa8c94c4e94f8b98e003c9d27 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 588b5c2641bf46c5a4c9408ed32193d0 in term 0.
I20250814 01:55:02.222576 5751 raft_consensus.cc:491] T 076af2d144254f5a9c5375a0402a36d4 P 588b5c2641bf46c5a4c9408ed32193d0 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250814 01:55:02.222887 5751 raft_consensus.cc:3058] T 076af2d144254f5a9c5375a0402a36d4 P 588b5c2641bf46c5a4c9408ed32193d0 [term 0 FOLLOWER]: Advancing to term 1
I20250814 01:55:02.227885 5751 raft_consensus.cc:513] T 076af2d144254f5a9c5375a0402a36d4 P 588b5c2641bf46c5a4c9408ed32193d0 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b6be6be6e5e0461892b59c2712773abe" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 46481 } } peers { permanent_uuid: "588b5c2641bf46c5a4c9408ed32193d0" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33103 } } peers { permanent_uuid: "e940aefaa8c94c4e94f8b98e003c9d27" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 45483 } }
I20250814 01:55:02.229213 5751 leader_election.cc:290] T 076af2d144254f5a9c5375a0402a36d4 P 588b5c2641bf46c5a4c9408ed32193d0 [CANDIDATE]: Term 1 election: Requested vote from peers b6be6be6e5e0461892b59c2712773abe (127.0.106.130:46481), e940aefaa8c94c4e94f8b98e003c9d27 (127.0.106.132:45483)
I20250814 01:55:02.230024 5280 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "076af2d144254f5a9c5375a0402a36d4" candidate_uuid: "588b5c2641bf46c5a4c9408ed32193d0" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "b6be6be6e5e0461892b59c2712773abe"
I20250814 01:55:02.230116 5547 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "076af2d144254f5a9c5375a0402a36d4" candidate_uuid: "588b5c2641bf46c5a4c9408ed32193d0" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "e940aefaa8c94c4e94f8b98e003c9d27"
I20250814 01:55:02.230453 5280 raft_consensus.cc:3058] T 076af2d144254f5a9c5375a0402a36d4 P b6be6be6e5e0461892b59c2712773abe [term 0 FOLLOWER]: Advancing to term 1
I20250814 01:55:02.230527 5547 raft_consensus.cc:3058] T 076af2d144254f5a9c5375a0402a36d4 P e940aefaa8c94c4e94f8b98e003c9d27 [term 0 FOLLOWER]: Advancing to term 1
I20250814 01:55:02.235930 5547 raft_consensus.cc:2466] T 076af2d144254f5a9c5375a0402a36d4 P e940aefaa8c94c4e94f8b98e003c9d27 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 588b5c2641bf46c5a4c9408ed32193d0 in term 1.
I20250814 01:55:02.236295 5280 raft_consensus.cc:2466] T 076af2d144254f5a9c5375a0402a36d4 P b6be6be6e5e0461892b59c2712773abe [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 588b5c2641bf46c5a4c9408ed32193d0 in term 1.
I20250814 01:55:02.236830 5083 leader_election.cc:304] T 076af2d144254f5a9c5375a0402a36d4 P 588b5c2641bf46c5a4c9408ed32193d0 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 588b5c2641bf46c5a4c9408ed32193d0, e940aefaa8c94c4e94f8b98e003c9d27; no voters:
I20250814 01:55:02.237473 5751 raft_consensus.cc:2802] T 076af2d144254f5a9c5375a0402a36d4 P 588b5c2641bf46c5a4c9408ed32193d0 [term 1 FOLLOWER]: Leader election won for term 1
I20250814 01:55:02.238976 5751 raft_consensus.cc:695] T 076af2d144254f5a9c5375a0402a36d4 P 588b5c2641bf46c5a4c9408ed32193d0 [term 1 LEADER]: Becoming Leader. State: Replica: 588b5c2641bf46c5a4c9408ed32193d0, State: Running, Role: LEADER
I20250814 01:55:02.239817 5751 consensus_queue.cc:237] T 076af2d144254f5a9c5375a0402a36d4 P 588b5c2641bf46c5a4c9408ed32193d0 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b6be6be6e5e0461892b59c2712773abe" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 46481 } } peers { permanent_uuid: "588b5c2641bf46c5a4c9408ed32193d0" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33103 } } peers { permanent_uuid: "e940aefaa8c94c4e94f8b98e003c9d27" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 45483 } }
I20250814 01:55:02.251030 5004 catalog_manager.cc:5582] T 076af2d144254f5a9c5375a0402a36d4 P 588b5c2641bf46c5a4c9408ed32193d0 reported cstate change: term changed from 0 to 1, leader changed from <none> to 588b5c2641bf46c5a4c9408ed32193d0 (127.0.106.129). New cstate: current_term: 1 leader_uuid: "588b5c2641bf46c5a4c9408ed32193d0" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "b6be6be6e5e0461892b59c2712773abe" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 46481 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "588b5c2641bf46c5a4c9408ed32193d0" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 33103 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "e940aefaa8c94c4e94f8b98e003c9d27" member_type: VOTER last_known_addr { host: "127.0.106.132" port: 45483 } health_report { overall_health: UNKNOWN } } }
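[editor's note] For context on "Majority size: 2" above: with the three VOTER replicas in this Raft config, the commit majority is floor(3/2) + 1 = 2, so the leader needs one follower acknowledgement besides itself. The election summaries earlier show the same arithmetic deciding the vote: 2 yes votes out of 3 voters is already a winning result, which is why the decision is logged before the third response arrives.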
I20250814 01:55:02.325109 426 external_mini_cluster.cc:949] 5 TS(s) registered with all masters
I20250814 01:55:02.329084 426 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 588b5c2641bf46c5a4c9408ed32193d0 to finish bootstrapping
I20250814 01:55:02.341908 426 ts_itest-base.cc:246] Waiting for 1 tablets on tserver b6be6be6e5e0461892b59c2712773abe to finish bootstrapping
I20250814 01:55:02.352361 426 ts_itest-base.cc:246] Waiting for 1 tablets on tserver e940aefaa8c94c4e94f8b98e003c9d27 to finish bootstrapping
I20250814 01:55:02.363521 426 test_util.cc:276] Using random seed: -1883146903
I20250814 01:55:02.388509 426 test_workload.cc:405] TestWorkload: Skipping table creation because table TestTable already exists
I20250814 01:55:02.389369 426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu with pid 5462
W20250814 01:55:02.432202 5083 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.0.106.132:45483: connect: Connection refused (error 111)
I20250814 01:55:02.432742 5280 raft_consensus.cc:1273] T 076af2d144254f5a9c5375a0402a36d4 P b6be6be6e5e0461892b59c2712773abe [term 1 FOLLOWER]: Refusing update from remote peer 588b5c2641bf46c5a4c9408ed32193d0: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250814 01:55:02.435307 5751 consensus_queue.cc:1035] T 076af2d144254f5a9c5375a0402a36d4 P 588b5c2641bf46c5a4c9408ed32193d0 [LEADER]: Connected to new peer: Peer: permanent_uuid: "b6be6be6e5e0461892b59c2712773abe" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 46481 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.001s
W20250814 01:55:02.437254 5083 consensus_peers.cc:489] T 076af2d144254f5a9c5375a0402a36d4 P 588b5c2641bf46c5a4c9408ed32193d0 -> Peer e940aefaa8c94c4e94f8b98e003c9d27 (127.0.106.132:45483): Couldn't send request to peer e940aefaa8c94c4e94f8b98e003c9d27. Status: Network error: Client connection negotiation failed: client connection to 127.0.106.132:45483: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20250814 01:55:02.453723 5768 mvcc.cc:204] Tried to move back new op lower bound from 7189039113943429120 to 7189039113183121408. Current Snapshot: MvccSnapshot[applied={T|T < 7189039113943429120}]
I20250814 01:55:02.457998 5771 mvcc.cc:204] Tried to move back new op lower bound from 7189039113943429120 to 7189039113183121408. Current Snapshot: MvccSnapshot[applied={T|T < 7189039113943429120}]
I20250814 01:55:02.742380 5726 heartbeater.cc:499] Master 127.0.106.190:37351 was elected leader, sending a full tablet report...
W20250814 01:55:04.680222 5083 consensus_peers.cc:489] T 076af2d144254f5a9c5375a0402a36d4 P 588b5c2641bf46c5a4c9408ed32193d0 -> Peer e940aefaa8c94c4e94f8b98e003c9d27 (127.0.106.132:45483): Couldn't send request to peer e940aefaa8c94c4e94f8b98e003c9d27. Status: Network error: Client connection negotiation failed: client connection to 127.0.106.132:45483: connect: Connection refused (error 111). This is attempt 6: this message will repeat every 5th retry.
I20250814 01:55:05.007444 5661 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250814 01:55:05.013345 5127 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250814 01:55:05.045125 5393 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250814 01:55:05.054821 5260 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250814 01:55:06.725232 5661 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250814 01:55:06.731710 5393 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250814 01:55:06.756366 5127 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250814 01:55:06.774029 5260 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
W20250814 01:55:07.134544 5083 consensus_peers.cc:489] T 076af2d144254f5a9c5375a0402a36d4 P 588b5c2641bf46c5a4c9408ed32193d0 -> Peer e940aefaa8c94c4e94f8b98e003c9d27 (127.0.106.132:45483): Couldn't send request to peer e940aefaa8c94c4e94f8b98e003c9d27. Status: Network error: Client connection negotiation failed: client connection to 127.0.106.132:45483: connect: Connection refused (error 111). This is attempt 11: this message will repeat every 5th retry.
W20250814 01:55:07.536405 5083 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.0.106.132:45483: connect: Connection refused (error 111) [suppressed 10 similar messages]
I20250814 01:55:09.192385 426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu with pid 5063
I20250814 01:55:09.223906 426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu with pid 5196
I20250814 01:55:09.253690 426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu with pid 5329
I20250814 01:55:09.274324 426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu with pid 5596
I20250814 01:55:09.294786 426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu with pid 4971
2025-08-14T01:55:09Z chronyd exiting
[ OK ] EnableKudu1097AndDownTS/MoveTabletParamTest.Test/4 (19588 ms)
[----------] 1 test from EnableKudu1097AndDownTS/MoveTabletParamTest (19589 ms total)
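[editor's note] To rerun just this parameterized case outside the sharded run, the standard googletest filter flag applies; the binary name below is assumed from the test-tmp paths in this log, and sharding itself is controlled by the usual GTEST_TOTAL_SHARDS / GTEST_SHARD_INDEX environment variables:

    ./build/tsan/bin/kudu-admin-test \
        --gtest_filter='EnableKudu1097AndDownTS/MoveTabletParamTest.Test/4'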
[----------] 1 test from ListTableCliSimpleParamTest
[ RUN ] ListTableCliSimpleParamTest.TestListTables/2
I20250814 01:55:09.347513 426 test_util.cc:276] Using random seed: -1876162914
I20250814 01:55:09.351548 426 ts_itest-base.cc:115] Starting cluster with:
I20250814 01:55:09.351711 426 ts_itest-base.cc:116] --------------
I20250814 01:55:09.351857 426 ts_itest-base.cc:117] 1 tablet servers
I20250814 01:55:09.352017 426 ts_itest-base.cc:118] 1 replicas per TS
I20250814 01:55:09.352152 426 ts_itest-base.cc:119] --------------
2025-08-14T01:55:09Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-08-14T01:55:09Z Disabled control of system clock
I20250814 01:55:09.393004 426 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
/tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.106.190:36167
--webserver_interface=127.0.106.190
--webserver_port=0
--builtin_ntp_servers=127.0.106.148:45745
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.0.106.190:36167 with env {}
W20250814 01:55:09.686690 5865 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:55:09.687435 5865 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250814 01:55:09.687937 5865 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:55:09.720623 5865 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250814 01:55:09.720968 5865 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250814 01:55:09.721222 5865 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250814 01:55:09.721458 5865 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250814 01:55:09.758324 5865 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:45745
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.0.106.190:36167
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.106.190:36167
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.0.106.190
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision 6307ab29b78ebc064971e01ca1da31590298dcda
build type FASTDEBUG
built by None at 14 Aug 2025 01:43:21 UTC on 5fd53c4cbb9d
build id 7557
TSAN enabled
I20250814 01:55:09.759611 5865 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250814 01:55:09.761196 5865 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250814 01:55:09.771394 5872 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:55:09.771958 5873 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:55:10.882654 5875 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:55:10.884650 5865 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.112s user 0.004s sys 0.004s
W20250814 01:55:10.885030 5865 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.112s user 0.004s sys 0.004s
I20250814 01:55:10.885293 5865 server_base.cc:1047] running on GCE node
I20250814 01:55:10.886684 5865 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250814 01:55:10.889811 5865 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250814 01:55:10.891345 5865 hybrid_clock.cc:648] HybridClock initialized: now 1755136510891293 us; error 47 us; skew 500 ppm
I20250814 01:55:10.892411 5865 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250814 01:55:10.900807 5865 webserver.cc:480] Webserver started at http://127.0.106.190:45449/ using document root <none> and password file <none>
I20250814 01:55:10.902189 5865 fs_manager.cc:362] Metadata directory not provided
I20250814 01:55:10.902487 5865 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250814 01:55:10.903080 5865 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250814 01:55:10.909600 5865 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "fb6351be6d114164bd48d80f3dccce09"
format_stamp: "Formatted at 2025-08-14 01:55:10 on dist-test-slave-30wj"
I20250814 01:55:10.911123 5865 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "fb6351be6d114164bd48d80f3dccce09"
format_stamp: "Formatted at 2025-08-14 01:55:10 on dist-test-slave-30wj"
I20250814 01:55:10.920575 5865 fs_manager.cc:696] Time spent creating directory manager: real 0.009s user 0.005s sys 0.004s
I20250814 01:55:10.927865 5882 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250814 01:55:10.929036 5865 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.006s sys 0.000s
I20250814 01:55:10.929427 5865 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
uuid: "fb6351be6d114164bd48d80f3dccce09"
format_stamp: "Formatted at 2025-08-14 01:55:10 on dist-test-slave-30wj"
I20250814 01:55:10.929896 5865 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250814 01:55:11.011617 5865 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250814 01:55:11.013607 5865 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250814 01:55:11.014184 5865 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250814 01:55:11.082039 5865 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.190:36167
I20250814 01:55:11.082120 5933 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.190:36167 every 8 connection(s)
I20250814 01:55:11.084794 5865 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250814 01:55:11.088846 426 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu as pid 5865
I20250814 01:55:11.089547 426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1755136369252803-426-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250814 01:55:11.090425 5934 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250814 01:55:11.110527 5934 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P fb6351be6d114164bd48d80f3dccce09: Bootstrap starting.
I20250814 01:55:11.115911 5934 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P fb6351be6d114164bd48d80f3dccce09: Neither blocks nor log segments found. Creating new log.
I20250814 01:55:11.117962 5934 log.cc:826] T 00000000000000000000000000000000 P fb6351be6d114164bd48d80f3dccce09: Log is configured to *not* fsync() on all Append() calls
I20250814 01:55:11.122598 5934 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P fb6351be6d114164bd48d80f3dccce09: No bootstrap required, opened a new log
I20250814 01:55:11.139011 5934 raft_consensus.cc:357] T 00000000000000000000000000000000 P fb6351be6d114164bd48d80f3dccce09 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "fb6351be6d114164bd48d80f3dccce09" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 36167 } }
I20250814 01:55:11.139652 5934 raft_consensus.cc:383] T 00000000000000000000000000000000 P fb6351be6d114164bd48d80f3dccce09 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250814 01:55:11.139890 5934 raft_consensus.cc:738] T 00000000000000000000000000000000 P fb6351be6d114164bd48d80f3dccce09 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: fb6351be6d114164bd48d80f3dccce09, State: Initialized, Role: FOLLOWER
I20250814 01:55:11.140533 5934 consensus_queue.cc:260] T 00000000000000000000000000000000 P fb6351be6d114164bd48d80f3dccce09 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "fb6351be6d114164bd48d80f3dccce09" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 36167 } }
I20250814 01:55:11.141067 5934 raft_consensus.cc:397] T 00000000000000000000000000000000 P fb6351be6d114164bd48d80f3dccce09 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250814 01:55:11.141342 5934 raft_consensus.cc:491] T 00000000000000000000000000000000 P fb6351be6d114164bd48d80f3dccce09 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250814 01:55:11.141637 5934 raft_consensus.cc:3058] T 00000000000000000000000000000000 P fb6351be6d114164bd48d80f3dccce09 [term 0 FOLLOWER]: Advancing to term 1
I20250814 01:55:11.145614 5934 raft_consensus.cc:513] T 00000000000000000000000000000000 P fb6351be6d114164bd48d80f3dccce09 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "fb6351be6d114164bd48d80f3dccce09" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 36167 } }
I20250814 01:55:11.146355 5934 leader_election.cc:304] T 00000000000000000000000000000000 P fb6351be6d114164bd48d80f3dccce09 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: fb6351be6d114164bd48d80f3dccce09; no voters:
I20250814 01:55:11.148044 5934 leader_election.cc:290] T 00000000000000000000000000000000 P fb6351be6d114164bd48d80f3dccce09 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250814 01:55:11.148957 5939 raft_consensus.cc:2802] T 00000000000000000000000000000000 P fb6351be6d114164bd48d80f3dccce09 [term 1 FOLLOWER]: Leader election won for term 1
I20250814 01:55:11.151026 5939 raft_consensus.cc:695] T 00000000000000000000000000000000 P fb6351be6d114164bd48d80f3dccce09 [term 1 LEADER]: Becoming Leader. State: Replica: fb6351be6d114164bd48d80f3dccce09, State: Running, Role: LEADER
I20250814 01:55:11.151811 5939 consensus_queue.cc:237] T 00000000000000000000000000000000 P fb6351be6d114164bd48d80f3dccce09 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "fb6351be6d114164bd48d80f3dccce09" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 36167 } }
I20250814 01:55:11.152822 5934 sys_catalog.cc:564] T 00000000000000000000000000000000 P fb6351be6d114164bd48d80f3dccce09 [sys.catalog]: configured and running, proceeding with master startup.
I20250814 01:55:11.158782 5941 sys_catalog.cc:455] T 00000000000000000000000000000000 P fb6351be6d114164bd48d80f3dccce09 [sys.catalog]: SysCatalogTable state changed. Reason: New leader fb6351be6d114164bd48d80f3dccce09. Latest consensus state: current_term: 1 leader_uuid: "fb6351be6d114164bd48d80f3dccce09" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "fb6351be6d114164bd48d80f3dccce09" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 36167 } } }
I20250814 01:55:11.159322 5940 sys_catalog.cc:455] T 00000000000000000000000000000000 P fb6351be6d114164bd48d80f3dccce09 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "fb6351be6d114164bd48d80f3dccce09" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "fb6351be6d114164bd48d80f3dccce09" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 36167 } } }
I20250814 01:55:11.159655 5941 sys_catalog.cc:458] T 00000000000000000000000000000000 P fb6351be6d114164bd48d80f3dccce09 [sys.catalog]: This master's current role is: LEADER
I20250814 01:55:11.159973 5940 sys_catalog.cc:458] T 00000000000000000000000000000000 P fb6351be6d114164bd48d80f3dccce09 [sys.catalog]: This master's current role is: LEADER
I20250814 01:55:11.163806 5947 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250814 01:55:11.174988 5947 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250814 01:55:11.190127 5947 catalog_manager.cc:1349] Generated new cluster ID: f71da7602b454305b4869ee240fa12a0
I20250814 01:55:11.190443 5947 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250814 01:55:11.205103 5947 catalog_manager.cc:1372] Generated new certificate authority record
I20250814 01:55:11.206522 5947 catalog_manager.cc:1506] Loading token signing keys...
I20250814 01:55:11.219575 5947 catalog_manager.cc:5955] T 00000000000000000000000000000000 P fb6351be6d114164bd48d80f3dccce09: Generated new TSK 0
I20250814 01:55:11.220412 5947 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250814 01:55:11.231496 426 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
/tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.129:0
--local_ip_for_outbound_sockets=127.0.106.129
--webserver_interface=127.0.106.129
--webserver_port=0
--tserver_master_addrs=127.0.106.190:36167
--builtin_ntp_servers=127.0.106.148:45745
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
W20250814 01:55:11.522838 5958 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:55:11.523331 5958 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250814 01:55:11.523804 5958 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:55:11.554816 5958 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250814 01:55:11.555653 5958 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.129
I20250814 01:55:11.590118 5958 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:45745
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.129:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.0.106.129
--webserver_port=0
--tserver_master_addrs=127.0.106.190:36167
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.129
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 6307ab29b78ebc064971e01ca1da31590298dcda
build type FASTDEBUG
built by None at 14 Aug 2025 01:43:21 UTC on 5fd53c4cbb9d
build id 7557
TSAN enabled
I20250814 01:55:11.591387 5958 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250814 01:55:11.592922 5958 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250814 01:55:11.604650 5964 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:55:11.607239 5965 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:55:11.608520 5967 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:55:12.705086 5966 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection time-out
I20250814 01:55:12.705338 5958 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250814 01:55:12.708899 5958 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250814 01:55:12.711426 5958 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250814 01:55:12.712832 5958 hybrid_clock.cc:648] HybridClock initialized: now 1755136512712777 us; error 76 us; skew 500 ppm
I20250814 01:55:12.713609 5958 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250814 01:55:12.719905 5958 webserver.cc:480] Webserver started at http://127.0.106.129:40725/ using document root <none> and password file <none>
I20250814 01:55:12.720777 5958 fs_manager.cc:362] Metadata directory not provided
I20250814 01:55:12.720971 5958 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250814 01:55:12.721422 5958 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250814 01:55:12.725659 5958 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "31602f7c084149ea9753b2c41a86ea40"
format_stamp: "Formatted at 2025-08-14 01:55:12 on dist-test-slave-30wj"
I20250814 01:55:12.726789 5958 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "31602f7c084149ea9753b2c41a86ea40"
format_stamp: "Formatted at 2025-08-14 01:55:12 on dist-test-slave-30wj"
I20250814 01:55:12.733920 5958 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.007s sys 0.000s
I20250814 01:55:12.739519 5974 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250814 01:55:12.740638 5958 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.005s sys 0.000s
I20250814 01:55:12.740940 5958 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "31602f7c084149ea9753b2c41a86ea40"
format_stamp: "Formatted at 2025-08-14 01:55:12 on dist-test-slave-30wj"
I20250814 01:55:12.741273 5958 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250814 01:55:12.799813 5958 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250814 01:55:12.801252 5958 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250814 01:55:12.801684 5958 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250814 01:55:12.804539 5958 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250814 01:55:12.808997 5958 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250814 01:55:12.809194 5958 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250814 01:55:12.809456 5958 ts_tablet_manager.cc:610] Registered 0 tablets
I20250814 01:55:12.809607 5958 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250814 01:55:12.958918 5958 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.129:35921
I20250814 01:55:12.959021 6086 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.129:35921 every 8 connection(s)
I20250814 01:55:12.961374 5958 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250814 01:55:12.969363 426 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu as pid 5958
I20250814 01:55:12.969898 426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1755136369252803-426-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250814 01:55:12.982148 6087 heartbeater.cc:344] Connected to a master server at 127.0.106.190:36167
I20250814 01:55:12.982546 6087 heartbeater.cc:461] Registering TS with master...
I20250814 01:55:12.983486 6087 heartbeater.cc:507] Master 127.0.106.190:36167 requested a full tablet report, sending...
I20250814 01:55:12.985756 5899 ts_manager.cc:194] Registered new tserver with Master: 31602f7c084149ea9753b2c41a86ea40 (127.0.106.129:35921)
I20250814 01:55:12.987618 5899 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.129:39105
I20250814 01:55:12.989166 426 external_mini_cluster.cc:949] 1 TS(s) registered with all masters
I20250814 01:55:13.022217 5898 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:50574:
name: "TestTable"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
owner: "alice"
I20250814 01:55:13.076897 6022 tablet_service.cc:1468] Processing CreateTablet for tablet 2e8f8a0334344fb5bdc45d111038bf34 (DEFAULT_TABLE table=TestTable [id=f634c0643710419b9062651259026924]), partition=RANGE (key) PARTITION UNBOUNDED
I20250814 01:55:13.078517 6022 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 2e8f8a0334344fb5bdc45d111038bf34. 1 dirs total, 0 dirs full, 0 dirs failed
I20250814 01:55:13.097962 6102 tablet_bootstrap.cc:492] T 2e8f8a0334344fb5bdc45d111038bf34 P 31602f7c084149ea9753b2c41a86ea40: Bootstrap starting.
I20250814 01:55:13.104751 6102 tablet_bootstrap.cc:654] T 2e8f8a0334344fb5bdc45d111038bf34 P 31602f7c084149ea9753b2c41a86ea40: Neither blocks nor log segments found. Creating new log.
I20250814 01:55:13.106957 6102 log.cc:826] T 2e8f8a0334344fb5bdc45d111038bf34 P 31602f7c084149ea9753b2c41a86ea40: Log is configured to *not* fsync() on all Append() calls
I20250814 01:55:13.112356 6102 tablet_bootstrap.cc:492] T 2e8f8a0334344fb5bdc45d111038bf34 P 31602f7c084149ea9753b2c41a86ea40: No bootstrap required, opened a new log
I20250814 01:55:13.112845 6102 ts_tablet_manager.cc:1397] T 2e8f8a0334344fb5bdc45d111038bf34 P 31602f7c084149ea9753b2c41a86ea40: Time spent bootstrapping tablet: real 0.015s user 0.012s sys 0.000s
I20250814 01:55:13.133384 6102 raft_consensus.cc:357] T 2e8f8a0334344fb5bdc45d111038bf34 P 31602f7c084149ea9753b2c41a86ea40 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "31602f7c084149ea9753b2c41a86ea40" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 35921 } }
I20250814 01:55:13.133929 6102 raft_consensus.cc:383] T 2e8f8a0334344fb5bdc45d111038bf34 P 31602f7c084149ea9753b2c41a86ea40 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250814 01:55:13.134195 6102 raft_consensus.cc:738] T 2e8f8a0334344fb5bdc45d111038bf34 P 31602f7c084149ea9753b2c41a86ea40 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 31602f7c084149ea9753b2c41a86ea40, State: Initialized, Role: FOLLOWER
I20250814 01:55:13.134832 6102 consensus_queue.cc:260] T 2e8f8a0334344fb5bdc45d111038bf34 P 31602f7c084149ea9753b2c41a86ea40 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "31602f7c084149ea9753b2c41a86ea40" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 35921 } }
I20250814 01:55:13.135329 6102 raft_consensus.cc:397] T 2e8f8a0334344fb5bdc45d111038bf34 P 31602f7c084149ea9753b2c41a86ea40 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250814 01:55:13.135583 6102 raft_consensus.cc:491] T 2e8f8a0334344fb5bdc45d111038bf34 P 31602f7c084149ea9753b2c41a86ea40 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250814 01:55:13.135883 6102 raft_consensus.cc:3058] T 2e8f8a0334344fb5bdc45d111038bf34 P 31602f7c084149ea9753b2c41a86ea40 [term 0 FOLLOWER]: Advancing to term 1
I20250814 01:55:13.139853 6102 raft_consensus.cc:513] T 2e8f8a0334344fb5bdc45d111038bf34 P 31602f7c084149ea9753b2c41a86ea40 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "31602f7c084149ea9753b2c41a86ea40" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 35921 } }
I20250814 01:55:13.140551 6102 leader_election.cc:304] T 2e8f8a0334344fb5bdc45d111038bf34 P 31602f7c084149ea9753b2c41a86ea40 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 31602f7c084149ea9753b2c41a86ea40; no voters:
I20250814 01:55:13.142168 6102 leader_election.cc:290] T 2e8f8a0334344fb5bdc45d111038bf34 P 31602f7c084149ea9753b2c41a86ea40 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250814 01:55:13.142503 6104 raft_consensus.cc:2802] T 2e8f8a0334344fb5bdc45d111038bf34 P 31602f7c084149ea9753b2c41a86ea40 [term 1 FOLLOWER]: Leader election won for term 1
I20250814 01:55:13.144997 6104 raft_consensus.cc:695] T 2e8f8a0334344fb5bdc45d111038bf34 P 31602f7c084149ea9753b2c41a86ea40 [term 1 LEADER]: Becoming Leader. State: Replica: 31602f7c084149ea9753b2c41a86ea40, State: Running, Role: LEADER
I20250814 01:55:13.145591 6087 heartbeater.cc:499] Master 127.0.106.190:36167 was elected leader, sending a full tablet report...
I20250814 01:55:13.145824 6104 consensus_queue.cc:237] T 2e8f8a0334344fb5bdc45d111038bf34 P 31602f7c084149ea9753b2c41a86ea40 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "31602f7c084149ea9753b2c41a86ea40" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 35921 } }
I20250814 01:55:13.147073 6102 ts_tablet_manager.cc:1428] T 2e8f8a0334344fb5bdc45d111038bf34 P 31602f7c084149ea9753b2c41a86ea40: Time spent starting tablet: real 0.034s user 0.022s sys 0.013s
I20250814 01:55:13.158882 5898 catalog_manager.cc:5582] T 2e8f8a0334344fb5bdc45d111038bf34 P 31602f7c084149ea9753b2c41a86ea40 reported cstate change: term changed from 0 to 1, leader changed from <none> to 31602f7c084149ea9753b2c41a86ea40 (127.0.106.129). New cstate: current_term: 1 leader_uuid: "31602f7c084149ea9753b2c41a86ea40" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "31602f7c084149ea9753b2c41a86ea40" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 35921 } health_report { overall_health: HEALTHY } } }
I20250814 01:55:13.180002 426 external_mini_cluster.cc:949] 1 TS(s) registered with all masters
I20250814 01:55:13.182940 426 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 31602f7c084149ea9753b2c41a86ea40 to finish bootstrapping
I20250814 01:55:15.779711 426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu with pid 5958
I20250814 01:55:15.800112 426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu with pid 5865
2025-08-14T01:55:15Z chronyd exiting
[ OK ] ListTableCliSimpleParamTest.TestListTables/2 (6502 ms)
[----------] 1 test from ListTableCliSimpleParamTest (6502 ms total)
[----------] 1 test from ListTableCliParamTest
[ RUN ] ListTableCliParamTest.ListTabletWithPartitionInfo/4
I20250814 01:55:15.850215 426 test_util.cc:276] Using random seed: -1869660210
[ OK ] ListTableCliParamTest.ListTabletWithPartitionInfo/4 (11 ms)
[----------] 1 test from ListTableCliParamTest (11 ms total)
[----------] 1 test from IsSecure/SecureClusterAdminCliParamTest
[ RUN ] IsSecure/SecureClusterAdminCliParamTest.TestRebuildMaster/0
2025-08-14T01:55:15Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-08-14T01:55:15Z Disabled control of system clock
I20250814 01:55:15.897949 426 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
/tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.106.190:40489
--webserver_interface=127.0.106.190
--webserver_port=0
--builtin_ntp_servers=127.0.106.148:43375
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.0.106.190:40489 with env {}
W20250814 01:55:16.188752 6130 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:55:16.189333 6130 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250814 01:55:16.189801 6130 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:55:16.220328 6130 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250814 01:55:16.220635 6130 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250814 01:55:16.220882 6130 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250814 01:55:16.221113 6130 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250814 01:55:16.255826 6130 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:43375
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/master-0/data
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.0.106.190:40489
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.106.190:40489
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/master-0/data/info.pb
--webserver_interface=127.0.106.190
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision 6307ab29b78ebc064971e01ca1da31590298dcda
build type FASTDEBUG
built by None at 14 Aug 2025 01:43:21 UTC on 5fd53c4cbb9d
build id 7557
TSAN enabled
I20250814 01:55:16.257123 6130 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250814 01:55:16.258680 6130 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250814 01:55:16.268303 6136 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:55:16.269802 6137 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:55:17.808781 6130 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.540s user 0.000s sys 0.002s
W20250814 01:55:17.671752 6135 debug-util.cc:398] Leaking SignalData structure 0x7b0800037cc0 after lost signal to thread 6130
W20250814 01:55:17.809381 6130 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.540s user 0.000s sys 0.003s
W20250814 01:55:17.809482 6138 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1538 milliseconds
W20250814 01:55:17.810809 6139 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250814 01:55:17.810819 6130 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250814 01:55:17.813947 6130 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250814 01:55:17.816658 6130 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250814 01:55:17.818080 6130 hybrid_clock.cc:648] HybridClock initialized: now 1755136517818038 us; error 49 us; skew 500 ppm
I20250814 01:55:17.818830 6130 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250814 01:55:17.824903 6130 webserver.cc:480] Webserver started at http://127.0.106.190:38209/ using document root <none> and password file <none>
I20250814 01:55:17.825809 6130 fs_manager.cc:362] Metadata directory not provided
I20250814 01:55:17.826001 6130 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250814 01:55:17.826397 6130 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250814 01:55:17.830647 6130 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/master-0/data/instance:
uuid: "dadad262fa1f4337a6de0379fde22f3c"
format_stamp: "Formatted at 2025-08-14 01:55:17 on dist-test-slave-30wj"
I20250814 01:55:17.831655 6130 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/master-0/wal/instance:
uuid: "dadad262fa1f4337a6de0379fde22f3c"
format_stamp: "Formatted at 2025-08-14 01:55:17 on dist-test-slave-30wj"
I20250814 01:55:17.838449 6130 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.007s sys 0.001s
I20250814 01:55:17.843657 6146 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250814 01:55:17.844655 6130 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.004s sys 0.001s
I20250814 01:55:17.844949 6130 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/master-0/data,/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/master-0/wal
uuid: "dadad262fa1f4337a6de0379fde22f3c"
format_stamp: "Formatted at 2025-08-14 01:55:17 on dist-test-slave-30wj"
I20250814 01:55:17.845269 6130 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/master-0/wal
metadata directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/master-0/wal
1 data directories: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250814 01:55:17.891759 6130 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250814 01:55:17.893224 6130 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250814 01:55:17.893633 6130 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250814 01:55:17.958855 6130 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.190:40489
I20250814 01:55:17.958966 6197 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.190:40489 every 8 connection(s)
I20250814 01:55:17.961447 6130 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/master-0/data/info.pb
I20250814 01:55:17.966590 6198 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250814 01:55:17.967327 426 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu as pid 6130
I20250814 01:55:17.967657 426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/master-0/wal/instance
I20250814 01:55:17.991680 6198 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P dadad262fa1f4337a6de0379fde22f3c: Bootstrap starting.
I20250814 01:55:17.997879 6198 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P dadad262fa1f4337a6de0379fde22f3c: Neither blocks nor log segments found. Creating new log.
I20250814 01:55:17.999622 6198 log.cc:826] T 00000000000000000000000000000000 P dadad262fa1f4337a6de0379fde22f3c: Log is configured to *not* fsync() on all Append() calls
I20250814 01:55:18.004098 6198 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P dadad262fa1f4337a6de0379fde22f3c: No bootstrap required, opened a new log
I20250814 01:55:18.021577 6198 raft_consensus.cc:357] T 00000000000000000000000000000000 P dadad262fa1f4337a6de0379fde22f3c [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "dadad262fa1f4337a6de0379fde22f3c" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 40489 } }
I20250814 01:55:18.022444 6198 raft_consensus.cc:383] T 00000000000000000000000000000000 P dadad262fa1f4337a6de0379fde22f3c [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250814 01:55:18.022756 6198 raft_consensus.cc:738] T 00000000000000000000000000000000 P dadad262fa1f4337a6de0379fde22f3c [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: dadad262fa1f4337a6de0379fde22f3c, State: Initialized, Role: FOLLOWER
I20250814 01:55:18.023499 6198 consensus_queue.cc:260] T 00000000000000000000000000000000 P dadad262fa1f4337a6de0379fde22f3c [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "dadad262fa1f4337a6de0379fde22f3c" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 40489 } }
I20250814 01:55:18.024083 6198 raft_consensus.cc:397] T 00000000000000000000000000000000 P dadad262fa1f4337a6de0379fde22f3c [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250814 01:55:18.024363 6198 raft_consensus.cc:491] T 00000000000000000000000000000000 P dadad262fa1f4337a6de0379fde22f3c [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250814 01:55:18.024685 6198 raft_consensus.cc:3058] T 00000000000000000000000000000000 P dadad262fa1f4337a6de0379fde22f3c [term 0 FOLLOWER]: Advancing to term 1
I20250814 01:55:18.028753 6198 raft_consensus.cc:513] T 00000000000000000000000000000000 P dadad262fa1f4337a6de0379fde22f3c [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "dadad262fa1f4337a6de0379fde22f3c" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 40489 } }
I20250814 01:55:18.029465 6198 leader_election.cc:304] T 00000000000000000000000000000000 P dadad262fa1f4337a6de0379fde22f3c [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: dadad262fa1f4337a6de0379fde22f3c; no voters:
I20250814 01:55:18.031366 6198 leader_election.cc:290] T 00000000000000000000000000000000 P dadad262fa1f4337a6de0379fde22f3c [CANDIDATE]: Term 1 election: Requested vote from peers
I20250814 01:55:18.032059 6203 raft_consensus.cc:2802] T 00000000000000000000000000000000 P dadad262fa1f4337a6de0379fde22f3c [term 1 FOLLOWER]: Leader election won for term 1
I20250814 01:55:18.033962 6203 raft_consensus.cc:695] T 00000000000000000000000000000000 P dadad262fa1f4337a6de0379fde22f3c [term 1 LEADER]: Becoming Leader. State: Replica: dadad262fa1f4337a6de0379fde22f3c, State: Running, Role: LEADER
I20250814 01:55:18.034646 6203 consensus_queue.cc:237] T 00000000000000000000000000000000 P dadad262fa1f4337a6de0379fde22f3c [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "dadad262fa1f4337a6de0379fde22f3c" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 40489 } }
I20250814 01:55:18.035267 6198 sys_catalog.cc:564] T 00000000000000000000000000000000 P dadad262fa1f4337a6de0379fde22f3c [sys.catalog]: configured and running, proceeding with master startup.
I20250814 01:55:18.044297 6204 sys_catalog.cc:455] T 00000000000000000000000000000000 P dadad262fa1f4337a6de0379fde22f3c [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "dadad262fa1f4337a6de0379fde22f3c" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "dadad262fa1f4337a6de0379fde22f3c" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 40489 } } }
I20250814 01:55:18.044471 6205 sys_catalog.cc:455] T 00000000000000000000000000000000 P dadad262fa1f4337a6de0379fde22f3c [sys.catalog]: SysCatalogTable state changed. Reason: New leader dadad262fa1f4337a6de0379fde22f3c. Latest consensus state: current_term: 1 leader_uuid: "dadad262fa1f4337a6de0379fde22f3c" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "dadad262fa1f4337a6de0379fde22f3c" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 40489 } } }
I20250814 01:55:18.044872 6204 sys_catalog.cc:458] T 00000000000000000000000000000000 P dadad262fa1f4337a6de0379fde22f3c [sys.catalog]: This master's current role is: LEADER
I20250814 01:55:18.045081 6205 sys_catalog.cc:458] T 00000000000000000000000000000000 P dadad262fa1f4337a6de0379fde22f3c [sys.catalog]: This master's current role is: LEADER
I20250814 01:55:18.047549 6211 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250814 01:55:18.058172 6211 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250814 01:55:18.074474 6211 catalog_manager.cc:1349] Generated new cluster ID: 83ceeac7bc6340d29f78391780fd07e5
I20250814 01:55:18.074786 6211 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250814 01:55:18.094547 6211 catalog_manager.cc:1372] Generated new certificate authority record
I20250814 01:55:18.096231 6211 catalog_manager.cc:1506] Loading token signing keys...
I20250814 01:55:18.110353 6211 catalog_manager.cc:5955] T 00000000000000000000000000000000 P dadad262fa1f4337a6de0379fde22f3c: Generated new TSK 0
I20250814 01:55:18.111406 6211 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250814 01:55:18.131171 426 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
/tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.129:0
--local_ip_for_outbound_sockets=127.0.106.129
--webserver_interface=127.0.106.129
--webserver_port=0
--tserver_master_addrs=127.0.106.190:40489
--builtin_ntp_servers=127.0.106.148:43375
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
W20250814 01:55:18.428820 6222 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:55:18.429317 6222 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250814 01:55:18.429831 6222 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:55:18.461243 6222 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250814 01:55:18.462118 6222 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.129
I20250814 01:55:18.497236 6222 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:43375
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.129:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-0/data/info.pb
--webserver_interface=127.0.106.129
--webserver_port=0
--tserver_master_addrs=127.0.106.190:40489
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.129
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 6307ab29b78ebc064971e01ca1da31590298dcda
build type FASTDEBUG
built by None at 14 Aug 2025 01:43:21 UTC on 5fd53c4cbb9d
build id 7557
TSAN enabled
I20250814 01:55:18.498520 6222 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250814 01:55:18.500041 6222 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250814 01:55:18.512290 6228 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:55:18.513047 6229 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:55:19.914358 6227 debug-util.cc:398] Leaking SignalData structure 0x7b0800034ea0 after lost signal to thread 6222
W20250814 01:55:20.010257 6227 kernel_stack_watchdog.cc:198] Thread 6222 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:642 for 398ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250814 01:55:20.014458 6222 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.500s user 0.001s sys 0.004s
W20250814 01:55:20.014811 6222 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.501s user 0.001s sys 0.004s
W20250814 01:55:20.015846 6230 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1502 milliseconds
W20250814 01:55:20.016773 6232 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250814 01:55:20.016844 6222 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250814 01:55:20.018285 6222 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250814 01:55:20.020766 6222 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250814 01:55:20.022234 6222 hybrid_clock.cc:648] HybridClock initialized: now 1755136520022179 us; error 44 us; skew 500 ppm
I20250814 01:55:20.023273 6222 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250814 01:55:20.030462 6222 webserver.cc:480] Webserver started at http://127.0.106.129:40379/ using document root <none> and password file <none>
I20250814 01:55:20.031688 6222 fs_manager.cc:362] Metadata directory not provided
I20250814 01:55:20.031947 6222 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250814 01:55:20.032522 6222 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250814 01:55:20.038941 6222 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-0/data/instance:
uuid: "944d756e92dc45f8b62aea14881661f7"
format_stamp: "Formatted at 2025-08-14 01:55:20 on dist-test-slave-30wj"
I20250814 01:55:20.040449 6222 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-0/wal/instance:
uuid: "944d756e92dc45f8b62aea14881661f7"
format_stamp: "Formatted at 2025-08-14 01:55:20 on dist-test-slave-30wj"
I20250814 01:55:20.049475 6222 fs_manager.cc:696] Time spent creating directory manager: real 0.008s user 0.010s sys 0.000s
I20250814 01:55:20.055629 6239 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250814 01:55:20.056710 6222 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.005s sys 0.000s
I20250814 01:55:20.057034 6222 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-0/data,/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-0/wal
uuid: "944d756e92dc45f8b62aea14881661f7"
format_stamp: "Formatted at 2025-08-14 01:55:20 on dist-test-slave-30wj"
I20250814 01:55:20.057353 6222 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-0/wal
metadata directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-0/wal
1 data directories: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250814 01:55:20.108661 6222 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250814 01:55:20.110112 6222 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250814 01:55:20.110536 6222 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250814 01:55:20.113353 6222 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250814 01:55:20.117417 6222 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250814 01:55:20.117607 6222 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250814 01:55:20.117903 6222 ts_tablet_manager.cc:610] Registered 0 tablets
I20250814 01:55:20.118052 6222 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250814 01:55:20.270515 6222 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.129:42025
I20250814 01:55:20.270614 6351 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.129:42025 every 8 connection(s)
I20250814 01:55:20.273097 6222 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-0/data/info.pb
I20250814 01:55:20.282128 426 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu as pid 6222
I20250814 01:55:20.282855 426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-0/wal/instance
I20250814 01:55:20.289884 426 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
/tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-1/wal
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-1/logs
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.130:0
--local_ip_for_outbound_sockets=127.0.106.130
--webserver_interface=127.0.106.130
--webserver_port=0
--tserver_master_addrs=127.0.106.190:40489
--builtin_ntp_servers=127.0.106.148:43375
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
I20250814 01:55:20.295850 6352 heartbeater.cc:344] Connected to a master server at 127.0.106.190:40489
I20250814 01:55:20.296380 6352 heartbeater.cc:461] Registering TS with master...
I20250814 01:55:20.297683 6352 heartbeater.cc:507] Master 127.0.106.190:40489 requested a full tablet report, sending...
I20250814 01:55:20.300407 6163 ts_manager.cc:194] Registered new tserver with Master: 944d756e92dc45f8b62aea14881661f7 (127.0.106.129:42025)
I20250814 01:55:20.302351 6163 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.129:60905
W20250814 01:55:20.587491 6356 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:55:20.587980 6356 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250814 01:55:20.588477 6356 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:55:20.619030 6356 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250814 01:55:20.619865 6356 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.130
I20250814 01:55:20.654028 6356 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:43375
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-1/data
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.130:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-1/data/info.pb
--webserver_interface=127.0.106.130
--webserver_port=0
--tserver_master_addrs=127.0.106.190:40489
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.130
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 6307ab29b78ebc064971e01ca1da31590298dcda
build type FASTDEBUG
built by None at 14 Aug 2025 01:43:21 UTC on 5fd53c4cbb9d
build id 7557
TSAN enabled
I20250814 01:55:20.655390 6356 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250814 01:55:20.656963 6356 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250814 01:55:20.667949 6362 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250814 01:55:21.305598 6352 heartbeater.cc:499] Master 127.0.106.190:40489 was elected leader, sending a full tablet report...
W20250814 01:55:20.669797 6363 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:55:21.756927 6365 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:55:21.759222 6364 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1085 milliseconds
I20250814 01:55:21.759361 6356 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250814 01:55:21.760565 6356 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250814 01:55:21.762624 6356 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250814 01:55:21.763942 6356 hybrid_clock.cc:648] HybridClock initialized: now 1755136521763911 us; error 38 us; skew 500 ppm
I20250814 01:55:21.764683 6356 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250814 01:55:21.770303 6356 webserver.cc:480] Webserver started at http://127.0.106.130:41227/ using document root <none> and password file <none>
I20250814 01:55:21.771164 6356 fs_manager.cc:362] Metadata directory not provided
I20250814 01:55:21.771363 6356 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250814 01:55:21.771785 6356 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250814 01:55:21.776046 6356 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-1/data/instance:
uuid: "6067b6818f90453681dfb46f3d74281c"
format_stamp: "Formatted at 2025-08-14 01:55:21 on dist-test-slave-30wj"
I20250814 01:55:21.777153 6356 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-1/wal/instance:
uuid: "6067b6818f90453681dfb46f3d74281c"
format_stamp: "Formatted at 2025-08-14 01:55:21 on dist-test-slave-30wj"
I20250814 01:55:21.784096 6356 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.005s sys 0.004s
I20250814 01:55:21.789873 6372 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250814 01:55:21.790787 6356 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.003s sys 0.000s
I20250814 01:55:21.791090 6356 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-1/data,/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-1/wal
uuid: "6067b6818f90453681dfb46f3d74281c"
format_stamp: "Formatted at 2025-08-14 01:55:21 on dist-test-slave-30wj"
I20250814 01:55:21.791392 6356 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-1/wal
metadata directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-1/wal
1 data directories: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250814 01:55:21.849968 6356 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250814 01:55:21.851394 6356 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250814 01:55:21.851799 6356 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250814 01:55:21.854219 6356 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250814 01:55:21.858116 6356 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250814 01:55:21.858326 6356 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250814 01:55:21.858575 6356 ts_tablet_manager.cc:610] Registered 0 tablets
I20250814 01:55:21.858747 6356 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250814 01:55:21.984901 6356 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.130:34081
I20250814 01:55:21.984993 6484 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.130:34081 every 8 connection(s)
I20250814 01:55:21.987349 6356 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-1/data/info.pb
I20250814 01:55:21.991638 426 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu as pid 6356
I20250814 01:55:21.992357 426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-1/wal/instance
I20250814 01:55:21.998415 426 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
/tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-2/wal
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-2/logs
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.131:0
--local_ip_for_outbound_sockets=127.0.106.131
--webserver_interface=127.0.106.131
--webserver_port=0
--tserver_master_addrs=127.0.106.190:40489
--builtin_ntp_servers=127.0.106.148:43375
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
I20250814 01:55:22.007169 6485 heartbeater.cc:344] Connected to a master server at 127.0.106.190:40489
I20250814 01:55:22.007652 6485 heartbeater.cc:461] Registering TS with master...
I20250814 01:55:22.008848 6485 heartbeater.cc:507] Master 127.0.106.190:40489 requested a full tablet report, sending...
I20250814 01:55:22.011152 6163 ts_manager.cc:194] Registered new tserver with Master: 6067b6818f90453681dfb46f3d74281c (127.0.106.130:34081)
I20250814 01:55:22.012363 6163 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.130:53471
W20250814 01:55:22.292403 6489 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:55:22.292891 6489 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250814 01:55:22.293373 6489 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:55:22.324362 6489 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250814 01:55:22.325201 6489 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.131
I20250814 01:55:22.359169 6489 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:43375
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-2/data
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.131:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-2/data/info.pb
--webserver_interface=127.0.106.131
--webserver_port=0
--tserver_master_addrs=127.0.106.190:40489
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.131
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 6307ab29b78ebc064971e01ca1da31590298dcda
build type FASTDEBUG
built by None at 14 Aug 2025 01:43:21 UTC on 5fd53c4cbb9d
build id 7557
TSAN enabled
I20250814 01:55:22.360427 6489 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250814 01:55:22.362008 6489 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250814 01:55:22.372771 6495 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:55:22.373894 6496 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250814 01:55:23.015820 6485 heartbeater.cc:499] Master 127.0.106.190:40489 was elected leader, sending a full tablet report...
W20250814 01:55:23.464534 6498 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:55:23.466437 6497 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1088 milliseconds
I20250814 01:55:23.466535 6489 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250814 01:55:23.467721 6489 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250814 01:55:23.469806 6489 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250814 01:55:23.471155 6489 hybrid_clock.cc:648] HybridClock initialized: now 1755136523471141 us; error 55 us; skew 500 ppm
I20250814 01:55:23.471895 6489 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250814 01:55:23.477648 6489 webserver.cc:480] Webserver started at http://127.0.106.131:35655/ using document root <none> and password file <none>
I20250814 01:55:23.478567 6489 fs_manager.cc:362] Metadata directory not provided
I20250814 01:55:23.478752 6489 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250814 01:55:23.479168 6489 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250814 01:55:23.483462 6489 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-2/data/instance:
uuid: "6e96d697f7024f1cb4946b1b06e4f794"
format_stamp: "Formatted at 2025-08-14 01:55:23 on dist-test-slave-30wj"
I20250814 01:55:23.484483 6489 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-2/wal/instance:
uuid: "6e96d697f7024f1cb4946b1b06e4f794"
format_stamp: "Formatted at 2025-08-14 01:55:23 on dist-test-slave-30wj"
I20250814 01:55:23.491223 6489 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.001s sys 0.008s
I20250814 01:55:23.496560 6505 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250814 01:55:23.497515 6489 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.001s sys 0.001s
I20250814 01:55:23.497856 6489 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-2/data,/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-2/wal
uuid: "6e96d697f7024f1cb4946b1b06e4f794"
format_stamp: "Formatted at 2025-08-14 01:55:23 on dist-test-slave-30wj"
I20250814 01:55:23.498186 6489 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-2/wal
metadata directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-2/wal
1 data directories: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250814 01:55:23.544791 6489 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250814 01:55:23.546285 6489 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250814 01:55:23.546707 6489 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250814 01:55:23.549062 6489 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250814 01:55:23.553071 6489 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250814 01:55:23.553278 6489 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250814 01:55:23.553512 6489 ts_tablet_manager.cc:610] Registered 0 tablets
I20250814 01:55:23.553663 6489 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250814 01:55:23.683995 6489 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.131:44083
I20250814 01:55:23.684067 6617 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.131:44083 every 8 connection(s)
I20250814 01:55:23.686633 6489 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-2/data/info.pb
I20250814 01:55:23.695214 426 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu as pid 6489
I20250814 01:55:23.695715 426 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-2/wal/instance
I20250814 01:55:23.711514 6618 heartbeater.cc:344] Connected to a master server at 127.0.106.190:40489
I20250814 01:55:23.711906 6618 heartbeater.cc:461] Registering TS with master...
I20250814 01:55:23.713066 6618 heartbeater.cc:507] Master 127.0.106.190:40489 requested a full tablet report, sending...
I20250814 01:55:23.715251 6162 ts_manager.cc:194] Registered new tserver with Master: 6e96d697f7024f1cb4946b1b06e4f794 (127.0.106.131:44083)
I20250814 01:55:23.716681 6162 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.131:47413
I20250814 01:55:23.729470 426 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250814 01:55:23.757058 426 test_util.cc:276] Using random seed: -1861753371
I20250814 01:55:23.794937 6162 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:56014:
name: "pre_rebuild"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
W20250814 01:55:23.797266 6162 catalog_manager.cc:6944] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table pre_rebuild in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250814 01:55:23.852983 6553 tablet_service.cc:1468] Processing CreateTablet for tablet 01422312e499447c811b10f9c85d8f22 (DEFAULT_TABLE table=pre_rebuild [id=0e64e12309af4f74b621dd4d88a7ebfa]), partition=RANGE (key) PARTITION UNBOUNDED
I20250814 01:55:23.852983 6287 tablet_service.cc:1468] Processing CreateTablet for tablet 01422312e499447c811b10f9c85d8f22 (DEFAULT_TABLE table=pre_rebuild [id=0e64e12309af4f74b621dd4d88a7ebfa]), partition=RANGE (key) PARTITION UNBOUNDED
I20250814 01:55:23.854832 6553 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 01422312e499447c811b10f9c85d8f22. 1 dirs total, 0 dirs full, 0 dirs failed
I20250814 01:55:23.854815 6287 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 01422312e499447c811b10f9c85d8f22. 1 dirs total, 0 dirs full, 0 dirs failed
I20250814 01:55:23.862345 6420 tablet_service.cc:1468] Processing CreateTablet for tablet 01422312e499447c811b10f9c85d8f22 (DEFAULT_TABLE table=pre_rebuild [id=0e64e12309af4f74b621dd4d88a7ebfa]), partition=RANGE (key) PARTITION UNBOUNDED
I20250814 01:55:23.864223 6420 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 01422312e499447c811b10f9c85d8f22. 1 dirs total, 0 dirs full, 0 dirs failed
I20250814 01:55:23.881031 6642 tablet_bootstrap.cc:492] T 01422312e499447c811b10f9c85d8f22 P 944d756e92dc45f8b62aea14881661f7: Bootstrap starting.
I20250814 01:55:23.882079 6643 tablet_bootstrap.cc:492] T 01422312e499447c811b10f9c85d8f22 P 6e96d697f7024f1cb4946b1b06e4f794: Bootstrap starting.
I20250814 01:55:23.888397 6642 tablet_bootstrap.cc:654] T 01422312e499447c811b10f9c85d8f22 P 944d756e92dc45f8b62aea14881661f7: Neither blocks nor log segments found. Creating new log.
I20250814 01:55:23.889258 6643 tablet_bootstrap.cc:654] T 01422312e499447c811b10f9c85d8f22 P 6e96d697f7024f1cb4946b1b06e4f794: Neither blocks nor log segments found. Creating new log.
I20250814 01:55:23.890681 6644 tablet_bootstrap.cc:492] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c: Bootstrap starting.
I20250814 01:55:23.891069 6642 log.cc:826] T 01422312e499447c811b10f9c85d8f22 P 944d756e92dc45f8b62aea14881661f7: Log is configured to *not* fsync() on all Append() calls
I20250814 01:55:23.891491 6643 log.cc:826] T 01422312e499447c811b10f9c85d8f22 P 6e96d697f7024f1cb4946b1b06e4f794: Log is configured to *not* fsync() on all Append() calls
I20250814 01:55:23.896123 6642 tablet_bootstrap.cc:492] T 01422312e499447c811b10f9c85d8f22 P 944d756e92dc45f8b62aea14881661f7: No bootstrap required, opened a new log
I20250814 01:55:23.896525 6642 ts_tablet_manager.cc:1397] T 01422312e499447c811b10f9c85d8f22 P 944d756e92dc45f8b62aea14881661f7: Time spent bootstrapping tablet: real 0.017s user 0.008s sys 0.005s
I20250814 01:55:23.896926 6644 tablet_bootstrap.cc:654] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c: Neither blocks nor log segments found. Creating new log.
I20250814 01:55:23.896939 6643 tablet_bootstrap.cc:492] T 01422312e499447c811b10f9c85d8f22 P 6e96d697f7024f1cb4946b1b06e4f794: No bootstrap required, opened a new log
I20250814 01:55:23.897400 6643 ts_tablet_manager.cc:1397] T 01422312e499447c811b10f9c85d8f22 P 6e96d697f7024f1cb4946b1b06e4f794: Time spent bootstrapping tablet: real 0.016s user 0.009s sys 0.003s
I20250814 01:55:23.899171 6644 log.cc:826] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c: Log is configured to *not* fsync() on all Append() calls
I20250814 01:55:23.904908 6644 tablet_bootstrap.cc:492] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c: No bootstrap required, opened a new log
I20250814 01:55:23.905400 6644 ts_tablet_manager.cc:1397] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c: Time spent bootstrapping tablet: real 0.015s user 0.009s sys 0.003s
I20250814 01:55:23.914876 6642 raft_consensus.cc:357] T 01422312e499447c811b10f9c85d8f22 P 944d756e92dc45f8b62aea14881661f7 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "6e96d697f7024f1cb4946b1b06e4f794" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 44083 } } peers { permanent_uuid: "944d756e92dc45f8b62aea14881661f7" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42025 } } peers { permanent_uuid: "6067b6818f90453681dfb46f3d74281c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34081 } }
I20250814 01:55:23.915848 6642 raft_consensus.cc:383] T 01422312e499447c811b10f9c85d8f22 P 944d756e92dc45f8b62aea14881661f7 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250814 01:55:23.916152 6642 raft_consensus.cc:738] T 01422312e499447c811b10f9c85d8f22 P 944d756e92dc45f8b62aea14881661f7 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 944d756e92dc45f8b62aea14881661f7, State: Initialized, Role: FOLLOWER
I20250814 01:55:23.916992 6642 consensus_queue.cc:260] T 01422312e499447c811b10f9c85d8f22 P 944d756e92dc45f8b62aea14881661f7 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "6e96d697f7024f1cb4946b1b06e4f794" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 44083 } } peers { permanent_uuid: "944d756e92dc45f8b62aea14881661f7" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42025 } } peers { permanent_uuid: "6067b6818f90453681dfb46f3d74281c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34081 } }
I20250814 01:55:23.920897 6642 ts_tablet_manager.cc:1428] T 01422312e499447c811b10f9c85d8f22 P 944d756e92dc45f8b62aea14881661f7: Time spent starting tablet: real 0.024s user 0.021s sys 0.004s
I20250814 01:55:23.922410 6643 raft_consensus.cc:357] T 01422312e499447c811b10f9c85d8f22 P 6e96d697f7024f1cb4946b1b06e4f794 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "6e96d697f7024f1cb4946b1b06e4f794" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 44083 } } peers { permanent_uuid: "944d756e92dc45f8b62aea14881661f7" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42025 } } peers { permanent_uuid: "6067b6818f90453681dfb46f3d74281c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34081 } }
I20250814 01:55:23.923411 6643 raft_consensus.cc:383] T 01422312e499447c811b10f9c85d8f22 P 6e96d697f7024f1cb4946b1b06e4f794 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250814 01:55:23.923695 6643 raft_consensus.cc:738] T 01422312e499447c811b10f9c85d8f22 P 6e96d697f7024f1cb4946b1b06e4f794 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 6e96d697f7024f1cb4946b1b06e4f794, State: Initialized, Role: FOLLOWER
I20250814 01:55:23.924523 6643 consensus_queue.cc:260] T 01422312e499447c811b10f9c85d8f22 P 6e96d697f7024f1cb4946b1b06e4f794 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "6e96d697f7024f1cb4946b1b06e4f794" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 44083 } } peers { permanent_uuid: "944d756e92dc45f8b62aea14881661f7" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42025 } } peers { permanent_uuid: "6067b6818f90453681dfb46f3d74281c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34081 } }
I20250814 01:55:23.927542 6618 heartbeater.cc:499] Master 127.0.106.190:40489 was elected leader, sending a full tablet report...
I20250814 01:55:23.928880 6643 ts_tablet_manager.cc:1428] T 01422312e499447c811b10f9c85d8f22 P 6e96d697f7024f1cb4946b1b06e4f794: Time spent starting tablet: real 0.031s user 0.032s sys 0.000s
I20250814 01:55:23.930889 6644 raft_consensus.cc:357] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "6e96d697f7024f1cb4946b1b06e4f794" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 44083 } } peers { permanent_uuid: "944d756e92dc45f8b62aea14881661f7" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42025 } } peers { permanent_uuid: "6067b6818f90453681dfb46f3d74281c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34081 } }
I20250814 01:55:23.931972 6644 raft_consensus.cc:383] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250814 01:55:23.932307 6644 raft_consensus.cc:738] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 6067b6818f90453681dfb46f3d74281c, State: Initialized, Role: FOLLOWER
I20250814 01:55:23.933202 6644 consensus_queue.cc:260] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "6e96d697f7024f1cb4946b1b06e4f794" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 44083 } } peers { permanent_uuid: "944d756e92dc45f8b62aea14881661f7" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42025 } } peers { permanent_uuid: "6067b6818f90453681dfb46f3d74281c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34081 } }
I20250814 01:55:23.937019 6644 ts_tablet_manager.cc:1428] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c: Time spent starting tablet: real 0.031s user 0.031s sys 0.001s
W20250814 01:55:23.940721 6619 tablet.cc:2378] T 01422312e499447c811b10f9c85d8f22 P 6e96d697f7024f1cb4946b1b06e4f794: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250814 01:55:23.996079 6486 tablet.cc:2378] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250814 01:55:24.031229 6353 tablet.cc:2378] T 01422312e499447c811b10f9c85d8f22 P 944d756e92dc45f8b62aea14881661f7: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250814 01:55:24.237668 6649 raft_consensus.cc:491] T 01422312e499447c811b10f9c85d8f22 P 6e96d697f7024f1cb4946b1b06e4f794 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250814 01:55:24.238183 6649 raft_consensus.cc:513] T 01422312e499447c811b10f9c85d8f22 P 6e96d697f7024f1cb4946b1b06e4f794 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "6e96d697f7024f1cb4946b1b06e4f794" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 44083 } } peers { permanent_uuid: "944d756e92dc45f8b62aea14881661f7" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42025 } } peers { permanent_uuid: "6067b6818f90453681dfb46f3d74281c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34081 } }
I20250814 01:55:24.240464 6649 leader_election.cc:290] T 01422312e499447c811b10f9c85d8f22 P 6e96d697f7024f1cb4946b1b06e4f794 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 944d756e92dc45f8b62aea14881661f7 (127.0.106.129:42025), 6067b6818f90453681dfb46f3d74281c (127.0.106.130:34081)
I20250814 01:55:24.250012 6307 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "01422312e499447c811b10f9c85d8f22" candidate_uuid: "6e96d697f7024f1cb4946b1b06e4f794" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "944d756e92dc45f8b62aea14881661f7" is_pre_election: true
I20250814 01:55:24.250757 6307 raft_consensus.cc:2466] T 01422312e499447c811b10f9c85d8f22 P 944d756e92dc45f8b62aea14881661f7 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 6e96d697f7024f1cb4946b1b06e4f794 in term 0.
I20250814 01:55:24.251243 6440 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "01422312e499447c811b10f9c85d8f22" candidate_uuid: "6e96d697f7024f1cb4946b1b06e4f794" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "6067b6818f90453681dfb46f3d74281c" is_pre_election: true
I20250814 01:55:24.251968 6440 raft_consensus.cc:2466] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 6e96d697f7024f1cb4946b1b06e4f794 in term 0.
I20250814 01:55:24.251978 6508 leader_election.cc:304] T 01422312e499447c811b10f9c85d8f22 P 6e96d697f7024f1cb4946b1b06e4f794 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 6e96d697f7024f1cb4946b1b06e4f794, 944d756e92dc45f8b62aea14881661f7; no voters:
I20250814 01:55:24.252637 6649 raft_consensus.cc:2802] T 01422312e499447c811b10f9c85d8f22 P 6e96d697f7024f1cb4946b1b06e4f794 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250814 01:55:24.252900 6649 raft_consensus.cc:491] T 01422312e499447c811b10f9c85d8f22 P 6e96d697f7024f1cb4946b1b06e4f794 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250814 01:55:24.253156 6649 raft_consensus.cc:3058] T 01422312e499447c811b10f9c85d8f22 P 6e96d697f7024f1cb4946b1b06e4f794 [term 0 FOLLOWER]: Advancing to term 1
I20250814 01:55:24.257297 6649 raft_consensus.cc:513] T 01422312e499447c811b10f9c85d8f22 P 6e96d697f7024f1cb4946b1b06e4f794 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "6e96d697f7024f1cb4946b1b06e4f794" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 44083 } } peers { permanent_uuid: "944d756e92dc45f8b62aea14881661f7" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42025 } } peers { permanent_uuid: "6067b6818f90453681dfb46f3d74281c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34081 } }
I20250814 01:55:24.258656 6649 leader_election.cc:290] T 01422312e499447c811b10f9c85d8f22 P 6e96d697f7024f1cb4946b1b06e4f794 [CANDIDATE]: Term 1 election: Requested vote from peers 944d756e92dc45f8b62aea14881661f7 (127.0.106.129:42025), 6067b6818f90453681dfb46f3d74281c (127.0.106.130:34081)
I20250814 01:55:24.259397 6307 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "01422312e499447c811b10f9c85d8f22" candidate_uuid: "6e96d697f7024f1cb4946b1b06e4f794" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "944d756e92dc45f8b62aea14881661f7"
I20250814 01:55:24.259498 6440 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "01422312e499447c811b10f9c85d8f22" candidate_uuid: "6e96d697f7024f1cb4946b1b06e4f794" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "6067b6818f90453681dfb46f3d74281c"
I20250814 01:55:24.259788 6307 raft_consensus.cc:3058] T 01422312e499447c811b10f9c85d8f22 P 944d756e92dc45f8b62aea14881661f7 [term 0 FOLLOWER]: Advancing to term 1
I20250814 01:55:24.259893 6440 raft_consensus.cc:3058] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c [term 0 FOLLOWER]: Advancing to term 1
I20250814 01:55:24.264112 6307 raft_consensus.cc:2466] T 01422312e499447c811b10f9c85d8f22 P 944d756e92dc45f8b62aea14881661f7 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 6e96d697f7024f1cb4946b1b06e4f794 in term 1.
I20250814 01:55:24.264114 6440 raft_consensus.cc:2466] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 6e96d697f7024f1cb4946b1b06e4f794 in term 1.
I20250814 01:55:24.264900 6508 leader_election.cc:304] T 01422312e499447c811b10f9c85d8f22 P 6e96d697f7024f1cb4946b1b06e4f794 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 6e96d697f7024f1cb4946b1b06e4f794, 944d756e92dc45f8b62aea14881661f7; no voters:
I20250814 01:55:24.265467 6649 raft_consensus.cc:2802] T 01422312e499447c811b10f9c85d8f22 P 6e96d697f7024f1cb4946b1b06e4f794 [term 1 FOLLOWER]: Leader election won for term 1
I20250814 01:55:24.266866 6649 raft_consensus.cc:695] T 01422312e499447c811b10f9c85d8f22 P 6e96d697f7024f1cb4946b1b06e4f794 [term 1 LEADER]: Becoming Leader. State: Replica: 6e96d697f7024f1cb4946b1b06e4f794, State: Running, Role: LEADER
I20250814 01:55:24.267686 6649 consensus_queue.cc:237] T 01422312e499447c811b10f9c85d8f22 P 6e96d697f7024f1cb4946b1b06e4f794 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "6e96d697f7024f1cb4946b1b06e4f794" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 44083 } } peers { permanent_uuid: "944d756e92dc45f8b62aea14881661f7" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42025 } } peers { permanent_uuid: "6067b6818f90453681dfb46f3d74281c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34081 } }
I20250814 01:55:24.278959 6162 catalog_manager.cc:5582] T 01422312e499447c811b10f9c85d8f22 P 6e96d697f7024f1cb4946b1b06e4f794 reported cstate change: term changed from 0 to 1, leader changed from <none> to 6e96d697f7024f1cb4946b1b06e4f794 (127.0.106.131). New cstate: current_term: 1 leader_uuid: "6e96d697f7024f1cb4946b1b06e4f794" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "6e96d697f7024f1cb4946b1b06e4f794" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 44083 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "944d756e92dc45f8b62aea14881661f7" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42025 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "6067b6818f90453681dfb46f3d74281c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34081 } health_report { overall_health: UNKNOWN } } }
I20250814 01:55:24.445195 6307 raft_consensus.cc:1273] T 01422312e499447c811b10f9c85d8f22 P 944d756e92dc45f8b62aea14881661f7 [term 1 FOLLOWER]: Refusing update from remote peer 6e96d697f7024f1cb4946b1b06e4f794: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250814 01:55:24.445276 6440 raft_consensus.cc:1273] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c [term 1 FOLLOWER]: Refusing update from remote peer 6e96d697f7024f1cb4946b1b06e4f794: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250814 01:55:24.446552 6653 consensus_queue.cc:1035] T 01422312e499447c811b10f9c85d8f22 P 6e96d697f7024f1cb4946b1b06e4f794 [LEADER]: Connected to new peer: Peer: permanent_uuid: "944d756e92dc45f8b62aea14881661f7" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42025 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.001s
I20250814 01:55:24.447400 6649 consensus_queue.cc:1035] T 01422312e499447c811b10f9c85d8f22 P 6e96d697f7024f1cb4946b1b06e4f794 [LEADER]: Connected to new peer: Peer: permanent_uuid: "6067b6818f90453681dfb46f3d74281c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34081 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250814 01:55:24.469504 6662 mvcc.cc:204] Tried to move back new op lower bound from 7189039204112146432 to 7189039203407007744. Current Snapshot: MvccSnapshot[applied={T|T < 7189039204112146432}]
I20250814 01:55:24.471776 6666 mvcc.cc:204] Tried to move back new op lower bound from 7189039204112146432 to 7189039203407007744. Current Snapshot: MvccSnapshot[applied={T|T < 7189039204112146432}]
I20250814 01:55:29.404336 426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu with pid 6130
W20250814 01:55:29.524574 6618 heartbeater.cc:646] Failed to heartbeat to 127.0.106.190:40489 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.0.106.190:40489: connect: Connection refused (error 111)
W20250814 01:55:29.554061 6485 heartbeater.cc:646] Failed to heartbeat to 127.0.106.190:40489 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.0.106.190:40489: connect: Connection refused (error 111)
W20250814 01:55:29.607981 6352 heartbeater.cc:646] Failed to heartbeat to 127.0.106.190:40489 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.0.106.190:40489: connect: Connection refused (error 111)
W20250814 01:55:29.760074 6699 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:55:29.760871 6699 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:55:29.817056 6699 flags.cc:425] Enabled experimental flag: --enable_multi_tenancy=false
W20250814 01:55:31.004439 6699 thread.cc:641] rpc reactor (reactor) Time spent creating pthread: real 1.138s user 0.425s sys 0.708s
W20250814 01:55:31.004848 6699 thread.cc:608] rpc reactor (reactor) Time spent starting thread: real 1.139s user 0.426s sys 0.710s
I20250814 01:55:31.141155 6699 minidump.cc:252] Setting minidump size limit to 20M
I20250814 01:55:31.144187 6699 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250814 01:55:31.145272 6699 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250814 01:55:31.155712 6733 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:55:31.156181 6734 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250814 01:55:31.294026 6699 server_base.cc:1047] running on GCE node
W20250814 01:55:31.295392 6736 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250814 01:55:31.296381 6699 hybrid_clock.cc:584] initializing the hybrid clock with 'system' time source
I20250814 01:55:31.296845 6699 hybrid_clock.cc:648] HybridClock initialized: now 1755136531296821 us; error 121516 us; skew 500 ppm
I20250814 01:55:31.297528 6699 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250814 01:55:31.302124 6699 webserver.cc:480] Webserver started at http://0.0.0.0:38279/ using document root <none> and password file <none>
I20250814 01:55:31.302922 6699 fs_manager.cc:362] Metadata directory not provided
I20250814 01:55:31.303150 6699 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250814 01:55:31.303568 6699 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250814 01:55:31.307672 6699 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/master-0/data/instance:
uuid: "60d4ddac349a4cb9a629c053710f479a"
format_stamp: "Formatted at 2025-08-14 01:55:31 on dist-test-slave-30wj"
I20250814 01:55:31.308712 6699 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/master-0/wal/instance:
uuid: "60d4ddac349a4cb9a629c053710f479a"
format_stamp: "Formatted at 2025-08-14 01:55:31 on dist-test-slave-30wj"
I20250814 01:55:31.314528 6699 fs_manager.cc:696] Time spent creating directory manager: real 0.005s user 0.006s sys 0.002s
I20250814 01:55:31.319124 6741 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250814 01:55:31.319999 6699 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.002s sys 0.002s
I20250814 01:55:31.320290 6699 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/master-0/data,/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/master-0/wal
uuid: "60d4ddac349a4cb9a629c053710f479a"
format_stamp: "Formatted at 2025-08-14 01:55:31 on dist-test-slave-30wj"
I20250814 01:55:31.320585 6699 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/master-0/wal
metadata directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/master-0/wal
1 data directories: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250814 01:55:31.348407 6699 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250814 01:55:31.349754 6699 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250814 01:55:31.350179 6699 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250814 01:55:31.354913 6699 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250814 01:55:31.368280 6699 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 60d4ddac349a4cb9a629c053710f479a: Bootstrap starting.
I20250814 01:55:31.372852 6699 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 60d4ddac349a4cb9a629c053710f479a: Neither blocks nor log segments found. Creating new log.
I20250814 01:55:31.374457 6699 log.cc:826] T 00000000000000000000000000000000 P 60d4ddac349a4cb9a629c053710f479a: Log is configured to *not* fsync() on all Append() calls
I20250814 01:55:31.378219 6699 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 60d4ddac349a4cb9a629c053710f479a: No bootstrap required, opened a new log
I20250814 01:55:31.393473 6699 raft_consensus.cc:357] T 00000000000000000000000000000000 P 60d4ddac349a4cb9a629c053710f479a [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "60d4ddac349a4cb9a629c053710f479a" member_type: VOTER }
I20250814 01:55:31.393956 6699 raft_consensus.cc:383] T 00000000000000000000000000000000 P 60d4ddac349a4cb9a629c053710f479a [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250814 01:55:31.394168 6699 raft_consensus.cc:738] T 00000000000000000000000000000000 P 60d4ddac349a4cb9a629c053710f479a [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 60d4ddac349a4cb9a629c053710f479a, State: Initialized, Role: FOLLOWER
I20250814 01:55:31.394816 6699 consensus_queue.cc:260] T 00000000000000000000000000000000 P 60d4ddac349a4cb9a629c053710f479a [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "60d4ddac349a4cb9a629c053710f479a" member_type: VOTER }
I20250814 01:55:31.395269 6699 raft_consensus.cc:397] T 00000000000000000000000000000000 P 60d4ddac349a4cb9a629c053710f479a [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250814 01:55:31.395504 6699 raft_consensus.cc:491] T 00000000000000000000000000000000 P 60d4ddac349a4cb9a629c053710f479a [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250814 01:55:31.395776 6699 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 60d4ddac349a4cb9a629c053710f479a [term 0 FOLLOWER]: Advancing to term 1
I20250814 01:55:31.399436 6699 raft_consensus.cc:513] T 00000000000000000000000000000000 P 60d4ddac349a4cb9a629c053710f479a [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "60d4ddac349a4cb9a629c053710f479a" member_type: VOTER }
I20250814 01:55:31.400027 6699 leader_election.cc:304] T 00000000000000000000000000000000 P 60d4ddac349a4cb9a629c053710f479a [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 60d4ddac349a4cb9a629c053710f479a; no voters:
I20250814 01:55:31.401597 6699 leader_election.cc:290] T 00000000000000000000000000000000 P 60d4ddac349a4cb9a629c053710f479a [CANDIDATE]: Term 1 election: Requested vote from peers
I20250814 01:55:31.401844 6748 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 60d4ddac349a4cb9a629c053710f479a [term 1 FOLLOWER]: Leader election won for term 1
I20250814 01:55:31.404119 6748 raft_consensus.cc:695] T 00000000000000000000000000000000 P 60d4ddac349a4cb9a629c053710f479a [term 1 LEADER]: Becoming Leader. State: Replica: 60d4ddac349a4cb9a629c053710f479a, State: Running, Role: LEADER
I20250814 01:55:31.404970 6748 consensus_queue.cc:237] T 00000000000000000000000000000000 P 60d4ddac349a4cb9a629c053710f479a [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "60d4ddac349a4cb9a629c053710f479a" member_type: VOTER }
I20250814 01:55:31.410271 6750 sys_catalog.cc:455] T 00000000000000000000000000000000 P 60d4ddac349a4cb9a629c053710f479a [sys.catalog]: SysCatalogTable state changed. Reason: New leader 60d4ddac349a4cb9a629c053710f479a. Latest consensus state: current_term: 1 leader_uuid: "60d4ddac349a4cb9a629c053710f479a" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "60d4ddac349a4cb9a629c053710f479a" member_type: VOTER } }
I20250814 01:55:31.410951 6750 sys_catalog.cc:458] T 00000000000000000000000000000000 P 60d4ddac349a4cb9a629c053710f479a [sys.catalog]: This master's current role is: LEADER
I20250814 01:55:31.411650 6749 sys_catalog.cc:455] T 00000000000000000000000000000000 P 60d4ddac349a4cb9a629c053710f479a [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "60d4ddac349a4cb9a629c053710f479a" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "60d4ddac349a4cb9a629c053710f479a" member_type: VOTER } }
I20250814 01:55:31.412163 6749 sys_catalog.cc:458] T 00000000000000000000000000000000 P 60d4ddac349a4cb9a629c053710f479a [sys.catalog]: This master's current role is: LEADER
I20250814 01:55:31.422326 6699 tablet_replica.cc:331] T 00000000000000000000000000000000 P 60d4ddac349a4cb9a629c053710f479a: stopping tablet replica
I20250814 01:55:31.423035 6699 raft_consensus.cc:2241] T 00000000000000000000000000000000 P 60d4ddac349a4cb9a629c053710f479a [term 1 LEADER]: Raft consensus shutting down.
I20250814 01:55:31.423408 6699 raft_consensus.cc:2270] T 00000000000000000000000000000000 P 60d4ddac349a4cb9a629c053710f479a [term 1 FOLLOWER]: Raft consensus is shut down!
I20250814 01:55:31.425261 6699 master.cc:561] Master@0.0.0.0:7051 shutting down...
W20250814 01:55:31.425618 6699 acceptor_pool.cc:196] Could not shut down acceptor socket on 0.0.0.0:7051: Network error: shutdown error: Transport endpoint is not connected (error 107)
I20250814 01:55:31.449848 6699 master.cc:583] Master@0.0.0.0:7051 shutdown complete.
I20250814 01:55:32.476953 426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu with pid 6222
I20250814 01:55:32.503551 426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu with pid 6356
W20250814 01:55:32.529695 6506 proxy.cc:239] Call had error, refreshing address and retrying: Network error: recv got EOF from 127.0.106.130:34081 (error 108)
I20250814 01:55:32.529913 426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu with pid 6489
I20250814 01:55:32.563077 426 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
/tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.106.190:40489
--webserver_interface=127.0.106.190
--webserver_port=38209
--builtin_ntp_servers=127.0.106.148:43375
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.0.106.190:40489 with env {}
W20250814 01:55:32.857128 6758 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:55:32.857690 6758 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250814 01:55:32.858161 6758 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:55:32.888692 6758 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250814 01:55:32.889011 6758 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250814 01:55:32.889266 6758 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250814 01:55:32.889493 6758 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250814 01:55:32.923765 6758 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:43375
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/master-0/data
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.0.106.190:40489
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.0.106.190:40489
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/master-0/data/info.pb
--webserver_interface=127.0.106.190
--webserver_port=38209
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision 6307ab29b78ebc064971e01ca1da31590298dcda
build type FASTDEBUG
built by None at 14 Aug 2025 01:43:21 UTC on 5fd53c4cbb9d
build id 7557
TSAN enabled
I20250814 01:55:32.925026 6758 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250814 01:55:32.926578 6758 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250814 01:55:32.936076 6764 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:55:32.937131 6765 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:55:34.042403 6758 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.105s user 0.363s sys 0.733s
W20250814 01:55:34.042462 6766 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1103 milliseconds
W20250814 01:55:34.042858 6758 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.106s user 0.364s sys 0.733s
W20250814 01:55:34.042976 6767 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250814 01:55:34.043221 6758 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250814 01:55:34.044683 6758 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250814 01:55:34.047807 6758 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250814 01:55:34.049428 6758 hybrid_clock.cc:648] HybridClock initialized: now 1755136534049351 us; error 30 us; skew 500 ppm
I20250814 01:55:34.050622 6758 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250814 01:55:34.058954 6758 webserver.cc:480] Webserver started at http://127.0.106.190:38209/ using document root <none> and password file <none>
I20250814 01:55:34.060294 6758 fs_manager.cc:362] Metadata directory not provided
I20250814 01:55:34.060592 6758 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250814 01:55:34.071097 6758 fs_manager.cc:714] Time spent opening directory manager: real 0.006s user 0.008s sys 0.000s
I20250814 01:55:34.076776 6774 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250814 01:55:34.078073 6758 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.000s
I20250814 01:55:34.078450 6758 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/master-0/data,/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/master-0/wal
uuid: "60d4ddac349a4cb9a629c053710f479a"
format_stamp: "Formatted at 2025-08-14 01:55:31 on dist-test-slave-30wj"
I20250814 01:55:34.081068 6758 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/master-0/wal
metadata directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/master-0/wal
1 data directories: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250814 01:55:34.177959 6758 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250814 01:55:34.179376 6758 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250814 01:55:34.179769 6758 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250814 01:55:34.245131 6758 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.190:40489
I20250814 01:55:34.245178 6825 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.190:40489 every 8 connection(s)
I20250814 01:55:34.247905 6758 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/master-0/data/info.pb
I20250814 01:55:34.250372 426 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu as pid 6758
I20250814 01:55:34.252024 426 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
/tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.129:42025
--local_ip_for_outbound_sockets=127.0.106.129
--tserver_master_addrs=127.0.106.190:40489
--webserver_port=40379
--webserver_interface=127.0.106.129
--builtin_ntp_servers=127.0.106.148:43375
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
I20250814 01:55:34.257827 6826 sys_catalog.cc:263] Verifying existing consensus state
I20250814 01:55:34.269654 6826 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 60d4ddac349a4cb9a629c053710f479a: Bootstrap starting.
I20250814 01:55:34.279316 6826 log.cc:826] T 00000000000000000000000000000000 P 60d4ddac349a4cb9a629c053710f479a: Log is configured to *not* fsync() on all Append() calls
I20250814 01:55:34.290805 6826 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 60d4ddac349a4cb9a629c053710f479a: Bootstrap replayed 1/1 log segments. Stats: ops{read=2 overwritten=0 applied=2 ignored=0} inserts{seen=2 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250814 01:55:34.291533 6826 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 60d4ddac349a4cb9a629c053710f479a: Bootstrap complete.
I20250814 01:55:34.311702 6826 raft_consensus.cc:357] T 00000000000000000000000000000000 P 60d4ddac349a4cb9a629c053710f479a [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "60d4ddac349a4cb9a629c053710f479a" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 40489 } }
I20250814 01:55:34.312369 6826 raft_consensus.cc:738] T 00000000000000000000000000000000 P 60d4ddac349a4cb9a629c053710f479a [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 60d4ddac349a4cb9a629c053710f479a, State: Initialized, Role: FOLLOWER
I20250814 01:55:34.313180 6826 consensus_queue.cc:260] T 00000000000000000000000000000000 P 60d4ddac349a4cb9a629c053710f479a [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 2, Last appended: 1.2, Last appended by leader: 2, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "60d4ddac349a4cb9a629c053710f479a" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 40489 } }
I20250814 01:55:34.313663 6826 raft_consensus.cc:397] T 00000000000000000000000000000000 P 60d4ddac349a4cb9a629c053710f479a [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250814 01:55:34.313936 6826 raft_consensus.cc:491] T 00000000000000000000000000000000 P 60d4ddac349a4cb9a629c053710f479a [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250814 01:55:34.314234 6826 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 60d4ddac349a4cb9a629c053710f479a [term 1 FOLLOWER]: Advancing to term 2
I20250814 01:55:34.318145 6826 raft_consensus.cc:513] T 00000000000000000000000000000000 P 60d4ddac349a4cb9a629c053710f479a [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "60d4ddac349a4cb9a629c053710f479a" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 40489 } }
I20250814 01:55:34.318768 6826 leader_election.cc:304] T 00000000000000000000000000000000 P 60d4ddac349a4cb9a629c053710f479a [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 60d4ddac349a4cb9a629c053710f479a; no voters:
I20250814 01:55:34.320789 6826 leader_election.cc:290] T 00000000000000000000000000000000 P 60d4ddac349a4cb9a629c053710f479a [CANDIDATE]: Term 2 election: Requested vote from peers
I20250814 01:55:34.321337 6830 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 60d4ddac349a4cb9a629c053710f479a [term 2 FOLLOWER]: Leader election won for term 2
I20250814 01:55:34.324234 6830 raft_consensus.cc:695] T 00000000000000000000000000000000 P 60d4ddac349a4cb9a629c053710f479a [term 2 LEADER]: Becoming Leader. State: Replica: 60d4ddac349a4cb9a629c053710f479a, State: Running, Role: LEADER
I20250814 01:55:34.325112 6830 consensus_queue.cc:237] T 00000000000000000000000000000000 P 60d4ddac349a4cb9a629c053710f479a [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 2, Committed index: 2, Last appended: 1.2, Last appended by leader: 2, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "60d4ddac349a4cb9a629c053710f479a" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 40489 } }
I20250814 01:55:34.325804 6826 sys_catalog.cc:564] T 00000000000000000000000000000000 P 60d4ddac349a4cb9a629c053710f479a [sys.catalog]: configured and running, proceeding with master startup.
I20250814 01:55:34.332121 6832 sys_catalog.cc:455] T 00000000000000000000000000000000 P 60d4ddac349a4cb9a629c053710f479a [sys.catalog]: SysCatalogTable state changed. Reason: New leader 60d4ddac349a4cb9a629c053710f479a. Latest consensus state: current_term: 2 leader_uuid: "60d4ddac349a4cb9a629c053710f479a" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "60d4ddac349a4cb9a629c053710f479a" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 40489 } } }
I20250814 01:55:34.333452 6831 sys_catalog.cc:455] T 00000000000000000000000000000000 P 60d4ddac349a4cb9a629c053710f479a [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 2 leader_uuid: "60d4ddac349a4cb9a629c053710f479a" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "60d4ddac349a4cb9a629c053710f479a" member_type: VOTER last_known_addr { host: "127.0.106.190" port: 40489 } } }
I20250814 01:55:34.334124 6831 sys_catalog.cc:458] T 00000000000000000000000000000000 P 60d4ddac349a4cb9a629c053710f479a [sys.catalog]: This master's current role is: LEADER
I20250814 01:55:34.335908 6832 sys_catalog.cc:458] T 00000000000000000000000000000000 P 60d4ddac349a4cb9a629c053710f479a [sys.catalog]: This master's current role is: LEADER
I20250814 01:55:34.348095 6837 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250814 01:55:34.359112 6837 catalog_manager.cc:671] Loaded metadata for table pre_rebuild [id=e0381d252f944a4cba0911ad55982c41]
I20250814 01:55:34.366400 6837 tablet_loader.cc:96] loaded metadata for tablet 01422312e499447c811b10f9c85d8f22 (table pre_rebuild [id=e0381d252f944a4cba0911ad55982c41])
I20250814 01:55:34.368045 6837 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250814 01:55:34.391659 6837 catalog_manager.cc:1349] Generated new cluster ID: 7f02e38de01c4862afc649a10147e218
I20250814 01:55:34.392002 6837 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250814 01:55:34.403048 6848 catalog_manager.cc:797] Waiting for catalog manager background task thread to start: Service unavailable: Catalog manager is not initialized. State: Starting
I20250814 01:55:34.426421 6837 catalog_manager.cc:1372] Generated new certificate authority record
I20250814 01:55:34.428133 6837 catalog_manager.cc:1506] Loading token signing keys...
I20250814 01:55:34.440822 6837 catalog_manager.cc:5955] T 00000000000000000000000000000000 P 60d4ddac349a4cb9a629c053710f479a: Generated new TSK 0
I20250814 01:55:34.441884 6837 catalog_manager.cc:1516] Initializing in-progress tserver states...
W20250814 01:55:34.590324 6828 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:55:34.590798 6828 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250814 01:55:34.591295 6828 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:55:34.621688 6828 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250814 01:55:34.622627 6828 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.129
I20250814 01:55:34.657366 6828 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:43375
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.129:42025
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-0/data/info.pb
--webserver_interface=127.0.106.129
--webserver_port=40379
--tserver_master_addrs=127.0.106.190:40489
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.129
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 6307ab29b78ebc064971e01ca1da31590298dcda
build type FASTDEBUG
built by None at 14 Aug 2025 01:43:21 UTC on 5fd53c4cbb9d
build id 7557
TSAN enabled
I20250814 01:55:34.658680 6828 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250814 01:55:34.660588 6828 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250814 01:55:34.676785 6854 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:55:34.678493 6855 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:55:36.000833 6828 thread.cc:641] GCE (cloud detector) Time spent creating pthread: real 1.327s user 0.541s sys 0.780s
W20250814 01:55:36.001217 6828 thread.cc:608] GCE (cloud detector) Time spent starting thread: real 1.327s user 0.542s sys 0.780s
I20250814 01:55:36.003453 6828 server_base.cc:1047] running on GCE node
W20250814 01:55:36.005123 6859 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250814 01:55:36.006518 6828 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250814 01:55:36.009138 6828 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250814 01:55:36.010622 6828 hybrid_clock.cc:648] HybridClock initialized: now 1755136536010546 us; error 74 us; skew 500 ppm
I20250814 01:55:36.011672 6828 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250814 01:55:36.020066 6828 webserver.cc:480] Webserver started at http://127.0.106.129:40379/ using document root <none> and password file <none>
I20250814 01:55:36.021337 6828 fs_manager.cc:362] Metadata directory not provided
I20250814 01:55:36.021623 6828 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250814 01:55:36.032759 6828 fs_manager.cc:714] Time spent opening directory manager: real 0.007s user 0.005s sys 0.001s
I20250814 01:55:36.038914 6864 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250814 01:55:36.040192 6828 fs_manager.cc:730] Time spent opening block manager: real 0.005s user 0.002s sys 0.002s
I20250814 01:55:36.040632 6828 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-0/data,/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-0/wal
uuid: "944d756e92dc45f8b62aea14881661f7"
format_stamp: "Formatted at 2025-08-14 01:55:20 on dist-test-slave-30wj"
I20250814 01:55:36.043447 6828 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-0/wal
metadata directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-0/wal
1 data directories: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250814 01:55:36.129590 6828 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250814 01:55:36.131021 6828 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250814 01:55:36.131436 6828 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250814 01:55:36.133996 6828 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250814 01:55:36.139680 6871 ts_tablet_manager.cc:542] Loading tablet metadata (0/1 complete)
I20250814 01:55:36.150354 6828 ts_tablet_manager.cc:579] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20250814 01:55:36.150583 6828 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.012s user 0.000s sys 0.002s
I20250814 01:55:36.150839 6828 ts_tablet_manager.cc:594] Registering tablets (0/1 complete)
I20250814 01:55:36.155483 6828 ts_tablet_manager.cc:610] Registered 1 tablets
I20250814 01:55:36.155665 6828 ts_tablet_manager.cc:589] Time spent register tablets: real 0.005s user 0.003s sys 0.001s
I20250814 01:55:36.156116 6871 tablet_bootstrap.cc:492] T 01422312e499447c811b10f9c85d8f22 P 944d756e92dc45f8b62aea14881661f7: Bootstrap starting.
I20250814 01:55:36.341681 6828 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.129:42025
I20250814 01:55:36.342020 6977 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.129:42025 every 8 connection(s)
I20250814 01:55:36.344336 6828 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-0/data/info.pb
I20250814 01:55:36.348796 426 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu as pid 6828
I20250814 01:55:36.350639 426 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
/tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-1/wal
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-1/logs
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.130:34081
--local_ip_for_outbound_sockets=127.0.106.130
--tserver_master_addrs=127.0.106.190:40489
--webserver_port=41227
--webserver_interface=127.0.106.130
--builtin_ntp_servers=127.0.106.148:43375
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
I20250814 01:55:36.393847 6978 heartbeater.cc:344] Connected to a master server at 127.0.106.190:40489
I20250814 01:55:36.394356 6978 heartbeater.cc:461] Registering TS with master...
I20250814 01:55:36.395566 6978 heartbeater.cc:507] Master 127.0.106.190:40489 requested a full tablet report, sending...
I20250814 01:55:36.400107 6791 ts_manager.cc:194] Registered new tserver with Master: 944d756e92dc45f8b62aea14881661f7 (127.0.106.129:42025)
I20250814 01:55:36.407821 6791 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.129:36501
I20250814 01:55:36.452478 6871 log.cc:826] T 01422312e499447c811b10f9c85d8f22 P 944d756e92dc45f8b62aea14881661f7: Log is configured to *not* fsync() on all Append() calls
W20250814 01:55:36.787391 6982 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:55:36.787914 6982 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250814 01:55:36.788445 6982 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:55:36.819271 6982 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250814 01:55:36.820101 6982 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.130
I20250814 01:55:36.856145 6982 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:43375
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-1/data
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.130:34081
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-1/data/info.pb
--webserver_interface=127.0.106.130
--webserver_port=41227
--tserver_master_addrs=127.0.106.190:40489
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.130
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 6307ab29b78ebc064971e01ca1da31590298dcda
build type FASTDEBUG
built by None at 14 Aug 2025 01:43:21 UTC on 5fd53c4cbb9d
build id 7557
TSAN enabled
I20250814 01:55:36.857399 6982 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250814 01:55:36.858976 6982 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250814 01:55:36.870764 6989 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250814 01:55:37.412050 6978 heartbeater.cc:499] Master 127.0.106.190:40489 was elected leader, sending a full tablet report...
W20250814 01:55:36.871809 6990 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:55:38.182363 6992 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:55:38.184793 6991 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection timed out after 1313 milliseconds
W20250814 01:55:38.186260 6982 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.315s user 0.466s sys 0.817s
W20250814 01:55:38.186601 6982 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.316s user 0.466s sys 0.817s
I20250814 01:55:38.186859 6982 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250814 01:55:38.188199 6982 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250814 01:55:38.190913 6982 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250814 01:55:38.192479 6982 hybrid_clock.cc:648] HybridClock initialized: now 1755136538192426 us; error 44 us; skew 500 ppm
I20250814 01:55:38.193565 6982 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250814 01:55:38.202342 6982 webserver.cc:480] Webserver started at http://127.0.106.130:41227/ using document root <none> and password file <none>
I20250814 01:55:38.203639 6982 fs_manager.cc:362] Metadata directory not provided
I20250814 01:55:38.203922 6982 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250814 01:55:38.214416 6982 fs_manager.cc:714] Time spent opening directory manager: real 0.006s user 0.006s sys 0.001s
I20250814 01:55:38.220258 6999 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250814 01:55:38.221572 6982 fs_manager.cc:730] Time spent opening block manager: real 0.005s user 0.005s sys 0.001s
I20250814 01:55:38.221971 6982 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-1/data,/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-1/wal
uuid: "6067b6818f90453681dfb46f3d74281c"
format_stamp: "Formatted at 2025-08-14 01:55:21 on dist-test-slave-30wj"
I20250814 01:55:38.224726 6982 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-1/wal
metadata directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-1/wal
1 data directories: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250814 01:55:38.315114 6982 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250814 01:55:38.316489 6982 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250814 01:55:38.316872 6982 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250814 01:55:38.319375 6982 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250814 01:55:38.325132 7006 ts_tablet_manager.cc:542] Loading tablet metadata (0/1 complete)
I20250814 01:55:38.335626 6982 ts_tablet_manager.cc:579] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20250814 01:55:38.335850 6982 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.012s user 0.002s sys 0.000s
I20250814 01:55:38.336088 6982 ts_tablet_manager.cc:594] Registering tablets (0/1 complete)
I20250814 01:55:38.340672 6982 ts_tablet_manager.cc:610] Registered 1 tablets
I20250814 01:55:38.341056 6982 ts_tablet_manager.cc:589] Time spent register tablets: real 0.005s user 0.005s sys 0.000s
I20250814 01:55:38.341249 7006 tablet_bootstrap.cc:492] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c: Bootstrap starting.
I20250814 01:55:38.518340 6982 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.130:34081
I20250814 01:55:38.518460 7112 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.130:34081 every 8 connection(s)
I20250814 01:55:38.522030 6982 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-1/data/info.pb
I20250814 01:55:38.530858 426 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu as pid 6982
I20250814 01:55:38.532683 426 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
/tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-2/wal
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-2/logs
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.0.106.131:44083
--local_ip_for_outbound_sockets=127.0.106.131
--tserver_master_addrs=127.0.106.190:40489
--webserver_port=35655
--webserver_interface=127.0.106.131
--builtin_ntp_servers=127.0.106.148:43375
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
I20250814 01:55:38.568524 7113 heartbeater.cc:344] Connected to a master server at 127.0.106.190:40489
I20250814 01:55:38.569259 7113 heartbeater.cc:461] Registering TS with master...
I20250814 01:55:38.570475 7113 heartbeater.cc:507] Master 127.0.106.190:40489 requested a full tablet report, sending...
I20250814 01:55:38.574170 6791 ts_manager.cc:194] Registered new tserver with Master: 6067b6818f90453681dfb46f3d74281c (127.0.106.130:34081)
I20250814 01:55:38.576987 6791 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.130:40075
I20250814 01:55:38.634745 7006 log.cc:826] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c: Log is configured to *not* fsync() on all Append() calls
W20250814 01:55:38.986251 7117 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250814 01:55:38.986871 7117 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250814 01:55:38.987644 7117 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250814 01:55:39.042596 7117 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250814 01:55:39.043969 7117 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.0.106.131
I20250814 01:55:39.065225 6871 tablet_bootstrap.cc:492] T 01422312e499447c811b10f9c85d8f22 P 944d756e92dc45f8b62aea14881661f7: Bootstrap replayed 1/1 log segments. Stats: ops{read=205 overwritten=0 applied=205 ignored=0} inserts{seen=10200 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250814 01:55:39.065992 6871 tablet_bootstrap.cc:492] T 01422312e499447c811b10f9c85d8f22 P 944d756e92dc45f8b62aea14881661f7: Bootstrap complete.
I20250814 01:55:39.067498 6871 ts_tablet_manager.cc:1397] T 01422312e499447c811b10f9c85d8f22 P 944d756e92dc45f8b62aea14881661f7: Time spent bootstrapping tablet: real 2.912s user 2.820s sys 0.068s
I20250814 01:55:39.078361 6871 raft_consensus.cc:357] T 01422312e499447c811b10f9c85d8f22 P 944d756e92dc45f8b62aea14881661f7 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "6e96d697f7024f1cb4946b1b06e4f794" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 44083 } } peers { permanent_uuid: "944d756e92dc45f8b62aea14881661f7" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42025 } } peers { permanent_uuid: "6067b6818f90453681dfb46f3d74281c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34081 } }
I20250814 01:55:39.080317 6871 raft_consensus.cc:738] T 01422312e499447c811b10f9c85d8f22 P 944d756e92dc45f8b62aea14881661f7 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 944d756e92dc45f8b62aea14881661f7, State: Initialized, Role: FOLLOWER
I20250814 01:55:39.081084 6871 consensus_queue.cc:260] T 01422312e499447c811b10f9c85d8f22 P 944d756e92dc45f8b62aea14881661f7 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 205, Last appended: 1.205, Last appended by leader: 205, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "6e96d697f7024f1cb4946b1b06e4f794" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 44083 } } peers { permanent_uuid: "944d756e92dc45f8b62aea14881661f7" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42025 } } peers { permanent_uuid: "6067b6818f90453681dfb46f3d74281c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34081 } }
I20250814 01:55:39.084349 6871 ts_tablet_manager.cc:1428] T 01422312e499447c811b10f9c85d8f22 P 944d756e92dc45f8b62aea14881661f7: Time spent starting tablet: real 0.017s user 0.015s sys 0.000s
I20250814 01:55:39.104108 7117 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.0.106.148:43375
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-2/data
--fs_wal_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.0.106.131:44083
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-2/data/info.pb
--webserver_interface=127.0.106.131
--webserver_port=35655
--tserver_master_addrs=127.0.106.190:40489
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.0.106.131
--log_dir=/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 6307ab29b78ebc064971e01ca1da31590298dcda
build type FASTDEBUG
built by None at 14 Aug 2025 01:43:21 UTC on 5fd53c4cbb9d
build id 7557
TSAN enabled
I20250814 01:55:39.105756 7117 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250814 01:55:39.107760 7117 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250814 01:55:39.122169 7125 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250814 01:55:39.580585 7113 heartbeater.cc:499] Master 127.0.106.190:40489 was elected leader, sending a full tablet report...
I20250814 01:55:40.383018 7132 raft_consensus.cc:491] T 01422312e499447c811b10f9c85d8f22 P 944d756e92dc45f8b62aea14881661f7 [term 1 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250814 01:55:40.383663 7132 raft_consensus.cc:513] T 01422312e499447c811b10f9c85d8f22 P 944d756e92dc45f8b62aea14881661f7 [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "6e96d697f7024f1cb4946b1b06e4f794" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 44083 } } peers { permanent_uuid: "944d756e92dc45f8b62aea14881661f7" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42025 } } peers { permanent_uuid: "6067b6818f90453681dfb46f3d74281c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34081 } }
W20250814 01:55:40.403599 6865 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.0.106.131:44083: connect: Connection refused (error 111)
I20250814 01:55:40.409981 7132 leader_election.cc:290] T 01422312e499447c811b10f9c85d8f22 P 944d756e92dc45f8b62aea14881661f7 [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers 6e96d697f7024f1cb4946b1b06e4f794 (127.0.106.131:44083), 6067b6818f90453681dfb46f3d74281c (127.0.106.130:34081)
W20250814 01:55:40.427492 6865 leader_election.cc:336] T 01422312e499447c811b10f9c85d8f22 P 944d756e92dc45f8b62aea14881661f7 [CANDIDATE]: Term 2 pre-election: RPC error from VoteRequest() call to peer 6e96d697f7024f1cb4946b1b06e4f794 (127.0.106.131:44083): Network error: Client connection negotiation failed: client connection to 127.0.106.131:44083: connect: Connection refused (error 111)
I20250814 01:55:40.441915 7068 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "01422312e499447c811b10f9c85d8f22" candidate_uuid: "944d756e92dc45f8b62aea14881661f7" candidate_term: 2 candidate_status { last_received { term: 1 index: 205 } } ignore_live_leader: false dest_uuid: "6067b6818f90453681dfb46f3d74281c" is_pre_election: true
W20250814 01:55:40.454124 6865 leader_election.cc:343] T 01422312e499447c811b10f9c85d8f22 P 944d756e92dc45f8b62aea14881661f7 [CANDIDATE]: Term 2 pre-election: Tablet error from VoteRequest() call to peer 6067b6818f90453681dfb46f3d74281c (127.0.106.130:34081): Illegal state: must be running to vote when last-logged opid is not known
I20250814 01:55:40.454592 6865 leader_election.cc:304] T 01422312e499447c811b10f9c85d8f22 P 944d756e92dc45f8b62aea14881661f7 [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: 944d756e92dc45f8b62aea14881661f7; no voters: 6067b6818f90453681dfb46f3d74281c, 6e96d697f7024f1cb4946b1b06e4f794
I20250814 01:55:40.455698 7132 raft_consensus.cc:2747] T 01422312e499447c811b10f9c85d8f22 P 944d756e92dc45f8b62aea14881661f7 [term 1 FOLLOWER]: Leader pre-election lost for term 2. Reason: could not achieve majority
W20250814 01:55:39.127421 7128 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:55:39.124130 7126 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250814 01:55:40.572649 7127 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1445 milliseconds
I20250814 01:55:40.572755 7117 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250814 01:55:40.573975 7117 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250814 01:55:40.576366 7117 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250814 01:55:40.577773 7117 hybrid_clock.cc:648] HybridClock initialized: now 1755136540577721 us; error 68 us; skew 500 ppm
I20250814 01:55:40.578528 7117 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250814 01:55:40.584718 7117 webserver.cc:480] Webserver started at http://127.0.106.131:35655/ using document root <none> and password file <none>
I20250814 01:55:40.585600 7117 fs_manager.cc:362] Metadata directory not provided
I20250814 01:55:40.585841 7117 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250814 01:55:40.594210 7117 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.004s sys 0.003s
I20250814 01:55:40.598853 7141 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250814 01:55:40.599951 7117 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.004s sys 0.001s
I20250814 01:55:40.600248 7117 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-2/data,/tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-2/wal
uuid: "6e96d697f7024f1cb4946b1b06e4f794"
format_stamp: "Formatted at 2025-08-14 01:55:23 on dist-test-slave-30wj"
I20250814 01:55:40.602116 7117 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-2/wal
metadata directory: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-2/wal
1 data directories: /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250814 01:55:40.651252 7117 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250814 01:55:40.652603 7117 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250814 01:55:40.653017 7117 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250814 01:55:40.655530 7117 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250814 01:55:40.661623 7148 ts_tablet_manager.cc:542] Loading tablet metadata (0/1 complete)
I20250814 01:55:40.668761 7117 ts_tablet_manager.cc:579] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20250814 01:55:40.668973 7117 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.009s user 0.002s sys 0.000s
I20250814 01:55:40.669235 7117 ts_tablet_manager.cc:594] Registering tablets (0/1 complete)
I20250814 01:55:40.673641 7117 ts_tablet_manager.cc:610] Registered 1 tablets
I20250814 01:55:40.673895 7117 ts_tablet_manager.cc:589] Time spent register tablets: real 0.005s user 0.005s sys 0.000s
I20250814 01:55:40.674245 7148 tablet_bootstrap.cc:492] T 01422312e499447c811b10f9c85d8f22 P 6e96d697f7024f1cb4946b1b06e4f794: Bootstrap starting.
I20250814 01:55:40.845063 7006 tablet_bootstrap.cc:492] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c: Bootstrap replayed 1/1 log segments. Stats: ops{read=205 overwritten=0 applied=205 ignored=0} inserts{seen=10200 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250814 01:55:40.846160 7006 tablet_bootstrap.cc:492] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c: Bootstrap complete.
I20250814 01:55:40.847807 7006 ts_tablet_manager.cc:1397] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c: Time spent bootstrapping tablet: real 2.507s user 2.424s sys 0.060s
I20250814 01:55:40.857739 7006 raft_consensus.cc:357] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "6e96d697f7024f1cb4946b1b06e4f794" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 44083 } } peers { permanent_uuid: "944d756e92dc45f8b62aea14881661f7" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42025 } } peers { permanent_uuid: "6067b6818f90453681dfb46f3d74281c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34081 } }
I20250814 01:55:40.860195 7006 raft_consensus.cc:738] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 6067b6818f90453681dfb46f3d74281c, State: Initialized, Role: FOLLOWER
I20250814 01:55:40.861104 7006 consensus_queue.cc:260] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 205, Last appended: 1.205, Last appended by leader: 205, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "6e96d697f7024f1cb4946b1b06e4f794" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 44083 } } peers { permanent_uuid: "944d756e92dc45f8b62aea14881661f7" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42025 } } peers { permanent_uuid: "6067b6818f90453681dfb46f3d74281c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34081 } }
I20250814 01:55:40.865080 7006 ts_tablet_manager.cc:1428] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c: Time spent starting tablet: real 0.017s user 0.013s sys 0.000s
I20250814 01:55:40.866343 7117 rpc_server.cc:307] RPC server started. Bound to: 127.0.106.131:44083
I20250814 01:55:40.866490 7255 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.0.106.131:44083 every 8 connection(s)
I20250814 01:55:40.869738 7117 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-2/data/info.pb
I20250814 01:55:40.875775 426 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu as pid 7117
I20250814 01:55:40.907112 7256 heartbeater.cc:344] Connected to a master server at 127.0.106.190:40489
I20250814 01:55:40.907516 7256 heartbeater.cc:461] Registering TS with master...
I20250814 01:55:40.908450 7256 heartbeater.cc:507] Master 127.0.106.190:40489 requested a full tablet report, sending...
I20250814 01:55:40.911396 6791 ts_manager.cc:194] Registered new tserver with Master: 6e96d697f7024f1cb4946b1b06e4f794 (127.0.106.131:44083)
I20250814 01:55:40.913380 6791 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.0.106.131:53225
I20250814 01:55:40.919975 426 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250814 01:55:40.952281 7148 log.cc:826] T 01422312e499447c811b10f9c85d8f22 P 6e96d697f7024f1cb4946b1b06e4f794: Log is configured to *not* fsync() on all Append() calls
I20250814 01:55:41.916190 7256 heartbeater.cc:499] Master 127.0.106.190:40489 was elected leader, sending a full tablet report...
I20250814 01:55:42.240159 7268 raft_consensus.cc:491] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c [term 1 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250814 01:55:42.240573 7268 raft_consensus.cc:513] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "6e96d697f7024f1cb4946b1b06e4f794" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 44083 } } peers { permanent_uuid: "944d756e92dc45f8b62aea14881661f7" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42025 } } peers { permanent_uuid: "6067b6818f90453681dfb46f3d74281c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34081 } }
I20250814 01:55:42.242635 7268 leader_election.cc:290] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers 6e96d697f7024f1cb4946b1b06e4f794 (127.0.106.131:44083), 944d756e92dc45f8b62aea14881661f7 (127.0.106.129:42025)
I20250814 01:55:42.279672 6933 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "01422312e499447c811b10f9c85d8f22" candidate_uuid: "6067b6818f90453681dfb46f3d74281c" candidate_term: 2 candidate_status { last_received { term: 1 index: 205 } } ignore_live_leader: false dest_uuid: "944d756e92dc45f8b62aea14881661f7" is_pre_election: true
I20250814 01:55:42.280443 6933 raft_consensus.cc:2466] T 01422312e499447c811b10f9c85d8f22 P 944d756e92dc45f8b62aea14881661f7 [term 1 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 6067b6818f90453681dfb46f3d74281c in term 1.
I20250814 01:55:42.281966 7002 leader_election.cc:304] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 6067b6818f90453681dfb46f3d74281c, 944d756e92dc45f8b62aea14881661f7; no voters:
I20250814 01:55:42.282855 7268 raft_consensus.cc:2802] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c [term 1 FOLLOWER]: Leader pre-election won for term 2
I20250814 01:55:42.283228 7268 raft_consensus.cc:491] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c [term 1 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250814 01:55:42.283552 7268 raft_consensus.cc:3058] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c [term 1 FOLLOWER]: Advancing to term 2
I20250814 01:55:42.278548 7210 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "01422312e499447c811b10f9c85d8f22" candidate_uuid: "6067b6818f90453681dfb46f3d74281c" candidate_term: 2 candidate_status { last_received { term: 1 index: 205 } } ignore_live_leader: false dest_uuid: "6e96d697f7024f1cb4946b1b06e4f794" is_pre_election: true
W20250814 01:55:42.289178 7000 leader_election.cc:343] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c [CANDIDATE]: Term 2 pre-election: Tablet error from VoteRequest() call to peer 6e96d697f7024f1cb4946b1b06e4f794 (127.0.106.131:44083): Illegal state: must be running to vote when last-logged opid is not known
I20250814 01:55:42.291508 7268 raft_consensus.cc:513] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "6e96d697f7024f1cb4946b1b06e4f794" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 44083 } } peers { permanent_uuid: "944d756e92dc45f8b62aea14881661f7" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42025 } } peers { permanent_uuid: "6067b6818f90453681dfb46f3d74281c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34081 } }
I20250814 01:55:42.292889 7268 leader_election.cc:290] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c [CANDIDATE]: Term 2 election: Requested vote from peers 6e96d697f7024f1cb4946b1b06e4f794 (127.0.106.131:44083), 944d756e92dc45f8b62aea14881661f7 (127.0.106.129:42025)
I20250814 01:55:42.293620 7210 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "01422312e499447c811b10f9c85d8f22" candidate_uuid: "6067b6818f90453681dfb46f3d74281c" candidate_term: 2 candidate_status { last_received { term: 1 index: 205 } } ignore_live_leader: false dest_uuid: "6e96d697f7024f1cb4946b1b06e4f794"
I20250814 01:55:42.293998 6933 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "01422312e499447c811b10f9c85d8f22" candidate_uuid: "6067b6818f90453681dfb46f3d74281c" candidate_term: 2 candidate_status { last_received { term: 1 index: 205 } } ignore_live_leader: false dest_uuid: "944d756e92dc45f8b62aea14881661f7"
I20250814 01:55:42.294507 6933 raft_consensus.cc:3058] T 01422312e499447c811b10f9c85d8f22 P 944d756e92dc45f8b62aea14881661f7 [term 1 FOLLOWER]: Advancing to term 2
W20250814 01:55:42.294678 7000 leader_election.cc:343] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c [CANDIDATE]: Term 2 election: Tablet error from VoteRequest() call to peer 6e96d697f7024f1cb4946b1b06e4f794 (127.0.106.131:44083): Illegal state: must be running to vote when last-logged opid is not known
I20250814 01:55:42.300400 6933 raft_consensus.cc:2466] T 01422312e499447c811b10f9c85d8f22 P 944d756e92dc45f8b62aea14881661f7 [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 6067b6818f90453681dfb46f3d74281c in term 2.
I20250814 01:55:42.301167 7002 leader_election.cc:304] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 6067b6818f90453681dfb46f3d74281c, 944d756e92dc45f8b62aea14881661f7; no voters: 6e96d697f7024f1cb4946b1b06e4f794
I20250814 01:55:42.301788 7268 raft_consensus.cc:2802] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c [term 2 FOLLOWER]: Leader election won for term 2
I20250814 01:55:42.303089 7268 raft_consensus.cc:695] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c [term 2 LEADER]: Becoming Leader. State: Replica: 6067b6818f90453681dfb46f3d74281c, State: Running, Role: LEADER
I20250814 01:55:42.303898 7268 consensus_queue.cc:237] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 205, Committed index: 205, Last appended: 1.205, Last appended by leader: 205, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "6e96d697f7024f1cb4946b1b06e4f794" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 44083 } } peers { permanent_uuid: "944d756e92dc45f8b62aea14881661f7" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42025 } } peers { permanent_uuid: "6067b6818f90453681dfb46f3d74281c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34081 } }
I20250814 01:55:42.312281 6791 catalog_manager.cc:5582] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c reported cstate change: term changed from 0 to 2, leader changed from <none> to 6067b6818f90453681dfb46f3d74281c (127.0.106.130), VOTER 6067b6818f90453681dfb46f3d74281c (127.0.106.130) added, VOTER 6e96d697f7024f1cb4946b1b06e4f794 (127.0.106.131) added, VOTER 944d756e92dc45f8b62aea14881661f7 (127.0.106.129) added. New cstate: current_term: 2 leader_uuid: "6067b6818f90453681dfb46f3d74281c" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "6e96d697f7024f1cb4946b1b06e4f794" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 44083 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "944d756e92dc45f8b62aea14881661f7" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42025 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "6067b6818f90453681dfb46f3d74281c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34081 } health_report { overall_health: HEALTHY } } }
W20250814 01:55:42.797842 7000 consensus_peers.cc:489] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c -> Peer 6e96d697f7024f1cb4946b1b06e4f794 (127.0.106.131:44083): Couldn't send request to peer 6e96d697f7024f1cb4946b1b06e4f794. Error code: TABLET_NOT_RUNNING (12). Status: Illegal state: Tablet not RUNNING: BOOTSTRAPPING. This is attempt 1: this message will repeat every 5th retry.
W20250814 01:55:42.805459 426 scanner-internal.cc:458] Time spent opening tablet: real 1.858s user 0.007s sys 0.000s
I20250814 01:55:42.807145 7148 tablet_bootstrap.cc:492] T 01422312e499447c811b10f9c85d8f22 P 6e96d697f7024f1cb4946b1b06e4f794: Bootstrap replayed 1/1 log segments. Stats: ops{read=205 overwritten=0 applied=205 ignored=0} inserts{seen=10200 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250814 01:55:42.808079 7148 tablet_bootstrap.cc:492] T 01422312e499447c811b10f9c85d8f22 P 6e96d697f7024f1cb4946b1b06e4f794: Bootstrap complete.
I20250814 01:55:42.809664 7148 ts_tablet_manager.cc:1397] T 01422312e499447c811b10f9c85d8f22 P 6e96d697f7024f1cb4946b1b06e4f794: Time spent bootstrapping tablet: real 2.136s user 2.069s sys 0.060s
I20250814 01:55:42.817799 7148 raft_consensus.cc:357] T 01422312e499447c811b10f9c85d8f22 P 6e96d697f7024f1cb4946b1b06e4f794 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "6e96d697f7024f1cb4946b1b06e4f794" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 44083 } } peers { permanent_uuid: "944d756e92dc45f8b62aea14881661f7" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42025 } } peers { permanent_uuid: "6067b6818f90453681dfb46f3d74281c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34081 } }
I20250814 01:55:42.820686 7148 raft_consensus.cc:738] T 01422312e499447c811b10f9c85d8f22 P 6e96d697f7024f1cb4946b1b06e4f794 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 6e96d697f7024f1cb4946b1b06e4f794, State: Initialized, Role: FOLLOWER
I20250814 01:55:42.821583 7148 consensus_queue.cc:260] T 01422312e499447c811b10f9c85d8f22 P 6e96d697f7024f1cb4946b1b06e4f794 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 205, Last appended: 1.205, Last appended by leader: 205, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "6e96d697f7024f1cb4946b1b06e4f794" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 44083 } } peers { permanent_uuid: "944d756e92dc45f8b62aea14881661f7" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42025 } } peers { permanent_uuid: "6067b6818f90453681dfb46f3d74281c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34081 } }
I20250814 01:55:42.825004 7148 ts_tablet_manager.cc:1428] T 01422312e499447c811b10f9c85d8f22 P 6e96d697f7024f1cb4946b1b06e4f794: Time spent starting tablet: real 0.015s user 0.014s sys 0.000s
I20250814 01:55:42.907943 6933 raft_consensus.cc:1273] T 01422312e499447c811b10f9c85d8f22 P 944d756e92dc45f8b62aea14881661f7 [term 2 FOLLOWER]: Refusing update from remote peer 6067b6818f90453681dfb46f3d74281c: Log matching property violated. Preceding OpId in replica: term: 1 index: 205. Preceding OpId from leader: term: 2 index: 206. (index mismatch)
I20250814 01:55:42.909499 7268 consensus_queue.cc:1035] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c [LEADER]: Connected to new peer: Peer: permanent_uuid: "944d756e92dc45f8b62aea14881661f7" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42025 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 206, Last known committed idx: 205, Time since last communication: 0.000s
I20250814 01:55:42.954411 7068 consensus_queue.cc:237] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 206, Committed index: 206, Last appended: 2.206, Last appended by leader: 205, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 207 OBSOLETE_local: false peers { permanent_uuid: "944d756e92dc45f8b62aea14881661f7" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42025 } } peers { permanent_uuid: "6067b6818f90453681dfb46f3d74281c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34081 } }
I20250814 01:55:42.957976 6933 raft_consensus.cc:1273] T 01422312e499447c811b10f9c85d8f22 P 944d756e92dc45f8b62aea14881661f7 [term 2 FOLLOWER]: Refusing update from remote peer 6067b6818f90453681dfb46f3d74281c: Log matching property violated. Preceding OpId in replica: term: 2 index: 206. Preceding OpId from leader: term: 2 index: 207. (index mismatch)
I20250814 01:55:42.959084 7287 consensus_queue.cc:1035] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c [LEADER]: Connected to new peer: Peer: permanent_uuid: "944d756e92dc45f8b62aea14881661f7" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42025 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 207, Last known committed idx: 206, Time since last communication: 0.000s
I20250814 01:55:42.963960 7268 raft_consensus.cc:2953] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c [term 2 LEADER]: Committing config change with OpId 2.207: config changed from index -1 to 207, VOTER 6e96d697f7024f1cb4946b1b06e4f794 (127.0.106.131) evicted. New config: { opid_index: 207 OBSOLETE_local: false peers { permanent_uuid: "944d756e92dc45f8b62aea14881661f7" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42025 } } peers { permanent_uuid: "6067b6818f90453681dfb46f3d74281c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34081 } } }
I20250814 01:55:42.970396 6933 raft_consensus.cc:2953] T 01422312e499447c811b10f9c85d8f22 P 944d756e92dc45f8b62aea14881661f7 [term 2 FOLLOWER]: Committing config change with OpId 2.207: config changed from index -1 to 207, VOTER 6e96d697f7024f1cb4946b1b06e4f794 (127.0.106.131) evicted. New config: { opid_index: 207 OBSOLETE_local: false peers { permanent_uuid: "944d756e92dc45f8b62aea14881661f7" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42025 } } peers { permanent_uuid: "6067b6818f90453681dfb46f3d74281c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34081 } } }
I20250814 01:55:42.971668 6775 catalog_manager.cc:5095] ChangeConfig:REMOVE_PEER RPC for tablet 01422312e499447c811b10f9c85d8f22 with cas_config_opid_index -1: ChangeConfig:REMOVE_PEER succeeded (attempt 1)
I20250814 01:55:42.976276 6791 catalog_manager.cc:5582] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c reported cstate change: config changed from index -1 to 207, VOTER 6e96d697f7024f1cb4946b1b06e4f794 (127.0.106.131) evicted. New cstate: current_term: 2 leader_uuid: "6067b6818f90453681dfb46f3d74281c" committed_config { opid_index: 207 OBSOLETE_local: false peers { permanent_uuid: "944d756e92dc45f8b62aea14881661f7" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42025 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "6067b6818f90453681dfb46f3d74281c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34081 } health_report { overall_health: HEALTHY } } }
I20250814 01:55:43.003827 7068 consensus_queue.cc:237] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 207, Committed index: 207, Last appended: 2.207, Last appended by leader: 205, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: 208 OBSOLETE_local: false peers { permanent_uuid: "6067b6818f90453681dfb46f3d74281c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34081 } }
I20250814 01:55:43.006577 7268 raft_consensus.cc:2953] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c [term 2 LEADER]: Committing config change with OpId 2.208: config changed from index 207 to 208, VOTER 944d756e92dc45f8b62aea14881661f7 (127.0.106.129) evicted. New config: { opid_index: 208 OBSOLETE_local: false peers { permanent_uuid: "6067b6818f90453681dfb46f3d74281c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34081 } } }
I20250814 01:55:43.014894 6775 catalog_manager.cc:5095] ChangeConfig:REMOVE_PEER RPC for tablet 01422312e499447c811b10f9c85d8f22 with cas_config_opid_index 207: ChangeConfig:REMOVE_PEER succeeded (attempt 1)
I20250814 01:55:43.019220 6790 catalog_manager.cc:5582] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c reported cstate change: config changed from index 207 to 208, VOTER 944d756e92dc45f8b62aea14881661f7 (127.0.106.129) evicted. New cstate: current_term: 2 leader_uuid: "6067b6818f90453681dfb46f3d74281c" committed_config { opid_index: 208 OBSOLETE_local: false peers { permanent_uuid: "6067b6818f90453681dfb46f3d74281c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34081 } health_report { overall_health: HEALTHY } } }
I20250814 01:55:43.021801 7190 tablet_service.cc:1515] Processing DeleteTablet for tablet 01422312e499447c811b10f9c85d8f22 with delete_type TABLET_DATA_TOMBSTONED (TS 6e96d697f7024f1cb4946b1b06e4f794 not found in new config with opid_index 207) from {username='slave'} at 127.0.0.1:48824
I20250814 01:55:43.033393 7295 tablet_replica.cc:331] stopping tablet replica
I20250814 01:55:43.034263 7295 raft_consensus.cc:2241] T 01422312e499447c811b10f9c85d8f22 P 6e96d697f7024f1cb4946b1b06e4f794 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250814 01:55:43.034899 7295 raft_consensus.cc:2270] T 01422312e499447c811b10f9c85d8f22 P 6e96d697f7024f1cb4946b1b06e4f794 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250814 01:55:43.042732 6913 tablet_service.cc:1515] Processing DeleteTablet for tablet 01422312e499447c811b10f9c85d8f22 with delete_type TABLET_DATA_TOMBSTONED (TS 944d756e92dc45f8b62aea14881661f7 not found in new config with opid_index 208) from {username='slave'} at 127.0.0.1:54752
I20250814 01:55:43.054833 7297 tablet_replica.cc:331] stopping tablet replica
I20250814 01:55:43.055620 7297 raft_consensus.cc:2241] T 01422312e499447c811b10f9c85d8f22 P 944d756e92dc45f8b62aea14881661f7 [term 2 FOLLOWER]: Raft consensus shutting down.
I20250814 01:55:43.056409 7297 raft_consensus.cc:2270] T 01422312e499447c811b10f9c85d8f22 P 944d756e92dc45f8b62aea14881661f7 [term 2 FOLLOWER]: Raft consensus is shut down!
I20250814 01:55:43.068497 7295 ts_tablet_manager.cc:1905] T 01422312e499447c811b10f9c85d8f22 P 6e96d697f7024f1cb4946b1b06e4f794: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20250814 01:55:43.085316 7295 ts_tablet_manager.cc:1918] T 01422312e499447c811b10f9c85d8f22 P 6e96d697f7024f1cb4946b1b06e4f794: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 1.205
I20250814 01:55:43.085820 7295 log.cc:1199] T 01422312e499447c811b10f9c85d8f22 P 6e96d697f7024f1cb4946b1b06e4f794: Deleting WAL directory at /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-2/wal/wals/01422312e499447c811b10f9c85d8f22
I20250814 01:55:43.086885 7297 ts_tablet_manager.cc:1905] T 01422312e499447c811b10f9c85d8f22 P 944d756e92dc45f8b62aea14881661f7: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20250814 01:55:43.087682 6775 catalog_manager.cc:4928] TS 6e96d697f7024f1cb4946b1b06e4f794 (127.0.106.131:44083): tablet 01422312e499447c811b10f9c85d8f22 (table pre_rebuild [id=e0381d252f944a4cba0911ad55982c41]) successfully deleted
I20250814 01:55:43.096901 7297 ts_tablet_manager.cc:1918] T 01422312e499447c811b10f9c85d8f22 P 944d756e92dc45f8b62aea14881661f7: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 2.207
I20250814 01:55:43.097210 7297 log.cc:1199] T 01422312e499447c811b10f9c85d8f22 P 944d756e92dc45f8b62aea14881661f7: Deleting WAL directory at /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-0/wal/wals/01422312e499447c811b10f9c85d8f22
I20250814 01:55:43.098640 6777 catalog_manager.cc:4928] TS 944d756e92dc45f8b62aea14881661f7 (127.0.106.129:42025): tablet 01422312e499447c811b10f9c85d8f22 (table pre_rebuild [id=e0381d252f944a4cba0911ad55982c41]) successfully deleted
I20250814 01:55:43.563314 7048 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250814 01:55:43.599171 6913 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250814 01:55:43.606099 7190 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
Master Summary
UUID | Address | Status
----------------------------------+---------------------+---------
60d4ddac349a4cb9a629c053710f479a | 127.0.106.190:40489 | HEALTHY
Unusual flags for Master:
Flag | Value | Tags | Master
----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
ipki_ca_key_size | 768 | experimental | all 1 server(s) checked
ipki_server_key_size | 768 | experimental | all 1 server(s) checked
never_fsync | true | unsafe,advanced | all 1 server(s) checked
openssl_security_level_override | 0 | unsafe,hidden | all 1 server(s) checked
rpc_reuseport | true | experimental | all 1 server(s) checked
rpc_server_allow_ephemeral_ports | true | unsafe | all 1 server(s) checked
server_dump_info_format | pb | hidden | all 1 server(s) checked
server_dump_info_path | /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/master-0/data/info.pb | hidden | all 1 server(s) checked
tsk_num_rsa_bits | 512 | experimental | all 1 server(s) checked
Flags of checked categories for Master:
Flag | Value | Master
---------------------+---------------------+-------------------------
builtin_ntp_servers | 127.0.106.148:43375 | all 1 server(s) checked
time_source | builtin | all 1 server(s) checked
Tablet Server Summary
UUID | Address | Status | Location | Tablet Leaders | Active Scanners
----------------------------------+---------------------+---------+----------+----------------+-----------------
6067b6818f90453681dfb46f3d74281c | 127.0.106.130:34081 | HEALTHY | <none> | 1 | 0
6e96d697f7024f1cb4946b1b06e4f794 | 127.0.106.131:44083 | HEALTHY | <none> | 0 | 0
944d756e92dc45f8b62aea14881661f7 | 127.0.106.129:42025 | HEALTHY | <none> | 0 | 0
Tablet Server Location Summary
Location | Count
----------+---------
<none> | 3
Unusual flags for Tablet Server:
Flag | Value | Tags | Tablet Server
----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
ipki_server_key_size | 768 | experimental | all 3 server(s) checked
local_ip_for_outbound_sockets | 127.0.106.129 | experimental | 127.0.106.129:42025
local_ip_for_outbound_sockets | 127.0.106.130 | experimental | 127.0.106.130:34081
local_ip_for_outbound_sockets | 127.0.106.131 | experimental | 127.0.106.131:44083
never_fsync | true | unsafe,advanced | all 3 server(s) checked
openssl_security_level_override | 0 | unsafe,hidden | all 3 server(s) checked
rpc_server_allow_ephemeral_ports | true | unsafe | all 3 server(s) checked
server_dump_info_format | pb | hidden | all 3 server(s) checked
server_dump_info_path | /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-0/data/info.pb | hidden | 127.0.106.129:42025
server_dump_info_path | /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-1/data/info.pb | hidden | 127.0.106.130:34081
server_dump_info_path | /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-2/data/info.pb | hidden | 127.0.106.131:44083
Flags of checked categories for Tablet Server:
Flag | Value | Tablet Server
---------------------+---------------------+-------------------------
builtin_ntp_servers | 127.0.106.148:43375 | all 3 server(s) checked
time_source | builtin | all 3 server(s) checked
Version Summary
Version | Servers
-----------------+-------------------------
1.19.0-SNAPSHOT | all 4 server(s) checked
Tablet Summary
The cluster doesn't have any matching system tables
Summary by table
Name | RF | Status | Total Tablets | Healthy | Recovering | Under-replicated | Unavailable
-------------+----+---------+---------------+---------+------------+------------------+-------------
pre_rebuild | 1 | HEALTHY | 1 | 1 | 0 | 0 | 0
Tablet Replica Count Summary
Statistic | Replica Count
----------------+---------------
Minimum | 0
First Quartile | 0
Median | 0
Third Quartile | 1
Maximum | 1
Total Count Summary
| Total Count
----------------+-------------
Masters | 1
Tablet Servers | 3
Tables | 1
Tablets | 1
Replicas | 1
==================
Warnings:
==================
Some masters have unsafe, experimental, or hidden flags set
Some tablet servers have unsafe, experimental, or hidden flags set
OK
I20250814 01:55:43.851866 426 log_verifier.cc:126] Checking tablet 01422312e499447c811b10f9c85d8f22
I20250814 01:55:44.127153 426 log_verifier.cc:177] Verified matching terms for 208 ops in tablet 01422312e499447c811b10f9c85d8f22
I20250814 01:55:44.129360 6791 catalog_manager.cc:2482] Servicing SoftDeleteTable request from {username='slave'} at 127.0.0.1:53684:
table { table_name: "pre_rebuild" } modify_external_catalogs: true
I20250814 01:55:44.129884 6791 catalog_manager.cc:2730] Servicing DeleteTable request from {username='slave'} at 127.0.0.1:53684:
table { table_name: "pre_rebuild" } modify_external_catalogs: true
I20250814 01:55:44.140969 6791 catalog_manager.cc:5869] T 00000000000000000000000000000000 P 60d4ddac349a4cb9a629c053710f479a: Sending DeleteTablet for 1 replicas of tablet 01422312e499447c811b10f9c85d8f22
I20250814 01:55:44.142746 7048 tablet_service.cc:1515] Processing DeleteTablet for tablet 01422312e499447c811b10f9c85d8f22 with delete_type TABLET_DATA_DELETED (Table deleted at 2025-08-14 01:55:44 UTC) from {username='slave'} at 127.0.0.1:47832
I20250814 01:55:44.143303 426 test_util.cc:276] Using random seed: -1841367122
I20250814 01:55:44.144471 7328 tablet_replica.cc:331] stopping tablet replica
I20250814 01:55:44.145217 7328 raft_consensus.cc:2241] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c [term 2 LEADER]: Raft consensus shutting down.
I20250814 01:55:44.145818 7328 raft_consensus.cc:2270] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c [term 2 FOLLOWER]: Raft consensus is shut down!
I20250814 01:55:44.179627 7328 ts_tablet_manager.cc:1905] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c: Deleting tablet data with delete state TABLET_DATA_DELETED
I20250814 01:55:44.182093 6791 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:53732:
name: "post_rebuild"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
W20250814 01:55:44.185402 6791 catalog_manager.cc:6944] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table post_rebuild in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250814 01:55:44.192612 7328 ts_tablet_manager.cc:1918] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c: tablet deleted with delete type TABLET_DATA_DELETED: last-logged OpId 2.208
I20250814 01:55:44.192989 7328 log.cc:1199] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c: Deleting WAL directory at /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-1/wal/wals/01422312e499447c811b10f9c85d8f22
I20250814 01:55:44.193737 7328 ts_tablet_manager.cc:1939] T 01422312e499447c811b10f9c85d8f22 P 6067b6818f90453681dfb46f3d74281c: Deleting consensus metadata
I20250814 01:55:44.196426 6775 catalog_manager.cc:4928] TS 6067b6818f90453681dfb46f3d74281c (127.0.106.130:34081): tablet 01422312e499447c811b10f9c85d8f22 (table pre_rebuild [id=e0381d252f944a4cba0911ad55982c41]) successfully deleted
I20250814 01:55:44.209090 6913 tablet_service.cc:1468] Processing CreateTablet for tablet 796d3ef2d7ef4702a21d83a0e2c298f3 (DEFAULT_TABLE table=post_rebuild [id=75fa5a57f0684a13847a488c2f947b6d]), partition=RANGE (key) PARTITION UNBOUNDED
I20250814 01:55:44.209612 7190 tablet_service.cc:1468] Processing CreateTablet for tablet 796d3ef2d7ef4702a21d83a0e2c298f3 (DEFAULT_TABLE table=post_rebuild [id=75fa5a57f0684a13847a488c2f947b6d]), partition=RANGE (key) PARTITION UNBOUNDED
I20250814 01:55:44.209872 7048 tablet_service.cc:1468] Processing CreateTablet for tablet 796d3ef2d7ef4702a21d83a0e2c298f3 (DEFAULT_TABLE table=post_rebuild [id=75fa5a57f0684a13847a488c2f947b6d]), partition=RANGE (key) PARTITION UNBOUNDED
I20250814 01:55:44.210436 6913 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 796d3ef2d7ef4702a21d83a0e2c298f3. 1 dirs total, 0 dirs full, 0 dirs failed
I20250814 01:55:44.210932 7190 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 796d3ef2d7ef4702a21d83a0e2c298f3. 1 dirs total, 0 dirs full, 0 dirs failed
I20250814 01:55:44.210968 7048 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 796d3ef2d7ef4702a21d83a0e2c298f3. 1 dirs total, 0 dirs full, 0 dirs failed
I20250814 01:55:44.231988 7335 tablet_bootstrap.cc:492] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 6e96d697f7024f1cb4946b1b06e4f794: Bootstrap starting.
I20250814 01:55:44.237888 7336 tablet_bootstrap.cc:492] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 6067b6818f90453681dfb46f3d74281c: Bootstrap starting.
I20250814 01:55:44.238583 7337 tablet_bootstrap.cc:492] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 944d756e92dc45f8b62aea14881661f7: Bootstrap starting.
I20250814 01:55:44.239481 7335 tablet_bootstrap.cc:654] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 6e96d697f7024f1cb4946b1b06e4f794: Neither blocks nor log segments found. Creating new log.
I20250814 01:55:44.244012 7335 tablet_bootstrap.cc:492] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 6e96d697f7024f1cb4946b1b06e4f794: No bootstrap required, opened a new log
I20250814 01:55:44.244452 7335 ts_tablet_manager.cc:1397] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 6e96d697f7024f1cb4946b1b06e4f794: Time spent bootstrapping tablet: real 0.013s user 0.005s sys 0.005s
I20250814 01:55:44.246376 7336 tablet_bootstrap.cc:654] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 6067b6818f90453681dfb46f3d74281c: Neither blocks nor log segments found. Creating new log.
I20250814 01:55:44.247351 7337 tablet_bootstrap.cc:654] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 944d756e92dc45f8b62aea14881661f7: Neither blocks nor log segments found. Creating new log.
I20250814 01:55:44.246968 7335 raft_consensus.cc:357] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 6e96d697f7024f1cb4946b1b06e4f794 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "944d756e92dc45f8b62aea14881661f7" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42025 } } peers { permanent_uuid: "6e96d697f7024f1cb4946b1b06e4f794" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 44083 } } peers { permanent_uuid: "6067b6818f90453681dfb46f3d74281c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34081 } }
I20250814 01:55:44.247602 7335 raft_consensus.cc:383] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 6e96d697f7024f1cb4946b1b06e4f794 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250814 01:55:44.247898 7335 raft_consensus.cc:738] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 6e96d697f7024f1cb4946b1b06e4f794 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 6e96d697f7024f1cb4946b1b06e4f794, State: Initialized, Role: FOLLOWER
I20250814 01:55:44.248536 7335 consensus_queue.cc:260] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 6e96d697f7024f1cb4946b1b06e4f794 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "944d756e92dc45f8b62aea14881661f7" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42025 } } peers { permanent_uuid: "6e96d697f7024f1cb4946b1b06e4f794" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 44083 } } peers { permanent_uuid: "6067b6818f90453681dfb46f3d74281c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34081 } }
I20250814 01:55:44.259063 7335 ts_tablet_manager.cc:1428] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 6e96d697f7024f1cb4946b1b06e4f794: Time spent starting tablet: real 0.014s user 0.006s sys 0.007s
I20250814 01:55:44.261842 7336 tablet_bootstrap.cc:492] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 6067b6818f90453681dfb46f3d74281c: No bootstrap required, opened a new log
I20250814 01:55:44.262313 7336 ts_tablet_manager.cc:1397] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 6067b6818f90453681dfb46f3d74281c: Time spent bootstrapping tablet: real 0.025s user 0.012s sys 0.004s
I20250814 01:55:44.265093 7336 raft_consensus.cc:357] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 6067b6818f90453681dfb46f3d74281c [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "944d756e92dc45f8b62aea14881661f7" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42025 } } peers { permanent_uuid: "6e96d697f7024f1cb4946b1b06e4f794" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 44083 } } peers { permanent_uuid: "6067b6818f90453681dfb46f3d74281c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34081 } }
I20250814 01:55:44.265812 7336 raft_consensus.cc:383] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 6067b6818f90453681dfb46f3d74281c [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250814 01:55:44.266129 7336 raft_consensus.cc:738] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 6067b6818f90453681dfb46f3d74281c [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 6067b6818f90453681dfb46f3d74281c, State: Initialized, Role: FOLLOWER
I20250814 01:55:44.266824 7336 consensus_queue.cc:260] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 6067b6818f90453681dfb46f3d74281c [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "944d756e92dc45f8b62aea14881661f7" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42025 } } peers { permanent_uuid: "6e96d697f7024f1cb4946b1b06e4f794" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 44083 } } peers { permanent_uuid: "6067b6818f90453681dfb46f3d74281c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34081 } }
I20250814 01:55:44.271160 7337 tablet_bootstrap.cc:492] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 944d756e92dc45f8b62aea14881661f7: No bootstrap required, opened a new log
I20250814 01:55:44.271580 7337 ts_tablet_manager.cc:1397] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 944d756e92dc45f8b62aea14881661f7: Time spent bootstrapping tablet: real 0.033s user 0.000s sys 0.013s
I20250814 01:55:44.272807 7336 ts_tablet_manager.cc:1428] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 6067b6818f90453681dfb46f3d74281c: Time spent starting tablet: real 0.010s user 0.005s sys 0.000s
I20250814 01:55:44.274062 7337 raft_consensus.cc:357] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 944d756e92dc45f8b62aea14881661f7 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "944d756e92dc45f8b62aea14881661f7" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42025 } } peers { permanent_uuid: "6e96d697f7024f1cb4946b1b06e4f794" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 44083 } } peers { permanent_uuid: "6067b6818f90453681dfb46f3d74281c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34081 } }
I20250814 01:55:44.274729 7337 raft_consensus.cc:383] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 944d756e92dc45f8b62aea14881661f7 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250814 01:55:44.275018 7337 raft_consensus.cc:738] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 944d756e92dc45f8b62aea14881661f7 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 944d756e92dc45f8b62aea14881661f7, State: Initialized, Role: FOLLOWER
I20250814 01:55:44.275753 7337 consensus_queue.cc:260] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 944d756e92dc45f8b62aea14881661f7 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "944d756e92dc45f8b62aea14881661f7" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42025 } } peers { permanent_uuid: "6e96d697f7024f1cb4946b1b06e4f794" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 44083 } } peers { permanent_uuid: "6067b6818f90453681dfb46f3d74281c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34081 } }
I20250814 01:55:44.285068 7337 ts_tablet_manager.cc:1428] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 944d756e92dc45f8b62aea14881661f7: Time spent starting tablet: real 0.013s user 0.008s sys 0.003s
I20250814 01:55:44.289091 7340 raft_consensus.cc:491] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 6e96d697f7024f1cb4946b1b06e4f794 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250814 01:55:44.289608 7340 raft_consensus.cc:513] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 6e96d697f7024f1cb4946b1b06e4f794 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "944d756e92dc45f8b62aea14881661f7" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42025 } } peers { permanent_uuid: "6e96d697f7024f1cb4946b1b06e4f794" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 44083 } } peers { permanent_uuid: "6067b6818f90453681dfb46f3d74281c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34081 } }
I20250814 01:55:44.295200 7340 leader_election.cc:290] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 6e96d697f7024f1cb4946b1b06e4f794 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 944d756e92dc45f8b62aea14881661f7 (127.0.106.129:42025), 6067b6818f90453681dfb46f3d74281c (127.0.106.130:34081)
W20250814 01:55:44.301064 7114 tablet.cc:2378] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 6067b6818f90453681dfb46f3d74281c: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250814 01:55:44.318509 6933 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "796d3ef2d7ef4702a21d83a0e2c298f3" candidate_uuid: "6e96d697f7024f1cb4946b1b06e4f794" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "944d756e92dc45f8b62aea14881661f7" is_pre_election: true
I20250814 01:55:44.319100 6933 raft_consensus.cc:2466] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 944d756e92dc45f8b62aea14881661f7 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 6e96d697f7024f1cb4946b1b06e4f794 in term 0.
I20250814 01:55:44.320240 7068 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "796d3ef2d7ef4702a21d83a0e2c298f3" candidate_uuid: "6e96d697f7024f1cb4946b1b06e4f794" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "6067b6818f90453681dfb46f3d74281c" is_pre_election: true
I20250814 01:55:44.320537 7144 leader_election.cc:304] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 6e96d697f7024f1cb4946b1b06e4f794 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 6e96d697f7024f1cb4946b1b06e4f794, 944d756e92dc45f8b62aea14881661f7; no voters:
I20250814 01:55:44.320859 7068 raft_consensus.cc:2466] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 6067b6818f90453681dfb46f3d74281c [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 6e96d697f7024f1cb4946b1b06e4f794 in term 0.
I20250814 01:55:44.321357 7340 raft_consensus.cc:2802] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 6e96d697f7024f1cb4946b1b06e4f794 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250814 01:55:44.321668 7340 raft_consensus.cc:491] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 6e96d697f7024f1cb4946b1b06e4f794 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250814 01:55:44.321975 7340 raft_consensus.cc:3058] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 6e96d697f7024f1cb4946b1b06e4f794 [term 0 FOLLOWER]: Advancing to term 1
I20250814 01:55:44.327370 7340 raft_consensus.cc:513] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 6e96d697f7024f1cb4946b1b06e4f794 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "944d756e92dc45f8b62aea14881661f7" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42025 } } peers { permanent_uuid: "6e96d697f7024f1cb4946b1b06e4f794" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 44083 } } peers { permanent_uuid: "6067b6818f90453681dfb46f3d74281c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34081 } }
I20250814 01:55:44.329591 7068 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "796d3ef2d7ef4702a21d83a0e2c298f3" candidate_uuid: "6e96d697f7024f1cb4946b1b06e4f794" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "6067b6818f90453681dfb46f3d74281c"
I20250814 01:55:44.329770 6933 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "796d3ef2d7ef4702a21d83a0e2c298f3" candidate_uuid: "6e96d697f7024f1cb4946b1b06e4f794" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "944d756e92dc45f8b62aea14881661f7"
I20250814 01:55:44.330101 7068 raft_consensus.cc:3058] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 6067b6818f90453681dfb46f3d74281c [term 0 FOLLOWER]: Advancing to term 1
I20250814 01:55:44.330250 6933 raft_consensus.cc:3058] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 944d756e92dc45f8b62aea14881661f7 [term 0 FOLLOWER]: Advancing to term 1
I20250814 01:55:44.336216 6933 raft_consensus.cc:2466] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 944d756e92dc45f8b62aea14881661f7 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 6e96d697f7024f1cb4946b1b06e4f794 in term 1.
I20250814 01:55:44.336949 7340 leader_election.cc:290] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 6e96d697f7024f1cb4946b1b06e4f794 [CANDIDATE]: Term 1 election: Requested vote from peers 944d756e92dc45f8b62aea14881661f7 (127.0.106.129:42025), 6067b6818f90453681dfb46f3d74281c (127.0.106.130:34081)
I20250814 01:55:44.337203 7144 leader_election.cc:304] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 6e96d697f7024f1cb4946b1b06e4f794 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 6e96d697f7024f1cb4946b1b06e4f794, 944d756e92dc45f8b62aea14881661f7; no voters:
I20250814 01:55:44.337949 7340 raft_consensus.cc:2802] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 6e96d697f7024f1cb4946b1b06e4f794 [term 1 FOLLOWER]: Leader election won for term 1
I20250814 01:55:44.338385 7068 raft_consensus.cc:2466] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 6067b6818f90453681dfb46f3d74281c [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 6e96d697f7024f1cb4946b1b06e4f794 in term 1.
I20250814 01:55:44.340219 7340 raft_consensus.cc:695] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 6e96d697f7024f1cb4946b1b06e4f794 [term 1 LEADER]: Becoming Leader. State: Replica: 6e96d697f7024f1cb4946b1b06e4f794, State: Running, Role: LEADER
I20250814 01:55:44.341111 7340 consensus_queue.cc:237] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 6e96d697f7024f1cb4946b1b06e4f794 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "944d756e92dc45f8b62aea14881661f7" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42025 } } peers { permanent_uuid: "6e96d697f7024f1cb4946b1b06e4f794" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 44083 } } peers { permanent_uuid: "6067b6818f90453681dfb46f3d74281c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34081 } }
I20250814 01:55:44.355474 6790 catalog_manager.cc:5582] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 6e96d697f7024f1cb4946b1b06e4f794 reported cstate change: term changed from 0 to 1, leader changed from <none> to 6e96d697f7024f1cb4946b1b06e4f794 (127.0.106.131). New cstate: current_term: 1 leader_uuid: "6e96d697f7024f1cb4946b1b06e4f794" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "944d756e92dc45f8b62aea14881661f7" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42025 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "6e96d697f7024f1cb4946b1b06e4f794" member_type: VOTER last_known_addr { host: "127.0.106.131" port: 44083 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "6067b6818f90453681dfb46f3d74281c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34081 } health_report { overall_health: UNKNOWN } } }
W20250814 01:55:44.378824 7257 tablet.cc:2378] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 6e96d697f7024f1cb4946b1b06e4f794: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250814 01:55:44.393328 6979 tablet.cc:2378] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 944d756e92dc45f8b62aea14881661f7: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250814 01:55:44.550087 7068 raft_consensus.cc:1273] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 6067b6818f90453681dfb46f3d74281c [term 1 FOLLOWER]: Refusing update from remote peer 6e96d697f7024f1cb4946b1b06e4f794: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250814 01:55:44.552400 7348 consensus_queue.cc:1035] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 6e96d697f7024f1cb4946b1b06e4f794 [LEADER]: Connected to new peer: Peer: permanent_uuid: "6067b6818f90453681dfb46f3d74281c" member_type: VOTER last_known_addr { host: "127.0.106.130" port: 34081 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.001s
I20250814 01:55:44.555115 6933 raft_consensus.cc:1273] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 944d756e92dc45f8b62aea14881661f7 [term 1 FOLLOWER]: Refusing update from remote peer 6e96d697f7024f1cb4946b1b06e4f794: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250814 01:55:44.558151 7340 consensus_queue.cc:1035] T 796d3ef2d7ef4702a21d83a0e2c298f3 P 6e96d697f7024f1cb4946b1b06e4f794 [LEADER]: Connected to new peer: Peer: permanent_uuid: "944d756e92dc45f8b62aea14881661f7" member_type: VOTER last_known_addr { host: "127.0.106.129" port: 42025 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250814 01:55:44.659289 7359 mvcc.cc:204] Tried to move back new op lower bound from 7189039286462623744 to 7189039285635227648. Current Snapshot: MvccSnapshot[applied={T|T < 7189039286462623744}]
I20250814 01:55:49.329496 7048 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250814 01:55:49.330991 7190 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250814 01:55:49.347334 6913 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
Master Summary
UUID | Address | Status
----------------------------------+---------------------+---------
60d4ddac349a4cb9a629c053710f479a | 127.0.106.190:40489 | HEALTHY
Unusual flags for Master:
Flag | Value | Tags | Master
----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
ipki_ca_key_size | 768 | experimental | all 1 server(s) checked
ipki_server_key_size | 768 | experimental | all 1 server(s) checked
never_fsync | true | unsafe,advanced | all 1 server(s) checked
openssl_security_level_override | 0 | unsafe,hidden | all 1 server(s) checked
rpc_reuseport | true | experimental | all 1 server(s) checked
rpc_server_allow_ephemeral_ports | true | unsafe | all 1 server(s) checked
server_dump_info_format | pb | hidden | all 1 server(s) checked
server_dump_info_path | /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/master-0/data/info.pb | hidden | all 1 server(s) checked
tsk_num_rsa_bits | 512 | experimental | all 1 server(s) checked
Flags of checked categories for Master:
Flag | Value | Master
---------------------+---------------------+-------------------------
builtin_ntp_servers | 127.0.106.148:43375 | all 1 server(s) checked
time_source | builtin | all 1 server(s) checked
Tablet Server Summary
UUID | Address | Status | Location | Tablet Leaders | Active Scanners
----------------------------------+---------------------+---------+----------+----------------+-----------------
6067b6818f90453681dfb46f3d74281c | 127.0.106.130:34081 | HEALTHY | <none> | 0 | 0
6e96d697f7024f1cb4946b1b06e4f794 | 127.0.106.131:44083 | HEALTHY | <none> | 1 | 0
944d756e92dc45f8b62aea14881661f7 | 127.0.106.129:42025 | HEALTHY | <none> | 0 | 0
Tablet Server Location Summary
Location | Count
----------+---------
<none> | 3
Unusual flags for Tablet Server:
Flag | Value | Tags | Tablet Server
----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
ipki_server_key_size | 768 | experimental | all 3 server(s) checked
local_ip_for_outbound_sockets | 127.0.106.129 | experimental | 127.0.106.129:42025
local_ip_for_outbound_sockets | 127.0.106.130 | experimental | 127.0.106.130:34081
local_ip_for_outbound_sockets | 127.0.106.131 | experimental | 127.0.106.131:44083
never_fsync | true | unsafe,advanced | all 3 server(s) checked
openssl_security_level_override | 0 | unsafe,hidden | all 3 server(s) checked
rpc_server_allow_ephemeral_ports | true | unsafe | all 3 server(s) checked
server_dump_info_format | pb | hidden | all 3 server(s) checked
server_dump_info_path | /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-0/data/info.pb | hidden | 127.0.106.129:42025
server_dump_info_path | /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-1/data/info.pb | hidden | 127.0.106.130:34081
server_dump_info_path | /tmp/dist-test-taskF9ktMs/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1755136369252803-426-0/minicluster-data/ts-2/data/info.pb | hidden | 127.0.106.131:44083
Flags of checked categories for Tablet Server:
Flag | Value | Tablet Server
---------------------+---------------------+-------------------------
builtin_ntp_servers | 127.0.106.148:43375 | all 3 server(s) checked
time_source | builtin | all 3 server(s) checked
Version Summary
Version | Servers
-----------------+-------------------------
1.19.0-SNAPSHOT | all 4 server(s) checked
Tablet Summary
The cluster doesn't have any matching system tables
Summary by table
Name | RF | Status | Total Tablets | Healthy | Recovering | Under-replicated | Unavailable
--------------+----+---------+---------------+---------+------------+------------------+-------------
post_rebuild | 3 | HEALTHY | 1 | 1 | 0 | 0 | 0
Tablet Replica Count Summary
Statistic | Replica Count
----------------+---------------
Minimum | 1
First Quartile | 1
Median | 1
Third Quartile | 1
Maximum | 1
Total Count Summary
| Total Count
----------------+-------------
Masters | 1
Tablet Servers | 3
Tables | 1
Tablets | 1
Replicas | 3
==================
Warnings:
==================
Some masters have unsafe, experimental, or hidden flags set
Some tablet servers have unsafe, experimental, or hidden flags set
OK
I20250814 01:55:49.559262 426 log_verifier.cc:126] Checking tablet 01422312e499447c811b10f9c85d8f22
I20250814 01:55:49.559551 426 log_verifier.cc:177] Verified matching terms for 0 ops in tablet 01422312e499447c811b10f9c85d8f22
I20250814 01:55:49.559697 426 log_verifier.cc:126] Checking tablet 796d3ef2d7ef4702a21d83a0e2c298f3
I20250814 01:55:50.304564 426 log_verifier.cc:177] Verified matching terms for 205 ops in tablet 796d3ef2d7ef4702a21d83a0e2c298f3
I20250814 01:55:50.329483 426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu with pid 6828
I20250814 01:55:50.362413 426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu with pid 6982
I20250814 01:55:50.395298 426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu with pid 7117
I20250814 01:55:50.429652 426 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskF9ktMs/build/tsan/bin/kudu with pid 6758
2025-08-14T01:55:50Z chronyd exiting
[ OK ] IsSecure/SecureClusterAdminCliParamTest.TestRebuildMaster/0 (34622 ms)
[----------] 1 test from IsSecure/SecureClusterAdminCliParamTest (34623 ms total)
[----------] Global test environment tear-down
[==========] 9 tests from 5 test suites ran. (181161 ms total)
[ PASSED ] 8 tests.
[ FAILED ] 1 test, listed below:
[ FAILED ] AdminCliTest.TestRebuildTables
1 FAILED TEST