Note: This is test shard 6 of 8.
[==========] Running 9 tests from 5 test suites.
[----------] Global test environment set-up.
[----------] 5 tests from AdminCliTest
[ RUN ] AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes
WARNING: Logging before InitGoogleLogging() is written to STDERR
I20250811 20:46:01.457835 32747 test_util.cc:276] Using random seed: 49475634
W20250811 20:46:02.661825 32747 thread.cc:641] rpc reactor (reactor) Time spent creating pthread: real 1.155s user 0.438s sys 0.715s
W20250811 20:46:02.662149 32747 thread.cc:608] rpc reactor (reactor) Time spent starting thread: real 1.156s user 0.438s sys 0.715s
I20250811 20:46:02.664160 32747 ts_itest-base.cc:115] Starting cluster with:
I20250811 20:46:02.664312 32747 ts_itest-base.cc:116] --------------
I20250811 20:46:02.664477 32747 ts_itest-base.cc:117] 4 tablet servers
I20250811 20:46:02.664634 32747 ts_itest-base.cc:118] 3 replicas per TS
I20250811 20:46:02.664776 32747 ts_itest-base.cc:119] --------------
2025-08-11T20:46:02Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-08-11T20:46:02Z Disabled control of system clock
I20250811 20:46:02.700910 32747 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
/tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.31.250.254:45355
--webserver_interface=127.31.250.254
--webserver_port=0
--builtin_ntp_servers=127.31.250.212:34519
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.31.250.254:45355 with env {}
W20250811 20:46:03.004655 32761 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:46:03.005193 32761 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 20:46:03.005573 32761 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:46:03.036309 32761 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250811 20:46:03.036628 32761 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 20:46:03.036829 32761 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250811 20:46:03.037019 32761 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250811 20:46:03.073602 32761 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.31.250.212:34519
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.31.250.254:45355
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.31.250.254:45355
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.31.250.254
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision 8e873eb37b157a0cf7cb97cc7690367de9707107
build type FASTDEBUG
built by None at 11 Aug 2025 20:41:23 UTC on 24a791456cd2
build id 7521
TSAN enabled
I20250811 20:46:03.074913 32761 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 20:46:03.076581 32761 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 20:46:03.087949 32767 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:46:03.088214 300 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:46:04.282502 301 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1189 milliseconds
W20250811 20:46:04.283660 32761 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.196s user 0.453s sys 0.742s
W20250811 20:46:04.283665 302 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:46:04.284055 32761 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.197s user 0.454s sys 0.743s
I20250811 20:46:04.284312 32761 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 20:46:04.285365 32761 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 20:46:04.287881 32761 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 20:46:04.289247 32761 hybrid_clock.cc:648] HybridClock initialized: now 1754945164289203 us; error 51 us; skew 500 ppm
I20250811 20:46:04.290026 32761 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 20:46:04.297600 32761 webserver.cc:489] Webserver started at http://127.31.250.254:44833/ using document root <none> and password file <none>
I20250811 20:46:04.298494 32761 fs_manager.cc:362] Metadata directory not provided
I20250811 20:46:04.298722 32761 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 20:46:04.299136 32761 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 20:46:04.303500 32761 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "373ac9196e6146528a87ff49182a735a"
format_stamp: "Formatted at 2025-08-11 20:46:04 on dist-test-slave-4gzk"
I20250811 20:46:04.304533 32761 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "373ac9196e6146528a87ff49182a735a"
format_stamp: "Formatted at 2025-08-11 20:46:04 on dist-test-slave-4gzk"
I20250811 20:46:04.312335 32761 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.008s sys 0.000s
I20250811 20:46:04.318070 309 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 20:46:04.319296 32761 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.001s sys 0.001s
I20250811 20:46:04.319615 32761 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
uuid: "373ac9196e6146528a87ff49182a735a"
format_stamp: "Formatted at 2025-08-11 20:46:04 on dist-test-slave-4gzk"
I20250811 20:46:04.319963 32761 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 20:46:04.387116 32761 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 20:46:04.388556 32761 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 20:46:04.389025 32761 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 20:46:04.457785 32761 rpc_server.cc:307] RPC server started. Bound to: 127.31.250.254:45355
I20250811 20:46:04.457859 360 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.31.250.254:45355 every 8 connection(s)
I20250811 20:46:04.460505 32761 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250811 20:46:04.465407 361 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 20:46:04.471774 32747 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu as pid 32761
I20250811 20:46:04.472334 32747 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250811 20:46:04.486300 361 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 373ac9196e6146528a87ff49182a735a: Bootstrap starting.
I20250811 20:46:04.491483 361 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 373ac9196e6146528a87ff49182a735a: Neither blocks nor log segments found. Creating new log.
I20250811 20:46:04.493110 361 log.cc:826] T 00000000000000000000000000000000 P 373ac9196e6146528a87ff49182a735a: Log is configured to *not* fsync() on all Append() calls
I20250811 20:46:04.498353 361 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 373ac9196e6146528a87ff49182a735a: No bootstrap required, opened a new log
I20250811 20:46:04.518994 361 raft_consensus.cc:357] T 00000000000000000000000000000000 P 373ac9196e6146528a87ff49182a735a [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "373ac9196e6146528a87ff49182a735a" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 45355 } }
I20250811 20:46:04.519644 361 raft_consensus.cc:383] T 00000000000000000000000000000000 P 373ac9196e6146528a87ff49182a735a [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 20:46:04.519891 361 raft_consensus.cc:738] T 00000000000000000000000000000000 P 373ac9196e6146528a87ff49182a735a [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 373ac9196e6146528a87ff49182a735a, State: Initialized, Role: FOLLOWER
I20250811 20:46:04.520576 361 consensus_queue.cc:260] T 00000000000000000000000000000000 P 373ac9196e6146528a87ff49182a735a [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "373ac9196e6146528a87ff49182a735a" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 45355 } }
I20250811 20:46:04.521070 361 raft_consensus.cc:397] T 00000000000000000000000000000000 P 373ac9196e6146528a87ff49182a735a [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250811 20:46:04.521323 361 raft_consensus.cc:491] T 00000000000000000000000000000000 P 373ac9196e6146528a87ff49182a735a [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250811 20:46:04.521632 361 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 373ac9196e6146528a87ff49182a735a [term 0 FOLLOWER]: Advancing to term 1
I20250811 20:46:04.525596 361 raft_consensus.cc:513] T 00000000000000000000000000000000 P 373ac9196e6146528a87ff49182a735a [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "373ac9196e6146528a87ff49182a735a" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 45355 } }
I20250811 20:46:04.526257 361 leader_election.cc:304] T 00000000000000000000000000000000 P 373ac9196e6146528a87ff49182a735a [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 373ac9196e6146528a87ff49182a735a; no voters:
I20250811 20:46:04.528345 361 leader_election.cc:290] T 00000000000000000000000000000000 P 373ac9196e6146528a87ff49182a735a [CANDIDATE]: Term 1 election: Requested vote from peers
I20250811 20:46:04.528769 366 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 373ac9196e6146528a87ff49182a735a [term 1 FOLLOWER]: Leader election won for term 1
I20250811 20:46:04.531200 366 raft_consensus.cc:695] T 00000000000000000000000000000000 P 373ac9196e6146528a87ff49182a735a [term 1 LEADER]: Becoming Leader. State: Replica: 373ac9196e6146528a87ff49182a735a, State: Running, Role: LEADER
I20250811 20:46:04.531981 366 consensus_queue.cc:237] T 00000000000000000000000000000000 P 373ac9196e6146528a87ff49182a735a [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "373ac9196e6146528a87ff49182a735a" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 45355 } }
I20250811 20:46:04.532909 361 sys_catalog.cc:564] T 00000000000000000000000000000000 P 373ac9196e6146528a87ff49182a735a [sys.catalog]: configured and running, proceeding with master startup.
I20250811 20:46:04.539227 367 sys_catalog.cc:455] T 00000000000000000000000000000000 P 373ac9196e6146528a87ff49182a735a [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "373ac9196e6146528a87ff49182a735a" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "373ac9196e6146528a87ff49182a735a" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 45355 } } }
I20250811 20:46:04.539703 368 sys_catalog.cc:455] T 00000000000000000000000000000000 P 373ac9196e6146528a87ff49182a735a [sys.catalog]: SysCatalogTable state changed. Reason: New leader 373ac9196e6146528a87ff49182a735a. Latest consensus state: current_term: 1 leader_uuid: "373ac9196e6146528a87ff49182a735a" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "373ac9196e6146528a87ff49182a735a" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 45355 } } }
I20250811 20:46:04.540247 367 sys_catalog.cc:458] T 00000000000000000000000000000000 P 373ac9196e6146528a87ff49182a735a [sys.catalog]: This master's current role is: LEADER
I20250811 20:46:04.540539 368 sys_catalog.cc:458] T 00000000000000000000000000000000 P 373ac9196e6146528a87ff49182a735a [sys.catalog]: This master's current role is: LEADER
I20250811 20:46:04.549041 373 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250811 20:46:04.562480 373 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250811 20:46:04.579574 373 catalog_manager.cc:1349] Generated new cluster ID: 18593a7aac8c429ba46b8e0d92a6d510
I20250811 20:46:04.579907 373 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250811 20:46:04.610196 373 catalog_manager.cc:1372] Generated new certificate authority record
I20250811 20:46:04.612200 373 catalog_manager.cc:1506] Loading token signing keys...
I20250811 20:46:04.629498 373 catalog_manager.cc:5955] T 00000000000000000000000000000000 P 373ac9196e6146528a87ff49182a735a: Generated new TSK 0
I20250811 20:46:04.630632 373 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250811 20:46:04.642388 32747 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
/tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.31.250.193:0
--local_ip_for_outbound_sockets=127.31.250.193
--webserver_interface=127.31.250.193
--webserver_port=0
--tserver_master_addrs=127.31.250.254:45355
--builtin_ntp_servers=127.31.250.212:34519
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
W20250811 20:46:04.940055 385 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:46:04.940593 385 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 20:46:04.941085 385 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:46:04.973150 385 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 20:46:04.974009 385 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.31.250.193
I20250811 20:46:05.008227 385 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.31.250.212:34519
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.31.250.193:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.31.250.193
--webserver_port=0
--tserver_master_addrs=127.31.250.254:45355
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.31.250.193
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8e873eb37b157a0cf7cb97cc7690367de9707107
build type FASTDEBUG
built by None at 11 Aug 2025 20:41:23 UTC on 24a791456cd2
build id 7521
TSAN enabled
I20250811 20:46:05.009493 385 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 20:46:05.011003 385 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 20:46:05.023978 391 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:46:06.426967 390 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 385
W20250811 20:46:06.741343 390 kernel_stack_watchdog.cc:198] Thread 385 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:642 for 400ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250811 20:46:06.741782 393 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1715 milliseconds
W20250811 20:46:05.024936 392 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:46:06.742089 385 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.718s user 0.629s sys 0.965s
W20250811 20:46:06.742863 385 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.718s user 0.629s sys 0.965s
W20250811 20:46:06.743274 394 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 20:46:06.743237 385 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 20:46:06.747699 385 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 20:46:06.750278 385 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 20:46:06.751756 385 hybrid_clock.cc:648] HybridClock initialized: now 1754945166751683 us; error 76 us; skew 500 ppm
I20250811 20:46:06.752799 385 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 20:46:06.760089 385 webserver.cc:489] Webserver started at http://127.31.250.193:40447/ using document root <none> and password file <none>
I20250811 20:46:06.761351 385 fs_manager.cc:362] Metadata directory not provided
I20250811 20:46:06.761637 385 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 20:46:06.762202 385 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 20:46:06.769840 385 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "f3fac04926ac442cbde92c6cdec496bc"
format_stamp: "Formatted at 2025-08-11 20:46:06 on dist-test-slave-4gzk"
I20250811 20:46:06.771381 385 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "f3fac04926ac442cbde92c6cdec496bc"
format_stamp: "Formatted at 2025-08-11 20:46:06 on dist-test-slave-4gzk"
I20250811 20:46:06.780848 385 fs_manager.cc:696] Time spent creating directory manager: real 0.009s user 0.007s sys 0.001s
I20250811 20:46:06.788583 401 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 20:46:06.789839 385 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.002s
I20250811 20:46:06.790232 385 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "f3fac04926ac442cbde92c6cdec496bc"
format_stamp: "Formatted at 2025-08-11 20:46:06 on dist-test-slave-4gzk"
I20250811 20:46:06.790674 385 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 20:46:06.849406 385 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 20:46:06.851006 385 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 20:46:06.851445 385 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 20:46:06.854010 385 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 20:46:06.858011 385 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 20:46:06.858199 385 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250811 20:46:06.858419 385 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 20:46:06.858665 385 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250811 20:46:07.015690 385 rpc_server.cc:307] RPC server started. Bound to: 127.31.250.193:43471
I20250811 20:46:07.015794 513 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.31.250.193:43471 every 8 connection(s)
I20250811 20:46:07.018177 385 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250811 20:46:07.024689 32747 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu as pid 385
I20250811 20:46:07.025238 32747 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250811 20:46:07.032384 32747 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
/tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.31.250.194:0
--local_ip_for_outbound_sockets=127.31.250.194
--webserver_interface=127.31.250.194
--webserver_port=0
--tserver_master_addrs=127.31.250.254:45355
--builtin_ntp_servers=127.31.250.212:34519
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250811 20:46:07.042423 514 heartbeater.cc:344] Connected to a master server at 127.31.250.254:45355
I20250811 20:46:07.042850 514 heartbeater.cc:461] Registering TS with master...
I20250811 20:46:07.043864 514 heartbeater.cc:507] Master 127.31.250.254:45355 requested a full tablet report, sending...
I20250811 20:46:07.046142 326 ts_manager.cc:194] Registered new tserver with Master: f3fac04926ac442cbde92c6cdec496bc (127.31.250.193:43471)
I20250811 20:46:07.047930 326 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.31.250.193:52225
W20250811 20:46:07.337981 518 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:46:07.338486 518 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 20:46:07.338979 518 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:46:07.369576 518 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 20:46:07.370414 518 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.31.250.194
I20250811 20:46:07.404186 518 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.31.250.212:34519
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.31.250.194:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.31.250.194
--webserver_port=0
--tserver_master_addrs=127.31.250.254:45355
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.31.250.194
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8e873eb37b157a0cf7cb97cc7690367de9707107
build type FASTDEBUG
built by None at 11 Aug 2025 20:41:23 UTC on 24a791456cd2
build id 7521
TSAN enabled
I20250811 20:46:07.405450 518 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 20:46:07.406932 518 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 20:46:07.417536 524 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 20:46:08.051292 514 heartbeater.cc:499] Master 127.31.250.254:45355 was elected leader, sending a full tablet report...
W20250811 20:46:07.419026 525 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:46:08.805554 527 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:46:08.808048 526 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1383 milliseconds
W20250811 20:46:08.808140 518 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.390s user 0.410s sys 0.965s
W20250811 20:46:08.808506 518 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.390s user 0.410s sys 0.966s
I20250811 20:46:08.808808 518 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 20:46:08.809808 518 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 20:46:08.812192 518 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 20:46:08.813566 518 hybrid_clock.cc:648] HybridClock initialized: now 1754945168813533 us; error 42 us; skew 500 ppm
I20250811 20:46:08.814322 518 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 20:46:08.822220 518 webserver.cc:489] Webserver started at http://127.31.250.194:35993/ using document root <none> and password file <none>
I20250811 20:46:08.823343 518 fs_manager.cc:362] Metadata directory not provided
I20250811 20:46:08.823572 518 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 20:46:08.824028 518 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 20:46:08.828472 518 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data/instance:
uuid: "a540b3ba92124aa1a30796dfd6a829ba"
format_stamp: "Formatted at 2025-08-11 20:46:08 on dist-test-slave-4gzk"
I20250811 20:46:08.829586 518 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal/instance:
uuid: "a540b3ba92124aa1a30796dfd6a829ba"
format_stamp: "Formatted at 2025-08-11 20:46:08 on dist-test-slave-4gzk"
I20250811 20:46:08.838006 518 fs_manager.cc:696] Time spent creating directory manager: real 0.008s user 0.005s sys 0.005s
I20250811 20:46:08.844120 534 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 20:46:08.845341 518 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.004s sys 0.000s
I20250811 20:46:08.845677 518 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "a540b3ba92124aa1a30796dfd6a829ba"
format_stamp: "Formatted at 2025-08-11 20:46:08 on dist-test-slave-4gzk"
I20250811 20:46:08.846030 518 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 20:46:08.914358 518 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 20:46:08.915902 518 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 20:46:08.916297 518 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 20:46:08.918617 518 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 20:46:08.922505 518 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 20:46:08.922752 518 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250811 20:46:08.922997 518 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 20:46:08.923166 518 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250811 20:46:09.061617 518 rpc_server.cc:307] RPC server started. Bound to: 127.31.250.194:32787
I20250811 20:46:09.061725 646 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.31.250.194:32787 every 8 connection(s)
I20250811 20:46:09.064263 518 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250811 20:46:09.072953 32747 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu as pid 518
I20250811 20:46:09.073364 32747 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal/instance
I20250811 20:46:09.078874 32747 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
/tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.31.250.195:0
--local_ip_for_outbound_sockets=127.31.250.195
--webserver_interface=127.31.250.195
--webserver_port=0
--tserver_master_addrs=127.31.250.254:45355
--builtin_ntp_servers=127.31.250.212:34519
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250811 20:46:09.084484 647 heartbeater.cc:344] Connected to a master server at 127.31.250.254:45355
I20250811 20:46:09.084923 647 heartbeater.cc:461] Registering TS with master...
I20250811 20:46:09.085912 647 heartbeater.cc:507] Master 127.31.250.254:45355 requested a full tablet report, sending...
I20250811 20:46:09.088052 326 ts_manager.cc:194] Registered new tserver with Master: a540b3ba92124aa1a30796dfd6a829ba (127.31.250.194:32787)
I20250811 20:46:09.089246 326 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.31.250.194:34941
W20250811 20:46:09.378520 651 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:46:09.379076 651 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 20:46:09.379571 651 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:46:09.411401 651 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 20:46:09.412274 651 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.31.250.195
I20250811 20:46:09.447283 651 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.31.250.212:34519
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.31.250.195:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.31.250.195
--webserver_port=0
--tserver_master_addrs=127.31.250.254:45355
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.31.250.195
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8e873eb37b157a0cf7cb97cc7690367de9707107
build type FASTDEBUG
built by None at 11 Aug 2025 20:41:23 UTC on 24a791456cd2
build id 7521
TSAN enabled
I20250811 20:46:09.448572 651 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 20:46:09.450174 651 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 20:46:09.462175 657 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 20:46:10.092319 647 heartbeater.cc:499] Master 127.31.250.254:45355 was elected leader, sending a full tablet report...
W20250811 20:46:09.464124 658 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:46:09.469110 660 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:46:10.646518 659 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1179 milliseconds
I20250811 20:46:10.646616 651 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 20:46:10.647787 651 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 20:46:10.650413 651 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 20:46:10.651815 651 hybrid_clock.cc:648] HybridClock initialized: now 1754945170651775 us; error 57 us; skew 500 ppm
I20250811 20:46:10.652588 651 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 20:46:10.658952 651 webserver.cc:489] Webserver started at http://127.31.250.195:44725/ using document root <none> and password file <none>
I20250811 20:46:10.659904 651 fs_manager.cc:362] Metadata directory not provided
I20250811 20:46:10.660110 651 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 20:46:10.660612 651 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 20:46:10.665032 651 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data/instance:
uuid: "cdbb30b5688c4bed8da0f52c9b5a70a3"
format_stamp: "Formatted at 2025-08-11 20:46:10 on dist-test-slave-4gzk"
I20250811 20:46:10.666110 651 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal/instance:
uuid: "cdbb30b5688c4bed8da0f52c9b5a70a3"
format_stamp: "Formatted at 2025-08-11 20:46:10 on dist-test-slave-4gzk"
I20250811 20:46:10.673287 651 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.006s sys 0.003s
I20250811 20:46:10.678557 667 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 20:46:10.679529 651 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.003s sys 0.000s
I20250811 20:46:10.679808 651 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "cdbb30b5688c4bed8da0f52c9b5a70a3"
format_stamp: "Formatted at 2025-08-11 20:46:10 on dist-test-slave-4gzk"
I20250811 20:46:10.680087 651 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 20:46:10.732168 651 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 20:46:10.733599 651 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 20:46:10.734009 651 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 20:46:10.736536 651 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 20:46:10.740468 651 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 20:46:10.740657 651 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250811 20:46:10.740931 651 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 20:46:10.741094 651 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250811 20:46:10.874513 651 rpc_server.cc:307] RPC server started. Bound to: 127.31.250.195:39555
I20250811 20:46:10.874614 779 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.31.250.195:39555 every 8 connection(s)
I20250811 20:46:10.877002 651 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250811 20:46:10.881928 32747 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu as pid 651
I20250811 20:46:10.882519 32747 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal/instance
I20250811 20:46:10.892151 32747 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
/tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-3/wal
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-3/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-3/logs
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-3/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.31.250.196:0
--local_ip_for_outbound_sockets=127.31.250.196
--webserver_interface=127.31.250.196
--webserver_port=0
--tserver_master_addrs=127.31.250.254:45355
--builtin_ntp_servers=127.31.250.212:34519
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250811 20:46:10.929031 780 heartbeater.cc:344] Connected to a master server at 127.31.250.254:45355
I20250811 20:46:10.929427 780 heartbeater.cc:461] Registering TS with master...
I20250811 20:46:10.930382 780 heartbeater.cc:507] Master 127.31.250.254:45355 requested a full tablet report, sending...
I20250811 20:46:10.932391 326 ts_manager.cc:194] Registered new tserver with Master: cdbb30b5688c4bed8da0f52c9b5a70a3 (127.31.250.195:39555)
I20250811 20:46:10.933555 326 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.31.250.195:34415
W20250811 20:46:11.210337 783 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:46:11.210846 783 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 20:46:11.211561 783 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:46:11.246305 783 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 20:46:11.247097 783 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.31.250.196
I20250811 20:46:11.282423 783 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.31.250.212:34519
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-3/data
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-3/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.31.250.196:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-3/data/info.pb
--webserver_interface=127.31.250.196
--webserver_port=0
--tserver_master_addrs=127.31.250.254:45355
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.31.250.196
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-3/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8e873eb37b157a0cf7cb97cc7690367de9707107
build type FASTDEBUG
built by None at 11 Aug 2025 20:41:23 UTC on 24a791456cd2
build id 7521
TSAN enabled
I20250811 20:46:11.283788 783 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 20:46:11.285290 783 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 20:46:11.296375 790 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 20:46:11.936852 780 heartbeater.cc:499] Master 127.31.250.254:45355 was elected leader, sending a full tablet report...
W20250811 20:46:11.296908 791 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:46:12.538017 793 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:46:12.540707 792 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1238 milliseconds
I20250811 20:46:12.540838 783 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 20:46:12.541931 783 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 20:46:12.544554 783 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 20:46:12.545955 783 hybrid_clock.cc:648] HybridClock initialized: now 1754945172545908 us; error 54 us; skew 500 ppm
I20250811 20:46:12.546741 783 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 20:46:12.553854 783 webserver.cc:489] Webserver started at http://127.31.250.196:42061/ using document root <none> and password file <none>
I20250811 20:46:12.555020 783 fs_manager.cc:362] Metadata directory not provided
I20250811 20:46:12.555305 783 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 20:46:12.555847 783 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 20:46:12.561866 783 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-3/data/instance:
uuid: "809f3a74f4b34f84820235c7d19deb76"
format_stamp: "Formatted at 2025-08-11 20:46:12 on dist-test-slave-4gzk"
I20250811 20:46:12.563337 783 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-3/wal/instance:
uuid: "809f3a74f4b34f84820235c7d19deb76"
format_stamp: "Formatted at 2025-08-11 20:46:12 on dist-test-slave-4gzk"
I20250811 20:46:12.571990 783 fs_manager.cc:696] Time spent creating directory manager: real 0.008s user 0.006s sys 0.001s
I20250811 20:46:12.578855 801 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 20:46:12.579874 783 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.002s sys 0.001s
I20250811 20:46:12.580183 783 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-3/data,/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-3/wal
uuid: "809f3a74f4b34f84820235c7d19deb76"
format_stamp: "Formatted at 2025-08-11 20:46:12 on dist-test-slave-4gzk"
I20250811 20:46:12.580488 783 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-3/wal
metadata directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-3/wal
1 data directories: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-3/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 20:46:12.628953 783 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 20:46:12.630389 783 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 20:46:12.630818 783 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 20:46:12.633289 783 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 20:46:12.637391 783 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 20:46:12.637581 783 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250811 20:46:12.637768 783 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 20:46:12.637897 783 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250811 20:46:12.771512 783 rpc_server.cc:307] RPC server started. Bound to: 127.31.250.196:36001
I20250811 20:46:12.771658 913 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.31.250.196:36001 every 8 connection(s)
I20250811 20:46:12.774838 783 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-3/data/info.pb
I20250811 20:46:12.781474 32747 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu as pid 783
I20250811 20:46:12.782024 32747 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-3/wal/instance
I20250811 20:46:12.794955 914 heartbeater.cc:344] Connected to a master server at 127.31.250.254:45355
I20250811 20:46:12.795397 914 heartbeater.cc:461] Registering TS with master...
I20250811 20:46:12.796341 914 heartbeater.cc:507] Master 127.31.250.254:45355 requested a full tablet report, sending...
I20250811 20:46:12.798213 326 ts_manager.cc:194] Registered new tserver with Master: 809f3a74f4b34f84820235c7d19deb76 (127.31.250.196:36001)
I20250811 20:46:12.799551 326 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.31.250.196:52927
I20250811 20:46:12.802254 32747 external_mini_cluster.cc:949] 4 TS(s) registered with all masters
I20250811 20:46:12.840574 326 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:46636:
name: "TestTable"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
owner: "alice"
I20250811 20:46:12.909132 582 tablet_service.cc:1468] Processing CreateTablet for tablet a07e39bc140f46d3ace5ba69d8d294a5 (DEFAULT_TABLE table=TestTable [id=dada49b4de0844a9aaf9cc41894512b7]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 20:46:12.911144 582 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet a07e39bc140f46d3ace5ba69d8d294a5. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 20:46:12.911016 849 tablet_service.cc:1468] Processing CreateTablet for tablet a07e39bc140f46d3ace5ba69d8d294a5 (DEFAULT_TABLE table=TestTable [id=dada49b4de0844a9aaf9cc41894512b7]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 20:46:12.912804 849 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet a07e39bc140f46d3ace5ba69d8d294a5. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 20:46:12.912545 715 tablet_service.cc:1468] Processing CreateTablet for tablet a07e39bc140f46d3ace5ba69d8d294a5 (DEFAULT_TABLE table=TestTable [id=dada49b4de0844a9aaf9cc41894512b7]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 20:46:12.914248 715 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet a07e39bc140f46d3ace5ba69d8d294a5. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 20:46:12.932607 933 tablet_bootstrap.cc:492] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba: Bootstrap starting.
I20250811 20:46:12.938447 933 tablet_bootstrap.cc:654] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba: Neither blocks nor log segments found. Creating new log.
I20250811 20:46:12.939507 934 tablet_bootstrap.cc:492] T a07e39bc140f46d3ace5ba69d8d294a5 P 809f3a74f4b34f84820235c7d19deb76: Bootstrap starting.
I20250811 20:46:12.940939 933 log.cc:826] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba: Log is configured to *not* fsync() on all Append() calls
I20250811 20:46:12.942078 935 tablet_bootstrap.cc:492] T a07e39bc140f46d3ace5ba69d8d294a5 P cdbb30b5688c4bed8da0f52c9b5a70a3: Bootstrap starting.
I20250811 20:46:12.948922 934 tablet_bootstrap.cc:654] T a07e39bc140f46d3ace5ba69d8d294a5 P 809f3a74f4b34f84820235c7d19deb76: Neither blocks nor log segments found. Creating new log.
I20250811 20:46:12.949322 935 tablet_bootstrap.cc:654] T a07e39bc140f46d3ace5ba69d8d294a5 P cdbb30b5688c4bed8da0f52c9b5a70a3: Neither blocks nor log segments found. Creating new log.
I20250811 20:46:12.950322 933 tablet_bootstrap.cc:492] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba: No bootstrap required, opened a new log
I20250811 20:46:12.950821 933 ts_tablet_manager.cc:1397] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba: Time spent bootstrapping tablet: real 0.019s user 0.006s sys 0.007s
I20250811 20:46:12.951025 935 log.cc:826] T a07e39bc140f46d3ace5ba69d8d294a5 P cdbb30b5688c4bed8da0f52c9b5a70a3: Log is configured to *not* fsync() on all Append() calls
I20250811 20:46:12.951154 934 log.cc:826] T a07e39bc140f46d3ace5ba69d8d294a5 P 809f3a74f4b34f84820235c7d19deb76: Log is configured to *not* fsync() on all Append() calls
I20250811 20:46:12.958935 934 tablet_bootstrap.cc:492] T a07e39bc140f46d3ace5ba69d8d294a5 P 809f3a74f4b34f84820235c7d19deb76: No bootstrap required, opened a new log
I20250811 20:46:12.959167 935 tablet_bootstrap.cc:492] T a07e39bc140f46d3ace5ba69d8d294a5 P cdbb30b5688c4bed8da0f52c9b5a70a3: No bootstrap required, opened a new log
I20250811 20:46:12.959347 934 ts_tablet_manager.cc:1397] T a07e39bc140f46d3ace5ba69d8d294a5 P 809f3a74f4b34f84820235c7d19deb76: Time spent bootstrapping tablet: real 0.020s user 0.014s sys 0.003s
I20250811 20:46:12.959650 935 ts_tablet_manager.cc:1397] T a07e39bc140f46d3ace5ba69d8d294a5 P cdbb30b5688c4bed8da0f52c9b5a70a3: Time spent bootstrapping tablet: real 0.018s user 0.013s sys 0.004s
I20250811 20:46:12.977921 934 raft_consensus.cc:357] T a07e39bc140f46d3ace5ba69d8d294a5 P 809f3a74f4b34f84820235c7d19deb76 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a540b3ba92124aa1a30796dfd6a829ba" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 32787 } } peers { permanent_uuid: "809f3a74f4b34f84820235c7d19deb76" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 36001 } } peers { permanent_uuid: "cdbb30b5688c4bed8da0f52c9b5a70a3" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 39555 } }
I20250811 20:46:12.977944 933 raft_consensus.cc:357] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a540b3ba92124aa1a30796dfd6a829ba" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 32787 } } peers { permanent_uuid: "809f3a74f4b34f84820235c7d19deb76" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 36001 } } peers { permanent_uuid: "cdbb30b5688c4bed8da0f52c9b5a70a3" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 39555 } }
I20250811 20:46:12.978686 934 raft_consensus.cc:383] T a07e39bc140f46d3ace5ba69d8d294a5 P 809f3a74f4b34f84820235c7d19deb76 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 20:46:12.978832 933 raft_consensus.cc:383] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 20:46:12.978967 934 raft_consensus.cc:738] T a07e39bc140f46d3ace5ba69d8d294a5 P 809f3a74f4b34f84820235c7d19deb76 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 809f3a74f4b34f84820235c7d19deb76, State: Initialized, Role: FOLLOWER
I20250811 20:46:12.979153 933 raft_consensus.cc:738] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: a540b3ba92124aa1a30796dfd6a829ba, State: Initialized, Role: FOLLOWER
I20250811 20:46:12.979773 934 consensus_queue.cc:260] T a07e39bc140f46d3ace5ba69d8d294a5 P 809f3a74f4b34f84820235c7d19deb76 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a540b3ba92124aa1a30796dfd6a829ba" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 32787 } } peers { permanent_uuid: "809f3a74f4b34f84820235c7d19deb76" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 36001 } } peers { permanent_uuid: "cdbb30b5688c4bed8da0f52c9b5a70a3" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 39555 } }
I20250811 20:46:12.980091 933 consensus_queue.cc:260] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a540b3ba92124aa1a30796dfd6a829ba" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 32787 } } peers { permanent_uuid: "809f3a74f4b34f84820235c7d19deb76" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 36001 } } peers { permanent_uuid: "cdbb30b5688c4bed8da0f52c9b5a70a3" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 39555 } }
I20250811 20:46:12.983017 914 heartbeater.cc:499] Master 127.31.250.254:45355 was elected leader, sending a full tablet report...
I20250811 20:46:12.983657 934 ts_tablet_manager.cc:1428] T a07e39bc140f46d3ace5ba69d8d294a5 P 809f3a74f4b34f84820235c7d19deb76: Time spent starting tablet: real 0.024s user 0.023s sys 0.000s
I20250811 20:46:12.989431 935 raft_consensus.cc:357] T a07e39bc140f46d3ace5ba69d8d294a5 P cdbb30b5688c4bed8da0f52c9b5a70a3 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a540b3ba92124aa1a30796dfd6a829ba" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 32787 } } peers { permanent_uuid: "809f3a74f4b34f84820235c7d19deb76" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 36001 } } peers { permanent_uuid: "cdbb30b5688c4bed8da0f52c9b5a70a3" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 39555 } }
I20250811 20:46:12.990229 933 ts_tablet_manager.cc:1428] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba: Time spent starting tablet: real 0.039s user 0.033s sys 0.006s
I20250811 20:46:12.990451 935 raft_consensus.cc:383] T a07e39bc140f46d3ace5ba69d8d294a5 P cdbb30b5688c4bed8da0f52c9b5a70a3 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 20:46:12.990749 935 raft_consensus.cc:738] T a07e39bc140f46d3ace5ba69d8d294a5 P cdbb30b5688c4bed8da0f52c9b5a70a3 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: cdbb30b5688c4bed8da0f52c9b5a70a3, State: Initialized, Role: FOLLOWER
I20250811 20:46:12.991691 935 consensus_queue.cc:260] T a07e39bc140f46d3ace5ba69d8d294a5 P cdbb30b5688c4bed8da0f52c9b5a70a3 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a540b3ba92124aa1a30796dfd6a829ba" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 32787 } } peers { permanent_uuid: "809f3a74f4b34f84820235c7d19deb76" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 36001 } } peers { permanent_uuid: "cdbb30b5688c4bed8da0f52c9b5a70a3" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 39555 } }
I20250811 20:46:12.998167 935 ts_tablet_manager.cc:1428] T a07e39bc140f46d3ace5ba69d8d294a5 P cdbb30b5688c4bed8da0f52c9b5a70a3: Time spent starting tablet: real 0.038s user 0.032s sys 0.001s
W20250811 20:46:13.029795 915 tablet.cc:2378] T a07e39bc140f46d3ace5ba69d8d294a5 P 809f3a74f4b34f84820235c7d19deb76: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250811 20:46:13.074954 648 tablet.cc:2378] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250811 20:46:13.155912 781 tablet.cc:2378] T a07e39bc140f46d3ace5ba69d8d294a5 P cdbb30b5688c4bed8da0f52c9b5a70a3: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250811 20:46:13.168342 942 raft_consensus.cc:491] T a07e39bc140f46d3ace5ba69d8d294a5 P cdbb30b5688c4bed8da0f52c9b5a70a3 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250811 20:46:13.168843 942 raft_consensus.cc:513] T a07e39bc140f46d3ace5ba69d8d294a5 P cdbb30b5688c4bed8da0f52c9b5a70a3 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a540b3ba92124aa1a30796dfd6a829ba" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 32787 } } peers { permanent_uuid: "809f3a74f4b34f84820235c7d19deb76" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 36001 } } peers { permanent_uuid: "cdbb30b5688c4bed8da0f52c9b5a70a3" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 39555 } }
I20250811 20:46:13.171203 942 leader_election.cc:290] T a07e39bc140f46d3ace5ba69d8d294a5 P cdbb30b5688c4bed8da0f52c9b5a70a3 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers a540b3ba92124aa1a30796dfd6a829ba (127.31.250.194:32787), 809f3a74f4b34f84820235c7d19deb76 (127.31.250.196:36001)
I20250811 20:46:13.183545 602 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "a07e39bc140f46d3ace5ba69d8d294a5" candidate_uuid: "cdbb30b5688c4bed8da0f52c9b5a70a3" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "a540b3ba92124aa1a30796dfd6a829ba" is_pre_election: true
I20250811 20:46:13.183735 869 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "a07e39bc140f46d3ace5ba69d8d294a5" candidate_uuid: "cdbb30b5688c4bed8da0f52c9b5a70a3" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "809f3a74f4b34f84820235c7d19deb76" is_pre_election: true
I20250811 20:46:13.184275 602 raft_consensus.cc:2466] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate cdbb30b5688c4bed8da0f52c9b5a70a3 in term 0.
I20250811 20:46:13.184399 869 raft_consensus.cc:2466] T a07e39bc140f46d3ace5ba69d8d294a5 P 809f3a74f4b34f84820235c7d19deb76 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate cdbb30b5688c4bed8da0f52c9b5a70a3 in term 0.
I20250811 20:46:13.185438 671 leader_election.cc:304] T a07e39bc140f46d3ace5ba69d8d294a5 P cdbb30b5688c4bed8da0f52c9b5a70a3 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: a540b3ba92124aa1a30796dfd6a829ba, cdbb30b5688c4bed8da0f52c9b5a70a3; no voters:
I20250811 20:46:13.186138 942 raft_consensus.cc:2802] T a07e39bc140f46d3ace5ba69d8d294a5 P cdbb30b5688c4bed8da0f52c9b5a70a3 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250811 20:46:13.186405 942 raft_consensus.cc:491] T a07e39bc140f46d3ace5ba69d8d294a5 P cdbb30b5688c4bed8da0f52c9b5a70a3 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250811 20:46:13.186652 942 raft_consensus.cc:3058] T a07e39bc140f46d3ace5ba69d8d294a5 P cdbb30b5688c4bed8da0f52c9b5a70a3 [term 0 FOLLOWER]: Advancing to term 1
I20250811 20:46:13.190953 942 raft_consensus.cc:513] T a07e39bc140f46d3ace5ba69d8d294a5 P cdbb30b5688c4bed8da0f52c9b5a70a3 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a540b3ba92124aa1a30796dfd6a829ba" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 32787 } } peers { permanent_uuid: "809f3a74f4b34f84820235c7d19deb76" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 36001 } } peers { permanent_uuid: "cdbb30b5688c4bed8da0f52c9b5a70a3" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 39555 } }
I20250811 20:46:13.192281 942 leader_election.cc:290] T a07e39bc140f46d3ace5ba69d8d294a5 P cdbb30b5688c4bed8da0f52c9b5a70a3 [CANDIDATE]: Term 1 election: Requested vote from peers a540b3ba92124aa1a30796dfd6a829ba (127.31.250.194:32787), 809f3a74f4b34f84820235c7d19deb76 (127.31.250.196:36001)
I20250811 20:46:13.192960 602 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "a07e39bc140f46d3ace5ba69d8d294a5" candidate_uuid: "cdbb30b5688c4bed8da0f52c9b5a70a3" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "a540b3ba92124aa1a30796dfd6a829ba"
I20250811 20:46:13.193141 869 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "a07e39bc140f46d3ace5ba69d8d294a5" candidate_uuid: "cdbb30b5688c4bed8da0f52c9b5a70a3" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "809f3a74f4b34f84820235c7d19deb76"
I20250811 20:46:13.193353 602 raft_consensus.cc:3058] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [term 0 FOLLOWER]: Advancing to term 1
I20250811 20:46:13.193540 869 raft_consensus.cc:3058] T a07e39bc140f46d3ace5ba69d8d294a5 P 809f3a74f4b34f84820235c7d19deb76 [term 0 FOLLOWER]: Advancing to term 1
I20250811 20:46:13.197953 602 raft_consensus.cc:2466] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate cdbb30b5688c4bed8da0f52c9b5a70a3 in term 1.
I20250811 20:46:13.198112 869 raft_consensus.cc:2466] T a07e39bc140f46d3ace5ba69d8d294a5 P 809f3a74f4b34f84820235c7d19deb76 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate cdbb30b5688c4bed8da0f52c9b5a70a3 in term 1.
I20250811 20:46:13.198849 671 leader_election.cc:304] T a07e39bc140f46d3ace5ba69d8d294a5 P cdbb30b5688c4bed8da0f52c9b5a70a3 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: a540b3ba92124aa1a30796dfd6a829ba, cdbb30b5688c4bed8da0f52c9b5a70a3; no voters:
I20250811 20:46:13.199522 942 raft_consensus.cc:2802] T a07e39bc140f46d3ace5ba69d8d294a5 P cdbb30b5688c4bed8da0f52c9b5a70a3 [term 1 FOLLOWER]: Leader election won for term 1
I20250811 20:46:13.201100 942 raft_consensus.cc:695] T a07e39bc140f46d3ace5ba69d8d294a5 P cdbb30b5688c4bed8da0f52c9b5a70a3 [term 1 LEADER]: Becoming Leader. State: Replica: cdbb30b5688c4bed8da0f52c9b5a70a3, State: Running, Role: LEADER
I20250811 20:46:13.201799 942 consensus_queue.cc:237] T a07e39bc140f46d3ace5ba69d8d294a5 P cdbb30b5688c4bed8da0f52c9b5a70a3 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a540b3ba92124aa1a30796dfd6a829ba" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 32787 } } peers { permanent_uuid: "809f3a74f4b34f84820235c7d19deb76" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 36001 } } peers { permanent_uuid: "cdbb30b5688c4bed8da0f52c9b5a70a3" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 39555 } }
I20250811 20:46:13.212673 324 catalog_manager.cc:5582] T a07e39bc140f46d3ace5ba69d8d294a5 P cdbb30b5688c4bed8da0f52c9b5a70a3 reported cstate change: term changed from 0 to 1, leader changed from <none> to cdbb30b5688c4bed8da0f52c9b5a70a3 (127.31.250.195). New cstate: current_term: 1 leader_uuid: "cdbb30b5688c4bed8da0f52c9b5a70a3" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a540b3ba92124aa1a30796dfd6a829ba" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 32787 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "809f3a74f4b34f84820235c7d19deb76" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 36001 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "cdbb30b5688c4bed8da0f52c9b5a70a3" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 39555 } health_report { overall_health: HEALTHY } } }
I20250811 20:46:13.292645 32747 external_mini_cluster.cc:949] 4 TS(s) registered with all masters
I20250811 20:46:13.296561 32747 ts_itest-base.cc:246] Waiting for 1 tablets on tserver a540b3ba92124aa1a30796dfd6a829ba to finish bootstrapping
I20250811 20:46:13.309510 32747 ts_itest-base.cc:246] Waiting for 1 tablets on tserver cdbb30b5688c4bed8da0f52c9b5a70a3 to finish bootstrapping
I20250811 20:46:13.320254 32747 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 809f3a74f4b34f84820235c7d19deb76 to finish bootstrapping
I20250811 20:46:13.330085 32747 kudu-admin-test.cc:709] Waiting for Master to see the current replicas...
I20250811 20:46:13.333158 32747 kudu-admin-test.cc:716] Tablet locations:
tablet_locations {
tablet_id: "a07e39bc140f46d3ace5ba69d8d294a5"
DEPRECATED_stale: false
partition {
partition_key_start: ""
partition_key_end: ""
}
interned_replicas {
ts_info_idx: 0
role: FOLLOWER
}
interned_replicas {
ts_info_idx: 1
role: FOLLOWER
}
interned_replicas {
ts_info_idx: 2
role: LEADER
}
}
ts_infos {
permanent_uuid: "a540b3ba92124aa1a30796dfd6a829ba"
rpc_addresses {
host: "127.31.250.194"
port: 32787
}
}
ts_infos {
permanent_uuid: "809f3a74f4b34f84820235c7d19deb76"
rpc_addresses {
host: "127.31.250.196"
port: 36001
}
}
ts_infos {
permanent_uuid: "cdbb30b5688c4bed8da0f52c9b5a70a3"
rpc_addresses {
host: "127.31.250.195"
port: 39555
}
}
I20250811 20:46:13.609702 942 consensus_queue.cc:1035] T a07e39bc140f46d3ace5ba69d8d294a5 P cdbb30b5688c4bed8da0f52c9b5a70a3 [LEADER]: Connected to new peer: Peer: permanent_uuid: "a540b3ba92124aa1a30796dfd6a829ba" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 32787 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250811 20:46:13.622877 32747 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu with pid 651
W20250811 20:46:13.650096 804 connection.cc:537] server connection from 127.31.250.195:40101 recv error: Network error: recv error from unknown peer: Transport endpoint is not connected (error 107)
W20250811 20:46:13.650115 537 connection.cc:537] server connection from 127.31.250.195:51831 recv error: Network error: recv error from unknown peer: Transport endpoint is not connected (error 107)
W20250811 20:46:13.651060 313 connection.cc:537] server connection from 127.31.250.195:34415 recv error: Network error: recv error from unknown peer: Transport endpoint is not connected (error 107)
I20250811 20:46:13.651350 32747 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu with pid 32761
I20250811 20:46:13.676463 32747 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
/tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.31.250.254:45355
--webserver_interface=127.31.250.254
--webserver_port=44833
--builtin_ntp_servers=127.31.250.212:34519
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.31.250.254:45355 with env {}
W20250811 20:46:13.974259 953 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:46:13.974862 953 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 20:46:13.975342 953 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:46:14.005635 953 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250811 20:46:14.005955 953 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 20:46:14.006211 953 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250811 20:46:14.006436 953 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
W20250811 20:46:14.015674 914 heartbeater.cc:646] Failed to heartbeat to 127.31.250.254:45355 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.31.250.254:45355: connect: Connection refused (error 111)
I20250811 20:46:14.042560 953 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.31.250.212:34519
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.31.250.254:45355
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.31.250.254:45355
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.31.250.254
--webserver_port=44833
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision 8e873eb37b157a0cf7cb97cc7690367de9707107
build type FASTDEBUG
built by None at 11 Aug 2025 20:41:23 UTC on 24a791456cd2
build id 7521
TSAN enabled
I20250811 20:46:14.043889 953 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 20:46:14.045439 953 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 20:46:14.056555 960 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:46:14.093489 514 heartbeater.cc:646] Failed to heartbeat to 127.31.250.254:45355 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.31.250.254:45355: connect: Connection refused (error 111)
W20250811 20:46:14.635861 647 heartbeater.cc:646] Failed to heartbeat to 127.31.250.254:45355 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.31.250.254:45355: connect: Connection refused (error 111)
I20250811 20:46:14.944550 969 raft_consensus.cc:491] T a07e39bc140f46d3ace5ba69d8d294a5 P 809f3a74f4b34f84820235c7d19deb76 [term 1 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250811 20:46:14.945269 969 raft_consensus.cc:513] T a07e39bc140f46d3ace5ba69d8d294a5 P 809f3a74f4b34f84820235c7d19deb76 [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a540b3ba92124aa1a30796dfd6a829ba" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 32787 } } peers { permanent_uuid: "809f3a74f4b34f84820235c7d19deb76" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 36001 } } peers { permanent_uuid: "cdbb30b5688c4bed8da0f52c9b5a70a3" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 39555 } }
I20250811 20:46:14.955389 969 leader_election.cc:290] T a07e39bc140f46d3ace5ba69d8d294a5 P 809f3a74f4b34f84820235c7d19deb76 [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers a540b3ba92124aa1a30796dfd6a829ba (127.31.250.194:32787), cdbb30b5688c4bed8da0f52c9b5a70a3 (127.31.250.195:39555)
W20250811 20:46:14.968233 803 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.31.250.195:39555: connect: Connection refused (error 111)
I20250811 20:46:14.981050 602 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "a07e39bc140f46d3ace5ba69d8d294a5" candidate_uuid: "809f3a74f4b34f84820235c7d19deb76" candidate_term: 2 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "a540b3ba92124aa1a30796dfd6a829ba" is_pre_election: true
W20250811 20:46:14.981550 803 leader_election.cc:336] T a07e39bc140f46d3ace5ba69d8d294a5 P 809f3a74f4b34f84820235c7d19deb76 [CANDIDATE]: Term 2 pre-election: RPC error from VoteRequest() call to peer cdbb30b5688c4bed8da0f52c9b5a70a3 (127.31.250.195:39555): Network error: Client connection negotiation failed: client connection to 127.31.250.195:39555: connect: Connection refused (error 111)
I20250811 20:46:14.983453 805 leader_election.cc:304] T a07e39bc140f46d3ace5ba69d8d294a5 P 809f3a74f4b34f84820235c7d19deb76 [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: 809f3a74f4b34f84820235c7d19deb76; no voters: a540b3ba92124aa1a30796dfd6a829ba, cdbb30b5688c4bed8da0f52c9b5a70a3
I20250811 20:46:14.984655 969 raft_consensus.cc:2747] T a07e39bc140f46d3ace5ba69d8d294a5 P 809f3a74f4b34f84820235c7d19deb76 [term 1 FOLLOWER]: Leader pre-election lost for term 2. Reason: could not achieve majority
I20250811 20:46:15.123019 975 raft_consensus.cc:491] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [term 1 FOLLOWER]: Starting pre-election (detected failure of leader cdbb30b5688c4bed8da0f52c9b5a70a3)
I20250811 20:46:15.123616 975 raft_consensus.cc:513] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a540b3ba92124aa1a30796dfd6a829ba" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 32787 } } peers { permanent_uuid: "809f3a74f4b34f84820235c7d19deb76" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 36001 } } peers { permanent_uuid: "cdbb30b5688c4bed8da0f52c9b5a70a3" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 39555 } }
I20250811 20:46:15.126549 975 leader_election.cc:290] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers 809f3a74f4b34f84820235c7d19deb76 (127.31.250.196:36001), cdbb30b5688c4bed8da0f52c9b5a70a3 (127.31.250.195:39555)
W20250811 20:46:15.140942 536 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.31.250.195:39555: connect: Connection refused (error 111)
W20250811 20:46:15.151494 536 leader_election.cc:336] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [CANDIDATE]: Term 2 pre-election: RPC error from VoteRequest() call to peer cdbb30b5688c4bed8da0f52c9b5a70a3 (127.31.250.195:39555): Network error: Client connection negotiation failed: client connection to 127.31.250.195:39555: connect: Connection refused (error 111)
I20250811 20:46:15.160751 869 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "a07e39bc140f46d3ace5ba69d8d294a5" candidate_uuid: "a540b3ba92124aa1a30796dfd6a829ba" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: false dest_uuid: "809f3a74f4b34f84820235c7d19deb76" is_pre_election: true
I20250811 20:46:15.161322 869 raft_consensus.cc:2466] T a07e39bc140f46d3ace5ba69d8d294a5 P 809f3a74f4b34f84820235c7d19deb76 [term 1 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate a540b3ba92124aa1a30796dfd6a829ba in term 1.
I20250811 20:46:15.162750 536 leader_election.cc:304] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 809f3a74f4b34f84820235c7d19deb76, a540b3ba92124aa1a30796dfd6a829ba; no voters: cdbb30b5688c4bed8da0f52c9b5a70a3
I20250811 20:46:15.163851 975 raft_consensus.cc:2802] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [term 1 FOLLOWER]: Leader pre-election won for term 2
I20250811 20:46:15.164149 975 raft_consensus.cc:491] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [term 1 FOLLOWER]: Starting leader election (detected failure of leader cdbb30b5688c4bed8da0f52c9b5a70a3)
I20250811 20:46:15.164417 975 raft_consensus.cc:3058] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [term 1 FOLLOWER]: Advancing to term 2
I20250811 20:46:15.170564 975 raft_consensus.cc:513] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a540b3ba92124aa1a30796dfd6a829ba" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 32787 } } peers { permanent_uuid: "809f3a74f4b34f84820235c7d19deb76" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 36001 } } peers { permanent_uuid: "cdbb30b5688c4bed8da0f52c9b5a70a3" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 39555 } }
I20250811 20:46:15.172490 975 leader_election.cc:290] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [CANDIDATE]: Term 2 election: Requested vote from peers 809f3a74f4b34f84820235c7d19deb76 (127.31.250.196:36001), cdbb30b5688c4bed8da0f52c9b5a70a3 (127.31.250.195:39555)
I20250811 20:46:15.173617 869 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "a07e39bc140f46d3ace5ba69d8d294a5" candidate_uuid: "a540b3ba92124aa1a30796dfd6a829ba" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: false dest_uuid: "809f3a74f4b34f84820235c7d19deb76"
I20250811 20:46:15.174131 869 raft_consensus.cc:3058] T a07e39bc140f46d3ace5ba69d8d294a5 P 809f3a74f4b34f84820235c7d19deb76 [term 1 FOLLOWER]: Advancing to term 2
W20250811 20:46:15.179314 536 leader_election.cc:336] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [CANDIDATE]: Term 2 election: RPC error from VoteRequest() call to peer cdbb30b5688c4bed8da0f52c9b5a70a3 (127.31.250.195:39555): Network error: Client connection negotiation failed: client connection to 127.31.250.195:39555: connect: Connection refused (error 111)
I20250811 20:46:15.180848 869 raft_consensus.cc:2466] T a07e39bc140f46d3ace5ba69d8d294a5 P 809f3a74f4b34f84820235c7d19deb76 [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate a540b3ba92124aa1a30796dfd6a829ba in term 2.
I20250811 20:46:15.181887 536 leader_election.cc:304] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 809f3a74f4b34f84820235c7d19deb76, a540b3ba92124aa1a30796dfd6a829ba; no voters: cdbb30b5688c4bed8da0f52c9b5a70a3
I20250811 20:46:15.183056 975 raft_consensus.cc:2802] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [term 2 FOLLOWER]: Leader election won for term 2
I20250811 20:46:15.185437 975 raft_consensus.cc:695] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [term 2 LEADER]: Becoming Leader. State: Replica: a540b3ba92124aa1a30796dfd6a829ba, State: Running, Role: LEADER
I20250811 20:46:15.186707 975 consensus_queue.cc:237] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 1, Committed index: 1, Last appended: 1.1, Last appended by leader: 1, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a540b3ba92124aa1a30796dfd6a829ba" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 32787 } } peers { permanent_uuid: "809f3a74f4b34f84820235c7d19deb76" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 36001 } } peers { permanent_uuid: "cdbb30b5688c4bed8da0f52c9b5a70a3" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 39555 } }
W20250811 20:46:14.070349 963 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:46:14.057299 961 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:46:15.344388 962 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1283 milliseconds
I20250811 20:46:15.344499 953 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 20:46:15.345614 953 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 20:46:15.348644 953 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 20:46:15.350052 953 hybrid_clock.cc:648] HybridClock initialized: now 1754945175350013 us; error 57 us; skew 500 ppm
I20250811 20:46:15.350756 953 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 20:46:15.357184 953 webserver.cc:489] Webserver started at http://127.31.250.254:44833/ using document root <none> and password file <none>
I20250811 20:46:15.358048 953 fs_manager.cc:362] Metadata directory not provided
I20250811 20:46:15.358261 953 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 20:46:15.365617 953 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.005s sys 0.000s
I20250811 20:46:15.369969 985 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 20:46:15.370981 953 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.002s sys 0.000s
I20250811 20:46:15.371330 953 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
uuid: "373ac9196e6146528a87ff49182a735a"
format_stamp: "Formatted at 2025-08-11 20:46:04 on dist-test-slave-4gzk"
I20250811 20:46:15.373152 953 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 20:46:15.422284 953 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 20:46:15.423731 953 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 20:46:15.424168 953 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 20:46:15.493923 953 rpc_server.cc:307] RPC server started. Bound to: 127.31.250.254:45355
I20250811 20:46:15.494009 1036 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.31.250.254:45355 every 8 connection(s)
I20250811 20:46:15.496726 953 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250811 20:46:15.499919 32747 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu as pid 953
I20250811 20:46:15.500540 32747 kudu-admin-test.cc:735] Forcing unsafe config change on tserver a540b3ba92124aa1a30796dfd6a829ba
I20250811 20:46:15.510836 1037 sys_catalog.cc:263] Verifying existing consensus state
I20250811 20:46:15.515816 1037 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 373ac9196e6146528a87ff49182a735a: Bootstrap starting.
I20250811 20:46:15.554605 1037 log.cc:826] T 00000000000000000000000000000000 P 373ac9196e6146528a87ff49182a735a: Log is configured to *not* fsync() on all Append() calls
I20250811 20:46:15.577143 1037 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 373ac9196e6146528a87ff49182a735a: Bootstrap replayed 1/1 log segments. Stats: ops{read=7 overwritten=0 applied=7 ignored=0} inserts{seen=5 ignored=0} mutations{seen=2 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250811 20:46:15.578015 1037 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 373ac9196e6146528a87ff49182a735a: Bootstrap complete.
I20250811 20:46:15.599180 1037 raft_consensus.cc:357] T 00000000000000000000000000000000 P 373ac9196e6146528a87ff49182a735a [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "373ac9196e6146528a87ff49182a735a" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 45355 } }
I20250811 20:46:15.601367 1037 raft_consensus.cc:738] T 00000000000000000000000000000000 P 373ac9196e6146528a87ff49182a735a [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 373ac9196e6146528a87ff49182a735a, State: Initialized, Role: FOLLOWER
I20250811 20:46:15.602150 1037 consensus_queue.cc:260] T 00000000000000000000000000000000 P 373ac9196e6146528a87ff49182a735a [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 7, Last appended: 1.7, Last appended by leader: 7, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "373ac9196e6146528a87ff49182a735a" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 45355 } }
I20250811 20:46:15.602633 1037 raft_consensus.cc:397] T 00000000000000000000000000000000 P 373ac9196e6146528a87ff49182a735a [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250811 20:46:15.602931 1037 raft_consensus.cc:491] T 00000000000000000000000000000000 P 373ac9196e6146528a87ff49182a735a [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250811 20:46:15.603237 1037 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 373ac9196e6146528a87ff49182a735a [term 1 FOLLOWER]: Advancing to term 2
W20250811 20:46:15.609339 536 consensus_peers.cc:489] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba -> Peer cdbb30b5688c4bed8da0f52c9b5a70a3 (127.31.250.195:39555): Couldn't send request to peer cdbb30b5688c4bed8da0f52c9b5a70a3. Status: Network error: Client connection negotiation failed: client connection to 127.31.250.195:39555: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20250811 20:46:15.610697 1037 raft_consensus.cc:513] T 00000000000000000000000000000000 P 373ac9196e6146528a87ff49182a735a [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "373ac9196e6146528a87ff49182a735a" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 45355 } }
I20250811 20:46:15.611413 1037 leader_election.cc:304] T 00000000000000000000000000000000 P 373ac9196e6146528a87ff49182a735a [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 373ac9196e6146528a87ff49182a735a; no voters:
I20250811 20:46:15.613613 1037 leader_election.cc:290] T 00000000000000000000000000000000 P 373ac9196e6146528a87ff49182a735a [CANDIDATE]: Term 2 election: Requested vote from peers
I20250811 20:46:15.614151 1041 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 373ac9196e6146528a87ff49182a735a [term 2 FOLLOWER]: Leader election won for term 2
I20250811 20:46:15.617478 1041 raft_consensus.cc:695] T 00000000000000000000000000000000 P 373ac9196e6146528a87ff49182a735a [term 2 LEADER]: Becoming Leader. State: Replica: 373ac9196e6146528a87ff49182a735a, State: Running, Role: LEADER
I20250811 20:46:15.618438 1041 consensus_queue.cc:237] T 00000000000000000000000000000000 P 373ac9196e6146528a87ff49182a735a [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 7, Committed index: 7, Last appended: 1.7, Last appended by leader: 7, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "373ac9196e6146528a87ff49182a735a" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 45355 } }
I20250811 20:46:15.619136 1037 sys_catalog.cc:564] T 00000000000000000000000000000000 P 373ac9196e6146528a87ff49182a735a [sys.catalog]: configured and running, proceeding with master startup.
I20250811 20:46:15.629243 1043 sys_catalog.cc:455] T 00000000000000000000000000000000 P 373ac9196e6146528a87ff49182a735a [sys.catalog]: SysCatalogTable state changed. Reason: New leader 373ac9196e6146528a87ff49182a735a. Latest consensus state: current_term: 2 leader_uuid: "373ac9196e6146528a87ff49182a735a" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "373ac9196e6146528a87ff49182a735a" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 45355 } } }
I20250811 20:46:15.630510 1043 sys_catalog.cc:458] T 00000000000000000000000000000000 P 373ac9196e6146528a87ff49182a735a [sys.catalog]: This master's current role is: LEADER
I20250811 20:46:15.633682 1042 sys_catalog.cc:455] T 00000000000000000000000000000000 P 373ac9196e6146528a87ff49182a735a [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 2 leader_uuid: "373ac9196e6146528a87ff49182a735a" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "373ac9196e6146528a87ff49182a735a" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 45355 } } }
I20250811 20:46:15.634447 1042 sys_catalog.cc:458] T 00000000000000000000000000000000 P 373ac9196e6146528a87ff49182a735a [sys.catalog]: This master's current role is: LEADER
I20250811 20:46:15.636440 1047 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250811 20:46:15.650549 1047 catalog_manager.cc:671] Loaded metadata for table TestTable [id=dada49b4de0844a9aaf9cc41894512b7]
I20250811 20:46:15.658802 1047 tablet_loader.cc:96] loaded metadata for tablet a07e39bc140f46d3ace5ba69d8d294a5 (table TestTable [id=dada49b4de0844a9aaf9cc41894512b7])
I20250811 20:46:15.660487 1047 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250811 20:46:15.665198 1047 catalog_manager.cc:1261] Loaded cluster ID: 18593a7aac8c429ba46b8e0d92a6d510
I20250811 20:46:15.665505 1047 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250811 20:46:15.672778 1047 catalog_manager.cc:1506] Loading token signing keys...
I20250811 20:46:15.677587 1047 catalog_manager.cc:5966] T 00000000000000000000000000000000 P 373ac9196e6146528a87ff49182a735a: Loaded TSK: 0
I20250811 20:46:15.679242 1047 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250811 20:46:15.802709 869 raft_consensus.cc:1273] T a07e39bc140f46d3ace5ba69d8d294a5 P 809f3a74f4b34f84820235c7d19deb76 [term 2 FOLLOWER]: Refusing update from remote peer a540b3ba92124aa1a30796dfd6a829ba: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 2 index: 2. (index mismatch)
I20250811 20:46:15.804177 975 consensus_queue.cc:1035] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [LEADER]: Connected to new peer: Peer: permanent_uuid: "809f3a74f4b34f84820235c7d19deb76" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 36001 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 0, Time since last communication: 0.000s
I20250811 20:46:15.851820 914 heartbeater.cc:344] Connected to a master server at 127.31.250.254:45355
I20250811 20:46:15.859069 1002 master_service.cc:432] Got heartbeat from unknown tserver (permanent_uuid: "809f3a74f4b34f84820235c7d19deb76" instance_seqno: 1754945172739851) as {username='slave'} at 127.31.250.196:46895; Asking this server to re-register.
I20250811 20:46:15.860896 914 heartbeater.cc:461] Registering TS with master...
I20250811 20:46:15.861524 914 heartbeater.cc:507] Master 127.31.250.254:45355 requested a full tablet report, sending...
I20250811 20:46:15.865196 1001 ts_manager.cc:194] Registered new tserver with Master: 809f3a74f4b34f84820235c7d19deb76 (127.31.250.196:36001)
I20250811 20:46:15.870249 1001 catalog_manager.cc:5582] T a07e39bc140f46d3ace5ba69d8d294a5 P 809f3a74f4b34f84820235c7d19deb76 reported cstate change: term changed from 1 to 2, leader changed from cdbb30b5688c4bed8da0f52c9b5a70a3 (127.31.250.195) to a540b3ba92124aa1a30796dfd6a829ba (127.31.250.194). New cstate: current_term: 2 leader_uuid: "a540b3ba92124aa1a30796dfd6a829ba" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a540b3ba92124aa1a30796dfd6a829ba" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 32787 } } peers { permanent_uuid: "809f3a74f4b34f84820235c7d19deb76" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 36001 } } peers { permanent_uuid: "cdbb30b5688c4bed8da0f52c9b5a70a3" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 39555 } } }
I20250811 20:46:15.886188 647 heartbeater.cc:344] Connected to a master server at 127.31.250.254:45355
I20250811 20:46:15.889437 1002 master_service.cc:432] Got heartbeat from unknown tserver (permanent_uuid: "a540b3ba92124aa1a30796dfd6a829ba" instance_seqno: 1754945169028715) as {username='slave'} at 127.31.250.194:36579; Asking this server to re-register.
I20250811 20:46:15.890813 647 heartbeater.cc:461] Registering TS with master...
I20250811 20:46:15.891395 647 heartbeater.cc:507] Master 127.31.250.254:45355 requested a full tablet report, sending...
I20250811 20:46:15.894250 1002 ts_manager.cc:194] Registered new tserver with Master: a540b3ba92124aa1a30796dfd6a829ba (127.31.250.194:32787)
W20250811 20:46:15.897544 1039 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:46:15.898110 1039 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:46:15.934895 1039 flags.cc:425] Enabled experimental flag: --enable_multi_tenancy=false
I20250811 20:46:16.139343 514 heartbeater.cc:344] Connected to a master server at 127.31.250.254:45355
I20250811 20:46:16.142259 1002 master_service.cc:432] Got heartbeat from unknown tserver (permanent_uuid: "f3fac04926ac442cbde92c6cdec496bc" instance_seqno: 1754945166977195) as {username='slave'} at 127.31.250.193:49725; Asking this server to re-register.
I20250811 20:46:16.144290 514 heartbeater.cc:461] Registering TS with master...
I20250811 20:46:16.145308 514 heartbeater.cc:507] Master 127.31.250.254:45355 requested a full tablet report, sending...
I20250811 20:46:16.148234 1002 ts_manager.cc:194] Registered new tserver with Master: f3fac04926ac442cbde92c6cdec496bc (127.31.250.193:43471)
W20250811 20:46:17.380613 1072 debug-util.cc:398] Leaking SignalData structure 0x7b08000347a0 after lost signal to thread 1039
W20250811 20:46:17.381510 1072 kernel_stack_watchdog.cc:198] Thread 1039 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:642 for 401ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250811 20:46:17.386947 1039 thread.cc:641] rpc reactor (reactor) Time spent creating pthread: real 1.410s user 0.426s sys 0.945s
W20250811 20:46:17.508189 1039 thread.cc:608] rpc reactor (reactor) Time spent starting thread: real 1.532s user 0.433s sys 0.952s
I20250811 20:46:17.588233 602 tablet_service.cc:1905] Received UnsafeChangeConfig RPC: dest_uuid: "a540b3ba92124aa1a30796dfd6a829ba"
tablet_id: "a07e39bc140f46d3ace5ba69d8d294a5"
caller_id: "kudu-tools"
new_config {
peers {
permanent_uuid: "809f3a74f4b34f84820235c7d19deb76"
}
peers {
permanent_uuid: "a540b3ba92124aa1a30796dfd6a829ba"
}
}
from {username='slave'} at 127.0.0.1:32818
W20250811 20:46:17.590173 602 raft_consensus.cc:2216] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [term 2 LEADER]: PROCEEDING WITH UNSAFE CONFIG CHANGE ON THIS SERVER, COMMITTED CONFIG: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a540b3ba92124aa1a30796dfd6a829ba" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 32787 } } peers { permanent_uuid: "809f3a74f4b34f84820235c7d19deb76" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 36001 } } peers { permanent_uuid: "cdbb30b5688c4bed8da0f52c9b5a70a3" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 39555 } }NEW CONFIG: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "a540b3ba92124aa1a30796dfd6a829ba" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 32787 } } peers { permanent_uuid: "809f3a74f4b34f84820235c7d19deb76" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 36001 } } unsafe_config_change: true
I20250811 20:46:17.591100 602 raft_consensus.cc:3053] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [term 2 LEADER]: Stepping down as leader of term 2
I20250811 20:46:17.591374 602 raft_consensus.cc:738] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [term 2 LEADER]: Becoming Follower/Learner. State: Replica: a540b3ba92124aa1a30796dfd6a829ba, State: Running, Role: LEADER
I20250811 20:46:17.592128 602 consensus_queue.cc:260] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 2, Committed index: 2, Last appended: 2.2, Last appended by leader: 2, Current term: 2, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a540b3ba92124aa1a30796dfd6a829ba" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 32787 } } peers { permanent_uuid: "809f3a74f4b34f84820235c7d19deb76" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 36001 } } peers { permanent_uuid: "cdbb30b5688c4bed8da0f52c9b5a70a3" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 39555 } }
I20250811 20:46:17.593174 602 raft_consensus.cc:3058] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [term 2 FOLLOWER]: Advancing to term 3
W20250811 20:46:18.579666 1033 debug-util.cc:398] Leaking SignalData structure 0x7b08000b0f40 after lost signal to thread 954
W20250811 20:46:18.580261 1033 debug-util.cc:398] Leaking SignalData structure 0x7b080006f320 after lost signal to thread 1036
I20250811 20:46:18.791976 1094 raft_consensus.cc:491] T a07e39bc140f46d3ace5ba69d8d294a5 P 809f3a74f4b34f84820235c7d19deb76 [term 2 FOLLOWER]: Starting pre-election (detected failure of leader a540b3ba92124aa1a30796dfd6a829ba)
I20250811 20:46:18.792538 1094 raft_consensus.cc:513] T a07e39bc140f46d3ace5ba69d8d294a5 P 809f3a74f4b34f84820235c7d19deb76 [term 2 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a540b3ba92124aa1a30796dfd6a829ba" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 32787 } } peers { permanent_uuid: "809f3a74f4b34f84820235c7d19deb76" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 36001 } } peers { permanent_uuid: "cdbb30b5688c4bed8da0f52c9b5a70a3" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 39555 } }
I20250811 20:46:18.794648 1094 leader_election.cc:290] T a07e39bc140f46d3ace5ba69d8d294a5 P 809f3a74f4b34f84820235c7d19deb76 [CANDIDATE]: Term 3 pre-election: Requested pre-vote from peers a540b3ba92124aa1a30796dfd6a829ba (127.31.250.194:32787), cdbb30b5688c4bed8da0f52c9b5a70a3 (127.31.250.195:39555)
I20250811 20:46:18.795575 602 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "a07e39bc140f46d3ace5ba69d8d294a5" candidate_uuid: "809f3a74f4b34f84820235c7d19deb76" candidate_term: 3 candidate_status { last_received { term: 2 index: 2 } } ignore_live_leader: false dest_uuid: "a540b3ba92124aa1a30796dfd6a829ba" is_pre_election: true
W20250811 20:46:18.800274 803 leader_election.cc:336] T a07e39bc140f46d3ace5ba69d8d294a5 P 809f3a74f4b34f84820235c7d19deb76 [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer cdbb30b5688c4bed8da0f52c9b5a70a3 (127.31.250.195:39555): Network error: Client connection negotiation failed: client connection to 127.31.250.195:39555: connect: Connection refused (error 111)
I20250811 20:46:18.800585 803 leader_election.cc:304] T a07e39bc140f46d3ace5ba69d8d294a5 P 809f3a74f4b34f84820235c7d19deb76 [CANDIDATE]: Term 3 pre-election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: 809f3a74f4b34f84820235c7d19deb76; no voters: a540b3ba92124aa1a30796dfd6a829ba, cdbb30b5688c4bed8da0f52c9b5a70a3
I20250811 20:46:18.801486 1094 raft_consensus.cc:2747] T a07e39bc140f46d3ace5ba69d8d294a5 P 809f3a74f4b34f84820235c7d19deb76 [term 2 FOLLOWER]: Leader pre-election lost for term 3. Reason: could not achieve majority
I20250811 20:46:19.196425 1097 raft_consensus.cc:491] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [term 3 FOLLOWER]: Starting pre-election (detected failure of leader kudu-tools)
I20250811 20:46:19.196803 1097 raft_consensus.cc:513] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [term 3 FOLLOWER]: Starting pre-election with config: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "a540b3ba92124aa1a30796dfd6a829ba" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 32787 } } peers { permanent_uuid: "809f3a74f4b34f84820235c7d19deb76" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 36001 } } unsafe_config_change: true
I20250811 20:46:19.197952 1097 leader_election.cc:290] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [CANDIDATE]: Term 4 pre-election: Requested pre-vote from peers 809f3a74f4b34f84820235c7d19deb76 (127.31.250.196:36001)
I20250811 20:46:19.199184 869 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "a07e39bc140f46d3ace5ba69d8d294a5" candidate_uuid: "a540b3ba92124aa1a30796dfd6a829ba" candidate_term: 4 candidate_status { last_received { term: 3 index: 3 } } ignore_live_leader: false dest_uuid: "809f3a74f4b34f84820235c7d19deb76" is_pre_election: true
I20250811 20:46:19.199729 869 raft_consensus.cc:2466] T a07e39bc140f46d3ace5ba69d8d294a5 P 809f3a74f4b34f84820235c7d19deb76 [term 2 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate a540b3ba92124aa1a30796dfd6a829ba in term 2.
I20250811 20:46:19.200698 536 leader_election.cc:304] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [CANDIDATE]: Term 4 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 2 voters: 2 yes votes; 0 no votes. yes voters: 809f3a74f4b34f84820235c7d19deb76, a540b3ba92124aa1a30796dfd6a829ba; no voters:
I20250811 20:46:19.201309 1097 raft_consensus.cc:2802] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [term 3 FOLLOWER]: Leader pre-election won for term 4
I20250811 20:46:19.201640 1097 raft_consensus.cc:491] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [term 3 FOLLOWER]: Starting leader election (detected failure of leader kudu-tools)
I20250811 20:46:19.202003 1097 raft_consensus.cc:3058] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [term 3 FOLLOWER]: Advancing to term 4
I20250811 20:46:19.212049 1097 raft_consensus.cc:513] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [term 4 FOLLOWER]: Starting leader election with config: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "a540b3ba92124aa1a30796dfd6a829ba" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 32787 } } peers { permanent_uuid: "809f3a74f4b34f84820235c7d19deb76" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 36001 } } unsafe_config_change: true
I20250811 20:46:19.213631 1097 leader_election.cc:290] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [CANDIDATE]: Term 4 election: Requested vote from peers 809f3a74f4b34f84820235c7d19deb76 (127.31.250.196:36001)
I20250811 20:46:19.214918 869 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "a07e39bc140f46d3ace5ba69d8d294a5" candidate_uuid: "a540b3ba92124aa1a30796dfd6a829ba" candidate_term: 4 candidate_status { last_received { term: 3 index: 3 } } ignore_live_leader: false dest_uuid: "809f3a74f4b34f84820235c7d19deb76"
I20250811 20:46:19.215696 869 raft_consensus.cc:3058] T a07e39bc140f46d3ace5ba69d8d294a5 P 809f3a74f4b34f84820235c7d19deb76 [term 2 FOLLOWER]: Advancing to term 4
I20250811 20:46:19.316788 869 raft_consensus.cc:2466] T a07e39bc140f46d3ace5ba69d8d294a5 P 809f3a74f4b34f84820235c7d19deb76 [term 4 FOLLOWER]: Leader election vote request: Granting yes vote for candidate a540b3ba92124aa1a30796dfd6a829ba in term 4.
I20250811 20:46:19.318081 536 leader_election.cc:304] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [CANDIDATE]: Term 4 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 2 voters: 2 yes votes; 0 no votes. yes voters: 809f3a74f4b34f84820235c7d19deb76, a540b3ba92124aa1a30796dfd6a829ba; no voters:
I20250811 20:46:19.319016 1097 raft_consensus.cc:2802] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [term 4 FOLLOWER]: Leader election won for term 4
I20250811 20:46:19.320124 1097 raft_consensus.cc:695] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [term 4 LEADER]: Becoming Leader. State: Replica: a540b3ba92124aa1a30796dfd6a829ba, State: Running, Role: LEADER
I20250811 20:46:19.321413 1097 consensus_queue.cc:237] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 2, Committed index: 2, Last appended: 3.3, Last appended by leader: 0, Current term: 4, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "a540b3ba92124aa1a30796dfd6a829ba" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 32787 } } peers { permanent_uuid: "809f3a74f4b34f84820235c7d19deb76" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 36001 } } unsafe_config_change: true
I20250811 20:46:19.328521 1002 catalog_manager.cc:5582] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba reported cstate change: term changed from 2 to 4, now has a pending config: VOTER a540b3ba92124aa1a30796dfd6a829ba (127.31.250.194), VOTER 809f3a74f4b34f84820235c7d19deb76 (127.31.250.196). New cstate: current_term: 4 leader_uuid: "a540b3ba92124aa1a30796dfd6a829ba" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "a540b3ba92124aa1a30796dfd6a829ba" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 32787 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "809f3a74f4b34f84820235c7d19deb76" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 36001 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "cdbb30b5688c4bed8da0f52c9b5a70a3" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 39555 } } } pending_config { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "a540b3ba92124aa1a30796dfd6a829ba" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 32787 } } peers { permanent_uuid: "809f3a74f4b34f84820235c7d19deb76" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 36001 } } unsafe_config_change: true }
I20250811 20:46:19.761848 869 raft_consensus.cc:1273] T a07e39bc140f46d3ace5ba69d8d294a5 P 809f3a74f4b34f84820235c7d19deb76 [term 4 FOLLOWER]: Refusing update from remote peer a540b3ba92124aa1a30796dfd6a829ba: Log matching property violated. Preceding OpId in replica: term: 2 index: 2. Preceding OpId from leader: term: 4 index: 4. (index mismatch)
I20250811 20:46:19.763005 1097 consensus_queue.cc:1035] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [LEADER]: Connected to new peer: Peer: permanent_uuid: "809f3a74f4b34f84820235c7d19deb76" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 36001 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 4, Last known committed idx: 2, Time since last communication: 0.000s
I20250811 20:46:19.773231 1097 raft_consensus.cc:2953] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [term 4 LEADER]: Committing config change with OpId 3.3: config changed from index -1 to 3, VOTER cdbb30b5688c4bed8da0f52c9b5a70a3 (127.31.250.195) evicted. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "a540b3ba92124aa1a30796dfd6a829ba" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 32787 } } peers { permanent_uuid: "809f3a74f4b34f84820235c7d19deb76" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 36001 } } unsafe_config_change: true }
I20250811 20:46:19.781077 869 raft_consensus.cc:2953] T a07e39bc140f46d3ace5ba69d8d294a5 P 809f3a74f4b34f84820235c7d19deb76 [term 4 FOLLOWER]: Committing config change with OpId 3.3: config changed from index -1 to 3, VOTER cdbb30b5688c4bed8da0f52c9b5a70a3 (127.31.250.195) evicted. New config: { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "a540b3ba92124aa1a30796dfd6a829ba" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 32787 } } peers { permanent_uuid: "809f3a74f4b34f84820235c7d19deb76" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 36001 } } unsafe_config_change: true }
I20250811 20:46:19.801169 1002 catalog_manager.cc:5582] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba reported cstate change: config changed from index -1 to 3, VOTER cdbb30b5688c4bed8da0f52c9b5a70a3 (127.31.250.195) evicted, no longer has a pending config: VOTER a540b3ba92124aa1a30796dfd6a829ba (127.31.250.194), VOTER 809f3a74f4b34f84820235c7d19deb76 (127.31.250.196). New cstate: current_term: 4 leader_uuid: "a540b3ba92124aa1a30796dfd6a829ba" committed_config { opid_index: 3 OBSOLETE_local: false peers { permanent_uuid: "a540b3ba92124aa1a30796dfd6a829ba" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 32787 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "809f3a74f4b34f84820235c7d19deb76" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 36001 } health_report { overall_health: HEALTHY } } unsafe_config_change: true }
W20250811 20:46:19.811777 1002 catalog_manager.cc:5774] Failed to send DeleteTablet RPC for tablet a07e39bc140f46d3ace5ba69d8d294a5 on TS cdbb30b5688c4bed8da0f52c9b5a70a3: Not found: failed to reset TS proxy: Could not find TS for UUID cdbb30b5688c4bed8da0f52c9b5a70a3
I20250811 20:46:19.835691 602 consensus_queue.cc:237] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 4, Committed index: 4, Last appended: 4.4, Last appended by leader: 0, Current term: 4, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "a540b3ba92124aa1a30796dfd6a829ba" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 32787 } } peers { permanent_uuid: "809f3a74f4b34f84820235c7d19deb76" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 36001 } } peers { permanent_uuid: "f3fac04926ac442cbde92c6cdec496bc" member_type: NON_VOTER last_known_addr { host: "127.31.250.193" port: 43471 } attrs { promote: true } } unsafe_config_change: true
I20250811 20:46:19.841060 869 raft_consensus.cc:1273] T a07e39bc140f46d3ace5ba69d8d294a5 P 809f3a74f4b34f84820235c7d19deb76 [term 4 FOLLOWER]: Refusing update from remote peer a540b3ba92124aa1a30796dfd6a829ba: Log matching property violated. Preceding OpId in replica: term: 4 index: 4. Preceding OpId from leader: term: 4 index: 5. (index mismatch)
I20250811 20:46:19.842957 1097 consensus_queue.cc:1035] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [LEADER]: Connected to new peer: Peer: permanent_uuid: "809f3a74f4b34f84820235c7d19deb76" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 36001 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 5, Last known committed idx: 4, Time since last communication: 0.001s
I20250811 20:46:19.851235 1098 raft_consensus.cc:2953] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [term 4 LEADER]: Committing config change with OpId 4.5: config changed from index 3 to 5, NON_VOTER f3fac04926ac442cbde92c6cdec496bc (127.31.250.193) added. New config: { opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "a540b3ba92124aa1a30796dfd6a829ba" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 32787 } } peers { permanent_uuid: "809f3a74f4b34f84820235c7d19deb76" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 36001 } } peers { permanent_uuid: "f3fac04926ac442cbde92c6cdec496bc" member_type: NON_VOTER last_known_addr { host: "127.31.250.193" port: 43471 } attrs { promote: true } } unsafe_config_change: true }
I20250811 20:46:19.857151 869 raft_consensus.cc:2953] T a07e39bc140f46d3ace5ba69d8d294a5 P 809f3a74f4b34f84820235c7d19deb76 [term 4 FOLLOWER]: Committing config change with OpId 4.5: config changed from index 3 to 5, NON_VOTER f3fac04926ac442cbde92c6cdec496bc (127.31.250.193) added. New config: { opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "a540b3ba92124aa1a30796dfd6a829ba" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 32787 } } peers { permanent_uuid: "809f3a74f4b34f84820235c7d19deb76" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 36001 } } peers { permanent_uuid: "f3fac04926ac442cbde92c6cdec496bc" member_type: NON_VOTER last_known_addr { host: "127.31.250.193" port: 43471 } attrs { promote: true } } unsafe_config_change: true }
W20250811 20:46:19.864075 987 catalog_manager.cc:4726] Async tablet task DeleteTablet RPC for tablet a07e39bc140f46d3ace5ba69d8d294a5 on TS cdbb30b5688c4bed8da0f52c9b5a70a3 failed: Not found: failed to reset TS proxy: Could not find TS for UUID cdbb30b5688c4bed8da0f52c9b5a70a3
I20250811 20:46:19.864189 989 catalog_manager.cc:5095] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet a07e39bc140f46d3ace5ba69d8d294a5 with cas_config_opid_index 3: ChangeConfig:ADD_PEER:NON_VOTER succeeded (attempt 1)
I20250811 20:46:19.868875 1002 catalog_manager.cc:5582] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba reported cstate change: config changed from index 3 to 5, NON_VOTER f3fac04926ac442cbde92c6cdec496bc (127.31.250.193) added. New cstate: current_term: 4 leader_uuid: "a540b3ba92124aa1a30796dfd6a829ba" committed_config { opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "a540b3ba92124aa1a30796dfd6a829ba" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 32787 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "809f3a74f4b34f84820235c7d19deb76" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 36001 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "f3fac04926ac442cbde92c6cdec496bc" member_type: NON_VOTER last_known_addr { host: "127.31.250.193" port: 43471 } attrs { promote: true } health_report { overall_health: UNKNOWN } } unsafe_config_change: true }
W20250811 20:46:19.878429 538 consensus_peers.cc:489] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba -> Peer f3fac04926ac442cbde92c6cdec496bc (127.31.250.193:43471): Couldn't send request to peer f3fac04926ac442cbde92c6cdec496bc. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: a07e39bc140f46d3ace5ba69d8d294a5. This is attempt 1: this message will repeat every 5th retry.
I20250811 20:46:20.300614 1112 ts_tablet_manager.cc:927] T a07e39bc140f46d3ace5ba69d8d294a5 P f3fac04926ac442cbde92c6cdec496bc: Initiating tablet copy from peer a540b3ba92124aa1a30796dfd6a829ba (127.31.250.194:32787)
I20250811 20:46:20.303730 1112 tablet_copy_client.cc:323] T a07e39bc140f46d3ace5ba69d8d294a5 P f3fac04926ac442cbde92c6cdec496bc: tablet copy: Beginning tablet copy session from remote peer at address 127.31.250.194:32787
I20250811 20:46:20.316228 622 tablet_copy_service.cc:140] P a540b3ba92124aa1a30796dfd6a829ba: Received BeginTabletCopySession request for tablet a07e39bc140f46d3ace5ba69d8d294a5 from peer f3fac04926ac442cbde92c6cdec496bc ({username='slave'} at 127.31.250.193:51277)
I20250811 20:46:20.316860 622 tablet_copy_service.cc:161] P a540b3ba92124aa1a30796dfd6a829ba: Beginning new tablet copy session on tablet a07e39bc140f46d3ace5ba69d8d294a5 from peer f3fac04926ac442cbde92c6cdec496bc at {username='slave'} at 127.31.250.193:51277: session id = f3fac04926ac442cbde92c6cdec496bc-a07e39bc140f46d3ace5ba69d8d294a5
I20250811 20:46:20.325583 622 tablet_copy_source_session.cc:215] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba: Tablet Copy: opened 0 blocks and 1 log segments
I20250811 20:46:20.331590 1112 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet a07e39bc140f46d3ace5ba69d8d294a5. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 20:46:20.352885 1112 tablet_copy_client.cc:806] T a07e39bc140f46d3ace5ba69d8d294a5 P f3fac04926ac442cbde92c6cdec496bc: tablet copy: Starting download of 0 data blocks...
I20250811 20:46:20.353605 1112 tablet_copy_client.cc:670] T a07e39bc140f46d3ace5ba69d8d294a5 P f3fac04926ac442cbde92c6cdec496bc: tablet copy: Starting download of 1 WAL segments...
I20250811 20:46:20.357443 1112 tablet_copy_client.cc:538] T a07e39bc140f46d3ace5ba69d8d294a5 P f3fac04926ac442cbde92c6cdec496bc: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250811 20:46:20.363137 1112 tablet_bootstrap.cc:492] T a07e39bc140f46d3ace5ba69d8d294a5 P f3fac04926ac442cbde92c6cdec496bc: Bootstrap starting.
I20250811 20:46:20.375814 1112 log.cc:826] T a07e39bc140f46d3ace5ba69d8d294a5 P f3fac04926ac442cbde92c6cdec496bc: Log is configured to *not* fsync() on all Append() calls
I20250811 20:46:20.386663 1112 tablet_bootstrap.cc:492] T a07e39bc140f46d3ace5ba69d8d294a5 P f3fac04926ac442cbde92c6cdec496bc: Bootstrap replayed 1/1 log segments. Stats: ops{read=5 overwritten=0 applied=5 ignored=0} inserts{seen=0 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250811 20:46:20.387491 1112 tablet_bootstrap.cc:492] T a07e39bc140f46d3ace5ba69d8d294a5 P f3fac04926ac442cbde92c6cdec496bc: Bootstrap complete.
I20250811 20:46:20.388064 1112 ts_tablet_manager.cc:1397] T a07e39bc140f46d3ace5ba69d8d294a5 P f3fac04926ac442cbde92c6cdec496bc: Time spent bootstrapping tablet: real 0.025s user 0.027s sys 0.001s
I20250811 20:46:20.404839 1112 raft_consensus.cc:357] T a07e39bc140f46d3ace5ba69d8d294a5 P f3fac04926ac442cbde92c6cdec496bc [term 4 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "a540b3ba92124aa1a30796dfd6a829ba" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 32787 } } peers { permanent_uuid: "809f3a74f4b34f84820235c7d19deb76" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 36001 } } peers { permanent_uuid: "f3fac04926ac442cbde92c6cdec496bc" member_type: NON_VOTER last_known_addr { host: "127.31.250.193" port: 43471 } attrs { promote: true } } unsafe_config_change: true
I20250811 20:46:20.405668 1112 raft_consensus.cc:738] T a07e39bc140f46d3ace5ba69d8d294a5 P f3fac04926ac442cbde92c6cdec496bc [term 4 LEARNER]: Becoming Follower/Learner. State: Replica: f3fac04926ac442cbde92c6cdec496bc, State: Initialized, Role: LEARNER
I20250811 20:46:20.406276 1112 consensus_queue.cc:260] T a07e39bc140f46d3ace5ba69d8d294a5 P f3fac04926ac442cbde92c6cdec496bc [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 5, Last appended: 4.5, Last appended by leader: 5, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 5 OBSOLETE_local: false peers { permanent_uuid: "a540b3ba92124aa1a30796dfd6a829ba" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 32787 } } peers { permanent_uuid: "809f3a74f4b34f84820235c7d19deb76" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 36001 } } peers { permanent_uuid: "f3fac04926ac442cbde92c6cdec496bc" member_type: NON_VOTER last_known_addr { host: "127.31.250.193" port: 43471 } attrs { promote: true } } unsafe_config_change: true
I20250811 20:46:20.409915 1112 ts_tablet_manager.cc:1428] T a07e39bc140f46d3ace5ba69d8d294a5 P f3fac04926ac442cbde92c6cdec496bc: Time spent starting tablet: real 0.022s user 0.013s sys 0.008s
I20250811 20:46:20.411690 622 tablet_copy_service.cc:342] P a540b3ba92124aa1a30796dfd6a829ba: Request end of tablet copy session f3fac04926ac442cbde92c6cdec496bc-a07e39bc140f46d3ace5ba69d8d294a5 received from {username='slave'} at 127.31.250.193:51277
I20250811 20:46:20.412143 622 tablet_copy_service.cc:434] P a540b3ba92124aa1a30796dfd6a829ba: ending tablet copy session f3fac04926ac442cbde92c6cdec496bc-a07e39bc140f46d3ace5ba69d8d294a5 on tablet a07e39bc140f46d3ace5ba69d8d294a5 with peer f3fac04926ac442cbde92c6cdec496bc
I20250811 20:46:20.761587 469 raft_consensus.cc:1215] T a07e39bc140f46d3ace5ba69d8d294a5 P f3fac04926ac442cbde92c6cdec496bc [term 4 LEARNER]: Deduplicated request from leader. Original: 4.4->[4.5-4.5] Dedup: 4.5->[]
W20250811 20:46:21.031454 987 catalog_manager.cc:4726] Async tablet task DeleteTablet RPC for tablet a07e39bc140f46d3ace5ba69d8d294a5 on TS cdbb30b5688c4bed8da0f52c9b5a70a3 failed: Not found: failed to reset TS proxy: Could not find TS for UUID cdbb30b5688c4bed8da0f52c9b5a70a3
I20250811 20:46:21.285773 1120 raft_consensus.cc:1062] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba: attempting to promote NON_VOTER f3fac04926ac442cbde92c6cdec496bc to VOTER
I20250811 20:46:21.287371 1120 consensus_queue.cc:237] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 5, Committed index: 5, Last appended: 4.5, Last appended by leader: 0, Current term: 4, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "a540b3ba92124aa1a30796dfd6a829ba" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 32787 } } peers { permanent_uuid: "809f3a74f4b34f84820235c7d19deb76" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 36001 } } peers { permanent_uuid: "f3fac04926ac442cbde92c6cdec496bc" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 43471 } attrs { promote: false } } unsafe_config_change: true
I20250811 20:46:21.292057 469 raft_consensus.cc:1273] T a07e39bc140f46d3ace5ba69d8d294a5 P f3fac04926ac442cbde92c6cdec496bc [term 4 LEARNER]: Refusing update from remote peer a540b3ba92124aa1a30796dfd6a829ba: Log matching property violated. Preceding OpId in replica: term: 4 index: 5. Preceding OpId from leader: term: 4 index: 6. (index mismatch)
I20250811 20:46:21.293241 1118 consensus_queue.cc:1035] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [LEADER]: Connected to new peer: Peer: permanent_uuid: "f3fac04926ac442cbde92c6cdec496bc" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 43471 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 6, Last known committed idx: 5, Time since last communication: 0.000s
I20250811 20:46:21.294242 869 raft_consensus.cc:1273] T a07e39bc140f46d3ace5ba69d8d294a5 P 809f3a74f4b34f84820235c7d19deb76 [term 4 FOLLOWER]: Refusing update from remote peer a540b3ba92124aa1a30796dfd6a829ba: Log matching property violated. Preceding OpId in replica: term: 4 index: 5. Preceding OpId from leader: term: 4 index: 6. (index mismatch)
I20250811 20:46:21.295660 1118 consensus_queue.cc:1035] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [LEADER]: Connected to new peer: Peer: permanent_uuid: "809f3a74f4b34f84820235c7d19deb76" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 36001 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 6, Last known committed idx: 5, Time since last communication: 0.000s
I20250811 20:46:21.300331 1122 raft_consensus.cc:2953] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba [term 4 LEADER]: Committing config change with OpId 4.6: config changed from index 5 to 6, f3fac04926ac442cbde92c6cdec496bc (127.31.250.193) changed from NON_VOTER to VOTER. New config: { opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "a540b3ba92124aa1a30796dfd6a829ba" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 32787 } } peers { permanent_uuid: "809f3a74f4b34f84820235c7d19deb76" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 36001 } } peers { permanent_uuid: "f3fac04926ac442cbde92c6cdec496bc" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 43471 } attrs { promote: false } } unsafe_config_change: true }
I20250811 20:46:21.301906 469 raft_consensus.cc:2953] T a07e39bc140f46d3ace5ba69d8d294a5 P f3fac04926ac442cbde92c6cdec496bc [term 4 FOLLOWER]: Committing config change with OpId 4.6: config changed from index 5 to 6, f3fac04926ac442cbde92c6cdec496bc (127.31.250.193) changed from NON_VOTER to VOTER. New config: { opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "a540b3ba92124aa1a30796dfd6a829ba" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 32787 } } peers { permanent_uuid: "809f3a74f4b34f84820235c7d19deb76" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 36001 } } peers { permanent_uuid: "f3fac04926ac442cbde92c6cdec496bc" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 43471 } attrs { promote: false } } unsafe_config_change: true }
I20250811 20:46:21.304966 869 raft_consensus.cc:2953] T a07e39bc140f46d3ace5ba69d8d294a5 P 809f3a74f4b34f84820235c7d19deb76 [term 4 FOLLOWER]: Committing config change with OpId 4.6: config changed from index 5 to 6, f3fac04926ac442cbde92c6cdec496bc (127.31.250.193) changed from NON_VOTER to VOTER. New config: { opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "a540b3ba92124aa1a30796dfd6a829ba" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 32787 } } peers { permanent_uuid: "809f3a74f4b34f84820235c7d19deb76" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 36001 } } peers { permanent_uuid: "f3fac04926ac442cbde92c6cdec496bc" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 43471 } attrs { promote: false } } unsafe_config_change: true }
I20250811 20:46:21.311291 1001 catalog_manager.cc:5582] T a07e39bc140f46d3ace5ba69d8d294a5 P a540b3ba92124aa1a30796dfd6a829ba reported cstate change: config changed from index 5 to 6, f3fac04926ac442cbde92c6cdec496bc (127.31.250.193) changed from NON_VOTER to VOTER. New cstate: current_term: 4 leader_uuid: "a540b3ba92124aa1a30796dfd6a829ba" committed_config { opid_index: 6 OBSOLETE_local: false peers { permanent_uuid: "a540b3ba92124aa1a30796dfd6a829ba" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 32787 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "809f3a74f4b34f84820235c7d19deb76" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 36001 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "f3fac04926ac442cbde92c6cdec496bc" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 43471 } attrs { promote: false } health_report { overall_health: HEALTHY } } unsafe_config_change: true }
I20250811 20:46:21.383668 32747 kudu-admin-test.cc:751] Waiting for Master to see new config...
I20250811 20:46:21.397050 32747 kudu-admin-test.cc:756] Tablet locations:
tablet_locations {
tablet_id: "a07e39bc140f46d3ace5ba69d8d294a5"
DEPRECATED_stale: false
partition {
partition_key_start: ""
partition_key_end: ""
}
interned_replicas {
ts_info_idx: 0
role: LEADER
}
interned_replicas {
ts_info_idx: 1
role: FOLLOWER
}
interned_replicas {
ts_info_idx: 2
role: FOLLOWER
}
}
ts_infos {
permanent_uuid: "a540b3ba92124aa1a30796dfd6a829ba"
rpc_addresses {
host: "127.31.250.194"
port: 32787
}
}
ts_infos {
permanent_uuid: "809f3a74f4b34f84820235c7d19deb76"
rpc_addresses {
host: "127.31.250.196"
port: 36001
}
}
ts_infos {
permanent_uuid: "f3fac04926ac442cbde92c6cdec496bc"
rpc_addresses {
host: "127.31.250.193"
port: 43471
}
}
I20250811 20:46:21.399197 32747 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu with pid 385
I20250811 20:46:21.423642 32747 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu with pid 518
I20250811 20:46:21.458889 32747 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu with pid 783
I20250811 20:46:21.485584 32747 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu with pid 953
2025-08-11T20:46:21Z chronyd exiting
[ OK ] AdminCliTest.TestUnsafeChangeConfigForConfigWithTwoNodes (20096 ms)
[ RUN ] AdminCliTest.TestGracefulSpecificLeaderStepDown
I20250811 20:46:21.551921 32747 test_util.cc:276] Using random seed: 69569737
I20250811 20:46:21.557632 32747 ts_itest-base.cc:115] Starting cluster with:
I20250811 20:46:21.557802 32747 ts_itest-base.cc:116] --------------
I20250811 20:46:21.557912 32747 ts_itest-base.cc:117] 3 tablet servers
I20250811 20:46:21.558017 32747 ts_itest-base.cc:118] 3 replicas per TS
I20250811 20:46:21.558115 32747 ts_itest-base.cc:119] --------------
2025-08-11T20:46:21Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-08-11T20:46:21Z Disabled control of system clock
I20250811 20:46:21.592379 32747 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
/tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.31.250.254:46199
--webserver_interface=127.31.250.254
--webserver_port=0
--builtin_ntp_servers=127.31.250.212:34315
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.31.250.254:46199
--catalog_manager_wait_for_new_tablets_to_elect_leader=false with env {}
W20250811 20:46:21.920120 1141 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:46:21.920652 1141 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 20:46:21.921025 1141 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:46:21.951068 1141 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250811 20:46:21.951380 1141 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 20:46:21.951583 1141 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250811 20:46:21.951776 1141 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250811 20:46:21.986366 1141 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.31.250.212:34315
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
--catalog_manager_wait_for_new_tablets_to_elect_leader=false
--ipki_ca_key_size=768
--master_addresses=127.31.250.254:46199
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.31.250.254:46199
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.31.250.254
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision 8e873eb37b157a0cf7cb97cc7690367de9707107
build type FASTDEBUG
built by None at 11 Aug 2025 20:41:23 UTC on 24a791456cd2
build id 7521
TSAN enabled
I20250811 20:46:21.987618 1141 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 20:46:21.989142 1141 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 20:46:22.000655 1147 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:46:22.001192 1148 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:46:23.198778 1150 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:46:23.201669 1149 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1199 milliseconds
W20250811 20:46:23.203423 1141 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.203s user 0.391s sys 0.807s
W20250811 20:46:23.203675 1141 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.204s user 0.391s sys 0.807s
I20250811 20:46:23.203900 1141 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 20:46:23.204949 1141 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 20:46:23.211984 1141 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 20:46:23.213423 1141 hybrid_clock.cc:648] HybridClock initialized: now 1754945183213369 us; error 71 us; skew 500 ppm
I20250811 20:46:23.214207 1141 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 20:46:23.221786 1141 webserver.cc:489] Webserver started at http://127.31.250.254:42467/ using document root <none> and password file <none>
I20250811 20:46:23.222692 1141 fs_manager.cc:362] Metadata directory not provided
I20250811 20:46:23.222915 1141 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 20:46:23.223420 1141 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 20:46:23.227799 1141 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "a4d54ea0ee3b4132afceabe270db6d91"
format_stamp: "Formatted at 2025-08-11 20:46:23 on dist-test-slave-4gzk"
I20250811 20:46:23.228900 1141 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "a4d54ea0ee3b4132afceabe270db6d91"
format_stamp: "Formatted at 2025-08-11 20:46:23 on dist-test-slave-4gzk"
I20250811 20:46:23.236707 1141 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.008s sys 0.000s
I20250811 20:46:23.242594 1157 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 20:46:23.243849 1141 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.002s sys 0.001s
I20250811 20:46:23.244278 1141 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
uuid: "a4d54ea0ee3b4132afceabe270db6d91"
format_stamp: "Formatted at 2025-08-11 20:46:23 on dist-test-slave-4gzk"
I20250811 20:46:23.244735 1141 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 20:46:23.341364 1141 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 20:46:23.342797 1141 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 20:46:23.343200 1141 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 20:46:23.412822 1141 rpc_server.cc:307] RPC server started. Bound to: 127.31.250.254:46199
I20250811 20:46:23.412891 1208 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.31.250.254:46199 every 8 connection(s)
I20250811 20:46:23.415606 1141 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250811 20:46:23.416410 32747 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu as pid 1141
I20250811 20:46:23.416846 32747 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250811 20:46:23.423022 1209 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 20:46:23.442891 1209 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P a4d54ea0ee3b4132afceabe270db6d91: Bootstrap starting.
I20250811 20:46:23.448247 1209 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P a4d54ea0ee3b4132afceabe270db6d91: Neither blocks nor log segments found. Creating new log.
I20250811 20:46:23.450481 1209 log.cc:826] T 00000000000000000000000000000000 P a4d54ea0ee3b4132afceabe270db6d91: Log is configured to *not* fsync() on all Append() calls
I20250811 20:46:23.454704 1209 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P a4d54ea0ee3b4132afceabe270db6d91: No bootstrap required, opened a new log
I20250811 20:46:23.473109 1209 raft_consensus.cc:357] T 00000000000000000000000000000000 P a4d54ea0ee3b4132afceabe270db6d91 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a4d54ea0ee3b4132afceabe270db6d91" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 46199 } }
I20250811 20:46:23.473729 1209 raft_consensus.cc:383] T 00000000000000000000000000000000 P a4d54ea0ee3b4132afceabe270db6d91 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 20:46:23.473919 1209 raft_consensus.cc:738] T 00000000000000000000000000000000 P a4d54ea0ee3b4132afceabe270db6d91 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: a4d54ea0ee3b4132afceabe270db6d91, State: Initialized, Role: FOLLOWER
I20250811 20:46:23.474512 1209 consensus_queue.cc:260] T 00000000000000000000000000000000 P a4d54ea0ee3b4132afceabe270db6d91 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a4d54ea0ee3b4132afceabe270db6d91" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 46199 } }
I20250811 20:46:23.474967 1209 raft_consensus.cc:397] T 00000000000000000000000000000000 P a4d54ea0ee3b4132afceabe270db6d91 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250811 20:46:23.475183 1209 raft_consensus.cc:491] T 00000000000000000000000000000000 P a4d54ea0ee3b4132afceabe270db6d91 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250811 20:46:23.475457 1209 raft_consensus.cc:3058] T 00000000000000000000000000000000 P a4d54ea0ee3b4132afceabe270db6d91 [term 0 FOLLOWER]: Advancing to term 1
I20250811 20:46:23.479773 1209 raft_consensus.cc:513] T 00000000000000000000000000000000 P a4d54ea0ee3b4132afceabe270db6d91 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a4d54ea0ee3b4132afceabe270db6d91" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 46199 } }
I20250811 20:46:23.480675 1209 leader_election.cc:304] T 00000000000000000000000000000000 P a4d54ea0ee3b4132afceabe270db6d91 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: a4d54ea0ee3b4132afceabe270db6d91; no voters:
I20250811 20:46:23.482367 1209 leader_election.cc:290] T 00000000000000000000000000000000 P a4d54ea0ee3b4132afceabe270db6d91 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250811 20:46:23.483096 1214 raft_consensus.cc:2802] T 00000000000000000000000000000000 P a4d54ea0ee3b4132afceabe270db6d91 [term 1 FOLLOWER]: Leader election won for term 1
I20250811 20:46:23.485296 1214 raft_consensus.cc:695] T 00000000000000000000000000000000 P a4d54ea0ee3b4132afceabe270db6d91 [term 1 LEADER]: Becoming Leader. State: Replica: a4d54ea0ee3b4132afceabe270db6d91, State: Running, Role: LEADER
I20250811 20:46:23.485949 1214 consensus_queue.cc:237] T 00000000000000000000000000000000 P a4d54ea0ee3b4132afceabe270db6d91 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a4d54ea0ee3b4132afceabe270db6d91" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 46199 } }
I20250811 20:46:23.486485 1209 sys_catalog.cc:564] T 00000000000000000000000000000000 P a4d54ea0ee3b4132afceabe270db6d91 [sys.catalog]: configured and running, proceeding with master startup.
I20250811 20:46:23.493327 1216 sys_catalog.cc:455] T 00000000000000000000000000000000 P a4d54ea0ee3b4132afceabe270db6d91 [sys.catalog]: SysCatalogTable state changed. Reason: New leader a4d54ea0ee3b4132afceabe270db6d91. Latest consensus state: current_term: 1 leader_uuid: "a4d54ea0ee3b4132afceabe270db6d91" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a4d54ea0ee3b4132afceabe270db6d91" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 46199 } } }
I20250811 20:46:23.494073 1216 sys_catalog.cc:458] T 00000000000000000000000000000000 P a4d54ea0ee3b4132afceabe270db6d91 [sys.catalog]: This master's current role is: LEADER
I20250811 20:46:23.495998 1215 sys_catalog.cc:455] T 00000000000000000000000000000000 P a4d54ea0ee3b4132afceabe270db6d91 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "a4d54ea0ee3b4132afceabe270db6d91" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "a4d54ea0ee3b4132afceabe270db6d91" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 46199 } } }
I20250811 20:46:23.498639 1215 sys_catalog.cc:458] T 00000000000000000000000000000000 P a4d54ea0ee3b4132afceabe270db6d91 [sys.catalog]: This master's current role is: LEADER
I20250811 20:46:23.500409 1223 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250811 20:46:23.513100 1223 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250811 20:46:23.528427 1223 catalog_manager.cc:1349] Generated new cluster ID: df37fea5e3114cdc95456cbd565f8404
I20250811 20:46:23.528757 1223 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250811 20:46:23.543700 1223 catalog_manager.cc:1372] Generated new certificate authority record
I20250811 20:46:23.545212 1223 catalog_manager.cc:1506] Loading token signing keys...
I20250811 20:46:23.560256 1223 catalog_manager.cc:5955] T 00000000000000000000000000000000 P a4d54ea0ee3b4132afceabe270db6d91: Generated new TSK 0
I20250811 20:46:23.561146 1223 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250811 20:46:23.583751 32747 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
/tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.31.250.193:0
--local_ip_for_outbound_sockets=127.31.250.193
--webserver_interface=127.31.250.193
--webserver_port=0
--tserver_master_addrs=127.31.250.254:46199
--builtin_ntp_servers=127.31.250.212:34315
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false with env {}
W20250811 20:46:23.882862 1233 flags.cc:425] Enabled unsafe flag: --enable_leader_failure_detection=false
W20250811 20:46:23.883502 1233 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:46:23.883774 1233 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 20:46:23.884279 1233 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:46:23.915033 1233 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 20:46:23.915884 1233 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.31.250.193
I20250811 20:46:23.948716 1233 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.31.250.212:34315
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.31.250.193:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.31.250.193
--webserver_port=0
--tserver_master_addrs=127.31.250.254:46199
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.31.250.193
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8e873eb37b157a0cf7cb97cc7690367de9707107
build type FASTDEBUG
built by None at 11 Aug 2025 20:41:23 UTC on 24a791456cd2
build id 7521
TSAN enabled
I20250811 20:46:23.949965 1233 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 20:46:23.951478 1233 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 20:46:23.963791 1239 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:46:25.368506 1238 debug-util.cc:398] Leaking SignalData structure 0x7b08000068a0 after lost signal to thread 1233
W20250811 20:46:23.964862 1240 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:46:25.474031 1241 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1507 milliseconds
W20250811 20:46:25.472020 1233 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.508s user 0.579s sys 0.927s
W20250811 20:46:25.474241 1242 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:46:25.474594 1233 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.511s user 0.580s sys 0.928s
I20250811 20:46:25.474900 1233 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 20:46:25.478798 1233 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 20:46:25.480867 1233 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 20:46:25.482256 1233 hybrid_clock.cc:648] HybridClock initialized: now 1754945185482215 us; error 45 us; skew 500 ppm
I20250811 20:46:25.483071 1233 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 20:46:25.489606 1233 webserver.cc:489] Webserver started at http://127.31.250.193:38745/ using document root <none> and password file <none>
I20250811 20:46:25.490686 1233 fs_manager.cc:362] Metadata directory not provided
I20250811 20:46:25.490931 1233 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 20:46:25.491575 1233 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 20:46:25.496208 1233 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "eb6f5673ab2643c49674b7ce504ed2ec"
format_stamp: "Formatted at 2025-08-11 20:46:25 on dist-test-slave-4gzk"
I20250811 20:46:25.497280 1233 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "eb6f5673ab2643c49674b7ce504ed2ec"
format_stamp: "Formatted at 2025-08-11 20:46:25 on dist-test-slave-4gzk"
I20250811 20:46:25.505398 1233 fs_manager.cc:696] Time spent creating directory manager: real 0.008s user 0.004s sys 0.004s
I20250811 20:46:25.511372 1249 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 20:46:25.512497 1233 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.002s sys 0.001s
I20250811 20:46:25.512804 1233 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "eb6f5673ab2643c49674b7ce504ed2ec"
format_stamp: "Formatted at 2025-08-11 20:46:25 on dist-test-slave-4gzk"
I20250811 20:46:25.513134 1233 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 20:46:25.568058 1233 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 20:46:25.569464 1233 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 20:46:25.569926 1233 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 20:46:25.572933 1233 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 20:46:25.577534 1233 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 20:46:25.577735 1233 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250811 20:46:25.577989 1233 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 20:46:25.578151 1233 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250811 20:46:25.744787 1233 rpc_server.cc:307] RPC server started. Bound to: 127.31.250.193:33247
I20250811 20:46:25.744967 1361 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.31.250.193:33247 every 8 connection(s)
I20250811 20:46:25.747516 1233 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250811 20:46:25.748441 32747 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu as pid 1233
I20250811 20:46:25.748898 32747 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250811 20:46:25.756513 32747 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
/tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.31.250.194:0
--local_ip_for_outbound_sockets=127.31.250.194
--webserver_interface=127.31.250.194
--webserver_port=0
--tserver_master_addrs=127.31.250.254:46199
--builtin_ntp_servers=127.31.250.212:34315
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false with env {}
I20250811 20:46:25.775552 1362 heartbeater.cc:344] Connected to a master server at 127.31.250.254:46199
I20250811 20:46:25.775969 1362 heartbeater.cc:461] Registering TS with master...
I20250811 20:46:25.776973 1362 heartbeater.cc:507] Master 127.31.250.254:46199 requested a full tablet report, sending...
I20250811 20:46:25.779584 1174 ts_manager.cc:194] Registered new tserver with Master: eb6f5673ab2643c49674b7ce504ed2ec (127.31.250.193:33247)
I20250811 20:46:25.781430 1174 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.31.250.193:43093
W20250811 20:46:26.064577 1366 flags.cc:425] Enabled unsafe flag: --enable_leader_failure_detection=false
W20250811 20:46:26.065187 1366 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:46:26.065469 1366 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 20:46:26.065905 1366 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:46:26.096750 1366 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 20:46:26.097625 1366 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.31.250.194
I20250811 20:46:26.132030 1366 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.31.250.212:34315
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.31.250.194:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.31.250.194
--webserver_port=0
--tserver_master_addrs=127.31.250.254:46199
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.31.250.194
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8e873eb37b157a0cf7cb97cc7690367de9707107
build type FASTDEBUG
built by None at 11 Aug 2025 20:41:23 UTC on 24a791456cd2
build id 7521
TSAN enabled
I20250811 20:46:26.133360 1366 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 20:46:26.134882 1366 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 20:46:26.145769 1372 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 20:46:26.784723 1362 heartbeater.cc:499] Master 127.31.250.254:46199 was elected leader, sending a full tablet report...
W20250811 20:46:26.147280 1373 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:46:27.298305 1375 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:46:27.300818 1374 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1149 milliseconds
I20250811 20:46:27.300966 1366 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 20:46:27.302102 1366 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 20:46:27.304790 1366 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 20:46:27.306213 1366 hybrid_clock.cc:648] HybridClock initialized: now 1754945187306185 us; error 40 us; skew 500 ppm
I20250811 20:46:27.306921 1366 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 20:46:27.313438 1366 webserver.cc:489] Webserver started at http://127.31.250.194:46713/ using document root <none> and password file <none>
I20250811 20:46:27.314240 1366 fs_manager.cc:362] Metadata directory not provided
I20250811 20:46:27.314404 1366 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 20:46:27.314780 1366 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 20:46:27.318984 1366 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data/instance:
uuid: "1a20b617ec6342938d2cf4493d7df529"
format_stamp: "Formatted at 2025-08-11 20:46:27 on dist-test-slave-4gzk"
I20250811 20:46:27.320096 1366 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal/instance:
uuid: "1a20b617ec6342938d2cf4493d7df529"
format_stamp: "Formatted at 2025-08-11 20:46:27 on dist-test-slave-4gzk"
I20250811 20:46:27.326742 1366 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.006s sys 0.001s
I20250811 20:46:27.332156 1382 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 20:46:27.333124 1366 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.003s sys 0.002s
I20250811 20:46:27.333403 1366 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "1a20b617ec6342938d2cf4493d7df529"
format_stamp: "Formatted at 2025-08-11 20:46:27 on dist-test-slave-4gzk"
I20250811 20:46:27.333683 1366 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 20:46:27.382712 1366 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 20:46:27.384222 1366 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 20:46:27.384616 1366 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 20:46:27.386919 1366 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 20:46:27.391140 1366 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 20:46:27.391395 1366 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.001s sys 0.000s
I20250811 20:46:27.391605 1366 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 20:46:27.391737 1366 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250811 20:46:27.525429 1366 rpc_server.cc:307] RPC server started. Bound to: 127.31.250.194:45757
I20250811 20:46:27.525525 1494 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.31.250.194:45757 every 8 connection(s)
I20250811 20:46:27.527894 1366 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250811 20:46:27.531035 32747 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu as pid 1366
I20250811 20:46:27.531534 32747 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal/instance
I20250811 20:46:27.538862 32747 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
/tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.31.250.195:0
--local_ip_for_outbound_sockets=127.31.250.195
--webserver_interface=127.31.250.195
--webserver_port=0
--tserver_master_addrs=127.31.250.254:46199
--builtin_ntp_servers=127.31.250.212:34315
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false with env {}
I20250811 20:46:27.550575 1495 heartbeater.cc:344] Connected to a master server at 127.31.250.254:46199
I20250811 20:46:27.550971 1495 heartbeater.cc:461] Registering TS with master...
I20250811 20:46:27.552002 1495 heartbeater.cc:507] Master 127.31.250.254:46199 requested a full tablet report, sending...
I20250811 20:46:27.554117 1174 ts_manager.cc:194] Registered new tserver with Master: 1a20b617ec6342938d2cf4493d7df529 (127.31.250.194:45757)
I20250811 20:46:27.555384 1174 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.31.250.194:35433
W20250811 20:46:27.838579 1499 flags.cc:425] Enabled unsafe flag: --enable_leader_failure_detection=false
W20250811 20:46:27.839268 1499 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:46:27.839550 1499 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 20:46:27.840034 1499 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:46:27.870427 1499 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 20:46:27.871295 1499 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.31.250.195
I20250811 20:46:27.904731 1499 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.31.250.212:34315
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--enable_leader_failure_detection=false
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.31.250.195:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.31.250.195
--webserver_port=0
--tserver_master_addrs=127.31.250.254:46199
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.31.250.195
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8e873eb37b157a0cf7cb97cc7690367de9707107
build type FASTDEBUG
built by None at 11 Aug 2025 20:41:23 UTC on 24a791456cd2
build id 7521
TSAN enabled
I20250811 20:46:27.906114 1499 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 20:46:27.907701 1499 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 20:46:27.920125 1505 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 20:46:28.558482 1495 heartbeater.cc:499] Master 127.31.250.254:46199 was elected leader, sending a full tablet report...
W20250811 20:46:29.323529 1504 debug-util.cc:398] Leaking SignalData structure 0x7b08000068a0 after lost signal to thread 1499
W20250811 20:46:29.676337 1499 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.756s user 0.715s sys 1.041s
W20250811 20:46:29.676694 1499 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.757s user 0.715s sys 1.041s
W20250811 20:46:27.920544 1506 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:46:29.678639 1508 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:46:29.681361 1507 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1757 milliseconds
I20250811 20:46:29.681461 1499 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 20:46:29.682688 1499 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 20:46:29.684739 1499 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 20:46:29.686097 1499 hybrid_clock.cc:648] HybridClock initialized: now 1754945189686054 us; error 45 us; skew 500 ppm
I20250811 20:46:29.686847 1499 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 20:46:29.692893 1499 webserver.cc:489] Webserver started at http://127.31.250.195:38033/ using document root <none> and password file <none>
I20250811 20:46:29.694132 1499 fs_manager.cc:362] Metadata directory not provided
I20250811 20:46:29.694422 1499 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 20:46:29.694978 1499 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 20:46:29.699331 1499 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data/instance:
uuid: "35827d5d81ef4f07b865b5569fb2a4e2"
format_stamp: "Formatted at 2025-08-11 20:46:29 on dist-test-slave-4gzk"
I20250811 20:46:29.700373 1499 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal/instance:
uuid: "35827d5d81ef4f07b865b5569fb2a4e2"
format_stamp: "Formatted at 2025-08-11 20:46:29 on dist-test-slave-4gzk"
I20250811 20:46:29.707274 1499 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.007s sys 0.000s
I20250811 20:46:29.713199 1515 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 20:46:29.714144 1499 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.005s sys 0.000s
I20250811 20:46:29.714473 1499 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "35827d5d81ef4f07b865b5569fb2a4e2"
format_stamp: "Formatted at 2025-08-11 20:46:29 on dist-test-slave-4gzk"
I20250811 20:46:29.714807 1499 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 20:46:29.771749 1499 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 20:46:29.773171 1499 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 20:46:29.773615 1499 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 20:46:29.776196 1499 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 20:46:29.780220 1499 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 20:46:29.780440 1499 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250811 20:46:29.780707 1499 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 20:46:29.780871 1499 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250811 20:46:29.913405 1499 rpc_server.cc:307] RPC server started. Bound to: 127.31.250.195:44171
I20250811 20:46:29.913511 1627 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.31.250.195:44171 every 8 connection(s)
I20250811 20:46:29.916014 1499 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250811 20:46:29.919888 32747 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu as pid 1499
I20250811 20:46:29.920357 32747 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestGracefulSpecificLeaderStepDown.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal/instance
I20250811 20:46:29.937117 1628 heartbeater.cc:344] Connected to a master server at 127.31.250.254:46199
I20250811 20:46:29.937536 1628 heartbeater.cc:461] Registering TS with master...
I20250811 20:46:29.938700 1628 heartbeater.cc:507] Master 127.31.250.254:46199 requested a full tablet report, sending...
I20250811 20:46:29.941097 1174 ts_manager.cc:194] Registered new tserver with Master: 35827d5d81ef4f07b865b5569fb2a4e2 (127.31.250.195:44171)
I20250811 20:46:29.942262 1174 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.31.250.195:57999
I20250811 20:46:29.954660 32747 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250811 20:46:29.987599 1174 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:48944:
name: "TestTable"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
owner: "alice"
W20250811 20:46:30.007411 1174 catalog_manager.cc:6944] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table TestTable in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250811 20:46:30.058967 1297 tablet_service.cc:1468] Processing CreateTablet for tablet 0ba11bfcad5e46558785822cdeeded52 (DEFAULT_TABLE table=TestTable [id=24a963fefc344fe6a7f3bae6a571071b]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 20:46:30.061053 1297 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 0ba11bfcad5e46558785822cdeeded52. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 20:46:30.065177 1430 tablet_service.cc:1468] Processing CreateTablet for tablet 0ba11bfcad5e46558785822cdeeded52 (DEFAULT_TABLE table=TestTable [id=24a963fefc344fe6a7f3bae6a571071b]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 20:46:30.066519 1563 tablet_service.cc:1468] Processing CreateTablet for tablet 0ba11bfcad5e46558785822cdeeded52 (DEFAULT_TABLE table=TestTable [id=24a963fefc344fe6a7f3bae6a571071b]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 20:46:30.067176 1430 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 0ba11bfcad5e46558785822cdeeded52. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 20:46:30.068060 1563 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 0ba11bfcad5e46558785822cdeeded52. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 20:46:30.089591 1647 tablet_bootstrap.cc:492] T 0ba11bfcad5e46558785822cdeeded52 P 35827d5d81ef4f07b865b5569fb2a4e2: Bootstrap starting.
I20250811 20:46:30.092717 1648 tablet_bootstrap.cc:492] T 0ba11bfcad5e46558785822cdeeded52 P eb6f5673ab2643c49674b7ce504ed2ec: Bootstrap starting.
I20250811 20:46:30.095865 1649 tablet_bootstrap.cc:492] T 0ba11bfcad5e46558785822cdeeded52 P 1a20b617ec6342938d2cf4493d7df529: Bootstrap starting.
I20250811 20:46:30.098549 1647 tablet_bootstrap.cc:654] T 0ba11bfcad5e46558785822cdeeded52 P 35827d5d81ef4f07b865b5569fb2a4e2: Neither blocks nor log segments found. Creating new log.
I20250811 20:46:30.100557 1647 log.cc:826] T 0ba11bfcad5e46558785822cdeeded52 P 35827d5d81ef4f07b865b5569fb2a4e2: Log is configured to *not* fsync() on all Append() calls
I20250811 20:46:30.100771 1648 tablet_bootstrap.cc:654] T 0ba11bfcad5e46558785822cdeeded52 P eb6f5673ab2643c49674b7ce504ed2ec: Neither blocks nor log segments found. Creating new log.
I20250811 20:46:30.103384 1648 log.cc:826] T 0ba11bfcad5e46558785822cdeeded52 P eb6f5673ab2643c49674b7ce504ed2ec: Log is configured to *not* fsync() on all Append() calls
I20250811 20:46:30.104091 1649 tablet_bootstrap.cc:654] T 0ba11bfcad5e46558785822cdeeded52 P 1a20b617ec6342938d2cf4493d7df529: Neither blocks nor log segments found. Creating new log.
I20250811 20:46:30.106040 1647 tablet_bootstrap.cc:492] T 0ba11bfcad5e46558785822cdeeded52 P 35827d5d81ef4f07b865b5569fb2a4e2: No bootstrap required, opened a new log
I20250811 20:46:30.106630 1647 ts_tablet_manager.cc:1397] T 0ba11bfcad5e46558785822cdeeded52 P 35827d5d81ef4f07b865b5569fb2a4e2: Time spent bootstrapping tablet: real 0.018s user 0.004s sys 0.011s
I20250811 20:46:30.106706 1649 log.cc:826] T 0ba11bfcad5e46558785822cdeeded52 P 1a20b617ec6342938d2cf4493d7df529: Log is configured to *not* fsync() on all Append() calls
I20250811 20:46:30.109397 1648 tablet_bootstrap.cc:492] T 0ba11bfcad5e46558785822cdeeded52 P eb6f5673ab2643c49674b7ce504ed2ec: No bootstrap required, opened a new log
I20250811 20:46:30.110062 1648 ts_tablet_manager.cc:1397] T 0ba11bfcad5e46558785822cdeeded52 P eb6f5673ab2643c49674b7ce504ed2ec: Time spent bootstrapping tablet: real 0.018s user 0.017s sys 0.000s
I20250811 20:46:30.112897 1649 tablet_bootstrap.cc:492] T 0ba11bfcad5e46558785822cdeeded52 P 1a20b617ec6342938d2cf4493d7df529: No bootstrap required, opened a new log
I20250811 20:46:30.113395 1649 ts_tablet_manager.cc:1397] T 0ba11bfcad5e46558785822cdeeded52 P 1a20b617ec6342938d2cf4493d7df529: Time spent bootstrapping tablet: real 0.018s user 0.013s sys 0.001s
I20250811 20:46:30.127096 1647 raft_consensus.cc:357] T 0ba11bfcad5e46558785822cdeeded52 P 35827d5d81ef4f07b865b5569fb2a4e2 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "35827d5d81ef4f07b865b5569fb2a4e2" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 44171 } } peers { permanent_uuid: "eb6f5673ab2643c49674b7ce504ed2ec" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 33247 } } peers { permanent_uuid: "1a20b617ec6342938d2cf4493d7df529" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 45757 } }
I20250811 20:46:30.127799 1647 raft_consensus.cc:738] T 0ba11bfcad5e46558785822cdeeded52 P 35827d5d81ef4f07b865b5569fb2a4e2 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 35827d5d81ef4f07b865b5569fb2a4e2, State: Initialized, Role: FOLLOWER
I20250811 20:46:30.128376 1647 consensus_queue.cc:260] T 0ba11bfcad5e46558785822cdeeded52 P 35827d5d81ef4f07b865b5569fb2a4e2 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "35827d5d81ef4f07b865b5569fb2a4e2" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 44171 } } peers { permanent_uuid: "eb6f5673ab2643c49674b7ce504ed2ec" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 33247 } } peers { permanent_uuid: "1a20b617ec6342938d2cf4493d7df529" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 45757 } }
I20250811 20:46:30.131364 1628 heartbeater.cc:499] Master 127.31.250.254:46199 was elected leader, sending a full tablet report...
I20250811 20:46:30.132787 1647 ts_tablet_manager.cc:1428] T 0ba11bfcad5e46558785822cdeeded52 P 35827d5d81ef4f07b865b5569fb2a4e2: Time spent starting tablet: real 0.026s user 0.021s sys 0.003s
I20250811 20:46:30.136909 1648 raft_consensus.cc:357] T 0ba11bfcad5e46558785822cdeeded52 P eb6f5673ab2643c49674b7ce504ed2ec [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "35827d5d81ef4f07b865b5569fb2a4e2" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 44171 } } peers { permanent_uuid: "eb6f5673ab2643c49674b7ce504ed2ec" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 33247 } } peers { permanent_uuid: "1a20b617ec6342938d2cf4493d7df529" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 45757 } }
I20250811 20:46:30.137821 1648 raft_consensus.cc:738] T 0ba11bfcad5e46558785822cdeeded52 P eb6f5673ab2643c49674b7ce504ed2ec [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: eb6f5673ab2643c49674b7ce504ed2ec, State: Initialized, Role: FOLLOWER
I20250811 20:46:30.138527 1648 consensus_queue.cc:260] T 0ba11bfcad5e46558785822cdeeded52 P eb6f5673ab2643c49674b7ce504ed2ec [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "35827d5d81ef4f07b865b5569fb2a4e2" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 44171 } } peers { permanent_uuid: "eb6f5673ab2643c49674b7ce504ed2ec" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 33247 } } peers { permanent_uuid: "1a20b617ec6342938d2cf4493d7df529" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 45757 } }
I20250811 20:46:30.140806 1649 raft_consensus.cc:357] T 0ba11bfcad5e46558785822cdeeded52 P 1a20b617ec6342938d2cf4493d7df529 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "35827d5d81ef4f07b865b5569fb2a4e2" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 44171 } } peers { permanent_uuid: "eb6f5673ab2643c49674b7ce504ed2ec" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 33247 } } peers { permanent_uuid: "1a20b617ec6342938d2cf4493d7df529" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 45757 } }
I20250811 20:46:30.141906 1649 raft_consensus.cc:738] T 0ba11bfcad5e46558785822cdeeded52 P 1a20b617ec6342938d2cf4493d7df529 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 1a20b617ec6342938d2cf4493d7df529, State: Initialized, Role: FOLLOWER
I20250811 20:46:30.142663 1649 consensus_queue.cc:260] T 0ba11bfcad5e46558785822cdeeded52 P 1a20b617ec6342938d2cf4493d7df529 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "35827d5d81ef4f07b865b5569fb2a4e2" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 44171 } } peers { permanent_uuid: "eb6f5673ab2643c49674b7ce504ed2ec" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 33247 } } peers { permanent_uuid: "1a20b617ec6342938d2cf4493d7df529" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 45757 } }
I20250811 20:46:30.143385 1648 ts_tablet_manager.cc:1428] T 0ba11bfcad5e46558785822cdeeded52 P eb6f5673ab2643c49674b7ce504ed2ec: Time spent starting tablet: real 0.033s user 0.025s sys 0.008s
I20250811 20:46:30.148679 1649 ts_tablet_manager.cc:1428] T 0ba11bfcad5e46558785822cdeeded52 P 1a20b617ec6342938d2cf4493d7df529: Time spent starting tablet: real 0.035s user 0.032s sys 0.003s
I20250811 20:46:30.160696 32747 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250811 20:46:30.164276 32747 ts_itest-base.cc:246] Waiting for 1 tablets on tserver eb6f5673ab2643c49674b7ce504ed2ec to finish bootstrapping
W20250811 20:46:30.170692 1629 tablet.cc:2378] T 0ba11bfcad5e46558785822cdeeded52 P 35827d5d81ef4f07b865b5569fb2a4e2: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250811 20:46:30.177961 32747 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 1a20b617ec6342938d2cf4493d7df529 to finish bootstrapping
I20250811 20:46:30.188333 32747 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 35827d5d81ef4f07b865b5569fb2a4e2 to finish bootstrapping
I20250811 20:46:30.229493 1317 tablet_service.cc:1940] Received Run Leader Election RPC: tablet_id: "0ba11bfcad5e46558785822cdeeded52"
dest_uuid: "eb6f5673ab2643c49674b7ce504ed2ec"
from {username='slave'} at 127.0.0.1:35764
I20250811 20:46:30.230026 1317 raft_consensus.cc:491] T 0ba11bfcad5e46558785822cdeeded52 P eb6f5673ab2643c49674b7ce504ed2ec [term 0 FOLLOWER]: Starting forced leader election (received explicit request)
I20250811 20:46:30.230329 1317 raft_consensus.cc:3058] T 0ba11bfcad5e46558785822cdeeded52 P eb6f5673ab2643c49674b7ce504ed2ec [term 0 FOLLOWER]: Advancing to term 1
I20250811 20:46:30.234659 1317 raft_consensus.cc:513] T 0ba11bfcad5e46558785822cdeeded52 P eb6f5673ab2643c49674b7ce504ed2ec [term 1 FOLLOWER]: Starting forced leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "35827d5d81ef4f07b865b5569fb2a4e2" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 44171 } } peers { permanent_uuid: "eb6f5673ab2643c49674b7ce504ed2ec" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 33247 } } peers { permanent_uuid: "1a20b617ec6342938d2cf4493d7df529" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 45757 } }
I20250811 20:46:30.236979 1317 leader_election.cc:290] T 0ba11bfcad5e46558785822cdeeded52 P eb6f5673ab2643c49674b7ce504ed2ec [CANDIDATE]: Term 1 election: Requested vote from peers 35827d5d81ef4f07b865b5569fb2a4e2 (127.31.250.195:44171), 1a20b617ec6342938d2cf4493d7df529 (127.31.250.194:45757)
I20250811 20:46:30.245389 32747 cluster_itest_util.cc:257] Not converged past 1 yet: 0.0 0.0 0.0
I20250811 20:46:30.249209 1583 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "0ba11bfcad5e46558785822cdeeded52" candidate_uuid: "eb6f5673ab2643c49674b7ce504ed2ec" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: true dest_uuid: "35827d5d81ef4f07b865b5569fb2a4e2"
I20250811 20:46:30.249943 1583 raft_consensus.cc:3058] T 0ba11bfcad5e46558785822cdeeded52 P 35827d5d81ef4f07b865b5569fb2a4e2 [term 0 FOLLOWER]: Advancing to term 1
I20250811 20:46:30.251356 1450 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "0ba11bfcad5e46558785822cdeeded52" candidate_uuid: "eb6f5673ab2643c49674b7ce504ed2ec" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: true dest_uuid: "1a20b617ec6342938d2cf4493d7df529"
I20250811 20:46:30.251896 1450 raft_consensus.cc:3058] T 0ba11bfcad5e46558785822cdeeded52 P 1a20b617ec6342938d2cf4493d7df529 [term 0 FOLLOWER]: Advancing to term 1
I20250811 20:46:30.254581 1583 raft_consensus.cc:2466] T 0ba11bfcad5e46558785822cdeeded52 P 35827d5d81ef4f07b865b5569fb2a4e2 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate eb6f5673ab2643c49674b7ce504ed2ec in term 1.
I20250811 20:46:30.255596 1253 leader_election.cc:304] T 0ba11bfcad5e46558785822cdeeded52 P eb6f5673ab2643c49674b7ce504ed2ec [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 35827d5d81ef4f07b865b5569fb2a4e2, eb6f5673ab2643c49674b7ce504ed2ec; no voters:
I20250811 20:46:30.256439 1654 raft_consensus.cc:2802] T 0ba11bfcad5e46558785822cdeeded52 P eb6f5673ab2643c49674b7ce504ed2ec [term 1 FOLLOWER]: Leader election won for term 1
I20250811 20:46:30.256412 1450 raft_consensus.cc:2466] T 0ba11bfcad5e46558785822cdeeded52 P 1a20b617ec6342938d2cf4493d7df529 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate eb6f5673ab2643c49674b7ce504ed2ec in term 1.
I20250811 20:46:30.258270 1654 raft_consensus.cc:695] T 0ba11bfcad5e46558785822cdeeded52 P eb6f5673ab2643c49674b7ce504ed2ec [term 1 LEADER]: Becoming Leader. State: Replica: eb6f5673ab2643c49674b7ce504ed2ec, State: Running, Role: LEADER
W20250811 20:46:30.258846 1363 tablet.cc:2378] T 0ba11bfcad5e46558785822cdeeded52 P eb6f5673ab2643c49674b7ce504ed2ec: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250811 20:46:30.259075 1654 consensus_queue.cc:237] T 0ba11bfcad5e46558785822cdeeded52 P eb6f5673ab2643c49674b7ce504ed2ec [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "35827d5d81ef4f07b865b5569fb2a4e2" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 44171 } } peers { permanent_uuid: "eb6f5673ab2643c49674b7ce504ed2ec" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 33247 } } peers { permanent_uuid: "1a20b617ec6342938d2cf4493d7df529" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 45757 } }
I20250811 20:46:30.270376 1171 catalog_manager.cc:5582] T 0ba11bfcad5e46558785822cdeeded52 P eb6f5673ab2643c49674b7ce504ed2ec reported cstate change: term changed from 0 to 1, leader changed from <none> to eb6f5673ab2643c49674b7ce504ed2ec (127.31.250.193). New cstate: current_term: 1 leader_uuid: "eb6f5673ab2643c49674b7ce504ed2ec" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "35827d5d81ef4f07b865b5569fb2a4e2" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 44171 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "eb6f5673ab2643c49674b7ce504ed2ec" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 33247 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "1a20b617ec6342938d2cf4493d7df529" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 45757 } health_report { overall_health: UNKNOWN } } }
W20250811 20:46:30.287505 1496 tablet.cc:2378] T 0ba11bfcad5e46558785822cdeeded52 P 1a20b617ec6342938d2cf4493d7df529: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250811 20:46:30.351428 32747 cluster_itest_util.cc:257] Not converged past 1 yet: 1.1 0.0 0.0
I20250811 20:46:30.556836 32747 cluster_itest_util.cc:257] Not converged past 1 yet: 1.1 0.0 0.0
I20250811 20:46:30.731415 1654 consensus_queue.cc:1035] T 0ba11bfcad5e46558785822cdeeded52 P eb6f5673ab2643c49674b7ce504ed2ec [LEADER]: Connected to new peer: Peer: permanent_uuid: "1a20b617ec6342938d2cf4493d7df529" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 45757 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250811 20:46:30.747340 1654 consensus_queue.cc:1035] T 0ba11bfcad5e46558785822cdeeded52 P eb6f5673ab2643c49674b7ce504ed2ec [LEADER]: Connected to new peer: Peer: permanent_uuid: "35827d5d81ef4f07b865b5569fb2a4e2" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 44171 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250811 20:46:32.532867 1317 tablet_service.cc:1968] Received LeaderStepDown RPC: tablet_id: "0ba11bfcad5e46558785822cdeeded52"
dest_uuid: "eb6f5673ab2643c49674b7ce504ed2ec"
mode: GRACEFUL
from {username='slave'} at 127.0.0.1:59666
I20250811 20:46:32.533454 1317 raft_consensus.cc:604] T 0ba11bfcad5e46558785822cdeeded52 P eb6f5673ab2643c49674b7ce504ed2ec [term 1 LEADER]: Received request to transfer leadership
I20250811 20:46:32.750977 1692 raft_consensus.cc:991] T 0ba11bfcad5e46558785822cdeeded52 P eb6f5673ab2643c49674b7ce504ed2ec: : Instructing follower 1a20b617ec6342938d2cf4493d7df529 to start an election
I20250811 20:46:32.751394 1680 raft_consensus.cc:1079] T 0ba11bfcad5e46558785822cdeeded52 P eb6f5673ab2643c49674b7ce504ed2ec [term 1 LEADER]: Signalling peer 1a20b617ec6342938d2cf4493d7df529 to start an election
I20250811 20:46:32.752661 1450 tablet_service.cc:1940] Received Run Leader Election RPC: tablet_id: "0ba11bfcad5e46558785822cdeeded52"
dest_uuid: "1a20b617ec6342938d2cf4493d7df529"
from {username='slave'} at 127.31.250.193:39199
I20250811 20:46:32.753170 1450 raft_consensus.cc:491] T 0ba11bfcad5e46558785822cdeeded52 P 1a20b617ec6342938d2cf4493d7df529 [term 1 FOLLOWER]: Starting forced leader election (received explicit request)
I20250811 20:46:32.753487 1450 raft_consensus.cc:3058] T 0ba11bfcad5e46558785822cdeeded52 P 1a20b617ec6342938d2cf4493d7df529 [term 1 FOLLOWER]: Advancing to term 2
I20250811 20:46:32.757870 1450 raft_consensus.cc:513] T 0ba11bfcad5e46558785822cdeeded52 P 1a20b617ec6342938d2cf4493d7df529 [term 2 FOLLOWER]: Starting forced leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "35827d5d81ef4f07b865b5569fb2a4e2" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 44171 } } peers { permanent_uuid: "eb6f5673ab2643c49674b7ce504ed2ec" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 33247 } } peers { permanent_uuid: "1a20b617ec6342938d2cf4493d7df529" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 45757 } }
I20250811 20:46:32.760075 1450 leader_election.cc:290] T 0ba11bfcad5e46558785822cdeeded52 P 1a20b617ec6342938d2cf4493d7df529 [CANDIDATE]: Term 2 election: Requested vote from peers 35827d5d81ef4f07b865b5569fb2a4e2 (127.31.250.195:44171), eb6f5673ab2643c49674b7ce504ed2ec (127.31.250.193:33247)
I20250811 20:46:32.774777 1583 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "0ba11bfcad5e46558785822cdeeded52" candidate_uuid: "1a20b617ec6342938d2cf4493d7df529" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: true dest_uuid: "35827d5d81ef4f07b865b5569fb2a4e2"
I20250811 20:46:32.775451 1583 raft_consensus.cc:3058] T 0ba11bfcad5e46558785822cdeeded52 P 35827d5d81ef4f07b865b5569fb2a4e2 [term 1 FOLLOWER]: Advancing to term 2
I20250811 20:46:32.775431 1317 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "0ba11bfcad5e46558785822cdeeded52" candidate_uuid: "1a20b617ec6342938d2cf4493d7df529" candidate_term: 2 candidate_status { last_received { term: 1 index: 1 } } ignore_live_leader: true dest_uuid: "eb6f5673ab2643c49674b7ce504ed2ec"
I20250811 20:46:32.775913 1317 raft_consensus.cc:3053] T 0ba11bfcad5e46558785822cdeeded52 P eb6f5673ab2643c49674b7ce504ed2ec [term 1 LEADER]: Stepping down as leader of term 1
I20250811 20:46:32.776151 1317 raft_consensus.cc:738] T 0ba11bfcad5e46558785822cdeeded52 P eb6f5673ab2643c49674b7ce504ed2ec [term 1 LEADER]: Becoming Follower/Learner. State: Replica: eb6f5673ab2643c49674b7ce504ed2ec, State: Running, Role: LEADER
I20250811 20:46:32.776774 1317 consensus_queue.cc:260] T 0ba11bfcad5e46558785822cdeeded52 P eb6f5673ab2643c49674b7ce504ed2ec [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 1, Committed index: 1, Last appended: 1.1, Last appended by leader: 1, Current term: 1, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "35827d5d81ef4f07b865b5569fb2a4e2" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 44171 } } peers { permanent_uuid: "eb6f5673ab2643c49674b7ce504ed2ec" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 33247 } } peers { permanent_uuid: "1a20b617ec6342938d2cf4493d7df529" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 45757 } }
I20250811 20:46:32.777680 1317 raft_consensus.cc:3058] T 0ba11bfcad5e46558785822cdeeded52 P eb6f5673ab2643c49674b7ce504ed2ec [term 1 FOLLOWER]: Advancing to term 2
I20250811 20:46:32.779953 1583 raft_consensus.cc:2466] T 0ba11bfcad5e46558785822cdeeded52 P 35827d5d81ef4f07b865b5569fb2a4e2 [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 1a20b617ec6342938d2cf4493d7df529 in term 2.
I20250811 20:46:32.780947 1386 leader_election.cc:304] T 0ba11bfcad5e46558785822cdeeded52 P 1a20b617ec6342938d2cf4493d7df529 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 1a20b617ec6342938d2cf4493d7df529, 35827d5d81ef4f07b865b5569fb2a4e2; no voters:
I20250811 20:46:32.782248 1317 raft_consensus.cc:2466] T 0ba11bfcad5e46558785822cdeeded52 P eb6f5673ab2643c49674b7ce504ed2ec [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 1a20b617ec6342938d2cf4493d7df529 in term 2.
I20250811 20:46:32.783196 1696 raft_consensus.cc:2802] T 0ba11bfcad5e46558785822cdeeded52 P 1a20b617ec6342938d2cf4493d7df529 [term 2 FOLLOWER]: Leader election won for term 2
I20250811 20:46:32.784605 1696 raft_consensus.cc:695] T 0ba11bfcad5e46558785822cdeeded52 P 1a20b617ec6342938d2cf4493d7df529 [term 2 LEADER]: Becoming Leader. State: Replica: 1a20b617ec6342938d2cf4493d7df529, State: Running, Role: LEADER
I20250811 20:46:32.785456 1696 consensus_queue.cc:237] T 0ba11bfcad5e46558785822cdeeded52 P 1a20b617ec6342938d2cf4493d7df529 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 1, Committed index: 1, Last appended: 1.1, Last appended by leader: 1, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "35827d5d81ef4f07b865b5569fb2a4e2" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 44171 } } peers { permanent_uuid: "eb6f5673ab2643c49674b7ce504ed2ec" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 33247 } } peers { permanent_uuid: "1a20b617ec6342938d2cf4493d7df529" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 45757 } }
I20250811 20:46:32.792496 1171 catalog_manager.cc:5582] T 0ba11bfcad5e46558785822cdeeded52 P 1a20b617ec6342938d2cf4493d7df529 reported cstate change: term changed from 1 to 2, leader changed from eb6f5673ab2643c49674b7ce504ed2ec (127.31.250.193) to 1a20b617ec6342938d2cf4493d7df529 (127.31.250.194). New cstate: current_term: 2 leader_uuid: "1a20b617ec6342938d2cf4493d7df529" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "35827d5d81ef4f07b865b5569fb2a4e2" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 44171 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "eb6f5673ab2643c49674b7ce504ed2ec" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 33247 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "1a20b617ec6342938d2cf4493d7df529" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 45757 } health_report { overall_health: HEALTHY } } }
I20250811 20:46:33.194334 1317 raft_consensus.cc:1273] T 0ba11bfcad5e46558785822cdeeded52 P eb6f5673ab2643c49674b7ce504ed2ec [term 2 FOLLOWER]: Refusing update from remote peer 1a20b617ec6342938d2cf4493d7df529: Log matching property violated. Preceding OpId in replica: term: 1 index: 1. Preceding OpId from leader: term: 2 index: 2. (index mismatch)
I20250811 20:46:33.195551 1696 consensus_queue.cc:1035] T 0ba11bfcad5e46558785822cdeeded52 P 1a20b617ec6342938d2cf4493d7df529 [LEADER]: Connected to new peer: Peer: permanent_uuid: "eb6f5673ab2643c49674b7ce504ed2ec" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 33247 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 1, Time since last communication: 0.000s
I20250811 20:46:33.206584 1583 raft_consensus.cc:1273] T 0ba11bfcad5e46558785822cdeeded52 P 35827d5d81ef4f07b865b5569fb2a4e2 [term 2 FOLLOWER]: Refusing update from remote peer 1a20b617ec6342938d2cf4493d7df529: Log matching property violated. Preceding OpId in replica: term: 1 index: 1. Preceding OpId from leader: term: 2 index: 2. (index mismatch)
I20250811 20:46:33.208709 1706 consensus_queue.cc:1035] T 0ba11bfcad5e46558785822cdeeded52 P 1a20b617ec6342938d2cf4493d7df529 [LEADER]: Connected to new peer: Peer: permanent_uuid: "35827d5d81ef4f07b865b5569fb2a4e2" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 44171 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 2, Last known committed idx: 1, Time since last communication: 0.000s
I20250811 20:46:35.253662 1317 tablet_service.cc:1968] Received LeaderStepDown RPC: tablet_id: "0ba11bfcad5e46558785822cdeeded52"
dest_uuid: "eb6f5673ab2643c49674b7ce504ed2ec"
mode: GRACEFUL
from {username='slave'} at 127.0.0.1:59672
I20250811 20:46:35.254261 1317 raft_consensus.cc:604] T 0ba11bfcad5e46558785822cdeeded52 P eb6f5673ab2643c49674b7ce504ed2ec [term 2 FOLLOWER]: Received request to transfer leadership
I20250811 20:46:35.254590 1317 raft_consensus.cc:612] T 0ba11bfcad5e46558785822cdeeded52 P eb6f5673ab2643c49674b7ce504ed2ec [term 2 FOLLOWER]: Rejecting request to transfer leadership while not leader
I20250811 20:46:36.288866 32747 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu with pid 1233
I20250811 20:46:36.313351 32747 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu with pid 1366
I20250811 20:46:36.337879 32747 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu with pid 1499
I20250811 20:46:36.360940 32747 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu with pid 1141
2025-08-11T20:46:36Z chronyd exiting
[ OK ] AdminCliTest.TestGracefulSpecificLeaderStepDown (14866 ms)
[ RUN ] AdminCliTest.TestDescribeTableColumnFlags
I20250811 20:46:36.419193 32747 test_util.cc:276] Using random seed: 84437084
I20250811 20:46:36.423233 32747 ts_itest-base.cc:115] Starting cluster with:
I20250811 20:46:36.423437 32747 ts_itest-base.cc:116] --------------
I20250811 20:46:36.423589 32747 ts_itest-base.cc:117] 3 tablet servers
I20250811 20:46:36.423730 32747 ts_itest-base.cc:118] 3 replicas per TS
I20250811 20:46:36.423867 32747 ts_itest-base.cc:119] --------------
2025-08-11T20:46:36Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-08-11T20:46:36Z Disabled control of system clock
I20250811 20:46:36.459158 32747 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
/tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.31.250.254:40187
--webserver_interface=127.31.250.254
--webserver_port=0
--builtin_ntp_servers=127.31.250.212:40681
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.31.250.254:40187 with env {}
W20250811 20:46:36.746691 1740 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:46:36.747228 1740 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 20:46:36.747709 1740 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:46:36.783524 1740 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250811 20:46:36.783804 1740 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 20:46:36.784034 1740 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250811 20:46:36.784238 1740 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250811 20:46:36.818863 1740 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.31.250.212:40681
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.31.250.254:40187
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.31.250.254:40187
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.31.250.254
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision 8e873eb37b157a0cf7cb97cc7690367de9707107
build type FASTDEBUG
built by None at 11 Aug 2025 20:41:23 UTC on 24a791456cd2
build id 7521
TSAN enabled
I20250811 20:46:36.820091 1740 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 20:46:36.821637 1740 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 20:46:36.832991 1746 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:46:36.833393 1747 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:46:38.041733 1749 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:46:38.044839 1748 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1207 milliseconds
W20250811 20:46:38.045171 1740 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.213s user 0.394s sys 0.813s
W20250811 20:46:38.045553 1740 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.213s user 0.394s sys 0.813s
I20250811 20:46:38.045817 1740 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 20:46:38.046876 1740 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 20:46:38.049503 1740 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 20:46:38.050853 1740 hybrid_clock.cc:648] HybridClock initialized: now 1754945198050827 us; error 41 us; skew 500 ppm
I20250811 20:46:38.051656 1740 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 20:46:38.058622 1740 webserver.cc:489] Webserver started at http://127.31.250.254:36979/ using document root <none> and password file <none>
I20250811 20:46:38.059687 1740 fs_manager.cc:362] Metadata directory not provided
I20250811 20:46:38.059937 1740 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 20:46:38.060391 1740 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 20:46:38.064800 1740 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "7b8f70537422476b8bd61a3722b38370"
format_stamp: "Formatted at 2025-08-11 20:46:38 on dist-test-slave-4gzk"
I20250811 20:46:38.065850 1740 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "7b8f70537422476b8bd61a3722b38370"
format_stamp: "Formatted at 2025-08-11 20:46:38 on dist-test-slave-4gzk"
I20250811 20:46:38.073444 1740 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.006s sys 0.002s
I20250811 20:46:38.079993 1756 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 20:46:38.081341 1740 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.002s sys 0.002s
I20250811 20:46:38.081689 1740 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
uuid: "7b8f70537422476b8bd61a3722b38370"
format_stamp: "Formatted at 2025-08-11 20:46:38 on dist-test-slave-4gzk"
I20250811 20:46:38.082029 1740 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 20:46:38.160195 1740 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 20:46:38.161609 1740 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 20:46:38.162040 1740 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 20:46:38.229368 1740 rpc_server.cc:307] RPC server started. Bound to: 127.31.250.254:40187
I20250811 20:46:38.229419 1807 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.31.250.254:40187 every 8 connection(s)
I20250811 20:46:38.231947 1740 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250811 20:46:38.237234 1808 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 20:46:38.238992 32747 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu as pid 1740
I20250811 20:46:38.239485 32747 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250811 20:46:38.257277 1808 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 7b8f70537422476b8bd61a3722b38370: Bootstrap starting.
I20250811 20:46:38.263406 1808 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 7b8f70537422476b8bd61a3722b38370: Neither blocks nor log segments found. Creating new log.
I20250811 20:46:38.265030 1808 log.cc:826] T 00000000000000000000000000000000 P 7b8f70537422476b8bd61a3722b38370: Log is configured to *not* fsync() on all Append() calls
I20250811 20:46:38.269273 1808 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 7b8f70537422476b8bd61a3722b38370: No bootstrap required, opened a new log
I20250811 20:46:38.285737 1808 raft_consensus.cc:357] T 00000000000000000000000000000000 P 7b8f70537422476b8bd61a3722b38370 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7b8f70537422476b8bd61a3722b38370" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 40187 } }
I20250811 20:46:38.286520 1808 raft_consensus.cc:383] T 00000000000000000000000000000000 P 7b8f70537422476b8bd61a3722b38370 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 20:46:38.286751 1808 raft_consensus.cc:738] T 00000000000000000000000000000000 P 7b8f70537422476b8bd61a3722b38370 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 7b8f70537422476b8bd61a3722b38370, State: Initialized, Role: FOLLOWER
I20250811 20:46:38.287586 1808 consensus_queue.cc:260] T 00000000000000000000000000000000 P 7b8f70537422476b8bd61a3722b38370 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7b8f70537422476b8bd61a3722b38370" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 40187 } }
I20250811 20:46:38.288069 1808 raft_consensus.cc:397] T 00000000000000000000000000000000 P 7b8f70537422476b8bd61a3722b38370 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250811 20:46:38.288321 1808 raft_consensus.cc:491] T 00000000000000000000000000000000 P 7b8f70537422476b8bd61a3722b38370 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250811 20:46:38.288597 1808 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 7b8f70537422476b8bd61a3722b38370 [term 0 FOLLOWER]: Advancing to term 1
I20250811 20:46:38.292325 1808 raft_consensus.cc:513] T 00000000000000000000000000000000 P 7b8f70537422476b8bd61a3722b38370 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7b8f70537422476b8bd61a3722b38370" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 40187 } }
I20250811 20:46:38.292961 1808 leader_election.cc:304] T 00000000000000000000000000000000 P 7b8f70537422476b8bd61a3722b38370 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 7b8f70537422476b8bd61a3722b38370; no voters:
I20250811 20:46:38.294633 1808 leader_election.cc:290] T 00000000000000000000000000000000 P 7b8f70537422476b8bd61a3722b38370 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250811 20:46:38.295389 1813 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 7b8f70537422476b8bd61a3722b38370 [term 1 FOLLOWER]: Leader election won for term 1
I20250811 20:46:38.297448 1813 raft_consensus.cc:695] T 00000000000000000000000000000000 P 7b8f70537422476b8bd61a3722b38370 [term 1 LEADER]: Becoming Leader. State: Replica: 7b8f70537422476b8bd61a3722b38370, State: Running, Role: LEADER
I20250811 20:46:38.298359 1813 consensus_queue.cc:237] T 00000000000000000000000000000000 P 7b8f70537422476b8bd61a3722b38370 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7b8f70537422476b8bd61a3722b38370" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 40187 } }
I20250811 20:46:38.300199 1808 sys_catalog.cc:564] T 00000000000000000000000000000000 P 7b8f70537422476b8bd61a3722b38370 [sys.catalog]: configured and running, proceeding with master startup.
I20250811 20:46:38.304908 1815 sys_catalog.cc:455] T 00000000000000000000000000000000 P 7b8f70537422476b8bd61a3722b38370 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "7b8f70537422476b8bd61a3722b38370" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7b8f70537422476b8bd61a3722b38370" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 40187 } } }
I20250811 20:46:38.305193 1814 sys_catalog.cc:455] T 00000000000000000000000000000000 P 7b8f70537422476b8bd61a3722b38370 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 7b8f70537422476b8bd61a3722b38370. Latest consensus state: current_term: 1 leader_uuid: "7b8f70537422476b8bd61a3722b38370" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "7b8f70537422476b8bd61a3722b38370" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 40187 } } }
I20250811 20:46:38.305610 1815 sys_catalog.cc:458] T 00000000000000000000000000000000 P 7b8f70537422476b8bd61a3722b38370 [sys.catalog]: This master's current role is: LEADER
I20250811 20:46:38.305899 1814 sys_catalog.cc:458] T 00000000000000000000000000000000 P 7b8f70537422476b8bd61a3722b38370 [sys.catalog]: This master's current role is: LEADER
I20250811 20:46:38.310405 1819 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250811 20:46:38.322016 1819 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250811 20:46:38.339608 1819 catalog_manager.cc:1349] Generated new cluster ID: a97103af68f74420b91f1590219fdf74
I20250811 20:46:38.339906 1819 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250811 20:46:38.365214 1819 catalog_manager.cc:1372] Generated new certificate authority record
I20250811 20:46:38.366748 1819 catalog_manager.cc:1506] Loading token signing keys...
I20250811 20:46:38.378470 1819 catalog_manager.cc:5955] T 00000000000000000000000000000000 P 7b8f70537422476b8bd61a3722b38370: Generated new TSK 0
I20250811 20:46:38.379307 1819 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250811 20:46:38.393750 32747 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
/tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.31.250.193:0
--local_ip_for_outbound_sockets=127.31.250.193
--webserver_interface=127.31.250.193
--webserver_port=0
--tserver_master_addrs=127.31.250.254:40187
--builtin_ntp_servers=127.31.250.212:40681
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
W20250811 20:46:38.685726 1832 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:46:38.686185 1832 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 20:46:38.686614 1832 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:46:38.715291 1832 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 20:46:38.716046 1832 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.31.250.193
I20250811 20:46:38.749573 1832 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.31.250.212:40681
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.31.250.193:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.31.250.193
--webserver_port=0
--tserver_master_addrs=127.31.250.254:40187
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.31.250.193
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8e873eb37b157a0cf7cb97cc7690367de9707107
build type FASTDEBUG
built by None at 11 Aug 2025 20:41:23 UTC on 24a791456cd2
build id 7521
TSAN enabled
I20250811 20:46:38.750826 1832 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 20:46:38.752688 1832 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 20:46:38.770788 1838 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:46:39.915796 1841 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:46:38.776858 1839 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 20:46:38.776180 1832 server_base.cc:1047] running on GCE node
I20250811 20:46:39.932864 1832 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 20:46:39.935611 1832 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 20:46:39.937131 1832 hybrid_clock.cc:648] HybridClock initialized: now 1754945199937082 us; error 48 us; skew 500 ppm
I20250811 20:46:39.938160 1832 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 20:46:39.945315 1832 webserver.cc:489] Webserver started at http://127.31.250.193:43701/ using document root <none> and password file <none>
I20250811 20:46:39.946493 1832 fs_manager.cc:362] Metadata directory not provided
I20250811 20:46:39.946764 1832 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 20:46:39.947340 1832 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 20:46:39.953801 1832 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "d2be35a7226b4159ac8b184b8a149539"
format_stamp: "Formatted at 2025-08-11 20:46:39 on dist-test-slave-4gzk"
I20250811 20:46:39.955345 1832 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "d2be35a7226b4159ac8b184b8a149539"
format_stamp: "Formatted at 2025-08-11 20:46:39 on dist-test-slave-4gzk"
I20250811 20:46:39.964660 1832 fs_manager.cc:696] Time spent creating directory manager: real 0.009s user 0.008s sys 0.000s
I20250811 20:46:39.972250 1848 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 20:46:39.973508 1832 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.001s sys 0.003s
I20250811 20:46:39.973889 1832 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "d2be35a7226b4159ac8b184b8a149539"
format_stamp: "Formatted at 2025-08-11 20:46:39 on dist-test-slave-4gzk"
I20250811 20:46:39.974313 1832 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 20:46:40.045929 1832 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 20:46:40.047741 1832 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 20:46:40.048240 1832 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 20:46:40.051312 1832 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 20:46:40.056726 1832 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 20:46:40.056985 1832 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250811 20:46:40.057268 1832 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 20:46:40.057478 1832 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250811 20:46:40.190670 1832 rpc_server.cc:307] RPC server started. Bound to: 127.31.250.193:39519
I20250811 20:46:40.190764 1960 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.31.250.193:39519 every 8 connection(s)
I20250811 20:46:40.193198 1832 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250811 20:46:40.200101 32747 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu as pid 1832
I20250811 20:46:40.200511 32747 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250811 20:46:40.206260 32747 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
/tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.31.250.194:0
--local_ip_for_outbound_sockets=127.31.250.194
--webserver_interface=127.31.250.194
--webserver_port=0
--tserver_master_addrs=127.31.250.254:40187
--builtin_ntp_servers=127.31.250.212:40681
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250811 20:46:40.214560 1961 heartbeater.cc:344] Connected to a master server at 127.31.250.254:40187
I20250811 20:46:40.214974 1961 heartbeater.cc:461] Registering TS with master...
I20250811 20:46:40.216001 1961 heartbeater.cc:507] Master 127.31.250.254:40187 requested a full tablet report, sending...
I20250811 20:46:40.218626 1773 ts_manager.cc:194] Registered new tserver with Master: d2be35a7226b4159ac8b184b8a149539 (127.31.250.193:39519)
I20250811 20:46:40.220579 1773 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.31.250.193:56451
W20250811 20:46:40.509373 1965 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:46:40.509853 1965 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 20:46:40.510300 1965 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:46:40.540381 1965 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 20:46:40.541239 1965 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.31.250.194
I20250811 20:46:40.574787 1965 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.31.250.212:40681
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.31.250.194:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.31.250.194
--webserver_port=0
--tserver_master_addrs=127.31.250.254:40187
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.31.250.194
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8e873eb37b157a0cf7cb97cc7690367de9707107
build type FASTDEBUG
built by None at 11 Aug 2025 20:41:23 UTC on 24a791456cd2
build id 7521
TSAN enabled
I20250811 20:46:40.576098 1965 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 20:46:40.577670 1965 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 20:46:40.589300 1971 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 20:46:41.224856 1961 heartbeater.cc:499] Master 127.31.250.254:40187 was elected leader, sending a full tablet report...
W20250811 20:46:40.589882 1972 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:46:41.788168 1974 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:46:41.791487 1973 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1200 milliseconds
W20250811 20:46:41.792404 1965 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.203s user 0.406s sys 0.792s
W20250811 20:46:41.792670 1965 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.203s user 0.406s sys 0.792s
I20250811 20:46:41.792876 1965 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 20:46:41.793870 1965 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 20:46:41.797446 1965 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 20:46:41.798919 1965 hybrid_clock.cc:648] HybridClock initialized: now 1754945201798874 us; error 59 us; skew 500 ppm
I20250811 20:46:41.799798 1965 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 20:46:41.806915 1965 webserver.cc:489] Webserver started at http://127.31.250.194:34815/ using document root <none> and password file <none>
I20250811 20:46:41.807839 1965 fs_manager.cc:362] Metadata directory not provided
I20250811 20:46:41.808044 1965 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 20:46:41.808491 1965 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 20:46:41.812717 1965 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data/instance:
uuid: "1178ea6bb48b48309f142bf2480dba91"
format_stamp: "Formatted at 2025-08-11 20:46:41 on dist-test-slave-4gzk"
I20250811 20:46:41.813781 1965 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal/instance:
uuid: "1178ea6bb48b48309f142bf2480dba91"
format_stamp: "Formatted at 2025-08-11 20:46:41 on dist-test-slave-4gzk"
I20250811 20:46:41.821255 1965 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.004s sys 0.004s
I20250811 20:46:41.827056 1982 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 20:46:41.828158 1965 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.002s sys 0.001s
I20250811 20:46:41.828467 1965 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "1178ea6bb48b48309f142bf2480dba91"
format_stamp: "Formatted at 2025-08-11 20:46:41 on dist-test-slave-4gzk"
I20250811 20:46:41.828770 1965 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 20:46:41.923581 1965 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 20:46:41.925520 1965 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 20:46:41.926012 1965 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 20:46:41.928776 1965 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 20:46:41.935117 1965 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 20:46:41.935483 1965 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.001s sys 0.000s
I20250811 20:46:41.935827 1965 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 20:46:41.936090 1965 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250811 20:46:42.069885 1965 rpc_server.cc:307] RPC server started. Bound to: 127.31.250.194:36519
I20250811 20:46:42.069986 2094 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.31.250.194:36519 every 8 connection(s)
I20250811 20:46:42.072333 1965 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250811 20:46:42.082718 32747 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu as pid 1965
I20250811 20:46:42.083472 32747 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal/instance
I20250811 20:46:42.090737 32747 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
/tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.31.250.195:0
--local_ip_for_outbound_sockets=127.31.250.195
--webserver_interface=127.31.250.195
--webserver_port=0
--tserver_master_addrs=127.31.250.254:40187
--builtin_ntp_servers=127.31.250.212:40681
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250811 20:46:42.094476 2095 heartbeater.cc:344] Connected to a master server at 127.31.250.254:40187
I20250811 20:46:42.095029 2095 heartbeater.cc:461] Registering TS with master...
I20250811 20:46:42.096385 2095 heartbeater.cc:507] Master 127.31.250.254:40187 requested a full tablet report, sending...
I20250811 20:46:42.098732 1773 ts_manager.cc:194] Registered new tserver with Master: 1178ea6bb48b48309f142bf2480dba91 (127.31.250.194:36519)
I20250811 20:46:42.099959 1773 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.31.250.194:35915
W20250811 20:46:42.379666 2099 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:46:42.380189 2099 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 20:46:42.380657 2099 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:46:42.411393 2099 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 20:46:42.412245 2099 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.31.250.195
I20250811 20:46:42.446633 2099 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.31.250.212:40681
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.31.250.195:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.31.250.195
--webserver_port=0
--tserver_master_addrs=127.31.250.254:40187
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.31.250.195
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8e873eb37b157a0cf7cb97cc7690367de9707107
build type FASTDEBUG
built by None at 11 Aug 2025 20:41:23 UTC on 24a791456cd2
build id 7521
TSAN enabled
I20250811 20:46:42.447923 2099 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 20:46:42.449486 2099 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 20:46:42.461639 2105 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 20:46:43.102753 2095 heartbeater.cc:499] Master 127.31.250.254:40187 was elected leader, sending a full tablet report...
W20250811 20:46:42.462002 2106 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:46:43.655769 2108 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:46:43.658432 2107 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1190 milliseconds
W20250811 20:46:43.659111 2099 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.197s user 0.355s sys 0.837s
W20250811 20:46:43.659415 2099 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.198s user 0.357s sys 0.837s
I20250811 20:46:43.659621 2099 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 20:46:43.660590 2099 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 20:46:43.663802 2099 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 20:46:43.665136 2099 hybrid_clock.cc:648] HybridClock initialized: now 1754945203665094 us; error 49 us; skew 500 ppm
I20250811 20:46:43.665880 2099 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 20:46:43.672757 2099 webserver.cc:489] Webserver started at http://127.31.250.195:44857/ using document root <none> and password file <none>
I20250811 20:46:43.673730 2099 fs_manager.cc:362] Metadata directory not provided
I20250811 20:46:43.673939 2099 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 20:46:43.674373 2099 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 20:46:43.678565 2099 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data/instance:
uuid: "1eb47116192244209d0ea55c88ef0910"
format_stamp: "Formatted at 2025-08-11 20:46:43 on dist-test-slave-4gzk"
I20250811 20:46:43.679581 2099 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal/instance:
uuid: "1eb47116192244209d0ea55c88ef0910"
format_stamp: "Formatted at 2025-08-11 20:46:43 on dist-test-slave-4gzk"
I20250811 20:46:43.686429 2099 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.005s sys 0.001s
I20250811 20:46:43.691884 2115 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 20:46:43.693003 2099 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.002s sys 0.002s
I20250811 20:46:43.693311 2099 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "1eb47116192244209d0ea55c88ef0910"
format_stamp: "Formatted at 2025-08-11 20:46:43 on dist-test-slave-4gzk"
I20250811 20:46:43.693629 2099 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 20:46:43.762439 2099 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 20:46:43.764010 2099 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 20:46:43.764436 2099 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 20:46:43.766847 2099 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 20:46:43.770823 2099 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 20:46:43.771040 2099 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250811 20:46:43.771315 2099 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 20:46:43.771476 2099 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250811 20:46:43.902596 2099 rpc_server.cc:307] RPC server started. Bound to: 127.31.250.195:38359
I20250811 20:46:43.902694 2227 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.31.250.195:38359 every 8 connection(s)
I20250811 20:46:43.905139 2099 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250811 20:46:43.912174 32747 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu as pid 2099
I20250811 20:46:43.912568 32747 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestDescribeTableColumnFlags.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal/instance
I20250811 20:46:43.925493 2228 heartbeater.cc:344] Connected to a master server at 127.31.250.254:40187
I20250811 20:46:43.925899 2228 heartbeater.cc:461] Registering TS with master...
I20250811 20:46:43.926820 2228 heartbeater.cc:507] Master 127.31.250.254:40187 requested a full tablet report, sending...
I20250811 20:46:43.928750 1773 ts_manager.cc:194] Registered new tserver with Master: 1eb47116192244209d0ea55c88ef0910 (127.31.250.195:38359)
I20250811 20:46:43.929916 1773 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.31.250.195:56551
I20250811 20:46:43.932094 32747 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250811 20:46:43.964747 1773 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:46728:
name: "TestTable"
schema {
  columns {
    name: "key"
    type: INT32
    is_key: true
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "int_val"
    type: INT32
    is_key: false
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "string_val"
    type: STRING
    is_key: false
    is_nullable: true
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
  range_schema {
    columns {
      name: "key"
    }
  }
}
owner: "alice"
W20250811 20:46:43.983737 1773 catalog_manager.cc:6944] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table TestTable in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250811 20:46:44.040544 1896 tablet_service.cc:1468] Processing CreateTablet for tablet 566c7803560d44448f8af15cdc40404f (DEFAULT_TABLE table=TestTable [id=6b43f136a4f2470c8599078a5868ec9a]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 20:46:44.041783 2030 tablet_service.cc:1468] Processing CreateTablet for tablet 566c7803560d44448f8af15cdc40404f (DEFAULT_TABLE table=TestTable [id=6b43f136a4f2470c8599078a5868ec9a]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 20:46:44.042768 1896 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 566c7803560d44448f8af15cdc40404f. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 20:46:44.042407 2163 tablet_service.cc:1468] Processing CreateTablet for tablet 566c7803560d44448f8af15cdc40404f (DEFAULT_TABLE table=TestTable [id=6b43f136a4f2470c8599078a5868ec9a]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 20:46:44.043972 2030 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 566c7803560d44448f8af15cdc40404f. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 20:46:44.044379 2163 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 566c7803560d44448f8af15cdc40404f. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 20:46:44.063148 2247 tablet_bootstrap.cc:492] T 566c7803560d44448f8af15cdc40404f P d2be35a7226b4159ac8b184b8a149539: Bootstrap starting.
I20250811 20:46:44.069319 2247 tablet_bootstrap.cc:654] T 566c7803560d44448f8af15cdc40404f P d2be35a7226b4159ac8b184b8a149539: Neither blocks nor log segments found. Creating new log.
I20250811 20:46:44.074724 2247 log.cc:826] T 566c7803560d44448f8af15cdc40404f P d2be35a7226b4159ac8b184b8a149539: Log is configured to *not* fsync() on all Append() calls
I20250811 20:46:44.078073 2249 tablet_bootstrap.cc:492] T 566c7803560d44448f8af15cdc40404f P 1eb47116192244209d0ea55c88ef0910: Bootstrap starting.
I20250811 20:46:44.078965 2248 tablet_bootstrap.cc:492] T 566c7803560d44448f8af15cdc40404f P 1178ea6bb48b48309f142bf2480dba91: Bootstrap starting.
I20250811 20:46:44.087579 2249 tablet_bootstrap.cc:654] T 566c7803560d44448f8af15cdc40404f P 1eb47116192244209d0ea55c88ef0910: Neither blocks nor log segments found. Creating new log.
I20250811 20:46:44.089421 2248 tablet_bootstrap.cc:654] T 566c7803560d44448f8af15cdc40404f P 1178ea6bb48b48309f142bf2480dba91: Neither blocks nor log segments found. Creating new log.
I20250811 20:46:44.090230 2249 log.cc:826] T 566c7803560d44448f8af15cdc40404f P 1eb47116192244209d0ea55c88ef0910: Log is configured to *not* fsync() on all Append() calls
I20250811 20:46:44.092195 2248 log.cc:826] T 566c7803560d44448f8af15cdc40404f P 1178ea6bb48b48309f142bf2480dba91: Log is configured to *not* fsync() on all Append() calls
I20250811 20:46:44.099200 2247 tablet_bootstrap.cc:492] T 566c7803560d44448f8af15cdc40404f P d2be35a7226b4159ac8b184b8a149539: No bootstrap required, opened a new log
I20250811 20:46:44.099817 2247 ts_tablet_manager.cc:1397] T 566c7803560d44448f8af15cdc40404f P d2be35a7226b4159ac8b184b8a149539: Time spent bootstrapping tablet: real 0.037s user 0.024s sys 0.004s
I20250811 20:46:44.102229 2248 tablet_bootstrap.cc:492] T 566c7803560d44448f8af15cdc40404f P 1178ea6bb48b48309f142bf2480dba91: No bootstrap required, opened a new log
I20250811 20:46:44.102909 2248 ts_tablet_manager.cc:1397] T 566c7803560d44448f8af15cdc40404f P 1178ea6bb48b48309f142bf2480dba91: Time spent bootstrapping tablet: real 0.025s user 0.008s sys 0.012s
I20250811 20:46:44.103194 2249 tablet_bootstrap.cc:492] T 566c7803560d44448f8af15cdc40404f P 1eb47116192244209d0ea55c88ef0910: No bootstrap required, opened a new log
I20250811 20:46:44.103760 2249 ts_tablet_manager.cc:1397] T 566c7803560d44448f8af15cdc40404f P 1eb47116192244209d0ea55c88ef0910: Time spent bootstrapping tablet: real 0.026s user 0.007s sys 0.016s
I20250811 20:46:44.128213 2247 raft_consensus.cc:357] T 566c7803560d44448f8af15cdc40404f P d2be35a7226b4159ac8b184b8a149539 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d2be35a7226b4159ac8b184b8a149539" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 39519 } } peers { permanent_uuid: "1178ea6bb48b48309f142bf2480dba91" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 36519 } } peers { permanent_uuid: "1eb47116192244209d0ea55c88ef0910" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 38359 } }
I20250811 20:46:44.129062 2247 raft_consensus.cc:383] T 566c7803560d44448f8af15cdc40404f P d2be35a7226b4159ac8b184b8a149539 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 20:46:44.129411 2247 raft_consensus.cc:738] T 566c7803560d44448f8af15cdc40404f P d2be35a7226b4159ac8b184b8a149539 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: d2be35a7226b4159ac8b184b8a149539, State: Initialized, Role: FOLLOWER
I20250811 20:46:44.130339 2247 consensus_queue.cc:260] T 566c7803560d44448f8af15cdc40404f P d2be35a7226b4159ac8b184b8a149539 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d2be35a7226b4159ac8b184b8a149539" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 39519 } } peers { permanent_uuid: "1178ea6bb48b48309f142bf2480dba91" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 36519 } } peers { permanent_uuid: "1eb47116192244209d0ea55c88ef0910" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 38359 } }
I20250811 20:46:44.130430 2248 raft_consensus.cc:357] T 566c7803560d44448f8af15cdc40404f P 1178ea6bb48b48309f142bf2480dba91 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d2be35a7226b4159ac8b184b8a149539" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 39519 } } peers { permanent_uuid: "1178ea6bb48b48309f142bf2480dba91" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 36519 } } peers { permanent_uuid: "1eb47116192244209d0ea55c88ef0910" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 38359 } }
I20250811 20:46:44.130331 2249 raft_consensus.cc:357] T 566c7803560d44448f8af15cdc40404f P 1eb47116192244209d0ea55c88ef0910 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d2be35a7226b4159ac8b184b8a149539" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 39519 } } peers { permanent_uuid: "1178ea6bb48b48309f142bf2480dba91" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 36519 } } peers { permanent_uuid: "1eb47116192244209d0ea55c88ef0910" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 38359 } }
I20250811 20:46:44.131237 2249 raft_consensus.cc:383] T 566c7803560d44448f8af15cdc40404f P 1eb47116192244209d0ea55c88ef0910 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 20:46:44.131321 2248 raft_consensus.cc:383] T 566c7803560d44448f8af15cdc40404f P 1178ea6bb48b48309f142bf2480dba91 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 20:46:44.131560 2249 raft_consensus.cc:738] T 566c7803560d44448f8af15cdc40404f P 1eb47116192244209d0ea55c88ef0910 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 1eb47116192244209d0ea55c88ef0910, State: Initialized, Role: FOLLOWER
I20250811 20:46:44.131584 2248 raft_consensus.cc:738] T 566c7803560d44448f8af15cdc40404f P 1178ea6bb48b48309f142bf2480dba91 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 1178ea6bb48b48309f142bf2480dba91, State: Initialized, Role: FOLLOWER
I20250811 20:46:44.132557 2249 consensus_queue.cc:260] T 566c7803560d44448f8af15cdc40404f P 1eb47116192244209d0ea55c88ef0910 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d2be35a7226b4159ac8b184b8a149539" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 39519 } } peers { permanent_uuid: "1178ea6bb48b48309f142bf2480dba91" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 36519 } } peers { permanent_uuid: "1eb47116192244209d0ea55c88ef0910" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 38359 } }
I20250811 20:46:44.132558 2248 consensus_queue.cc:260] T 566c7803560d44448f8af15cdc40404f P 1178ea6bb48b48309f142bf2480dba91 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d2be35a7226b4159ac8b184b8a149539" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 39519 } } peers { permanent_uuid: "1178ea6bb48b48309f142bf2480dba91" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 36519 } } peers { permanent_uuid: "1eb47116192244209d0ea55c88ef0910" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 38359 } }
I20250811 20:46:44.135203 2247 ts_tablet_manager.cc:1428] T 566c7803560d44448f8af15cdc40404f P d2be35a7226b4159ac8b184b8a149539: Time spent starting tablet: real 0.035s user 0.027s sys 0.004s
I20250811 20:46:44.138072 2248 ts_tablet_manager.cc:1428] T 566c7803560d44448f8af15cdc40404f P 1178ea6bb48b48309f142bf2480dba91: Time spent starting tablet: real 0.035s user 0.029s sys 0.003s
I20250811 20:46:44.140062 2228 heartbeater.cc:499] Master 127.31.250.254:40187 was elected leader, sending a full tablet report...
I20250811 20:46:44.141351 2249 ts_tablet_manager.cc:1428] T 566c7803560d44448f8af15cdc40404f P 1eb47116192244209d0ea55c88ef0910: Time spent starting tablet: real 0.037s user 0.023s sys 0.012s
W20250811 20:46:44.159941 2229 tablet.cc:2378] T 566c7803560d44448f8af15cdc40404f P 1eb47116192244209d0ea55c88ef0910: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250811 20:46:44.185909 2254 raft_consensus.cc:491] T 566c7803560d44448f8af15cdc40404f P 1178ea6bb48b48309f142bf2480dba91 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250811 20:46:44.186385 2254 raft_consensus.cc:513] T 566c7803560d44448f8af15cdc40404f P 1178ea6bb48b48309f142bf2480dba91 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d2be35a7226b4159ac8b184b8a149539" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 39519 } } peers { permanent_uuid: "1178ea6bb48b48309f142bf2480dba91" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 36519 } } peers { permanent_uuid: "1eb47116192244209d0ea55c88ef0910" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 38359 } }
I20250811 20:46:44.190546 2254 leader_election.cc:290] T 566c7803560d44448f8af15cdc40404f P 1178ea6bb48b48309f142bf2480dba91 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers d2be35a7226b4159ac8b184b8a149539 (127.31.250.193:39519), 1eb47116192244209d0ea55c88ef0910 (127.31.250.195:38359)
I20250811 20:46:44.198139 2255 raft_consensus.cc:491] T 566c7803560d44448f8af15cdc40404f P 1eb47116192244209d0ea55c88ef0910 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250811 20:46:44.198798 2255 raft_consensus.cc:513] T 566c7803560d44448f8af15cdc40404f P 1eb47116192244209d0ea55c88ef0910 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d2be35a7226b4159ac8b184b8a149539" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 39519 } } peers { permanent_uuid: "1178ea6bb48b48309f142bf2480dba91" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 36519 } } peers { permanent_uuid: "1eb47116192244209d0ea55c88ef0910" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 38359 } }
W20250811 20:46:44.203991 1962 tablet.cc:2378] T 566c7803560d44448f8af15cdc40404f P d2be35a7226b4159ac8b184b8a149539: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250811 20:46:44.208760 2255 leader_election.cc:290] T 566c7803560d44448f8af15cdc40404f P 1eb47116192244209d0ea55c88ef0910 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers d2be35a7226b4159ac8b184b8a149539 (127.31.250.193:39519), 1178ea6bb48b48309f142bf2480dba91 (127.31.250.194:36519)
I20250811 20:46:44.213732 1916 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "566c7803560d44448f8af15cdc40404f" candidate_uuid: "1178ea6bb48b48309f142bf2480dba91" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "d2be35a7226b4159ac8b184b8a149539" is_pre_election: true
I20250811 20:46:44.214635 1916 raft_consensus.cc:2466] T 566c7803560d44448f8af15cdc40404f P d2be35a7226b4159ac8b184b8a149539 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 1178ea6bb48b48309f142bf2480dba91 in term 0.
I20250811 20:46:44.216120 2183 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "566c7803560d44448f8af15cdc40404f" candidate_uuid: "1178ea6bb48b48309f142bf2480dba91" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "1eb47116192244209d0ea55c88ef0910" is_pre_election: true
I20250811 20:46:44.216356 1986 leader_election.cc:304] T 566c7803560d44448f8af15cdc40404f P 1178ea6bb48b48309f142bf2480dba91 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 1178ea6bb48b48309f142bf2480dba91, d2be35a7226b4159ac8b184b8a149539; no voters:
I20250811 20:46:44.216871 2183 raft_consensus.cc:2466] T 566c7803560d44448f8af15cdc40404f P 1eb47116192244209d0ea55c88ef0910 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 1178ea6bb48b48309f142bf2480dba91 in term 0.
I20250811 20:46:44.217288 2254 raft_consensus.cc:2802] T 566c7803560d44448f8af15cdc40404f P 1178ea6bb48b48309f142bf2480dba91 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250811 20:46:44.217607 2254 raft_consensus.cc:491] T 566c7803560d44448f8af15cdc40404f P 1178ea6bb48b48309f142bf2480dba91 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250811 20:46:44.217906 2254 raft_consensus.cc:3058] T 566c7803560d44448f8af15cdc40404f P 1178ea6bb48b48309f142bf2480dba91 [term 0 FOLLOWER]: Advancing to term 1
I20250811 20:46:44.221378 1916 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "566c7803560d44448f8af15cdc40404f" candidate_uuid: "1eb47116192244209d0ea55c88ef0910" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "d2be35a7226b4159ac8b184b8a149539" is_pre_election: true
I20250811 20:46:44.221879 1916 raft_consensus.cc:2466] T 566c7803560d44448f8af15cdc40404f P d2be35a7226b4159ac8b184b8a149539 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 1eb47116192244209d0ea55c88ef0910 in term 0.
I20250811 20:46:44.221911 2050 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "566c7803560d44448f8af15cdc40404f" candidate_uuid: "1eb47116192244209d0ea55c88ef0910" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "1178ea6bb48b48309f142bf2480dba91" is_pre_election: true
I20250811 20:46:44.223001 2119 leader_election.cc:304] T 566c7803560d44448f8af15cdc40404f P 1eb47116192244209d0ea55c88ef0910 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 1eb47116192244209d0ea55c88ef0910, d2be35a7226b4159ac8b184b8a149539; no voters:
I20250811 20:46:44.223850 2255 raft_consensus.cc:2802] T 566c7803560d44448f8af15cdc40404f P 1eb47116192244209d0ea55c88ef0910 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250811 20:46:44.224197 2255 raft_consensus.cc:491] T 566c7803560d44448f8af15cdc40404f P 1eb47116192244209d0ea55c88ef0910 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250811 20:46:44.224526 2255 raft_consensus.cc:3058] T 566c7803560d44448f8af15cdc40404f P 1eb47116192244209d0ea55c88ef0910 [term 0 FOLLOWER]: Advancing to term 1
I20250811 20:46:44.224885 2254 raft_consensus.cc:513] T 566c7803560d44448f8af15cdc40404f P 1178ea6bb48b48309f142bf2480dba91 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d2be35a7226b4159ac8b184b8a149539" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 39519 } } peers { permanent_uuid: "1178ea6bb48b48309f142bf2480dba91" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 36519 } } peers { permanent_uuid: "1eb47116192244209d0ea55c88ef0910" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 38359 } }
I20250811 20:46:44.226116 2050 raft_consensus.cc:2391] T 566c7803560d44448f8af15cdc40404f P 1178ea6bb48b48309f142bf2480dba91 [term 1 FOLLOWER]: Leader pre-election vote request: Denying vote to candidate 1eb47116192244209d0ea55c88ef0910 in current term 1: Already voted for candidate 1178ea6bb48b48309f142bf2480dba91 in this term.
I20250811 20:46:44.227478 1916 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "566c7803560d44448f8af15cdc40404f" candidate_uuid: "1178ea6bb48b48309f142bf2480dba91" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "d2be35a7226b4159ac8b184b8a149539"
I20250811 20:46:44.228016 1916 raft_consensus.cc:3058] T 566c7803560d44448f8af15cdc40404f P d2be35a7226b4159ac8b184b8a149539 [term 0 FOLLOWER]: Advancing to term 1
I20250811 20:46:44.231117 2254 leader_election.cc:290] T 566c7803560d44448f8af15cdc40404f P 1178ea6bb48b48309f142bf2480dba91 [CANDIDATE]: Term 1 election: Requested vote from peers d2be35a7226b4159ac8b184b8a149539 (127.31.250.193:39519), 1eb47116192244209d0ea55c88ef0910 (127.31.250.195:38359)
I20250811 20:46:44.232195 2183 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "566c7803560d44448f8af15cdc40404f" candidate_uuid: "1178ea6bb48b48309f142bf2480dba91" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "1eb47116192244209d0ea55c88ef0910"
I20250811 20:46:44.235550 1916 raft_consensus.cc:2466] T 566c7803560d44448f8af15cdc40404f P d2be35a7226b4159ac8b184b8a149539 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 1178ea6bb48b48309f142bf2480dba91 in term 1.
I20250811 20:46:44.236393 1986 leader_election.cc:304] T 566c7803560d44448f8af15cdc40404f P 1178ea6bb48b48309f142bf2480dba91 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 1178ea6bb48b48309f142bf2480dba91, d2be35a7226b4159ac8b184b8a149539; no voters:
I20250811 20:46:44.237124 2254 raft_consensus.cc:2802] T 566c7803560d44448f8af15cdc40404f P 1178ea6bb48b48309f142bf2480dba91 [term 1 FOLLOWER]: Leader election won for term 1
I20250811 20:46:44.237666 2255 raft_consensus.cc:513] T 566c7803560d44448f8af15cdc40404f P 1eb47116192244209d0ea55c88ef0910 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d2be35a7226b4159ac8b184b8a149539" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 39519 } } peers { permanent_uuid: "1178ea6bb48b48309f142bf2480dba91" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 36519 } } peers { permanent_uuid: "1eb47116192244209d0ea55c88ef0910" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 38359 } }
I20250811 20:46:44.238965 2183 raft_consensus.cc:2391] T 566c7803560d44448f8af15cdc40404f P 1eb47116192244209d0ea55c88ef0910 [term 1 FOLLOWER]: Leader election vote request: Denying vote to candidate 1178ea6bb48b48309f142bf2480dba91 in current term 1: Already voted for candidate 1eb47116192244209d0ea55c88ef0910 in this term.
I20250811 20:46:44.240265 1916 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "566c7803560d44448f8af15cdc40404f" candidate_uuid: "1eb47116192244209d0ea55c88ef0910" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "d2be35a7226b4159ac8b184b8a149539"
I20250811 20:46:44.241015 1916 raft_consensus.cc:2391] T 566c7803560d44448f8af15cdc40404f P d2be35a7226b4159ac8b184b8a149539 [term 1 FOLLOWER]: Leader election vote request: Denying vote to candidate 1eb47116192244209d0ea55c88ef0910 in current term 1: Already voted for candidate 1178ea6bb48b48309f142bf2480dba91 in this term.
I20250811 20:46:44.240928 2255 leader_election.cc:290] T 566c7803560d44448f8af15cdc40404f P 1eb47116192244209d0ea55c88ef0910 [CANDIDATE]: Term 1 election: Requested vote from peers d2be35a7226b4159ac8b184b8a149539 (127.31.250.193:39519), 1178ea6bb48b48309f142bf2480dba91 (127.31.250.194:36519)
I20250811 20:46:44.242218 2050 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "566c7803560d44448f8af15cdc40404f" candidate_uuid: "1eb47116192244209d0ea55c88ef0910" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "1178ea6bb48b48309f142bf2480dba91"
I20250811 20:46:44.243556 2254 raft_consensus.cc:695] T 566c7803560d44448f8af15cdc40404f P 1178ea6bb48b48309f142bf2480dba91 [term 1 LEADER]: Becoming Leader. State: Replica: 1178ea6bb48b48309f142bf2480dba91, State: Running, Role: LEADER
I20250811 20:46:44.244781 2254 consensus_queue.cc:237] T 566c7803560d44448f8af15cdc40404f P 1178ea6bb48b48309f142bf2480dba91 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d2be35a7226b4159ac8b184b8a149539" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 39519 } } peers { permanent_uuid: "1178ea6bb48b48309f142bf2480dba91" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 36519 } } peers { permanent_uuid: "1eb47116192244209d0ea55c88ef0910" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 38359 } }
I20250811 20:46:44.251327 2116 leader_election.cc:304] T 566c7803560d44448f8af15cdc40404f P 1eb47116192244209d0ea55c88ef0910 [CANDIDATE]: Term 1 election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: 1eb47116192244209d0ea55c88ef0910; no voters: 1178ea6bb48b48309f142bf2480dba91, d2be35a7226b4159ac8b184b8a149539
I20250811 20:46:44.252069 2255 raft_consensus.cc:2747] T 566c7803560d44448f8af15cdc40404f P 1eb47116192244209d0ea55c88ef0910 [term 1 FOLLOWER]: Leader election lost for term 1. Reason: could not achieve majority
I20250811 20:46:44.255003 1771 catalog_manager.cc:5582] T 566c7803560d44448f8af15cdc40404f P 1178ea6bb48b48309f142bf2480dba91 reported cstate change: term changed from 0 to 1, leader changed from <none> to 1178ea6bb48b48309f142bf2480dba91 (127.31.250.194). New cstate: current_term: 1 leader_uuid: "1178ea6bb48b48309f142bf2480dba91" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d2be35a7226b4159ac8b184b8a149539" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 39519 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "1178ea6bb48b48309f142bf2480dba91" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 36519 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "1eb47116192244209d0ea55c88ef0910" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 38359 } health_report { overall_health: UNKNOWN } } }
I20250811 20:46:44.276470 32747 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250811 20:46:44.279814 32747 ts_itest-base.cc:246] Waiting for 1 tablets on tserver d2be35a7226b4159ac8b184b8a149539 to finish bootstrapping
I20250811 20:46:44.293151 32747 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 1178ea6bb48b48309f142bf2480dba91 to finish bootstrapping
I20250811 20:46:44.303741 32747 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 1eb47116192244209d0ea55c88ef0910 to finish bootstrapping
I20250811 20:46:44.316150 1771 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:46728:
name: "TestAnotherTable"
schema {
  columns {
    name: "foo"
    type: INT32
    is_key: true
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    immutable: false
  }
  columns {
    name: "bar"
    type: INT32
    is_key: false
    is_nullable: false
    encoding: AUTO_ENCODING
    compression: DEFAULT_COMPRESSION
    cfile_block_size: 0
    comment: "comment for bar"
    immutable: false
  }
}
split_rows_range_bounds {
}
partition_schema {
  range_schema {
    columns {
      name: "foo"
    }
  }
}
W20250811 20:46:44.317606 1771 catalog_manager.cc:6944] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table TestAnotherTable in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
W20250811 20:46:44.328919 2096 tablet.cc:2378] T 566c7803560d44448f8af15cdc40404f P 1178ea6bb48b48309f142bf2480dba91: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250811 20:46:44.334941 1896 tablet_service.cc:1468] Processing CreateTablet for tablet 2154d30305614dbba20ef8f31c58e03c (DEFAULT_TABLE table=TestAnotherTable [id=d2c1470e745b4f089f831bb1b07c3ec1]), partition=RANGE (foo) PARTITION UNBOUNDED
I20250811 20:46:44.335453 2030 tablet_service.cc:1468] Processing CreateTablet for tablet 2154d30305614dbba20ef8f31c58e03c (DEFAULT_TABLE table=TestAnotherTable [id=d2c1470e745b4f089f831bb1b07c3ec1]), partition=RANGE (foo) PARTITION UNBOUNDED
I20250811 20:46:44.336031 1896 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 2154d30305614dbba20ef8f31c58e03c. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 20:46:44.336484 2030 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 2154d30305614dbba20ef8f31c58e03c. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 20:46:44.336345 2163 tablet_service.cc:1468] Processing CreateTablet for tablet 2154d30305614dbba20ef8f31c58e03c (DEFAULT_TABLE table=TestAnotherTable [id=d2c1470e745b4f089f831bb1b07c3ec1]), partition=RANGE (foo) PARTITION UNBOUNDED
I20250811 20:46:44.337111 2163 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 2154d30305614dbba20ef8f31c58e03c. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 20:46:44.349462 2247 tablet_bootstrap.cc:492] T 2154d30305614dbba20ef8f31c58e03c P d2be35a7226b4159ac8b184b8a149539: Bootstrap starting.
I20250811 20:46:44.355240 2247 tablet_bootstrap.cc:654] T 2154d30305614dbba20ef8f31c58e03c P d2be35a7226b4159ac8b184b8a149539: Neither blocks nor log segments found. Creating new log.
I20250811 20:46:44.356410 2249 tablet_bootstrap.cc:492] T 2154d30305614dbba20ef8f31c58e03c P 1eb47116192244209d0ea55c88ef0910: Bootstrap starting.
I20250811 20:46:44.357074 2248 tablet_bootstrap.cc:492] T 2154d30305614dbba20ef8f31c58e03c P 1178ea6bb48b48309f142bf2480dba91: Bootstrap starting.
I20250811 20:46:44.362354 2247 tablet_bootstrap.cc:492] T 2154d30305614dbba20ef8f31c58e03c P d2be35a7226b4159ac8b184b8a149539: No bootstrap required, opened a new log
I20250811 20:46:44.362463 2249 tablet_bootstrap.cc:654] T 2154d30305614dbba20ef8f31c58e03c P 1eb47116192244209d0ea55c88ef0910: Neither blocks nor log segments found. Creating new log.
I20250811 20:46:44.362653 2247 ts_tablet_manager.cc:1397] T 2154d30305614dbba20ef8f31c58e03c P d2be35a7226b4159ac8b184b8a149539: Time spent bootstrapping tablet: real 0.013s user 0.011s sys 0.000s
I20250811 20:46:44.363107 2248 tablet_bootstrap.cc:654] T 2154d30305614dbba20ef8f31c58e03c P 1178ea6bb48b48309f142bf2480dba91: Neither blocks nor log segments found. Creating new log.
I20250811 20:46:44.364962 2247 raft_consensus.cc:357] T 2154d30305614dbba20ef8f31c58e03c P d2be35a7226b4159ac8b184b8a149539 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d2be35a7226b4159ac8b184b8a149539" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 39519 } } peers { permanent_uuid: "1178ea6bb48b48309f142bf2480dba91" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 36519 } } peers { permanent_uuid: "1eb47116192244209d0ea55c88ef0910" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 38359 } }
I20250811 20:46:44.365664 2247 raft_consensus.cc:383] T 2154d30305614dbba20ef8f31c58e03c P d2be35a7226b4159ac8b184b8a149539 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 20:46:44.365923 2247 raft_consensus.cc:738] T 2154d30305614dbba20ef8f31c58e03c P d2be35a7226b4159ac8b184b8a149539 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: d2be35a7226b4159ac8b184b8a149539, State: Initialized, Role: FOLLOWER
I20250811 20:46:44.366564 2247 consensus_queue.cc:260] T 2154d30305614dbba20ef8f31c58e03c P d2be35a7226b4159ac8b184b8a149539 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d2be35a7226b4159ac8b184b8a149539" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 39519 } } peers { permanent_uuid: "1178ea6bb48b48309f142bf2480dba91" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 36519 } } peers { permanent_uuid: "1eb47116192244209d0ea55c88ef0910" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 38359 } }
I20250811 20:46:44.369151 2247 ts_tablet_manager.cc:1428] T 2154d30305614dbba20ef8f31c58e03c P d2be35a7226b4159ac8b184b8a149539: Time spent starting tablet: real 0.006s user 0.003s sys 0.003s
I20250811 20:46:44.369676 2248 tablet_bootstrap.cc:492] T 2154d30305614dbba20ef8f31c58e03c P 1178ea6bb48b48309f142bf2480dba91: No bootstrap required, opened a new log
I20250811 20:46:44.369992 2248 ts_tablet_manager.cc:1397] T 2154d30305614dbba20ef8f31c58e03c P 1178ea6bb48b48309f142bf2480dba91: Time spent bootstrapping tablet: real 0.013s user 0.004s sys 0.007s
I20250811 20:46:44.372570 2249 tablet_bootstrap.cc:492] T 2154d30305614dbba20ef8f31c58e03c P 1eb47116192244209d0ea55c88ef0910: No bootstrap required, opened a new log
I20250811 20:46:44.372999 2249 ts_tablet_manager.cc:1397] T 2154d30305614dbba20ef8f31c58e03c P 1eb47116192244209d0ea55c88ef0910: Time spent bootstrapping tablet: real 0.017s user 0.005s sys 0.011s
I20250811 20:46:44.373596 2248 raft_consensus.cc:357] T 2154d30305614dbba20ef8f31c58e03c P 1178ea6bb48b48309f142bf2480dba91 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d2be35a7226b4159ac8b184b8a149539" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 39519 } } peers { permanent_uuid: "1178ea6bb48b48309f142bf2480dba91" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 36519 } } peers { permanent_uuid: "1eb47116192244209d0ea55c88ef0910" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 38359 } }
I20250811 20:46:44.374330 2248 raft_consensus.cc:383] T 2154d30305614dbba20ef8f31c58e03c P 1178ea6bb48b48309f142bf2480dba91 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 20:46:44.374634 2248 raft_consensus.cc:738] T 2154d30305614dbba20ef8f31c58e03c P 1178ea6bb48b48309f142bf2480dba91 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 1178ea6bb48b48309f142bf2480dba91, State: Initialized, Role: FOLLOWER
I20250811 20:46:44.375327 2248 consensus_queue.cc:260] T 2154d30305614dbba20ef8f31c58e03c P 1178ea6bb48b48309f142bf2480dba91 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d2be35a7226b4159ac8b184b8a149539" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 39519 } } peers { permanent_uuid: "1178ea6bb48b48309f142bf2480dba91" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 36519 } } peers { permanent_uuid: "1eb47116192244209d0ea55c88ef0910" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 38359 } }
I20250811 20:46:44.375806 2249 raft_consensus.cc:357] T 2154d30305614dbba20ef8f31c58e03c P 1eb47116192244209d0ea55c88ef0910 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d2be35a7226b4159ac8b184b8a149539" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 39519 } } peers { permanent_uuid: "1178ea6bb48b48309f142bf2480dba91" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 36519 } } peers { permanent_uuid: "1eb47116192244209d0ea55c88ef0910" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 38359 } }
I20250811 20:46:44.376560 2249 raft_consensus.cc:383] T 2154d30305614dbba20ef8f31c58e03c P 1eb47116192244209d0ea55c88ef0910 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 20:46:44.376830 2249 raft_consensus.cc:738] T 2154d30305614dbba20ef8f31c58e03c P 1eb47116192244209d0ea55c88ef0910 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 1eb47116192244209d0ea55c88ef0910, State: Initialized, Role: FOLLOWER
I20250811 20:46:44.377578 2249 consensus_queue.cc:260] T 2154d30305614dbba20ef8f31c58e03c P 1eb47116192244209d0ea55c88ef0910 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d2be35a7226b4159ac8b184b8a149539" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 39519 } } peers { permanent_uuid: "1178ea6bb48b48309f142bf2480dba91" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 36519 } } peers { permanent_uuid: "1eb47116192244209d0ea55c88ef0910" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 38359 } }
I20250811 20:46:44.377671 2248 ts_tablet_manager.cc:1428] T 2154d30305614dbba20ef8f31c58e03c P 1178ea6bb48b48309f142bf2480dba91: Time spent starting tablet: real 0.007s user 0.005s sys 0.000s
I20250811 20:46:44.379787 2249 ts_tablet_manager.cc:1428] T 2154d30305614dbba20ef8f31c58e03c P 1eb47116192244209d0ea55c88ef0910: Time spent starting tablet: real 0.006s user 0.004s sys 0.001s
I20250811 20:46:44.474964 2253 raft_consensus.cc:491] T 2154d30305614dbba20ef8f31c58e03c P d2be35a7226b4159ac8b184b8a149539 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250811 20:46:44.475435 2253 raft_consensus.cc:513] T 2154d30305614dbba20ef8f31c58e03c P d2be35a7226b4159ac8b184b8a149539 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d2be35a7226b4159ac8b184b8a149539" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 39519 } } peers { permanent_uuid: "1178ea6bb48b48309f142bf2480dba91" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 36519 } } peers { permanent_uuid: "1eb47116192244209d0ea55c88ef0910" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 38359 } }
I20250811 20:46:44.477833 2253 leader_election.cc:290] T 2154d30305614dbba20ef8f31c58e03c P d2be35a7226b4159ac8b184b8a149539 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 1178ea6bb48b48309f142bf2480dba91 (127.31.250.194:36519), 1eb47116192244209d0ea55c88ef0910 (127.31.250.195:38359)
I20250811 20:46:44.490590 2183 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "2154d30305614dbba20ef8f31c58e03c" candidate_uuid: "d2be35a7226b4159ac8b184b8a149539" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "1eb47116192244209d0ea55c88ef0910" is_pre_election: true
I20250811 20:46:44.490592 2050 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "2154d30305614dbba20ef8f31c58e03c" candidate_uuid: "d2be35a7226b4159ac8b184b8a149539" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "1178ea6bb48b48309f142bf2480dba91" is_pre_election: true
I20250811 20:46:44.491212 2183 raft_consensus.cc:2466] T 2154d30305614dbba20ef8f31c58e03c P 1eb47116192244209d0ea55c88ef0910 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate d2be35a7226b4159ac8b184b8a149539 in term 0.
I20250811 20:46:44.491212 2050 raft_consensus.cc:2466] T 2154d30305614dbba20ef8f31c58e03c P 1178ea6bb48b48309f142bf2480dba91 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate d2be35a7226b4159ac8b184b8a149539 in term 0.
I20250811 20:46:44.492357 1849 leader_election.cc:304] T 2154d30305614dbba20ef8f31c58e03c P d2be35a7226b4159ac8b184b8a149539 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 1178ea6bb48b48309f142bf2480dba91, d2be35a7226b4159ac8b184b8a149539; no voters:
I20250811 20:46:44.493033 2253 raft_consensus.cc:2802] T 2154d30305614dbba20ef8f31c58e03c P d2be35a7226b4159ac8b184b8a149539 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250811 20:46:44.493286 2253 raft_consensus.cc:491] T 2154d30305614dbba20ef8f31c58e03c P d2be35a7226b4159ac8b184b8a149539 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250811 20:46:44.493534 2253 raft_consensus.cc:3058] T 2154d30305614dbba20ef8f31c58e03c P d2be35a7226b4159ac8b184b8a149539 [term 0 FOLLOWER]: Advancing to term 1
I20250811 20:46:44.497718 2253 raft_consensus.cc:513] T 2154d30305614dbba20ef8f31c58e03c P d2be35a7226b4159ac8b184b8a149539 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d2be35a7226b4159ac8b184b8a149539" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 39519 } } peers { permanent_uuid: "1178ea6bb48b48309f142bf2480dba91" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 36519 } } peers { permanent_uuid: "1eb47116192244209d0ea55c88ef0910" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 38359 } }
I20250811 20:46:44.499058 2253 leader_election.cc:290] T 2154d30305614dbba20ef8f31c58e03c P d2be35a7226b4159ac8b184b8a149539 [CANDIDATE]: Term 1 election: Requested vote from peers 1178ea6bb48b48309f142bf2480dba91 (127.31.250.194:36519), 1eb47116192244209d0ea55c88ef0910 (127.31.250.195:38359)
I20250811 20:46:44.499786 2050 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "2154d30305614dbba20ef8f31c58e03c" candidate_uuid: "d2be35a7226b4159ac8b184b8a149539" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "1178ea6bb48b48309f142bf2480dba91"
I20250811 20:46:44.499920 2183 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "2154d30305614dbba20ef8f31c58e03c" candidate_uuid: "d2be35a7226b4159ac8b184b8a149539" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "1eb47116192244209d0ea55c88ef0910"
I20250811 20:46:44.500181 2050 raft_consensus.cc:3058] T 2154d30305614dbba20ef8f31c58e03c P 1178ea6bb48b48309f142bf2480dba91 [term 0 FOLLOWER]: Advancing to term 1
I20250811 20:46:44.500335 2183 raft_consensus.cc:3058] T 2154d30305614dbba20ef8f31c58e03c P 1eb47116192244209d0ea55c88ef0910 [term 0 FOLLOWER]: Advancing to term 1
I20250811 20:46:44.504411 2050 raft_consensus.cc:2466] T 2154d30305614dbba20ef8f31c58e03c P 1178ea6bb48b48309f142bf2480dba91 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate d2be35a7226b4159ac8b184b8a149539 in term 1.
I20250811 20:46:44.504549 2183 raft_consensus.cc:2466] T 2154d30305614dbba20ef8f31c58e03c P 1eb47116192244209d0ea55c88ef0910 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate d2be35a7226b4159ac8b184b8a149539 in term 1.
I20250811 20:46:44.505286 1849 leader_election.cc:304] T 2154d30305614dbba20ef8f31c58e03c P d2be35a7226b4159ac8b184b8a149539 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 1178ea6bb48b48309f142bf2480dba91, d2be35a7226b4159ac8b184b8a149539; no voters:
I20250811 20:46:44.505915 2253 raft_consensus.cc:2802] T 2154d30305614dbba20ef8f31c58e03c P d2be35a7226b4159ac8b184b8a149539 [term 1 FOLLOWER]: Leader election won for term 1
I20250811 20:46:44.507479 2253 raft_consensus.cc:695] T 2154d30305614dbba20ef8f31c58e03c P d2be35a7226b4159ac8b184b8a149539 [term 1 LEADER]: Becoming Leader. State: Replica: d2be35a7226b4159ac8b184b8a149539, State: Running, Role: LEADER
I20250811 20:46:44.508283 2253 consensus_queue.cc:237] T 2154d30305614dbba20ef8f31c58e03c P d2be35a7226b4159ac8b184b8a149539 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d2be35a7226b4159ac8b184b8a149539" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 39519 } } peers { permanent_uuid: "1178ea6bb48b48309f142bf2480dba91" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 36519 } } peers { permanent_uuid: "1eb47116192244209d0ea55c88ef0910" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 38359 } }
I20250811 20:46:44.518877 1773 catalog_manager.cc:5582] T 2154d30305614dbba20ef8f31c58e03c P d2be35a7226b4159ac8b184b8a149539 reported cstate change: term changed from 0 to 1, leader changed from <none> to d2be35a7226b4159ac8b184b8a149539 (127.31.250.193). New cstate: current_term: 1 leader_uuid: "d2be35a7226b4159ac8b184b8a149539" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "d2be35a7226b4159ac8b184b8a149539" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 39519 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "1178ea6bb48b48309f142bf2480dba91" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 36519 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "1eb47116192244209d0ea55c88ef0910" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 38359 } health_report { overall_health: UNKNOWN } } }
I20250811 20:46:44.701440 2254 consensus_queue.cc:1035] T 566c7803560d44448f8af15cdc40404f P 1178ea6bb48b48309f142bf2480dba91 [LEADER]: Connected to new peer: Peer: permanent_uuid: "1eb47116192244209d0ea55c88ef0910" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 38359 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250811 20:46:44.720019 2254 consensus_queue.cc:1035] T 566c7803560d44448f8af15cdc40404f P 1178ea6bb48b48309f142bf2480dba91 [LEADER]: Connected to new peer: Peer: permanent_uuid: "d2be35a7226b4159ac8b184b8a149539" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 39519 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
W20250811 20:46:44.866088 2270 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:46:44.866609 2270 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:46:44.897228 2270 flags.cc:425] Enabled experimental flag: --enable_multi_tenancy=false
I20250811 20:46:44.951535 2253 consensus_queue.cc:1035] T 2154d30305614dbba20ef8f31c58e03c P d2be35a7226b4159ac8b184b8a149539 [LEADER]: Connected to new peer: Peer: permanent_uuid: "1178ea6bb48b48309f142bf2480dba91" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 36519 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250811 20:46:44.995004 2253 consensus_queue.cc:1035] T 2154d30305614dbba20ef8f31c58e03c P d2be35a7226b4159ac8b184b8a149539 [LEADER]: Connected to new peer: Peer: permanent_uuid: "1eb47116192244209d0ea55c88ef0910" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 38359 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.001s
W20250811 20:46:46.149530 1804 debug-util.cc:398] Leaking SignalData structure 0x7b08000897c0 after lost signal to thread 1741
W20250811 20:46:46.150177 1804 debug-util.cc:398] Leaking SignalData structure 0x7b080006f2a0 after lost signal to thread 1807
W20250811 20:46:46.347527 2270 thread.cc:641] rpc reactor (reactor) Time spent creating pthread: real 1.400s user 0.497s sys 0.898s
W20250811 20:46:46.347940 2270 thread.cc:608] rpc reactor (reactor) Time spent starting thread: real 1.400s user 0.497s sys 0.898s
W20250811 20:46:47.733052 2299 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:46:47.733667 2299 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:46:47.764834 2299 flags.cc:425] Enabled experimental flag: --enable_multi_tenancy=false
W20250811 20:46:49.069478 2299 thread.cc:641] rpc reactor (reactor) Time spent creating pthread: real 1.265s user 0.462s sys 0.789s
W20250811 20:46:49.069837 2299 thread.cc:608] rpc reactor (reactor) Time spent starting thread: real 1.266s user 0.462s sys 0.789s
W20250811 20:46:50.437204 2322 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:46:50.437783 2322 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:46:50.468760 2322 flags.cc:425] Enabled experimental flag: --enable_multi_tenancy=false
W20250811 20:46:51.721927 2322 thread.cc:641] rpc reactor (reactor) Time spent creating pthread: real 1.216s user 0.435s sys 0.778s
W20250811 20:46:51.722332 2322 thread.cc:608] rpc reactor (reactor) Time spent starting thread: real 1.216s user 0.435s sys 0.778s
W20250811 20:46:53.095449 2339 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:46:53.095996 2339 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:46:53.127610 2339 flags.cc:425] Enabled experimental flag: --enable_multi_tenancy=false
W20250811 20:46:54.371063 2339 thread.cc:641] rpc reactor (reactor) Time spent creating pthread: real 1.204s user 0.470s sys 0.731s
W20250811 20:46:54.371524 2339 thread.cc:608] rpc reactor (reactor) Time spent starting thread: real 1.205s user 0.473s sys 0.731s
I20250811 20:46:55.442646 32747 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu with pid 1832
I20250811 20:46:55.469650 32747 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu with pid 1965
I20250811 20:46:55.494951 32747 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu with pid 2099
I20250811 20:46:55.521626 32747 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu with pid 1740
2025-08-11T20:46:55Z chronyd exiting
[ OK ] AdminCliTest.TestDescribeTableColumnFlags (19161 ms)
[ RUN ] AdminCliTest.TestAuthzResetCacheNotAuthorized
I20250811 20:46:55.580525 32747 test_util.cc:276] Using random seed: 103598406
I20250811 20:46:55.584647 32747 ts_itest-base.cc:115] Starting cluster with:
I20250811 20:46:55.584808 32747 ts_itest-base.cc:116] --------------
I20250811 20:46:55.584920 32747 ts_itest-base.cc:117] 3 tablet servers
I20250811 20:46:55.585024 32747 ts_itest-base.cc:118] 3 replicas per TS
I20250811 20:46:55.585127 32747 ts_itest-base.cc:119] --------------
2025-08-11T20:46:55Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-08-11T20:46:55Z Disabled control of system clock
I20250811 20:46:55.620098 32747 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
/tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.31.250.254:34147
--webserver_interface=127.31.250.254
--webserver_port=0
--builtin_ntp_servers=127.31.250.212:45293
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.31.250.254:34147
--superuser_acl=no-such-user with env {}
W20250811 20:46:55.908424 2359 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:46:55.908977 2359 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 20:46:55.909427 2359 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:46:55.938825 2359 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250811 20:46:55.939102 2359 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 20:46:55.939317 2359 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250811 20:46:55.939531 2359 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250811 20:46:55.972878 2359 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.31.250.212:45293
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.31.250.254:34147
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.31.250.254:34147
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--superuser_acl=<redacted>
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.31.250.254
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision 8e873eb37b157a0cf7cb97cc7690367de9707107
build type FASTDEBUG
built by None at 11 Aug 2025 20:41:23 UTC on 24a791456cd2
build id 7521
TSAN enabled
I20250811 20:46:55.974045 2359 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 20:46:55.975651 2359 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 20:46:55.985273 2365 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:46:55.986083 2366 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:46:55.990389 2368 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:46:57.141558 2367 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection time-out
I20250811 20:46:57.141711 2359 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 20:46:57.145170 2359 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 20:46:57.148248 2359 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 20:46:57.149649 2359 hybrid_clock.cc:648] HybridClock initialized: now 1754945217149614 us; error 54 us; skew 500 ppm
I20250811 20:46:57.150383 2359 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 20:46:57.156831 2359 webserver.cc:489] Webserver started at http://127.31.250.254:43735/ using document root <none> and password file <none>
I20250811 20:46:57.157713 2359 fs_manager.cc:362] Metadata directory not provided
I20250811 20:46:57.157892 2359 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 20:46:57.158275 2359 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 20:46:57.162174 2359 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "5e0bb06eb26443d38445976e7d8a0594"
format_stamp: "Formatted at 2025-08-11 20:46:57 on dist-test-slave-4gzk"
I20250811 20:46:57.163223 2359 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "5e0bb06eb26443d38445976e7d8a0594"
format_stamp: "Formatted at 2025-08-11 20:46:57 on dist-test-slave-4gzk"
I20250811 20:46:57.170130 2359 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.001s sys 0.005s
I20250811 20:46:57.175349 2375 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 20:46:57.176301 2359 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.003s sys 0.000s
I20250811 20:46:57.176570 2359 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
uuid: "5e0bb06eb26443d38445976e7d8a0594"
format_stamp: "Formatted at 2025-08-11 20:46:57 on dist-test-slave-4gzk"
I20250811 20:46:57.176867 2359 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 20:46:57.231047 2359 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 20:46:57.232383 2359 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 20:46:57.232738 2359 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 20:46:57.300856 2359 rpc_server.cc:307] RPC server started. Bound to: 127.31.250.254:34147
I20250811 20:46:57.300917 2426 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.31.250.254:34147 every 8 connection(s)
I20250811 20:46:57.303544 2359 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250811 20:46:57.308465 2427 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 20:46:57.310992 32747 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu as pid 2359
I20250811 20:46:57.311549 32747 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250811 20:46:57.327617 2427 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 5e0bb06eb26443d38445976e7d8a0594: Bootstrap starting.
I20250811 20:46:57.333566 2427 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 5e0bb06eb26443d38445976e7d8a0594: Neither blocks nor log segments found. Creating new log.
I20250811 20:46:57.335186 2427 log.cc:826] T 00000000000000000000000000000000 P 5e0bb06eb26443d38445976e7d8a0594: Log is configured to *not* fsync() on all Append() calls
I20250811 20:46:57.339169 2427 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 5e0bb06eb26443d38445976e7d8a0594: No bootstrap required, opened a new log
I20250811 20:46:57.354730 2427 raft_consensus.cc:357] T 00000000000000000000000000000000 P 5e0bb06eb26443d38445976e7d8a0594 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "5e0bb06eb26443d38445976e7d8a0594" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 34147 } }
I20250811 20:46:57.355365 2427 raft_consensus.cc:383] T 00000000000000000000000000000000 P 5e0bb06eb26443d38445976e7d8a0594 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 20:46:57.355608 2427 raft_consensus.cc:738] T 00000000000000000000000000000000 P 5e0bb06eb26443d38445976e7d8a0594 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 5e0bb06eb26443d38445976e7d8a0594, State: Initialized, Role: FOLLOWER
I20250811 20:46:57.356225 2427 consensus_queue.cc:260] T 00000000000000000000000000000000 P 5e0bb06eb26443d38445976e7d8a0594 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "5e0bb06eb26443d38445976e7d8a0594" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 34147 } }
I20250811 20:46:57.356675 2427 raft_consensus.cc:397] T 00000000000000000000000000000000 P 5e0bb06eb26443d38445976e7d8a0594 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250811 20:46:57.356915 2427 raft_consensus.cc:491] T 00000000000000000000000000000000 P 5e0bb06eb26443d38445976e7d8a0594 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250811 20:46:57.357167 2427 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 5e0bb06eb26443d38445976e7d8a0594 [term 0 FOLLOWER]: Advancing to term 1
I20250811 20:46:57.360749 2427 raft_consensus.cc:513] T 00000000000000000000000000000000 P 5e0bb06eb26443d38445976e7d8a0594 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "5e0bb06eb26443d38445976e7d8a0594" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 34147 } }
I20250811 20:46:57.361305 2427 leader_election.cc:304] T 00000000000000000000000000000000 P 5e0bb06eb26443d38445976e7d8a0594 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 5e0bb06eb26443d38445976e7d8a0594; no voters:
I20250811 20:46:57.362865 2427 leader_election.cc:290] T 00000000000000000000000000000000 P 5e0bb06eb26443d38445976e7d8a0594 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250811 20:46:57.363704 2432 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 5e0bb06eb26443d38445976e7d8a0594 [term 1 FOLLOWER]: Leader election won for term 1
I20250811 20:46:57.365762 2432 raft_consensus.cc:695] T 00000000000000000000000000000000 P 5e0bb06eb26443d38445976e7d8a0594 [term 1 LEADER]: Becoming Leader. State: Replica: 5e0bb06eb26443d38445976e7d8a0594, State: Running, Role: LEADER
I20250811 20:46:57.366685 2427 sys_catalog.cc:564] T 00000000000000000000000000000000 P 5e0bb06eb26443d38445976e7d8a0594 [sys.catalog]: configured and running, proceeding with master startup.
I20250811 20:46:57.366426 2432 consensus_queue.cc:237] T 00000000000000000000000000000000 P 5e0bb06eb26443d38445976e7d8a0594 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "5e0bb06eb26443d38445976e7d8a0594" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 34147 } }
I20250811 20:46:57.377154 2434 sys_catalog.cc:455] T 00000000000000000000000000000000 P 5e0bb06eb26443d38445976e7d8a0594 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 5e0bb06eb26443d38445976e7d8a0594. Latest consensus state: current_term: 1 leader_uuid: "5e0bb06eb26443d38445976e7d8a0594" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "5e0bb06eb26443d38445976e7d8a0594" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 34147 } } }
I20250811 20:46:57.377112 2433 sys_catalog.cc:455] T 00000000000000000000000000000000 P 5e0bb06eb26443d38445976e7d8a0594 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "5e0bb06eb26443d38445976e7d8a0594" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "5e0bb06eb26443d38445976e7d8a0594" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 34147 } } }
I20250811 20:46:57.377875 2433 sys_catalog.cc:458] T 00000000000000000000000000000000 P 5e0bb06eb26443d38445976e7d8a0594 [sys.catalog]: This master's current role is: LEADER
I20250811 20:46:57.377875 2434 sys_catalog.cc:458] T 00000000000000000000000000000000 P 5e0bb06eb26443d38445976e7d8a0594 [sys.catalog]: This master's current role is: LEADER
I20250811 20:46:57.384430 2440 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250811 20:46:57.396054 2440 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250811 20:46:57.410673 2440 catalog_manager.cc:1349] Generated new cluster ID: 0f5e588e75314d47af38bef2f2d3ab8b
I20250811 20:46:57.410977 2440 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250811 20:46:57.436051 2440 catalog_manager.cc:1372] Generated new certificate authority record
I20250811 20:46:57.437465 2440 catalog_manager.cc:1506] Loading token signing keys...
I20250811 20:46:57.451066 2440 catalog_manager.cc:5955] T 00000000000000000000000000000000 P 5e0bb06eb26443d38445976e7d8a0594: Generated new TSK 0
I20250811 20:46:57.451913 2440 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250811 20:46:57.473589 32747 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
/tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.31.250.193:0
--local_ip_for_outbound_sockets=127.31.250.193
--webserver_interface=127.31.250.193
--webserver_port=0
--tserver_master_addrs=127.31.250.254:34147
--builtin_ntp_servers=127.31.250.212:45293
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
W20250811 20:46:57.768949 2451 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:46:57.769438 2451 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 20:46:57.769935 2451 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:46:57.801723 2451 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 20:46:57.802824 2451 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.31.250.193
I20250811 20:46:57.837736 2451 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.31.250.212:45293
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.31.250.193:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.31.250.193
--webserver_port=0
--tserver_master_addrs=127.31.250.254:34147
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.31.250.193
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8e873eb37b157a0cf7cb97cc7690367de9707107
build type FASTDEBUG
built by None at 11 Aug 2025 20:41:23 UTC on 24a791456cd2
build id 7521
TSAN enabled
I20250811 20:46:57.839105 2451 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 20:46:57.840940 2451 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 20:46:57.853647 2457 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:46:59.255367 2456 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 2451
W20250811 20:46:59.611146 2451 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.757s user 0.680s sys 1.009s
W20250811 20:46:59.611552 2451 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.758s user 0.681s sys 1.009s
W20250811 20:46:57.854614 2458 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:46:59.612907 2459 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection timed out after 1758 milliseconds
I20250811 20:46:59.614158 2451 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
W20250811 20:46:59.614236 2460 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 20:46:59.617271 2451 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 20:46:59.619426 2451 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 20:46:59.620800 2451 hybrid_clock.cc:648] HybridClock initialized: now 1754945219620768 us; error 43 us; skew 500 ppm
I20250811 20:46:59.621555 2451 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 20:46:59.627763 2451 webserver.cc:489] Webserver started at http://127.31.250.193:46045/ using document root <none> and password file <none>
I20250811 20:46:59.628679 2451 fs_manager.cc:362] Metadata directory not provided
I20250811 20:46:59.628899 2451 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 20:46:59.629339 2451 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 20:46:59.633790 2451 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "5705eaf1dcd243ecae82001fd3a80474"
format_stamp: "Formatted at 2025-08-11 20:46:59 on dist-test-slave-4gzk"
I20250811 20:46:59.634856 2451 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "5705eaf1dcd243ecae82001fd3a80474"
format_stamp: "Formatted at 2025-08-11 20:46:59 on dist-test-slave-4gzk"
I20250811 20:46:59.642175 2451 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.004s sys 0.002s
I20250811 20:46:59.648057 2467 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 20:46:59.649192 2451 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.004s sys 0.000s
I20250811 20:46:59.649510 2451 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "5705eaf1dcd243ecae82001fd3a80474"
format_stamp: "Formatted at 2025-08-11 20:46:59 on dist-test-slave-4gzk"
I20250811 20:46:59.649833 2451 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 20:46:59.704515 2451 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 20:46:59.705960 2451 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 20:46:59.706400 2451 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 20:46:59.709093 2451 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 20:46:59.713611 2451 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 20:46:59.713829 2451 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250811 20:46:59.714082 2451 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 20:46:59.714298 2451 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250811 20:46:59.890578 2451 rpc_server.cc:307] RPC server started. Bound to: 127.31.250.193:43667
I20250811 20:46:59.890756 2579 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.31.250.193:43667 every 8 connection(s)
I20250811 20:46:59.893087 2451 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250811 20:46:59.900431 32747 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu as pid 2451
I20250811 20:46:59.900861 32747 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250811 20:46:59.908206 32747 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
/tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.31.250.194:0
--local_ip_for_outbound_sockets=127.31.250.194
--webserver_interface=127.31.250.194
--webserver_port=0
--tserver_master_addrs=127.31.250.254:34147
--builtin_ntp_servers=127.31.250.212:45293
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250811 20:46:59.920897 2580 heartbeater.cc:344] Connected to a master server at 127.31.250.254:34147
I20250811 20:46:59.921394 2580 heartbeater.cc:461] Registering TS with master...
I20250811 20:46:59.922669 2580 heartbeater.cc:507] Master 127.31.250.254:34147 requested a full tablet report, sending...
I20250811 20:46:59.925891 2392 ts_manager.cc:194] Registered new tserver with Master: 5705eaf1dcd243ecae82001fd3a80474 (127.31.250.193:43667)
I20250811 20:46:59.929117 2392 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.31.250.193:50861
W20250811 20:47:00.204613 2584 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:47:00.205137 2584 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 20:47:00.205648 2584 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:47:00.235575 2584 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 20:47:00.236418 2584 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.31.250.194
I20250811 20:47:00.269764 2584 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.31.250.212:45293
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.31.250.194:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.31.250.194
--webserver_port=0
--tserver_master_addrs=127.31.250.254:34147
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.31.250.194
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8e873eb37b157a0cf7cb97cc7690367de9707107
build type FASTDEBUG
built by None at 11 Aug 2025 20:41:23 UTC on 24a791456cd2
build id 7521
TSAN enabled
I20250811 20:47:00.270998 2584 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 20:47:00.272449 2584 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 20:47:00.284215 2590 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 20:47:00.933301 2580 heartbeater.cc:499] Master 127.31.250.254:34147 was elected leader, sending a full tablet report...
W20250811 20:47:01.688534 2589 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 2584
W20250811 20:47:02.018757 2592 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1732 milliseconds
W20250811 20:47:02.017926 2584 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.733s user 0.607s sys 1.126s
W20250811 20:47:02.020220 2584 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.736s user 0.607s sys 1.126s
W20250811 20:47:02.020272 2593 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 20:47:02.020601 2584 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
W20250811 20:47:00.285913 2591 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 20:47:02.024909 2584 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 20:47:02.027560 2584 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 20:47:02.029050 2584 hybrid_clock.cc:648] HybridClock initialized: now 1754945222028976 us; error 92 us; skew 500 ppm
I20250811 20:47:02.030102 2584 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 20:47:02.036947 2584 webserver.cc:489] Webserver started at http://127.31.250.194:32827/ using document root <none> and password file <none>
I20250811 20:47:02.038163 2584 fs_manager.cc:362] Metadata directory not provided
I20250811 20:47:02.038429 2584 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 20:47:02.038857 2584 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 20:47:02.042981 2584 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data/instance:
uuid: "949f3b633fb040278aa74df11b28c5b8"
format_stamp: "Formatted at 2025-08-11 20:47:02 on dist-test-slave-4gzk"
I20250811 20:47:02.044407 2584 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal/instance:
uuid: "949f3b633fb040278aa74df11b28c5b8"
format_stamp: "Formatted at 2025-08-11 20:47:02 on dist-test-slave-4gzk"
I20250811 20:47:02.051939 2584 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.007s sys 0.000s
I20250811 20:47:02.057202 2601 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 20:47:02.058111 2584 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.004s sys 0.001s
I20250811 20:47:02.058401 2584 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "949f3b633fb040278aa74df11b28c5b8"
format_stamp: "Formatted at 2025-08-11 20:47:02 on dist-test-slave-4gzk"
I20250811 20:47:02.058689 2584 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 20:47:02.113003 2584 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 20:47:02.114398 2584 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 20:47:02.114840 2584 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 20:47:02.117347 2584 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 20:47:02.121260 2584 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 20:47:02.121501 2584 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250811 20:47:02.121757 2584 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 20:47:02.121927 2584 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250811 20:47:02.252197 2584 rpc_server.cc:307] RPC server started. Bound to: 127.31.250.194:45715
I20250811 20:47:02.252301 2713 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.31.250.194:45715 every 8 connection(s)
I20250811 20:47:02.254618 2584 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250811 20:47:02.262142 32747 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu as pid 2584
I20250811 20:47:02.262727 32747 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal/instance
I20250811 20:47:02.269586 32747 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
/tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.31.250.195:0
--local_ip_for_outbound_sockets=127.31.250.195
--webserver_interface=127.31.250.195
--webserver_port=0
--tserver_master_addrs=127.31.250.254:34147
--builtin_ntp_servers=127.31.250.212:45293
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250811 20:47:02.275274 2714 heartbeater.cc:344] Connected to a master server at 127.31.250.254:34147
I20250811 20:47:02.275786 2714 heartbeater.cc:461] Registering TS with master...
I20250811 20:47:02.276983 2714 heartbeater.cc:507] Master 127.31.250.254:34147 requested a full tablet report, sending...
I20250811 20:47:02.279067 2392 ts_manager.cc:194] Registered new tserver with Master: 949f3b633fb040278aa74df11b28c5b8 (127.31.250.194:45715)
I20250811 20:47:02.280824 2392 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.31.250.194:49979
W20250811 20:47:02.563776 2718 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:47:02.564229 2718 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 20:47:02.564692 2718 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:47:02.595216 2718 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 20:47:02.596028 2718 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.31.250.195
I20250811 20:47:02.630278 2718 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.31.250.212:45293
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.31.250.195:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.31.250.195
--webserver_port=0
--tserver_master_addrs=127.31.250.254:34147
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.31.250.195
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8e873eb37b157a0cf7cb97cc7690367de9707107
build type FASTDEBUG
built by None at 11 Aug 2025 20:41:23 UTC on 24a791456cd2
build id 7521
TSAN enabled
I20250811 20:47:02.631575 2718 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 20:47:02.633126 2718 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 20:47:02.644593 2724 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 20:47:03.284118 2714 heartbeater.cc:499] Master 127.31.250.254:34147 was elected leader, sending a full tablet report...
W20250811 20:47:02.645411 2725 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:47:03.871136 2726 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1221 milliseconds
W20250811 20:47:03.872431 2718 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.228s user 0.365s sys 0.855s
W20250811 20:47:03.872774 2718 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.228s user 0.365s sys 0.855s
W20250811 20:47:03.872918 2727 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 20:47:03.873047 2718 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 20:47:03.874197 2718 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 20:47:03.876470 2718 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 20:47:03.877837 2718 hybrid_clock.cc:648] HybridClock initialized: now 1754945223877796 us; error 36 us; skew 500 ppm
I20250811 20:47:03.878628 2718 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 20:47:03.886190 2718 webserver.cc:489] Webserver started at http://127.31.250.195:37419/ using document root <none> and password file <none>
I20250811 20:47:03.887290 2718 fs_manager.cc:362] Metadata directory not provided
I20250811 20:47:03.887542 2718 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 20:47:03.888031 2718 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 20:47:03.892383 2718 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data/instance:
uuid: "ea4b06b84fe544bd92e38f80b98dc2d4"
format_stamp: "Formatted at 2025-08-11 20:47:03 on dist-test-slave-4gzk"
I20250811 20:47:03.893452 2718 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal/instance:
uuid: "ea4b06b84fe544bd92e38f80b98dc2d4"
format_stamp: "Formatted at 2025-08-11 20:47:03 on dist-test-slave-4gzk"
I20250811 20:47:03.900748 2718 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.008s sys 0.000s
I20250811 20:47:03.907035 2734 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 20:47:03.908169 2718 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.002s sys 0.001s
I20250811 20:47:03.908504 2718 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "ea4b06b84fe544bd92e38f80b98dc2d4"
format_stamp: "Formatted at 2025-08-11 20:47:03 on dist-test-slave-4gzk"
I20250811 20:47:03.908844 2718 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 20:47:03.978106 2718 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 20:47:03.979532 2718 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 20:47:03.979945 2718 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 20:47:03.982362 2718 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 20:47:03.986478 2718 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 20:47:03.986711 2718 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250811 20:47:03.986954 2718 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 20:47:03.987116 2718 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250811 20:47:04.116818 2718 rpc_server.cc:307] RPC server started. Bound to: 127.31.250.195:35231
I20250811 20:47:04.116918 2846 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.31.250.195:35231 every 8 connection(s)
I20250811 20:47:04.119359 2718 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250811 20:47:04.120738 32747 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu as pid 2718
I20250811 20:47:04.121296 32747 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestAuthzResetCacheNotAuthorized.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal/instance
I20250811 20:47:04.140483 2847 heartbeater.cc:344] Connected to a master server at 127.31.250.254:34147
I20250811 20:47:04.140928 2847 heartbeater.cc:461] Registering TS with master...
I20250811 20:47:04.141974 2847 heartbeater.cc:507] Master 127.31.250.254:34147 requested a full tablet report, sending...
I20250811 20:47:04.144001 2391 ts_manager.cc:194] Registered new tserver with Master: ea4b06b84fe544bd92e38f80b98dc2d4 (127.31.250.195:35231)
I20250811 20:47:04.145277 2391 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.31.250.195:49671
I20250811 20:47:04.156060 32747 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250811 20:47:04.188393 2391 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:39122:
name: "TestTable"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
owner: "alice"
W20250811 20:47:04.206501 2391 catalog_manager.cc:6944] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table TestTable in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250811 20:47:04.255703 2782 tablet_service.cc:1468] Processing CreateTablet for tablet 8bc286f2fdb9490d88f0dd212799508e (DEFAULT_TABLE table=TestTable [id=d938b23dc485490fa686352c7c985094]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 20:47:04.257800 2782 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 8bc286f2fdb9490d88f0dd212799508e. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 20:47:04.266769 2649 tablet_service.cc:1468] Processing CreateTablet for tablet 8bc286f2fdb9490d88f0dd212799508e (DEFAULT_TABLE table=TestTable [id=d938b23dc485490fa686352c7c985094]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 20:47:04.269011 2649 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 8bc286f2fdb9490d88f0dd212799508e. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 20:47:04.268580 2515 tablet_service.cc:1468] Processing CreateTablet for tablet 8bc286f2fdb9490d88f0dd212799508e (DEFAULT_TABLE table=TestTable [id=d938b23dc485490fa686352c7c985094]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 20:47:04.270411 2515 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 8bc286f2fdb9490d88f0dd212799508e. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 20:47:04.290628 2866 tablet_bootstrap.cc:492] T 8bc286f2fdb9490d88f0dd212799508e P 5705eaf1dcd243ecae82001fd3a80474: Bootstrap starting.
I20250811 20:47:04.295729 2867 tablet_bootstrap.cc:492] T 8bc286f2fdb9490d88f0dd212799508e P ea4b06b84fe544bd92e38f80b98dc2d4: Bootstrap starting.
I20250811 20:47:04.298238 2866 tablet_bootstrap.cc:654] T 8bc286f2fdb9490d88f0dd212799508e P 5705eaf1dcd243ecae82001fd3a80474: Neither blocks nor log segments found. Creating new log.
I20250811 20:47:04.299422 2868 tablet_bootstrap.cc:492] T 8bc286f2fdb9490d88f0dd212799508e P 949f3b633fb040278aa74df11b28c5b8: Bootstrap starting.
I20250811 20:47:04.300760 2866 log.cc:826] T 8bc286f2fdb9490d88f0dd212799508e P 5705eaf1dcd243ecae82001fd3a80474: Log is configured to *not* fsync() on all Append() calls
I20250811 20:47:04.304445 2867 tablet_bootstrap.cc:654] T 8bc286f2fdb9490d88f0dd212799508e P ea4b06b84fe544bd92e38f80b98dc2d4: Neither blocks nor log segments found. Creating new log.
I20250811 20:47:04.307325 2867 log.cc:826] T 8bc286f2fdb9490d88f0dd212799508e P ea4b06b84fe544bd92e38f80b98dc2d4: Log is configured to *not* fsync() on all Append() calls
I20250811 20:47:04.309669 2868 tablet_bootstrap.cc:654] T 8bc286f2fdb9490d88f0dd212799508e P 949f3b633fb040278aa74df11b28c5b8: Neither blocks nor log segments found. Creating new log.
I20250811 20:47:04.312057 2868 log.cc:826] T 8bc286f2fdb9490d88f0dd212799508e P 949f3b633fb040278aa74df11b28c5b8: Log is configured to *not* fsync() on all Append() calls
I20250811 20:47:04.323156 2866 tablet_bootstrap.cc:492] T 8bc286f2fdb9490d88f0dd212799508e P 5705eaf1dcd243ecae82001fd3a80474: No bootstrap required, opened a new log
I20250811 20:47:04.323763 2866 ts_tablet_manager.cc:1397] T 8bc286f2fdb9490d88f0dd212799508e P 5705eaf1dcd243ecae82001fd3a80474: Time spent bootstrapping tablet: real 0.034s user 0.013s sys 0.019s
I20250811 20:47:04.330186 2867 tablet_bootstrap.cc:492] T 8bc286f2fdb9490d88f0dd212799508e P ea4b06b84fe544bd92e38f80b98dc2d4: No bootstrap required, opened a new log
I20250811 20:47:04.330626 2868 tablet_bootstrap.cc:492] T 8bc286f2fdb9490d88f0dd212799508e P 949f3b633fb040278aa74df11b28c5b8: No bootstrap required, opened a new log
I20250811 20:47:04.330718 2867 ts_tablet_manager.cc:1397] T 8bc286f2fdb9490d88f0dd212799508e P ea4b06b84fe544bd92e38f80b98dc2d4: Time spent bootstrapping tablet: real 0.036s user 0.014s sys 0.017s
I20250811 20:47:04.331189 2868 ts_tablet_manager.cc:1397] T 8bc286f2fdb9490d88f0dd212799508e P 949f3b633fb040278aa74df11b28c5b8: Time spent bootstrapping tablet: real 0.032s user 0.008s sys 0.019s
I20250811 20:47:04.350045 2866 raft_consensus.cc:357] T 8bc286f2fdb9490d88f0dd212799508e P 5705eaf1dcd243ecae82001fd3a80474 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "ea4b06b84fe544bd92e38f80b98dc2d4" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 35231 } } peers { permanent_uuid: "949f3b633fb040278aa74df11b28c5b8" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 45715 } } peers { permanent_uuid: "5705eaf1dcd243ecae82001fd3a80474" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 43667 } }
I20250811 20:47:04.351111 2866 raft_consensus.cc:383] T 8bc286f2fdb9490d88f0dd212799508e P 5705eaf1dcd243ecae82001fd3a80474 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 20:47:04.351483 2866 raft_consensus.cc:738] T 8bc286f2fdb9490d88f0dd212799508e P 5705eaf1dcd243ecae82001fd3a80474 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 5705eaf1dcd243ecae82001fd3a80474, State: Initialized, Role: FOLLOWER
I20250811 20:47:04.352603 2866 consensus_queue.cc:260] T 8bc286f2fdb9490d88f0dd212799508e P 5705eaf1dcd243ecae82001fd3a80474 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "ea4b06b84fe544bd92e38f80b98dc2d4" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 35231 } } peers { permanent_uuid: "949f3b633fb040278aa74df11b28c5b8" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 45715 } } peers { permanent_uuid: "5705eaf1dcd243ecae82001fd3a80474" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 43667 } }
I20250811 20:47:04.356695 2868 raft_consensus.cc:357] T 8bc286f2fdb9490d88f0dd212799508e P 949f3b633fb040278aa74df11b28c5b8 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "ea4b06b84fe544bd92e38f80b98dc2d4" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 35231 } } peers { permanent_uuid: "949f3b633fb040278aa74df11b28c5b8" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 45715 } } peers { permanent_uuid: "5705eaf1dcd243ecae82001fd3a80474" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 43667 } }
I20250811 20:47:04.357568 2868 raft_consensus.cc:383] T 8bc286f2fdb9490d88f0dd212799508e P 949f3b633fb040278aa74df11b28c5b8 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 20:47:04.357872 2868 raft_consensus.cc:738] T 8bc286f2fdb9490d88f0dd212799508e P 949f3b633fb040278aa74df11b28c5b8 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 949f3b633fb040278aa74df11b28c5b8, State: Initialized, Role: FOLLOWER
I20250811 20:47:04.358806 2868 consensus_queue.cc:260] T 8bc286f2fdb9490d88f0dd212799508e P 949f3b633fb040278aa74df11b28c5b8 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "ea4b06b84fe544bd92e38f80b98dc2d4" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 35231 } } peers { permanent_uuid: "949f3b633fb040278aa74df11b28c5b8" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 45715 } } peers { permanent_uuid: "5705eaf1dcd243ecae82001fd3a80474" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 43667 } }
I20250811 20:47:04.362754 2866 ts_tablet_manager.cc:1428] T 8bc286f2fdb9490d88f0dd212799508e P 5705eaf1dcd243ecae82001fd3a80474: Time spent starting tablet: real 0.039s user 0.028s sys 0.008s
I20250811 20:47:04.365849 2868 ts_tablet_manager.cc:1428] T 8bc286f2fdb9490d88f0dd212799508e P 949f3b633fb040278aa74df11b28c5b8: Time spent starting tablet: real 0.034s user 0.032s sys 0.000s
I20250811 20:47:04.367079 2867 raft_consensus.cc:357] T 8bc286f2fdb9490d88f0dd212799508e P ea4b06b84fe544bd92e38f80b98dc2d4 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "ea4b06b84fe544bd92e38f80b98dc2d4" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 35231 } } peers { permanent_uuid: "949f3b633fb040278aa74df11b28c5b8" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 45715 } } peers { permanent_uuid: "5705eaf1dcd243ecae82001fd3a80474" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 43667 } }
I20250811 20:47:04.367982 2867 raft_consensus.cc:383] T 8bc286f2fdb9490d88f0dd212799508e P ea4b06b84fe544bd92e38f80b98dc2d4 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 20:47:04.368287 2867 raft_consensus.cc:738] T 8bc286f2fdb9490d88f0dd212799508e P ea4b06b84fe544bd92e38f80b98dc2d4 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: ea4b06b84fe544bd92e38f80b98dc2d4, State: Initialized, Role: FOLLOWER
I20250811 20:47:04.369174 2867 consensus_queue.cc:260] T 8bc286f2fdb9490d88f0dd212799508e P ea4b06b84fe544bd92e38f80b98dc2d4 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "ea4b06b84fe544bd92e38f80b98dc2d4" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 35231 } } peers { permanent_uuid: "949f3b633fb040278aa74df11b28c5b8" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 45715 } } peers { permanent_uuid: "5705eaf1dcd243ecae82001fd3a80474" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 43667 } }
I20250811 20:47:04.377786 2867 ts_tablet_manager.cc:1428] T 8bc286f2fdb9490d88f0dd212799508e P ea4b06b84fe544bd92e38f80b98dc2d4: Time spent starting tablet: real 0.047s user 0.028s sys 0.007s
I20250811 20:47:04.378180 2847 heartbeater.cc:499] Master 127.31.250.254:34147 was elected leader, sending a full tablet report...
W20250811 20:47:04.406945 2581 tablet.cc:2378] T 8bc286f2fdb9490d88f0dd212799508e P 5705eaf1dcd243ecae82001fd3a80474: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250811 20:47:04.511620 2715 tablet.cc:2378] T 8bc286f2fdb9490d88f0dd212799508e P 949f3b633fb040278aa74df11b28c5b8: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250811 20:47:04.559576 2873 raft_consensus.cc:491] T 8bc286f2fdb9490d88f0dd212799508e P 949f3b633fb040278aa74df11b28c5b8 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250811 20:47:04.560081 2873 raft_consensus.cc:513] T 8bc286f2fdb9490d88f0dd212799508e P 949f3b633fb040278aa74df11b28c5b8 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "ea4b06b84fe544bd92e38f80b98dc2d4" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 35231 } } peers { permanent_uuid: "949f3b633fb040278aa74df11b28c5b8" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 45715 } } peers { permanent_uuid: "5705eaf1dcd243ecae82001fd3a80474" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 43667 } }
I20250811 20:47:04.562289 2873 leader_election.cc:290] T 8bc286f2fdb9490d88f0dd212799508e P 949f3b633fb040278aa74df11b28c5b8 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers ea4b06b84fe544bd92e38f80b98dc2d4 (127.31.250.195:35231), 5705eaf1dcd243ecae82001fd3a80474 (127.31.250.193:43667)
I20250811 20:47:04.573799 2802 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "8bc286f2fdb9490d88f0dd212799508e" candidate_uuid: "949f3b633fb040278aa74df11b28c5b8" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "ea4b06b84fe544bd92e38f80b98dc2d4" is_pre_election: true
I20250811 20:47:04.574548 2802 raft_consensus.cc:2466] T 8bc286f2fdb9490d88f0dd212799508e P ea4b06b84fe544bd92e38f80b98dc2d4 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 949f3b633fb040278aa74df11b28c5b8 in term 0.
I20250811 20:47:04.575650 2602 leader_election.cc:304] T 8bc286f2fdb9490d88f0dd212799508e P 949f3b633fb040278aa74df11b28c5b8 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 949f3b633fb040278aa74df11b28c5b8, ea4b06b84fe544bd92e38f80b98dc2d4; no voters:
I20250811 20:47:04.575934 2535 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "8bc286f2fdb9490d88f0dd212799508e" candidate_uuid: "949f3b633fb040278aa74df11b28c5b8" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "5705eaf1dcd243ecae82001fd3a80474" is_pre_election: true
I20250811 20:47:04.576303 2873 raft_consensus.cc:2802] T 8bc286f2fdb9490d88f0dd212799508e P 949f3b633fb040278aa74df11b28c5b8 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250811 20:47:04.576620 2873 raft_consensus.cc:491] T 8bc286f2fdb9490d88f0dd212799508e P 949f3b633fb040278aa74df11b28c5b8 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250811 20:47:04.576653 2535 raft_consensus.cc:2466] T 8bc286f2fdb9490d88f0dd212799508e P 5705eaf1dcd243ecae82001fd3a80474 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 949f3b633fb040278aa74df11b28c5b8 in term 0.
I20250811 20:47:04.576843 2873 raft_consensus.cc:3058] T 8bc286f2fdb9490d88f0dd212799508e P 949f3b633fb040278aa74df11b28c5b8 [term 0 FOLLOWER]: Advancing to term 1
I20250811 20:47:04.581347 2873 raft_consensus.cc:513] T 8bc286f2fdb9490d88f0dd212799508e P 949f3b633fb040278aa74df11b28c5b8 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "ea4b06b84fe544bd92e38f80b98dc2d4" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 35231 } } peers { permanent_uuid: "949f3b633fb040278aa74df11b28c5b8" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 45715 } } peers { permanent_uuid: "5705eaf1dcd243ecae82001fd3a80474" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 43667 } }
I20250811 20:47:04.582820 2873 leader_election.cc:290] T 8bc286f2fdb9490d88f0dd212799508e P 949f3b633fb040278aa74df11b28c5b8 [CANDIDATE]: Term 1 election: Requested vote from peers ea4b06b84fe544bd92e38f80b98dc2d4 (127.31.250.195:35231), 5705eaf1dcd243ecae82001fd3a80474 (127.31.250.193:43667)
I20250811 20:47:04.583627 2802 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "8bc286f2fdb9490d88f0dd212799508e" candidate_uuid: "949f3b633fb040278aa74df11b28c5b8" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "ea4b06b84fe544bd92e38f80b98dc2d4"
I20250811 20:47:04.583794 2535 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "8bc286f2fdb9490d88f0dd212799508e" candidate_uuid: "949f3b633fb040278aa74df11b28c5b8" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "5705eaf1dcd243ecae82001fd3a80474"
I20250811 20:47:04.584075 2802 raft_consensus.cc:3058] T 8bc286f2fdb9490d88f0dd212799508e P ea4b06b84fe544bd92e38f80b98dc2d4 [term 0 FOLLOWER]: Advancing to term 1
I20250811 20:47:04.584224 2535 raft_consensus.cc:3058] T 8bc286f2fdb9490d88f0dd212799508e P 5705eaf1dcd243ecae82001fd3a80474 [term 0 FOLLOWER]: Advancing to term 1
I20250811 20:47:04.588438 2802 raft_consensus.cc:2466] T 8bc286f2fdb9490d88f0dd212799508e P ea4b06b84fe544bd92e38f80b98dc2d4 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 949f3b633fb040278aa74df11b28c5b8 in term 1.
I20250811 20:47:04.588462 2535 raft_consensus.cc:2466] T 8bc286f2fdb9490d88f0dd212799508e P 5705eaf1dcd243ecae82001fd3a80474 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 949f3b633fb040278aa74df11b28c5b8 in term 1.
I20250811 20:47:04.589208 2602 leader_election.cc:304] T 8bc286f2fdb9490d88f0dd212799508e P 949f3b633fb040278aa74df11b28c5b8 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 5705eaf1dcd243ecae82001fd3a80474, 949f3b633fb040278aa74df11b28c5b8; no voters:
I20250811 20:47:04.589767 2873 raft_consensus.cc:2802] T 8bc286f2fdb9490d88f0dd212799508e P 949f3b633fb040278aa74df11b28c5b8 [term 1 FOLLOWER]: Leader election won for term 1
I20250811 20:47:04.591311 2873 raft_consensus.cc:695] T 8bc286f2fdb9490d88f0dd212799508e P 949f3b633fb040278aa74df11b28c5b8 [term 1 LEADER]: Becoming Leader. State: Replica: 949f3b633fb040278aa74df11b28c5b8, State: Running, Role: LEADER
I20250811 20:47:04.592123 2873 consensus_queue.cc:237] T 8bc286f2fdb9490d88f0dd212799508e P 949f3b633fb040278aa74df11b28c5b8 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "ea4b06b84fe544bd92e38f80b98dc2d4" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 35231 } } peers { permanent_uuid: "949f3b633fb040278aa74df11b28c5b8" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 45715 } } peers { permanent_uuid: "5705eaf1dcd243ecae82001fd3a80474" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 43667 } }
I20250811 20:47:04.602094 2391 catalog_manager.cc:5582] T 8bc286f2fdb9490d88f0dd212799508e P 949f3b633fb040278aa74df11b28c5b8 reported cstate change: term changed from 0 to 1, leader changed from <none> to 949f3b633fb040278aa74df11b28c5b8 (127.31.250.194). New cstate: current_term: 1 leader_uuid: "949f3b633fb040278aa74df11b28c5b8" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "ea4b06b84fe544bd92e38f80b98dc2d4" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 35231 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "949f3b633fb040278aa74df11b28c5b8" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 45715 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "5705eaf1dcd243ecae82001fd3a80474" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 43667 } health_report { overall_health: UNKNOWN } } }
I20250811 20:47:04.620970 32747 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250811 20:47:04.624032 32747 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 5705eaf1dcd243ecae82001fd3a80474 to finish bootstrapping
W20250811 20:47:04.624610 2848 tablet.cc:2378] T 8bc286f2fdb9490d88f0dd212799508e P ea4b06b84fe544bd92e38f80b98dc2d4: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250811 20:47:04.635849 32747 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 949f3b633fb040278aa74df11b28c5b8 to finish bootstrapping
I20250811 20:47:04.646147 32747 ts_itest-base.cc:246] Waiting for 1 tablets on tserver ea4b06b84fe544bd92e38f80b98dc2d4 to finish bootstrapping
I20250811 20:47:04.997006 2873 consensus_queue.cc:1035] T 8bc286f2fdb9490d88f0dd212799508e P 949f3b633fb040278aa74df11b28c5b8 [LEADER]: Connected to new peer: Peer: permanent_uuid: "5705eaf1dcd243ecae82001fd3a80474" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 43667 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250811 20:47:05.016173 2873 consensus_queue.cc:1035] T 8bc286f2fdb9490d88f0dd212799508e P 949f3b633fb040278aa74df11b28c5b8 [LEADER]: Connected to new peer: Peer: permanent_uuid: "ea4b06b84fe544bd92e38f80b98dc2d4" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 35231 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
W20250811 20:47:06.338027 2391 server_base.cc:1129] Unauthorized access attempt to method kudu.master.MasterService.RefreshAuthzCache from {username='slave'} at 127.0.0.1:39138
I20250811 20:47:07.416181 32747 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu with pid 2451
I20250811 20:47:07.435771 32747 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu with pid 2584
I20250811 20:47:07.460253 32747 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu with pid 2718
I20250811 20:47:07.485018 32747 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu with pid 2359
2025-08-11T20:47:07Z chronyd exiting
[ OK ] AdminCliTest.TestAuthzResetCacheNotAuthorized (11958 ms)
[ RUN ] AdminCliTest.TestRebuildTables
I20250811 20:47:07.539323 32747 test_util.cc:276] Using random seed: 115557213
I20250811 20:47:07.543325 32747 ts_itest-base.cc:115] Starting cluster with:
I20250811 20:47:07.543489 32747 ts_itest-base.cc:116] --------------
I20250811 20:47:07.543651 32747 ts_itest-base.cc:117] 3 tablet servers
I20250811 20:47:07.543794 32747 ts_itest-base.cc:118] 3 replicas per TS
I20250811 20:47:07.543942 32747 ts_itest-base.cc:119] --------------
2025-08-11T20:47:07Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-08-11T20:47:07Z Disabled control of system clock
I20250811 20:47:07.579914 32747 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
/tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.31.250.254:40791
--webserver_interface=127.31.250.254
--webserver_port=0
--builtin_ntp_servers=127.31.250.212:46869
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.31.250.254:40791 with env {}
W20250811 20:47:07.877564 2916 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:47:07.878137 2916 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 20:47:07.878571 2916 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:47:07.909456 2916 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250811 20:47:07.909759 2916 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 20:47:07.909960 2916 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250811 20:47:07.910195 2916 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250811 20:47:07.944198 2916 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.31.250.212:46869
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.31.250.254:40791
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.31.250.254:40791
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.31.250.254
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision 8e873eb37b157a0cf7cb97cc7690367de9707107
build type FASTDEBUG
built by None at 11 Aug 2025 20:41:23 UTC on 24a791456cd2
build id 7521
TSAN enabled
I20250811 20:47:07.945412 2916 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 20:47:07.946980 2916 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 20:47:07.957437 2922 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:47:07.957994 2923 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:47:09.363097 2921 debug-util.cc:398] Leaking SignalData structure 0x7b0800037cc0 after lost signal to thread 2916
W20250811 20:47:09.735860 2916 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.778s user 0.623s sys 1.154s
W20250811 20:47:09.737128 2916 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.780s user 0.623s sys 1.155s
W20250811 20:47:09.738466 2925 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:47:09.740523 2924 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1778 milliseconds
I20250811 20:47:09.740595 2916 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 20:47:09.741739 2916 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 20:47:09.744274 2916 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 20:47:09.745714 2916 hybrid_clock.cc:648] HybridClock initialized: now 1754945229745664 us; error 48 us; skew 500 ppm
I20250811 20:47:09.746482 2916 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 20:47:09.752854 2916 webserver.cc:489] Webserver started at http://127.31.250.254:46685/ using document root <none> and password file <none>
I20250811 20:47:09.753692 2916 fs_manager.cc:362] Metadata directory not provided
I20250811 20:47:09.753871 2916 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 20:47:09.754254 2916 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 20:47:09.758380 2916 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "89c0bbfc378b4a62aaa1e62b1ce1d18c"
format_stamp: "Formatted at 2025-08-11 20:47:09 on dist-test-slave-4gzk"
I20250811 20:47:09.759413 2916 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "89c0bbfc378b4a62aaa1e62b1ce1d18c"
format_stamp: "Formatted at 2025-08-11 20:47:09 on dist-test-slave-4gzk"
I20250811 20:47:09.766193 2916 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.006s sys 0.000s
I20250811 20:47:09.771411 2932 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 20:47:09.772360 2916 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.004s sys 0.000s
I20250811 20:47:09.772657 2916 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
uuid: "89c0bbfc378b4a62aaa1e62b1ce1d18c"
format_stamp: "Formatted at 2025-08-11 20:47:09 on dist-test-slave-4gzk"
I20250811 20:47:09.772951 2916 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 20:47:09.831131 2916 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 20:47:09.832641 2916 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 20:47:09.833045 2916 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 20:47:09.902361 2916 rpc_server.cc:307] RPC server started. Bound to: 127.31.250.254:40791
I20250811 20:47:09.902413 2983 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.31.250.254:40791 every 8 connection(s)
I20250811 20:47:09.905177 2916 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250811 20:47:09.907104 32747 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu as pid 2916
I20250811 20:47:09.907634 32747 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250811 20:47:09.910907 2984 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 20:47:09.934370 2984 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c: Bootstrap starting.
I20250811 20:47:09.939713 2984 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c: Neither blocks nor log segments found. Creating new log.
I20250811 20:47:09.941514 2984 log.cc:826] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c: Log is configured to *not* fsync() on all Append() calls
I20250811 20:47:09.945794 2984 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c: No bootstrap required, opened a new log
I20250811 20:47:09.962728 2984 raft_consensus.cc:357] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "89c0bbfc378b4a62aaa1e62b1ce1d18c" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 40791 } }
I20250811 20:47:09.963454 2984 raft_consensus.cc:383] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 20:47:09.963689 2984 raft_consensus.cc:738] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 89c0bbfc378b4a62aaa1e62b1ce1d18c, State: Initialized, Role: FOLLOWER
I20250811 20:47:09.964342 2984 consensus_queue.cc:260] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "89c0bbfc378b4a62aaa1e62b1ce1d18c" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 40791 } }
I20250811 20:47:09.964857 2984 raft_consensus.cc:397] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250811 20:47:09.965116 2984 raft_consensus.cc:491] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250811 20:47:09.965420 2984 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [term 0 FOLLOWER]: Advancing to term 1
I20250811 20:47:09.969743 2984 raft_consensus.cc:513] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "89c0bbfc378b4a62aaa1e62b1ce1d18c" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 40791 } }
I20250811 20:47:09.970413 2984 leader_election.cc:304] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 89c0bbfc378b4a62aaa1e62b1ce1d18c; no voters:
I20250811 20:47:09.972033 2984 leader_election.cc:290] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [CANDIDATE]: Term 1 election: Requested vote from peers
I20250811 20:47:09.972774 2989 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [term 1 FOLLOWER]: Leader election won for term 1
I20250811 20:47:09.974967 2989 raft_consensus.cc:695] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [term 1 LEADER]: Becoming Leader. State: Replica: 89c0bbfc378b4a62aaa1e62b1ce1d18c, State: Running, Role: LEADER
I20250811 20:47:09.975732 2989 consensus_queue.cc:237] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "89c0bbfc378b4a62aaa1e62b1ce1d18c" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 40791 } }
I20250811 20:47:09.976693 2984 sys_catalog.cc:564] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [sys.catalog]: configured and running, proceeding with master startup.
I20250811 20:47:09.985366 2991 sys_catalog.cc:455] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [sys.catalog]: SysCatalogTable state changed. Reason: New leader 89c0bbfc378b4a62aaa1e62b1ce1d18c. Latest consensus state: current_term: 1 leader_uuid: "89c0bbfc378b4a62aaa1e62b1ce1d18c" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "89c0bbfc378b4a62aaa1e62b1ce1d18c" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 40791 } } }
I20250811 20:47:09.986589 2991 sys_catalog.cc:458] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [sys.catalog]: This master's current role is: LEADER
I20250811 20:47:09.986063 2990 sys_catalog.cc:455] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "89c0bbfc378b4a62aaa1e62b1ce1d18c" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "89c0bbfc378b4a62aaa1e62b1ce1d18c" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 40791 } } }
I20250811 20:47:09.987039 2990 sys_catalog.cc:458] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [sys.catalog]: This master's current role is: LEADER
I20250811 20:47:09.989645 2997 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250811 20:47:10.001209 2997 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250811 20:47:10.018074 2997 catalog_manager.cc:1349] Generated new cluster ID: d44864ab794f4d4b8dce0658483fdc68
I20250811 20:47:10.018388 2997 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250811 20:47:10.047089 2997 catalog_manager.cc:1372] Generated new certificate authority record
I20250811 20:47:10.048588 2997 catalog_manager.cc:1506] Loading token signing keys...
I20250811 20:47:10.061993 2997 catalog_manager.cc:5955] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c: Generated new TSK 0
I20250811 20:47:10.062942 2997 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250811 20:47:10.080999 32747 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
/tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.31.250.193:0
--local_ip_for_outbound_sockets=127.31.250.193
--webserver_interface=127.31.250.193
--webserver_port=0
--tserver_master_addrs=127.31.250.254:40791
--builtin_ntp_servers=127.31.250.212:46869
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
W20250811 20:47:10.382053 3008 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:47:10.382555 3008 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 20:47:10.383024 3008 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:47:10.413717 3008 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 20:47:10.414749 3008 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.31.250.193
I20250811 20:47:10.448640 3008 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.31.250.212:46869
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.31.250.193:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.31.250.193
--webserver_port=0
--tserver_master_addrs=127.31.250.254:40791
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.31.250.193
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8e873eb37b157a0cf7cb97cc7690367de9707107
build type FASTDEBUG
built by None at 11 Aug 2025 20:41:23 UTC on 24a791456cd2
build id 7521
TSAN enabled
I20250811 20:47:10.449957 3008 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 20:47:10.451658 3008 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 20:47:10.464237 3014 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:47:11.867986 3013 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 3008
W20250811 20:47:11.905198 3008 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.439s user 0.398s sys 0.845s
W20250811 20:47:11.905691 3008 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.440s user 0.398s sys 0.846s
W20250811 20:47:11.907066 3016 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1439 milliseconds
W20250811 20:47:11.907671 3017 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:47:10.465876 3015 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 20:47:11.908363 3008 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 20:47:11.912606 3008 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 20:47:11.915198 3008 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 20:47:11.916702 3008 hybrid_clock.cc:648] HybridClock initialized: now 1754945231916653 us; error 59 us; skew 500 ppm
I20250811 20:47:11.917651 3008 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 20:47:11.923615 3008 webserver.cc:489] Webserver started at http://127.31.250.193:43009/ using document root <none> and password file <none>
I20250811 20:47:11.924484 3008 fs_manager.cc:362] Metadata directory not provided
I20250811 20:47:11.924698 3008 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 20:47:11.925130 3008 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 20:47:11.929306 3008 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "d08ec2a3bb504a1483c931954ffcd43c"
format_stamp: "Formatted at 2025-08-11 20:47:11 on dist-test-slave-4gzk"
I20250811 20:47:11.930274 3008 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "d08ec2a3bb504a1483c931954ffcd43c"
format_stamp: "Formatted at 2025-08-11 20:47:11 on dist-test-slave-4gzk"
I20250811 20:47:11.937160 3008 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.004s sys 0.004s
I20250811 20:47:11.942740 3025 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 20:47:11.943938 3008 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.002s sys 0.001s
I20250811 20:47:11.944243 3008 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "d08ec2a3bb504a1483c931954ffcd43c"
format_stamp: "Formatted at 2025-08-11 20:47:11 on dist-test-slave-4gzk"
I20250811 20:47:11.944578 3008 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 20:47:12.005314 3008 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 20:47:12.006928 3008 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 20:47:12.007368 3008 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 20:47:12.009886 3008 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 20:47:12.014441 3008 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 20:47:12.014652 3008 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250811 20:47:12.014907 3008 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 20:47:12.015082 3008 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250811 20:47:12.199779 3008 rpc_server.cc:307] RPC server started. Bound to: 127.31.250.193:46671
I20250811 20:47:12.199903 3137 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.31.250.193:46671 every 8 connection(s)
I20250811 20:47:12.202641 3008 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250811 20:47:12.211318 32747 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu as pid 3008
I20250811 20:47:12.212057 32747 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250811 20:47:12.219908 32747 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
/tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.31.250.194:0
--local_ip_for_outbound_sockets=127.31.250.194
--webserver_interface=127.31.250.194
--webserver_port=0
--tserver_master_addrs=127.31.250.254:40791
--builtin_ntp_servers=127.31.250.212:46869
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250811 20:47:12.226517 3138 heartbeater.cc:344] Connected to a master server at 127.31.250.254:40791
I20250811 20:47:12.227037 3138 heartbeater.cc:461] Registering TS with master...
I20250811 20:47:12.228127 3138 heartbeater.cc:507] Master 127.31.250.254:40791 requested a full tablet report, sending...
I20250811 20:47:12.230690 2949 ts_manager.cc:194] Registered new tserver with Master: d08ec2a3bb504a1483c931954ffcd43c (127.31.250.193:46671)
I20250811 20:47:12.232754 2949 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.31.250.193:42821
W20250811 20:47:12.540752 3142 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:47:12.541256 3142 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 20:47:12.541749 3142 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:47:12.572361 3142 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 20:47:12.573169 3142 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.31.250.194
I20250811 20:47:12.612012 3142 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.31.250.212:46869
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.31.250.194:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.31.250.194
--webserver_port=0
--tserver_master_addrs=127.31.250.254:40791
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.31.250.194
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8e873eb37b157a0cf7cb97cc7690367de9707107
build type FASTDEBUG
built by None at 11 Aug 2025 20:41:23 UTC on 24a791456cd2
build id 7521
TSAN enabled
I20250811 20:47:12.613279 3142 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 20:47:12.614873 3142 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 20:47:12.625990 3148 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 20:47:13.236467 3138 heartbeater.cc:499] Master 127.31.250.254:40791 was elected leader, sending a full tablet report...
W20250811 20:47:12.627633 3149 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:47:13.814059 3151 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:47:13.816417 3150 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1184 milliseconds
I20250811 20:47:13.816511 3142 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 20:47:13.817674 3142 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 20:47:13.819811 3142 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 20:47:13.821156 3142 hybrid_clock.cc:648] HybridClock initialized: now 1754945233821113 us; error 42 us; skew 500 ppm
I20250811 20:47:13.821940 3142 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 20:47:13.827730 3142 webserver.cc:489] Webserver started at http://127.31.250.194:39741/ using document root <none> and password file <none>
I20250811 20:47:13.828634 3142 fs_manager.cc:362] Metadata directory not provided
I20250811 20:47:13.828884 3142 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 20:47:13.829320 3142 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 20:47:13.833670 3142 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data/instance:
uuid: "8aa039b30ffe49639e3e01dff534f030"
format_stamp: "Formatted at 2025-08-11 20:47:13 on dist-test-slave-4gzk"
I20250811 20:47:13.834645 3142 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal/instance:
uuid: "8aa039b30ffe49639e3e01dff534f030"
format_stamp: "Formatted at 2025-08-11 20:47:13 on dist-test-slave-4gzk"
I20250811 20:47:13.841408 3142 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.005s sys 0.004s
I20250811 20:47:13.847102 3158 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 20:47:13.848067 3142 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.003s sys 0.000s
I20250811 20:47:13.848367 3142 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "8aa039b30ffe49639e3e01dff534f030"
format_stamp: "Formatted at 2025-08-11 20:47:13 on dist-test-slave-4gzk"
I20250811 20:47:13.848718 3142 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 20:47:13.915707 3142 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 20:47:13.917549 3142 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 20:47:13.918123 3142 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 20:47:13.921106 3142 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 20:47:13.925709 3142 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 20:47:13.925915 3142 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250811 20:47:13.926178 3142 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 20:47:13.926337 3142 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250811 20:47:14.056849 3142 rpc_server.cc:307] RPC server started. Bound to: 127.31.250.194:38949
I20250811 20:47:14.056959 3270 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.31.250.194:38949 every 8 connection(s)
I20250811 20:47:14.059332 3142 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250811 20:47:14.067157 32747 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu as pid 3142
I20250811 20:47:14.067605 32747 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal/instance
I20250811 20:47:14.073501 32747 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
/tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.31.250.195:0
--local_ip_for_outbound_sockets=127.31.250.195
--webserver_interface=127.31.250.195
--webserver_port=0
--tserver_master_addrs=127.31.250.254:40791
--builtin_ntp_servers=127.31.250.212:46869
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250811 20:47:14.079856 3271 heartbeater.cc:344] Connected to a master server at 127.31.250.254:40791
I20250811 20:47:14.080260 3271 heartbeater.cc:461] Registering TS with master...
I20250811 20:47:14.081385 3271 heartbeater.cc:507] Master 127.31.250.254:40791 requested a full tablet report, sending...
I20250811 20:47:14.083479 2949 ts_manager.cc:194] Registered new tserver with Master: 8aa039b30ffe49639e3e01dff534f030 (127.31.250.194:38949)
I20250811 20:47:14.084685 2949 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.31.250.194:55395
W20250811 20:47:14.372789 3275 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:47:14.373268 3275 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 20:47:14.373884 3275 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:47:14.405021 3275 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 20:47:14.406014 3275 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.31.250.195
I20250811 20:47:14.439798 3275 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.31.250.212:46869
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.31.250.195:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.31.250.195
--webserver_port=0
--tserver_master_addrs=127.31.250.254:40791
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.31.250.195
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8e873eb37b157a0cf7cb97cc7690367de9707107
build type FASTDEBUG
built by None at 11 Aug 2025 20:41:23 UTC on 24a791456cd2
build id 7521
TSAN enabled
I20250811 20:47:14.441143 3275 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 20:47:14.442752 3275 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 20:47:14.454046 3281 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:47:14.455725 3282 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 20:47:15.087884 3271 heartbeater.cc:499] Master 127.31.250.254:40791 was elected leader, sending a full tablet report...
W20250811 20:47:15.625440 3284 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:47:15.628039 3283 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1167 milliseconds
I20250811 20:47:15.628157 3275 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 20:47:15.629289 3275 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 20:47:15.631533 3275 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 20:47:15.632877 3275 hybrid_clock.cc:648] HybridClock initialized: now 1754945235632833 us; error 48 us; skew 500 ppm
I20250811 20:47:15.633651 3275 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 20:47:15.639446 3275 webserver.cc:489] Webserver started at http://127.31.250.195:41367/ using document root <none> and password file <none>
I20250811 20:47:15.640419 3275 fs_manager.cc:362] Metadata directory not provided
I20250811 20:47:15.640673 3275 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 20:47:15.641181 3275 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 20:47:15.645483 3275 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data/instance:
uuid: "18398bb77b9544f0bfec984dbe18adc9"
format_stamp: "Formatted at 2025-08-11 20:47:15 on dist-test-slave-4gzk"
I20250811 20:47:15.646595 3275 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal/instance:
uuid: "18398bb77b9544f0bfec984dbe18adc9"
format_stamp: "Formatted at 2025-08-11 20:47:15 on dist-test-slave-4gzk"
I20250811 20:47:15.653672 3275 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.009s sys 0.000s
I20250811 20:47:15.659233 3291 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 20:47:15.660321 3275 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.003s sys 0.000s
I20250811 20:47:15.660656 3275 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "18398bb77b9544f0bfec984dbe18adc9"
format_stamp: "Formatted at 2025-08-11 20:47:15 on dist-test-slave-4gzk"
I20250811 20:47:15.660979 3275 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 20:47:15.710024 3275 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 20:47:15.711505 3275 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 20:47:15.711990 3275 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 20:47:15.714452 3275 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 20:47:15.718438 3275 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 20:47:15.718653 3275 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250811 20:47:15.718927 3275 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 20:47:15.719081 3275 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250811 20:47:15.850064 3275 rpc_server.cc:307] RPC server started. Bound to: 127.31.250.195:46003
I20250811 20:47:15.850188 3403 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.31.250.195:46003 every 8 connection(s)
I20250811 20:47:15.852625 3275 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250811 20:47:15.855566 32747 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu as pid 3275
I20250811 20:47:15.856057 32747 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal/instance
I20250811 20:47:15.873008 3404 heartbeater.cc:344] Connected to a master server at 127.31.250.254:40791
I20250811 20:47:15.873524 3404 heartbeater.cc:461] Registering TS with master...
I20250811 20:47:15.874495 3404 heartbeater.cc:507] Master 127.31.250.254:40791 requested a full tablet report, sending...
I20250811 20:47:15.876678 2949 ts_manager.cc:194] Registered new tserver with Master: 18398bb77b9544f0bfec984dbe18adc9 (127.31.250.195:46003)
I20250811 20:47:15.877866 2949 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.31.250.195:55565
I20250811 20:47:15.890422 32747 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250811 20:47:15.926074 32747 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250811 20:47:15.926383 32747 test_util.cc:276] Using random seed: 123944289
I20250811 20:47:15.965149 2949 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:53118:
name: "TestTable"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
I20250811 20:47:16.004981 3206 tablet_service.cc:1468] Processing CreateTablet for tablet c9fa405f1b20481486824c1627057316 (DEFAULT_TABLE table=TestTable [id=e5b8a053a0394b9da10f71511adc1c49]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 20:47:16.006441 3206 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet c9fa405f1b20481486824c1627057316. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 20:47:16.024231 3424 tablet_bootstrap.cc:492] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030: Bootstrap starting.
I20250811 20:47:16.029588 3424 tablet_bootstrap.cc:654] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030: Neither blocks nor log segments found. Creating new log.
I20250811 20:47:16.031203 3424 log.cc:826] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030: Log is configured to *not* fsync() on all Append() calls
I20250811 20:47:16.035668 3424 tablet_bootstrap.cc:492] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030: No bootstrap required, opened a new log
I20250811 20:47:16.036078 3424 ts_tablet_manager.cc:1397] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030: Time spent bootstrapping tablet: real 0.012s user 0.008s sys 0.003s
I20250811 20:47:16.052824 3424 raft_consensus.cc:357] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } }
I20250811 20:47:16.053404 3424 raft_consensus.cc:383] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 20:47:16.053634 3424 raft_consensus.cc:738] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 8aa039b30ffe49639e3e01dff534f030, State: Initialized, Role: FOLLOWER
I20250811 20:47:16.054297 3424 consensus_queue.cc:260] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } }
I20250811 20:47:16.054816 3424 raft_consensus.cc:397] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250811 20:47:16.055081 3424 raft_consensus.cc:491] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250811 20:47:16.055433 3424 raft_consensus.cc:3058] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [term 0 FOLLOWER]: Advancing to term 1
I20250811 20:47:16.059752 3424 raft_consensus.cc:513] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } }
I20250811 20:47:16.060431 3424 leader_election.cc:304] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 8aa039b30ffe49639e3e01dff534f030; no voters:
I20250811 20:47:16.062863 3424 leader_election.cc:290] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250811 20:47:16.063290 3426 raft_consensus.cc:2802] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [term 1 FOLLOWER]: Leader election won for term 1
I20250811 20:47:16.073104 3426 raft_consensus.cc:695] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [term 1 LEADER]: Becoming Leader. State: Replica: 8aa039b30ffe49639e3e01dff534f030, State: Running, Role: LEADER
I20250811 20:47:16.073372 3424 ts_tablet_manager.cc:1428] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030: Time spent starting tablet: real 0.037s user 0.030s sys 0.005s
I20250811 20:47:16.074043 3426 consensus_queue.cc:237] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } }
I20250811 20:47:16.091814 2949 catalog_manager.cc:5582] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 reported cstate change: term changed from 0 to 1, leader changed from <none> to 8aa039b30ffe49639e3e01dff534f030 (127.31.250.194). New cstate: current_term: 1 leader_uuid: "8aa039b30ffe49639e3e01dff534f030" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } health_report { overall_health: HEALTHY } } }
I20250811 20:47:16.310220 32747 test_util.cc:276] Using random seed: 124328105
I20250811 20:47:16.331070 2944 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:53128:
name: "TestTable1"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
I20250811 20:47:16.358898 3339 tablet_service.cc:1468] Processing CreateTablet for tablet 628b4e91e833481a8a537e4947cb870c (DEFAULT_TABLE table=TestTable1 [id=1d040fad39dc4f66ba36e7177d885ae1]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 20:47:16.360329 3339 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 628b4e91e833481a8a537e4947cb870c. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 20:47:16.378402 3445 tablet_bootstrap.cc:492] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9: Bootstrap starting.
I20250811 20:47:16.383635 3445 tablet_bootstrap.cc:654] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9: Neither blocks nor log segments found. Creating new log.
I20250811 20:47:16.385226 3445 log.cc:826] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9: Log is configured to *not* fsync() on all Append() calls
I20250811 20:47:16.389221 3445 tablet_bootstrap.cc:492] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9: No bootstrap required, opened a new log
I20250811 20:47:16.389572 3445 ts_tablet_manager.cc:1397] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9: Time spent bootstrapping tablet: real 0.012s user 0.004s sys 0.005s
I20250811 20:47:16.406060 3445 raft_consensus.cc:357] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } }
I20250811 20:47:16.406622 3445 raft_consensus.cc:383] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 20:47:16.406821 3445 raft_consensus.cc:738] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 18398bb77b9544f0bfec984dbe18adc9, State: Initialized, Role: FOLLOWER
I20250811 20:47:16.407485 3445 consensus_queue.cc:260] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } }
I20250811 20:47:16.408068 3445 raft_consensus.cc:397] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250811 20:47:16.408288 3445 raft_consensus.cc:491] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250811 20:47:16.408546 3445 raft_consensus.cc:3058] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 [term 0 FOLLOWER]: Advancing to term 1
I20250811 20:47:16.412719 3445 raft_consensus.cc:513] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } }
I20250811 20:47:16.413374 3445 leader_election.cc:304] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 18398bb77b9544f0bfec984dbe18adc9; no voters:
I20250811 20:47:16.415009 3445 leader_election.cc:290] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250811 20:47:16.415546 3447 raft_consensus.cc:2802] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 [term 1 FOLLOWER]: Leader election won for term 1
I20250811 20:47:16.418179 3404 heartbeater.cc:499] Master 127.31.250.254:40791 was elected leader, sending a full tablet report...
I20250811 20:47:16.418880 3447 raft_consensus.cc:695] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 [term 1 LEADER]: Becoming Leader. State: Replica: 18398bb77b9544f0bfec984dbe18adc9, State: Running, Role: LEADER
I20250811 20:47:16.419206 3445 ts_tablet_manager.cc:1428] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9: Time spent starting tablet: real 0.029s user 0.030s sys 0.000s
I20250811 20:47:16.419787 3447 consensus_queue.cc:237] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } }
I20250811 20:47:16.430193 2944 catalog_manager.cc:5582] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 reported cstate change: term changed from 0 to 1, leader changed from <none> to 18398bb77b9544f0bfec984dbe18adc9 (127.31.250.195). New cstate: current_term: 1 leader_uuid: "18398bb77b9544f0bfec984dbe18adc9" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } health_report { overall_health: HEALTHY } } }
I20250811 20:47:16.639350 32747 test_util.cc:276] Using random seed: 124657235
I20250811 20:47:16.660152 2942 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:53136:
name: "TestTable2"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
I20250811 20:47:16.687098 3073 tablet_service.cc:1468] Processing CreateTablet for tablet 3918d98569dd46759251ad45bfa08089 (DEFAULT_TABLE table=TestTable2 [id=1b39bcb5fe7f4558a3be2cc8768cbb40]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 20:47:16.688552 3073 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 3918d98569dd46759251ad45bfa08089. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 20:47:16.705947 3466 tablet_bootstrap.cc:492] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c: Bootstrap starting.
I20250811 20:47:16.711243 3466 tablet_bootstrap.cc:654] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c: Neither blocks nor log segments found. Creating new log.
I20250811 20:47:16.713042 3466 log.cc:826] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c: Log is configured to *not* fsync() on all Append() calls
I20250811 20:47:16.717017 3466 tablet_bootstrap.cc:492] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c: No bootstrap required, opened a new log
I20250811 20:47:16.717367 3466 ts_tablet_manager.cc:1397] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c: Time spent bootstrapping tablet: real 0.012s user 0.009s sys 0.000s
I20250811 20:47:16.733628 3466 raft_consensus.cc:357] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } }
I20250811 20:47:16.734117 3466 raft_consensus.cc:383] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 20:47:16.734293 3466 raft_consensus.cc:738] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: d08ec2a3bb504a1483c931954ffcd43c, State: Initialized, Role: FOLLOWER
I20250811 20:47:16.734870 3466 consensus_queue.cc:260] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } }
I20250811 20:47:16.735385 3466 raft_consensus.cc:397] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250811 20:47:16.735612 3466 raft_consensus.cc:491] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250811 20:47:16.735886 3466 raft_consensus.cc:3058] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c [term 0 FOLLOWER]: Advancing to term 1
I20250811 20:47:16.739948 3466 raft_consensus.cc:513] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } }
I20250811 20:47:16.740588 3466 leader_election.cc:304] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: d08ec2a3bb504a1483c931954ffcd43c; no voters:
I20250811 20:47:16.742293 3466 leader_election.cc:290] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c [CANDIDATE]: Term 1 election: Requested vote from peers
I20250811 20:47:16.743034 3468 raft_consensus.cc:2802] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c [term 1 FOLLOWER]: Leader election won for term 1
I20250811 20:47:16.745687 3466 ts_tablet_manager.cc:1428] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c: Time spent starting tablet: real 0.028s user 0.022s sys 0.006s
I20250811 20:47:16.746371 3468 raft_consensus.cc:695] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c [term 1 LEADER]: Becoming Leader. State: Replica: d08ec2a3bb504a1483c931954ffcd43c, State: Running, Role: LEADER
I20250811 20:47:16.747015 3468 consensus_queue.cc:237] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } }
I20250811 20:47:16.757479 2942 catalog_manager.cc:5582] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c reported cstate change: term changed from 0 to 1, leader changed from <none> to d08ec2a3bb504a1483c931954ffcd43c (127.31.250.193). New cstate: current_term: 1 leader_uuid: "d08ec2a3bb504a1483c931954ffcd43c" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } health_report { overall_health: HEALTHY } } }
I20250811 20:47:16.972033 32747 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu with pid 2916
W20250811 20:47:17.114688 3271 heartbeater.cc:646] Failed to heartbeat to 127.31.250.254:40791 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.31.250.254:40791: connect: Connection refused (error 111)
W20250811 20:47:17.446630 3404 heartbeater.cc:646] Failed to heartbeat to 127.31.250.254:40791 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.31.250.254:40791: connect: Connection refused (error 111)
W20250811 20:47:17.775825 3138 heartbeater.cc:646] Failed to heartbeat to 127.31.250.254:40791 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.31.250.254:40791: connect: Connection refused (error 111)
I20250811 20:47:21.694885 32747 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu with pid 3008
I20250811 20:47:21.790195 32747 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu with pid 3142
I20250811 20:47:21.814900 32747 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu with pid 3275
I20250811 20:47:21.842185 32747 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
/tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.31.250.254:40791
--webserver_interface=127.31.250.254
--webserver_port=46685
--builtin_ntp_servers=127.31.250.212:46869
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.31.250.254:40791 with env {}
W20250811 20:47:22.143973 3545 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:47:22.144567 3545 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 20:47:22.145042 3545 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:47:22.187630 3545 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250811 20:47:22.188092 3545 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 20:47:22.188463 3545 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250811 20:47:22.188815 3545 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250811 20:47:22.227102 3545 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.31.250.212:46869
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.31.250.254:40791
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.31.250.254:40791
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.31.250.254
--webserver_port=46685
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision 8e873eb37b157a0cf7cb97cc7690367de9707107
build type FASTDEBUG
built by None at 11 Aug 2025 20:41:23 UTC on 24a791456cd2
build id 7521
TSAN enabled
I20250811 20:47:22.228428 3545 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 20:47:22.230111 3545 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 20:47:22.240587 3551 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:47:22.241674 3552 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:47:23.576164 3553 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1329 milliseconds
W20250811 20:47:23.577111 3545 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.335s user 0.517s sys 0.812s
W20250811 20:47:23.577504 3545 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.336s user 0.517s sys 0.812s
W20250811 20:47:23.577514 3554 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 20:47:23.577838 3545 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 20:47:23.579514 3545 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 20:47:23.582762 3545 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 20:47:23.584317 3545 hybrid_clock.cc:648] HybridClock initialized: now 1754945243584254 us; error 55 us; skew 500 ppm
I20250811 20:47:23.585505 3545 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 20:47:23.594463 3545 webserver.cc:489] Webserver started at http://127.31.250.254:46685/ using document root <none> and password file <none>
I20250811 20:47:23.595947 3545 fs_manager.cc:362] Metadata directory not provided
I20250811 20:47:23.596273 3545 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 20:47:23.608019 3545 fs_manager.cc:714] Time spent opening directory manager: real 0.006s user 0.007s sys 0.000s
I20250811 20:47:23.614189 3561 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 20:47:23.615551 3545 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.006s sys 0.001s
I20250811 20:47:23.616068 3545 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
uuid: "89c0bbfc378b4a62aaa1e62b1ce1d18c"
format_stamp: "Formatted at 2025-08-11 20:47:09 on dist-test-slave-4gzk"
I20250811 20:47:23.619050 3545 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 20:47:23.713722 3545 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 20:47:23.715196 3545 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 20:47:23.715682 3545 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 20:47:23.786268 3545 rpc_server.cc:307] RPC server started. Bound to: 127.31.250.254:40791
I20250811 20:47:23.786336 3612 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.31.250.254:40791 every 8 connection(s)
I20250811 20:47:23.789552 3545 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250811 20:47:23.792721 32747 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu as pid 3545
I20250811 20:47:23.794713 32747 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
/tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.31.250.193:46671
--local_ip_for_outbound_sockets=127.31.250.193
--tserver_master_addrs=127.31.250.254:40791
--webserver_port=43009
--webserver_interface=127.31.250.193
--builtin_ntp_servers=127.31.250.212:46869
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250811 20:47:23.804296 3613 sys_catalog.cc:263] Verifying existing consensus state
I20250811 20:47:23.809332 3613 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c: Bootstrap starting.
I20250811 20:47:23.818908 3613 log.cc:826] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c: Log is configured to *not* fsync() on all Append() calls
I20250811 20:47:23.865535 3613 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c: Bootstrap replayed 1/1 log segments. Stats: ops{read=18 overwritten=0 applied=18 ignored=0} inserts{seen=13 ignored=0} mutations{seen=10 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250811 20:47:23.866339 3613 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c: Bootstrap complete.
I20250811 20:47:23.886278 3613 raft_consensus.cc:357] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "89c0bbfc378b4a62aaa1e62b1ce1d18c" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 40791 } }
I20250811 20:47:23.888355 3613 raft_consensus.cc:738] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: 89c0bbfc378b4a62aaa1e62b1ce1d18c, State: Initialized, Role: FOLLOWER
I20250811 20:47:23.889078 3613 consensus_queue.cc:260] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 18, Last appended: 2.18, Last appended by leader: 18, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "89c0bbfc378b4a62aaa1e62b1ce1d18c" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 40791 } }
I20250811 20:47:23.889539 3613 raft_consensus.cc:397] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [term 2 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250811 20:47:23.889827 3613 raft_consensus.cc:491] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [term 2 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250811 20:47:23.890130 3613 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [term 2 FOLLOWER]: Advancing to term 3
I20250811 20:47:23.895565 3613 raft_consensus.cc:513] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [term 3 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "89c0bbfc378b4a62aaa1e62b1ce1d18c" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 40791 } }
I20250811 20:47:23.896157 3613 leader_election.cc:304] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [CANDIDATE]: Term 3 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 89c0bbfc378b4a62aaa1e62b1ce1d18c; no voters:
I20250811 20:47:23.898209 3613 leader_election.cc:290] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [CANDIDATE]: Term 3 election: Requested vote from peers
I20250811 20:47:23.898607 3617 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [term 3 FOLLOWER]: Leader election won for term 3
I20250811 20:47:23.901925 3617 raft_consensus.cc:695] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [term 3 LEADER]: Becoming Leader. State: Replica: 89c0bbfc378b4a62aaa1e62b1ce1d18c, State: Running, Role: LEADER
I20250811 20:47:23.902815 3617 consensus_queue.cc:237] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 18, Committed index: 18, Last appended: 2.18, Last appended by leader: 18, Current term: 3, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "89c0bbfc378b4a62aaa1e62b1ce1d18c" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 40791 } }
I20250811 20:47:23.903437 3613 sys_catalog.cc:564] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [sys.catalog]: configured and running, proceeding with master startup.
I20250811 20:47:23.914553 3619 sys_catalog.cc:455] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [sys.catalog]: SysCatalogTable state changed. Reason: New leader 89c0bbfc378b4a62aaa1e62b1ce1d18c. Latest consensus state: current_term: 3 leader_uuid: "89c0bbfc378b4a62aaa1e62b1ce1d18c" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "89c0bbfc378b4a62aaa1e62b1ce1d18c" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 40791 } } }
I20250811 20:47:23.915396 3619 sys_catalog.cc:458] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [sys.catalog]: This master's current role is: LEADER
I20250811 20:47:23.919305 3618 sys_catalog.cc:455] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 3 leader_uuid: "89c0bbfc378b4a62aaa1e62b1ce1d18c" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "89c0bbfc378b4a62aaa1e62b1ce1d18c" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 40791 } } }
I20250811 20:47:23.920214 3618 sys_catalog.cc:458] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [sys.catalog]: This master's current role is: LEADER
I20250811 20:47:23.925616 3625 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250811 20:47:23.938006 3625 catalog_manager.cc:671] Loaded metadata for table TestTable2 [id=1b39bcb5fe7f4558a3be2cc8768cbb40]
I20250811 20:47:23.939749 3625 catalog_manager.cc:671] Loaded metadata for table TestTable [id=279a214015774684b543b1281d04bd33]
I20250811 20:47:23.941452 3625 catalog_manager.cc:671] Loaded metadata for table TestTable1 [id=2aaa6e465449486dba6a24d5928d9cf8]
I20250811 20:47:23.949213 3625 tablet_loader.cc:96] loaded metadata for tablet 3918d98569dd46759251ad45bfa08089 (table TestTable2 [id=1b39bcb5fe7f4558a3be2cc8768cbb40])
I20250811 20:47:23.950502 3625 tablet_loader.cc:96] loaded metadata for tablet 628b4e91e833481a8a537e4947cb870c (table TestTable1 [id=2aaa6e465449486dba6a24d5928d9cf8])
I20250811 20:47:23.951736 3625 tablet_loader.cc:96] loaded metadata for tablet c9fa405f1b20481486824c1627057316 (table TestTable [id=279a214015774684b543b1281d04bd33])
I20250811 20:47:23.953224 3625 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250811 20:47:23.958333 3625 catalog_manager.cc:1261] Loaded cluster ID: d44864ab794f4d4b8dce0658483fdc68
I20250811 20:47:23.958600 3625 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250811 20:47:23.966821 3625 catalog_manager.cc:1506] Loading token signing keys...
I20250811 20:47:23.971997 3625 catalog_manager.cc:5966] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c: Loaded TSK: 0
I20250811 20:47:23.973567 3625 catalog_manager.cc:1516] Initializing in-progress tserver states...
W20250811 20:47:24.141816 3615 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:47:24.142570 3615 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 20:47:24.143136 3615 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:47:24.173836 3615 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 20:47:24.174638 3615 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.31.250.193
I20250811 20:47:24.209136 3615 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.31.250.212:46869
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.31.250.193:46671
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.31.250.193
--webserver_port=43009
--tserver_master_addrs=127.31.250.254:40791
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.31.250.193
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8e873eb37b157a0cf7cb97cc7690367de9707107
build type FASTDEBUG
built by None at 11 Aug 2025 20:41:23 UTC on 24a791456cd2
build id 7521
TSAN enabled
I20250811 20:47:24.210422 3615 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 20:47:24.212074 3615 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 20:47:24.225847 3641 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:47:24.226362 3642 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:47:25.810137 3644 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:47:25.812238 3643 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1582 milliseconds
I20250811 20:47:25.812325 3615 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 20:47:25.813433 3615 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 20:47:25.815500 3615 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 20:47:25.816867 3615 hybrid_clock.cc:648] HybridClock initialized: now 1754945245816796 us; error 71 us; skew 500 ppm
I20250811 20:47:25.817641 3615 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 20:47:25.823491 3615 webserver.cc:489] Webserver started at http://127.31.250.193:43009/ using document root <none> and password file <none>
I20250811 20:47:25.824412 3615 fs_manager.cc:362] Metadata directory not provided
I20250811 20:47:25.824648 3615 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 20:47:25.832322 3615 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.003s sys 0.001s
I20250811 20:47:25.836853 3651 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 20:47:25.837810 3615 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.004s sys 0.000s
I20250811 20:47:25.838120 3615 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "d08ec2a3bb504a1483c931954ffcd43c"
format_stamp: "Formatted at 2025-08-11 20:47:11 on dist-test-slave-4gzk"
I20250811 20:47:25.840034 3615 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 20:47:25.889647 3615 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 20:47:25.891175 3615 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 20:47:25.891623 3615 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 20:47:25.893957 3615 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 20:47:25.899309 3658 ts_tablet_manager.cc:542] Loading tablet metadata (0/1 complete)
I20250811 20:47:25.906718 3615 ts_tablet_manager.cc:579] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20250811 20:47:25.906960 3615 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.009s user 0.002s sys 0.000s
I20250811 20:47:25.907231 3615 ts_tablet_manager.cc:594] Registering tablets (0/1 complete)
I20250811 20:47:25.911722 3615 ts_tablet_manager.cc:610] Registered 1 tablets
I20250811 20:47:25.911908 3615 ts_tablet_manager.cc:589] Time spent register tablets: real 0.005s user 0.002s sys 0.003s
I20250811 20:47:25.912257 3658 tablet_bootstrap.cc:492] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c: Bootstrap starting.
I20250811 20:47:25.966681 3658 log.cc:826] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c: Log is configured to *not* fsync() on all Append() calls
I20250811 20:47:26.050165 3658 tablet_bootstrap.cc:492] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c: Bootstrap replayed 1/1 log segments. Stats: ops{read=6 overwritten=0 applied=6 ignored=0} inserts{seen=250 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250811 20:47:26.051221 3658 tablet_bootstrap.cc:492] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c: Bootstrap complete.
I20250811 20:47:26.052642 3658 ts_tablet_manager.cc:1397] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c: Time spent bootstrapping tablet: real 0.141s user 0.091s sys 0.048s
I20250811 20:47:26.069242 3658 raft_consensus.cc:357] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } }
I20250811 20:47:26.072095 3658 raft_consensus.cc:738] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: d08ec2a3bb504a1483c931954ffcd43c, State: Initialized, Role: FOLLOWER
I20250811 20:47:26.072893 3658 consensus_queue.cc:260] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 6, Last appended: 1.6, Last appended by leader: 6, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } }
I20250811 20:47:26.073612 3658 raft_consensus.cc:397] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250811 20:47:26.073946 3658 raft_consensus.cc:491] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250811 20:47:26.074347 3658 raft_consensus.cc:3058] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c [term 1 FOLLOWER]: Advancing to term 2
I20250811 20:47:26.080636 3658 raft_consensus.cc:513] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } }
I20250811 20:47:26.081250 3658 leader_election.cc:304] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: d08ec2a3bb504a1483c931954ffcd43c; no voters:
I20250811 20:47:26.083572 3658 leader_election.cc:290] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c [CANDIDATE]: Term 2 election: Requested vote from peers
I20250811 20:47:26.083877 3762 raft_consensus.cc:2802] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c [term 2 FOLLOWER]: Leader election won for term 2
I20250811 20:47:26.087227 3762 raft_consensus.cc:695] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c [term 2 LEADER]: Becoming Leader. State: Replica: d08ec2a3bb504a1483c931954ffcd43c, State: Running, Role: LEADER
I20250811 20:47:26.088403 3762 consensus_queue.cc:237] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 6, Committed index: 6, Last appended: 1.6, Last appended by leader: 6, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } }
I20250811 20:47:26.091346 3658 ts_tablet_manager.cc:1428] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c: Time spent starting tablet: real 0.038s user 0.031s sys 0.009s
I20250811 20:47:26.097959 3615 rpc_server.cc:307] RPC server started. Bound to: 127.31.250.193:46671
I20250811 20:47:26.098491 3770 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.31.250.193:46671 every 8 connection(s)
I20250811 20:47:26.100628 3615 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250811 20:47:26.103243 32747 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu as pid 3615
I20250811 20:47:26.105165 32747 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
/tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.31.250.194:38949
--local_ip_for_outbound_sockets=127.31.250.194
--tserver_master_addrs=127.31.250.254:40791
--webserver_port=39741
--webserver_interface=127.31.250.194
--builtin_ntp_servers=127.31.250.212:46869
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250811 20:47:26.128266 3771 heartbeater.cc:344] Connected to a master server at 127.31.250.254:40791
I20250811 20:47:26.128713 3771 heartbeater.cc:461] Registering TS with master...
I20250811 20:47:26.129809 3771 heartbeater.cc:507] Master 127.31.250.254:40791 requested a full tablet report, sending...
I20250811 20:47:26.134184 3578 ts_manager.cc:194] Registered new tserver with Master: d08ec2a3bb504a1483c931954ffcd43c (127.31.250.193:46671)
I20250811 20:47:26.137815 3578 catalog_manager.cc:5582] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c reported cstate change: term changed from 1 to 2. New cstate: current_term: 2 leader_uuid: "d08ec2a3bb504a1483c931954ffcd43c" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } health_report { overall_health: HEALTHY } } }
I20250811 20:47:26.180502 3578 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.31.250.193:39513
I20250811 20:47:26.183789 3771 heartbeater.cc:499] Master 127.31.250.254:40791 was elected leader, sending a full tablet report...
W20250811 20:47:26.421124 3775 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:47:26.421638 3775 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 20:47:26.422148 3775 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:47:26.453018 3775 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 20:47:26.453886 3775 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.31.250.194
I20250811 20:47:26.487828 3775 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.31.250.212:46869
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.31.250.194:38949
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.31.250.194
--webserver_port=39741
--tserver_master_addrs=127.31.250.254:40791
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.31.250.194
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8e873eb37b157a0cf7cb97cc7690367de9707107
build type FASTDEBUG
built by None at 11 Aug 2025 20:41:23 UTC on 24a791456cd2
build id 7521
TSAN enabled
I20250811 20:47:26.489142 3775 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 20:47:26.490721 3775 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 20:47:26.501845 3785 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:47:26.503369 3786 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:47:26.508090 3788 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:47:27.659544 3787 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection time-out
I20250811 20:47:27.659590 3775 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 20:47:27.663175 3775 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 20:47:27.665889 3775 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 20:47:27.667352 3775 hybrid_clock.cc:648] HybridClock initialized: now 1754945247667311 us; error 58 us; skew 500 ppm
I20250811 20:47:27.668145 3775 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 20:47:27.674520 3775 webserver.cc:489] Webserver started at http://127.31.250.194:39741/ using document root <none> and password file <none>
I20250811 20:47:27.675486 3775 fs_manager.cc:362] Metadata directory not provided
I20250811 20:47:27.675719 3775 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 20:47:27.683570 3775 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.007s sys 0.001s
I20250811 20:47:27.688099 3795 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 20:47:27.689059 3775 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.003s sys 0.002s
I20250811 20:47:27.689361 3775 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "8aa039b30ffe49639e3e01dff534f030"
format_stamp: "Formatted at 2025-08-11 20:47:13 on dist-test-slave-4gzk"
I20250811 20:47:27.691068 3775 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 20:47:27.738306 3775 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 20:47:27.739791 3775 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 20:47:27.740197 3775 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 20:47:27.742763 3775 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 20:47:27.748212 3802 ts_tablet_manager.cc:542] Loading tablet metadata (0/1 complete)
I20250811 20:47:27.755215 3775 ts_tablet_manager.cc:579] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20250811 20:47:27.755507 3775 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.009s user 0.002s sys 0.000s
I20250811 20:47:27.755772 3775 ts_tablet_manager.cc:594] Registering tablets (0/1 complete)
I20250811 20:47:27.760293 3775 ts_tablet_manager.cc:610] Registered 1 tablets
I20250811 20:47:27.760483 3775 ts_tablet_manager.cc:589] Time spent register tablets: real 0.005s user 0.004s sys 0.000s
I20250811 20:47:27.760794 3802 tablet_bootstrap.cc:492] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030: Bootstrap starting.
I20250811 20:47:27.826951 3802 log.cc:826] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030: Log is configured to *not* fsync() on all Append() calls
I20250811 20:47:27.925951 3802 tablet_bootstrap.cc:492] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030: Bootstrap replayed 1/1 log segments. Stats: ops{read=8 overwritten=0 applied=8 ignored=0} inserts{seen=350 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250811 20:47:27.926760 3802 tablet_bootstrap.cc:492] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030: Bootstrap complete.
I20250811 20:47:27.928596 3802 ts_tablet_manager.cc:1397] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030: Time spent bootstrapping tablet: real 0.168s user 0.134s sys 0.032s
I20250811 20:47:27.938603 3775 rpc_server.cc:307] RPC server started. Bound to: 127.31.250.194:38949
I20250811 20:47:27.938838 3909 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.31.250.194:38949 every 8 connection(s)
I20250811 20:47:27.941128 3775 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250811 20:47:27.943997 32747 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu as pid 3775
I20250811 20:47:27.945842 32747 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
/tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.31.250.195:46003
--local_ip_for_outbound_sockets=127.31.250.195
--tserver_master_addrs=127.31.250.254:40791
--webserver_port=41367
--webserver_interface=127.31.250.195
--builtin_ntp_servers=127.31.250.212:46869
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250811 20:47:27.944579 3802 raft_consensus.cc:357] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } }
I20250811 20:47:27.947847 3802 raft_consensus.cc:738] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 8aa039b30ffe49639e3e01dff534f030, State: Initialized, Role: FOLLOWER
I20250811 20:47:27.948778 3802 consensus_queue.cc:260] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 8, Last appended: 1.8, Last appended by leader: 8, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } }
I20250811 20:47:27.949496 3802 raft_consensus.cc:397] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250811 20:47:27.949818 3802 raft_consensus.cc:491] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250811 20:47:27.950212 3802 raft_consensus.cc:3058] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [term 1 FOLLOWER]: Advancing to term 2
I20250811 20:47:27.959555 3802 raft_consensus.cc:513] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } }
I20250811 20:47:27.960548 3802 leader_election.cc:304] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 8aa039b30ffe49639e3e01dff534f030; no voters:
I20250811 20:47:27.963552 3802 leader_election.cc:290] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [CANDIDATE]: Term 2 election: Requested vote from peers
I20250811 20:47:27.963863 3915 raft_consensus.cc:2802] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [term 2 FOLLOWER]: Leader election won for term 2
I20250811 20:47:27.969520 3915 raft_consensus.cc:695] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [term 2 LEADER]: Becoming Leader. State: Replica: 8aa039b30ffe49639e3e01dff534f030, State: Running, Role: LEADER
I20250811 20:47:27.970467 3915 consensus_queue.cc:237] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 8, Committed index: 8, Last appended: 1.8, Last appended by leader: 8, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } }
I20250811 20:47:27.975801 3802 ts_tablet_manager.cc:1428] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030: Time spent starting tablet: real 0.047s user 0.034s sys 0.012s
I20250811 20:47:27.978824 3910 heartbeater.cc:344] Connected to a master server at 127.31.250.254:40791
I20250811 20:47:27.979357 3910 heartbeater.cc:461] Registering TS with master...
I20250811 20:47:27.980464 3910 heartbeater.cc:507] Master 127.31.250.254:40791 requested a full tablet report, sending...
I20250811 20:47:27.988415 3578 ts_manager.cc:194] Registered new tserver with Master: 8aa039b30ffe49639e3e01dff534f030 (127.31.250.194:38949)
I20250811 20:47:27.989661 3578 catalog_manager.cc:5582] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 reported cstate change: term changed from 0 to 2, leader changed from <none> to 8aa039b30ffe49639e3e01dff534f030 (127.31.250.194), VOTER 8aa039b30ffe49639e3e01dff534f030 (127.31.250.194) added. New cstate: current_term: 2 leader_uuid: "8aa039b30ffe49639e3e01dff534f030" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } health_report { overall_health: HEALTHY } } }
I20250811 20:47:28.003048 3578 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.31.250.194:49575
I20250811 20:47:28.007930 3910 heartbeater.cc:499] Master 127.31.250.254:40791 was elected leader, sending a full tablet report...
I20250811 20:47:28.018486 3865 consensus_queue.cc:237] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 9, Committed index: 9, Last appended: 2.9, Last appended by leader: 8, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: NON_VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: true } }
I20250811 20:47:28.021724 3916 raft_consensus.cc:2953] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [term 2 LEADER]: Committing config change with OpId 2.10: config changed from index -1 to 10, NON_VOTER d08ec2a3bb504a1483c931954ffcd43c (127.31.250.193) added. New config: { opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: NON_VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: true } } }
I20250811 20:47:28.030607 3562 catalog_manager.cc:5095] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet c9fa405f1b20481486824c1627057316 with cas_config_opid_index -1: ChangeConfig:ADD_PEER:NON_VOTER succeeded (attempt 1)
I20250811 20:47:28.034076 3578 catalog_manager.cc:5582] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 reported cstate change: config changed from index -1 to 10, NON_VOTER d08ec2a3bb504a1483c931954ffcd43c (127.31.250.193) added. New cstate: current_term: 2 leader_uuid: "8aa039b30ffe49639e3e01dff534f030" committed_config { opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: NON_VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: true } health_report { overall_health: UNKNOWN } } }
W20250811 20:47:28.035764 3798 consensus_peers.cc:489] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 -> Peer d08ec2a3bb504a1483c931954ffcd43c (127.31.250.193:46671): Couldn't send request to peer d08ec2a3bb504a1483c931954ffcd43c. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: c9fa405f1b20481486824c1627057316. This is attempt 1: this message will repeat every 5th retry.
W20250811 20:47:28.042009 3578 catalog_manager.cc:5260] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet c9fa405f1b20481486824c1627057316 with cas_config_opid_index 10: no extra replica candidate found for tablet c9fa405f1b20481486824c1627057316 (table TestTable [id=279a214015774684b543b1281d04bd33]): Not found: could not select location for extra replica: not enough tablet servers to satisfy replica placement policy: the total number of registered tablet servers (2) does not allow for adding an extra replica; consider bringing up more to have at least 4 tablet servers up and running
W20250811 20:47:28.270905 3914 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:47:28.271468 3914 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 20:47:28.272006 3914 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:47:28.303287 3914 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 20:47:28.304114 3914 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.31.250.195
I20250811 20:47:28.339304 3914 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.31.250.212:46869
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.31.250.195:46003
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.31.250.195
--webserver_port=41367
--tserver_master_addrs=127.31.250.254:40791
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.31.250.195
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8e873eb37b157a0cf7cb97cc7690367de9707107
build type FASTDEBUG
built by None at 11 Aug 2025 20:41:23 UTC on 24a791456cd2
build id 7521
TSAN enabled
I20250811 20:47:28.340626 3914 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 20:47:28.342175 3914 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 20:47:28.353446 3932 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 20:47:28.580506 3939 ts_tablet_manager.cc:927] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c: Initiating tablet copy from peer 8aa039b30ffe49639e3e01dff534f030 (127.31.250.194:38949)
I20250811 20:47:28.589125 3939 tablet_copy_client.cc:323] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c: tablet copy: Beginning tablet copy session from remote peer at address 127.31.250.194:38949
I20250811 20:47:28.633065 3885 tablet_copy_service.cc:140] P 8aa039b30ffe49639e3e01dff534f030: Received BeginTabletCopySession request for tablet c9fa405f1b20481486824c1627057316 from peer d08ec2a3bb504a1483c931954ffcd43c ({username='slave'} at 127.31.250.193:53675)
I20250811 20:47:28.633870 3885 tablet_copy_service.cc:161] P 8aa039b30ffe49639e3e01dff534f030: Beginning new tablet copy session on tablet c9fa405f1b20481486824c1627057316 from peer d08ec2a3bb504a1483c931954ffcd43c at {username='slave'} at 127.31.250.193:53675: session id = d08ec2a3bb504a1483c931954ffcd43c-c9fa405f1b20481486824c1627057316
I20250811 20:47:28.645432 3885 tablet_copy_source_session.cc:215] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030: Tablet Copy: opened 0 blocks and 1 log segments
I20250811 20:47:28.652734 3939 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet c9fa405f1b20481486824c1627057316. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 20:47:28.676317 3939 tablet_copy_client.cc:806] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c: tablet copy: Starting download of 0 data blocks...
I20250811 20:47:28.677461 3939 tablet_copy_client.cc:670] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c: tablet copy: Starting download of 1 WAL segments...
I20250811 20:47:28.683939 3939 tablet_copy_client.cc:538] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250811 20:47:28.696811 3939 tablet_bootstrap.cc:492] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c: Bootstrap starting.
I20250811 20:47:29.017753 3939 tablet_bootstrap.cc:492] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c: Bootstrap replayed 1/1 log segments. Stats: ops{read=10 overwritten=0 applied=10 ignored=0} inserts{seen=350 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250811 20:47:29.024124 3939 tablet_bootstrap.cc:492] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c: Bootstrap complete.
I20250811 20:47:29.025664 3939 ts_tablet_manager.cc:1397] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c: Time spent bootstrapping tablet: real 0.329s user 0.172s sys 0.011s
I20250811 20:47:29.028806 3939 raft_consensus.cc:357] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c [term 2 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: NON_VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: true } }
I20250811 20:47:29.032694 3939 raft_consensus.cc:738] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c [term 2 LEARNER]: Becoming Follower/Learner. State: Replica: d08ec2a3bb504a1483c931954ffcd43c, State: Initialized, Role: LEARNER
I20250811 20:47:29.033676 3939 consensus_queue.cc:260] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 10, Last appended: 2.10, Last appended by leader: 10, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: NON_VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: true } }
I20250811 20:47:29.043205 3939 ts_tablet_manager.cc:1428] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c: Time spent starting tablet: real 0.016s user 0.002s sys 0.005s
I20250811 20:47:29.056737 3885 tablet_copy_service.cc:342] P 8aa039b30ffe49639e3e01dff534f030: Request end of tablet copy session d08ec2a3bb504a1483c931954ffcd43c-c9fa405f1b20481486824c1627057316 received from {username='slave'} at 127.31.250.193:53675
I20250811 20:47:29.057336 3885 tablet_copy_service.cc:434] P 8aa039b30ffe49639e3e01dff534f030: ending tablet copy session d08ec2a3bb504a1483c931954ffcd43c-c9fa405f1b20481486824c1627057316 on tablet c9fa405f1b20481486824c1627057316 with peer d08ec2a3bb504a1483c931954ffcd43c
I20250811 20:47:29.471181 3721 raft_consensus.cc:1215] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c [term 2 LEARNER]: Deduplicated request from leader. Original: 2.9->[2.10-2.10] Dedup: 2.10->[]
W20250811 20:47:29.756490 3931 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 3914
W20250811 20:47:29.852450 3914 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.498s user 0.439s sys 1.028s
W20250811 20:47:28.355041 3933 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:47:29.852967 3914 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.499s user 0.439s sys 1.029s
W20250811 20:47:29.854961 3935 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:47:29.857467 3934 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1499 milliseconds
I20250811 20:47:29.857537 3914 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 20:47:29.858644 3914 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 20:47:29.860699 3914 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 20:47:29.862027 3914 hybrid_clock.cc:648] HybridClock initialized: now 1754945249861983 us; error 49 us; skew 500 ppm
I20250811 20:47:29.862798 3914 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 20:47:29.868588 3914 webserver.cc:489] Webserver started at http://127.31.250.195:41367/ using document root <none> and password file <none>
I20250811 20:47:29.869613 3914 fs_manager.cc:362] Metadata directory not provided
I20250811 20:47:29.869915 3914 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 20:47:29.877707 3914 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.005s sys 0.001s
I20250811 20:47:29.882205 3950 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 20:47:29.883143 3914 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.002s sys 0.000s
I20250811 20:47:29.883471 3914 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "18398bb77b9544f0bfec984dbe18adc9"
format_stamp: "Formatted at 2025-08-11 20:47:15 on dist-test-slave-4gzk"
I20250811 20:47:29.885386 3914 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 20:47:29.937563 3914 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 20:47:29.938956 3914 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 20:47:29.939424 3914 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 20:47:29.942061 3914 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 20:47:29.949280 3957 ts_tablet_manager.cc:542] Loading tablet metadata (0/1 complete)
I20250811 20:47:29.959758 3914 ts_tablet_manager.cc:579] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20250811 20:47:29.959987 3914 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.012s user 0.002s sys 0.000s
I20250811 20:47:29.960258 3914 ts_tablet_manager.cc:594] Registering tablets (0/1 complete)
I20250811 20:47:29.964948 3914 ts_tablet_manager.cc:610] Registered 1 tablets
I20250811 20:47:29.965148 3914 ts_tablet_manager.cc:589] Time spent register tablets: real 0.005s user 0.004s sys 0.000s
I20250811 20:47:29.965490 3957 tablet_bootstrap.cc:492] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9: Bootstrap starting.
I20250811 20:47:30.021147 3957 log.cc:826] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9: Log is configured to *not* fsync() on all Append() calls
I20250811 20:47:30.058838 4007 raft_consensus.cc:1062] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030: attempting to promote NON_VOTER d08ec2a3bb504a1483c931954ffcd43c to VOTER
I20250811 20:47:30.061508 4007 consensus_queue.cc:237] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 10, Committed index: 10, Last appended: 2.10, Last appended by leader: 8, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } }
I20250811 20:47:30.069621 3721 raft_consensus.cc:1273] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c [term 2 LEARNER]: Refusing update from remote peer 8aa039b30ffe49639e3e01dff534f030: Log matching property violated. Preceding OpId in replica: term: 2 index: 10. Preceding OpId from leader: term: 2 index: 11. (index mismatch)
I20250811 20:47:30.071406 4007 consensus_queue.cc:1035] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [LEADER]: Connected to new peer: Peer: permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 11, Last known committed idx: 10, Time since last communication: 0.001s
I20250811 20:47:30.080410 4005 raft_consensus.cc:2953] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [term 2 LEADER]: Committing config change with OpId 2.11: config changed from index 10 to 11, d08ec2a3bb504a1483c931954ffcd43c (127.31.250.193) changed from NON_VOTER to VOTER. New config: { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } } }
I20250811 20:47:30.081995 3721 raft_consensus.cc:2953] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c [term 2 FOLLOWER]: Committing config change with OpId 2.11: config changed from index 10 to 11, d08ec2a3bb504a1483c931954ffcd43c (127.31.250.193) changed from NON_VOTER to VOTER. New config: { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } } }
I20250811 20:47:30.093439 3577 catalog_manager.cc:5582] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 reported cstate change: config changed from index 10 to 11, d08ec2a3bb504a1483c931954ffcd43c (127.31.250.193) changed from NON_VOTER to VOTER. New cstate: current_term: 2 leader_uuid: "8aa039b30ffe49639e3e01dff534f030" committed_config { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } health_report { overall_health: HEALTHY } } }
I20250811 20:47:30.170426 3957 tablet_bootstrap.cc:492] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9: Bootstrap replayed 1/1 log segments. Stats: ops{read=7 overwritten=0 applied=7 ignored=0} inserts{seen=300 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250811 20:47:30.171607 3957 tablet_bootstrap.cc:492] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9: Bootstrap complete.
I20250811 20:47:30.173555 3957 ts_tablet_manager.cc:1397] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9: Time spent bootstrapping tablet: real 0.208s user 0.136s sys 0.047s
I20250811 20:47:30.193522 3957 raft_consensus.cc:357] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } }
I20250811 20:47:30.196668 3957 raft_consensus.cc:738] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 18398bb77b9544f0bfec984dbe18adc9, State: Initialized, Role: FOLLOWER
I20250811 20:47:30.197431 3957 consensus_queue.cc:260] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 7, Last appended: 1.7, Last appended by leader: 7, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } }
I20250811 20:47:30.197907 3957 raft_consensus.cc:397] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250811 20:47:30.198210 3957 raft_consensus.cc:491] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250811 20:47:30.198549 3957 raft_consensus.cc:3058] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 [term 1 FOLLOWER]: Advancing to term 2
I20250811 20:47:30.202620 3914 rpc_server.cc:307] RPC server started. Bound to: 127.31.250.195:46003
I20250811 20:47:30.202839 4075 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.31.250.195:46003 every 8 connection(s)
I20250811 20:47:30.205029 3914 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250811 20:47:30.205051 3957 raft_consensus.cc:513] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } }
I20250811 20:47:30.205842 3957 leader_election.cc:304] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 18398bb77b9544f0bfec984dbe18adc9; no voters:
I20250811 20:47:30.207942 3957 leader_election.cc:290] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 [CANDIDATE]: Term 2 election: Requested vote from peers
I20250811 20:47:30.208817 32747 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu as pid 3914
I20250811 20:47:30.209129 4077 raft_consensus.cc:2802] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 [term 2 FOLLOWER]: Leader election won for term 2
I20250811 20:47:30.215561 4077 raft_consensus.cc:695] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 [term 2 LEADER]: Becoming Leader. State: Replica: 18398bb77b9544f0bfec984dbe18adc9, State: Running, Role: LEADER
I20250811 20:47:30.216790 4077 consensus_queue.cc:237] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 7, Committed index: 7, Last appended: 1.7, Last appended by leader: 7, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } }
I20250811 20:47:30.223050 3957 ts_tablet_manager.cc:1428] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9: Time spent starting tablet: real 0.049s user 0.037s sys 0.012s
I20250811 20:47:30.254940 4076 heartbeater.cc:344] Connected to a master server at 127.31.250.254:40791
I20250811 20:47:30.255352 4076 heartbeater.cc:461] Registering TS with master...
I20250811 20:47:30.256234 4076 heartbeater.cc:507] Master 127.31.250.254:40791 requested a full tablet report, sending...
I20250811 20:47:30.259629 3577 ts_manager.cc:194] Registered new tserver with Master: 18398bb77b9544f0bfec984dbe18adc9 (127.31.250.195:46003)
I20250811 20:47:30.260888 3577 catalog_manager.cc:5582] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 reported cstate change: term changed from 0 to 2, leader changed from <none> to 18398bb77b9544f0bfec984dbe18adc9 (127.31.250.195), VOTER 18398bb77b9544f0bfec984dbe18adc9 (127.31.250.195) added. New cstate: current_term: 2 leader_uuid: "18398bb77b9544f0bfec984dbe18adc9" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } health_report { overall_health: HEALTHY } } }
I20250811 20:47:30.261914 32747 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250811 20:47:30.265898 32747 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250811 20:47:30.270032 3577 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.31.250.195:53483
W20250811 20:47:30.269855 32747 ts_itest-base.cc:209] found only 2 out of 3 replicas of tablet c9fa405f1b20481486824c1627057316: tablet_id: "c9fa405f1b20481486824c1627057316" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER } interned_replicas { ts_info_idx: 1 role: FOLLOWER }
I20250811 20:47:30.273658 4076 heartbeater.cc:499] Master 127.31.250.254:40791 was elected leader, sending a full tablet report...
I20250811 20:47:30.282984 4031 consensus_queue.cc:237] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 8, Committed index: 8, Last appended: 2.8, Last appended by leader: 7, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: 9 OBSOLETE_local: true peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } } peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: NON_VOTER last_known_addr { host: "127.31.250.194" port: 38949 } attrs { promote: true } }
I20250811 20:47:30.286280 4080 raft_consensus.cc:2953] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 [term 2 LEADER]: Committing config change with OpId 2.9: config changed from index -1 to 9, NON_VOTER 8aa039b30ffe49639e3e01dff534f030 (127.31.250.194) added. New config: { opid_index: 9 OBSOLETE_local: true peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } } peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: NON_VOTER last_known_addr { host: "127.31.250.194" port: 38949 } attrs { promote: true } } }
I20250811 20:47:30.295642 3562 catalog_manager.cc:5095] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet 628b4e91e833481a8a537e4947cb870c with cas_config_opid_index -1: ChangeConfig:ADD_PEER:NON_VOTER succeeded (attempt 1)
W20250811 20:47:30.297549 3951 consensus_peers.cc:489] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 -> Peer 8aa039b30ffe49639e3e01dff534f030 (127.31.250.194:38949): Couldn't send request to peer 8aa039b30ffe49639e3e01dff534f030. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: 628b4e91e833481a8a537e4947cb870c. This is attempt 1: this message will repeat every 5th retry.
I20250811 20:47:30.300999 3577 catalog_manager.cc:5582] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 reported cstate change: config changed from index -1 to 9, NON_VOTER 8aa039b30ffe49639e3e01dff534f030 (127.31.250.194) added. New cstate: current_term: 2 leader_uuid: "18398bb77b9544f0bfec984dbe18adc9" committed_config { opid_index: 9 OBSOLETE_local: true peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: NON_VOTER last_known_addr { host: "127.31.250.194" port: 38949 } attrs { promote: true } health_report { overall_health: UNKNOWN } } }
I20250811 20:47:30.309983 4031 consensus_queue.cc:237] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 9, Committed index: 9, Last appended: 2.9, Last appended by leader: 7, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } } peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: NON_VOTER last_known_addr { host: "127.31.250.194" port: 38949 } attrs { promote: true } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: NON_VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: true } }
I20250811 20:47:30.313141 4080 raft_consensus.cc:2953] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 [term 2 LEADER]: Committing config change with OpId 2.10: config changed from index 9 to 10, NON_VOTER d08ec2a3bb504a1483c931954ffcd43c (127.31.250.193) added. New config: { opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } } peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: NON_VOTER last_known_addr { host: "127.31.250.194" port: 38949 } attrs { promote: true } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: NON_VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: true } } }
W20250811 20:47:30.314918 3951 consensus_peers.cc:489] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 -> Peer 8aa039b30ffe49639e3e01dff534f030 (127.31.250.194:38949): Couldn't send request to peer 8aa039b30ffe49639e3e01dff534f030. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: 628b4e91e833481a8a537e4947cb870c. This is attempt 1: this message will repeat every 5th retry.
I20250811 20:47:30.321460 3562 catalog_manager.cc:5095] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet 628b4e91e833481a8a537e4947cb870c with cas_config_opid_index 9: ChangeConfig:ADD_PEER:NON_VOTER succeeded (attempt 1)
W20250811 20:47:30.323808 3953 consensus_peers.cc:489] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 -> Peer d08ec2a3bb504a1483c931954ffcd43c (127.31.250.193:46671): Couldn't send request to peer d08ec2a3bb504a1483c931954ffcd43c. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: 628b4e91e833481a8a537e4947cb870c. This is attempt 1: this message will repeat every 5th retry.
I20250811 20:47:30.323745 3577 catalog_manager.cc:5582] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 reported cstate change: config changed from index 9 to 10, NON_VOTER d08ec2a3bb504a1483c931954ffcd43c (127.31.250.193) added. New cstate: current_term: 2 leader_uuid: "18398bb77b9544f0bfec984dbe18adc9" committed_config { opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: NON_VOTER last_known_addr { host: "127.31.250.194" port: 38949 } attrs { promote: true } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: NON_VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: true } health_report { overall_health: UNKNOWN } } }
I20250811 20:47:30.339545 3865 consensus_queue.cc:237] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 11, Committed index: 11, Last appended: 2.11, Last appended by leader: 8, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } } peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: NON_VOTER last_known_addr { host: "127.31.250.195" port: 46003 } attrs { promote: true } }
I20250811 20:47:30.342136 3563 catalog_manager.cc:5129] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet c9fa405f1b20481486824c1627057316 with cas_config_opid_index 10: aborting the task: latest config opid_index 11; task opid_index 10
I20250811 20:47:30.344070 3721 raft_consensus.cc:1273] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c [term 2 FOLLOWER]: Refusing update from remote peer 8aa039b30ffe49639e3e01dff534f030: Log matching property violated. Preceding OpId in replica: term: 2 index: 11. Preceding OpId from leader: term: 2 index: 12. (index mismatch)
I20250811 20:47:30.345367 4008 consensus_queue.cc:1035] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [LEADER]: Connected to new peer: Peer: permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 12, Last known committed idx: 11, Time since last communication: 0.001s
I20250811 20:47:30.350639 4005 raft_consensus.cc:2953] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [term 2 LEADER]: Committing config change with OpId 2.12: config changed from index 11 to 12, NON_VOTER 18398bb77b9544f0bfec984dbe18adc9 (127.31.250.195) added. New config: { opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } } peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: NON_VOTER last_known_addr { host: "127.31.250.195" port: 46003 } attrs { promote: true } } }
I20250811 20:47:30.352811 3721 raft_consensus.cc:2953] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c [term 2 FOLLOWER]: Committing config change with OpId 2.12: config changed from index 11 to 12, NON_VOTER 18398bb77b9544f0bfec984dbe18adc9 (127.31.250.195) added. New config: { opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } } peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: NON_VOTER last_known_addr { host: "127.31.250.195" port: 46003 } attrs { promote: true } } }
W20250811 20:47:30.353801 3796 consensus_peers.cc:489] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 -> Peer 18398bb77b9544f0bfec984dbe18adc9 (127.31.250.195:46003): Couldn't send request to peer 18398bb77b9544f0bfec984dbe18adc9. Error code: TABLET_NOT_FOUND (6). Status: Not found: Tablet not found: c9fa405f1b20481486824c1627057316. This is attempt 1: this message will repeat every 5th retry.
I20250811 20:47:30.358681 3562 catalog_manager.cc:5095] ChangeConfig:ADD_PEER:NON_VOTER RPC for tablet c9fa405f1b20481486824c1627057316 with cas_config_opid_index 11: ChangeConfig:ADD_PEER:NON_VOTER succeeded (attempt 4)
I20250811 20:47:30.361997 3577 catalog_manager.cc:5582] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 reported cstate change: config changed from index 11 to 12, NON_VOTER 18398bb77b9544f0bfec984dbe18adc9 (127.31.250.195) added. New cstate: current_term: 2 leader_uuid: "8aa039b30ffe49639e3e01dff534f030" committed_config { opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: NON_VOTER last_known_addr { host: "127.31.250.195" port: 46003 } attrs { promote: true } health_report { overall_health: UNKNOWN } } }
I20250811 20:47:30.708590 4092 ts_tablet_manager.cc:927] T 628b4e91e833481a8a537e4947cb870c P 8aa039b30ffe49639e3e01dff534f030: Initiating tablet copy from peer 18398bb77b9544f0bfec984dbe18adc9 (127.31.250.195:46003)
I20250811 20:47:30.710630 4092 tablet_copy_client.cc:323] T 628b4e91e833481a8a537e4947cb870c P 8aa039b30ffe49639e3e01dff534f030: tablet copy: Beginning tablet copy session from remote peer at address 127.31.250.195:46003
I20250811 20:47:30.712158 4051 tablet_copy_service.cc:140] P 18398bb77b9544f0bfec984dbe18adc9: Received BeginTabletCopySession request for tablet 628b4e91e833481a8a537e4947cb870c from peer 8aa039b30ffe49639e3e01dff534f030 ({username='slave'} at 127.31.250.194:47061)
I20250811 20:47:30.712607 4051 tablet_copy_service.cc:161] P 18398bb77b9544f0bfec984dbe18adc9: Beginning new tablet copy session on tablet 628b4e91e833481a8a537e4947cb870c from peer 8aa039b30ffe49639e3e01dff534f030 at {username='slave'} at 127.31.250.194:47061: session id = 8aa039b30ffe49639e3e01dff534f030-628b4e91e833481a8a537e4947cb870c
I20250811 20:47:30.717012 4051 tablet_copy_source_session.cc:215] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9: Tablet Copy: opened 0 blocks and 1 log segments
I20250811 20:47:30.720165 4092 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 628b4e91e833481a8a537e4947cb870c. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 20:47:30.729749 4092 tablet_copy_client.cc:806] T 628b4e91e833481a8a537e4947cb870c P 8aa039b30ffe49639e3e01dff534f030: tablet copy: Starting download of 0 data blocks...
I20250811 20:47:30.730165 4092 tablet_copy_client.cc:670] T 628b4e91e833481a8a537e4947cb870c P 8aa039b30ffe49639e3e01dff534f030: tablet copy: Starting download of 1 WAL segments...
I20250811 20:47:30.733582 4092 tablet_copy_client.cc:538] T 628b4e91e833481a8a537e4947cb870c P 8aa039b30ffe49639e3e01dff534f030: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250811 20:47:30.738545 4092 tablet_bootstrap.cc:492] T 628b4e91e833481a8a537e4947cb870c P 8aa039b30ffe49639e3e01dff534f030: Bootstrap starting.
I20250811 20:47:30.788242 4095 ts_tablet_manager.cc:927] T c9fa405f1b20481486824c1627057316 P 18398bb77b9544f0bfec984dbe18adc9: Initiating tablet copy from peer 8aa039b30ffe49639e3e01dff534f030 (127.31.250.194:38949)
I20250811 20:47:30.790702 4095 tablet_copy_client.cc:323] T c9fa405f1b20481486824c1627057316 P 18398bb77b9544f0bfec984dbe18adc9: tablet copy: Beginning tablet copy session from remote peer at address 127.31.250.194:38949
I20250811 20:47:30.792383 3885 tablet_copy_service.cc:140] P 8aa039b30ffe49639e3e01dff534f030: Received BeginTabletCopySession request for tablet c9fa405f1b20481486824c1627057316 from peer 18398bb77b9544f0bfec984dbe18adc9 ({username='slave'} at 127.31.250.195:60903)
I20250811 20:47:30.792861 3885 tablet_copy_service.cc:161] P 8aa039b30ffe49639e3e01dff534f030: Beginning new tablet copy session on tablet c9fa405f1b20481486824c1627057316 from peer 18398bb77b9544f0bfec984dbe18adc9 at {username='slave'} at 127.31.250.195:60903: session id = 18398bb77b9544f0bfec984dbe18adc9-c9fa405f1b20481486824c1627057316
I20250811 20:47:30.794319 4097 ts_tablet_manager.cc:927] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c: Initiating tablet copy from peer 18398bb77b9544f0bfec984dbe18adc9 (127.31.250.195:46003)
I20250811 20:47:30.799533 3885 tablet_copy_source_session.cc:215] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030: Tablet Copy: opened 0 blocks and 1 log segments
I20250811 20:47:30.802287 4097 tablet_copy_client.cc:323] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c: tablet copy: Beginning tablet copy session from remote peer at address 127.31.250.195:46003
I20250811 20:47:30.802790 4095 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet c9fa405f1b20481486824c1627057316. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 20:47:30.822165 4095 tablet_copy_client.cc:806] T c9fa405f1b20481486824c1627057316 P 18398bb77b9544f0bfec984dbe18adc9: tablet copy: Starting download of 0 data blocks...
I20250811 20:47:30.822767 4095 tablet_copy_client.cc:670] T c9fa405f1b20481486824c1627057316 P 18398bb77b9544f0bfec984dbe18adc9: tablet copy: Starting download of 1 WAL segments...
I20250811 20:47:30.827021 4095 tablet_copy_client.cc:538] T c9fa405f1b20481486824c1627057316 P 18398bb77b9544f0bfec984dbe18adc9: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250811 20:47:30.833042 4051 tablet_copy_service.cc:140] P 18398bb77b9544f0bfec984dbe18adc9: Received BeginTabletCopySession request for tablet 628b4e91e833481a8a537e4947cb870c from peer d08ec2a3bb504a1483c931954ffcd43c ({username='slave'} at 127.31.250.193:53653)
I20250811 20:47:30.833544 4051 tablet_copy_service.cc:161] P 18398bb77b9544f0bfec984dbe18adc9: Beginning new tablet copy session on tablet 628b4e91e833481a8a537e4947cb870c from peer d08ec2a3bb504a1483c931954ffcd43c at {username='slave'} at 127.31.250.193:53653: session id = d08ec2a3bb504a1483c931954ffcd43c-628b4e91e833481a8a537e4947cb870c
I20250811 20:47:30.835738 4095 tablet_bootstrap.cc:492] T c9fa405f1b20481486824c1627057316 P 18398bb77b9544f0bfec984dbe18adc9: Bootstrap starting.
I20250811 20:47:30.840341 4051 tablet_copy_source_session.cc:215] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9: Tablet Copy: opened 0 blocks and 1 log segments
I20250811 20:47:30.843474 4097 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 628b4e91e833481a8a537e4947cb870c. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 20:47:30.846110 4092 tablet_bootstrap.cc:492] T 628b4e91e833481a8a537e4947cb870c P 8aa039b30ffe49639e3e01dff534f030: Bootstrap replayed 1/1 log segments. Stats: ops{read=10 overwritten=0 applied=10 ignored=0} inserts{seen=300 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250811 20:47:30.846875 4092 tablet_bootstrap.cc:492] T 628b4e91e833481a8a537e4947cb870c P 8aa039b30ffe49639e3e01dff534f030: Bootstrap complete.
I20250811 20:47:30.847430 4092 ts_tablet_manager.cc:1397] T 628b4e91e833481a8a537e4947cb870c P 8aa039b30ffe49639e3e01dff534f030: Time spent bootstrapping tablet: real 0.109s user 0.104s sys 0.008s
I20250811 20:47:30.849686 4092 raft_consensus.cc:357] T 628b4e91e833481a8a537e4947cb870c P 8aa039b30ffe49639e3e01dff534f030 [term 2 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } } peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: NON_VOTER last_known_addr { host: "127.31.250.194" port: 38949 } attrs { promote: true } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: NON_VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: true } }
I20250811 20:47:30.850415 4092 raft_consensus.cc:738] T 628b4e91e833481a8a537e4947cb870c P 8aa039b30ffe49639e3e01dff534f030 [term 2 LEARNER]: Becoming Follower/Learner. State: Replica: 8aa039b30ffe49639e3e01dff534f030, State: Initialized, Role: LEARNER
I20250811 20:47:30.851013 4092 consensus_queue.cc:260] T 628b4e91e833481a8a537e4947cb870c P 8aa039b30ffe49639e3e01dff534f030 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 10, Last appended: 2.10, Last appended by leader: 10, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } } peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: NON_VOTER last_known_addr { host: "127.31.250.194" port: 38949 } attrs { promote: true } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: NON_VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: true } }
I20250811 20:47:30.855382 4092 ts_tablet_manager.cc:1428] T 628b4e91e833481a8a537e4947cb870c P 8aa039b30ffe49639e3e01dff534f030: Time spent starting tablet: real 0.008s user 0.005s sys 0.000s
I20250811 20:47:30.856899 4051 tablet_copy_service.cc:342] P 18398bb77b9544f0bfec984dbe18adc9: Request end of tablet copy session 8aa039b30ffe49639e3e01dff534f030-628b4e91e833481a8a537e4947cb870c received from {username='slave'} at 127.31.250.194:47061
I20250811 20:47:30.857369 4051 tablet_copy_service.cc:434] P 18398bb77b9544f0bfec984dbe18adc9: ending tablet copy session 8aa039b30ffe49639e3e01dff534f030-628b4e91e833481a8a537e4947cb870c on tablet 628b4e91e833481a8a537e4947cb870c with peer 8aa039b30ffe49639e3e01dff534f030
I20250811 20:47:30.862308 4097 tablet_copy_client.cc:806] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c: tablet copy: Starting download of 0 data blocks...
I20250811 20:47:30.862864 4097 tablet_copy_client.cc:670] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c: tablet copy: Starting download of 1 WAL segments...
I20250811 20:47:30.867293 4097 tablet_copy_client.cc:538] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c: tablet copy: Tablet Copy complete. Replacing tablet superblock.
I20250811 20:47:30.875685 4097 tablet_bootstrap.cc:492] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c: Bootstrap starting.
I20250811 20:47:30.957226 4095 tablet_bootstrap.cc:492] T c9fa405f1b20481486824c1627057316 P 18398bb77b9544f0bfec984dbe18adc9: Bootstrap replayed 1/1 log segments. Stats: ops{read=12 overwritten=0 applied=12 ignored=0} inserts{seen=350 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250811 20:47:30.957981 4095 tablet_bootstrap.cc:492] T c9fa405f1b20481486824c1627057316 P 18398bb77b9544f0bfec984dbe18adc9: Bootstrap complete.
I20250811 20:47:30.958516 4095 ts_tablet_manager.cc:1397] T c9fa405f1b20481486824c1627057316 P 18398bb77b9544f0bfec984dbe18adc9: Time spent bootstrapping tablet: real 0.123s user 0.111s sys 0.004s
I20250811 20:47:30.960695 4095 raft_consensus.cc:357] T c9fa405f1b20481486824c1627057316 P 18398bb77b9544f0bfec984dbe18adc9 [term 2 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } } peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: NON_VOTER last_known_addr { host: "127.31.250.195" port: 46003 } attrs { promote: true } }
I20250811 20:47:30.961325 4095 raft_consensus.cc:738] T c9fa405f1b20481486824c1627057316 P 18398bb77b9544f0bfec984dbe18adc9 [term 2 LEARNER]: Becoming Follower/Learner. State: Replica: 18398bb77b9544f0bfec984dbe18adc9, State: Initialized, Role: LEARNER
I20250811 20:47:30.961856 4097 tablet_bootstrap.cc:492] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c: Bootstrap replayed 1/1 log segments. Stats: ops{read=10 overwritten=0 applied=10 ignored=0} inserts{seen=300 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250811 20:47:30.961849 4095 consensus_queue.cc:260] T c9fa405f1b20481486824c1627057316 P 18398bb77b9544f0bfec984dbe18adc9 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 12, Last appended: 2.12, Last appended by leader: 12, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } } peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: NON_VOTER last_known_addr { host: "127.31.250.195" port: 46003 } attrs { promote: true } }
I20250811 20:47:30.962396 4097 tablet_bootstrap.cc:492] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c: Bootstrap complete.
I20250811 20:47:30.962868 4097 ts_tablet_manager.cc:1397] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c: Time spent bootstrapping tablet: real 0.087s user 0.068s sys 0.015s
I20250811 20:47:30.963912 4095 ts_tablet_manager.cc:1428] T c9fa405f1b20481486824c1627057316 P 18398bb77b9544f0bfec984dbe18adc9: Time spent starting tablet: real 0.005s user 0.008s sys 0.000s
I20250811 20:47:30.965548 3885 tablet_copy_service.cc:342] P 8aa039b30ffe49639e3e01dff534f030: Request end of tablet copy session 18398bb77b9544f0bfec984dbe18adc9-c9fa405f1b20481486824c1627057316 received from {username='slave'} at 127.31.250.195:60903
I20250811 20:47:30.965934 3885 tablet_copy_service.cc:434] P 8aa039b30ffe49639e3e01dff534f030: ending tablet copy session 18398bb77b9544f0bfec984dbe18adc9-c9fa405f1b20481486824c1627057316 on tablet c9fa405f1b20481486824c1627057316 with peer 18398bb77b9544f0bfec984dbe18adc9
I20250811 20:47:30.965977 4097 raft_consensus.cc:357] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c [term 2 LEARNER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } } peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: NON_VOTER last_known_addr { host: "127.31.250.194" port: 38949 } attrs { promote: true } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: NON_VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: true } }
I20250811 20:47:30.966616 4097 raft_consensus.cc:738] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c [term 2 LEARNER]: Becoming Follower/Learner. State: Replica: d08ec2a3bb504a1483c931954ffcd43c, State: Initialized, Role: LEARNER
I20250811 20:47:30.967121 4097 consensus_queue.cc:260] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 10, Last appended: 2.10, Last appended by leader: 10, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 10 OBSOLETE_local: true peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } } peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: NON_VOTER last_known_addr { host: "127.31.250.194" port: 38949 } attrs { promote: true } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: NON_VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: true } }
I20250811 20:47:30.970800 4097 ts_tablet_manager.cc:1428] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c: Time spent starting tablet: real 0.008s user 0.006s sys 0.000s
I20250811 20:47:30.972219 4051 tablet_copy_service.cc:342] P 18398bb77b9544f0bfec984dbe18adc9: Request end of tablet copy session d08ec2a3bb504a1483c931954ffcd43c-628b4e91e833481a8a537e4947cb870c received from {username='slave'} at 127.31.250.193:53653
I20250811 20:47:30.972589 4051 tablet_copy_service.cc:434] P 18398bb77b9544f0bfec984dbe18adc9: ending tablet copy session d08ec2a3bb504a1483c931954ffcd43c-628b4e91e833481a8a537e4947cb870c on tablet 628b4e91e833481a8a537e4947cb870c with peer d08ec2a3bb504a1483c931954ffcd43c
I20250811 20:47:31.195115 3865 raft_consensus.cc:1215] T 628b4e91e833481a8a537e4947cb870c P 8aa039b30ffe49639e3e01dff534f030 [term 2 LEARNER]: Deduplicated request from leader. Original: 2.9->[2.10-2.10] Dedup: 2.10->[]
I20250811 20:47:31.262338 4031 raft_consensus.cc:1215] T c9fa405f1b20481486824c1627057316 P 18398bb77b9544f0bfec984dbe18adc9 [term 2 LEARNER]: Deduplicated request from leader. Original: 2.11->[2.12-2.12] Dedup: 2.12->[]
I20250811 20:47:31.274263 32747 ts_itest-base.cc:246] Waiting for 1 tablets on tserver d08ec2a3bb504a1483c931954ffcd43c to finish bootstrapping
I20250811 20:47:31.293840 32747 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 8aa039b30ffe49639e3e01dff534f030 to finish bootstrapping
I20250811 20:47:31.304953 32747 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 18398bb77b9544f0bfec984dbe18adc9 to finish bootstrapping
I20250811 20:47:31.417485 3721 raft_consensus.cc:1215] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c [term 2 LEARNER]: Deduplicated request from leader. Original: 2.9->[2.10-2.10] Dedup: 2.10->[]
I20250811 20:47:31.560112 4000 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250811 20:47:31.563931 3845 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250811 20:47:31.570147 3701 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250811 20:47:31.664582 4105 raft_consensus.cc:1062] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030: attempting to promote NON_VOTER 18398bb77b9544f0bfec984dbe18adc9 to VOTER
I20250811 20:47:31.667932 4105 consensus_queue.cc:237] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 12, Committed index: 12, Last appended: 2.12, Last appended by leader: 8, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } } peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } attrs { promote: false } }
I20250811 20:47:31.683705 4031 raft_consensus.cc:1273] T c9fa405f1b20481486824c1627057316 P 18398bb77b9544f0bfec984dbe18adc9 [term 2 LEARNER]: Refusing update from remote peer 8aa039b30ffe49639e3e01dff534f030: Log matching property violated. Preceding OpId in replica: term: 2 index: 12. Preceding OpId from leader: term: 2 index: 13. (index mismatch)
I20250811 20:47:31.685273 4005 consensus_queue.cc:1035] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [LEADER]: Connected to new peer: Peer: permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 13, Last known committed idx: 12, Time since last communication: 0.000s
I20250811 20:47:31.694514 3721 raft_consensus.cc:1273] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c [term 2 FOLLOWER]: Refusing update from remote peer 8aa039b30ffe49639e3e01dff534f030: Log matching property violated. Preceding OpId in replica: term: 2 index: 12. Preceding OpId from leader: term: 2 index: 13. (index mismatch)
I20250811 20:47:31.695869 4105 consensus_queue.cc:1035] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [LEADER]: Connected to new peer: Peer: permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 13, Last known committed idx: 12, Time since last communication: 0.000s
I20250811 20:47:31.711190 4105 raft_consensus.cc:2953] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [term 2 LEADER]: Committing config change with OpId 2.13: config changed from index 12 to 13, 18398bb77b9544f0bfec984dbe18adc9 (127.31.250.195) changed from NON_VOTER to VOTER. New config: { opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } } peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } attrs { promote: false } } }
I20250811 20:47:31.725010 4031 raft_consensus.cc:2953] T c9fa405f1b20481486824c1627057316 P 18398bb77b9544f0bfec984dbe18adc9 [term 2 FOLLOWER]: Committing config change with OpId 2.13: config changed from index 12 to 13, 18398bb77b9544f0bfec984dbe18adc9 (127.31.250.195) changed from NON_VOTER to VOTER. New config: { opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } } peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } attrs { promote: false } } }
I20250811 20:47:31.738843 3578 catalog_manager.cc:5582] T c9fa405f1b20481486824c1627057316 P 18398bb77b9544f0bfec984dbe18adc9 reported cstate change: config changed from index 12 to 13, 18398bb77b9544f0bfec984dbe18adc9 (127.31.250.195) changed from NON_VOTER to VOTER. New cstate: current_term: 2 leader_uuid: "8aa039b30ffe49639e3e01dff534f030" committed_config { opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } } peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } attrs { promote: false } } }
I20250811 20:47:31.741513 3721 raft_consensus.cc:2953] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c [term 2 FOLLOWER]: Committing config change with OpId 2.13: config changed from index 12 to 13, 18398bb77b9544f0bfec984dbe18adc9 (127.31.250.195) changed from NON_VOTER to VOTER. New config: { opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } } peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } attrs { promote: false } } }
I20250811 20:47:31.793533 4104 raft_consensus.cc:1062] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9: attempting to promote NON_VOTER 8aa039b30ffe49639e3e01dff534f030 to VOTER
I20250811 20:47:31.796175 4104 consensus_queue.cc:237] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 10, Committed index: 10, Last appended: 2.10, Last appended by leader: 7, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } } peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } attrs { promote: false } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: NON_VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: true } }
I20250811 20:47:31.802094 3721 raft_consensus.cc:1273] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c [term 2 LEARNER]: Refusing update from remote peer 18398bb77b9544f0bfec984dbe18adc9: Log matching property violated. Preceding OpId in replica: term: 2 index: 10. Preceding OpId from leader: term: 2 index: 11. (index mismatch)
I20250811 20:47:31.803659 3865 raft_consensus.cc:1273] T 628b4e91e833481a8a537e4947cb870c P 8aa039b30ffe49639e3e01dff534f030 [term 2 LEARNER]: Refusing update from remote peer 18398bb77b9544f0bfec984dbe18adc9: Log matching property violated. Preceding OpId in replica: term: 2 index: 10. Preceding OpId from leader: term: 2 index: 11. (index mismatch)
I20250811 20:47:31.803545 4080 consensus_queue.cc:1035] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 [LEADER]: Connected to new peer: Peer: permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: NON_VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: true }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 11, Last known committed idx: 10, Time since last communication: 0.000s
I20250811 20:47:31.805207 4080 consensus_queue.cc:1035] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 [LEADER]: Connected to new peer: Peer: permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 11, Last known committed idx: 10, Time since last communication: 0.000s
Master Summary
UUID | Address | Status
----------------------------------+----------------------+---------
89c0bbfc378b4a62aaa1e62b1ce1d18c | 127.31.250.254:40791 | HEALTHY
Unusual flags for Master:
Flag | Value | Tags | Master
----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
ipki_ca_key_size | 768 | experimental | all 1 server(s) checked
ipki_server_key_size | 768 | experimental | all 1 server(s) checked
never_fsync | true | unsafe,advanced | all 1 server(s) checked
openssl_security_level_override | 0 | unsafe,hidden | all 1 server(s) checked
rpc_reuseport | true | experimental | all 1 server(s) checked
rpc_server_allow_ephemeral_ports | true | unsafe | all 1 server(s) checked
server_dump_info_format | pb | hidden | all 1 server(s) checked
server_dump_info_path | /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data/info.pb | hidden | all 1 server(s) checked
tsk_num_rsa_bits | 512 | experimental | all 1 server(s) checked
Flags of checked categories for Master:
Flag | Value | Master
---------------------+----------------------+-------------------------
builtin_ntp_servers | 127.31.250.212:46869 | all 1 server(s) checked
time_source | builtin | all 1 server(s) checked
Tablet Server Summary
UUID | Address | Status | Location | Tablet Leaders | Active Scanners
----------------------------------+----------------------+---------+----------+----------------+-----------------
18398bb77b9544f0bfec984dbe18adc9 | 127.31.250.195:46003 | HEALTHY | <none> | 1 | 0
8aa039b30ffe49639e3e01dff534f030 | 127.31.250.194:38949 | HEALTHY | <none> | 1 | 0
d08ec2a3bb504a1483c931954ffcd43c | 127.31.250.193:46671 | HEALTHY | <none> | 1 | 0
Tablet Server Location Summary
Location | Count
----------+---------
<none> | 3
I20250811 20:47:31.818334 4080 raft_consensus.cc:1025] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 [term 2 LEADER]: attempt to promote peer d08ec2a3bb504a1483c931954ffcd43c: there is already a config change operation in progress. Unable to promote follower until it completes. Doing nothing.
Unusual flags for Tablet Server:
Flag | Value | Tags | Tablet Server
----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
ipki_server_key_size | 768 | experimental | all 3 server(s) checked
local_ip_for_outbound_sockets | 127.31.250.193 | experimental | 127.31.250.193:46671
local_ip_for_outbound_sockets | 127.31.250.194 | experimental | 127.31.250.194:38949
local_ip_for_outbound_sockets | 127.31.250.195 | experimental | 127.31.250.195:46003
never_fsync | true | unsafe,advanced | all 3 server(s) checked
openssl_security_level_override | 0 | unsafe,hidden | all 3 server(s) checked
rpc_server_allow_ephemeral_ports | true | unsafe | all 3 server(s) checked
server_dump_info_format | pb | hidden | all 3 server(s) checked
server_dump_info_path | /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data/info.pb | hidden | 127.31.250.193:46671
server_dump_info_path | /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data/info.pb | hidden | 127.31.250.194:38949
server_dump_info_path | /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data/info.pb | hidden | 127.31.250.195:46003
Flags of checked categories for Tablet Server:
Flag | Value | Tablet Server
---------------------+----------------------+-------------------------
builtin_ntp_servers | 127.31.250.212:46869 | all 3 server(s) checked
time_source | builtin | all 3 server(s) checked
Version Summary
Version | Servers
-----------------+-------------------------
1.19.0-SNAPSHOT | all 4 server(s) checked
Tablet Summary
The cluster doesn't have any matching system tables
Summary by table
Name | RF | Status | Total Tablets | Healthy | Recovering | Under-replicated | Unavailable
------------+----+---------+---------------+---------+------------+------------------+-------------
TestTable | 3 | HEALTHY | 1 | 1 | 0 | 0 | 0
TestTable1 | 3 | HEALTHY | 1 | 1 | 0 | 0 | 0
TestTable2 | 1 | HEALTHY | 1 | 1 | 0 | 0 | 0
I20250811 20:47:31.820899 4103 raft_consensus.cc:2953] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 [term 2 LEADER]: Committing config change with OpId 2.11: config changed from index 10 to 11, 8aa039b30ffe49639e3e01dff534f030 (127.31.250.194) changed from NON_VOTER to VOTER. New config: { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } } peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } attrs { promote: false } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: NON_VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: true } } }
Tablet Replica Count Summary
Statistic | Replica Count
----------------+---------------
Minimum | 2
First Quartile | 2
Median | 2
Third Quartile | 3
Maximum | 3
Total Count Summary
| Total Count
----------------+-------------
Masters | 1
Tablet Servers | 3
Tables | 3
Tablets | 3
Replicas | 7
==================
Warnings:
==================
Some masters have unsafe, experimental, or hidden flags set
Some tablet servers have unsafe, experimental, or hidden flags set
OK
I20250811 20:47:31.830135 3865 raft_consensus.cc:2953] T 628b4e91e833481a8a537e4947cb870c P 8aa039b30ffe49639e3e01dff534f030 [term 2 FOLLOWER]: Committing config change with OpId 2.11: config changed from index 10 to 11, 8aa039b30ffe49639e3e01dff534f030 (127.31.250.194) changed from NON_VOTER to VOTER. New config: { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } } peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } attrs { promote: false } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: NON_VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: true } } }
I20250811 20:47:31.833774 32747 log_verifier.cc:126] Checking tablet 3918d98569dd46759251ad45bfa08089
I20250811 20:47:31.835999 3721 raft_consensus.cc:2953] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c [term 2 LEARNER]: Committing config change with OpId 2.11: config changed from index 10 to 11, 8aa039b30ffe49639e3e01dff534f030 (127.31.250.194) changed from NON_VOTER to VOTER. New config: { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } } peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } attrs { promote: false } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: NON_VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: true } } }
I20250811 20:47:31.839844 3576 catalog_manager.cc:5582] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 reported cstate change: config changed from index 10 to 11, 8aa039b30ffe49639e3e01dff534f030 (127.31.250.194) changed from NON_VOTER to VOTER. New cstate: current_term: 2 leader_uuid: "18398bb77b9544f0bfec984dbe18adc9" committed_config { opid_index: 11 OBSOLETE_local: true peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } attrs { promote: false } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: NON_VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: true } health_report { overall_health: HEALTHY } } }
I20250811 20:47:31.852778 4103 raft_consensus.cc:1062] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9: attempting to promote NON_VOTER d08ec2a3bb504a1483c931954ffcd43c to VOTER
I20250811 20:47:31.855065 4103 consensus_queue.cc:237] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 11, Committed index: 11, Last appended: 2.11, Last appended by leader: 7, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } } peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } attrs { promote: false } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } }
I20250811 20:47:31.862723 3720 raft_consensus.cc:1273] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c [term 2 LEARNER]: Refusing update from remote peer 18398bb77b9544f0bfec984dbe18adc9: Log matching property violated. Preceding OpId in replica: term: 2 index: 11. Preceding OpId from leader: term: 2 index: 12. (index mismatch)
I20250811 20:47:31.863962 3865 raft_consensus.cc:1273] T 628b4e91e833481a8a537e4947cb870c P 8aa039b30ffe49639e3e01dff534f030 [term 2 FOLLOWER]: Refusing update from remote peer 18398bb77b9544f0bfec984dbe18adc9: Log matching property violated. Preceding OpId in replica: term: 2 index: 11. Preceding OpId from leader: term: 2 index: 12. (index mismatch)
I20250811 20:47:31.865455 4080 consensus_queue.cc:1035] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 [LEADER]: Connected to new peer: Peer: permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 12, Last known committed idx: 11, Time since last communication: 0.000s
I20250811 20:47:31.866294 4104 consensus_queue.cc:1035] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 [LEADER]: Connected to new peer: Peer: permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 12, Last known committed idx: 11, Time since last communication: 0.000s
I20250811 20:47:31.873458 4080 raft_consensus.cc:2953] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 [term 2 LEADER]: Committing config change with OpId 2.12: config changed from index 11 to 12, d08ec2a3bb504a1483c931954ffcd43c (127.31.250.193) changed from NON_VOTER to VOTER. New config: { opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } } peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } attrs { promote: false } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } } }
I20250811 20:47:31.875062 3864 raft_consensus.cc:2953] T 628b4e91e833481a8a537e4947cb870c P 8aa039b30ffe49639e3e01dff534f030 [term 2 FOLLOWER]: Committing config change with OpId 2.12: config changed from index 11 to 12, d08ec2a3bb504a1483c931954ffcd43c (127.31.250.193) changed from NON_VOTER to VOTER. New config: { opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } } peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } attrs { promote: false } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } } }
I20250811 20:47:31.876647 3721 raft_consensus.cc:2953] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c [term 2 FOLLOWER]: Committing config change with OpId 2.12: config changed from index 11 to 12, d08ec2a3bb504a1483c931954ffcd43c (127.31.250.193) changed from NON_VOTER to VOTER. New config: { opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } } peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } attrs { promote: false } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } } }
I20250811 20:47:31.884704 3576 catalog_manager.cc:5582] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 reported cstate change: config changed from index 11 to 12, d08ec2a3bb504a1483c931954ffcd43c (127.31.250.193) changed from NON_VOTER to VOTER. New cstate: current_term: 2 leader_uuid: "18398bb77b9544f0bfec984dbe18adc9" committed_config { opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } attrs { promote: false } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } health_report { overall_health: HEALTHY } } }
I20250811 20:47:31.888696 32747 log_verifier.cc:177] Verified matching terms for 7 ops in tablet 3918d98569dd46759251ad45bfa08089
I20250811 20:47:31.889019 32747 log_verifier.cc:126] Checking tablet 628b4e91e833481a8a537e4947cb870c
I20250811 20:47:31.983412 32747 log_verifier.cc:177] Verified matching terms for 12 ops in tablet 628b4e91e833481a8a537e4947cb870c
I20250811 20:47:31.983650 32747 log_verifier.cc:126] Checking tablet c9fa405f1b20481486824c1627057316
I20250811 20:47:32.067720 32747 log_verifier.cc:177] Verified matching terms for 13 ops in tablet c9fa405f1b20481486824c1627057316
I20250811 20:47:32.068145 32747 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu with pid 3545
I20250811 20:47:32.099778 32747 minidump.cc:252] Setting minidump size limit to 20M
I20250811 20:47:32.101375 32747 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 20:47:32.102767 32747 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 20:47:32.114950 4146 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:47:32.119091 4149 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:47:32.115896 4147 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 20:47:32.189682 32747 server_base.cc:1047] running on GCE node
I20250811 20:47:32.190892 32747 hybrid_clock.cc:584] initializing the hybrid clock with 'system_unsync' time source
W20250811 20:47:32.191100 32747 system_unsync_time.cc:38] NTP support is disabled. Clock error bounds will not be accurate. This configuration is not suitable for distributed clusters.
I20250811 20:47:32.191304 32747 hybrid_clock.cc:648] HybridClock initialized: now 1754945252191279 us; error 0 us; skew 500 ppm
I20250811 20:47:32.191946 32747 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 20:47:32.195056 32747 webserver.cc:489] Webserver started at http://0.0.0.0:43847/ using document root <none> and password file <none>
I20250811 20:47:32.195902 32747 fs_manager.cc:362] Metadata directory not provided
I20250811 20:47:32.196095 32747 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 20:47:32.201356 32747 fs_manager.cc:714] Time spent opening directory manager: real 0.004s user 0.005s sys 0.000s
I20250811 20:47:32.205229 4154 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 20:47:32.206211 32747 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.004s sys 0.000s
I20250811 20:47:32.206521 32747 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
uuid: "89c0bbfc378b4a62aaa1e62b1ce1d18c"
format_stamp: "Formatted at 2025-08-11 20:47:09 on dist-test-slave-4gzk"
I20250811 20:47:32.208282 32747 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 20:47:32.269760 32747 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 20:47:32.271279 32747 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 20:47:32.271683 32747 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 20:47:32.281080 32747 sys_catalog.cc:263] Verifying existing consensus state
W20250811 20:47:32.285346 32747 sys_catalog.cc:243] For a single master config, on-disk Raft master: 127.31.250.254:40791 exists but no master address supplied!
I20250811 20:47:32.287590 32747 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c: Bootstrap starting.
I20250811 20:47:32.332103 32747 log.cc:826] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c: Log is configured to *not* fsync() on all Append() calls
I20250811 20:47:32.395015 32747 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c: Bootstrap replayed 1/1 log segments. Stats: ops{read=30 overwritten=0 applied=30 ignored=0} inserts{seen=13 ignored=0} mutations{seen=21 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250811 20:47:32.395838 32747 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c: Bootstrap complete.
I20250811 20:47:32.409475 32747 raft_consensus.cc:357] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [term 3 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "89c0bbfc378b4a62aaa1e62b1ce1d18c" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 40791 } }
I20250811 20:47:32.410081 32747 raft_consensus.cc:738] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [term 3 FOLLOWER]: Becoming Follower/Learner. State: Replica: 89c0bbfc378b4a62aaa1e62b1ce1d18c, State: Initialized, Role: FOLLOWER
I20250811 20:47:32.410840 32747 consensus_queue.cc:260] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 30, Last appended: 3.30, Last appended by leader: 30, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "89c0bbfc378b4a62aaa1e62b1ce1d18c" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 40791 } }
I20250811 20:47:32.411365 32747 raft_consensus.cc:397] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [term 3 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250811 20:47:32.411616 32747 raft_consensus.cc:491] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [term 3 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250811 20:47:32.411940 32747 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [term 3 FOLLOWER]: Advancing to term 4
I20250811 20:47:32.417380 32747 raft_consensus.cc:513] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [term 4 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "89c0bbfc378b4a62aaa1e62b1ce1d18c" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 40791 } }
I20250811 20:47:32.418097 32747 leader_election.cc:304] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [CANDIDATE]: Term 4 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 89c0bbfc378b4a62aaa1e62b1ce1d18c; no voters:
I20250811 20:47:32.419308 32747 leader_election.cc:290] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [CANDIDATE]: Term 4 election: Requested vote from peers
I20250811 20:47:32.419589 4161 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [term 4 FOLLOWER]: Leader election won for term 4
I20250811 20:47:32.420964 4161 raft_consensus.cc:695] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [term 4 LEADER]: Becoming Leader. State: Replica: 89c0bbfc378b4a62aaa1e62b1ce1d18c, State: Running, Role: LEADER
I20250811 20:47:32.421766 4161 consensus_queue.cc:237] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 30, Committed index: 30, Last appended: 3.30, Last appended by leader: 30, Current term: 4, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "89c0bbfc378b4a62aaa1e62b1ce1d18c" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 40791 } }
I20250811 20:47:32.428323 4162 sys_catalog.cc:455] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 4 leader_uuid: "89c0bbfc378b4a62aaa1e62b1ce1d18c" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "89c0bbfc378b4a62aaa1e62b1ce1d18c" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 40791 } } }
I20250811 20:47:32.428915 4162 sys_catalog.cc:458] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [sys.catalog]: This master's current role is: LEADER
I20250811 20:47:32.429607 4163 sys_catalog.cc:455] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [sys.catalog]: SysCatalogTable state changed. Reason: New leader 89c0bbfc378b4a62aaa1e62b1ce1d18c. Latest consensus state: current_term: 4 leader_uuid: "89c0bbfc378b4a62aaa1e62b1ce1d18c" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "89c0bbfc378b4a62aaa1e62b1ce1d18c" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 40791 } } }
I20250811 20:47:32.430096 4163 sys_catalog.cc:458] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [sys.catalog]: This master's current role is: LEADER
I20250811 20:47:32.456578 32747 tablet_replica.cc:331] stopping tablet replica
I20250811 20:47:32.457175 32747 raft_consensus.cc:2241] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [term 4 LEADER]: Raft consensus shutting down.
I20250811 20:47:32.457605 32747 raft_consensus.cc:2270] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [term 4 FOLLOWER]: Raft consensus is shut down!
I20250811 20:47:32.459949 32747 master.cc:561] Master@0.0.0.0:7051 shutting down...
W20250811 20:47:32.460431 32747 acceptor_pool.cc:196] Could not shut down acceptor socket on 0.0.0.0:7051: Network error: shutdown error: Transport endpoint is not connected (error 107)
I20250811 20:47:32.485908 32747 master.cc:583] Master@0.0.0.0:7051 shutdown complete.
W20250811 20:47:32.911056 3910 heartbeater.cc:646] Failed to heartbeat to 127.31.250.254:40791 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.31.250.254:40791: connect: Connection refused (error 111)
W20250811 20:47:32.913568 3771 heartbeater.cc:646] Failed to heartbeat to 127.31.250.254:40791 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.31.250.254:40791: connect: Connection refused (error 111)
W20250811 20:47:33.912814 4076 heartbeater.cc:646] Failed to heartbeat to 127.31.250.254:40791 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.31.250.254:40791: connect: Connection refused (error 111)
I20250811 20:47:37.558769 32747 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu with pid 3615
I20250811 20:47:37.584537 32747 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu with pid 3775
I20250811 20:47:37.611191 32747 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu with pid 3914
I20250811 20:47:37.640843 32747 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
/tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.31.250.254:40791
--webserver_interface=127.31.250.254
--webserver_port=46685
--builtin_ntp_servers=127.31.250.212:46869
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.31.250.254:40791 with env {}
W20250811 20:47:37.939651 4236 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:47:37.940263 4236 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 20:47:37.940707 4236 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:47:37.971917 4236 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250811 20:47:37.972344 4236 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 20:47:37.972733 4236 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250811 20:47:37.973088 4236 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250811 20:47:38.007956 4236 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.31.250.212:46869
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.31.250.254:40791
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.31.250.254:40791
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.31.250.254
--webserver_port=46685
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision 8e873eb37b157a0cf7cb97cc7690367de9707107
build type FASTDEBUG
built by None at 11 Aug 2025 20:41:23 UTC on 24a791456cd2
build id 7521
TSAN enabled
I20250811 20:47:38.009418 4236 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 20:47:38.011026 4236 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 20:47:38.020849 4242 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:47:38.022739 4243 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:47:38.034765 4245 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:47:39.237293 4244 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1211 milliseconds
I20250811 20:47:39.237406 4236 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 20:47:39.238608 4236 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 20:47:39.241762 4236 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 20:47:39.243180 4236 hybrid_clock.cc:648] HybridClock initialized: now 1754945259243144 us; error 59 us; skew 500 ppm
I20250811 20:47:39.244014 4236 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 20:47:39.250545 4236 webserver.cc:489] Webserver started at http://127.31.250.254:46685/ using document root <none> and password file <none>
I20250811 20:47:39.251415 4236 fs_manager.cc:362] Metadata directory not provided
I20250811 20:47:39.251652 4236 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 20:47:39.259130 4236 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.004s sys 0.003s
I20250811 20:47:39.263525 4252 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 20:47:39.264531 4236 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.004s sys 0.000s
I20250811 20:47:39.264835 4236 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
uuid: "89c0bbfc378b4a62aaa1e62b1ce1d18c"
format_stamp: "Formatted at 2025-08-11 20:47:09 on dist-test-slave-4gzk"
I20250811 20:47:39.266774 4236 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 20:47:39.319368 4236 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 20:47:39.320816 4236 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 20:47:39.321236 4236 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 20:47:39.392356 4236 rpc_server.cc:307] RPC server started. Bound to: 127.31.250.254:40791
I20250811 20:47:39.392427 4303 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.31.250.254:40791 every 8 connection(s)
I20250811 20:47:39.395154 4236 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250811 20:47:39.403599 32747 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu as pid 4236
I20250811 20:47:39.405169 32747 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
/tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.31.250.193:46671
--local_ip_for_outbound_sockets=127.31.250.193
--tserver_master_addrs=127.31.250.254:40791
--webserver_port=43009
--webserver_interface=127.31.250.193
--builtin_ntp_servers=127.31.250.212:46869
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250811 20:47:39.405261 4304 sys_catalog.cc:263] Verifying existing consensus state
I20250811 20:47:39.409924 4304 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c: Bootstrap starting.
I20250811 20:47:39.419842 4304 log.cc:826] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c: Log is configured to *not* fsync() on all Append() calls
I20250811 20:47:39.496544 4304 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c: Bootstrap replayed 1/1 log segments. Stats: ops{read=34 overwritten=0 applied=34 ignored=0} inserts{seen=15 ignored=0} mutations{seen=23 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250811 20:47:39.497316 4304 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c: Bootstrap complete.
I20250811 20:47:39.515889 4304 raft_consensus.cc:357] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [term 5 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "89c0bbfc378b4a62aaa1e62b1ce1d18c" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 40791 } }
I20250811 20:47:39.517879 4304 raft_consensus.cc:738] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [term 5 FOLLOWER]: Becoming Follower/Learner. State: Replica: 89c0bbfc378b4a62aaa1e62b1ce1d18c, State: Initialized, Role: FOLLOWER
I20250811 20:47:39.518628 4304 consensus_queue.cc:260] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 34, Last appended: 5.34, Last appended by leader: 34, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "89c0bbfc378b4a62aaa1e62b1ce1d18c" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 40791 } }
I20250811 20:47:39.519083 4304 raft_consensus.cc:397] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [term 5 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250811 20:47:39.519346 4304 raft_consensus.cc:491] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [term 5 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250811 20:47:39.519639 4304 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [term 5 FOLLOWER]: Advancing to term 6
I20250811 20:47:39.524659 4304 raft_consensus.cc:513] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [term 6 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "89c0bbfc378b4a62aaa1e62b1ce1d18c" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 40791 } }
I20250811 20:47:39.525244 4304 leader_election.cc:304] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [CANDIDATE]: Term 6 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 89c0bbfc378b4a62aaa1e62b1ce1d18c; no voters:
I20250811 20:47:39.527397 4304 leader_election.cc:290] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [CANDIDATE]: Term 6 election: Requested vote from peers
I20250811 20:47:39.527776 4308 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [term 6 FOLLOWER]: Leader election won for term 6
I20250811 20:47:39.530817 4308 raft_consensus.cc:695] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [term 6 LEADER]: Becoming Leader. State: Replica: 89c0bbfc378b4a62aaa1e62b1ce1d18c, State: Running, Role: LEADER
I20250811 20:47:39.531649 4308 consensus_queue.cc:237] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 34, Committed index: 34, Last appended: 5.34, Last appended by leader: 34, Current term: 6, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "89c0bbfc378b4a62aaa1e62b1ce1d18c" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 40791 } }
I20250811 20:47:39.532209 4304 sys_catalog.cc:564] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [sys.catalog]: configured and running, proceeding with master startup.
I20250811 20:47:39.541654 4309 sys_catalog.cc:455] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 6 leader_uuid: "89c0bbfc378b4a62aaa1e62b1ce1d18c" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "89c0bbfc378b4a62aaa1e62b1ce1d18c" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 40791 } } }
I20250811 20:47:39.541808 4310 sys_catalog.cc:455] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [sys.catalog]: SysCatalogTable state changed. Reason: New leader 89c0bbfc378b4a62aaa1e62b1ce1d18c. Latest consensus state: current_term: 6 leader_uuid: "89c0bbfc378b4a62aaa1e62b1ce1d18c" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "89c0bbfc378b4a62aaa1e62b1ce1d18c" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 40791 } } }
I20250811 20:47:39.542271 4309 sys_catalog.cc:458] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [sys.catalog]: This master's current role is: LEADER
I20250811 20:47:39.542271 4310 sys_catalog.cc:458] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c [sys.catalog]: This master's current role is: LEADER
I20250811 20:47:39.548789 4316 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250811 20:47:39.570817 4316 catalog_manager.cc:671] Loaded metadata for table TestTable2 [id=1b39bcb5fe7f4558a3be2cc8768cbb40]
I20250811 20:47:39.573078 4316 catalog_manager.cc:671] Loaded metadata for table TestTable1 [id=2aaa6e465449486dba6a24d5928d9cf8]
I20250811 20:47:39.575943 4316 catalog_manager.cc:671] Loaded metadata for table TestTable [id=dd92d2883e1445d5a0817cfb5a207bcc]
I20250811 20:47:39.586763 4316 tablet_loader.cc:96] loaded metadata for tablet 3918d98569dd46759251ad45bfa08089 (table TestTable2 [id=1b39bcb5fe7f4558a3be2cc8768cbb40])
I20250811 20:47:39.588582 4316 tablet_loader.cc:96] loaded metadata for tablet 628b4e91e833481a8a537e4947cb870c (table TestTable1 [id=2aaa6e465449486dba6a24d5928d9cf8])
I20250811 20:47:39.589823 4316 tablet_loader.cc:96] loaded metadata for tablet c9fa405f1b20481486824c1627057316 (table TestTable [id=dd92d2883e1445d5a0817cfb5a207bcc])
I20250811 20:47:39.591358 4316 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250811 20:47:39.598627 4316 catalog_manager.cc:1261] Loaded cluster ID: d44864ab794f4d4b8dce0658483fdc68
I20250811 20:47:39.598960 4316 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250811 20:47:39.599040 4325 catalog_manager.cc:797] Waiting for catalog manager background task thread to start: Service unavailable: Catalog manager is not initialized. State: Starting
I20250811 20:47:39.605899 4316 catalog_manager.cc:1506] Loading token signing keys...
I20250811 20:47:39.609728 4316 catalog_manager.cc:5966] T 00000000000000000000000000000000 P 89c0bbfc378b4a62aaa1e62b1ce1d18c: Loaded TSK: 0
I20250811 20:47:39.610869 4316 catalog_manager.cc:1516] Initializing in-progress tserver states...
W20250811 20:47:39.745182 4306 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:47:39.745683 4306 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 20:47:39.746169 4306 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:47:39.777751 4306 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 20:47:39.778710 4306 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.31.250.193
I20250811 20:47:39.812858 4306 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.31.250.212:46869
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.31.250.193:46671
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.31.250.193
--webserver_port=43009
--tserver_master_addrs=127.31.250.254:40791
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.31.250.193
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8e873eb37b157a0cf7cb97cc7690367de9707107
build type FASTDEBUG
built by None at 11 Aug 2025 20:41:23 UTC on 24a791456cd2
build id 7521
TSAN enabled
I20250811 20:47:39.814092 4306 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 20:47:39.815735 4306 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 20:47:39.827859 4331 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:47:39.833382 4334 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:47:39.829841 4332 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:47:41.242229 4333 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1409 milliseconds
I20250811 20:47:41.242338 4306 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 20:47:41.243636 4306 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 20:47:41.247622 4306 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 20:47:41.249054 4306 hybrid_clock.cc:648] HybridClock initialized: now 1754945261248998 us; error 63 us; skew 500 ppm
I20250811 20:47:41.249823 4306 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 20:47:41.256357 4306 webserver.cc:489] Webserver started at http://127.31.250.193:43009/ using document root <none> and password file <none>
I20250811 20:47:41.257217 4306 fs_manager.cc:362] Metadata directory not provided
I20250811 20:47:41.257424 4306 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 20:47:41.265131 4306 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.000s sys 0.004s
I20250811 20:47:41.270406 4341 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 20:47:41.271476 4306 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.005s sys 0.000s
I20250811 20:47:41.271775 4306 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "d08ec2a3bb504a1483c931954ffcd43c"
format_stamp: "Formatted at 2025-08-11 20:47:11 on dist-test-slave-4gzk"
I20250811 20:47:41.273615 4306 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 20:47:41.348078 4306 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 20:47:41.349591 4306 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 20:47:41.350003 4306 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 20:47:41.352972 4306 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 20:47:41.358639 4348 ts_tablet_manager.cc:542] Loading tablet metadata (0/3 complete)
I20250811 20:47:41.377815 4306 ts_tablet_manager.cc:579] Loaded tablet metadata (3 total tablets, 3 live tablets)
I20250811 20:47:41.378036 4306 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.021s user 0.002s sys 0.000s
I20250811 20:47:41.378333 4306 ts_tablet_manager.cc:594] Registering tablets (0/3 complete)
I20250811 20:47:41.383471 4348 tablet_bootstrap.cc:492] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c: Bootstrap starting.
I20250811 20:47:41.393429 4306 ts_tablet_manager.cc:610] Registered 3 tablets
I20250811 20:47:41.393751 4306 ts_tablet_manager.cc:589] Time spent register tablets: real 0.015s user 0.016s sys 0.000s
I20250811 20:47:41.452328 4348 log.cc:826] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c: Log is configured to *not* fsync() on all Append() calls
I20250811 20:47:41.568842 4348 tablet_bootstrap.cc:492] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c: Bootstrap replayed 1/1 log segments. Stats: ops{read=7 overwritten=0 applied=7 ignored=0} inserts{seen=250 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250811 20:47:41.570115 4348 tablet_bootstrap.cc:492] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c: Bootstrap complete.
I20250811 20:47:41.572216 4348 ts_tablet_manager.cc:1397] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c: Time spent bootstrapping tablet: real 0.189s user 0.156s sys 0.027s
I20250811 20:47:41.580129 4306 rpc_server.cc:307] RPC server started. Bound to: 127.31.250.193:46671
I20250811 20:47:41.580327 4455 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.31.250.193:46671 every 8 connection(s)
I20250811 20:47:41.583518 4306 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250811 20:47:41.590188 32747 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu as pid 4306
I20250811 20:47:41.592475 32747 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
/tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.31.250.194:38949
--local_ip_for_outbound_sockets=127.31.250.194
--tserver_master_addrs=127.31.250.254:40791
--webserver_port=39741
--webserver_interface=127.31.250.194
--builtin_ntp_servers=127.31.250.212:46869
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250811 20:47:41.591821 4348 raft_consensus.cc:357] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } }
I20250811 20:47:41.595527 4348 raft_consensus.cc:738] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: d08ec2a3bb504a1483c931954ffcd43c, State: Initialized, Role: FOLLOWER
I20250811 20:47:41.596534 4348 consensus_queue.cc:260] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 7, Last appended: 2.7, Last appended by leader: 7, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } }
I20250811 20:47:41.597366 4348 raft_consensus.cc:397] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c [term 2 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250811 20:47:41.597741 4348 raft_consensus.cc:491] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c [term 2 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250811 20:47:41.598205 4348 raft_consensus.cc:3058] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c [term 2 FOLLOWER]: Advancing to term 3
I20250811 20:47:41.608022 4348 raft_consensus.cc:513] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c [term 3 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } }
I20250811 20:47:41.608925 4348 leader_election.cc:304] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c [CANDIDATE]: Term 3 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: d08ec2a3bb504a1483c931954ffcd43c; no voters:
I20250811 20:47:41.611042 4348 leader_election.cc:290] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c [CANDIDATE]: Term 3 election: Requested vote from peers
I20250811 20:47:41.611495 4461 raft_consensus.cc:2802] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c [term 3 FOLLOWER]: Leader election won for term 3
I20250811 20:47:41.613701 4461 raft_consensus.cc:695] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c [term 3 LEADER]: Becoming Leader. State: Replica: d08ec2a3bb504a1483c931954ffcd43c, State: Running, Role: LEADER
I20250811 20:47:41.614557 4461 consensus_queue.cc:237] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 7, Committed index: 7, Last appended: 2.7, Last appended by leader: 7, Current term: 3, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } }
I20250811 20:47:41.619359 4348 ts_tablet_manager.cc:1428] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c: Time spent starting tablet: real 0.047s user 0.037s sys 0.004s
I20250811 20:47:41.620414 4348 tablet_bootstrap.cc:492] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c: Bootstrap starting.
I20250811 20:47:41.626569 4456 heartbeater.cc:344] Connected to a master server at 127.31.250.254:40791
I20250811 20:47:41.627033 4456 heartbeater.cc:461] Registering TS with master...
I20250811 20:47:41.628264 4456 heartbeater.cc:507] Master 127.31.250.254:40791 requested a full tablet report, sending...
I20250811 20:47:41.644769 4269 ts_manager.cc:194] Registered new tserver with Master: d08ec2a3bb504a1483c931954ffcd43c (127.31.250.193:46671)
I20250811 20:47:41.650540 4269 catalog_manager.cc:5582] T 3918d98569dd46759251ad45bfa08089 P d08ec2a3bb504a1483c931954ffcd43c reported cstate change: term changed from 2 to 3. New cstate: current_term: 3 leader_uuid: "d08ec2a3bb504a1483c931954ffcd43c" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } health_report { overall_health: HEALTHY } } }
I20250811 20:47:41.652274 4269 catalog_manager.cc:5582] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c reported cstate change: config changed from index -1 to 13, term changed from 0 to 2, VOTER 18398bb77b9544f0bfec984dbe18adc9 (127.31.250.195) added, VOTER 8aa039b30ffe49639e3e01dff534f030 (127.31.250.194) added, VOTER d08ec2a3bb504a1483c931954ffcd43c (127.31.250.193) added. New cstate: current_term: 2 committed_config { opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } } peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } attrs { promote: false } } }
I20250811 20:47:41.719739 4269 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.31.250.193:54565
I20250811 20:47:41.724104 4456 heartbeater.cc:499] Master 127.31.250.254:40791 was elected leader, sending a full tablet report...
I20250811 20:47:41.784927 4348 tablet_bootstrap.cc:492] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c: Bootstrap replayed 1/1 log segments. Stats: ops{read=13 overwritten=0 applied=13 ignored=0} inserts{seen=350 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250811 20:47:41.785637 4348 tablet_bootstrap.cc:492] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c: Bootstrap complete.
I20250811 20:47:41.786770 4348 ts_tablet_manager.cc:1397] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c: Time spent bootstrapping tablet: real 0.167s user 0.135s sys 0.023s
I20250811 20:47:41.788259 4348 raft_consensus.cc:357] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } } peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } attrs { promote: false } }
I20250811 20:47:41.788722 4348 raft_consensus.cc:738] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: d08ec2a3bb504a1483c931954ffcd43c, State: Initialized, Role: FOLLOWER
I20250811 20:47:41.789258 4348 consensus_queue.cc:260] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 13, Last appended: 2.13, Last appended by leader: 13, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } } peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } attrs { promote: false } }
I20250811 20:47:41.790838 4348 ts_tablet_manager.cc:1428] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c: Time spent starting tablet: real 0.004s user 0.004s sys 0.000s
I20250811 20:47:41.791494 4348 tablet_bootstrap.cc:492] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c: Bootstrap starting.
I20250811 20:47:41.890549 4348 tablet_bootstrap.cc:492] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c: Bootstrap replayed 1/1 log segments. Stats: ops{read=12 overwritten=0 applied=12 ignored=0} inserts{seen=300 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250811 20:47:41.891287 4348 tablet_bootstrap.cc:492] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c: Bootstrap complete.
I20250811 20:47:41.892395 4348 ts_tablet_manager.cc:1397] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c: Time spent bootstrapping tablet: real 0.101s user 0.091s sys 0.008s
I20250811 20:47:41.893846 4348 raft_consensus.cc:357] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } } peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } attrs { promote: false } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } }
I20250811 20:47:41.894266 4348 raft_consensus.cc:738] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: d08ec2a3bb504a1483c931954ffcd43c, State: Initialized, Role: FOLLOWER
I20250811 20:47:41.894774 4348 consensus_queue.cc:260] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 12, Last appended: 2.12, Last appended by leader: 12, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } } peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } attrs { promote: false } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } }
I20250811 20:47:41.896360 4348 ts_tablet_manager.cc:1428] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c: Time spent starting tablet: real 0.004s user 0.004s sys 0.000s
W20250811 20:47:42.056869 4460 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:47:42.057365 4460 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 20:47:42.057848 4460 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:47:42.087424 4460 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 20:47:42.088295 4460 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.31.250.194
I20250811 20:47:42.122967 4460 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.31.250.212:46869
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.31.250.194:38949
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.31.250.194
--webserver_port=39741
--tserver_master_addrs=127.31.250.254:40791
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.31.250.194
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8e873eb37b157a0cf7cb97cc7690367de9707107
build type FASTDEBUG
built by None at 11 Aug 2025 20:41:23 UTC on 24a791456cd2
build id 7521
TSAN enabled
I20250811 20:47:42.124334 4460 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 20:47:42.125968 4460 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 20:47:42.138180 4478 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 20:47:43.114122 4484 raft_consensus.cc:491] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c [term 2 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250811 20:47:43.115234 4484 raft_consensus.cc:513] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c [term 2 FOLLOWER]: Starting pre-election with config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } } peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } attrs { promote: false } }
W20250811 20:47:43.125965 4342 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.31.250.194:38949: connect: Connection refused (error 111)
I20250811 20:47:43.128381 4484 leader_election.cc:290] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c [CANDIDATE]: Term 3 pre-election: Requested pre-vote from peers 8aa039b30ffe49639e3e01dff534f030 (127.31.250.194:38949), 18398bb77b9544f0bfec984dbe18adc9 (127.31.250.195:46003)
W20250811 20:47:43.135968 4342 leader_election.cc:336] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer 8aa039b30ffe49639e3e01dff534f030 (127.31.250.194:38949): Network error: Client connection negotiation failed: client connection to 127.31.250.194:38949: connect: Connection refused (error 111)
W20250811 20:47:43.137100 4342 leader_election.cc:336] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer 18398bb77b9544f0bfec984dbe18adc9 (127.31.250.195:46003): Network error: Client connection negotiation failed: client connection to 127.31.250.195:46003: connect: Connection refused (error 111)
I20250811 20:47:43.137742 4342 leader_election.cc:304] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c [CANDIDATE]: Term 3 pre-election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: d08ec2a3bb504a1483c931954ffcd43c; no voters: 18398bb77b9544f0bfec984dbe18adc9, 8aa039b30ffe49639e3e01dff534f030
I20250811 20:47:43.138859 4484 raft_consensus.cc:2747] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c [term 2 FOLLOWER]: Leader pre-election lost for term 3. Reason: could not achieve majority
W20250811 20:47:42.139307 4479 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:47:43.495694 4480 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1354 milliseconds
W20250811 20:47:43.495952 4481 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:47:43.495991 4460 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.356s user 0.405s sys 0.846s
W20250811 20:47:43.496447 4460 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.357s user 0.405s sys 0.846s
I20250811 20:47:43.496762 4460 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 20:47:43.498379 4460 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 20:47:43.501397 4460 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 20:47:43.502941 4460 hybrid_clock.cc:648] HybridClock initialized: now 1754945263502893 us; error 33 us; skew 500 ppm
I20250811 20:47:43.504158 4460 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 20:47:43.513676 4460 webserver.cc:489] Webserver started at http://127.31.250.194:39741/ using document root <none> and password file <none>
I20250811 20:47:43.515039 4460 fs_manager.cc:362] Metadata directory not provided
I20250811 20:47:43.515408 4460 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 20:47:43.527433 4460 fs_manager.cc:714] Time spent opening directory manager: real 0.006s user 0.006s sys 0.001s
I20250811 20:47:43.533752 4491 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 20:47:43.534919 4460 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.004s sys 0.001s
I20250811 20:47:43.535419 4460 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "8aa039b30ffe49639e3e01dff534f030"
format_stamp: "Formatted at 2025-08-11 20:47:13 on dist-test-slave-4gzk"
I20250811 20:47:43.538616 4460 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 20:47:43.623039 4484 raft_consensus.cc:491] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c [term 2 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250811 20:47:43.623544 4484 raft_consensus.cc:513] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c [term 2 FOLLOWER]: Starting pre-election with config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } } peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } attrs { promote: false } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } }
I20250811 20:47:43.625406 4484 leader_election.cc:290] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c [CANDIDATE]: Term 3 pre-election: Requested pre-vote from peers 18398bb77b9544f0bfec984dbe18adc9 (127.31.250.195:46003), 8aa039b30ffe49639e3e01dff534f030 (127.31.250.194:38949)
I20250811 20:47:43.629895 4460 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
W20250811 20:47:43.631652 4342 leader_election.cc:336] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer 18398bb77b9544f0bfec984dbe18adc9 (127.31.250.195:46003): Network error: Client connection negotiation failed: client connection to 127.31.250.195:46003: connect: Connection refused (error 111)
I20250811 20:47:43.631937 4460 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
W20250811 20:47:43.632297 4342 leader_election.cc:336] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer 8aa039b30ffe49639e3e01dff534f030 (127.31.250.194:38949): Network error: Client connection negotiation failed: client connection to 127.31.250.194:38949: connect: Connection refused (error 111)
I20250811 20:47:43.632514 4460 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 20:47:43.632701 4342 leader_election.cc:304] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c [CANDIDATE]: Term 3 pre-election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: d08ec2a3bb504a1483c931954ffcd43c; no voters: 18398bb77b9544f0bfec984dbe18adc9, 8aa039b30ffe49639e3e01dff534f030
I20250811 20:47:43.633384 4484 raft_consensus.cc:2747] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c [term 2 FOLLOWER]: Leader pre-election lost for term 3. Reason: could not achieve majority
I20250811 20:47:43.635948 4460 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 20:47:43.641700 4499 ts_tablet_manager.cc:542] Loading tablet metadata (0/2 complete)
I20250811 20:47:43.653107 4460 ts_tablet_manager.cc:579] Loaded tablet metadata (2 total tablets, 2 live tablets)
I20250811 20:47:43.653331 4460 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.013s user 0.001s sys 0.001s
I20250811 20:47:43.653573 4460 ts_tablet_manager.cc:594] Registering tablets (0/2 complete)
I20250811 20:47:43.658741 4499 tablet_bootstrap.cc:492] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030: Bootstrap starting.
I20250811 20:47:43.661259 4460 ts_tablet_manager.cc:610] Registered 2 tablets
I20250811 20:47:43.661439 4460 ts_tablet_manager.cc:589] Time spent register tablets: real 0.008s user 0.006s sys 0.000s
I20250811 20:47:43.711428 4499 log.cc:826] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030: Log is configured to *not* fsync() on all Append() calls
I20250811 20:47:43.820813 4499 tablet_bootstrap.cc:492] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030: Bootstrap replayed 1/1 log segments. Stats: ops{read=13 overwritten=0 applied=13 ignored=0} inserts{seen=350 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250811 20:47:43.821585 4499 tablet_bootstrap.cc:492] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030: Bootstrap complete.
I20250811 20:47:43.823352 4499 ts_tablet_manager.cc:1397] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030: Time spent bootstrapping tablet: real 0.165s user 0.132s sys 0.031s
I20250811 20:47:43.831599 4460 rpc_server.cc:307] RPC server started. Bound to: 127.31.250.194:38949
I20250811 20:47:43.831720 4606 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.31.250.194:38949 every 8 connection(s)
I20250811 20:47:43.834129 4460 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250811 20:47:43.838817 4499 raft_consensus.cc:357] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } } peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } attrs { promote: false } }
I20250811 20:47:43.842746 4499 raft_consensus.cc:738] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: 8aa039b30ffe49639e3e01dff534f030, State: Initialized, Role: FOLLOWER
I20250811 20:47:43.842896 32747 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu as pid 4460
I20250811 20:47:43.844152 4499 consensus_queue.cc:260] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 13, Last appended: 2.13, Last appended by leader: 13, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } } peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } attrs { promote: false } }
I20250811 20:47:43.845185 32747 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
/tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.31.250.195:46003
--local_ip_for_outbound_sockets=127.31.250.195
--tserver_master_addrs=127.31.250.254:40791
--webserver_port=41367
--webserver_interface=127.31.250.195
--builtin_ntp_servers=127.31.250.212:46869
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
I20250811 20:47:43.848654 4499 ts_tablet_manager.cc:1428] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030: Time spent starting tablet: real 0.025s user 0.023s sys 0.001s
I20250811 20:47:43.849258 4499 tablet_bootstrap.cc:492] T 628b4e91e833481a8a537e4947cb870c P 8aa039b30ffe49639e3e01dff534f030: Bootstrap starting.
I20250811 20:47:43.868640 4607 heartbeater.cc:344] Connected to a master server at 127.31.250.254:40791
I20250811 20:47:43.869385 4607 heartbeater.cc:461] Registering TS with master...
I20250811 20:47:43.871224 4607 heartbeater.cc:507] Master 127.31.250.254:40791 requested a full tablet report, sending...
I20250811 20:47:43.876277 4268 ts_manager.cc:194] Registered new tserver with Master: 8aa039b30ffe49639e3e01dff534f030 (127.31.250.194:38949)
I20250811 20:47:43.880391 4268 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.31.250.194:48085
I20250811 20:47:43.884096 4607 heartbeater.cc:499] Master 127.31.250.254:40791 was elected leader, sending a full tablet report...
I20250811 20:47:43.949047 4499 tablet_bootstrap.cc:492] T 628b4e91e833481a8a537e4947cb870c P 8aa039b30ffe49639e3e01dff534f030: Bootstrap replayed 1/1 log segments. Stats: ops{read=12 overwritten=0 applied=12 ignored=0} inserts{seen=300 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250811 20:47:43.949729 4499 tablet_bootstrap.cc:492] T 628b4e91e833481a8a537e4947cb870c P 8aa039b30ffe49639e3e01dff534f030: Bootstrap complete.
I20250811 20:47:43.950835 4499 ts_tablet_manager.cc:1397] T 628b4e91e833481a8a537e4947cb870c P 8aa039b30ffe49639e3e01dff534f030: Time spent bootstrapping tablet: real 0.102s user 0.074s sys 0.023s
I20250811 20:47:43.952395 4499 raft_consensus.cc:357] T 628b4e91e833481a8a537e4947cb870c P 8aa039b30ffe49639e3e01dff534f030 [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } } peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } attrs { promote: false } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } }
I20250811 20:47:43.952813 4499 raft_consensus.cc:738] T 628b4e91e833481a8a537e4947cb870c P 8aa039b30ffe49639e3e01dff534f030 [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: 8aa039b30ffe49639e3e01dff534f030, State: Initialized, Role: FOLLOWER
I20250811 20:47:43.953294 4499 consensus_queue.cc:260] T 628b4e91e833481a8a537e4947cb870c P 8aa039b30ffe49639e3e01dff534f030 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 12, Last appended: 2.12, Last appended by leader: 12, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } } peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } attrs { promote: false } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } }
I20250811 20:47:43.954597 4499 ts_tablet_manager.cc:1428] T 628b4e91e833481a8a537e4947cb870c P 8aa039b30ffe49639e3e01dff534f030: Time spent starting tablet: real 0.004s user 0.000s sys 0.004s
W20250811 20:47:44.180876 4612 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:47:44.181391 4612 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 20:47:44.181892 4612 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:47:44.212399 4612 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 20:47:44.213275 4612 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.31.250.195
I20250811 20:47:44.247973 4612 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.31.250.212:46869
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.31.250.195:46003
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.31.250.195
--webserver_port=41367
--tserver_master_addrs=127.31.250.254:40791
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.31.250.195
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8e873eb37b157a0cf7cb97cc7690367de9707107
build type FASTDEBUG
built by None at 11 Aug 2025 20:41:23 UTC on 24a791456cd2
build id 7521
TSAN enabled
I20250811 20:47:44.249286 4612 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 20:47:44.250855 4612 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 20:47:44.262017 4619 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:47:45.053928 4452 debug-util.cc:398] Leaking SignalData structure 0x7b08000c7480 after lost signal to thread 4326
W20250811 20:47:45.055809 4452 debug-util.cc:398] Leaking SignalData structure 0x7b08000cdde0 after lost signal to thread 4455
I20250811 20:47:45.285748 4625 raft_consensus.cc:491] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c [term 2 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250811 20:47:45.286680 4625 raft_consensus.cc:513] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c [term 2 FOLLOWER]: Starting pre-election with config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } } peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } attrs { promote: false } }
I20250811 20:47:45.304313 4625 leader_election.cc:290] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c [CANDIDATE]: Term 3 pre-election: Requested pre-vote from peers 8aa039b30ffe49639e3e01dff534f030 (127.31.250.194:38949), 18398bb77b9544f0bfec984dbe18adc9 (127.31.250.195:46003)
I20250811 20:47:45.328562 4562 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "c9fa405f1b20481486824c1627057316" candidate_uuid: "d08ec2a3bb504a1483c931954ffcd43c" candidate_term: 3 candidate_status { last_received { term: 2 index: 13 } } ignore_live_leader: false dest_uuid: "8aa039b30ffe49639e3e01dff534f030" is_pre_election: true
I20250811 20:47:45.329782 4562 raft_consensus.cc:2466] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [term 2 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate d08ec2a3bb504a1483c931954ffcd43c in term 2.
I20250811 20:47:45.332099 4342 leader_election.cc:304] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c [CANDIDATE]: Term 3 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 8aa039b30ffe49639e3e01dff534f030, d08ec2a3bb504a1483c931954ffcd43c; no voters:
I20250811 20:47:45.334969 4625 raft_consensus.cc:2802] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c [term 2 FOLLOWER]: Leader pre-election won for term 3
I20250811 20:47:45.335459 4625 raft_consensus.cc:491] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c [term 2 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250811 20:47:45.335915 4625 raft_consensus.cc:3058] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c [term 2 FOLLOWER]: Advancing to term 3
I20250811 20:47:45.340090 4630 raft_consensus.cc:491] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [term 2 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250811 20:47:45.340691 4630 raft_consensus.cc:513] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [term 2 FOLLOWER]: Starting pre-election with config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } } peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } attrs { promote: false } }
I20250811 20:47:45.359997 4630 leader_election.cc:290] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [CANDIDATE]: Term 3 pre-election: Requested pre-vote from peers d08ec2a3bb504a1483c931954ffcd43c (127.31.250.193:46671), 18398bb77b9544f0bfec984dbe18adc9 (127.31.250.195:46003)
I20250811 20:47:45.360519 4625 raft_consensus.cc:513] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c [term 3 FOLLOWER]: Starting leader election with config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } } peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } attrs { promote: false } }
I20250811 20:47:45.363692 4625 leader_election.cc:290] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c [CANDIDATE]: Term 3 election: Requested vote from peers 8aa039b30ffe49639e3e01dff534f030 (127.31.250.194:38949), 18398bb77b9544f0bfec984dbe18adc9 (127.31.250.195:46003)
W20250811 20:47:45.371179 4492 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.31.250.195:46003: connect: Connection refused (error 111)
W20250811 20:47:45.379014 4342 leader_election.cc:336] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer 18398bb77b9544f0bfec984dbe18adc9 (127.31.250.195:46003): Network error: Client connection negotiation failed: client connection to 127.31.250.195:46003: connect: Connection refused (error 111)
I20250811 20:47:45.381534 4562 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "c9fa405f1b20481486824c1627057316" candidate_uuid: "d08ec2a3bb504a1483c931954ffcd43c" candidate_term: 3 candidate_status { last_received { term: 2 index: 13 } } ignore_live_leader: false dest_uuid: "8aa039b30ffe49639e3e01dff534f030"
I20250811 20:47:45.382184 4562 raft_consensus.cc:3058] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [term 2 FOLLOWER]: Advancing to term 3
W20250811 20:47:45.385224 4342 leader_election.cc:336] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c [CANDIDATE]: Term 3 election: RPC error from VoteRequest() call to peer 18398bb77b9544f0bfec984dbe18adc9 (127.31.250.195:46003): Network error: Client connection negotiation failed: client connection to 127.31.250.195:46003: connect: Connection refused (error 111)
I20250811 20:47:45.394831 4562 raft_consensus.cc:2466] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [term 3 FOLLOWER]: Leader election vote request: Granting yes vote for candidate d08ec2a3bb504a1483c931954ffcd43c in term 3.
I20250811 20:47:45.398267 4342 leader_election.cc:304] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c [CANDIDATE]: Term 3 election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 8aa039b30ffe49639e3e01dff534f030, d08ec2a3bb504a1483c931954ffcd43c; no voters: 18398bb77b9544f0bfec984dbe18adc9
I20250811 20:47:45.399313 4625 raft_consensus.cc:2802] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c [term 3 FOLLOWER]: Leader election won for term 3
W20250811 20:47:45.415186 4492 leader_election.cc:336] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer 18398bb77b9544f0bfec984dbe18adc9 (127.31.250.195:46003): Network error: Client connection negotiation failed: client connection to 127.31.250.195:46003: connect: Connection refused (error 111)
I20250811 20:47:45.417541 4625 raft_consensus.cc:695] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c [term 3 LEADER]: Becoming Leader. State: Replica: d08ec2a3bb504a1483c931954ffcd43c, State: Running, Role: LEADER
I20250811 20:47:45.418593 4625 consensus_queue.cc:237] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 13, Committed index: 13, Last appended: 2.13, Last appended by leader: 13, Current term: 3, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } } peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } attrs { promote: false } }
I20250811 20:47:45.442185 4411 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "c9fa405f1b20481486824c1627057316" candidate_uuid: "8aa039b30ffe49639e3e01dff534f030" candidate_term: 3 candidate_status { last_received { term: 2 index: 13 } } ignore_live_leader: false dest_uuid: "d08ec2a3bb504a1483c931954ffcd43c" is_pre_election: true
I20250811 20:47:45.445159 4494 leader_election.cc:304] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [CANDIDATE]: Term 3 pre-election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: 8aa039b30ffe49639e3e01dff534f030; no voters: 18398bb77b9544f0bfec984dbe18adc9, d08ec2a3bb504a1483c931954ffcd43c
I20250811 20:47:45.446522 4630 raft_consensus.cc:2747] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [term 3 FOLLOWER]: Leader pre-election lost for term 3. Reason: could not achieve majority
I20250811 20:47:45.455909 4268 catalog_manager.cc:5582] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c reported cstate change: term changed from 2 to 3, leader changed from <none> to d08ec2a3bb504a1483c931954ffcd43c (127.31.250.193). New cstate: current_term: 3 leader_uuid: "d08ec2a3bb504a1483c931954ffcd43c" committed_config { opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } attrs { promote: false } health_report { overall_health: UNKNOWN } } }
I20250811 20:47:45.460285 4635 raft_consensus.cc:491] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c [term 2 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250811 20:47:45.460907 4635 raft_consensus.cc:513] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c [term 2 FOLLOWER]: Starting pre-election with config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } } peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } attrs { promote: false } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } }
I20250811 20:47:45.466702 4635 leader_election.cc:290] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c [CANDIDATE]: Term 3 pre-election: Requested pre-vote from peers 18398bb77b9544f0bfec984dbe18adc9 (127.31.250.195:46003), 8aa039b30ffe49639e3e01dff534f030 (127.31.250.194:38949)
I20250811 20:47:45.468927 4562 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "628b4e91e833481a8a537e4947cb870c" candidate_uuid: "d08ec2a3bb504a1483c931954ffcd43c" candidate_term: 3 candidate_status { last_received { term: 2 index: 12 } } ignore_live_leader: false dest_uuid: "8aa039b30ffe49639e3e01dff534f030" is_pre_election: true
I20250811 20:47:45.469617 4562 raft_consensus.cc:2466] T 628b4e91e833481a8a537e4947cb870c P 8aa039b30ffe49639e3e01dff534f030 [term 2 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate d08ec2a3bb504a1483c931954ffcd43c in term 2.
W20250811 20:47:45.469806 4342 leader_election.cc:336] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c [CANDIDATE]: Term 3 pre-election: RPC error from VoteRequest() call to peer 18398bb77b9544f0bfec984dbe18adc9 (127.31.250.195:46003): Network error: Client connection negotiation failed: client connection to 127.31.250.195:46003: connect: Connection refused (error 111)
I20250811 20:47:45.470743 4342 leader_election.cc:304] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c [CANDIDATE]: Term 3 pre-election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 8aa039b30ffe49639e3e01dff534f030, d08ec2a3bb504a1483c931954ffcd43c; no voters: 18398bb77b9544f0bfec984dbe18adc9
I20250811 20:47:45.472146 4635 raft_consensus.cc:2802] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c [term 2 FOLLOWER]: Leader pre-election won for term 3
I20250811 20:47:45.472635 4635 raft_consensus.cc:491] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c [term 2 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250811 20:47:45.473120 4635 raft_consensus.cc:3058] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c [term 2 FOLLOWER]: Advancing to term 3
I20250811 20:47:45.482528 4635 raft_consensus.cc:513] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c [term 3 FOLLOWER]: Starting leader election with config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } } peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } attrs { promote: false } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } }
W20250811 20:47:45.497419 4342 leader_election.cc:336] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c [CANDIDATE]: Term 3 election: RPC error from VoteRequest() call to peer 18398bb77b9544f0bfec984dbe18adc9 (127.31.250.195:46003): Network error: Client connection negotiation failed: client connection to 127.31.250.195:46003: connect: Connection refused (error 111)
I20250811 20:47:45.500739 4635 leader_election.cc:290] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c [CANDIDATE]: Term 3 election: Requested vote from peers 18398bb77b9544f0bfec984dbe18adc9 (127.31.250.195:46003), 8aa039b30ffe49639e3e01dff534f030 (127.31.250.194:38949)
I20250811 20:47:45.501135 4562 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "628b4e91e833481a8a537e4947cb870c" candidate_uuid: "d08ec2a3bb504a1483c931954ffcd43c" candidate_term: 3 candidate_status { last_received { term: 2 index: 12 } } ignore_live_leader: false dest_uuid: "8aa039b30ffe49639e3e01dff534f030"
I20250811 20:47:45.501825 4562 raft_consensus.cc:3058] T 628b4e91e833481a8a537e4947cb870c P 8aa039b30ffe49639e3e01dff534f030 [term 2 FOLLOWER]: Advancing to term 3
I20250811 20:47:45.511130 4562 raft_consensus.cc:2466] T 628b4e91e833481a8a537e4947cb870c P 8aa039b30ffe49639e3e01dff534f030 [term 3 FOLLOWER]: Leader election vote request: Granting yes vote for candidate d08ec2a3bb504a1483c931954ffcd43c in term 3.
I20250811 20:47:45.512616 4342 leader_election.cc:304] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c [CANDIDATE]: Term 3 election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 8aa039b30ffe49639e3e01dff534f030, d08ec2a3bb504a1483c931954ffcd43c; no voters: 18398bb77b9544f0bfec984dbe18adc9
I20250811 20:47:45.513579 4635 raft_consensus.cc:2802] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c [term 3 FOLLOWER]: Leader election won for term 3
I20250811 20:47:45.514036 4635 raft_consensus.cc:695] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c [term 3 LEADER]: Becoming Leader. State: Replica: d08ec2a3bb504a1483c931954ffcd43c, State: Running, Role: LEADER
I20250811 20:47:45.515049 4635 consensus_queue.cc:237] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 12, Committed index: 12, Last appended: 2.12, Last appended by leader: 12, Current term: 3, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } } peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } attrs { promote: false } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } }
I20250811 20:47:45.539351 4268 catalog_manager.cc:5582] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c reported cstate change: term changed from 2 to 3, leader changed from 18398bb77b9544f0bfec984dbe18adc9 (127.31.250.195) to d08ec2a3bb504a1483c931954ffcd43c (127.31.250.193). New cstate: current_term: 3 leader_uuid: "d08ec2a3bb504a1483c931954ffcd43c" committed_config { opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } attrs { promote: false } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } health_report { overall_health: HEALTHY } } }
W20250811 20:47:44.263370 4620 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:47:45.669807 4618 debug-util.cc:398] Leaking SignalData structure 0x7b0800006fc0 after lost signal to thread 4612
W20250811 20:47:45.707499 4612 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.444s user 0.526s sys 0.850s
W20250811 20:47:45.709560 4612 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.446s user 0.526s sys 0.850s
W20250811 20:47:45.709836 4622 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:47:45.713371 4621 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1444 milliseconds
I20250811 20:47:45.713387 4612 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 20:47:45.714787 4612 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 20:47:45.717397 4612 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 20:47:45.718830 4612 hybrid_clock.cc:648] HybridClock initialized: now 1754945265718768 us; error 81 us; skew 500 ppm
I20250811 20:47:45.719662 4612 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 20:47:45.726640 4612 webserver.cc:489] Webserver started at http://127.31.250.195:41367/ using document root <none> and password file <none>
I20250811 20:47:45.727552 4612 fs_manager.cc:362] Metadata directory not provided
I20250811 20:47:45.727759 4612 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 20:47:45.736076 4612 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.005s sys 0.003s
I20250811 20:47:45.741303 4646 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 20:47:45.742429 4612 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.005s sys 0.001s
I20250811 20:47:45.742731 4612 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "18398bb77b9544f0bfec984dbe18adc9"
format_stamp: "Formatted at 2025-08-11 20:47:15 on dist-test-slave-4gzk"
I20250811 20:47:45.744652 4612 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 20:47:45.810011 4612 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 20:47:45.811497 4612 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 20:47:45.811925 4612 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 20:47:45.814741 4612 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 20:47:45.821214 4653 ts_tablet_manager.cc:542] Loading tablet metadata (0/2 complete)
I20250811 20:47:45.835605 4612 ts_tablet_manager.cc:579] Loaded tablet metadata (2 total tablets, 2 live tablets)
I20250811 20:47:45.835914 4612 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.017s user 0.002s sys 0.000s
I20250811 20:47:45.836248 4612 ts_tablet_manager.cc:594] Registering tablets (0/2 complete)
I20250811 20:47:45.845811 4653 tablet_bootstrap.cc:492] T c9fa405f1b20481486824c1627057316 P 18398bb77b9544f0bfec984dbe18adc9: Bootstrap starting.
I20250811 20:47:45.850910 4612 ts_tablet_manager.cc:610] Registered 2 tablets
I20250811 20:47:45.851295 4612 ts_tablet_manager.cc:589] Time spent register tablets: real 0.015s user 0.010s sys 0.005s
I20250811 20:47:45.914165 4653 log.cc:826] T c9fa405f1b20481486824c1627057316 P 18398bb77b9544f0bfec984dbe18adc9: Log is configured to *not* fsync() on all Append() calls
W20250811 20:47:45.948261 4342 consensus_peers.cc:489] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c -> Peer 18398bb77b9544f0bfec984dbe18adc9 (127.31.250.195:46003): Couldn't send request to peer 18398bb77b9544f0bfec984dbe18adc9. Status: Network error: Client connection negotiation failed: client connection to 127.31.250.195:46003: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20250811 20:47:45.963531 4562 raft_consensus.cc:1273] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [term 3 FOLLOWER]: Refusing update from remote peer d08ec2a3bb504a1483c931954ffcd43c: Log matching property violated. Preceding OpId in replica: term: 2 index: 13. Preceding OpId from leader: term: 3 index: 14. (index mismatch)
I20250811 20:47:45.967231 4635 consensus_queue.cc:1035] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c [LEADER]: Connected to new peer: Peer: permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 14, Last known committed idx: 13, Time since last communication: 0.000s
W20250811 20:47:45.973681 4342 consensus_peers.cc:489] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c -> Peer 18398bb77b9544f0bfec984dbe18adc9 (127.31.250.195:46003): Couldn't send request to peer 18398bb77b9544f0bfec984dbe18adc9. Status: Network error: Client connection negotiation failed: client connection to 127.31.250.195:46003: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20250811 20:47:45.980218 4561 raft_consensus.cc:1273] T 628b4e91e833481a8a537e4947cb870c P 8aa039b30ffe49639e3e01dff534f030 [term 3 FOLLOWER]: Refusing update from remote peer d08ec2a3bb504a1483c931954ffcd43c: Log matching property violated. Preceding OpId in replica: term: 2 index: 12. Preceding OpId from leader: term: 3 index: 13. (index mismatch)
I20250811 20:47:45.981778 4635 consensus_queue.cc:1035] T 628b4e91e833481a8a537e4947cb870c P d08ec2a3bb504a1483c931954ffcd43c [LEADER]: Connected to new peer: Peer: permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } attrs { promote: false }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 13, Last known committed idx: 12, Time since last communication: 0.000s
I20250811 20:47:46.086958 4653 tablet_bootstrap.cc:492] T c9fa405f1b20481486824c1627057316 P 18398bb77b9544f0bfec984dbe18adc9: Bootstrap replayed 1/1 log segments. Stats: ops{read=13 overwritten=0 applied=13 ignored=0} inserts{seen=350 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250811 20:47:46.088317 4653 tablet_bootstrap.cc:492] T c9fa405f1b20481486824c1627057316 P 18398bb77b9544f0bfec984dbe18adc9: Bootstrap complete.
I20250811 20:47:46.090567 4653 ts_tablet_manager.cc:1397] T c9fa405f1b20481486824c1627057316 P 18398bb77b9544f0bfec984dbe18adc9: Time spent bootstrapping tablet: real 0.245s user 0.187s sys 0.044s
I20250811 20:47:46.117431 4653 raft_consensus.cc:357] T c9fa405f1b20481486824c1627057316 P 18398bb77b9544f0bfec984dbe18adc9 [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } } peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } attrs { promote: false } }
I20250811 20:47:46.121310 4653 raft_consensus.cc:738] T c9fa405f1b20481486824c1627057316 P 18398bb77b9544f0bfec984dbe18adc9 [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: 18398bb77b9544f0bfec984dbe18adc9, State: Initialized, Role: FOLLOWER
I20250811 20:47:46.122699 4653 consensus_queue.cc:260] T c9fa405f1b20481486824c1627057316 P 18398bb77b9544f0bfec984dbe18adc9 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 13, Last appended: 2.13, Last appended by leader: 13, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 13 OBSOLETE_local: true peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } } peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } attrs { promote: false } }
I20250811 20:47:46.134287 4653 ts_tablet_manager.cc:1428] T c9fa405f1b20481486824c1627057316 P 18398bb77b9544f0bfec984dbe18adc9: Time spent starting tablet: real 0.043s user 0.028s sys 0.011s
I20250811 20:47:46.135496 4653 tablet_bootstrap.cc:492] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9: Bootstrap starting.
I20250811 20:47:46.135721 4411 consensus_queue.cc:237] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 14, Committed index: 14, Last appended: 3.14, Last appended by leader: 13, Current term: 3, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 15 OBSOLETE_local: true peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } }
I20250811 20:47:46.142154 4560 raft_consensus.cc:1273] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [term 3 FOLLOWER]: Refusing update from remote peer d08ec2a3bb504a1483c931954ffcd43c: Log matching property violated. Preceding OpId in replica: term: 3 index: 14. Preceding OpId from leader: term: 3 index: 15. (index mismatch)
I20250811 20:47:46.143961 4709 consensus_queue.cc:1035] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c [LEADER]: Connected to new peer: Peer: permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 15, Last known committed idx: 14, Time since last communication: 0.001s
I20250811 20:47:46.150926 4625 raft_consensus.cc:2953] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c [term 3 LEADER]: Committing config change with OpId 3.15: config changed from index 13 to 15, VOTER 18398bb77b9544f0bfec984dbe18adc9 (127.31.250.195) evicted. New config: { opid_index: 15 OBSOLETE_local: true peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } } }
I20250811 20:47:46.162339 4560 raft_consensus.cc:2953] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [term 3 FOLLOWER]: Committing config change with OpId 3.15: config changed from index 13 to 15, VOTER 18398bb77b9544f0bfec984dbe18adc9 (127.31.250.195) evicted. New config: { opid_index: 15 OBSOLETE_local: true peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } } }
I20250811 20:47:46.172616 4255 catalog_manager.cc:5095] ChangeConfig:REMOVE_PEER RPC for tablet c9fa405f1b20481486824c1627057316 with cas_config_opid_index 13: ChangeConfig:REMOVE_PEER succeeded (attempt 1)
I20250811 20:47:46.178570 4269 catalog_manager.cc:5582] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c reported cstate change: config changed from index 13 to 15, VOTER 18398bb77b9544f0bfec984dbe18adc9 (127.31.250.195) evicted. New cstate: current_term: 3 leader_uuid: "d08ec2a3bb504a1483c931954ffcd43c" committed_config { opid_index: 15 OBSOLETE_local: true peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } health_report { overall_health: HEALTHY } } }
W20250811 20:47:46.216621 4269 catalog_manager.cc:5774] Failed to send DeleteTablet RPC for tablet c9fa405f1b20481486824c1627057316 on TS 18398bb77b9544f0bfec984dbe18adc9: Not found: failed to reset TS proxy: Could not find TS for UUID 18398bb77b9544f0bfec984dbe18adc9
I20250811 20:47:46.223618 4411 consensus_queue.cc:237] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 15, Committed index: 15, Last appended: 3.15, Last appended by leader: 13, Current term: 3, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: 16 OBSOLETE_local: true peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } }
I20250811 20:47:46.228399 4625 raft_consensus.cc:2953] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c [term 3 LEADER]: Committing config change with OpId 3.16: config changed from index 15 to 16, VOTER 8aa039b30ffe49639e3e01dff534f030 (127.31.250.194) evicted. New config: { opid_index: 16 OBSOLETE_local: true peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } } }
I20250811 20:47:46.244421 4255 catalog_manager.cc:5095] ChangeConfig:REMOVE_PEER RPC for tablet c9fa405f1b20481486824c1627057316 with cas_config_opid_index 15: ChangeConfig:REMOVE_PEER succeeded (attempt 1)
I20250811 20:47:46.249665 4268 catalog_manager.cc:5582] T c9fa405f1b20481486824c1627057316 P d08ec2a3bb504a1483c931954ffcd43c reported cstate change: config changed from index 15 to 16, VOTER 8aa039b30ffe49639e3e01dff534f030 (127.31.250.194) evicted. New cstate: current_term: 3 leader_uuid: "d08ec2a3bb504a1483c931954ffcd43c" committed_config { opid_index: 16 OBSOLETE_local: true peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } health_report { overall_health: HEALTHY } } }
W20250811 20:47:46.272395 4254 catalog_manager.cc:4726] Async tablet task DeleteTablet RPC for tablet c9fa405f1b20481486824c1627057316 on TS 18398bb77b9544f0bfec984dbe18adc9 failed: Not found: failed to reset TS proxy: Could not find TS for UUID 18398bb77b9544f0bfec984dbe18adc9
I20250811 20:47:46.290251 4542 tablet_service.cc:1515] Processing DeleteTablet for tablet c9fa405f1b20481486824c1627057316 with delete_type TABLET_DATA_TOMBSTONED (TS 8aa039b30ffe49639e3e01dff534f030 not found in new config with opid_index 16) from {username='slave'} at 127.0.0.1:51748
I20250811 20:47:46.293993 4749 tablet_replica.cc:331] stopping tablet replica
I20250811 20:47:46.294992 4749 raft_consensus.cc:2241] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [term 3 FOLLOWER]: Raft consensus shutting down.
I20250811 20:47:46.295847 4749 raft_consensus.cc:2270] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030 [term 3 FOLLOWER]: Raft consensus is shut down!
I20250811 20:47:46.300143 4749 ts_tablet_manager.cc:1905] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20250811 20:47:46.318483 4749 ts_tablet_manager.cc:1918] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 3.15
I20250811 20:47:46.319063 4749 log.cc:1199] T c9fa405f1b20481486824c1627057316 P 8aa039b30ffe49639e3e01dff534f030: Deleting WAL directory at /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal/wals/c9fa405f1b20481486824c1627057316
I20250811 20:47:46.321405 4253 catalog_manager.cc:4928] TS 8aa039b30ffe49639e3e01dff534f030 (127.31.250.194:38949): tablet c9fa405f1b20481486824c1627057316 (table TestTable [id=dd92d2883e1445d5a0817cfb5a207bcc]) successfully deleted
I20250811 20:47:46.355007 4653 tablet_bootstrap.cc:492] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9: Bootstrap replayed 1/1 log segments. Stats: ops{read=12 overwritten=0 applied=12 ignored=0} inserts{seen=300 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250811 20:47:46.356215 4653 tablet_bootstrap.cc:492] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9: Bootstrap complete.
I20250811 20:47:46.357918 4653 ts_tablet_manager.cc:1397] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9: Time spent bootstrapping tablet: real 0.223s user 0.189s sys 0.020s
I20250811 20:47:46.360087 4612 rpc_server.cc:307] RPC server started. Bound to: 127.31.250.195:46003
I20250811 20:47:46.360361 4776 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.31.250.195:46003 every 8 connection(s)
I20250811 20:47:46.360852 4653 raft_consensus.cc:357] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 [term 2 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } } peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } attrs { promote: false } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } }
I20250811 20:47:46.361446 4653 raft_consensus.cc:738] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 [term 2 FOLLOWER]: Becoming Follower/Learner. State: Replica: 18398bb77b9544f0bfec984dbe18adc9, State: Initialized, Role: FOLLOWER
I20250811 20:47:46.361999 4653 consensus_queue.cc:260] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 12, Last appended: 2.12, Last appended by leader: 12, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: 12 OBSOLETE_local: true peers { permanent_uuid: "18398bb77b9544f0bfec984dbe18adc9" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 46003 } } peers { permanent_uuid: "8aa039b30ffe49639e3e01dff534f030" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 38949 } attrs { promote: false } } peers { permanent_uuid: "d08ec2a3bb504a1483c931954ffcd43c" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 46671 } attrs { promote: false } }
I20250811 20:47:46.363441 4612 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250811 20:47:46.363776 4653 ts_tablet_manager.cc:1428] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9: Time spent starting tablet: real 0.005s user 0.005s sys 0.000s
I20250811 20:47:46.368908 32747 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu as pid 4612
I20250811 20:47:46.397714 4777 heartbeater.cc:344] Connected to a master server at 127.31.250.254:40791
I20250811 20:47:46.398260 4777 heartbeater.cc:461] Registering TS with master...
I20250811 20:47:46.399605 4777 heartbeater.cc:507] Master 127.31.250.254:40791 requested a full tablet report, sending...
I20250811 20:47:46.404150 4268 ts_manager.cc:194] Registered new tserver with Master: 18398bb77b9544f0bfec984dbe18adc9 (127.31.250.195:46003)
I20250811 20:47:46.408921 4268 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.31.250.195:45287
I20250811 20:47:46.415818 32747 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250811 20:47:46.421298 32747 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
W20250811 20:47:46.425045 32747 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet c9fa405f1b20481486824c1627057316: tablet_id: "c9fa405f1b20481486824c1627057316" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
I20250811 20:47:46.442212 4696 tablet_service.cc:1515] Processing DeleteTablet for tablet c9fa405f1b20481486824c1627057316 with delete_type TABLET_DATA_TOMBSTONED (TS 18398bb77b9544f0bfec984dbe18adc9 not found in new config with opid_index 15) from {username='slave'} at 127.0.0.1:47178
I20250811 20:47:46.448983 4784 tablet_replica.cc:331] stopping tablet replica
I20250811 20:47:46.449928 4784 raft_consensus.cc:2241] T c9fa405f1b20481486824c1627057316 P 18398bb77b9544f0bfec984dbe18adc9 [term 2 FOLLOWER]: Raft consensus shutting down.
I20250811 20:47:46.450508 4784 raft_consensus.cc:2270] T c9fa405f1b20481486824c1627057316 P 18398bb77b9544f0bfec984dbe18adc9 [term 2 FOLLOWER]: Raft consensus is shut down!
I20250811 20:47:46.454015 4784 ts_tablet_manager.cc:1905] T c9fa405f1b20481486824c1627057316 P 18398bb77b9544f0bfec984dbe18adc9: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20250811 20:47:46.473783 4784 ts_tablet_manager.cc:1918] T c9fa405f1b20481486824c1627057316 P 18398bb77b9544f0bfec984dbe18adc9: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 2.13
I20250811 20:47:46.474270 4784 log.cc:1199] T c9fa405f1b20481486824c1627057316 P 18398bb77b9544f0bfec984dbe18adc9: Deleting WAL directory at /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal/wals/c9fa405f1b20481486824c1627057316
I20250811 20:47:46.476353 4253 catalog_manager.cc:4928] TS 18398bb77b9544f0bfec984dbe18adc9 (127.31.250.195:46003): tablet c9fa405f1b20481486824c1627057316 (table TestTable [id=dd92d2883e1445d5a0817cfb5a207bcc]) successfully deleted
I20250811 20:47:46.600633 4726 raft_consensus.cc:3058] T 628b4e91e833481a8a537e4947cb870c P 18398bb77b9544f0bfec984dbe18adc9 [term 2 FOLLOWER]: Advancing to term 3
I20250811 20:47:46.606046 4777 heartbeater.cc:499] Master 127.31.250.254:40791 was elected leader, sending a full tablet report...
W20250811 20:47:47.429754 32747 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet c9fa405f1b20481486824c1627057316: tablet_id: "c9fa405f1b20481486824c1627057316" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250811 20:47:48.433727 32747 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet c9fa405f1b20481486824c1627057316: tablet_id: "c9fa405f1b20481486824c1627057316" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250811 20:47:49.437400 32747 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet c9fa405f1b20481486824c1627057316: tablet_id: "c9fa405f1b20481486824c1627057316" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250811 20:47:50.441187 32747 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet c9fa405f1b20481486824c1627057316: tablet_id: "c9fa405f1b20481486824c1627057316" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250811 20:47:51.444614 32747 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet c9fa405f1b20481486824c1627057316: tablet_id: "c9fa405f1b20481486824c1627057316" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250811 20:47:52.449791 32747 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet c9fa405f1b20481486824c1627057316: tablet_id: "c9fa405f1b20481486824c1627057316" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250811 20:47:53.453663 32747 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet c9fa405f1b20481486824c1627057316: tablet_id: "c9fa405f1b20481486824c1627057316" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250811 20:47:54.456997 32747 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet c9fa405f1b20481486824c1627057316: tablet_id: "c9fa405f1b20481486824c1627057316" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250811 20:47:55.460436 32747 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet c9fa405f1b20481486824c1627057316: tablet_id: "c9fa405f1b20481486824c1627057316" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250811 20:47:56.463863 32747 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet c9fa405f1b20481486824c1627057316: tablet_id: "c9fa405f1b20481486824c1627057316" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250811 20:47:57.467324 32747 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet c9fa405f1b20481486824c1627057316: tablet_id: "c9fa405f1b20481486824c1627057316" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250811 20:47:58.471613 32747 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet c9fa405f1b20481486824c1627057316: tablet_id: "c9fa405f1b20481486824c1627057316" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250811 20:47:59.475227 32747 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet c9fa405f1b20481486824c1627057316: tablet_id: "c9fa405f1b20481486824c1627057316" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250811 20:48:00.478915 32747 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet c9fa405f1b20481486824c1627057316: tablet_id: "c9fa405f1b20481486824c1627057316" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250811 20:48:01.482275 32747 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet c9fa405f1b20481486824c1627057316: tablet_id: "c9fa405f1b20481486824c1627057316" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250811 20:48:02.246098 4603 debug-util.cc:398] Leaking SignalData structure 0x7b08000c42a0 after lost signal to thread 4473
W20250811 20:48:02.247224 4603 debug-util.cc:398] Leaking SignalData structure 0x7b08000c67a0 after lost signal to thread 4606
W20250811 20:48:02.487856 32747 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet c9fa405f1b20481486824c1627057316: tablet_id: "c9fa405f1b20481486824c1627057316" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250811 20:48:03.491860 32747 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet c9fa405f1b20481486824c1627057316: tablet_id: "c9fa405f1b20481486824c1627057316" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250811 20:48:04.495242 32747 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet c9fa405f1b20481486824c1627057316: tablet_id: "c9fa405f1b20481486824c1627057316" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
W20250811 20:48:05.498509 32747 ts_itest-base.cc:209] found only 1 out of 3 replicas of tablet c9fa405f1b20481486824c1627057316: tablet_id: "c9fa405f1b20481486824c1627057316" DEPRECATED_stale: false partition { partition_key_start: "" partition_key_end: "" } interned_replicas { ts_info_idx: 0 role: LEADER }
/home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/tools/kudu-admin-test.cc:3914: Failure
Failed
Bad status: Not found: not all replicas of tablets comprising table TestTable are registered yet
I20250811 20:48:06.503607 32747 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu with pid 4306
I20250811 20:48:06.531472 32747 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu with pid 4460
I20250811 20:48:06.560758 32747 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu with pid 4612
I20250811 20:48:06.586848 32747 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu with pid 4236
2025-08-11T20:48:06Z chronyd exiting
I20250811 20:48:06.638756 32747 test_util.cc:183] -----------------------------------------------
I20250811 20:48:06.638967 32747 test_util.cc:184] Had failures, leaving test files at /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.AdminCliTest.TestRebuildTables.1754945161385764-32747-0
[ FAILED ] AdminCliTest.TestRebuildTables (59103 ms)
[----------] 5 tests from AdminCliTest (125186 ms total)
[----------] 1 test from EnableKudu1097AndDownTS/MoveTabletParamTest
[ RUN ] EnableKudu1097AndDownTS/MoveTabletParamTest.Test/4
I20250811 20:48:06.642794 32747 test_util.cc:276] Using random seed: 174660688
I20250811 20:48:06.646992 32747 ts_itest-base.cc:115] Starting cluster with:
I20250811 20:48:06.647157 32747 ts_itest-base.cc:116] --------------
I20250811 20:48:06.647370 32747 ts_itest-base.cc:117] 5 tablet servers
I20250811 20:48:06.647518 32747 ts_itest-base.cc:118] 3 replicas per TS
I20250811 20:48:06.647665 32747 ts_itest-base.cc:119] --------------
2025-08-11T20:48:06Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-08-11T20:48:06Z Disabled control of system clock
I20250811 20:48:06.689810 32747 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
/tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.31.250.254:45815
--webserver_interface=127.31.250.254
--webserver_port=0
--builtin_ntp_servers=127.31.250.212:43967
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.31.250.254:45815
--raft_prepare_replacement_before_eviction=true with env {}
W20250811 20:48:06.995002 4809 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:48:06.995623 4809 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 20:48:06.996093 4809 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:48:07.026654 4809 flags.cc:425] Enabled experimental flag: --raft_prepare_replacement_before_eviction=true
W20250811 20:48:07.027065 4809 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250811 20:48:07.027359 4809 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 20:48:07.027622 4809 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250811 20:48:07.027874 4809 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250811 20:48:07.063271 4809 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.31.250.212:43967
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.31.250.254:45815
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.31.250.254:45815
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.31.250.254
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision 8e873eb37b157a0cf7cb97cc7690367de9707107
build type FASTDEBUG
built by None at 11 Aug 2025 20:41:23 UTC on 24a791456cd2
build id 7521
TSAN enabled
I20250811 20:48:07.064625 4809 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 20:48:07.066391 4809 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 20:48:07.078500 4815 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:48:08.481312 4814 debug-util.cc:398] Leaking SignalData structure 0x7b0800037cc0 after lost signal to thread 4809
W20250811 20:48:08.874950 4814 kernel_stack_watchdog.cc:198] Thread 4809 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:642 for 400ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250811 20:48:08.875416 4809 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.798s user 0.598s sys 1.200s
W20250811 20:48:07.078802 4816 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:48:08.875826 4809 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.798s user 0.598s sys 1.200s
W20250811 20:48:08.876263 4817 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1796 milliseconds
I20250811 20:48:08.877653 4809 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
W20250811 20:48:08.877760 4818 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 20:48:08.881397 4809 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 20:48:08.883868 4809 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 20:48:08.885198 4809 hybrid_clock.cc:648] HybridClock initialized: now 1754945288885152 us; error 62 us; skew 500 ppm
I20250811 20:48:08.885969 4809 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 20:48:08.892067 4809 webserver.cc:489] Webserver started at http://127.31.250.254:42405/ using document root <none> and password file <none>
I20250811 20:48:08.892967 4809 fs_manager.cc:362] Metadata directory not provided
I20250811 20:48:08.893151 4809 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 20:48:08.893563 4809 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 20:48:08.899107 4809 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "f12ae4bf6e3347aeab89ab97cea58803"
format_stamp: "Formatted at 2025-08-11 20:48:08 on dist-test-slave-4gzk"
I20250811 20:48:08.900260 4809 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "f12ae4bf6e3347aeab89ab97cea58803"
format_stamp: "Formatted at 2025-08-11 20:48:08 on dist-test-slave-4gzk"
I20250811 20:48:08.907532 4809 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.006s sys 0.002s
I20250811 20:48:08.912961 4825 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 20:48:08.914036 4809 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.005s sys 0.000s
I20250811 20:48:08.914383 4809 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
uuid: "f12ae4bf6e3347aeab89ab97cea58803"
format_stamp: "Formatted at 2025-08-11 20:48:08 on dist-test-slave-4gzk"
I20250811 20:48:08.914733 4809 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 20:48:08.970638 4809 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 20:48:08.972157 4809 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 20:48:08.972604 4809 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 20:48:09.043985 4809 rpc_server.cc:307] RPC server started. Bound to: 127.31.250.254:45815
I20250811 20:48:09.044054 4876 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.31.250.254:45815 every 8 connection(s)
I20250811 20:48:09.046821 4809 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250811 20:48:09.051981 4877 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 20:48:09.051950 32747 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu as pid 4809
I20250811 20:48:09.052350 32747 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250811 20:48:09.077230 4877 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P f12ae4bf6e3347aeab89ab97cea58803: Bootstrap starting.
I20250811 20:48:09.084009 4877 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P f12ae4bf6e3347aeab89ab97cea58803: Neither blocks nor log segments found. Creating new log.
I20250811 20:48:09.085755 4877 log.cc:826] T 00000000000000000000000000000000 P f12ae4bf6e3347aeab89ab97cea58803: Log is configured to *not* fsync() on all Append() calls
I20250811 20:48:09.090564 4877 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P f12ae4bf6e3347aeab89ab97cea58803: No bootstrap required, opened a new log
I20250811 20:48:09.107388 4877 raft_consensus.cc:357] T 00000000000000000000000000000000 P f12ae4bf6e3347aeab89ab97cea58803 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f12ae4bf6e3347aeab89ab97cea58803" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 45815 } }
I20250811 20:48:09.108070 4877 raft_consensus.cc:383] T 00000000000000000000000000000000 P f12ae4bf6e3347aeab89ab97cea58803 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 20:48:09.108304 4877 raft_consensus.cc:738] T 00000000000000000000000000000000 P f12ae4bf6e3347aeab89ab97cea58803 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: f12ae4bf6e3347aeab89ab97cea58803, State: Initialized, Role: FOLLOWER
I20250811 20:48:09.108999 4877 consensus_queue.cc:260] T 00000000000000000000000000000000 P f12ae4bf6e3347aeab89ab97cea58803 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f12ae4bf6e3347aeab89ab97cea58803" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 45815 } }
I20250811 20:48:09.109520 4877 raft_consensus.cc:397] T 00000000000000000000000000000000 P f12ae4bf6e3347aeab89ab97cea58803 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250811 20:48:09.109824 4877 raft_consensus.cc:491] T 00000000000000000000000000000000 P f12ae4bf6e3347aeab89ab97cea58803 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250811 20:48:09.110147 4877 raft_consensus.cc:3058] T 00000000000000000000000000000000 P f12ae4bf6e3347aeab89ab97cea58803 [term 0 FOLLOWER]: Advancing to term 1
I20250811 20:48:09.114276 4877 raft_consensus.cc:513] T 00000000000000000000000000000000 P f12ae4bf6e3347aeab89ab97cea58803 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f12ae4bf6e3347aeab89ab97cea58803" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 45815 } }
I20250811 20:48:09.114976 4877 leader_election.cc:304] T 00000000000000000000000000000000 P f12ae4bf6e3347aeab89ab97cea58803 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: f12ae4bf6e3347aeab89ab97cea58803; no voters:
I20250811 20:48:09.116677 4877 leader_election.cc:290] T 00000000000000000000000000000000 P f12ae4bf6e3347aeab89ab97cea58803 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250811 20:48:09.117321 4882 raft_consensus.cc:2802] T 00000000000000000000000000000000 P f12ae4bf6e3347aeab89ab97cea58803 [term 1 FOLLOWER]: Leader election won for term 1
I20250811 20:48:09.119431 4882 raft_consensus.cc:695] T 00000000000000000000000000000000 P f12ae4bf6e3347aeab89ab97cea58803 [term 1 LEADER]: Becoming Leader. State: Replica: f12ae4bf6e3347aeab89ab97cea58803, State: Running, Role: LEADER
I20250811 20:48:09.120214 4882 consensus_queue.cc:237] T 00000000000000000000000000000000 P f12ae4bf6e3347aeab89ab97cea58803 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f12ae4bf6e3347aeab89ab97cea58803" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 45815 } }
I20250811 20:48:09.120849 4877 sys_catalog.cc:564] T 00000000000000000000000000000000 P f12ae4bf6e3347aeab89ab97cea58803 [sys.catalog]: configured and running, proceeding with master startup.
I20250811 20:48:09.130674 4883 sys_catalog.cc:455] T 00000000000000000000000000000000 P f12ae4bf6e3347aeab89ab97cea58803 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "f12ae4bf6e3347aeab89ab97cea58803" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f12ae4bf6e3347aeab89ab97cea58803" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 45815 } } }
I20250811 20:48:09.130992 4884 sys_catalog.cc:455] T 00000000000000000000000000000000 P f12ae4bf6e3347aeab89ab97cea58803 [sys.catalog]: SysCatalogTable state changed. Reason: New leader f12ae4bf6e3347aeab89ab97cea58803. Latest consensus state: current_term: 1 leader_uuid: "f12ae4bf6e3347aeab89ab97cea58803" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "f12ae4bf6e3347aeab89ab97cea58803" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 45815 } } }
I20250811 20:48:09.131676 4883 sys_catalog.cc:458] T 00000000000000000000000000000000 P f12ae4bf6e3347aeab89ab97cea58803 [sys.catalog]: This master's current role is: LEADER
I20250811 20:48:09.131783 4884 sys_catalog.cc:458] T 00000000000000000000000000000000 P f12ae4bf6e3347aeab89ab97cea58803 [sys.catalog]: This master's current role is: LEADER
I20250811 20:48:09.134905 4892 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250811 20:48:09.147675 4892 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250811 20:48:09.164260 4892 catalog_manager.cc:1349] Generated new cluster ID: afabc90acc234fe08f0688ad8bde52f5
I20250811 20:48:09.164597 4892 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250811 20:48:09.177361 4892 catalog_manager.cc:1372] Generated new certificate authority record
I20250811 20:48:09.178812 4892 catalog_manager.cc:1506] Loading token signing keys...
I20250811 20:48:09.197232 4892 catalog_manager.cc:5955] T 00000000000000000000000000000000 P f12ae4bf6e3347aeab89ab97cea58803: Generated new TSK 0
I20250811 20:48:09.198144 4892 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250811 20:48:09.217130 32747 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
/tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.31.250.193:0
--local_ip_for_outbound_sockets=127.31.250.193
--webserver_interface=127.31.250.193
--webserver_port=0
--tserver_master_addrs=127.31.250.254:45815
--builtin_ntp_servers=127.31.250.212:43967
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--raft_prepare_replacement_before_eviction=true with env {}
W20250811 20:48:09.527138 4901 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:48:09.527695 4901 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 20:48:09.528179 4901 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:48:09.559273 4901 flags.cc:425] Enabled experimental flag: --raft_prepare_replacement_before_eviction=true
W20250811 20:48:09.559749 4901 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 20:48:09.560544 4901 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.31.250.193
I20250811 20:48:09.595819 4901 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.31.250.212:43967
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.31.250.193:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.31.250.193
--webserver_port=0
--tserver_master_addrs=127.31.250.254:45815
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.31.250.193
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8e873eb37b157a0cf7cb97cc7690367de9707107
build type FASTDEBUG
built by None at 11 Aug 2025 20:41:23 UTC on 24a791456cd2
build id 7521
TSAN enabled
I20250811 20:48:09.597084 4901 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 20:48:09.598671 4901 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 20:48:09.611945 4907 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:48:11.012693 4906 debug-util.cc:398] Leaking SignalData structure 0x7b0800034ea0 after lost signal to thread 4901
W20250811 20:48:09.614061 4908 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:48:11.138331 4906 kernel_stack_watchdog.cc:198] Thread 4901 stuck at /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/util/thread.cc:642 for 399ms:
Kernel stack:
(could not read kernel stack)
User stack:
<Timed out: thread did not respond: maybe it is blocking signals>
W20250811 20:48:11.138583 4909 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection time-out
W20250811 20:48:11.142493 4901 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.531s user 0.482s sys 1.049s
W20250811 20:48:11.142808 4901 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.531s user 0.482s sys 1.049s
W20250811 20:48:11.144150 4910 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 20:48:11.144179 4901 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 20:48:11.145278 4901 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 20:48:11.147415 4901 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 20:48:11.148754 4901 hybrid_clock.cc:648] HybridClock initialized: now 1754945291148727 us; error 33 us; skew 500 ppm
I20250811 20:48:11.149478 4901 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 20:48:11.155447 4901 webserver.cc:489] Webserver started at http://127.31.250.193:37467/ using document root <none> and password file <none>
I20250811 20:48:11.156342 4901 fs_manager.cc:362] Metadata directory not provided
I20250811 20:48:11.156561 4901 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 20:48:11.157008 4901 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 20:48:11.161214 4901 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "f13fccb690a248468d5564dcf758fb07"
format_stamp: "Formatted at 2025-08-11 20:48:11 on dist-test-slave-4gzk"
I20250811 20:48:11.162285 4901 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "f13fccb690a248468d5564dcf758fb07"
format_stamp: "Formatted at 2025-08-11 20:48:11 on dist-test-slave-4gzk"
I20250811 20:48:11.169255 4901 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.006s sys 0.003s
I20250811 20:48:11.174926 4917 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 20:48:11.175961 4901 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.002s sys 0.001s
I20250811 20:48:11.176280 4901 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "f13fccb690a248468d5564dcf758fb07"
format_stamp: "Formatted at 2025-08-11 20:48:11 on dist-test-slave-4gzk"
I20250811 20:48:11.176591 4901 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 20:48:11.244354 4901 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 20:48:11.245836 4901 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 20:48:11.246268 4901 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 20:48:11.248813 4901 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 20:48:11.252733 4901 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 20:48:11.252938 4901 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250811 20:48:11.253181 4901 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 20:48:11.253345 4901 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250811 20:48:11.384126 4901 rpc_server.cc:307] RPC server started. Bound to: 127.31.250.193:37929
I20250811 20:48:11.384224 5029 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.31.250.193:37929 every 8 connection(s)
I20250811 20:48:11.386641 4901 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250811 20:48:11.389048 32747 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu as pid 4901
I20250811 20:48:11.389528 32747 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250811 20:48:11.397636 32747 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
/tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/logs
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.31.250.194:0
--local_ip_for_outbound_sockets=127.31.250.194
--webserver_interface=127.31.250.194
--webserver_port=0
--tserver_master_addrs=127.31.250.254:45815
--builtin_ntp_servers=127.31.250.212:43967
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--raft_prepare_replacement_before_eviction=true with env {}
I20250811 20:48:11.411055 5030 heartbeater.cc:344] Connected to a master server at 127.31.250.254:45815
I20250811 20:48:11.411545 5030 heartbeater.cc:461] Registering TS with master...
I20250811 20:48:11.412575 5030 heartbeater.cc:507] Master 127.31.250.254:45815 requested a full tablet report, sending...
I20250811 20:48:11.415071 4842 ts_manager.cc:194] Registered new tserver with Master: f13fccb690a248468d5564dcf758fb07 (127.31.250.193:37929)
I20250811 20:48:11.416951 4842 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.31.250.193:50277
W20250811 20:48:11.696269 5034 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:48:11.696755 5034 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 20:48:11.697211 5034 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:48:11.725922 5034 flags.cc:425] Enabled experimental flag: --raft_prepare_replacement_before_eviction=true
W20250811 20:48:11.726315 5034 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 20:48:11.727092 5034 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.31.250.194
I20250811 20:48:11.760279 5034 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.31.250.212:43967
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.31.250.194:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data/info.pb
--webserver_interface=127.31.250.194
--webserver_port=0
--tserver_master_addrs=127.31.250.254:45815
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.31.250.194
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8e873eb37b157a0cf7cb97cc7690367de9707107
build type FASTDEBUG
built by None at 11 Aug 2025 20:41:23 UTC on 24a791456cd2
build id 7521
TSAN enabled
I20250811 20:48:11.761533 5034 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 20:48:11.763130 5034 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 20:48:11.777869 5040 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 20:48:12.420023 5030 heartbeater.cc:499] Master 127.31.250.254:45815 was elected leader, sending a full tablet report...
W20250811 20:48:11.779207 5041 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:48:13.007551 5042 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1227 milliseconds
W20250811 20:48:13.009182 5043 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:48:13.011274 5034 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.232s user 0.436s sys 0.786s
W20250811 20:48:13.011547 5034 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.232s user 0.440s sys 0.786s
I20250811 20:48:13.011761 5034 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 20:48:13.012776 5034 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 20:48:13.015040 5034 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 20:48:13.016383 5034 hybrid_clock.cc:648] HybridClock initialized: now 1754945293016359 us; error 39 us; skew 500 ppm
I20250811 20:48:13.017179 5034 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 20:48:13.024241 5034 webserver.cc:489] Webserver started at http://127.31.250.194:34557/ using document root <none> and password file <none>
I20250811 20:48:13.025215 5034 fs_manager.cc:362] Metadata directory not provided
I20250811 20:48:13.025467 5034 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 20:48:13.025979 5034 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 20:48:13.030316 5034 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data/instance:
uuid: "5c5c7f63d14f4e65a13f36474436136f"
format_stamp: "Formatted at 2025-08-11 20:48:13 on dist-test-slave-4gzk"
I20250811 20:48:13.031412 5034 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal/instance:
uuid: "5c5c7f63d14f4e65a13f36474436136f"
format_stamp: "Formatted at 2025-08-11 20:48:13 on dist-test-slave-4gzk"
I20250811 20:48:13.038920 5034 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.006s sys 0.000s
I20250811 20:48:13.045931 5051 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 20:48:13.047204 5034 fs_manager.cc:730] Time spent opening block manager: real 0.005s user 0.000s sys 0.006s
I20250811 20:48:13.047641 5034 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data,/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal
uuid: "5c5c7f63d14f4e65a13f36474436136f"
format_stamp: "Formatted at 2025-08-11 20:48:13 on dist-test-slave-4gzk"
I20250811 20:48:13.048144 5034 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal
metadata directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal
1 data directories: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 20:48:13.121811 5034 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 20:48:13.123404 5034 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 20:48:13.123829 5034 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 20:48:13.126315 5034 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 20:48:13.130235 5034 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 20:48:13.130412 5034 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250811 20:48:13.130671 5034 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 20:48:13.130815 5034 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250811 20:48:13.262861 5034 rpc_server.cc:307] RPC server started. Bound to: 127.31.250.194:33377
I20250811 20:48:13.262966 5163 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.31.250.194:33377 every 8 connection(s)
I20250811 20:48:13.265486 5034 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/data/info.pb
I20250811 20:48:13.266789 32747 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu as pid 5034
I20250811 20:48:13.267393 32747 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-1/wal/instance
I20250811 20:48:13.275924 32747 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
/tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/logs
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.31.250.195:0
--local_ip_for_outbound_sockets=127.31.250.195
--webserver_interface=127.31.250.195
--webserver_port=0
--tserver_master_addrs=127.31.250.254:45815
--builtin_ntp_servers=127.31.250.212:43967
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--raft_prepare_replacement_before_eviction=true with env {}
I20250811 20:48:13.288410 5164 heartbeater.cc:344] Connected to a master server at 127.31.250.254:45815
I20250811 20:48:13.288839 5164 heartbeater.cc:461] Registering TS with master...
I20250811 20:48:13.289832 5164 heartbeater.cc:507] Master 127.31.250.254:45815 requested a full tablet report, sending...
I20250811 20:48:13.291985 4842 ts_manager.cc:194] Registered new tserver with Master: 5c5c7f63d14f4e65a13f36474436136f (127.31.250.194:33377)
I20250811 20:48:13.293174 4842 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.31.250.194:54147
W20250811 20:48:13.588150 5168 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:48:13.588704 5168 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 20:48:13.589205 5168 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:48:13.618850 5168 flags.cc:425] Enabled experimental flag: --raft_prepare_replacement_before_eviction=true
W20250811 20:48:13.619295 5168 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 20:48:13.620023 5168 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.31.250.195
I20250811 20:48:13.653323 5168 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.31.250.212:43967
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.31.250.195:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data/info.pb
--webserver_interface=127.31.250.195
--webserver_port=0
--tserver_master_addrs=127.31.250.254:45815
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.31.250.195
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8e873eb37b157a0cf7cb97cc7690367de9707107
build type FASTDEBUG
built by None at 11 Aug 2025 20:41:23 UTC on 24a791456cd2
build id 7521
TSAN enabled
I20250811 20:48:13.654678 5168 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 20:48:13.656281 5168 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 20:48:13.669363 5175 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 20:48:14.297070 5164 heartbeater.cc:499] Master 127.31.250.254:45815 was elected leader, sending a full tablet report...
W20250811 20:48:13.669374 5174 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:48:14.875280 5176 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1203 milliseconds
W20250811 20:48:14.875659 5177 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:48:14.876331 5168 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.207s user 0.386s sys 0.811s
W20250811 20:48:14.876597 5168 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.207s user 0.386s sys 0.811s
I20250811 20:48:14.876809 5168 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 20:48:14.877889 5168 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 20:48:14.889045 5168 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 20:48:14.890403 5168 hybrid_clock.cc:648] HybridClock initialized: now 1754945294890373 us; error 41 us; skew 500 ppm
I20250811 20:48:14.891203 5168 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 20:48:14.898209 5168 webserver.cc:489] Webserver started at http://127.31.250.195:33319/ using document root <none> and password file <none>
I20250811 20:48:14.899111 5168 fs_manager.cc:362] Metadata directory not provided
I20250811 20:48:14.899353 5168 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 20:48:14.899806 5168 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 20:48:14.904695 5168 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data/instance:
uuid: "6eeeb7741faf4ce8aec0a730fd3fcb55"
format_stamp: "Formatted at 2025-08-11 20:48:14 on dist-test-slave-4gzk"
I20250811 20:48:14.905911 5168 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal/instance:
uuid: "6eeeb7741faf4ce8aec0a730fd3fcb55"
format_stamp: "Formatted at 2025-08-11 20:48:14 on dist-test-slave-4gzk"
I20250811 20:48:14.913739 5168 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.005s sys 0.004s
I20250811 20:48:14.919585 5184 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 20:48:14.920704 5168 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.002s sys 0.002s
I20250811 20:48:14.921000 5168 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data,/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal
uuid: "6eeeb7741faf4ce8aec0a730fd3fcb55"
format_stamp: "Formatted at 2025-08-11 20:48:14 on dist-test-slave-4gzk"
I20250811 20:48:14.921319 5168 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal
metadata directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal
1 data directories: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 20:48:14.985260 5168 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 20:48:14.986711 5168 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 20:48:14.987129 5168 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 20:48:14.989681 5168 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 20:48:14.993680 5168 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 20:48:14.993893 5168 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250811 20:48:14.994143 5168 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 20:48:14.994313 5168 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250811 20:48:15.134999 5168 rpc_server.cc:307] RPC server started. Bound to: 127.31.250.195:36573
I20250811 20:48:15.135119 5296 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.31.250.195:36573 every 8 connection(s)
I20250811 20:48:15.137632 5168 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/data/info.pb
I20250811 20:48:15.141140 32747 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu as pid 5168
I20250811 20:48:15.141620 32747 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-2/wal/instance
I20250811 20:48:15.149134 32747 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
/tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-3/wal
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-3/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-3/logs
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-3/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.31.250.196:0
--local_ip_for_outbound_sockets=127.31.250.196
--webserver_interface=127.31.250.196
--webserver_port=0
--tserver_master_addrs=127.31.250.254:45815
--builtin_ntp_servers=127.31.250.212:43967
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--raft_prepare_replacement_before_eviction=true with env {}
I20250811 20:48:15.167852 5297 heartbeater.cc:344] Connected to a master server at 127.31.250.254:45815
I20250811 20:48:15.168324 5297 heartbeater.cc:461] Registering TS with master...
I20250811 20:48:15.169399 5297 heartbeater.cc:507] Master 127.31.250.254:45815 requested a full tablet report, sending...
I20250811 20:48:15.171721 4842 ts_manager.cc:194] Registered new tserver with Master: 6eeeb7741faf4ce8aec0a730fd3fcb55 (127.31.250.195:36573)
I20250811 20:48:15.173161 4842 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.31.250.195:48389
W20250811 20:48:15.461994 5301 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:48:15.462535 5301 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 20:48:15.463063 5301 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:48:15.494328 5301 flags.cc:425] Enabled experimental flag: --raft_prepare_replacement_before_eviction=true
W20250811 20:48:15.494776 5301 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 20:48:15.495636 5301 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.31.250.196
I20250811 20:48:15.530340 5301 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.31.250.212:43967
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-3/data
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-3/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.31.250.196:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-3/data/info.pb
--webserver_interface=127.31.250.196
--webserver_port=0
--tserver_master_addrs=127.31.250.254:45815
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.31.250.196
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-3/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8e873eb37b157a0cf7cb97cc7690367de9707107
build type FASTDEBUG
built by None at 11 Aug 2025 20:41:23 UTC on 24a791456cd2
build id 7521
TSAN enabled
I20250811 20:48:15.531747 5301 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 20:48:15.533367 5301 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 20:48:15.545624 5307 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 20:48:16.176419 5297 heartbeater.cc:499] Master 127.31.250.254:45815 was elected leader, sending a full tablet report...
W20250811 20:48:15.546588 5308 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:48:16.766131 5310 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:48:16.769204 5301 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.223s user 0.376s sys 0.845s
W20250811 20:48:16.769559 5301 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.224s user 0.376s sys 0.845s
W20250811 20:48:16.772825 5309 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1216 milliseconds
I20250811 20:48:16.775382 5301 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 20:48:16.776989 5301 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 20:48:16.779868 5301 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 20:48:16.781402 5301 hybrid_clock.cc:648] HybridClock initialized: now 1754945296781345 us; error 54 us; skew 500 ppm
I20250811 20:48:16.782608 5301 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 20:48:16.792150 5301 webserver.cc:489] Webserver started at http://127.31.250.196:38411/ using document root <none> and password file <none>
I20250811 20:48:16.793565 5301 fs_manager.cc:362] Metadata directory not provided
I20250811 20:48:16.793879 5301 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 20:48:16.794551 5301 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 20:48:16.801832 5301 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-3/data/instance:
uuid: "335438d8c109465b91eecf43f787cffb"
format_stamp: "Formatted at 2025-08-11 20:48:16 on dist-test-slave-4gzk"
I20250811 20:48:16.803483 5301 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-3/wal/instance:
uuid: "335438d8c109465b91eecf43f787cffb"
format_stamp: "Formatted at 2025-08-11 20:48:16 on dist-test-slave-4gzk"
I20250811 20:48:16.813519 5301 fs_manager.cc:696] Time spent creating directory manager: real 0.009s user 0.004s sys 0.005s
I20250811 20:48:16.821406 5317 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 20:48:16.822705 5301 fs_manager.cc:730] Time spent opening block manager: real 0.005s user 0.005s sys 0.000s
I20250811 20:48:16.823150 5301 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-3/data,/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-3/wal
uuid: "335438d8c109465b91eecf43f787cffb"
format_stamp: "Formatted at 2025-08-11 20:48:16 on dist-test-slave-4gzk"
I20250811 20:48:16.823678 5301 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-3/wal
metadata directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-3/wal
1 data directories: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-3/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 20:48:16.937616 5301 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 20:48:16.939076 5301 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 20:48:16.939515 5301 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 20:48:16.942094 5301 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 20:48:16.946228 5301 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 20:48:16.946435 5301 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250811 20:48:16.946683 5301 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 20:48:16.946848 5301 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250811 20:48:17.092280 5301 rpc_server.cc:307] RPC server started. Bound to: 127.31.250.196:43959
I20250811 20:48:17.092382 5430 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.31.250.196:43959 every 8 connection(s)
I20250811 20:48:17.095011 5301 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-3/data/info.pb
I20250811 20:48:17.097860 32747 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu as pid 5301
I20250811 20:48:17.098354 32747 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-3/wal/instance
I20250811 20:48:17.106690 32747 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
/tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-4/wal
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-4/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-4/logs
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-4/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.31.250.197:0
--local_ip_for_outbound_sockets=127.31.250.197
--webserver_interface=127.31.250.197
--webserver_port=0
--tserver_master_addrs=127.31.250.254:45815
--builtin_ntp_servers=127.31.250.212:43967
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--raft_prepare_replacement_before_eviction=true with env {}
I20250811 20:48:17.121866 5431 heartbeater.cc:344] Connected to a master server at 127.31.250.254:45815
I20250811 20:48:17.122355 5431 heartbeater.cc:461] Registering TS with master...
I20250811 20:48:17.123442 5431 heartbeater.cc:507] Master 127.31.250.254:45815 requested a full tablet report, sending...
I20250811 20:48:17.125942 4842 ts_manager.cc:194] Registered new tserver with Master: 335438d8c109465b91eecf43f787cffb (127.31.250.196:43959)
I20250811 20:48:17.128161 4842 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.31.250.196:46271
W20250811 20:48:17.422780 5435 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:48:17.423287 5435 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 20:48:17.423750 5435 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:48:17.455477 5435 flags.cc:425] Enabled experimental flag: --raft_prepare_replacement_before_eviction=true
W20250811 20:48:17.455847 5435 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 20:48:17.456597 5435 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.31.250.197
I20250811 20:48:17.491389 5435 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.31.250.212:43967
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-4/data
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-4/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.31.250.197:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-4/data/info.pb
--webserver_interface=127.31.250.197
--webserver_port=0
--tserver_master_addrs=127.31.250.254:45815
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.31.250.197
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-4/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8e873eb37b157a0cf7cb97cc7690367de9707107
build type FASTDEBUG
built by None at 11 Aug 2025 20:41:23 UTC on 24a791456cd2
build id 7521
TSAN enabled
I20250811 20:48:17.492714 5435 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 20:48:17.494414 5435 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 20:48:17.506518 5441 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 20:48:18.131621 5431 heartbeater.cc:499] Master 127.31.250.254:45815 was elected leader, sending a full tablet report...
W20250811 20:48:17.511852 5444 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:48:17.508200 5442 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:48:18.662453 5443 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection time-out
I20250811 20:48:18.662500 5435 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 20:48:18.666273 5435 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 20:48:18.668464 5435 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 20:48:18.669818 5435 hybrid_clock.cc:648] HybridClock initialized: now 1754945298669795 us; error 44 us; skew 500 ppm
I20250811 20:48:18.670665 5435 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 20:48:18.676514 5435 webserver.cc:489] Webserver started at http://127.31.250.197:34933/ using document root <none> and password file <none>
I20250811 20:48:18.677409 5435 fs_manager.cc:362] Metadata directory not provided
I20250811 20:48:18.677613 5435 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 20:48:18.678128 5435 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 20:48:18.682672 5435 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-4/data/instance:
uuid: "01135df7fd7946268dc3bac5f17faf55"
format_stamp: "Formatted at 2025-08-11 20:48:18 on dist-test-slave-4gzk"
I20250811 20:48:18.683789 5435 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-4/wal/instance:
uuid: "01135df7fd7946268dc3bac5f17faf55"
format_stamp: "Formatted at 2025-08-11 20:48:18 on dist-test-slave-4gzk"
I20250811 20:48:18.690609 5435 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.006s sys 0.001s
I20250811 20:48:18.696089 5451 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 20:48:18.697228 5435 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.002s
I20250811 20:48:18.697548 5435 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-4/data,/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-4/wal
uuid: "01135df7fd7946268dc3bac5f17faf55"
format_stamp: "Formatted at 2025-08-11 20:48:18 on dist-test-slave-4gzk"
I20250811 20:48:18.697856 5435 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-4/wal
metadata directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-4/wal
1 data directories: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-4/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 20:48:18.758741 5435 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 20:48:18.760216 5435 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 20:48:18.760622 5435 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 20:48:18.763043 5435 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 20:48:18.768484 5435 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 20:48:18.768806 5435 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250811 20:48:18.769151 5435 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 20:48:18.769369 5435 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250811 20:48:18.904597 5435 rpc_server.cc:307] RPC server started. Bound to: 127.31.250.197:40343
I20250811 20:48:18.904771 5563 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.31.250.197:40343 every 8 connection(s)
I20250811 20:48:18.907147 5435 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-4/data/info.pb
I20250811 20:48:18.917809 32747 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu as pid 5435
I20250811 20:48:18.918311 32747 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.EnableKudu1097AndDownTS_MoveTabletParamTest.Test_4.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-4/wal/instance
I20250811 20:48:18.937980 5564 heartbeater.cc:344] Connected to a master server at 127.31.250.254:45815
I20250811 20:48:18.938402 5564 heartbeater.cc:461] Registering TS with master...
I20250811 20:48:18.939412 5564 heartbeater.cc:507] Master 127.31.250.254:45815 requested a full tablet report, sending...
I20250811 20:48:18.941522 4841 ts_manager.cc:194] Registered new tserver with Master: 01135df7fd7946268dc3bac5f17faf55 (127.31.250.197:40343)
I20250811 20:48:18.942898 4841 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.31.250.197:52941
I20250811 20:48:18.954705 32747 external_mini_cluster.cc:949] 5 TS(s) registered with all masters
I20250811 20:48:18.989777 4841 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:44408:
name: "TestTable"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
owner: "alice"
I20250811 20:48:19.077267 5366 tablet_service.cc:1468] Processing CreateTablet for tablet ef85281c3b7345e9bcfd66632f6f8042 (DEFAULT_TABLE table=TestTable [id=ab889d0c682d446fa038c0f88537c06d]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 20:48:19.079303 5366 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet ef85281c3b7345e9bcfd66632f6f8042. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 20:48:19.079121 4965 tablet_service.cc:1468] Processing CreateTablet for tablet ef85281c3b7345e9bcfd66632f6f8042 (DEFAULT_TABLE table=TestTable [id=ab889d0c682d446fa038c0f88537c06d]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 20:48:19.081077 4965 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet ef85281c3b7345e9bcfd66632f6f8042. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 20:48:19.089962 5232 tablet_service.cc:1468] Processing CreateTablet for tablet ef85281c3b7345e9bcfd66632f6f8042 (DEFAULT_TABLE table=TestTable [id=ab889d0c682d446fa038c0f88537c06d]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 20:48:19.092023 5232 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet ef85281c3b7345e9bcfd66632f6f8042. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 20:48:19.120800 5583 tablet_bootstrap.cc:492] T ef85281c3b7345e9bcfd66632f6f8042 P f13fccb690a248468d5564dcf758fb07: Bootstrap starting.
I20250811 20:48:19.129031 5583 tablet_bootstrap.cc:654] T ef85281c3b7345e9bcfd66632f6f8042 P f13fccb690a248468d5564dcf758fb07: Neither blocks nor log segments found. Creating new log.
I20250811 20:48:19.134403 5583 log.cc:826] T ef85281c3b7345e9bcfd66632f6f8042 P f13fccb690a248468d5564dcf758fb07: Log is configured to *not* fsync() on all Append() calls
I20250811 20:48:19.149232 5584 tablet_bootstrap.cc:492] T ef85281c3b7345e9bcfd66632f6f8042 P 6eeeb7741faf4ce8aec0a730fd3fcb55: Bootstrap starting.
I20250811 20:48:19.166584 5585 tablet_bootstrap.cc:492] T ef85281c3b7345e9bcfd66632f6f8042 P 335438d8c109465b91eecf43f787cffb: Bootstrap starting.
I20250811 20:48:19.169246 5583 tablet_bootstrap.cc:492] T ef85281c3b7345e9bcfd66632f6f8042 P f13fccb690a248468d5564dcf758fb07: No bootstrap required, opened a new log
I20250811 20:48:19.169828 5583 ts_tablet_manager.cc:1397] T ef85281c3b7345e9bcfd66632f6f8042 P f13fccb690a248468d5564dcf758fb07: Time spent bootstrapping tablet: real 0.050s user 0.031s sys 0.004s
I20250811 20:48:19.169996 5584 tablet_bootstrap.cc:654] T ef85281c3b7345e9bcfd66632f6f8042 P 6eeeb7741faf4ce8aec0a730fd3fcb55: Neither blocks nor log segments found. Creating new log.
I20250811 20:48:19.172960 5584 log.cc:826] T ef85281c3b7345e9bcfd66632f6f8042 P 6eeeb7741faf4ce8aec0a730fd3fcb55: Log is configured to *not* fsync() on all Append() calls
I20250811 20:48:19.174507 5585 tablet_bootstrap.cc:654] T ef85281c3b7345e9bcfd66632f6f8042 P 335438d8c109465b91eecf43f787cffb: Neither blocks nor log segments found. Creating new log.
I20250811 20:48:19.176807 5585 log.cc:826] T ef85281c3b7345e9bcfd66632f6f8042 P 335438d8c109465b91eecf43f787cffb: Log is configured to *not* fsync() on all Append() calls
I20250811 20:48:19.184428 5584 tablet_bootstrap.cc:492] T ef85281c3b7345e9bcfd66632f6f8042 P 6eeeb7741faf4ce8aec0a730fd3fcb55: No bootstrap required, opened a new log
I20250811 20:48:19.185015 5584 ts_tablet_manager.cc:1397] T ef85281c3b7345e9bcfd66632f6f8042 P 6eeeb7741faf4ce8aec0a730fd3fcb55: Time spent bootstrapping tablet: real 0.036s user 0.011s sys 0.013s
I20250811 20:48:19.186098 5585 tablet_bootstrap.cc:492] T ef85281c3b7345e9bcfd66632f6f8042 P 335438d8c109465b91eecf43f787cffb: No bootstrap required, opened a new log
I20250811 20:48:19.186497 5585 ts_tablet_manager.cc:1397] T ef85281c3b7345e9bcfd66632f6f8042 P 335438d8c109465b91eecf43f787cffb: Time spent bootstrapping tablet: real 0.020s user 0.003s sys 0.014s
I20250811 20:48:19.196861 5583 raft_consensus.cc:357] T ef85281c3b7345e9bcfd66632f6f8042 P f13fccb690a248468d5564dcf758fb07 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f13fccb690a248468d5564dcf758fb07" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 37929 } } peers { permanent_uuid: "335438d8c109465b91eecf43f787cffb" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 43959 } } peers { permanent_uuid: "6eeeb7741faf4ce8aec0a730fd3fcb55" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 36573 } }
I20250811 20:48:19.197986 5583 raft_consensus.cc:383] T ef85281c3b7345e9bcfd66632f6f8042 P f13fccb690a248468d5564dcf758fb07 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 20:48:19.198380 5583 raft_consensus.cc:738] T ef85281c3b7345e9bcfd66632f6f8042 P f13fccb690a248468d5564dcf758fb07 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: f13fccb690a248468d5564dcf758fb07, State: Initialized, Role: FOLLOWER
I20250811 20:48:19.199785 5583 consensus_queue.cc:260] T ef85281c3b7345e9bcfd66632f6f8042 P f13fccb690a248468d5564dcf758fb07 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f13fccb690a248468d5564dcf758fb07" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 37929 } } peers { permanent_uuid: "335438d8c109465b91eecf43f787cffb" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 43959 } } peers { permanent_uuid: "6eeeb7741faf4ce8aec0a730fd3fcb55" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 36573 } }
I20250811 20:48:19.203943 5583 ts_tablet_manager.cc:1428] T ef85281c3b7345e9bcfd66632f6f8042 P f13fccb690a248468d5564dcf758fb07: Time spent starting tablet: real 0.034s user 0.025s sys 0.008s
I20250811 20:48:19.212862 5585 raft_consensus.cc:357] T ef85281c3b7345e9bcfd66632f6f8042 P 335438d8c109465b91eecf43f787cffb [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f13fccb690a248468d5564dcf758fb07" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 37929 } } peers { permanent_uuid: "335438d8c109465b91eecf43f787cffb" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 43959 } } peers { permanent_uuid: "6eeeb7741faf4ce8aec0a730fd3fcb55" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 36573 } }
I20250811 20:48:19.212862 5584 raft_consensus.cc:357] T ef85281c3b7345e9bcfd66632f6f8042 P 6eeeb7741faf4ce8aec0a730fd3fcb55 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f13fccb690a248468d5564dcf758fb07" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 37929 } } peers { permanent_uuid: "335438d8c109465b91eecf43f787cffb" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 43959 } } peers { permanent_uuid: "6eeeb7741faf4ce8aec0a730fd3fcb55" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 36573 } }
I20250811 20:48:19.213775 5585 raft_consensus.cc:383] T ef85281c3b7345e9bcfd66632f6f8042 P 335438d8c109465b91eecf43f787cffb [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 20:48:19.213775 5584 raft_consensus.cc:383] T ef85281c3b7345e9bcfd66632f6f8042 P 6eeeb7741faf4ce8aec0a730fd3fcb55 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 20:48:19.214076 5585 raft_consensus.cc:738] T ef85281c3b7345e9bcfd66632f6f8042 P 335438d8c109465b91eecf43f787cffb [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 335438d8c109465b91eecf43f787cffb, State: Initialized, Role: FOLLOWER
I20250811 20:48:19.214082 5584 raft_consensus.cc:738] T ef85281c3b7345e9bcfd66632f6f8042 P 6eeeb7741faf4ce8aec0a730fd3fcb55 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 6eeeb7741faf4ce8aec0a730fd3fcb55, State: Initialized, Role: FOLLOWER
I20250811 20:48:19.214957 5585 consensus_queue.cc:260] T ef85281c3b7345e9bcfd66632f6f8042 P 335438d8c109465b91eecf43f787cffb [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f13fccb690a248468d5564dcf758fb07" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 37929 } } peers { permanent_uuid: "335438d8c109465b91eecf43f787cffb" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 43959 } } peers { permanent_uuid: "6eeeb7741faf4ce8aec0a730fd3fcb55" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 36573 } }
I20250811 20:48:19.214960 5584 consensus_queue.cc:260] T ef85281c3b7345e9bcfd66632f6f8042 P 6eeeb7741faf4ce8aec0a730fd3fcb55 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f13fccb690a248468d5564dcf758fb07" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 37929 } } peers { permanent_uuid: "335438d8c109465b91eecf43f787cffb" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 43959 } } peers { permanent_uuid: "6eeeb7741faf4ce8aec0a730fd3fcb55" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 36573 } }
I20250811 20:48:19.219331 5584 ts_tablet_manager.cc:1428] T ef85281c3b7345e9bcfd66632f6f8042 P 6eeeb7741faf4ce8aec0a730fd3fcb55: Time spent starting tablet: real 0.034s user 0.033s sys 0.000s
I20250811 20:48:19.220228 5585 ts_tablet_manager.cc:1428] T ef85281c3b7345e9bcfd66632f6f8042 P 335438d8c109465b91eecf43f787cffb: Time spent starting tablet: real 0.034s user 0.029s sys 0.006s
I20250811 20:48:19.260005 5589 raft_consensus.cc:491] T ef85281c3b7345e9bcfd66632f6f8042 P f13fccb690a248468d5564dcf758fb07 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250811 20:48:19.260501 5589 raft_consensus.cc:513] T ef85281c3b7345e9bcfd66632f6f8042 P f13fccb690a248468d5564dcf758fb07 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f13fccb690a248468d5564dcf758fb07" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 37929 } } peers { permanent_uuid: "335438d8c109465b91eecf43f787cffb" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 43959 } } peers { permanent_uuid: "6eeeb7741faf4ce8aec0a730fd3fcb55" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 36573 } }
I20250811 20:48:19.263104 5589 leader_election.cc:290] T ef85281c3b7345e9bcfd66632f6f8042 P f13fccb690a248468d5564dcf758fb07 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 335438d8c109465b91eecf43f787cffb (127.31.250.196:43959), 6eeeb7741faf4ce8aec0a730fd3fcb55 (127.31.250.195:36573)
I20250811 20:48:19.276716 5252 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "ef85281c3b7345e9bcfd66632f6f8042" candidate_uuid: "f13fccb690a248468d5564dcf758fb07" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "6eeeb7741faf4ce8aec0a730fd3fcb55" is_pre_election: true
I20250811 20:48:19.276959 5386 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "ef85281c3b7345e9bcfd66632f6f8042" candidate_uuid: "f13fccb690a248468d5564dcf758fb07" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "335438d8c109465b91eecf43f787cffb" is_pre_election: true
I20250811 20:48:19.277452 5252 raft_consensus.cc:2466] T ef85281c3b7345e9bcfd66632f6f8042 P 6eeeb7741faf4ce8aec0a730fd3fcb55 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate f13fccb690a248468d5564dcf758fb07 in term 0.
I20250811 20:48:19.277741 5386 raft_consensus.cc:2466] T ef85281c3b7345e9bcfd66632f6f8042 P 335438d8c109465b91eecf43f787cffb [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate f13fccb690a248468d5564dcf758fb07 in term 0.
I20250811 20:48:19.278656 4921 leader_election.cc:304] T ef85281c3b7345e9bcfd66632f6f8042 P f13fccb690a248468d5564dcf758fb07 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 6eeeb7741faf4ce8aec0a730fd3fcb55, f13fccb690a248468d5564dcf758fb07; no voters:
I20250811 20:48:19.279415 5589 raft_consensus.cc:2802] T ef85281c3b7345e9bcfd66632f6f8042 P f13fccb690a248468d5564dcf758fb07 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250811 20:48:19.279699 5589 raft_consensus.cc:491] T ef85281c3b7345e9bcfd66632f6f8042 P f13fccb690a248468d5564dcf758fb07 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250811 20:48:19.279973 5589 raft_consensus.cc:3058] T ef85281c3b7345e9bcfd66632f6f8042 P f13fccb690a248468d5564dcf758fb07 [term 0 FOLLOWER]: Advancing to term 1
I20250811 20:48:19.284360 5589 raft_consensus.cc:513] T ef85281c3b7345e9bcfd66632f6f8042 P f13fccb690a248468d5564dcf758fb07 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f13fccb690a248468d5564dcf758fb07" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 37929 } } peers { permanent_uuid: "335438d8c109465b91eecf43f787cffb" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 43959 } } peers { permanent_uuid: "6eeeb7741faf4ce8aec0a730fd3fcb55" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 36573 } }
I20250811 20:48:19.285724 5589 leader_election.cc:290] T ef85281c3b7345e9bcfd66632f6f8042 P f13fccb690a248468d5564dcf758fb07 [CANDIDATE]: Term 1 election: Requested vote from peers 335438d8c109465b91eecf43f787cffb (127.31.250.196:43959), 6eeeb7741faf4ce8aec0a730fd3fcb55 (127.31.250.195:36573)
I20250811 20:48:19.286592 5386 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "ef85281c3b7345e9bcfd66632f6f8042" candidate_uuid: "f13fccb690a248468d5564dcf758fb07" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "335438d8c109465b91eecf43f787cffb"
I20250811 20:48:19.286726 5252 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "ef85281c3b7345e9bcfd66632f6f8042" candidate_uuid: "f13fccb690a248468d5564dcf758fb07" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "6eeeb7741faf4ce8aec0a730fd3fcb55"
I20250811 20:48:19.287132 5386 raft_consensus.cc:3058] T ef85281c3b7345e9bcfd66632f6f8042 P 335438d8c109465b91eecf43f787cffb [term 0 FOLLOWER]: Advancing to term 1
I20250811 20:48:19.287232 5252 raft_consensus.cc:3058] T ef85281c3b7345e9bcfd66632f6f8042 P 6eeeb7741faf4ce8aec0a730fd3fcb55 [term 0 FOLLOWER]: Advancing to term 1
I20250811 20:48:19.294046 5386 raft_consensus.cc:2466] T ef85281c3b7345e9bcfd66632f6f8042 P 335438d8c109465b91eecf43f787cffb [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate f13fccb690a248468d5564dcf758fb07 in term 1.
I20250811 20:48:19.294056 5252 raft_consensus.cc:2466] T ef85281c3b7345e9bcfd66632f6f8042 P 6eeeb7741faf4ce8aec0a730fd3fcb55 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate f13fccb690a248468d5564dcf758fb07 in term 1.
I20250811 20:48:19.295192 4919 leader_election.cc:304] T ef85281c3b7345e9bcfd66632f6f8042 P f13fccb690a248468d5564dcf758fb07 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 335438d8c109465b91eecf43f787cffb, f13fccb690a248468d5564dcf758fb07; no voters:
I20250811 20:48:19.295907 5589 raft_consensus.cc:2802] T ef85281c3b7345e9bcfd66632f6f8042 P f13fccb690a248468d5564dcf758fb07 [term 1 FOLLOWER]: Leader election won for term 1
I20250811 20:48:19.297375 5589 raft_consensus.cc:695] T ef85281c3b7345e9bcfd66632f6f8042 P f13fccb690a248468d5564dcf758fb07 [term 1 LEADER]: Becoming Leader. State: Replica: f13fccb690a248468d5564dcf758fb07, State: Running, Role: LEADER
I20250811 20:48:19.298225 5589 consensus_queue.cc:237] T ef85281c3b7345e9bcfd66632f6f8042 P f13fccb690a248468d5564dcf758fb07 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f13fccb690a248468d5564dcf758fb07" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 37929 } } peers { permanent_uuid: "335438d8c109465b91eecf43f787cffb" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 43959 } } peers { permanent_uuid: "6eeeb7741faf4ce8aec0a730fd3fcb55" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 36573 } }
I20250811 20:48:19.309196 4840 catalog_manager.cc:5582] T ef85281c3b7345e9bcfd66632f6f8042 P f13fccb690a248468d5564dcf758fb07 reported cstate change: term changed from 0 to 1, leader changed from <none> to f13fccb690a248468d5564dcf758fb07 (127.31.250.193). New cstate: current_term: 1 leader_uuid: "f13fccb690a248468d5564dcf758fb07" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f13fccb690a248468d5564dcf758fb07" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 37929 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "335438d8c109465b91eecf43f787cffb" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 43959 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "6eeeb7741faf4ce8aec0a730fd3fcb55" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 36573 } health_report { overall_health: UNKNOWN } } }
W20250811 20:48:19.351527 5432 tablet.cc:2378] T ef85281c3b7345e9bcfd66632f6f8042 P 335438d8c109465b91eecf43f787cffb: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250811 20:48:19.379909 32747 external_mini_cluster.cc:949] 5 TS(s) registered with all masters
I20250811 20:48:19.383474 32747 ts_itest-base.cc:246] Waiting for 1 tablets on tserver f13fccb690a248468d5564dcf758fb07 to finish bootstrapping
W20250811 20:48:19.396719 5298 tablet.cc:2378] T ef85281c3b7345e9bcfd66632f6f8042 P 6eeeb7741faf4ce8aec0a730fd3fcb55: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250811 20:48:19.397123 32747 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 6eeeb7741faf4ce8aec0a730fd3fcb55 to finish bootstrapping
W20250811 20:48:19.400521 5031 tablet.cc:2378] T ef85281c3b7345e9bcfd66632f6f8042 P f13fccb690a248468d5564dcf758fb07: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250811 20:48:19.409355 32747 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 335438d8c109465b91eecf43f787cffb to finish bootstrapping
I20250811 20:48:19.420583 32747 test_util.cc:276] Using random seed: 187438480
I20250811 20:48:19.444737 32747 test_workload.cc:405] TestWorkload: Skipping table creation because table TestTable already exists
I20250811 20:48:19.445641 32747 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu with pid 4901
W20250811 20:48:19.483381 5601 negotiation.cc:337] Failed RPC negotiation. Trace:
0811 20:48:19.458623 (+ 0us) reactor.cc:625] Submitting negotiation task for client connection to 127.31.250.193:37929 (local address 127.0.0.1:33382)
0811 20:48:19.459314 (+ 691us) negotiation.cc:107] Waiting for socket to connect
0811 20:48:19.459348 (+ 34us) client_negotiation.cc:174] Beginning negotiation
0811 20:48:19.459553 (+ 205us) client_negotiation.cc:252] Sending NEGOTIATE NegotiatePB request
0811 20:48:19.472574 (+ 13021us) negotiation.cc:327] Negotiation complete: Network error: Client connection negotiation failed: client connection to 127.31.250.193:37929: BlockingRecv error: recv error from unknown peer: Transport endpoint is not connected (error 107)
Metrics: {"client-negotiator.queue_time_us":90}
W20250811 20:48:19.485361 5600 meta_cache.cc:302] tablet ef85281c3b7345e9bcfd66632f6f8042: replica f13fccb690a248468d5564dcf758fb07 (127.31.250.193:37929) has failed: Network error: Client connection negotiation failed: client connection to 127.31.250.193:37929: BlockingRecv error: recv error from unknown peer: Transport endpoint is not connected (error 107)
W20250811 20:48:19.505822 5346 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:44706: Illegal state: replica 335438d8c109465b91eecf43f787cffb is not leader of this config: current role FOLLOWER
W20250811 20:48:19.523998 5212 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:36222: Illegal state: replica 6eeeb7741faf4ce8aec0a730fd3fcb55 is not leader of this config: current role FOLLOWER
W20250811 20:48:19.543303 5346 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:44706: Illegal state: replica 335438d8c109465b91eecf43f787cffb is not leader of this config: current role FOLLOWER
W20250811 20:48:19.555001 5212 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:36222: Illegal state: replica 6eeeb7741faf4ce8aec0a730fd3fcb55 is not leader of this config: current role FOLLOWER
W20250811 20:48:19.581853 5346 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:44706: Illegal state: replica 335438d8c109465b91eecf43f787cffb is not leader of this config: current role FOLLOWER
W20250811 20:48:19.597366 5212 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:36222: Illegal state: replica 6eeeb7741faf4ce8aec0a730fd3fcb55 is not leader of this config: current role FOLLOWER
W20250811 20:48:19.629653 5346 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:44706: Illegal state: replica 335438d8c109465b91eecf43f787cffb is not leader of this config: current role FOLLOWER
W20250811 20:48:19.648556 5212 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:36222: Illegal state: replica 6eeeb7741faf4ce8aec0a730fd3fcb55 is not leader of this config: current role FOLLOWER
W20250811 20:48:19.688215 5346 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:44706: Illegal state: replica 335438d8c109465b91eecf43f787cffb is not leader of this config: current role FOLLOWER
W20250811 20:48:19.708930 5212 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:36222: Illegal state: replica 6eeeb7741faf4ce8aec0a730fd3fcb55 is not leader of this config: current role FOLLOWER
W20250811 20:48:19.756301 5346 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:44706: Illegal state: replica 335438d8c109465b91eecf43f787cffb is not leader of this config: current role FOLLOWER
W20250811 20:48:19.780877 5212 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:36222: Illegal state: replica 6eeeb7741faf4ce8aec0a730fd3fcb55 is not leader of this config: current role FOLLOWER
W20250811 20:48:19.832558 5346 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:44706: Illegal state: replica 335438d8c109465b91eecf43f787cffb is not leader of this config: current role FOLLOWER
W20250811 20:48:19.858347 5212 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:36222: Illegal state: replica 6eeeb7741faf4ce8aec0a730fd3fcb55 is not leader of this config: current role FOLLOWER
W20250811 20:48:19.913084 5346 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:44706: Illegal state: replica 335438d8c109465b91eecf43f787cffb is not leader of this config: current role FOLLOWER
W20250811 20:48:19.941115 5212 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:36222: Illegal state: replica 6eeeb7741faf4ce8aec0a730fd3fcb55 is not leader of this config: current role FOLLOWER
I20250811 20:48:19.945843 5564 heartbeater.cc:499] Master 127.31.250.254:45815 was elected leader, sending a full tablet report...
W20250811 20:48:20.005993 5346 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:44706: Illegal state: replica 335438d8c109465b91eecf43f787cffb is not leader of this config: current role FOLLOWER
W20250811 20:48:20.039973 5212 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:36222: Illegal state: replica 6eeeb7741faf4ce8aec0a730fd3fcb55 is not leader of this config: current role FOLLOWER
W20250811 20:48:20.116370 5346 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:44706: Illegal state: replica 335438d8c109465b91eecf43f787cffb is not leader of this config: current role FOLLOWER
W20250811 20:48:20.155329 5212 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:36222: Illegal state: replica 6eeeb7741faf4ce8aec0a730fd3fcb55 is not leader of this config: current role FOLLOWER
W20250811 20:48:20.239660 5346 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:44706: Illegal state: replica 335438d8c109465b91eecf43f787cffb is not leader of this config: current role FOLLOWER
W20250811 20:48:20.282146 5212 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:36222: Illegal state: replica 6eeeb7741faf4ce8aec0a730fd3fcb55 is not leader of this config: current role FOLLOWER
W20250811 20:48:20.371826 5346 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:44706: Illegal state: replica 335438d8c109465b91eecf43f787cffb is not leader of this config: current role FOLLOWER
W20250811 20:48:20.415364 5212 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:36222: Illegal state: replica 6eeeb7741faf4ce8aec0a730fd3fcb55 is not leader of this config: current role FOLLOWER
W20250811 20:48:20.512953 5346 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:44706: Illegal state: replica 335438d8c109465b91eecf43f787cffb is not leader of this config: current role FOLLOWER
W20250811 20:48:20.559448 5212 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:36222: Illegal state: replica 6eeeb7741faf4ce8aec0a730fd3fcb55 is not leader of this config: current role FOLLOWER
W20250811 20:48:20.612972 5600 meta_cache.cc:302] tablet ef85281c3b7345e9bcfd66632f6f8042: replica f13fccb690a248468d5564dcf758fb07 (127.31.250.193:37929) has failed: Network error: Client connection negotiation failed: client connection to 127.31.250.193:37929: connect: Connection refused (error 111)
W20250811 20:48:20.660475 5346 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:44706: Illegal state: replica 335438d8c109465b91eecf43f787cffb is not leader of this config: current role FOLLOWER
W20250811 20:48:20.707942 5212 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:36222: Illegal state: replica 6eeeb7741faf4ce8aec0a730fd3fcb55 is not leader of this config: current role FOLLOWER
W20250811 20:48:20.817698 5346 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:44706: Illegal state: replica 335438d8c109465b91eecf43f787cffb is not leader of this config: current role FOLLOWER
W20250811 20:48:20.869160 5212 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:36222: Illegal state: replica 6eeeb7741faf4ce8aec0a730fd3fcb55 is not leader of this config: current role FOLLOWER
I20250811 20:48:20.928354 5614 raft_consensus.cc:491] T ef85281c3b7345e9bcfd66632f6f8042 P 335438d8c109465b91eecf43f787cffb [term 1 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250811 20:48:20.928731 5614 raft_consensus.cc:513] T ef85281c3b7345e9bcfd66632f6f8042 P 335438d8c109465b91eecf43f787cffb [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f13fccb690a248468d5564dcf758fb07" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 37929 } } peers { permanent_uuid: "335438d8c109465b91eecf43f787cffb" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 43959 } } peers { permanent_uuid: "6eeeb7741faf4ce8aec0a730fd3fcb55" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 36573 } }
I20250811 20:48:20.930994 5614 leader_election.cc:290] T ef85281c3b7345e9bcfd66632f6f8042 P 335438d8c109465b91eecf43f787cffb [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers f13fccb690a248468d5564dcf758fb07 (127.31.250.193:37929), 6eeeb7741faf4ce8aec0a730fd3fcb55 (127.31.250.195:36573)
W20250811 20:48:20.934921 5321 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.31.250.193:37929: connect: Connection refused (error 111)
W20250811 20:48:20.946337 5321 leader_election.cc:336] T ef85281c3b7345e9bcfd66632f6f8042 P 335438d8c109465b91eecf43f787cffb [CANDIDATE]: Term 2 pre-election: RPC error from VoteRequest() call to peer f13fccb690a248468d5564dcf758fb07 (127.31.250.193:37929): Network error: Client connection negotiation failed: client connection to 127.31.250.193:37929: connect: Connection refused (error 111)
I20250811 20:48:20.952206 5252 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "ef85281c3b7345e9bcfd66632f6f8042" candidate_uuid: "335438d8c109465b91eecf43f787cffb" candidate_term: 2 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "6eeeb7741faf4ce8aec0a730fd3fcb55" is_pre_election: true
I20250811 20:48:20.952844 5252 raft_consensus.cc:2466] T ef85281c3b7345e9bcfd66632f6f8042 P 6eeeb7741faf4ce8aec0a730fd3fcb55 [term 1 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 335438d8c109465b91eecf43f787cffb in term 1.
I20250811 20:48:20.954090 5321 leader_election.cc:304] T ef85281c3b7345e9bcfd66632f6f8042 P 335438d8c109465b91eecf43f787cffb [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 335438d8c109465b91eecf43f787cffb, 6eeeb7741faf4ce8aec0a730fd3fcb55; no voters: f13fccb690a248468d5564dcf758fb07
I20250811 20:48:20.955027 5614 raft_consensus.cc:2802] T ef85281c3b7345e9bcfd66632f6f8042 P 335438d8c109465b91eecf43f787cffb [term 1 FOLLOWER]: Leader pre-election won for term 2
I20250811 20:48:20.955475 5614 raft_consensus.cc:491] T ef85281c3b7345e9bcfd66632f6f8042 P 335438d8c109465b91eecf43f787cffb [term 1 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250811 20:48:20.955969 5614 raft_consensus.cc:3058] T ef85281c3b7345e9bcfd66632f6f8042 P 335438d8c109465b91eecf43f787cffb [term 1 FOLLOWER]: Advancing to term 2
I20250811 20:48:20.965546 5614 raft_consensus.cc:513] T ef85281c3b7345e9bcfd66632f6f8042 P 335438d8c109465b91eecf43f787cffb [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f13fccb690a248468d5564dcf758fb07" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 37929 } } peers { permanent_uuid: "335438d8c109465b91eecf43f787cffb" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 43959 } } peers { permanent_uuid: "6eeeb7741faf4ce8aec0a730fd3fcb55" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 36573 } }
I20250811 20:48:20.968102 5614 leader_election.cc:290] T ef85281c3b7345e9bcfd66632f6f8042 P 335438d8c109465b91eecf43f787cffb [CANDIDATE]: Term 2 election: Requested vote from peers f13fccb690a248468d5564dcf758fb07 (127.31.250.193:37929), 6eeeb7741faf4ce8aec0a730fd3fcb55 (127.31.250.195:36573)
I20250811 20:48:20.970964 5252 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "ef85281c3b7345e9bcfd66632f6f8042" candidate_uuid: "335438d8c109465b91eecf43f787cffb" candidate_term: 2 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "6eeeb7741faf4ce8aec0a730fd3fcb55"
I20250811 20:48:20.971719 5252 raft_consensus.cc:3058] T ef85281c3b7345e9bcfd66632f6f8042 P 6eeeb7741faf4ce8aec0a730fd3fcb55 [term 1 FOLLOWER]: Advancing to term 2
W20250811 20:48:20.976533 5321 leader_election.cc:336] T ef85281c3b7345e9bcfd66632f6f8042 P 335438d8c109465b91eecf43f787cffb [CANDIDATE]: Term 2 election: RPC error from VoteRequest() call to peer f13fccb690a248468d5564dcf758fb07 (127.31.250.193:37929): Network error: Client connection negotiation failed: client connection to 127.31.250.193:37929: connect: Connection refused (error 111)
W20250811 20:48:20.977419 5346 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:44706: Illegal state: replica 335438d8c109465b91eecf43f787cffb is not leader of this config: current role FOLLOWER
I20250811 20:48:20.978595 5252 raft_consensus.cc:2466] T ef85281c3b7345e9bcfd66632f6f8042 P 6eeeb7741faf4ce8aec0a730fd3fcb55 [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 335438d8c109465b91eecf43f787cffb in term 2.
I20250811 20:48:20.979780 5321 leader_election.cc:304] T ef85281c3b7345e9bcfd66632f6f8042 P 335438d8c109465b91eecf43f787cffb [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 3 responses out of 3 voters: 2 yes votes; 1 no votes. yes voters: 335438d8c109465b91eecf43f787cffb, 6eeeb7741faf4ce8aec0a730fd3fcb55; no voters: f13fccb690a248468d5564dcf758fb07
I20250811 20:48:20.980667 5614 raft_consensus.cc:2802] T ef85281c3b7345e9bcfd66632f6f8042 P 335438d8c109465b91eecf43f787cffb [term 2 FOLLOWER]: Leader election won for term 2
I20250811 20:48:20.983312 5614 raft_consensus.cc:695] T ef85281c3b7345e9bcfd66632f6f8042 P 335438d8c109465b91eecf43f787cffb [term 2 LEADER]: Becoming Leader. State: Replica: 335438d8c109465b91eecf43f787cffb, State: Running, Role: LEADER
I20250811 20:48:20.984658 5614 consensus_queue.cc:237] T ef85281c3b7345e9bcfd66632f6f8042 P 335438d8c109465b91eecf43f787cffb [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f13fccb690a248468d5564dcf758fb07" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 37929 } } peers { permanent_uuid: "335438d8c109465b91eecf43f787cffb" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 43959 } } peers { permanent_uuid: "6eeeb7741faf4ce8aec0a730fd3fcb55" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 36573 } }
I20250811 20:48:20.999450 4840 catalog_manager.cc:5582] T ef85281c3b7345e9bcfd66632f6f8042 P 335438d8c109465b91eecf43f787cffb reported cstate change: term changed from 1 to 2, leader changed from f13fccb690a248468d5564dcf758fb07 (127.31.250.193) to 335438d8c109465b91eecf43f787cffb (127.31.250.196). New cstate: current_term: 2 leader_uuid: "335438d8c109465b91eecf43f787cffb" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "f13fccb690a248468d5564dcf758fb07" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 37929 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "335438d8c109465b91eecf43f787cffb" member_type: VOTER last_known_addr { host: "127.31.250.196" port: 43959 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "6eeeb7741faf4ce8aec0a730fd3fcb55" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 36573 } health_report { overall_health: UNKNOWN } } }
W20250811 20:48:21.032473 5212 tablet_service.cc:696] failed op from {username='slave'} at 127.0.0.1:36222: Illegal state: replica 6eeeb7741faf4ce8aec0a730fd3fcb55 is not leader of this config: current role FOLLOWER
I20250811 20:48:21.110669 5252 raft_consensus.cc:1273] T ef85281c3b7345e9bcfd66632f6f8042 P 6eeeb7741faf4ce8aec0a730fd3fcb55 [term 2 FOLLOWER]: Refusing update from remote peer 335438d8c109465b91eecf43f787cffb: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 2 index: 2. (index mismatch)
I20250811 20:48:21.113138 5619 consensus_queue.cc:1035] T ef85281c3b7345e9bcfd66632f6f8042 P 335438d8c109465b91eecf43f787cffb [LEADER]: Connected to new peer: Peer: permanent_uuid: "6eeeb7741faf4ce8aec0a730fd3fcb55" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 36573 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.001s
W20250811 20:48:21.113936 5321 consensus_peers.cc:489] T ef85281c3b7345e9bcfd66632f6f8042 P 335438d8c109465b91eecf43f787cffb -> Peer f13fccb690a248468d5564dcf758fb07 (127.31.250.193:37929): Couldn't send request to peer f13fccb690a248468d5564dcf758fb07. Status: Network error: Client connection negotiation failed: client connection to 127.31.250.193:37929: connect: Connection refused (error 111). This is attempt 1: this message will repeat every 5th retry.
I20250811 20:48:21.150487 5625 mvcc.cc:204] Tried to move back new op lower bound from 7188255953330008064 to 7188255952845172736. Current Snapshot: MvccSnapshot[applied={T|T < 7188255953330008064}]
I20250811 20:48:21.155398 5627 mvcc.cc:204] Tried to move back new op lower bound from 7188255953330008064 to 7188255952845172736. Current Snapshot: MvccSnapshot[applied={T|T < 7188255953330008064}]
I20250811 20:48:21.842011 5499 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250811 20:48:21.859532 5366 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250811 20:48:21.891758 5232 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250811 20:48:21.915746 5099 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
W20250811 20:48:23.512858 5321 consensus_peers.cc:489] T ef85281c3b7345e9bcfd66632f6f8042 P 335438d8c109465b91eecf43f787cffb -> Peer f13fccb690a248468d5564dcf758fb07 (127.31.250.193:37929): Couldn't send request to peer f13fccb690a248468d5564dcf758fb07. Status: Network error: Client connection negotiation failed: client connection to 127.31.250.193:37929: connect: Connection refused (error 111). This is attempt 6: this message will repeat every 5th retry.
I20250811 20:48:23.733644 5499 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250811 20:48:23.745023 5366 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250811 20:48:23.755730 5099 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250811 20:48:23.781625 5232 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
W20250811 20:48:26.214347 5321 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.31.250.193:37929: connect: Connection refused (error 111) [suppressed 11 similar messages]
I20250811 20:48:26.218034 32747 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu with pid 5034
W20250811 20:48:26.218926 5321 consensus_peers.cc:489] T ef85281c3b7345e9bcfd66632f6f8042 P 335438d8c109465b91eecf43f787cffb -> Peer f13fccb690a248468d5564dcf758fb07 (127.31.250.193:37929): Couldn't send request to peer f13fccb690a248468d5564dcf758fb07. Status: Network error: Client connection negotiation failed: client connection to 127.31.250.193:37929: connect: Connection refused (error 111). This is attempt 11: this message will repeat every 5th retry.
I20250811 20:48:26.241433 32747 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu with pid 5168
I20250811 20:48:26.275454 32747 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu with pid 5301
I20250811 20:48:26.314991 32747 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu with pid 5435
I20250811 20:48:26.339900 32747 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu with pid 4809
2025-08-11T20:48:26Z chronyd exiting
[ OK ] EnableKudu1097AndDownTS/MoveTabletParamTest.Test/4 (19759 ms)
[----------] 1 test from EnableKudu1097AndDownTS/MoveTabletParamTest (19759 ms total)
[----------] 1 test from ListTableCliSimpleParamTest
[ RUN ] ListTableCliSimpleParamTest.TestListTables/2
I20250811 20:48:26.402493 32747 test_util.cc:276] Using random seed: 194420384
I20250811 20:48:26.406663 32747 ts_itest-base.cc:115] Starting cluster with:
I20250811 20:48:26.406836 32747 ts_itest-base.cc:116] --------------
I20250811 20:48:26.406994 32747 ts_itest-base.cc:117] 1 tablet servers
I20250811 20:48:26.407131 32747 ts_itest-base.cc:118] 1 replicas per TS
I20250811 20:48:26.407277 32747 ts_itest-base.cc:119] --------------
2025-08-11T20:48:26Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-08-11T20:48:26Z Disabled control of system clock
I20250811 20:48:26.451520 32747 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
/tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.31.250.254:41147
--webserver_interface=127.31.250.254
--webserver_port=0
--builtin_ntp_servers=127.31.250.212:45933
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.31.250.254:41147 with env {}
W20250811 20:48:26.750558 5716 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:48:26.751168 5716 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 20:48:26.751664 5716 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:48:26.787021 5716 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250811 20:48:26.787386 5716 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 20:48:26.787657 5716 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250811 20:48:26.787885 5716 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250811 20:48:26.822891 5716 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.31.250.212:45933
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.31.250.254:41147
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.31.250.254:41147
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data/info.pb
--webserver_interface=127.31.250.254
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision 8e873eb37b157a0cf7cb97cc7690367de9707107
build type FASTDEBUG
built by None at 11 Aug 2025 20:41:23 UTC on 24a791456cd2
build id 7521
TSAN enabled
I20250811 20:48:26.824246 5716 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 20:48:26.825810 5716 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 20:48:26.836154 5722 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:48:28.239754 5721 debug-util.cc:398] Leaking SignalData structure 0x7b0800037cc0 after lost signal to thread 5716
W20250811 20:48:28.647652 5716 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.811s user 0.560s sys 1.248s
W20250811 20:48:28.648083 5716 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.812s user 0.560s sys 1.248s
W20250811 20:48:26.836971 5723 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:48:28.649964 5725 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:48:28.652989 5724 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1812 milliseconds
I20250811 20:48:28.653064 5716 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 20:48:28.654301 5716 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 20:48:28.656715 5716 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 20:48:28.658041 5716 hybrid_clock.cc:648] HybridClock initialized: now 1754945308657999 us; error 58 us; skew 500 ppm
I20250811 20:48:28.658836 5716 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 20:48:28.664948 5716 webserver.cc:489] Webserver started at http://127.31.250.254:39201/ using document root <none> and password file <none>
I20250811 20:48:28.665875 5716 fs_manager.cc:362] Metadata directory not provided
I20250811 20:48:28.666092 5716 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 20:48:28.666570 5716 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 20:48:28.670915 5716 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data/instance:
uuid: "307c96946a2e42fb8b45423d438348ae"
format_stamp: "Formatted at 2025-08-11 20:48:28 on dist-test-slave-4gzk"
I20250811 20:48:28.672063 5716 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal/instance:
uuid: "307c96946a2e42fb8b45423d438348ae"
format_stamp: "Formatted at 2025-08-11 20:48:28 on dist-test-slave-4gzk"
I20250811 20:48:28.679157 5716 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.007s sys 0.000s
I20250811 20:48:28.684721 5732 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 20:48:28.685735 5716 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.002s
I20250811 20:48:28.686065 5716 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data,/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
uuid: "307c96946a2e42fb8b45423d438348ae"
format_stamp: "Formatted at 2025-08-11 20:48:28 on dist-test-slave-4gzk"
I20250811 20:48:28.686400 5716 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
metadata directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal
1 data directories: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 20:48:28.731197 5716 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 20:48:28.732699 5716 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 20:48:28.733139 5716 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 20:48:28.805475 5716 rpc_server.cc:307] RPC server started. Bound to: 127.31.250.254:41147
I20250811 20:48:28.805569 5783 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.31.250.254:41147 every 8 connection(s)
I20250811 20:48:28.808209 5716 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/data/info.pb
I20250811 20:48:28.813110 5784 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 20:48:28.816823 32747 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu as pid 5716
I20250811 20:48:28.817222 32747 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754945161385764-32747-0/raft_consensus-itest-cluster/master-0/wal/instance
I20250811 20:48:28.836491 5784 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 307c96946a2e42fb8b45423d438348ae: Bootstrap starting.
I20250811 20:48:28.841518 5784 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 307c96946a2e42fb8b45423d438348ae: Neither blocks nor log segments found. Creating new log.
I20250811 20:48:28.843480 5784 log.cc:826] T 00000000000000000000000000000000 P 307c96946a2e42fb8b45423d438348ae: Log is configured to *not* fsync() on all Append() calls
I20250811 20:48:28.848378 5784 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 307c96946a2e42fb8b45423d438348ae: No bootstrap required, opened a new log
I20250811 20:48:28.865765 5784 raft_consensus.cc:357] T 00000000000000000000000000000000 P 307c96946a2e42fb8b45423d438348ae [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "307c96946a2e42fb8b45423d438348ae" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 41147 } }
I20250811 20:48:28.866441 5784 raft_consensus.cc:383] T 00000000000000000000000000000000 P 307c96946a2e42fb8b45423d438348ae [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 20:48:28.866676 5784 raft_consensus.cc:738] T 00000000000000000000000000000000 P 307c96946a2e42fb8b45423d438348ae [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 307c96946a2e42fb8b45423d438348ae, State: Initialized, Role: FOLLOWER
I20250811 20:48:28.867362 5784 consensus_queue.cc:260] T 00000000000000000000000000000000 P 307c96946a2e42fb8b45423d438348ae [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "307c96946a2e42fb8b45423d438348ae" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 41147 } }
I20250811 20:48:28.867864 5784 raft_consensus.cc:397] T 00000000000000000000000000000000 P 307c96946a2e42fb8b45423d438348ae [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250811 20:48:28.868119 5784 raft_consensus.cc:491] T 00000000000000000000000000000000 P 307c96946a2e42fb8b45423d438348ae [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250811 20:48:28.868427 5784 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 307c96946a2e42fb8b45423d438348ae [term 0 FOLLOWER]: Advancing to term 1
I20250811 20:48:28.872466 5784 raft_consensus.cc:513] T 00000000000000000000000000000000 P 307c96946a2e42fb8b45423d438348ae [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "307c96946a2e42fb8b45423d438348ae" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 41147 } }
I20250811 20:48:28.873150 5784 leader_election.cc:304] T 00000000000000000000000000000000 P 307c96946a2e42fb8b45423d438348ae [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 307c96946a2e42fb8b45423d438348ae; no voters:
I20250811 20:48:28.874766 5784 leader_election.cc:290] T 00000000000000000000000000000000 P 307c96946a2e42fb8b45423d438348ae [CANDIDATE]: Term 1 election: Requested vote from peers
I20250811 20:48:28.875545 5789 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 307c96946a2e42fb8b45423d438348ae [term 1 FOLLOWER]: Leader election won for term 1
I20250811 20:48:28.877627 5789 raft_consensus.cc:695] T 00000000000000000000000000000000 P 307c96946a2e42fb8b45423d438348ae [term 1 LEADER]: Becoming Leader. State: Replica: 307c96946a2e42fb8b45423d438348ae, State: Running, Role: LEADER
I20250811 20:48:28.878352 5789 consensus_queue.cc:237] T 00000000000000000000000000000000 P 307c96946a2e42fb8b45423d438348ae [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "307c96946a2e42fb8b45423d438348ae" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 41147 } }
I20250811 20:48:28.879396 5784 sys_catalog.cc:564] T 00000000000000000000000000000000 P 307c96946a2e42fb8b45423d438348ae [sys.catalog]: configured and running, proceeding with master startup.
I20250811 20:48:28.887755 5791 sys_catalog.cc:455] T 00000000000000000000000000000000 P 307c96946a2e42fb8b45423d438348ae [sys.catalog]: SysCatalogTable state changed. Reason: New leader 307c96946a2e42fb8b45423d438348ae. Latest consensus state: current_term: 1 leader_uuid: "307c96946a2e42fb8b45423d438348ae" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "307c96946a2e42fb8b45423d438348ae" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 41147 } } }
I20250811 20:48:28.888778 5791 sys_catalog.cc:458] T 00000000000000000000000000000000 P 307c96946a2e42fb8b45423d438348ae [sys.catalog]: This master's current role is: LEADER
I20250811 20:48:28.891101 5790 sys_catalog.cc:455] T 00000000000000000000000000000000 P 307c96946a2e42fb8b45423d438348ae [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "307c96946a2e42fb8b45423d438348ae" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "307c96946a2e42fb8b45423d438348ae" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 41147 } } }
I20250811 20:48:28.891908 5790 sys_catalog.cc:458] T 00000000000000000000000000000000 P 307c96946a2e42fb8b45423d438348ae [sys.catalog]: This master's current role is: LEADER
I20250811 20:48:28.893463 5798 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250811 20:48:28.905541 5798 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250811 20:48:28.922142 5798 catalog_manager.cc:1349] Generated new cluster ID: 0517018c679a4c4db16c8de6a086f24a
I20250811 20:48:28.922447 5798 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250811 20:48:28.933597 5798 catalog_manager.cc:1372] Generated new certificate authority record
I20250811 20:48:28.935539 5798 catalog_manager.cc:1506] Loading token signing keys...
I20250811 20:48:28.952315 5798 catalog_manager.cc:5955] T 00000000000000000000000000000000 P 307c96946a2e42fb8b45423d438348ae: Generated new TSK 0
I20250811 20:48:28.953169 5798 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250811 20:48:28.971298 32747 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
/tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.31.250.193:0
--local_ip_for_outbound_sockets=127.31.250.193
--webserver_interface=127.31.250.193
--webserver_port=0
--tserver_master_addrs=127.31.250.254:41147
--builtin_ntp_servers=127.31.250.212:45933
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--consensus_rpc_timeout_ms=30000 with env {}
W20250811 20:48:29.274554 5808 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:48:29.275049 5808 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 20:48:29.275579 5808 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:48:29.305733 5808 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 20:48:29.306622 5808 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.31.250.193
I20250811 20:48:29.341223 5808 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.31.250.212:45933
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--log_cache_size_limit_mb=10
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.31.250.193:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data/info.pb
--webserver_interface=127.31.250.193
--webserver_port=0
--tserver_master_addrs=127.31.250.254:41147
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.31.250.193
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8e873eb37b157a0cf7cb97cc7690367de9707107
build type FASTDEBUG
built by None at 11 Aug 2025 20:41:23 UTC on 24a791456cd2
build id 7521
TSAN enabled
I20250811 20:48:29.342494 5808 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 20:48:29.344069 5808 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 20:48:29.356460 5814 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:48:29.361282 5817 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:48:29.360333 5815 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:48:30.547435 5816 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection time-out
I20250811 20:48:30.547652 5808 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 20:48:30.551568 5808 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 20:48:30.559834 5808 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 20:48:30.561254 5808 hybrid_clock.cc:648] HybridClock initialized: now 1754945310561196 us; error 77 us; skew 500 ppm
I20250811 20:48:30.562011 5808 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 20:48:30.568910 5808 webserver.cc:489] Webserver started at http://127.31.250.193:46519/ using document root <none> and password file <none>
I20250811 20:48:30.569801 5808 fs_manager.cc:362] Metadata directory not provided
I20250811 20:48:30.569991 5808 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 20:48:30.570447 5808 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 20:48:30.574751 5808 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data/instance:
uuid: "05ffc9f1c1b546368150dbdd57bba94b"
format_stamp: "Formatted at 2025-08-11 20:48:30 on dist-test-slave-4gzk"
I20250811 20:48:30.575881 5808 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal/instance:
uuid: "05ffc9f1c1b546368150dbdd57bba94b"
format_stamp: "Formatted at 2025-08-11 20:48:30 on dist-test-slave-4gzk"
I20250811 20:48:30.583499 5808 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.003s sys 0.005s
I20250811 20:48:30.589690 5824 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 20:48:30.590754 5808 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.006s sys 0.000s
I20250811 20:48:30.591063 5808 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data,/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal
uuid: "05ffc9f1c1b546368150dbdd57bba94b"
format_stamp: "Formatted at 2025-08-11 20:48:30 on dist-test-slave-4gzk"
I20250811 20:48:30.591424 5808 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal
metadata directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal
1 data directories: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 20:48:30.645269 5808 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 20:48:30.646745 5808 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 20:48:30.647166 5808 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 20:48:30.650193 5808 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 20:48:30.654183 5808 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 20:48:30.654373 5808 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250811 20:48:30.654656 5808 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 20:48:30.654803 5808 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250811 20:48:30.789193 5808 rpc_server.cc:307] RPC server started. Bound to: 127.31.250.193:40131
I20250811 20:48:30.789294 5936 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.31.250.193:40131 every 8 connection(s)
I20250811 20:48:30.791589 5808 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/data/info.pb
I20250811 20:48:30.798235 32747 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu as pid 5808
I20250811 20:48:30.798652 32747 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.ListTableCliSimpleParamTest.TestListTables_2.1754945161385764-32747-0/raft_consensus-itest-cluster/ts-0/wal/instance
I20250811 20:48:30.812463 5937 heartbeater.cc:344] Connected to a master server at 127.31.250.254:41147
I20250811 20:48:30.812942 5937 heartbeater.cc:461] Registering TS with master...
I20250811 20:48:30.814168 5937 heartbeater.cc:507] Master 127.31.250.254:41147 requested a full tablet report, sending...
I20250811 20:48:30.817095 5749 ts_manager.cc:194] Registered new tserver with Master: 05ffc9f1c1b546368150dbdd57bba94b (127.31.250.193:40131)
I20250811 20:48:30.818881 5749 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.31.250.193:35203
I20250811 20:48:30.831091 32747 external_mini_cluster.cc:949] 1 TS(s) registered with all masters
I20250811 20:48:30.859717 5749 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:59452:
name: "TestTable"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 1
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
owner: "alice"
I20250811 20:48:30.913381 5872 tablet_service.cc:1468] Processing CreateTablet for tablet 688ae4b68d90493f9631c275a2669c0c (DEFAULT_TABLE table=TestTable [id=dd066aa81362487f994320d9fc9d6de7]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 20:48:30.914809 5872 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 688ae4b68d90493f9631c275a2669c0c. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 20:48:30.932816 5952 tablet_bootstrap.cc:492] T 688ae4b68d90493f9631c275a2669c0c P 05ffc9f1c1b546368150dbdd57bba94b: Bootstrap starting.
I20250811 20:48:30.937945 5952 tablet_bootstrap.cc:654] T 688ae4b68d90493f9631c275a2669c0c P 05ffc9f1c1b546368150dbdd57bba94b: Neither blocks nor log segments found. Creating new log.
I20250811 20:48:30.939639 5952 log.cc:826] T 688ae4b68d90493f9631c275a2669c0c P 05ffc9f1c1b546368150dbdd57bba94b: Log is configured to *not* fsync() on all Append() calls
I20250811 20:48:30.943825 5952 tablet_bootstrap.cc:492] T 688ae4b68d90493f9631c275a2669c0c P 05ffc9f1c1b546368150dbdd57bba94b: No bootstrap required, opened a new log
I20250811 20:48:30.944162 5952 ts_tablet_manager.cc:1397] T 688ae4b68d90493f9631c275a2669c0c P 05ffc9f1c1b546368150dbdd57bba94b: Time spent bootstrapping tablet: real 0.012s user 0.008s sys 0.002s
I20250811 20:48:30.960232 5952 raft_consensus.cc:357] T 688ae4b68d90493f9631c275a2669c0c P 05ffc9f1c1b546368150dbdd57bba94b [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "05ffc9f1c1b546368150dbdd57bba94b" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 40131 } }
I20250811 20:48:30.960779 5952 raft_consensus.cc:383] T 688ae4b68d90493f9631c275a2669c0c P 05ffc9f1c1b546368150dbdd57bba94b [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 20:48:30.961000 5952 raft_consensus.cc:738] T 688ae4b68d90493f9631c275a2669c0c P 05ffc9f1c1b546368150dbdd57bba94b [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 05ffc9f1c1b546368150dbdd57bba94b, State: Initialized, Role: FOLLOWER
I20250811 20:48:30.961628 5952 consensus_queue.cc:260] T 688ae4b68d90493f9631c275a2669c0c P 05ffc9f1c1b546368150dbdd57bba94b [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "05ffc9f1c1b546368150dbdd57bba94b" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 40131 } }
I20250811 20:48:30.962116 5952 raft_consensus.cc:397] T 688ae4b68d90493f9631c275a2669c0c P 05ffc9f1c1b546368150dbdd57bba94b [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250811 20:48:30.962373 5952 raft_consensus.cc:491] T 688ae4b68d90493f9631c275a2669c0c P 05ffc9f1c1b546368150dbdd57bba94b [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250811 20:48:30.962685 5952 raft_consensus.cc:3058] T 688ae4b68d90493f9631c275a2669c0c P 05ffc9f1c1b546368150dbdd57bba94b [term 0 FOLLOWER]: Advancing to term 1
I20250811 20:48:30.966719 5952 raft_consensus.cc:513] T 688ae4b68d90493f9631c275a2669c0c P 05ffc9f1c1b546368150dbdd57bba94b [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "05ffc9f1c1b546368150dbdd57bba94b" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 40131 } }
I20250811 20:48:30.967509 5952 leader_election.cc:304] T 688ae4b68d90493f9631c275a2669c0c P 05ffc9f1c1b546368150dbdd57bba94b [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 05ffc9f1c1b546368150dbdd57bba94b; no voters:
I20250811 20:48:30.969421 5952 leader_election.cc:290] T 688ae4b68d90493f9631c275a2669c0c P 05ffc9f1c1b546368150dbdd57bba94b [CANDIDATE]: Term 1 election: Requested vote from peers
I20250811 20:48:30.969769 5954 raft_consensus.cc:2802] T 688ae4b68d90493f9631c275a2669c0c P 05ffc9f1c1b546368150dbdd57bba94b [term 1 FOLLOWER]: Leader election won for term 1
I20250811 20:48:30.971822 5954 raft_consensus.cc:695] T 688ae4b68d90493f9631c275a2669c0c P 05ffc9f1c1b546368150dbdd57bba94b [term 1 LEADER]: Becoming Leader. State: Replica: 05ffc9f1c1b546368150dbdd57bba94b, State: Running, Role: LEADER
I20250811 20:48:30.972716 5937 heartbeater.cc:499] Master 127.31.250.254:41147 was elected leader, sending a full tablet report...
I20250811 20:48:30.973098 5952 ts_tablet_manager.cc:1428] T 688ae4b68d90493f9631c275a2669c0c P 05ffc9f1c1b546368150dbdd57bba94b: Time spent starting tablet: real 0.029s user 0.026s sys 0.003s
I20250811 20:48:30.972726 5954 consensus_queue.cc:237] T 688ae4b68d90493f9631c275a2669c0c P 05ffc9f1c1b546368150dbdd57bba94b [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "05ffc9f1c1b546368150dbdd57bba94b" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 40131 } }
I20250811 20:48:30.985216 5749 catalog_manager.cc:5582] T 688ae4b68d90493f9631c275a2669c0c P 05ffc9f1c1b546368150dbdd57bba94b reported cstate change: term changed from 0 to 1, leader changed from <none> to 05ffc9f1c1b546368150dbdd57bba94b (127.31.250.193). New cstate: current_term: 1 leader_uuid: "05ffc9f1c1b546368150dbdd57bba94b" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "05ffc9f1c1b546368150dbdd57bba94b" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 40131 } health_report { overall_health: HEALTHY } } }
I20250811 20:48:31.013195 32747 external_mini_cluster.cc:949] 1 TS(s) registered with all masters
I20250811 20:48:31.016146 32747 ts_itest-base.cc:246] Waiting for 1 tablets on tserver 05ffc9f1c1b546368150dbdd57bba94b to finish bootstrapping
I20250811 20:48:33.668891 32747 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu with pid 5808
I20250811 20:48:33.693069 32747 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu with pid 5716
2025-08-11T20:48:33Z chronyd exiting
[ OK ] ListTableCliSimpleParamTest.TestListTables/2 (7347 ms)
[----------] 1 test from ListTableCliSimpleParamTest (7347 ms total)
[----------] 1 test from ListTableCliParamTest
[ RUN ] ListTableCliParamTest.ListTabletWithPartitionInfo/4
I20250811 20:48:33.750217 32747 test_util.cc:276] Using random seed: 201768110
[ OK ] ListTableCliParamTest.ListTabletWithPartitionInfo/4 (12 ms)
[----------] 1 test from ListTableCliParamTest (12 ms total)
[----------] 1 test from IsSecure/SecureClusterAdminCliParamTest
[ RUN ] IsSecure/SecureClusterAdminCliParamTest.TestRebuildMaster/0
2025-08-11T20:48:33Z chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -NTS -SECHASH -IPV6 +DEBUG)
2025-08-11T20:48:33Z Disabled control of system clock
I20250811 20:48:33.801589 32747 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
/tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.31.250.254:46197
--webserver_interface=127.31.250.254
--webserver_port=0
--builtin_ntp_servers=127.31.250.212:36717
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.31.250.254:46197 with env {}
W20250811 20:48:34.092995 5981 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:48:34.093626 5981 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 20:48:34.094066 5981 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:48:34.124588 5981 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250811 20:48:34.124910 5981 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 20:48:34.125160 5981 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250811 20:48:34.125406 5981 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250811 20:48:34.159636 5981 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.31.250.212:36717
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/master-0/data
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.31.250.254:46197
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.31.250.254:46197
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/master-0/data/info.pb
--webserver_interface=127.31.250.254
--webserver_port=0
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision 8e873eb37b157a0cf7cb97cc7690367de9707107
build type FASTDEBUG
built by None at 11 Aug 2025 20:41:23 UTC on 24a791456cd2
build id 7521
TSAN enabled
I20250811 20:48:34.160861 5981 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 20:48:34.162487 5981 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 20:48:34.173295 5987 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:48:34.173508 5988 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:48:35.577124 5986 debug-util.cc:398] Leaking SignalData structure 0x7b0800037cc0 after lost signal to thread 5981
W20250811 20:48:35.958374 5981 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.785s user 0.576s sys 1.204s
W20250811 20:48:35.959740 5981 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.787s user 0.577s sys 1.205s
W20250811 20:48:35.960845 5990 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:48:35.964449 5989 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1785 milliseconds
I20250811 20:48:35.964588 5981 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 20:48:35.965754 5981 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 20:48:35.968189 5981 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 20:48:35.969574 5981 hybrid_clock.cc:648] HybridClock initialized: now 1754945315969544 us; error 49 us; skew 500 ppm
I20250811 20:48:35.970374 5981 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 20:48:35.976684 5981 webserver.cc:489] Webserver started at http://127.31.250.254:45209/ using document root <none> and password file <none>
I20250811 20:48:35.977610 5981 fs_manager.cc:362] Metadata directory not provided
I20250811 20:48:35.977797 5981 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 20:48:35.978204 5981 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 20:48:35.982923 5981 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/master-0/data/instance:
uuid: "5be000e368b946d0abe2ca1d1f539b29"
format_stamp: "Formatted at 2025-08-11 20:48:35 on dist-test-slave-4gzk"
I20250811 20:48:35.984107 5981 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/master-0/wal/instance:
uuid: "5be000e368b946d0abe2ca1d1f539b29"
format_stamp: "Formatted at 2025-08-11 20:48:35 on dist-test-slave-4gzk"
I20250811 20:48:35.991489 5981 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.007s sys 0.000s
I20250811 20:48:35.997190 5997 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 20:48:35.998198 5981 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.005s sys 0.000s
I20250811 20:48:35.998499 5981 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/master-0/data,/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/master-0/wal
uuid: "5be000e368b946d0abe2ca1d1f539b29"
format_stamp: "Formatted at 2025-08-11 20:48:35 on dist-test-slave-4gzk"
I20250811 20:48:35.998788 5981 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/master-0/wal
metadata directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/master-0/wal
1 data directories: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 20:48:36.053629 5981 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 20:48:36.055071 5981 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 20:48:36.055542 5981 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 20:48:36.125137 5981 rpc_server.cc:307] RPC server started. Bound to: 127.31.250.254:46197
I20250811 20:48:36.125202 6048 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.31.250.254:46197 every 8 connection(s)
I20250811 20:48:36.127884 5981 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/master-0/data/info.pb
I20250811 20:48:36.128664 32747 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu as pid 5981
I20250811 20:48:36.129133 32747 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/master-0/wal/instance
I20250811 20:48:36.134590 6049 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 20:48:36.159004 6049 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 5be000e368b946d0abe2ca1d1f539b29: Bootstrap starting.
I20250811 20:48:36.164711 6049 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P 5be000e368b946d0abe2ca1d1f539b29: Neither blocks nor log segments found. Creating new log.
I20250811 20:48:36.166471 6049 log.cc:826] T 00000000000000000000000000000000 P 5be000e368b946d0abe2ca1d1f539b29: Log is configured to *not* fsync() on all Append() calls
I20250811 20:48:36.171068 6049 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P 5be000e368b946d0abe2ca1d1f539b29: No bootstrap required, opened a new log
I20250811 20:48:36.189970 6049 raft_consensus.cc:357] T 00000000000000000000000000000000 P 5be000e368b946d0abe2ca1d1f539b29 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "5be000e368b946d0abe2ca1d1f539b29" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 46197 } }
I20250811 20:48:36.190646 6049 raft_consensus.cc:383] T 00000000000000000000000000000000 P 5be000e368b946d0abe2ca1d1f539b29 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 20:48:36.190878 6049 raft_consensus.cc:738] T 00000000000000000000000000000000 P 5be000e368b946d0abe2ca1d1f539b29 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 5be000e368b946d0abe2ca1d1f539b29, State: Initialized, Role: FOLLOWER
I20250811 20:48:36.191581 6049 consensus_queue.cc:260] T 00000000000000000000000000000000 P 5be000e368b946d0abe2ca1d1f539b29 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "5be000e368b946d0abe2ca1d1f539b29" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 46197 } }
I20250811 20:48:36.192102 6049 raft_consensus.cc:397] T 00000000000000000000000000000000 P 5be000e368b946d0abe2ca1d1f539b29 [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250811 20:48:36.192359 6049 raft_consensus.cc:491] T 00000000000000000000000000000000 P 5be000e368b946d0abe2ca1d1f539b29 [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250811 20:48:36.192696 6049 raft_consensus.cc:3058] T 00000000000000000000000000000000 P 5be000e368b946d0abe2ca1d1f539b29 [term 0 FOLLOWER]: Advancing to term 1
I20250811 20:48:36.196749 6049 raft_consensus.cc:513] T 00000000000000000000000000000000 P 5be000e368b946d0abe2ca1d1f539b29 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "5be000e368b946d0abe2ca1d1f539b29" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 46197 } }
I20250811 20:48:36.197427 6049 leader_election.cc:304] T 00000000000000000000000000000000 P 5be000e368b946d0abe2ca1d1f539b29 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: 5be000e368b946d0abe2ca1d1f539b29; no voters:
I20250811 20:48:36.199276 6049 leader_election.cc:290] T 00000000000000000000000000000000 P 5be000e368b946d0abe2ca1d1f539b29 [CANDIDATE]: Term 1 election: Requested vote from peers
I20250811 20:48:36.200136 6054 raft_consensus.cc:2802] T 00000000000000000000000000000000 P 5be000e368b946d0abe2ca1d1f539b29 [term 1 FOLLOWER]: Leader election won for term 1
I20250811 20:48:36.202529 6054 raft_consensus.cc:695] T 00000000000000000000000000000000 P 5be000e368b946d0abe2ca1d1f539b29 [term 1 LEADER]: Becoming Leader. State: Replica: 5be000e368b946d0abe2ca1d1f539b29, State: Running, Role: LEADER
I20250811 20:48:36.203289 6054 consensus_queue.cc:237] T 00000000000000000000000000000000 P 5be000e368b946d0abe2ca1d1f539b29 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "5be000e368b946d0abe2ca1d1f539b29" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 46197 } }
I20250811 20:48:36.204339 6049 sys_catalog.cc:564] T 00000000000000000000000000000000 P 5be000e368b946d0abe2ca1d1f539b29 [sys.catalog]: configured and running, proceeding with master startup.
I20250811 20:48:36.212257 6056 sys_catalog.cc:455] T 00000000000000000000000000000000 P 5be000e368b946d0abe2ca1d1f539b29 [sys.catalog]: SysCatalogTable state changed. Reason: New leader 5be000e368b946d0abe2ca1d1f539b29. Latest consensus state: current_term: 1 leader_uuid: "5be000e368b946d0abe2ca1d1f539b29" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "5be000e368b946d0abe2ca1d1f539b29" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 46197 } } }
I20250811 20:48:36.212951 6056 sys_catalog.cc:458] T 00000000000000000000000000000000 P 5be000e368b946d0abe2ca1d1f539b29 [sys.catalog]: This master's current role is: LEADER
I20250811 20:48:36.212181 6055 sys_catalog.cc:455] T 00000000000000000000000000000000 P 5be000e368b946d0abe2ca1d1f539b29 [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "5be000e368b946d0abe2ca1d1f539b29" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "5be000e368b946d0abe2ca1d1f539b29" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 46197 } } }
I20250811 20:48:36.213848 6055 sys_catalog.cc:458] T 00000000000000000000000000000000 P 5be000e368b946d0abe2ca1d1f539b29 [sys.catalog]: This master's current role is: LEADER
I20250811 20:48:36.216753 6062 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250811 20:48:36.227699 6062 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250811 20:48:36.245112 6062 catalog_manager.cc:1349] Generated new cluster ID: 50bb987c4ca04227a7435c0606b3e8f6
I20250811 20:48:36.245443 6062 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250811 20:48:36.259778 6062 catalog_manager.cc:1372] Generated new certificate authority record
I20250811 20:48:36.261204 6062 catalog_manager.cc:1506] Loading token signing keys...
I20250811 20:48:36.277765 6062 catalog_manager.cc:5955] T 00000000000000000000000000000000 P 5be000e368b946d0abe2ca1d1f539b29: Generated new TSK 0
I20250811 20:48:36.278939 6062 catalog_manager.cc:1516] Initializing in-progress tserver states...
I20250811 20:48:36.295368 32747 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
/tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.31.250.193:0
--local_ip_for_outbound_sockets=127.31.250.193
--webserver_interface=127.31.250.193
--webserver_port=0
--tserver_master_addrs=127.31.250.254:46197
--builtin_ntp_servers=127.31.250.212:36717
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
W20250811 20:48:36.608206 6073 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:48:36.608703 6073 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 20:48:36.609179 6073 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:48:36.641083 6073 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 20:48:36.641975 6073 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.31.250.193
I20250811 20:48:36.678367 6073 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.31.250.212:36717
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.31.250.193:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-0/data/info.pb
--webserver_interface=127.31.250.193
--webserver_port=0
--tserver_master_addrs=127.31.250.254:46197
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.31.250.193
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8e873eb37b157a0cf7cb97cc7690367de9707107
build type FASTDEBUG
built by None at 11 Aug 2025 20:41:23 UTC on 24a791456cd2
build id 7521
TSAN enabled
I20250811 20:48:36.679737 6073 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 20:48:36.681306 6073 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 20:48:36.694126 6079 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:48:38.096412 6078 debug-util.cc:398] Leaking SignalData structure 0x7b0800034ea0 after lost signal to thread 6073
W20250811 20:48:36.701017 6080 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:48:38.264787 6073 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.570s user 0.516s sys 1.006s
W20250811 20:48:38.266935 6082 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:48:38.267186 6073 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.573s user 0.516s sys 1.006s
I20250811 20:48:38.267534 6073 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
W20250811 20:48:38.267680 6081 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection timed out after 1565 milliseconds
I20250811 20:48:38.272428 6073 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 20:48:38.275501 6073 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 20:48:38.277061 6073 hybrid_clock.cc:648] HybridClock initialized: now 1754945318276968 us; error 77 us; skew 500 ppm
I20250811 20:48:38.278227 6073 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 20:48:38.286702 6073 webserver.cc:489] Webserver started at http://127.31.250.193:45383/ using document root <none> and password file <none>
I20250811 20:48:38.288084 6073 fs_manager.cc:362] Metadata directory not provided
I20250811 20:48:38.288383 6073 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 20:48:38.288965 6073 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 20:48:38.295989 6073 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-0/data/instance:
uuid: "f7420749c7f2423db6d0842344dd0ee4"
format_stamp: "Formatted at 2025-08-11 20:48:38 on dist-test-slave-4gzk"
I20250811 20:48:38.297505 6073 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-0/wal/instance:
uuid: "f7420749c7f2423db6d0842344dd0ee4"
format_stamp: "Formatted at 2025-08-11 20:48:38 on dist-test-slave-4gzk"
I20250811 20:48:38.307936 6073 fs_manager.cc:696] Time spent creating directory manager: real 0.010s user 0.008s sys 0.003s
I20250811 20:48:38.315949 6089 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 20:48:38.317322 6073 fs_manager.cc:730] Time spent opening block manager: real 0.005s user 0.002s sys 0.002s
I20250811 20:48:38.317744 6073 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-0/data,/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-0/wal
uuid: "f7420749c7f2423db6d0842344dd0ee4"
format_stamp: "Formatted at 2025-08-11 20:48:38 on dist-test-slave-4gzk"
I20250811 20:48:38.318202 6073 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-0/wal
metadata directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-0/wal
1 data directories: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 20:48:38.372826 6073 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 20:48:38.374209 6073 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 20:48:38.374631 6073 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 20:48:38.377413 6073 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 20:48:38.382053 6073 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 20:48:38.382254 6073 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250811 20:48:38.382495 6073 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 20:48:38.382650 6073 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250811 20:48:38.542107 6073 rpc_server.cc:307] RPC server started. Bound to: 127.31.250.193:40061
I20250811 20:48:38.542275 6201 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.31.250.193:40061 every 8 connection(s)
I20250811 20:48:38.544610 6073 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-0/data/info.pb
I20250811 20:48:38.551940 32747 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu as pid 6073
I20250811 20:48:38.552439 32747 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-0/wal/instance
I20250811 20:48:38.559456 32747 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
/tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-1/wal
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-1/logs
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.31.250.194:0
--local_ip_for_outbound_sockets=127.31.250.194
--webserver_interface=127.31.250.194
--webserver_port=0
--tserver_master_addrs=127.31.250.254:46197
--builtin_ntp_servers=127.31.250.212:36717
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
I20250811 20:48:38.573086 6202 heartbeater.cc:344] Connected to a master server at 127.31.250.254:46197
I20250811 20:48:38.573556 6202 heartbeater.cc:461] Registering TS with master...
I20250811 20:48:38.574637 6202 heartbeater.cc:507] Master 127.31.250.254:46197 requested a full tablet report, sending...
I20250811 20:48:38.577210 6014 ts_manager.cc:194] Registered new tserver with Master: f7420749c7f2423db6d0842344dd0ee4 (127.31.250.193:40061)
I20250811 20:48:38.579479 6014 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.31.250.193:48693
W20250811 20:48:38.896262 6206 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:48:38.896768 6206 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 20:48:38.897284 6206 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:48:38.927865 6206 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 20:48:38.928701 6206 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.31.250.194
I20250811 20:48:38.961416 6206 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.31.250.212:36717
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-1/data
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.31.250.194:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-1/data/info.pb
--webserver_interface=127.31.250.194
--webserver_port=0
--tserver_master_addrs=127.31.250.254:46197
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.31.250.194
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8e873eb37b157a0cf7cb97cc7690367de9707107
build type FASTDEBUG
built by None at 11 Aug 2025 20:41:23 UTC on 24a791456cd2
build id 7521
TSAN enabled
I20250811 20:48:38.962608 6206 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 20:48:38.964159 6206 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 20:48:38.976625 6212 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 20:48:39.582890 6202 heartbeater.cc:499] Master 127.31.250.254:46197 was elected leader, sending a full tablet report...
W20250811 20:48:38.977263 6213 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:48:40.155385 6215 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:48:40.157315 6214 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1176 milliseconds
I20250811 20:48:40.157440 6206 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 20:48:40.158627 6206 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 20:48:40.160785 6206 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 20:48:40.162097 6206 hybrid_clock.cc:648] HybridClock initialized: now 1754945320162060 us; error 39 us; skew 500 ppm
I20250811 20:48:40.162789 6206 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 20:48:40.168738 6206 webserver.cc:489] Webserver started at http://127.31.250.194:43495/ using document root <none> and password file <none>
I20250811 20:48:40.169670 6206 fs_manager.cc:362] Metadata directory not provided
I20250811 20:48:40.169893 6206 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 20:48:40.170327 6206 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 20:48:40.174577 6206 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-1/data/instance:
uuid: "4430edcc81c242dd8735c4971967e56b"
format_stamp: "Formatted at 2025-08-11 20:48:40 on dist-test-slave-4gzk"
I20250811 20:48:40.175765 6206 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-1/wal/instance:
uuid: "4430edcc81c242dd8735c4971967e56b"
format_stamp: "Formatted at 2025-08-11 20:48:40 on dist-test-slave-4gzk"
I20250811 20:48:40.182477 6206 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.006s sys 0.000s
I20250811 20:48:40.187963 6222 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 20:48:40.188961 6206 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.003s sys 0.002s
I20250811 20:48:40.189280 6206 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-1/data,/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-1/wal
uuid: "4430edcc81c242dd8735c4971967e56b"
format_stamp: "Formatted at 2025-08-11 20:48:40 on dist-test-slave-4gzk"
I20250811 20:48:40.189626 6206 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-1/wal
metadata directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-1/wal
1 data directories: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 20:48:40.239756 6206 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 20:48:40.241153 6206 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 20:48:40.241597 6206 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 20:48:40.244000 6206 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 20:48:40.248013 6206 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 20:48:40.248232 6206 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250811 20:48:40.248482 6206 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 20:48:40.248628 6206 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250811 20:48:40.380106 6206 rpc_server.cc:307] RPC server started. Bound to: 127.31.250.194:44403
I20250811 20:48:40.380191 6334 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.31.250.194:44403 every 8 connection(s)
I20250811 20:48:40.382584 6206 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-1/data/info.pb
I20250811 20:48:40.389806 32747 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu as pid 6206
I20250811 20:48:40.390177 32747 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-1/wal/instance
I20250811 20:48:40.396241 32747 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
/tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-2/wal
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-2/logs
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.31.250.195:0
--local_ip_for_outbound_sockets=127.31.250.195
--webserver_interface=127.31.250.195
--webserver_port=0
--tserver_master_addrs=127.31.250.254:46197
--builtin_ntp_servers=127.31.250.212:36717
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
I20250811 20:48:40.403362 6335 heartbeater.cc:344] Connected to a master server at 127.31.250.254:46197
I20250811 20:48:40.403780 6335 heartbeater.cc:461] Registering TS with master...
I20250811 20:48:40.404719 6335 heartbeater.cc:507] Master 127.31.250.254:46197 requested a full tablet report, sending...
I20250811 20:48:40.406800 6014 ts_manager.cc:194] Registered new tserver with Master: 4430edcc81c242dd8735c4971967e56b (127.31.250.194:44403)
I20250811 20:48:40.408008 6014 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.31.250.194:43767
W20250811 20:48:40.692107 6339 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:48:40.692598 6339 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 20:48:40.693099 6339 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:48:40.723450 6339 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 20:48:40.724296 6339 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.31.250.195
I20250811 20:48:40.757144 6339 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.31.250.212:36717
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-2/data
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.31.250.195:0
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-2/data/info.pb
--webserver_interface=127.31.250.195
--webserver_port=0
--tserver_master_addrs=127.31.250.254:46197
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.31.250.195
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8e873eb37b157a0cf7cb97cc7690367de9707107
build type FASTDEBUG
built by None at 11 Aug 2025 20:41:23 UTC on 24a791456cd2
build id 7521
TSAN enabled
I20250811 20:48:40.758428 6339 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 20:48:40.760149 6339 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 20:48:40.774580 6346 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 20:48:41.410733 6335 heartbeater.cc:499] Master 127.31.250.254:46197 was elected leader, sending a full tablet report...
W20250811 20:48:40.781035 6345 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:48:40.778880 6348 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:48:41.983158 6347 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1203 milliseconds
I20250811 20:48:41.983302 6339 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 20:48:41.984560 6339 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 20:48:41.987264 6339 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 20:48:41.988747 6339 hybrid_clock.cc:648] HybridClock initialized: now 1754945321988687 us; error 73 us; skew 500 ppm
I20250811 20:48:41.989495 6339 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 20:48:41.996218 6339 webserver.cc:489] Webserver started at http://127.31.250.195:33793/ using document root <none> and password file <none>
I20250811 20:48:41.997095 6339 fs_manager.cc:362] Metadata directory not provided
I20250811 20:48:41.997277 6339 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 20:48:41.997700 6339 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 20:48:42.002190 6339 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-2/data/instance:
uuid: "47f82216612d4f1ca2b3d5c8e278cb14"
format_stamp: "Formatted at 2025-08-11 20:48:41 on dist-test-slave-4gzk"
I20250811 20:48:42.003211 6339 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-2/wal/instance:
uuid: "47f82216612d4f1ca2b3d5c8e278cb14"
format_stamp: "Formatted at 2025-08-11 20:48:41 on dist-test-slave-4gzk"
I20250811 20:48:42.009992 6339 fs_manager.cc:696] Time spent creating directory manager: real 0.006s user 0.006s sys 0.000s
I20250811 20:48:42.015452 6356 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 20:48:42.016484 6339 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.002s sys 0.002s
I20250811 20:48:42.016775 6339 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-2/data,/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-2/wal
uuid: "47f82216612d4f1ca2b3d5c8e278cb14"
format_stamp: "Formatted at 2025-08-11 20:48:41 on dist-test-slave-4gzk"
I20250811 20:48:42.017047 6339 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-2/wal
metadata directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-2/wal
1 data directories: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 20:48:42.099097 6339 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 20:48:42.100994 6339 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 20:48:42.101601 6339 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 20:48:42.104470 6339 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 20:48:42.108697 6339 ts_tablet_manager.cc:579] Loaded tablet metadata (0 total tablets, 0 live tablets)
I20250811 20:48:42.108919 6339 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.000s user 0.000s sys 0.000s
I20250811 20:48:42.109160 6339 ts_tablet_manager.cc:610] Registered 0 tablets
I20250811 20:48:42.109315 6339 ts_tablet_manager.cc:589] Time spent register tablets: real 0.000s user 0.000s sys 0.000s
I20250811 20:48:42.248732 6339 rpc_server.cc:307] RPC server started. Bound to: 127.31.250.195:34797
I20250811 20:48:42.248832 6468 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.31.250.195:34797 every 8 connection(s)
I20250811 20:48:42.251540 6339 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-2/data/info.pb
I20250811 20:48:42.262068 32747 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu as pid 6339
I20250811 20:48:42.262843 32747 external_mini_cluster.cc:1442] Reading /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-2/wal/instance
I20250811 20:48:42.278663 6469 heartbeater.cc:344] Connected to a master server at 127.31.250.254:46197
I20250811 20:48:42.279071 6469 heartbeater.cc:461] Registering TS with master...
I20250811 20:48:42.280079 6469 heartbeater.cc:507] Master 127.31.250.254:46197 requested a full tablet report, sending...
I20250811 20:48:42.282421 6014 ts_manager.cc:194] Registered new tserver with Master: 47f82216612d4f1ca2b3d5c8e278cb14 (127.31.250.195:34797)
I20250811 20:48:42.283891 6014 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.31.250.195:33269
I20250811 20:48:42.283941 32747 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250811 20:48:42.314422 32747 test_util.cc:276] Using random seed: 210332321
I20250811 20:48:42.352741 6014 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:35098:
name: "pre_rebuild"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
W20250811 20:48:42.355173 6014 catalog_manager.cc:6944] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table pre_rebuild in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250811 20:48:42.416246 6270 tablet_service.cc:1468] Processing CreateTablet for tablet 27f845a2b1d541a5b32c24834d8426fd (DEFAULT_TABLE table=pre_rebuild [id=256ea993f5e34a5e862a979b5296c944]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 20:48:42.417094 6137 tablet_service.cc:1468] Processing CreateTablet for tablet 27f845a2b1d541a5b32c24834d8426fd (DEFAULT_TABLE table=pre_rebuild [id=256ea993f5e34a5e862a979b5296c944]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 20:48:42.418380 6270 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 27f845a2b1d541a5b32c24834d8426fd. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 20:48:42.419070 6137 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 27f845a2b1d541a5b32c24834d8426fd. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 20:48:42.418911 6404 tablet_service.cc:1468] Processing CreateTablet for tablet 27f845a2b1d541a5b32c24834d8426fd (DEFAULT_TABLE table=pre_rebuild [id=256ea993f5e34a5e862a979b5296c944]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 20:48:42.420831 6404 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 27f845a2b1d541a5b32c24834d8426fd. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 20:48:42.442816 6493 tablet_bootstrap.cc:492] T 27f845a2b1d541a5b32c24834d8426fd P 4430edcc81c242dd8735c4971967e56b: Bootstrap starting.
I20250811 20:48:42.447088 6494 tablet_bootstrap.cc:492] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4: Bootstrap starting.
I20250811 20:48:42.449014 6495 tablet_bootstrap.cc:492] T 27f845a2b1d541a5b32c24834d8426fd P 47f82216612d4f1ca2b3d5c8e278cb14: Bootstrap starting.
I20250811 20:48:42.449537 6493 tablet_bootstrap.cc:654] T 27f845a2b1d541a5b32c24834d8426fd P 4430edcc81c242dd8735c4971967e56b: Neither blocks nor log segments found. Creating new log.
I20250811 20:48:42.452679 6493 log.cc:826] T 27f845a2b1d541a5b32c24834d8426fd P 4430edcc81c242dd8735c4971967e56b: Log is configured to *not* fsync() on all Append() calls
I20250811 20:48:42.455006 6494 tablet_bootstrap.cc:654] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4: Neither blocks nor log segments found. Creating new log.
I20250811 20:48:42.455647 6495 tablet_bootstrap.cc:654] T 27f845a2b1d541a5b32c24834d8426fd P 47f82216612d4f1ca2b3d5c8e278cb14: Neither blocks nor log segments found. Creating new log.
I20250811 20:48:42.457533 6494 log.cc:826] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4: Log is configured to *not* fsync() on all Append() calls
I20250811 20:48:42.457667 6495 log.cc:826] T 27f845a2b1d541a5b32c24834d8426fd P 47f82216612d4f1ca2b3d5c8e278cb14: Log is configured to *not* fsync() on all Append() calls
I20250811 20:48:42.458873 6493 tablet_bootstrap.cc:492] T 27f845a2b1d541a5b32c24834d8426fd P 4430edcc81c242dd8735c4971967e56b: No bootstrap required, opened a new log
I20250811 20:48:42.459579 6493 ts_tablet_manager.cc:1397] T 27f845a2b1d541a5b32c24834d8426fd P 4430edcc81c242dd8735c4971967e56b: Time spent bootstrapping tablet: real 0.018s user 0.006s sys 0.010s
I20250811 20:48:42.463368 6494 tablet_bootstrap.cc:492] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4: No bootstrap required, opened a new log
I20250811 20:48:42.463775 6495 tablet_bootstrap.cc:492] T 27f845a2b1d541a5b32c24834d8426fd P 47f82216612d4f1ca2b3d5c8e278cb14: No bootstrap required, opened a new log
I20250811 20:48:42.463994 6494 ts_tablet_manager.cc:1397] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4: Time spent bootstrapping tablet: real 0.017s user 0.005s sys 0.011s
I20250811 20:48:42.464264 6495 ts_tablet_manager.cc:1397] T 27f845a2b1d541a5b32c24834d8426fd P 47f82216612d4f1ca2b3d5c8e278cb14: Time spent bootstrapping tablet: real 0.016s user 0.006s sys 0.008s
I20250811 20:48:42.479089 6493 raft_consensus.cc:357] T 27f845a2b1d541a5b32c24834d8426fd P 4430edcc81c242dd8735c4971967e56b [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "47f82216612d4f1ca2b3d5c8e278cb14" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 34797 } } peers { permanent_uuid: "4430edcc81c242dd8735c4971967e56b" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 44403 } } peers { permanent_uuid: "f7420749c7f2423db6d0842344dd0ee4" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 40061 } }
I20250811 20:48:42.479846 6493 raft_consensus.cc:383] T 27f845a2b1d541a5b32c24834d8426fd P 4430edcc81c242dd8735c4971967e56b [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 20:48:42.480124 6493 raft_consensus.cc:738] T 27f845a2b1d541a5b32c24834d8426fd P 4430edcc81c242dd8735c4971967e56b [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 4430edcc81c242dd8735c4971967e56b, State: Initialized, Role: FOLLOWER
I20250811 20:48:42.480886 6493 consensus_queue.cc:260] T 27f845a2b1d541a5b32c24834d8426fd P 4430edcc81c242dd8735c4971967e56b [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "47f82216612d4f1ca2b3d5c8e278cb14" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 34797 } } peers { permanent_uuid: "4430edcc81c242dd8735c4971967e56b" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 44403 } } peers { permanent_uuid: "f7420749c7f2423db6d0842344dd0ee4" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 40061 } }
I20250811 20:48:42.489776 6493 ts_tablet_manager.cc:1428] T 27f845a2b1d541a5b32c24834d8426fd P 4430edcc81c242dd8735c4971967e56b: Time spent starting tablet: real 0.030s user 0.021s sys 0.006s
I20250811 20:48:42.490351 6495 raft_consensus.cc:357] T 27f845a2b1d541a5b32c24834d8426fd P 47f82216612d4f1ca2b3d5c8e278cb14 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "47f82216612d4f1ca2b3d5c8e278cb14" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 34797 } } peers { permanent_uuid: "4430edcc81c242dd8735c4971967e56b" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 44403 } } peers { permanent_uuid: "f7420749c7f2423db6d0842344dd0ee4" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 40061 } }
I20250811 20:48:42.491293 6495 raft_consensus.cc:383] T 27f845a2b1d541a5b32c24834d8426fd P 47f82216612d4f1ca2b3d5c8e278cb14 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 20:48:42.491648 6495 raft_consensus.cc:738] T 27f845a2b1d541a5b32c24834d8426fd P 47f82216612d4f1ca2b3d5c8e278cb14 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 47f82216612d4f1ca2b3d5c8e278cb14, State: Initialized, Role: FOLLOWER
I20250811 20:48:42.491293 6494 raft_consensus.cc:357] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "47f82216612d4f1ca2b3d5c8e278cb14" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 34797 } } peers { permanent_uuid: "4430edcc81c242dd8735c4971967e56b" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 44403 } } peers { permanent_uuid: "f7420749c7f2423db6d0842344dd0ee4" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 40061 } }
I20250811 20:48:42.492208 6494 raft_consensus.cc:383] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 20:48:42.492492 6494 raft_consensus.cc:738] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: f7420749c7f2423db6d0842344dd0ee4, State: Initialized, Role: FOLLOWER
I20250811 20:48:42.492651 6495 consensus_queue.cc:260] T 27f845a2b1d541a5b32c24834d8426fd P 47f82216612d4f1ca2b3d5c8e278cb14 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "47f82216612d4f1ca2b3d5c8e278cb14" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 34797 } } peers { permanent_uuid: "4430edcc81c242dd8735c4971967e56b" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 44403 } } peers { permanent_uuid: "f7420749c7f2423db6d0842344dd0ee4" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 40061 } }
I20250811 20:48:42.493348 6494 consensus_queue.cc:260] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "47f82216612d4f1ca2b3d5c8e278cb14" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 34797 } } peers { permanent_uuid: "4430edcc81c242dd8735c4971967e56b" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 44403 } } peers { permanent_uuid: "f7420749c7f2423db6d0842344dd0ee4" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 40061 } }
I20250811 20:48:42.496104 6469 heartbeater.cc:499] Master 127.31.250.254:46197 was elected leader, sending a full tablet report...
I20250811 20:48:42.498055 6495 ts_tablet_manager.cc:1428] T 27f845a2b1d541a5b32c24834d8426fd P 47f82216612d4f1ca2b3d5c8e278cb14: Time spent starting tablet: real 0.034s user 0.032s sys 0.000s
I20250811 20:48:42.503389 6494 ts_tablet_manager.cc:1428] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4: Time spent starting tablet: real 0.039s user 0.031s sys 0.008s
W20250811 20:48:42.505816 6470 tablet.cc:2378] T 27f845a2b1d541a5b32c24834d8426fd P 47f82216612d4f1ca2b3d5c8e278cb14: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250811 20:48:42.554381 6203 tablet.cc:2378] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250811 20:48:42.641372 6336 tablet.cc:2378] T 27f845a2b1d541a5b32c24834d8426fd P 4430edcc81c242dd8735c4971967e56b: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250811 20:48:42.664623 6499 raft_consensus.cc:491] T 27f845a2b1d541a5b32c24834d8426fd P 4430edcc81c242dd8735c4971967e56b [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250811 20:48:42.665122 6499 raft_consensus.cc:513] T 27f845a2b1d541a5b32c24834d8426fd P 4430edcc81c242dd8735c4971967e56b [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "47f82216612d4f1ca2b3d5c8e278cb14" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 34797 } } peers { permanent_uuid: "4430edcc81c242dd8735c4971967e56b" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 44403 } } peers { permanent_uuid: "f7420749c7f2423db6d0842344dd0ee4" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 40061 } }
I20250811 20:48:42.667507 6499 leader_election.cc:290] T 27f845a2b1d541a5b32c24834d8426fd P 4430edcc81c242dd8735c4971967e56b [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 47f82216612d4f1ca2b3d5c8e278cb14 (127.31.250.195:34797), f7420749c7f2423db6d0842344dd0ee4 (127.31.250.193:40061)
I20250811 20:48:42.678735 6424 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "27f845a2b1d541a5b32c24834d8426fd" candidate_uuid: "4430edcc81c242dd8735c4971967e56b" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "47f82216612d4f1ca2b3d5c8e278cb14" is_pre_election: true
I20250811 20:48:42.678746 6157 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "27f845a2b1d541a5b32c24834d8426fd" candidate_uuid: "4430edcc81c242dd8735c4971967e56b" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "f7420749c7f2423db6d0842344dd0ee4" is_pre_election: true
I20250811 20:48:42.679440 6424 raft_consensus.cc:2466] T 27f845a2b1d541a5b32c24834d8426fd P 47f82216612d4f1ca2b3d5c8e278cb14 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 4430edcc81c242dd8735c4971967e56b in term 0.
I20250811 20:48:42.679450 6157 raft_consensus.cc:2466] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 4430edcc81c242dd8735c4971967e56b in term 0.
I20250811 20:48:42.680518 6223 leader_election.cc:304] T 27f845a2b1d541a5b32c24834d8426fd P 4430edcc81c242dd8735c4971967e56b [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 4430edcc81c242dd8735c4971967e56b, 47f82216612d4f1ca2b3d5c8e278cb14; no voters:
I20250811 20:48:42.681239 6499 raft_consensus.cc:2802] T 27f845a2b1d541a5b32c24834d8426fd P 4430edcc81c242dd8735c4971967e56b [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250811 20:48:42.681573 6499 raft_consensus.cc:491] T 27f845a2b1d541a5b32c24834d8426fd P 4430edcc81c242dd8735c4971967e56b [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250811 20:48:42.681799 6499 raft_consensus.cc:3058] T 27f845a2b1d541a5b32c24834d8426fd P 4430edcc81c242dd8735c4971967e56b [term 0 FOLLOWER]: Advancing to term 1
I20250811 20:48:42.686039 6499 raft_consensus.cc:513] T 27f845a2b1d541a5b32c24834d8426fd P 4430edcc81c242dd8735c4971967e56b [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "47f82216612d4f1ca2b3d5c8e278cb14" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 34797 } } peers { permanent_uuid: "4430edcc81c242dd8735c4971967e56b" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 44403 } } peers { permanent_uuid: "f7420749c7f2423db6d0842344dd0ee4" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 40061 } }
I20250811 20:48:42.687662 6499 leader_election.cc:290] T 27f845a2b1d541a5b32c24834d8426fd P 4430edcc81c242dd8735c4971967e56b [CANDIDATE]: Term 1 election: Requested vote from peers 47f82216612d4f1ca2b3d5c8e278cb14 (127.31.250.195:34797), f7420749c7f2423db6d0842344dd0ee4 (127.31.250.193:40061)
I20250811 20:48:42.688282 6424 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "27f845a2b1d541a5b32c24834d8426fd" candidate_uuid: "4430edcc81c242dd8735c4971967e56b" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "47f82216612d4f1ca2b3d5c8e278cb14"
I20250811 20:48:42.688521 6157 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "27f845a2b1d541a5b32c24834d8426fd" candidate_uuid: "4430edcc81c242dd8735c4971967e56b" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "f7420749c7f2423db6d0842344dd0ee4"
I20250811 20:48:42.688670 6424 raft_consensus.cc:3058] T 27f845a2b1d541a5b32c24834d8426fd P 47f82216612d4f1ca2b3d5c8e278cb14 [term 0 FOLLOWER]: Advancing to term 1
I20250811 20:48:42.688942 6157 raft_consensus.cc:3058] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4 [term 0 FOLLOWER]: Advancing to term 1
I20250811 20:48:42.693234 6157 raft_consensus.cc:2466] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 4430edcc81c242dd8735c4971967e56b in term 1.
I20250811 20:48:42.693418 6424 raft_consensus.cc:2466] T 27f845a2b1d541a5b32c24834d8426fd P 47f82216612d4f1ca2b3d5c8e278cb14 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 4430edcc81c242dd8735c4971967e56b in term 1.
I20250811 20:48:42.694173 6225 leader_election.cc:304] T 27f845a2b1d541a5b32c24834d8426fd P 4430edcc81c242dd8735c4971967e56b [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 4430edcc81c242dd8735c4971967e56b, f7420749c7f2423db6d0842344dd0ee4; no voters:
I20250811 20:48:42.694880 6499 raft_consensus.cc:2802] T 27f845a2b1d541a5b32c24834d8426fd P 4430edcc81c242dd8735c4971967e56b [term 1 FOLLOWER]: Leader election won for term 1
I20250811 20:48:42.696414 6499 raft_consensus.cc:695] T 27f845a2b1d541a5b32c24834d8426fd P 4430edcc81c242dd8735c4971967e56b [term 1 LEADER]: Becoming Leader. State: Replica: 4430edcc81c242dd8735c4971967e56b, State: Running, Role: LEADER
I20250811 20:48:42.697250 6499 consensus_queue.cc:237] T 27f845a2b1d541a5b32c24834d8426fd P 4430edcc81c242dd8735c4971967e56b [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "47f82216612d4f1ca2b3d5c8e278cb14" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 34797 } } peers { permanent_uuid: "4430edcc81c242dd8735c4971967e56b" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 44403 } } peers { permanent_uuid: "f7420749c7f2423db6d0842344dd0ee4" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 40061 } }
I20250811 20:48:42.707901 6012 catalog_manager.cc:5582] T 27f845a2b1d541a5b32c24834d8426fd P 4430edcc81c242dd8735c4971967e56b reported cstate change: term changed from 0 to 1, leader changed from <none> to 4430edcc81c242dd8735c4971967e56b (127.31.250.194). New cstate: current_term: 1 leader_uuid: "4430edcc81c242dd8735c4971967e56b" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "47f82216612d4f1ca2b3d5c8e278cb14" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 34797 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "4430edcc81c242dd8735c4971967e56b" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 44403 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "f7420749c7f2423db6d0842344dd0ee4" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 40061 } health_report { overall_health: UNKNOWN } } }
I20250811 20:48:42.911052 6157 raft_consensus.cc:1273] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4 [term 1 FOLLOWER]: Refusing update from remote peer 4430edcc81c242dd8735c4971967e56b: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250811 20:48:42.911340 6424 raft_consensus.cc:1273] T 27f845a2b1d541a5b32c24834d8426fd P 47f82216612d4f1ca2b3d5c8e278cb14 [term 1 FOLLOWER]: Refusing update from remote peer 4430edcc81c242dd8735c4971967e56b: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250811 20:48:42.912842 6504 consensus_queue.cc:1035] T 27f845a2b1d541a5b32c24834d8426fd P 4430edcc81c242dd8735c4971967e56b [LEADER]: Connected to new peer: Peer: permanent_uuid: "f7420749c7f2423db6d0842344dd0ee4" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 40061 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.001s
I20250811 20:48:42.913547 6499 consensus_queue.cc:1035] T 27f845a2b1d541a5b32c24834d8426fd P 4430edcc81c242dd8735c4971967e56b [LEADER]: Connected to new peer: Peer: permanent_uuid: "47f82216612d4f1ca2b3d5c8e278cb14" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 34797 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250811 20:48:42.934991 6512 mvcc.cc:204] Tried to move back new op lower bound from 7188256042628063232 to 7188256041774575616. Current Snapshot: MvccSnapshot[applied={T|T < 7188256042628063232}]
I20250811 20:48:42.950903 6513 mvcc.cc:204] Tried to move back new op lower bound from 7188256042628063232 to 7188256041774575616. Current Snapshot: MvccSnapshot[applied={T|T < 7188256042628063232}]
I20250811 20:48:42.968658 6514 mvcc.cc:204] Tried to move back new op lower bound from 7188256042628063232 to 7188256041774575616. Current Snapshot: MvccSnapshot[applied={T|T < 7188256042628063232}]
I20250811 20:48:48.066351 32747 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu with pid 5981
W20250811 20:48:48.437094 6546 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:48:48.437795 6546 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:48:48.471191 6546 flags.cc:425] Enabled experimental flag: --enable_multi_tenancy=false
W20250811 20:48:49.028290 6202 heartbeater.cc:646] Failed to heartbeat to 127.31.250.254:46197 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.31.250.254:46197: connect: Connection refused (error 111)
W20250811 20:48:49.036386 6469 heartbeater.cc:646] Failed to heartbeat to 127.31.250.254:46197 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.31.250.254:46197: connect: Connection refused (error 111)
W20250811 20:48:49.042888 6335 heartbeater.cc:646] Failed to heartbeat to 127.31.250.254:46197 (0 consecutive failures): Network error: Failed to send heartbeat to master: Client connection negotiation failed: client connection to 127.31.250.254:46197: connect: Connection refused (error 111)
W20250811 20:48:49.793102 6546 thread.cc:641] rpc reactor (reactor) Time spent creating pthread: real 1.277s user 0.510s sys 0.765s
W20250811 20:48:49.793476 6546 thread.cc:608] rpc reactor (reactor) Time spent starting thread: real 1.278s user 0.510s sys 0.765s
I20250811 20:48:49.924135 6546 minidump.cc:252] Setting minidump size limit to 20M
I20250811 20:48:49.926533 6546 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 20:48:49.928006 6546 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 20:48:49.940598 6580 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:48:49.941309 6581 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:48:50.028123 6583 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 20:48:50.029271 6546 server_base.cc:1047] running on GCE node
I20250811 20:48:50.031477 6546 hybrid_clock.cc:584] initializing the hybrid clock with 'system' time source
I20250811 20:48:50.032045 6546 hybrid_clock.cc:648] HybridClock initialized: now 1754945330032011 us; error 103924 us; skew 500 ppm
I20250811 20:48:50.032969 6546 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 20:48:50.038482 6546 webserver.cc:489] Webserver started at http://0.0.0.0:41515/ using document root <none> and password file <none>
I20250811 20:48:50.039628 6546 fs_manager.cc:362] Metadata directory not provided
I20250811 20:48:50.039916 6546 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 20:48:50.040474 6546 server_base.cc:895] This appears to be a new deployment of Kudu; creating new FS layout
I20250811 20:48:50.047405 6546 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/master-0/data/instance:
uuid: "ec68b17b32754a83b1ef8876f64ed39f"
format_stamp: "Formatted at 2025-08-11 20:48:50 on dist-test-slave-4gzk"
I20250811 20:48:50.048918 6546 fs_manager.cc:1068] Generated new instance metadata in path /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/master-0/wal/instance:
uuid: "ec68b17b32754a83b1ef8876f64ed39f"
format_stamp: "Formatted at 2025-08-11 20:48:50 on dist-test-slave-4gzk"
I20250811 20:48:50.056828 6546 fs_manager.cc:696] Time spent creating directory manager: real 0.007s user 0.005s sys 0.003s
I20250811 20:48:50.063056 6591 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 20:48:50.064106 6546 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.001s sys 0.002s
I20250811 20:48:50.064482 6546 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/master-0/data,/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/master-0/wal
uuid: "ec68b17b32754a83b1ef8876f64ed39f"
format_stamp: "Formatted at 2025-08-11 20:48:50 on dist-test-slave-4gzk"
I20250811 20:48:50.064882 6546 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/master-0/wal
metadata directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/master-0/wal
1 data directories: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 20:48:50.255295 6546 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 20:48:50.256932 6546 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 20:48:50.257365 6546 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 20:48:50.262475 6546 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 00000000000000000000000000000000. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 20:48:50.277081 6546 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P ec68b17b32754a83b1ef8876f64ed39f: Bootstrap starting.
I20250811 20:48:50.281906 6546 tablet_bootstrap.cc:654] T 00000000000000000000000000000000 P ec68b17b32754a83b1ef8876f64ed39f: Neither blocks nor log segments found. Creating new log.
I20250811 20:48:50.283555 6546 log.cc:826] T 00000000000000000000000000000000 P ec68b17b32754a83b1ef8876f64ed39f: Log is configured to *not* fsync() on all Append() calls
I20250811 20:48:50.288043 6546 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P ec68b17b32754a83b1ef8876f64ed39f: No bootstrap required, opened a new log
I20250811 20:48:50.303810 6546 raft_consensus.cc:357] T 00000000000000000000000000000000 P ec68b17b32754a83b1ef8876f64ed39f [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ec68b17b32754a83b1ef8876f64ed39f" member_type: VOTER }
I20250811 20:48:50.304343 6546 raft_consensus.cc:383] T 00000000000000000000000000000000 P ec68b17b32754a83b1ef8876f64ed39f [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 20:48:50.304556 6546 raft_consensus.cc:738] T 00000000000000000000000000000000 P ec68b17b32754a83b1ef8876f64ed39f [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: ec68b17b32754a83b1ef8876f64ed39f, State: Initialized, Role: FOLLOWER
I20250811 20:48:50.305222 6546 consensus_queue.cc:260] T 00000000000000000000000000000000 P ec68b17b32754a83b1ef8876f64ed39f [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ec68b17b32754a83b1ef8876f64ed39f" member_type: VOTER }
I20250811 20:48:50.305675 6546 raft_consensus.cc:397] T 00000000000000000000000000000000 P ec68b17b32754a83b1ef8876f64ed39f [term 0 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250811 20:48:50.305922 6546 raft_consensus.cc:491] T 00000000000000000000000000000000 P ec68b17b32754a83b1ef8876f64ed39f [term 0 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250811 20:48:50.306208 6546 raft_consensus.cc:3058] T 00000000000000000000000000000000 P ec68b17b32754a83b1ef8876f64ed39f [term 0 FOLLOWER]: Advancing to term 1
I20250811 20:48:50.310061 6546 raft_consensus.cc:513] T 00000000000000000000000000000000 P ec68b17b32754a83b1ef8876f64ed39f [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ec68b17b32754a83b1ef8876f64ed39f" member_type: VOTER }
I20250811 20:48:50.310690 6546 leader_election.cc:304] T 00000000000000000000000000000000 P ec68b17b32754a83b1ef8876f64ed39f [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: ec68b17b32754a83b1ef8876f64ed39f; no voters:
I20250811 20:48:50.312901 6546 leader_election.cc:290] T 00000000000000000000000000000000 P ec68b17b32754a83b1ef8876f64ed39f [CANDIDATE]: Term 1 election: Requested vote from peers
I20250811 20:48:50.313154 6598 raft_consensus.cc:2802] T 00000000000000000000000000000000 P ec68b17b32754a83b1ef8876f64ed39f [term 1 FOLLOWER]: Leader election won for term 1
I20250811 20:48:50.316766 6598 raft_consensus.cc:695] T 00000000000000000000000000000000 P ec68b17b32754a83b1ef8876f64ed39f [term 1 LEADER]: Becoming Leader. State: Replica: ec68b17b32754a83b1ef8876f64ed39f, State: Running, Role: LEADER
I20250811 20:48:50.317560 6598 consensus_queue.cc:237] T 00000000000000000000000000000000 P ec68b17b32754a83b1ef8876f64ed39f [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ec68b17b32754a83b1ef8876f64ed39f" member_type: VOTER }
I20250811 20:48:50.327531 6599 sys_catalog.cc:455] T 00000000000000000000000000000000 P ec68b17b32754a83b1ef8876f64ed39f [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 1 leader_uuid: "ec68b17b32754a83b1ef8876f64ed39f" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ec68b17b32754a83b1ef8876f64ed39f" member_type: VOTER } }
I20250811 20:48:50.327808 6600 sys_catalog.cc:455] T 00000000000000000000000000000000 P ec68b17b32754a83b1ef8876f64ed39f [sys.catalog]: SysCatalogTable state changed. Reason: New leader ec68b17b32754a83b1ef8876f64ed39f. Latest consensus state: current_term: 1 leader_uuid: "ec68b17b32754a83b1ef8876f64ed39f" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ec68b17b32754a83b1ef8876f64ed39f" member_type: VOTER } }
I20250811 20:48:50.328148 6599 sys_catalog.cc:458] T 00000000000000000000000000000000 P ec68b17b32754a83b1ef8876f64ed39f [sys.catalog]: This master's current role is: LEADER
I20250811 20:48:50.328346 6600 sys_catalog.cc:458] T 00000000000000000000000000000000 P ec68b17b32754a83b1ef8876f64ed39f [sys.catalog]: This master's current role is: LEADER
I20250811 20:48:50.337812 6546 tablet_replica.cc:331] T 00000000000000000000000000000000 P ec68b17b32754a83b1ef8876f64ed39f: stopping tablet replica
I20250811 20:48:50.338446 6546 raft_consensus.cc:2241] T 00000000000000000000000000000000 P ec68b17b32754a83b1ef8876f64ed39f [term 1 LEADER]: Raft consensus shutting down.
I20250811 20:48:50.338856 6546 raft_consensus.cc:2270] T 00000000000000000000000000000000 P ec68b17b32754a83b1ef8876f64ed39f [term 1 FOLLOWER]: Raft consensus is shut down!
I20250811 20:48:50.340757 6546 master.cc:561] Master@0.0.0.0:7051 shutting down...
W20250811 20:48:50.341274 6546 acceptor_pool.cc:196] Could not shut down acceptor socket on 0.0.0.0:7051: Network error: shutdown error: Transport endpoint is not connected (error 107)
I20250811 20:48:50.367945 6546 master.cc:583] Master@0.0.0.0:7051 shutdown complete.
I20250811 20:48:51.404011 32747 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu with pid 6073
I20250811 20:48:51.439435 32747 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu with pid 6206
I20250811 20:48:51.477877 32747 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu with pid 6339
I20250811 20:48:51.512506 32747 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
/tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/master-0/wal
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/master-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/master-0/logs
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/master-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
master
run
--ipki_ca_key_size=768
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.31.250.254:46197
--webserver_interface=127.31.250.254
--webserver_port=45209
--builtin_ntp_servers=127.31.250.212:36717
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--rpc_reuseport=true
--master_addresses=127.31.250.254:46197 with env {}
W20250811 20:48:51.816272 6609 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:48:51.816905 6609 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 20:48:51.817373 6609 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:48:51.847429 6609 flags.cc:425] Enabled experimental flag: --ipki_ca_key_size=768
W20250811 20:48:51.847771 6609 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 20:48:51.848048 6609 flags.cc:425] Enabled experimental flag: --tsk_num_rsa_bits=512
W20250811 20:48:51.848290 6609 flags.cc:425] Enabled experimental flag: --rpc_reuseport=true
I20250811 20:48:51.882825 6609 master_runner.cc:386] Master server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.31.250.212:36717
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/master-0/data
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/master-0/wal
--ipki_ca_key_size=768
--master_addresses=127.31.250.254:46197
--ipki_server_key_size=768
--openssl_security_level_override=0
--tsk_num_rsa_bits=512
--rpc_bind_addresses=127.31.250.254:46197
--rpc_reuseport=true
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/master-0/data/info.pb
--webserver_interface=127.31.250.254
--webserver_port=45209
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/master-0/logs
--logbuflevel=-1
--logtostderr=true
Master server version:
kudu 1.19.0-SNAPSHOT
revision 8e873eb37b157a0cf7cb97cc7690367de9707107
build type FASTDEBUG
built by None at 11 Aug 2025 20:41:23 UTC on 24a791456cd2
build id 7521
TSAN enabled
I20250811 20:48:51.884171 6609 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 20:48:51.885843 6609 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 20:48:51.896404 6615 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:48:53.300912 6614 debug-util.cc:398] Leaking SignalData structure 0x7b0800037cc0 after lost signal to thread 6609
W20250811 20:48:51.897213 6616 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:48:53.341698 6609 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.445s user 0.480s sys 0.954s
W20250811 20:48:53.342664 6609 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.446s user 0.480s sys 0.955s
W20250811 20:48:53.343896 6618 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:48:53.346473 6617 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1446 milliseconds
I20250811 20:48:53.346495 6609 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 20:48:53.347793 6609 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 20:48:53.350271 6609 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 20:48:53.351616 6609 hybrid_clock.cc:648] HybridClock initialized: now 1754945333351579 us; error 37 us; skew 500 ppm
I20250811 20:48:53.352433 6609 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 20:48:53.358417 6609 webserver.cc:489] Webserver started at http://127.31.250.254:45209/ using document root <none> and password file <none>
I20250811 20:48:53.359349 6609 fs_manager.cc:362] Metadata directory not provided
I20250811 20:48:53.359582 6609 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 20:48:53.367074 6609 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.005s sys 0.002s
I20250811 20:48:53.371495 6626 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 20:48:53.372509 6609 fs_manager.cc:730] Time spent opening block manager: real 0.003s user 0.004s sys 0.000s
I20250811 20:48:53.372814 6609 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/master-0/data,/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/master-0/wal
uuid: "ec68b17b32754a83b1ef8876f64ed39f"
format_stamp: "Formatted at 2025-08-11 20:48:50 on dist-test-slave-4gzk"
I20250811 20:48:53.374670 6609 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/master-0/wal
metadata directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/master-0/wal
1 data directories: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/master-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 20:48:53.421267 6609 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 20:48:53.422732 6609 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 20:48:53.423170 6609 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 20:48:53.491143 6609 rpc_server.cc:307] RPC server started. Bound to: 127.31.250.254:46197
I20250811 20:48:53.491223 6677 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.31.250.254:46197 every 8 connection(s)
I20250811 20:48:53.493970 6609 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/master-0/data/info.pb
I20250811 20:48:53.501662 32747 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu as pid 6609
I20250811 20:48:53.503420 32747 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
/tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-0/wal
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-0/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-0/logs
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-0/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.31.250.193:40061
--local_ip_for_outbound_sockets=127.31.250.193
--tserver_master_addrs=127.31.250.254:46197
--webserver_port=45383
--webserver_interface=127.31.250.193
--builtin_ntp_servers=127.31.250.212:36717
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
I20250811 20:48:53.503696 6678 sys_catalog.cc:263] Verifying existing consensus state
I20250811 20:48:53.515398 6678 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P ec68b17b32754a83b1ef8876f64ed39f: Bootstrap starting.
I20250811 20:48:53.524981 6678 log.cc:826] T 00000000000000000000000000000000 P ec68b17b32754a83b1ef8876f64ed39f: Log is configured to *not* fsync() on all Append() calls
I20250811 20:48:53.536558 6678 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P ec68b17b32754a83b1ef8876f64ed39f: Bootstrap replayed 1/1 log segments. Stats: ops{read=2 overwritten=0 applied=2 ignored=0} inserts{seen=2 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250811 20:48:53.537320 6678 tablet_bootstrap.cc:492] T 00000000000000000000000000000000 P ec68b17b32754a83b1ef8876f64ed39f: Bootstrap complete.
I20250811 20:48:53.557368 6678 raft_consensus.cc:357] T 00000000000000000000000000000000 P ec68b17b32754a83b1ef8876f64ed39f [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ec68b17b32754a83b1ef8876f64ed39f" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 46197 } }
I20250811 20:48:53.557994 6678 raft_consensus.cc:738] T 00000000000000000000000000000000 P ec68b17b32754a83b1ef8876f64ed39f [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: ec68b17b32754a83b1ef8876f64ed39f, State: Initialized, Role: FOLLOWER
I20250811 20:48:53.558784 6678 consensus_queue.cc:260] T 00000000000000000000000000000000 P ec68b17b32754a83b1ef8876f64ed39f [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 2, Last appended: 1.2, Last appended by leader: 2, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ec68b17b32754a83b1ef8876f64ed39f" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 46197 } }
I20250811 20:48:53.559317 6678 raft_consensus.cc:397] T 00000000000000000000000000000000 P ec68b17b32754a83b1ef8876f64ed39f [term 1 FOLLOWER]: Only one voter in the Raft config. Triggering election immediately
I20250811 20:48:53.559584 6678 raft_consensus.cc:491] T 00000000000000000000000000000000 P ec68b17b32754a83b1ef8876f64ed39f [term 1 FOLLOWER]: Starting leader election (initial election of a single-replica configuration)
I20250811 20:48:53.559897 6678 raft_consensus.cc:3058] T 00000000000000000000000000000000 P ec68b17b32754a83b1ef8876f64ed39f [term 1 FOLLOWER]: Advancing to term 2
I20250811 20:48:53.563869 6678 raft_consensus.cc:513] T 00000000000000000000000000000000 P ec68b17b32754a83b1ef8876f64ed39f [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ec68b17b32754a83b1ef8876f64ed39f" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 46197 } }
I20250811 20:48:53.564550 6678 leader_election.cc:304] T 00000000000000000000000000000000 P ec68b17b32754a83b1ef8876f64ed39f [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 1 responses out of 1 voters: 1 yes votes; 0 no votes. yes voters: ec68b17b32754a83b1ef8876f64ed39f; no voters:
I20250811 20:48:53.566825 6678 leader_election.cc:290] T 00000000000000000000000000000000 P ec68b17b32754a83b1ef8876f64ed39f [CANDIDATE]: Term 2 election: Requested vote from peers
I20250811 20:48:53.567395 6682 raft_consensus.cc:2802] T 00000000000000000000000000000000 P ec68b17b32754a83b1ef8876f64ed39f [term 2 FOLLOWER]: Leader election won for term 2
I20250811 20:48:53.570461 6682 raft_consensus.cc:695] T 00000000000000000000000000000000 P ec68b17b32754a83b1ef8876f64ed39f [term 2 LEADER]: Becoming Leader. State: Replica: ec68b17b32754a83b1ef8876f64ed39f, State: Running, Role: LEADER
I20250811 20:48:53.571630 6682 consensus_queue.cc:237] T 00000000000000000000000000000000 P ec68b17b32754a83b1ef8876f64ed39f [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 2, Committed index: 2, Last appended: 1.2, Last appended by leader: 2, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ec68b17b32754a83b1ef8876f64ed39f" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 46197 } }
I20250811 20:48:53.572028 6678 sys_catalog.cc:564] T 00000000000000000000000000000000 P ec68b17b32754a83b1ef8876f64ed39f [sys.catalog]: configured and running, proceeding with master startup.
I20250811 20:48:53.580753 6683 sys_catalog.cc:455] T 00000000000000000000000000000000 P ec68b17b32754a83b1ef8876f64ed39f [sys.catalog]: SysCatalogTable state changed. Reason: RaftConsensus started. Latest consensus state: current_term: 2 leader_uuid: "ec68b17b32754a83b1ef8876f64ed39f" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ec68b17b32754a83b1ef8876f64ed39f" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 46197 } } }
I20250811 20:48:53.581418 6683 sys_catalog.cc:458] T 00000000000000000000000000000000 P ec68b17b32754a83b1ef8876f64ed39f [sys.catalog]: This master's current role is: LEADER
I20250811 20:48:53.583348 6684 sys_catalog.cc:455] T 00000000000000000000000000000000 P ec68b17b32754a83b1ef8876f64ed39f [sys.catalog]: SysCatalogTable state changed. Reason: New leader ec68b17b32754a83b1ef8876f64ed39f. Latest consensus state: current_term: 2 leader_uuid: "ec68b17b32754a83b1ef8876f64ed39f" committed_config { opid_index: -1 OBSOLETE_local: true peers { permanent_uuid: "ec68b17b32754a83b1ef8876f64ed39f" member_type: VOTER last_known_addr { host: "127.31.250.254" port: 46197 } } }
I20250811 20:48:53.584004 6684 sys_catalog.cc:458] T 00000000000000000000000000000000 P ec68b17b32754a83b1ef8876f64ed39f [sys.catalog]: This master's current role is: LEADER
I20250811 20:48:53.584542 6688 catalog_manager.cc:1477] Loading table and tablet metadata into memory...
I20250811 20:48:53.605453 6688 catalog_manager.cc:671] Loaded metadata for table pre_rebuild [id=3a7032ec138742a3a76b0f90f7a39d43]
I20250811 20:48:53.614574 6688 tablet_loader.cc:96] loaded metadata for tablet 27f845a2b1d541a5b32c24834d8426fd (table pre_rebuild [id=3a7032ec138742a3a76b0f90f7a39d43])
I20250811 20:48:53.616201 6688 catalog_manager.cc:1486] Initializing Kudu cluster ID...
I20250811 20:48:53.648284 6688 catalog_manager.cc:1349] Generated new cluster ID: d86cf709ba804503aeac5fd4923c3202
I20250811 20:48:53.648566 6688 catalog_manager.cc:1497] Initializing Kudu internal certificate authority...
I20250811 20:48:53.686808 6688 catalog_manager.cc:1372] Generated new certificate authority record
I20250811 20:48:53.688884 6688 catalog_manager.cc:1506] Loading token signing keys...
I20250811 20:48:53.713063 6688 catalog_manager.cc:5955] T 00000000000000000000000000000000 P ec68b17b32754a83b1ef8876f64ed39f: Generated new TSK 0
I20250811 20:48:53.714120 6688 catalog_manager.cc:1516] Initializing in-progress tserver states...
W20250811 20:48:53.850315 6680 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:48:53.850819 6680 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 20:48:53.851395 6680 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:48:53.881829 6680 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 20:48:53.882642 6680 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.31.250.193
I20250811 20:48:53.916646 6680 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.31.250.212:36717
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-0/data
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-0/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.31.250.193:40061
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-0/data/info.pb
--webserver_interface=127.31.250.193
--webserver_port=45383
--tserver_master_addrs=127.31.250.254:46197
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.31.250.193
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-0/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8e873eb37b157a0cf7cb97cc7690367de9707107
build type FASTDEBUG
built by None at 11 Aug 2025 20:41:23 UTC on 24a791456cd2
build id 7521
TSAN enabled
I20250811 20:48:53.917941 6680 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 20:48:53.919593 6680 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 20:48:53.931768 6706 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:48:55.334975 6705 debug-util.cc:398] Leaking SignalData structure 0x7b0800034ea0 after lost signal to thread 6680
W20250811 20:48:55.349623 6680 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.418s user 0.496s sys 0.922s
W20250811 20:48:53.934265 6707 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:48:55.350013 6680 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.418s user 0.496s sys 0.922s
W20250811 20:48:55.351883 6709 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:48:55.354669 6708 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Connection timed out after 1422 milliseconds
I20250811 20:48:55.354687 6680 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 20:48:55.356600 6680 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 20:48:55.358806 6680 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 20:48:55.360183 6680 hybrid_clock.cc:648] HybridClock initialized: now 1754945335360118 us; error 76 us; skew 500 ppm
I20250811 20:48:55.360967 6680 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 20:48:55.367810 6680 webserver.cc:489] Webserver started at http://127.31.250.193:45383/ using document root <none> and password file <none>
I20250811 20:48:55.368862 6680 fs_manager.cc:362] Metadata directory not provided
I20250811 20:48:55.369195 6680 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 20:48:55.378072 6680 fs_manager.cc:714] Time spent opening directory manager: real 0.006s user 0.001s sys 0.004s
I20250811 20:48:55.383596 6716 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 20:48:55.384770 6680 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.003s sys 0.001s
I20250811 20:48:55.385106 6680 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-0/data,/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-0/wal
uuid: "f7420749c7f2423db6d0842344dd0ee4"
format_stamp: "Formatted at 2025-08-11 20:48:38 on dist-test-slave-4gzk"
I20250811 20:48:55.387136 6680 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-0/wal
metadata directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-0/wal
1 data directories: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-0/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 20:48:55.456971 6680 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 20:48:55.458527 6680 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 20:48:55.459040 6680 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 20:48:55.462271 6680 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 20:48:55.468874 6723 ts_tablet_manager.cc:542] Loading tablet metadata (0/1 complete)
I20250811 20:48:55.476727 6680 ts_tablet_manager.cc:579] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20250811 20:48:55.476958 6680 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.010s user 0.000s sys 0.003s
I20250811 20:48:55.477293 6680 ts_tablet_manager.cc:594] Registering tablets (0/1 complete)
I20250811 20:48:55.481901 6680 ts_tablet_manager.cc:610] Registered 1 tablets
I20250811 20:48:55.482168 6680 ts_tablet_manager.cc:589] Time spent register tablets: real 0.005s user 0.003s sys 0.000s
I20250811 20:48:55.482553 6723 tablet_bootstrap.cc:492] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4: Bootstrap starting.
I20250811 20:48:55.662573 6829 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.31.250.193:40061 every 8 connection(s)
I20250811 20:48:55.662583 6680 rpc_server.cc:307] RPC server started. Bound to: 127.31.250.193:40061
I20250811 20:48:55.666644 6680 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-0/data/info.pb
I20250811 20:48:55.676170 32747 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu as pid 6680
I20250811 20:48:55.678517 32747 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
/tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-1/wal
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-1/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-1/logs
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-1/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.31.250.194:44403
--local_ip_for_outbound_sockets=127.31.250.194
--tserver_master_addrs=127.31.250.254:46197
--webserver_port=43495
--webserver_interface=127.31.250.194
--builtin_ntp_servers=127.31.250.212:36717
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
I20250811 20:48:55.714542 6830 heartbeater.cc:344] Connected to a master server at 127.31.250.254:46197
I20250811 20:48:55.715022 6830 heartbeater.cc:461] Registering TS with master...
I20250811 20:48:55.716234 6830 heartbeater.cc:507] Master 127.31.250.254:46197 requested a full tablet report, sending...
I20250811 20:48:55.720945 6643 ts_manager.cc:194] Registered new tserver with Master: f7420749c7f2423db6d0842344dd0ee4 (127.31.250.193:40061)
I20250811 20:48:55.729120 6643 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.31.250.193:52853
I20250811 20:48:55.865885 6723 log.cc:826] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4: Log is configured to *not* fsync() on all Append() calls
W20250811 20:48:56.149804 6834 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:48:56.150471 6834 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 20:48:56.151206 6834 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:48:56.204161 6834 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 20:48:56.205552 6834 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.31.250.194
I20250811 20:48:56.242688 6834 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.31.250.212:36717
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-1/data
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-1/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.31.250.194:44403
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-1/data/info.pb
--webserver_interface=127.31.250.194
--webserver_port=43495
--tserver_master_addrs=127.31.250.254:46197
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.31.250.194
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-1/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8e873eb37b157a0cf7cb97cc7690367de9707107
build type FASTDEBUG
built by None at 11 Aug 2025 20:41:23 UTC on 24a791456cd2
build id 7521
TSAN enabled
I20250811 20:48:56.244021 6834 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 20:48:56.245517 6834 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 20:48:56.257740 6841 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 20:48:56.733901 6830 heartbeater.cc:499] Master 127.31.250.254:46197 was elected leader, sending a full tablet report...
W20250811 20:48:57.665949 6840 debug-util.cc:398] Leaking SignalData structure 0x7b0800034ea0 after lost signal to thread 6834
W20250811 20:48:56.258479 6842 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:48:57.939913 6834 thread.cc:641] GCE (cloud detector) Time spent creating pthread: real 1.684s user 0.668s sys 1.011s
W20250811 20:48:57.940577 6834 thread.cc:608] GCE (cloud detector) Time spent starting thread: real 1.684s user 0.668s sys 1.011s
W20250811 20:48:57.949270 6846 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 20:48:57.949694 6834 server_base.cc:1047] running on GCE node
I20250811 20:48:57.950712 6834 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 20:48:57.953138 6834 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 20:48:57.954622 6834 hybrid_clock.cc:648] HybridClock initialized: now 1754945337954580 us; error 54 us; skew 500 ppm
I20250811 20:48:57.955415 6834 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 20:48:57.961710 6834 webserver.cc:489] Webserver started at http://127.31.250.194:43495/ using document root <none> and password file <none>
I20250811 20:48:57.962673 6834 fs_manager.cc:362] Metadata directory not provided
I20250811 20:48:57.962904 6834 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 20:48:57.970968 6834 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.001s sys 0.005s
I20250811 20:48:57.976248 6851 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 20:48:57.977363 6834 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.004s sys 0.000s
I20250811 20:48:57.977663 6834 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-1/data,/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-1/wal
uuid: "4430edcc81c242dd8735c4971967e56b"
format_stamp: "Formatted at 2025-08-11 20:48:40 on dist-test-slave-4gzk"
I20250811 20:48:57.979583 6834 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-1/wal
metadata directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-1/wal
1 data directories: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-1/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 20:48:58.029633 6834 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 20:48:58.031015 6834 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 20:48:58.031450 6834 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 20:48:58.034456 6834 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 20:48:58.040863 6858 ts_tablet_manager.cc:542] Loading tablet metadata (0/1 complete)
I20250811 20:48:58.051716 6834 ts_tablet_manager.cc:579] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20250811 20:48:58.052017 6834 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.013s user 0.002s sys 0.000s
I20250811 20:48:58.052300 6834 ts_tablet_manager.cc:594] Registering tablets (0/1 complete)
I20250811 20:48:58.057040 6834 ts_tablet_manager.cc:610] Registered 1 tablets
I20250811 20:48:58.057256 6834 ts_tablet_manager.cc:589] Time spent register tablets: real 0.005s user 0.004s sys 0.000s
I20250811 20:48:58.057701 6858 tablet_bootstrap.cc:492] T 27f845a2b1d541a5b32c24834d8426fd P 4430edcc81c242dd8735c4971967e56b: Bootstrap starting.
I20250811 20:48:58.100967 6723 tablet_bootstrap.cc:492] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4: Bootstrap replayed 1/1 log segments. Stats: ops{read=206 overwritten=0 applied=206 ignored=0} inserts{seen=10250 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250811 20:48:58.102023 6723 tablet_bootstrap.cc:492] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4: Bootstrap complete.
I20250811 20:48:58.103827 6723 ts_tablet_manager.cc:1397] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4: Time spent bootstrapping tablet: real 2.622s user 2.529s sys 0.080s
I20250811 20:48:58.121526 6723 raft_consensus.cc:357] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "47f82216612d4f1ca2b3d5c8e278cb14" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 34797 } } peers { permanent_uuid: "4430edcc81c242dd8735c4971967e56b" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 44403 } } peers { permanent_uuid: "f7420749c7f2423db6d0842344dd0ee4" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 40061 } }
I20250811 20:48:58.124791 6723 raft_consensus.cc:738] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: f7420749c7f2423db6d0842344dd0ee4, State: Initialized, Role: FOLLOWER
I20250811 20:48:58.125953 6723 consensus_queue.cc:260] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 206, Last appended: 1.206, Last appended by leader: 206, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "47f82216612d4f1ca2b3d5c8e278cb14" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 34797 } } peers { permanent_uuid: "4430edcc81c242dd8735c4971967e56b" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 44403 } } peers { permanent_uuid: "f7420749c7f2423db6d0842344dd0ee4" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 40061 } }
I20250811 20:48:58.130136 6723 ts_tablet_manager.cc:1428] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4: Time spent starting tablet: real 0.026s user 0.018s sys 0.006s
I20250811 20:48:58.240263 6834 rpc_server.cc:307] RPC server started. Bound to: 127.31.250.194:44403
I20250811 20:48:58.240559 6965 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.31.250.194:44403 every 8 connection(s)
I20250811 20:48:58.242811 6834 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-1/data/info.pb
I20250811 20:48:58.249850 32747 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu as pid 6834
I20250811 20:48:58.251956 32747 external_mini_cluster.cc:1366] Running /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
/tmp/dist-test-taskexcsLP/build/tsan/bin/kudu
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-2/wal
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-2/data
--block_manager=log
--webserver_interface=localhost
--never_fsync
--enable_minidumps=false
--redact=none
--metrics_log_interval_ms=1000
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-2/logs
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-2/data/info.pb
--server_dump_info_format=pb
--rpc_server_allow_ephemeral_ports
--unlock_experimental_flags
--unlock_unsafe_flags
--logtostderr
--logbuflevel=-1
--ipki_server_key_size=768
--openssl_security_level_override=0
tserver
run
--rpc_bind_addresses=127.31.250.195:34797
--local_ip_for_outbound_sockets=127.31.250.195
--tserver_master_addrs=127.31.250.254:46197
--webserver_port=33793
--webserver_interface=127.31.250.195
--builtin_ntp_servers=127.31.250.212:36717
--builtin_ntp_poll_interval_ms=100
--ntp_initial_sync_wait_secs=10
--time_source=builtin with env {}
I20250811 20:48:58.272887 6966 heartbeater.cc:344] Connected to a master server at 127.31.250.254:46197
I20250811 20:48:58.273473 6966 heartbeater.cc:461] Registering TS with master...
I20250811 20:48:58.274811 6966 heartbeater.cc:507] Master 127.31.250.254:46197 requested a full tablet report, sending...
I20250811 20:48:58.279068 6643 ts_manager.cc:194] Registered new tserver with Master: 4430edcc81c242dd8735c4971967e56b (127.31.250.194:44403)
I20250811 20:48:58.282121 6643 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.31.250.194:54981
I20250811 20:48:58.313346 6858 log.cc:826] T 27f845a2b1d541a5b32c24834d8426fd P 4430edcc81c242dd8735c4971967e56b: Log is configured to *not* fsync() on all Append() calls
W20250811 20:48:58.563409 6970 flags.cc:425] Enabled unsafe flag: --openssl_security_level_override=0
W20250811 20:48:58.563889 6970 flags.cc:425] Enabled unsafe flag: --rpc_server_allow_ephemeral_ports=true
W20250811 20:48:58.564373 6970 flags.cc:425] Enabled unsafe flag: --never_fsync=true
W20250811 20:48:58.595865 6970 flags.cc:425] Enabled experimental flag: --ipki_server_key_size=768
W20250811 20:48:58.596784 6970 flags.cc:425] Enabled experimental flag: --local_ip_for_outbound_sockets=127.31.250.195
I20250811 20:48:58.630745 6970 tablet_server_runner.cc:78] Tablet server non-default flags:
--builtin_ntp_poll_interval_ms=100
--builtin_ntp_servers=127.31.250.212:36717
--ntp_initial_sync_wait_secs=10
--time_source=builtin
--fs_data_dirs=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-2/data
--fs_wal_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-2/wal
--ipki_server_key_size=768
--openssl_security_level_override=0
--rpc_bind_addresses=127.31.250.195:34797
--rpc_server_allow_ephemeral_ports=true
--metrics_log_interval_ms=1000
--server_dump_info_format=pb
--server_dump_info_path=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-2/data/info.pb
--webserver_interface=127.31.250.195
--webserver_port=33793
--tserver_master_addrs=127.31.250.254:46197
--never_fsync=true
--redact=none
--unlock_experimental_flags=true
--unlock_unsafe_flags=true
--enable_minidumps=false
--local_ip_for_outbound_sockets=127.31.250.195
--log_dir=/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-2/logs
--logbuflevel=-1
--logtostderr=true
Tablet server version:
kudu 1.19.0-SNAPSHOT
revision 8e873eb37b157a0cf7cb97cc7690367de9707107
build type FASTDEBUG
built by None at 11 Aug 2025 20:41:23 UTC on 24a791456cd2
build id 7521
TSAN enabled
I20250811 20:48:58.632110 6970 env_posix.cc:2264] Not raising this process' open files per process limit of 1048576; it is already as high as it can go
I20250811 20:48:58.633620 6970 file_cache.cc:492] Constructed file cache file cache with capacity 419430
W20250811 20:48:58.645803 6977 instance_detector.cc:116] could not retrieve AWS instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
I20250811 20:48:59.286173 6966 heartbeater.cc:499] Master 127.31.250.254:46197 was elected leader, sending a full tablet report...
I20250811 20:48:59.480718 6983 raft_consensus.cc:491] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4 [term 1 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250811 20:48:59.483110 6983 raft_consensus.cc:513] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4 [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "47f82216612d4f1ca2b3d5c8e278cb14" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 34797 } } peers { permanent_uuid: "4430edcc81c242dd8735c4971967e56b" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 44403 } } peers { permanent_uuid: "f7420749c7f2423db6d0842344dd0ee4" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 40061 } }
W20250811 20:48:59.506099 6717 proxy.cc:239] Call had error, refreshing address and retrying: Network error: Client connection negotiation failed: client connection to 127.31.250.195:34797: connect: Connection refused (error 111)
I20250811 20:48:59.513221 6983 leader_election.cc:290] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4 [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers 47f82216612d4f1ca2b3d5c8e278cb14 (127.31.250.195:34797), 4430edcc81c242dd8735c4971967e56b (127.31.250.194:44403)
W20250811 20:48:59.534195 6717 leader_election.cc:336] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4 [CANDIDATE]: Term 2 pre-election: RPC error from VoteRequest() call to peer 47f82216612d4f1ca2b3d5c8e278cb14 (127.31.250.195:34797): Network error: Client connection negotiation failed: client connection to 127.31.250.195:34797: connect: Connection refused (error 111)
I20250811 20:48:59.567096 6921 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "27f845a2b1d541a5b32c24834d8426fd" candidate_uuid: "f7420749c7f2423db6d0842344dd0ee4" candidate_term: 2 candidate_status { last_received { term: 1 index: 206 } } ignore_live_leader: false dest_uuid: "4430edcc81c242dd8735c4971967e56b" is_pre_election: true
W20250811 20:48:59.598109 6719 leader_election.cc:343] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4 [CANDIDATE]: Term 2 pre-election: Tablet error from VoteRequest() call to peer 4430edcc81c242dd8735c4971967e56b (127.31.250.194:44403): Illegal state: must be running to vote when last-logged opid is not known
I20250811 20:48:59.598852 6719 leader_election.cc:304] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4 [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate lost. Election summary: received 3 responses out of 3 voters: 1 yes votes; 2 no votes. yes voters: f7420749c7f2423db6d0842344dd0ee4; no voters: 4430edcc81c242dd8735c4971967e56b, 47f82216612d4f1ca2b3d5c8e278cb14
I20250811 20:48:59.600725 6983 raft_consensus.cc:2747] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4 [term 1 FOLLOWER]: Leader pre-election lost for term 2. Reason: could not achieve majority
W20250811 20:48:58.647228 6978 instance_detector.cc:116] could not retrieve Azure instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:49:00.048913 6976 debug-util.cc:398] Leaking SignalData structure 0x7b0800034ea0 after lost signal to thread 6970
W20250811 20:49:00.178769 6970 thread.cc:641] OpenStack (cloud detector) Time spent creating pthread: real 1.531s user 0.446s sys 1.022s
W20250811 20:49:00.180495 6970 thread.cc:608] OpenStack (cloud detector) Time spent starting thread: real 1.533s user 0.446s sys 1.022s
W20250811 20:49:00.181217 6980 instance_detector.cc:116] could not retrieve OpenStack instance metadata: Network error: curl error: HTTP response code said error: The requested URL returned error: 404
W20250811 20:49:00.185693 6979 instance_detector.cc:116] could not retrieve GCE instance metadata: Timed out: curl timeout: Timeout was reached: Resolving timed out after 1533 milliseconds
I20250811 20:49:00.185777 6970 server_base.cc:1042] Not found: could not retrieve instance metadata: unable to detect cloud type of this node, probably running in non-cloud environment
I20250811 20:49:00.186949 6970 hybrid_clock.cc:584] initializing the hybrid clock with 'builtin' time source
I20250811 20:49:00.189559 6970 hybrid_clock.cc:630] waiting up to --ntp_initial_sync_wait_secs=10 seconds for the clock to synchronize
I20250811 20:49:00.191072 6970 hybrid_clock.cc:648] HybridClock initialized: now 1754945340191026 us; error 61 us; skew 500 ppm
I20250811 20:49:00.191900 6970 server_base.cc:847] Flag tcmalloc_max_total_thread_cache_bytes is not working since tcmalloc is not enabled.
I20250811 20:49:00.198467 6970 webserver.cc:489] Webserver started at http://127.31.250.195:33793/ using document root <none> and password file <none>
I20250811 20:49:00.199517 6970 fs_manager.cc:362] Metadata directory not provided
I20250811 20:49:00.199752 6970 fs_manager.cc:368] Using write-ahead log directory (fs_wal_dir) as metadata directory
I20250811 20:49:00.208120 6970 fs_manager.cc:714] Time spent opening directory manager: real 0.005s user 0.001s sys 0.006s
I20250811 20:49:00.213651 6992 log_block_manager.cc:3788] Time spent loading block containers with low live blocks: real 0.000s user 0.000s sys 0.000s
I20250811 20:49:00.214743 6970 fs_manager.cc:730] Time spent opening block manager: real 0.004s user 0.005s sys 0.001s
I20250811 20:49:00.215047 6970 fs_manager.cc:647] Opened local filesystem: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-2/data,/tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-2/wal
uuid: "47f82216612d4f1ca2b3d5c8e278cb14"
format_stamp: "Formatted at 2025-08-11 20:48:41 on dist-test-slave-4gzk"
I20250811 20:49:00.217063 6970 fs_report.cc:389] FS layout report
--------------------
wal directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-2/wal
metadata directory: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-2/wal
1 data directories: /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-2/data/data
Total live blocks: 0
Total live bytes: 0
Total live bytes (after alignment): 0
Total number of LBM containers: 0 (0 full)
Did not check for missing blocks
Did not check for orphaned blocks
Total full LBM containers with extra space: 0 (0 repaired)
Total full LBM container extra space in bytes: 0 (0 repaired)
Total incomplete LBM containers: 0 (0 repaired)
Total LBM partial records: 0 (0 repaired)
Total corrupted LBM metadata records in RocksDB: 0 (0 repaired)
I20250811 20:49:00.287084 6970 rpc_server.cc:225] running with OpenSSL 1.1.1 11 Sep 2018
I20250811 20:49:00.288587 6970 env_posix.cc:2264] Not raising this process' running threads per effective uid limit of 18446744073709551615; it is already as high as it can go
I20250811 20:49:00.289029 6970 kserver.cc:163] Server-wide thread pool size limit: 3276
I20250811 20:49:00.292083 6970 txn_system_client.cc:432] TxnSystemClient initialization is disabled...
I20250811 20:49:00.299357 6999 ts_tablet_manager.cc:542] Loading tablet metadata (0/1 complete)
I20250811 20:49:00.310254 6970 ts_tablet_manager.cc:579] Loaded tablet metadata (1 total tablets, 1 live tablets)
I20250811 20:49:00.310521 6970 ts_tablet_manager.cc:525] Time spent load tablet metadata: real 0.013s user 0.001s sys 0.001s
I20250811 20:49:00.310814 6970 ts_tablet_manager.cc:594] Registering tablets (0/1 complete)
I20250811 20:49:00.315640 6970 ts_tablet_manager.cc:610] Registered 1 tablets
I20250811 20:49:00.315893 6970 ts_tablet_manager.cc:589] Time spent register tablets: real 0.005s user 0.005s sys 0.000s
I20250811 20:49:00.316370 6999 tablet_bootstrap.cc:492] T 27f845a2b1d541a5b32c24834d8426fd P 47f82216612d4f1ca2b3d5c8e278cb14: Bootstrap starting.
I20250811 20:49:00.545120 6970 rpc_server.cc:307] RPC server started. Bound to: 127.31.250.195:34797
I20250811 20:49:00.545310 7105 acceptor_pool.cc:272] collecting diagnostics on the listening RPC socket 127.31.250.195:34797 every 8 connection(s)
I20250811 20:49:00.548904 6970 server_base.cc:1179] Dumped server information to /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-2/data/info.pb
I20250811 20:49:00.559487 32747 external_mini_cluster.cc:1428] Started /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu as pid 6970
I20250811 20:49:00.587415 7106 heartbeater.cc:344] Connected to a master server at 127.31.250.254:46197
I20250811 20:49:00.587904 7106 heartbeater.cc:461] Registering TS with master...
I20250811 20:49:00.589115 7106 heartbeater.cc:507] Master 127.31.250.254:46197 requested a full tablet report, sending...
I20250811 20:49:00.592924 6643 ts_manager.cc:194] Registered new tserver with Master: 47f82216612d4f1ca2b3d5c8e278cb14 (127.31.250.195:34797)
I20250811 20:49:00.594904 32747 external_mini_cluster.cc:949] 3 TS(s) registered with all masters
I20250811 20:49:00.596086 6643 master_service.cc:496] Signed X509 certificate for tserver {username='slave'} at 127.31.250.195:56229
I20250811 20:49:00.667419 6999 log.cc:826] T 27f845a2b1d541a5b32c24834d8426fd P 47f82216612d4f1ca2b3d5c8e278cb14: Log is configured to *not* fsync() on all Append() calls
I20250811 20:49:01.372262 6858 tablet_bootstrap.cc:492] T 27f845a2b1d541a5b32c24834d8426fd P 4430edcc81c242dd8735c4971967e56b: Bootstrap replayed 1/1 log segments. Stats: ops{read=206 overwritten=0 applied=206 ignored=0} inserts{seen=10250 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250811 20:49:01.373031 6858 tablet_bootstrap.cc:492] T 27f845a2b1d541a5b32c24834d8426fd P 4430edcc81c242dd8735c4971967e56b: Bootstrap complete.
I20250811 20:49:01.374423 6858 ts_tablet_manager.cc:1397] T 27f845a2b1d541a5b32c24834d8426fd P 4430edcc81c242dd8735c4971967e56b: Time spent bootstrapping tablet: real 3.317s user 3.040s sys 0.088s
I20250811 20:49:01.379556 6858 raft_consensus.cc:357] T 27f845a2b1d541a5b32c24834d8426fd P 4430edcc81c242dd8735c4971967e56b [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "47f82216612d4f1ca2b3d5c8e278cb14" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 34797 } } peers { permanent_uuid: "4430edcc81c242dd8735c4971967e56b" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 44403 } } peers { permanent_uuid: "f7420749c7f2423db6d0842344dd0ee4" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 40061 } }
I20250811 20:49:01.381515 6858 raft_consensus.cc:738] T 27f845a2b1d541a5b32c24834d8426fd P 4430edcc81c242dd8735c4971967e56b [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 4430edcc81c242dd8735c4971967e56b, State: Initialized, Role: FOLLOWER
I20250811 20:49:01.382303 6858 consensus_queue.cc:260] T 27f845a2b1d541a5b32c24834d8426fd P 4430edcc81c242dd8735c4971967e56b [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 206, Last appended: 1.206, Last appended by leader: 206, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "47f82216612d4f1ca2b3d5c8e278cb14" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 34797 } } peers { permanent_uuid: "4430edcc81c242dd8735c4971967e56b" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 44403 } } peers { permanent_uuid: "f7420749c7f2423db6d0842344dd0ee4" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 40061 } }
I20250811 20:49:01.385401 6858 ts_tablet_manager.cc:1428] T 27f845a2b1d541a5b32c24834d8426fd P 4430edcc81c242dd8735c4971967e56b: Time spent starting tablet: real 0.011s user 0.009s sys 0.004s
I20250811 20:49:01.600466 7106 heartbeater.cc:499] Master 127.31.250.254:46197 was elected leader, sending a full tablet report...
I20250811 20:49:01.656356 7119 raft_consensus.cc:491] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4 [term 1 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250811 20:49:01.656738 7119 raft_consensus.cc:513] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4 [term 1 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "47f82216612d4f1ca2b3d5c8e278cb14" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 34797 } } peers { permanent_uuid: "4430edcc81c242dd8735c4971967e56b" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 44403 } } peers { permanent_uuid: "f7420749c7f2423db6d0842344dd0ee4" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 40061 } }
I20250811 20:49:01.658077 7119 leader_election.cc:290] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4 [CANDIDATE]: Term 2 pre-election: Requested pre-vote from peers 47f82216612d4f1ca2b3d5c8e278cb14 (127.31.250.195:34797), 4430edcc81c242dd8735c4971967e56b (127.31.250.194:44403)
I20250811 20:49:01.659397 6921 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "27f845a2b1d541a5b32c24834d8426fd" candidate_uuid: "f7420749c7f2423db6d0842344dd0ee4" candidate_term: 2 candidate_status { last_received { term: 1 index: 206 } } ignore_live_leader: false dest_uuid: "4430edcc81c242dd8735c4971967e56b" is_pre_election: true
I20250811 20:49:01.660207 6921 raft_consensus.cc:2466] T 27f845a2b1d541a5b32c24834d8426fd P 4430edcc81c242dd8735c4971967e56b [term 1 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate f7420749c7f2423db6d0842344dd0ee4 in term 1.
I20250811 20:49:01.661849 6719 leader_election.cc:304] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4 [CANDIDATE]: Term 2 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 4430edcc81c242dd8735c4971967e56b, f7420749c7f2423db6d0842344dd0ee4; no voters:
I20250811 20:49:01.662964 7119 raft_consensus.cc:2802] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4 [term 1 FOLLOWER]: Leader pre-election won for term 2
I20250811 20:49:01.663447 7119 raft_consensus.cc:491] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4 [term 1 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250811 20:49:01.663928 7119 raft_consensus.cc:3058] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4 [term 1 FOLLOWER]: Advancing to term 2
I20250811 20:49:01.674360 7119 raft_consensus.cc:513] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4 [term 2 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "47f82216612d4f1ca2b3d5c8e278cb14" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 34797 } } peers { permanent_uuid: "4430edcc81c242dd8735c4971967e56b" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 44403 } } peers { permanent_uuid: "f7420749c7f2423db6d0842344dd0ee4" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 40061 } }
I20250811 20:49:01.677480 6921 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "27f845a2b1d541a5b32c24834d8426fd" candidate_uuid: "f7420749c7f2423db6d0842344dd0ee4" candidate_term: 2 candidate_status { last_received { term: 1 index: 206 } } ignore_live_leader: false dest_uuid: "4430edcc81c242dd8735c4971967e56b"
I20250811 20:49:01.678015 6921 raft_consensus.cc:3058] T 27f845a2b1d541a5b32c24834d8426fd P 4430edcc81c242dd8735c4971967e56b [term 1 FOLLOWER]: Advancing to term 2
I20250811 20:49:01.681146 7119 leader_election.cc:290] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4 [CANDIDATE]: Term 2 election: Requested vote from peers 47f82216612d4f1ca2b3d5c8e278cb14 (127.31.250.195:34797), 4430edcc81c242dd8735c4971967e56b (127.31.250.194:44403)
I20250811 20:49:01.686686 6921 raft_consensus.cc:2466] T 27f845a2b1d541a5b32c24834d8426fd P 4430edcc81c242dd8735c4971967e56b [term 2 FOLLOWER]: Leader election vote request: Granting yes vote for candidate f7420749c7f2423db6d0842344dd0ee4 in term 2.
I20250811 20:49:01.687642 6719 leader_election.cc:304] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4 [CANDIDATE]: Term 2 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 4430edcc81c242dd8735c4971967e56b, f7420749c7f2423db6d0842344dd0ee4; no voters:
I20250811 20:49:01.688238 7119 raft_consensus.cc:2802] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4 [term 2 FOLLOWER]: Leader election won for term 2
I20250811 20:49:01.689958 7119 raft_consensus.cc:695] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4 [term 2 LEADER]: Becoming Leader. State: Replica: f7420749c7f2423db6d0842344dd0ee4, State: Running, Role: LEADER
I20250811 20:49:01.690953 7119 consensus_queue.cc:237] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 206, Committed index: 206, Last appended: 1.206, Last appended by leader: 206, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "47f82216612d4f1ca2b3d5c8e278cb14" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 34797 } } peers { permanent_uuid: "4430edcc81c242dd8735c4971967e56b" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 44403 } } peers { permanent_uuid: "f7420749c7f2423db6d0842344dd0ee4" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 40061 } }
I20250811 20:49:01.683763 7060 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "27f845a2b1d541a5b32c24834d8426fd" candidate_uuid: "f7420749c7f2423db6d0842344dd0ee4" candidate_term: 2 candidate_status { last_received { term: 1 index: 206 } } ignore_live_leader: false dest_uuid: "47f82216612d4f1ca2b3d5c8e278cb14"
I20250811 20:49:01.683066 7061 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "27f845a2b1d541a5b32c24834d8426fd" candidate_uuid: "f7420749c7f2423db6d0842344dd0ee4" candidate_term: 2 candidate_status { last_received { term: 1 index: 206 } } ignore_live_leader: false dest_uuid: "47f82216612d4f1ca2b3d5c8e278cb14" is_pre_election: true
W20250811 20:49:01.695725 6717 leader_election.cc:343] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4 [CANDIDATE]: Term 2 election: Tablet error from VoteRequest() call to peer 47f82216612d4f1ca2b3d5c8e278cb14 (127.31.250.195:34797): Illegal state: must be running to vote when last-logged opid is not known
W20250811 20:49:01.697086 6717 leader_election.cc:343] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4 [CANDIDATE]: Term 2 pre-election: Tablet error from VoteRequest() call to peer 47f82216612d4f1ca2b3d5c8e278cb14 (127.31.250.195:34797): Illegal state: must be running to vote when last-logged opid is not known
I20250811 20:49:01.702196 6643 catalog_manager.cc:5582] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4 reported cstate change: term changed from 0 to 2, leader changed from <none> to f7420749c7f2423db6d0842344dd0ee4 (127.31.250.193), VOTER 4430edcc81c242dd8735c4971967e56b (127.31.250.194) added, VOTER 47f82216612d4f1ca2b3d5c8e278cb14 (127.31.250.195) added, VOTER f7420749c7f2423db6d0842344dd0ee4 (127.31.250.193) added. New cstate: current_term: 2 leader_uuid: "f7420749c7f2423db6d0842344dd0ee4" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "47f82216612d4f1ca2b3d5c8e278cb14" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 34797 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "4430edcc81c242dd8735c4971967e56b" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 44403 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "f7420749c7f2423db6d0842344dd0ee4" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 40061 } health_report { overall_health: HEALTHY } } }
I20250811 20:49:02.085491 6921 raft_consensus.cc:1273] T 27f845a2b1d541a5b32c24834d8426fd P 4430edcc81c242dd8735c4971967e56b [term 2 FOLLOWER]: Refusing update from remote peer f7420749c7f2423db6d0842344dd0ee4: Log matching property violated. Preceding OpId in replica: term: 1 index: 206. Preceding OpId from leader: term: 2 index: 207. (index mismatch)
I20250811 20:49:02.087149 7119 consensus_queue.cc:1035] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4 [LEADER]: Connected to new peer: Peer: permanent_uuid: "4430edcc81c242dd8735c4971967e56b" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 44403 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 207, Last known committed idx: 206, Time since last communication: 0.000s
W20250811 20:49:02.107924 6717 consensus_peers.cc:489] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4 -> Peer 47f82216612d4f1ca2b3d5c8e278cb14 (127.31.250.195:34797): Couldn't send request to peer 47f82216612d4f1ca2b3d5c8e278cb14. Error code: TABLET_NOT_RUNNING (12). Status: Illegal state: Tablet not RUNNING: BOOTSTRAPPING. This is attempt 1: this message will repeat every 5th retry.
I20250811 20:49:02.128436 6785 consensus_queue.cc:237] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 207, Committed index: 207, Last appended: 2.207, Last appended by leader: 206, Current term: 2, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: 208 OBSOLETE_local: false peers { permanent_uuid: "4430edcc81c242dd8735c4971967e56b" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 44403 } } peers { permanent_uuid: "f7420749c7f2423db6d0842344dd0ee4" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 40061 } }
I20250811 20:49:02.132170 6921 raft_consensus.cc:1273] T 27f845a2b1d541a5b32c24834d8426fd P 4430edcc81c242dd8735c4971967e56b [term 2 FOLLOWER]: Refusing update from remote peer f7420749c7f2423db6d0842344dd0ee4: Log matching property violated. Preceding OpId in replica: term: 2 index: 207. Preceding OpId from leader: term: 2 index: 208. (index mismatch)
I20250811 20:49:02.133173 7119 consensus_queue.cc:1035] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4 [LEADER]: Connected to new peer: Peer: permanent_uuid: "4430edcc81c242dd8735c4971967e56b" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 44403 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 208, Last known committed idx: 207, Time since last communication: 0.000s
I20250811 20:49:02.138453 7122 raft_consensus.cc:2953] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4 [term 2 LEADER]: Committing config change with OpId 2.208: config changed from index -1 to 208, VOTER 47f82216612d4f1ca2b3d5c8e278cb14 (127.31.250.195) evicted. New config: { opid_index: 208 OBSOLETE_local: false peers { permanent_uuid: "4430edcc81c242dd8735c4971967e56b" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 44403 } } peers { permanent_uuid: "f7420749c7f2423db6d0842344dd0ee4" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 40061 } } }
I20250811 20:49:02.139921 6921 raft_consensus.cc:2953] T 27f845a2b1d541a5b32c24834d8426fd P 4430edcc81c242dd8735c4971967e56b [term 2 FOLLOWER]: Committing config change with OpId 2.208: config changed from index -1 to 208, VOTER 47f82216612d4f1ca2b3d5c8e278cb14 (127.31.250.195) evicted. New config: { opid_index: 208 OBSOLETE_local: false peers { permanent_uuid: "4430edcc81c242dd8735c4971967e56b" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 44403 } } peers { permanent_uuid: "f7420749c7f2423db6d0842344dd0ee4" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 40061 } } }
I20250811 20:49:02.148269 6629 catalog_manager.cc:5095] ChangeConfig:REMOVE_PEER RPC for tablet 27f845a2b1d541a5b32c24834d8426fd with cas_config_opid_index -1: ChangeConfig:REMOVE_PEER succeeded (attempt 1)
I20250811 20:49:02.154476 6643 catalog_manager.cc:5582] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4 reported cstate change: config changed from index -1 to 208, VOTER 47f82216612d4f1ca2b3d5c8e278cb14 (127.31.250.195) evicted. New cstate: current_term: 2 leader_uuid: "f7420749c7f2423db6d0842344dd0ee4" committed_config { opid_index: 208 OBSOLETE_local: false peers { permanent_uuid: "4430edcc81c242dd8735c4971967e56b" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 44403 } health_report { overall_health: HEALTHY } } peers { permanent_uuid: "f7420749c7f2423db6d0842344dd0ee4" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 40061 } health_report { overall_health: HEALTHY } } }
I20250811 20:49:02.174567 6785 consensus_queue.cc:237] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 208, Committed index: 208, Last appended: 2.208, Last appended by leader: 206, Current term: 2, Majority size: 1, State: 0, Mode: LEADER, active raft config: opid_index: 209 OBSOLETE_local: false peers { permanent_uuid: "f7420749c7f2423db6d0842344dd0ee4" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 40061 } }
I20250811 20:49:02.177134 7122 raft_consensus.cc:2953] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4 [term 2 LEADER]: Committing config change with OpId 2.209: config changed from index 208 to 209, VOTER 4430edcc81c242dd8735c4971967e56b (127.31.250.194) evicted. New config: { opid_index: 209 OBSOLETE_local: false peers { permanent_uuid: "f7420749c7f2423db6d0842344dd0ee4" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 40061 } } }
I20250811 20:49:02.181857 7041 tablet_service.cc:1515] Processing DeleteTablet for tablet 27f845a2b1d541a5b32c24834d8426fd with delete_type TABLET_DATA_TOMBSTONED (TS 47f82216612d4f1ca2b3d5c8e278cb14 not found in new config with opid_index 208) from {username='slave'} at 127.0.0.1:44358
I20250811 20:49:02.186971 6629 catalog_manager.cc:5095] ChangeConfig:REMOVE_PEER RPC for tablet 27f845a2b1d541a5b32c24834d8426fd with cas_config_opid_index 208: ChangeConfig:REMOVE_PEER succeeded (attempt 1)
I20250811 20:49:02.189924 6643 catalog_manager.cc:5582] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4 reported cstate change: config changed from index 208 to 209, VOTER 4430edcc81c242dd8735c4971967e56b (127.31.250.194) evicted. New cstate: current_term: 2 leader_uuid: "f7420749c7f2423db6d0842344dd0ee4" committed_config { opid_index: 209 OBSOLETE_local: false peers { permanent_uuid: "f7420749c7f2423db6d0842344dd0ee4" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 40061 } health_report { overall_health: HEALTHY } } }
W20250811 20:49:02.201051 6627 catalog_manager.cc:4908] TS 47f82216612d4f1ca2b3d5c8e278cb14 (127.31.250.195:34797): delete failed for tablet 27f845a2b1d541a5b32c24834d8426fd because tablet deleting was already in progress. No further retry: Already present: State transition of tablet 27f845a2b1d541a5b32c24834d8426fd already in progress: opening tablet
I20250811 20:49:02.223892 6901 tablet_service.cc:1515] Processing DeleteTablet for tablet 27f845a2b1d541a5b32c24834d8426fd with delete_type TABLET_DATA_TOMBSTONED (TS 4430edcc81c242dd8735c4971967e56b not found in new config with opid_index 209) from {username='slave'} at 127.0.0.1:45490
I20250811 20:49:02.236462 7141 tablet_replica.cc:331] stopping tablet replica
I20250811 20:49:02.237335 7141 raft_consensus.cc:2241] T 27f845a2b1d541a5b32c24834d8426fd P 4430edcc81c242dd8735c4971967e56b [term 2 FOLLOWER]: Raft consensus shutting down.
I20250811 20:49:02.238076 7141 raft_consensus.cc:2270] T 27f845a2b1d541a5b32c24834d8426fd P 4430edcc81c242dd8735c4971967e56b [term 2 FOLLOWER]: Raft consensus is shut down!
I20250811 20:49:02.271219 7141 ts_tablet_manager.cc:1905] T 27f845a2b1d541a5b32c24834d8426fd P 4430edcc81c242dd8735c4971967e56b: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20250811 20:49:02.286335 7141 ts_tablet_manager.cc:1918] T 27f845a2b1d541a5b32c24834d8426fd P 4430edcc81c242dd8735c4971967e56b: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 2.208
I20250811 20:49:02.286901 7141 log.cc:1199] T 27f845a2b1d541a5b32c24834d8426fd P 4430edcc81c242dd8735c4971967e56b: Deleting WAL directory at /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-1/wal/wals/27f845a2b1d541a5b32c24834d8426fd
I20250811 20:49:02.288833 6629 catalog_manager.cc:4928] TS 4430edcc81c242dd8735c4971967e56b (127.31.250.194:44403): tablet 27f845a2b1d541a5b32c24834d8426fd (table pre_rebuild [id=3a7032ec138742a3a76b0f90f7a39d43]) successfully deleted
W20250811 20:49:02.433853 32747 scanner-internal.cc:458] Time spent opening tablet: real 1.806s user 0.007s sys 0.001s
I20250811 20:49:02.929317 6999 tablet_bootstrap.cc:492] T 27f845a2b1d541a5b32c24834d8426fd P 47f82216612d4f1ca2b3d5c8e278cb14: Bootstrap replayed 1/1 log segments. Stats: ops{read=206 overwritten=0 applied=206 ignored=0} inserts{seen=10250 ignored=0} mutations{seen=0 ignored=0} orphaned_commits=0. Pending: 0 replicates
I20250811 20:49:02.930351 6999 tablet_bootstrap.cc:492] T 27f845a2b1d541a5b32c24834d8426fd P 47f82216612d4f1ca2b3d5c8e278cb14: Bootstrap complete.
I20250811 20:49:02.932102 6999 ts_tablet_manager.cc:1397] T 27f845a2b1d541a5b32c24834d8426fd P 47f82216612d4f1ca2b3d5c8e278cb14: Time spent bootstrapping tablet: real 2.616s user 2.490s sys 0.051s
I20250811 20:49:02.939888 6999 raft_consensus.cc:357] T 27f845a2b1d541a5b32c24834d8426fd P 47f82216612d4f1ca2b3d5c8e278cb14 [term 1 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "47f82216612d4f1ca2b3d5c8e278cb14" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 34797 } } peers { permanent_uuid: "4430edcc81c242dd8735c4971967e56b" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 44403 } } peers { permanent_uuid: "f7420749c7f2423db6d0842344dd0ee4" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 40061 } }
I20250811 20:49:02.943414 6999 raft_consensus.cc:738] T 27f845a2b1d541a5b32c24834d8426fd P 47f82216612d4f1ca2b3d5c8e278cb14 [term 1 FOLLOWER]: Becoming Follower/Learner. State: Replica: 47f82216612d4f1ca2b3d5c8e278cb14, State: Initialized, Role: FOLLOWER
I20250811 20:49:02.945278 6999 consensus_queue.cc:260] T 27f845a2b1d541a5b32c24834d8426fd P 47f82216612d4f1ca2b3d5c8e278cb14 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 206, Last appended: 1.206, Last appended by leader: 206, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "47f82216612d4f1ca2b3d5c8e278cb14" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 34797 } } peers { permanent_uuid: "4430edcc81c242dd8735c4971967e56b" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 44403 } } peers { permanent_uuid: "f7420749c7f2423db6d0842344dd0ee4" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 40061 } }
I20250811 20:49:02.948855 6999 ts_tablet_manager.cc:1428] T 27f845a2b1d541a5b32c24834d8426fd P 47f82216612d4f1ca2b3d5c8e278cb14: Time spent starting tablet: real 0.016s user 0.013s sys 0.000s
I20250811 20:49:02.956151 7041 tablet_service.cc:1515] Processing DeleteTablet for tablet 27f845a2b1d541a5b32c24834d8426fd with delete_type TABLET_DATA_TOMBSTONED (Replica has no consensus available (current committed config index is 209)) from {username='slave'} at 127.0.0.1:44358
I20250811 20:49:02.962646 7150 tablet_replica.cc:331] stopping tablet replica
I20250811 20:49:02.963747 7150 raft_consensus.cc:2241] T 27f845a2b1d541a5b32c24834d8426fd P 47f82216612d4f1ca2b3d5c8e278cb14 [term 1 FOLLOWER]: Raft consensus shutting down.
I20250811 20:49:02.966071 7150 raft_consensus.cc:2270] T 27f845a2b1d541a5b32c24834d8426fd P 47f82216612d4f1ca2b3d5c8e278cb14 [term 1 FOLLOWER]: Raft consensus is shut down!
I20250811 20:49:02.997743 7150 ts_tablet_manager.cc:1905] T 27f845a2b1d541a5b32c24834d8426fd P 47f82216612d4f1ca2b3d5c8e278cb14: Deleting tablet data with delete state TABLET_DATA_TOMBSTONED
I20250811 20:49:03.017493 7150 ts_tablet_manager.cc:1918] T 27f845a2b1d541a5b32c24834d8426fd P 47f82216612d4f1ca2b3d5c8e278cb14: tablet deleted with delete type TABLET_DATA_TOMBSTONED: last-logged OpId 1.206
I20250811 20:49:03.017942 7150 log.cc:1199] T 27f845a2b1d541a5b32c24834d8426fd P 47f82216612d4f1ca2b3d5c8e278cb14: Deleting WAL directory at /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-2/wal/wals/27f845a2b1d541a5b32c24834d8426fd
I20250811 20:49:03.019840 6627 catalog_manager.cc:4928] TS 47f82216612d4f1ca2b3d5c8e278cb14 (127.31.250.195:34797): tablet 27f845a2b1d541a5b32c24834d8426fd (table pre_rebuild [id=3a7032ec138742a3a76b0f90f7a39d43]) successfully deleted
I20250811 20:49:03.230748 6901 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250811 20:49:03.251421 7041 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250811 20:49:03.270432 6765 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
Master Summary
UUID | Address | Status
----------------------------------+----------------------+---------
ec68b17b32754a83b1ef8876f64ed39f | 127.31.250.254:46197 | HEALTHY
Unusual flags for Master:
Flag | Value | Tags | Master
----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
ipki_ca_key_size | 768 | experimental | all 1 server(s) checked
ipki_server_key_size | 768 | experimental | all 1 server(s) checked
never_fsync | true | unsafe,advanced | all 1 server(s) checked
openssl_security_level_override | 0 | unsafe,hidden | all 1 server(s) checked
rpc_reuseport | true | experimental | all 1 server(s) checked
rpc_server_allow_ephemeral_ports | true | unsafe | all 1 server(s) checked
server_dump_info_format | pb | hidden | all 1 server(s) checked
server_dump_info_path | /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/master-0/data/info.pb | hidden | all 1 server(s) checked
tsk_num_rsa_bits | 512 | experimental | all 1 server(s) checked
Flags of checked categories for Master:
Flag | Value | Master
---------------------+----------------------+-------------------------
builtin_ntp_servers | 127.31.250.212:36717 | all 1 server(s) checked
time_source | builtin | all 1 server(s) checked
Tablet Server Summary
UUID | Address | Status | Location | Tablet Leaders | Active Scanners
----------------------------------+----------------------+---------+----------+----------------+-----------------
4430edcc81c242dd8735c4971967e56b | 127.31.250.194:44403 | HEALTHY | <none> | 0 | 0
47f82216612d4f1ca2b3d5c8e278cb14 | 127.31.250.195:34797 | HEALTHY | <none> | 0 | 0
f7420749c7f2423db6d0842344dd0ee4 | 127.31.250.193:40061 | HEALTHY | <none> | 1 | 0
Tablet Server Location Summary
Location | Count
----------+---------
<none> | 3
Unusual flags for Tablet Server:
Flag | Value | Tags | Tablet Server
----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
ipki_server_key_size | 768 | experimental | all 3 server(s) checked
local_ip_for_outbound_sockets | 127.31.250.193 | experimental | 127.31.250.193:40061
local_ip_for_outbound_sockets | 127.31.250.194 | experimental | 127.31.250.194:44403
local_ip_for_outbound_sockets | 127.31.250.195 | experimental | 127.31.250.195:34797
never_fsync | true | unsafe,advanced | all 3 server(s) checked
openssl_security_level_override | 0 | unsafe,hidden | all 3 server(s) checked
rpc_server_allow_ephemeral_ports | true | unsafe | all 3 server(s) checked
server_dump_info_format | pb | hidden | all 3 server(s) checked
server_dump_info_path | /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-0/data/info.pb | hidden | 127.31.250.193:40061
server_dump_info_path | /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-1/data/info.pb | hidden | 127.31.250.194:44403
server_dump_info_path | /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-2/data/info.pb | hidden | 127.31.250.195:34797
Flags of checked categories for Tablet Server:
Flag | Value | Tablet Server
---------------------+----------------------+-------------------------
builtin_ntp_servers | 127.31.250.212:36717 | all 3 server(s) checked
time_source | builtin | all 3 server(s) checked
Version Summary
Version | Servers
-----------------+-------------------------
1.19.0-SNAPSHOT | all 4 server(s) checked
Tablet Summary
The cluster doesn't have any matching system tables
Summary by table
Name | RF | Status | Total Tablets | Healthy | Recovering | Under-replicated | Unavailable
-------------+----+---------+---------------+---------+------------+------------------+-------------
pre_rebuild | 1 | HEALTHY | 1 | 1 | 0 | 0 | 0
Tablet Replica Count Summary
Statistic | Replica Count
----------------+---------------
Minimum | 0
First Quartile | 0
Median | 0
Third Quartile | 1
Maximum | 1
Total Count Summary
| Total Count
----------------+-------------
Masters | 1
Tablet Servers | 3
Tables | 1
Tablets | 1
Replicas | 1
==================
Warnings:
==================
Some masters have unsafe, experimental, or hidden flags set
Some tablet servers have unsafe, experimental, or hidden flags set
OK
I20250811 20:49:03.526772 32747 log_verifier.cc:126] Checking tablet 27f845a2b1d541a5b32c24834d8426fd
I20250811 20:49:03.800222 32747 log_verifier.cc:177] Verified matching terms for 209 ops in tablet 27f845a2b1d541a5b32c24834d8426fd
I20250811 20:49:03.802763 6642 catalog_manager.cc:2482] Servicing SoftDeleteTable request from {username='slave'} at 127.0.0.1:39570:
table { table_name: "pre_rebuild" } modify_external_catalogs: true
I20250811 20:49:03.803391 6642 catalog_manager.cc:2730] Servicing DeleteTable request from {username='slave'} at 127.0.0.1:39570:
table { table_name: "pre_rebuild" } modify_external_catalogs: true
I20250811 20:49:03.816769 6642 catalog_manager.cc:5869] T 00000000000000000000000000000000 P ec68b17b32754a83b1ef8876f64ed39f: Sending DeleteTablet for 1 replicas of tablet 27f845a2b1d541a5b32c24834d8426fd
I20250811 20:49:03.818714 32747 test_util.cc:276] Using random seed: 231836606
I20250811 20:49:03.818588 6765 tablet_service.cc:1515] Processing DeleteTablet for tablet 27f845a2b1d541a5b32c24834d8426fd with delete_type TABLET_DATA_DELETED (Table deleted at 2025-08-11 20:49:03 UTC) from {username='slave'} at 127.0.0.1:44854
I20250811 20:49:03.820497 7174 tablet_replica.cc:331] stopping tablet replica
I20250811 20:49:03.821269 7174 raft_consensus.cc:2241] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4 [term 2 LEADER]: Raft consensus shutting down.
I20250811 20:49:03.821841 7174 raft_consensus.cc:2270] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4 [term 2 FOLLOWER]: Raft consensus is shut down!
I20250811 20:49:03.865759 6642 catalog_manager.cc:2232] Servicing CreateTable request from {username='slave'} at 127.0.0.1:37772:
name: "post_rebuild"
schema {
columns {
name: "key"
type: INT32
is_key: true
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "int_val"
type: INT32
is_key: false
is_nullable: false
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
columns {
name: "string_val"
type: STRING
is_key: false
is_nullable: true
encoding: AUTO_ENCODING
compression: DEFAULT_COMPRESSION
cfile_block_size: 0
immutable: false
}
}
num_replicas: 3
split_rows_range_bounds {
}
partition_schema {
range_schema {
columns {
name: "key"
}
}
}
W20250811 20:49:03.869479 6642 catalog_manager.cc:6944] The number of live tablet servers is not enough to re-replicate a tablet replica of the newly created table post_rebuild in case of a server failure: 4 tablet servers would be needed, 3 are available. Consider bringing up more tablet servers.
I20250811 20:49:03.898540 6901 tablet_service.cc:1468] Processing CreateTablet for tablet 01f6214dad8549b08e0bc3b63539f40d (DEFAULT_TABLE table=post_rebuild [id=b3a7e40d52584f70af394cd0eeffc6aa]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 20:49:03.898699 7174 ts_tablet_manager.cc:1905] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4: Deleting tablet data with delete state TABLET_DATA_DELETED
I20250811 20:49:03.899992 6901 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 01f6214dad8549b08e0bc3b63539f40d. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 20:49:03.906574 7041 tablet_service.cc:1468] Processing CreateTablet for tablet 01f6214dad8549b08e0bc3b63539f40d (DEFAULT_TABLE table=post_rebuild [id=b3a7e40d52584f70af394cd0eeffc6aa]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 20:49:03.907971 7041 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 01f6214dad8549b08e0bc3b63539f40d. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 20:49:03.907609 6765 tablet_service.cc:1468] Processing CreateTablet for tablet 01f6214dad8549b08e0bc3b63539f40d (DEFAULT_TABLE table=post_rebuild [id=b3a7e40d52584f70af394cd0eeffc6aa]), partition=RANGE (key) PARTITION UNBOUNDED
I20250811 20:49:03.909453 6765 data_dirs.cc:400] Could only allocate 1 dirs of requested 3 for tablet 01f6214dad8549b08e0bc3b63539f40d. 1 dirs total, 0 dirs full, 0 dirs failed
I20250811 20:49:03.918146 7174 ts_tablet_manager.cc:1918] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4: tablet deleted with delete type TABLET_DATA_DELETED: last-logged OpId 2.209
I20250811 20:49:03.918709 7174 log.cc:1199] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4: Deleting WAL directory at /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-0/wal/wals/27f845a2b1d541a5b32c24834d8426fd
I20250811 20:49:03.919873 7174 ts_tablet_manager.cc:1939] T 27f845a2b1d541a5b32c24834d8426fd P f7420749c7f2423db6d0842344dd0ee4: Deleting consensus metadata
I20250811 20:49:03.923794 6629 catalog_manager.cc:4928] TS f7420749c7f2423db6d0842344dd0ee4 (127.31.250.193:40061): tablet 27f845a2b1d541a5b32c24834d8426fd (table pre_rebuild [id=3a7032ec138742a3a76b0f90f7a39d43]) successfully deleted
I20250811 20:49:03.931516 7182 tablet_bootstrap.cc:492] T 01f6214dad8549b08e0bc3b63539f40d P 47f82216612d4f1ca2b3d5c8e278cb14: Bootstrap starting.
I20250811 20:49:03.937400 7182 tablet_bootstrap.cc:654] T 01f6214dad8549b08e0bc3b63539f40d P 47f82216612d4f1ca2b3d5c8e278cb14: Neither blocks nor log segments found. Creating new log.
I20250811 20:49:03.945385 7183 tablet_bootstrap.cc:492] T 01f6214dad8549b08e0bc3b63539f40d P f7420749c7f2423db6d0842344dd0ee4: Bootstrap starting.
I20250811 20:49:03.949529 7184 tablet_bootstrap.cc:492] T 01f6214dad8549b08e0bc3b63539f40d P 4430edcc81c242dd8735c4971967e56b: Bootstrap starting.
I20250811 20:49:03.954443 7182 tablet_bootstrap.cc:492] T 01f6214dad8549b08e0bc3b63539f40d P 47f82216612d4f1ca2b3d5c8e278cb14: No bootstrap required, opened a new log
I20250811 20:49:03.954890 7182 ts_tablet_manager.cc:1397] T 01f6214dad8549b08e0bc3b63539f40d P 47f82216612d4f1ca2b3d5c8e278cb14: Time spent bootstrapping tablet: real 0.024s user 0.007s sys 0.014s
I20250811 20:49:03.957216 7183 tablet_bootstrap.cc:654] T 01f6214dad8549b08e0bc3b63539f40d P f7420749c7f2423db6d0842344dd0ee4: Neither blocks nor log segments found. Creating new log.
I20250811 20:49:03.957651 7182 raft_consensus.cc:357] T 01f6214dad8549b08e0bc3b63539f40d P 47f82216612d4f1ca2b3d5c8e278cb14 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "4430edcc81c242dd8735c4971967e56b" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 44403 } } peers { permanent_uuid: "f7420749c7f2423db6d0842344dd0ee4" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 40061 } } peers { permanent_uuid: "47f82216612d4f1ca2b3d5c8e278cb14" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 34797 } }
I20250811 20:49:03.958406 7182 raft_consensus.cc:383] T 01f6214dad8549b08e0bc3b63539f40d P 47f82216612d4f1ca2b3d5c8e278cb14 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 20:49:03.958709 7182 raft_consensus.cc:738] T 01f6214dad8549b08e0bc3b63539f40d P 47f82216612d4f1ca2b3d5c8e278cb14 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 47f82216612d4f1ca2b3d5c8e278cb14, State: Initialized, Role: FOLLOWER
I20250811 20:49:03.959530 7182 consensus_queue.cc:260] T 01f6214dad8549b08e0bc3b63539f40d P 47f82216612d4f1ca2b3d5c8e278cb14 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "4430edcc81c242dd8735c4971967e56b" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 44403 } } peers { permanent_uuid: "f7420749c7f2423db6d0842344dd0ee4" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 40061 } } peers { permanent_uuid: "47f82216612d4f1ca2b3d5c8e278cb14" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 34797 } }
I20250811 20:49:03.962818 7184 tablet_bootstrap.cc:654] T 01f6214dad8549b08e0bc3b63539f40d P 4430edcc81c242dd8735c4971967e56b: Neither blocks nor log segments found. Creating new log.
I20250811 20:49:03.972501 7183 tablet_bootstrap.cc:492] T 01f6214dad8549b08e0bc3b63539f40d P f7420749c7f2423db6d0842344dd0ee4: No bootstrap required, opened a new log
I20250811 20:49:03.973093 7182 ts_tablet_manager.cc:1428] T 01f6214dad8549b08e0bc3b63539f40d P 47f82216612d4f1ca2b3d5c8e278cb14: Time spent starting tablet: real 0.018s user 0.008s sys 0.006s
I20250811 20:49:03.973994 7183 ts_tablet_manager.cc:1397] T 01f6214dad8549b08e0bc3b63539f40d P f7420749c7f2423db6d0842344dd0ee4: Time spent bootstrapping tablet: real 0.029s user 0.008s sys 0.014s
I20250811 20:49:03.974567 7184 tablet_bootstrap.cc:492] T 01f6214dad8549b08e0bc3b63539f40d P 4430edcc81c242dd8735c4971967e56b: No bootstrap required, opened a new log
I20250811 20:49:03.975021 7184 ts_tablet_manager.cc:1397] T 01f6214dad8549b08e0bc3b63539f40d P 4430edcc81c242dd8735c4971967e56b: Time spent bootstrapping tablet: real 0.026s user 0.006s sys 0.007s
I20250811 20:49:03.977182 7183 raft_consensus.cc:357] T 01f6214dad8549b08e0bc3b63539f40d P f7420749c7f2423db6d0842344dd0ee4 [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "4430edcc81c242dd8735c4971967e56b" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 44403 } } peers { permanent_uuid: "f7420749c7f2423db6d0842344dd0ee4" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 40061 } } peers { permanent_uuid: "47f82216612d4f1ca2b3d5c8e278cb14" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 34797 } }
I20250811 20:49:03.978143 7183 raft_consensus.cc:383] T 01f6214dad8549b08e0bc3b63539f40d P f7420749c7f2423db6d0842344dd0ee4 [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 20:49:03.978469 7183 raft_consensus.cc:738] T 01f6214dad8549b08e0bc3b63539f40d P f7420749c7f2423db6d0842344dd0ee4 [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: f7420749c7f2423db6d0842344dd0ee4, State: Initialized, Role: FOLLOWER
I20250811 20:49:03.978489 7184 raft_consensus.cc:357] T 01f6214dad8549b08e0bc3b63539f40d P 4430edcc81c242dd8735c4971967e56b [term 0 FOLLOWER]: Replica starting. Triggering 0 pending ops. Active config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "4430edcc81c242dd8735c4971967e56b" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 44403 } } peers { permanent_uuid: "f7420749c7f2423db6d0842344dd0ee4" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 40061 } } peers { permanent_uuid: "47f82216612d4f1ca2b3d5c8e278cb14" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 34797 } }
I20250811 20:49:03.979333 7184 raft_consensus.cc:383] T 01f6214dad8549b08e0bc3b63539f40d P 4430edcc81c242dd8735c4971967e56b [term 0 FOLLOWER]: Consensus starting up: Expiring failure detector timer to make a prompt election more likely
I20250811 20:49:03.979686 7184 raft_consensus.cc:738] T 01f6214dad8549b08e0bc3b63539f40d P 4430edcc81c242dd8735c4971967e56b [term 0 FOLLOWER]: Becoming Follower/Learner. State: Replica: 4430edcc81c242dd8735c4971967e56b, State: Initialized, Role: FOLLOWER
I20250811 20:49:03.979449 7183 consensus_queue.cc:260] T 01f6214dad8549b08e0bc3b63539f40d P f7420749c7f2423db6d0842344dd0ee4 [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "4430edcc81c242dd8735c4971967e56b" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 44403 } } peers { permanent_uuid: "f7420749c7f2423db6d0842344dd0ee4" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 40061 } } peers { permanent_uuid: "47f82216612d4f1ca2b3d5c8e278cb14" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 34797 } }
I20250811 20:49:03.980456 7184 consensus_queue.cc:260] T 01f6214dad8549b08e0bc3b63539f40d P 4430edcc81c242dd8735c4971967e56b [NON_LEADER]: Queue going to NON_LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 0, Majority size: -1, State: 0, Mode: NON_LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "4430edcc81c242dd8735c4971967e56b" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 44403 } } peers { permanent_uuid: "f7420749c7f2423db6d0842344dd0ee4" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 40061 } } peers { permanent_uuid: "47f82216612d4f1ca2b3d5c8e278cb14" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 34797 } }
I20250811 20:49:03.989183 7183 ts_tablet_manager.cc:1428] T 01f6214dad8549b08e0bc3b63539f40d P f7420749c7f2423db6d0842344dd0ee4: Time spent starting tablet: real 0.015s user 0.003s sys 0.011s
I20250811 20:49:03.993237 7184 ts_tablet_manager.cc:1428] T 01f6214dad8549b08e0bc3b63539f40d P 4430edcc81c242dd8735c4971967e56b: Time spent starting tablet: real 0.018s user 0.005s sys 0.009s
W20250811 20:49:04.005936 6967 tablet.cc:2378] T 01f6214dad8549b08e0bc3b63539f40d P 4430edcc81c242dd8735c4971967e56b: Can't schedule compaction. Clean time has not been advanced past its initial value.
W20250811 20:49:04.078666 7107 tablet.cc:2378] T 01f6214dad8549b08e0bc3b63539f40d P 47f82216612d4f1ca2b3d5c8e278cb14: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250811 20:49:04.111562 7187 raft_consensus.cc:491] T 01f6214dad8549b08e0bc3b63539f40d P 47f82216612d4f1ca2b3d5c8e278cb14 [term 0 FOLLOWER]: Starting pre-election (no leader contacted us within the election timeout)
I20250811 20:49:04.112073 7187 raft_consensus.cc:513] T 01f6214dad8549b08e0bc3b63539f40d P 47f82216612d4f1ca2b3d5c8e278cb14 [term 0 FOLLOWER]: Starting pre-election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "4430edcc81c242dd8735c4971967e56b" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 44403 } } peers { permanent_uuid: "f7420749c7f2423db6d0842344dd0ee4" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 40061 } } peers { permanent_uuid: "47f82216612d4f1ca2b3d5c8e278cb14" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 34797 } }
I20250811 20:49:04.114570 7187 leader_election.cc:290] T 01f6214dad8549b08e0bc3b63539f40d P 47f82216612d4f1ca2b3d5c8e278cb14 [CANDIDATE]: Term 1 pre-election: Requested pre-vote from peers 4430edcc81c242dd8735c4971967e56b (127.31.250.194:44403), f7420749c7f2423db6d0842344dd0ee4 (127.31.250.193:40061)
I20250811 20:49:04.128604 6921 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "01f6214dad8549b08e0bc3b63539f40d" candidate_uuid: "47f82216612d4f1ca2b3d5c8e278cb14" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "4430edcc81c242dd8735c4971967e56b" is_pre_election: true
I20250811 20:49:04.129242 6921 raft_consensus.cc:2466] T 01f6214dad8549b08e0bc3b63539f40d P 4430edcc81c242dd8735c4971967e56b [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 47f82216612d4f1ca2b3d5c8e278cb14 in term 0.
I20250811 20:49:04.130379 6785 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "01f6214dad8549b08e0bc3b63539f40d" candidate_uuid: "47f82216612d4f1ca2b3d5c8e278cb14" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "f7420749c7f2423db6d0842344dd0ee4" is_pre_election: true
I20250811 20:49:04.131060 6785 raft_consensus.cc:2466] T 01f6214dad8549b08e0bc3b63539f40d P f7420749c7f2423db6d0842344dd0ee4 [term 0 FOLLOWER]: Leader pre-election vote request: Granting yes vote for candidate 47f82216612d4f1ca2b3d5c8e278cb14 in term 0.
I20250811 20:49:04.132061 6995 leader_election.cc:304] T 01f6214dad8549b08e0bc3b63539f40d P 47f82216612d4f1ca2b3d5c8e278cb14 [CANDIDATE]: Term 1 pre-election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 4430edcc81c242dd8735c4971967e56b, 47f82216612d4f1ca2b3d5c8e278cb14; no voters:
I20250811 20:49:04.132711 7187 raft_consensus.cc:2802] T 01f6214dad8549b08e0bc3b63539f40d P 47f82216612d4f1ca2b3d5c8e278cb14 [term 0 FOLLOWER]: Leader pre-election won for term 1
I20250811 20:49:04.133036 7187 raft_consensus.cc:491] T 01f6214dad8549b08e0bc3b63539f40d P 47f82216612d4f1ca2b3d5c8e278cb14 [term 0 FOLLOWER]: Starting leader election (no leader contacted us within the election timeout)
I20250811 20:49:04.133317 7187 raft_consensus.cc:3058] T 01f6214dad8549b08e0bc3b63539f40d P 47f82216612d4f1ca2b3d5c8e278cb14 [term 0 FOLLOWER]: Advancing to term 1
I20250811 20:49:04.137562 7187 raft_consensus.cc:513] T 01f6214dad8549b08e0bc3b63539f40d P 47f82216612d4f1ca2b3d5c8e278cb14 [term 1 FOLLOWER]: Starting leader election with config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "4430edcc81c242dd8735c4971967e56b" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 44403 } } peers { permanent_uuid: "f7420749c7f2423db6d0842344dd0ee4" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 40061 } } peers { permanent_uuid: "47f82216612d4f1ca2b3d5c8e278cb14" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 34797 } }
I20250811 20:49:04.138998 7187 leader_election.cc:290] T 01f6214dad8549b08e0bc3b63539f40d P 47f82216612d4f1ca2b3d5c8e278cb14 [CANDIDATE]: Term 1 election: Requested vote from peers 4430edcc81c242dd8735c4971967e56b (127.31.250.194:44403), f7420749c7f2423db6d0842344dd0ee4 (127.31.250.193:40061)
I20250811 20:49:04.139824 6921 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "01f6214dad8549b08e0bc3b63539f40d" candidate_uuid: "47f82216612d4f1ca2b3d5c8e278cb14" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "4430edcc81c242dd8735c4971967e56b"
I20250811 20:49:04.139967 6785 tablet_service.cc:1813] Received RequestConsensusVote() RPC: tablet_id: "01f6214dad8549b08e0bc3b63539f40d" candidate_uuid: "47f82216612d4f1ca2b3d5c8e278cb14" candidate_term: 1 candidate_status { last_received { term: 0 index: 0 } } ignore_live_leader: false dest_uuid: "f7420749c7f2423db6d0842344dd0ee4"
I20250811 20:49:04.140205 6921 raft_consensus.cc:3058] T 01f6214dad8549b08e0bc3b63539f40d P 4430edcc81c242dd8735c4971967e56b [term 0 FOLLOWER]: Advancing to term 1
I20250811 20:49:04.140501 6785 raft_consensus.cc:3058] T 01f6214dad8549b08e0bc3b63539f40d P f7420749c7f2423db6d0842344dd0ee4 [term 0 FOLLOWER]: Advancing to term 1
I20250811 20:49:04.144387 6921 raft_consensus.cc:2466] T 01f6214dad8549b08e0bc3b63539f40d P 4430edcc81c242dd8735c4971967e56b [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 47f82216612d4f1ca2b3d5c8e278cb14 in term 1.
I20250811 20:49:04.145148 6995 leader_election.cc:304] T 01f6214dad8549b08e0bc3b63539f40d P 47f82216612d4f1ca2b3d5c8e278cb14 [CANDIDATE]: Term 1 election: Election decided. Result: candidate won. Election summary: received 2 responses out of 3 voters: 2 yes votes; 0 no votes. yes voters: 4430edcc81c242dd8735c4971967e56b, 47f82216612d4f1ca2b3d5c8e278cb14; no voters:
I20250811 20:49:04.145722 7187 raft_consensus.cc:2802] T 01f6214dad8549b08e0bc3b63539f40d P 47f82216612d4f1ca2b3d5c8e278cb14 [term 1 FOLLOWER]: Leader election won for term 1
I20250811 20:49:04.147349 6785 raft_consensus.cc:2466] T 01f6214dad8549b08e0bc3b63539f40d P f7420749c7f2423db6d0842344dd0ee4 [term 1 FOLLOWER]: Leader election vote request: Granting yes vote for candidate 47f82216612d4f1ca2b3d5c8e278cb14 in term 1.
I20250811 20:49:04.147781 7187 raft_consensus.cc:695] T 01f6214dad8549b08e0bc3b63539f40d P 47f82216612d4f1ca2b3d5c8e278cb14 [term 1 LEADER]: Becoming Leader. State: Replica: 47f82216612d4f1ca2b3d5c8e278cb14, State: Running, Role: LEADER
I20250811 20:49:04.148936 7187 consensus_queue.cc:237] T 01f6214dad8549b08e0bc3b63539f40d P 47f82216612d4f1ca2b3d5c8e278cb14 [LEADER]: Queue going to LEADER mode. State: All replicated index: 0, Majority replicated index: 0, Committed index: 0, Last appended: 0.0, Last appended by leader: 0, Current term: 1, Majority size: 2, State: 0, Mode: LEADER, active raft config: opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "4430edcc81c242dd8735c4971967e56b" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 44403 } } peers { permanent_uuid: "f7420749c7f2423db6d0842344dd0ee4" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 40061 } } peers { permanent_uuid: "47f82216612d4f1ca2b3d5c8e278cb14" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 34797 } }
I20250811 20:49:04.161327 6642 catalog_manager.cc:5582] T 01f6214dad8549b08e0bc3b63539f40d P 47f82216612d4f1ca2b3d5c8e278cb14 reported cstate change: term changed from 0 to 1, leader changed from <none> to 47f82216612d4f1ca2b3d5c8e278cb14 (127.31.250.195). New cstate: current_term: 1 leader_uuid: "47f82216612d4f1ca2b3d5c8e278cb14" committed_config { opid_index: -1 OBSOLETE_local: false peers { permanent_uuid: "4430edcc81c242dd8735c4971967e56b" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 44403 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "f7420749c7f2423db6d0842344dd0ee4" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 40061 } health_report { overall_health: UNKNOWN } } peers { permanent_uuid: "47f82216612d4f1ca2b3d5c8e278cb14" member_type: VOTER last_known_addr { host: "127.31.250.195" port: 34797 } health_report { overall_health: HEALTHY } } }
W20250811 20:49:04.212878 6831 tablet.cc:2378] T 01f6214dad8549b08e0bc3b63539f40d P f7420749c7f2423db6d0842344dd0ee4: Can't schedule compaction. Clean time has not been advanced past its initial value.
I20250811 20:49:04.424028 6785 raft_consensus.cc:1273] T 01f6214dad8549b08e0bc3b63539f40d P f7420749c7f2423db6d0842344dd0ee4 [term 1 FOLLOWER]: Refusing update from remote peer 47f82216612d4f1ca2b3d5c8e278cb14: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250811 20:49:04.424202 6921 raft_consensus.cc:1273] T 01f6214dad8549b08e0bc3b63539f40d P 4430edcc81c242dd8735c4971967e56b [term 1 FOLLOWER]: Refusing update from remote peer 47f82216612d4f1ca2b3d5c8e278cb14: Log matching property violated. Preceding OpId in replica: term: 0 index: 0. Preceding OpId from leader: term: 1 index: 2. (index mismatch)
I20250811 20:49:04.425813 7187 consensus_queue.cc:1035] T 01f6214dad8549b08e0bc3b63539f40d P 47f82216612d4f1ca2b3d5c8e278cb14 [LEADER]: Connected to new peer: Peer: permanent_uuid: "4430edcc81c242dd8735c4971967e56b" member_type: VOTER last_known_addr { host: "127.31.250.194" port: 44403 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.000s
I20250811 20:49:04.426646 7195 consensus_queue.cc:1035] T 01f6214dad8549b08e0bc3b63539f40d P 47f82216612d4f1ca2b3d5c8e278cb14 [LEADER]: Connected to new peer: Peer: permanent_uuid: "f7420749c7f2423db6d0842344dd0ee4" member_type: VOTER last_known_addr { host: "127.31.250.193" port: 40061 }, Status: LMP_MISMATCH, Last received: 0.0, Next index: 1, Last known committed idx: 0, Time since last communication: 0.001s
I20250811 20:49:04.466857 7204 mvcc.cc:204] Tried to move back new op lower bound from 7188256130745171968 to 7188256129641431040. Current Snapshot: MvccSnapshot[applied={T|T < 7188256130745171968}]
I20250811 20:49:04.473729 7206 mvcc.cc:204] Tried to move back new op lower bound from 7188256130745171968 to 7188256129641431040. Current Snapshot: MvccSnapshot[applied={T|T < 7188256130745171968}]
W20250811 20:49:04.714704 6995 outbound_call.cc:321] RPC callback for RPC call kudu.consensus.ConsensusService.UpdateConsensus -> {remote=127.31.250.194:44403, user_credentials={real_user=slave}} blocked reactor thread for 67183.4us
I20250811 20:49:09.362912 6901 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
I20250811 20:49:09.365077 7041 tablet_service.cc:1430] Tablet server has 1 leaders and 0 scanners
I20250811 20:49:09.367475 6765 tablet_service.cc:1430] Tablet server has 0 leaders and 0 scanners
Master Summary
UUID | Address | Status
----------------------------------+----------------------+---------
ec68b17b32754a83b1ef8876f64ed39f | 127.31.250.254:46197 | HEALTHY
Unusual flags for Master:
Flag | Value | Tags | Master
----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
ipki_ca_key_size | 768 | experimental | all 1 server(s) checked
ipki_server_key_size | 768 | experimental | all 1 server(s) checked
never_fsync | true | unsafe,advanced | all 1 server(s) checked
openssl_security_level_override | 0 | unsafe,hidden | all 1 server(s) checked
rpc_reuseport | true | experimental | all 1 server(s) checked
rpc_server_allow_ephemeral_ports | true | unsafe | all 1 server(s) checked
server_dump_info_format | pb | hidden | all 1 server(s) checked
server_dump_info_path | /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/master-0/data/info.pb | hidden | all 1 server(s) checked
tsk_num_rsa_bits | 512 | experimental | all 1 server(s) checked
Flags of checked categories for Master:
Flag | Value | Master
---------------------+----------------------+-------------------------
builtin_ntp_servers | 127.31.250.212:36717 | all 1 server(s) checked
time_source | builtin | all 1 server(s) checked
Tablet Server Summary
UUID | Address | Status | Location | Tablet Leaders | Active Scanners
----------------------------------+----------------------+---------+----------+----------------+-----------------
4430edcc81c242dd8735c4971967e56b | 127.31.250.194:44403 | HEALTHY | <none> | 0 | 0
47f82216612d4f1ca2b3d5c8e278cb14 | 127.31.250.195:34797 | HEALTHY | <none> | 1 | 0
f7420749c7f2423db6d0842344dd0ee4 | 127.31.250.193:40061 | HEALTHY | <none> | 0 | 0
Tablet Server Location Summary
Location | Count
----------+---------
<none> | 3
Unusual flags for Tablet Server:
Flag | Value | Tags | Tablet Server
----------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------+-------------------------
ipki_server_key_size | 768 | experimental | all 3 server(s) checked
local_ip_for_outbound_sockets | 127.31.250.193 | experimental | 127.31.250.193:40061
local_ip_for_outbound_sockets | 127.31.250.194 | experimental | 127.31.250.194:44403
local_ip_for_outbound_sockets | 127.31.250.195 | experimental | 127.31.250.195:34797
never_fsync | true | unsafe,advanced | all 3 server(s) checked
openssl_security_level_override | 0 | unsafe,hidden | all 3 server(s) checked
rpc_server_allow_ephemeral_ports | true | unsafe | all 3 server(s) checked
server_dump_info_format | pb | hidden | all 3 server(s) checked
server_dump_info_path | /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-0/data/info.pb | hidden | 127.31.250.193:40061
server_dump_info_path | /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-1/data/info.pb | hidden | 127.31.250.194:44403
server_dump_info_path | /tmp/dist-test-taskexcsLP/test-tmp/kudu-admin-test.5.IsSecure_SecureClusterAdminCliParamTest.TestRebuildMaster_0.1754945161385764-32747-0/minicluster-data/ts-2/data/info.pb | hidden | 127.31.250.195:34797
Flags of checked categories for Tablet Server:
Flag | Value | Tablet Server
---------------------+----------------------+-------------------------
builtin_ntp_servers | 127.31.250.212:36717 | all 3 server(s) checked
time_source | builtin | all 3 server(s) checked
Version Summary
Version | Servers
-----------------+-------------------------
1.19.0-SNAPSHOT | all 4 server(s) checked
Tablet Summary
The cluster doesn't have any matching system tables
Summary by table
Name | RF | Status | Total Tablets | Healthy | Recovering | Under-replicated | Unavailable
--------------+----+---------+---------------+---------+------------+------------------+-------------
post_rebuild | 3 | HEALTHY | 1 | 1 | 0 | 0 | 0
Tablet Replica Count Summary
Statistic | Replica Count
----------------+---------------
Minimum | 1
First Quartile | 1
Median | 1
Third Quartile | 1
Maximum | 1
Total Count Summary
| Total Count
----------------+-------------
Masters | 1
Tablet Servers | 3
Tables | 1
Tablets | 1
Replicas | 3
==================
Warnings:
==================
Some masters have unsafe, experimental, or hidden flags set
Some tablet servers have unsafe, experimental, or hidden flags set
OK
I20250811 20:49:09.600374 32747 log_verifier.cc:126] Checking tablet 01f6214dad8549b08e0bc3b63539f40d
I20250811 20:49:10.343724 32747 log_verifier.cc:177] Verified matching terms for 205 ops in tablet 01f6214dad8549b08e0bc3b63539f40d
I20250811 20:49:10.344544 32747 log_verifier.cc:126] Checking tablet 27f845a2b1d541a5b32c24834d8426fd
I20250811 20:49:10.344827 32747 log_verifier.cc:177] Verified matching terms for 0 ops in tablet 27f845a2b1d541a5b32c24834d8426fd
I20250811 20:49:10.356204 32747 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu with pid 6680
I20250811 20:49:10.399116 32747 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu with pid 6834
I20250811 20:49:10.435801 32747 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu with pid 6970
I20250811 20:49:10.477684 32747 external_mini_cluster.cc:1658] Killing /tmp/dist-test-taskexcsLP/build/tsan/bin/kudu with pid 6609
2025-08-11T20:49:10Z chronyd exiting
[ OK ] IsSecure/SecureClusterAdminCliParamTest.TestRebuildMaster/0 (36776 ms)
[----------] 1 test from IsSecure/SecureClusterAdminCliParamTest (36776 ms total)
[----------] Global test environment tear-down
[==========] 9 tests from 5 test suites ran. (189084 ms total)
[ PASSED ] 8 tests.
[ FAILED ] 1 test, listed below:
[ FAILED ] AdminCliTest.TestRebuildTables
1 FAILED TEST
I20250811 20:49:10.545181 32747 logging.cc:424] LogThrottler /home/jenkins-slave/workspace/build_and_test_flaky@2/src/kudu/client/meta_cache.cc:302: suppressed but not reported on 2 messages since previous log ~49 seconds ago